FLOSS Project Planets

Colm O hEigeartaigh: An STS JAAS LoginModule for Apache CXF

Planet Apache - Tue, 2015-06-30 07:27
Last year I blogged about how to use JAAS with Apache CXF, and the different LoginModules that were available. Recently, I wrote another article about using a JDBC LoginModule with CXF. This article will cover a relatively new JAAS LoginModule added to CXF for the 3.0.3 release. It allows a service to dispatch a Username and Password to an STS (Security Token Service) instance for authentication via the WS-Trust protocol, and also to retrieve the user's roles by extracting them from a SAML token returned by the STS.

1) The STS JAAS LoginModule

The new STS JAAS LoginModule is available in the CXF WS-Security runtime module. It takes a Username and Password from the CallbackHandler passed to the LoginModule, and uses them to create a WS-Security UsernameToken structure. What happens then depends on a configuration setting in the LoginModule.

If the "require.roles" property is set, then the UsernameToken is added to a WS-Trust "Issue" request to the STS, and a "TokenType" attribute is sent in the request (defaults to the standard "SAML2" URI, but can be configured). The client also adds a WS-Trust "Claim" to the request that tells the STS to add the role of the authenticated end user to the request. How the token is added to the WS-Trust request depends on whether the "disable.on.behalf.of" property is set or not. By default, the token is added as an "OnBehalfOf" token in the WS-Trust request. However, if "disable.on.behalf.of" is set to "true", then the credentials are used according to the WS-SecurityPolicy of the STS endpoint. For example, if the policy requires a UsernameToken, then the credentials are added to the security header of the WS-Trust request. If the "require.roles" property is not set, the the UsernameToken is added to a WS-Trust "Validate" request.

The STS validates the received UsernameToken credentials supplied by the end user, and then either creates a token (if the Issue binding was used), or just returns a simple response telling the client whether the validation was successful or not. In the former use-case, the token that is returned is cached, meaning that the end user does not have to re-authenticate until the token expires from the cache.

The LoginModule has the following configuration properties:
  • require.roles - If this is defined, then the WS-Trust Issue binding is used, passing the value specified for the "token.type" property as the TokenType, and the "key.type" property for the KeyType. It also adds a Claim to the request for the default "role" URI.
  • disable.on.behalf.of - Whether to disable passing Username + Password credentials via "OnBehalfOf".
  • disable.caching - Whether to disable caching of validated credentials. Default is "false". Only applies when "require.roles" is defined.
  • wsdl.location - The location of the WSDL of the STS
  • service.name - The service QName of the STS
  • endpoint.name - The endpoint QName of the STS
  • key.size - The key size to use (if requesting a SymmetricKey KeyType). Defaults to 256.
  • key.type - The KeyType to use. Defaults to the standard "Bearer" URI.
  • token.type - The TokenType to use. Defaults to the standard "SAML2" URI.
  • ws.trust.namespace - The WS-Trust namespace to use. Defaults to the standard WS-Trust 1.3 namespace.
In addition, any of the standard CXF security configuration tags that start with "ws-security." can be used as documented here. Sometimes it is necessary to set some security configuration depending on the security policy of the WSDL.

Here is an example of the new JAAS LoginModule configuration:
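The values below are illustrative rather than taken from a real deployment: the login context name, WSDL location and QNames must match your own STS, and the module class is assumed here to be org.apache.cxf.ws.security.trust.STSLoginModule from the WS-Security runtime module:

sts {
    org.apache.cxf.ws.security.trust.STSLoginModule required
    require.roles="true"
    disable.on.behalf.of="true"
    wsdl.location="https://localhost:8443/SecurityTokenService/UT?wsdl"
    service.name="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService"
    endpoint.name="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port";
};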



2) A testcase for the new LoginModule

Using an STS via WS-Trust for authentication and authorization can be quite difficult to set up and understand, but the new LoginModule makes it easy. I created a testcase and uploaded it to GitHub:
  • cxf-jaxrs-jaas-sts: This project demonstrates how to use the new STS JAAS LoginModule in CXF to authenticate and authorize a user. It contains a "double-it" module with a "double-it" JAX-RS service. It is secured with JAAS at the container level, and requires a role of "boss" to access the service. The "sts" module contains an Apache CXF STS web application which can authenticate users and issue SAML tokens with embedded roles.
To run the test, download Apache Tomcat and do "mvn clean install" in the testcase above. Then copy both WARs and the JAAS configuration file to the Apache Tomcat install (${catalina.home}):
  • cp double-it/target/cxf-double-it.war ${catalina.home}/webapps
  • cp sts/target/cxf-sts.war ${catalina.home}/webapps
  • cp double-it/src/main/resources/jaas.conf ${catalina.home}/conf
Next set the following system property:
  • export JAVA_OPTS=-Djava.security.auth.login.config=${catalina.home}/conf/jaas.conf
Finally, start Tomcat, open a web browser and navigate to:

http://localhost:8080/cxf-double-it/doubleit/services/100

Use credentials "alice/security" when prompted. The STS JAAS LoginModule takes the username and password, and dispatches them to the STS for validation.
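If you prefer the command line, the same check can be made with curl and HTTP basic authentication (assuming the service answers plain GET requests):

curl -u alice:security http://localhost:8080/cxf-double-it/doubleit/services/100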

    Categories: FLOSS Project Planets

    Europython: EuroPython 2015: Call for On-site Volunteers

    Planet Python - Tue, 2015-06-30 07:02

    EuroPython is organized and run by volunteers from the Python community, but we’re only a few and we will need more help to make the conference run smoothly.

    We need your help!

    We will need help with the conference and registration desk, giving out the swag bags and t-shirts, session chairing, entrance control, set up and tear down, etc.

    Perks for Volunteers

    In addition to endless fame and glory as an official EuroPython Volunteer, we have also added a few real-life perks for you:

    • We will grant each volunteer a compensation of EUR 22 per shift
    • Volunteers will be eligible for student house rooms we have available and can use their compensation to pay for these
    • Get an awesome EuroPython Volunteer T-Shirt that you can keep and show off to your friends :-)
    Register as Volunteer

    Please see our EuroPython Volunteers page for details and the registration form:

    If you have questions, please write to our helpdesk@europython.eu.

    Hope to see you in Bilbao :-)

    Enjoy,

    EuroPython 2015 Team

    Categories: FLOSS Project Planets

    Amazee Labs: Debug Solr queries

    Planet Drupal - Tue, 2015-06-30 07:00
    By Vasi Chindris, Tue, 06/30/2015 - 13:00

    Solr is great! When you have a site with even a moderate amount of content and you want full text search, using Solr as a search engine will greatly improve both the speed of the search and the accuracy of the results. But, as is often the case, the good things come with a drawback: we now have a new system our web application has to communicate with. This means that, even though the system works well by default, in some cases you have to understand more deeply how it works. And that means that, besides being able to configure the system, you have to know how to debug it. In the following we'll see how to debug the Solr queries our applications use for searching, but first let's look at a concrete example of when we need to debug a query.

    An example use case

    Let’s suppose we have two items which both contain a specific word (say ‘building’) in the title. And we have a list where we show search results ordered first by their score and then, when the scores are equal, by the creation date, descending. At first sight you would say that, because both items have the word in the title, they have the same score, so you should see the newest item first. Well, it may be that this is not true: even though both have the word in the title, the scores are not necessarily the same.

    Preliminaries

    Let’s suppose we have a system which uses Solr as a search server. In order to debug a query, we first have to be able to run it directly against Solr. The easiest case is when Solr is accessible over HTTP from your browser. If it is not, Solr must at least be reachable from the server where your application runs, so you can call it from there. I won't insist on this point: if you managed to get Solr running for your application, you should be able to call it.

    Getting your results

    The next thing to do is to make a query with the exact same parameters your application uses. As a concrete example, we will consider a Drupal site which uses the Search API module with Apache Solr as the search server. One way to get the exact query being made is to check the SearchApiSolrConnection::makeHttpRequest() method, which makes a call to drupal_http_request() using a URL. You could also check the query in the Solr logs if that is easier. Let's say we search for the word “building”. An example query should look like this:

    http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value%5E5.0&qf=tm_title%5E13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&wt=json&json.nl=map&q=%22building%22

    If you take that one and run it in the browser, you should see a JSON output with the results, something like:
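    An abbreviated, illustrative response (your item ids and scores will of course differ):

    {"responseHeader":{"status":0,"QTime":3},
     "response":{"numFound":2,"start":0,"maxScore":2.1742059,
      "docs":[{"item_id":"42","score":2.1742059},
              {"item_id":"17","score":1.8939655}]}}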

    To make it look nicer, you can just remove “wt=json” (and optionally “json.nl=map”) from your URL, so it becomes something like:

    http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A"articles"&fq=hash%3Ao47rod&start=0&rows=10&sort=score desc%2C ds_created desc&q="building"

    which should result in much nicer XML output:

    List some additional fields

    So now we have the results from Solr, but all they contain are the internal item ID and the score. Let's add some fields which will help us see exactly what text the items contain. The fields you are probably most interested in are the ones listed in the “qf” parameter of your URL. In this case we have:

    qf=tm_body%24value^5.0&qf=tm_title^13.0

    which means we are probably interested in the “tm_body%24value” and “tm_title” fields. To make them appear in the results, we add them to the “fl” parameter, so the URL becomes something like:

    http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2Ctm_title&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22

    And the result should look something like:

    Debug the query

    Now everything is ready for the final step in getting the debug information: adding the debug flag. This is very easy: all you have to do is add “debugQuery=true” to your URL, which means it will look like this:

    http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2Ctm_title&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22&debugQuery=true

    You should now see more debug information: how the query is parsed, how much time it takes to run, and, probably most important, how the score of each result is computed. If your browser does not display the formula in an easily readable way, you can copy and paste it into a text editor; it should look something like:
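    The block below is illustrative only; the exact terms, boosts and numbers depend entirely on your index and query:

    2.1742059 = (MATCH) sum of:
      2.1742059 = (MATCH) weight(tm_title:build^13.0 in 42), product of:
        0.99999994 = queryWeight(tm_title:build^13.0), product of:
          13.0 = boost
          2.3578677 = idf(docFreq=3, maxDocs=17)
          0.032626 = queryNorm
        2.1742062 = (MATCH) fieldWeight(tm_title:build in 42), product of:
          1.4142135 = tf(termFreq(tm_title:build)=2)
          2.3578677 = idf(docFreq=3, maxDocs=17)
          0.65185 = fieldNorm(field=tm_title, doc=42)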

    As you can see, the score of an item is computed using a pretty complex formula with many variables as inputs. You can find a few more details about these variables here: Solr Search Relevancy.
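    If you find yourself debugging queries regularly, it is easy to script the whole check. Below is a small sketch using Python and the requests library; the host, fields and filter values are the illustrative ones from the URLs above:

    import requests

    params = {
        'q': '"building"',
        'fl': 'item_id,score',
        'qf': ['tm_body$value^5.0', 'tm_title^13.0'],
        'fq': ['index_id:"articles"', 'hash:o47rod'],
        'sort': 'score desc, ds_created desc',
        'rows': 10,
        'wt': 'json',
        'debugQuery': 'true',
    }
    resp = requests.get('http://localhost:8983/solr/select', params=params).json()
    # The 'explain' section maps each document id to its score breakdown.
    for doc_id, explanation in resp['debug']['explain'].items():
        print(doc_id)
        print(explanation)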

    Further reading and useful links

    Categories: FLOSS Project Planets

    "Menno's Musings": IMAPClient 0.13

    Planet Python - Tue, 2015-06-30 06:38

    I'm chuffed to announce that IMAPClient 0.13 is out!

    Here's what's new:

    • Added support for the ID command (as per RFC 2971); see the usage sketch below. Many thanks to Eben Freeman from Nylas.
    • Fixed exception with NIL address in envelope address list. Thomas Steinacher gets a big thank you for this one.
    • Fixed a regression in the handling of NIL/None SEARCH responses. Thanks again to Thomas Steinacher.
    • Don't traceback when an unparseable date is seen in ENVELOPE or INTERNALDATE responses. None is now returned instead.
    • Extended timestamp parsing support to allow for quirky timestamp strings which use dots for the time separator.
    • Replaced the horrible INTERNALDATE parsing code.
    • The datetime_to_imap top-level function has been moved to the datetime_util module and is now called datetime_to_INTERNALDATE. This will only affect you in the unlikely case that you were importing this function out of the IMAPClient package.
    • The docs for various IMAPClient methods, and the HACKING.rst file, have been improved.
    • CONDSTORE live test is now more reliable (especially when running against Gmail).
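    The new ID support can be exercised like this (a minimal sketch; the host and credentials are placeholders, and it assumes the command is exposed as the id_() method):

    from imapclient import IMAPClient

    server = IMAPClient('imap.example.com', ssl=True)
    server.login('user@example.com', 'secret')
    # Identify ourselves per RFC 2971 and print what the server reports back.
    print(server.id_({'name': 'my-mail-tool', 'version': '0.1'}))
    server.logout()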

    See the NEWS.rst file and manual for more details.

    IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.

    I'm also excited to announce that Nylas (formerly Inbox) has now employed me to work on IMAPClient part time. There should be a significant uptick in the development of IMAPClient.

    The next major version of IMAPClient will be 1.0.0, and will be primarily focussed on enhancing TLS/SSL support.

    Categories: FLOSS Project Planets

    Nicola Iarocci: Cerberus 0.9 has been released

    Planet Python - Tue, 2015-06-30 05:16
    A few days ago Cerberus 0.9 was released. It includes a bunch of cool new features; let’s browse through some of them. First up is the new set of anyof, allof, noneof and oneof validation rules. anyof allows you to list multiple sets of rules to validate against. The field will be considered […]
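    As a quick, illustrative taste of anyof (the schema and values here are made up for the example):

    from cerberus import Validator

    # 'quantity' is valid if it satisfies any one of the listed rule sets.
    schema = {'quantity': {'anyof': [{'type': 'integer', 'min': 1},
                                     {'type': 'float', 'min': 0.5}]}}
    v = Validator(schema)
    print(v.validate({'quantity': 3}))     # True: matches the integer rule set
    print(v.validate({'quantity': 0.25}))  # False: matches neither rule set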
    Categories: FLOSS Project Planets

    ERPAL: How we’re building our SaaS business with Drupal

    Planet Drupal - Tue, 2015-06-30 05:00

    Have you ever thought about building your own Software-as-a-Service (SaaS) business based on Drupal? I don't mean selling Drupal as a service but selling your Drupal-based software under a subscription model and using Drupal as the basis for your accounting, administration, deployment and the tool that serves and controls all the business processes of your SaaS business. Yes, you have? That's great! We’ve done the same thing over the last 12 months, and in this blog post I want to share my experiences with you (and we’d be delighted if you shared your experiences in the comments). I’ll show you the components we used to build Drop Guard – a Drupal-auto-updater-as-a-service (DAUaaS ;-)) that includes content delivery and administration, subscription handling, CRM and accounting, all based on ERPAL Platform.

    I’m not talking about a full-featured, mature SaaS business yet, but about a start-up in which expense control matters a lot and where agility is one of the most important parameters for driving growth. Of course, there are many services out there for CRM, payment, content, mailings, accounting, etc. But have you added up all the expenses for those individual services, as well as the time and money you need to integrate them properly? And are you sure you’ve made a solid choice for the future? I want to show you how Drupal, as a highly flexible open source application framework, brings (almost) all those features, saves you money in the early stages of your SaaS business and keeps you flexible and agile in the future. Below you’ll find a list of the tools we used to build the components of the Drop Guard service.

    Components of a SaaS business application

    Content: This is the page where you present all benefits of your service to potential clients. This page is mostly content-driven and provides a list of plans your customers can subscribe to. There’s nothing special about this as Drupal provides you with all the features right out of the box. The strength of Drupal is that it integrates with all the other features listed below, in one system. With the flexible entity structure of Drupal and the Rules module, you can automate your content and mailings to keep users on board during the trial period and convince them to purchase a full subscription.

    Trial registration: Once your user has signed up using just her email address, she’ll want to start testing your service for free during the trial period. Drupal provides this registration feature right out of the box. To deploy your application (if you run single instances for every user), you could trigger the deployment with Rules. With the commerce_license module you can create an x-day trial license entity and replace it with the commercial entity once the user has bought and paid for a license.

    Checkout: After the trial period is over, your user needs to either buy the service or quit using it. The process can be just like the checkout process in an online store. This step includes a subscription to a recurring payment provider and the completion of a contact form (to create a complete CRM entry for this subscriber). We used Drupal Commerce to build a custom checkout process and commerce products to model the subscription plans. To notify the user about the expiration of her trial period, you can send her one or more emails and encourage her to get in touch. Again, Rules and the flexible entity structure of Drupal work perfectly for this purpose.

    Accounting: Your customer data need to be managed in a CRM, as they're some of the most valuable information in your SaaS business. If you’ve just started your SaaS business, you don't need a full-featured and expensive CRM system, but one that scales with your business as it grows and can be extended later with additional features, if needed. The first and only required feature is a list of customers (your subscribers) and a list of their orders and related invoices (paid or unpaid). As we use CRM Core to build the CRM, we can extend the contact entities with fields, build filterable lists with Views, reference subscriptions (commerce orders) to contacts and create invoices (a bundle of the commerce order entity pre-configured by the ERPAL invoice module).

    Recurring payment: If you run your SaaS business on a subscription-based model where your clients pay for the service periodically, you have two options to process recurring payments. Handling payments by yourself is not worth trying as it’s too risky, insecure and expensive. So, either you use Stripe to handle recurring payments for you or you can use any payment provider to process one-time payments and implement the recurring feature in Drupal. There are some other SaaS payment services worth looking at. We've chosen the second option, using Paymill to process payments in combination with commerce_license and commerce_license_billing to implement the recurring feature. For every client with an active subscription, an invoice is created every month and the amount is charged via the payment provider. Then the invoice is set to "paid" and the service continues. The invoice can be downloaded in the portal and is accessible for both the SaaS operator and the client as a dataset and/or a PDF file.

    Deployment: Without going into deep details of application deployment, Docker is a powerful tool for deploying single-instance apps for your clients. You may also want to have a look at different API-based Drupal hosting platforms, such as Platform.sh or Pantheon or Acquia Cloud, if you want to sell Drupal-based applications via a SaaS model. They will make your deployment very comfortable and easy to integrate. You can use Drupal multi-site instances or the Drupal access system to separate user-related content (the latter can be very tricky and can have performance impacts with big data!). If your app produces a huge amount of data (entities or nodes) I recommend single instances with Docker or a Drupal hosting platform. As Drop Guard automates deployment and therefore doesn’t produce that much data, we manage all our subscribers in one Drupal instance but keep the decoupled update server horizontally scalable.

    Start building your own SaaS business

    If you’re considering building your own SaaS business, there’s no need to start from scratch. ERPAL Platform is freely available, easy-to-customize and uses Drupal contrib modules such as Commerce, CRM Core and Rules to connect all the components necessary to operate a SaaS business process. With ERPAL Platform you have a tool for developing your SaaS business in an agile way, and you can adapt it to whatever comes in the near future. ERPAL Platform includes all the components for CRM and accounting and integrates nicely with Stripe (and many others, thanks to Drupal Commerce) as well as your (recurring) payment provider. We can modify the default behavior with entities, fields, rules and views to extend the SaaS business platform. We used several contrib modules to extend ERPAL Platform to manage licensed products (commerce license and commerce license billing). If you want more information about the core concepts of ERPAL Platform, there’s a previous blog post about how to build flexible business applications with ERPAL Platform.

    This is how we built Drop Guard, a service for automating Drupal updates with integration into development and deployment workflows. As we’ve just started our SaaS business, we’ll keep you posted with updates along our way to becoming a full-fledged, Drupal-based SaaS business. For instance, we plan to add metrics and marketing automation features to drive traffic. We’ll share our experiences with you here and we’d be happy if you’d share yours in the comments!

    Categories: FLOSS Project Planets

    Cocomore: MySQL - Query optimization

    Planet Drupal - Tue, 2015-06-30 04:02

    Queries are the centerpiece of MySQL and they have high optimization potential (in conjunction with indexes). This is especially true for big databases (whatever "big" means). Modern PHP frameworks tend to execute dozens of queries per request. Thus, as a first step, it is required to know what the slow queries are. A built-in solution for that is the MySQL slow query log. This can either be activated in my.cnf or dynamically via the slow_query_log system variable. In both cases, long_query_time should be reduced to an appropriate value.
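    As a sketch, the my.cnf variant could look like this (the file path and threshold are illustrative):

    [mysqld]
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/slow.log
    long_query_time     = 1  # log queries that run longer than 1 second

    The same settings can also be applied at runtime with SET GLOBAL slow_query_log = 'ON'; and SET GLOBAL long_query_time = 1;.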

    read more

    Categories: FLOSS Project Planets

    Talk Python to Me: #14 Moving from PHP to Python 3 with Patreon

    Planet Python - Tue, 2015-06-30 04:00
    It's uncommon when technology and purpose combine to create something amazing. But that's exactly what's happening here at Patreon. Learn how they are using Python to enable an entirely new type of crowdsourcing for creative endeavours (podcasting, art, open source, and more). In this episode, I speak with Albert Shue from Patreon about their journey of converting patreon.com from PHP to Python 3. You will learn some practical techniques for setting up such a project for success and avoiding some of the biggest risks.

    Links from the show:
    • Patreon: http://patreon.com
    • Michael's Campaign: http://patreon.com/mkennedy
    • How to write a spelling corrector: http://norvig.com/spell-correct.html
    • Rollbar: https://rollbar.com/
    • Albert on Twitter: https://twitter.com/146
    • Patreon Hiring (1): https://medium.com/@jackconte/patreon-needs-data-scientists-c667d6fa2b4a
    • Patreon Hiring (2): https://www.patreon.com/careers
    • Stackoverflow 2015 developer survey: http://stackoverflow.com/research/developer-survey-2015
    • IPython Keynote: https://www.youtube.com/watch?v=2NSbuKFYyvc
    • Talk Python T-Shirt: talkpythontome.com/home/shirt
    • Sponsor: Codeship: https://codeship.com/
    • Sponsor: Hired: https://hired.com/talkpythontome
    Categories: FLOSS Project Planets

    tanay.co.in: Building a Slack Chatbot Powered by Drupal!

    Planet Drupal - Tue, 2015-06-30 00:45

    Ever since we moved to Slack for our team’s instant messaging needs, what has excited me the most is the nice set of APIs that Slack offers to seamlessly integrate your apps into Slack chat.

    What we needed immediately was a basic bot that handled Karma functionality allowing people to say ‘Thanks’ to others using the usual ‘++’ annotation.

    We were looking at options for various technologies. Node.js is the usual one that you hear a lot when people talk of chatbots these days.

    Drupal was an option. We were skeptical at first. Having Drupal intercept and analyze almost 3-4 messages every second at peak hours sounded like trouble. And there would be no caching involved here, as each message is unique, comes from a different user and must be processed uniquely.

    But one clear advantage I could see, and wanted to have, was to shield the rest of the team from the various complexities of the Slack APIs and configuration, exposing all of that through Drupal APIs. So if anyone needed to extend this bot further, they wouldn't really have to worry about the Slack API; they would just be building very simple Drupal modules.

    Over a weekend, a couple of us teamed up to build a chatbot for our Acquia India Slack chatrooms, using what we knew best - Drupal. And we launched it on March 1st 2015. Our bot was christened Regina Cassandra. And the bot has been up and running ever since with no downtime or any issues so far.

    The Karma Handling..

    Rarely used, but it can text people..

    And when the Cricket World Cup was happening, Regina was busy serving us scores whenever requested..

    Regina also used to give everyone a daily fortune cookie. She doesn’t seem to do that anymore, as the API that the fortune cookie module was using seems to be dead now.

    The bot uses Slack’s Outgoing Webhooks to intercept each message posted to the chatrooms, and allows all modules on the chatbot site to intercept the message and respond to it.
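    Our implementation is a set of Drupal modules, but the mechanics are easy to show in a few lines. The sketch below (in Python with Flask, purely illustrative and not our actual code) receives Outgoing Webhook POSTs and answers '++' karma messages:

    import re

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    karma = {}

    @app.route('/slack/webhook', methods=['POST'])
    def webhook():
        # Slack's Outgoing Webhooks POST the message as form data in 'text'.
        text = request.form.get('text', '')
        match = re.search(r'@?(\w+)\+\+', text)
        if not match:
            return '', 200  # no karma syntax found; stay silent
        user = match.group(1)
        karma[user] = karma.get(user, 0) + 1
        # Any JSON body with a 'text' key is posted back to the channel.
        return jsonify({'text': '%s now has %d karma' % (user, karma[user])})

    if __name__ == '__main__':
        app.run(port=4000)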

    The bot (a headless Drupal site) has been hosted on a free-tier Acquia Cloud subscription. With the decent performance it has had so far and the less-than-a-second response times we currently see, we have never felt a need to upgrade.

    Categories: FLOSS Project Planets

    Matt Raible: The House

    Planet Apache - Tue, 2015-06-30 00:04

    A few months after starting this blog, I wrote about The Cabin. I grew up in a cabin in the backwoods of Montana, with no electricity and no running water. I lived there for 16 years before moving to Oregon for my last two years of high school. As you can imagine, this makes for a good story now that I'm a programmer by trade.

    Since I'm between clients right now, I decided to head back to the cabin to see my parents for a bit. My Dad retired in 2009 and my Mom in 2010. They started building their retirement home just up the hill from the cabin in 2004. My parents moved in two years ago and completed enough of it to show it off at a big party before Trish and my wedding last year.

    The House is a majestic building, hand-built and beautifully crafted. Both the interior and exterior are amazing, with gorgeous trim and a wonderful attention to detail. The porch is possibly the best in the world, high and mighty with a great view of the cabin and garden below.

    The interior is similar to the cabin I grew up in, but much more spacious. There's a wood cook stove, a heat stove on every floor and two indoor bathrooms. It was super nice to avoid walking to the outhouse in the sub-zero temperatures during my stay.

    The main floor, pictured above, is warm and cozy this time of year. The upstairs, where the bedrooms are, is quite roomy and beautifully decorated. It's full of light and has a nice lanai deck facing west.

    The power and water systems supplying the house are all on-demand. The Kohler generator fires up when the batteries need charging and the water pump turns on when the washing machine or dishwasher run. There's an on-demand water heater as well. Solar panels charge the batteries when the sun is shining. A good charge will power lights, internet and the TV for around 24 hours. The generator, and the backup generator, are all powered by propane.

    My favorite thing about The House is that everything from my childhood is still nearby. The sauna that was built in 1917, the cabin that was built the following year. There's the log barn my Dad built one winter when he couldn't find work and the Toyota Land Cruiser that we'd rooster tail down the snowy back road to get us home in the dead of winter.

    More photos on Flickr →

    My parents built a whole new house and it still feels like home. On today, their 42nd anniversary, I'd like to say Well Done! You created a wonderful paradise in the mountains of Montana. I'm extremely proud of you.

    Categories: FLOSS Project Planets

    Astro Code School: Video - Interview with Lead Instructor Caleb Smith

    Planet Python - Tue, 2015-06-30 00:00

    In this Astro interview video I talk with our Lead Instructor Caleb Smith. We learn about Caleb's formal education, a connection between music and computer programming, and why teaching excites him. Caleb wrote the curriculum and teaches the Astro Code School Python & Django Web Development class.

    Don't forget to subscribe to the Astro Code School YouTube channel. We have a lot more videos in the works.

    Categories: FLOSS Project Planets

    Commerce Guys: Comparing Drupal Commerce & Magento

    Planet Drupal - Mon, 2015-06-29 21:53

    Much ink has been spilled about which open-source ecommerce platform is the “best.” Most comparisons perpetuate an easy (but usually incorrect) way of understanding these two platforms and whether they are a fit for your business. They are often compared like word processors, based on line-item comparisons of features, instead of as the powerful business growth engines and visible brand extensions that they are. To limit them to nothing more than published feature sets or architectural comparisons is foolish, unhelpful, and often leads companies down the wrong path. A better approach is to fully understand current and future business requirements and make a decision based on which solution can serve those needs best.

    At a high level, the most important thing to ask is “do you know what you want and how you want it done?” If you don’t know what you want, then you will likely consider a tool with lots of features out of the box. The tradeoff is that those features come with assumptions that are set in stone. While lots of prepackaged features may feel good now, you risk not being able to adapt as quickly as your competitors, or the possibility that modifying those features will lead to incompatibilities down the road. The alternative is a framework where you get a larger feature set with fewer assumptions. The tradeoff here is that you have more work to do to get off the ground—planning and implementing the exact features and experience you want—but with endless flexibility to mold a solution that exactly meets current and future business requirements. Trying to frame these solutions in purely quantitative terms just won’t do.

    Let’s take a step back from the deeply rooted (and borderline religious) discussion of frameworks and function sets, and examine at a higher level both Drupal Commerce and Magento. For business owners who are trying to figure out what’s best for them and anyone who has any experience with either technology, let’s talk about what really makes Drupal Commerce different from Magento.  Let’s get away from discussions about classes, architecture, benchmarks, features, etc. and instead, talk about each solution and objectively what problems they solve and which they do not.

    To start off, I’d like to restate a quote (attributed to Adobe SE leads) from Bryan House’s “Competing with Giants” presentation from DrupalCon Denver:

    If you are looking at both (Adobe) CQ5 and Drupal, then one of us is in the wrong place.

    This quote struck me. It sank deep into my soul. In a way, once I let the weight of these words really take hold, it completely changed my way of thinking. To help, consider this slight rewording:

    If you are looking at both Magento and Drupal Commerce, then one of us is in the wrong place.

    The obvious implication of this statement is that both Magento and Drupal Commerce have unique roles in the online commerce ecosystem. They are each geared towards certain types of projects and use cases. Instead of pitting each platform against each other to have a winner based on some arbitrary set of features or architecture, a better approach would be to first establish a clear understanding of customer needs. When the needs of a client are properly applied to the strengths of each platform, one will clearly meet those needs in a way that the other does not. Thus removing the need for a feature comparison.

    Framing the solutions

    What I’d like to attempt here is (as much as possible) an unbiased and systematic approach to discussing Drupal + Drupal Commerce and Magento as unique solutions to the question of “which commerce platform should I choose?” Keeping the internals aside, here are the particular use cases where each makes a lot of sense for a given project. This isn’t a comprehensive list, but if you’re trying to figure out which platform you should be looking at, then take a look. If you find one column aligning with your particular needs, chances are that’s the one that will be a better fit for your business.

    • Content strategy: Drupal Commerce suits various types of content with rich relationships, taxonomies, and categories; Magento suits catalog and product content with basic site content or a blog
    • Catalog complexity: Drupal Commerce offers unrestrained catalog creation and product presentation options; Magento offers a conventional catalog format and product presentation
    • Product offering: Drupal Commerce handles non-traditional and mixed-product offerings; Magento handles traditional physical and/or digital product offerings
    • Platform functionality: Drupal Commerce provides an open, flexible feature set and a custom application foundation; Magento provides a commerce-focused feature set
    • Admin interface: Drupal Commerce has a basic yet customizable admin interface; Magento has a robust, rigid admin interface
    • User experience: Drupal Commerce fits a strong, defined vision for a bespoke user experience; Magento delivers a best-practice, industry-standard user experience
    • Business strategy: with Drupal Commerce, commerce is part of a larger offering or experience; with Magento, commerce is the strategy
    • Development skill level: Drupal Commerce requires basic PHP knowledge; Magento requires advanced PHP knowledge

    Now that we’ve drawn some lines, let’s discuss.

    Content Strategy

    Drupal Commerce (by way of Drupal) has an extremely powerful content system which allows for boundless creation of content types, all with their own custom fields and attributes, editing experience, and a set of rich media tools. Content items can be related to each other, and those relationships can be harnessed to generate lists of related products and blog posts on product pages, or customized landing pages with unique product listings and content. It’s a breeze to set this up and you can do all of this without touching a line of code. If providing content and information to your customers is vital to your business and how you differentiate yourself from others, Drupal is what you want.

    Magento, on the other hand, has a very basic content system. You can add pages, add some content to category pages, and adding attributes to products is painless. There are even some great built-in blog modules. But once you step outside of this, you’re in custom territory. You’ll either need two systems (like a blog or a CMS) or you’ll end up building it all custom into Magento increasing cost and ongoing support. Again, it’s not that Magento can’t do content at all, just that Magento’s content features are pretty basic. Enterprise does expand on this, but you still have a very limited tool set and code changes (requiring a developer) are usually required to expand on it.

    Catalog Complexity

    Magento offers what any reasonable person might consider to be a wholly conventional approach to catalog management. You have a catalog root, and from there you can create tiers of categories. Products fall into one or more of those categories. In fact, it’s pretty common for a product to exist within multiple groups based on how visitors will look for and try to find those particular products. But Magento is also pretty strict that products really can’t be displayed outside of this hierarchy. Aside from some of the product blocks for up-sells and cross-sells, your ability to display products is completely centered around this. Also, product listings are limited to list and grid views without additional extensions or modifications.

    Drupal Commerce releases you from this constraint. Products can be organized, tagged, and dynamically added to or removed from product lists automatically. A traditional catalog-like user experience can be built. But the catalog automatically follows how you already organize your product and can use virtually any attribute that exists on a product. And when you want to display your products, you can choose from a number of pluggable styles from tables, grids, and lists, and each product can have its own customized look and feel in a product list, too. This can make a huge difference as you try to differentiate, promote, and get your visitors engaged in what you have to offer—no matter how many products you have or how complicated they are.

    Product Offering

    If you’re selling physical and/or digital products, both platforms are fairly good at that. In fact, Magento again has a lot of features that don’t require individual setup. Want simple and no-fuss sales of traditional products? Magento can tackle that easily. With Drupal Commerce, you start with a basic product structure and are then free to build exactly what you want no matter how complex it might be.

    When it comes to non-traditional offerings—event registrations, donations, recurring billing, licensing, and subscription service models—Drupal Commerce provides tools to configure or build what you need without having to reinvent the wheel. And best of all, you can mix any and all of these product types pretty easily. So if you want to do registrations, t-shirt sales, and a recurring monthly donation, you can easily support that in a single site and in a single transaction.

    Platform Functionality

    Magento has a well implemented and cohesive commerce feature set. And frankly, if you’re judging a product solely on the published feature set, Magento looks good. That’s not because Drupal Commerce doesn’t have a great feature set—in fact it’s much more expansive than Magento’s—but Drupal Commerce’s flexibility is in the expansive and ever-growing list of modules. It’s hard to quantify. If you’re only looking for a shopping cart and you’re happy with what Magento provides, it may very well be the right choice.

    However, if you want to integrate features that go beyond commerce—you want to define and build your own custom application or create a totally unique customer experience—then Drupal Commerce will be a much better platform, enabling you to adapt quickly to business and market changes. Entire new areas of functionality can be configured and enabled just like a new feature. Whether you’re adding customer portals, forums, web services, an LMS, or even back office functionality, Drupal can give you the agility and freedom to change and grow as you need to.

    Admin Interface

    While Drupal’s administrative interface can be endlessly customized and tailored to your specific needs (in many cases without even touching the code), it generally tends to be pretty basic. It is trivial to create new screens, targeted to specific users, that give specific information and actions that can be performed on that information. In short, you can get what you want, but you’ll have to spend the time configuring it.

    Magento’s administrative interface is comprehensive and gives users a structured, well-defined way to manage the entire store. If you’re willing to use what’s out of the box, then it will serve you well. The pain will come if you ever decide to deviate from the out of the box experience. Customizations require code modification and even “small changes” could require considerable effort to complete.

    User Experience

    When it comes to user experience, Magento delivers a best-practices, industry standard implementation of a traditional shopping cart: you get a catalog, a cart with a one-page checkout, account pages, etc. It’s designed to be a finished product, so you can pretty much trust it will all be there and that it will work well.

    Drupal Commerce provides all of that same functionality, but expects you to expend some effort to make it look good. At a minimum, you’ll need to theme it. That’s not much to ask since you’re likely already doing that for the rest of your site. Drupal’s theme system is extremely powerful and adding unique or advanced features can be really easy. In some cases, little to no theme work is required. In addition, the user experience for path to purchase can be more easily integrated with the content experience, giving the merchant far more content marketing and merchandizing avenues.

    Business Strategy

    Drupal is a powerful platform. If you don’t know what I’m talking about, it is something that can’t be explained in a single paragraph. Drupal can be a community forum, a wiki, an LMS, a translatable content management platform, a web services platform, and an online store. In fact, it could be all of these things at one time. If your vision calls for a platform that can do more than one thing, Drupal can rise to the challenge and integrate several business platforms under a single (software and hardware) roof.

    Magento, no surprise here, is a shopping cart. That’s what it does. It does it well, but if you are wanting to integrate Magento with another part of your business (e.g. magazine subscriptions, forum membership, etc.) you’ll have to deal with two independent systems talking with each other. You’ll be synchronizing your data between multiple systems and having to keep everything up to date with custom or 3rd party solutions.

    Development Skills

    If you’re wondering how easy it’ll be to integrate your team with either Drupal Commerce or Magento, here’s what you need to know.

    Magento is a very powerful and complex system. It makes heavy use of objects, inheritance, and programming concepts that are confusing to basic and even some moderately experienced PHP developers. Getting acclimated to Magento as a back-end or front-end developer could take weeks or months, depending on the experience level. Also, architecturally, Magento does have some profound gotchas when it comes to adding and developing many extensions on a site. Documentation is so-so, but there is a very active community of bloggers, training is available, and Magento support is pretty widely available.

    Drupal Commerce is much simpler and even people with minimal to no PHP experience can customize and pick it up within a few days. While parts of Drupal use objects (such as Views and the entity system) much of it is procedural. Drupal is designed to be much more accessible to individuals without coding experience. This flexibility is made available to non-coders through the various modules (such as Views, Rules, features, VBO, etc.) that offer powerful UIs to manage it. However, when code is necessary, bespoke modules can often be very simple. Documentation is generally very good for things like Drupal and Drupal Commerce, while contributed modules can vary from having non-existent to excellent documentation. Again, a very active and friendly community exists to support Drupal developers and users, and a wide range of training and support is available.

    Conclusion

    When deciding on an open source ecommerce solution, it is important to first look at the fundamentals of your business and identify your priorities. By doing this you will avoid the needless exercise of features comparisons and checklists and quickly conclude that one of these solutions is in the wrong place.  If content is important to how you engage with customers and sell your product and if you want to control and choose how the solution supports your business needs, Drupal + Drupal Commerce is generally the right choice.

    Categories: FLOSS Project Planets

    Davide Moro: Pip for buildout folks

    Planet Python - Mon, 2015-06-29 19:26
    ... or buildout for pip folks.

    In this article I'm going to talk about how to manage software (Python) projects with buildout or pip.

    What do I mean by project?
    A package that contains all the application-specific settings, the database configuration, which packages your project will need and where they live. Projects should be managed like software if you want to assure the needed quality.
    This blog post is not:
    • intended to be a complete guide to pip or buildout (if you want to know more about either, see the links at the end of this post)
    • about how to deploy your projects remotely
    Buildout
    I've been using buildout for many years and we are still good friends.
    Buildout definition (from http://www.buildout.org):
    """
    Buildout is a Python-based build system for creating, assembling and deploying applications from multiple parts, some of which may be non-Python-based. It lets you create a buildout configuration and reproduce the same software later. 
    """With buildout you can build and share reproducible environments, not only for Python based components.

    Before buildout (if I remember correctly, the first time I got started with buildout was in 2007, probably during the very first Plone Sorrento sprint) it was a real pain sharing a complete and working development environment pointing to the right versions of several repositories, etc. With buildout it became a question of minutes.

    From https://pypi.python.org/pypi/mr.developer.
    Probably with pip there is less fun because there isn't a funny picture that celebrates it?!

    Buildout configuration files are modular and extensible (not only on a per-section basis). There are a lot of buildout recipes; probably the one I prefer is mr.developer (https://pypi.python.org/pypi/mr.developer). It allows me to fetch different versions of the repositories depending on the buildout profile in use, for example:
    • production -> each developed private egg point to a tag version
    • devel -> the same eggs point to the develop/master
    You can accomplish this by creating different configurations for different profiles, like this:

    [buildout]

    ...

    [sources]

    your_plugin = git git@github.com:username/your_plugin.git

    ...

    I don't like calling ./bin/buildout -c [production|devel].cfg with the -c syntax because it is too error prone. I prefer to create a symbolic link to the right buildout profile (called buildout.cfg), so you perform the same command both in production and during development, always typing:

    $ ./bin/buildout
    This way you'll avoid nasty errors like launching the wrong profile in production. So use just the plain ./bin/buildout command and live happily.

    With buildout you can show and freeze all the installed versions of your packages providing a versions.cfg file.
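    A minimal sketch (the package names and pins are illustrative):

    [buildout]
    versions = versions

    [versions]
    kotti_actions = 0.1.1
    kotti_boxes = 0.1.3

    Extensions such as buildout.dumppickedversions can generate the pinned list for you.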

    Here you can see my preferred buildout recipes:
    Buildout or not, one of the most common needs is the ability to switch from develop to tags depending on whether you are in development or production mode, and to reproduce the same software later. I can't imagine managing software installations without this quality assurance.

    More info: http://www.buildout.org

    Pip
    Let's see how to create reproducible environments, with develop dependencies or tagged dependencies for production environments, using pip (https://pip.pypa.io/en/latest/).

    Basically you specify your devel requirements in a devel-requirements.txt file (the name doesn't matter), pointing to the develop/master/trunk of your repositories.

    There is another file that I call production-requirements.txt (again, the file name doesn't matter) which is equivalent to the previous one but:
    • without devel dependencies you don't want to install in production mode
    • tagging your private applications (instead of master -> 0.1.1)
    This way it is quite simple to see which releases are installed in production mode, with no cryptic hash codes.

    You can now use production-requirements.txt as a template for generating an easy-to-read requirements.txt. You'll use this file when installing in production.

    You can create a regular Makefile (or scripts, if you prefer) if you don't want to repeat yourself:
    • compile Sphinx documentation
    • provide virtualenv initialization
    • launch tests against all developed eggs
    • update the final requirements.txt file
    For example, if you are particularly lazy, you can create a script that will create your requirements.txt file using production-requirements.txt as a template.
    Here is a simple script (just an example) that shows how to build your requirements.txt, omitting lines with grep, sed, etc.:
    #!/bin/bash

    pip install -r production-requirements.txt
    pip freeze -r production-requirements.txt | grep -v mip_project | sed '1,2d' > requirements.txt

    When running this script, you should activate another Python environment in order not to pollute the production requirements list with development stuff.

    If you want to make your software reusable and as flexible as possible, you can add a regular setup.py module with optional dependencies that you can activate depending on what you need. For example, in devel mode you might want to activate an extra called docs (see -e .[docs] in devel-requirements.txt) with optional Sphinx dependencies. Or in production you can install MySQL-specific dependencies (-e .[mysql]).

    In the examples below I'll also show how to refer to an external requirements file (a URL or a file).

    setup.py
    You can define optional extra requirements in your setup.py module.
    mysql_requires = [
        'MySQL-python',
    ]

    docs_requires = [
        'Sphinx',
        'docutils',
        'repoze.sphinx.autointerface',
    ]
    ...

    setup(
        name='mip_project',
        version=version,
        ...
        extras_require={
            'mysql': mysql_requires,
            'docs': docs_requires,
            ...
        },
    )

    devel-requirements.txt
    Optional extra requirements can be activated using the [] syntax (see -e .[docs]).
    You can also include external requirement files or URLs (see -r) and tell pip how to fetch some concrete dependencies (see -e git+...#egg=your_egg).
    -r https://github.com/.../.../blob/VERSION/requirements.txt # Kotti
    Kotti[development,testing]==VERSION

    # devel (to no be added in production)
    zest.releaser

    # Third party's eggs
    kotti_newsitem==0.2
    kotti_calendar==0.8.2
    kotti_link==0.1
    kotti_navigation==0.3.1

    # Develop eggs
    -e git+https://github.com/truelab/kotti_actions.git#egg=kotti_actions
    -e git+https://github.com/truelab/kotti_boxes.git#egg=kotti_boxes
    ...

    -e .[docs]

    production-requirements.txt
    The production requirements should point to tags (see @VERSION).
    -r https://github.com/Kotti/Kotti/blob/VERSION/requirements.txt
    Kotti[development,testing]==VERSION

    # Third party's eggs
    kotti_newsitem==0.2
    kotti_calendar==0.8.2
    kotti_link==0.1
    kotti_navigation==0.3.1

    # Develop eggs
    -e git+https://github.com/truelab/kotti_actions.git@0.1.1#egg=kotti_actions
    -e git+https://github.com/truelab/kotti_boxes.git@0.1.3#egg=kotti_boxes
    ...

    -e .[mysql]

    requirements.txt
    The requirements.txt file is autogenerated based on the production-requirements.txt model file. All the installed versions are appended in alphabetical order at the end of the file; it can be a very long list.
    All the tag versions provided in production-requirements.txt are automatically converted to hash values (@VERSION -> @3c1a191...).
    Kotti==1.0.0a4

    # Third party's eggs
    kotti-newsitem==0.2
    kotti-calendar==0.8.2
    kotti-link==0.1
    kotti-navigation==0.3.1

    # Develop eggs
    -e git+https://github.com/truelab/kotti_actions.git@3c1a1914901cb33fcedc9801764f2749b4e1df5b#egg=kotti_actions-dev
    -e git+https://github.com/truelab/kotti_boxes.git@3730705703ef4e523c566c063171478902645658#egg=kotti_boxes-dev
    ...

    ## The following requirements were added by pip freeze:
    alembic==0.6.7
    appdirs==1.4.0
    Babel==1.3
    Beaker==1.6.4
    ...

    Final consideration
    Use pip to install Python packages from PyPI.

    If you’re looking for management of fully integrated cross-platform software stacks, buildout is for you.

    With buildout, no Python code is needed unless you are going to write new recipes (the plugin mechanism buildout provides for adding new functionality to your software build, see http://buildout.readthedocs.org/en/latest/docs/recipe.html).

    With pip you can also manage cross-platform stacks, but you lose the flexibility of buildout recipes and inheritable configuration files.

    Anyway, if you consider buildout too magic or you just need a way to switch between production and development mode, you can use pip as well.
    Links
    If you need more info have a look at the following urls:
    Other useful links:

    Update 20150629
    If you want an example, I've created a pip-based project for Kotti CMS (http://kotti.pylonsproject.org):
      Categories: FLOSS Project Planets

      Jonathan McDowell: What Jonathan Did Next

      Planet Debian - Mon, 2015-06-29 18:22

      While I mentioned last September that I had failed to be selected for an H-1B and had been having discussions at DebConf about alternative employment, I never got around to elaborating on what I’d ended up doing.

      Short answer: I ended up becoming a law student, studying for a Masters in Legal Science at Queen’s University Belfast. I’ve just completed my first year of the 2 year course and have managed to do well enough in the 6 modules so far to convince myself it wasn’t a crazy choice.

      Longer answer: After Vello went under in June I decided to take a couple of months before fully investigating what to do next, largely because I figured I’d either find something that wanted me to start ASAP or fail to find anything and stress about it. During this period a friend happened to mention to me that the applications for the Queen’s law course were still open. He happened to know that it was something I’d considered before a few times. Various discussions (some of them over gin, I’ll admit) ensued and I eventually decided to submit an application. This was towards the end of August, and I figured I’d also talk to people at DebConf to see if there was anything out there tech-wise that I could get excited about.

      It turned out that I was feeling a bit jaded about the whole tech scene. Another friend is of the strong opinion that you should take a break at least every 10 years. Heeding her advice I decided to go ahead with the law course. I haven’t regretted it at all. My initial interest was largely driven by a belief that there are too few people who understand both tech and law. I started with interests around intellectual property and contract law as well as issues that arise from trying to legislate for the global nature of most tech these days. However the course is a complete UK qualifying degree (I can go on to do the professional qualification in NI or England & Wales) and the first year has been about public law. Which has been much more interesting than I was expecting (even, would you believe it, EU law). Especially given the potential changing constitutional landscape of the UK after the recent general election, with regard to talk of repeal of the Human Rights Act and a referendum on exit from the EU.

      Next year will concentrate more on private law, and I’m hoping to be able to tie that in better to what initially drove me to pursue this path. I’m still not exactly sure which direction I’ll go once I complete the course, but whatever happens I want to keep a linkage between my skill sets. That could be either leaning towards the legal side but with the appreciation of tech, returning to tech but with the appreciation of the legal side of things or perhaps specialising further down an academic path that links both. I guess I’ll see what the next year brings. :)

      Categories: FLOSS Project Planets

      X-Team: Drupalwood: Why you should attend a DrupalCon

      Planet Drupal - Mon, 2015-06-29 17:41

      Time flies – it’s already summer, and I hope yours is going well! Seems like just yesterday I was at DrupalCon in Los Angeles, the famous city of movie-making – to make it sound more like a dream… at least my own dream, one that came true. Because part of our team was invited...

      The post Drupalwood: Why you should attend a DrupalCon appeared first on X-Team.

      Categories: FLOSS Project Planets

      Lunar: Reproducible builds: week 9 in Stretch cycle

      Planet Debian - Mon, 2015-06-29 17:03

      What happened in the reproducible builds effort this week:

      Toolchain fixes

      Norbert Preining uploaded texinfo/6.0.0.dfsg.1-2, which makes texinfo indices reproducible. Original patch by Chris Lamb.

      Lunar submitted recently rebased patches to make the order of files inside .deb packages stable.

      akira filed #789843 to make tex4ht stop printing timestamps in its HTML output by default.

      Dhole wrote a patch for xutils-dev to prevent timestamps when creating gzip compressed files.

      Reiner Herrmann sent a follow-up patch for wheel to use UTC as the timezone when outputting timestamps.
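
      Why the timezone matters, in a minimal Python sketch (an illustration of the general issue, not the wheel patch itself): the same instant renders differently depending on the build host's local timezone, while UTC output is stable everywhere. Unix-only, since it relies on time.tzset().

          import os, time

          epoch = 1435600000  # an arbitrary fixed instant

          os.environ["TZ"] = "America/New_York"; time.tzset()
          a = time.strftime("%Y-%m-%d %H:%M", time.localtime(epoch))
          os.environ["TZ"] = "Asia/Tokyo"; time.tzset()
          b = time.strftime("%Y-%m-%d %H:%M", time.localtime(epoch))
          u = time.strftime("%Y-%m-%d %H:%M", time.gmtime(epoch))

          print(a, b)  # two different strings for the same instant
          print(u)     # identical on every host, whatever its local timezone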

      Mattia Rizzolo started a discussion regarding the failure to build from source of subversion when -Wdate-time is added to CPPFLAGS—which happens when asking dpkg-buildflags to use the reproducible profile. SWIG errors out because it doesn't recognize the aforementioned flag.

      Trying to get the .buildinfo specification to a more definitive state, Lunar started a discussion on storing the checksums of the binary packages used in the dpkg status database.

      akira discovered, while proposing a fix for simgrid, that CMake's internal command to create tarballs records a timestamp in the gzip header. A way to prevent it is to use the GZIP environment variable to ask gzip not to store timestamps, but this will soon become unsupported. It is up for discussion whether the best place to fix the problem would be to fix it for all CMake users at once.
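
      For readers unfamiliar with the underlying problem: the gzip file format reserves a four-byte MTIME field in its header, which most tools fill with the current time. A minimal Python sketch (illustrating the general issue, not the CMake fix itself) shows that pinning this field yields byte-identical output; mtime=0 is the library-level equivalent of gzip -n or GZIP=-n.

          import gzip, io

          def gz(data, mtime):
              buf = io.BytesIO()
              # GzipFile exposes the MTIME header field directly; a fixed
              # value removes the only varying bytes in this example.
              with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
                  f.write(data)
              return buf.getvalue()

          data = b"hello, reproducible world\n"
          assert gz(data, 0) == gz(data, 0)  # byte-for-byte identical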

      Infrastructure-related work

      Andreas Henriksson did a delayed NMU upload of pbuilder, which adds minimal support for build profiles and includes several fixes from Mattia Rizzolo affecting reproducibility tests.

      Niels Thykier uploaded lintian, which both raises the severity of package-contains-timestamped-gzip and avoids false positives for this tag (thanks to Tomasz Buchert).

      Petter Reinholdtsen filed #789761 suggesting that how-can-i-help should prompt its users about fixing reproducibility issues.

      Packages fixed

      The following packages became reproducible due to changes in their build dependencies: autorun4linuxcd, libwildmagic, lifelines, plexus-i18n, texlive-base, texlive-extra, texlive-lang.

      The following packages became reproducible after getting fixed:

      Some uploads fixed some reproducibility issues but not all of them:

      Untested uploads, as they are not in main:

      Patches submitted which have not made their way to the archive yet:

      • #789648 on apt-dater by Dhole: allow the build date to be set externally and set it to the time of the latest debian/changelog entry.
      • #789715 on simgrid by akira: fix doxygen and patch CMakeLists.txt to give GZIP=-n for tar.
      • #789728 on aegisub by Juan Picca: get rid of __DATE__ and __TIME__ macros.
      • #789747 on dipy by Juan Picca: set documentation date for Sphinx.
      • #789748 on jansson by Juan Picca: set documentation date for Sphinx.
      • #789799 on tmexpand by Chris Lamb: remove timestamps, hostname and username from the build output.
      • #789804 on libevocosm by Chris Lamb: removes generated files which include extra information about the build environment.
      • #789963 on qrfcview by Dhole: removes the timestamps from the generated PNG icon.
      • #789965 on xtel by Dhole: removes timestamps from gzip-compressed files and from the PNG icon.
      • #790010 on simbody by akira: set HTML_TIMESTAMP=NO in Doxygen configuration (this recurring fix is shown in the sketch after this list).
      • #790023 on stx-btree by akira: pass HTML_TIMESTAMP=NO to Doxygen.
      • #790034 on siscone by akira: removes $datetime from footer.html used by Doxygen.
      • #790035 on thepeg by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790072 on libxray-spacegroup-perl by Chris Lamb: set $Storable::canonical = 1 to make space_groups.db.PL output deterministic.
      • #790074 on visp by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790081 on wfmath by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790082 on wreport by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790088 on yudit by Chris Lamb: removes timestamps from the build system by passing a static comment.
      • #790122 on clblas by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790133 on dcmtk by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790139 on glfw3 by akira: patch for Doxygen timestamps further improved by James Cowgill by removing $datetime from the footer.
      • #790228 on gtkspellmm by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
      • #790232 on ucblogo by Reiner Herrmann: set LC_ALL to C before sorting.
      • #790235 on basemap by Juan Picca: set documentation date for Sphinx.
      • #790258 on guymager by Reiner Herrmann: use the date from the latest debian/changelog as the build date.
      • #790309 on pelican by Chris Lamb: removes useless (and unreproducible) tests.
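
      Since so many of the patches above are the same one-line Doxygen change, here is a minimal sketch of what they amount to (the file name and helper are illustrative assumptions, not code from the patches): force HTML_TIMESTAMP to NO in a project's Doxyfile so the generated HTML footer no longer embeds the time of the build.

          import re

          def disable_doxygen_timestamp(path="Doxyfile"):
              """Set HTML_TIMESTAMP = NO, appending the setting if absent."""
              with open(path) as f:
                  text = f.read()
              pattern = re.compile(r"^\s*HTML_TIMESTAMP\s*=.*$", re.MULTILINE)
              if pattern.search(text):
                  text = pattern.sub("HTML_TIMESTAMP = NO", text)
              else:
                  text += "\nHTML_TIMESTAMP = NO\n"
              with open(path, "w") as f:
                  f.write(text)

      When enabled, HTML_TIMESTAMP makes Doxygen write the generation date into every page footer, exactly the kind of harmless-looking variation that breaks bit-for-bit comparison of two builds.
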
      debbindiff development

      debbindiff/23 includes a few bugfixes by Helmut Grohne that result in a significant speedup (especially on larger files). It used to exhibit the quadratic time string concatenation antipattern.
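
      For the curious, the antipattern in miniature (a generic Python sketch, not debbindiff's actual code): repeatedly appending to an immutable string may copy the whole accumulator on every append, so n appends can cost O(n^2) character copies, while collecting the parts and joining once is linear.

          pieces = ["line %d\n" % i for i in range(100000)]

          # Quadratic worst case: each += may copy everything so far.
          slow = ""
          for p in pieces:
              slow += p

          # Linear: one pass, one final allocation.
          fast = "".join(pieces)

          assert slow == fast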

      Version 24 was released on June 23rd in a hurry to fix an undefined variable introduced in the previous version. (Reiner Herrmann)

      debbindiff now has a test suite! It is written using the PyTest framework (thanks to Isis Lovecruft for the suggestion). The current focus has been on the comparators, and we are now at 93% code coverage for these modules.

      Several problems were identified and fixed in the process: paths appearing in the output of javap, readelf, objdump, zipinfo, and unsquashfs; useless MD5 checksums and last-modified dates in javap output; bad handling of charsets in PO files; the destination path for gzip compressed files not ending in .gz; and the fact that only the metadata of cpio archives was actually compared. stat output was further trimmed to make directory comparison more useful.

      Having the test suite enabled a refactoring of how comparators were written, switching from a forest of differences to a single tree. This helped remove dust from the oldest parts of the code.

      Together with some other small changes, version 25 was released on June 27th. A follow-up release was made the next day to fix a hole in the test suite and the resulting unidentified leftover from the comparator refactoring. (Lunar)

      Documentation update

      Ximin Luo improved the code examples for some proposed environment variables for reference timestamps. Dhole added an example of how to fix timestamps coming from C pre-processor macros by adding a way to set the build date externally. akira documented her fix for tex4ht timestamps.
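
      The pattern being documented looks roughly like this hedged Python sketch (the SOURCE_DATE_EPOCH variable name is the one the specification work has been converging on, but treat the details here as illustrative rather than as the documented interface): honour an externally supplied reference timestamp, and fall back to the current time for ordinary local builds.

          import os, time

          def reference_timestamp():
              # Use the externally provided epoch when set (for example,
              # derived from the latest debian/changelog entry); otherwise
              # fall back to "now".
              epoch = os.environ.get("SOURCE_DATE_EPOCH")
              return int(epoch) if epoch is not None else int(time.time())

          # Render in UTC so the output does not also depend on the host
          # timezone.
          print(time.strftime("%Y-%m-%d %H:%M:%S",
                              time.gmtime(reference_timestamp())))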

      Package reviews

      94 obsolete reviews have been removed, 330 added and 153 updated this week.

      Hats off to Chris West (Faux), who investigated many fail-to-build-from-source issues and reported the relevant bugs.

      Slight improvements were made to the scripts for editing the review database, edit-notes and clean-notes. (Mattia Rizzolo)

      Meetings

      A meeting was held on June 23rd. Minutes are available.

      The next meeting will happen on Tuesday 2015-07-07 at 17:00 UTC.

      Misc.

      The Linux Foundation announced that it was funding the work of Lunar and h01ger on reproducible builds in Debian and other distributions. This was further relayed in a Bits from Debian blog post.

      Categories: FLOSS Project Planets

      DrupalCon News: DrupalCon Programming Announced - Sessions and Training Selected!

      Planet Drupal - Mon, 2015-06-29 16:22

      One of the most exciting aspects of preparing for a DrupalCon is selecting the sessions that will be presented. It’s always incredibly cool and humbling to see all the great ideas that our community comes up with, and making the official selections is definitely not an easy process! This time, the Track Chairs had over 500 sessions to read through to determine what content would be presented in Barcelona.

      Categories: FLOSS Project Planets

      Forum One: Join Us at Drupal GovCon!

      Planet Drupal - Mon, 2015-06-29 16:05

      We’re excited for Drupal GovCon coming up on July 22nd! We can’t wait to spend time with the Drupal4Gov community and meet fellow Drupalers from all over! Forum One will be presenting sessions in all four tracks: Site Building, Business and Strategy, Code & DevOps, and Front-end, User Experience and Design! Check out our sessions to learn more about Drupal 8 and other topics!

      Here are our sessions at a glance…

      What’s in Your Audit? A Guide to Auditing Drupal Sites

      Nervous about providing support for a new Drupal site? A comprehensive audit will prepare you to take on Drupal sites that weren’t built by you. Join this session and learn from Forum One’s John Brandenburg as he reviews the audit checklist our team uses before we take over support work for any Drupal site.

      Drupal 8 for Non-developers

      Drupal 8’s getting close to launching – do you feel like you need a crash course in what this means? Join Forum One’s Chaz Chumley as he demystifies Drupal 8 for you and teaches you all that you need to know about the world of developers.

      The Drupal 8 Decision: Budgets, Bosses, and Bul@#$% Standing between You and the Next World-class CMS

      If you’re wondering how to prepare your organization for upgrading your sites to Drupal 8, join WETA’s Jess Snyder, along with Forum One’s Andrew Cohen and Chaz Chumley, as they answer questions about the available talent, budgets, goals, and more in regard to Drupal 8.

      The Building Blocks of D8

      The building blocks of Drupal have changed, and now is the perfect time to rethink how to build themes in Drupal 8. Join Chaz Chumley as he dissects a theme and exposes the best practices that we should all be adopting for Drupal 8.

      Building Realtime Applications with Drupal and Node.js

      Drupal 8’s first-class REST interface opens up a world of opportunities to build interactive applications. Come learn from Forum One’s William Hurley how to connect a Node.js application to Drupal to create dynamic updates, as he demonstrates the capabilities of both JavaScript and Node.js using Drupal, AngularJS, and Sails.js!

      Automating Deployments

      Are you excited to launch your new website, but getting held up by all the steps it takes for your code to make it online? On top of that, each change requires the same long process all over again… what a nail-biting experience! Join William Hurley as he demonstrates the power of Jenkins and Capistrano for managing continuous integration and deployment using your git repository.

      Combining the Power of Views and Rules

      If you’re a beginner who has found the Views module confusing, come check out this session and learn important features of this popular module from Leanne Duca and Forum One’s Onaje Johnston. They’ll also highlight some additional modules that extend the power of Views.

      Paraphrasing Panels, Panelizer and Panopoly

      Have you ever felt that Panels, Panelizer, and Panopoly were a bit overwhelming? Well, come to our session from Forum One’s Keenan Holloway. He will go over the best features of each one and why they are invaluable tools. Keenan will also give out a handy cheat sheet to remember it all, so make sure to stop by!

      D3 Data Visualization

      Data visualization is the go-to right now! Maps, charts, interactive presentations – what tools do you use to build your visual data story? We feel that D3.js is the best tool, so come listen to Keenan Holloway explain why you should be using D3, how to use D3’s visualization techniques, and more.

      To the Pattern Lab! Better Collaboration in Drupal Using Atomic Design Principles

      Implementing modular design early on in any Drupal project will improve your team’s workflow and efficiency! Attend our session to learn from our very own Daniel Ferro how to use styleguide/prototyping tools like Pattern Lab to increase collaboration between designers, themers, developers, and your organization on Drupal projects.

      Integrating Mentoring into an Open Source Community that Welcomes and Values New Contributors

      Are you hoping to mentor new contributors? Check out this session where Forum One’s Kalpana Goel and Cathy Theys from BlackMesh will talk about how to integrate mentoring into all the layers of an open source project and how to develop mentoring into a habit. They’ll be using the Drupal community as an example!

      Building an Image Gallery with Drupal 8

      If you’re a beginner looking to set up an image gallery, attend this session! Leanne Duca and Onaje Johnston will guide you in how to set up a gallery in Drupal 8 and how to overcome any challenges you may encounter!

      Painting a Perfect Picture with Gesso

      Attend this session and learn how to design and theme Drupal sites using Atomic Design and the Drupal 8 CSS architecture guidelines from our very own Dan Mouyard! He’ll go over our Gesso theme and our version of Pattern Lab and how they allow us to quickly design and prototype reusable design components, layouts, and pages.

      Can’t make it to all of the sessions? Don’t worry, you’ll be able to catch us outside of our scheduled sessions! If you want to connect, stop by our table or check us out on Twitter (@ForumOne). We can’t wait to see you at Drupal GovCon!

      Categories: FLOSS Project Planets

      Bryan Pendleton: Stuff I'm reading, end of June edition

      Planet Apache - Mon, 2015-06-29 16:02

      Took a day off to enjoy the beautiful weather ... and tonight we're heading into the city for something we've never done before: opera!

      Meanwhile:

      • Oracle v. Google Android-Java copyright case goes back to San Fran: Supreme Court denies Google petition: Now that the Supreme Court has denied Google's petition and appellate attorney Joshua Rosenkranz (of Orrick Herrington Sutcliffe) has once again shown why he was dubbed the "Defibrillator" (for bringing cases back to life that appeared to have been lost), the sizable litigation caravan that had gone from California to Washington DC for the appellate proceedings--where an amazing reversal of fortunes occurred, with Oracle now having the upper hand--can finally head back all the way to the West.
      • The Problem With Putting All the World's Code in GitHub: But Github’s pending emergence as Silicon Valley’s latest unicorn holds a certain irony. The ideals of open source software center on freedom, sharing, and collective benefit—the polar opposite of venture capitalists seeking a multibillion-dollar exit. Whatever its stated principles, Github is under immense pressure to be more than just a sustainable business. When profit motives and community ideals clash, especially in the software world, the end result isn’t always pretty.
      • The costs of forks: A request from Emilien Macchi for more collaboration between Fuel and the Puppet OpenStack project kicked off the discussion. Puppet is a configuration management utility that is used by Fuel to assist in deploying OpenStack. But Fuel has forked some of the Puppet modules it uses from the Puppet OpenStack project—which creates Puppet modules for OpenStack components—into its Fuel Library repository. Macchi noted a number of problems with how that has been handled over the last two years.
      • Killing yourself to survive is not the same as innovating: To be sure, it is a maxim of economics that the returns to innovation are often higher for new entrants than incumbents precisely because new entrant’s don’t care about the sales of existing products. But it is equally important to note that that maxim only arises if the incumbent expects entrants not to win with those new products. As soon as they expect that, because the incumbent has other assets — namely its core brand — the incumbent has a more powerful incentive to develop those new products than entrants. This is one way of looking at Facebook’s acquisitions. They are not so much about becoming a family of brands but instead, after uncertainty is resolved, continuing to fund innovation on products where the incentives to fund entrant’s doing them has fallen, not risen.
      • Collected Wisdom on Business and Life from the HBS Class of 1963: The site, If I Knew Then, is actually also a book written by Artie Buerk, a member of the Harvard Business School (HBS) class of 1963 and contains collected wisdom — all in quotation form — from his classmates, gathered in preparation for their 50th reunion.
      • Respect Your Salespeople: They Earn Your Salary: A salesperson’s domain knowledge isn’t just a boon to sales, it’s an important component when sales brings back customer feedback. Engineers know when they’re dealing with a “coin-operated” salesperson, and they have no respect for the species. ‘Ah, he doesn’t know what he’s talking about!’. Actually, engineers don’t have much respect for sales, period, but they’ll listen to a technically competent salesperson, particularly if he or she is bringing back an esoteric bug report.
      • Broadcasters, fighting, and data leakage: The radio station builds an audience, and the third-party trackers leak it away.
      • Google Summer of Code 2015 Frequently Asked Questions: Timely evaluations of Google Summer of Code students are crucial to us.
      • Expanding the Panama Canal: Work began in 2007 to raise the capacity of Gatun Lake and build two new sets of locks, which would accommodate ships carrying up to 14,000 containers of freight, tripling the size limit. Sixteen massive steel gates, weighing an average of 3,100 tons each, were built in Italy and shipped to Panama to be installed in the new locks. Eight years and $5.2 billion later, the expansion project is nearing completion. The initial stages of flooding the canals have begun and the projected opening date has been set for April of 2016.
      Categories: FLOSS Project Planets

      Bryan Pendleton: Santa Clara night two looks like it was a very different show

      Planet Apache - Mon, 2015-06-29 15:48

      Traditionally, the Grateful Dead never play the same show twice.

      And it looks like that tradition has not changed.

      Day Two was completely different from Day One: Setlist & Review | Fare Thee Well Santa Clara Night Two on JamBase.

      It sounds like there was a lot more singing, and a much more accessible show, with songs plucked from both new and old. I'm sad I missed Wharf Rat and Eyes of the World, two of my favorites, but we had such a wonderful time on Saturday that I'm certainly not complaining!

      Now, on to Chicago; I wonder what additional wonders they will pull out of that hat...

      Categories: FLOSS Project Planets