FLOSS Project Planets

Nikola: Nikola v7.8.9 is out! (maintenance release)

Planet Python - Tue, 2017-06-20 15:00

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.8.9. This is a maintenance release for the v7 series.

Future releases in the v7 series are going to be small maintenance releases that include bugfixes only, as work on v8.0.0 is underway.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, takes input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which rebuilds only what has changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola or download tarballs on GitHub and PyPI.

Changes
  • Restore missing unminified assets
  • Make failures to get source commit hash non-fatal in github_deploy (Issue #2847)
  • Fix a bug in HTML meta parsing that crashed on <meta> tags without name (Issue #2835)
  • Fix math not showing up in some circumstances (Issue #2841)
Categories: FLOSS Project Planets

Blog keopx: Debugging Drush scripts with Xdebug and PhpStorm

Planet Drupal - Tue, 2017-06-20 13:32
Debugging Drush scripts with Xdebug and PhpStorm

To correctly set up an environment for debugging Drush commands with Xdebug and PhpStorm, some specific configuration is required:

  • Configure a PHP Web Application to debug from the command line.
  • All executed code must be available in the project, including drush.
    • E.g. install drush as a Composer dependency (and remember to run drush from your project).
  • Enable Xdebug debugging for the command line:
sudo phpenmod xdebug
  • Symlink xdebug.ini from my /etc/php/7.0/cli/conf.d directory, as I was already using it in /etc/php/7.0/apache/conf.d for web debugging.
    • Example configuration:

    sudo vi /etc/php/7.0/cli/conf.d/20-xdebug.ini

    And add:

    zend_extension=xdebug.so
    xdebug.remote_connect_back = 1
    xdebug.default_enable = 1
    xdebug.remote_autostart = 1
    xdebug.remote_enable = 1
    xdebug.remote_port = 9000
    xdebug.remote_handler = dbgp
    xdebug.max_nesting_level = 500
    xdebug.idekey = PHPSTORM
    xdebug.profiler_enable_trigger = 1

    Use the "Listen for PHP Debug connections" button in PhpStorm:

    • Set the remote debugging client on the command line:
    • Edit ~/.bashrc and add:
    # PHPstorm drush debug
    export XDEBUG_CONFIG="idekey=PHPSTORM"
    • Set the server configuration. Make sure the name you use matches the server name you configured in PhpStorm:
      • PHP_IDE_CONFIG = PHPSTORM
    • Run drush.

     

    The truth is I no longer remembered how to set up Xdebug for PhpStorm and Drush; thanks to Juanen (jansete on Drupal) I got it working again, and the least I can do is contribute it back :D

    keopx Tue, 20/06/2017 - 19:32 Category Drupal Drupal 8.x Drush Drupal Planet Tag Drush Drupal Drupal 8.x Drupal 7.x xdebug debug PhpStorm Add new comment
    Categories: FLOSS Project Planets

    Elevated Third: Elevated Third Ranks No. 1 Among Denver’s Best Places to Work

    Planet Drupal - Tue, 2017-06-20 12:20
    Elevated Third Ranks No. 1 Among Denver’s Best Places to Work Elevated Third Ranks No. 1 Among Denver’s Best Places to Work Nate Gengler Tue, 06/20/2017 - 10:20

    The Denver Business Journal’s annual “Best Places to Work” awards wrapped up with Elevated Third landing the top spot in the “Workplace Wellness” category for small companies. The category recognizes Denver employers with an outstanding commitment to employee well-being.

     As a business practice, committing to employee wellness means that everyone is operating at their highest capacity. When our minds are fresh to focus on the task at hand, we can crank out the best work possible.

     Striking the ideal work-life balance is central to our culture. Where some agencies expect employees to work nights and weekends at the drop of a hat, we are committed to respecting employees’ time beyond the office and staying true to a 40 hour work week.

     We believe that when employees feel valued beyond the output of their work, the workplace is a more positive and productive environment.

    Outside of the office, the Elevated Third team is covered with 3 weeks of Paid Time Off, a subsidized gym membership, a $1,500 Health Reimbursement Account (HRA), and an RTD ecopass. 

    In the office, we are surrounded by a work environment that stimulates creativity and keeps spirits high. Office dogs can be found roaming the hallways, the kitchen is stocked with goodies of a (mostly) healthy variety, and our location on the top floor of the Denver Masonic Building provides plenty of sunlight and the occasional summer breeze.

    We are incredibly proud to be recognized among Denver’s best places to work. Joining our fellow recipients, we believe this commitment to workplace wellness makes Denver a better place to live, work, and do business.

     

    Interested in joining the team? Have a look at our open positions

    Categories: FLOSS Project Planets

    Valuebound: How to hide Order ID from commerce checkout process in Drupal 8

    Planet Drupal - Tue, 2017-06-20 11:51

    In Drupal, we often come across situations where we want to hide certain URL parts from end users.

    To achieve this we often use Drupal modules like Pathauto to hide node IDs and taxonomy IDs in URLs, replacing them with some pattern (e.g. titles).

    The above cannot be achieved for the Drupal Commerce checkout flow (URLs), as modules like Pathauto do not support it. To achieve this in Drupal 7 we often used one of the following approaches:

    • Commerce Checkout Paths Module.

    • Combination of…

    Categories: FLOSS Project Planets

    I Fix Drupal: After Reverting a Feature Module Profile2 Field Values Are Deleted

    Planet Drupal - Tue, 2017-06-20 11:12
    I recently gave an outline of this problem over in the Drupal community here: https://www.drupal.org/node/1316874#comment-12136170. But I thought it would be interesting to make a more technical post on the subject, so here it is.
    Categories: FLOSS Project Planets

    DataSmith: Setting up BLT with Reservoir

    Planet Drupal - Tue, 2017-06-20 11:09
    Setting up BLT with Reservoir

    Yesterday, Acquia open sourced Reservoir, a new distribution designed for building headless Drupal instances.  The Reservoir team provided a composer project command for setting up a Reservoir instance easily, but it doesn't bundle a VM.  Fortunately, making BLT work with Reservoir isn't difficult.  There are, though, a few steps to be aware of.

    To get started, run the composer project to build a new BLT instance.

    composer create-project --no-interaction acquia/blt-project MY_PROJECT

    Once that completes, you need to add Reservoir and (optionally) remove the Lightning distro:

    composer require acquia/reservoir

    composer remove acquia/lightning

    Next, update the blt/project.yml file. The key changes you'll want to make here (beyond setting a new project prefix, etc.) are a) changing the distro from lightning to reservoir and b) removing views_ui from the modules:enable list for local environments.*  An excerpt of my git diff for this file looks like...

    profile:
    -    name: lightning
    +    name: reservoir
    local:
    -    enable: [dblog, devel, seckit, views_ui]
    +    enable: [dblog, devel, seckit]

    Once that's done, continue with the BLT setup process from Step 4 (assuming you want to use Drupal VM; Step 5 otherwise).

     

    * If you don't remove views_ui, the world won't explode or anything, but when you run blt setup you'll get errors reported like the ones below:

    blt > setup:toggle-modules:
        [drush] dblog is already enabled.                                                   [ok]
        [drush] The following extensions will be enabled: devel, seckit, views_ui, views
        [drush] Do you really want to continue? (y/n): y
        [drush] Argument 1 passed to                                                     [error]
        [drush] Drupal\Core\Config\Entity\ConfigEntityBase::calculatePluginDependencies()
        [drush] must implement interface
        [drush] Drupal\Component\Plugin\PluginInspectionInterface, null given, called
        [drush] in /var/www/mrpink/docroot/core/modules/views/src/Entity/View.php on
        [drush] line 281 and defined PluginDependencyTrait.php:29
        [drush] E_RECOVERABLE_ERROR encountered; aborting. To ignore recoverable         [error]
        [drush] errors, run again with --no-halt-on-error
        [drush] Drush command terminated abnormally due to an unrecoverable error.       [error]
    [phingcall] /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:370:8: /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:374:12: /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:377:69: Drush exited with code 255
    [phingcall] /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:350:45: Execution of the target buildfile failed. Aborting.

    BUILD FAILED
    /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/local-sync.xml:12:30: Execution of the target buildfile failed. Aborting.
    ; 2 minutes  37.24 seconds

     

    Barrett Tue, 06/20/2017 - 10:09 Tags Add new comment
    Categories: FLOSS Project Planets

    Drupal Association blog: Growing community in Moldova

    Planet Drupal - Tue, 2017-06-20 11:03

    This guest blog post is from the Drupal Moldova Association (not affiliated with the Drupal Association). Get a glimpse of what is happening in Moldova's community and how you can get involved.

    Drupal Moldova Association’s mission is to promote Drupal CMS and Open Source technologies in Moldova, and to grow and sustain the local community by organising Events, Camps, Schools, Drupal meetups and various Drupal and Open Source related trainings, and by establishing partnerships with Companies, the Government, and NGO’s.

    Come and share your expertise in Moldova at our events! We're looking for international speakers to speak about Drupal and open source.

    Among DMA’s (short for Drupal Moldova Association) numerous commitments, the following are of special importance:

    • to gather the community around Drupal and Open Source technologies;

    • to train students and professionals who want to learn and work with Drupal;

    • to organise events to keep the community engaged and motivated to improve, learn, and share experience;

    • to make sure Drupal is accessible to everyone by offering scholarships to those who can't afford our programs;

    • to elaborate a well defined program that helps students learn Drupal, acquire enough knowledge to get accepted for internships by IT companies, and be able to build Drupal powered websites;  

    • to assist new IT companies in establishing a local office, promote themselves, collaborate with other companies, and connect with the local Drupal community by giving them the opportunity to support our projects.

    Over the last 5 years, we have been dedicated to achieving our goals! DMA has organized over 20 projects and events, including Drupal Global Training Days, Drupal Schools, and the regional DrupalCamp -- Moldcamp. Our projects have gathered over 700 local and international participants and speakers, and more than 15 international companies have supported us during these years (FFW, Adyax, IP Group, Intellix, Endava and many others).

    Moldova is rich in great developers and people driven to take initiative, to grow, and to place the country on the world map. We aim to go beyond our limits and have a bigger impact in the coming year ('17-'18), so we have created a yearly plan that contains projects similar to those of past years, as well as new and exciting ones:

    • Drupal School (3-step program), starting with Drupal School 8 plus PHP (step 1): Drupal School is an educational program, split into 2 months and 25 courses of different levels (Beginner, Intermediate, Advanced). Drupal School aims to introduce people to Drupal 8 and PHP, and help them become Drupal professionals;

    • Moldcamp 2017: Sep - Oct 2017. A regional DrupalCamp that gathers around 150 Drupal professionals, enthusiasts, beginners and any-Drupal-related-folk in one place for knowledge-sharing, presentations, networking, etc. We will announce the event soon and allow speaker registration. Please follow us and don’t miss out on the opportunity;

    • Drupal Global Training Day: Dec 1-2. A one-day workshop that has the purpose of introducing people to Drupal, both code and community.

    • Drupal Meetups: These are organized each month and they allow our community to be active and share knowledge.

    • Tech Pizza: Jun, Aug, Oct, Dec. A bi-monthly event where the ICT community can gather in a casual, informal environment around pizza and soda and discuss the latest IT trends and news. The core of this event is a speaker / invitee from abroad with a domain of expertise;

    • Moldova Open Source Conference: March 2018. It is a regional conference for over 200 participants that aims to gather all the Open Source Communities (Wordpress, Laravel, Ruby on Rails, JavaScript, etc.) under one roof, where they will attend sessions that enhance the expertise of existing experts in various Open Source technologies and allow them to mix their technologies into new ideas.

    The proposed program “Drupal and Open Source in Moldova 2017 - 2018” is made possible through the support of USAID and the Swedish Government. Thanks to these organizations we can focus on the quality of our projects and make sure they happen as planned. Also, we have a very important partnership with Tekwill / Tekwill Academy, which helps us even more in our quests.

    We start with the School of Drupal 8 plus PHP program, which will be held starting on the 19th of June 2017. So far we have 3 sponsors--IPGroup, Adyax and Intellix--and two trainers.

    We, the DMA, believe in pushing the limits! Our long-term goal is to build and maintain a big and active Open Source community by attracting more local and international participants to our projects and events, and to continuously improve our sessions. This will make our presence felt in the global Drupal and Open Source communities and markets. Find us on Twitter @drupalmoldova, or on our Facebook page. If you are interested in speaking in Moldova, contact us at info@drupalmoldova.org.

    Categories: FLOSS Project Planets

    Acquia Developer Center Blog: Percona Live 2017 Blog Post: ProxySQL as a Failover Option for Drupal

    Planet Drupal - Tue, 2017-06-20 10:41

    One of the more interesting products to hit the spotlight at this year's Percona Live Open Source Database conference was ProxySQL.

    This open source MySQL proxy server has been around for a couple of years now and keeps adding more features. The current release (1.3.6) has the usual features that you would expect from a proxy server, like load balancing and failover support, but ProxySQL also has database specific features like a query cache and query routing.

    Tags: acquia drupal planet
    Categories: FLOSS Project Planets

    Ian Ozsvald: Kaggle&#8217;s Quora Question Pairs Competition

    Planet Python - Tue, 2017-06-20 10:14

    Kaggle‘s Quora Question Pairs competition has just closed, I’m pleased to say that with 10 days effort I ranked in the top 39th percentile (rank 1346 of 3396 in the private leaderboard). Having just run and spoken at PyDataLondon 2017, taught ML in Romania and worked on several client projects I only freed up time right at the end of this competition. Despite joining at the end I had immense fun – this was my first ‘proper’ Kaggle competition.

    I figured a short retrospective here might be a useful reminder to myself in the future. Things that worked well:

    • Use of github, Jupyter Notebooks, my research module template
    • Python 3.6, scikit-learn, pandas
    • RandomForests (some XGBoost but ultimately just RFs)
    • Dask (great for using all cores when feature engineering with Pandas apply)
    • Lots of text similarity measures, word2vec, some Part of Speech tagging
    • Some light text clean-up (punctuation, whitespace, some mixed case normalisation)
    • Spacy for PoS noun extraction, some NLTK
    • Splitting feature generation and ML exploitation into different Notebooks
    • Lots of visualisation of each distance measure by class (mainly matplotlib histograms on single features)
    • Fully reproducible Notebooks with fixed seeds
    • Debugging code to diagnose the most-wrong guesses from the model (pulling out features and the raw questions was often enough to get a feel for “what it missed” which lead to thoughts on new features that might help)

    Things that I didn’t get around to trying due to lack of time:

    • PoS named entities in Spacy, my own entity recogniser
    • GloVe, wordrank, fasttext
    • Clustering around topics
    • Text clean-up (synonyms, weights & measures normalisation)
    • Use of external corpus (e.g. Stackoverflow) for TF-IDF counts
    • Dask on EC2

    Things that didn’t work so well:

    • Fully reproducible Notebooks (great!) to generate features with no caching of no-need-to-rebuild-yet-again features, so I did a lot of recalculating features (which really hurt in the last 2 days) – possible solution below with named columns
    • Notebooks are still a PITA for debugging, attaching a console with --existing works ok until things start to crash and then it gets sticky
    • Running out of 32GB of RAM several times on my laptop and having a semi-broken system whilst trying to persist partial models to disk – I should have started with an AWS deployment earlier so I could easily turn on more cores+RAM as needed
    • I barely checked the Kaggle forums (only reading the Notebooks concerning the negative resampling requirement) so I missed a whole pile of tricks shared by others, some I folded in on the last day but there’s a huge pile that I missed – I think I might have snuck into the top 20% of rankings if I’d have used this public information
    • Calibrating RandomForests (I’m pretty convinced I did this correctly but it didn’t improve things, I’m not sure why)

    Dask definitely made parallelisation easier with only a few lines of overhead in a function beyond a normal call to apply. The caching, if using something like luigi, would add a lot of extra engineered overhead – not so useful in a rapidly iterating 10 day competition.
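    As an illustration of that Dask-over-apply pattern, a minimal sketch follows; the question pairs and the shared_word_count feature are invented for the example, not the competition's actual features.

    ```python
    # Sketch of parallelising a pandas `apply` with Dask, as described above.
    # The data and the shared_word_count feature are illustrative only.
    import pandas as pd
    import dask.dataframe as dd

    df = pd.DataFrame({
        "q1": ["how do i learn python", "what is dask"] * 100,
        "q2": ["learning python fast", "dask parallel pandas"] * 100,
    })

    def shared_word_count(row):
        # toy text-similarity feature: words common to both questions
        return len(set(row["q1"].split()) & set(row["q2"].split()))

    # Plain pandas would be: df.apply(shared_word_count, axis=1)
    ddf = dd.from_pandas(df, npartitions=4)  # partitions processed in parallel
    feature = ddf.apply(shared_word_count, axis=1,
                        meta=("shared_words", "int64")).compute()
    ```

    The only overhead beyond the normal apply is the from_pandas/compute pair and the meta hint telling Dask the output type.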

    I think next time I’ll try using version-named columns in my DataFrames. Rather than having e.g. “unigram_distance_raw_sentences” I might add “_v0”; if that calculation process is never updated then I can just use a pre-built version of the column. This is a poor-man’s caching strategy. If any dependencies existed then I guess luigi/airflow would be the next step. For now at least I think a version number will solve my most immediate time-sink of recent days.
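    The version-named column idea could be sketched as below; the feature store, feature name and build function are illustrative assumptions, not code from the competition.

    ```python
    # Sketch of version-named feature columns as a poor-man's cache; the
    # store, feature name and build function are illustrative assumptions.
    import pandas as pd

    FEATURE_STORE = {}  # stand-in for features persisted to disk between runs

    def get_feature(df, name, version, build_fn):
        """Rebuild a feature column only if this exact version isn't cached."""
        col = "{}_v{}".format(name, version)
        if col not in FEATURE_STORE:
            FEATURE_STORE[col] = build_fn(df)  # the expensive step, done once
        return FEATURE_STORE[col]

    df = pd.DataFrame({"q1": ["how do i learn python"],
                       "q2": ["learning python fast"]})

    def unigram_distance(frame):
        # words appearing in exactly one of the two questions
        return frame.apply(
            lambda r: len(set(r["q1"].split()) ^ set(r["q2"].split())), axis=1)

    # First call computes; repeat calls with the same version reuse the cache.
    # Bumping the version (v0 -> v1) forces a rebuild after a code change.
    f0 = get_feature(df, "unigram_distance_raw_sentences", 0, unigram_distance)
    f0_cached = get_feature(df, "unigram_distance_raw_sentences", 0, unigram_distance)
    ```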

    I hope to enter another competition soon. I’m also hoping to attend the London Kaggle meetup at some point to learn from others.

    Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.
    Categories: FLOSS Project Planets

    A tale of 2 curves

    Planet KDE - Tue, 2017-06-20 08:58

    As my first subject for this animation blog series, we will be taking a look at Animation curves.

    Curves, or better, easing curves, is one of the first concepts we are exposed to when dealing with the subject of animation in the QML space.

    What are they?

    Well, in simplistic terms, they are a description of an X position over a Time axis that starts at (0, 0) and ends at (1, 1). These curves are …

    The post A tale of 2 curves appeared first on KDAB.

    Categories: FLOSS Project Planets

    Amazee Labs: Submit your Site Building Session to DrupalCon Vienna

    Planet Drupal - Tue, 2017-06-20 08:19
    Submit your Site Building Session to DrupalCon Vienna

    DrupalCon Vienna will be taking place at the end of September this year. The site building track is about letting Drupal do the hard work without needing to write code. By assembling the right modules and configurations we can create rich and complex features, without worrying about reinventing the wheel or writing complex logic and code.

    Josef Dabernig Tue, 06/20/2017 - 14:19

    Sounds great, right? As excited as I am for helping to put together the program for the site building track, I would like to share a few session ideas, which might be worth submitting. If you have never submitted a session for DrupalCon, this might be a good opportunity to give it a try:  

    Showcases will let others learn from how you built your last exciting Drupal 8 project. Talking points can include which approaches you took, lessons you learnt from working on the project, and what fellow site builders should know when tackling similar problems.

    Module presentations are a great way to explain and highlight best practice solutions. How do you choose from the various competing site building tools available to address problems like layout management, workflows or content modelling? Are the same solutions from Drupal 7 still valid, or what are the latest experiences you've had whilst building Drupal 8 sites and how could this be further developed and enhanced in the future?

    Process descriptions are welcome to help us figure out how site building can best fill the gap between end users, content editors, developers, UX designers and anyone else involved in Drupal web projects. How do you involve your customers and explain site building to them? What does a developer need from a site builder and where do those practices blend? 

    Outside perspectives are also welcomed to learn how problems can be solved the site builder’s way in related web technologies.

    Together with Hernâni Borges de Freitas and Dustin Boeger, we are looking forward to reviewing your exciting and interesting applications. If you aren’t sure what to present, feel free to get in touch via the contact form on my Drupal.org profile or Twitter.

    Thanks for submitting your session by June 28, 23:59 CEST.

    Categories: FLOSS Project Planets

    ComputerMinds.co.uk: Help Drupal help your configuration

    Planet Drupal - Tue, 2017-06-20 08:00

    Define a schema for any bespoke configuration, it's not too hard. It's needed to make it translatable, but Drupal 8 will also validate your config against it so it's still handy on non-translatable sites. As a schema ensures your configuration is valid, your code, or Drupal itself, can trip up without one. Set up a schema and you avoid those problems, and get robust validation for free. Hopefully my example YAML shows how it can be quite simple to do.

    Categories: FLOSS Project Planets

    Frank Wierzbicki: Jython 2.7.1 release candidate 3 released!

    Planet Python - Tue, 2017-06-20 07:11
    On behalf of the Jython development team, I'm pleased to announce that the third release candidate of Jython 2.7.1 is available! This is a bugfix release. Bug fixes include improvements in ssl and pip support.

    Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

    This release is being hosted at maven central. There are three main distributions. In order of popularity:
    To see all of the files available including checksums, go to the maven query for org.python+Jython and navigate to the appropriate distribution and version.
    Categories: FLOSS Project Planets

    Cheppers blog: On Being Human at DrupalCon Vienna - Call for Papers closes in a week

    Planet Drupal - Tue, 2017-06-20 07:09

    Around two years ago, when the launch of Drupal 8 was just around the corner and the main topic of concern was the status of the issue queue, the Drupal community slowly started murmuring about a topic outside of technical solutions and patches. As a result, a brand new DrupalCon track was introduced - Being Human. Our COO, Zsófi is the Being Human local track chair at DrupalCon Vienna - this is her Call for Papers.

    Categories: FLOSS Project Planets

    Appnovation Technologies: PHP Speakers Wanted For DrupalCon Vienna 2017

    Planet Drupal - Tue, 2017-06-20 06:43
    PHP Speakers Wanted For DrupalCon Vienna 2017 On 28th June (23:59 Vienna local time (GMT +2)) session submissions will close for DrupalCon Vienna 2017 and we're looking for more great speakers. After volunteering on the Core Conversation track team last year, I am now helping the PHP track team find and select sessions for this year's European conference. As PHP the foundation for...
    Categories: FLOSS Project Planets

    PyCharm: Upgrade Your Testing with Behavior Driven Development

    Planet Python - Tue, 2017-06-20 06:10
    BDD? Why should I care?

    Back in the day, I used to write terrible code. I’m probably not the only one who started out writing terrible PHP scripts in high school that just got the job done. For me, the moment that I started to write better code was the moment that I discovered unit testing. Testing forced me to properly organize my code, and keep classes simple enough that testing them in isolation would be possible. Behavior Driven Development (BDD) testing has the potential to do the same at a program level rather than individual classes.

    A common problem when making software is that different people have different opinions on what the software should do, and these differences only become apparent when someone is disappointed in the end. This is why in large software projects it’s commonplace to spend quite a bit of effort in getting the requirements right.

    If you’re making a small personal project, BDD can help you by forcing you to write down what you want your program to do before you start programming. I speak from experience when I say this helps you to finish your projects. Furthermore, in contrast to regular unit testing you get separation of concerns between your test scenario and your test code. Which programmer doesn’t become excited about separation of concerns? This one sure does!

    The real value of BDD arises for those of you who do contract work for small businesses, and even those with small to medium-sized open source projects: wouldn’t it be useful to have a simple way to communicate exactly what the software should do to everyone involved?

    Okay, so how do I get better software?

    Behavior driven development is just that, development which is driven by the behavior you want from your code. Those of you who do agile probably know the “As a <user>, I want <behavior>, so that <benefit>” template; in BDD a similar template is proposed: “In order to <benefit>, as a <user>, I want <behavior>”. In this template the goal of your feature is emphasized, so let’s do some truth in advertising here:

    In order to show off PyCharm’s cool support for BDD

    As a Product Marketing Manager at JetBrains

    I want to make a reasonably complex example project that shows how BDD works

    This is still rather vague, so let’s come up with an actual example project:

    Feature: Simulate a basic car

    To show off how BDD works, let’s create a sample project which takes the classic OO car example, and supercharges it.

    The car should take into account: engine power, basic aerodynamics, rolling  resistance, grip, and brake force.

    To keep things somewhat simple, the engine will supply constant power (this is not realistic, as it results in infinite Torque at zero RPM)

    A key element of BDD is using examples to illustrate the features, so let’s write an example:

     

    Scenario: The car should be able to brake

    The UK highway code says that worst case scenario we need to stop from 60mph (27 m/s) in 73m

    Given that the car is moving at 27 m/s

    When I brake at 100% force

    And 10 seconds pass

    Then I should have traveled less than 73 meters

     

    By writing this example, it becomes clear that our code will need to be aware of time passing, keep track of the car’s speed, distance traveled, and the amount of brakes that is applied at a given point in time.

    If you have complex examples, or want to check a couple of similar examples, you can use ASCII tables in your feature file to do this. To keep this blog post to a reasonable length I won’t discuss those, but you can check the code on GitHub to see an example, or read more in the behave docs.

    Feature files and steps

    In BDD, you first write a feature file which describes the feature, with examples that outline how the feature is supposed to behave in certain cases. Using BDD tools, you should then be able to test the scenarios in these feature files automatically.

    To make the scenarios testable, they need to be structured in a specific way:

    Given a precondition

    When an action

    Then a postcondition

    So let’s take our feature from above, add some scenarios, and create a feature file which we will put in a ‘features’ directory in our project. As always, you can follow along with the code on GitHub.

    Most BDD tools also support starting a sentence in the scenario with ‘And’ which will behave the same way as the sentence before it, so ‘Given, And’ will behave just like ‘Given, Given’.

    Next you define steps in code, which execute the test. For this, we need a BDD tool. In Python a good choice of tool is behave. An important note here: the newest version of Behave at the time of writing (Behave 1.2.5) is not compatible with Python 3.6, so please use Python 3.5!

    If at this point we run behave, it will detect our feature and scenarios, but tell us that all of our steps are still undefined. So let’s have a look at how we can make the scenario testable. Let’s implement the “car should be able to brake” scenario:

    @given("that the car is moving at (?P<speed>\d+) m/s")
    def step_impl(context, speed):
        context.car.speed = float(speed)

    @when("I brake at (?P<brake_force>\d+)% force")
    def step_impl(context, brake_force):
        context.car.set_brake(brake_force)

    @step("(?P<seconds>\d+) seconds? pass(?:es)?")
    def step_impl(context, seconds):
        context.car.simulate(seconds)

    @then("I should have traveled less than (?P<distance>\d+) meters")
    def step_impl(context, distance):
        assert_that(context.car.odometer, less_than(float(distance)))

    This code goes into a file in the /features/steps folder. By writing the test code first you’re forced to think about what you would like your eventual application code to look like. Also note that we’re using PyHamcrest (a library which provides better matchers between expected and actual values) here to define the assertions, as they give us a lot more helpful error messages than regular assertions when they fail.

    Running Behave Tests

    To run our Behave tests in PyCharm, we need to add a Behave run configuration. To do this, just add a run configuration like any other, but select Behave:

     

    You don’t need to configure anything else. If you run behave without specifying anything, Behave will execute all the feature files in your project. So let’s run it:

    We can see that our feature is tested, using all of the scenarios that we’ve defined for our feature. As we haven’t written any code yet, all tests fail, and everything is red. So let’s write some code and see what it looks like after we finish:

    Much better, right? Now let’s say we made a mistake: if we forgot the deltaT term when adding the acceleration to the car’s speed, we would see the acceleration tests fail. Of course PyCharm makes it easy for us to put a breakpoint in the code, and then debug our test:

    To Conclude

    BDD is a tool which has the potential to make your software better. This blog post was a very short introduction, and I hope it’s been enough to get some of you interested in giving it a shot! Please let me know in the comments if you’d like to read more about BDD in the future.

    Categories: FLOSS Project Planets

    Foteini Tsiami: Internationalization, part one

    Planet Debian - Tue, 2017-06-20 06:00

    The first part of internationalizing a Greek application, is, of course, translating all the Greek text to English. I already knew how to open a user interface (.ui) file with Glade and how to translate/save it from there, and mail the result to the developers.

    If only it was that simple! I learned that the code of most open source software is kept on version control systems, which fortunately are a bit similar to Wikis, which I was familiar with, so I didn’t have a lot of trouble understanding the concepts. Thanks to a very brief git crash course from my mentors, I was able to quickly start translating, committing, and even pushing back the updated files.

    The other tricky part was internationalizing the Python source code. There, Glade couldn’t be used; a text editor like Pluma was needed instead. And the messages were part of the source code, so I had to be extra careful not to break the syntax. The English text then needed to be wrapped in _(), which makes the gettext call that dynamically translates the messages into the user’s language.
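    For illustration, wiring up that _() name might look like this (the "myapp" domain and "locale" directory are assumptions):

    ```python
    import gettext

    # Bind _() to a translation catalog; "myapp" and "locale/" are placeholder names.
    # fallback=True returns the original English text when no catalog is installed.
    t = gettext.translation("myapp", localedir="locale", fallback=True)
    _ = t.gettext

    print(_("Open file"))  # translated at runtime if a catalog for the user's language exists
    ```
    
    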

    All this was very educative, but now that the first part of the internationalization, i.e. the Greek-to-English translations, are over, I think I’ll take some time to read more about the tools that I used!


    Categories: FLOSS Project Planets

    Agiledrop.com Blog: AGILEDROP: DrupalCon sessions about DevOps

    Planet Drupal - Tue, 2017-06-20 04:55
    Last time, we gathered together DrupalCon Baltimore sessions about Front End. Before that, we explored the areas of Site Building, Drupal Showcase, Coding and Development, Project Management and Case Studies. And that was not our last stop. This time, we looked at sessions that were presented in the area of DevOps. 100% Observability by Jason Yee from Datadog In this session, the author broke down the expansive monitoring landscape into 5 categories and provided a framework to help users ensure full coverage. He also touched on why these categories are important to users’ businesses and shared the… READ MORE
    Categories: FLOSS Project Planets

    S. Lott: NMEA Data Acquisition -- An IoT Exercise with Python

    Planet Python - Tue, 2017-06-20 04:00
    Here's the code: https://github.com/slott56/NMEA-Tools. This is Python code to do some Internet of Things (IoT) stuff. Oddly, even when things are connected by a point-to-point serial interface, it's still often called IoT, even though there's no "Internetworking."

    Some IoT projects have a common arc: exploration, modeling, filtering, and persistence. This is followed by the rework to revise the data models and expand the user stories. And then there's the rework conundrum. Stick with me to see just how hard rework can be.

    What's this about? First some background. Then I'll show some code.

    Part of the back story is here: http://www.itmaybeahack.com/TeamRedCruising/travel-2017-2018/that-leaky-hatch--chartplot.html

    In the Internet of Things Boaty (IoT-B) there are devices called chart-plotters. They include GPS receivers, displays, and controls. And algorithms. Most important is the merging of GPS coordinates and an active display. You see where your boat is.

    Folks with GPS units in cars and on their phones have an idea of the core feature set of a chart plotter. But the value of a chart plotter on a boat is orders of magnitude above the value in a car.

    At sea, the hugeness and importance of the chartplotter is magnified. The surface of a large body of water is (almost) trackless. Unless you're really familiar with it, it's just water, generally opaque. The depths can vary dramatically. A shoal too shallow for your boat can be more-or-less invisible and just ahead. Bang. You're aground (or worse, holed.)

    A chart -- and knowledge of your position on that chart -- is a very big deal. Once you sail out of sight of land, the chart plotter becomes a life-or-death necessity. While I can find the North American continent using only a compass, I'm not sure I could find the entrance to Chesapeake Bay without knowing my latitude. (Yes, I have a sextant. Would I trust my life to my sextant skills?)

    Modern equipment uses modern hardware and protocols. N2K (NMEA 2000), for example, is a powered CAN-bus network that uses a simplified backbone with drops for the various devices. Because every device is a peer on the shared bus, interconnection is simplified. See http://www.digitalboater.com for some background.
    The Interface Issue

    The particularly gnarly problem with chart plotters is the lack of an easy-to-live-with interface.

    They're designed to be really super robust, turn-it-on-and-it-works products. Similar to a toaster, in many respects. Plug and play. No configuration required.

    This is a two-edged sword. No configuration required bleeds into no configuration possible.

    The Standard Horizon CP300i uses NT cards. Here's a reader device. Note the "No Longer Available" admonition. All of my important data is saved to the NT card. But. The card is useless except for removable media backup in case the unit dies.

    What's left? The NMEA-0183 interface wiring.
    NMEA Serial EIA-422

    The good news is that the NMEA wiring is carefully documented in the CP300i owner's manual. There are products like this NMEA-USB Adaptor. A few wire interconnections and we can -- at least in principle -- listen to this device.

    The NMEA standard was defined to allow numerous kinds of devices to work together. When it was adopted (in 1983), the idea was that a device would be a "talker" and other devices would be "listeners." The intent was to have a lot of point-to-point conversations: one talker, many listeners.

    A digital depth meter or wind meter, for example, could talk all day, pushing out message traffic with depth or wind information. A display would be a listener and display the current depth or wind state.

    A centralized multiplexer could collect from multiple talkers and then stream the interleaved messages as a single talker. Here's an example. This would allow many sensors to be combined onto a single wire. A number of display devices could listen to traffic on the wire, pick out messages that made sense to them, and display the details.

    Ideally, if every talker was polite about their time budget, hardly anything would get lost.

    In the concrete case of the CP300i, there are five ports, usable in various combinations. There are some restrictions that seem to indicate some hardware sharing among the ports. The product literature describes a number of use cases for different kinds of interconnections, including a computer connection.

    Since NMEA is EIA-422, which is close kin to RS-232, some old computer serial ports could be wired up directly. My boat originally had an ancient Garmin GPS and an ancient Windows laptop using an ancient DB-9 serial connector. I saved the data by copying files off the hard drive and threw the hardware away.

    A modern Macintosh, however, only handles USB. Not direct EIA-422 serial connections. An adaptor is required.

    What we will have, then, is a CP300i in talker mode, and a MacBook Pro (Retina, 13-inch, Late 2013) as listener.
    Drivers and Infrastructure

    This is not my first foray into the IoT-B world. I have a BU-353 GPS antenna. This can be used directly by the GPSNavX application on the Macintosh. On the right-ish side of the BU-353 page are Downloads. There's a USB driver listed here. And a GPS Utility to show position and satellites and the NMEA data stream.

    Step 1. Install this USB driver.

    Step 2. Install the MAC OS X GPS Utility. I know the USB interface works because I can see the BU-353 device using this utility.

    Step 3. Confirm with GPSNavX. Yes. The chart shows the little boat triangle about where I expect to be.

    Yay! Phase I of the IoT-B is complete. We have a USB interface. And we can see an NMEA-0183 GPS antenna. It's transmitting in standard 4800 BAUD mode. This is the biggest hurdle in many projects: getting stuff to talk.

    In the project Background section on Git Hub, there's a wiring diagram for the USB to NMEA interface.

    Also, the Installation section says install pyserial. https://pypi.python.org/pypi/pyserial. This is essential to let Python apps interact with the USB driver.
    Data Exploration

    Start here: NMEA Reference Manual. This covers the bases for the essential message traffic nicely. The full NMEA standard has lots of message types. We only care about a few of them. We can safely ignore the others.

    As noted in the project documentation, there's a relatively simple message structure. The messages arrive more-or-less constantly. This leads to an elegant Pythonic design: an Iterator.

    We can define a class which implements the iterator protocol (__iter__() and __next__()) that will consume lines from the serial interface and emit the messages which are complete and have a proper checksum. Since the fields of a message are comma-delimited, might as well split into fields, also.

    It's handy to combine this with the context manager protocol (__enter__() and __exit__()) to create a class that can be used like this.

    with Scanner(device) as GPS:
        for sentence_fields in GPS:
            print(sentence_fields)

    This is handy for watching the messages fly past. The fields are kind of compressed. It's a light-weight compression, more like a lack of useful punctuation than proper compression.

    Consequently, we'll need to derive fields from the raw sequences of bytes. This initial exploration leads straight to the next phase of the project.
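    A minimal sketch of such a scanner (reading from any iterable of byte lines rather than the pyserial port itself, an assumption that keeps the example self-contained):

    ```python
    class Scanner:
        """Yield the fields of valid NMEA sentences from a line-oriented byte source.

        A sketch: a real version would open the pyserial device in __enter__;
        here any iterable of byte lines works, which also makes it easy to test.
        """
        def __init__(self, source):
            self.source = source

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            return False

        def __iter__(self):
            for line in self.source:
                line = line.strip()
                if not line.startswith(b"$") or b"*" not in line:
                    continue  # fragment or non-NMEA noise
                body, _, check = line[1:].partition(b"*")
                checksum = 0
                for byte in body:
                    checksum ^= byte  # NMEA checksum: XOR of the bytes between $ and *
                if format(checksum, "02X").encode("ascii") != check[:2].upper():
                    continue  # bad checksum: drop the sentence
                yield body.split(b",")  # comma-delimited fields, still as bytes
    ```
    
    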
    Modeling

    We can define a data model for these sentences using a Sentence class hierarchy. We can use a simple Factory function to emit Sentence objects of the appropriate subclass given a sequence of fields in bytes. Each subclass can derive data from the message.

    The atomic fields seem to be of seven different types.
    • Text. This is a simple decode using ASCII encoding.
    • Latitude. The values are in degrees and float minutes.
    • Longitude. Similar to latitude.
    • UTC date. Year, month, and day as a triple.
    • UTC time. Hour, minute, float seconds as a triple.
    • float. 
    • int.
    Because fields are optional, we can't naively use the built-in float() and int() functions to convert bytes to numbers. We'll have to have a version that works politely with zero-length strings and creates None objects.
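    Sketches of such polite conversions (the ddmm.mmmm layout is standard NMEA; returning a (degrees, minutes) 2-tuple matches the lat_src behavior described below):

    ```python
    def text(b):
        # Simple ASCII decode; a zero-length field means "no data".
        return b.decode("ascii") if b else None

    def opt_float(b):
        # Polite version of float(): empty bytes become None instead of raising.
        return float(b) if b else None

    def opt_int(b):
        return int(b) if b else None

    def lat(b):
        # NMEA latitude is ddmm.mmmm: degrees, then float minutes.
        # The N/S hemisphere indicator travels in a separate field.
        if not b:
            return None
        value = float(b)
        degrees = int(value // 100)
        minutes = value - degrees * 100
        return degrees, minutes
    ```
    
    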

    We can define a simple field definition tuple, Field = namedtuple('Field', ['title', 'name', 'conversion']). This slightly simplifies definition of a class.

    We can define a class with a simple list of field conversion rules.

    class GPWPL(Sentence):
        fields = [
            Field('Latitude', 'lat_src', lat),
            Field('N/S Indicator', 'lat_h', text),
            Field('Longitude', 'lon_src', lon),
            Field('E/W Indicator', 'lon_h', text),
            Field("Name", "name", text),
        ]

    The superclass __init__() uses the sequence of field definitions to apply conversion functions (lat(), lon(), text()) to the bytes, populating a bunch of attributes. We can then use s.lat_src to see the original latitude 2-tuple from the message. A property can deduce the actual latitude from the s.lat_src and s.lat_h fields.

    For each field, apply the function to the value, and set this as an attribute.

    for field, arg in zip(self.fields, args[1:]):
        try:
            setattr(self, field.name, field.conversion(arg))
        except ValueError as e:
            self.log.error(f"{e} {field.title} {field.name} {field.conversion} {arg}")

    This sets attributes with useful values derived from the bytes provided in the arguments.

    The factory leverages a cool name-to-class mapping built by introspection.

    sentence_class_map = {
        class_.__name__.encode('ascii'): class_
        for class_ in Sentence.__subclasses__()
    }

    class_ = self.sentence_class_map.get(args[0])

    This lets us map a sentence header (b"GPRTE") to a class (GPRTE) simply. The get() method can use an UnknownSentence subclass as a default.
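    Put together, the factory might be sketched like this (the subclass bodies are stubbed out; only the introspection mechanism is the point here):

    ```python
    class Sentence:
        """Base class; real subclasses would carry field conversion rules."""
        def __init__(self, *args):
            self.args = args

    class GPWPL(Sentence):
        pass

    class GPRTE(Sentence):
        pass

    class UnknownSentence(Sentence):
        pass

    # Build the name-to-class map once by introspecting Sentence's subclasses.
    sentence_class_map = {
        class_.__name__.encode('ascii'): class_
        for class_ in Sentence.__subclasses__()
    }

    def sentence_factory(args):
        # args[0] is the raw sentence header, e.g. b"GPRTE"
        class_ = sentence_class_map.get(args[0], UnknownSentence)
        return class_(*args)
    ```
    
    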
    Modeling Alternatives

    As we move forward, we'll want to change this model. We could use a cooler class definition style, something like this. We could then iterate over the keys in the class __dict__ to set the attribute values.

    class GPXXX(Sentence):
        lat_src = Latitude(1)
        lat_h = Text(2)
        lon_src = Longitude(3)
        lon_h = Text(4)
        name = Text(5)

    The field numbers are provided to be sure the right bunch of bytes are decoded.

    Or maybe even something like this:

    class GPXXX(Sentence):
        latitude = Latitude(1, 2)
        longitude = Longitude(3, 4)
        name = Text(5)

    This would combine source fields to create the useful value. It would be pretty slick. But it requires being *sure* of what a sentence's content is. When exploring, this isn't the way to start. The simplistic list of field definitions comes right off web sites without too much intermediate translation that can lead to confusion.

    The idea is to borrow the format from the SiRF reference and start with Name, Example, Unit, and Description in each Field definition. That can help provide super-clear documentation when exploring. The http://aprs.gids.nl/nmea/ information has similar tables with examples. Some of the http://freenmea.net/docs examples only have names.

    The most exhaustive seems to be http://www.catb.org/gpsd/NMEA.html. This, also, only has field names and position numbers. The conversions are usually pretty obvious.
    Filtering

    A talker -- well -- talks. More or less constantly. There are delays to allow time to listen and time for multiplexers to merge in other talker streams.

    There's a cycle of messages that a device will emit. Once you've started decoding the sentences, the loop is obvious.

    For an application where you're gathering real-time track or performance data, of course, you'll want to capture the background loop. It's a lot of data. At about 80 bytes times 8 background messages on a 2-second cycle, you'll see 320 bytes per second, 19K per minute, 1.1M per hour, 27.6M per day. You can record everything for 38 days and stay under a Gb.

    The upper bound for 4800 BAUD is 480 bytes per second. 41M per day. 25 days to record a Gb of raw data.
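    The arithmetic above, as a quick sanity check:

    ```python
    # Background loop: about 8 messages of ~80 bytes every 2 seconds
    rate = 80 * 8 / 2               # 320 bytes per second
    per_minute = rate * 60          # 19,200 bytes per minute
    per_day = rate * 60 * 60 * 24   # 27,648,000 bytes (~27.6 MB) per day
    days_per_gib = 2**30 / per_day  # roughly 38 days to fill a gibibyte

    # Ceiling imposed by the serial link itself: 4800 BAUD is ~480 bytes/second
    max_per_day = 480 * 60 * 60 * 24        # ~41.5 MB per day
    days_per_gib_max = 2**30 / max_per_day  # about 25 days at full line rate
    ```
    
    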

    For my application, however, I want to capture the data that's not in the background loop.

    It works like this.
    1. I start the laptop collecting data.
    2. I reach over to the chartplotter and push a bunch of buttons to get to a waypoint transfer or a route transfer.
    3. The laptop shows the data arriving. The chartplotter claims it's done sending.
    4. I stop collecting data. In the stream of data are my waypoints or routes. Yay!
    A reject filter is an easy thing: Essentially it's filter(lambda s: s._name not in reject_set, source). A simple set of names to reject is the required configuration for this filter.
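    A sketch of that filter (the sentence names in the reject set are assumptions about what one chartplotter's background loop emits):

    ```python
    class Sentence:
        """Stand-in for the decoded sentence objects described above."""
        def __init__(self, name):
            self._name = name

    # Hypothetical background-loop sentence names; the real set is whatever
    # the device repeats constantly, discovered during exploration.
    reject_set = {"GPGGA", "GPGSA", "GPGSV", "GPRMC"}

    source = [Sentence("GPGGA"), Sentence("GPWPL"), Sentence("GPRTE")]
    kept = list(filter(lambda s: s._name not in reject_set, source))
    ```
    
    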
    Persistence

    How do we save these messages?

    We have several choices.
    1. Stream of Bytes. The protocol uses \r\n as line endings. We could (in principle) cat /dev/cu.usbserial-A6009TFG >capture.nmea. Pragmatically, that doesn't always work because the 4800 BAUD setting is hard to implement. But the core idea of "simply save the bytes" works.
    2. Stream of Serialized Objects. 
  1. We can use YAML to cough out the objects. If the derived attributes were all properties, it would have worked out really well. Since, however, we leverage __init__() to set attributes, this becomes awkward.
      2. We can work around the derived value problems by using JSON with our own Encoder to exclude the derived fields. This is a bit more complex, than it needs to be. It permits exploration though.
    3. GPX, KML, or CSV. Possible, but. These seems to be a separate problem.
    When transforming data, it's essential to avoid "point-to-point" transformation among formats. It's crucial to have a canonical representation and individual converters. In this case, we have NMEA to canonical, persist the canonical, and canonical to GPX (or KML, or CSV).

    Rework

    Yes. There's a problem here. Actually there are several problems.
    1. I got the data I wanted. So, fixing the design flaws isn't essential anymore. I may, but... I should have used descriptors.
    2. In the long run, I really need a three-way synchronization process between computer, new chart plotter and legacy chart plotter. 
    Let's start with the first design issue: lazy data processing.
    The core Field/Sentence design should have looked like this:
    class Field:
        def __init__(self, position, function, description):
            self.position = position
            self.function = function
            self.description = description

        def __get__(self, object, class_):
            transform = self.function
            return transform(object.args[self.position])


    class Sentence:
        f0 = Field(0, int, "Item Zero")
        f1 = Field(1, float, "Item One")

        def __init__(self, *args):
            self.args = args

    This makes all of the properties into lazy computations. It simplifies persistence because the only real attribute value is the tuple of arguments captured from the device.

    >>> s = Sentence(b'1', b'2.3')
    >>> s.f0
    1
    >>> s.f1
    2.3
    That would have been a nicer design because serialization would have been trivial. Repeated access to the fields might have become costly. We have a tradeoff issue here that depends on the ultimate use case. For early IoT efforts, flexibility is central, and the computation costs don't matter. At some point, there may be a focus on performance, where extra code to save time has merit.
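    If repeated access ever did become the bottleneck, a non-data descriptor can cache its result in the instance dictionary on first access. A sketch of that tradeoff (the class names here are illustrative, not from the project):

    ```python
    class CachedField:
        """Non-data descriptor: computes the field lazily, then caches it."""
        def __init__(self, position, function):
            self.position = position
            self.function = function

        def __set_name__(self, owner, name):
            self.name = name

        def __get__(self, instance, owner):
            if instance is None:
                return self
            value = self.function(instance.args[self.position])
            # With no __set__, the instance attribute shadows this descriptor,
            # so the conversion runs only on the first access.
            instance.__dict__[self.name] = value
            return value


    class LazySentence:
        f0 = CachedField(0, int)
        f1 = CachedField(1, float)

        def __init__(self, *args):
            self.args = args
    ```
    
    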

    Synchronization is much more difficult. I need to pick a canonical representation. Everything gets converted to a canonical form. Differences are identified. Then updates are created: either GPX files for the devices that handle that, or NMEA traffic for the device which updated over the wire.
    Conclusion

    This IoT project followed a common arc: explore the data, define a model, figure out how to filter out noise, figure out how to persist the data. Once we have some data, we realize the errors we made in our model.

    A huge problem is the pressure to ship an MVP (Minimum Viable Product). It takes a few days to build this. It's shippable.

    Now, we need to rework it. In this case, throw most of the first release away. Who has the stomach for this? It's essential, but it's also very difficult.

    A lot of good ideas from this blog post are not in the code. And this is the way a lot of commercial software happens: MVP and move forward with no time for rework.
    Categories: FLOSS Project Planets

    Rich Bowen: Software Morghulis

    Planet Apache - Tue, 2017-06-20 03:36

    In George R R Martin’s books “A Song of Ice and Fire” (which you may know by the name “A Game of Thrones”), the people of Braavos
    have a saying – “Valar Morghulis” – which means “All men must die.” As you follow the story, you quickly realize that this statement is not made in a morbid, or defeatist sense, but reflects on what we must do while alive so that the death, while inevitable, isn’t meaningless. Thus, the traditional response is “Valar Dohaeris” – all men must serve – to give meaning to their life.

    So it is with software. All software must die. And this should be viewed as a natural part of the life cycle of software development, not as a blight, or something to be embarrassed about.

    Software is about solving problems – whether that problem is calculating launch trajectories, optimizing your financial investments, or entertaining your kids. And problems evolve over time. In the short term, this leads to the evolution of the software solving them. Eventually, however, it may lead to the death of the software. It’s important what you choose to do next.

    You win, or you die

    One of the often-cited advantages of open source is that anybody can pick up a project and carry it forward, even if the original developers have given up on it. While this is, of course, true, the reality is more complicated.

    As we say at the Apache Software Foundation, “Community > Code”. Which is to say, software is more than just lines of source code in a text file. It’s a community of users, and a community of developers. It’s documentation, tutorial videos, and local meetups. It’s conferences, business deals and interpersonal relationships. And it’s real people solving real-world problems, while trying to beat deadlines and get home to their families.

    So, yes, you can pick up the source code, and you can make your changes and solve your own problems – scratch your itch, as the saying goes. But a software project, as a whole, cannot necessarily be kept on life support just because someone publishes the code publicly. One must also plan for the support of the ecosystem that grows up around any successful software project.

    Eric Raymond just recently released the source code for the 1970s
    computer game Colossal Cave Adventure on Github. This is cool, for us greybeard geeks, and also for computer historians. It remains to be seen whether the software actually becomes an active open source project, or if it has merely moved to its final resting place.

    The problem that the software solved – people want to be entertained – still exists, but that problem has greatly evolved over the years, as new and different games have emerged, and our expectations of computer games have radically changed. The software itself is still an enjoyable game, and has a huge nostalgia factor for those of us who played it on greenscreens all those years ago. But it doesn’t measure up to the alternatives that are now available.

    Software Morghulis. Not because it’s awful, but because its time has
    passed.

    Winter is coming

    The words of the house of Stark in “A Song of Ice and Fire” are “Winter is coming.” As with “Valar Morghulis,” this is about planning ahead for the inevitable, and not being caught surprised and unprepared.

    How we plan for our own death, with insurance, wills, and data backups, isn’t morbid or defeatist. Rather, it is looking out for those that will survive us. We try to ensure continuity of those things which are possible, and closure for those things which are not.

    Similarly, planning ahead for the inevitable death of a project isn’t defeatist. Rather, it shows concern for the community. When a software project winds down, there will often be a number of people who will continue to use it. This may be because they have built a business around it. It may be because it perfectly solves their particular problem. And it may be that they simply can’t afford the time, or cost, of migrating to something else.

    How we plan for the death of the project prioritizes the needs of this community, rather than focusing merely on the fact that we, the developers, are no longer interested in working on it, and have moved on to something else.

    At Apache, we have established the Attic as a place for software projects to come to rest once the developer community has dwindled. While the project itself may reach a point where they can no longer adequately shepherd the project, the Foundation as a whole still has a responsibility to the users, companies, and customers, who rely on the software itself.

    The Apache Attic provides a place for the code, downloadable releases, documentation, and archived mailing lists, for projects that are no longer actively developed.

    In some cases, these projects are picked up and rejuvenated by a new community of developers and users. However, this is uncommon, since there’s usually a very good reason that a project has ceased operation. In many cases, it’s because a newer, better solution has been developed for the problem that the project solved. And in many cases, it’s because, with the evolution of technology, the problem is no longer important to a large enough audience.

    However, if you do rely on a particular piece of software, you can rely on it always being available there.

    The Attic does not provide ongoing bug fixes or make additional releases. Nor does it make any attempt to restart communities. It is
    merely there, like your grandmother’s attic, to provide long-term storage. And, occasionally, you’ll find something useful and reusable as you’re looking through what’s in there.

    Software Dohaeris

    The Apache Software Foundation exists to provide software for the public good. That’s our stated mission. And so we must always be looking out for that public good. One critical aspect of that is ensuring that software projects are able to provide adequate oversight, and continuing support.

    One measure of this is that there are always (at least) three members of the Project Management Committee (PMC) who can review commits, approve releases, and ensure timely security fixes. And when that’s no longer the case, we must take action, so that the community depending on the code has clear and correct expectations of what they’re downloading.

    In the end, software is a tool to accomplish a task. All software must serve. When it no longer serves, it must die.

    Categories: FLOSS Project Planets