FLOSS Project Planets
As we wrote previously:
The Libre Tea Computer Card is built with an Allwinner A20 dual core processor configured to use the main CPU for graphics; it has 2 GB of RAM and 8 GB of NAND Flash; and it will come pre-installed with Parabola GNU/Linux-libre, an FSF-endorsed fully-free operating system.
It's a device with a lot of potential, and purchasing one will help support the development of even more hardware that respects users' freedom. While we will have to make a final evaluation before granting Respects Your Freedom certification, we have high hopes given the history of the developers involved and the details currently available.
When we last wrote about the project, there was an outpouring of support, helping it get significantly closer to its funding goal (at the time of this writing, 85% there). But with just days left, we need to make one final push. Can you be the one to put the project over the top by backing the Libre Tea Computer Card? The final deadline is Friday, August 26th, so now is the time to act if you want to help promote the creation of devices whose software is fully under your control.
The TWG coding standards committee is announcing two coding standards changes for final discussion. These appear to have reached a point close enough to consensus to move to final ratification. The new process for proposing and ratifying changes is documented on the coding standards project page.
Official coding standards updates now ratified:
- Prefer != to <> for NOT EQUALS
- Stop disallowing camelCase for local variables / parameters
- Should we require a blank line after <?php?
Issues awaiting core approval:
- [policy] Define coding standards for anonymous functions (closures)
- Add type hinting to function declaration coding standards - sidelined on discussions around how to handle versioned coding standards (for which there is a separate issue)
Issues that just need a little TLC (you can help!):
- [Policy, no patch] PHP 5.4 short array syntax coding standards - we just need some specific proposed language and this will be ratified
- [policy, no patch] Standardize indenting on chained method calls is blocked on the related coder rule issue
- [Policy, no patch] Delete permission to pad spacing in a block of related assignments needs more support - do you want this change?
These proposals will be re-evaluated during the next coding standards meeting currently scheduled for August 30th. At that point the discussion may be extended, or if clear consensus has been reached one or more policies may be dismissed or ratified and moved to the next step in the process.
For anyone who's ever looked up a definition of a Drupal term and been left wondering what it all means, here are some practical, real-world explanations you can use to navigate the Drupalverse. Watch this space and use comments to send us your feedback and requests.

The Discipline of Dev Ops
Dev Ops, short for development and operations, is the intersection between IT-managed hosting support and development. While it is a specialization in many organizations, senior developers, tech leads, and architects should be conversant in the various systems and tools used by their IT team or provider.
One of the primary goals of Dev Ops is to create standardized operating system iterations that are consistently reliable and easily replicable. Your unique infrastructure or hosting service plays a big role in these systems, which is why they tend to be customized to each project.
Standardized Dev Ops systems are used to create local and remote development environments, as well as staging and production environments, which all function in the same way. Having good Dev Ops systems in place means that your organization can support continuous development practices like version control and automated testing.
For any site that’s even moderately complex, having Dev Ops standards is huge. You don’t have to try to become a Dev Ops genius yourself: instead, you can find an organization like FFW to provide the level of Dev Ops help and support that is appropriate for the size and scope of your project.

Defining a Display
Displays, unlike Dev Ops, are a little simpler. A Display in Drupal typically refers to how queried data is organized and shown to visitors. It is usually used in connection with a native database query referred to as a View.
One View (or database query) can have several Displays sorted in different ways. For instance, a set of queried data can be output in the following ways:
- as a sortable table
- as a grid
- as consecutive field sets
- in a rotating banner
- as a calendar or list of coming events
- as points on a map
… and these are just a few examples of the many different kinds of Displays.

The Details Around Distributions
A Distribution is a pre-developed assembly of database data, code, and files. Distributions commonly include saved content, configuration settings, Drupal core, contributed and custom modules, libraries, and a custom theme. It’s basically a pre-built Drupal site.
Most people first become acquainted with Distributions as different iterations of Drupal that are built for specific use cases or verticals, such as e-commerce or publishing. Many distributions are robust, production-ready applications that will save you tremendous work. They let you take advantage of the distribution sponsor’s subject matter expertise.
There are other kinds of distributions, such as ones developed mainly for marketing purposes to showcase what Drupal can do and how Drupal can be used. Both of these types of distributions have value, but it is important to differentiate between the two.
Distributions can be vetted in much the same way that a Drupal module or theme can be vetted. When evaluating a Distribution, I always like to ask the following questions:
- Who are the contributors?
- What is their experience?
- Is the project actively maintained and are new features or versions planned?
The other primary consideration when vetting a Distribution is how much complexity and effort is required to ‘unravel’ a distribution. Many organizations have found that the more fully realized distributions are difficult to customize around their specific workflows and therefore are more expensive to change than starting fresh with a more basic version of Drupal.
In recent weeks we've been making several small changes to Drupal.org: precursors to bigger things to come. First, we moved the user activity links to a user menu in the header. Next, we're moving the search function from the header to the top navigation. These changes aren't just to recover precious pixels so you can better enjoy those extra long issue summaries—these are the first step towards a new front page on Drupal.org.
As the Drupal 8 life-cycle has moved from development, to release, to adoption, we have adapted Drupal.org to support the needs of the project in the moment. And today, the need of the moment is to support the adoption journey.
As we make these changes you'll see echoes of the visual style we used when promoting the release of Drupal 8.
- The Drupal wordmark region will help to define Drupal, and promote trying a demo.
- A ribbon will promote contextual CTAs, like learning more about Drupal 8.
- The news feed will be tweaked.
- DrupalCon will have a permanent home on the front page.
- Community stats and featured case studies will be carried over (but may evolve).
- The home page sponsorship format may change.
… a sneak preview of some new page elements and styles you'll see in the new home page.
Our first deployment will introduce the new layout and styles. Additional changes will follow as we introduce content to support our turn towards the adoption journey. Drupal evaluators beginning their adoption journey want to know who uses Drupal, and what business needs Drupal can solve. We will begin promoting specific success stories: solutions built in Drupal to meet a concrete need.

What's next?
We're continuing to refine our content model and editorial workflow for the new front page. You'll see updates in the Drupal.org change notifications as we get closer to deployment.
Wondering why we're making these changes now? This turn towards the adoption journey is part of our changing priorities for the next 12 months.
In this guide, we’ll set up Travis CI to rebuild a Nikola website and host it on GitHub Pages.

Why?
By using Travis CI to build your site, you can easily blog from anywhere you can edit text files. That means you can blog with only a web browser, using GitHub.com or a service like Prose.io. You also won’t need to install Nikola and Python to write, or even a real computer: a mobile phone could probably access one of those services and write something.

Caveats
- The build might take a couple minutes to finish (1:30 for the demo site; YMMV)
- When you commit and push to GitHub, the site will be published unconditionally. If you don’t have a copy of Nikola for local use, there is no way to preview your site.
Requirements
- A computer for the initial setup that can run Nikola and the Travis CI command-line tool (written in Ruby) — you need a Unix-like system (Linux, OS X, *BSD, etc.); Windows users should try Bash on Ubuntu on Windows (available in Windows 10 starting with the Anniversary Update) or a Linux virtual machine.
- A GitHub account (free)
- A Travis CI account linked to your GitHub account (free)
Start by creating a new Nikola site and customizing it to your liking. Follow the Getting Started guide. You might also want to add support for other input formats, namely Markdown, but this is not a requirement (unless you want to use Prose.io).
After you’re done, you must configure deploying to GitHub in Nikola. Make your first deployment from your local computer and make sure your site works right. Don’t forget to set up .gitignore. Moreover, you must set GITHUB_COMMIT_SOURCE = False — otherwise, Travis CI will go into an infinite loop.
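For reference, the relevant conf.py settings look like this (a sketch only; the right branch names depend on whether you are deploying a user/organization page or a project page):

    # Relevant conf.py settings (a sketch; adjust branch names to your setup)
    GITHUB_SOURCE_BRANCH = 'master'    # branch your sources live on
    GITHUB_DEPLOY_BRANCH = 'gh-pages'  # branch GitHub Pages serves
    GITHUB_REMOTE_NAME = 'origin'
    GITHUB_COMMIT_SOURCE = False       # must be False, or Travis CI loops forever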
If everything works, you can make some change to your site (so you see that rebuilding works), but don’t commit it just yet.

Setting up Travis CI
Next, we need to set up Travis CI. To do that, make sure you have the ruby and gem tools installed on your system. If you don’t have them, install them from your OS package manager.
First, download/copy the .travis.yml file (note the dot in the beginning; the downloaded file doesn’t have it!) and adjust the real name and e-mail (used for commits; lines 12/13), and the username/repo name on line 21. If you want to render your site in another language besides English, add the appropriate Ubuntu language pack to the list in this file.

    # Travis CI config for automated Nikola blog deployments
    language: python
    cache: apt
    sudo: false
    addons:
      apt:
        packages:
        - language-pack-en-base
    python:
    - 3.5
    before_install:
    - git config --global user.name 'Travis CI'
    - git config --global user.email 'travis@invalid'
    - git config --global push.default 'simple'
    - pip install --upgrade pip wheel
    - echo -e 'Host github.com\n StrictHostKeyChecking no' >> ~/.ssh/config
    - eval "$(ssh-agent -s)"
    - chmod 600 id_rsa
    - ssh-add id_rsa
    - git remote rm origin
    - git remote add origin git@github.com:USERNAME/REPO.git
    - git fetch origin master
    - git branch master FETCH_HEAD
    install:
    - pip install 'Nikola[extras]'
    script:
    - nikola build && nikola github_deploy -m 'Nikola auto deploy [ci skip]'
    notifications:
      email:
        on_success: change
        on_failure: always
Next, we need to generate an SSH key for Travis CI.

    echo id_rsa >> .gitignore
    echo id_rsa.pub >> .gitignore
    ssh-keygen -C TravisCI -f id_rsa -N ''
Open the id_rsa.pub file and copy its contents. Go to GitHub → your page repository → Settings → Deploy keys and add it there. Make sure Allow write access is checked.
And now, time for our venture into the Ruby world. Install the travis gem:

    gem install --user-install travis
You can then use the travis command if you have configured your $PATH for RubyGems; if you haven’t, the tool will output a path to use (e.g. ~/.gem/ruby/2.0.0/bin/travis).
We’ll use the Travis CI command-line client to log in (using your GitHub password), enable the repository and encrypt our SSH key. Run the following three commands, one at a time (they are interactive):

    travis login
    travis enable
    travis encrypt-file id_rsa --add
Commit everything and push to GitHub:

    git add .
    git commit -am "Automate builds with Travis CI"
    git push
Hopefully, Travis CI will build your site and deploy. Check the Travis CI website or your e-mail for a notification. If there are any errors, make sure you followed this guide to the letter.
Drutopia is an initiative within the Drupal project that prioritizes putting the best online tools into the hands of grassroots groups. By embracing the liberatory possibilities of free software and supporting people-centred economic models, Drutopia aims to revolutionize the way we work and cooperate.
Drutopia is at once an ethos of Drupal development and a fresh take on Drupal distributions for users to build upon, all based on a governance model that gives users a large role in the direction of the project.
Core values of the Drutopia initiative include:
- Be inclusive regarding gender, gender identity, sexual orientation, ethnicity, ability, age, religion, geography and class.
- Commit to protection of personal information and privacy and freedom from surveillance.
- Put collaboration and cooperation above competition.
- Prioritize human needs over private profit.
- Foster non-hierarchical structures and collective decision-making.
Drutopia focuses on shared solutions. Drupal excels at providing the tools to develop and distribute specialized website platforms that can be freely shared, reused, and adapted. Of the three most-used free software content management systems (CMSs) – WordPress, Joomla!, and Drupal – only Drupal has the built-in ability to package and share highly developed distributions.
Distributions are essential in attracting and meeting the needs of groups that want to support the free software movement but don’t have the technical know-how or resources to create a site from scratch. For developers, too, distributions hold a lot of potential because they do the heavy lifting of initial setup, allowing developers and site builders to bypass many hours of unnecessary effort. Drupal distributions so far have been held back by a series of factors that Drutopia aims to address.
Drutopia is about returning to Drupal’s roots in free software and progressive social change. Since its founding years, the Drupal free software project has both reflected and contributed to the democratic potential of the internet: to empower citizens to freely collaborate and organize outside the control of governments and corporate media. Long before it powered Fortune 500 sites and whitehouse.gov, Drupal was a tool of choice for small, grassroots, change-oriented groups.
This initiative aims to reclaim Drupal for the communities and groups that have always been its core users and adopters and have contributed to much of its best innovation.
Join us at drutopia.org.
The City of Winnipeg’s NOW (Neighbourhoods Of Winnipeg) Portal is an initiative to create a complete neighbourhood web portal for its citizens. At the core of the project is a set of about 47 fully linked, integrated and structured datasets of things of interest to Winnipeggers. The focal point of the portal is Winnipeg’s 236 neighbourhoods, which define the main structure of the portal. The portal has six main sections: topics of interest, maps, history, census, images and economic development. The portal is meant to be used by citizens to find things of interest in their neighbourhood, to learn its history, to see images of those things of interest, to find tools to help economic development, etc.
The NOW portal is not new; Structured Dynamics was also its main technical contractor for its first release in 2013. However, we have just finished helping the City’s NOW team migrate their older NOW portal from OSF 1.x to OSF 3.x and from Drupal 6 to Drupal 7; we also trained them on the new system. Major improvements accompany this upgrade, but the user interface design is essentially the same.
First, I will introduce each major section of the portal and explain its main features. Then I will discuss the portal’s new improvements.

Datasets
A NOW portal user won’t notice any of this, but the main feature of the portal is the data it uses. The portal manages 47 datasets (and growing) of fully structured, integrated and linked things of interest to Winnipeggers. What the portal does is manage entities. Each kind of entity (swimming pools, parks, places, images, addresses, streets, etc.) is defined with multiple properties and values. Several of the entities reference other entities in other datasets (for example, an assessment parcel from the Assessment Parcels dataset references neighbourhood entities and property address entities from their respective datasets).
The fact that these datasets are fully structured and integrated means that we can leverage these characteristics to create a powerful search experience: enabling filtering of the information on any of the properties, biasing the searches depending on where a keyword match occurs, etc.
Here is the list of all 47 datasets that currently exist in the portal:
- Aboriginal Service Providers
- Neighbourhoods of Winnipeg City
- Economic Development Images
- Recreation & Leisure Images
- Neighbourhoods Images
- Volunteer Images
- Library Images
- Parks Images
- Census 2006
- Census 2001
- Winnipeg Internal Websites
- Winnipeg External Websites
- Heritage Buildings and Resources
- NOW Local Content Dataset
- Outdoor Swimming Pools
- Zoning Parcels
- School Divisions
- Property Addresses
- Wading Pools
- Electoral wards of Winnipeg City
- Assessment Parcels
- Community Centres
- Police Service Centers
- Community Gardens
- Leisure Centres
- Parks and Open Spaces
- Community Committee
- Commercial real estates
- Sports and Recreation Facilities
- Community Characterization Areas
- Indoor Swimming Pools
- Neighbourhood Clusters
- Fire and Paramedic Stations
- Bus Stops
- Fire and Paramedic Service Images
- Animal Services Images
- Skateboard Parks
- Daycare Nurseries
- Indoor Soccer Fields
- Truck Routes
- Fire Stations
- Paramedic Stations
- Spray Parks Pads
The most useful feature of the portal, to me, is its full-text search engine. It is simple, clean and quite effective. The search engine is configured to try to give the most relevant results a NOW portal user may be searching for. For example, it will positively bias some results that come from specific datasets, or matches that occur in specific property values. The goal of this biasing is to improve the quality of the returned results. This is relatively easy to do since the context of the portal is well known and we can easily boost the scoring of search results because everything is fully structured.
Another major gain is that all the search results are fully templated. The search engine does not simply return a title and a description for each result: all the information the system has about a matched result is templated, and the most relevant information is displayed to users directly in the search results.
For example, if I search for an indoor swimming pool, in most cases it is because I want to call the front desk to get some information about the pool. This is why key information is displayed directly in the search results. That way, most users won’t even have to click on a result to get the information they were looking for.
Here is an example of a search for the keywords main street. As you can see, you get different kinds of results. Each result is templated to show the core information about these entities. You have the possibility to focus on particular kinds of entities, or to filter by their location in specific neighbourhoods.
Now let’s see some of the kinds of entities that can be searched on the portal and how they are presented to users.
Here is an example of an assessment parcel located in the St. John’s neighbourhood. The address, the value, the type and the location of the parcel on a map are displayed directly in the search results.
Another kind of entity that can be searched is the property address. These are located on a map; the value of the parcel and of the building, and the zoning of the address, are displayed. The property is also linked to its assessment parcel entity, which can be clicked to get additional information about the parcel.
Another interesting type of entity that can be searched is the street. What is interesting in this case is that you get the complete outline of the street directly on a map. That way you know where it starts, where it ends and where it is located in the city.
There are more than a thousand geo-localized images of all kinds of different things in the city that can be searched. A thumbnail of the image and the location of its subject appear in the search results.
If you were searching for a nursery for your newborn child, you can quickly see the name, the location on a map and the phone number of the nursery directly in the search result.
These are just a few examples of the fifty different kinds of entities that can appear like this in the search results.

Mapping
The mapping tool is another powerful feature of the portal. You can search as if you were using the full-text search engine (the top search box on the portal), except that you will only get results that can be geo-localized on a map. You can also simply browse entities from a dataset, or filter entities by their properties/values. You can persist entities you find on the map and save the map for future reference.
The example below shows that someone searched for a street (main street) and persisted it on the map, then searched for other things like nurseries and selected the ones near the persisted street, etc. That way one can visualize the different known entities in the portal on a map to better understand where things are located in the city, what exists near a certain location, within a neighbourhood, etc.
Census information is vital to the good development of a city. It is necessary to understand the trends of a sector, who populates it, etc., so that the city and other organizations can properly plan their projects to have as much impact as possible.
These are some of the reasons why one of the main sections of the site is dedicated to census data. Key census indicators have been configured in the portal. Users can select different kinds of regions (neighbourhood clusters, community areas and electoral wards) to get the numbers for each of these indicators, and can select multiple regions to compare them with each other. A chart view and a table view are available for presenting the census data.
The City took the time to write the history of each of its neighbourhoods. In addition to that, they hired professional photographers to photograph the points of interest of the city, geo-localize them and write a description for each of these photos. Because of this dedication, users of the portal can learn much about the city in general and the neighbourhood they live in. This is what the History and Image sections of the website are about.
Historic buildings are displayed on a map and they can be browsed from there.
Images of points of interest in the neighbourhood are also located on a map.
Ever wondered which neighbourhood you live in? No problem: go to the home page, put your address in the Find your Neighbourhood section and you will know right away. From there you can learn more about your neighbourhood: its history, the points of interest, etc.
Your address will be located on a map, and your neighbourhood will be outlined around it. Not only will you know which neighbourhood you live in, but you will also know where you live within it. From there you can click on the name of the neighbourhood to get to the neighbourhood’s page and start learning more about it: its history, photos of the points of interest that exist in your neighbourhood, etc.
Because all the content of the portal is fully structured, it is easy to browse using a well-defined topic structure. The city developed its own ontology that is used to help users browse the content of the portal by topics of interest. In the example below, I clicked the Economic Development node and then the Land use topic. Finally, I clicked the Map button to display things related to land use: in this case, zoning and assessment parcels are displayed to the user.
This is another way to find meaningful and interesting content from the portal.
Depending on the topic you choose, and the kind of information related to that topic, you may end up with different options: a map, a list of links to documents related to that topic, etc.

Export Content
Now that I have given an overview of each of the main features of the portal, let’s go back to the geeky things. The first thing I said about this portal is that, at its core, all the information it manages is fully structured, integrated and linked data. If you get to the page of an entity, you have the possibility to see the underlying data that exists about it in the system: simply click the Export tab at the top of the entity’s page, and you will have access to the description of that entity in multiple different formats.
In the future, the City should (or at least I hope will) make the whole set of datasets fully downloadable; right now you only have access to that information via the per-entity export feature. I say "I hope" because this NOW portal is fully disconnected from another initiative by the city, data.winnipeg.ca, which uses Socrata. The problem is that barely any of the datasets from NOW are available on data.winnipeg.ca, and the ones that do appear are the raw ones (semi-structured, un-documented, un-integrated and non-linked): none of the normalization, integration and linkage work done by the NOW team has been leveraged to improve the data.winnipeg.ca dataset catalog.

New with the upgrades
Those who are familiar with the NOW portal will notice a few changes. The user interface did not change that much, but multiple little things got improved in the process. I will cover the most notable of these changes.
The major changes happened in the backend of the portal. The data management in OSF for Drupal 7 is incompatible with what was available in Drupal 6. The management of entities became easier, and the configuration of OSF networks became a breeze. A revisioning system has been added, the user interface is more intuitive, etc. There is no comparison possible. However, portal users won’t notice any of this, since these are all site administrator functions.
The first thing users will notice is the completely new full-text search engine. The underlying search engine is almost the same, but the presentation is far better. Every entity type has gotten its own special template, displayed in its own way in the search results. Most of the time results should be much more relevant, and filtering is easier and cleaner. The search experience is much better in my view.
The overall site performance is much better since different caching strategies have been put in place in OSF 3.x and OSF for Drupal. This means that most of the features of the portal should react more swiftly.
Now every type of entity managed by the portal is templated: each entity’s webpage is templated in a specific way to optimize the information it conveys to users, along with its search-result “mini page” when it is returned as the result of a search query.
Multilinguality is now fully supported by the portal; however, not everything is currently translated. Expect a fully translated NOW portal in French in the future.

Creating a Network of Portals
One of the most interesting features that comes with this upgrade is that the NOW portal is now in a position to participate in a network of OSF instances. What does that mean? It means that the NOW portal could create partnerships with other local (regional, national or international) organizations to share datasets (and their maintenance costs).
Are there other organizations that use this kind of system? Well, there is at least one other right in Winnipeg: MyPeg.ca, also developed by Structured Dynamics. MyPeg uses RDF to model its information and uses OSF to manage it. MyPeg is a non-profit organization that uses census (and other indicator) data to do studies on the well-being of Winnipeggers. The team behind MyPeg.ca are research experts in indicator data. Their indicator datasets (which include census data) are top notch.
Let’s hypothesize that there is interest between the two groups in starting to collaborate. Say the NOW portal would like to use MyPeg’s census datasets instead of its own, since they are more complete, more accurate and include a larger number of important indicators. What they basically want is to outsource the creation and maintenance of the census/indicator data to a local, dedicated and highly professional organization. The only things they would need to do are:
- Formalize their relationship by signing a usage agreement
- Configure the MyPeg.ca OSF network in the NOW portal’s OSF for Drupal instance
- Register the datasets the NOW portal wants to use from MyPeg.ca
Once these three steps are done, taking no more than a couple of minutes, the system administrators of the NOW portal could start using the MyPeg.ca indicator datasets as if they existed on their own network. (The reverse could also be true for MyPeg.) Everything would be transparent to them. From then on, all the fixes and updates performed by MyPeg.ca on their indicator datasets would immediately appear on the NOW portal and be accessible to its users.
This is one possibility for collaboration. Another would be to simply share the serialized datasets on a routine basis (every month, every 6 months, every year) so that the NOW portal re-imports the datasets from the files shared by MyPeg.ca. This is also possible since both organizations use the same ontology to describe the indicator data. It means that no modification is required by the City to take the new information into account; they only have to import and update their local datasets. This is the beauty of ontologies.

Conclusion
The new NOW portal is a great service for the citizens of Winnipeg. It is also a really good example of a web portal that leverages fully structured, integrated and linked data. To me, the NOW portal is a really good example of the features that should go along with a municipal data portal.
It’s August and you are probably on holiday; life seems beautiful and you hope this period never ends, but… happy or not, September is about to arrive, and your daily routine with it. Don’t be afraid though: in these months the WikiToLearn community has been working hard to provide you the best WikiToLearn you’ve seen so far.
From a brand new homepage to a better organization of news and social pages: you’re going to love it! September is not that sad though. Why? If the new WikiToLearn isn’t enough for you, Akademy probably is: the annual world summit of KDE, happening this year in Berlin as part of QtCon, is one of the greatest events for FOSS, and we are taking part in it! Why is it so special for us? First of all because we’re part of the KDE community and we are looking forward to meeting other members, sharing opinions and helping each other, but also because this period is going to be special: KDE is celebrating its 20th birthday, while Free Software Foundation Europe and VideoLAN both have their 15th. And that's not all: you know who’s celebrating its birthday in the same period too? WikiToLearn!
During these months we worked hard to create local communities, spread the word about our project, give more attention and help to new users, and come up with a better communication plan that allows you to always be up to date on what’s going on in our community. September is not that far away and it’s full of great news: get ready and prepare yourself!
Watch out: #wtlatakademy, #wtlbirthday and others may go viral on our social pages in a few weeks. We’re going to Akademy!
There are many wordy articles on configuring your web server’s TLS ciphers. This is not one of them. Instead I will share a configuration which is both compatible enough for today’s needs and scores a straight “A” on Qualys’s SSL Server Test.
Python Piedmont Triad User Group: PYPTUG Monthly meeting August 30 2016 (flask-restplus, openstreetmap)
What

Meeting will start at 6:00pm.
We will open with an intro to PYPTUG and how to get started with Python, PYPTUG activities and member projects (in particular some updates on the Quadcopter project), then move on to news from the community.
Then on to the main talk.
Main Talk: Building a RESTful API with Flask-RESTPlus and Swagger

by Manikandan Ramakrishnan

Bio: Manikandan Ramakrishnan is a Data Engineer with Inmar Inc.
Abstract: Building an API and documenting it properly is like having a cake and eating it too. Flask-RESTPlus is a great Flask extension that makes it really easy to build robust REST APIs quickly with minimal setup. With its built-in Swagger integration, it is extremely simple to document the endpoints and to enforce request/response models.
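To give a flavour of the subject, here is a minimal sketch of a Flask-RESTPlus API (our own illustration, not the speaker's code; the model, route and data are made up):

    # A minimal Flask-RESTPlus API with Swagger documentation and an
    # enforced response model. Browse to http://localhost:5000/ for the
    # generated Swagger UI.
    from flask import Flask
    from flask_restplus import Api, Resource, fields

    app = Flask(__name__)
    api = Api(app, version='1.0', title='Todo API',
              description='A tiny demo API')

    todo = api.model('Todo', {
        'id': fields.Integer(readonly=True),
        'task': fields.String(required=True, description='What to do'),
    })

    TODOS = {1: {'id': 1, 'task': 'prepare a lightning talk'}}

    @api.route('/todos/<int:todo_id>')
    class TodoItem(Resource):
        @api.marshal_with(todo)  # serialize responses using the model
        def get(self, todo_id):
            return TODOS[todo_id]

    if __name__ == '__main__':
        app.run(debug=True)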
We will have some time for extemporaneous "lightning talks" of 5-10 minute duration. If you'd like to do one, some suggestions of talks were provided here, if you are looking for inspiration. Or talk about a project you are working on.

One lightning talk will cover OpenStreetMap.
When

Tuesday, August 30th 2016
Meeting starts at 6:00PM
Where

Wake Forest University, close to Polo Rd and University Parkway:
room: Manchester 241 Wake Forest University, Winston-Salem, NC 27109
See also this campus map (PDF) and also the Parking Map (PDF) (Manchester hall is #20A on the parking map)
And speaking of parking: parking after 5pm is on a first-come, first-served basis. The official parking policy is: "Visitors can park in any general parking lot on campus. Visitors should avoid reserved spaces, faculty/staff lots, fire lanes or other restricted areas on campus. Frequent visitors should contact Parking and Transportation to register for a parking permit."

Mailing List
Don't forget to sign up to our user group mailing list:
It is the only step required to become a PYPTUG member.
RSVP on meetup: https://www.meetup.com/PYthon-Piedmont-Triad-User-Group-PYPTUG/events/233095834/
I'm in Pretoria, South Africa at the H3ABioNet hackathon which is developing workflows for Illumina chip genotyping, imputation, 16S rRNA sequencing, and population structure/association testing. Currently, I'm working with the imputation stream and we're using Nextflow to deploy an IMPUTE-based imputation workflow with Docker and NCSA's openstack-based cloud (Nebula) underneath.
The OpenStack command line clients (nova and cinder) seem to be pretty usable to automate bringing up a fleet of VMs and the cloud-init package which is present in the images makes configuring the images pretty simple.
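For illustration, the same automation can also be driven from Python with python-novaclient, the library behind the nova CLI (a sketch only; the image, flavor, credentials and auth URL below are all hypothetical):

    # Boot a small fleet of worker VMs on an OpenStack cloud using
    # python-novaclient; a cloud-init script configures each instance.
    from novaclient import client

    nova = client.Client('2', 'username', 'password', 'project',
                         'https://nebula.example.org:5000/v2.0')
    image = nova.images.find(name='ubuntu-16.04')
    flavor = nova.flavors.find(name='m1.medium')
    cloud_init = open('cloud-init.yml').read()  # per-VM configuration script

    for i in range(5):
        nova.servers.create(name='imputation-worker-%d' % i,
                            image=image, flavor=flavor,
                            userdata=cloud_init)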
Now if I just knew of a better shared object store which was supported by Nextflow in OpenStack besides mounting an NFS share, things would be better.
You can follow our progress in our git repo: https://github.com/h3abionet/chipimputation
Have you ever seen through a plane’s window, or in Google Maps, some precisely defined circles on the Earth? Typically many of them, close to each other? Something like this:
Do you know what they are? If you are thinking of irrigation circles, you are wrong. Do not believe the lies of the conspirators. Those are, undoubtedly, proofs of extraterrestrial visitors on earth.
As I want to be ready for the first contact I need to know where these guys are working. It should be easy with so many satellite images at hand.
So I asked the machine learning experts around here to lend me a hand. Surprisingly, they refused. Mumbling I don’t know what about irrigation circles. Very suspicious. But something else they mentioned is that a better initial approach would be to use some computer-vision detection technique.
So, there you go. Those damn conspirators gave me the key.

Circles detection
Now, in the Python ecosystem, computer vision means OpenCV. And as it happens, this library has the HoughCircles function, which finds circles in an image. Not surprising: OpenCV has a bazillion useful modules like that.
Let’s make it happen.
First of all, I’m going to use Landsat 8 data. I’ll choose scene 229/82 for two reasons:
- I know it includes circles, and
- it includes my house (I want to meet the extraterrestrials living close by, not those in Area 51)
The first issue I have to solve is that the HoughCircles function "finds circles in a grayscale image using a modification of the Hough transform".
Well, grayscale does not exactly match multi-band Landsat 8 data, but each one of the bands can be treated as a single grayscale image. Now, a circle can express itself differently in different bands, because each band has its own way to sense the earth. So, the detector can define slightly different center coordinates for the same circle. For that reason, if two centers are too close then I’m going to keep only one of them (and discard the other as repeated).
Next, I need to determine the maximum and minimum circle radius. Typically, those circles’ sizes vary from 400 m up to 800 m. That is between 13 and 26 Landsat pixels (30 m each). That’s a starting point. For the rest of the parameters I’ll just play around and try different values (not very scientific, I’m sorry).
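As a rough illustration of the detection-plus-deduplication approach (a sketch only; the real script and parameter values live in the notebook linked below, and the thresholds and `bands` data here are stand-ins):

    # Per-band circle detection with near-duplicate centers merged across
    # bands. Assumes OpenCV 3 (cv2.HOUGH_GRADIENT); thresholds are stand-ins.
    import cv2
    import numpy as np

    def detect_circles(band):
        """Run the Hough circle transform on one grayscale band."""
        found = cv2.HoughCircles(band, cv2.HOUGH_GRADIENT, 2, 20,
                                 param1=100, param2=60,  # edge/accumulator thresholds
                                 minRadius=13, maxRadius=26)  # 400-800 m at 30 m/pixel
        return [] if found is None else list(found[0])

    def dedupe(circles, min_dist=10):
        """Keep one circle per cluster of centers that are too close."""
        kept = []
        for x, y, r in circles:
            if all(np.hypot(x - kx, y - ky) >= min_dist for kx, ky, _ in kept):
                kept.append((x, y, r))
        return kept

    # Stand-in data so the sketch runs; in practice `bands` holds the
    # individual Landsat 8 band rasters as 8-bit grayscale arrays.
    bands = [np.zeros((512, 512), dtype=np.uint8)]

    candidates = []
    for band in bands:
        candidates.extend(detect_circles(band))
    circles = dedupe(candidates)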
So I run my script (which you can see in this Jupyter notebook) and without too much effort I can see that the circles are detected:
By changing the parameters I can detect more circles (getting more false positives) or fewer circles (missing some real ones). As usual, there’s a trade-off there.

Filter out false positives
These circles only make sense in farming areas. If I configure the program not to miss real circles, then I get a lot of false positives. There are too many detected circles in cities, clouds, mountains, around rivers, etc.
That’s a whole new problem that I will need to solve. I can use vegetation indices, texture computation, machine learning: there’s a whole battery of possibilities to explore. Intuition, experience, domain knowledge, good data-science practices and ufology will help me out here. Unluckily, all that is out of the scope of this post.
So, my search for aliens will continue.

Mental disorder disclaimer
I hope it’s clear that the whole search-for-aliens story is fictional: just an amusing way to present the subject.
With that clarified, the technical aspects of the post are still valid.
To help our friends at Kilimo, we developed an irrigation-circle detector prototype. As hinted before, instead of approaching the problem with machine learning we attacked it using computer vision techniques.
On August 13th, I had the pleasure of enjoying another Drupal Camp Asheville. This has become one of my favorite Drupal camps because of the location and quality of camp organization. It has the right balance of structure, while maintaining a grassroots feel that encourages open discussion and sharing.
I’m going to Akademy! Akademy 2016, as part of QtCon, that is. I missed last year in A Coruña because it conflicted with my family summer vacation, but this year is just fine (although if I was a university student I’d be annoyed that Akademy was smack-dab in the middle of the first week of classes — you can’t please everyone).
Two purely social things I will be doing are baking cookies and telling stories about dinosaurs. I have a nice long train ride to Berlin to think of those stories. But, as those of you who have been following my BSD posts know, the dinosaurs are not so backwards anymore. Qt 5.6 is doing an exp-run on FreeBSD, so it will be in the tree Real Soon Now ™, and the Frameworks are lined up, etc. etc. For folks following the plasma5 branch in area51 this is all old hat; that tends to follow the release of new KDE software — be it Frameworks, or Plasma, or Applications, or KDevelop — by a few days. The exciting thing is having this all in the official ports tree, which means that it becomes more accessible to downstreams as well.
Er .. yeah, dinosaurs. Technically, I’m looking forward to talking about Qt on BSD and KDE Plasma desktop and other technologies on BSD, and about the long-term effects of this year’s Randa meeting. I have it on good authority that KDE Emerge^WRunda^W KDE Cauldron is being investigated for the BSDs as well.
Plasma 5.8 will be our first long-term supported release in the Plasma 5 series. We want to make this a release as polished and stable as possible. One area we weren’t quite happy with was our multi-screen user experience. While it works quite well for most of our users, there were a number of problems which made our multi-screen support sub-par.
Let’s take a step back to define what we’re talking about.
Multi-screen support means connecting more than one screen to your computer. The following use cases give good examples of the scope:
- Static workstation: A desktop computer with more than one display connected; the desktop typically spans both screens to give more screen real estate.
- Docking station: A laptop computer that is hooked up to a docking station with additional displays connected. This is a more interesting case, since different configurations may be picked depending on whether the laptop’s lid is closed or not, and how the user switches between displays.
- Projector: The computer is connected to a projector or TV.
The idea is that when the user plugs in or starts up with a given configuration, and has already configured this hardware combination, that setup is restored. Otherwise, a reasonable guess is made to give the user a good starting point to fine-tune the setup.
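In pseudo-Python, the restore-or-guess logic looks roughly like this (a toy sketch only; KScreen itself is C++, and every name below is invented for illustration):

    # A toy model of the restore-or-guess behaviour described above.
    def config_key(outputs):
        """Fingerprint a hardware combination by its connected displays."""
        return frozenset(o['edid'] for o in outputs)

    def configuration_for(outputs, saved_configs):
        key = config_key(outputs)
        if key in saved_configs:
            return saved_configs[key]  # the user set this combination up before
        # Otherwise guess: enable everything at its preferred mode, laid out
        # left to right, as a starting point the user can fine-tune.
        guess, x = [], 0
        for o in outputs:
            w, h = o['preferred_mode']
            guess.append({'edid': o['edid'], 'pos': (x, 0), 'mode': (w, h)})
            x += w
        return guess

    # A laptop panel plus a never-seen projector falls back to the guess:
    outputs = [{'edid': 'panel-edid', 'preferred_mode': (1366, 768)},
               {'edid': 'beamer-edid', 'preferred_mode': (1024, 768)}]
    print(configuration_for(outputs, saved_configs={}))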
This is the job of KScreen. At a technical level, kscreen consists of three parts:
- system settings module: This can be reached through System Settings.
- kscreen daemon: Running in a background process, this component saves, restores and creates initial screen configurations.
- libkscreen: This is the library providing the screen setup reading and writing API. It has backends for X11, Wayland, and others, allowing clients to talk to the exact same programming interface independent of the display server in use.
At an architectural level, this is a sound design: the roles are clearly separated, the low-level bits are suitably abstracted to allow re-use of code, the API presents what matters to the user, implementation details are hidden. Most importantly, aside from a few bugs, it works as expected, and in principle, there’s no reason why it shouldn’t.
So much for the theory. In reality, we’re dealing with a huge amount of complexity. There are hardware events such as suspending and waking up with different configurations; the laptop’s lid may be closed or opened (and when that’s done, we don’t even get an event that it closed); displays come and go; depending on their connection, the same piece of hardware might support completely different resolutions; hardware comes with broken EDID information; display connectors come and go, and so do display controllers (crtcs); and on top of all that, the only way we get to know what actually works in reality for the user is the “throw stuff against the wall and observe what sticks” tactic.
This is the fabric of nightmares. Since I prefer to not sleep, but hack at night, I seemed to be the right person to send into this battle. (Coincidentally, I was also “crowned” kscreen maintainer a few months ago, but let’s stick to drama here.)
So, anyway, as I already mentioned in an earlier blog entry, we had some problems restoring configurations. In certain situations, displays weren’t enabled or were positioned unreliably, or kscreen failed to restore configurations altogether, making it “forget” settings.
Debugging these issues is not entirely trivial. We need to figure out at which level they happen (for example in our xrandr implementation, in other parts of the library, or in the daemon). We also need to figure out what exactly happens, and when. A complex architecture like this brings a number of synchronization problems with it, and these are hard to debug when you have to piece together what goes on across log files. In Plasma 5.8, kscreen will log its activity into one consolidated, categorized and time-stamped log. This rather simple change has already been a huge help in getting to know what’s really going on, and it has helped us identify a number of problems.
A tool I’ve been working on is kscreen-doctor. On the one hand, I needed a debugging helper that can give system information useful for debugging. Perhaps more importantly, I knew I’d be missing a command-line tool to futz around with screen configurations from the command line or from scripts as Wayland arrives. kscreen-doctor allows changing the screen configuration at runtime, like this:
Disable the hdmi output, enable the laptop panel and set it to a specific mode
$ kscreen-doctor output.HDMI-2.disable output.eDP-1.mode.1 output.eDP-1.enable
Position the hdmi monitor on the right of the laptop panel
$ kscreen-doctor output.HDMI-2.position.0,1280 output.eDP-1.position.0,0
Please note that kscreen-doctor is quite experimental. It’s a tool that allows you to shoot yourself in the foot, so user discretion is advised. If you break things, you get to keep the pieces. I’d like to develop this into a more stable tool in kscreen, but for now: don’t complain if it doesn’t work or eats your hamster.
Another neat testing tool is Wayland. The video wall configuration you see in the screenshot is unfortunately not real hardware I have around here. What I’ve done instead is run a Wayland server with these “virtual displays” connected, which in turn allowed me to reproduce a configuration issue. I’ll spare you the details of what exactly went wrong, but this kind of trick allows us to reproduce problems with much more hardware than I’d ever want or need in my office. It doesn’t stop there: I’ve added this hardware configuration to our unit-testing suite, so we can make sure that this case is covered and keeps working in the future.
Using the Drupal module Hook Update Deploy Tools to move node content can be an important part of a deployment strategy.
- How do I export and import nodes using Hook Update Deploy Tools? >> Read the project page or a quick how-to.
- What is the unique ID that connects an export to an import?
- What are the risks of this import export model?
- What if I am using an entity reference or a taxonomy that does not exist on production?
- Does the import show up as a revision?
- What happens if the import does not validate?
- What if the alias or path is already in use by another node?
- What if the alias or path is already in use by a View or used by a menu router?
- Is there a limit to the number of nodes that can be imported this way?
What is the unique ID that connects an export to an import?

To create the export file, the node ID is used to create the file. After that, the filename and 'unique ID' reference the alias of that node. So when you import the node, the node ID on the production site will be determined by looking up the alias of the node. If a matching alias is found, that is the node that gets updated. If no matching alias is found, a new node gets created. The alias becomes the unique ID.

What are the risks of this import export model?
At present the known risks are:
- If the exported node uses entity references that do not exist on prod, the entity reference will either not be made, or reference an entity that is using that entity id on prod. This can be mitigated by exporting your source node while using a recent copy of the production DB.
- If the exported node uses taxonomy terms that do not exist on prod, the tag may import incorrectly. This can be mitigated by exporting your source node while using a recent copy of the production DB.
- If you are using pathauto and the existing pattern on the production site is different from the pattern on your sandbox, the imported node will end up with a different alias, resulting in an invalid import. The imported node will be deleted since it failed validation, and the hook_update_N will fail. This can be mitigated by exporting your source node while using a recent copy of the production DB.
- File attachments: there is currently no way to bring attached files along with the node unless the files already exist with a matching fid on production.
What if I am using an entity reference or a taxonomy that does not exist on production?

See answers 1 and 2 in "What are the risks of this import export model?"

Does the import show up as a revision?
Yes it does, and the revision note indicates that it was imported with Hook Update Deploy Tools. The revision will take on the status of the exported node: if the exported node was unpublished, the imported revision will be unpublished.

What happens if the import does not validate?
If the import was to an existing node, the update revision will be deleted, returning the node to its last published revision. If the import was for a node that did not exist on the site, the node and its first revision will be deleted. In either case, if the import was run through a hook_update_N, that update will fail and allow the import to be re-run once the issue is resolved.

What if the alias or path is already in use by another node?
If the alias is in use by a node, that node will be updated by the import. The alias, not the nid, is the unique ID that links them.

What if the alias or path is already in use by a View or used by a menu router?
If the alias is in use on the site by something other than a node, the import will be prevented. If the import is being run by a hook_update_N(), the update will fail and can be re-run when the issue is resolved.

Is there a limit to the number of nodes that can be imported this way?
Technically, there is no real limit. Realistically, it is not a great workflow for moving all of your content. This export/import method is best reserved for mission-critical pages, like forms or thank-you pages that go along with a Feature deployment. It is also good for pages that often get destroyed during early site development, like style guides and example pages.
Marc Drummond (mdrummond), front-end developer at Lullabot, Drupal core contributor, and self-professed Star Wars expert, joins Kelley and Mike to discuss all the things the Drupal front-end community has been talking about lately. We also discuss the next major version of Drupal, whether or not a major Drupal contrib module will be deprecated, as well as our picks of the week.

Interview
- Component-based rendering
- How would we implement this in Drupal? (contrib: Zen + Components?)
- Create a new user-facing core theme
- How can people get involved in components and/or the new theme? Drupaltwig on Slack
- Is Composer too hard?, Will Composer be a barrier for sitebuilders?
- Writing tests for Drupal
- Responsive images
- Liberty theme
- Windup theme
- The Fall, 2016 session of Drupal Career Online begins September 26; applications are now open.
- Introduction to Drupal 8 Module Development at DrupalCon Dublin.
- Proposal: Deprecate Field Collections for Drupal 8, focus on Entity Reference Revisions & Paragraphs.
- The Average Web Page (Data from Analyzing 8 Million Websites).
- There will never be a Drupal 9 vs. There will be a Drupal 9, and here is why.
- MyDropWizard.com - Long-term-support services for Drupal 6, 7, and 8 sites.
- WebEnabled.com - devPanel.
- Mike - Smart Trim module.
- Kelley - Drupal Security Team shield on Drupal.org project pages. I’m looking at you, Typogrify and Administration Menu!
- Marc - Making web accessibility great again: Auditing the US Presidential Candidates Websites for Accessibility, and nested doc root on Pantheon.
- Midwest Drupal Summit - August 19-21, 2016.
- DrupalCon Dublin - September 26-30, 2016.
- NEDCamp September 30 - October 1, 2016.
- Docker for Mac
- Writing a fantasy novel
- DrupalCamp Twin Cities
- Chunk-y Town - performed by Marc Drummond at Twin Cities DrupalCamp 2016.
If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.
In my previous post I explained why there will be a Drupal 9 even though we have previously unseen possibilities to add new things within Drupal 8.x.y. Now I'd like to dispel another myth: that initiatives are only there to add those new things.
Drupal 8 introduced initiatives to the core development process out of the recognition that core development had become too big to follow, understand or really get involved with in general. Because there are key areas that people want to work on, it makes sense to set up focused groups to organize work in those areas and support each other in smaller teams. So initiatives like Configuration Management, Views in Core, Web Services, Multilingual, etc. were set up, and they mostly worked well, in no small part because it is easier to devote yourself to improving web services capabilities or multilingual support than to "make Drupal better" in the abstract. Goals that are too abstract are harder to sign up for, and a team with a thousand people is harder to feel a member of.
Given the success of this approach, even after the release of Drupal 8.0.0 we continued using this model, and there are now several groups of people working on making things happen in Drupal 8.x. Ongoing initiatives include API-first, Media, Migrate, Content Workflows and so on. Several of these are primarily working on fixing bugs and plugging holes: a significant part of the Migrate and API-first work to date was about fixing bugs and implementing originally intended functionality, for example.
The wonder of these initiatives is that they are all groups of dedicated people who are really passionate about their topic. They not only have plan or meta issues linked in the roadmap, but also issue tags and regular meeting times. The Drupal 8 core calendar is full of meetings happening almost every single workday (that said, somehow people prefer Wednesdays and avoid Fridays).
If you have an issue involving usability, a bug with a Drupal web service API, a missing migration feature and so on, your best choice is to bring it to the teams already focused on the topics. The number and diverse areas of teams already in place gives you a very good chance that whatever you are intending to work on is somehow related to one or more of them. And since no issue will get done by one person (you need a reviewer and a committer at minimum), your only way to get something resolved is to seek interested parties as soon as possible. Does it sound like you are demanding time from these folks unfairly? I don't think so. As long as you are genuinely interested to solve the problem at hand, you are in fact contributing to the team which is for the benefit of everyone. And who knows, maybe you quickly become an integral team member as well.
Thanks for contributing and happy team-match finding!
P.S. If your issue is no match for an existing team, the friendly folks in #drupal-contribute on IRC are also there to help.
This article covers how to send email programmatically from your Drupal 8 site. There are two main steps: first, implement hook_mail() to define one or more email templates; second, use the mail manager to send emails using those templates. Let's see an example of sending an email from a custom module, along with the namespaces it requires.