FLOSS Project Planets

DrupalEasy: DrupalEasy Podcast: New Orleans Day 2

Planet Drupal - Thu, 2016-05-19 13:35

Direct .mp3 file download.

Hosts Ryan Price, Mike Anello, Kelley Curry and Anna Kalata are joined by guests Suzanne Dergacheva (of Evolving Web), Dave Hall (of the newly anointed Drupal 8 Workflow Initiative), and Steve Edwards to discuss Day 2 of DrupalCon. Ryan also cuts in interviews with Symfony's creator, Fabien Potencier, and local New Orleans Drupal community representative Eric Schmidt. Finally, we hear about some fun non-Drupal things each panelist did during the week.

Check in later this week for more episodes from DrupalCon New Orleans 2016.

Follow us on Twitter

Intro Music

Glory Glory Code of Conduct from #Prenote

By Adam Juran, Campbell Vertesi and Jeremy "JAM" Macguire

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: FLOSS Project Planets

ImageX Media: Managing the Creative Process

Planet Drupal - Thu, 2016-05-19 12:48

Inherent in the design process is the debate between subjective and objective quality. Can a design be called objectively “good” and, if so, what is it that makes it good? Or is the quality of a design entirely in the subjective eye of the beholder? However, implicit in this debate is the assumption that design is a single thing that can be viewed as a whole, rather than different elements that each play a role in the overall user experience. 

Categories: FLOSS Project Planets

PythonClub - A Brazilian collaborative blog about Python: Python with Unittest, Travis CI, Coveralls and Landscape (Part 4 of 4)

Planet Python - Thu, 2016-05-19 12:43

Hey everyone, how's it going?

In the third part of this tutorial, we learned how to use Coveralls to generate test reports for our project. The next tool we will look at is the Landscape service. I will keep this tutorial brief, since the default use of the tool is quite simple.

About Landscape

Landscape is an online tool similar to the well-known PyLint; in other words, it is a bug, style, and code-quality checker for Python.

When we enable Landscape analysis on our repository, it runs after every push or pull request and scans our Python source code for possible bugs, such as variables being used before they are declared, or reserved names being used as variable names. It also checks whether your code formatting follows PEP8 and points out possible design flaws in your code.

Once the analysis is finished, the tool reports, as a percentage, the "quality" of our code or, more precisely, how well written our code is according to good development practices. To be clear, Landscape does not check whether your code works correctly; that is the job of the tests you wrote, as covered in the first part of the tutorial.

Like the tools in the previous tutorials, Landscape is completely free for open source projects.

Creating an account

The signup process is simple. At the top of the page we can sign up using a Github account. Complete the signup and let's move on to the configuration.

Enabling the service

Of all the tools presented, this is the simplest to configure. The only step needed here is to enable the service for our repository. As an example, I will use the same repository as in the previous tutorials. Click here to view the repository.

As soon as you sign up, you will see a screen listing the repositories of yours that are using the service. If you have never used the service, you probably won't have any repositories there yet, so do the following: click the Sync with Github now button to synchronize with your Github account. Once the synchronization is complete, click the Add repository button.

Clicking it takes you to a screen listing all the repositories you have write access to. Find the repository you want to enable the service for (remember that Landscape only works for Python projects) and select it (just click the repository name).

Add the repository by clicking the green Add Repository button just below the list. We are redirected back to the home screen, now with the chosen repository visible.

From that moment on, Landscape will start analyzing your project. Click the repository name to see more details of the analysis.

In the case of my test project, the code "health" is at 100%; in other words, no part of the code has style errors or bugs, and the whole codebase follows good programming practices.

In the sidebar on the left of the page there are several items; the most important are described below:

  • Error: statements in the code that probably indicate a mistake, for example referencing a variable without declaring it first, or calling a method that does not exist.
  • Smells: signs or symptoms in the code that may indicate a flaw in the software's design. Unlike a bug, code smells do not indicate incorrect use of the programming language, nor do they stop the software from working. Instead, they point to design flaws that can slow development or open the door to future bugs. Examples of code smells include duplicated methods or code, very large classes, forcing a design pattern where simpler and easier-to-maintain code would do, very long methods or methods with an excessive number of parameters, and so on. The list can get very long, haha... read more for details.
  • Style: as the name suggests, this item shows the style errors in your code, pointing out snippets that do not follow PEP8's style rules, code with incorrect indentation, and so on.

As a final step, all that remains is to add a badge to the README.md file in our repository. That way we can see our project's "health" percentage without having to visit the Landscape page.

On the page with the analysis results (where your project's "health" percentage is displayed), we can grab the Landscape badge. In the top right corner of the screen you will find the buttons below:

Click the badge (where it says health) and the following window will be displayed:

Select the text in the Markdown option and paste it into your repository's README.md. Mine ended up like this:

# Codigo Avulso Test Tutorial
[![Build Status](https://travis-ci.org/mstuttgart/codigo-avulso-test-tutorial.svg?branch=master)](https://travis-ci.org/mstuttgart/codigo-avulso-test-tutorial)
[![Coverage Status](https://coveralls.io/repos/github/mstuttgart/codigo-avulso-test-tutorial/badge.svg?branch=master)](https://coveralls.io/github/mstuttgart/codigo-avulso-test-tutorial?branch=master)
[![Code Health](https://landscape.io/github/mstuttgart/codigo-avulso-test-tutorial/master/landscape.svg?style=flat)](https://landscape.io/github/mstuttgart/codigo-avulso-test-tutorial/master)

It is also possible to configure Landscape to exclude a directory or file from the analysis (very useful for compiled UI files, common for those working with PyQt/PySide), among other options, but that is a topic for a future tutorial.

Below we can see the three badges we added to our project. Click here to access the repository.

Conclusion

That's it, folks: our repository now displays information about unit tests, test coverage reports, and code quality analysis. This does not guarantee that your project is free of flaws and bugs, but it helps you avoid them.

Keep in mind that all these tools help a lot, but nothing replaces critical thinking and the habit of always following good practices during development. So always seek to learn more, study more, be humble, and listen to those with more experience than you. In short, strive to be a better programmer, and a better person, every day. That advice goes for all of us, including yours truly.

I hope you enjoyed this series of tutorials. Thanks for reading this far, and see you in the next post.

Originally published at: python-com-unittest-travis-ci-coveralls-e-landscape-parte-4-de-4

Categories: FLOSS Project Planets

Michal Čihař: wlc 0.3

Planet Debian - Thu, 2016-05-19 12:00

wlc 0.3, a command line utility for Weblate, has just been released. This is probably the first release that is worth using, so it's probably also worthy of a bigger announcement.

It is built on the API introduced in Weblate 2.6, which is still in development. Several wlc commands will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current Git is okay as well; it is now running on both the demo and hosting servers).

How to use it? First you will probably want to store the credentials, so that your requests are authenticated (you can do unauthenticated requests as well, but obviously only read-only and on public objects), so let's create ~/.config/weblate:

[weblate]
url = https://hosted.weblate.org/api/

[keys]
https://hosted.weblate.org/api/ = APIKEY

Now you can do basic commands:

$ wlc show weblate/master/cs
...
last_author: Michal Čihař
last_change: 2016-05-13T15:59:25
revision: 62f038bb0bfe360494fb8dee30fd9d34133a8663
share_url: https://hosted.weblate.org/engage/weblate/cs/
total: 1361
total_words: 6144
translate_url: https://hosted.weblate.org/translate/weblate/master/cs/
translated: 1361
translated_percent: 100.0
translated_words: 6144
url: https://hosted.weblate.org/api/translations/weblate/master/cs/
web_url: https://hosted.weblate.org/projects/weblate/master/cs/

You can find more examples in the wlc documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

Categories: FLOSS Project Planets

Platform.sh: Automatically sanitize your database on development environments

Planet Drupal - Thu, 2016-05-19 11:01

You’re developing your site on Platform.sh and you love the fact that you get exact copies of your production site for every Git branch that you push.

But now that you think about it, you realize that all those copies used by your development team to implement new features or fixes contain production data (like user emails, user passwords…). And that all the people working on the project will have access to that sensitive data.

So you come up with the idea to write a custom script to automatically sanitize the production data every time you copy the production site or synchronize your development environments. Next you think of a way to automatically run that script. Possibly a custom Jenkins job that you will maintain yourself. But, of course, you will need to update this Jenkins job for every new project you work on. Plus, you will have to figure out the permissions for this script to give proper access to your site.

But wait, what if I told you that all this hassle can be handled in a simple deployment hook that Platform.sh provides?
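The rough shape of such a hook is sketched below. This is illustrative, not copied from the Platform.sh docs: the branch check, directory layout, and drush flags are assumptions you would adapt to your own project.

# .platform.app.yaml (illustrative sketch)
hooks:
    deploy: |
        # Never sanitize the production branch; the branch name is an assumption.
        if [ "$PLATFORM_BRANCH" != "master" ]; then
            cd public
            # Obfuscate emails and passwords in the copied database.
            drush -y sql-sanitize --sanitize-email="user+%uid@example.com"
        fi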

Categories: FLOSS Project Planets

Pronovix: Brightcove Video Connect for Drupal 8 - Part 2: Installation & Configuration

Planet Drupal - Thu, 2016-05-19 11:00

Part 2 in a 4-part blog series covering the various aspects of the new Brightcove Video Connect module for Drupal 8. This second part details the Installation & Configuration steps required to get the module up and running properly.

Categories: FLOSS Project Planets

BlackMesh: The DrupalCon NOLA Aftermath

Planet Drupal - Thu, 2016-05-19 10:19


Last week, I attended my fourth North America DrupalCon with the BlackMesh team. I can honestly say that DrupalCon New Orleans has been the best con to date!

If you are unfamiliar with DrupalCon, it is the biggest annual Drupal community event in the world, which brings together developers, designers, strategists, and more to meet, learn, and give back to the community. BlackMesh has been attending DrupalCon since 2008, and throughout the past several years we’ve attended some incredible gatherings.


So, what made DrupalCon NOLA 2016 so great for BlackMesh?

Because it was in New Orleans?

Not exactly, but that was a huge perk! The sun was out, the food was good, and the music was loud.


Quality trumps Quantity

The Drupal Association and community have done a great job reaching out to new people and getting them involved in this conference. Because there was a greater variety of sessions and summits at this year’s DrupalCon, we talked with more than just developers – we met government agency representatives, business executives, and entrepreneurs who were highly interested in our managed services and kept our team busy … unique needs mean quality leads!

It was wonderful to meet so many new people, and in a way – I am quoting a friend of mine – it was like “summer camp.” It was great to see so many old friends!


Drupal & Government

This year featured an entire summit devoted to Drupaling for government; our president, Eric Mandel, and our CTO, Jason Ford, attended the event, which consisted of a very insightful panel discussion regarding Drupal development and hosting for city, state, and federal governments.

I unfortunately could not attend, but talking with Eric and Jason afterward I could hear their excitement as they shared their summit experience. They had some phenomenal conversations with fellow attendees; government employees and contractors spoke openly about what works best for them, which Eric and Jason said was very beneficial and insightful. The summit also offered a new focus on security, especially the recognition that security, rather than being a hindrance to agencies, needs to be part of the solution. Finally, Larry Gillick, Deputy Director of Digital Strategy at the Department of the Interior, gave a fantastic presentation about the DOI PaaS and its success; the guys found this truly informative and motivating.

Party Time

My personal favorite highlight of the week was the BlackMesh Happy Hour event at The Jaxson. It is located at the Jackson Brewery with a fantastic view of the Mississippi River. With flashing Mardi Gras beads, cold beverages on tap, and incredibly fun people, it was truly a BlackMesh party, New Orleans style!

Thank you to everyone who came out – you can see all of the photos of the event here!


Brand Recognition

It was amazing to have a significant number of attendees, particularly first-time attendees, approach us because they've heard about the good work BlackMesh does in the Drupal community. We even spotted people wearing BlackMesh shirts from previous DrupalCons (our super soft t-shirts seem to be a big hit every year). What a great feeling!

Brand awareness is more than just creating word-of-mouth buzz; it is seeing someone at the airport wearing a BlackMesh t-shirt. :-)


Looking Forward

DrupalCon is one of our best events of the year. With all the inspiring sessions, late (but fun) nights out, and productive discussions, it’s no wonder why we at BlackMesh are already looking forward to the next conference … get ready, Dublin!

Drupal, DrupalCon New Orleans
Categories: FLOSS Project Planets

Matt Glaman: Drupal Commerce and Migrate status

Planet Drupal - Thu, 2016-05-19 09:55
First Steps

Plans were made back in December 2015 to put effort into supporting Migrate with Drupal Commerce, to speed up adoption of Drupal Commerce 2.0. Commerce Migrate for Drupal 8 will provide migrations from Drupal Commerce 1.x, Ubercart for D6, and Ubercart for D7. Ideally, this module will also support other vendors, such as Magento and WooCommerce.

Before official work began on the 8.x branch, we had a contributor start with an Ubercart 6 port! Contributor creativepragmatic created a sandbox fork and commenced the Drupal 8 work. That code has allowed creativepragmatic to continue development on their Drupal 6 to Drupal 8 site migration.

Midwest Drupal Camp

MidCamp kicked off the official start of the Commerce Migrate 8.x branch. This was the merger of creativepragmatic's work. I sprinted on creating a database test fixture for Commerce Migrate's tests. I chose the Commerce Kickstart 2 demonstration store as our test base! So that means all tests prove we can migrate a Commerce Kickstart 2 demo site to Drupal Commerce 2.x. Work was somewhat slow and stopped short, as 8.1.x was pending release and brought a change to how Migrate worked. We left MidCamp, however, with the database test fixture and initial tests and migration components.

DrupalCon New Orleans

Work on Commerce Migrate remained on pause until DrupalCon New Orleans. By this time Drupal 8.1.1 was released and the Migrate module was slightly more mature. Our focus during the conference was to push forward the Commerce 1.x to Commerce 2.x migration path since there is a method to test it.

During DrupalCon, a few conference goers approached the booth with questions about migrating Ubercart D6 sites to Commerce 2.x. As mentioned previously, creativepragmatic wrote the initial code. Until there is a sanitized sample dataset, we cannot fully work on the Ubercart migrations or guarantee them. (If you have data you would like to contribute, please contact me!)

Headway was made, however, on the Commerce 1.x migration front. Tests have been updated to the kernel test format, following a change in Migrate. These tests are now passing for billing profile, line item, product (variations in 2.x), and product type entities. A process plugin to handle migrating Commerce Price fields from 1.x to 2.x was added and is running on product and line item values. Other fields do not yet have a supported migration.

What is next?

The next steps are to provide a process plugin to migrate Addressfield field data to the field provided by Address. We also must create process plugins for each of the reference fields provided by Commerce 1.x: product, profile, and line item. With these items completed, orders can be migrated completely.
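To give a sense of the shape such a plugin takes, here is a minimal sketch of a Drupal 8 migrate process plugin. The plugin ID, namespace, and mapping logic are hypothetical, not the module's actual code; only the base class and the transform() signature come from core's Migrate API.

<?php

namespace Drupal\commerce_migrate\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Maps a Commerce 1.x reference field value to its 2.x counterpart.
 *
 * @MigrateProcessPlugin(
 *   id = "commerce1_reference"
 * )
 */
class Commerce1Reference extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // A real plugin would look up the ID the referenced entity received
    // during its own migration; returning the source value unchanged
    // keeps this sketch minimal.
    return $value;
  }

}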

The largest task will be the migration of Commerce 1.x product displays to Commerce 2.x product entities. This requires finding nodes that have a product reference field.

While migrating from an existing site and its data might not quite work yet, you can start using Migrate to import data. See the Commerce Demo module I am working on: https://github.com/mglaman/commerce_demo. It provides an example of importing a CSV that you might receive from an ERP/PIM, creating Drupal Commerce products, variations, and attributes.

Also, check out creativepragmatic's original work, which has some documentation on initial migration gotchas: https://github.com/creativepragmatic/commerce_migrate

Categories: FLOSS Project Planets

Import Python: ImportPython Issue 73

Planet Python - Thu, 2016-05-19 09:41

Worthy Read
150x Faster SQL Analysis
Get up to 150x faster queries with Periscope's caching system. Sponsor

Python 3: An Intro to Encryption (python3, encryption)
Python 3 doesn't have very much in its standard library that deals with encryption. Instead, you get hashing libraries. We'll take a brief look at those in the chapter, but the primary focus will be on the following 3rd party packages: PyCrypto and cryptography. We will learn how to encrypt and decrypt strings with both of these libraries.

How to Build an SMS Slack Bot in Python (bot)
Bots can be a super useful bridge between Slack channels and external applications. Let's code a simple Slack bot as a Python application that combines the Slack API with the Twilio SMS API so a user can send and receive Slack messages via SMS.

Decorated Concurrency - Python multiprocessing made really really easy
There's a new interesting wrapper on Python multiprocessing called deco, written by Alex Sherman and Peter Den Hartog, both at University of Wisconsin - Madison. It makes Python multiprocessing really really easy.

Powering the Python Package Index (pypi)
The Python Package Index, or as most call it, "PyPI", is a central part of the ecosystem of Python. It serves as a central registry of names, helping to prevent collisions between different projects, as well as the default repository that most Python users go to when looking for software. For most, what powers this service is largely opaque: it's (usually) there when they need it, and who or what powers it is largely a mystery. But what and who really powers PyPI?

Deploying your Django API in one Docker Container in AWS (django, docker)
Docker is a wonderful technology revolutionizing how to run microservices. You may have some experience with it or may not. I will show in this short post how to run a Docker container in Amazon Web Services (EC2) with your whole API in it. This trick is only useful for APIs or Django projects that don't need to scale and don't really have much traffic.

Scaling a startup from side project to 20 million hits/month - an interview with railwayapi.com creator Kaustubh
PythonAnywhere is a Python development and hosting environment that displays in your web browser and runs on our servers. They're already set up with everything you need. It's easy to use, fast, and powerful. There's even a useful free plan. In this interview Kaustubh talks about his experience of using PythonAnywhere.

Facebook Trending RSS Feed Fetcher (w/ Python 3.5)
A quickie Python 3.5 script that parses the PDF listing of RSS feeds that Facebook uses to monitor for breaking news stories to add to its Trending section.

Apache Spark 2.0: Introduction to Structured Streaming (machine learning)
Michael Armbrust and Tathagata Das explain updates to Spark version 2.0, demonstrating how stream processing is now more accessible with Spark SQL and DataFrame APIs. Video. Code snippets in Python.

List of Python tips :: help to improve (core python)
Simple code snippets, stuff we need in everyday coding. Worth a quick glance.

Pygrunn: Micropython, internet of pythonic things - Lars de Ridder (core python)
Micropython is a project that wants to bring Python to the world of microprocessors. It is a lean and fast implementation of Python 3 for microprocessors. It was funded in 2013 on Kickstarter. Originally it only ran on a special "pyboard", but it has now been ported to various other microprocessors. Why use Micropython? Easy to learn, with powerful features. Native bitwise operations. Ideal for rapid prototyping. (You cannot use CPython, mainly due to RAM usage.)

Import C++ files directly from Python!
Anyone looking to speed up critical regions of their script should have a look.

Python ASCII animation decorating for slow functions: this can be useful for someone?
Pretty cool decorator. Check out the gif screen-cast.

Pygrunn: Understanding PyPy and using it in production - Peter Odding/Bart Kroon (pypy)
PyPy is "the faster version of Python". There are actually quite a lot of Python implementations; CPython is the main one. There are also JIT compilers, and PyPy is one of them. It is by far the most mature. PyPy is a Python implementation, compliant with 2.7.10 and 3.2.5. And it is fast!

Get on Track w/ JIRA.
JIRA Software is the #1 software dev tool used by agile teams. Get started for free! Sponsor

Djangofriendly » Django host reviews and resources (django)
Djangofriendly is a community resource for finding the friendliest Django hosting environments.

Asterix - a component initialization system for python (core python)
Describe the initialization of your application and let asterix manage the startup for you. It will ensure that the correct dependencies are started in order, so you don't need any dirty hacks in your initialization flow. Also, it allows you to build separate stacks for test/dev/production and even for web/batch applications, loading just what you need.


Jobs

Python Developer at Panzer Solutions LLC
Seattle, WA, United States

Backend / Fullstack Engineers at Uber
San Francisco, CA, United States
The Driver Growth team at Uber is hiring. Our goal is to help our driver-partners across the world join the Uber platform efficiently and at scale, and create tools and products that help them maximize their earnings and ultimately drive long-term growth and loyalty.

Software Engineer (Python) | 5+ Years Experience | Hyderabad at finstop
Hyderabad, Telangana, India
We are looking for an experienced, highly motivated and resourceful software engineer to join the finstop team and help us build, integrate and deliver technology solutions, handle day-to-day maintenance, and manage development cycles for our financial platform.



Upcoming Conference / User Group Meet
PyCon Singapore 2016
GeoPython 2016
PyConTW 2016 in Taiwan
PyData Berlin 2016

Projects
Facebook-Message-Bot - 76 Stars, 10 Forks: Using Python Flask to develop for the Facebook Message Platform
expyre - 45 Stars, 2 Forks: A pythonic wrapper over `atd` to schedule deletion of files/directories.
TranscriptBot - 21 Stars, 0 Forks: Transcribe your meetings to Slack in real time
leather - 18 Stars, 0 Forks: Python charting for 80% of humans.
pyconfig - 15 Stars, 0 Forks: A tool for generating xcconfig files from a simple DSL
atom-python - 12 Stars, 0 Forks: ironSource.atom SDK for Python
2016_bop_semifinal - 9 Stars, 3 Forks: 2016 BOP semifinal python code
awesome-contributions - 8 Stars, 1 Fork: AWESOME GitHub Contributions viewer!
InstagramCrawler - 2 Stars, 0 Forks: Selenium based script to download photos.
AnsibleRest - 2 Stars, 0 Forks: Skeleton RESTful framework around Ansible
rate-limit - 1 Star, 0 Forks: Token bucket implementation for rate limiting (python recipe)
Categories: FLOSS Project Planets

Python Software Foundation: Brett Cannon wins Frank Willison Award

Planet Python - Thu, 2016-05-19 08:35

This morning at OSCON, O'Reilly Media gave Brett Cannon the Frank Willison Memorial Award. The award recognizes Cannon's contributions to CPython as a core developer and project manager for over a decade.
Beginning in 2002, the Frank Willison Memorial Award for Contributions to the Python Community is given annually to an outstanding contributor to the Python community. The award was established in memory of Frank Willison, a Python enthusiast and O'Reilly editor-in-chief, who died in July 2001. Tim O'Reilly wrote In Memory of Frank Willison, which includes a collection of quotes from Frank's insightful and witty writing. O'Reilly Media maintains an online archive of Frank Willison's column, "Frankly Speaking".
O'Reilly Media presents the Frank Willison Memorial Award annually at OSCON, the O'Reilly Open Source Convention. The recipient is chosen in consultation with Guido van Rossum and delegates of the Python Software Foundation. Contributions can encompass so much more than code. A successful software community requires time, dedication, communication, and education as well as elegant code. With the Frank Willison Memorial Award, we hoped to acknowledge all of those things.

— Tim O'Reilly

In the open source community, project management is an often underrated skill: given a problem to be solved, and a proposed solution for solving it, define the concrete steps necessary to get a group of volunteers from the point of saying "We should do something about this" to "We have solved that problem".
Brett Cannon has repeatedly volunteered to handle project management responsibilities that have significantly improved the CPython core development infrastructure, from migration to a dedicated bugs.python.org infrastructure, to the initial switch to a distributed version control system, to the current adoption of a more automated development workflow.

Brett Cannon

Since he began as a core developer in 2003, Brett has dedicated significant time to ensuring that the design, implementation, and development of essential parts of the CPython reference interpreter are accessible to new contributors. He wrote the first versions of the Python Developer's Guide and the design documentation for the CPython compiler. He converted the bulk of the import system's implementation from C to Python, created the "devinabox" project to make it easier for new contributors to get started at development sprints, wrote the "Python-dev Summaries" articles from 2002 to 2005, and has moderated the python-ideas mailing list since it began in December 2006.
Brett served on the PSF Board of Directors from 2006-2010 and again from 2013-2014, was PSF Vice President in 2006-2007, and Executive Vice President from 2007-2010. He is also a gracious ambassador for the Python development community. His thoughtful manner, genuine kindness, and sense of humor have inspired many at PyCons over the years. Whether helping a new contributor understand a code snippet at a sprint or encouraging a new speaker with his confidence in them, Brett shares his positive character with us.
Categories: FLOSS Project Planets

hypothesis.works articles: Announcing Hypothesis Legacy Support

Planet Python - Thu, 2016-05-19 08:00

For a brief period, Python 2.6 was supported in Hypothesis for Python. Because Python 2.6 has been end-of-lifed for some time, I decided this wasn't a priority, and support was dropped in Hypothesis 2.0.

I’ve now added it back, but under a more restrictive license.

If you want to use Hypothesis on Python 2.6, you can now do so by installing the hypothesislegacysupport package.
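Assuming the package is published on PyPI under that name, installation is the usual:

$ pip install hypothesislegacysupport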

Note that by default this is licensed under the GNU Affero General Public License 3.0. If you want to use it in commercial software you will likely want to buy a commercial license. Email us at licensing@hypothesis.works to discuss details.

Read more...
Categories: FLOSS Project Planets

Petter Reinholdtsen: I want the courts to be involved before the police can hijack a news site DNS domain (#domstolkontroll)

Planet Debian - Thu, 2016-05-19 08:00

I just donated to the NUUG defence fund to back the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone who agrees with me will do the same.

Would you be worried if you knew the police in your country could hijack the DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, bittorrent downloads and video playout, and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized the DNS domain popcorn-time.no (as in, they forced NORID to change the IP address it points to, to one controlled by the police) without any supervision from the courts. I did not know about the web site back then, and assumed the courts had been involved, and was very surprised when I discovered that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive, and only found news coverage about Popcorn Time, not any material published without the rights holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online, ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and now they are rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfer for those that want to support the request.

If you, like me, believe news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

Categories: FLOSS Project Planets

PyCon: Childcare spots are still available for PyCon 2016!

Planet Python - Thu, 2016-05-19 07:09

A venue as exciting as the city of Montréal in 2014–15 and now Portland in 2016–17 often tempts attendees with children to want to go ahead and bring them along, turning what could have been simply a business trip into a full family vacation to a new city. Other attendees are in circumstances that make it impossible to leave their children at home, threatening to rule out PyCon entirely unless children can be accommodated.

For both of these reasons, PyCon is proud to be offering childcare again for Portland 2016 — our third year of being able to offer this service to parents who are attending the conference.

And we are especially grateful to our 2016 Childcare Sponsors: Facebook and Instagram!

Without the generous support of these Childcare Sponsors, parents would be facing a bill four times greater than the $50 per child per day that we are able to offer this year. By providing this generous subsidy, Facebook and Instagram are working to make the conference possible for parents who might otherwise not have been able to consider it.

Visit our Childcare page to learn more:

https://us.pycon.org/2016/childcare/

Several spots are still open — so if childcare could make your PyCon visit even better, there is still time to sign up!

Categories: FLOSS Project Planets

Blair Wadman: Create a Drupal 8 module using the Drupal Console

Planet Drupal - Thu, 2016-05-19 05:00

Developing custom modules in Drupal 8 is a daunting prospect for many. Whether you're still learning Drupal 7 module development or are more experienced, Drupal 8 represents a significant shift in the underlying architecture and the way modules are constructed.  

Categories: FLOSS Project Planets

Larry Garfield: HTML Application or Network Application?

Planet Drupal - Thu, 2016-05-19 02:45

There has been much discussion in the last few years of "web apps". Most of the discussion centers around whether "web apps" that do not degrade gracefully, use progressive enhancement, have bookmarkable pages, use semantic tags, and so forth are "Doing It Wrong(tm)", or if JavaScript is sufficiently prevalent that a JavaScript-dependent site/app is reasonable.

What I fear is all too often missing from these discussions is that there isn't one type of "web app". Just because two "things" use HTTP doesn't mean they're conceptually even remotely the same thing.

read more

Categories: FLOSS Project Planets

Drupal.org blog: The Drupal.org Complexity

Planet Drupal - Wed, 2016-05-18 22:06

At DrupalCon New Orleans, during both Dries's keynote and the State of Drupal Core Conversation, the question of whether/when to move to Github came up again.

Interestingly, I was already deep into researching ways we could reduce the cost of our collaboration tools while bringing in new contributors. This post is meant to serve as a little history of how we got to where we are and to provide information about how we might choose to go forward.

It's complex

To say Drupal.org is complex is an understatement. There are few systems with more integration points than Drupal.org and its related sites and services.

The ecosystem is complex with lots of services that share integrations like login (Bakery single sign on) and cross-site code/themes.

It all starts with the code

The slogan "come for the code stay for the community" is accurate. The community would not exist without the unifying effort of collaborating to create the code. The growth of the community is primarily because of the utility the code provides and the (relative) ease of creating a wide range of websites and applications using Drupal core combined with contributed modules and themes that allow that framework to be extended.

Drupal.org was an extension of the development of Drupal for a very long time. Up until Drupal 6, Drupal.org was always upgraded the day of the release of a new version. When Drupal was smaller with a more limited scope, this made a lot of sense. With the release of Drupal 6 and the surge in usage of Drupal, more and more contributors started working on the Drupal.org infrastructure and creating new sites and services to speed the collaborative work of the community.

One of the biggest transitions during the Drupal 6 lifecycle and community surge was The Great Git Migration. Much of the complexity of Drupal.org and the related sites and services was created during this time period. Understanding that timeline will help in understanding just how much work went into Drupal.org and Drupal at that time.

The Great Git Migration

In the Great Git Migration, all of the history of Drupal code was migrated to Git from CVS. The timeline for migrating to Git was about what you would expect: community conversation took time, getting volunteers to start the process took time, and finally there was a phase where dedicated (paid) resources were contracted to finish the work.

Our repos are vast

We have over 35,000 projects that total over 50 GB on disk.

All of the Git repos on Drupal.org are associated with projects (e.g. modules, themes, distributions, etc.).

We have issues

At the end of 2015, there were nearly 900,000 issues on Drupal.org. Drupal core alone has over 74,000 issues—over 14,000 of those issues are open. The number of open issues is not an indicator of code quality, but it is an indicator of how many people have contributed to a project. In general, the more issues a project has, the more challenging it is for maintainers to continuously triage those bug reports, feature requests, plans, tasks and support requests.

The issue queues are part project management, part bug tracking. As such, they are organic and messy and have lots of rules that have been documented over years of community development. We have 23 pages of documentation dedicated to explaining how to use the issue queues. There is additional documentation dedicated to how to properly fill out an issue for core, for Drupal.org, and for numerous other contributed projects.

Issues are integrated into collaboration on Drupal.org

Those issues belong to projects and are connected to the Git repos through hooks that show a system comment when an issue is related by node ID (issue number) to a commit in Git.
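For example, Drupal's long-standing commit message convention embeds the issue's node ID, which is what those hooks key off of. The issue number and names here are made up:

git commit -m "Issue #1234567 by alice, bob: Fix the problem the issue describes."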

Issues can have patches uploaded to them that are the primary means of suggesting a change to code hosted on Drupal.org. The patch-based workflow has extensive documentation, but it is not a simple task for a novice user to jump in and start contributing.

Most Git hosting solutions (Github, Gitlab, Bitbucket, etc.) either have some version of an issue or at least integrate with an issue tracking system (Jira, Pivotal Tracker, etc.) and provide pull request functionality (a.k.a. merge requests).

Having the same name is where the similarities and consistencies stop. Issues on Drupal.org have status, priority, category, component, tagging and more that are unique to Drupal project workflow. It would be a significant exercise to remap all of those categorizations to a new system.

Packaging

If the projects are what you can browse and find, and the issues are how you collaborate and change the code, the next most important service for Drupal is likely the packaging system.

Packaging is based on project maintainers creating a release of the code by associating a branch of the Git repository with the release. Every 5 minutes, our automation infrastructure checks for new releases and will package those releases into a downloadable file to represent the project.

Few developers actually access this directly from the project page anymore. They are much more likely to use Git, Drush, Console or Composer to automate that part of the workflow. Drush, and to some extent Composer, both use the packaged files when a command is issued. Also, the Drupal convention of simply putting the code in the correct directory and having it run—with no compiling—is fundamental to the history of Drupal site building.

Updates

Another crucial Drupal service, updates is built into how Drupal core checks whether it is up to date.

The 1.3 million plus websites that call home to updates.drupal.org get back XML that is then parsed by that installation's update status module; that updates module has different names depending on the version of Drupal. Each month, about 12 terabytes of our CDN traffic is requests for updates XML. Considering this is a bunch of text files, that is an amazing number. Some sites call home once a week, some once a day, and some do it every few minutes. (Really people! Be nice to your free updates service. Telling your server to ask for updates daily is plenty frequent enough.)
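The exchange itself is plain HTTP: a site requests the release history for a project and core version, and parses the XML that comes back. Roughly like the sketch below, where the URL pattern is real but the response fragment is abbreviated from memory and should be treated as an assumption:

$ curl https://updates.drupal.org/release-history/drupal/8.x

<project>
  <short_name>drupal</short_name>
  <releases>
    <release>
      <version>8.1.1</version>
      ...
    </release>
  </releases>
</project>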

Tallying the unique site keys that request this information is how we get our usage statistics. While this is probably not the most precise way to measure our usage, it is directionally accurate. There are plenty of development sites in those stats and plenty of production websites that don't call home. It should roughly balance out. To be any more precise, users of Drupal would have to give up some privacy. We've balanced accuracy with privacy.

Because of our awesome CDN (thanks, Fastly!), we are able to deliver up to date packages and updates information in milliseconds after we update the underlying data.

Composer

On May 3rd, we launched the alpha version of our Composer endpoints on Drupal.org. If you don't know about Composer, you should read up on it. Composer is package management for PHP. (It's similar to what NPM does for Node.js or RubyGems does for Ruby.)

Core developers have been using Composer for some time as a means to manage the dependencies of PHP libraries that are now included in core.

The Composer endpoints allow any Drupal site developer to use composer install to build out their websites.

The new Composer service will also allow contrib project maintainers to use composer.json files to define the requirements for their modules and themes. The service even translates project release versions into semantic versioning. Semantic versioning was the biggest reason we could not "just" use Packagist.org like other projects in the PHP community.
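In practice, pointing a site at the endpoint looks something like this. The URL is the one announced for the alpha; treat the exact path and the example project name as assumptions:

$ composer config repositories.drupal composer https://packages.drupal.org/8
$ composer require drupal/address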

This is all a huge benefit, but more importantly, we now have deep integration between a best practice approach to PHP dependency management and the Drupal.org code repos that can scale to our community needs.

Testing with DrupalCI

Speaking of needs, DrupalCI ran 67,000 test runs in January 2016. Each Drupal core test run includes 18,511 tests. That means over 100,000 assertions (steps) in the unit and functional tests that make sure Drupal's code is stable and that an accepted patch does not create a regression.

At the time of this post, we are using Amazon Web Services cc2.8xlarge EC2 spot instances for our testbots. These bots are powerful. They have 2 processors with 8 hardware cores. AWS claims they can provide 88 EC2 compute units per instance. They are packed with processing power because we have a lot of tests to run. While there are bigger instances, the combination of price and power allows us to keep Drupal core complete test runs right around 30 minutes. We autoscale up to 20 of these instances depending on demand, which keeps queue times low and allows maintainers to get quick feedback about whether a patch works or not.

I truly believe that getting DrupalCI up and stable is what allowed Drupal 8 to get to a full release last fall. Without it, we would have continued to struggle with test times that were well over an hour and a system that required surplus testbots to be manually spun up when a big sprint was happening. That was costly and a huge time waste.

If anyone asks me "what's the most important thing your team did in 2015", I can unequivocally say "unblocking core development to get Drupal 8 released."

Issue credits

The second most important service we built in 2015—but certainly the more visible—is a system for crediting users and organizations that contribute on Drupal.org.

Issue credits sprang forth from an idea that Dries proposed around DrupalCon Austin in June of 2014. At the time, his intent was a means of structuring commit messages or using commit notes to provide the credit. Eventually, we shifted the implementation to focus on participation in issues rather than code commits. This made it possible to credit collaboration that did not result in a code change.

I won't get into the specifics; I wrote A guide to issue credits and the Drupal.org marketplace earlier this year. Issue credits have been extremely successful.

As their name implies, we store the data about credits as a relationship to closed issues. Issue credits touch issues, projects, users, organizations and the marketplace on Drupal.org.

Why not just migrate all of this complexity to Github?

Why can't we just move all this to Github?

— said lots of people, often

To be fair, this is a challenging discussion. Angie Byron (webchick) wrote an amazingly concise summary of the Github issue on Groups.drupal.org.

That wiki/discussion/bikeshed was heated. The conversation lasted over two years. I started as CTO about 6 months into the conversation. Along with a couple of other themes, the Github move has been a constant background conversation that has defined much of my time leading the Drupal.org team.

How are these services connected?

To truly understand the problem of a migration of this scale, we have to look at how all of the major Drupal.org services are connected.

Each block in this diagram is a service. Each line is a point of integration between the services. Some of these services are on Drupal.org or subsites with thousands of lines of custom code defining the interactions. Other services are not built in Drupal and represent projects in Java (Jenkins) or Python (our Git daemon) with varying degrees of customization and configuration.

As the diagram suggests, it is truly a web of integrations. Pull one or more services out of this ecosystem and you have to either refactor a ton of code or remove a critical component of how the community collaborates and how our users build sites with Drupal.

It's kinda like a great big game of Jenga.

What would a migration to Github require?

Please believe me when I say that if it were "easy" or "simple", we would have either moved to Github or at least upgraded our Git collaboration with nifty new tools on our own infrastructure.

However, disrupting the development of Drupal 8 would have been devastating to the project. We were correct to collectively backlog this project.

So if we were to try this migration now, what would it take? First, you have to consider the services that Github would effectively replace.

Github replaces:

  • Git repositories
  • Issues
  • Patches (they would become pull requests)
  • Git viewing (and we'd get inline editing for quick fixes and onboarding)

That's four (4!) services that we would not have to maintain anymore. Awesome! Cost savings everywhere! Buy a boat!

Wait a second. You have 16 integration points that you need to refactor. Some of them would come with the new system. Issues, pull requests, repos and the viewer would all just work with huge improvements. That leaves us with 12 integration points that would require a ton of discovery and refactoring.

  1. Users - we have 100,000 Drupal.org users that are pretty engaged. (We have over 1 million user accounts—but that number is likely a little inflated by spam accounts.) Do we make them all get Github accounts? Do we integrate Github login to Drupal.org? Do we just link the accounts like Symfony does?
  2. Projects - Github is not a project browsing experience. Drupal.org is a canonical repository where the "one true project" lives for packaging and updates. At the very least, we have to integrate our projects with Github. Does that mean we have to keep a Git repo associated to the project that has hooks to pull in changes from Github?
  3. Testing - One of the less complex integration refactors would be getting DrupalCI integrated with pull requests. That effort would still be a months-long project.

And DrupalCI would be its own effort to migrate to another testing service because it is tailored to the issue queue workflow and tightly integrated with projects.

Those are just a few of the major integration points.

I have a personal goal to detail every single integration and get that documented somewhere on Drupal.org. I don't think that level of documentation will increase the ability for others to contribute to the Drupal.org infrastructure—though that would be a pleasant side effect. I do think it is necessary for us to continue to support and maintain our systems and ensure that all of the tribal knowledge from the Drupal.org team can be passed on.

What would it cost?

I have joked that it would take roughly 1 million dollars (USD) to complete a Github migration. (Cue Dr. Evil.) That is only partially meant in jest.

As anyone who has estimated a large project knows, there is a point of uncertainty that leads project owners to guess at what they are willing to pay for the project.

If we take the four biggest lifts in the Drupal project's history, what do we get?

  1. Drupal.org redesign - There were tens of people involved in the project, hundreds giving feedback. The timeline was about a year from start to implementation.
  2. The Great Git Migration - There were tens of people involved in the project. Far fewer users gave feedback, but the project took about two years from brainstorming to initial commit to the Git repos—with a few months of clean up after.
  3. Drupal.org upgrades to Drupal 7 - The project took about two years, with tens of people involved, and about 8 months of clean-up issues afterward.
  4. Drupal 8 - 5 years of development by over 3,000 contributors.

I don't think that anyone would argue that each of these projects would have been bid at well over $1 million. I would put a migration to Github at somewhere between the complexity of The Great Git Migration and Drupal 8.

In none of these cases did the Drupal Association actually spend $1 million USD in project dollars. However, in all of the projects, there was lengthy discussion followed by substantial volunteer contribution, and then a significant bit of paid work to finish the job. It's a pattern that makes sense and will likely repeat itself over and over.

Would it be worth it?

I'm going to go back to the summary on the Github discussion. There are reasons why both options seem to be the best possible option or the worst possible option.

Would a best practice workflow and toolset be worth the change? Absolutely. Github (or Gitlab) tools are easier for newcomers to learn. Further, because we are using PHP and Javascript libraries that are hosted on Github, we could get contributions from developers and designers that are involved in those projects and do not wish to have an account on Drupal.org.

The drawbacks are considerable. We cannot afford a full migration right now. Dries put it well at DrupalCon Los Angeles during core conversations. The Drupal Association is not a bag of money. With significant growth of revenue, there is a long term possibility of more paid developer resources, but not in the short term. It is too much to ask volunteers to give up a year of their life to run the project as a community initiative. That leads to burn out and frustration.

We should also consider whether the disruption to the current collaboration workflow will be worth it. I don't think so. Not if that disruption meant stalling the update of contrib projects that are critical to solidifying Drupal 8 adoption. (Though I could argue that much of this upgrade to Drupal 8 work is being performed on Github as some—perhaps many—developers prefer those tools.)

Is there a middle ground?

Drupal spends a lot of time getting to the middle ground. Many of the best innovations in Drupal come from getting to the middle ground—from reaching a general consensus and then allowing someone who has support and time to iron out the details.

So for the first step, we should add functionality to projects on Drupal.org that allow maintainers to shift their workflow to Github while still publishing their project on Drupal.org. This allows the canonical browsing of projects, the continued support of the security team, and most importantly the continued distribution of Drupal through Composer, release packaging and the updates system.

We have a solid way forward for these integrations as the requirements are narrow enough in scope to accomplish in a 4-6 month timeframe using dedicated resources. We would still need to figure out how to award an issue credit to someone that participated in an issue on Github. We might be able to institute commit credits that could be parsed into issue credits from the participation on Github, but it would not be as inclusive as the current model.

It would be important to phase in this new feature rather than make a wholesale change. Once that integration is in place, we could extend DrupalCI to test pull requests similar to how we currently test patches submitted to an issue.

Stay flexible

We need to be flexible. GitHub has a lot of potential as a tool for open source distribution and collaboration—likely for the foreseeable future. However, not every major project is on GitHub. The Linux kernel uses Git repositories with CLI tools and a patch-based workflow that relies heavily on email. It works for them. Wordpress is still on Subversion—even though they've started to accept some pull requests on GitHub. These projects are poised to make the right decision rather than a rash decision.

The sky will not fall if we keep our current model, but we are losing opportunities to grow as a community of contributors. Rather than a wholesale migration, we must understand the value and history of this web of integration points. Targeting our efforts on specific integrations points can achieve our goal of opening our doors to the developers who live and breathe GitHub, without losing the character of our collaboration. And in the long run, this focus on services and integrations can make us more adaptable to the next change in the broader development landscape.

Republished from joshuami.com

Categories: FLOSS Project Planets

Hynek Schlawack: Conditional Python Dependencies

Planet Python - Wed, 2016-05-18 20:00

Since the inception of wheels that install Python packages without executing arbitrary code, we need a static way to encode conditional dependencies for our packages. Thanks to PEP 508 we do have a blessed way but sadly the prevalence of old setuptools versions makes it a minefield to use.
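For example, PEP 508 environment markers let a requirement apply only on certain Python versions. The package names below are common illustrations, not a recommendation:

enum34; python_version < "3.4"
mock; python_version < "3.3"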

Categories: FLOSS Project Planets

Lullabot: Rebuilding POP in D8 - Development Environments

Planet Drupal - Wed, 2016-05-18 18:30

This is the second in a series of articles about building a website for a small non-profit using Drupal 8. These articles assume that the reader is already familiar with Drupal 7 development, and focuses on what is new / different in putting together a Drupal 8 site.

In the last article, I talked about Drupal 8's new block layout tools and how they are going to help us build the POP website without relying on external modules like Context. Having done some basic architectural research, it is now time to dive into real development. The first part of that, of course, is setting up our environments. There are quite a few new considerations in getting even this simple a setup in place for Drupal 8, so let's start digging into them.

My Setup

I wanted a pretty basic setup: a local development environment set up on my laptop, the code hosted on GitHub, and the ability to push updates to a dev server so that my partner Nicole could see them, make comments, and eventually begin entering new content into the site. This is going to be done using a pretty basic dev/stage/live setup, along with a QA tool we've built here at Lullabot called Tugboat. We'll be going into the details of workflow and deployment in the next article, but there is actually a bunch of new functionality in Drupal 8 surrounding environment and development settings. So what all do we need to know to get this going? Let's find out!

Local Settings

In past versions of Drupal, devs would often modify settings.php to include a localized version to store environment-specific information like database settings or API keys. This file does not get put into version control, but is instead created by hand in each environment to ensure that settings from one do not transfer to another inadvertently. In Drupal 8 this functionality is baked into core.

At the bottom of your settings.php are three commented-out lines:

# if (file_exists(__DIR__ . '/settings.local.php')) {
#   include __DIR__ . '/settings.local.php';
# }

If you uncomment these lines and place a file named settings.local.php in the same directory as your settings.php, Drupal will automatically see it and include it, along with whatever settings you put in it. Drupal core even ships with an example.settings.local.php that you can copy and use as your own. This example file includes several pre-configured settings that are helpful to know about.
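As a minimal sketch (the database name and credentials below are placeholders, not values from this project), a settings.local.php often contains little more than the database definition for that environment:

// Environment-specific database connection; every value here is a
// placeholder to be replaced per environment.
$databases['default']['default'] = array(
  'database' => 'pop_dev',
  'username' => 'dbuser',
  'password' => 'dbpass',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
);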

Caching

There are several caching-related settings in example.settings.local.php that are useful to know about. $settings['cache']['bins']['render'] controls which cache backend is used for the render cache, and $settings['cache']['bins']['dynamic_page_cache'] controls which cache backend is used for the page cache. Commented-out lines for both of these set the backend to cache.backend.null, a special cache backend that is equivalent to turning caching off for the bin in question.
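With the comment markers removed, those lines look like this; pointing both bins at the null backend effectively disables render and page caching while you develop:

$settings['cache']['bins']['render'] = 'cache.backend.null';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';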

The cache.backend.null cache backend is defined in the development.services.yml file, which is included by default in example.settings.local.php with this line:

$settings['container_yamls'][] = DRUPAL_ROOT . '/sites/development.services.yml';

If you want to disable caching as described above, you must leave this line uncommented. If you comment it out, you will get a big ugly error the next time you try to run a cache rebuild.

Drush error message when the null caching backend has not been enabled.

The development.services.yml file is actually itself a localized configuration file for a variety of other Drupal 8 settings. We'll circle back to this a bit later in the article.

Other Settings

example.settings.local.php also includes a variety of other settings that can help during development. One such setting is rebuild_access. Drupal 8 includes a file called rebuild.php, which you can access from a web browser to rebuild Drupal's caches in situations where the Drupal admin is otherwise inaccessible. Normally you need a special token to access rebuild.php; however, by setting $settings['rebuild_access'] = TRUE, you can access rebuild.php without a token in specific environments (like your laptop).

Another thing you can do is turn CSS and JavaScript preprocessing on or off, or show and hide testing modules and themes. It is worth taking the time to go through this file and see what is available to you, in addition to the usual things you would put in a local settings file, like your database information.
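Pulling a few of these together, a development-oriented settings.local.php might look something like the sketch below; each of these settings appears in Drupal core's example.settings.local.php:

// Allow rebuild.php to be used without a token in this environment.
$settings['rebuild_access'] = TRUE;

// Serve raw CSS and JavaScript instead of aggregated files.
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;

// Allow testing modules and themes to be discovered.
$settings['extension_discovery_scan_tests'] = TRUE;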

Trusted Hosts

One setting you'll want to configure that isn't pre-defined in example.settings.local.php is trusted_host_patterns. In earlier versions of Drupal, it was relatively easy for attackers to spoof your HTTP host in order to do things like rewrite the link in password reset emails, or poison the cache so that images and links pointed to a different domain. The trusted_host_patterns setting lets you specify exactly which hosts Drupal should respond to requests for. For the site www.example.com, you would set this up as follows:

$settings['trusted_host_patterns'] = array(
  '^www\.example\.com$',
);

If you want your site to respond to all subdomains of example.com, you would add an entry like so:

$settings['trusted_host_patterns'] = array(
  '^www\.example\.com$',
  '^.+\.example\.com$',
);

Additional patterns can be added to this array as needed. This is also something you'll want to configure per environment in settings.local.php, since each environment will have its own trusted hosts.
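For instance, a local environment reached only as localhost or through a hypothetical pop.local vhost (adjust the patterns to match your own hostnames) might carry:

// Hypothetical local-only host patterns for a development laptop.
$settings['trusted_host_patterns'] = array(
  '^localhost$',
  '^127\.0\.0\.1$',
  '^pop\.local$',
);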

Local Service Settings

When Drupal 8 started merging in components from Symfony, we introduced the concept of "services". A service is simply an object that performs a single piece of functionality that is global to your application. For instance, Symfony uses a Mailer service that is used globally to send email. Some other examples of services are Twig (for template management) and session handling.

Symfony uses a file called services.yml to manage configuration for services, and just as with settings.local.php, we can use a file called development.services.yml to manage our localized service configuration. As we saw above, this file is automatically included when we use Drupal 8's default local settings file. If you add it to your .gitignore, you can then use it for environment-specific configuration just as you do with settings.local.php.

The full range of configuration that can be managed through services.yml is well outside the scope of this article. The main item of interest from a development standpoint is Twig debugging. When you set debug: true in the twig.config portion of your services configuration file, your HTML output will have a great deal of debugging information added to it. You can see an example of this below:

Drupal page output including Twig debugging information.

Every template hook is outlined in the HTML output, so that you can easily determine where that portion of markup is coming from. This is extremely useful, especially for people who are new to Drupal theming. This does come with a cost in terms of performance, so it should not be turned on in production, but for development it is a vital tool.
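For reference, here is a sketch of a development.services.yml with Twig debugging enabled. The auto_reload and cache keys are optional companions to debug, and the cache.backend.null service is the one Drupal core ships in this file:

# Sketch of development.services.yml for local development.
parameters:
  twig.config:
    debug: true        # annotate HTML output with template information
    auto_reload: true  # recompile templates whenever the source changes
    cache: false       # skip the compiled-template cache entirely
services:
  cache.backend.null:
    class: Drupal\Core\Cache\NullBackendFactory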

Configuration Management

One of the major features of Drupal 8 is its new configuration management system. It allows configuration to be exported from one site and imported on another with the same ease as deploying any other code change. Drupal provides all installations with a sync directory, which is where configuration is exported to and imported from. By default this directory is located in Drupal's files directory, but that is not the best place for it, considering the sensitive data that can be stored in your configuration. Ideally you will want to store it outside of your webroot. For my installation I have set up a directory structure like this:

Sample Drupal 8 directory structure.

The Drupal installation lives inside docroot, util contains build scripts and other tools that are useful for deployments (more on this in the next article), and config/sync is where my configuration files will live. To make this work, you must change settings.php as follows:

$config_directories = array(
  CONFIG_SYNC_DIRECTORY => '../config/sync',
);

Note that this value will be the same across all of your environments, so you will want to set it in your main settings.php, not in settings.local.php.

Having done all this, we are now set up and ready to build our development workflow for pushing changes upstream and reviewing them as they are worked on. That will be the subject of our next article, so stay tuned!

Categories: FLOSS Project Planets

Stig Sandbeck Mathisen: Puppet 4 uploaded to Debian experimental

Planet Debian - Wed, 2016-05-18 18:00

I’ve uploaded puppet 4.4.2-1 to Debian experimental.

Please test with caution, and expect sharp corners. This is a new major version of Puppet in Debian, with many new features and potentially breaking changes, as well as a big rewrite of the .deb packaging. Bug reports for src:puppet are very welcome.

As previously described in #798636, the new package names are:

  • puppet (all the software)

  • puppet-agent (package containing just the init script and systemd unit for the puppet agent)

  • puppet-master (init script and systemd unit for starting a single master)

  • puppet-master-passenger (This package depends on apache2 and libapache2-mod-passenger, and configures a puppet master scaled for more than a handful of puppet agents)

Lots of hugs to the authors, keepers and maintainers of autopkgtest, debci, piuparts and ruby-serverspec for their software. They helped me figure out when I had reached “good enough for experimental”.

Some notes:

  • To use exported resources with puppet 4, you need a puppetdb installation and a relevant puppetdb-terminus package on your puppet master. These are not available in Debian, but are available from Puppet's repositories.

  • Syntax highlighting support for Emacs and Vim is no longer built from the puppet package. Standalone packages will be made.

  • The packaged puppet modules need an overhaul of their dependencies to install alongside this version of puppet. Testing would probably also be great to see if they actually work.

I sincerely hope someone finds this useful. :)

Categories: FLOSS Project Planets

Jonathan McDowell: First steps with the ATtiny45

Planet Debian - Wed, 2016-05-18 17:25

These days the phrase “embedded” usually means no console (except, if you’re lucky, a console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horribly kludged-together rootfs if someone hates you. Either way, it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8-bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready made USB relay board that is powered by an ATtiny45. First step was to figure out if there were suitable programming pins available, which turned out to be all brought out conveniently to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)
avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, and worked out that it was based on V-USB, with the only interesting extra bit being that the relay hangs off pin 3 on I/O port B. That led to me knocking up what I believe is a functionally equivalent version of the firmware, available locally or on GitHub. It has worked in my basic testing so far and has confirmed that I understand how the board is set up, meaning I can start to think about what else I could do with it…

Categories: FLOSS Project Planets