KnackForge: How to update Drupal 8 core?

Planet Drupal - Sat, 2018-03-24 01:01

Let's see how to update your Drupal site between 8.x.x minor and patch versions. For example, from 8.1.2 to 8.1.3, or from 8.3.5 to 8.4.0. I hope this will help you.

  • If you are upgrading to Drupal version x.y.z

           x -> is known as the major version number

           y -> is known as the minor version number

           z -> is known as the patch version number.
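As an illustration of this numbering, a version string can be split into its parts (the helper below is hypothetical, for illustration only, and not part of Drupal):

```python
def parse_version(version):
    """Split an x.y.z version string into (major, minor, patch) integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

# 8.1.2 -> 8.1.3 is a patch-level update; 8.3.5 -> 8.4.0 bumps the minor version.
print(parse_version("8.1.3"))  # (8, 1, 3)
```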

Sat, 03/24/2018 - 10:31
Categories: FLOSS Project Planets

PreviousNext: Scrum Masters are only effective when they are co-located with their teams

Planet Drupal - Tue, 2017-08-22 00:18

Browsing through the interweb I happened across this bold statement a few weeks ago. A statement so bold, it inspired me to write a blog post in response.

by irma.kelly / 22 August 2017

Sure, Scrum Masters being co-located with their teams is the best and most favourable scenario for teams working on complex projects. But to go as far as to say that Scrum Masters are ONLY effective in this instance? Nope. Sorry, I have to graciously disagree.

Obviously there are different challenges that come with facilitating Agile ceremonies and interacting with the team remotely as opposed to face-to-face. A completely different approach is needed on my part to keep the team engine purring away.

Personally, the “different approach” I take with managing remote teams, as opposed to co-located teams, is to ensure uber transparency and over-communication in regards to all of the work the team currently has in flight. On my part this includes:

  • Ensuring that work in flight includes “Acceptance Criteria” and a “Definition of Done” agreed to by both the team and the client. This ensures that both the client and the team have an agreed vision of the product we are building. More importantly, it removes the need to make assumptions about a solution on both sides

  • The use of an online and up-to-date Kanban board that both the client and the team can freely access

  • Complete honesty with the client and the team in regards to all aspects of the project, especially during the trickier and stressful moments of project delivery. If something is starting to go pear shaped, call it out early - don’t hide it!

There are a plethora of tools now available that help enable remote collaboration. I thought it might be worthwhile sharing some of the tools that the teams at PNX use to make remote collaboration simpler.

Slack / Go To Meetings / Google Hangouts

With a large percentage of our internal staff located across Australia, these are PNX’s go-to tools for remote collaboration. We utilise both GoToMeeting and Google Hangouts (depending on individual client preferences) as tools to enable our daily stand-ups with our clients. Daily stand-ups and the ability to quickly ask a question via a Hangout or GoToMeeting have drastically reduced the amount of email correspondence between PNX and our clients. The result? A reduction in idle time, as questions can be answered relatively quickly instead of waiting for a reply via email.

Access to an online Kanban board

The ultimate in uber transparency. There is nothing more satisfying for an Agile Delivery Manager than to see tickets move to the right of the board. Likewise for our clients! Each ticket on the board details who the work is assigned to and the status of the task. At a glance, anyone with access to the project kanban board can see the status of work for a given sprint.

Google Sheets - My favourite go-to tool when it comes to interactive Agile ceremonies

The most common question I’m asked about working with remote teams is “how do you facilitate an Agile ceremony like a Retrospective with a remote team?” My favourite go-to tool for this is Google Sheets. Before each retro, I spend half an hour putting the retro board together on a Sheet. I try to mix it up every retro as well, using different retro techniques to keep things interesting. I mark defined spaces on the sheet where comments are to go, and I share the sheet with the team. Facilitating the Retrospective via a video conference (if possible), I timebox the retro using a timer app shared on my desktop. The team then fills in the Google Sheet in real time. The virtual equivalent of walking up to a physical board and placing a post-it up there! I have replaced all of the original text captured during the retro with lorem ipsum text. What's said in retro - stays in retro! We had a little fun with the below retro, as you can see!

For sensitive conversations - A video conference (or the phone)

The tools above are handy for enabling remote collaboration but for sensitive conversations with a colleague or client in a remote location, a video conference (where you can see each other) is a must. Sensitive conversations are fraught with danger via chat or email and a neutral tone is difficult to convey when we’re in the thick of things. If a video conference is not possible, though, simply pick up the phone.

I’d love to hear about some of the tools you use with your team to enable remote working. What are your recommended tools of choice?

Tagged Remote Working

Posted by irma.kelly
Agile Delivery Manager

Dated 22 August 2017

Categories: FLOSS Project Planets

Chapter Three: How to Prevent Duplicate Terms During a Drupal 8 Migration

Planet Drupal - Mon, 2017-08-21 23:26

In this post I will show a custom process plugin that I created to migrate taxonomy terms. The plugin handles the creation of new terms and prevents duplicates.

Below is a portion of the migration template. In the example, I am migrating new terms into the keywords vocabulary via the field_keywords field.

field_keywords:
  - plugin: existing_term
    # Destination (Drupal) vocabulary name
    vocabulary: keywords
    # Source query should return term name
    source: term_name
  - plugin: skip_on_empty
    method: row

This is the source code for the process plugin.

Categories: FLOSS Project Planets

Anwesha Das: The mistakes I made in my blog posts

Planet Python - Mon, 2017-08-21 23:18

Today we will be discussing the mistakes I made with my blog posts.
I started (seriously) writing blogs a year back. A few of my posts got a pretty nice response. The praise put me in seventh heaven; I thought I was a fairly good blogger. But after almost a year of writing, one day I chanced upon one of my older posts, and reading it sent me crashing down to earth.

There was a huge list of mistakes I had made:

The post was a perfect example of TL;DR. I previously used to judge a post by quantity: the larger the number of words, the better! (Typical lawyer mentality!)

The title and the lead paragraph were vague.

The sentences were long (far too long).

There were plenty of grammatical mistakes.

I lost the flow of thought and broke the logical chain in many places.

The measures I took to solve my problem

I was upset. I stopped writing for a month or so.
After the depressed, dispirited phase was over, I got back up, dusted myself off, and tried to find ways to become a better writer.

Talks, books, blogs:

I searched for talks, writings, and books on “how to write good blog posts” and started reading and watching videos. I tried to follow their advice while writing my posts.

Earlier I used to take a lot of time (a week) to write each post. I used to flit from sentence to sentence, so that I would not forget the latest idea or thought that popped into my head.
But that caused two major problems:

First, the long writing time meant long breaks. The intervals broke my chain of thought anyway; I had to start again from the beginning. That resulted in confused arguments and unrelated sentences.

Secondly, it made the posts hugely long.

Now I dedicate a limited time, a few hours, to each post, depending on the idea.
And I strictly adhere to those hours. I use Tomato Timer to keep a check on the time. During that time I do not go to my web browser, check my phone, or do any household activity, and of course I ignore my husband completely.
But one thing I have not been able to avoid is the “Mamma no working. Let's play” situation. :)
I focus on the sentence I am writing. I do not jump between sentences. I’ve made peace with the fear of losing one thought, and I do not disturb the one I am working on. This keeps my ideas clear.

To finish my work within the stipulated time:

  • I write during quieter hours, especially in the morning,
  • I plan what to write the day before,
  • I am caffeinated while writing.

Sometimes I cannot finish a post in one go. Then, before starting again the next day, I read aloud what I wrote previously.


Previously, after I finished writing, I used to correct only the red underlines. Now I take my time and follow four steps before publishing a post:

  • correct the underlined places,
  • check the grammar,
  • read the post aloud at least twice, which helps me hear my own words and correct my own mistakes,
  • have some friends check the post before publishing: an extra human eye to catch errors.
Respect the readers

This single piece of advice has changed my posts for the better.
Respect the reader.
Don’t give them any false hopes or expectations.

With that in mind, I have altered the following two things in my blog:

Vague titles

I always thought out of the box and figured that sarcastic titles would showcase my intelligence. An offhand, humorous title is good, I thought. How utterly wrong I was.

People search by asking relevant questions on the topic.
For example, for a hardware project with the esp8266 using micropython, people may search for:

  • “esp8266 projects”
  • “projects with micropython”
  • “fun hardware projects”

But no one will search for “mybunny uncle” (it might remind you of your kindly uncle, but definitely not a hardware project in any sense of the term).

People find your blog via its RSS feed or by searching in a search engine.
So be as direct as possible. Give a title that describes the core of the content. In the words of Cory Doctorow, write your headlines as if you were a wire service writer.

Vague Lead paragraph

The lead paragraph, the opening paragraph of your post, must explain what follows. Many times, the lead paragraph is the part shown in the search result.

Avoid conjunctions and past participles

I try not to use conjunctions, connecting clauses, or past participles. These make a sentence complicated to read.

Use simple words

I use simple, easy words instead of hard, heavy, huge ones. It was so difficult to make the lawyer inside me understand that “simple is better than complicated”.

The one thing which is still difficult for me is to let go; to accept the fact that not all of my posts will be great or even good.
There will be faults in them, which is fine.
Instead of pouring all my effort into making a single piece better, I move on and work on other topics.

Categories: FLOSS Project Planets

The Faces of Open Source: Mark Radcliffe

Open Source Initiative - Mon, 2017-08-21 22:52

In this fourth episode of Shane Martin Coughlan's "The Faces of Open Source Law," we continue our introductions to the vibrant open source community through discussions with some of its most active contributors.

Shane's series may focus on legal issues, but through his discussions you'll also find a wealth of information on broader topics in development, community and contributions. We're also very lucky to include in this series interviews with some of the folks who have helped the OSI grow to become the internationally recognized organization it is today. This week is no different, with an interview with the OSI's legal counsel.

As in previous episodes, Shane provides "production notes", offering his own insights from the interviews.


Mark is someone who is known to everyone. He has been involved in open source law since it became commercially viable, and he has insight into market adoption from startups through to multinationals. Mark has also been involved in helping NGOs such as the Open Source Initiative, but in our interview I wanted to stick closely to the commercial side of things, for the simple reason that his ability to express market concerns is second to none.

It is noteworthy that in shooting this interview I already had the germ of season two of Faces of Open Source Law in my mind: I was thinking it would use long-form interviews, and I wanted to have Mark as one of the interview subjects. This meant our season one interview is more closely focused than the others in the series, and in some ways it is biased towards setting the scene for a much longer conversational discussion in due course.

Other episodes:

Categories: FLOSS Research

Aigars Mahinovs: Debconf 17 photo retrospective

Planet Debian - Mon, 2017-08-21 14:49

Debconf17 has come and gone by too fast, so we all could use a moment looking back at all the fun and serious happenings of the main event in the Debian social calendar. You can find my full photo gallery on Google, Flickr and Debconf Share.

Make sure to check out the Debconf17 group photo and, as an extra special treat, enjoy the "living" Debconf17 group photo!


Categories: FLOSS Project Planets

Look what you have done^W^Wdo!

Planet KDE - Mon, 2017-08-21 14:11

You are using Kate or KDevelop and often directly editing the sources of Markdown files, Qt UI files, SVG files, Dot graph files, and whatever other formats are based on plain text files?

And your workflow for checking the current state is to save the file and (re)load it in a separate viewer application?

Could be better, right? Perhaps like this:

Nice mock-up. Or is it? Seems not! So, a KParts-based preview plugin for Kate & KDevelop coming soon to a well-stocked package repository near you? Read the whole story on where things are heading.

And a Markdown kpart? Never seen before, because it is new. It actually seems to be the week of Markdown in the KDE community; there is also a patch on review to add Markdown support to Okular. Sadly, Markdown support in Okular with the Okular kpart would not yet replace that separate Markdown kpart, because Okular does not support webview-style display (a single “reactive” page), misses support for data streaming without touching the filesystem, and has too much chrome in the kpart for simple usage. Hope is on the future and somebody (you?) improving that. I am doing my share of improving things with this preview plugin, so I am out.

Categories: FLOSS Project Planets

Acquia Developer Center Blog: GovHack 2017: Interacting with Government Data

Planet Drupal - Mon, 2017-08-21 11:26

GovHack is an annual hackathon that runs in Australia and New Zealand, where participants have 46 hours to create innovative new products using the open data published by government bodies. It started in 2007 with a single event held in Canberra, and has now grown to more than 40 locations and 3,000 participants.

Acquia was thrilled to provide support to GovHack for a 2nd year in 2017.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Chromatic: How To: Link to Dynamic Routes in Drupal 8

Planet Drupal - Mon, 2017-08-21 09:45

Properly linking to pages with dynamic routes can be tricky. Here's how to do it right.

Categories: FLOSS Project Planets

Doug Hellmann: smtpd — Sample Mail Servers — PyMOTW 3

Planet Python - Mon, 2017-08-21 09:00
The smtpd module includes classes for building simple mail transport protocol servers. It is the server side of the protocol used by smtplib. Read more… This post is part of the Python Module of the Week series for Python 3. See PyMOTW.com for more articles from the series.
Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Katherine Scott

Planet Python - Mon, 2017-08-21 08:30

This week we welcome Katherine Scott (@kscottz) as our PyDev of the Week! Katherine was the lead developer of the SimpleCV computer vision library and co-author of the SimpleCV O’Reilly book. You can check out Katherine’s open source projects over on Github. Let’s take a few moments to get to know her better!

Can you tell us a little about yourself (hobbies, education, etc):

A quick summary about me:

I am currently the image analytics team lead at Planet Labs. Planet is one of the largest satellite imaging companies in the world, and my team helps take Planet’s daily satellite imagery and turn it into actionable information. We currently image the entire planet every day at ~3m resolution, and not only do I get to see that data, but I also have the resources to apply my computer vision skills to our whole data set. On top of this I get to work on stuff in space! It goes without saying that I absolutely love my job. I am also on the board of the Open Source Hardware Association and I help put together the Open Hardware Summit.

Prior to working at Planet I co-founded two successful start-ups, Tempo Automation and SightMachine. Before founding those two start-ups I worked at a really awesome research and development company called Cybernet Systems. While I was at Cybernet I did computer vision, augmented reality, and robotics research.

I graduated from the University of Michigan in 2005 with dual degrees in computer engineering and electrical engineering. To put myself through school I worked as a research assistant with a couple of really awesome labs where I did research on MEMS neural prosthetics and the RHex Robot (a cousin to the Big Dog robot you may be familiar with). In 2010 I decided to go back to school to get my masters degree at Columbia University. I majored in computer science with a focus on computer vision and robotics. It was at the tail end of grad school that I got bit by the start-up bug and helped start Sight Machine.

My hobbies are currently constrained by my tiny apartment in San Francisco, but I like to build and make stuff (art, hardware, software, etc) in my spare time. I am also really into music so I go to a lot of live shows. As I’ve gotten older I’ve found that I need to exercise if I want to stay in front of a screen so I like to walk, bike, and do pilates. I am also the owner of three pet rats. I started keeping rats after working with them in the lab during college.

Why did you start using Python?

I was almost exclusively a C/C++/C# user for the first ten years I was an engineer. There was some Lua and Java mixed in here and there but I spent 90% of my time writing C++ from scratch. When I started at SightMachine I switched over to Python to help build a computer vision library called SimpleCV for the company. I fell in love almost immediately. Python allowed me to abstract away a lot of the compiler, linker, and memory management related tasks and focus more on computer vision algorithm development. The sheer volume of scientific and mathematical libraries was also a fantastic resource.

What other programming languages do you know and which is your favorite?

I’ve been a professional engineer now for twelve years, so I have basically seen it all and done it all. I’ve done non-trivial projects in C, C++, C#, Java, Javascript and Python, and I’ve dabbled in some of the more esoteric languages like lisp, lua, coffee script, and ocaml. Python is my favorite because the “batteries are included.” With so many libraries and packages out there it is like having a super power; if I can dream it up, I can code it.

What projects are you working on now?

My job keeps me very busy right now, but it is super rewarding, as I feel like we are giving everyone on Earth the ability to see the planet in real time. In April, Planet released a Kaggle competition that focuses on detecting illegal mining and deforestation in the Amazon. More recently, I wrapped up working on my latest PyCon talk and putting together the speaker list for Open Hardware Summit. With this stuff out of the way, I am starting a couple of new projects with some far-left activist groups in the Bay Area. We are trying to put together an activist hack-a-thon where we develop tools for Bay Area non-profits. The project I am going to focus on specifically is a tool to systematically mine and analyze the advertising content of hate speech websites in an effort to defund them. These projects are still in the planning stage, but I am hoping to have them up and running by late summer.

Which Python libraries are your favorite (core or 3rd party)?

The whole scientific python community is amazing and I am a huge fan of Project Jupyter. Given my line of work I use OpenCV, Keras, Scikit, Pandas, and Numpy on a daily basis. Now that I am doing GIS work I have been exploring that space quite a bit. Right now I am getting really familiar with GeoPandas, Shapely, GDAL’s python bindings, and libraries that provide interfaces to Open Street Maps, just to name a few. I also want to give a big shout out to the Robot Operating System and the Open Source Robotics Foundation.

Is there anything else you’d like to say?

I have a lot of things I could say but most of them would become a rant. I will say I try to make myself available over the internet, particularly to younger engineers just learning their craft. If you have questions about my field or software engineering in general, don’t hesitate to reach out.

Thanks for doing the interview!

Categories: FLOSS Project Planets

QtWebKit on FreeBSD

Planet KDE - Mon, 2017-08-21 07:36

Over at lwn.net, there is an article on the coming WebKitGTK apocalypse. Michael Catanzaro has pointed out the effects of WebKit’s stalled development processes before, in 2016.

So here’s the state of WebKit on FreeBSD, from the KDE-FreeBSD perspective (and a little bit about the GTK ports, too).

  • Qt4 WebKit is doubly unmaintained; Qt4 is past its use-before date, and its WebKit hasn’t been meaningfully updated in years. It is also, unfortunately, the WebKit used in the only officially available KDE ports, so (e.g.) rekonq on FreeBSD is the KDE4 version. Its age shows by, among other things, not even rendering a bunch of current KDE.org sites properly.
  • The Qt5 WebKit port has been updated, just this weekend, to the annulen fork. That’s the “only a year and a half behind” state of WebKit for Qt5. The port has been updated to the alpha2 release of annulen, and should be a drop-in replacement for all ports using WebKit from Qt5. There are only 34 ports using Qt5-webkit, most of them KDE Applications (e.g. Calligra, which was updated to the KF5 version some time back).
  • Webkit1-gtk2 and -gtk3 look like they are unmaintained.
  • Webkit2-gtk3 looks like it is maintained and was recently updated to 2.16.6 (the latest stable release).

So .. the situation is not particularly good, perhaps even grim for Qt4 / KDE4 users (similar to the situation for KDE4 users on Debian stable). The transition of KDE FreeBSD ports to Qt5 / Frameworks 5 / Plasma 5 and KDE Applications will improve things considerably, updating QtWebKit and providing QtWebEngine, both of which are far more up-to-date than what they are replacing.

Categories: FLOSS Project Planets

PyCharm: PyCharm 2017.2.2 RC is now available

Planet Python - Mon, 2017-08-21 07:09

Another set of bugs has been fixed, and PyCharm 2017.2.2 RC is now available from our Confluence page.

Improvements in this release:

  •  Code insight and inspection fixes: “method may be static” issues, and misidentification of Python 3.7
  • Django: Cache conflict with Jinja template project, and Ctrl+Click on widget templates
  • Docker: Docker Compose environment variable issues
  • JavaScript: go to declaration, go to implementation
  • And much more, check out the release notes for details

We’d like to thank our users who have reported these bugs for helping us to resolve them! If you find a bug, please let us know on YouTrack.

If you use Django, but don’t have PyCharm Professional Edition yet, you may be interested to learn about our Django Software Foundation promotion. You can get a 30% discount on PyCharm, and support the Django Software Foundation at the same time.

-PyCharm Team
The Drive to Develop

Categories: FLOSS Project Planets

Nuvole: Stable release for Config Split and Config Filter

Planet Drupal - Mon, 2017-08-21 04:39
Celebrating the 8.x-1.0 release of our configuration management helper modules.

One year ago we released the first public version of Config Split, with the goal of simplifying work with Drupal core's configuration management. The main motivation was to find a solution for having development modules in local environments while avoiding deploying their configuration. To achieve this in the cleanest way possible, we decided to interact with Drupal only during the configuration import and export operations, by altering what is read from and written to the \Drupal\Core\Config\StorageInterface.

We quickly realized that this is a powerful way to interact with how Drupal sees the configuration to import and so we split off the code that does the heavy lifting into its own module and now Config Ignore and Config Role Split use the same mechanism to do their thing.

Config Split now has documentation pages, which you are welcome to contribute to, and there will be a session at DrupalCon Vienna in which I will show how it can be used to manage a site's configuration in interesting ways.

If you were an early adopter (pre beta3) but have not updated to a recent version, you will need to install Config Filter first or patch core. The workaround and legacy code have now been removed, and the current code is going to be backwards compatible in the future. Also, if you use the APC class loader, make sure to clear the apc/apcu cache, or you might get an error after updating.

Tags: Drupal 8Drupal PlanetDrupalConConfiguration Management
Categories: FLOSS Project Planets

Building Qt on Debian

Planet KDE - Mon, 2017-08-21 03:03

I recently followed the advice of @sehurlburt to offer help to other developers. As I work with Qt and embedded Linux on a daily basis, I offered to help. (You should do the same!)

As it is easy to run out of words on Twitter, here comes a slightly more lengthy explanation of how I build the latest and greatest Qt for my Debian machine. Notice that there are easier ways to get Qt – you can install it from packages, or use the installer provided by The Qt Company. But if you want to build it yourself for whatever reason, this is how I do it.

First step is to get the build dependencies to your system. This might feel tricky, but apt-get can help you here. To get the dependencies for Qt 5, simply run sudo apt-get build-dep libqt5core5a and you are set.

Next step would be to get the Qt source tarball. You get it by going to https://www.qt.io/download/, select the open source version (unless you hold a commercial license) and then click the tiny View All Downloads link under the large Your download section. There you can find source packages for both Qt and Qt Creator.

Having downloaded and extracted the Qt tarball, enter the directory and configure the build. I usually do something like
./configure -prefix /home/e8johan/work/qt/5.9.0/inst -nomake examples -nomake tests. That builds everything but skips examples and tests (you can build these later if you want to). The prefix should point to someplace in your home directory. The prefix has had some peculiar behaviour earlier, so I try to make sure not to have a final dash after the path. When the configuration has run, you can look at the config.summary file (or a bit higher up in the console output) to see a nice summary of what you are about to build. If this list looks odd, you need to look into the dependencies manually. Once you are happy, simply run make. If you want to speed things up, use the -j option with the highest number you dare (usually the number of CPU cores plus one). This parallelizes the build.

Once the build is done (this takes a lot of time, expect at least 45 minutes with a decent machine), you need to install Qt. Run make install to do so. As you install Qt to someplace in your home directory, you do not need to use sudo.

The entry point to all of Qt is the qmake tool produced by your build (i.e. prefix/bin/qmake). If you run qmake -query you can see that it knows its version and installation point. This is why you cannot move a Qt installation around to random locations without hacking qmake. I tend to create a link (using ln -s) to this binary to somewhere in my path so that I can run qmake-5.9.0 or qmake-5.6.1 or whatnot to invoke a specific qmake version (caveat: when changing Qt version in a build tree, run qmake-version -recursive from the project root to change all Makefiles to the correct Qt version, otherwise you will get very “interesting” results).

Armed with this knowledge, we can go ahead and build QtCreator. It should be a matter of extracting the tarball, running the appropriate qmake in the root of the extracted code followed by make. QtCreator does not have to be installed, instead, just create a link to the qtcreator binary in the bin/ sub directory.

Running QtCreator, you can add Qt builds under Tools -> Options… -> Build & Run. Here, add a version by pointing at its qmake file, e.g. the qmake-5.9.0 link you just created. Then it is just a matter of picking Qt version for your project and build away.

Disclaimer! This is how I do things, but it might not be the recommended or even the right way to do it.

Categories: FLOSS Project Planets

Catalin George Festila: Using pip into shell to install and use pymunk.

Planet Python - Mon, 2017-08-21 02:38
The tutorial for today will show how to use pip from the shell to install a python package.
The first step is shown in the next image:
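The screenshots from the original post are not reproduced in this text version. As a rough sketch of the same idea (assuming pip ships with your interpreter, which is the usual case), pip can also be invoked through the Python executable itself; swapping "--version" for ["install", "pymunk"] would perform the actual install from the tutorial:

```python
import subprocess
import sys

# Invoke the interpreter's own pip (python -m pip) and capture its output.
completed = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(completed.stdout.strip())
```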

Categories: FLOSS Project Planets

Tomasz Früboes: Look ma, I made a browser game!

Planet Python - Mon, 2017-08-21 01:56

I’ve been using python for a while now. First, it was slightly forced on me as the configuration language of a large software framework (C++ based) of one of the largest physics experiments to date. Then I implemented a data analysis in python that was the basis of my Ph.D. dissertation. Finally, for the last couple of years, I have been using python for a wide range of activities that one would call “data science”.

A while ago I decided that trying something different would be essential for my mental hygiene. As a typical nerd, I didn’t go for rock climbing or piano lessons, but chose to apply python in a different manner. That is, to flirt with the web world.

The last time I tried web page programming, the netscape browser was still a thing, and in each HTML file created you had to include an animated gif of a digger saying “under construction”. As you can see, my webdev skills at that point were somewhat dated…

I knew that python offers a number of web frameworks, with different philosophies behind them. I decided to go for flask, as it is quite popular and seems to be more DIY than django. As an initial project I decided to implement a tic-tac-toe game, with the following goals:

  • AI  must be implemented server side to force AJAX use
  • As much UI as possible is done via CSS
  • App is flask based
A basic AJAX example

It turns out, the basic AJAX part is rather easy to implement in flask. The example below demonstrates a basic javascript (browser) – python (server) interaction using AJAX. The file structure of this simple experiment is the following:

├── basic
│   ├── __init__.py
│   ├── server.py
│   └── templates
│       ├── index.html
│       └── script.js
├── setup.py

The Python part (server.py) looks like the following:

import datetime
from flask import Flask, send_from_directory, jsonify

app = Flask(__name__)
app.config.from_object(__name__)

@app.route('/')
def index():
    return send_from_directory("templates", "index.html")

@app.route('/<string:name>')
def static_files(name):
    return send_from_directory("templates", name)

@app.route('/hello')
def hello():
    result = "Hey, I saw that! You clicked at {}".format(datetime.datetime.now())
    return jsonify(result=result)

Lines 4-5 are needed by Flask to create the application. The “@app.route” decorator is Flask's way of defining which functions should handle given URLs. The first two functions handle standard requests for web page files (e.g. index.html). As a response, predefined files from the templates directory are served (note: nothing is generated dynamically). The last function will be our AJAX handler. It simply returns a string with the current time and date, packed as JSON (JavaScript Object Notation).
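What jsonify produces can be mimicked with the standard library alone; the sketch below only illustrates the payload shape the client receives, not Flask's actual implementation:

```python
import datetime
import json

# Build the same payload the /hello handler returns. Flask's jsonify
# does essentially this, plus setting the Content-Type header.
result = "Hey, I saw that! You clicked at {}".format(datetime.datetime.now())
body = json.dumps({"result": result})

# On the client side, the body decodes back to an object with a single
# "result" key, which the javascript callback reads as data.result.
decoded = json.loads(body)
print(sorted(decoded.keys()))  # → ['result']
```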

The client (JavaScript) part is the following:

function receive_hello(data){
  $("#mytarget").html(data.result)
}

function send_hello(jqevent) {
  $.getJSON("/hello", {}, receive_hello)
}

$( document ).ready(function() {
  $( window ).click(send_hello);
});

There are a couple of things happening in the listing above:

  • first of all, note the “$” sign being used a number of times. It is the way to call the jQuery JavaScript library. Apart from providing lots of time-saving utility functions, jQuery shields you against subtle (and often back-stabbing) differences between JavaScript implementations in web browsers.
  • lines 9-11 translate to the following: “when the web page finishes rendering, register a callback (the send_hello function) to be called each time the web page is clicked inside the browser”
  • lines 5-7 define the mentioned send_hello function. Inside, the getJSON helper function from jQuery is used. The first argument defines the URL (note that most of the URL – the server address – is omitted here). The second argument is the (empty) list of call arguments. Finally, the last argument defines which function to call when the response from the server arrives.
  • lines 1-3 take whatever data the server happily sent us and put it on the web page. The ‘#mytarget’ string defines where to put the content (it is an identifier inside the HTML source)
Getting the code and running

The source code for the example above (and the not yet mentioned tic-tac-toe implementation) can be downloaded in the following way:

git clone https://github.com/fruboes/tictactoe.git
cd tictactoe/
virtualenv venv
source venv/bin/activate
pip install -e basic/
pip install -e tictactoe/

In order to run it, simply execute the following:

cd basic/
./start_basic.sh

You should see a link in the terminal. After opening it and clicking anywhere on the web page in the browser, you should see the current date and time printed in the window. Subsequent clicks should update the time.

Da game!

If you followed the setup instructions above, you have already downloaded the source code for the tic-tac-toe implementation. Since this was supposed to be an exercise in web development, the AI part was intentionally left stupid. If you care enough, you may want to experiment and improve it. I won’t go through the sources, since this is essentially an extended version of the basic example above.
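The repository's actual AI isn't shown here, but as a hypothetical starting point for experimenting with it, a server-side move picker could look like the sketch below (the board encoding and function name are assumptions, not the repo's API):

```python
import random

def pick_move(board):
    """Pick a random empty cell. The board is a list of 9 cells,
    with None marking an empty cell and 'X'/'O' a taken one."""
    empty = [i for i, cell in enumerate(board) if cell is None]
    if not empty:
        return None  # board is full, no move possible
    return random.choice(empty)

board = ['X', None, 'O',
         None, None, None,
         'O', 'X', None]
move = pick_move(board)
print(move in {1, 3, 4, 5, 8})  # → True
```

Replacing random.choice with, say, a minimax search over the same board representation would be a natural next step.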

In order to run the game, execute the tictactoe/start.sh script. Or you may use a running instance at http://tictactoe.pythonanywhere.com/ (as this is a free tier of pythonanywhere, I’ll keep it running for the next month).

Lessons learnt along the way

Basic web development with python is rather easy to start. If you are going to try it yourself, the following may be helpful:

  • Maybe surprisingly, only a basic understanding of HTML5, JavaScript and CSS is needed for such exercises. I found the book “A Software Engineer Learns HTML5, JavaScript and jQuery: A guide to standards-based web applications” quite helpful, as it contains a rather condensed introduction to the topic.
  • The first thing you should do after learning the basics of JavaScript is to learn the JavaScript debugger bundled with your browser. Playing a little with the rest of the web developer tools in your browser may also be beneficial; at least you’ll know what’s there to use if needed.
  • If you want to play with web development, learn at least the basics of jQuery (a JavaScript library). As I wrote above, jQuery provides lots of time-saving utility functions and shields you against subtle (and often back-stabbing) differences between JavaScript implementations in web browsers.
    • In order to learn jQuery basics, I took this free MOOC, courtesy of udacity.
  • Between creating the game described here and writing this post, I learned the basics of the Bootstrap framework. Using it to implement the tic-tac-toe grid would probably have been much simpler than implementing it with CSS.
    • To learn Bootstrap, I have also gone for another free MOOC, this time by Microsoft and edx.
Categories: FLOSS Project Planets

Codementor: Getting Started with Scraping in Python

Planet Python - Mon, 2017-08-21 01:13
A useful guide on how to get started with web scraping using Python.

Justin Mason: Links for 2017-08-20

Planet Apache - Sun, 2017-08-20 19:58

Simple is Better Than Complex: How to Use Celery and RabbitMQ with Django

Planet Python - Sun, 2017-08-20 15:54

Celery is an asynchronous task queue based on distributed message passing. Task queues are used as a strategy to distribute the workload between threads/machines. In this tutorial I will explain how to install and set up Celery + RabbitMQ to execute asynchronous tasks in a Django application.

To work with Celery, we also need to install RabbitMQ because Celery requires an external solution to send and receive messages. Those solutions are called message brokers. Currently, Celery supports RabbitMQ, Redis, and Amazon SQS as message broker solutions.
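Whichever broker you pick, the choice ultimately comes down to a single connection URL in your settings. The URLs below are illustrative defaults (host, port, and credentials are assumptions, not something the article prescribes); the scheme is what identifies the transport:

```python
from urllib.parse import urlparse

# Illustrative broker URLs for the three supported brokers:
RABBITMQ_URL = 'amqp://guest:guest@localhost:5672//'
REDIS_URL = 'redis://localhost:6379/0'

# The URL scheme tells Celery which transport to use.
print(urlparse(RABBITMQ_URL).scheme)  # → amqp
print(urlparse(REDIS_URL).scheme)     # → redis
```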

Why Should I Use Celery?

Web applications work with request and response cycles. When the user accesses a certain URL of your application, the Web browser sends a request to your server. Django receives this request and does something with it. Usually that involves executing queries in the database and processing data. While Django does its thing and processes the request, the user has to wait. When Django finishes processing the request, it sends back a response to the user, who finally sees something.

Ideally this request and response cycle should be fast, otherwise we would leave the user waiting for way too long. Even worse, our Web server can only serve a certain number of users at a time. So, if this process is slow, it limits the number of pages your application can serve at a time.

For the most part we can work around this issue using caching, optimizing database queries, and so on. But in some cases there’s no other option: the heavy work has to be done. A report page, an export of a big amount of data, or video/image processing are a few examples of cases where you may want to use Celery.

We don’t use Celery throughout the whole project, but only for specific tasks that are time-consuming. The idea here is to respond to the user as quickly as possible, pass the time-consuming tasks to the queue to be executed in the background, and always keep the server ready to respond to new requests.

Installing Celery
The easiest way to install Celery is using pip:

pip install Celery

Now we have to install RabbitMQ.

Installing RabbitMQ on Ubuntu 16.04

Installing it on a newer Ubuntu version is very straightforward:

apt-get install -y erlang
apt-get install rabbitmq-server

Then enable and start the RabbitMQ service:

systemctl enable rabbitmq-server
systemctl start rabbitmq-server

Check the status to make sure everything is running smooth:

systemctl status rabbitmq-server

Installing RabbitMQ on Mac

Homebrew is the most straightforward option:

brew install rabbitmq

The RabbitMQ scripts are installed into /usr/local/sbin. You can add this directory to your PATH in your .bash_profile or .profile.

vim ~/.bash_profile

Then add it to the bottom of the file:

export PATH=$PATH:/usr/local/sbin

Restart the terminal to make sure the changes are in effect.

Now you can start the RabbitMQ server using the following command:

rabbitmq-server
Installing RabbitMQ on Windows and Other OSs

Unfortunately I don’t have access to a Windows computer to try things out, but you can find the installation guide for Windows on RabbitMQ’s Website.

For other operating systems, check the Downloading and Installing RabbitMQ on their Website.

Celery Basic Setup

First, consider the following Django project named mysite with an app named core:

mysite/
|-- mysite/
|   |-- core/
|   |   |-- migrations/
|   |   |-- templates/
|   |   |-- apps.py
|   |   |-- models.py
|   |   +-- views.py
|   |-- templates/
|   |-- __init__.py
|   |-- settings.py
|   |-- urls.py
|   +-- wsgi.py
|-- manage.py
+-- requirements.txt

Add the CELERY_BROKER_URL configuration to the settings.py file:


CELERY_BROKER_URL = 'amqp://localhost'

Alongside with the settings.py and urls.py files, let’s create a new file named celery.py.


import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

Now edit the __init__.py file in the project root:


from .celery import app as celery_app

__all__ = ['celery_app']

This will make sure our Celery app is imported every time Django starts.

Creating Our First Celery Task

We can create a file named tasks.py inside a Django app and put all our Celery tasks into this file. The Celery app we created in the project root will collect all tasks defined across all Django apps listed in the INSTALLED_APPS configuration.

Just for testing purposes, let’s create a Celery task that generates a number of random User accounts.


import string

from django.contrib.auth.models import User
from django.utils.crypto import get_random_string

from celery import shared_task


@shared_task
def create_random_user_accounts(total):
    for i in range(total):
        username = 'user_{}'.format(get_random_string(10, string.ascii_letters))
        email = '{}@example.com'.format(username)
        password = get_random_string(50)
        User.objects.create_user(username=username, email=email, password=password)
    return '{} random users created with success!'.format(total)

The important bits here are:

from celery import shared_task

@shared_task
def name_of_your_function(optional_param):
    pass  # do something heavy

Then I defined a form and a view to process my Celery task:


from django import forms
from django.core.validators import MinValueValidator, MaxValueValidator


class GenerateRandomUserForm(forms.Form):
    total = forms.IntegerField(
        validators=[
            MinValueValidator(50),
            MaxValueValidator(500)
        ]
    )

This form expects a positive integer field between 50 and 500. It looks like this:

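Stripped of Django, the two validators amount to a simple range check. This hypothetical plain-Python helper (not part of the project) expresses the same rule:

```python
def validate_total(total):
    """Mirror MinValueValidator(50) / MaxValueValidator(500):
    accept only values from 50 to 500 inclusive."""
    if not 50 <= total <= 500:
        raise ValueError('total must be between 50 and 500, got {}'.format(total))
    return total

print(validate_total(500))  # → 500
```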
Then my view:


from django.contrib.auth.models import User
from django.contrib import messages
from django.views.generic.edit import FormView
from django.shortcuts import redirect

from .forms import GenerateRandomUserForm
from .tasks import create_random_user_accounts


class GenerateRandomUserView(FormView):
    template_name = 'core/generate_random_users.html'
    form_class = GenerateRandomUserForm

    def form_valid(self, form):
        total = form.cleaned_data.get('total')
        create_random_user_accounts.delay(total)
        messages.success(self.request, 'We are generating your random users! Wait a moment and refresh this page.')
        return redirect('users_list')

The important bit is here:

create_random_user_accounts.delay(total)
Instead of calling the create_random_user_accounts directly, I’m calling create_random_user_accounts.delay(). This way we are instructing Celery to execute this function in the background.

Django then keeps processing my view GenerateRandomUserView and returns smoothly to the user.

But before you try it, check the next section to learn how to start the Celery worker process.

Starting The Worker Process

Open a new terminal tab, and run the following command:

celery -A mysite worker -l info

Change mysite to the name of your project. The result is something like this:

Now we can test it. I submitted 500 in my form to create 500 random users.

The response is immediate:

Meanwhile, checking the Celery Worker Process:

[2017-08-20 19:11:17,485: INFO/MainProcess] Received task: mysite.core.tasks.create_random_user_accounts[8799cfbd-deae-41aa-afac-95ed4cc859b0]

Then after a few seconds, if we refresh the page, the users are there:

If we check the Celery Worker Process again, we can see it completed the execution:

[2017-08-20 19:11:45,721: INFO/ForkPoolWorker-2] Task mysite.core.tasks.create_random_user_accounts[8799cfbd-deae-41aa-afac-95ed4cc859b0] succeeded in 28.225658523035236s: '500 random users created with success!'

Managing The Worker Process in Production with Supervisord

If you are deploying your application to a VPS like DigitalOcean you will want to run the worker process in the background. In my tutorials I like to use Supervisord to manage the Gunicorn workers, so it’s usually a nice fit with Celery.

First install it (on Ubuntu):

sudo apt-get install supervisor

Then create a file named mysite-celery.conf in the folder /etc/supervisor/conf.d/:

[program:mysite-celery]
command=/home/mysite/bin/celery worker -A web --loglevel=INFO
directory=/home/mysite/mysite
user=nobody
numprocs=1
stdout_logfile=/home/mysite/logs/celery.log
stderr_logfile=/home/mysite/logs/celery.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

stopasgroup=true

; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000

In the example above, I’m assuming my Django project is inside a virtual environment. The path to my virtual environment is /home/mysite/.

Now reread the configuration and add the new process:

sudo supervisorctl reread
sudo supervisorctl update

If you are not familiar with deploying Django to a production server and working with Supervisord, maybe this part will make more sense if you check this post from the blog: How to Deploy a Django Application to Digital Ocean.

Further Reading

Those are the basic steps. I hope this helped you to get started with Celery. I will leave here a few useful references to keep learning about Celery:

And as usual, the code examples used in this tutorial are available on GitHub:


Referral Link

If you want to try this setup in a Ubuntu cloud server, you can use this referral link to get a $10 free credit from Digital Ocean.
