FLOSS Project Planets

GVSO Blog: [Social API] Creating a Social Auth implementer #1 - kicking off

Planet Drupal - Tue, 2016-07-19 16:00
[Social API] Creating a Social Auth implementer #1 - kicking off

In the last few months we have been working on the Social API project, which tries to harmonize social networking functionality in Drupal. The project is divided into four main components:

Tags: Drupal, Drupal Planet, Social API
Categories: FLOSS Project Planets

LevelTen Interactive: Get a Free Website Valuation at the ROW Roadshow!

Planet Drupal - Tue, 2016-07-19 14:55

By now, you may have heard that we’re putting on a show: the ROW Roadshow, to be exact!

ROW stands for Results Oriented Websites, and it means just what it says. We think that all of our clients – and everyone in the United States! – should have a meaningful web presence, with a website that produces real results and helps them grow their company.

The ROW Roadshow is our way of taking that

...Read more
Categories: FLOSS Project Planets

pythonwise: aenumerate - enumerate for async for

Planet Python - Tue, 2016-07-19 14:39
Python's new async/await syntax helps a lot with writing async code. Here's a little utility that provides the async equivalent of enumerate.
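Such a utility can be sketched in a few lines with an async generator. This is a hedged reconstruction, not necessarily the author's exact code, and it assumes Python 3.6+ for async generators (3.7+ for asyncio.run):

```python
import asyncio

async def aenumerate(aiterable, start=0):
    """Async counterpart of enumerate(): yields (index, item) pairs."""
    i = start
    async for item in aiterable:
        yield i, item
        i += 1

async def letters():
    # A toy async iterable standing in for real async work.
    for ch in "abc":
        yield ch

async def main():
    return [pair async for pair in aenumerate(letters())]

print(asyncio.run(main()))  # → [(0, 'a'), (1, 'b'), (2, 'c')]
```

The pattern mirrors the built-in enumerate, just swapping `for` for `async for`.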

Categories: FLOSS Project Planets

Control F'd: Python math errors with decimals (Part II)

Planet Python - Tue, 2016-07-19 14:25

Part 1 of this series on using floats in python can be found here

Categories: FLOSS Project Planets

pythonwise: Slap a --help on it

Planet Python - Tue, 2016-07-19 13:44
Sometimes we write "one off" scripts to deal with a certain task. However, more often than not these scripts live longer than just that one time. This is very common in ops-related code, which for some reason people don't apply the regular coding standards to.

It really upsets me when I try to see what a script is doing, run it with the --help flag, and it happily deletes the database while I wait :) It's so easy to add help support on the command line. In Python we do it with argparse, and we roll our own in bash. In both cases it's an extra 3 lines of code.

Please be kind to your future self and add --help support to your scripts.
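For the Python case, a minimal sketch of those few extra lines (the script's purpose and flag name are invented for illustration):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # These three lines are all it takes to get a useful --help screen.
    parser = argparse.ArgumentParser(description="Rotate the application logs.")
    parser.add_argument("--dry-run", action="store_true",
                        help="show what would be done without doing it")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print("dry run" if args.dry_run else "doing it for real")
```

Running the script with --help now prints the description and flag documentation instead of touching anything.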
Categories: FLOSS Project Planets

pythonwise: Removing String Columns from a DataFrame

Planet Python - Tue, 2016-07-19 13:43
Sometimes you want to work just with numerical columns in a pandas DataFrame. The rule of thumb is that everything that has a type of object is something not numeric (you can get fancier with numpy.issubdtype). We're going to use the DataFrame dtypes with some boolean indexing to accomplish this.

In [1]: import pandas as pd

In [2]: df = pd.DataFrame([
   ...:     [1, 2, 'a', 3],
   ...:     [4, 5, 'b', 6],
   ...:     [7, 8, 'c', 9],
   ...: ])

In [3]: df
   0  1  2  3
0  1  2  a  3
1  4  5  b  6
2  7  8  c  9

In [4]: df.dtypes
0     int64
1     int64
2    object
3     int64
dtype: object

In [5]: df[df.columns[df.dtypes != object]]
   0  1  3
0  1  2  3
1  4  5  6
2  7  8  9

Categories: FLOSS Project Planets

pythonwise: Work with AppEngine SDK in the REPL

Planet Python - Tue, 2016-07-19 13:42
Working again with AppEngine for Python. Here's a small code snippet that will let you work with your code in the REPL (much better than the previous solution).
What I do in IPython is:

In [1]: %run initgae.py

In [2]: %run app.py

And then I can work with my code and test things out.
Categories: FLOSS Project Planets

pythonwise: Testing numpy Code

Planet Python - Tue, 2016-07-19 13:10

Notebook here.

Categories: FLOSS Project Planets

Joey Hess: Re: Debugging over email

Planet Debian - Tue, 2016-07-19 12:57

Lars wrote about the remote debugging problem.

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this.

This is also something I've thought about on and off, that affects me most every day.

I've found that building the test suite into the program, such that users can run it at any time, is a great way to smoke out problems. If a user thinks they have problem A but the test suite explodes, or also turns up problems B C D, then I have much more than the user's problem report to go on. git annex test is a good example of this.

Asking users to provide a recipe to reproduce the bug is very helpful; I do it in the git-annex bug report template, and while not all users do, and users often provide a reproduction recipe that doesn't quite work, it's great in triage to be able to try a set of steps without thinking much and see if you can reproduce the bug. So I tend to look at such bug reports first, and solve them more quickly, which tends towards a virtuous cycle.

I've noticed that reams of debugging output, logs, test suite failures, etc can be useful once I'm well into tracking a problem down. But during triage, they make it harder to understand what the problem actually is. Information overload. Being able to reproduce the problem myself is far more valuable than this stuff.

I've noticed that once I am in a position to run some commands in the environment that has the problem, it seems to be much easier to solve it than when I'm trying to get the user to debug it remotely. This must be partly psychological?

Partly, I think that the feeling of being at a remove from the system, makes it harder to think of what to do. And then there are the times where the user pastes some output of running some commands and I mentally skip right over an important part of it. Because I didn't think to run one of the commands myself.

I wonder if it would be helpful to have a kind of ssh equivalent, where all commands get vetted by the remote user before being run on their system. (And the user can also see command output before it gets sent back, to NACK the sending of personal information.) So it looks and feels a lot like a mosh session to the user's computer (which need not have a public IP or an open ssh port at all), although one with a lot of lag and where rm -rf / doesn't go through.

Categories: FLOSS Project Planets

Wiki, what’s going on? (Part 7)

Planet KDE - Tue, 2016-07-19 12:14


Tears followed by joy and happiness, discussions followed by great moments all together, problems followed by their solution and enthusiasm. Am I talking about my family? More or less, because actually I am talking about a family: the WikiToLearn community!

This last period was full of ups and downs, but that is inevitable in such a project. We are a big family and we do have to face problems, but with willingness and devotion to what we are doing we can manage to overcome such problems and make things go as we want to – or, at least, try to do so.

We are putting our best efforts into what we are doing: our devs are working hard to release the 0.8 version, a first step toward our main goal – the 1.0; the promo team is now trying to start local hives (or groups) in different countries; editors have their summer plans to review content and to create new material; the importing group is ready, and very soon we will have more and more high-quality books available. Members of our family are working together to make WikiToLearn great and to give you (yes, you!) the best place we can to study and to create collaborative textbooks!

We are focused on September: with the beginning of the new academic year we have to fully exploit our potential and move toward #operation1000; moreover in September we are also celebrating our first birthday!

Guys, great things are coming: stay tuned!

The article Wiki, what’s going on? (Part 7) appeared first on Blogs from WikiToLearn.

Categories: FLOSS Project Planets

Cheeky Monkey Media: Behat with Drupal - Tutorial

Planet Drupal - Tue, 2016-07-19 12:08

On our first day as interns at Cheeky Monkey, we (Jared and Jordan) were given the task of exploring the somewhat uncharted waters of using Behat, an open source BDD (Behavior-driven development) testing framework, with Drupal 7.


Why BDD Testing?

We all know that testing is important, but why do we bother with “BDD” testing?

Behavior-driven development testing is exactly what it sounds like: testing the behavior of the site. This makes the tests very different from, say, a unit test.

Categories: FLOSS Project Planets

Acquia Developer Center Blog: Drupal 8 Tutorials for Beginners

Planet Drupal - Tue, 2016-07-19 11:22

Maybe you are already grokking Drupal 8's new configuration management system; maybe you've already absorbed D8's embrace of object-oriented code.

But that doesn't mean you should scoff at easy on-ramp introductions to Drupal 8. Solid overviews of Drupal 8 can be tremendously valuable when you're working on a team that includes non-technical members. They also come in handy when you are advocating in-house for D8 adoption.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Lars Wirzenius: Debugging over email

Planet Debian - Tue, 2016-07-19 11:08

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this. I've been thinking about this for a while, and talking with friends about it, and here's my current ideas.

First idea: have a script that gathers as much information as possible, which the user can run. For example, log files, full configuration, full environment, etc. The user would then mail the output to me. The information will need to be anonymised suitably so that no actual secrets are leaked. This would be similar to Debian's package specific reportbug scripts.

Second idea: make it less likely that the user needs help solving their issue, with better error messages. This would require error messages to have sufficient explanation that a user can solve their problem. That doesn't necessarily mean a lot of text, but also code that analyses the situation when the error happens to include things that are relevant for the problem resolving process, and giving error messages that are as specific as possible. Example: don't just fail saying "write error", but make the code find out why writing caused an error.
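As a hedged sketch of that second idea, here is a hypothetical helper that turns a bare OSError from a failed write into a specific, actionable message. All names here are invented; this is just one way the "find out why writing caused an error" analysis could look:

```python
import errno
import os

def explain_write_error(exc: OSError, path: str) -> str:
    """Turn a bare 'write error' into something the user can act on."""
    if exc.errno == errno.ENOSPC:
        return "cannot write %s: the disk is full" % path
    if exc.errno == errno.EACCES:
        parent = os.path.dirname(path) or "."
        return ("cannot write %s: permission denied "
                "(check ownership of %s)" % (path, parent))
    if exc.errno == errno.EROFS:
        return "cannot write %s: the filesystem is mounted read-only" % path
    # Fall back to the OS-provided description rather than a vague message.
    return "cannot write %s: %s" % (path, exc.strerror)
```

The point is that the error path does a little diagnosis of its own, so the user's first report already narrows the problem down.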

Third idea: in addition to better error messages, might provide diagnostics tools as well.

A friend suggested having a script that sets up a known good set of operations and verifies they work. This would establish a known-working baseline, or smoke test, so that we can rule things like "software isn't completely installed".

Do you have ideas? Mail me (liw@liw.fi) or tell me on identi.ca (@liw) or Twitter (@larswirzenius).

Categories: FLOSS Project Planets

KDE blowing out candles on FISL 17!

Planet KDE - Tue, 2016-07-19 10:45

Decorated booth.

Last week saw another edition of FISL, the International Free Software Forum, which has been held in the city of Porto Alegre, Rio Grande do Sul, Brazil since 2000. Our participation, as I had already announced here, was very special because we are celebrating 20 years of the KDE community this year. The birthday is only in October, but we could not pass up the opportunity to celebrate this date at an event as important as FISL, so we prepared a special program.

On the first day our mini-event took place: the Engrenagem (“Gear” in English), in which members of our community presented several talks on various topics related to KDE. The Engrenagem was opened by a talk from David Edmundson, one of the Plasma developers.


My talk was next. Its title was “20 anos de KDE: de Desktop a Guarda-Chuva de Projetos” (20 years of KDE: From Desktop to Project Umbrella). I presented the evolution of our community, which took it from a desktop project to an incubator community. For those who did not attend the event, the talk was recorded and is available here. Below I also make available the slides of my presentation:

In addition to our talks, we also prepared some surprises for the community of fans who showed up there. On the penultimate day of the event we had a special moment in which we decorated our booth with balloons and a few other things, topped off with a birthday cake and candles! ❤

Our cake!



Those who have not seen our talks can watch them here by searching for the term “Engrenagem”. All talks were recorded.

If you want to check out our photos from FISL, just visit our flickr.

Categories: FLOSS Project Planets

Django Weblog: DSF Code of Conduct committee releases transparent documentation

Planet Python - Tue, 2016-07-19 10:00

Almost exactly three years ago the Django community adopted a Code of Conduct; we were one of the first communities in the tech industry to do so. Since then, we have come a long way and learned a lot about how to enforce the Code of Conduct effectively.

Today we're proud to open source the documentation that describes how the Django Code of Conduct committee enforces our Code of Conduct. This documentation covers the structure of Code of Conduct committee membership, the process of handling Code of Conduct violations, our decision making process, record keeping, and transparency.

In addition, we're also publishing summarized statistics about Code of Conduct issues handled by the committee thus far. We're hoping this is just the beginning of making our work more transparent to the wider community.

We believe this documentation will help keep ourselves accountable to the Django community, as well as offer an insight into how decisions are made and issues are dealt with. We also hope that sharing our experiences is going to help other communities to not only adopt, but also implement and enforce the Code of Conduct.

The DSF Code of Conduct committee looks forward to your feedback and contributions!

Categories: FLOSS Project Planets

Machinalis: A day with mypy: Part 3

Planet Python - Tue, 2016-07-19 09:22

This is the third and final post in my trilogy about applying the python static typing tool mypy to a real world open source project (I chose pycodestyle as an example). The setup for this can be found at Part 1, and the details of my findings are in Part 2.

In this post I will answer, based on the results, the questions that I initially posed:

  1. Does it help me discover actual bugs?
  2. Does the process of adding types help making the code base more understandable?
  3. How mature is mypy itself? Is it usable right now, does it have a lot of bugs?
  4. Is the type system flexible enough to express the kind of actual dynamic tricks that developers like us use in actual, production python code?
  5. Does it feel practical/usable?
  6. What other things that I didn’t expect can be learned from the experience?
Discovery of actual bugs

Given that pycodestyle is a small, widely used and well-tested code base, I didn’t expect to find any serious problems in it. Unsurprisingly, I didn’t find any type bugs; however, I found many error-prone constructions (like functions that apparently returned a bool but sometimes actually returned an empty string or list instead of False) which might lead to hard-to-find problems. I also found some redundant or unused code, and some code with unnecessarily complicated control flow, where variables changed type back and forth, that would be really hard to refactor (mypy also helped me make sure the refactor was sound).
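A hypothetical miniature of that kind of construction (not actual pycodestyle code): a function whose callers treat its result as a bool but which originally returned the string itself. Once a `-> bool` annotation is declared, mypy reports something like `Incompatible return value type (got "str", expected "bool")` for the old version:

```python
def line_is_noqa(line: str) -> bool:
    # The error-prone original returned the stripped text itself, so
    # callers got '' (falsy) or a str (truthy) instead of a real bool;
    # returning an explicit bool is the refactor mypy pushes you toward.
    return line.rstrip().endswith("# noqa")
```

Truthiness makes both versions behave the same at runtime, which is exactly why the string-returning variant survives untested until something compares against True or False directly.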

My opinion is that the result was mildly positive in this situation (a stable project). Trying this on a code base while it is being actively modified or developed will probably yield more interesting conclusions, given that there are probably more type bugs to find.

Adding types to make the code base more understandable

The changes brought by mypy here were huge in several different ways.

First, my personal understanding (as an outsider to the code that had to learn it) grew very quickly and steadily throughout the process of adding annotations. I don’t think I would have achieved the level of knowledge that I did if I had spent the same time just looking at the code and/or making some quick diagrams on paper which is my usual way to approach this task.

Second, important aspects of the code design itself, like the call relations and the shape of some complicated data structures, surfaced up and turned into a very visible and explicit artifact that can help me (as the annotator) or other people (as consumers of the annotations) understand how pycodestyle works. This could be compared to the benefits of good code documentation (I could even use annotations without mypy) with the difference that mypy allows me to be sure that this specific kind of documentation is consistent and up to date, so I can fully trust it. My opinion is that the readability of the code grew in a huge amount.

Lastly, many specific details of the implementation that were complicated and hard to read were flagged as a problem by mypy, and the refactored version ended up being much cleaner and readable.

I think this is the single largest benefit of the static typing approach and the use of mypy.

The maturity of mypy

If you’ve read part two you’ll notice that I found a fairly large amount of “paper cuts” and usability issues. None of the problems I found were a big show stopper, but I can say that they slowed me down a bit, and there’s room for improvement both in user friendliness and stability.

Most, if not all, of the issues I found seem superficial and will probably be fixed in future versions, so even if usability and stability are concerns, I don’t feel they are something to be worried about.

Flexibility of the type system vs dynamic tricks

I found some problems here, but in general they were fewer than I expected. On the other hand, I found some issues in places where I hadn’t foreseen problems beforehand (mostly booleans and the semantics of short-circuit operators in Python).

But something that was really fresh for me was the gradual typing approach: I have worked in Python for a long time, and I’ve also used different kinds of statically typed languages (from C to Haskell, going through Eiffel), and the “feel” of the tool is a bit different from all of them. Some highly dynamic code (like the optparse argument parser, which has run-time configured attributes) was just not covered by the type specs; the typechecker knows that those parts are dynamically typed and does not complain. I artificially pushed to cover most of the pycodestyle code and found some minor issues (most of them easy to “silence” without much effort); in a real-world scenario I would probably have covered a bit less.

There are some features of the type system that are uncommon in other statically typed imperative languages, like Union types and some overload support (even if the language doesn’t support overloading at runtime), that made it easier to describe unusual cases. The implementation of type variables and generics makes the system quite expressive. In a language like Python there will always be scenarios where the system isn’t flexible enough, but I got more than enough flexibility to call it successful.

If I had to mention a weak point here it’s probably around callables and function signatures. Python has an extremely rich way of specifying function signatures (varargs, keyword args, open keyword args, keyword only, argument packing, optional arguments with defaults, ...) and being able to specify that a function matches a given signature is not always possible (although there are some proposals), and that would be useful for describing functional style APIs.
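That weak point can be illustrated with typing.Callable, which describes plain positional signatures well but (at the time of writing) cannot express keyword-only arguments, defaults, or *args/**kwargs shapes. The names below are invented for illustration:

```python
from typing import Callable

# Callable[[arg types], return type] covers simple positional signatures...
Check = Callable[[str, int], bool]

def run_check(check: Check, line: str, line_number: int) -> bool:
    return check(line, line_number)

def too_long(line: str, line_number: int) -> bool:
    return len(line) > 79

# ...but there is no way to write a Callable type that says "takes a
# keyword-only `verbose` flag with a default", which is the kind of rich
# signature Python functions routinely have.
```

So any API built around passing functions with defaulted or keyword arguments ends up typed more loosely than it deserves.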

Usability/applicability to real world use

The tool is working and producing useful results. It was very fast on the pycodestyle codebase, and generally fast when working with stub files. I made some quick experiments with larger codebases (the Django web framework) and it has a somewhat slow checking time when it starts following deep import chains in large projects.

Even if I found some usability issues that I have already mentioned, I got to be quite productive and managed to cover a lot of code with a reasonable amount of effort (even as a first time user of mypy and not knowing the code I was annotating beforehand). It is not the most polished piece of my software development toolkit but it definitely adds value right now and that will improve in the future.

Some supporting evidence of this are the reports from the developers of mypy (most of them working for Dropbox) which report using it in a very large code base with positive results (you can listen to this podcast from the mypy team for further details).

One large limiting factor may be the support of third party libraries (and completion/polish of the python stdlib stubs). My experience didn’t cover much of this because pycodestyle is built just on bare python, but I’m quite sure that the value of static typing is higher if the lower levels of your stack are annotated, and currently very few things besides the standard library support mypy. My guess is that mypy will be weaker when you’re just gluing together high level pieces of a framework (for an unannotated framework), and stronger when your code has a lot of programming and design of your own built on standard python or annotated code.

Other conclusions

Regarding third-party library support, one limitation of the current approach is that if you want to add support for a library, your options are:

  1. Create stub files. This can work, but there’s no way to type-check that the stub definitions are consistent with your method implementations, and since they live in separate files they are hard to keep in sync
  2. Add annotations to your code, which forces full code checking and is much slower (although there’s some work being done on incremental checking that should help here).

Another problem with adding annotations is that the nice syntax is the Python 3 one, but many library authors also want to support Python 2 for a few more years; so the only reasonable way is to add Python 2 style annotations (which are specially formatted comments). However, they look uglier, and converting them to Python 3 style will take extra effort a few years from now. Solving that could boost efforts to get more annotations into Python libraries.
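Those Python 2 style annotations look like this (a minimal invented example): the signature lives in a specially formatted comment, so the file still runs on Python 2 while mypy reads the types:

```python
from typing import List  # importable on both Python 2 and 3

def split_fields(line, sep):
    # type: (str, str) -> List[str]
    """The annotation is a comment, invisible to the interpreter."""
    return line.split(sep)
```

The runtime ignores the comment entirely; only the type checker parses it.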


Mypy is a useful tool for projects now, and its applicability will grow over time. There’s a lot of work ahead in terms of making it more stable, supporting more libraries, documenting it better, establishing conventions on how to use it, and making it easier to use and to integrate with the developer workflow. Having some official support (at least for the annotation language) from the Python project is a good guarantee that this work will eventually be there. But even without that, the value today is already positive. Applied to the parts of the code where a static typing style is most effective, it provides a significant boost in code readability and maintainability. Its gradual nature allows you to leave unchecked your most dynamic code, or code that depends on unsupported libraries, and still get the benefit on the rest of your codebase.

I’m looking forward to using it in future projects and seeing how mypy evolves, and I’m quite confident that with some time and community support this tool may turn into a standard piece of the Python development stack. There are a couple of efforts here at Machinalis to help support more of the Python libs we normally use (Django and web tools, and data science/machine learning tools).

Mypy is certainly something I’d recommend considering for every project, given the advantages it can add for your products or customers. And if you’re already using mypy, I’d love to hear what you’re applying it to!

Categories: FLOSS Project Planets

Zivtech: How to Use SQL-Dump and SCP

Planet Drupal - Tue, 2016-07-19 09:15

It’s been almost six months since I started at Zivtech as a Junior Developer, and I could never fit everything I’ve learned into one blog post. That said, one of the best things I’ve learned is how to compress my database and copy it across servers. These two commands are drush sql-dump and scp. If you’re unfamiliar with Drush, you can find some background information here.

I learned how to use drush sql-dump while using Probo.CI, which is our internally-developed continuous integration tool. Since we use Probo.CI on every project, I had to figure out how to set it up. Essentially, you have to upload your database to the Probo.CI app to ensure that your new feature will work on the site when you’re testing your pull request. Here is some helpful documentation. The fourth step in this documentation is:

Step 4: Use Drush to get and compress your database in a sql file.

If you wish to compress your database you’ll need to use gzip. Other zip files won’t work.

$ drush sql-dump | gzip > dev.sql.gz

To a new developer, this might look a little intimidating, but it’s just like it sounds; you’re dumping your database into a gzip file, dev.sql.gz, which will then be uploaded. In this example of using Probo, you’re uploading the gzip file with probo-uploader (a command line client for uploading files). Using Probo was a great entry point into learning drush sql-dump which, as you’ll see below, can be used in other capacities.

I’ve also had to use drush sql-dump alongside scp, which stands for secure copy. So I’ve dumped my database into a .gz file; great, but now what? Somehow I have to get the database to another environment or server. In the previous example, I didn’t need to copy the database to a server; everything could be done either locally or from my virtual machine. But there are times when you need to import a database into a particular environment. If you need to copy the database from a remote host to the local host:

$ scp your_username@remotehost.edu:foobar.txt /some/local/directory

If you need to copy it from the local host to a remote host:

$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Credit for the above examples goes to scott@hypexr.org. For more scenarios, go here.

Once you’ve copied the gzip file to the environment it needs to be in, you need to DROP your current database so you can import the new one you just copied. That’s where these commands come in:

$ drush sql-drop
$ drush sql-cli < ~/my-sql-dump-file-name.sql

For more information, here’s another great resource.

That’s all there is to it. You dumped the database you wanted into a gzip file, secure copied it to the environment, dropped the old database, and then imported the new one. That’s how it works!

Categories: FLOSS Project Planets

Dirk Eddelbuettel: Rcpp 0.12.6: Rolling on

Planet Debian - Tue, 2016-07-19 08:49

The sixth update in the 0.12.* series of Rcpp has arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.6 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, and the 0.12.5 release in May --- making it the tenth release at the steady bi-monthly release frequency. Just like the previous release, this one is once again more of a refining maintenance release which addresses small bugs, nuisances or documentation issues without adding any major new features. That said, some nice features (such as caching support for sourceCpp() and friends) were added.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 703 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by about forty packages from the last release in May!

Similar to the previous releases, we have contributions from first-time committers. Artem Klevtsov made na_omit run faster on vectors without NA values. Otherwise, we had many contributions from "regulars" like Kirill Mueller, James "coatless" Balamuta and Dan Dillon as well as from fellow Rcpp Core contributors. Some noteworthy highlights are encoding and string fixes, generally more robust builds, a new iterator-based approach for vectorized programming, the aforementioned caching for sourceCpp(), and several documentation enhancements. More details are below.

Changes in Rcpp version 0.12.6 (2016-07-18)
  • Changes in Rcpp API:

    • The long long data type is used only if it is available, to avoid compiler warnings (Kirill Müller in #488).

    • The compiler is made aware that stop() never returns, to improve code path analysis (Kirill Müller in #487 addressing issue #486).

    • String replacement was corrected (Qiang in #479 following mailing list bug report by Masaki Tsuda)

    • Allow for UTF-8 encoding in error messages via RCPP_USING_UTF8_ERROR_STRING macro (Qin Wenfeng in #493)

    • The R function Rf_warningcall is now provided as well (as usual without leading Rf_) (#497 fixing #495)

  • Changes in Rcpp Sugar:

    • Const-ness of min and max functions has been corrected. (Dan Dillon in PR #478 fixing issue #477).

    • Ambiguities for matrix/vector and scalar operations have been fixed (Dan Dillon in PR #476 fixing issue #475).

    • New algorithm header using iterator-based approach for vectorized functions (Dan in PR #481 revisiting PR #428 and addressing issue #426, with further work by Kirill in PR #488 and Nathan in #503 fixing issue #502).

    • The na_omit() function is now faster for vectors without NA values (Artem Klevtsov in PR #492)

  • Changes in Rcpp Attributes:

    • Add cacheDir argument to sourceCpp() to enable caching of shared libraries across R sessions (JJ in #504).

    • Code generation now deals correctly with packages containing a dot in their name (Qiang in #501 fixing #500).

  • Changes in Rcpp Documentation:

    • A section on default parameters was added to the Rcpp FAQ vignette (James Balamuta in #505 fixing #418).

    • The Rcpp-attributes vignette is now mentioned more prominently in question one of the Rcpp FAQ vignette.

    • The Rcpp Quick Reference vignette received a facelift with new sections on Rcpp attributes and plugins being added. (James Balamuta in #509 fixing #484).

    • The bib file was updated with respect to the recent JSS publication for RProtoBuf.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Mike Driscoll: Python 201: An Intro to mock

Planet Python - Tue, 2016-07-19 08:30

The unittest module now includes a mock submodule as of Python 3.3. It will allow you to replace portions of the system that you are testing with mock objects as well as make assertions about how they were used. A mock object is used for simulating system resources that aren’t available in your test environment. In other words, you will find times when you want to test some part of your code in isolation from the rest of it or you will need to test some code in isolation from outside services.

Note that if you have a version of Python prior to Python 3.3, you can download the Mock library and get the same functionality.

Let’s think about why you might want to use mock. One good example is if your application is tied to some kind of third party service, such as Twitter or Facebook. If your application’s test suite goes out and retweets a bunch of items or “likes” a bunch of posts every time it’s run, then that is probably undesirable behavior. Another example might be if you had designed a tool for making updates to your database tables easier. Each time the test runs, it would do the same updates on the same records and could wipe out valuable data.

Instead of doing any of those things, you can use unittest’s mock. It will allow you to mock and stub out those kinds of side-effects so you don’t have to worry about them. Instead of interacting with the third party resources, you will be running your test against a dummy API that matches those resources. The piece that you care about the most is that your application is calling the functions it’s supposed to. You probably don’t care as much if the API itself actually executes. Of course, there are times when you will want to do an end-to-end test that does actually execute the API, but those tests don’t need mocks!

Simple Examples

The Python Mock class can mimic any other Python class. This allows you to examine what methods were called on your mocked class and even what parameters were passed to them. Let’s start by looking at a couple of simple examples that demonstrate how to use the mock module:

>>> from unittest.mock import Mock
>>> my_mock = Mock()
>>> my_mock.__str__ = Mock(return_value='Mocking')
>>> str(my_mock)
'Mocking'

In this example, we import the Mock class from the unittest.mock module. Then we create an instance of the Mock class. Finally we set our mock object’s __str__ method, which is the magic method that controls what happens when you call Python’s str function on an object. In this case, we just return the string “Mocking”, which is what you see when we actually execute the str() function at the end.

The mock module also supports five asserts. Let’s take a look at a couple of those in action:

>>> from unittest.mock import Mock
>>> class TestClass():
...     pass
... 
>>> cls = TestClass()
>>> cls.method = Mock(return_value='mocking is fun')
>>> cls.method(1, 2, 3)
'mocking is fun'
>>> cls.method.assert_called_once_with(1, 2, 3)
>>> cls.method(1, 2, 3)
'mocking is fun'
>>> cls.method.assert_called_once_with(1, 2, 3)
Traceback (most recent call last):
  Python Shell, prompt 9, line 1
  File "/usr/local/lib/python3.5/unittest/mock.py", line 802, in assert_called_once_with
    raise AssertionError(msg)
builtins.AssertionError: Expected 'mock' to be called once. Called 2 times.
>>> cls.other_method = Mock(return_value='Something else')
>>> cls.other_method.assert_not_called()
>>>

First off, we do our import and create an empty class. Then we create an instance of the class and add a method that returns a string using the Mock class. Then we call the method with three integers as arguments. As you will note, this returned the string that we set earlier as the return value. Now we can test out an assert! So we call assert_called_once_with, which raises an error if the method has been called more than once. The first time we run the assert, it passes. So then we call the method again with the same arguments and run the assert a second time to see what happens.

As you can see, we got an AssertionError. To round out the example, we go ahead and create a second method that we don’t call at all and then assert that it wasn’t called via the assert_not_called assert.
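The other asserts work in much the same way. As a quick illustrative sketch (the logger mock and its calls here are made up), the call_count attribute and the assert_any_call method let you inspect a mock's full call history, not just its most recent call:

```python
from unittest.mock import Mock

# Hypothetical logger mock; the calls below are arbitrary examples
logger = Mock()
logger('starting')
logger('working', step=1)
logger('starting')

# call_count tracks the total number of calls made on the mock
print(logger.call_count)  # 3

# assert_any_call passes if *any* recorded call matched these arguments
logger.assert_any_call('working', step=1)

# assert_called_with only checks the most recent call
logger.assert_called_with('starting')
```

Because the mock records every call, these checks can be mixed freely with the asserts shown above.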

Side Effects

You can also create side effects on mock objects via the side_effect argument. A side effect is something that happens when you run your function. For example, some video games have integration with social media. When you score a certain number of points, win a trophy, complete a level or reach some other predetermined goal, the game will record it AND also post about it to Twitter, Facebook or whatever service it is integrated with. Another side effect of running a function is that it might be tied too closely to your user interface and cause it to redraw unnecessarily.

Since we know about these kinds of side effects up front, we can mock them in our code. Let’s look at a simple example:

from unittest.mock import Mock


def my_side_effect():
    print('Updating database!')


def main():
    mock = Mock(side_effect=my_side_effect)
    mock()


if __name__ == '__main__':
    main()

Here we create a function that pretends to update a database. Then in our main function, we create a mock object and give it a side effect. Finally we call our mock object. If you do this, you should see a message printed to stdout about the database being updated.

The Python documentation also points out that you can make side_effect raise an exception if you want to. One fairly common reason to raise an exception is that the function was called incorrectly. An example might be that you didn’t pass in enough arguments. You could also create a mock that raises a DeprecationWarning.
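For instance, side_effect can be set to an exception class or instance, and the mock will raise it whenever it is called. Here is a minimal sketch with a made-up update_db mock standing in for a database helper:

```python
from unittest.mock import Mock

# Hypothetical database helper that should fail when called:
# setting side_effect to an exception makes the mock raise it
update_db = Mock(side_effect=ConnectionError('database is down'))

try:
    update_db('records')
except ConnectionError as err:
    print(err)  # database is down
```

This is handy for testing that your error-handling code actually runs, without needing a real failing service.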


Autospeccing

The mock module also supports the concept of auto-speccing. The autospec allows you to create mock objects that contain the same attributes and methods as the objects that you are replacing with your mock. They will even have the same call signature as the real object! You can create an autospec with the create_autospec function or by passing the autospec argument to the mock library’s patch decorator, but we will postpone looking at patch until the next section.

For now, let’s look at an easy-to-understand example of the autospec:

>>> from unittest.mock import create_autospec
>>> def add(a, b):
...     return a + b
... 
>>> mocked_func = create_autospec(add, return_value=10)
>>> mocked_func(1, 2)
10
>>> mocked_func(1, 2, 3)
Traceback (most recent call last):
  Python Shell, prompt 5, line 1
  File "<string>", line 2, in add
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/mock.py", line 181, in checksig
    sig.bind(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/inspect.py", line 2921, in bind
    return args[0]._bind(args[1:], kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/inspect.py", line 2842, in _bind
    raise TypeError('too many positional arguments') from None
builtins.TypeError: too many positional arguments

In this example, we import the create_autospec function and then create a simple adding function. Next we use create_autospec() by passing it our add function and setting its return value to 10. As long as you call this new mocked version of add with two arguments, it will always return 10. However, if you call it with the incorrect number of arguments, you will receive an exception.
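create_autospec also works on classes. In this sketch (the Greeter class is invented for illustration), the spec’d mock only allows attributes that exist on the real class, and still checks method signatures:

```python
from unittest.mock import create_autospec

class Greeter:
    def greet(self, name):
        return 'Hello, ' + name

mock_greeter = create_autospec(Greeter)
instance = mock_greeter()          # "instantiating" returns a spec'd instance

instance.greet('World')            # fine: matches the real signature
instance.greet.assert_called_with('World')

try:
    instance.wave()                # no such method on the real class
except AttributeError as err:
    print(err)
```

This catches tests that drift out of sync with the real class, since a plain Mock would happily accept any attribute name.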

The patch

The mock module has a neat little function called patch that can be used as a function decorator, a class decorator or even a context manager. It allows you to easily swap out a class or object in the module that you want to test, since the target will be replaced by a mock for the duration of the test.

Let’s start out by creating a simple function for reading web pages. We will call it webreader.py. Here’s the code:

import urllib.request


def read_webpage(url):
    response = urllib.request.urlopen(url)
    return response.read()

This code is pretty self-explanatory. All it does is take a URL, open the page, read the HTML and return it. Now in our test environment we don’t want to get bogged down reading data from websites, especially if our application happens to be a web crawler that downloads gigabytes worth of data every day. Instead, we want to create a mocked version of Python’s urllib so that we can call the function above without actually downloading anything.

Let’s create a file named mock_webreader.py and save it in the same location as the code above. Then put the following code into it:

import webreader

from unittest.mock import patch


@patch('urllib.request.urlopen')
def dummy_reader(mock_obj):
    result = webreader.read_webpage('https://www.google.com/')
    mock_obj.assert_called_with('https://www.google.com/')
    print(result)


if __name__ == '__main__':
    dummy_reader()

Here we just import our previously created module and the patch function from the mock module. Then we create a function decorated with patch, which replaces urllib.request.urlopen. Inside the function, we call our webreader module’s read_webpage function with Google’s URL and print the result. If you run this code, you will see that instead of getting HTML for our result, we get a MagicMock object instead. This demonstrates the power of patch: we can now prevent the downloading of data while still calling the original function correctly.

The documentation points out that you can stack patch decorators just as you can with regular decorators. So if you have a really complex function that accesses databases or writes files or does pretty much anything else, you can add multiple patches to prevent side effects from happening.
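patch also works as a context manager, which is handy when you only want the patch active for part of a test. This sketch patches urlopen directly and gives it a canned return value (the fake HTML bytes are an assumption for illustration):

```python
import urllib.request
from unittest.mock import patch

with patch('urllib.request.urlopen') as mock_urlopen:
    # Configure the chained call: urlopen(...).read() returns canned bytes
    mock_urlopen.return_value.read.return_value = b'<html>fake</html>'

    response = urllib.request.urlopen('https://www.google.com/')
    print(response.read())  # b'<html>fake</html>'

    mock_urlopen.assert_called_once_with('https://www.google.com/')

# Once the with block exits, the real urlopen is restored automatically
```

The automatic cleanup is the main attraction here: even if the test fails inside the block, the original function is put back.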

Wrapping Up

The mock module is quite useful and very powerful. It also takes some time to learn how to use it properly and effectively. There are lots of examples in the Python documentation, although they are mostly simple examples with dummy classes. I think you will find this module useful for creating robust tests that can run quickly without unintentional side effects.

Related Reading


Chris Lamb: Python quirk: os.stat's return type

Planet Debian - Tue, 2016-07-19 06:20
import os
import stat

st = os.stat('/etc/fstab')

# __getitem__
x = st[stat.ST_MTIME]
print((x, type(x)))

# __getattr__
x = st.st_mtime
print((x, type(x)))

(1441565864, <class 'int'>)
(1441565864.3485234, <class 'float'>)
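The quirk above is that indexing the stat result with stat.ST_MTIME returns the timestamp truncated to an int, while the st_mtime attribute keeps the float. A small sketch (using a throwaway temp file rather than /etc/fstab, so it runs anywhere) makes the relationship explicit:

```python
import os
import stat
import tempfile

# Create a throwaway file so the example is self-contained
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)

    by_index = st[stat.ST_MTIME]   # int: whole seconds only
    by_attr = st.st_mtime          # float: seconds with fractional part

    # The indexed form is simply the attribute truncated to an int
    print(by_index == int(by_attr))  # True
```

So code that needs sub-second precision should always use the attribute form.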