FLOSS Project Planets

Blue Drop Shop: I give you 600-plus Drupal talks. You’re Welcome!

Planet Drupal - Mon, 2018-11-19 14:38
Categories: FLOSS Project Planets

DrupalCon News: A First Look at DrupalCon Seattle’s Scholarship & Grant Recipients

Planet Drupal - Mon, 2018-11-19 13:06

We extend a hearty congratulations to our 10 scholarship and 12 grant recipients. A global team of community members was given the green light to award more funds than ever before, aiming to have a cross-section of contributors in attendance at DrupalCon Seattle 2019. As a result, we’re awarding aid to people from nearly every continent: six attendees from Europe, nine attendees from the Americas, two attendees from Africa, four attendees from South Asia, and one attendee from Australia.

Categories: FLOSS Project Planets

Appnovation Technologies: Acquia Engage Wrap-Up & Your Digital Strategy Roadmap for 2019

Planet Drupal - Mon, 2018-11-19 09:18
Acquia Engage Wrap-Up & Your Digital Strategy Roadmap for 2019 What an amazing week! Our team was in Austin for Acquia Engage, the digital transformation conference that brings together digital leaders and innovators to discuss the whys, whats and hows of delivering best-in-class customer experiences. This 3-day conference kicked off with a warm welcome reception, which provid...
Categories: FLOSS Project Planets

Getting Started With Qt for WebAssembly

Planet KDE - Mon, 2018-11-19 07:37

We’ve previously blogged about some of the features of Qt for WebAssembly. In this blog post we’ll take a look at how to get started: building Qt, building your application, and finally deploying the application.

If you would like to know more about this topic, then please join me for the Qt for WebAssembly webinar on November 27th.

Emscripten

The first step is installing Emscripten. Please refer to the Emscripten documentation for how to do so, and also note that Qt requires a Unix host system: GNU/Linux, macOS, or Windows Subsystem for Linux. When done you should have a working em++ compiler on the path:

$ em++ --version
emcc (Emscripten gcc/clang-like replacement) 1.38.16 (commit 7a0e27441eda6cb0e3f1210e6837cae4b080ab4c)

Qt for WebAssembly applications are also Emscripten-based applications. Qt makes use of many of its features, and application code can do so as well.

Qt

Next, install the Qt 5.12 sources, for example using the online installer.

Build Qt from source and specify that we are cross-compiling for wasm using emscripten:

$ ~/Qt/5.12.0/Src/configure -xplatform wasm-emscripten
$ make

This Qt build is different from standard desktop builds in two ways: it is a static build, and it does not support threads. Depending on how your application is structured and which features you use, this may pose a problem. One way to find out is to make a separate “-static -no-feature-thread” desktop Qt build, and then debug/fix any issues there. The reason this may be preferable is that the build-debug cycle is usually faster on desktop, and you have a working debugger.

Your Application

Finally, build the application. Qmake is the currently supported build system.

$ /path/to/qt-wasm/qtbase/bin/qmake
$ make

This will produce several output files:

Name          Producer    Purpose
app.html      Qt          HTML container
qtloader.js   Qt          JS API for loading Qt apps
app.js        emscripten  app runtime and JS API
app.wasm      emscripten  app binary

Here, app.wasm contains the majority (if not all) of the application and Qt code, while the .js files provide loading and run-time support.

The .html file provides the HTML page structure which contains the application as a <canvas> element. The default version of this file displays a Qt logo during the loading and compile stage and contains a simple page structure which makes the application use the entire browser viewport. You will probably want to replace this with application-specific branding and perhaps integrate it with existing HTML content.

For qtloader.js our intention is to have a public and stable API for loading Qt-based applications, but we are not there yet and the API in that file is subject to change.

The files are plain data files and can be served from any HTTP server; there is no requirement for any special active server component or plugin. Note that loading from the file system is not supported by current browsers. I use Python’s http.server for development, which works well.
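
The following is a minimal sketch of such a development server (the dev_server.py file name is my own choice, not part of the Qt tooling). Python releases before 3.7 do not know the application/wasm MIME type, which browsers need in order to compile the module via the streaming path, so the sketch registers it by hand:

# dev_server.py: serve the build output directory over HTTP for local testing.
import http.server
import socketserver

Handler = http.server.SimpleHTTPRequestHandler
# Python < 3.7 does not map .wasm to application/wasm; without the correct
# MIME type browsers fall back to slower, non-streaming compilation.
Handler.extensions_map['.wasm'] = 'application/wasm'

with socketserver.TCPServer(("", 8000), Handler) as httpd:
    print("Serving on http://localhost:8000/app.html")
    httpd.serve_forever()

Run it from the directory containing the qmake output and open the printed URL in a wasm-enabled browser.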

When deploying applications we do recommend using a server that supports compression. The following table gives an indication of the expected file sizes for the main .wasm file:

Qt Modules                     gzip    brotli
Core Gui                       2.8MB   2.1MB
Core Gui Widgets               4.3MB   3.2MB
Core Gui Widgets Quick Charts  8.6MB   6.3MB

gzip is a good default choice as a compressor and is supported by most web servers. brotli provides a nice compression gain and is supported by all wasm-enabled browsers.

Slate

The final result of all this should be your application running in a web browser, here represented by the Slate app created by my colleague Mitch. Slate is a Qt Quick Controls 2-based image editor.

A live version is available as well. If you’ve looked at this demo before, the news is that it should be actually usable now: local file access is possible and there are few or no visual glitches.

For those attending Qt World Summit in Berlin next month: I look forward to seeing you there. If you are not going, then let me link to the webinar once again.

The post Getting Started With Qt for WebAssembly appeared first on Qt Blog.

Categories: FLOSS Project Planets

Dx Experts: Drupal 8 and GitLab CI setup guide

Planet Drupal - Mon, 2018-11-19 05:14
GitLab CI for Drupal 8 websites. We use a simple approach that automatically applies any committed changes to your Drupal 8 website.
Categories: FLOSS Project Planets

OpenSense Labs: Drupal + JAMstack: A Paradigm Shift

Planet Drupal - Mon, 2018-11-19 01:48

Delicious. Sweet. Yummy. You may taste the jam and say these words, or just the thought of jars of jam stacked together may propel you to blurt them out. Speaking of stacks, in the technological space we no longer talk about operating systems, specific web servers, backend programming languages or databases. In the web development arena, we think of different sorts of development stacks like the LAMP stack, the MEAN stack etc. And there is a new kid on the block called JAMstack, which is not about specific technologies but a new way of building websites and apps.


What is the concept behind JAMstack?

To build a web development project with the JAMstack, three integral criteria must be met, namely:

  • JavaScript: All dynamic programming during the request/response cycle is handled by JavaScript, running entirely on the client (e.g. Vue.js, React.js)
  • APIs: All server-side processes and database actions are abstracted into reusable APIs, accessed over HTTPS with JavaScript (e.g. Twilio, Stripe)
  • Markup: Templated markup is prebuilt at deploy time, leveraging a site generator for content sites or a build tool for web apps (e.g. Gatsby.js, Webpack).
Benefits of JAMstack
  • High performance: When it comes to reducing the time to first byte, pre-built files served over a Content Delivery Network (CDN) enhance web performance. You do not have to wait for pages to build on the fly, as JAMstack allows you to generate them at deploy time.
A high TTFB penalizes the Speed Index of a page (Source: DareBoost Blog)
  • Robust Security: Abstracting server-side processes into microservice APIs minimises security threats. Also, the domain expertise of specialist third-party services can be utilised.
  • Better Scalability: CDNs are a great way of ensuring scalability when your deployment amounts to a stack of files that can be served anywhere.
  • Great developer experience: Loose coupling and separation of controls allow for more targeted development and debugging. Moreover, the need to administer a separate stack for content and marketing goes away with the expanding selection of CMS options for site generators.
Best practices
  • The entire site must be served from a CDN. JAMstack projects do not rely on server-side code, so rather than living on a single server, they can be distributed.
  • Atomic deploys should be employed. This ensures that no alterations go live until all the changed files have been uploaded.
  • The CDN should govern instant cache purges, ensuring that when a deploy goes live, it really goes live.
  • Everything should dwell in Git. This minimises contributor friction and streamlines staging and testing workflows.
  • Utilise modern build tools like Babel, PostCSS, Webpack etc.
  • Automate markup builds. JAMstack markup is prebuilt, so content alterations will not go live until you run another build.
How is JAMstack different from other development stacks?

Source: Memory Leak’s blog | Medium

JAMstack is an alternative to the LAMP (Linux, Apache, MySQL, PHP) and MEAN (MongoDB, Express.js, Angular and Node.js) stacks.

LAMP is used to build dynamic websites and web apps. Pages are reconstructed from a database on request instead of being held as flat documents ready for delivery. This makes it easy to add and modify content; JAMstack, however, delivers content at a much faster speed.

MEAN and LAMP are more similar to each other and are very different from JAMstack.

MEAN is also designed for building dynamic websites and apps, and all the MEAN components are written in JavaScript. The key difference is that MEAN, like LAMP, still assembles pages from databases on request.

Source: Memory Leak’s blog | Medium

Implementing JAMstack and Drupal together

Web development projects that rely on a tight coupling between client and server are not built with the JAMstack. This includes sites built with a server-side CMS like Drupal, monolithic server-run web apps relying on a backend language, and single-page apps using isomorphic rendering to create views on the server at runtime. So, how can Drupal and JAMstack work together?


In a session held at Bay Area Drupal Camp 2018, a demonstration showed a method of integrating Gatsby with Drupal. Gatsby is one of the leading JAMstack-based static site generators. The demo showed that Gatsby is not a replacement for Drupal: Drupal would still control the content, the site structure and how content is created, while Gatsby would govern the public-facing site.


It talked about the ‘Gatsby Drupal Kit’, still under development, which can help jumpstart Gatsby-Drupal integrations. It is designed to work with a minimal Drupal install as a jumping-off point and to provide a structure that can be extended to a larger and more complex site.

The demonstration focused on a base Drupal 8 site connected with Gatsby and the best practices for making Gatsby work for real sites in production. The emphasis was also on the sane patterns for translating Drupal’s structure into Gatsby components, templates, and pages.

Conclusion

Once you have fully understood the specific risks and put appropriate workflows in place, the JAMstack offers its share of opportunities. Creating a static site takes time and needs an architecture that orchestrates several solutions. Today it may seem intricate, but so was your first dynamic site: choosing a host, mastering FTP, juggling the web server logs and so on. With experience, JAMstack users will become more and more adept at leveraging its full potential.

OpenSense Labs has been making the digital transformation dreams come true for its partners with a suite of services.

Contact us at hello@opensenselabs.com to leverage JAMstack for Drupal sites.

Categories: FLOSS Project Planets

Test and Code: 53: Seven Databases in Seven Weeks - Luc Perkins

Planet Python - Mon, 2018-11-19 01:30

Luc Perkins joins the show to talk about "Seven Databases in Seven Weeks: A guide to modern databases and the NoSQL movement."

We discuss a bit about each database: Redis, Neo4J, CouchDB, MongoDB, HBase, Postgres, and DynamoDB.

Special Guest: Luc Perkins.

Sponsored By:

  • PyCharm Professional (http://testandcode.com/pycharm): We have a special offer for you: any time before December 1, you can get an Individual PyCharm Professional 4-month subscription for free! If you value your time, you owe it to yourself to try PyCharm.

Support Test and Code: https://www.patreon.com/testpodcast

Links:

  • Seven Databases in Seven Weeks, Second Edition: A Guide to Modern Databases and the NoSQL Movement: https://7dbs.io/
  • PostgreSQL: https://www.postgresql.org/
  • Redis: https://redis.io/
  • Neo4j Graph Database: https://neo4j.com/
  • CouchDB: http://couchdb.apache.org/
  • MongoDB: https://www.mongodb.com/
  • HBase: https://hbase.apache.org/
  • DynamoDB: https://aws.amazon.com/dynamodb/
Categories: FLOSS Project Planets

gamingdirectional: Create Enemy Missile and Enemy Missile Manager

Planet Python - Mon, 2018-11-19 01:13

In this article we will create two new classes, the enemy missile class and the enemy missile manager class. The enemy missile manager class will be called during each game loop by the enemy manager class to create new enemy missiles, update their positions, and draw them on the game scene. First of all, we will create the enemy missile class which will be used by the enemy...
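
As a rough sketch of the pattern described above (class and attribute names here are hypothetical, not taken from the project’s source):

class EnemyMissile(object):
    """A single missile travelling down the scene."""
    def __init__(self, x, y, speed=5, scene_height=600):
        self.x = x
        self.y = y
        self.speed = speed
        self.scene_height = scene_height
        self.on_scene = True

    def update(self):
        self.y += self.speed
        if self.y > self.scene_height:
            self.on_scene = False  # left the scene, can be discarded

    def draw(self, scene):
        pass  # the real project would blit the missile sprite here


class EnemyMissileManager(object):
    """Called once per game loop to create, update and draw missiles."""
    def __init__(self):
        self.missiles = []

    def create_missile(self, x, y):
        self.missiles.append(EnemyMissile(x, y))

    def update(self):
        for missile in self.missiles:
            missile.update()
        # drop missiles that have left the scene
        self.missiles = [m for m in self.missiles if m.on_scene]

    def draw(self, scene):
        for missile in self.missiles:
            missile.draw(scene)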

Source

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Mike Müller

Planet Python - Mon, 2018-11-19 01:05

This week we welcome Mike Müller (@pyacademy) as our PyDev of the Week. Mike is the creator of Python Academy and has been teaching Python for over 14 years. Mike has spoken at PyCon for several years and was featured on the Talk Python podcast two years ago. Let’s take a few moments to learn more about Mike!

Can you tell us a little about yourself (hobbies, education, etc):

I studied hydrology and water resources and earned a five-year degree from Dresden University of Technology, Germany. After that I went on to study for an MS in the same field at The University of Arizona, USA. I then continued my studies of water resources and was awarded a Ph.D. by the University of Cottbus, Germany. I worked in this field in consulting and research, for 11 years at a research institute and four years at a consulting office.

In my limited spare time I do some calisthenics, i.e. bodyweight training to keep fit. Pull-ups are fun.

Categories: FLOSS Project Planets

Code Karate: Drupal 8 Configuration Read-only Module

Planet Drupal - Sun, 2018-11-18 22:49
Episode Number: 219

The Drupal 8 Configuration Read-only module allows you to lock down some of your environments to prevent users from making configuration changes. This lets you use the Drupal 8 configuration management system to push up all your changes, while preventing you from changing any settings, content types, views, or any other configuration on your production website.

Tags: DevOps, Drupal, Contrib, Drupal 8, Drupal Planet, Deployment
Categories: FLOSS Project Planets

Podcast.__init__: Entity Extraction, Document Processing, And Knowledge Graphs For Investigative Journalists with Friedrich Lindenberg

Planet Python - Sun, 2018-11-18 19:28
Summary

Investigative reporters have a challenging task of identifying complex networks of people, places, and events gleaned from a mixed collection of sources. Turning those various documents, electronic records, and research into a searchable and actionable collection of facts is an interesting and difficult technical challenge. Friedrich Lindenberg created the Aleph project to address this issue and in this episode he explains how it works, why he built it, and how it is being used. He also discusses his hopes for the future of the project and other ways that the system could be used.

Preface
  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200 Gbit/s private networking, scalable shared block storage, node balancers, and a 40 Gbit/s public network, all controlled by a brand new API you’ve got everything you need to scale up. Go to podcastinit.com/linode today to get a $20 credit and launch a new server in under a minute.
  • Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email hosts@podcastinit.com.
  • To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
  • Join the community in the new Zulip chat workspace at podcastinit.com/chat
  • Registration for PyCon US, the largest annual gathering across the community, is open now. Don’t forget to get your ticket and I’ll see you there!
  • Your host as usual is Tobias Macey and today I’m interviewing Friedrich Lindenberg about Aleph, a tool to perform entity extraction across documents and structured data
Interview
  • Introductions
  • How did you get introduced to Python?
  • Can you start by explaining what Aleph is and how the project got started?
  • What is investigative journalism?
    • How does Aleph fit into their workflow?
    • What are some other tools that would be used alongside Aleph?
    • What are some ways that Aleph could be useful outside of investigative journalism?
  • How is Aleph architected and how has it evolved since you first started working on it?
  • What are the major components of Aleph?
    • What are the types of documents and data formats that Aleph supports?
  • Can you describe the steps involved in entity extraction?
    • What are the most challenging aspects of identifying and resolving entities in the documents stored in Aleph?
  • Can you describe the flow of data through the system from a document being uploaded through to it being displayed as part of a search query?
  • What is involved in deploying and managing an installation of Aleph?
  • What have been some of the most interesting or unexpected aspects of building Aleph?
  • Are there any particularly noteworthy uses of Aleph that you are aware of?
  • What are your plans for the future of Aleph?
Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppMsgPack 0.2.3

Planet Debian - Sun, 2018-11-18 17:32

Another maintenance release of RcppMsgPack got onto CRAN today. Two new helper functions were added and, not unlike the previous 0.2.2 release, some additional changes are internal and should allow compilation on all CRAN systems.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack and the clever code (in both R and C++) Travers wrote to access MsgPack-encoded objects directly from R.

Changes in version 0.2.3 (2018-11-18)
  • New functions msgpack_read and msgpack_write for efficient direct access to MessagePack content from files (#13).

  • Several internal code polishes to smooth compilation (#14 and #15).

Courtesy of CRANberries, there is also a diffstat report for this release.

More information is on the RcppMsgPack page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

The Digital Cat: Clean architectures in Python: a step-by-step example

Planet Python - Sun, 2018-11-18 15:00

In 2015 I was introduced by my friend Roberto Ciatti to the concept of Clean Architecture, as it is called by Robert Martin. The well-known Uncle Bob talks a lot about this concept at conferences and has written some very interesting posts about it. What he calls "Clean Architecture" is a way of structuring a software system, a set of considerations (more than strict rules) about the different layers and the role of the actors in it.

As he clearly states in a post aptly titled The Clean Architecture, the idea behind this design is not new, being built on a set of concepts that have been pushed by many software engineers over the last 3 decades. One of the first implementations may be found in the Boundary-Control-Entity model proposed by Ivar Jacobson in his masterpiece "Object-Oriented Software Engineering: A Use Case Driven Approach" published in 1992, but Martin lists other more recent versions of this architecture.

I will not repeat here what he has already explained better than I could, so I will just point out some resources you may check to start exploring these concepts:

The purpose of this post is to show how to build a web service in Python from scratch using a clean architecture. One of the main advantages of this layered design is testability, so I will develop it following a TDD approach. The project was initially developed from scratch in around 3 hours of work. Given the toy nature of the project some choices have been made to simplify the resulting code. Whenever meaningful I will point out those simplifications and discuss them.

If you want to know more about TDD in Python read the posts in this category.

Project overview

The goal of the "Rent-o-matic" project (fans of Day of the Tentacle may get the reference) is to create a simple search engine on top of a dataset of objects which are described by some quantities. The search engine shall allow the user to set some filters to narrow the search.

The objects in the dataset are storage rooms for rent described by the following quantities:

  • A unique identifier
  • A size in square meters
  • A renting price in Euro/day
  • Latitude and longitude

As advocated by the clean architecture model, we are interested in separating the different layers of the system. The architecture is described by four layers, which however can be implemented by more than four actual code modules. I will give here a brief description of those layers.

Entities

This is the level in which the domain models are described. Since we work in Python, I will put here the class that represents my storage rooms, with the data contained in the database, and whatever data I think is useful to perform the core business processing.

It is very important to understand that the models in this layer are different from the usual models of frameworks like Django. These models are not connected with a storage system, so they cannot be directly saved or queried using methods of their classes. They may however contain helper methods that implement code related to the business rules.
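
For example (an illustration of mine, not code from the project), such a helper could expose a derived quantity used by a business rule:

# Hypothetical model: business-rule helpers are fine, storage logic is not.
class Room(object):
    def __init__(self, size, price):
        self.size = size    # square meters
        self.price = price  # Euro/day

    @property
    def price_per_square_meter(self):
        # Derived value used by business processing; nothing is stored.
        return self.price / self.size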

Use cases

This layer contains the use cases implemented by the system. In this simple example there will be only one use case, which is the list of storage rooms according to the given filters. Here you would put for example a use case that shows the detail of a given storage room or every business process you want to implement, such as booking a storage room, filling it with goods, etc.

Interface Adapters

This layer corresponds to the boundary between the business logic and external systems and implements the APIs used to exchange data with them. Both the storage system and the user interface are external systems that need to exchange data with the use cases and this layer shall provide an interface for this data flow. In this project the presentation part of this layer is provided by a JSON serializer, on top of which an external web service may be built. The storage adapter shall define here the common API of the storage systems.

External interfaces

This part of the architecture is made up of external systems that implement the interfaces defined in the previous layer. Here, for example, you will find a web server that implements (REST) entry points, which access the data provided by use cases through the JSON serializer. You will also find here the storage system implementation, for example a given database such as MongoDB.

API and shades of grey

The word API is of utmost importance in a clean architecture. Every layer may be accessed by an API, that is, a fixed collection of entry points (methods or objects). Here "fixed" means "the same among every implementation"; obviously an API may change with time. Every presentation tool, for example, will access the same use cases, and the same methods, to obtain a set of domain models, which are the output of that particular use case. It is up to the presentation layer to format data according to the specific presentation media, for example HTML, PDF, images, etc. If you understand plugin-based architectures you have already grasped the main concept of a separate, API-driven component (or layer).

The same concept is valid for the storage layer. Every storage implementation shall provide the same methods. When dealing with use cases you shall not be concerned with the actual system that stores data; it may be a MongoDB local installation, a cloud storage system or a trivial in-memory dictionary.
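
As a sketch of the idea (the class names are mine; the article's real repository layer appears later), two interchangeable storage implementations could expose the same list() method:

class MemRepo(object):
    """Trivial in-memory storage, handy for tests and demos."""
    def __init__(self, entries=None):
        self._entries = list(entries or [])

    def list(self):
        return list(self._entries)


class MongoRepo(object):
    """Hypothetical MongoDB-backed storage exposing the same API."""
    def __init__(self, collection):
        self._collection = collection

    def list(self):
        # Assumes a PyMongo-like collection object.
        return list(self._collection.find())

A use case that only calls repo.list() works with either, which is exactly the point of a shared API.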

The separation between layers, and the content of each layer, is not always fixed and immutable. A well-designed system shall also cope with practical issues such as performance or other specific needs. When designing an architecture it is very important to know "what is where and why", and this is even more important when you "bend" the rules. Many issues do not have a black-or-white answer, and many decisions are "shades of grey"; that is, it is up to you to justify why you put something in a given place.

Keep in mind however, that you should not break the structure of the clean architecture, in particular you shall be inflexible about the data flow (see the "Crossing boundaries" section in the original post of Robert Martin). If you break the data flow, you are basically invalidating the whole structure. Let me stress it again: never break the data flow. A simple example of breaking the data flow is to let a use case output a Python class instead of a representation of that class such as a JSON string.

Project structure

Let us take a look at the final structure of the project

The global structure of the package has been built with Cookiecutter, and I will run quickly through that part. The rentomatic directory contains the following subdirectories: domain, repositories, REST, serializers, use_cases. Those directories reflect the layered structure introduced in the previous section, and the structure of the tests directory mirrors this structure so that tests are easily found.

Source code

You can find the source code in this GitHub repository. Feel free to fork it and experiment, change it, and find better solutions to the problem I will discuss in this post. The source code contains tagged commits to allow you to follow the actual development as presented in the post. You can find the current tag in the Git tag: <tag name> label under the section titles. The label is actually a link to the tagged commit on GitHub, if you want to see the code without cloning it.

Project initialization

Git tag: step01

Update: this Cookiecutter package creates an environment like the one I am creating in this section. I will keep the following explanation so that you can see how to manage requirements and configurations, but for your next project consider using this automated tool.

I usually like maintaining a Python virtual environment inside the project, so I will create a temporary virtualenv to install cookiecutter, create the project, and remove the virtualenv. Cookiecutter is going to ask you some questions about you and the project, to provide an initial file structure. We are going to build our own testing environment, so it is safe to answer no to use_pytest. Since this is a demo project we are not going to need any publishing feature, so you can answer no to use_pypi_deployment_with_travis as well. The project does not have a command line interface, and you can safely create the author file and use any license.

virtualenv venv3 -p python3
source venv3/bin/activate
pip install cookiecutter
cookiecutter https://github.com/audreyr/cookiecutter-pypackage

Now answer the questions, then finish creating the project with the following code

deactivate
rm -fR venv3
cd rentomatic
virtualenv venv3 -p python3
source venv3/bin/activate

Get rid of the requirements_dev.txt file that Cookiecutter created for you. I usually store virtualenv requirements in different hierarchical files to separate production, development and testing environments, so create the requirements directory and the relative files

mkdir requirements
touch requirements/prod.txt
touch requirements/dev.txt
touch requirements/test.txt

The test.txt file will contain specific packages used to test the project. Since testing the project also requires installing the packages of the production environment, the file first includes the production one.

-r prod.txt

pytest
tox
coverage
pytest-cov

The dev.txt file will contain packages used during the development process and shall also install the test and production packages

-r test.txt

pip
wheel
flake8
Sphinx

(taking advantage of the fact that test.txt already includes prod.txt).

Last, the main requirements.txt file of the project will just import requirements/prod.txt

-r prod.txt

Obviously you are free to find the project structure that best suits your needs or preferences. This is the structure we are going to use in this project, but nothing forces you to follow it in your personal projects.

This separation allows you to install a full-fledged development environment on your machine, while installing only testing tools in a testing environment like the Travis platform, further reducing the number of dependencies in the production case.

As you can see, I am not using version tags in the requirements files. This is because this project is not going to be run in a production environment, so we do not need to freeze the environment.
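
If it were, you would pin the versions in these files; purely as an illustration (the version numbers here are invented), test.txt might become:

-r prod.txt

pytest==3.4.2
tox==2.9.1
coverage==4.5.1
pytest-cov==2.5.1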

Remember at this point to install the development requirements in your virtualenv

$ pip install -r requirements/dev.txt

Miscellaneous configuration

The pytest testing library needs to be configured. This is the pytest.ini file that you can create in the root directory (where the setup.py file is located)

[pytest]
minversion = 2.0
norecursedirs = .git .tox venv* requirements*
python_files = test*.py

To run the tests during the development of the project just execute

$ py.test -sv

If you want to check the coverage, i.e. the amount of code which is run by your tests or "covered", execute

$ py.test --cov-report term-missing --cov=rentomatic

If you want to know more about test coverage check the official documentation of the Coverage.py and the pytest-cov packages.

I strongly suggest the use of the flake8 package to check that your Python code is PEP8 compliant. This is the flake8 configuration that you can put in your setup.cfg file

[flake8]
ignore = D203
exclude = .git, venv*, docs
max-complexity = 10

To check the compliance of your code with the PEP8 standard execute

$ flake8

Flake8 documentation is available here.

Note that every step in this post produces tested code and a coverage of 100%. One of the benefits of a clean architecture is the separation between layers, which guarantees a great degree of testability. Note however that in this tutorial, in particular in the REST sections, some tests have been omitted in favour of a simpler description of the architecture.

Domain models

Git tag: step02

Let us start with a simple definition of the StorageRoom model. As said before, the clean architecture models are very lightweight, or at least they are lighter than their counterparts in a framework.

Following the TDD methodology, the first thing that I write are the tests. Create the tests/domain/test_storageroom.py file and put this code inside it

import uuid

from rentomatic.domain.storageroom import StorageRoom


def test_storageroom_model_init():
    code = uuid.uuid4()
    storageroom = StorageRoom(code, size=200, price=10,
                              longitude=-0.09998975, latitude=51.75436293)

    assert storageroom.code == code
    assert storageroom.size == 200
    assert storageroom.price == 10
    assert storageroom.longitude == -0.09998975
    assert storageroom.latitude == 51.75436293


def test_storageroom_model_from_dict():
    code = uuid.uuid4()
    storageroom = StorageRoom.from_dict(
        {
            'code': code,
            'size': 200,
            'price': 10,
            'longitude': -0.09998975,
            'latitude': 51.75436293
        }
    )

    assert storageroom.code == code
    assert storageroom.size == 200
    assert storageroom.price == 10
    assert storageroom.longitude == -0.09998975
    assert storageroom.latitude == 51.75436293

With these two tests we ensure that our model can be initialized with the correct values and that it can be created from a dictionary. In this first version all the parameters of the model are required. Later we may want to make some of them optional, and in that case we will have to add the relevant tests.

Now let's write the StorageRoom class in the rentomatic/domain/storageroom.py file. Do not forget to create the __init__.py file in the subdirectories of the project, otherwise Python will not be able to import the modules.

from rentomatic.shared.domain_model import DomainModel


class StorageRoom(object):

    def __init__(self, code, size, price, latitude, longitude):
        self.code = code
        self.size = size
        self.price = price
        self.latitude = latitude
        self.longitude = longitude

    @classmethod
    def from_dict(cls, adict):
        room = StorageRoom(
            code=adict['code'],
            size=adict['size'],
            price=adict['price'],
            latitude=adict['latitude'],
            longitude=adict['longitude'],
        )

        return room


DomainModel.register(StorageRoom)

The model is very simple, and requires no further explanation. One of the benefits of a clean architecture is that each layer contains small pieces of code that, being isolated, shall perform simple tasks. In this case the model provides an initialization API and stores the information inside the class.

The from_dict method comes in handy when we have to create a model from data coming from another layer (such as the database layer or the query string of the REST layer).

One could be tempted to simplify the from_dict function by abstracting it and providing it through a Model base class. While a certain level of abstraction and generalization is possible and desirable, the initialization part of the models will probably have to deal with many different cases, and is thus better implemented directly in the class.
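
To make the temptation concrete, such a generic version might look like this (my own sketch, not code from the project):

# A tempting but premature generalization: one from_dict for every model.
class Model(object):
    _fields = []

    @classmethod
    def from_dict(cls, adict):
        # Works only while every model is a flat bag of required fields;
        # optional, nested, or converted values quickly break the abstraction.
        return cls(**{key: adict[key] for key in cls._fields})

A StorageRoom could then just declare _fields = ['code', 'size', 'price', 'latitude', 'longitude'], at the price of hiding initialization logic that sooner or later diverges between models.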

The DomainModel abstract base class is an easy way to categorize the model for future uses like checking if a class is a model in the system. For more information about this use of Abstract Base Classes in Python see this post.
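
The rentomatic/shared/domain_model.py file imported earlier is not shown in this post; a minimal version, assuming it only needs to support register() and isinstance() checks, could be:

import abc


class DomainModel(metaclass=abc.ABCMeta):
    """Marker ABC: concrete models are attached with DomainModel.register(),
    so isinstance(obj, DomainModel) identifies domain models without forcing
    them to inherit from a common base."""
    pass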

Since we have a method that creates an object from a dictionary, it is useful to have a method that returns a dictionary version of the object. This allows us to easily write a comparison operator between objects, which we will use later in some tests.

The new tests in tests/domain/test_storageroom.py are

def test_storageroom_model_to_dict():
    storageroom_dict = {
        'code': uuid.uuid4(),
        'size': 200,
        'price': 10,
        'longitude': -0.09998975,
        'latitude': 51.75436293
    }

    storageroom = StorageRoom.from_dict(storageroom_dict)

    assert storageroom.to_dict() == storageroom_dict


def test_storageroom_model_comparison():
    storageroom_dict = {
        'code': uuid.uuid4(),
        'size': 200,
        'price': 10,
        'longitude': -0.09998975,
        'latitude': 51.75436293
    }
    storageroom1 = StorageRoom.from_dict(storageroom_dict)
    storageroom2 = StorageRoom.from_dict(storageroom_dict)

    assert storageroom1 == storageroom2

and the new methods of the object in rentomatic/domain/storageroom.py are

    def to_dict(self):
        return {
            'code': self.code,
            'size': self.size,
            'price': self.price,
            'latitude': self.latitude,
            'longitude': self.longitude,
        }

    def __eq__(self, other):
        return self.to_dict() == other.to_dict()

Serializers

Git tag: step03

Our model needs to be serialized if we want to return it as a result of an API call. The typical serialization format is JSON, as this is a broadly accepted standard for web-based APIs. The serializer is not part of the model; it is an external specialized class that receives the model instance and produces a representation of its structure and values.

To test the JSON serialization of our StorageRoom class put in the tests/serializers/test_storageroom_serializer.py file the following code

import datetime
import json
import uuid

import pytest

from rentomatic.serializers import storageroom_serializer as srs
from rentomatic.domain.storageroom import StorageRoom


def test_serialize_domain_storageroom():
    code = uuid.uuid4()

    room = StorageRoom(
        code=code,
        size=200,
        price=10,
        longitude=-0.09998975,
        latitude=51.75436293
    )

    expected_json = """
        {{
            "code": "{}",
            "size": 200,
            "price": 10,
            "longitude": -0.09998975,
            "latitude": 51.75436293
        }}
    """.format(code)

    json_storageroom = json.dumps(room, cls=srs.StorageRoomEncoder)

    assert json.loads(json_storageroom) == json.loads(expected_json)


def test_serialize_domain_storageroom_wrong_type():
    with pytest.raises(TypeError):
        json.dumps(datetime.datetime.now(), cls=srs.StorageRoomEncoder)

Put in the rentomatic/serializers/storageroom_serializer.py file the code that makes the test pass

import json


class StorageRoomEncoder(json.JSONEncoder):

    def default(self, o):
        try:
            to_serialize = {
                'code': str(o.code),
                'size': o.size,
                'price': o.price,
                'latitude': o.latitude,
                'longitude': o.longitude,
            }
            return to_serialize
        except AttributeError:
            return super().default(o)

Providing a class that inherits from json.JSONEncoder lets us use the json.dumps(room, cls=StorageRoomEncoder) syntax to serialize the model.

There is a certain degree of repetition in the code we wrote, and this is the annoying part of a clean architecture. Since we want to isolate layers as much as possible and create lightweight classes, we end up somewhat repeating certain types of actions. For example, the serialization code that assigns attributes of a StorageRoom to JSON attributes is very similar to the code we use to create the object from a dictionary. Not exactly the same, obviously, but the two functions are very close.

Use cases (part 1)

Git tag: step04

It's time to implement the actual business logic our application wants to expose to the outside world. Use cases are the place where we implement classes that query the repository, apply business rules, logic, and whatever transformation we need for our data, and return the results.

With those requirements in mind, let us start to build a use case step by step. The simplest use case we can create is one that fetches all the storage rooms from the repository and returns them. Please note that we have not implemented any repository layer yet, so our tests will mock it.

This is the skeleton for a basic test of a use case that lists all the storage rooms. Put this code in the tests/use_cases/test_storageroom_list_use_case.py file

import uuid

import pytest
from unittest import mock

from rentomatic.domain.storageroom import StorageRoom
from rentomatic.use_cases import storageroom_use_cases as uc


@pytest.fixture
def domain_storagerooms():
    storageroom_1 = StorageRoom(
        code=uuid.uuid4(),
        size=215,
        price=39,
        longitude=-0.09998975,
        latitude=51.75436293,
    )

    storageroom_2 = StorageRoom(
        code=uuid.uuid4(),
        size=405,
        price=66,
        longitude=0.18228006,
        latitude=51.74640997,
    )

    storageroom_3 = StorageRoom(
        code=uuid.uuid4(),
        size=56,
        price=60,
        longitude=0.27891577,
        latitude=51.45994069,
    )

    storageroom_4 = StorageRoom(
        code=uuid.uuid4(),
        size=93,
        price=48,
        longitude=0.33894476,
        latitude=51.39916678,
    )

    return [storageroom_1, storageroom_2, storageroom_3, storageroom_4]


def test_storageroom_list_without_parameters(domain_storagerooms):
    repo = mock.Mock()
    repo.list.return_value = domain_storagerooms

    storageroom_list_use_case = uc.StorageRoomListUseCase(repo)
    result = storageroom_list_use_case.execute()

    repo.list.assert_called_with()
    assert result == domain_storagerooms

The test is straightforward. First we mock the repository so that it provides a list() method that returns the list of models we created above the test. Then we initialize the use case with the repo and execute it, collecting the result. The first thing we check is that the repository method was called without any parameter, and the second is the correctness of the result.

This is the implementation of the use case that makes the test pass. Put the code in the rentomatic/use_cases/storageroom_use_cases.py file

class StorageRoomListUseCase(object):

    def __init__(self, repo):
        self.repo = repo

    def execute(self):
        return self.repo.list()

With such an implementation of the use case, however, we will soon experience issues. For starters, we do not have a standard way to transport the call parameters, which means that we do not have a standard way to check for their correctness either. The second problem is that we miss a standard way to return the call results, and consequently we lack a way to communicate whether the call was successful or if it failed, and in the latter case what the reasons for the failure are. This also applies to the case of bad parameters discussed in the previous point.

We thus want to introduce some structures to wrap the inputs and outputs of our use cases. Those structures are called request and response objects.

Requests and responses

Git tag: step05

Request and response objects are an important part of a clean architecture, as they transport call parameters, inputs and results from outside the application into the use cases layer.

More specifically, requests are objects created from incoming API calls, thus they shall deal with things like incorrect values, missing parameters, wrong formats, etc. Responses, on the other hand, have to contain the actual results of the API calls, but shall also be able to represent error cases and to deliver rich information on what happened.

The actual implementation of request and response objects is completely free, the clean architecture says nothing about them. The decision on how to pack and represent data is up to us.

For the moment we just need a StorageRoomListRequestObject that can be initialized without parameters, so let us create the file tests/use_cases/test_storageroom_list_request_objects.py and put there a test for this object.

from rentomatic.use_cases import request_objects as ro


def test_build_storageroom_list_request_object_without_parameters():
    req = ro.StorageRoomListRequestObject()

    assert bool(req) is True


def test_build_file_list_request_object_from_empty_dict():
    req = ro.StorageRoomListRequestObject.from_dict({})

    assert bool(req) is True

While at the moment this request object is basically empty, it will come in handy as soon as we start having parameters for the list use case. The code of the StorageRoomListRequestObject is the following and goes into the rentomatic/use_cases/request_objects.py file

class StorageRoomListRequestObject(object):

    @classmethod
    def from_dict(cls, adict):
        return StorageRoomListRequestObject()

    def __nonzero__(self):
        return True

The response object is also very simple, since for the moment we just need a successful response. Unlike the request, the response is not linked to any particular use case, so the test file can be named tests/shared/test_response_object.py

from rentomatic.shared import response_object as ro


def test_response_success_is_true():
    assert bool(ro.ResponseSuccess()) is True

and the actual response object is in the file rentomatic/shared/response_object.py

class ResponseSuccess(object):

    def __init__(self, value=None):
        self.value = value

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

Use cases (part 2)

Git tag: step06

Now that we have implemented the request and response objects we can change the test code to include those structures. Change tests/use_cases/test_storageroom_list_use_case.py to contain this code

import uuid

import pytest
from unittest import mock

from rentomatic.domain.storageroom import StorageRoom
from rentomatic.use_cases import request_objects as ro
from rentomatic.use_cases import storageroom_use_cases as uc


@pytest.fixture
def domain_storagerooms():
    storageroom_1 = StorageRoom(
        code=uuid.uuid4(),
        size=215,
        price=39,
        longitude=-0.09998975,
        latitude=51.75436293,
    )

    storageroom_2 = StorageRoom(
        code=uuid.uuid4(),
        size=405,
        price=66,
        longitude=0.18228006,
        latitude=51.74640997,
    )

    storageroom_3 = StorageRoom(
        code=uuid.uuid4(),
        size=56,
        price=60,
        longitude=0.27891577,
        latitude=51.45994069,
    )

    storageroom_4 = StorageRoom(
        code=uuid.uuid4(),
        size=93,
        price=48,
        longitude=0.33894476,
        latitude=51.39916678,
    )

    return [storageroom_1, storageroom_2, storageroom_3, storageroom_4]


def test_storageroom_list_without_parameters(domain_storagerooms):
    repo = mock.Mock()
    repo.list.return_value = domain_storagerooms

    storageroom_list_use_case = uc.StorageRoomListUseCase(repo)
    request_object = ro.StorageRoomListRequestObject.from_dict({})

    response_object = storageroom_list_use_case.execute(request_object)

    assert bool(response_object) is True
    repo.list.assert_called_with()

    assert response_object.value == domain_storagerooms

The new version of the rentomatic/use_cases/storageroom_use_cases.py file is the following

from rentomatic.shared import response_object as ro


class StorageRoomListUseCase(object):

    def __init__(self, repo):
        self.repo = repo

    def execute(self, request_object):
        storage_rooms = self.repo.list()
        return ro.ResponseSuccess(storage_rooms)

Let us consider what we have achieved with our clean architecture up to this point. We have a very lightweight model that can be serialized to JSON and which is completely independent from other parts of the system. The code also contains a use case that, given a repository that exposes a given API, extracts all the models and returns them contained in a structured object.

We are missing some objects, however. For example, we have not implemented any unsuccessful response object or validated the incoming request object.

To explore these missing parts of the architecture, let us improve the current use case to accept a filters parameter representing the filters we want to apply to the extracted list of models. This will generate some possible error conditions for the input, forcing us to introduce some validation for the incoming request object.

Requests and validation

Git tag: step07

I want to add a filters parameter to the request. Through that parameter the caller can add different filters by specifying a name and a value for each filter (for instance {'price_lt': 100} to get all results with a price less than 100).

The first thing to do is to change the request object, starting from the test. The new version of the tests/use_cases/test_storageroom_list_request_objects.py file is the following

from rentomatic.use_cases import request_objects as ro


def test_build_storageroom_list_request_object_without_parameters():
    req = ro.StorageRoomListRequestObject()

    assert req.filters is None
    assert bool(req) is True


def test_build_file_list_request_object_from_empty_dict():
    req = ro.StorageRoomListRequestObject.from_dict({})

    assert req.filters is None
    assert bool(req) is True


def test_build_storageroom_list_request_object_with_empty_filters():
    req = ro.StorageRoomListRequestObject(filters={})

    assert req.filters == {}
    assert bool(req) is True


def test_build_storageroom_list_request_object_from_dict_with_empty_filters():
    req = ro.StorageRoomListRequestObject.from_dict({'filters': {}})

    assert req.filters == {}
    assert bool(req) is True


def test_build_storageroom_list_request_object_with_filters():
    req = ro.StorageRoomListRequestObject(filters={'a': 1, 'b': 2})

    assert req.filters == {'a': 1, 'b': 2}
    assert bool(req) is True


def test_build_storageroom_list_request_object_from_dict_with_filters():
    req = ro.StorageRoomListRequestObject.from_dict({'filters': {'a': 1, 'b': 2}})

    assert req.filters == {'a': 1, 'b': 2}
    assert bool(req) is True


def test_build_storageroom_list_request_object_from_dict_with_invalid_filters():
    req = ro.StorageRoomListRequestObject.from_dict({'filters': 5})

    assert req.has_errors()
    assert req.errors[0]['parameter'] == 'filters'
    assert bool(req) is False

As you can see I added the assert req.filters is None check to the original two tests, then I added 5 tests to check if filters can be specified and to test the behaviour of the object with an invalid filter parameter.

To make the tests pass we have to change our StorageRoomListRequestObject class. There are obviously multiple possible solutions that you can come up with, and I recommend that you try to find your own. This is the one I usually employ. The file rentomatic/use_cases/request_objects.py becomes

import collections


class InvalidRequestObject(object):

    def __init__(self):
        self.errors = []

    def add_error(self, parameter, message):
        self.errors.append({'parameter': parameter, 'message': message})

    def has_errors(self):
        return len(self.errors) > 0

    def __nonzero__(self):
        return False

    __bool__ = __nonzero__


class ValidRequestObject(object):

    @classmethod
    def from_dict(cls, adict):
        raise NotImplementedError

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__


class StorageRoomListRequestObject(ValidRequestObject):

    def __init__(self, filters=None):
        self.filters = filters

    @classmethod
    def from_dict(cls, adict):
        invalid_req = InvalidRequestObject()

        if 'filters' in adict and not isinstance(adict['filters'], collections.Mapping):
            invalid_req.add_error('filters', 'Is not iterable')

        if invalid_req.has_errors():
            return invalid_req

        return StorageRoomListRequestObject(filters=adict.get('filters', None))

Let me review this new code bit by bit.

First of all, two helper objects have been introduced, ValidRequestObject and InvalidRequestObject. They are different because an invalid request shall contain the validation errors, but both can be converted to booleans.

Second, the StorageRoomListRequestObject accepts an optional filters parameter when instantiated. There are no validation checks in the __init__() method because this is considered to be an internal method that gets called when the parameters have already been validated.

Last, the from_dict() method performs the validation of the filters parameter, if it is present. I leverage the collections.Mapping abstract base class to check if the incoming parameter is a dictionary-like object and return either an InvalidRequestObject or a ValidRequestObject instance.
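
A quick illustration of the resulting behaviour (expected values shown in the comments):

from rentomatic.use_cases.request_objects import StorageRoomListRequestObject

req = StorageRoomListRequestObject.from_dict({'filters': 5})
bool(req)    # False: we received an InvalidRequestObject
req.errors   # [{'parameter': 'filters', 'message': 'Is not iterable'}]

req = StorageRoomListRequestObject.from_dict({'filters': {'price_lt': 100}})
bool(req)    # True: a ValidRequestObject carrying the filters
req.filters  # {'price_lt': 100}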

Since we can now tell bad requests from good ones we need to introduce a new type of response as well, to manage bad requests or other errors in the use case.

Responses and failures

Git tag: step08

What happens if the use case encounters an error? Use cases can encounter a wide set of errors: validation errors, as we just discussed in the previous section, but also business logic errors or errors that come from the repository layer. Whatever the error, the use case shall always return an object with a known structure (the response), so we need a new object that provides good support for different types of failures.

As with the requests, there is no single way to provide such an object, and the following code is just one of the possible solutions.

The first thing to do is to expand the tests/shared/test_response_object.py file, adding tests for failures.

import pytest

from rentomatic.shared import response_object as res
from rentomatic.use_cases import request_objects as req


@pytest.fixture
def response_value():
    return {'key': ['value1', 'value2']}


@pytest.fixture
def response_type():
    return 'ResponseError'


@pytest.fixture
def response_message():
    return 'This is a response error'

This is some boilerplate code, basically pytest fixtures that we will use in the following tests.

def test_response_success_is_true(response_value):
    assert bool(res.ResponseSuccess(response_value)) is True


def test_response_failure_is_false(response_type, response_message):
    assert bool(res.ResponseFailure(response_type, response_message)) is False

Two basic tests to check that both the old ResponseSuccess and the new ResponseFailure objects behave consistently when converted to boolean.

def test_response_success_contains_value(response_value):
    response = res.ResponseSuccess(response_value)

    assert response.value == response_value

The ResponseSuccess object contains the call result in the value attribute.

def test_response_failure_has_type_and_message(response_type, response_message):
    response = res.ResponseFailure(response_type, response_message)

    assert response.type == response_type
    assert response.message == response_message


def test_response_failure_contains_value(response_type, response_message):
    response = res.ResponseFailure(response_type, response_message)

    assert response.value == {'type': response_type, 'message': response_message}

These two tests ensure that the ResponseFailure object provides the same interface as the successful one and that the type and message parameters are accessible.

def test_response_failure_initialization_with_exception(response_type):
    response = res.ResponseFailure(response_type, Exception('Just an error message'))

    assert bool(response) is False
    assert response.type == response_type
    assert response.message == "Exception: Just an error message"


def test_response_failure_from_invalid_request_object():
    response = res.ResponseFailure.build_from_invalid_request_object(req.InvalidRequestObject())

    assert bool(response) is False


def test_response_failure_from_invalid_request_object_with_errors():
    request_object = req.InvalidRequestObject()
    request_object.add_error('path', 'Is mandatory')
    request_object.add_error('path', "can't be blank")

    response = res.ResponseFailure.build_from_invalid_request_object(request_object)

    assert bool(response) is False
    assert response.type == res.ResponseFailure.PARAMETERS_ERROR
    assert response.message == "path: Is mandatory\npath: can't be blank"

We sometimes want to create responses from Python exceptions that can happen in the use case, so we test that ResponseFailure objects can be initialized with a generic exception.

Last, we have the tests for the build_from_invalid_request_object() method, which automates the initialization of the response from an invalid request. If the request contains errors (remember that the request validates itself), we need to put them into the response message.

The last test uses a class attribute to classify the error. The ResponseFailure class will contain three predefined errors that can happen when running the use case, namely RESOURCE_ERROR, PARAMETERS_ERROR, and SYSTEM_ERROR. This categorization is an attempt to capture the different types of issues that can happen when dealing with an external system through an API. RESOURCE_ERROR contains all those errors that are related to the resources contained in the repository, for instance when you cannot find an entry given its unique id. PARAMETERS_ERROR describes all those errors that occur when the request parameters are wrong or missing. SYSTEM_ERROR encompasses the errors that happen in the underlying system at operating system level, such as a failure in a filesystem operation, or a network connection error while fetching data from the database.

The use case has the responsibility to manage the different error conditions arising from the Python code and to convert them into an error description made of one of the three types I just described and a message.

Let's write the ResponseFailure class that makes the tests pass. This can be the initial definition of the class. Put it in rentomatic/shared/response_object.py

class ResponseFailure(object):
    RESOURCE_ERROR = 'ResourceError'
    PARAMETERS_ERROR = 'ParametersError'
    SYSTEM_ERROR = 'SystemError'

    def __init__(self, type_, message):
        self.type = type_
        self.message = self._format_message(message)

    def _format_message(self, msg):
        if isinstance(msg, Exception):
            return "{}: {}".format(msg.__class__.__name__, "{}".format(msg))
        return msg

Through the _format_message() method we enable the class to accept both string messages and Python exceptions, which is very handy when dealing with external libraries that can raise exceptions we do not know or do not want to manage.

    @property
    def value(self):
        return {'type': self.type, 'message': self.message}

This property makes the class comply with the ResponseSuccess API, providing the value attribute, which is an aptly formatted dictionary.

    def __nonzero__(self):
        return False

    __bool__ = __nonzero__

    @classmethod
    def build_from_invalid_request_object(cls, invalid_request_object):
        message = "\n".join(["{}: {}".format(err['parameter'], err['message'])
                             for err in invalid_request_object.errors])
        return cls(cls.PARAMETERS_ERROR, message)

As explained before, the PARAMETERS_ERROR type encompasses all those errors that come from an invalid set of parameters. This is the case for this method, which shall be called whenever the request is wrong, that is, when some parameters contain errors or are missing.
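To see the class in action, a quick illustration; the printed values follow directly from the code above:

failure = ResponseFailure(ResponseFailure.SYSTEM_ERROR, ValueError('bad value'))

print(bool(failure))    # False
print(failure.type)     # 'SystemError'
print(failure.message)  # 'ValueError: bad value'
print(failure.value)    # {'type': 'SystemError', 'message': 'ValueError: bad value'}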

Since building failure responses is a common activity it is useful to have helper methods, so I add three tests for the building functions to the tests/shared/test_response_object.py file

def test_response_failure_build_resource_error():
    response = res.ResponseFailure.build_resource_error("test message")

    assert bool(response) is False
    assert response.type == res.ResponseFailure.RESOURCE_ERROR
    assert response.message == "test message"


def test_response_failure_build_parameters_error():
    response = res.ResponseFailure.build_parameters_error("test message")

    assert bool(response) is False
    assert response.type == res.ResponseFailure.PARAMETERS_ERROR
    assert response.message == "test message"


def test_response_failure_build_system_error():
    response = res.ResponseFailure.build_system_error("test message")

    assert bool(response) is False
    assert response.type == res.ResponseFailure.SYSTEM_ERROR
    assert response.message == "test message"

We add the relevant methods to the class and change the build_from_invalid_request_object() method to leverage the new build_parameters_error() method. Change the rentomatic/shared/response_object.py file to contain this code

    @classmethod
    def build_resource_error(cls, message=None):
        return cls(cls.RESOURCE_ERROR, message)

    @classmethod
    def build_system_error(cls, message=None):
        return cls(cls.SYSTEM_ERROR, message)

    @classmethod
    def build_parameters_error(cls, message=None):
        return cls(cls.PARAMETERS_ERROR, message)

    @classmethod
    def build_from_invalid_request_object(cls, invalid_request_object):
        message = "\n".join(["{}: {}".format(err['parameter'], err['message'])
                             for err in invalid_request_object.errors])
        return cls.build_parameters_error(message)

Use cases (part 3)

Git tag: step09

Our implementation of responses and requests is finally complete, so now we can implement the last version of our use case. The use case correctly returns a ResponseSuccess object but it still lacks proper validation of the incoming request.

Let's change the test in the tests/use_cases/test_storageroom_list_use_case.py file and add two more tests. The resulting set of tests (after the domain_storagerooms fixture) is the following

import pytest
from unittest import mock

from rentomatic.domain.storageroom import StorageRoom
from rentomatic.shared import response_object as res
from rentomatic.use_cases import request_objects as req
from rentomatic.use_cases import storageroom_use_cases as uc


@pytest.fixture
def domain_storagerooms():
    [...]


def test_storageroom_list_without_parameters(domain_storagerooms):
    repo = mock.Mock()
    repo.list.return_value = domain_storagerooms

    storageroom_list_use_case = uc.StorageRoomListUseCase(repo)
    request_object = req.StorageRoomListRequestObject.from_dict({})

    response_object = storageroom_list_use_case.execute(request_object)

    assert bool(response_object) is True
    repo.list.assert_called_with(filters=None)

    assert response_object.value == domain_storagerooms

This is the test we already wrote, but the assert_called_with() method is called with filters=None to reflect the added parameter. The import lines have slightly changed as well, given that we are now importing both response_object and request_objects. The domain_storagerooms fixture has not changed and has been omitted from the code snippet to keep it short.

def test_storageroom_list_with_filters(domain_storagerooms):
    repo = mock.Mock()
    repo.list.return_value = domain_storagerooms

    storageroom_list_use_case = uc.StorageRoomListUseCase(repo)
    qry_filters = {'a': 5}
    request_object = req.StorageRoomListRequestObject.from_dict({'filters': qry_filters})

    response_object = storageroom_list_use_case.execute(request_object)

    assert bool(response_object) is True
    repo.list.assert_called_with(filters=qry_filters)
    assert response_object.value == domain_storagerooms

This test checks that the value of the filters key in the dictionary used to create the request is actually used when calling the repository.

def test_storageroom_list_handles_generic_error():
    repo = mock.Mock()
    repo.list.side_effect = Exception('Just an error message')

    storageroom_list_use_case = uc.StorageRoomListUseCase(repo)
    request_object = req.StorageRoomListRequestObject.from_dict({})

    response_object = storageroom_list_use_case.execute(request_object)

    assert bool(response_object) is False
    assert response_object.value == {
        'type': res.ResponseFailure.SYSTEM_ERROR,
        'message': "Exception: Just an error message"
    }


def test_storageroom_list_handles_bad_request():
    repo = mock.Mock()

    storageroom_list_use_case = uc.StorageRoomListUseCase(repo)
    request_object = req.StorageRoomListRequestObject.from_dict({'filters': 5})

    response_object = storageroom_list_use_case.execute(request_object)

    assert bool(response_object) is False
    assert response_object.value == {
        'type': res.ResponseFailure.PARAMETERS_ERROR,
        'message': "filters: Is not iterable"
    }

These last two tests check the behaviour of the use case when the repository raises an exception or when the request is badly formatted.

Change the file rentomatic/use_cases/storageroom_use_cases.py to contain the new use case implementation that makes all the tests pass

from rentomatic.shared import response_object as res


class StorageRoomListUseCase(object):

    def __init__(self, repo):
        self.repo = repo

    def execute(self, request_object):
        if not request_object:
            return res.ResponseFailure.build_from_invalid_request_object(request_object)

        try:
            storage_rooms = self.repo.list(filters=request_object.filters)
            return res.ResponseSuccess(storage_rooms)
        except Exception as exc:
            return res.ResponseFailure.build_system_error(
                "{}: {}".format(exc.__class__.__name__, "{}".format(exc)))

As you can see, the first thing that the execute() method does is to check if the request is valid; otherwise it returns a ResponseFailure built from the request object itself. Then the actual business logic is implemented, calling the repository and returning a success response. If something goes wrong in this phase the exception is caught and returned as an aptly formatted ResponseFailure.

Intermezzo: refactoring

Git tag: step10

A clean architecture is not a framework, so it provides very few generic features, unlike frameworks such as Django, which provides models, an ORM, and all sorts of structures and libraries. Nevertheless, some classes can be isolated from our code and provided as a library, so that we can reuse the code. In this section I will guide you through a refactoring of the code we already have, during which we will isolate common features for requests, responses, and use cases.

We already isolated the response object. We can move the test_valid_request_object_cannot_be_used test from tests/use_cases/test_storageroom_list_request_objects.py to tests/shared/test_request_object.py since it tests a generic behaviour and not something related to the StorageRoom model and use cases.

Then we can move the InvalidRequestObject and ValidRequestObject classes from rentomatic/use_cases/request_objects.py to rentomatic/shared/request_object.py, making the necessary changes to the StorageRoomListRequestObject class that now inherits from an external class.

The use case is the class that undergoes the major changes. The UseCase class is tested by the following code in the tests/shared/test_use_case.py file

from unittest import mock

from rentomatic.shared import request_object as req, response_object as res
from rentomatic.shared import use_case as uc


def test_use_case_cannot_process_valid_requests():
    valid_request_object = mock.MagicMock()
    valid_request_object.__bool__.return_value = True

    use_case = uc.UseCase()
    response = use_case.execute(valid_request_object)

    assert not response
    assert response.type == res.ResponseFailure.SYSTEM_ERROR
    assert response.message == \
        'NotImplementedError: process_request() not implemented by UseCase class'

This test checks that the UseCase class cannot be actually used to process incoming requests.

def test_use_case_can_process_invalid_requests_and_returns_response_failure():
    invalid_request_object = req.InvalidRequestObject()
    invalid_request_object.add_error('someparam', 'somemessage')

    use_case = uc.UseCase()
    response = use_case.execute(invalid_request_object)

    assert not response
    assert response.type == res.ResponseFailure.PARAMETERS_ERROR
    assert response.message == 'someparam: somemessage'

This test runs the use case with an invalid request and checks that the response is correct. Since the request is wrong, the response type is PARAMETERS_ERROR, as this represents an issue in the request parameters.

def test_use_case_can_manage_generic_exception_from_process_request():
    use_case = uc.UseCase()

    class TestException(Exception):
        pass

    use_case.process_request = mock.Mock()
    use_case.process_request.side_effect = TestException('somemessage')
    response = use_case.execute(mock.Mock())

    assert not response
    assert response.type == res.ResponseFailure.SYSTEM_ERROR
    assert response.message == 'TestException: somemessage'

This test makes the use case raise an exception. This type of error is categorized as SYSTEM_ERROR, which is a generic name for an exception which is not related to request parameters or actual entities.

As you can see in this last test, the idea is to expose the execute() method in the UseCase class and have it call the process_request() method defined by each child class, which implements the actual use case.

The rentomatic/shared/use_case.py file contains the following code that makes the test pass

from rentomatic.shared import response_object as res


class UseCase(object):

    def execute(self, request_object):
        if not request_object:
            return res.ResponseFailure.build_from_invalid_request_object(request_object)

        try:
            return self.process_request(request_object)
        except Exception as exc:
            return res.ResponseFailure.build_system_error(
                "{}: {}".format(exc.__class__.__name__, "{}".format(exc)))

    def process_request(self, request_object):
        raise NotImplementedError(
            "process_request() not implemented by UseCase class")

While the rentomatic/use_cases/storageroom_use_cases.py now contains the following code

from rentomatic.shared import use_case as uc
from rentomatic.shared import response_object as res


class StorageRoomListUseCase(uc.UseCase):

    def __init__(self, repo):
        self.repo = repo

    def process_request(self, request_object):
        domain_storageroom = self.repo.list(filters=request_object.filters)
        return res.ResponseSuccess(domain_storageroom)

The repository layer

Git tag: step11

The repository layer is the one that deals with the data storage system. As you saw when we implemented the use case, we access the data storage through an API, in this case the list() method of the repository. The level of abstraction provided by a repository layer is higher than that provided by an ORM or by a tool like SQLAlchemy. The repository layer provides only the endpoints that the application needs, with an interface which is tailored to the specific business problems the application implements.

To clarify the matter in terms of concrete technologies, SQLAlchemy is a wonderful tool to abstract the access to an SQL database, so the internal implementation of the repository layer could use it to access a PostgreSQL database. But the external API of the layer is not the one provided by SQLAlchemy. The API is a (usually reduced) set of functions that the use cases call to get the data, and indeed the internal implementation could also use raw SQL queries on a proprietary network interface. The repository does not even need to be based on a database. We can have a repository layer that fetches data from a REST service, for example, or that makes remote procedure calls through RabbitMQ.

A very important feature of the repository layer is that it always returns domain models, and this is in line with what framework ORMs usually do.
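To make this concrete, here is a sketch of a repository backed by a hypothetical REST service. It exposes the same list() API the use case relies on and still returns domain models; the remote URL and its query parameters are assumptions for the sake of the example:

import requests  # assumption: the data lives behind a remote REST service

from rentomatic.domain import storageroom as sr


class RestRepo:
    """A repository with the same list() interface, backed by HTTP."""

    def __init__(self, base_url):
        self.base_url = base_url

    def list(self, filters=None):
        # the endpoint and its query parameters are hypothetical
        response = requests.get(
            '{}/storagerooms'.format(self.base_url), params=filters or {})
        response.raise_for_status()

        # whatever the backend, a repository returns domain models
        return [sr.StorageRoom.from_dict(d) for d in response.json()]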

I will not deploy a real database in this post. I will address that part of the application in a future post, where there will be enough space to implement two different solutions and show how the repository API can mask the actual implementation.

Instead, I am going to create a very simple memory storage system with some predefined data. I think this is enough for the moment to demonstrate the repository concept.

The first thing to do is to write some tests that document the public API of the repository. The file containing the tests is tests/repository/test_memrepo.py.

First we add some data that we will be using in the tests. We import the domain model to check if the results of the API calls have the correct type

import pytest

from rentomatic.domain.storageroom import StorageRoom
from rentomatic.shared.domain_model import DomainModel
from rentomatic.repository import memrepo


@pytest.fixture
def storageroom_dicts():
    return [
        {
            'code': 'f853578c-fc0f-4e65-81b8-566c5dffa35a',
            'size': 215,
            'price': 39,
            'longitude': -0.09998975,
            'latitude': 51.75436293,
        },
        {
            'code': 'fe2c3195-aeff-487a-a08f-e0bdc0ec6e9a',
            'size': 405,
            'price': 66,
            'longitude': 0.18228006,
            'latitude': 51.74640997,
        },
        {
            'code': '913694c6-435a-4366-ba0d-da5334a611b2',
            'size': 56,
            'price': 60,
            'longitude': 0.27891577,
            'latitude': 51.45994069,
        },
        {
            'code': 'eed76e77-55c1-41ce-985d-ca49bf6c0585',
            'size': 93,
            'price': 48,
            'longitude': 0.33894476,
            'latitude': 51.39916678,
        }
    ]

Since the repository object will return domain models, we need a helper function to check the correctness of the results. The following function checks the length of the two lists, ensures that all the returned elements are domain models and compares the codes. Note that we can safely employ the isinstance() built-in function since DomainModel is an abstract base class and our models are registered (see rentomatic/domain/storageroom.py)

def _check_results(domain_models_list, data_list):
    assert len(domain_models_list) == len(data_list)
    assert all([isinstance(dm, DomainModel) for dm in domain_models_list])
    assert set([dm.code for dm in domain_models_list]) == \
        set([d['code'] for d in data_list])
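The registration mentioned above boils down to a couple of lines like these (a sketch of the relevant parts, not the complete files):

# rentomatic/shared/domain_model.py - sketch of the abstract base class
from abc import ABCMeta


class DomainModel(metaclass=ABCMeta):
    pass


# rentomatic/domain/storageroom.py - after the StorageRoom class definition
DomainModel.register(StorageRoom)

# register() declares StorageRoom a "virtual subclass", so
# isinstance(a_storage_room, DomainModel) returns True without inheritance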

We need to be able to initialize the repository with a list of dictionaries, and the list() method without any parameter shall return the same list of entries.

def test_repository_list_without_parameters(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(),
        storageroom_dicts
    )

The list() method shall accept a filters parameter, which is a dictionary. The dictionary keys shall be in the form <attribute>__<operator>, similar to the syntax used by the Django ORM. So to express that the price shall be less than 60 we can write filters={'price__lt': 60}.

A couple of error conditions shall be checked: using an unknown key shall raise a KeyError exception, and using a wrong operator shall raise a ValueError exception.

def test_repository_list_with_filters_unknown_key(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    with pytest.raises(KeyError):
        repo.list(filters={'name': 'aname'})


def test_repository_list_with_filters_unknown_operator(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    with pytest.raises(ValueError):
        repo.list(filters={'price__in': [20, 30]})

Let us then test that the filtering mechanism actually works. We want the default operator to be __eq, which means that if we do not put any operator an equality check shall be performed.

def test_repository_list_with_filters_price(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'price': 60}),
        [storageroom_dicts[2]]
    )


def test_repository_list_with_filters_price_eq(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'price__eq': 60}),
        [storageroom_dicts[2]]
    )


def test_repository_list_with_filters_price_lt(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'price__lt': 60}),
        [storageroom_dicts[0], storageroom_dicts[3]]
    )


def test_repository_list_with_filters_price_gt(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'price__gt': 60}),
        [storageroom_dicts[1]]
    )


def test_repository_list_with_filters_size(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'size': 93}),
        [storageroom_dicts[3]]
    )


def test_repository_list_with_filters_size_eq(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'size__eq': 93}),
        [storageroom_dicts[3]]
    )


def test_repository_list_with_filters_size_lt(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'size__lt': 60}),
        [storageroom_dicts[2]]
    )


def test_repository_list_with_filters_size_gt(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'size__gt': 400}),
        [storageroom_dicts[1]]
    )


def test_repository_list_with_filters_code(storageroom_dicts):
    repo = memrepo.MemRepo(storageroom_dicts)

    _check_results(
        repo.list(filters={'code': '913694c6-435a-4366-ba0d-da5334a611b2'}),
        [storageroom_dicts[2]]
    )

The implementation of the MemRepo class is pretty simple, and I will not dive into it line by line.

from rentomatic.domain import storageroom as sr


class MemRepo:

    def __init__(self, entries=None):
        self._entries = []
        if entries:
            self._entries.extend(entries)

    def _check(self, element, key, value):
        if '__' not in key:
            key = key + '__eq'

        key, operator = key.split('__')

        if operator not in ['eq', 'lt', 'gt']:
            raise ValueError('Operator {} is not supported'.format(operator))

        operator = '__{}__'.format(operator)

        if key in ['size', 'price']:
            return getattr(element[key], operator)(int(value))
        elif key in ['latitude', 'longitude']:
            return getattr(element[key], operator)(float(value))

        return getattr(element[key], operator)(value)

    def list(self, filters=None):
        if not filters:
            result = self._entries
        else:
            result = []
            result.extend(self._entries)

            for key, value in filters.items():
                result = [e for e in result if self._check(e, key, value)]

        return [sr.StorageRoom.from_dict(r) for r in result]

The REST layer (part 1)

Git tag: step12

This is the last step of our journey into the clean architecture. We created the domain models, the serializers, the use cases and the repository. We are still missing an interface that glues everything together, that is, one that gets the call parameters from the user, initializes a use case with a repository, runs the use case that fetches the domain models from the repository, and converts them to a standard format. This layer can be represented by a wide number of interfaces and technologies. For example a command line interface (CLI) can implement exactly those steps, getting the parameters via command line switches, and returning the results as plain text on the console. The same underlying system, however, can be leveraged by a web page that gets the call parameters from a set of widgets, performs the steps described above, and parses the returned JSON data to show the result on the same page.
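As a sketch of the CLI example (the file name and the empty repository are my own assumptions; the imported modules are the ones we built in the previous sections):

# cli.py (hypothetical file name) - the same use case behind a command line
from rentomatic.repository import memrepo as mr
from rentomatic.use_cases import request_objects as req
from rentomatic.use_cases import storageroom_use_cases as uc

if __name__ == '__main__':
    # an empty in-memory repository, just to wire things together
    repo = mr.MemRepo([])
    use_case = uc.StorageRoomListUseCase(repo)

    request_object = req.StorageRoomListRequestObject.from_dict({})
    response = use_case.execute(request_object)

    # plain text on the console instead of an HTTP response
    print(response.value)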

Whatever technology we choose to interact with the user, to collect inputs and provide results, we need to interface with the clean architecture we just built, so now we will create a layer to expose an HTTP API. This can be done with a server that exposes a set of HTTP addresses (API endpoints) that return some data once accessed. Such a layer is commonly called a REST layer, because the semantics of the addresses usually come from the REST recommendations.

Flask is a lightweight web framework with a modular structure that provides just the parts that the user needs. In particular, we will not use any database/ORM, since we already implemented our own repository layer.

Please keep in mind that this part of the project, together with the repository layer, is usually implemented as a separate package, and I am keeping them together just for the sake of this introductory tutorial.

Let us start updating the requirements files. The dev.txt file shall contain Flask

-r test.txt

pip
wheel
flake8
Sphinx
Flask

And the test.txt file will contain the pytest extension to work with Flask (more on this later)

-r prod.txt

pytest
tox
coverage
pytest-cov
pytest-flask

Remember to run pip install -r requirements/dev.txt again after those changes to actually install the new packages in your virtual environment.

The setup of a Flask application is not complex, but a lot of concepts are involved, and since this is not a tutorial on Flask I will run quickly through these steps. I will however provide links to the Flask documentation for every concept.

I usually define different configurations for my testing, development, and production environments. Since the Flask application can be configured using a plain Python object (documentation), I created the file rentomatic/settings.py to host those objects

import os


class Config(object):
    """Base configuration."""
    APP_DIR = os.path.abspath(os.path.dirname(__file__))  # This directory
    PROJECT_ROOT = os.path.abspath(os.path.join(APP_DIR, os.pardir))


class ProdConfig(Config):
    """Production configuration."""
    ENV = 'prod'
    DEBUG = False


class DevConfig(Config):
    """Development configuration."""
    ENV = 'dev'
    DEBUG = True


class TestConfig(Config):
    """Test configuration."""
    ENV = 'test'
    TESTING = True
    DEBUG = True

Read this page to know more about Flask configuration parameters. Now we need a function that initializes the Flask application (documentation), configures it and registers the blueprints (documentation). The file rentomatic/app.py contains the following code

from flask import Flask

from rentomatic.rest import storageroom
from rentomatic.settings import DevConfig


def create_app(config_object=DevConfig):
    app = Flask(__name__)
    app.config.from_object(config_object)
    app.register_blueprint(storageroom.blueprint)

    return app

The application endpoints need to return a Flask Response object, with the actual results and an HTTP status. The content of the response, in this case, is the JSON serialization of the use case response.

Let us write a test step by step, so that you can perfectly understand what is going to happen in the REST endpoint. The basic structure of the test is

[SOME PREPARATION]
[CALL THE API ENDPOINT]
[CHECK RESPONSE DATA]
[CHECK RESPONSE STATUS CODE]
[CHECK RESPONSE MIMETYPE]

So our first test tests/rest/test_get_storagerooms_list.py is made of the following parts

@mock.patch('rentomatic.use_cases.storageroom_use_cases.StorageRoomListUseCase')
def test_get(mock_use_case, client):
    mock_use_case().execute.return_value = res.ResponseSuccess(storagerooms)

Remember that we are not testing the use case here, so we can safely mock it. Here we make the use case return a ResponseSuccess instance containing a list of domain models (which we haven't defined yet).

    http_response = client.get('/storagerooms')

This is the actual API call. We are exposing the endpoint at the /storagerooms address. Note the use of the client fixture provided by pytest-flask.

    assert json.loads(http_response.data.decode('UTF-8')) == [storageroom1_dict]
    assert http_response.status_code == 200
    assert http_response.mimetype == 'application/json'

These are the three checks previously mentioned. The second and the third ones are pretty straightforward, while the first one needs some explanation. We want to compare http_response.data with [storageroom1_dict], which is a list with a Python dictionary containing the data of the storageroom1_domain_model object. Flask Response objects contain a binary representation of the data, so first we decode the bytes using UTF-8, then convert them into a Python object. It is much more convenient to compare Python objects, since pytest can deal with issues like the unordered nature of dictionaries, while this is not possible when comparing two strings.

The final test file, with the test domain model and its dictionary is

import json
from unittest import mock

from rentomatic.domain.storageroom import StorageRoom
from rentomatic.shared import response_object as res

storageroom1_dict = {
    'code': '3251a5bd-86be-428d-8ae9-6e51a8048c33',
    'size': 200,
    'price': 10,
    'longitude': -0.09998975,
    'latitude': 51.75436293
}

storageroom1_domain_model = StorageRoom.from_dict(storageroom1_dict)

storagerooms = [storageroom1_domain_model]


@mock.patch('rentomatic.use_cases.storageroom_use_cases.StorageRoomListUseCase')
def test_get(mock_use_case, client):
    mock_use_case().execute.return_value = res.ResponseSuccess(storagerooms)

    http_response = client.get('/storagerooms')

    assert json.loads(http_response.data.decode('UTF-8')) == [storageroom1_dict]
    assert http_response.status_code == 200
    assert http_response.mimetype == 'application/json'

If you run pytest you'll notice that the test suite fails because of the app fixture, which is missing. The pytest-flask plugin provides the client fixture, but relies on the app fixture which has to be provided. The best place to define it is in tests/conftest.py

import pytest

from rentomatic.app import create_app
from rentomatic.settings import TestConfig


@pytest.yield_fixture(scope='function')
def app():
    return create_app(TestConfig)

It's time to write the endpoint, where we will finally see all the pieces of the architecture working together.

The minimal Flask endpoint we can put in rentomatic/rest/storageroom.py is something like

blueprint = Blueprint('storageroom', __name__)


@blueprint.route('/storagerooms', methods=['GET'])
def storageroom():
    [LOGIC]

    return Response([JSON DATA],
                    mimetype='application/json',
                    status=[STATUS])

The first part of our logic is the creation of a StorageRoomListRequestObject. For the moment we can ignore the optional querystring parameters and use an empty dictionary

def storageroom():
    request_object = req.StorageRoomListRequestObject.from_dict({})

As you can see I'm creating the object from an empty dictionary, so querystring parameters are not taken into account for the moment. The second thing to do is to initialize the repository

    repo = mr.MemRepo()

The third thing the endpoint has to do is the initialization of the use case

    use_case = uc.StorageRoomListUseCase(repo)

And finally we run the use case passing the request object

    response = use_case.execute(request_object)

This response, however, is not yet an HTTP response, and we have to explicitly build it. The HTTP response will contain the JSON representation of the response.value attribute.

    return Response(json.dumps(response.value, cls=ser.StorageRoomEncoder),
                    mimetype='application/json',
                    status=200)

Note that this function is obviously still incomplete, as it always returns a successful response (code 200). It is however enough to pass the test we wrote. The whole file is the following

import json

from flask import Blueprint, Response

from rentomatic.use_cases import request_objects as req
from rentomatic.repository import memrepo as mr
from rentomatic.use_cases import storageroom_use_cases as uc
from rentomatic.serializers import storageroom_serializer as ser

blueprint = Blueprint('storageroom', __name__)


@blueprint.route('/storagerooms', methods=['GET'])
def storageroom():
    request_object = req.StorageRoomListRequestObject.from_dict({})

    repo = mr.MemRepo()
    use_case = uc.StorageRoomListUseCase(repo)

    response = use_case.execute(request_object)

    return Response(json.dumps(response.value, cls=ser.StorageRoomEncoder),
                    mimetype='application/json',
                    status=200)

This code demonstrates how the clean architecture works in a nutshell. The function we wrote is however not complete, as it doesn't consider querystring parameters and error cases.

The server in action

Git tag: step13

Before I fix the missing parts of the endpoint let us see the server in action, so we can finally enjoy the product we have been building during this long post.

To actually see some results when accessing the endpoint we need to fill the repository with some data. This part is obviously required only because of the ephemeral nature of the repository we are using. A real repository would wrap a persistent source of data and providing data at this point wouldn't be necessary. To initialize the repository we have to define some data, so add these dictionaries to the rentomatic/rest/storageroom.py file

storageroom1 = {
    'code': 'f853578c-fc0f-4e65-81b8-566c5dffa35a',
    'size': 215,
    'price': 39,
    'longitude': -0.09998975,
    'latitude': 51.75436293,
}

storageroom2 = {
    'code': 'fe2c3195-aeff-487a-a08f-e0bdc0ec6e9a',
    'size': 405,
    'price': 66,
    'longitude': 0.18228006,
    'latitude': 51.74640997,
}

storageroom3 = {
    'code': '913694c6-435a-4366-ba0d-da5334a611b2',
    'size': 56,
    'price': 60,
    'longitude': 0.27891577,
    'latitude': 51.45994069,
}

And then use them to initialise the repository

repo = mr.MemRepo([storageroom1, storageroom2, storageroom3])

To run the web server we need to create a wsgi.py file in the main project folder (the folder where setup.py is stored)

from rentomatic.app import create_app

app = create_app()

Now we can run the Flask development server

$ flask run

At this point, if you open your browser and navigate to http://localhost:5000/storagerooms, you can see the API call results. I recommend installing a formatter extension for the browser to better check the output. If you are using Chrome try JSON Formatter.
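With the three entries defined above, the output should look roughly like the following (formatted for readability, and assuming the serializer exposes the five fields used throughout this post):

[
    {
        "code": "f853578c-fc0f-4e65-81b8-566c5dffa35a",
        "size": 215,
        "price": 39,
        "longitude": -0.09998975,
        "latitude": 51.75436293
    },
    ...
]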

The REST layer (part 2)

Git tag: step14

Let us cover the two missing cases in the endpoint. First I introduce a test to check that the endpoint correctly handles a failure in the use case. Add it to the tests/rest/test_get_storagerooms_list.py file

@mock.patch(
    'rentomatic.use_cases.storageroom_use_cases.StorageRoomListUseCase')
def test_get_failed_response(mock_use_case, client):
    mock_use_case().execute.return_value = \
        res.ResponseFailure.build_system_error('test message')

    http_response = client.get('/storagerooms')

    assert json.loads(http_response.data.decode('UTF-8')) == \
        {'type': res.ResponseFailure.SYSTEM_ERROR, 'message': 'test message'}
    assert http_response.status_code == 500
    assert http_response.mimetype == 'application/json'

This makes the use case return a failed response and checks that the HTTP response contains a formatted version of the error. To make this test pass we have to introduce a proper mapping between domain response types and HTTP status codes in the rentomatic/rest/storageroom.py file

from rentomatic.shared import response_object as res

STATUS_CODES = {
    res.ResponseSuccess.SUCCESS: 200,
    res.ResponseFailure.RESOURCE_ERROR: 404,
    res.ResponseFailure.PARAMETERS_ERROR: 400,
    res.ResponseFailure.SYSTEM_ERROR: 500
}

Then we need to create the Flask response with the correct code in the definition of the endpoint

    return Response(json.dumps(response.value, cls=ser.StorageRoomEncoder),
                    mimetype='application/json',
                    status=STATUS_CODES[response.type])

The second and last test is a bit more complex. As before we will mock the use case, but this time we will also patch StorageRoomListRequestObject. We do this because we need to know if the request object is initialized with the correct parameters from the URL querystring. So, step by step

@mock.patch('rentomatic.use_cases.storageroom_use_cases.StorageRoomListUseCase')
def test_request_object_initialisation_and_use_with_filters(mock_use_case, client):
    mock_use_case().execute.return_value = res.ResponseSuccess([])

This is, like before, a patch of the use case class that ensures the use case will return a ResponseSuccess instance.

    internal_request_object = mock.Mock()

The request object will be internally created with StorageRoomListRequestObject.from_dict, and we want that function to return a known mock object, which is the one we initialized here.

    request_object_class = \
        'rentomatic.use_cases.request_objects.StorageRoomListRequestObject'

    with mock.patch(request_object_class) as mock_request_object:
        mock_request_object.from_dict.return_value = internal_request_object

        client.get('/storagerooms?filter_param1=value1&filter_param2=value2')

Here we patch StorageRoomListRequestObject and we assign a known output to the from_dict() method. Then we call the endpoint with some querystring parameters. What should happen is that the from_dict() method of the request is called with the filter parameters and that the execute() method of the use case instance is called with the internal_request_object.

    mock_request_object.from_dict.assert_called_with(
        {'filters': {'param1': 'value1', 'param2': 'value2'}}
    )

    mock_use_case().execute.assert_called_with(internal_request_object)

The endpoint function has to change to implement this new behaviour and to make the test pass. The whole code of the new storageroom() Flask method is the following

import json

from flask import Blueprint, request, Response

from rentomatic.use_cases import request_objects as req
from rentomatic.shared import response_object as res
from rentomatic.repository import memrepo as mr
from rentomatic.use_cases import storageroom_use_cases as uc
from rentomatic.serializers import storageroom_serializer as ser

blueprint = Blueprint('storageroom', __name__)

STATUS_CODES = {
    res.ResponseSuccess.SUCCESS: 200,
    res.ResponseFailure.RESOURCE_ERROR: 404,
    res.ResponseFailure.PARAMETERS_ERROR: 400,
    res.ResponseFailure.SYSTEM_ERROR: 500
}

storageroom1 = {
    'code': 'f853578c-fc0f-4e65-81b8-566c5dffa35a',
    'size': 215,
    'price': 39,
    'longitude': -0.09998975,
    'latitude': 51.75436293,
}

storageroom2 = {
    'code': 'fe2c3195-aeff-487a-a08f-e0bdc0ec6e9a',
    'size': 405,
    'price': 66,
    'longitude': 0.18228006,
    'latitude': 51.74640997,
}

storageroom3 = {
    'code': '913694c6-435a-4366-ba0d-da5334a611b2',
    'size': 56,
    'price': 60,
    'longitude': 0.27891577,
    'latitude': 51.45994069,
}


@blueprint.route('/storagerooms', methods=['GET'])
def storageroom():
    qrystr_params = {
        'filters': {},
    }

    for arg, values in request.args.items():
        if arg.startswith('filter_'):
            qrystr_params['filters'][arg.replace('filter_', '')] = values

    request_object = req.StorageRoomListRequestObject.from_dict(qrystr_params)

    repo = mr.MemRepo([storageroom1, storageroom2, storageroom3])
    use_case = uc.StorageRoomListUseCase(repo)

    response = use_case.execute(request_object)

    return Response(json.dumps(response.value, cls=ser.StorageRoomEncoder),
                    mimetype='application/json',
                    status=STATUS_CODES[response.type])

Note that we extract the querystring parameters from the global request object provided by Flask. Once the querystring parameters are in a dictionary, we just need to create the request object from it.
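For example, a call to /storagerooms?filter_price__lt=60 produces the following dictionary, which from_dict() then turns into a request object. Note that querystring values arrive as strings; the repository's _check() method takes care of casting them to numbers where needed:

# GET /storagerooms?filter_price__lt=60 results in
qrystr_params = {'filters': {'price__lt': '60'}}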

Conclusions

Well, that's all! Some tests are missing in the REST part, but as I said I just wanted to show a working implementation of a clean architecture and not a fully developed project. I suggest that you try to implement some changes, for example:

  • another endpoint like the access to a single resource (/storagerooms/<code>)
  • a different repository, connected to a real DB (you can use SQLite, for example)
  • implement a new querystring parameter, for example the distance from a given point on the map (use geopy to easily compute distances)

While you develop your code always try to work following the TDD approach. Testability is one of the main features of a clean architecture, so don't ignore it.

Whether you decide to use a clean architecture or not, I really hope this post helped you to get a fresh view on software architectures, as happened to me when I first discovered the concepts exemplified here.

Updates

2016-11-15: Two tests contained variables with a wrong name (artist), which came from an initial version of the project. The name did not affect the tests. Added some instructions on the virtual environment and the development requirements.

2016-12-12: Thanks to Marco Beri who spotted a typo in the code of step 6, which was already correct in the GitHub repository. He also suggested using the Cookiecutter package by Ardy Dedase. Thanks to Marco and to Ardy!

2018-11-18: Two years have passed since I wrote this post and I found some errors that I fixed, like longitude and latitude being passed as strings instead of floats. I also moved the project from Flask-script to the Flask development server and added a couple of clarifications here and there.

Feedback

Feel free to use the blog Google+ page to comment on the post. The GitHub issues page is the best place to submit corrections.

Categories: FLOSS Project Planets

Lars Wirzenius: Retiring from Debian

Planet Debian - Sun, 2018-11-18 11:59

I've started the process of retiring from Debian. Again. This will be my third time. It'll take a little while as I take care of things to do this cleanly: uploading packages to set Maintainer to QA, removing myself from Planet Debian, sending the retirement email to -private, etc.

I've had a rough year, and Debian has also stopped being fun for me. There's a number of Debian people saying and doing things that I find disagreeable, and the process of developing Debian is not nearly as nice as it could be. There's way too much friction pretty much everywhere.

For example, when a package maintainer uploads a package, the package goes into an upload queue. The upload queue gets processed every few minutes, and the packages get moved into an incoming queue. The incoming queue gets processed every fifteen minutes, and packages get imported into the master archive. Changes to the master archive get pushed to main mirrors every six hours. Websites like lintian.debian.org, the package tracker, and the Ultimate Debian Database get updated at some point after that. (Or their updates get triggered, but it might take longer for the update to actually happen. Who knows. There's almost no transparency.)

The developer gets notified, by email, when the upload queue gets processed, and when the incoming queue gets processed. If they want to see current status on the websites (to see if the upload fixed a problem, for example), they may have to wait for many more hours, possibly even a couple of days.

This was fine in the 1990s. It's not fine anymore.

That's not why I'm retiring. I'm just tired. I'm tired of dragging myself through high-friction Debian processes to do anything. I'm tired of people who should know better tearing open old wounds. I'm tired of all the unconstructive and aggressive whinging, from Debian contributors and users alike. I'm tired of trying to make things better and running into walls of negativity. (I realise I'm not being the most constructive with this blog post and with my retirement. I'm tired.)

I wish everyone else a good time making Debian better, however. Or whatever else they may be doing. I'll probably be back. I always have been, when I've retired before.

Categories: FLOSS Project Planets

Gbyte blog: Why you should be using the Simple XML sitemap 3.0 release candidate

Planet Drupal - Sun, 2018-11-18 09:06

The third major version of simple_sitemap has been seven months in the making. The module has been rewritten from the ground up and now features a more reliable generation process, a significantly more versatile API and many new functionalities.

Major new features

Ability to create any type of sitemap via plugins

The 8.x-3.x release allows not only customizing the URL generation through URL generator plugins, as 2.x did, but also creating custom sitemap types that mix and match a sitemap generator along with several URL generators to create any type of sitemap.

This 3-plugin system coupled with the new concept of sitemap variants makes it possible to run several types of sitemaps on a single Drupal instance. Now, for example, a Google News sitemap can coexist with your hreflang sitemap.

A sitemap variant can, but does not need to, be coupled to entity types/bundles. When creating a sitemap generator, one can define where the content source is located and what to do with it upon sitemap generation/deletion.

Ability to create sitemap variants of various sitemap types via UI

In 3.x, links from a specific entity bundle can be indexed in a specific sitemap variant with its own URL. This means that apart from /sitemap.xml, there can be, e.g.,

  • /products/sitemap.xml,
  • /files/sitemap.xml or
  • /news/sitemap.xml.

All of these can be completely different sitemap types linking to Drupal entities, external resources, or both. They could also be indexing other sitemaps. The name, label and weight of each variant can also be set in the UI.

Categories: FLOSS Project Planets

Contributing to the kde userbase wiki

Planet KDE - Sun, 2018-11-18 07:26

This is the story about how I started more than one month ago contributing to the KDE project.

So, one month ago, I found a task on KDE's Phabricator instance about the deplorable state of the KDE userbase wiki. The wiki contains a lot of screenshots dating back to the KDE 4 era and some are even from the KDE 3 era. It's a problem, because a wiki is an important part of the user experience and can be really useful for new users and experienced ones alike.

Lucky for us, even though Plasma and the KDE applications did change a lot in the last few years, most of the changes are new features and UI/UX improvements, so most of the information is still up-to-date. So most of the work is only updating screenshots. But up-to-date screenshots are also quite important, because when users see old screenshots, they might think that the instructions are also outdated.

So I started, updating the screenshots one after the other. (Honestly when I started, I didn’t think it would take so long, not because the process was slow or difficult, but because of the amount of outdated screenshots.)

But I also learned a lot about KDE doing this. For example did you know that Blink (Chrome webengine) is a fork of WebKit (Safari webengine) and WebKit is a fork of KHTML (Konqueror webengine)? I also learned about the existence of lesser-known KDE apps, for example Kile (a LaTeX IDE), Calligra (an office suite), KFloppy (a floppy disk editor), …

As a non-native English speaker, I found that updating screenshots and quickly checking if the information is up-to-date is easier than I first thought. There aren't a lot of requirements: you only need a Phabricator account and the default Breeze theme installed. The Phabricator account is easy to create and the default theme should already be installed.

Then for each wiki entry, you only need to download the software, find all outdated screenshots in the wiki entry, take a new screenshot for each old screenshot, and upload the new screenshot.

For all icons, I quickly generated a png from the svg file with the following command:

convert -density 1200 -resize 128x128 -background transparent /usr/share/icons/breeze/apps/48/okular.svg okular.png

It’s not finished, there are still a lot of outdated screenshots in the wiki, but every day the amount decreases. :)

And you, dear reader, can also help. Like I said: this job doesn't need any programming skills or perfect English skills, just a bit of motivation. If you need help, there are some instructions available to get started editing the wiki: Start Contributing, Markup help, Quick Start. You can also contact me: on the fediverse @carl@linuxrocks.online or on reddit (/u/ognarb1).

Thanks to XYQuadrat for proofreading this blog post. :D

Categories: FLOSS Project Planets

libqaccessibilityclient v0.3.0

Planet KDE - Sun, 2018-11-18 06:26

Hi, I’ve been asked to make a new release of libqaccessibilityclient, which seemed like a good idea. So here we go: https://download.kde.org/stable/libqaccessibilityclient/ – version 0.3.0 is now available. I’d like to say thanks to the KDE sysadmins for being super fast.

Now if I wasn’t involved with the accessibility project, I’d have no clue what this is about… so What is libqaccessibilityclient?

It’s a small library that can help understand/use the accessibility information available on DBus. It could be used to write assistive applications such as screen readers. Right now my main purpose for it is to understand what’s going on, so I use it as debugging helper. There are now two small helper applications, one has been there before and shows a complete tree of accessibility objects, so the representation of applications as screen readers see them. The second one is new, it just dumps the same tree on the command line. I used this to find out KWin’s state, since doing anything while pressing alt-tab is hard. I could run it on the command line with a sleep and then see what KWin reported while I pressed alt-tab.

By the way, we’ve been organizing our work on a phabricator board here, feel free to comment and help out with some of the tasks, especially when it comes to Plasma keyboard handling.

Categories: FLOSS Project Planets

Video Editing for foss-gbg

Planet KDE - Sun, 2018-11-18 06:13

Editing videos for foss-gbg and foss-north has turned into something that I do on almost a monthly basis. I've tried a few workflows, but landed in using kdenlive and, when needed, Audacity. I'm not a very advanced audio person, so if kdenlive incorporated basic noise reduction and a compressor, I could stay within one tool.

Before I describe the actual process, I want to mention something about the hardware involved. There are so many things that you can do when producing this type of content. However, every piece that you add to the puzzle is another point of failure. The motto is KISS – Keep It Simple Stupid. Hence, we use a single video camera with an integrated microphone. This is either an action cam, or a JVC video camera. In most cases this just works. In some cases the person talking has a microphone and then we try to place the camera close to a speaker. It has happened that we’ve recorded someone whispering just by the camera…

As we don’t have a dedicated microphone for the speaker, we get an audio stream that includes the reaction of the audience. That is in my opinion a good thing. It captures the mood of the event. However, we also get quite a lot of background noise which is bad. For this, I rely on this workflow from Rich Bowen. Basically, I extract the audio stream from the recording, massage it in Audacity, and then re-introduce it.

I’ve found it easier to cut the video prior to fixing the audio. This usually means find the start and the end of the talk, but in some cases it is more complex. E.g. removing parts of the Q&A due to reasons, or cutting out a demo that makes no sense when watching the video.

Once in Audacity, I generally pick out a “silent” part of the recording to learn a noise profile. I then apply a noise reduction effect to the entire recording. This commonly produces a somewhat distorted sound (like if spoken into a can), but the voice of the speaker comes across nicely. After that, I usually apply a compressor effect to balance the loud and quiet parts better. I’ve noticed that speakers often start out with a loud voice, and then soften the voice during the talk. For such cases, the compressor helps. It also helps balancing the sound level during Q&A where the audience might be quiet or loud compared to the speaker depending on the layout of the venue.

Once the video and audio are cut and filtered, we need some intro and exit screens. I create these using LibreOffice Impress. I have created a template for the title page with the title of the talk and the name of the speaker, followed by a slide with room for the sponsor logo. This has a white background as logos mix badly with the crazy yellow colour of foss-gbg. Finally there is an exit slide which just says foss-gbg.se. I then export the slides to pdf and use ImageMagick to create pngs from them. Since I’m lazy, I just produce huge pngs that I mogrify to the right size. The entire flow looks like this:

libreoffice --headless --convert-to pdf slides.odp
convert -density 300 -depth 8 -quality 85 slides.pdf slides.png
mogrify -resize 1920x1080 slides*.png

The very last step of the process is to overlap the intro and exit screens with the start and end of the video in a good way. I mix this with fading the audio in and out. The trickiest part is fading in, as it is nice to hear the first words of the speaker but you don’t want the noise from the audience. I’ve found that no matter what, you need to fade in the sound, even if the fade only lasts for a fraction of a second. Fading out is easy as things usually end in an applause.

Then it is all about clicking render, remembering to change the name of the output file and uploading to the foss-gbg YouTube channel.

Categories: FLOSS Project Planets

PyBites: PyBites Twitter Digest - Issue 36, 2018

Planet Python - Sun, 2018-11-18 02:44
Pycon US 2019 registration is open! Get your early bird tickets now!

Registration for @pycon 2019 is open & once again we are selling the first 800 tickets at a discounted rate! Don't… https://t.co/ck27UnY2vv

— Python Software (@ThePSF) November 17, 2018 Python 3 is the way!

Last month, 39.3% of downloads from PyPI were for Python 3 (excluding downloads where we don’t know what Python ver… https://t.co/oxbuo9WJiU

— Donald Stufft (@dstufft) November 02, 2018 What a brilliant story! Hard work, dedication to the cause and intentional practice pays off!

Submitted by @Erik

How I switched careers and got a developer job in 10 months: a true story https://t.co/4zEk07PkYK

— freeCodeCamp.org (@freeCodeCamp) November 17, 2018 We can delete Github Issues now. Wow!

Submitted by @BryanKimani

You've been asking for it. You know the issue(s). Delete 'em.

Categories: FLOSS Project Planets
