FLOSS Project Planets

The Digital Cat: Flask project setup: TDD, Docker, Postgres and more - Part 2

Planet Python - Mon, 2020-07-06 08:00

In this series of posts I explore the development of a Flask project with a setup that is built with efficiency and tidiness in mind, using TDD, Docker and Postgres.

Catch-up

In the previous post we started from an empty project and learned how to add the minimal code to run a Flask project. Then we created a static configuration file and a management script that wraps the flask and docker-compose commands to run the application with a specific configuration.

In this post I will show you how to run a production-ready database alongside your code in a Docker container, both in your development setup and for the tests.

Step 1 - Adding a database container

A database is an integral part of a web application, so in this step I will add my database of choice, Postgres, to the project setup. To do this I need to add a service in the docker-compose configuration file

File: docker/development.yml

version: '3.4'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_HOSTNAME: ${POSTGRES_HOSTNAME}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "${POSTGRES_PORT}:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build:
      context: ${PWD}
      dockerfile: docker/Dockerfile
    environment:
      FLASK_ENV: ${FLASK_ENV}
      FLASK_CONFIG: ${FLASK_CONFIG}
    command: flask run --host 0.0.0.0
    volumes:
      - ${PWD}:/opt/code
    ports:
      - "5000:5000"

volumes:
  pgdata:

The variables starting with POSTGRES_ are requested by the Postgres Docker image. In particular, remember that POSTGRES_DB is the database that gets created by default when the container is first run, and also the one that contains data about the other databases, so for the application we usually want to use a different one.

Notice also that for the db service I'm creating a persistent volume, so that the content of the database is not lost when we tear down the container. For this service I'm using the default image, so no build step is needed.

To orchestrate this setup we need to add those variables to the JSON configuration

File: config/development.json

[ { "name": "FLASK_ENV", "value": "development" }, { "name": "FLASK_CONFIG", "value": "development" }, { "name": "POSTGRES_DB", "value": "postgres" }, { "name": "POSTGRES_USER", "value": "postgres" }, { "name": "POSTGRES_HOSTNAME", "value": "localhost" }, { "name": "POSTGRES_PORT", "value": "5432" }, { "name": "POSTGRES_PASSWORD", "value": "postgres" } ]

These are all development variables so there are no secrets. In production we will need a way to keep the secrets in a safe place and convert them into environment variables. The AWS Secret Manager for example can directly map secrets into environment variables passed to the containers, saving you from having to explicitly connect to the service with the API.

We can now run ./manage.py compose up -d to check that the database container works properly, and ./manage.py compose down to tear everything down afterwards. The running containers look like this

CONTAINER ID  IMAGE       COMMAND                  ...  PORTS                   NAMES
9b5828dccd1c  docker_web  "flask run --host 0.…"   ...  0.0.0.0:5000->5000/tcp  docker_web_1
4440a18a1527  postgres    "docker-entrypoint.s…"   ...  0.0.0.0:5432->5432/tcp  docker_db_1

Now we need to connect the application to the database, and to do this we can leverage flask-sqlalchemy. As we will use this at every stage of the life of the application, the requirement goes among the production ones. We also need psycopg2, the library used to connect to Postgres.

File: requirements/production.txt

Flask
flask-sqlalchemy
psycopg2

Remember to run pip install -r requirements/development.txt to install the requirements locally and ./manage.py compose build web to rebuild the image.

At this point I need to create a connection string in the configuration of the application. The connection string parameters come from the same environment variables used to spin up the db container

File: application/config.py

import os

basedir = os.path.abspath(os.path.dirname(__file__))


class Config(object):
    """Base configuration"""

    user = os.environ["POSTGRES_USER"]
    password = os.environ["POSTGRES_PASSWORD"]
    hostname = os.environ["POSTGRES_HOSTNAME"]
    port = os.environ["POSTGRES_PORT"]
    database = os.environ["APPLICATION_DB"]

    SQLALCHEMY_DATABASE_URI = (
        f"postgresql+psycopg2://{user}:{password}@{hostname}:{port}/{database}"
    )
    SQLALCHEMY_TRACK_MODIFICATIONS = False


class ProductionConfig(Config):
    """Production configuration"""


class DevelopmentConfig(Config):
    """Development configuration"""


class TestingConfig(Config):
    """Testing configuration"""

    TESTING = True

As you can see, here I use the variable APPLICATION_DB and not POSTGRES_DB, so I need to specify that as well in the config file

File: config/development.json

[ { "name": "FLASK_ENV", "value": "development" }, { "name": "FLASK_CONFIG", "value": "development" }, { "name": "POSTGRES_DB", "value": "postgres" }, { "name": "POSTGRES_USER", "value": "postgres" }, { "name": "POSTGRES_HOSTNAME", "value": "localhost" }, { "name": "POSTGRES_PORT", "value": "5432" }, { "name": "POSTGRES_PASSWORD", "value": "postgres" }, { "name": "APPLICATION_DB", "value": "application" } ]

At this point the application container needs to access some of the Postgres environment variables and the APPLICATION_DB one

File: docker/development.yml

version: '3.4'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_HOSTNAME: ${POSTGRES_HOSTNAME}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "${POSTGRES_PORT}:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build:
      context: ${PWD}
      dockerfile: docker/Dockerfile
    environment:
      FLASK_ENV: ${FLASK_ENV}
      FLASK_CONFIG: ${FLASK_CONFIG}
      APPLICATION_DB: ${APPLICATION_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_HOSTNAME: ${POSTGRES_HOSTNAME}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PORT: ${POSTGRES_PORT}
    command: flask run --host 0.0.0.0
    volumes:
      - ${PWD}:/opt/code
    ports:
      - "5000:5000"

volumes:
  pgdata:

Running compose now spins up both Flask and Postgres but the application is not properly connected to the database yet.

Resources

Step 2 - Connecting the application and the database

To connect the Flask application with the database running in the container I need to initialise an SQLAlchemy object and add it to the application factory.

File: application/models.py

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

File: application/app.py

from flask import Flask


def create_app(config_name):
    app = Flask(__name__)
    config_module = f"application.config.{config_name.capitalize()}Config"

    app.config.from_object(config_module)

    from application.models import db

    db.init_app(app)

    @app.route("/")
    def hello_world():
        return "Hello, World!"

    return app

A pretty standard way to manage the database in Flask is to use flask-migrate, which adds some commands that allow us to create migrations and apply them.

With flask-migrate you create the migrations folder once and for all with flask db init and then, every time you change your models, run flask db migrate -m "Some message" and flask db upgrade. As both db init and db migrate create files in the current directory, we now run into a problem that every Docker-based setup has to face: file permissions.

The situation is the following: the application is running in the Docker container as root, and there is no connection between the user namespace in the container and that of the host. The result is that if the Docker container creates files in a directory that is mounted from the host (like the one that contains the application code in our example), those files end up owned by root. While this doesn't make it impossible to work (we can usually become root on our development machines), it is annoying to say the least. The solution is to run those commands from outside the container, but this requires the Flask application to be configured.

Fortunately I wrapped the flask command in the manage.py script, which loads all the required environment variables. Let's add flask-migrate to the production requirements

File: requirements/production.txt

Flask
flask-sqlalchemy
psycopg2
flask-migrate

Remember to run pip install -r requirements/development.txt to install the requirements locally and ./manage.py compose build web to rebuild the image.

Now we can initialise a Migrate object and add it to the application factory

File: application/models.py

from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

db = SQLAlchemy()
migrate = Migrate()

File: application/app.py

from flask import Flask


def create_app(config_name):
    app = Flask(__name__)
    config_module = f"application.config.{config_name.capitalize()}Config"

    app.config.from_object(config_module)

    from application.models import db, migrate

    db.init_app(app)
    migrate.init_app(app, db)

    @app.route("/")
    def hello_world():
        return "Hello, World!"

    return app

I can now run the database initialisation script

$ ./manage.py flask db init
  Creating directory /home/leo/devel/flask-tutorial/migrations ... done
  Creating directory /home/leo/devel/flask-tutorial/migrations/versions ... done
  Generating /home/leo/devel/flask-tutorial/migrations/env.py ... done
  Generating /home/leo/devel/flask-tutorial/migrations/README ... done
  Generating /home/leo/devel/flask-tutorial/migrations/script.py.mako ... done
  Generating /home/leo/devel/flask-tutorial/migrations/alembic.ini ... done
  Please edit configuration/connection/logging settings in
  '/home/leo/devel/flask-tutorial/migrations/alembic.ini' before proceeding.

When we start creating models we will use the commands ./manage.py flask db migrate and ./manage.py flask db upgrade. You will find a complete example at the end of this post.

Resources

Step 3 - Testing setup

I want to use a TDD approach as much as possible when developing my applications, so I need to set up a good testing environment upfront, and it has to be as ephemeral as possible. It is not unusual in big projects to create (or scale up) infrastructure components explicitly to run tests, and through Docker and docker-compose we can easily do the same. Namely, I will:

  1. spin up a test database in a container without permanent volumes
  2. initialise it
  3. run all the tests against it
  4. tear down the container

This approach has one big advantage: it requires no previous setup and can thus be executed on infrastructure created on the fly. It also has disadvantages, however, as it can slow down the test run, which should be as fast as possible in a TDD setup. Tests that involve the database, though, should be considered integration tests, and should not be run continuously in a TDD process; keeping them separate is impossible (or very hard) when using a framework that merges the concepts of entity and database model. If you want to know more about this, read my post on the clean architecture and the book that I wrote on the subject.

Another advantage of this setup is that we might need other things during the tests, e.g. Celery, other databases, or other servers. They can all be created through the docker-compose file.

Generally speaking testing is an umbrella under which many different things can happen. As I will use pytest I can run the full suite, but I might want to select specific tests, mentioning a single file or using the powerful -k option that allows me to select tests by pattern-matching their name. For this reason I want to map the management command line to that of pytest.

Let's add pytest to the testing requirements

File: requirements/testing.txt

-r production.txt

pytest
coverage
pytest-cov

As you can see I also use the coverage plugin to keep an eye on how well I cover the code with the tests. Remember to run pip install -r requirements/development.txt to install the requirements locally and ./manage.py compose build web to rebuild the image.

File: manage.py

#! /usr/bin/env python

import os
import json
import signal
import subprocess
import time

import click


# Ensure an environment variable exists and has a value
def setenv(variable, default):
    os.environ[variable] = os.getenv(variable, default)


setenv("APPLICATION_CONFIG", "development")


def configure_app(config):
    # Read configuration from the relative JSON file
    with open(os.path.join("config", f"{config}.json")) as f:
        config_data = json.load(f)

    # Convert the config into a usable Python dictionary
    config_data = dict((i["name"], i["value"]) for i in config_data)

    for key, value in config_data.items():
        setenv(key, value)


@click.group()
def cli():
    pass


@cli.command(context_settings={"ignore_unknown_options": True})
@click.argument("subcommand", nargs=-1, type=click.Path())
def flask(subcommand):
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = ["flask"] + list(subcommand)

    try:
        p = subprocess.Popen(cmdline)
        p.wait()
    except KeyboardInterrupt:
        p.send_signal(signal.SIGINT)
        p.wait()


def docker_compose_cmdline(config):
    configure_app(os.getenv("APPLICATION_CONFIG"))

    docker_compose_file = os.path.join("docker", f"{config}.yml")

    if not os.path.isfile(docker_compose_file):
        raise ValueError(f"The file {docker_compose_file} does not exist")

    return [
        "docker-compose",
        "-p",
        config,
        "-f",
        docker_compose_file,
    ]


@cli.command(context_settings={"ignore_unknown_options": True})
@click.argument("subcommand", nargs=-1, type=click.Path())
def compose(subcommand):
    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + list(subcommand)

    try:
        p = subprocess.Popen(cmdline)
        p.wait()
    except KeyboardInterrupt:
        p.send_signal(signal.SIGINT)
        p.wait()


@cli.command()
@click.argument("filenames", nargs=-1)
def test(filenames):
    os.environ["APPLICATION_CONFIG"] = "testing"
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + ["up", "-d"]
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + ["logs", "db"]
    logs = subprocess.check_output(cmdline)
    while "ready to accept connections" not in logs.decode("utf-8"):
        time.sleep(0.1)
        logs = subprocess.check_output(cmdline)

    cmdline = ["pytest", "-svv", "--cov=application", "--cov-report=term-missing"]
    cmdline.extend(filenames)
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + ["down"]
    subprocess.call(cmdline)


if __name__ == "__main__":
    cli()

Notable changes are

  • The environment configuration code is now in the function configure_app. This allows me to force the variable APPLICATION_CONFIG inside the script and then configure the environment, which saves me from having to prefix the test command with APPLICATION_CONFIG=testing.
  • Both commands flask and compose use the development configuration. Since that is the default value of the APPLICATION_CONFIG variable they just have to call the configure_app function.
  • The docker-compose command line is needed both in the compose and in the test commands, so I isolated some code into a function called docker_compose_cmdline, which returns a list as needed by the subprocess functions. The command line now also uses the -p (project name) option to give a prefix to the containers. This way we can run tests while the development server is running.
  • The test command forces APPLICATION_CONFIG to be testing, which loads the file config/testing.json, then runs docker-compose using the file docker/testing.yml (neither file has been created yet), runs the pytest command line, and tears down the testing database container. Before running the tests the script waits for the service to be available, as Postgres doesn't accept connections until the database is ready. A short usage example follows this list.
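For instance, once the two testing files shown below are in place, the test command can be invoked with or without explicit test files; any file names are simply forwarded to the pytest command line. A couple of hypothetical invocations (output omitted) would be

$ ./manage.py test
$ ./manage.py test tests/test_user.py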

File: config/testing.json

[ { "name": "FLASK_ENV", "value": "production" }, { "name": "FLASK_CONFIG", "value": "testing" }, { "name": "POSTGRES_DB", "value": "postgres" }, { "name": "POSTGRES_USER", "value": "postgres" }, { "name": "POSTGRES_HOSTNAME", "value": "localhost" }, { "name": "POSTGRES_PORT", "value": "5433" }, { "name": "POSTGRES_PASSWORD", "value": "postgres" }, { "name": "APPLICATION_DB", "value": "test" } ]

Note that here I specified 5433 for POSTGRES_PORT. This allows us to spin up the test database container while the development one is running, as that uses port 5432, and you can't have two different containers publishing the same port on the host. A more general solution could be to let Docker pick a random host port for the container and then use that, but this requires a bit more code to be properly implemented, so I will come back to this problem when setting up the scenarios.
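As a rough sketch of that more general idea (not part of the project's code), assuming the compose file publishes port 5432 without fixing the host port, one could ask docker-compose which port Docker picked:

import subprocess


def get_postgres_host_port(config="testing"):
    # Ask docker-compose which host port was mapped to the container's 5432.
    # Project name and file path mirror the setup above but are assumptions here.
    cmdline = [
        "docker-compose",
        "-p", config,
        "-f", f"docker/{config}.yml",
        "port", "db", "5432",
    ]

    # The output looks like "0.0.0.0:49153"; the part after the colon is the host port
    output = subprocess.check_output(cmdline).decode("utf-8").strip()

    return int(output.rsplit(":", 1)[1])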

The last piece of setup that we need is the orchestration configuration for docker-compose

File: docker/testing.yml

version: '3.4'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_HOSTNAME: ${POSTGRES_HOSTNAME}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "${POSTGRES_PORT}:5432"

Now we can run ./manage.py test and get

Creating network "testing_default" with the default driver Creating testing_db_1 ... done ========================= test session starts ========================= platform linux -- Python 3.7.5, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- /home/leo/devel/flask-tutorial/venv3/bin/python3 cachedir: .pytest_cache rootdir: /home/leo/devel/flask-tutorial plugins: cov-2.10.0 collected 0 items Coverage.py warning: No data was collected. (no-data-collected) ----------- coverage: platform linux, python 3.7.5-final-0 ----------- Name Stmts Miss Cover Missing ----------------------------------------------------- application/app.py 11 11 0% 1-21 application/config.py 13 13 0% 1-31 application/models.py 4 4 0% 1-5 ----------------------------------------------------- TOTAL 28 28 0% ======================= no tests ran in 0.07s ======================= Stopping testing_db_1 ... done Removing testing_db_1 ... done Removing network testing_default Resources Step 4 - Initialise the testing database

When you develop a web application and then run it in production, you typically create the database once and then upgrade it through migrations. When running tests we need to create the database every time, so I need to add a way to run SQL commands on the testing database before I run pytest.

As running SQL commands directly on the database is often useful, I will create a function that wraps the connection boilerplate. The command that creates the initial database will then be trivial.

File: manage.py

#! /usr/bin/env python

import os
import json
import signal
import subprocess
import time

import click
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT


# Ensure an environment variable exists and has a value
def setenv(variable, default):
    os.environ[variable] = os.getenv(variable, default)


setenv("APPLICATION_CONFIG", "development")


def configure_app(config):
    # Read configuration from the relative JSON file
    with open(os.path.join("config", f"{config}.json")) as f:
        config_data = json.load(f)

    # Convert the config into a usable Python dictionary
    config_data = dict((i["name"], i["value"]) for i in config_data)

    for key, value in config_data.items():
        setenv(key, value)


@click.group()
def cli():
    pass


@cli.command(context_settings={"ignore_unknown_options": True})
@click.argument("subcommand", nargs=-1, type=click.Path())
def flask(subcommand):
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = ["flask"] + list(subcommand)

    try:
        p = subprocess.Popen(cmdline)
        p.wait()
    except KeyboardInterrupt:
        p.send_signal(signal.SIGINT)
        p.wait()


def docker_compose_cmdline(config):
    configure_app(os.getenv("APPLICATION_CONFIG"))

    docker_compose_file = os.path.join("docker", f"{config}.yml")

    if not os.path.isfile(docker_compose_file):
        raise ValueError(f"The file {docker_compose_file} does not exist")

    return [
        "docker-compose",
        "-p",
        config,
        "-f",
        docker_compose_file,
    ]


@cli.command(context_settings={"ignore_unknown_options": True})
@click.argument("subcommand", nargs=-1, type=click.Path())
def compose(subcommand):
    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + list(subcommand)

    try:
        p = subprocess.Popen(cmdline)
        p.wait()
    except KeyboardInterrupt:
        p.send_signal(signal.SIGINT)
        p.wait()


def run_sql(statements):
    conn = psycopg2.connect(
        dbname=os.getenv("POSTGRES_DB"),
        user=os.getenv("POSTGRES_USER"),
        password=os.getenv("POSTGRES_PASSWORD"),
        host=os.getenv("POSTGRES_HOSTNAME"),
        port=os.getenv("POSTGRES_PORT"),
    )

    conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

    cursor = conn.cursor()

    for statement in statements:
        cursor.execute(statement)

    cursor.close()
    conn.close()


@cli.command()
def create_initial_db():
    configure_app(os.getenv("APPLICATION_CONFIG"))

    try:
        run_sql([f"CREATE DATABASE {os.getenv('APPLICATION_DB')}"])
    except psycopg2.errors.DuplicateDatabase:
        print(
            f"The database {os.getenv('APPLICATION_DB')} already exists and will not be recreated"
        )


@cli.command()
@click.argument("filenames", nargs=-1)
def test(filenames):
    os.environ["APPLICATION_CONFIG"] = "testing"
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + ["up", "-d"]
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + ["logs", "db"]
    logs = subprocess.check_output(cmdline)
    while "ready to accept connections" not in logs.decode("utf-8"):
        time.sleep(0.1)
        logs = subprocess.check_output(cmdline)

    run_sql([f"CREATE DATABASE {os.getenv('APPLICATION_DB')}"])

    cmdline = ["pytest", "-svv", "--cov=application", "--cov-report=term-missing"]
    cmdline.extend(filenames)
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline(os.getenv("APPLICATION_CONFIG")) + ["down"]
    subprocess.call(cmdline)


if __name__ == "__main__":
    cli()

As you can see I took the opportunity to write the create_initial_db command as well, which runs the very same SQL command that creates the testing database, but in whatever configuration is currently active.

Before moving on I think it's time to refactor the manage.py file. Refactoring is not mandatory, but I feel that some parts of the script are not generic enough, and when I add the scenarios I will definitely need my functions to be flexible.

The new script is

#! /usr/bin/env python

import os
import json
import signal
import subprocess
import time

import click
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT


# Ensure an environment variable exists and has a value
def setenv(variable, default):
    os.environ[variable] = os.getenv(variable, default)


setenv("APPLICATION_CONFIG", "development")

APPLICATION_CONFIG_PATH = "config"
DOCKER_PATH = "docker"


def app_config_file(config):
    return os.path.join(APPLICATION_CONFIG_PATH, f"{config}.json")


def docker_compose_file(config):
    return os.path.join(DOCKER_PATH, f"{config}.yml")


def configure_app(config):
    # Read configuration from the relative JSON file
    with open(app_config_file(config)) as f:
        config_data = json.load(f)

    # Convert the config into a usable Python dictionary
    config_data = dict((i["name"], i["value"]) for i in config_data)

    for key, value in config_data.items():
        setenv(key, value)


@click.group()
def cli():
    pass


@cli.command(context_settings={"ignore_unknown_options": True})
@click.argument("subcommand", nargs=-1, type=click.Path())
def flask(subcommand):
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = ["flask"] + list(subcommand)

    try:
        p = subprocess.Popen(cmdline)
        p.wait()
    except KeyboardInterrupt:
        p.send_signal(signal.SIGINT)
        p.wait()


def docker_compose_cmdline(commands_string=None):
    config = os.getenv("APPLICATION_CONFIG")
    configure_app(config)

    compose_file = docker_compose_file(config)

    if not os.path.isfile(compose_file):
        raise ValueError(f"The file {compose_file} does not exist")

    command_line = [
        "docker-compose",
        "-p",
        config,
        "-f",
        compose_file,
    ]

    if commands_string:
        command_line.extend(commands_string.split(" "))

    return command_line


@cli.command(context_settings={"ignore_unknown_options": True})
@click.argument("subcommand", nargs=-1, type=click.Path())
def compose(subcommand):
    cmdline = docker_compose_cmdline() + list(subcommand)

    try:
        p = subprocess.Popen(cmdline)
        p.wait()
    except KeyboardInterrupt:
        p.send_signal(signal.SIGINT)
        p.wait()


def run_sql(statements):
    conn = psycopg2.connect(
        dbname=os.getenv("POSTGRES_DB"),
        user=os.getenv("POSTGRES_USER"),
        password=os.getenv("POSTGRES_PASSWORD"),
        host=os.getenv("POSTGRES_HOSTNAME"),
        port=os.getenv("POSTGRES_PORT"),
    )

    conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

    cursor = conn.cursor()

    for statement in statements:
        cursor.execute(statement)

    cursor.close()
    conn.close()


def wait_for_logs(cmdline, message):
    logs = subprocess.check_output(cmdline)
    while message not in logs.decode("utf-8"):
        time.sleep(0.1)
        logs = subprocess.check_output(cmdline)


@cli.command()
def create_initial_db():
    configure_app(os.getenv("APPLICATION_CONFIG"))

    try:
        run_sql([f"CREATE DATABASE {os.getenv('APPLICATION_DB')}"])
    except psycopg2.errors.DuplicateDatabase:
        print(
            f"The database {os.getenv('APPLICATION_DB')} already exists and will not be recreated"
        )


@cli.command()
@click.argument("filenames", nargs=-1)
def test(filenames):
    os.environ["APPLICATION_CONFIG"] = "testing"
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = docker_compose_cmdline("up -d")
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline("logs db")
    wait_for_logs(cmdline, "ready to accept connections")

    run_sql([f"CREATE DATABASE {os.getenv('APPLICATION_DB')}"])

    cmdline = ["pytest", "-svv", "--cov=application", "--cov-report=term-missing"]
    cmdline.extend(filenames)
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline("down")
    subprocess.call(cmdline)


if __name__ == "__main__":
    cli()

Notable changes:

  • I created two new functions app_config_file and docker_compose_file that encapsulate the creation of the file paths.
  • I isolated the code that waits for a message in the database container logs, creating the wait_for_logs function.
  • The docker_compose_cmdline function now receives a string and converts it into a list internally. This way expressing commands is more natural, as it doesn't require the ugly list syntax that subprocess works with.
Resources
  • Psycopg – PostgreSQL database adapter for Python
Step 5 - Fixtures for tests

Pytest uses fixtures for tests, so we should prepare some basic ones that will be generally useful. First let's include pytest-flask, which already provides some basic fixtures

File: requirements/testing.txt

-r production.txt

pytest
coverage
pytest-cov
pytest-flask

Then add the app and the database fixtures to the tests/conftest.py file. The first is required by pytest-flask itself (it's used by its other fixtures) and the second one is useful every time you need to interact with the database.

File: tests/conftest.py

import pytest

from application.app import create_app
from application.models import db


@pytest.fixture
def app():
    app = create_app("testing")

    return app


@pytest.fixture(scope="function")
def database(app):
    with app.app_context():
        db.drop_all()
        db.create_all()

        yield db

As you can see, the database fixture uses the drop_all and create_all methods to reset the database. The reason is that this fixture is recreated for each function, and we can't be sure a previous function left the database clean. As a matter of fact, we can be almost sure of the opposite.

Resources

Bonus step - A full TDD example

Before wrapping up this post, I want to give you a full example of the TDD process that I would follow given the current state of the setup, which is already complete enough to start the development of an application. Let's pretend my goal is to add a User model that can be created with an id (primary key) and an email field.

First of all I write a test that creates a user in the database and then retrieves it, checking its attributes

File: tests/test_user.py

from application.models import User


def test__create_user(database):
    email = "some.email@server.com"

    user = User(email=email)
    database.session.add(user)
    database.session.commit()

    user = User.query.first()

    assert user.email == email

Running this test results in an error, because the User model does not exist

$ ./manage.py test
Creating network "testing_default" with the default driver
Creating testing_db_1 ... done
====================================== test session starts ======================================
platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 -- /home/leo/devel/flask-tutorial/venv3/bin/python3
cachedir: .pytest_cache
rootdir: /home/leo/devel/flask-tutorial
plugins: flask-1.0.0, cov-2.10.0
collected 0 items / 1 error

============================================ ERRORS =============================================
___________________________ ERROR collecting tests/tests/test_user.py ___________________________
ImportError while importing test module '/home/leo/devel/flask-tutorial/tests/tests/test_user.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
venv3/lib/python3.7/site-packages/_pytest/python.py:511: in _importtestmodule
    mod = self.fspath.pyimport(ensuresyspath=importmode)
venv3/lib/python3.7/site-packages/py/_path/local.py:704: in pyimport
    __import__(modname)
venv3/lib/python3.7/site-packages/_pytest/assertion/rewrite.py:152: in exec_module
    exec(co, module.__dict__)
tests/tests/test_user.py:1: in <module>
    from application.models import User
E   ImportError: cannot import name 'User' from 'application.models' (/home/leo/devel/flask-tutorial/application/models.py)

----------- coverage: platform linux, python 3.7.5-final-0 -----------
Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
application/app.py         11      9    18%   6-21
application/config.py      14     14     0%   1-32
application/models.py       4      0   100%
-----------------------------------------------------
TOTAL                      29     23    21%

=================================== short test summary info ===================================
ERROR tests/tests/test_user.py
!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!
====================================== 1 error in 0.20s =======================================
Stopping testing_db_1 ... done
Removing testing_db_1 ... done
Removing network testing_default
$

I won't show all the steps of the strict TDD methodology here; instead I will implement the final solution directly, which is

File: application/models.py

from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

db = SQLAlchemy()
migrate = Migrate()


class User(db.Model):
    __tablename__ = "users"

    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String, unique=True, nullable=False)

With this model the test passes

$ ./manage.py test
Creating network "testing_default" with the default driver
Creating testing_db_1 ... done
================================== test session starts ==================================
platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 -- /home/leo/devel/flask-tutorial/venv3/bin/python3
cachedir: .pytest_cache
rootdir: /home/leo/devel/flask-tutorial
plugins: flask-1.0.0, cov-2.10.0
collected 1 item

tests/test_user.py::test__create_user PASSED

----------- coverage: platform linux, python 3.7.5-final-0 -----------
Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
application/app.py         11      1    91%   19
application/config.py      14      0   100%
application/models.py       8      0   100%
-----------------------------------------------------
TOTAL                      33      1    97%

=================================== 1 passed in 0.14s ===================================
Stopping testing_db_1 ... done
Removing testing_db_1 ... done
Removing network testing_default
$

Please note that this is a very simple example and that in a real case I would add some other tests before accepting this code. In particular, we should check what happens when the email field is empty, and maybe also test some validation on that field. A sketch of one such test follows.
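As a minimal sketch (not part of the original example), one such extra test could rely on the database fixture defined earlier and on SQLAlchemy raising IntegrityError when the NOT NULL constraint on email is violated:

import pytest
from sqlalchemy.exc import IntegrityError

from application.models import User


def test__create_user_without_email_fails(database):
    user = User(email=None)
    database.session.add(user)

    # The column is declared with nullable=False, so the commit should fail
    with pytest.raises(IntegrityError):
        database.session.commit()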

Once we are satisfied with the code we can generate the migration in the database. Spin up the development environment with

$ ./manage.py compose up -d

If this is the first time I spin up the environment I have to create the application database and to initialise the migrations, so I run

$ ./manage.py create-initial-db

which returns no output, and

$ ./manage.py flask db init
  Creating directory /home/leo/devel/flask-tutorial/migrations ... done
  Creating directory /home/leo/devel/flask-tutorial/migrations/versions ... done
  Generating /home/leo/devel/flask-tutorial/migrations/env.py ... done
  Generating /home/leo/devel/flask-tutorial/migrations/README ... done
  Generating /home/leo/devel/flask-tutorial/migrations/script.py.mako ... done
  Generating /home/leo/devel/flask-tutorial/migrations/alembic.ini ... done
  Please edit configuration/connection/logging settings in
  '/home/leo/devel/flask-tutorial/migrations/alembic.ini' before proceeding.

Once this is done (or if that was already done), I can create the migration with

$ ./manage.py flask db migrate -m "Initial user model"
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added table 'users'
  Generating /home/leo/devel/flask-tutorial/migrations/versions/7a09d7f8a8fa_initial_user_model.py ... done

and finally apply the migration with

$ ./manage.py flask db upgrade
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> 7a09d7f8a8fa, Initial user model

After this I can safely commit my code and move on with the next requirement.

Final words

I hope this post has already shown you why a good setup can make the difference. The project is clean, and wrapping the commands in the management script plus the centralised config proved to be a good choice, as it allowed me to solve the problem of migrations and testing in (what I think is) an elegant way. In the next post I'll show you how to easily create scenarios where you can test queries with only specific data in the database. If you find my posts useful please share them with whoever you think might be interested.

Feedback

Feel free to reach me on Twitter if you have questions. The GitHub issues page is the best place to submit corrections.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Week 5 : Adding Functionality to the UI

Planet Python - Mon, 2020-07-06 07:55

The first phase of coding is over, and I am really happy that I passed :)) 4 weeks back, I wasn't sure of my capabilities, but every day now feels good; like I am going to accomplish a lot! Every week has been productive, but this week, I was able to accomplish lots of features, which really made my mentors happy :)) 

What did you do this week?

Since the UI design had been done, I spent this week adding functionality to the buttons and removing bugs. Not sure if it would make sense without the UI, but here is what I did last week:

  • Multiple Files can now be added simultaneously
  • Multiple Files can be displayed simultaneously
  • Extra buttons like Remove and Remove All for clearing the list
  • Double clicking on any KML file in the listWidget opens up a custom dialog box UI.
  • Bug fixing is done.
  • Single button "Add KML File" for adding & loading multiple KML Files simultaneously
  • Enhancement --> duplicate files are not loaded again.
  • Reduced size of the ListWidget; it will shrink further after removing the extra part of the UI (colours, linewidth layout)
  • Check/uncheck feature to load a KML layer

What is coming up next?

I am mostly done with this part, but 2-3 important features are still left, and they are tough to implement:

  • Add functionality to KML Overlay checkbox ( Disabled at the moment in UI)
  • Add functionality for customize UI for individual KML Files
  • Add functionality to Merge KML Files button

If I am able to do this, it would be a strong boost to my work, as I would be done with the main aim of my project! 
If time permits, I'll start writing tests too.

Did you get stuck anywhere?

I was on such a roll that I wanted to finish the whole feature last week. But there is a method, plot_kml, which has confounded me. Sometimes I think I understand it, but it slips away. It's crucial to the remaining features, so I have to work it out!

See you next week :))

Categories: FLOSS Project Planets

Week #5 Status Report [Preset Editor MyPaint Engine]

Planet KDE - Mon, 2020-07-06 07:21
The second month of Google Summer of Code has started. This month my prime focus will be to improve the MyPaint brush engine, fix some issues, and add a better preset editor than the ones provided by other painting applications.
Last week I worked solely on the preset editor for the newly available MyPaint brush engine in Krita.
This was in accordance with the mockup presented in the proposal and contained only the very basic 3-4 settings for the engine. Almost all of the applications with MyPaint integration provide just those 3-4 basic settings, though the library does provide an API to expose all of the mentioned settings.
Preset Editor
This week I will look at how to add more settings and customization options to it. I have something like this in mind, though as per the standard procedure I will discuss it with Boud first.
Mockup for Preset Editor with All the Settings
This week I shall focus on this and will try to add as many settings as possible to the widget.
Till then, Good Bye :)

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC: Week #6

Planet Python - Mon, 2020-07-06 07:08

Hi!

What did you do this week?

I implemented the offset feature in ‘get_tiles_straight’, both with and without a region of interest, for raw and ‘memory’ datasets, fixed the bug in ‘get_tiles_w_copy’ (which marks the completion of the offset feature for the rest of the formats except HDF5), and wrote tests for benchmarking. I spent the rest of the time looking into possible ways of implementing the reshaping feature.

What is coming up next?

I’ll benchmark the offset feature for the different formats, try to improve it and finish it for HDF5. I’ll continue working towards the general reshaping feature and work on its integration with the UDF interface.

Did you get stuck anywhere?

I haven’t figured out how to deal with the K2IS format properly. I’m not sure if I should implement my feature into the existing sync flag, but I’m working on it. In raw files, setting a negative offset results in empty but colored frames in the GUI but not when used with the Python API in a Jupyter notebook.

Categories: FLOSS Project Planets

OpenSense Labs: The definitive guide to Drupal 9

Planet Drupal - Mon, 2020-07-06 06:12
By Shankar, Mon, 07/06/2020 - 15:42

Technology is changing at the speed of light. Fuelled by the democratisation of innovation, the tempo of change and adoption is multiplying. Today, 5G is a major talking point in the industry. IoT is changing at scale. Data is becoming the centre of the IT universe with digital twins, spatial computing, artificial intelligence, deep analytics and new applied versions of technology all being dependent on data platforms. And, Hyperloop is leveraging magnetic levitation and big vacuum pumps to let those bus-sized vehicles zip along at speeds approaching Mach 1. In a world, where disruptive technologies are changing the ways we see everything around us, what happens to the existing technological solutions? Continuous innovation is the biggest mantra that can help them sustain in the long run and evolve with changing technological landscape. This is exactly how Drupal, one of the leading open-source content management systems, has remained powerful after almost two decades of existence. Introducing Drupal 9!


Since Dries Buytaert open-sourced the software behind Drop.org and released Drupal 1.0.0 on 15th January 2001, it has come a long way. It has weathered headwinds. It has grown rapidly. It has powered small and medium businesses to large enterprises around the world. Supported by an open-source community, which is made up of people from different parts of the globe, it has kept on becoming better and better with time. Drupal 9, the new avatar of Drupal, with intuitive solutions for empowering business users, cutting-edge new features that help dive into new digital channels, and easy upgrades, is the future-ready CMS. Amidst all the changes that are happening in the digital landscape, Drupal is here to thrive! Websites and web applications, built using Drupal 9, will be much more elegant!

The excitement in the air: Launch of Drupal 9

When Drupal 8 was released back in 2015, it was a totally different world altogether. The celebrations were in full swing. But, as a matter of fact, the Drupal 9 launch in 2020 wasn't a low-key affair either. In spite of the Covid-19 pandemic, the Drupal Community and the Drupal Association made sure that the virtual celebrations were right on top. The community built CelebrateDrupal.org as a central hub for virtual celebrations, enabling Drupal aficionados to share their excitement.


Ever since Drupal 9.0.0-beta1 was released, which included all the dependency updates, updated platform requirements, stable APIs, and the features that would ship with Drupal 9, excitement has been sky-high. The beta release marked Drupal 9 as API-complete. Eventually, on June 3, 2020, the world saw the simultaneous release of Drupal 9.0.0 and Drupal 8.9.0.


Drupal 8.9 is a long-term support version, the final minor version of Drupal 8, which will receive bug fixes and security coverage until November 2021 but no feature development. On the contrary, Drupal 9 development and support will continue beyond 2021. Drupal 8.9 includes most of the changes that Drupal 9 does and retains the backwards compatibility layers added throughout Drupal 8's release cycle. The only differences are Drupal 9's updated dependencies and the removal of deprecated code.

Source: Drupal.org

If you have an existing Drupal site, updating to Drupal 8.9 is a perfect option. This ensures maximum compatibility and the least possible alterations required for the Drupal 9 update. If you are creating a new Drupal website, you can choose between Drupal 8.9 and Drupal 9. Going for Drupal 9 would be the most logical option, as it gives you forward compatibility with later releases.

Traversing the world of Drupal 9


First things first - with the onset of Drupal 9, a rebranding has taken place as well. The new Drupal brand represents the fluidity and modularity of Drupal in addition to the Drupal Community’s strong belief system of coming together to build the best of the web.


If one asks what exactly Drupal 9 is, all you can say is that it is not a reinvention of Drupal. It is a cleaned-up version of Drupal 8. So, what's new in Drupal 9?

Drupal 9 has not only removed deprecated code but also updated third-party dependencies, ensuring longer security support for your website's building blocks and letting you leverage new capabilities.

Since the adoption of semantic versioning in Drupal 8, adding new features in minor releases of Drupal has been possible instead of waiting for major version releases. To keep the Drupal platform safe and up to date, Drupal 9 has revised some third-party dependencies:

  • Symfony: Drupal 9 uses Symfony 4.4, while Drupal 8 uses Symfony 3, and the update to Symfony 4 breaks backwards compatibility. With Symfony 3's end of life in November 2021, Drupal 8 users get enough time to strategise, plan and update to Drupal 9.
  • Twig: Drupal 9 will also move from Twig 1 to Twig 2.
  • Environment requirements: Drupal 9 will need at least PHP 7.3 for enhanced security and stability. If Drupal 9 is being run on Apache, it will require at least version 2.4.7.
  • Database backend: For all supported database backends within Drupal 9, database version requirements will be increased.
  • CKEditor: Soon, CKEditor 5 will be added in Drupal 9.x and CKEditor 4 will be deprecated for removal in Drupal 10.
  • jQuery and jQuery UI: While Drupal 9 still relies on jQuery, most of the jQuery UI components are removed from core.
  • PHPUnit: Drupal 9 requires PHPUnit 8.

Drupal 9 comes with the same structured-content-based system that all Drupalers love about it. Layout Builder in core enables you to reuse blocks and customise every part of the page. Built-in JSON:API support helps you develop progressively and fully decoupled applications. BigPipe in core ensures fantastic web performance and scalability. The built-in media library helps you manage reusable media. There is multilingual support as well. You get better keyboard navigation and accessibility. Its mobile-first UI would change your mobile experience forever. The integrated configuration management system can be used with development and staging environment support.
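To give a flavour of what the built-in JSON:API support enables, a minimal hypothetical decoupled client could fetch article nodes like this (the base URL and the "article" bundle are assumptions, not tied to any specific site):

import requests

BASE_URL = "https://example.com"  # hypothetical Drupal 9 site

response = requests.get(
    f"{BASE_URL}/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
)
response.raise_for_status()

for item in response.json()["data"]:
    # Each JSON:API resource object exposes its fields under "attributes"
    print(item["attributes"]["title"])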

Therefore, other than those provided by the updated dependencies, Drupal 9.0 does not include new features. It has the same features as Drupal 8.9. Drupal 9.x releases will continue to see new backwards-compatible features being added every six months after Drupal 9.0.

Migration to Drupal 9


While Drupal 9 is definitely the way to go, one needs to know certain things before upgrading from Drupal 7 or Drupal 8 to Drupal 9. Drupal 7 and Drupal 8 are not completely lost yet. They are here to stay for a while.

Drupal 7, which was slated to be end-of-life in November 2021, will now be getting community support till November 28, 2022. The decision comes after considering the impact of the Coronavirus outbreak and that a large number of sites are still using Drupal 7 in 2020. On the other hand, Drupal 8, which is dependent on Symfony 3 and since Symfony 3 will be end-of-life in November 2021, will, as a result, see the end of community support on November 2, 2021.

Symfony 4 will be end-of-life in November 2023. With Drupal 9 using Symfony 4.4, it is bound to stop receiving support at the end of 2023. (There is no official confirmation of dates yet for Drupal 9's EOL.) If that happens, Drupal 10 will be released in 2022, which means it will arrive before Drupal 9 becomes end-of-life.


To upgrade to Drupal 9, the know-how of upgrade tools is essential:

  • For migrating your content and site configuration, Core Migrate module suite is perfect. 
  • The Upgrade Status module would give you details on contributed project availability.
  • In case of Drupal 8 websites, the Upgrade Rector module would automate updates of several common deprecated code to the latest Drupal 9 compatible code.
  • In case of Drupal 7, the process of scanning and converting outdated code on your site can be handled by Drupal Module Upgrader.
  • Using drupal-check and/or the Drupal 8 version of Upgrade Status in your development environment helps you check whether a Drupal 8 update is also compatible with Drupal 9. You can also make use of phpstan-drupal from the command line or as part of a continuous integration system to check for deprecations and bugs.
  • You can use IDEs or code editors that understand ‘@deprecated’ annotations


The best option is to upgrade directly from Drupal 7 to Drupal 9 as this ensures that your upgraded site has maximum expected life. When your site requires a functionality provided by modules that are available in Drupal 8 but not yet in a Drupal 9 compatible release, you can also migrate to Drupal 8 first (Drupal 8.8 or 8.9) and then eventually move to Drupal 9.

While updating from Drupal 8 to Drupal 9, it is important to ensure that the hosting environment matches the platform requirements of Drupal 9. You need to update to Drupal 8.8.x or 8.9.x, update all the contributed projects and make sure that they are Drupal 9 compatible. Also, you need to make the custom code Drupal 9 compatible. Once set, all you need to do is update the core codebase to Drupal 9 and run update.php.

Future of Drupal 9

It’s very important to make Drupal more and more intuitive for all the users in the coming years. One of the foremost achievements of Drupal 9 is the streamlined upgrade experience. Upgrading from Drupal 8 to Drupal 9 is a lot easier than moving from Drupal 7 to 8. And, it will continue to be a smoother process when the time comes to migrate from Drupal 9 to Drupal 10.

Drupal 9 will continue to receive feature updates twice a year just like Drupal 8 did. For instance, the experimental Claro administration theme is being stabilised. The new Olivero frontend theme is already being developed and is being optimised for accessibility and tailored to frontend experiences. It is specifically being designed for marketers, site designers and content editors with a lot of emphasis on responsive design. Automated Updates Initiative, which began in Drupal 8, is also in the works.

There’s an awful lot of development going on behind-the-scenes. The upcoming releases of Drupal 9.x would definitely come packed with exciting new features. We are waiting!

Conclusion

Drupal is awesome because it’s always on the cutting edge. It has always been a CMS that provides extensibility, flexibility and freedom. Drupal’s foundation has always been in structured data which works really well in today’s demand for multichannel interactions. Having one of the biggest open source communities, it has the support of thousands and thousands of people adding more features to it, enhancing security and creating new extensions.

The large community of Drupal embraces change right away as the big developments happen. That is exactly why Drupal has been able to offer fantastic web experiences all these years. Drupal 9 is the result of its community’s commitment to enabling innovation and building something great.

Undoubtedly, Drupal 9 is the best and most modern version of Drupal yet. It marks another big milestone in the realm of web content management and digital experience. It’s time for you to start planning a migration path if you are still on Drupal 7 or Drupal 8. If you are starting out a new website project in Drupal, there shouldn’t be any ambiguities over choosing Drupal 9. Contact us at hello@opensenselabs.com to build the most innovative, creative and magnificent website ever using Drupal 9 or to migrate from Drupal 7 or 8 to Drupal 9.

Categories: FLOSS Project Planets

Jonathan Dowland: Review: Roku Express

Planet Debian - Mon, 2020-07-06 05:55

I don't generally write consumer reviews, here or elsewhere; but I have been so impressed by this one I wanted to mention it.

For Holly's birthday this year, taking place under Lockdown, we decided to buy a year's subscription to "Disney+". Our current TV receiver (A Humax Freesat box) doesn't support it so I needed to find some other way to get it onto the TV.

After a short bit of research, I bought the "Roku Express" streaming media player. This is the most basic streamer that Roku make, bottom of their range. For a little bit more money you can get a model which supports 4K (although my TV obviously doesn't: it, and the basic Roku, top out at 1080p) and a bit more gets you a "stick" form-factor and a Bluetooth remote (rather than line-of-sight IR).

I paid £20 for the most basic model and it Just Works. The receiver is very small but sits comfortably next to my satellite receiver-box. I don't have any issues with line-of-sight for the IR remote (and I rely on a regular IR remote for the TV itself of course). It supports Disney+, but also all the other big name services, some of which we already use (Netflix, YouTube, BBC iPlayer) and some of which we didn't, since it was too awkward to access them (Google Play, Amazon Prime Video). It has now largely displaced the FreeSat box for accessing streaming content because it works so well and everything is in one place.

There's a phone App that remote-controls the box and works even better than the physical remote: it can offer a full phone-keyboard at times when you need to input text, and can mute the TV audio and put it out through headphones attached to the phone if you want.

My aging Plasma TV suffers from burn-in from static pictures. If left paused for a duration the Roku goes to a screensaver that keeps the whole frame moving. The FreeSat doesn't do this. My Blu Ray player does, but (I think) it retains some static elements.

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in June 2020

Planet Debian - Mon, 2020-07-06 04:11

Welcome to the June 2020 report from the Reproducible Builds project. In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

What are reproducible builds?

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security.

But whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.

News

The GitHub Security Lab published a long article on the discovery of a piece of malware designed to backdoor open source projects that used the build process and its resulting artifacts to spread itself. In the course of their analysis and investigation, the GitHub team uncovered 26 open source projects that were backdoored by this malware and were actively serving malicious code. (Full article)

Carl Dong from Chaincode Labs uploaded a presentation on Bitcoin Build System Security and reproducible builds to YouTube:

The app intended to trace infection chains of Covid-19 in Switzerland published information on how to perform a reproducible build.

The Reproducible Builds project has received funding in the past from the Open Technology Fund (OTF) to reach specific technical goals, as well as to enable the project to meet in-person at our summits. The OTF has actually also assisted countless other organisations that promote transparent, civil society as well as those that provide tools to circumvent censorship and repressive surveillance. However, the OTF has now been threatened with closure. (More info)

It was noticed that Reproducible Builds was mentioned in the book End-user Computer Security by Mark Fernandes (published by WikiBooks) in the section titled Detection of malware in software.

Lastly, reproducible builds and other ideas around software supply chain were mentioned in a recent episode of the Ubuntu Podcast in a wider discussion about the Snap and application stores (at approx 16:00).


Distribution work

In the ArchLinux distribution, a goal to remove .doctrees from installed files was created via Arch’s ‘TODO list’ mechanism. These .doctree files are caches generated by the Sphinx documentation generator when developing documentation so that Sphinx does not have to reparse all input files across runs. They should not be packaged, especially as they lead to the package being unreproducible as their pickled format contains unreproducible data. Jelle van der Waa and Eli Schwartz submitted various upstream patches to fix projects that install these by default.

Dimitry Andric was able to determine why the reproducibility status of FreeBSD’s base.txz depended on the number of CPU cores, attributing it to an optimisation made to the Clang C compiler []. After further detailed discussion on the FreeBSD bug it was possible to get the binaries reproducible again [].

In March 2018, a wishlist request was filed for the NixOS distribution by Bryan Alexander Rivera touching on how to install specific/deterministic versions of packages which was closed with “won’t fix” resolution this month.

For the GNU Guix operating system, Vagrant Cascadian started a thread about collecting reproducibility metrics and Jan “janneke” Nieuwenhuizen posted that they had further reduced their “bootstrap seed” to 25% which is intended to reduce the amount of code to be audited to avoid potential compiler backdoors.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:

Debian

Holger Levsen filed three bugs (#961857, #961858 & #961859) against the reproducible-check tool that reports on the reproducible status of installed packages on a running Debian system. They were subsequently all fixed by Chris Lamb [][][].

Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of 100s of packages that use the CMake build system which led to a number of tests and next steps. []

Chris Lamb contributed to a conversation regarding the nondeterministic execution order of Debian maintainer scripts that results in the arbitrary allocation of UNIX group IDs, referencing the Tails operating system’s approach to this []. Vagrant Cascadian also added to a discussion regarding verification formats for reproducible builds.

47 reviews of Debian packages were added, 37 were updated and 69 were removed this month, adding to our knowledge about identified issues. Chris Lamb identified and classified a new uids_gids_in_tarballs_generated_by_cmake_kde_package_app_templates issue [] and marked the paths_vary_due_to_usrmerge issue as deterministic, and Vagrant Cascadian updated the cmake_rpath_contains_build_path and gcc_captures_build_path issues. [][][]

Lastly, Debian Developer Bill Allombert started a mailing list thread regarding setting the -fdebug-prefix-map command-line argument via an environment variable and Holger Levsen also filed three bugs against the debrebuild Debian package rebuilder tool (#961861, #961862 & #961864).

Development

On our website this month, Arnout Engelen added a link to our Mastodon account [] and moved the SOURCE_DATE_EPOCH git log example to another section []. Chris Lamb also limited the number of news posts to avoid showing items from (for example) 2017 [].

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Mattia Rizzolo bumped the debhelper compatibility level to 13 [] and adjusted a related dependency to avoid potential circular dependency [].

Upstream work

The Reproducible Builds project attempts to fix unreproducible packages and we try to send all of our patches upstream. This month, we wrote a large number of such patches, including:

Bernhard M. Wiedemann also filed reports for frr (build fails on single-processor machines), ghc-yesod-static/git-annex (a filesystem ordering issue) and ooRexx (ASLR-related issue).

diffoscope

diffoscope is our in-depth ‘diff-on-steroids’ utility which helps us diagnose reproducibility issues in packages. It does not define reproducibility; instead, it provides helpful, human-readable guidance for packages that are not reproducible, rather than relying on essentially-useless binary diffs.

This month, Chris Lamb uploaded versions 147, 148 and 149 to Debian and made the following changes:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (i.e. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. []
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). []
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an ‘info’ level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. []
    • Don’t mask an existing test. []
  • Codebase improvements:

    • Replace obscure references to WF with “Wagner-Fischer” for clarity. []
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of ‘missing’ files. []
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. []
    • Drop a large number of unused imports. [][][][][]
    • Make many code sections more Pythonic. [][][][]
    • Prevent some variable aliasing issues. [][][]
    • Use some tactical f-strings to tidy up code [][] and remove explicit u"unicode" strings [].
    • Refactor a large number of routines for clarity. [][][][]

trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb also corrected the location of the celerybeat scheduler to ensure that the clean/tidy tasks are actually called; their not running had caused an accidental resource exhaustion. (#12)

In addition Jean-Romain Garnier made the following changes:

  • Fix the --new-file option when comparing directories by merging DirectoryContainer.compare and Container.compare. (#180)
  • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
  • Make child pages open in new window in the --html-dir presenter format. []
  • Improve the diffs in the --html-dir format. [][]

Lastly, Daniel Fullmer fixed the Coreboot filesystem comparator [] and Mattia Rizzolo prevented warnings from the tlsh fuzzy-matching library during tests [] and tweaked the build system to remove an unwanted .build directory []. For the GNU Guix distribution Vagrant Cascadian updated the version of diffoscope to version 147 [] and later 148 [].

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. Amongst many other tasks, this tracks the status of our reproducibility efforts across many distributions as well as identifies any regressions that have been introduced. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Prevent bogus failure emails from rsync2buildinfos.debian.net every night. []
    • Merge a fix from David Bremner’s database of .buildinfo files to include a fix regarding comparing source vs. binary package versions. []
    • Only run the Debian package rebuilder job twice per day. []
    • Increase bullseye scheduling. []
  • System health status page:

    • Add a note displaying whether a node needs to be rebooted for a kernel upgrade. []
    • Fix sorting order of failed jobs. []
    • Expand footer to link to the related Jenkins job. []
    • Add archlinux_html_pages, openwrt_rebuilder_today and openwrt_rebuilder_future to ‘known broken’ jobs. []
    • Add HTML <meta> header to refresh the page every 5 minutes. []
    • Count the number of ignored jobs [], ignore permanently ‘known broken’ jobs [] and jobs on ‘known offline’ nodes [].
    • Only consider the ‘known offline’ status from Git. []
    • Various output improvements. [][]
  • Tools:

    • Switch URLs for the Grml Live Linux and PureOS package sets. [][]
    • Don’t try to build a disorderfs Debian source package. [][][]
    • Stop building diffoscope as we are moving this to Salsa. [][]
    • Merge several “is diffoscope up-to-date on every platform?” test jobs into one [] and fail less noisily if the version in Debian cannot be determined [].

In addition: Marcus Hoffmann was added as a maintainer of the F-Droid reproducible checking components [], Jelle van der Waa updated the “is diffoscope up-to-date in every platform” check for Arch Linux and diffoscope [], Mattia Rizzolo backed up a copy of a “remove script” run on the Codethink-hosted ‘jump server’ [] and Vagrant Cascadian temporarily disabled the fixfilepath on bullseye to get better data about the ftbfs_due_to_f-file-prefix-map categorised issue.

Lastly, the usual build node maintenance was performed by Holger Levsen [][], Mattia Rizzolo [] and Vagrant Cascadian [][][][][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via our mailing list or IRC channel.

This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC: Week 6: class InputEngine

Planet Python - Mon, 2020-07-06 03:12
What did I do this week?

I have started working on the input engine this week. Currently, we only have csv2cve, which accepts a CSV file of vendor, product and version as input and produces a list of CVEs as output. At the moment, csv2cve is a separate module with a separate command-line entry point. I have created a module called input_engine that can process data from any input format (currently CSV and JSON). Users can now add a remarks field in the CSV or JSON, which can have any of the following values (the values in parentheses are aliases for that specific type):

  1. NewFound (1, n, N)
  2. Unexplored (2, u, U)
  3. Mitigated (3, m, M)
  4. Confirmed (4, c, C)
  5. Ignored (5, i, I)

I have added an --input-file (-i) option in cli.py to specify the input file, which input_engine parses to create an intermediate data structure that output_engine uses to display data according to the remarks. Output is displayed in the same order as the priority given to the remarks. I have also created a dummy csv2cve that just calls cli.py with the -i option set to the file specified to csv2cve. Here is an example usage of -i with an input file to produce CVEs: cve-bin-tool -i=test.csv. Users can also use -i to supplement remarks data while scanning a directory, so that the output is sorted according to the remarks. Here is an example usage for that: cve-bin-tool -i=test.csv /path/to/scan.
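To make this concrete, a hypothetical test.csv might look like the sketch below; the column layout and example rows are my assumptions based on the vendor, product, version and remarks fields described above, not a prescribed format.

vendor,product,version,remarks
haxx,curl,7.59.0,Mitigated
gnu,binutils,2.31.1,n
mozilla,nss,3.45,Confirmed

Such a file would then be passed exactly as in the examples above, either on its own (cve-bin-tool -i=test.csv) or alongside a directory scan (cve-bin-tool -i=test.csv /path/to/scan).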

I have also added test cases for input_engine and removed the old test cases for csv2cve.

What am I doing this week? 

I have exams this week, from today until 9th July, so I won't be able to do much during the week, but I will spend the weekend improving input_engine, for example by giving more fine-grained control over remarks and custom severity.

Have I got stuck anywhere?

No, I didn't get stuck anywhere this week :)

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Philip James

Planet Python - Mon, 2020-07-06 01:05

This week we welcome Philip James (@phildini) as our PyDev of the Week! Philip is a core contributor to the BeeWare project. He has worked on several other open source projects that you’ll learn about in this interview. He is also a popular speaker at PyCons and DjangoCons. You can find out more about Philip on his website or check out his work on GitHub.

Let’s spend some time getting to know Philip better!

Can you tell us a little about yourself (hobbies, education, etc):

My name is Philip, but I’m probably better known on the internet as phildini. That nickname came from a stage name; I used to do magic shows in high school for pocket money. In the Python community, I’m maybe best known as a frequent conference speaker; I’ve spoken at PyCons and DjangoCons around the world for the past 5 years. Beyond being a speaker, I’ve helped organize some Python meetups and conferences, and I serve on the PSF Conduct Working Group as its Chair. I’m also one of the early Core Contributors to the BeeWare project.

I’m the Head of Engineering at a personal finance company called Trim, where we try to automate saving people money on things like their Internet bill. I also co-run a publishing company and print shop called Galaxy Brain with a friend I met while I was at Patreon. We started as a Risograph print shop, making a zine about wine called Adult Juice Box and doing art prints. Galaxy Brain has been moving into software with the pandemic, because accessing our studio is harder, but we’re planning on keeping the printing going once things calm down. It’s kind of hilarious to us that we moved into software as an afterthought; I think we both resisted it for so long because the software is our day job.

Why did you start using Python?

I can remember helping to run a youth retreat in the Santa Cruz mountains in… I want to say 2005 or 2006, and one of the adults on the trip, who’s still a very good friend, showing me Python on a computer we had hooked up to one of the camp’s projectors. My first Python lesson happened on a 6-foot widescreen. Then in college, I took a couple courses on web applications and didn’t want to use PHP, so I started building apps in Django. That got me my first job in programming, then a job at Eventbrite, which got me into speaking, and the rest is history.

What other programming languages do you know and which is your favorite?

College theoretically taught me C and Java, but I know them like some people know ancient Greek — I can read it, but good luck speaking it. Towards the end of college I picked up some C#, and I really enjoyed my time in that language. It hit a lot of nice compromises between direct management and object-oriented modern languages, and I think a lot of that had to do with the fact that Visual Studio was such an incredible IDE.

Since I moved into web programming, I’ve picked up Javascript and Ruby, enough that I can write things in them but not enough to feel comfortable starting a project with them. Web development is in this really weird place right now, where you can maybe get away with only knowing Javascript, but you need a working familiarity with HTML, CSS, Javascript, Python, Ruby, and Shell to be effective at a high level. Maybe you just need to be good at googling those things.

I’ve recently started going deep on a language called ink, which is a language for writing Interactive Fiction games. We used to use this term “literate programming” way more; ink (along with twine and some others) is how you “program literature”. You can use ink to make standalone games or export it into a format that will drive narrative events in more complex Unity games. Stories and narratives don’t lend themselves well to modularization in the way programmers think of it, so it’s been fun watching my optimize-everything programmer brain clash with my get-the-narrative-out writer brain as I learn ink.

What projects are you working on now?

The trick is getting me to stop working on projects. Right now there’s my day job, as well as a host of Galaxy Brain projects. VictoryPic is a little slack app for bringing an Instagram-like experience to Slack. Hello Caller is a tool for doing podcast call-in shows. I’ve got some scripts I put up for building an “on-air” monitor for my office using a raspberry pi and CircuitPlayground Express. I’m writing a scraping library for the game store itch, so that I can do some interesting video game streaming projects. All those are in Python, for the most part, and then there’s the Interactive Fiction game, written in ink, that I’m working on for Galaxy Brain’s wine zine.

I also continue to write for Adult Juice Box, and run a podcast called Thought & A Chaser

Which Python libraries are your favorite (core or 3rd party)?

I think the CSV and sqlite libraries in the standard library are the two most important batteries Python comes with, outside the core language. With those two libraries, you can build a deeper well of data-driven apps than any other language I’ve seen. Outside of the stdlib, requests is the first library I reach for when I’m starting a project, and Django is how I build most of the projects I listed up above. Django is the most powerful set of abstractions for building webapps I’ve seen, in any language.

How did you get involved with the Beeware project?

I got involved in Beeware because of my speaking career. I was accepted to speak at DjangoCon Europe in Budapest a few years back, and met Dr. Russell Keith-Magee, the creator of Beeware, along with Katie McLaughlin, one of the original Core Contributors. We started chatting about Beeware there, and I hacked on it a bit at sprints, and then I saw them at DjangoCon US in Philadelphia, and then again at PyCon US in Portland, and I had kept working on Beeware during that time and at sprints for those events. At PyCon I got the commit bit and became a Core Contributor.

The thing I take away from this story, that I tell others who want to get involved, is two-fold: (1) Submit talks to conferences, early and often. Being a conference speaker may or may not be good for your career, but it’s incredible for your sense of community and your friend circle within the tech communities you care about. (2) Show Up. There is immeasurable power in being consistent, in showing up to help regularly, in trying to ease the burdens of the people and projects you care about. The people you value in turn value those who Show Up, even if they’re not able to voice it.

Which Beeware package do you enjoy the most?

It feels like cheating to say Briefcase, but I really think Briefcase is the most interesting part of the whole project, because it’s the closest to solving Python’s ecosystem and package management problems. We shouldn’t have to teach people what pip is to let them benefit from what Python can do.

Is there anything else you’d like to say?

I think it’s important for those of us in programming especially to remember that our communities are not as insular as we think; we exist in the world, and this world has granted quite a bit to many of us. We need to be thinking about how we can give back, not just to the tech communities but to the world as a whole. Programming can be a force for justice, but sometimes the greatest difference is made when we show up to our local protest or city council meeting.

Thanks for doing the interview, Philip!

The post PyDev of the Week: Philip James appeared first on The Mouse Vs. The Python.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Blog #3 (29th Jun - 6th Jul)

Planet Python - Mon, 2020-07-06 01:03

Hey everyone, we are done with the first third of the program, and I will use this blog both to give the weekly update and to summarize the current state of progress. In the past 4 weeks, we have created a new number-parser library from scratch and built an MVP that is being continuously improved.

Last week was spent fine-tuning the parser to retrieve the relevant data from the CLDR RBNF repo. This 'rule-based number formatting' (RBNF) repo is basically a Java library that converts a number (23) to the corresponding word (twenty-three). It has a lot of hard-coded values and data that are very useful to our library, and thus we plan to extract all this information accurately and efficiently.

In addition to this, there are multiple nuances in each language that had to be taken care of, such as accents. For example, the French '0' is written as zéro (with an accent aigu over the e). However, we don't expect users to enter these accents each time, hence we need to normalise (i.e. remove) them.
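As a minimal sketch of one common way to do this normalisation (an illustration of the general technique, not necessarily what number-parser does internally), Python's standard-library unicodedata module can decompose accented characters and drop the combining marks:

import unicodedata

def strip_accents(text):
    # Decompose characters (e.g. 'é' -> 'e' + combining acute accent),
    # then drop the combining marks so 'zéro' becomes 'zero'.
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents('zéro'))  # -> 'zero'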

The most challenging aspect was definitely understanding the CLDR RBNF structure (which I am still not completely clear on); there is only a little documentation explaining some of the basic rules, and it's tough to identify which rules are relevant and which aren't.

Originally I was hoping to add more tests this week as well; however, all this took longer than expected, so the testing work is being pushed to the current week.

Categories: FLOSS Project Planets

Mike Driscoll: Using Widgets in Jupyter Notebook (Video)

Planet Python - Sun, 2020-07-05 21:23

Learn how to use Jupyter Notebook’s built-in widgets in this video tutorial.

Get the book: https://leanpub.com/jupyternotebook101/

The post Using Widgets in Jupyter Notebook (Video) appeared first on The Mouse Vs. The Python.

Categories: FLOSS Project Planets

Python⇒Speed: Massive memory overhead: Numbers in Python and how NumPy helps

Planet Python - Sun, 2020-07-05 20:00

Let’s say you want to store a list of integers in Python:

list_of_numbers = []
for i in range(1000000):
    list_of_numbers.append(i)

Those numbers can easily fit in a 64-bit integer, so one would hope Python would store those million integers in no more than ~8MB: a million 8-byte objects.

In fact, Python uses more like 35MB of RAM to store these numbers. Why? Because Python integers are objects, and objects have a lot of memory overhead.

Let’s see what’s going on under the hood, and then how using NumPy can get rid of this overhead.
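Before reading on, here is a small measurement sketch you can run yourself (the exact figures vary by CPython version and platform, so treat the numbers as illustrative): it adds up the list's pointer array plus every int object it references, and compares that with a NumPy array of 64-bit integers holding the same values.

import sys

import numpy as np

list_of_numbers = list(range(1000000))

# The list's own buffer of pointers, plus every int object it references.
list_bytes = sys.getsizeof(list_of_numbers) + sum(
    sys.getsizeof(n) for n in list_of_numbers
)

# The same million values stored as packed 64-bit integers: 8 bytes each.
array_of_numbers = np.arange(1000000, dtype=np.int64)

print(f"Python list of ints: ~{list_bytes / 1e6:.0f} MB")
print(f"NumPy int64 array:   ~{array_of_numbers.nbytes / 1e6:.0f} MB")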

Read more...
Categories: FLOSS Project Planets

Enrico Zini: COVID-19 and Capitalism

Planet Debian - Sun, 2020-07-05 18:00
  • Astroturfing: How To Spot A Fake Movement / Crowds on Demand - Protests, Rallies and Advocacy [capitalism, covid19, news, politics] (archive.org, 2020-07-06): If the Reopen America protests seem a little off to you, that's because they are. In this video we're going to talk about astroturfing and how insidious it i...
  • Volunteers 3D-Print Unobtainable $11,000 Valve For $1 To Keep Covid-19 Patients Alive; Original Manufacturer Threatens To Sue / Volunteers produce 3D-printed valves for life-saving coronavirus treatments [capitalism, covid19, health, news] (archive.org, 2020-07-06): Techdirt has just written about the extraordinary legal action taken against a company producing Covid-19 tests. Sadly, it's not the only example of some individuals putting profits before people. Here's a story from Italy, which is...
  • Germany tries to stop US from luring away firm seeking coronavirus vaccine [capitalism, covid19, health, news] (archive.org, 2020-07-06): Berlin is trying to stop Washington from persuading a German company seeking a coronavirus vaccine to move its research to the United States.
  • He Has 17,700 Bottles of Hand Sanitizer and Nowhere to Sell Them [capitalism, covid19, news] (archive.org, 2020-07-06): Amazon cracked down on coronavirus price gouging. Now, while the rest of the world searches, some sellers are holding stockpiles of sanitizer and masks.
  • Theranos vampire lives on: Owner of failed blood-testing biz's patents sues maker of actual COVID-19-testing kit [capitalism, covid19, news] (archive.org, 2020-07-06): And 3D-printed valve for breathing machine sparks legal threat
  • How an Austrian ski paradise became a COVID-19 hotspot [capitalism, covid19, news] (archive.org, 2020-07-06): Ischgl, an Austrian ski resort, has achieved tragic international fame: hundreds of tourists are believed to have contracted the coronavirus there and taken it home with them. The Tyrolean state government is now facing serious criticism. EURACTIV Germany reports.
  • Hospitals Need to Repair Ventilators. Manufacturers Are Making That Impossible [capitalism, covid19, health, news] (archive.org, 2020-07-06): We are seeing how the monopolistic repair and lobbying practices of medical device companies are making our response to the coronavirus pandemic harder.
  • Homeless people in Las Vegas sleep 6 feet apart in parking lot as thousands of hotel rooms sit empty [capitalism, covid19, news, privilege] (archive.org, 2020-07-06): Las Vegas, Nevada has come under criticism after reportedly setting up a temporary homeless shelter in a parking lot complete with social distancing barriers.
Categories: FLOSS Project Planets

Nikola: Nikola v8.1.1 is out!

Planet Python - Sun, 2020-07-05 17:44

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.1.1. This release is mainly due to an incorrect PGP key being used for the PyPI artifacts; three regressions were also fixed in this release.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola.

Changes

Bugfixes
  • Default to no line numbers in code blocks, honor CodeHilite requesting no line numbers. Listing pages still use line numbers (Issue #3426)

  • Remove duplicate MathJax config in bootstrap themes (Issue #3427)

  • Fix doit requirement to doit>=0.32.0 (Issue #3422)

Categories: FLOSS Project Planets

Glyph Lefkowitz: Zen Guardian

Planet Python - Sun, 2020-07-05 16:44

There should be one — and preferably only one — obvious way to do it.

— Tim Peters, “The Zen of Python”

Moshe wrote a blog post a couple of days ago which neatly constructs a wonderful little coding example from a scene in a movie. And, as we know from the Zen of Python quote, there should only be one obvious way to do something in Python. So my initial reaction to his post was of course to do it differently — to replace an __init__ method with the new @dataclasses.dataclass decorator.

But as I thought about the code example more, I realized there are a number of things beyond just dataclasses that make the difference between “toy”, example-quality Python, and what you’d do in a modern, professional, production codebase today.

So let’s do everything the second, not-obvious way!

There’s more than one way to do it

— Larry Wall, “The Other Zen of Python”

Getting started: the __future__ is now

We will want to use type annotations. But, the Guard and his friend are very self-referential, and will have lots of annotations that reference things that come later in the file. So we’ll want to take advantage of a future feature of Python, which is to say, Postponed Evaluation of Annotations. In addition to the benefit of slightly improving our import time, it’ll let us use the nice type annotation syntax without any ugly quoting, even when we need to make forward references.

So, to begin:

from __future__ import annotations

Doors: safe sets of constants

Next, let’s tackle the concept of “doors”. We don’t need to gold-plate this with a full blown Door class with instances and methods - doors don’t have any behavior or state in this example, and we don’t need to add it. But, we still wouldn’t want anyone using this library to mix up a door or plunge to their doom by accidentally passing “certian death” when they meant certain. So a Door clearly needs a type of its own, which is to say, an Enum:

from enum import Enum


class Door(Enum):
    certain_death = "certain death"
    castle = "castle"

Questions: describing type interfaces

Next up, what is a “question”? Guards expect a very specific sort of value as their question argument, and if we’re using type annotations, we should specify what it is. We want a Question type that defines arguments for each part of the universe of knowledge that these guards understand. This includes who they are themselves, who both guards are, and what the doors are.

We can specify it like so:

from typing import Protocol, Sequence


class Question(Protocol):
    def __call__(
        self, guard: Guard, guards: Sequence[Guard], doors: Sequence[Door]
    ) -> bool:
        ...

The most flexible way to define a type of thing you can call using mypy and typing is to define a Protocol with a __call__ method and nothing else1. We could also describe this type as Question = Callable[[Guard, Sequence[Guard], Sequence[Door]], bool] instead, but as you may be able to infer, that doesn’t let you easily specify names of arguments, or keyword-only or positional-only arguments, or required default values. So Protocol-with-__call__ it is.

At this point, we also get to consider; does the questioner need the ability to change the collection of doors they’re passed? Probably not; they’re just asking questions, not giving commands. So they should receive an immutable version, which means we need to import Sequence from the typing module and not List, and use that for both guards and doors argument types.

Guards and questions: annotating existing logic with types

Next up, what does Guard look like now? Aside from adding some type annotations — and using our shiny new Door and Question types — it looks substantially similar to Moshe’s version:

from dataclasses import dataclass


@dataclass
class Guard:
    _truth_teller: bool
    _guards: Sequence[Guard]
    _doors: Sequence[Door]

    def ask(self, question: Question) -> bool:
        answer = question(self, self._guards, self._doors)
        if not self._truth_teller:
            answer = not answer
        return answer

Similarly, the question that we want to ask looks quite similar, with the addition of:

  1. type annotations for both the “outer” and the “inner” question, and
  2. using Door.castle for our comparison rather than the string "castle"
  3. replacing List with Sequence, as discussed above, since the guards in this puzzle also have no power to change their environment, only to answer questions.
  4. using the [var] = value syntax for destructuring bind, rather than the more subtle var, = value form
def question(guard: Guard, guards: Sequence[Guard], doors: Sequence[Door]) -> bool:
    [other_guard] = (candidate for candidate in guards if candidate != guard)

    def other_question(
        guard: Guard, guards: Sequence[Guard], doors: Sequence[Door]
    ) -> bool:
        return doors[0] == Door.castle

    return other_guard.ask(other_question)

Eliminating global state: building the guard post

Next up, how shall we initialize this collection of guards? Setting a couple of global variables is never good style, so let’s encapsulate this within a function:

from typing import List


def make_guard_post() -> Sequence[Guard]:
    doors = list(Door)
    guards: List[Guard] = []
    guards[:] = [Guard(True, guards, doors), Guard(False, guards, doors)]
    return guards

Defining the main point

And finally, how shall we actually have this execute? First, let’s put this in a function, so that it can be called by things other than running the script directly; for example, if we want to use entry_points to expose this as a script. Then, let's put it in a "__main__" block, and not just execute it at module scope.

Secondly, rather than inspecting the output of each one at a time, let’s use the all function to express that the interesting thing is that all of the guards will answer the question in the affirmative:

def main() -> None:
    print(all(each.ask(question) for each in make_guard_post()))


if __name__ == "__main__":
    main()

Appendix: the full code

To sum up, here’s the full version:

from __future__ import annotations

from dataclasses import dataclass
from typing import List, Protocol, Sequence
from enum import Enum


class Door(Enum):
    certain_death = "certain death"
    castle = "castle"


class Question(Protocol):
    def __call__(
        self, guard: Guard, guards: Sequence[Guard], doors: Sequence[Door]
    ) -> bool:
        ...


@dataclass
class Guard:
    _truth_teller: bool
    _guards: Sequence[Guard]
    _doors: Sequence[Door]

    def ask(self, question: Question) -> bool:
        answer = question(self, self._guards, self._doors)
        if not self._truth_teller:
            answer = not answer
        return answer


def question(guard: Guard, guards: Sequence[Guard], doors: Sequence[Door]) -> bool:
    [other_guard] = (candidate for candidate in guards if candidate != guard)

    def other_question(
        guard: Guard, guards: Sequence[Guard], doors: Sequence[Door]
    ) -> bool:
        return doors[0] == Door.castle

    return other_guard.ask(other_question)


def make_guard_post() -> Sequence[Guard]:
    doors = list(Door)
    guards: List[Guard] = []
    guards[:] = [Guard(True, guards, doors), Guard(False, guards, doors)]
    return guards


def main() -> None:
    print(all(each.ask(question) for each in make_guard_post()))


if __name__ == "__main__":
    main()

I’d like to thank Moshe Zadka for the post that inspired this, as well as Nelson Elhage, Jonathan Lange, Ben Bangert and Alex Gaynor for giving feedback on drafts of this post.

  1. I will hopefully have more to say about typing.Protocol in another post soon; it’s the real hero of the Mypy saga, but more on that later... 

Categories: FLOSS Project Planets

GSoC'20 progress : Phase I

Planet KDE - Sun, 2020-07-05 14:30
Wrapping up the first phase of Google Summer of Code
Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-In: Week 6

Planet Python - Sun, 2020-07-05 13:35


Make sure to check out Project FURY : https://github.com/fury-gl/fury

Hey!
Spherical harmonics, Continued!

What did I do this week

Last week I added a basic implementation of spherical-harmonics-based actors. However, the implementation was quite restricted and we needed to add support for more accurate generation of spherical harmonics. So the task assigned this week was to implement the spherical harmonics function within the shader rather than passing variables as uniforms. This was quite a challenging task, as it involved understanding mathematical formulae and implementing them using existing GLSL functions.
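For context, the real-valued spherical harmonic basis that such a shader evaluates can be prototyped on the Python side; the snippet below is only a sketch for sanity-checking values (it assumes scipy is installed and is not the actual GLSL code used in FURY).

import numpy as np
from scipy.special import sph_harm

def real_sph_harm(order, degree, theta, phi):
    # Real-valued Y_degree^order at azimuth theta and polar angle phi,
    # built from scipy's complex spherical harmonics.
    if order > 0:
        return np.sqrt(2) * (-1) ** order * sph_harm(order, degree, theta, phi).real
    if order < 0:
        return np.sqrt(2) * (-1) ** order * sph_harm(-order, degree, theta, phi).imag
    return sph_harm(0, degree, theta, phi).real

# Sample Y_2^1 over a few directions for comparison with the shader output.
theta = np.linspace(0, 2 * np.pi, 4)
phi = np.linspace(0, np.pi, 4)
print(real_sph_harm(1, 2, theta, phi))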
The output of the implementation is shown below

 

 

While I was able to complete the task, the frame rate of the generated output was lower than expected.



The code for the above render is available at the branch :

https://github.com/lenixlobo/fury/tree/Spherical-Harmonics  

What's coming up next

The next task is to discuss possible performance improvements with the mentors and also look into alternative ideas to add spherical harmonics as actors in FURY.

Did I get stuck anywhere

Spherical harmonics involve a lot of complicated math under the hood; as a result, the generated output has a very poor frame rate. Currently, we are looking into improving this.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-in #6

Planet Python - Sun, 2020-07-05 12:03
Translation, Reposition, Rotation.

Hello and welcome to my 6th weekly check-in. The first evaluation period has officially ended and I am very excited to move on to the second coding period. I will be sharing my progress on handling a specific object's properties among multiple objects rendered by a single actor. I am mainly focusing on making it easier to translate, rotate and reposition a particular object, so that I can use these operations to render physics simulations more efficiently. The official repository of my sub-org, FURY, can always be found here.

What did you do this week?

Last week I worked on physics simulations rendered in FURY with the help of pyBullet. The simulations were highly unoptimized, especially the brick-wall simulation, as each brick was rendered by its own actor. In other words, 1 brick = 1 actor. My objective was to render all the bricks using a single actor, but before jumping into the simulation I had to figure out how to modify specific properties of an individual object. Thanks to my mentor's PR, I was able to experiment with my implementations quickly.

Translation:

The algorithm behind translation is to first identify the vertices of the object, then bring the vertices to the origin by subtracting their centers and then adding the displacement vector. The said operation can be achieved by the following snippet:

# Update vertices positions
vertices[object_index * sec: object_index * sec + sec] = \
    (vertices[object_index * sec: object_index * sec + sec] -
     centers[object_index]) + transln_vector

Rotation:

The algorithm behind rotation is to first calculate the difference between the vertices and the center of the object. Once we get the resultant matrix, we matrix-multiply it with the rotation matrix and then add the centers back to it so that we preserve the position of the object. The rotation matrix is built by the get_R helper below, where gamma, beta and alpha correspond to the angles of rotation along the Z-axis, Y-axis and X-axis.

def get_R(gamma, beta, alpha):
    """ Returns rotational matrix. """
    r = [
        [np.cos(alpha)*np.cos(beta),
         np.cos(alpha)*np.sin(beta)*np.sin(gamma) - np.sin(alpha)*np.cos(gamma),
         np.cos(alpha)*np.sin(beta)*np.cos(gamma) + np.sin(alpha)*np.sin(gamma)],
        [np.sin(alpha)*np.cos(beta),
         np.sin(alpha)*np.sin(beta)*np.sin(gamma) + np.cos(alpha)*np.cos(gamma),
         np.sin(alpha)*np.sin(beta)*np.cos(gamma) - np.cos(alpha)*np.sin(gamma)],
        [-np.sin(beta), np.cos(beta)*np.sin(gamma), np.cos(beta)*np.cos(gamma)]
    ]
    r = np.array(r)
    return r


vertices[object_index * sec: object_index * sec + sec] = \
    (vertices[object_index * sec: object_index * sec + sec] -
     centers[object_index]) @ get_R(0, np.pi/4, np.pi/4) + centers[object_index]

Reposition:

Repositioning is similar to translation, except that in this case we also update the centers with the new position value.

new_pos = np.array([1, 2, 3])

# Update vertices positions
vertices[object_index * sec: object_index * sec + sec] = \
    (vertices[object_index * sec: object_index * sec + sec] -
     centers[object_index]) + new_pos
centers[object_index] = new_pos

What is coming up next?

Currently, I am yet to figure out the orientation problem. Once I figure that out I will be ready to implement simulations without any major issues. I am also tasked with creating a wrecking ball simulation and a quadruped robot simulation.

Did you get stuck anywhere?

I did face some problems while rotating objects. My mentors suggested implementing it via a rotation matrix. I still haven't figured out the orientation problem, which I plan to work on next. Apart from these, I did not face any major issues.

Thank you for reading, see you next week!!
Categories: FLOSS Project Planets

Ian Ozsvald: Weekish notes

Planet Python - Sun, 2020-07-05 11:42

I gave another iteration of my Making Pandas Fly talk sequence for PyDataAmsterdam recently and received some lovely postcards from attendees as a result. I’ve also had time to list new iterations of my training courses for Higher Performance Python (October) and Software Engineering for Data Scientists (September), both will run virtually via Zoom & Slack in the UK timezone.

I’ve been using my dtype_diet tool to time more performance improvements with Pandas and I look forward to talking more on this at EuroPython this month.

In baking news I’ve improved my face-making on sourdough loaves (but still have work to do) and I figure now is a good time to have a crack at dried-yeast baking again.

 

Ian is a Chief Interim Data Scientist via his Mor Consulting. Sign-up for Data Science tutorials in London and to hear about his data science thoughts and jobs. He lives in London, is walked by his high energy Springer Spaniel and is a consumer of fine coffees.

The post Weekish notes appeared first on Entrepreneurial Geekiness.

Categories: FLOSS Project Planets

PSF GSoC students blogs: [Week 5] Check-in

Planet Python - Sun, 2020-07-05 10:21
 

1. What did you do this week?

  • Add more test cases to cover more functions.
2. Difficulty

No difficulties this week.

3. What is coming up next?
  • Use unumpy multimethods.
  • Improve documentation.
  • Publish a simple version of udiff on pypi.
Categories: FLOSS Project Planets
