FLOSS Project Planets

OSTraining: How to Group Entity Fields in Tabs in Drupal 8

Planet Drupal - Thu, 2019-08-15 01:46

Extensive nodes (or other types of entities) with many text fields, such as biographies, often remain unread because of the huge (and discouraging) amount of text.

The Drupal 8 "Field Group" module allows you to group fields and to present them in containers like vertical or horizontal tabs, accordions or just plain wrappers. It lets you group fields in the frontend of your site, and in the backend as well.

Keep reading to learn how to use this module!

Categories: FLOSS Project Planets

Malthe Borch: Using built-in transparent compression on MacOS

Planet Python - Wed, 2019-08-14 17:41

Ever since DriveSpace on MS-DOS (or really, Stacker), we've had transparent file compression with varying degrees of automation. In fact, while DriveSpace compression on MS-DOS was a fully automated affair, the built-in transparent compression in newer filesystems such as ZFS, Btrfs, APFS (and even HFS+) is engaged manually on a per-file or per-folder basis.

But no one's using it!

On my system, compressing /Applications saved 18GB (38.7%).

macOS doesn't actually ship with a utility to do this, even though the core functionality is included, so you'll need to install an open source tool in order to use it.

$ brew install afsctool

To compress a file or folder, use the -c flag like so:

$ afsctool -c /Applications

(You might need to use root for some application and/or system files).
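
For example, to compress files owned by the system, you can run the same command through sudo:

$ sudo afsctool -c /Applications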

Categories: FLOSS Project Planets

Continuum Analytics Blog: Accessing Remote Data with a Generalized File System

Planet Python - Wed, 2019-08-14 15:31

Originally created for the needs of Dask, we have spun out a general file system implementation and specification, to provide all users with simple access to many local, cluster, and remote storage media. Dask and Intake…

The post Accessing Remote Data with a Generalized File System appeared first on Anaconda.

Categories: FLOSS Project Planets

Agaric Collective: Migrating addresses into Drupal

Planet Drupal - Wed, 2019-08-14 14:55

Today we will learn how to migrate addresses into Drupal. We are going to use the field provided by the Address module, which depends on the third-party library commerceguys/addressing. When migrating addresses you need to be careful with the data that Drupal expects. The address components can change per country, and the way those components are stored also varies per country. These and other important considerations will be explained. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD address whose machine name is ud_migrations_address. The migration to execute is udm_address. Notice that this migration writes to a content type called UD Address and one field: field_ud_address. This content type and field will be created when the module is installed. They will also be removed when the module is uninstalled. The demo module itself depends on the following modules: address and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created and deleted.

The recommended way to install the Address module is using composer: composer require drupal/address. This will grab the Drupal module and the commerceguys/addressing library that it depends on. If your Drupal site is not composer-based, an alternative is to use the Ludwig module. Read this article if you want to learn more about this option. In the example, it is assumed that the module and its dependency were obtained via composer. Also, keep an eye on the Composer Support in Core Initiative as they make progress.
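
Assuming you run migrations with Drush and the Migrate Tools module (the article does not prescribe a particular runner), executing the example migration could look like this:

$ drush migrate:import udm_address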

Source and destination sections

The example will migrate three addresses from the following countries: Nicaragua, Germany, and the United States of America (USA). This makes it possible to show how different countries expect different address data. As usual, for any migration you need to understand the source. The following code snippet shows how the source and destination sections are configured:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      first_name: 'Michele'
      last_name: 'Metts'
      company: 'Agaric LLC'
      city: 'Boston'
      state: 'MA'
      zip: '02111'
      country: 'US'
    - unique_id: 2
      first_name: 'Stefan'
      last_name: 'Freudenberg'
      company: 'Agaric GmbH'
      city: 'Hamburg'
      state: ''
      zip: '21073'
      country: 'DE'
    - unique_id: 3
      first_name: 'Benjamin'
      last_name: 'Melançon'
      company: 'Agaric SA'
      city: 'Managua'
      state: 'Managua'
      zip: ''
      country: 'NI'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_address

Note that not every address component is set for all addresses. For example, the Nicaraguan address does not contain a ZIP code. And the German address does not contain a state. Also, the Nicaraguan state is fully spelled out: Managua. On the contrary, the USA state is a two letter abbreviation: MA for Massachusetts. One more thing that might not be apparent is that the USA ZIP code belongs to the state of Massachusetts. All of this is important because the module does validation of addresses. The destination is the custom ud_address content type created by the module.

Available subfields

The Address field has 13 subfields available. They can be found in the schema() method of the AddressItem class. Fields are not required to have a one-to-one mapping between their schema and the form widgets used for entering content. This is particularly true for addresses because input elements, labels, and validations change dynamically based on the selected country. The following is a reference list of all subfields for addresses:

  1. langcode for language code.
  2. country_code for country.
  3. administrative_area for administrative area (e.g., state or province).
  4. locality for locality (e.g. city).
  5. dependent_locality for dependent locality (e.g. neighbourhood).
  6. postal_code for postal or ZIP code.
  7. sorting_code for sorting code.
  8. address_line1 for address line 1.
  9. address_line2 for address line 2.
  10. organization for company.
  11. given_name for first name.
  12. additional_name for middle name.
  13. family_name for last name.

Properly describing an address is not trivial. For example, there are discussions to add a third address line component. Check this issue if you need this functionality or would like to participate in the discussion.

Address subfield mappings

In the example, only 9 out of the 13 subfields will be mapped. The following code snippet shows how to do the processing of the address field:

field_ud_address/given_name: first_name
field_ud_address/family_name: last_name
field_ud_address/organization: company
field_ud_address/address_line1:
  plugin: default_value
  default_value: 'It is a secret ;)'
field_ud_address/address_line2:
  plugin: default_value
  default_value: 'Do not tell anyone :)'
field_ud_address/locality: city
field_ud_address/administrative_area: state
field_ud_address/postal_code: zip
field_ud_address/country_code: country

The mapping is relatively simple. You specify a value for each subfield. The tricky part is knowing the name of the subfield and the value to store in it. The format for an address component can change among countries. The easiest way to see which components are expected for each country is to create a node for a content type that has an address field. With this example, you can go to /node/add/ud_address and try it yourself. For simplicity's sake, let’s consider only 3 countries:

  • For USA, city, state, and ZIP code are all required. For state, there is a specific list from which you need to select.
  • For Germany, the company is moved above first and last name. The ZIP code label changes to Postal code and it is required. The city is also required. It is not possible to set a state.
  • For Nicaragua, the Postal code is optional. The State label changes to Department. It is required and offers a predefined list to choose from. The city is also required.

Pay very close attention. The available subfields will depend on the country. Also, the form labels change per country or language settings. They do not necessarily match the subfield names. Moreover, the values that you see on the screen might not match what is stored in the database. For example, a Nicaraguan address will store the full department name like Managua. On the other hand, a USA address will only store a two-letter code for the state like MA for Massachusetts.

Something else that is not apparent even from the user interface is data validation. For example, let’s say you have a USA address and select Massachusetts as the state. Entering the ZIP code 55111 will produce the following error: Zip code field is not in the right format. At first glance, the format seems correct: a five-digit code. The real problem is that the Address module validates whether that ZIP code is valid for the selected state. It is not valid for Massachusetts; 55111 is a ZIP code for the state of Minnesota, which makes the validation fail. Unfortunately, the error message does not indicate that. Nine-digit ZIP codes are accepted as long as they belong to the selected state.

Finding expected values

Values for the same subfield can vary per country. How can you find out which value to use? There are a few ways, but they all require varying levels of technical knowledge or access to resources:

  • You can inspect the source code of the address field widget. When the country and state components are rendered as select input fields (dropdowns), you can have a look at the value attribute for the option that you want to select. This will contain the two-letter code for countries, the two-letter abbreviations for USA states, and the fully spelled string for Nicaraguan departments.
  • You can use the Devel module. Create a node containing an address. Then use the devel tab of the node to inspect how the values are stored. It is not recommended to have the devel module in a production site. In fact, do not deploy the code even if the module is not enabled. This approach should only be used in a local development environment. Make sure no module or configuration is committed to the repo nor deployed.
  • You can inspect the database. Look for the records in a table named node__field_[field_machine_name], if migrating nodes. First create some example nodes via the user interface and then query the table. You will see how Drupal stores the values in the database.

If you know a better way, please share it in the comments.

The commerceguys addressing library

With version 8 came many changes in the way Drupal is developed. There is now an intentional effort to integrate with the greater PHP ecosystem. This involves using existing libraries and frameworks, like Symfony, but also making code written for Drupal available as external libraries that can be used by other projects. commerceguys/addressing is one example of such a library, and the Address module makes use of it.

Explaining how the library works or where it fetches its data is beyond the scope of this article. Refer to the library documentation for more details on the topic. We are only going to point out some things that are relevant for the migration. For example, the ZIP code validation happens in the validatePostalCode() method of the AddressFormatConstraintValidator class. There is no need to know this for a migration project, but the key thing to remember is that the migration can be affected by third-party libraries outside of Drupal core or contributed modules. Another example is the value for the state subfield: the Address module expects a subdivision as listed in one of the files in the resources/subdivision directory.

Does the validation really affect the migration? We have already mentioned that the Migrate API bypasses Form API validations, and that is true for address fields as well. You can migrate a USA address with the state Florida and the ZIP code 55111. Both are invalid because you need to use the two-letter state code FL and a ZIP code that is valid within that state. Notwithstanding, the migration will not fail in this case. In fact, if you visit the migrated node you will see that Drupal happily shows the address with the data that you entered. The problem arises when you need to use the address. If you try to edit the node, you will see that the state is not preselected. And if you try to save the node after selecting Florida, you will get the validation error for the ZIP code.

These validation issues might be hard to track because no error will be thrown by the migration. The recommendation is to migrate a sample combination of countries and address components. Then, manually check whether editing a node shows the migrated data for all the subfields. Also check that the address passes Form API validations upon saving. This manual testing can save you a lot of time and money down the road. After all, if you have an ecommerce site, you do not want to ship your products to wrong or invalid addresses. ;-)

Technical note: The commerceguys/addressing library actually follows ISO standards, particularly ISO 3166 for country and state codes. It also uses CLDR and Google's address data. The dataset is stored as part of the library’s code in JSON format.

Migrating countries and zone fields

The Address module offers two more field types: Country and Zone. Both have only one subfield, value, which is selected by default. For country, you store the two-letter country code. For zone, you store a serialized version of a Zone object.

What did you learn in today’s blog post? Have you migrated addresses before? Did you know the full list of subcomponents available? Did you know that data expectations change per country? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets

TechBeamers Python: Append Vs. Extend in Python List

Planet Python - Wed, 2019-08-14 14:51

In this tutorial, you’ll explore the difference between the append and extend methods of a Python list. Both methods manipulate a list in their own specific way. The append method adds a single item, or a group of items passed as one sequence, as a single element at the tail of a list. On the other hand, the extend method appends each of the input elements individually to the end of the original list. After reading the above description of append() and extend(), it may still seem a bit confusing, so we’ll explain each of these methods with examples and show the difference between the two.
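
As a minimal sketch of the difference (using the interactive interpreter):

>>> a = [1, 2]
>>> a.append([3, 4])  # the whole list becomes ONE new element
>>> a
[1, 2, [3, 4]]
>>> b = [1, 2]
>>> b.extend([3, 4])  # each input item is added individually
>>> b
[1, 2, 3, 4]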

The post Append Vs. Extend in Python List appeared first on Learn Programming and Software Testing.

Categories: FLOSS Project Planets

Real Python: An Effective Python Environment: Making Yourself at Home

Planet Python - Wed, 2019-08-14 10:00

When you’re first learning a new programming language, a lot of your time and effort go into understanding the syntax, code style, and built-in tooling. This is just as true for Python as it is for any other language. Once you gain enough familiarity to be comfortable with the ins and outs of Python, you can start to invest time into building a Python environment that will foster your productivity.

Your shell is more than a prebuilt program provided to you as-is. It’s a framework on which you can build an ecosystem. This ecosystem will come to fit your needs so that you can spend less time fiddling and more time thinking about the next big project you’re working on.

Although no two developers have the same setup, there are a number of choices everyone faces when cultivating their Python environment. It’s important to understand each of these decisions and the options available to you!

By the end of this article, you’ll be able to answer questions like:

  • What shell should I use? What terminal should I use?
  • What version(s) of Python can I use?
  • How do I manage dependencies for different projects?
  • How can I make my tools do some of the work for me?

Once you’ve answered these questions for yourself, you can embark on the journey of creating a Python environment to call your very own. Let’s get started!

Free Bonus: Click here to get access to a free 5-day class that shows you how to avoid common dependency management issues with tools like Pip, PyPI, Virtualenv, and requirements files.

Shells

When you use a command-line interface (CLI), you execute commands and see their output. A shell is a program that provides this (usually text-based) interface to you. Shells often provide their own programming language that you can use to manipulate files, install software, and so on.

There are more unique shells than could be reasonably listed here, so you’ll see a few prominent ones. Others differ in syntax or enhanced features, but they generally provide the same core functionality.

Unix Shells

Unix is a family of operating systems first developed in the early days of computing. Unix’s popularity has lasted through today, heavily inspiring Linux and macOS. The first shells were developed for use with Unix and Unix-like operating systems.

Bourne Shell (sh)

The Bourne shell—developed by Stephen Bourne for Bell Labs in 1979—was one of the first to incorporate the idea of environment variables, conditionals, and loops. It has provided a strong basis for many other shells in use today and is still available on most systems at /bin/sh.

Bourne-Again Shell (bash)

Built on the success of the original Bourne shell, bash introduced improved user-interaction features. With bash, you get Tab completion, history, and wildcard searching for commands and paths. The bash programming language provides more data types, like arrays.

Z Shell (zsh)

zsh combines many of the best features from other shells along with a few of its own tricks into one experience. zsh offers autocorrection of misspelled commands, shorthand for manipulating multiple files, and advanced options for customizing your command prompt.

zsh also provides a framework for deep customization. The Oh My Zsh project supplies a rich set of themes and plugins, and is often used hand in hand with zsh.

macOS will ship with zsh as its default shell starting with Catalina, speaking to the shell’s popularity. Consider acquainting yourself with zsh now so that you’ll be comfortable with it going forward.

Xonsh

If you’re feeling particularly adventurous, you can give Xonsh a try. Xonsh is a shell that combines some features of other Unix-like shells with the power of Python syntax. You can use the language you already know to accomplish tasks on your filesystem and so on.

Although Xonsh is powerful, it lacks the compatibility other shells tend to share. You might not be able to run many existing shell scripts in Xonsh as a result. If you find that you like Xonsh, but compatibility is a concern, then you can use Xonsh as a supplement to your activities in a more widely used shell.

Windows Shells

Similarly to Unix-like operating systems, Windows also offers a number of options when it comes to shells. The shells offered in Windows vary in features and syntax, so you may need to try several to find one you like best.

CMD (cmd.exe)

CMD (short for “command”) is the default CLI shell for Windows. It’s the successor to COMMAND.COM, the shell built for DOS (disk operating system).

Because DOS and Unix evolved independently, the commands and syntax in CMD are markedly different from shells built for Unix-like systems. However, CMD still provides the same core functionality for browsing and manipulating files, running commands, and viewing output.

PowerShell

PowerShell was released in 2006 and also ships with Windows. It provides Unix-like aliases for most commands, so if you’re coming to Windows from macOS or Linux or have to use both, then PowerShell might be great for you.

PowerShell is vastly more powerful than CMD. With PowerShell you can:

  • Pipe the output of one command to the input of another
  • Automate tasks through the exposed Windows management features
  • Use a scripting language to accomplish complex tasks

Windows Subsystem for Linux

Microsoft has released the Windows Subsystem for Linux (WSL) for running Linux directly on Windows. If you install WSL, then you can use zsh, bash, or any other Unix-like shell. If you want strong compatibility across your Windows and macOS or Linux environments, then be sure to give WSL a try. You may also consider dual-booting Linux and Windows as an alternative.

See this comparison of command shells for exhaustive coverage.

Terminal Emulators

Early developers used terminals to interact with a central mainframe computer. These were devices with a keyboard and a screen or printer that would display computed output.

Today, computers are portable and don’t require separate devices to interact with them, but the terminology still remains. Whereas a shell provides the prompt and interpreter you use to interface with text-based CLI tools, a terminal emulator (often shortened to terminal) is the graphical application you run to access the shell.

Almost any terminal you encounter should support the same basic features:

  • Text colors for syntax highlighting in your code or distinguishing meaningful text in command output
  • Scrolling for viewing an earlier command or its output
  • Copy/paste for transferring text in or out of the shell from other programs
  • Tabs for running multiple programs at once or separating your work into different sessions

macOS Terminals

The terminal options available for macOS are all full-featured, differing mostly in aesthetics and specific integrations with other tools.

Terminal

If you’re using a Mac, then you may have used the built-in Terminal app before. Terminal supports all the usual functionality, and you can also customize the color scheme and a few hotkeys. It’s a nice enough tool if you don’t need many bells and whistles. You can find the Terminal app in Applications → Utilities → Terminal on macOS.

iTerm2

I’ve been a long-time user of iTerm2. It takes the developer experience on Mac a step further, offering a much wider palette of customization and productivity options that enable you to:

  • Integrate with the shell to jump quickly to previously entered commands
  • Create custom search term highlighting in the output from commands
  • Open URLs and files displayed in the terminal with Cmd+click

A Python API ships with the latest versions of iTerm2, so you can even improve your Python chops by developing more intricate customizations!

iTerm2 is popular enough to enjoy first-class integration with several other tools, and has a healthy community building plugins and so on. It’s a good choice because of its more frequent release cycle compared to Terminal, which only updates as often as macOS does.

Hyper

A relative newcomer, Hyper is a terminal built on Electron, a framework for building desktop applications using web technologies. Electron apps are heavily customizable because they’re “just JavaScript” under the hood. You can create any functionality that you can write the JavaScript for.

On the other hand, JavaScript is a high-level programming language and won’t always perform as well as lower-level languages like Objective-C or Swift. Be mindful of the plugins you install or create!

Windows Terminals

As with the shell options, Windows terminal options vary widely in utility. Some are tightly bound to a particular shell as well.

Command Prompt

Command Prompt is the graphical application you can use to work with CMD in Windows. Like CMD, it’s a bare-bones tool for getting a few small things done. Although Command Prompt and CMD provide fewer features than other alternatives, you can be confident that they’ll be available on nearly every Windows installation and in a consistent place.

Cygwin

Cygwin is a third-party suite of tools for Windows that provides a Unix-like wrapper. This was my preferred setup when I was in Windows, but you may consider adopting the Windows Subsystem for Linux as it receives more traction and polish.

Windows Terminal

Microsoft recently released an open source terminal for Windows 10 called Windows Terminal. It lets you work in CMD, PowerShell, and even the Windows Subsystem for Linux. If you need to do a fair amount of shell work in Windows, then Windows Terminal is probably your best bet! Windows Terminal is still in late beta, so it doesn’t ship with Windows yet. Check the documentation for instructions on getting access.

Python Version Management

With your choice of terminal and shell made, you can focus your attention on your Python environment specifically.

Something you’ll eventually run into is the need to run multiple versions of Python. Projects you use may only run on certain versions, or you may be interested in creating a project that supports multiple Python versions. You can configure your Python environment to accommodate these needs.

macOS and most Unix operating systems come with a version of Python installed by default. This is often called the system Python. The system Python works just fine, but it’s usually out of date. As of this writing, macOS High Sierra still ships with Python 2.7.10 as the system Python.

Note: You’ll almost certainly want to install the latest version of Python at a minimum, so you’ll have at least two versions of Python already.

It’s important that you leave the system Python as the default, because many parts of the system rely on the default Python being a specific version. This is one of many great reasons to customize your Python environment!

How do you navigate this? Tooling is here to help.

pyenv

pyenv is a mature tool for installing and managing multiple Python versions. I recommend installing it with Homebrew. After you’ve got pyenv installed, you can install multiple versions of Python into your Python environment with a few short commands:
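
If you have Homebrew available, installing pyenv is a one-liner:

$ brew install pyenv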

$ pyenv versions
* system
$ python --version
Python 2.7.10
$ pyenv install 3.7.3  # This may take some time
$ pyenv versions
* system
  3.7.3

You can manage which Python you’d like to use in your current session, globally, or on a per-project basis as well. pyenv will make the python command point to whichever Python you specify. Note that none of these overrides the default system Python for other applications, so you’re safe to use them however they work best for you within your Python environment:

$ pyenv global 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by /Users/dhillard/.pyenv/version)
$ pyenv local 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by /Users/dhillard/myproj/.python-version)
$ pyenv shell 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by PYENV_VERSION environment variable)
$ python --version
Python 3.7.3

Because I use a specific version of Python for work, the latest version of Python for personal projects, and multiple versions for testing open source projects, pyenv has proven to be a fairly smooth way for me to manage all these different versions within my own Python environment. See Managing Multiple Python Versions with pyenv for a detailed overview of the tool.

conda

If you’re in the data science community, you might already be using Anaconda (or Miniconda). Anaconda is a sort of one-stop shop for data science software that supports more than just Python.

If you don’t need the data science packages or all the things that come pre-packaged with Anaconda, pyenv might be a better lightweight solution for you. Managing Python versions is pretty similar in each, though. You can install Python versions similarly to pyenv, using the conda command:

$ conda install python=3.7.3

You’ll see a verbose list of all the dependent software conda will install, and it will ask you to confirm.

conda doesn’t have a way to set the “default” Python version or even a good way to see which versions of Python you’ve installed. Rather, it hinges on the concept of “environments,” which you can read more about in the following sections.

Virtual Environments

Now you know how to manage multiple Python versions. Often, you’ll be working on multiple projects that need the same Python version.

Because each project has its own set of dependencies, it’s a good practice to avoid mixing them. If all the dependencies are installed together in a single Python environment, then it will be difficult to discern where each one came from. In the worst cases, two different projects may depend on two different versions of a package, but with Python you can only have one version of a package installed at one time. What a mess!

Enter virtual environments. You can think of a virtual environment as a carbon copy of a base version of Python. If you’ve installed Python 3.7.3, for example, then you can create many virtual environments based off of it. When you install a package in a virtual environment, you do it in isolation from other Python environments you may have. Each virtual environment has its own copy of the python executable.

Tip: Most virtual environment tooling provides a way to update your shell’s command prompt to show the current active virtual environment. Make sure to do this if you frequently switch between projects so you’re sure you’re working inside the correct virtual environment.

venv

venv ships with Python versions 3.3+. You can create virtual environments just by passing it a path at which to store the environment’s python, installed packages, and so on:

$ python -m venv ~/.virtualenvs/my-env

You activate a virtual environment by sourcing its activate script:

$ source ~/.virtualenvs/my-env/bin/activate

You exit the virtual environment using the deactivate command, which is made available when you activate the virtual environment:

(my-env)$ deactivate

venv is built on the wonderful work and successes of the independent virtualenv project. virtualenv still provides a few interesting features of its own, but venv is nice because it provides the utility of virtual environments without requiring you to install additional software. You can probably get pretty far with it if you’re working mostly in a single Python version in your Python environment.

If you’re already managing multiple Python versions (or plan to), then it could make sense to integrate with that tooling to simplify the process of making new virtual environments with specific versions of Python. The pyenv and conda ecosystems both provide ways to specify the Python version to use when you create new virtual environments, covered in the following sections.

pyenv-virtualenv

If you’re using pyenv, then pyenv-virtualenv enhances pyenv with a subcommand for managing virtual environments:

// Create virtual environment
$ pyenv virtualenv 3.7.3 my-env

// Activate virtual environment
$ pyenv activate my-env

// Exit virtual environment
(my-env)$ pyenv deactivate

I switch contexts between a large handful of projects on a day-to-day basis. As a result, I have at least a dozen distinct virtual environments to manage in my Python environment. What’s really nice about pyenv-virtualenv is that you can configure a virtual environment using the pyenv local command and have pyenv-virtualenv auto-activate the right environments as you switch to different directories:

$ pyenv virtualenv 3.7.3 proj1
$ pyenv virtualenv 3.7.3 proj2
$ cd /Users/dhillard/proj1
$ pyenv local proj1
(proj1)$ cd ../proj2
$ pyenv local proj2
(proj2)$ pyenv versions
  system
  3.7.3
  3.7.3/envs/proj1
  3.7.3/envs/proj2
  proj1
* proj2 (set by /Users/dhillard/proj2/.python-version)

pyenv and pyenv-virtualenv have provided a particularly fluid workflow in my Python environment.

conda

You saw earlier that conda treats environments, rather than Python versions, as the main method of working. conda has built-in support for managing virtual environments:

// Create virtual environment
$ conda create --name my-env python=3.7.3

// Activate virtual environment
$ conda activate my-env

// Exit virtual environment
(my-env)$ conda deactivate

conda will install the specified version of Python if it isn’t already installed, so you don’t have to run conda install python=3.7.3 first.

pipenv

pipenv is a relatively new tool that seeks to combine package management (more on this in a moment) with virtual environment management. It mostly abstracts the virtual environment management from you, which can be great as long as things go smoothly:

$ cd /Users/dhillard/myproj

// Create virtual environment
$ pipenv install
Creating a virtualenv for this project…
Pipfile: /Users/dhillard/myproj/Pipfile
Using /path/to/pipenv/python3.7 (3.7.3) to create virtualenv…
✔ Successfully created virtual environment!
Virtualenv location: /Users/dhillard/.local/share/virtualenvs/myproj-nAbMEAt0
Creating a Pipfile for this project…
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (a65489)!
Installing dependencies from Pipfile.lock (a65489)…
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0/0 — 00:00:00
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.

// Activate virtual environment (uses a subshell)
$ pipenv shell
Launching subshell in virtual environment…
 . /Users/dhillard/.local/share/virtualenvs/test-nAbMEAt0/bin/activate

// Exit virtual environment (by exiting subshell)
(myproj-nAbMEAt0)$ exit

pipenv does all the heavy lifting of creating a virtual environment and activating it for you. If you look carefully, you can see that it also creates a file called Pipfile. After you first run pipenv install, this file contains just a few things:

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]

[requires]
python_version = "3.7"

In particular, note that it shows python_version = "3.7". By default, pipenv creates a virtual Python environment using the same Python version it was installed under. If you want to use a different Python version, then you can create the Pipfile yourself before running pipenv install and specify the version you want. If you have pyenv installed, then pipenv will use it to install the specified Python version if necessary.

Abstracting virtual environment management is a noble goal of pipenv, but it does get hung up with hard-to-read errors occasionally. Give it a try, but don’t worry if you feel confused or overwhelmed by it. The tool, documentation, and community will grow and improve around it as it matures.

To get an in-depth introduction to virtual environments, be sure to read Python Virtual Environments: A Primer.

Package Management

For many of the projects you work on, you’ll probably need some number of third-party packages. Those packages may have their own dependencies in turn. In the early days of Python, using packages involved manually downloading files and pointing Python at them. Today, we’re fortunate to have a variety of package management tools available to us.

Most package managers work in tandem with virtual environments, isolating the packages you install in one Python environment from another. Using the two together is where you really start to see the power of the tools available to you.

pip

pip (pip installs packages) has been the de facto standard for package management in Python for several years. It was heavily inspired by an earlier tool called easy_install. Python incorporated pip into the standard distribution starting in version 3.4. pip automates the process of downloading packages and making Python aware of them.

If you have multiple virtual environments, then you can see that they’re isolated by installing a few packages in one:

$ pyenv virtualenv 3.7.3 proj1
$ pyenv activate proj1
(proj1)$ pip list
Package    Version
---------- ---------
pip        19.1.1
setuptools 40.8.0

(proj1)$ python -m pip install requests
Collecting requests
  Downloading .../requests-2.22.0-py2.py3-none-any.whl (57kB)
    100% |████████████████████████████████| 61kB 2.2MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests)
  Downloading .../chardet-3.0.4-py2.py3-none-any.whl (133kB)
    100% |████████████████████████████████| 143kB 1.7MB/s
Collecting certifi>=2017.4.17 (from requests)
  Downloading .../certifi-2019.6.16-py2.py3-none-any.whl (157kB)
    100% |████████████████████████████████| 163kB 6.0MB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests)
  Downloading .../urllib3-1.25.3-py2.py3-none-any.whl (150kB)
    100% |████████████████████████████████| 153kB 1.7MB/s
Collecting idna<2.9,>=2.5 (from requests)
  Downloading .../idna-2.8-py2.py3-none-any.whl (58kB)
    100% |████████████████████████████████| 61kB 26.6MB/s
Installing collected packages: chardet, certifi, urllib3, idna, requests
Successfully installed certifi-2019.6.16 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.3

(proj1)$ pip list
Package    Version
---------- ---------
certifi    2019.6.16
chardet    3.0.4
idna       2.8
pip        19.1.1
requests   2.22.0
setuptools 40.8.0
urllib3    1.25.3

pip installed requests, along with several packages it depends on. pip list shows you all the currently installed packages and their versions.

Warning: You can uninstall packages using pip uninstall requests, for example, but this will only uninstall requests—not any of its dependencies.

A common way to specify project dependencies for pip is with a requirements.txt file. Each line in the file specifies a package name and, optionally, the version to install:

scipy==1.3.0
requests==2.22.0

You can then run python -m pip install -r requirements.txt to install all of the specified dependencies at once. For more on pip, see What is Pip? A Guide for New Pythonistas.
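
Going the other direction, pip can also generate a requirements file from whatever is installed in the active environment:

$ python -m pip freeze > requirements.txt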

pipenv

pipenv has most of the same basic operations as pip but thinks about packages a bit differently. Remember the Pipfile that pipenv creates? When you install a package, pipenv adds that package to Pipfile and also adds more detailed information to a new lock file called Pipfile.lock. Lock files act as a snapshot of the precise set of packages installed, including direct dependencies as well as their sub-dependencies.

You can see pipenv sorting out the package management when you install a package:

$ pipenv install requests
Installing requests…
Adding requests to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock (444a6d) out of date, updating to (a65489)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success!
Updated Pipfile.lock (444a6d)!
Installing dependencies from Pipfile.lock (444a6d)…
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:00

pipenv will use this lock file, if present, to install the same set of packages. You can ensure that you always have the same set of working dependencies in any Python environment you create using this approach.

pipenv also distinguishes between development dependencies and production (regular) dependencies. You may need some tools during development, such as black or flake8, that you don’t need when you run your application in production. You can specify that a package is for development when you install it:

$ pipenv install --dev flake8
Installing flake8…
Adding flake8 to Pipfile's [dev-packages]…
✔ Installation Succeeded
...

pipenv install (without any arguments) will only install your production packages by default, but you can tell it to install development dependencies as well with pipenv install --dev.

poetry

poetry addresses additional facets of package management, including creating and publishing your own packages. After installing poetry, you can use it to create a new project:

$ poetry new myproj
Created package myproj in myproj
$ ls myproj/
README.rst    myproj    pyproject.toml    tests

Similarly to how pipenv creates the Pipfile, poetry creates a pyproject.toml file. This recent standard contains metadata about the project as well as dependency versions:

[tool.poetry]
name = "myproj"
version = "0.1.0"
description = ""
authors = ["Dane Hillard <github@danehillard.com>"]

[tool.poetry.dependencies]
python = "^3.7"

[tool.poetry.dev-dependencies]
pytest = "^3.0"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

You can install packages with poetry add (or as development dependencies with poetry add --dev):

$ poetry add requests
Using version ^2.22 for requests

Updating dependencies
Resolving dependencies... (0.2s)

Writing lock file

Package operations: 5 installs, 0 updates, 0 removals

  - Installing certifi (2019.6.16)
  - Installing chardet (3.0.4)
  - Installing idna (2.8)
  - Installing urllib3 (1.25.3)
  - Installing requests (2.22.0)

poetry also maintains a lock file, and it has a benefit over pipenv because it keeps track of which packages are subdependencies. As a result, you can uninstall requests and its dependencies with poetry remove requests.

conda

With conda, you can use pip to install packages as usual, but you can also use conda install to install packages from different channels, which are collections of packages provided by Anaconda or other providers. To install requests from the conda-forge channel, you can run conda install -c conda-forge requests.

Learn more about package management in conda in Setting Up Python for Machine Learning on Windows.

Python Interpreters

If you’re interested in further customization of your Python environment, you can choose the command line experience you have when interacting with Python. The Python interpreter provides a read-eval-print loop (REPL), which is what comes up when you type python with no arguments in your shell:

Python 3.7.3 (default, Jun 17 2019, 14:09:05)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 2 + 2
4
>>> exit()

The REPL reads what you type, evaluates it as Python code, and prints the result. Then it waits to do it all over again. This is about as much as the default Python REPL provides, which is sufficient for a good portion of typical work.

IPython

Like Anaconda, IPython is a suite of tools supporting more than just Python, but one of its main features is an alternative Python REPL. IPython’s REPL numbers each command and explicitly labels each command’s input and output. After installing IPython (python -m pip install ipython), you can run the ipython command in place of the python command to use the IPython REPL:

Python 3.7.3
Type 'copyright', 'credits' or 'license' for more information
IPython 6.0.0.dev -- An enhanced Interactive Python. Type '?' for help.

In [1]: 2 + 2
Out[1]: 4

In [2]: print("Hello!")
Hello!

IPython also supports Tab completion, more powerful help features, and strong integration with other tooling such as matplotlib for graphing. IPython provided the foundation for Jupyter, and both have been used extensively in the data science community because of their integration with other tools.

The IPython REPL is highly configurable too, so while it falls just shy of being a full development environment, it can still be a boon to your productivity. Its built-in and customizable magic commands are worth checking out.
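
For example, the built-in %timeit magic times a snippet for you with no setup required (the numbers below are illustrative and will vary by machine):

In [1]: %timeit sum(range(1000))
11.1 µs ± 132 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)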

bpython

bpython is another alternative REPL that provides inline syntax highlighting, tab completion, and even auto-suggestions as you type. It provides quite a few of the quick benefits of IPython without altering the interface much. Without the weight of the integrations and so on, bpython might be good to add to your repertoire for a while to see how it improves your use of the REPL.

Text Editors

You spend a third of your life sleeping, so it makes sense to invest in a great bed. As a developer, you spend a great deal of your time reading and writing code, so it follows that you should invest time in setting up your Python environment’s text editor just the way you like it.

Each editor offers a different set of key bindings and model for manipulating text. Some require a mouse to interact with them effectively, whereas others can be controlled with only the keyboard. Some people consider their choice of text editor and customizations some of the most personal decisions they make!

There are so many options to choose from in this arena, so I won’t attempt to cover it in detail here. Check out Python IDEs and Code Editors (Guide) for a broad overview. A good strategy is to find a simple, small text editor for quick changes and a full-featured IDE for more involved work. Vim and PyCharm, respectively, are my editors of choice.

Python Environment Tips and Tricks

Once you’ve made the big decisions about your Python environment, the rest of the road is paved with little tweaks to make your life a little easier. These tweaks each save minutes or seconds alone, but they collectively save you hours of time.

Making a certain activity easier reduces your cognitive load so you can focus on the task at hand instead of the logistics surrounding it. If you notice yourself performing an action over and over, then consider automating it. Use this wonderful chart from XKCD to determine if it’s worth automating a particular task.

Here are a few final tips.

Know your current virtual environment

As mentioned earlier, it’s a great idea to display the active Python version or virtual environment in your command prompt. Most tools will do this for you, but if not (or if you want to customize the prompt), the value is usually contained in the VIRTUAL_ENV environment variable.
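
If you're ever unsure which environment is active, you can inspect the variable directly (the path shown assumes the my-env example from earlier):

$ echo $VIRTUAL_ENV
/Users/dhillard/.virtualenvs/my-env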

Disable unnecessary, temporary files

Have you ever noticed *.pyc files all over your project directories? These files are pre-compiled Python bytecode—they help Python start your application faster. In production, these are a great idea because they’ll give you some performance gain. During local development, however, they’re rarely useful. Set PYTHONDONTWRITEBYTECODE=1 to disable this behavior. If you find use cases for them later, then you can easily remove this from your Python environment.
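
You can set it for the current session, or add the same line to your shell's startup file to make it permanent:

$ export PYTHONDONTWRITEBYTECODE=1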

Customize your Python interpreter

You can affect how the REPL behaves using a startup file. Python will read this startup file and execute the code it contains before entering the REPL. Set the PYTHONSTARTUP environment variable to the path of your startup file. (Mine’s at ~/.pystartup.) If you’d like to hit Up for command history and Tab for completion like your shell provides, then give this startup file a try.
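
As a minimal sketch of such a startup file (assuming a Unix-like system where the readline module is available), the following enables Tab completion and a persistent command history:

# ~/.pystartup: executed before the REPL starts when PYTHONSTARTUP points here
import atexit
import os
import readline
import rlcompleter  # registers Python-aware completion with readline

# Hit Tab to complete names instead of inserting a tab character
readline.parse_and_bind("tab: complete")

# Keep command history across interpreter sessions
history_file = os.path.expanduser("~/.python_history")
try:
    readline.read_history_file(history_file)
except FileNotFoundError:
    pass
atexit.register(readline.write_history_file, history_file)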

Conclusion

You learned about many facets of the typical Python environment. Armed with this knowledge, you can:

  • Choose a terminal with the aesthetics and enhanced features you like
  • Choose a shell with as many (or as few) customization options as you need
  • Manage multiple versions of Python on your system
  • Manage multiple projects that use a single version of Python, using virtual Python environments
  • Install packages in your virtual environments
  • Choose a REPL that suits your interactive coding needs

When you’ve got your Python environment just so, I hope you’ll share screenshots, screencasts, or blog posts about your perfect setup ✨

Categories: FLOSS Project Planets

Drudesk: Drupal 8 for marketers: key benefits & useful modules

Planet Drupal - Wed, 2019-08-14 09:51

It‘s easy and enjoyable to create marketing campaigns, drive leads, and tell your brand’s story to the world if your website is on the right CMS. Drupal 8’s benefits will definitely impress any marketer. So let’s take a closer look at the greatness of Drupal 8 for marketers, see what makes it so valuable, and name a few useful modules.

Categories: FLOSS Project Planets

Catalin George Festila: Python 3.7.3 : Using the flask - part 014.

Planet Python - Wed, 2019-08-14 06:20
Today I worked on a YouTube search project with Flask and the Google API. The source code is simple to understand, and you can test any Google API this way. I created a new Google project with the YouTube API version 3 and an API key. I use this key to connect from the Flask Python module. I also used the isodate Python module. You can see the source code on my GitHub repo named flask_yt.
Categories: FLOSS Project Planets

Introducing Qt Quick 3D: A high-level 3D API for Qt Quick

Planet KDE - Wed, 2019-08-14 06:03

As Lars mentioned in his Technical Vision for Qt 6 blog post, we have been researching how we could have a deeper integration between 3D and Qt Quick. As a result we have created a new project, called Qt Quick 3D, which provides a high-level API for creating 3D content for user interfaces from Qt Quick. Rather than using an external engine which can lead to animation synchronization issues and several layers of abstraction, we are providing extensions to the Qt Quick Scenegraph for 3D content, and a renderer for those extended scene graph nodes.

Does that mean we wrote yet another 3D Solution for Qt?  Not exactly, because the core spatial renderer is derived from the Qt 3D Studio renderer. This renderer was ported to use Qt for its platform abstraction and refactored to meet Qt project coding style.

“San Miguel” test scene running in Qt Quick 3D

What are our Goals? Why another 3D Solution?

Unified Graphics Story

The single most important goal is that we want to unify our graphics story. Currently we are offering two comprehensive solutions for creating fluid user interfaces, each having its own corresponding tooling.  One of these solutions is Qt Quick, for 2D, the other is Qt 3D Studio, for 3D.  If you limit yourself to using either one or the other, things usually work out quite fine.  However, what we found is that users typically ended up needing to mix and match the two, which leads to many pitfalls both in run-time performance and in developer/designer experience.

Therefore, and for simplicity’s sake, we aim to have one runtime (Qt Quick), one common scene graph (Qt Quick Scenegraph), and one design tool (Qt Design Studio). This should present no compromises in features, performance, or the developer/designer experience. This way we do not need to further split our development focus between more products, and we can deliver more features and fixes faster.

Intuitive and Easy to Use API

The next goal for Qt Quick 3D is to provide an API for defining 3D content, an API that is approachable and usable by developers without the need to understand the finer details of the modern graphics pipeline. After all, the majority of users do not need to create specialized 3D graphics renderers for each of their applications, but rather just want to show some 3D content, often alongside 2D. So we have been developing Qt Quick 3D with this perspective in mind.

That being said, we will be exposing more and more of the rendering API over time which will make more advanced use cases, needed by power-users, possible.

At the time of writing of this post we are only providing a QML API, but the goal in the future is to provide a public C++ API as well.

Unified Tooling for Qt Quick

Qt Quick 3D is intended to be the successor to Qt 3D Studio.  For the time being Qt 3D Studio will still continue to be developed, but long-term will be replaced by Qt Quick and Qt Design Studio.

Here we intend to take the best parts of Qt 3D Studio and roll them into Qt Quick and Qt Design Studio.  So rather than needing a separate tool for Qt Quick or 3D, it will be possible to just do both from within Qt Design Studio.  We are working on the details of this now and hope to have a preview of this available soon.

For existing users of Qt 3D Studio, we have been working on a porting tool to convert projects to Qt Quick 3D. More on that later.

First Class Asset Conditioning Pipeline

When dealing with 3D scenes, asset conditioning becomes more important because now there are more types of assets being used, and they tend to be much bigger overall.  So as part of the Qt Quick 3D development effort we have been looking at how we can make it as easy as possible to import your content and bake it into efficient runtime formats for Qt Quick.

For example, at design time you will want to specify the assets you are using based on what your asset creation tools generate (like FBX files from Maya for 3D models, or PSD files from Photoshop for textures), but at runtime you would not want the engine to use those formats.  Instead, you will want to convert the assets into some efficient runtime format, and have them updated each time the source assets change.  We want this to be an automated process as much as possible, and so want to build this into the build system and tooling of Qt.

Cross-platform Performance and Compatibility

Another of our goals is to support multiple native graphics APIs, using the new Rendering Hardware Interface being added to Qt. Currently, Qt Quick 3D only supports rendering using OpenGL, like many other components in Qt. However, in Qt 6 we will be using the QtRHI as our graphics abstraction and there we will be able to support rendering via Vulkan, Metal and Direct3D as well, in addition to OpenGL.

What is Qt Quick 3D? (and what it is not)

Qt Quick 3D is not a replacement for Qt 3D, but rather an extension of Qt Quick’s functionality to render 3D content using a high-level API.

Here is what a very simple project with some helpful comments looks like:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    id: window
    visible: true
    width: 1280
    height: 720

    // Viewport for 3D content
    View3D {
        id: view
        anchors.fill: parent

        // Scene to view
        Node {
            id: scene

            Light {
                id: directionalLight
            }

            Camera {
                id: camera
                // It's important that your camera is not inside
                // your model so move it back along the z axis
                // The Camera is implicitly facing up the z axis,
                // so we should be looking towards (0, 0, 0)
                z: -600
            }

            Model {
                id: cubeModel
                // #Cube is one of the "built-in" primitive meshes
                // Other Options are:
                // #Cone, #Sphere, #Cylinder, #Rectangle
                source: "#Cube"

                // When using a Model, it is not enough to have a
                // mesh source (ie "#Cube")
                // You also need to define what material to shade
                // the mesh with. A Model can be built up of
                // multiple sub-meshes, so each mesh needs its own
                // material. Materials are defined in an array,
                // and order reflects which mesh to shade
                // All of the default primitive meshes contain one
                // sub-mesh, so you only need 1 material.
                materials: [
                    DefaultMaterial {
                        // We are using the DefaultMaterial which
                        // dynamically generates a shader based on what
                        // properties are set. This means you don't
                        // need to write any shader code yourself.
                        // In this case we just want the cube to have
                        // a red diffuse color.
                        id: cubeMaterial
                        diffuseColor: "red"
                    }
                ]
            }
        }
    }
}

The idea is that defining 3D content should be as easy as 2D.  There are a few extra things you need, like the concepts of Lights, Cameras, and Materials, but all of these are high-level scene concepts, rather than implementation details of the graphics pipeline.

This simple API comes at the cost of less power, of course.  While it may be possible to customize materials and the content of the scene, it is not possible to completely customize how the scene is rendered, unlike in Qt 3D with its customizable framegraph.  Instead, for now there is a fixed forward renderer, and you can define with properties in the scene how things are rendered.  This is like other existing engines, which typically have a few possible rendering pipelines to choose from, and those then render the logical scene.

A Camera orbiting around a Car Model in a Skybox with Axis and Gridlines (note: the stutter is from the 12 FPS GIF)

What Can You Do with Qt Quick 3D?

Well, it can do many things, but these are built up using the following scene primitives:

Node

Node is the base component for any node in the 3D scene.  It represents a transformation in 3D space, but is non-visual.  It works similarly to how the Item type works in Qt Quick.

Camera

Camera represents how a scene is projected to a 2D surface. A camera has a position in 3D space (as it is a Node subclass) and a projection.  To render a scene, you need to have at least one Camera.

Light

The Light component defines a source of lighting in the scene, at least for materials that consider lighting.  Right now, there are 3 types of lights: Directional (default), Point and Area.

Model

The Model component is the one visual component in the scene.  It represents a combination of geometry (from a mesh) and one or more materials.

The source property of the Model component expects a .mesh file, which is the runtime format used by Qt Quick 3D.  To get mesh files, you need to convert 3D models using the asset import tool.  There are also a few built-in primitives. These can be used by setting the following values on the source property: #Cube, #Cylinder, #Sphere, #Cone, or #Rectangle.

We will also be adding a programmatic way to define your own geometry at runtime, but that is not yet available in the preview.

Before a Model can be rendered, it must also have a Material. This defines how the mesh is shaded.

DefaultMaterial and Custom Materials

The DefaultMaterial component is an easy-to-use, built-in material.  All you need to do is create this material, set the properties you want to define, and under the hood all necessary shader code is generated for you automatically.  All the other properties you set on the scene are taken into consideration as well. There is no need to write any graphics shader code (such as vertex or fragment shaders) yourself.

It is also possible to define so-called CustomMaterials, where you do provide your own shader code.  We also provide a library of pre-defined CustomMaterials you can try out by just adding the following to your QML imports:

import QtQuick3D.MaterialLibrary 1.0

Texture

The Texture component represents a texture in the 3D scene, as well as how it is mapped to a mesh.  The source for a texture can either be an image file, or a QML Component.

A Sample of the Features Available

3D Views inside of Qt Quick

To view 3D content inside of Qt Quick, it is necessary to flatten it to a 2D surface.  To do this, you use the View3D component.  View3D is the only QQuickItem-based component in the whole API.  You can either define the scene as a child of the View3D, or reference an existing scene by setting the scene property to the root Node of the scene you want to render.

If you have more than one camera, you can also set which camera you want to use to render the scene.  By default, it will just use the first active camera defined in the scene.
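
For example, a minimal sketch (it assumes a Camera with id topCamera and a root Node with id sceneRoot already exist in your file):

View3D {
    anchors.fill: parent
    scene: sceneRoot
    camera: topCamera // explicitly pick which camera renders this view
}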

It is also worth noting that View3D items do not necessarily need to be rendered to an off-screen texture before being displayed.  It is possible to set one of the 4 following render modes to define when the 3D content is rendered (a short sketch follows the list):

  1. Texture: View3D is a Qt Quick texture provider and renders content to a texture via an FBO
  2. Underlay: View3D is rendered before Qt Quick’s 2D content is rendered, directly to the window (3D is always under 2D)
  3. Overlay: View3D is rendered after Qt Quick’s 2D content is rendered, directly to the window (3D is always over 2D)
  4. RenderNode: View3D is rendered in-line with the Qt Quick 2D content.  This can however lead to some quirks due to how Qt Quick 2D uses the depth buffer in Qt 5.
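
As a sketch of selecting one of these (the renderMode property and the Underlay value are taken from the preview API as I understand it, so treat this as illustrative):

View3D {
    anchors.fill: parent
    scene: sceneRoot
    renderMode: View3D.Underlay // 3D content is always drawn under the 2D content
}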


2D Views inside of 3D

It could be that you also want to render Qt Quick content inside of a 3D scene.  To do so, anywhere a Texture is taken as a property value (for example, in the diffuseMap property of the default material), you can use a Texture with its sourceItem property set, instead of just specifying a file in its source property. This way the referenced Qt Quick item is automatically rendered and used as a texture.
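
As a minimal sketch of that (using the diffuseMap and sourceItem properties mentioned above), a live Qt Quick item can become the diffuse texture of a cube:

Model {
    source: "#Cube"
    materials: [
        DefaultMaterial {
            diffuseMap: Texture {
                // Any Qt Quick item works here and is rendered live
                sourceItem: Rectangle {
                    width: 256; height: 256
                    color: "orange"
                    Text { anchors.centerIn: parent; text: "I am 2D content" }
                }
            }
        }
    ]
}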

The diffuse color textures being mapped to the cubes are animated Qt Quick 2D items.

3D QML Components

Due to Qt Quick 3D being built on QML, it is possible to create reusable components for 3D as well.  For example, if you create a Car model consisting of several Models, just save it to Car.qml. You can then instantiate multiple instances of Car by reusing it, like any other QML type. This is very important because this way 2D and 3D scenes can be created using the same component model, instead of having to deal with different approaches for the 2D and 3D scenes.

Multiple Views of the Same Scene

Because scene definitions can exist anywhere in a Qt Quick project, it's possible to reference them from multiple View3Ds.  If you had multiple cameras in a scene, you could even render from each one to a different View3D.

4 views of the same Teapot scene. Also changing between 3 Cameras in the Perspective view.

Shadows

Any Light component can specify that it is casting shadows.  When this is enabled, shadows are automatically rendered in the scene.  Depending on what you are doing though, rendering shadows can be quite expensive, so you can fine-tune which Model components cast and receive shadows by setting additional properties on the Model.
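
A sketch of that fine-tuning (the property names here, castsShadow on the light and castsShadows/receivesShadows on the model, are how I understand the API and may differ slightly in the preview):

Light {
    castsShadow: true // this light now produces shadows
}

Model {
    source: "#Sphere"
    castsShadows: false // cheaper: this model never casts a shadow
    receivesShadows: true // but shadows from other models still fall on it
    materials: [ DefaultMaterial { diffuseColor: "gray" } ]
}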

Image Based Lighting

In addition to the standard Light components, it's possible to light your scene by defining a HDRI map. This Texture can be set either for the whole View3D in its SceneEnvironment property, or on individual Materials.

Animations

Animations in Qt Quick 3D use the same animation system as Qt Quick.  You can bind any property to an animator and it will be animated and updated as expected. Using the QtQuickTimeline module it is also possible to use keyframe-based animations.
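
For example, here is a minimal sketch that spins a cube with a plain Qt Quick NumberAnimation (the angle helper property is mine, and I assume Node's rotation is a vector3d of Euler angles as in the preview):

Model {
    source: "#Cube"
    materials: [ DefaultMaterial { diffuseColor: "red" } ]

    property real angle: 0
    rotation: Qt.vector3d(0, angle, 0) // bind the animated value into the 3D transform

    NumberAnimation on angle {
        from: 0
        to: 360
        duration: 4000
        loops: Animation.Infinite
    }
}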

Like the component model, this is another important step in reducing the gap between 2D and 3D scenes, as no separate, potentially conflicting animation systems are used here.

Currently there is no support for rigged animations, but that is planned in the future.

How Can You Try it Out?

The intention is to release Qt Quick 3D as a technical preview along with the release of Qt 5.14.  In the meantime it can already be used now, against Qt 5.12 and higher.

To get the code, you just need to build the QtQuick3D module which is located here:

https://git.qt.io/annichol/qtquick3d

What About Tooling?

The goal is that it should be possible via Qt Design Studio to do everything you need to set up a 3D scene. That means being able to visually lay out the scene, import 3D assets like meshes, materials, and textures, and convert those assets into efficient runtime formats used by the engine.

A demonstration of early Qt Design Studio integration for Qt Quick 3D

Importing 3D Scenes to QML Components

Qt Quick 3D can also be used by writing QML code manually. Therefore, we also have some stand-alone utilities for converting assets.  One such tool is the balsam asset conditioning tool.  Right now it is possible to feed this utility an asset from a 3D asset creation tool like Blender, Maya, or 3DS Max, and it will generate a QML component representing the scene, as well as any textures, meshes, and materials it uses.  Currently this tool supports generating scenes from the following formats:

  • FBX
  • Collada (dae)
  • OBJ
  • Blender (blend)
  • GLTF2

To convert the file myTestScene.fbx you would run:

./balsam -o ~/exportDirectory myTestScene.fbx

This would generate a file called MyTestScene.qml together with any assets needed. Then you can just use it like any other Component in your scene:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    width: 1920
    height: 1080
    visible: true
    color: "black"

    Node {
        id: sceneRoot
        Light {
        }
        Camera {
            z: -100
        }
        MyTestScene {
        }
    }

    View3D {
        anchors.fill: parent
        scene: sceneRoot
    }
}

We are working to improve the assets generated by this tool, so expect improvements in the coming months.

Converting Qt 3D Studio Projects

In addition to being able to generate 3D QML components from 3D asset creation tools, we have also created a plugin for our asset import tool to convert existing Qt 3D Studio projects.  If you have used Qt 3D Studio before, you will know it generates projects in XML format to define the scene.  If you give the balsam tool a UIP or UIA project generated by Qt 3D Studio, it will also generate a Qt Quick 3D project based on that.  Note however that since the runtime used by Qt 3D Studio is different from Qt Quick 3D, not everything will be converted. It should nonetheless give a good approximation or starting point for converting an existing project.  We hope to continue improving support for this path to smooth the transition for existing Qt 3D Studio users.

Qt 3D Studio example application ported using Qt Quick 3D’s import tool. (it’s not perfect yet)

What About Qt 3D?

The first question I expect to get is: why not just use Qt 3D?  This is the same question we have been exploring for the last couple of years.

One natural assumption is that we could just build all of Qt Quick on top of Qt 3D if we want to mix 2D and 3D.  We intended to, and started to, do this with the 2.3 release of Qt 3D Studio.  Qt 3D’s powerful API provided a good abstraction for implementing a rendering engine to re-create the behavior expected by Qt Quick and Qt 3D Studio. However, Qt 3D’s architecture makes it difficult to get the performance we needed on entry-level embedded hardware. Qt 3D also comes with a certain overhead from its own limited runtime as well as from being yet another level of abstraction between Qt Quick and the graphics hardware.  In its current form, Qt 3D is not ideal to build on if we want to reach a fully unified graphics story while ensuring continued good support for a wide variety of platforms and devices ranging from low to high end.

At the same time, we already had a rendering engine in Qt 3D Studio that did exactly what we needed, and it was a good basis for building additional functionality.  This comes with the downside that we no longer have the powerful APIs that come with Qt 3D, but in practice once you start building a runtime on top of Qt 3D, you already end up making decisions about how things should work, leading to a limited ability to customize the framegraph anyway. In the end the most practical decision was to use the existing Qt 3D Studio rendering engine as our base, and build off of that.

What is the Plan Moving Forward?

This release is just a preview of what is to come.  The plan is to provide Qt Quick 3D as a fully supported module along with the Qt 5.15 LTS.  In the meantime we are working on further developing Qt Quick 3D for release as a Tech Preview with Qt 5.14.

For the Qt 5 series we are limited in how deeply we can combine 2D and 3D because of binary compatibility promises.  With the release of Qt 6 we are planning an even deeper integration of Qt Quick 3D into Qt Quick to provide an even smoother experience.

The goal here is that we want to be as efficient as possible when mixing 2D and 3D content, without introducing any additional overhead for users who do not use any 3D content at all.  We will not be doing anything drastic like forcing all Qt Quick apps to go through the new renderer, only those that mix 2D and 3D.

In Qt 6 we will also be using the Qt Rendering Hardware Interface to render Qt Quick (including 3D) scenes which should eliminate many of the current issues we have today with deployment of OpenGL applications (by using DirectX on Windows, Metal on macOS, etc.).

We also want to make it possible for end users to use the C++ Rendering API we have created more generically, without Qt Quick.  The code is there now as private API, but we are waiting until the Qt 6 time-frame (and the RHI porting) before we make the compatibility promises that come with public APIs.

Feedback is Very Welcome!

This is a tech preview, so much of what you see now is subject to change.  For example, the API is a bit rough around the edges now, so we would like to know what we are missing, what doesn’t make sense, what works, and what doesn’t. The best way to provide this feedback is through the Qt Bug Tracker.  Just remember to use the Qt Quick: 3D component when filing your bugs/suggestions.

The post Introducing Qt Quick 3D: A high-level 3D API for Qt Quick appeared first on Qt Blog.

Categories: FLOSS Project Planets

Dropsolid: Debugging segmentation faults in Drupal

Planet Drupal - Wed, 2019-08-14 04:15
14 Aug

Segfaults in Drupal are not a common occurrence - but when they do pop up, they can pose some tricky challenges... Our DevOps Engineer Mattias Michaux sheds some light on how to debug segmentation faults.

As a Drupal web developer, it can be frightening to encounter a so-called segmentation fault or segfault. This is a type of failure raised by hardware with memory protection, notifying you that the software has attempted to access a restricted area of memory. This kind of fault is common in languages with low-level memory management, such as C. Coding in a scripting language like PHP usually implies that you will be spared from segfaults, but on rare occasions these kinds of errors can still pop up. And if they do, they tend to leave no stack trace or meaningful error clue in the PHP log. This is because segfault errors originate in the layers located below the PHP engine. Let me talk you through a recent consulting project that involved a curious segmentation fault.

Segfault case study

Not so long ago, one of our clients started experiencing seemingly random ‘503 service unavailable’ errors, at random times on random pages. That’s plenty of randomness, without much of a clue to start from.

The segfaults had started occurring after an update from PHP 5.6 to PHP 7.2 and the switch from mod_php (where PHP is executed as a module within the Apache process) to PHP-FPM (where PHP runs as a standalone service that Apache connects to).

We tried to reproduce the errors on a copy of the site and infrastructure of the project. The segfault was only reproducible when we ran a scraping tool on the site. There was no apparent connection between the pages it occurred on, and the error came up with less than 1% of the total requests. Looking at the logs, there seemed to be no direct cause for this error.


[proxy_fcgi:error] [pid 10272] AH01067: Failed to read FastCGI header
[proxy_fcgi:error] [pid 10272] (104)Connection reset by peer: AH01075: Error dispatching request to ****

Example of the unhelpful Apache error message


The message above only tells us that something went wrong, but it provides no indication as to what the cause might be. Next, the PHP-FPM log indicated a segmentation fault:

WARNING: [pool web] child 3824 exited on signal 11 (SIGSEGV) after 3.353763 seconds from start


The syslog entry didn’t turn out to be very helpful either:

kernel: [4734894.041892] traps: php-fpm7.2[3760] general protection ip:555ce4cb6342 sp:7ffccab418c8 error:0
kernel: [4734894.041897] in php-fpm7.2[555ce4a51000+411000]


To find out what the root cause might be, we needed to recompile PHP with debugging enabled. This allowed us to produce core dumps, which contain a full backtrace. A backtrace is a summary of how the program got to a particular point. It displays one line per frame for many frames, starting with the frame that is currently being executed. I suggest finding a guide like this one on how to compile PHP with debugging enabled.

After compiling PHP, we copied the php.ini and pool config from the original PHP version to the freshly compiled one. Next, we altered the config, so now the PHP-FPM pool used a separate name and socket path, and we changed that socket in the vhost. After starting the new PHP-FPM instance and running the crawler to reproduce the issue, we quickly saw the same 503 errors showing up, but this time with a core dump. We started comparing the dumps and tried to find a pattern. Fortunately, most cases pointed to the same thing: the unserialize() function of large objects. This made us look for specific PHP bugs that involved unserializing or memory allocation. We found an interesting one - in fact, someone had encountered almost the same issue before.

We decided to try the same approach by disabling garbage collection in the settings file:

ini_set('zend.enable_gc', 0);


We reran the scraper for multiple hours and no issues occurred. There are other documented cases where PHP’s garbage collection interferes with Drupal’s processing of huge amounts of data and objects. Because disabling the garbage collection globally could have negative effects on the resource consumption of the live site, we also looked for a way to disable garbage collection only partially. This is indeed possible by patching includes/cache.inc so that _cache_get_object doesn’t run gc while fetching data.
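
As an illustration of that idea (a sketch, not the actual patch we applied; the wrapper function name is mine, while gc_enabled(), gc_disable() and gc_enable() are standard PHP):

<?php
// Fetch from a cache bin with PHP's garbage collector temporarily
// disabled, so unserializing large cached objects cannot trigger a gc run.
function example_cache_get_without_gc($bin) {
  $gc_was_enabled = gc_enabled();
  if ($gc_was_enabled) {
    gc_disable();
  }
  $cache = _cache_get_object($bin);
  if ($gc_was_enabled) {
    gc_enable();
  }
  return $cache;
}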

This puzzling problem took several hours and two people to solve, but in the end we managed to diagnose it correctly and implement a solid solution. An interesting case that led us to explore unusual parts of Drupal - and possibly something to bear in mind next time your Drupal environment produces a similar type of error.

gdb /opt/php/7.2/sbin/php-fpm /tmp/coredump-php-fpm.28651 -ex "source /opt/php-src/php7.2-build/php-7.2.17/.gdbinit" -ex "zbacktrace" --batch | grep "\[0x"
[0x7f2c2b4220e0] unserialize("O:4:"view":49:{s:8:"db_table";s:10:"views_view";s:10:"base_table";s:18:"commerce_line_item";s:10:"base_field";s:3:"nid";s:4:"name";s:25:"commerce_reports_products";s:3:"vid";s:0:"";s:11:"description";s:0:"";s:3:"tag";s:16:"commerce_reports";s:10:"human_nam...") [internal function]
[0x7f2c2b421f40] DrupalDatabaseCache->prepareItem(object[0x7f2c2b421f90]) /home/project/www/includes/cache.inc:520
[0x7f2c2b421d80] DrupalDatabaseCache->getMultiple(reference) /home/project/www/includes/cache.inc:433
[0x7f2c2b421cf0] cache_get_multiple(reference, "cache_views") /home/project/www/includes/cache.inc:113
[0x7f2c2b421a70] _ctools_export_get_defaults_from_cache("views_view", array(24)[0x7f2c2b421ad0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:746
[0x7f2c2b421390] _ctools_export_get_defaults("views_view", array(24)[0x7f2c2b4213f0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:649
[0x7f2c2b4211a0] _ctools_export_get_some_defaults("views_view", array(24)[0x7f2c2b421200], array(1)[0x7f2c2b421210]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:783
[0x7f2c2b420210] ctools_export_load_object("views_view", "names", array(1)[0x7f2c2b420280]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:493
[0x7f2c2b420080] ctools_export_crud_load("views_view", "search_results") /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:81
[0x7f2c2b41ff50] views_get_view("search_results") /home/project/www/sites/all/modules/contrib/views/views.module:1683
[0x7f2c2b41fb50] views_block_view("-exp-search_results-page") /home/project/www/sites/all/modules/contrib/views/views.module:772
[0x7f2c2b41fa60] module_invoke("views", "block_view", array(1)[0x7f2c2b41fad0]) /home/project/www/includes/module.inc:934
[0x7f2c2b41f410] _block_render_blocks(array(0)[0x7f2c2b41f460]) /home/project/www/modules/block/block.module:911
[0x7f2c2b41f2d0] block_list("branding") /home/project/www/modules/block/block.module:690
[0x7f2c2b41f200] block_get_blocks_by_region("branding") /home/project/www/modules/block/block.module:319
[0x7f2c2b41eee0] block_page_build(reference) /home/project/www/modules/block/block.module:270
[0x7f2c2b41ecf0] drupal_render_page(reference) /home/project/www/includes/common.inc:5914
[0x7f2c2b41e570] drupal_deliver_html_page(array(1)[0x7f2c2b41e5c0]) /home/project/www/includes/common.inc:2761
[0x7f2c2b41e3d0] drupal_deliver_page(array(1)[0x7f2c2b41e420], "") /home/project/www/includes/common.inc:2634
[0x7f2c2b41e100] menu_execute_active_handler() /home/project/www/includes/menu.inc:542
[0x7f2c2b41e030] (main) /home/project/www/index.php:21


Categories: FLOSS Project Planets

Krita Sprint News <2019-08-14 Wed>

Planet KDE - Wed, 2019-08-14 02:32

I am in the Netherlands, where I came for the Krita Sprint, and I have made a lot of progress on my Animated Brush for Google Summer of Code. Read More...

Categories: FLOSS Project Planets

Web Wash: Use Taxonomy Terms as Webform Options in Drupal 8

Planet Drupal - Tue, 2019-08-13 21:30

Webform has a pretty robust system for managing lists of options. When you create a select box, you can define its options just for that element, or use a predefined list. If you go to Structure, Webforms, Configurations and click on Options, you can see all the predefined options and you can create your own.

If you want to learn how to create your own predefined options, check out our tutorial: How to Use Webform Predefined Options in Drupal 8.

One thing to be aware of is that all of these options are stored as config files, which makes perfect sense: it is configuration.

But what if you want editors to manage the options?

Depending on how you deploy Drupal sites, if you change an option only on the production site, your change will be overridden the next time you deploy to production, because you import all new configuration changes.

To work around this, you could look at using Webform Config Ignore.

Another way of managing options is by using the Taxonomy system. An editor would simply manage all the terms from the Taxonomy page and nothing will be stored in config files.

In this tutorial, you’ll learn how to create a select element which uses a taxonomy vocabulary instead of the standard options.

Categories: FLOSS Project Planets

Python Engineering at Microsoft: What’s New for Python in Visual Studio (16.3 Preview 2)

Planet Python - Tue, 2019-08-13 19:41

Today, we are releasing Visual Studio 2019 (16.3 Preview 2), which contains an updated testing experience for Python developers. We are happy to announce that the popular Python testing framework pytest is now supported. Additionally, we have re-worked the unittest experience for Python users in this release. 

Continue reading to learn more about how you can enable and configure pytest and/or unittest for your development environment. What’s even better is that each testing framework is supported in both project mode and in Open Folder scenarios.   


Enabling and Configuring Testing for Projects

Configuring and working with Python tests in Visual Studio is easier than ever before.

If you are new to the testing experience within Visual Studio 2019 for Python projects, right-click on the project name and select the ‘Properties’ option. This opens the project designer, which allows you to configure tests by going to the ‘Test’ tab.

From this tab, simply click the ’Test Framework’ dropdown box to select the testing framework you wish to use:

  • For unittest, we use the project’s root directory for test discovery. This is a default setting that can be modified to include the path to the folder that contains your tests (if your tests are included in a sub-directory). We also use the unittest framework’s default pattern for test filenames (this also can be modified if you use a different file naming system for your test files). Prior to this release, unittest discovery was automatically initiated for the user. Now, the user is required to manually configure testing.
  • For pytest, you can specify a .ini configuration file which contains test filename patterns in addition to many other testing options (a minimal example follows this list).
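
For instance, a minimal pytest.ini might look like this (a sketch using standard pytest options; the paths are illustrative):

[pytest]
testpaths = tests
python_files = test_*.py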

Once you select and save your changes in the window, test discovery is initiated in the Test Explorer. If the Test Explorer is not open, navigate to the Toolbar and select Test > Windows > Test Explorer. Test discovery can take up to 60 seconds, after which the discovery process ends.

Once in the Test Explorer window, you have the ability to re-run your tests (by clicking the ‘Run All’ button or pressing CTRL + R,A) as well as view the status of your test runs.  Additionally, you can see the total number of tests your project contains and the duration of test runs:

If you wish to keep working while tests are running in the background but want to monitor the progress of your test run, you can go to the Output window and choose ‘Show output from: Tests’:

We have also made it simple for users with pre-existing projects that contain test files to quickly continue working with their code in Visual Studio 2019. When you open a project that contains testing configuration files (e.g. a .ini file for pytest), but you have not installed or enabled pytest, you will be prompted to install the necessary packages and configure them for the Python environment in which you are working:

For open folder scenarios (described below), these informational bars will also be triggered if you have not configured your workspace for pytest or unittest.


Configuring Tests for Open Folder Scenarios

In this release of Visual Studio 2019, users can configure tests to work in our popular open folder scenario.

To configure and enable tests, navigate to the Solution Explorer, click the “Show All Files” icon to show all files in the current folder, and select the PythonSettings.json file within the ‘Local Settings’ folder. (If this file doesn’t exist, create one in the ‘Local Settings’ folder). Next, add the field TestFramework: “pytest” or TestFramework: “unittest” to your settings file, depending on the testing framework you wish to use.


For the unittest framework, if UnitTestRootDirectory and/or UnitTestPattern are not specified in PythonSettings.json, they are added and assigned default values of “.” and “test*.py”, respectively.

As in project mode, editing and saving any file triggers test discovery for the test framework that you specified. If you already have the Test Explorer window open, pressing CTRL + R,A also triggers discovery.

Note: If your folder contains a ‘src’ directory which is separate from the folder that contains your tests, you’ll need to specify the path to the src folder in your PythonSettings.json with the setting SearchPaths:
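
Putting the pieces together, a PythonSettings.json for such a layout might look like this (a sketch; the field names are the ones described above, the values are illustrative):

{
  "TestFramework": "unittest",
  "UnitTestRootDirectory": "tests",
  "UnitTestPattern": "test*.py",
  "SearchPaths": ["src"]
}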


Debugging Tests

In this latest release, we’ve updated test debugging to use our new ptvsd 4 debugger, which is faster and more reliable than ptvsd 3. We’ve added an option so that you can use the legacy debugger if you run into issues. To enable it, go to Tools > Options > Python > Debugging > Use Legacy Debugger and check the box to enable it.

As in previous releases, if you wish to debug a test, set an initial breakpoint in your code, then right-click the test (or a selection) in Test Explorer and select Debug Selected Tests. Visual Studio starts the Python debugger as it would for application code.

Note: anyone who debugs a test will find that the debugging session does not automatically end when the test run completes. This is a known issue and the current workaround is to click ‘Stop Debugging’ (Shift + F5).

There’s also the ‘Test Detail Summary’ view that allows you to see the Stack Trace of a failed test which makes troubleshooting failed tests even easier. To access this view, simply click on the test within the Test Explorer that you wish to inspect and the ‘Test Detail Summary’ window will appear.


Try it Out!

Be sure to download Visual Studio 2019 (16.3 Preview 2), install the Python Workload, and give feedback or view a list of existing issues on our GitHub repo.

The post What’s New for Python in Visual Studio (16.3 Preview 2) appeared first on Python.

Categories: FLOSS Project Planets

Lazy Qt Models from QVariant

Planet KDE - Tue, 2019-08-13 18:00

In Calamares there is a debug window; it shows some panes of information and one of them is a tree view of some internal data in the application. The data itself isn’t stored as a model though, it is stored in one big QVariantMap. So to display that map as a tree, the code needs to provide a Qt model so that then the regular Qt views can do their thing.

Each key in the map is a node in the tree to be shown; if the value is a map or a list, then sub-nodes are created for the items in the map or the list, and otherwise it’s a leaf that displays the string associated with the key. In the screenshot you can see the branding key which is a map, and that map contains a bunch of string values.

Historically, the way this map was presented as a model was as follows:

  • A JSON document representing the contents of the map is made,
  • The JSON document is rendered to text,
  • A model is created from the JSON text using dridk’s QJsonModel,
  • That model is displayed.

This struck me as a long way around. Even if there are only a few dozen items overall in the tree, it looks like a lot of copying and buffer management going on. The code where all this happens, though, is only a few lines – it looks harmless enough.

I decided that I wanted to re-do this bit of code – dropping the third-party code in the process, and so simplifying Calamares a little – by using the data from the QVariant directly, with only a lightweight amount of extra data. If I was smart, I would consult more closely with Marek Krajewski’s Hands-On High Performance Programming with Qt 5, but .. this was a case of “I feel this is more efficient” more than doing the smart thing.

I give you VariantModel.

This is strongly oriented towards the key-value display of a QVariantMap as a tree, but it could possibly be massaged into another form. It also is pushy in smashing everything into string form. It could probably use data from the map more directly (e.g. pixmaps) and be even more fancy that way.

Most of my software development is pretty “plain”. It is straightforward code. This was one of the rare occasions that I took out pencil and paper and sketched a data structure before coding (or more accurate: I did a bunch of hacking, got nowhere, and realised I’d have to do some thinking before I’d get anywhere – cue tea and chocolate).

What I ended up with was a QVector of quintptrs (since a QModelIndex can use that quintptr as internal data). The length of the vector is equal to the number of nodes in the tree, and each node is assigned an index in the tree (I used depth-first traversal along whatever arbitrary yet consistent order Qt gives me the keys, enumerating each node as it is encountered). In the vector, I store the parent index of each node, at the index of the node itself. The root is index 0, and has a special parent.

The image shows how a tree with nine nodes can be enumerated into a vector, and then how the vector is populated with parents. The root gets index 0, with a special parent. The first child of the root gets index 1, parent 0. The first child of that node gets index 2, parent 1; since it is a leaf node, its sibling gets index 3, parent 1 .. the whole list of nine parents looks like this:

-1, 0, 1, 1, 0, 0, 5, 5, 5

For QModelIndex purposes, this vector of numbers lets us do two things:

  • the number of children of node n is the number of entries in this vector with n as parent (e.g. a simple QVector::count()).
  • given a node n, we can find out its parent node (it’s at index n in the vector) but also which row it occupies (in QModelIndex terms), by counting how many other nodes have the same parent that occur before it (see the sketch below).
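
In C++, those two lookups could be sketched like this (the names are mine, not necessarily what VariantModel uses):

// Number of children of node n: how many entries name n as their parent.
int childCount(const QVector<quintptr> &parents, quintptr n)
{
    return parents.count(n);
}

// Row of node n under its parent: count earlier entries sharing that parent.
int rowOf(const QVector<quintptr> &parents, int n)
{
    const quintptr parent = parents.at(n);
    int row = 0;
    for (int i = 0; i < n; ++i) {
        if (parents.at(i) == parent) {
            ++row;
        }
    }
    return row;
}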

In order to get the data from the QVariant, we have to walk the tree, which requires a bunch of parent lookups and recursively descending though the tree once the parents are all found.

Changing the underlying map isn’t immediately fatal, but changing the number of nodes (especially in intermediate levels) will do very strange things. There is a reload() method to re-build the list and parent indexes if the underlying data changes – in that sense it’s not a very robust model. It might make sense to memoize the data as well while walking the tree – again, I need to read more of Marek’s work.

I’m kinda pleased with the way it turned out; the consumers of the model have become slightly simpler, even if the actual model code (from QJsonModel to VariantModel) isn’t much smaller. There’s a couple of places in Calamares that might benefit from this model besides the debug window, so it is likely to get some revisions as I use it more.

Categories: FLOSS Project Planets

Roberto Alsina: Episodio 5: Muchos Pythons

Planet Python - Tue, 2019-08-13 16:55

A pseudo-sequel to "Puede Fallar", showing several things:

  • Anvil: a way to build full-stack web applications with Python!
  • Skulpt

And much more!

The application I show in the video: On Anvil

The code: you can clone it

Detail: "the Twitter thing" ended up reduced to a button inside the application, but it served as a trigger :-)

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #381 (Aug. 13, 2019)

Planet Python - Tue, 2019-08-13 15:30

#381 – AUGUST 13, 2019
View in Browser »

Traditional Face Detection With Python

In this course on face detection with Python, you’ll learn about a historically important algorithm for object detection that can be successfully applied to finding the location of a human face within an image.
REAL PYTHON video

Three Techniques for Inverting Control, in Python

Inversion of Control, in which code delegates control using plugins, is a powerful way of modularising software. It may sound complicated, but it can be achieved in Python with very little work. This article examines three different techniques for handling IOC in Python.
DAVID SEDDON

Save 40% on Your Order at manning.com

Take the time to learn something new! Manning Publications are offering 40% off everything at manning.com, including everything Pythonic. Just enter the code pycoders40 at the cart before you checkout to save →
MANNING PUBLICATIONS sponsor

NumPy 1.17.0 Drops Python 2 Support

“The 1.17.0 release contains a number of new features that should substantially improve its performance and usefulness. The Python versions supported are 3.5-3.7, note that Python 2.7 has been dropped.”
MAIL-ARCHIVE.COM

print() Function Deep Dive (17,000 Words Guide)

Learn all there is to know about the print() function in Python and discover some of its lesser-known features. Avoid common mistakes, take your “hello world” to the next level, and know when to use a better alternative.
REAL PYTHON

An Overview of the Python Tooling Landscape

An opinionated guide to tooling in Python covering pyenv, poetry, black, flake8, isort, pre-commit, pytest, coverage, tox, Azure Pipelines, sphinx, and readthedocs.
ADITHYA BALAJI

Learn How Python Async Web Frameworks Work by Writing One From Scratch

OLEH KUCHUK

Discussions

Tools for Remote/Pair-Programming In-The-Cloud?

MAIL.PYTHON.ORG

Who Says Python Programmers Don’t Have a Sense of Humor?

RAYMOND HETTINGER

As an Expert, Which Bad Habits Would You Advice Beginner Python Programmers to Avoid?

REDDIT

Python Is Eating the World (Discussion)

HACKER NEWS

Python Jobs

Senior Python Developer (Austin, TX)

InQuest

Backend and DataScience Engineers (London, Relocation & Visa Possible)

Citymapper Ltd

Software Engineering Lead (Houston, TX)

SimpleLegal

Software Engineer (Multiple US Locations)

Invitae

Python Software Engineer (Munich, Germany)

Stylight GmbH

Senior Software Developer (Edmonton, AB)

Levven Electronics Ltd.

Lead Data Scientist (Buffalo, NY)

Utilant LLC

Python Developer (Remote)

418 Media

More Python Jobs >>>

Articles & Tutorials

Inheritance and Composition: A Python OOP Guide

In this step-by-step tutorial, you’ll learn about inheritance and composition in Python. You’ll improve your object oriented programming skills by understanding how to use inheritance and composition and how to leverage them in their design.
REAL PYTHON

An Intro to Flake8

Flake8 is a style guide enforcement tool for Python that you can use in place of PyLint to help you find errors in your code and more closely follow PEP8. This article shows you how to get up and running with Flake8.
MIKE DRISCOLL

Python Developers Are in Demand on Vettery

Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today →
VETTERY sponsor

Organizing PythonPune Meetups

PythonPune is a meetup group in Pune India. This blog post is about how the author got involved in organizing the meetup and what the process looks like.
BHAVIN GANDHI

Keras for Beginners: Implementing a Convolutional Neural Network

A beginner-friendly guide on using Keras to implement a simple Convolutional Neural Network (CNN) in Python.
VICTOR ZHOU

What Every Developer Should Learn Early On

“These are a few of the things I wish they were teaching at university instead of pure theory.”
RYLAND GOLDSTEIN

Making an Interactive Projected Surface With Python + OpenCV

“Computer vision + music = life-sized rhythm games”
TSUKURU.CLUB

Python Libraries for Interpretable Machine Learning

REBECCA VICKERY

Using Django Signals to Simplify and Decouple Code

ROBLEY GORI

Testing Scientific Code

PHILIPP JUNG

Projects & Code

austin: Frame Stack Sampler for CPython

“The most interesting use of Austin is probably in conjunction with FlameGraph to profile Python applications while they are running, without the need of instrumentation. This means that Austin can be used on production code with little or even no impact on performance.”
GITHUB.COM/P403N1X87

jupyter-black: Black Formatter for Jupyter Notebook

GITHUB.COM/DRILLAN

moviepy: Video Editing With Python

GITHUB.COM/ZULKO

LibCST: Concrete Syntax Tree Parser and Serializer Library

Parses Python 3.7 source code as a CST tree that keeps all formatting details (comments, whitespaces, parentheses, etc). It’s useful for building automated refactoring (codemod) applications and linters.
GITHUB.COM/INSTAGRAM

chart: Python Charts With 0 Dependencies

GITHUB.COM/MAXHUMBER • Shared by Max Humber

mintotp: Minimal TOTP Generator in 20 Lines of Python

GITHUB.COM/SUSAM

mesapy: Memory-Safe Python Based on PyPy

GITHUB.COM/MESALOCK-LINUX

pip-tools: Set of Tools to Keep Your Pinned Python Dependencies Fresh

GITHUB.COM/JAZZBAND

Poetry: Python Dependency Management and Packaging Made Easy

EUSTACE.IO

scalpl: Seamlessly Operate on Nested Dictionaries

GITHUB.COM/DUCDETRONQUITO

Events

PyBay

August 15 to August 19, 2019
PYBAY.COM

PyCon Korea 2019

August 15 to August 19, 2019
PYCON.KR

Karlsruhe Python User Group (KaPy)

August 16, 2019
BL0RG.NET

PyDelhi User Group Meetup

August 17, 2019
MEETUP.COM

Dominican Republic Python User Group

August 20, 2019
PYTHON.DO

Kiwi PyCon X

August 23 to August 26, 2019
PYTHON.NZ

IndyPy Web Conf 2019

August 23 to August 24, 2019
INDYPY.ORG

Happy Pythoning!
This was PyCoder’s Weekly Issue #381.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Hook 42: Exploring Contenta CMS

Planet Drupal - Tue, 2019-08-13 15:03
Exploring Contenta CMS
by Lindsey Gemmill, Tue, 08/13/2019 - 19:03
Categories: FLOSS Project Planets

Steve Kemp: That time I didn't find a kernel bug, or did I?

Planet Debian - Tue, 2019-08-13 14:00

Recently I saw a post to the linux kernel mailing-list containing a simple fix for a use-after-free bug. The code in question originally read:

hdr->pkcs7_msg = pkcs7_parse_message(buf + buf_len, sig_len);
if (IS_ERR(hdr->pkcs7_msg)) {
        kfree(hdr);
        return PTR_ERR(hdr->pkcs7_msg);
}

Here the bug is obvious once it has been pointed out:

  • A structure is freed.
    • But then it is dereferenced, to provide a return value.

This is the kind of bug that would probably have been obvious to me if I'd happened to read the code myself. However, patch submitted, so job done? I did have some free time, so I figured I'd scan for similar bugs. Writing a trivial perl script to look for similar things didn't take too long, though it is a bit shoddy (a rough reconstruction follows the steps below):

  • Open each file.
  • If we find a line containing "free(.*)" record the line and the thing that was freed.
  • The next time we find a return look to see if the return value uses the thing that was free'd.
    • If so that's a possible bug. Report it.
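
A rough reconstruction of that script (my sketch, not the original) might look like this:

#!/usr/bin/perl
use strict;
use warnings;

# For every file on the command line: remember the last thing that was
# free'd, and complain if a later return-statement dereferences it.
foreach my $file (@ARGV) {
    open( my $fh, '<', $file ) or next;
    my ( $freed, $where ) = ( '', 0 );
    while ( my $line = <$fh> ) {
        if ( $line =~ /free\(\s*([a-zA-Z_]\w*)\s*\)/ ) {
            ( $freed, $where ) = ( $1, $. );
        }
        elsif ( $freed && $line =~ /return\b.*\b\Q$freed\E\b/ ) {
            print "$file:$.: possible use-after-free of '$freed' (freed at line $where)\n";
            $freed = '';
        }
    }
    close($fh);
}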

Of course my code is nasty, but it looked like it immediately paid off. I found this snippet of code in linux-5.2.8/drivers/media/pci/tw68/tw68-video.c:

if (hdl->error) {
        v4l2_ctrl_handler_free(hdl);
        return hdl->error;
}

That looks promising:

  • The structure hdl is freed, via a dedicated freeing-function.
  • But then we return the member error from it.

Chasing down the code I found that linux-5.2.8/drivers/media/v4l2-core/v4l2-ctrls.c contains the code for the v4l2_ctrl_handler_free call and while it doesn't actually free the structure - just some members - it does reset the contents of hdl->error to zero.

Ahah! The code I've found looks for an error, and if one was found returns zero, meaning the error is lost. I can fix it, by changing to this:

if (hdl->error) {
        int err = hdl->error;
        v4l2_ctrl_handler_free(hdl);
        return err;
}

I did that. Then looked more closely to see if I was missing something. The code I've found lives in the function tw68_video_init1, that function is called only once, and the return value is ignored!

So, that's the story of how I scanned the Linux kernel for use-after-free bugs and contributed nothing to anybody.

Still fun though.

I'll go over my list more carefully later, but nothing else jumped out as being immediately bad.

There is a weird case I spotted in ./drivers/media/platform/s3c-camif/camif-capture.c with a similar pattern. In that case the function involved is s3c_camif_create_subdev which is invoked by ./drivers/media/platform/s3c-camif/camif-core.c:

ret = s3c_camif_create_subdev(camif);
if (ret < 0)
        goto err_sd;

So I suspect there is something odd there:

  • If there's an error in s3c_camif_create_subdev
    • Then handler->error will be reset to zero.
    • Which means that return handler->error will return 0.
    • Which means that the s3c_camif_create_subdev call should have returned an error, but won't be recognized as having done so.
    • i.e. "0 < 0" is false.

Of course the error-value is only set if this code is hit:

hdl->buckets = kvmalloc_array(hdl->nr_of_buckets,
                              sizeof(hdl->buckets[0]),
                              GFP_KERNEL | __GFP_ZERO);
hdl->error = hdl->buckets ? 0 : -ENOMEM;

Which means that the registration of the sub-device fails if there is no memory, and at that point what can you even do?

It's a bug, but it isn't a security bug.

Categories: FLOSS Project Planets
