FLOSS Project Planets

Catalin George Festila: Python Qt5 - the drag and drop feature.

Planet Python - Sun, 2019-10-06 01:04
Today I tested the drag and drop feature with PyQt5, using Python 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] on Linux. This is a simple example using setAcceptDrops and setDragEnabled:

import sys
from PyQt5.QtWidgets import QApplication, QWidget, QListWidget, QHBoxLayout, QListWidgetItem
from PyQt5.QtGui import QIcon

class Window(QWidget):
    def __init__(self):
        super().__init__()
        # ...
Categories: FLOSS Project Planets

This week in KDE: apps, apps apps!

Planet KDE - Sun, 2019-10-06 00:02

It’s been a big week for Dolphin with some new features, plus various improvements for other apps. Among them, KDE developer Christoph Cullmann went on a High DPI rampage and fixed visual glitches in Kate and Okular on Windows when using a High DPI scale factor, and made great progress towards fixing the infamous line glitches in Konsole when using fractional scaling. Though still not quite perfect, it’s much better now.

Beyond that, a bunch of great things are in development which I can’t announce yet, but I guarantee that you’ll like them once they land in the coming weeks!

New Features • Bugfixes & Performance Improvements • User Interface Improvements • How You Can Help

Check out https://community.kde.org/Get_Involved and find out ways to help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

Categories: FLOSS Project Planets

Full Stack Python: How to Add Maps to Django Web App Projects with Mapbox

Planet Python - Sun, 2019-10-06 00:00

Building interactive maps into a Django web application can seem daunting if you do not know where to begin, but it is easier than you think if you use a developer tool such as Mapbox.

In this post we will build a simple Django project with a single app and, using the Mapbox Maps API, add an interactive map to the webpage that Django renders.

Our Tools

Python 3 is strongly recommended for this tutorial because Python 2 will no longer be supported starting January 1, 2020. Python 3.6.5 was used to build this tutorial. We will also use the following application dependencies to build our application:

If you need help getting your development environment configured before running this code, take a look at this guide for setting up Python 3 and Django on Ubuntu 16.04 LTS.

This blog post's code is also available on GitHub within the maps-django-mapbox directory of the blog-code-examples repository. Take the code and use it for your own purposes because it is all provided under the MIT open source license.

Installing Dependencies

Start the Django project by creating a new virtual environment using the following command. I recommend using a separate directory such as ~/venvs/ (the tilde is a shortcut for your user's home directory) so that you always know where all your virtualenvs are located.

python3 -m venv djangomaps

Activate the virtualenv with the activate shell script:

source djangomaps/bin/activate

The command prompt will change after activating the virtualenv:

Remember that you have to activate your virtualenv in every new terminal window where you want to use dependencies in the virtualenv.

We can now install the Django package into the activated but otherwise empty virtualenv.

pip install django==2.0.5

Look for the following output to confirm Django installed correctly from PyPI.

Downloading https://files.pythonhosted.org/packages/23/91/2245462e57798e9251de87c88b2b8f996d10ddcb68206a8a020561ef7bd3/Django-2.0.5-py3-none-any.whl (7.1MB)
    100% |████████████████████████████████| 7.1MB 231kB/s
Collecting pytz (from django==2.0.5)
  Using cached https://files.pythonhosted.org/packages/dc/83/15f7833b70d3e067ca91467ca245bae0f6fe56ddc7451aa0dc5606b120f2/pytz-2018.4-py2.py3-none-any.whl
Installing collected packages: pytz, django
Successfully installed django-2.0.5 pytz-2018.4

The Django dependency is ready to go so now we can create our project and add some awesome maps to the application.

Building Our Django Project

We can use the Django django-admin.py tool to create the boilerplate code structure to get our project started. Change into the directory where you develop your applications. For example, I typically use /Users/matt/devel/py/. Then run the following command to start a Django project named djmaps:

django-admin.py startproject djmaps

The django-admin.py command will create a directory named djmaps along with several subdirectories that you should be familiar with if you have previously worked with Django.

Change directories into the new project.

cd djmaps

Create a new Django app within djmaps.

python manage.py startapp maps

Django will generate a new folder named maps for the project. We should update the URLs so the app is accessible before we write our views.py code.

Open djmaps/djmaps/urls.py. Add the highlighted lines so that URLs will check the maps app for appropriate URL matching.

""" (comments) """ ~~from django.conf.urls import include from django.contrib import admin from django.urls import path urlpatterns = [ ~~ path('', include('maps.urls')), path('admin/', admin.site.urls), ]

Save djmaps/djmaps/urls.py and open djmaps/djmaps/settings.py. Add the maps app to settings.py by inserting the highlighted line:

# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
~~    'maps',
]

Make sure you change the default DEBUG and SECRET_KEY values in settings.py before you deploy any code to production. Secure your app properly with the information from the Django production deployment checklist so that you do not add your project to the list of hacked applications on the web.
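As a rough sketch of what that looks like in practice, settings.py can read these values from the environment instead of hard-coding them. The DJANGO_* variable names below are our own illustrative convention, not anything Django mandates:

```python
# settings.py sketch: pull secrets from the environment so they never
# land in version control. Variable names here are illustrative only.
import os

# Fall back to an obviously-unsafe placeholder during development.
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "insecure-dev-only-key")

# DEBUG defaults to False unless explicitly switched on.
DEBUG = os.environ.get("DJANGO_DEBUG", "").lower() == "true"
```

With this pattern, production hosts export real values while a fresh checkout still starts up for local development.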

Save and close settings.py.

Next change into the djmaps/maps directory. Create a new file named urls.py to contain routes for the maps app.

Add these lines to the empty djmaps/maps/urls.py file.

from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'', views.default_map, name="default"),
]

Save djmaps/maps/urls.py, then open djmaps/maps/views.py and add the following two highlighted lines. You can keep the boilerplate comment or delete it.

from django.shortcuts import render

~~def default_map(request):
~~    return render(request, 'default.html', {})

Next, create a directory for your template files named templates under the djmaps/maps app directory.

mkdir templates

Create a new file named default.html within djmaps/maps/templates that contains the following Django template markup.

<!DOCTYPE html>
<html>
  <head>
    <title>Interactive maps for Django web apps</title>
  </head>
  <body>
    <h1>Map time!</h1>
  </body>
</html>

We can test out this static page to make sure all of our code is correct, then we'll use Mapbox to embed a customizable map within the page. Change into the base directory of your Django project where the manage.py file is located. Execute the development server with the following command:

python manage.py runserver

The Django development server will start up with no issues other than an unapplied migrations warning.

Performing system checks...

System check identified no issues (0 silenced).

You have 14 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.

May 21, 2018 - 12:47:54
Django version 2.0.5, using settings 'djmaps.settings'
Starting development server at
Quit the server with CONTROL-C.

Open a web browser and go to localhost:8000.

Our code works, but boy is that a plain-looking HTML page. Let's make the magic happen by adding JavaScript to the template to generate maps.

Adding Maps with Mapbox

Head to mapbox.com in your web browser to access the Mapbox homepage.

Click on "Get Started" or "Get Started for free" (the text depends on whether or not you already have a Mapbox account).

Sign up for a new free developer account or sign in to your existing account.

Click the "JS Web" option.

Choose "Use the Mapbox CDN" for the installation method. The next two screens show some code that you should add to your djmaps/maps/templates/default.html template file. The code will look like the following but you will need to replace the mapboxgl.accessToken line with your own access token.

<!DOCTYPE html>
<html>
  <head>
    <title>Interactive maps for Django web apps</title>
~~  <script src='https://api.mapbox.com/mapbox-gl-js/v0.44.2/mapbox-gl.js'></script>
~~  <link href='https://api.mapbox.com/mapbox-gl-js/v0.44.2/mapbox-gl.css' rel='stylesheet' />
  </head>
  <body>
    <h1>Map time!</h1>
~~  <div id='map' width="100%" style='height:400px'></div>
~~  <script>
~~    mapboxgl.accessToken = '{{ mapbox_access_token }}';
~~    var map = new mapboxgl.Map({
~~      container: 'map',
~~      style: 'mapbox://styles/mapbox/streets-v10'
~~    });
~~  </script>
  </body>
</html>

Re-open djmaps/maps/views.py to update the parameters passed into the Django template.

from django.shortcuts import render

def default_map(request):
~~    # TODO: move this token to Django settings from an environment variable
~~    # found in the Mapbox account settings and getting started instructions
~~    # see https://www.mapbox.com/account/ under the "Access tokens" section
~~    mapbox_access_token = 'pk.my_mapbox_access_token'
~~    return render(request, 'default.html',
~~                  { 'mapbox_access_token': mapbox_access_token })

The Mapbox access token should really be stored in the Django settings file, so we left a "TODO" note to handle that as a future step.
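A minimal sketch of that future step, assuming an environment variable named MAPBOX_ACCESS_TOKEN (a hypothetical name of our choosing, not part of Mapbox or Django):

```python
# Sketch: read the Mapbox token from the environment at call time
# instead of hard-coding it in views.py. MAPBOX_ACCESS_TOKEN is a
# hypothetical variable name, not an official one.
import os

def mapbox_token():
    token = os.environ.get("MAPBOX_ACCESS_TOKEN", "")
    if not token:
        # Fail loudly rather than render a silently broken map.
        raise RuntimeError("MAPBOX_ACCESS_TOKEN is not set")
    return token
```

The view would then call mapbox_token() and pass the result into the template context exactly as before.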

Now we can try our webpage again. Refresh localhost:8000 in your web browser.

Sweet, we've got a live, interactive map! It's kind of weird, though, how it is zoomed out to view the entire world. Time to customize the map using a few JavaScript parameters.

Customizing the Map

We can modify the map by changing parameters for the style, zoom level, location and many other attributes.

We'll start by changing the location that the initial map centers in on as well as the zoom level.

Re-open djmaps/maps/templates/default.html and modify the first highlighted line so it ends with a comma, then add the two new highlighted lines shown below.

<!DOCTYPE html>
<html>
  <head>
    <title>Interactive maps for Django web apps</title>
    <script src='https://api.mapbox.com/mapbox-gl-js/v0.44.2/mapbox-gl.js'></script>
    <link href='https://api.mapbox.com/mapbox-gl-js/v0.44.2/mapbox-gl.css' rel='stylesheet' />
  </head>
  <body>
    <h1>Map time!</h1>
    <div id='map' width="100%" style='height:400px'></div>
    <script>
      mapboxgl.accessToken = '{{ mapbox_access_token }}';
      var map = new mapboxgl.Map({
        container: 'map',
~~      style: 'mapbox://styles/mapbox/streets-v10',
~~      center: [-77.03, 38.91],
~~      zoom: 9
      });
    </script>
  </body>
</html>

The first number, -77.03, for the center array is the longitude and the second number, 38.91, is the latitude. Zoom level 9 is much closer to the city than the default which was the entire world at level 0. All of the customization values are listed in the Mapbox GL JS API documentation.

Now refresh the page at localhost:8000 to reload our map.

Awesome, now we are zoomed in on Washington, D.C. and can still move around to see more of the map. Let's make a couple other changes to our map before wrapping up.

Again back in djmaps/maps/templates/default.html change the highlighted line for the style key to the mapbox://styles/mapbox/satellite-streets-v10 value. That will change the look from an abstract map style to satellite image data. Update zoom: 9 so that it has a comma at the end of the line and add bearing: 180 as the last key-value pair in the configuration.

<!DOCTYPE html>
<html>
  <head>
    <title>Interactive maps for Django web apps</title>
    <script src='https://api.mapbox.com/mapbox-gl-js/v0.44.2/mapbox-gl.js'></script>
    <link href='https://api.mapbox.com/mapbox-gl-js/v0.44.2/mapbox-gl.css' rel='stylesheet' />
  </head>
  <body>
    <h1>Map time!</h1>
    <div id='map' width="100%" style='height:400px'></div>
    <script>
      mapboxgl.accessToken = '{{ mapbox_access_token }}';
      var map = new mapboxgl.Map({
        container: 'map',
~~      style: 'mapbox://styles/mapbox/satellite-streets-v10',
~~      center: [-77.03, 38.91],
~~      zoom: 9,
~~      bearing: 180
      });
    </script>
  </body>
</html>

Save the template and refresh localhost:8000.

The map now provides a satellite view with streets overlay but it is also... "upside down"! At least the map is upside down compared to how most maps are drawn, due to the bearing: 180 value, which modified this map's rotation.

Not bad for a few lines of JavaScript in our Django application. Remember to check the Mapbox GL JS API documentation for the exhaustive list of parameters that you can adjust.

What's Next?

We just learned how to add interactive JavaScript-based maps to our Django web applications, as well as modify the look and feel of the maps. Next try out some of the other APIs Mapbox provides including:

Questions? Let me know via a GitHub issue ticket on the Full Stack Python repository, on Twitter @fullstackpython or @mattmakai.

Do you see a typo, syntax issue or wording that's confusing in this blog post? Fork this page's source on GitHub and submit a pull request with a fix or file an issue ticket on GitHub.

Categories: FLOSS Project Planets

Norbert Preining: RIP (for now) Calibre in Debian

Planet Debian - Sat, 2019-10-05 22:14

The current purge of all Python2 related packages has a direct impact on Calibre. The latest version of Calibre requires Python modules that are not (anymore) available for Python 2, which means that Calibre >= 4.0 will for the foreseeable future not be available in Debian.

I have just uploaded version 3.48, the last version that can run on Debian. Until upstream Calibre switches to Python 3, this will remain the last version of Calibre in Debian.

In case you need newer features (including the occasional security fixes), I recommend switching to the upstream installer, which is rather clean (installing into /opt/calibre, creating some links to the startup programs, and installing completions for zsh and bash). It also prepares an uninstaller that reverts these changes.


Categories: FLOSS Project Planets

Applied Pokology: Values of the world, unite! - Offsets in Poke

GNU Planet! - Sat, 2019-10-05 20:00
Early in the design of what is becoming GNU poke I was struck by a problem that, to my surprise, would prove not easy to overcome in a satisfactory way: would I make a byte-oriented program, or a bit-oriented program? Considering that the program in question was nothing less than an editor for binary data, this was no petty dilemma.
Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in September 2019

Planet Debian - Sat, 2019-10-05 16:43

Welcome to the September 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick refresher of what our project is about: whilst anyone can inspect the source code of free software for malicious changes, most software is distributed to end users or servers as precompiled binaries. The motivation behind the reproducible builds effort is to ensure zero changes have been introduced during these compilation processes. This is achieved by promising that identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
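The consensus check at the heart of this can be illustrated with a toy sketch (ours, not project code): independent rebuilds of the same source should yield bit-identical artifacts, which third parties compare by cryptographic hash.

```python
# Toy illustration: two independent "builds" are considered reproducible
# when their artifacts hash identically.
import hashlib

def artifact_digest(artifact: bytes) -> str:
    # SHA-256 of the final binary; any single-byte difference changes it.
    return hashlib.sha256(artifact).hexdigest()

builder_a = artifact_digest(b"identical compiled output")
builder_b = artifact_digest(b"identical compiled output")
reproducible = builder_a == builder_b  # the two builders agree
```

In practice this is what allows anyone to rebuild a package and cross-check the published binary without trusting a single build machine.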

In September’s report, we cover:

  • Media coverage & events: more presentations, preventing Stuxnet, etc.
  • Upstream news: kernel reproducibility, grafana, systemd, etc.
  • Distribution work: reproducible images in Arch Linux, policy changes in Debian, etc.
  • Software development: yet more work on diffoscope, upstream patches, etc.
  • Misc news & getting in touch: from our mailing list, how to contribute, etc.

If you are interested in contributing to our project, please visit our Contribute page on our website.

Media coverage & events

This month Vagrant Cascadian attended the 2019 GNU Tools Cauldron in Montréal, Canada and gave a presentation entitled Reproducible Toolchains for the Win (video).

In addition, our project was highlighted as part of a presentation by Andrew Martin at the All Systems Go conference in Berlin titled Rootless, Reproducible & Hermetic: Secure Container Build Showdown, and Björn Michaelsen from the Document Foundation presented at the 2019 LibreOffice Conference in Almería in Spain on the status of reproducible builds in the LibreOffice office suite.

In academia, Anastasis Keliris and Michail Maniatakos from the New York University Tandon School of Engineering published a paper titled ICSREF: A Framework for Automated Reverse Engineering of Industrial Control Systems Binaries (PDF) that speaks to concerns regarding the security of Industrial Control Systems (ICS) such as those attacked via Stuxnet. The paper outlines their ICSREF tool for reverse-engineering binaries from such systems and furthermore demonstrates a scenario whereby a commercial smartphone equipped with ICSREF could be easily used to compromise such infrastructure.

Lastly, it was announced that Vagrant Cascadian will present a talk at SeaGL in Seattle, Washington during November titled There and Back Again, Reproducibly.

2019 Summit

Registration for our fifth annual Reproducible Builds summit that will take place between 1st → 8th December in Marrakesh, Morocco has opened and personal invitations have been sent out.

Similar to previous incarnations of the event, the heart of the workshop will be three days of moderated sessions with surrounding “hacking” days and will include a huge diversity of participants from Arch Linux, coreboot, Debian, F-Droid, GNU Guix, Google, Huawei, in-toto, MirageOS, NYU, openSUSE, OpenWrt, Tails, Tor Project and many more. If you would like to learn more about the event and how to register, please visit our dedicated event page.

Upstream news

Ben Hutchings added documentation to the Linux kernel regarding how to make the build reproducible. As he mentioned in the commit message, the kernel is “actually” reproducible but the end-to-end process was not previously documented in one place and thus Ben describes the workflow and environment needed to ensure a reproducible build.

Daniel Edgecumbe submitted a pull request which was subsequently merged to the logging/journaling component of systemd in order that the output of e.g. journalctl --update-catalog does not differ between subsequent runs despite there being no changes in the input files.

Jelle van der Waa noticed that if the grafana monitoring tool was built within a source tree devoid of Git metadata then the current timestamp was used instead, leading to an unreproducible build. To avoid this, Jelle submitted a pull request in order that it use SOURCE_DATE_EPOCH if available.
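The SOURCE_DATE_EPOCH convention Jelle applied is easy to sketch in any language; here is an illustrative Python version of the standard fallback pattern (not grafana's actual code):

```python
# Standard SOURCE_DATE_EPOCH pattern: use the pinned timestamp when the
# build system provides one, so two builds of the same tree agree.
import os
import time

def build_timestamp():
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is not None:
        return int(sde)      # reproducible: pinned by the build system
    return int(time.time())  # fallback: current time, not reproducible
```

A distribution build wrapper typically exports SOURCE_DATE_EPOCH from the date of the latest source change (e.g. the last Git commit or changelog entry).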

Mes (a Scheme-based compiler for our “sister” bootstrappable builds effort) announced their 0.20 release.

Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Thunderbird and kernel-vanilla packages will be among the larger ones to become reproducible soon and there were additional Python patches to help reproducibility issues of modules written in this language that have C bindings.

OpenWrt is a Linux-based operating system targeting embedded devices such as wireless network routers. This month, Paul Spooren (aparcar) switched the toolchain to use GCC version 8 by default in order to support -ffile-prefix-map=, which permits a varying build path without affecting the binary result of the build []. In addition, Paul updated the kernel-defaults package to ensure that the SOURCE_DATE_EPOCH environment variable is considered when creating the /init directory.

Alexander “lynxis” Couzens began work on a set of build scripts for creating firmware and operating system artifacts in the coreboot distribution.

Lukas Pühringer prepared an upload which was sponsored by Holger Levsen of python-securesystemslib version 0.11.3-1 to Debian unstable. python-securesystemslib is a dependency of in-toto, a framework to protect the integrity of software supply chains.

Arch Linux

The mkinitcpio component of Arch Linux was updated by Daniel Edgecumbe in order that it generates reproducible initramfs images by default, meaning that two subsequent runs of mkinitcpio produces two files that are identical at the binary level. The commit message elaborates on its methodology:

Timestamps within the initramfs are set to the Unix epoch of 1970-01-01. Note that in order for the build to be fully reproducible, the compressor specified (e.g. gzip, xz) must also produce reproducible archives. At the time of writing, as an inexhaustive example, the lzop compressor is incapable of producing reproducible archives due to the insertion of a runtime timestamp.
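The same recipe (epoch timestamps plus a compressor that does not embed its own runtime timestamp) can be sketched with Python's standard library. This is an illustration of the technique, not mkinitcpio's actual shell code:

```python
# Build a tar.gz whose bytes depend only on its input: member mtimes are
# clamped to the Unix epoch, members are added in sorted order, and gzip
# is told to write a zero mtime instead of the current time.
import gzip
import io
import tarfile

def deterministic_tar_gz(files):
    """files: dict mapping member name -> bytes. Output is reproducible."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):            # fixed member order
            data = files[name]
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = 0                    # 1970-01-01, like mkinitcpio
            tar.addfile(info, io.BytesIO(data))
    out = io.BytesIO()
    with gzip.GzipFile(fileobj=out, mode="wb", mtime=0) as gz:
        gz.write(buf.getvalue())
    return out.getvalue()
```

Two runs over the same input now produce byte-identical archives, which is exactly the property the lzop example above lacks.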

In addition, a bug was created to track progress on making the Arch Linux ISO images reproducible.


Debian

In July, Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated in the case that packages had to be manually approved in the “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a patch for the issue that was merged and deployed this month.

Aurélien Jarno filed a bug against the Debian Policy (#940234) to request a section be added regarding the reproducibility of source packages. Whilst there is already a section about reproducibility in the Policy, it only mentions binary packages. Aurélien suggests that it:

… might be a good idea to add a new requirement that repeatedly building the source package in the same environment produces identical .dsc files.

In addition, 51 reviews of Debian packages were added, 22 were updated and 47 were removed this month adding to our knowledge about identified issues. Many issue types were added by Chris Lamb including buildpath_in_code_generated_by_bison, buildpath_in_postgres_opcodes and ghc_captures_build_path_via_tempdir.

Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb uploaded versions 123, 124 and 125 and made the following changes:

  • New features:

    • Add /srv/diffoscope/bin to the Docker image path. (#70)
    • When skipping tests due to the lack of installed tool, print the package that might provide it. []
    • Update the “no progressbar” logging message to match the parallel missing tlsh module warnings. []
    • Update “requires foo” messages to clarify that they are referring to Python modules. []
  • Testsuite updates:

    • The test_libmix_differences ELF binary test requires the xxd tool. (#940645)
    • Build the OCaml test input files on-demand rather than shipping them with the package in order to prevent test failures with OCaml 4.08. (#67)
    • Also conditionally skip the identification and “no differences” tests as we require the OCaml compiler to be present when building the test files themselves. (#940471)
    • Rebuild our test squashfs images to exclude the character device, as extracting it requires root or fakeroot. (#65)
  • Many code cleanups, including dropping some unnecessary control flow [], dropping unnecessary pass statements [] and dropping explicit inheritance from the object class as it is unnecessary in Python 3 [].

In addition, Marc Herbert completely overhauled the handling of ELF binaries, particularly around many assumptions that were previously being made via file extensions, etc. [][][], and updated the testsuite to support a newer version of the coreboot utilities []. Mattia Rizzolo then ensured that diffoscope does not crash when the progress bar module is missing but the functionality was requested [] and made our version checking code more lenient []. Lastly, Vagrant Cascadian not only updated diffoscope to versions 123 and 125, but also enabled a more complete test suite in the GNU Guix distribution. [][][][][][]

Project website

There was yet more effort put into our website this month, including:

In addition, Cindy Kim added in-toto to our “Who is Involved?” page, James Fenn updated our homepage to fix a number of spelling and grammar issues [] and Peter Conrad added BitShares to our list of projects interested in Reproducible Builds [].


strip-nondeterminism is our tool to remove specific non-deterministic results from successful builds. This month, Marc Herbert made a huge number of changes including:

  • GNU ar handler:
    • Don’t corrupt the pseudo file mode of the symbols table.
    • Add test files for “symtab” (/) and long names (//).
    • Don’t corrupt the SystemV/GNU table of long filenames.
  • Add a new $File::StripNondeterminism::verbose global and, if enabled, tell the user that ar(1) could not set the symbol table’s mtime.

In addition, Chris Lamb performed some issue investigation with the Debian Perl Team regarding issues in the Archive::Zip module including a problem with corruption of members that use bzip compression as well as a regression whereby various metadata fields were not being updated that was reported in/around Debian bug #940973.

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

  • Alexander “lynxis” Couzens:
    • Fix missing .xcompile in the build system. []
    • Install the GNAT Ada compiler on all builders. []
    • Don’t install the iasl ACPI power management compiler/decompiler. []
  • Holger Levsen:
    • Correctly handle the $DEBUG variable in OpenWrt builds. []
    • Refactor and notify the #archlinux-reproducible IRC channel of problems in this distribution. []
    • Ensure that only one mail is sent when rebooting nodes. []
    • Unclutter the output of a Debian maintenance job. []
    • Drop a “todo” entry, as we have been varying on a merged /usr for some time now. []

In addition, Paul Spooren added an OpenWrt snapshot build script which downloads .buildinfo and related checksums from the relevant download server and attempts to rebuild and then validate them for reproducibility. []

The usual node maintenance was performed by Holger Levsen [][][], Mattia Rizzolo [] and Vagrant Cascadian [][].


reprotest is our end-user tool to build the same source code twice in different environments and then check the binaries produced by each build for differences. This month, a change by Dmitry Shachnev was merged to not use the faketime wrapper at all when asked not to vary time [] and Holger Levsen subsequently released this as version 0.7.9 after dramatically overhauling the packaging [][].

Misc news & getting in touch

On our mailing list Rebecca N. Palmer started a thread titled Addresses in IPython output which points out and attempts to find a solution to a problem with Python packages, whereby objects that don’t have an explicit string representation have a default one that includes their memory address. This causes problems with reproducible builds if/when such output appears in generated documentation.
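A small Python illustration of the failure mode (using our own toy classes): the default repr embeds the object's memory address, which changes from run to run, while defining __repr__ restores determinism:

```python
# Default repr leaks the memory address; a custom __repr__ does not.
class Widget:
    pass  # inherits object's repr: "<...Widget object at 0x...>"

class StableWidget:
    def __repr__(self):
        # Address-free representation, identical across runs.
        return "StableWidget()"

default_repr = repr(Widget())       # varies between interpreter runs
stable_repr = repr(StableWidget())  # always the same string
```

Any documentation generator that captures default_repr-style output will therefore produce different bytes on every build.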

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Categories: FLOSS Project Planets

Glyph Lefkowitz: A Few Bad Apples

Planet Python - Sat, 2019-10-05 14:32

I’m a little annoyed at my Apple devices right now.

Time to complain.

“Trust us!” says Apple.

“We’re not like the big, bad Google! We don’t just want to advertise to you all the time! We’re not like Amazon, just trying to sell you stuff! We care about your experience. Magical. Revolutionary. Courageous!”

But I can’t hear them over the sound of my freshly-updated Apple TV — the appliance which exists solely to play Daniel Tiger for our toddler — playing the John Wick 3 trailer at full volume automatically as soon as it turns on.

For the aforementioned toddler.

I should mention that it is playing this trailer while specifically logged in to a profile that knows their birth date [1] and also their play history [2].

I’m aware of the preferences which control autoplay on the home screen; it’s disabled now. I’m aware that I can put an app other than “TV” in the default spot, so that I can see ads for other stuff, instead of the stuff “TV” shows me ads for.

But the whole point of all this video-on-demand junk was supposed to be that I can watch what I want, when I want — and buying stuff on the iTunes store included the implicit promise of no advertisements.

At least Google lets me search the web without any full-screen magazine-style ads popping up.

Launch the app store to check for new versions?

I can’t install my software updates without accidentally seeing HUGE ads for new apps.

Launch iTunes to play my own music?

I can’t play my own, purchased music without accidentally seeing ads for other music — and also Apple’s increasingly thirsty, desperate plea for me to remember that they have a streaming service now. I don’t want it! I know where Spotify is if I wanted such a thing, the whole reason I’m launching iTunes is that I want to buy and own the music!

On my iPhone, I can’t even launch the Settings app to turn off my WiFi without seeing an ad for AppleCare+, right there at the top of the UI, above everything but my iCloud account. I already have AppleCare+; I bought it with the phone! Worse, at some point the ad glitched itself out, and now it’s blank, and when I tap the blank spot where the ad used to be, it just shows me this:

I just want to use my device, I don’t need ad detritus littering every blank pixel of screen real estate.

Knock it off, Apple.

  1. less than 3 years ago 

  2. Daniel Tiger, Doctor McStuffins, Word World; none of which have super significant audience overlap with the John Wick franchise 

Categories: FLOSS Project Planets

Ritesh Raj Sarraf: Setting a Lotus Pot

Planet Debian - Sat, 2019-10-05 12:26
Experiences setting up a Lotus Pond Pot

A novice’s first time experience setting up a Lotus pond and germinating the Lotus seeds to a full plant

The trigger

Our neighbors have a very nice Lotus setup in the front of their garden, with flowers blooming in it. It is really a pleasing experience to see it. With lifestyles limiting to specifics, I’m glad to be around with like minded people. So we decided to set up a Lotus Pond Pot in our garden too.

Hunting the pot

About 2/3rd of our garden has been laid out with Mexican Grass, with the exception of some odd spots where there are 2 large concrete tanks in the ground and other small tanks. The large one is fairly big, with a dimension around 3.5 x 3.5 ft. We wanted to make use of that spot. I checked out some of the available pots and found a good looking circular pot carved in stone, of around 2 ft, but it was quite expensive and not fitting my budget.

With the stone pot out of my budget, the other options left were

  • One molded in cement
  • Granite

We looked at another pot maker who made pots from cement molds. They had some small sample pots of around 1x1 ft, but the finished products didn't look very good. From the available samples, we weren't sure how the pot we wanted would turn out. The price difference also wasn't proportionate, and the vendor would take a month to make one. With no ready-made sample of the right size in place, this was something we weren't very enthused to explore.

We instead chose to explore the possibility of building one with granite slabs. The first reason was that granite vendors were more easily available. But second, and most important, given the spot I had chosen, I felt a square granite setup would be an equally good fit. The granite pot was also budget friendly. Finally, we settled on a 3 x 3 ft granite pot.

And this is what our initial Lotus Pot looked like.

Note: The granite pot was quite heavy, close to 200 kilograms. It took three of us to place it at the designated spot.

Actual Lotus pot

As you can see from the picture above, there's another pot inside the larger granite pot itself. Lotuses grow in water and sludge, so we needed to provide the same setup. We used one of our Biryani pots to hold the sludge. This is where the lotus's root tuber will live and grow, inside the water and sludge.

With an open pot, when you have water stored, you have other challenges to take care of.

  • Aeration of the water
  • Mosquitoes

We bought a solar water fountain to take care of the water. It is a nice device that works very well under sunlight. Remember, for your Lotus, you need a well sun-lit area. So, the solar water fountain was a perfect fit for this scenario.

But, with water, breed mosquitoes. As most would do, we chose to put in some fish into the pond to take care of that problem. We put in some Mollies, which we hope would produce more. Also the waste they produce, should work as a supplement fertilizer for the Lotus plant.

Aeration is very important for the fish and so far our Solar Water Fountain seems to be doing a good job there, keeping the fishes happy and healthy.

So in around a week, we had the Lotus Pot in place along with a water fountain and some fish.

The Lotus Plant itself

With the setup in place, it was time to head to the nearby nursery and get the Lotus plant. To our utter surprise, we could not find a Lotus plant in any of the nurseries. After quite a lot of searching, we came across one nursery that had some lotus plants. They didn't look to be in good health, but that was the only option we had. The bigger disappointment was the price, which was insanely high for a plant. We returned home without the Lotus, a little disheartened.

Thank you Internet

The internet has connected the world so well. I looked it up and was delighted to find so many people sharing their experiences with Lotus, in articles and YouTube videos. All the necessary information is out there. With it, we were all charged up to venture into the next step: growing the lotus from seeds instead.

The Lotus seeds

First, we looked up the internet and placed an order for 15 lotus seeds. Soon, they were delivered. And this is what Lotus seeds look like.

Lotus seeds

I had never in my life seen lotus seeds before. The shell is very, very hard. An impressive remnant of the Lotus plant.

Germinating the seed

Germinating the seed is an experience of its own, given how hard the lotus shell is. There are very good articles and videos on the internet explaining the steps to germinate a seed. In a gist, you need to scratch the pointy end of the seed's shell just enough to expose the inner membrane, and then submerge the seed in water for around 7-10 days. Every day, you'll be able to watch the germination progress.

Lotus seed with the pointy end

Make sure to scratch the pointy end well enough, while also ensuring you don't damage the inner membrane of the seed. You also need to change the water in the container daily and keep it in a decently lit area.

Here's an interesting bit, specific to my own experience. It turned out that the seeds we purchased online weren't of good quality. Except for one, none of the seeds germinated. And the one that did, didn't sprout properly. It put out a single shoot, but the shoot did not grow much. In all, it didn't work.

But while we were waiting for the seeds to be delivered, my wife looked at the pictures of the seeds I was studying online and realized that we already had lotus seeds at home. It turns out these seeds are used in our Hindu puja rituals; they are called कमल गट्टा (kamal gatta) in Hindi. We had some seeds left over from a puja, so in parallel we used those for germination, and they sprouted very well.

Unfortunately, I did not take any pictures of them during the germination phase.

The sprouting phase should be done in a tall glass. This allows the shoots to grow long, since eventually they need to be set into the actual pot. During the germination phase, the shoot grows around an inch a day, aiming to reach the surface of the water. Once it reaches the surface, it eventually starts developing roots at its base.

Now it is time to prepare to sow the seed into the sub-pot that holds the sludge.

Sowing your seeds

Initial Lotus growing in the sludge

This picture is from a couple of days after I sowed the seeds. When you transfer the shoots into the sludge, press your finger into the sludge to make room for the seed, then gently place the seed in. You can also cover the sub-pot with some gravel, just to keep the sludge intact and clean.

Once your shoots get accustomed to the new environment, they start to grow again. That is what you see in the picture above, where the shoot reaches the surface of the water and then starts developing into the beautiful lotus leaf.

Lotus leaf

It is a pleasure to see the Lotus leaves floating on the water. The flowering is going to take well over a year, from what I have read on the internet. But for now, the Lotus leaves, their veins, and the water droplets on them are very soothing to watch.

Lotus leaf veins

Lotus leaf veins and water droplets

Final Result

As of today, this is what the final setup looks like. Hopefully, in a year's time, there'll be flowers.

Lotus pot setup now
Categories: FLOSS Project Planets

GNU Guix: GNU Guix maintainer collective expands

GNU Planet! - Sat, 2019-10-05 08:29

In July, we—Ricardo Wurmus and Ludovic Courtès—called for volunteers to join us in maintaining Guix. We are thrilled to announce that three brave hackers responded and that they’re now officially co-maintainers! The Guix maintainer collective now consists of Marius Bakke, Maxim Cournoyer, and Tobias Geerinckx-Rice, in addition to Ricardo and Ludovic. You can reach us all by email at guix-maintainers@gnu.org, a private alias.

So what does it mean to be a maintainer? There are some duties:

  1. Enforcing GNU and Guix policies, such as the project’s commitment to be released under a copyleft free software license (GPLv3+) and to follow the Free System Distribution Guidelines (FSDG).

  2. Enforcing our code of conduct: maintainers are the contact point for anyone who wants to report abuse.

  3. Making decisions, about code or anything, when consensus cannot be reached. We’ve probably never encountered such a situation before, though!

Maintainers should have a good idea of what’s going on, but the other responsibilities can (and should! :-)) be delegated. Maybe you, dear reader, can help on one of them? Here are some examples:

  • Making releases. Any experienced developer can take this responsibility for some time.

  • Dealing with development and its everyday issues as well as long-term roadmaps, branch merges, code review, bug triage, all that.

  • Participating in Outreachy and Google Summer of Code (GSoC).

  • Organizing the Guix Days before FOSDEM and our presence at FOSDEM and other conferences.

  • Taking care of Guix money kindly donated by dozens of people and held at the FSF. A Spending Committee currently consisting of Tobias, Ricardo, and Ludovic, is responsible for deciding on, well, what to spend money on. Maintainers should also keep in touch with the “Guix Europe” non-profit registered in France, currently spearheaded by Manolis Ragkousis and Andreas Enge, and which has been providing financial support for hardware and events.

  • Keeping the build farm infrastructure up and running, extending it, thinking about hosting issues, etc.

  • Keeping the web site up-to-date.

  • Looking after people: making sure to promote people who are very involved into leadership positions; dubbing new committers, new maintainers, and new members of the spending committee. Supporting new initiatives. Generally trying to make sure everyone’s happy. :-)

With now five people on-board, we’ll probably be able to improve some of our processes and be able to scale better. You’re welcome to share your ideas on guix-devel@gnu.org or directly at guix-maintainers@gnu.org!

More generally, we think rotating responsibilities is a great way to bring new ideas and energy into the project. We are super happy and grateful that Maxim, Marius, and Tobias are taking on this challenge—thank you folks!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Recently Used ioslave

Planet KDE - Sat, 2019-10-05 04:05

With D7446 landing, the new recentlyused:/ ioslave will become user-visible with KDE Frameworks 5.63. This differential revision adds two entries, "Recent Files" and "Recent Locations", to the places panel (in Dolphin and open/save dialogs).

It leverages the recentlyused:/ ioslave introduced in D22144, allowing access to KActivity data. KActivity is the service that provides "recent" elements to the Kickoff menu and, as the name suggests, is activity-aware.

So now "Recent Files", "Recent Locations" in the places panel share the same underlining data with kickoff.

But recentlyused:/ can also be used to create your own virtual folders for recent files or folders. For instance:

  • recentlyused:/files?type=video/*,audio/*

To filter recently accessed video and audio files.

  • recentlyused:/files?path=/home/meven/kde/src/*&type=text/plain

To filter recently accessed text files in any subdirectory of /home/meven/kde/src/

  • recentlyused:/locations?path=/home/meven/kde/src/*

To filter recently accessed folders in any subdirectory of /home/meven/kde/src/

You can read the documentation for more details.
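The type= query above takes comma-separated mimetype globs. As a rough illustration of that matching semantics, here is a hypothetical Python sketch (not the actual KIO implementation):

```python
from fnmatch import fnmatch

def matches_type_filter(mimetype, type_filter):
    """Return True if `mimetype` matches any of the comma-separated
    globs in `type_filter`, e.g. "video/*,audio/*" (mimicking ?type=)."""
    return any(fnmatch(mimetype, pattern)
               for pattern in type_filter.split(","))

print(matches_type_filter("audio/mpeg", "video/*,audio/*"))  # True
print(matches_type_filter("text/plain", "video/*,audio/*"))  # False
```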

While working on this new feature, it was a great time to improve KActivity too. So I enabled KActivity to ingest data from GTK applications in differential D23112.

I want to thank Ivan Čukić for building the KActivity service and library and for reviewing most of this work. I also want to thank all the other reviewers involved.

Categories: FLOSS Project Planets

John Goerzen: Resurrecting Ancient Operating Systems on Debian, Raspberry Pi, and Docker

Planet Debian - Fri, 2019-10-04 18:09

I wrote recently about my son playing Zork on a serial terminal hooked up to a PDP-11, and how I eventually bought a vt420 (ok, some vt420s and vt510s, I couldn’t stop at one) and hooked it up to a Raspberry Pi.

This led me down another path: there is a whole set of hardware and software that I’ve never used. For some, it fell out of favor before I could read (and for others, before I was even born).

The thing is – so many of these old systems have a legacy that we live in today. So much so, in fact, that we are now seeing articles about how modern CPUs are fast PDP-11 emulators in a sense. The PDP-11, and its close association with early Unix, lives on in the sense that its design influenced microprocessors and operating systems to this day. The DEC vt100 terminal is, nowadays, known far better as that thing that is emulated, but it was, in fact, a physical thing. Some of it goes back to even mistier times; Emacs, for instance, grew out of the MIT ITS project but was later ported to TOPS-20 before being associated with Unix. vi grew up in 2BSD, and according to Wikipedia, was so large it could barely fit in the memory of a PDP-11/70. Also in 2BSD, a buggy version of Zork appeared — so buggy, in fact, that the save game option was broken. All of this happened in the late 70s.

When we think about the major developments in computing, we often hear of companies like IBM, Microsoft, and Apple. Of course their contributions are undeniable, and emulators for old versions of DOS are easily available for every major operating system, plus many phones and tablets. But as the world is moving heavily towards Unix-based systems, the Unix heritage is far more difficult to access.

My plan with purchasing and setting up an old vt420 wasn’t just to do that and then leave. It was to make it useful for modern software, and also to run some of these old systems under emulation.

To that end, I have released my vintage computing collection – both a script for setting up on a system, and a docker image. You can run Emacs and TECO on TOPS-20, zork and vi on 2BSD, even Unix versions 5, 6, and 7 on a PDP-11. And for something particularly rare, RDOS on a Data General Nova. I threw in some old software compiled for modern systems: Zork, Colossal Cave, and Gopher among them. The bsdgames collection and some others are included as well.

I hope you enjoy playing with the emulated big-iron systems of the 70s and 80s. And in a dramatic turnabout of scale and cost, those machines which used to cost hundreds of thousands of dollars can now be run far faster under emulation on a $35 Raspberry Pi.

Categories: FLOSS Project Planets

CMake 3.15.4 landed in FreeBSD

Planet KDE - Fri, 2019-10-04 18:00

We (and this is a “we” that means “I pushed a button, but other people did all the real work”) just landed the latest CMake release, version 3.15.4, in the official FreeBSD ports tree.

This is part and parcel of the kind of weekly maintenance that the KDE-FreeBSD group goes through: building lots of other stuff. We’re happy to be responsible for code that hundreds of other ports depend on, but it brings a bunch of extra work with it. I probably build gcc and llvm a few times a week just testing new KDE bits and pieces (because in between those tests, the official ports for other parts, like those compilers, have updated as well).

For KDE development, this means that after installing the KDE packages, followed by devel/kdevelop you can dive right in with:

  • Clang 8 or 9 (depends on which FreeBSD version)
  • boost 1.71
  • Qt 5.13.0
  • KDE Frameworks 5.62
  • KDE Plasma 5.16.5
  • KDevelop 5.4.2

You can watch our progress (or how we’re keeping up) over on repology, which is a darn useful resource for figuring out what to package.

Categories: FLOSS Project Planets

PyBites: Code Challenge 64 - PyCon ES 2019 Marvel Challenge

Planet Python - Fri, 2019-10-04 16:00

There is an immense amount to be learned simply by tinkering with things. - Henry Ford

Hey Pythonistas,

This weekend is Pycon ES and in the unlikely event you get bored, you can always do some coding with PyBites. Two more good reasons to do so:

  1. there are prizes / give aways,
  2. your PRs count towards Hacktoberfest (t-shirt).

Fire up your editors and let's get coding!
The Challenge

Most of this challenge is open-ended. We really want to give you creative powers. Here is what we are going to do:

  1. Create an account on https://developer.marvel.com. Upon confirming your email you should get an API key.

  2. Write code to successfully make requests to the API, check out the docs and the boilerplate code provided in the challenge directory (as usual make your virtual env and install requirements / requests for starters!)

  3. To be good citizens, make a function to download the six main endpoints: characters, comics, creators, events, series, and stories. Save the JSON outputs in a data folder.

  4. Now the fun part, and here we leave you totally free: look through the data and tell us / our community a story. Make stunning data visualizations and share them on our Slack, in our new #marvel channel.

  5. PR your work on our platform before Friday 11th of Oct. 2019 23.59 AoE (again, remember: this also counts towards that Hacktoberfest t-shirt!). The 3 best submissions win one of our prizes:
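For step 2, the Marvel API's server-side authentication requires three query parameters per request: a timestamp ts, your public key as apikey, and an md5 hash of ts + private key + public key. Here is a minimal sketch using only the standard library; the keys are placeholders and the characters endpoint is just one example of the six:

```python
import hashlib
import time
from urllib.parse import urlencode

def marvel_auth_params(public_key, private_key, ts=None):
    """Build the ts/apikey/hash query parameters the Marvel API expects."""
    ts = ts or str(int(time.time()))
    digest = hashlib.md5((ts + private_key + public_key).encode()).hexdigest()
    return {"ts": ts, "apikey": public_key, "hash": digest}

# Placeholder keys -- substitute your own from developer.marvel.com
params = marvel_auth_params("my_public_key", "my_private_key")
url = "https://gateway.marvel.com/v1/public/characters?" + urlencode(params)
print(url)
```

From here, fetching an endpoint is a plain GET on that URL with requests (already in the challenge's requirements), and the JSON response can be dumped straight into the data folder.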

Good luck and impress your fellow Pythonistas! Ideas for future challenges? use GH Issues.

Get serious, take your Python to the next level ...

At PyBites we're all about creating Python ninjas through challenges and real-world exercises. Read more about our story.

We are happy and proud to share that we now hear monthly stories from our users that they're landing new Python jobs. For many this is a dream come true, especially as they're often landing roles with significantly higher salaries!

Our 200 Bites of Py exercises are geared toward instilling the habit of coding frequently, if not daily, which will dramatically improve your Python and problem-solving skills. This is THE number-one skillset necessary to become a linchpin in the industry, and it will enable you to crush it wherever code needs to be written.

Take our free trial and let us know on Slack how it helps you improve your Python!

>>> from pybites import Bob, Julian

Keep Calm and Code in Python!
Categories: FLOSS Project Planets

Codementor: A concise resource repository for machine learning!

Planet Python - Fri, 2019-10-04 15:17
A concise repository of machine learning bookmarks.
Categories: FLOSS Project Planets

Codementor: Getting to Know Go, Python, and Benchmarks

Planet Python - Fri, 2019-10-04 10:48
This article was written by Vadym Zakovinko (Solution Architect) for Django Stars (https://djangostars.com). Hello, my name is Vadym, and this is my story about how I started learning Go, what it...
Categories: FLOSS Project Planets

Ben Hutchings: Kernel Recipes 2019, part 2

Planet Debian - Fri, 2019-10-04 09:30

This conference only has a single track, so I attended almost all the talks. This time I didn't take notes but I've summarised all the talks I attended. This is the second and last part of that; see part 1 if you missed it.

XDP closer integration with network stack

Speaker: Jesper Dangaard Brouer

Details and slides: https://kernel-recipes.org/en/2019/xdp-closer-integration-with-network-stack/

Video: Youtube

The speaker introduced XDP and how it can improve network performance.

The Linux network stack is extremely flexible and configurable, but this comes at some performance cost. The kernel has to generate a lot of metadata about every packet and check many different control hooks while handling it.

The eXpress Data Path (XDP) was introduced a few years ago to provide a standard API for doing some receive packet handling earlier, in a driver or in hardware (where possible). XDP rules can drop unwanted packets, forward them, pass them directly to user-space, or allow them to continue through the network stack as normal.

He went on to talk about how recent and proposed future extensions to XDP allow re-using parts of the standard network stack selectively.

This talk was supposed to be meant for kernel developers in general, but I don't think it would be understandable without some prior knowledge of the Linux network stack.

Faster IO through io_uring

Speaker: Jens Axboe

Details and slides: https://kernel-recipes.org/en/2019/talks/faster-io-through-io_uring/

Video: Youtube. (This is part way through the talk, but the earlier part is missing audio.)

The normal APIs for file I/O, such as read() and write(), are blocking, i.e. they make the calling thread sleep until I/O is complete. There is a separate kernel API and library for asynchronous I/O (AIO), but it is very restricted; in particular it only supports direct (uncached) I/O. It also requires two system calls per operation, whereas blocking I/O only requires one.

Recently the io_uring API was introduced as an entirely new API for asynchronous I/O. It uses ring buffers, similar to hardware DMA rings, to communicate operations and completion status between user-space and the kernel, which is far more efficient. It also removes most of the restrictions of the current AIO API.

The speaker went into the details of this API and showed performance comparisons.
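The submission/completion ring idea can be sketched with a toy model. This is purely conceptual and hypothetical: the real io_uring API is a C kernel interface with shared, mmap'd ring buffers, and the sketch below only mirrors the data flow of batched submissions and completions, not the performance characteristics.

```python
from collections import deque

class ToyRing:
    """Toy model of io_uring: user space posts operations to a submission
    queue (SQ); the 'kernel' drains them and posts results to a completion
    queue (CQ). A whole batch costs one enter() call."""
    def __init__(self):
        self.sq = deque()  # submission queue entries
        self.cq = deque()  # completion queue entries

    def submit(self, op, *args):
        self.sq.append((op, args))

    def enter(self):
        """Stand-in for io_uring_enter(): drain the SQ, post completions."""
        while self.sq:
            op, args = self.sq.popleft()
            try:
                result = op(*args)
            except OSError as e:
                result = -e.errno  # completions carry negative errno on failure
            self.cq.append(result)

    def completions(self):
        while self.cq:
            yield self.cq.popleft()

ring = ToyRing()
ring.submit(lambda: len(b"hello"))   # pretend read returning 5 bytes
ring.submit(lambda: len(b"world!"))  # pretend write returning 6 bytes
ring.enter()                         # one "system call" covers both ops
print(list(ring.completions()))      # [5, 6]
```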

The Next Steps toward Software Freedom for Linux

Speaker: Bradley Kuhn

Details: https://kernel-recipes.org/en/2019/talks/the-next-steps-toward-software-freedom-for-linux/

Slides: http://ebb.org/bkuhn/talks/Kernel-Recipes-2019/kernel-recipes.html

Video: Youtube

The speaker talked about the importance of the GNU GPL to the development of Linux, in particular the ability of individual developers to get complete source code and to modify it to their local needs.

He described how, for a large proportion of devices running Linux, the complete source for the kernel is not made available, even though this is required by the GPL. So there is a need for GPL enforcement—demanding full sources from distributors of Linux and other works covered by GPL, and if necessary suing to obtain them. This is one of the activities of his employer, Software Freedom Conservancy, and has been carried out by others, particularly Harald Welte.

In one notable case, the Linksys WRT54G, the release of source after a lawsuit led to the creation of the OpenWRT project. This is still going many years later and supports a wide range of networking devices. He proposed that the Conservancy's enforcement activity should, in the short term, concentrate on a particular class of device where there would likely be interest in creating a similar project.

Suricata and XDP

Speaker: Eric Leblond

Details and slides: https://kernel-recipes.org/en/2019/talks/suricata-and-xdp/

Video: Youtube

The speaker described briefly how an Intrusion Detection System (IDS) interfaces to a network, and why it's important to be able to receive and inspect all relevant packets.

He then described how the Suricata IDS uses eXpress Data Path (XDP, explained in an earlier talk) to filter and direct packets, improving its ability to handle very high packet rates.

CVEs are dead, long live the CVE!

Speaker: Greg Kroah-Hartman

Details and slides: https://kernel-recipes.org/en/2019/talks/cves-are-dead-long-live-the-cve/

Video: Youtube

Common Vulnerabilities and Exposures Identifiers (CVE IDs) are a standard, compact way to refer to specific software and hardware security flaws.

The speaker explained problems with the way CVE IDs are currently assigned and described, including assignments for bugs that don't impact security, lack of assignment for many bugs that do, incorrect severity scores, and missing information about the changes required to fix the issue. (My work on CIP's kernel CVE tracker addresses some of these problems.)

The average time between assignment of a CVE ID and a fix being published is apparently negative for the kernel, because most such IDs are being assigned retrospectively.

He proposed to replace CVE IDs with "change IDs" (i.e. abbreviated git commit hashes) identifying bug fixes.

Driving the industry toward upstream first

Speaker: Enric Balletbo i Serra

Details and slides: https://kernel-recipes.org/en/2019/talks/driving-the-industry-toward-upstream-first/

Video: Youtube

The speaker talked about how the Chrome OS developers have tried to reduce the difference between the kernels running on Chromebooks, and the upstream kernel versions they are based on. This has succeeded to the point that it is possible to run a current mainline kernel on at least some Chromebooks (which he demonstrated).

Formal modeling made easy

Speaker: Daniel Bristot de Oliveira

Details and slides: https://kernel-recipes.org/en/2019/talks/formal-modeling-made-easy/

Video: Youtube

The speaker explained how formal modelling of (parts of) the kernel could be valuable. A formal model will describe how some part of the kernel works, in a way that can be analysed and proven to have certain properties. It is also necessary to verify that the model actually matches the kernel's implementation.

He explained the methodology he used for modelling the real-time scheduler provided by the PREEMPT_RT patch set. The model used a number of finite state machines (automata), with conditions on state transitions that could refer to other state machines. He added (I think) tracepoints for all state transitions in the actual code and a kernel module that verified that at each such transition the model's conditions were met.

In the process of this he found a number of bugs in the scheduler.

Kernel documentation: past, present, and future

Speaker: Jonathan Corbet

Details and slides: https://kernel-recipes.org/en/2019/kernel-documentation-past-present-and-future/

Video: Youtube

The speaker is the maintainer of the Linux kernel's in-tree documentation. He spoke about how the documentation has been reorganised and reformatted in the past few years, and what work is still to be done.

GNU poke, an extensible editor for structured binary data

Speaker: Jose E Marchesi

Details and slides: https://kernel-recipes.org/en/2019/talks/gnu-poke-an-extensible-editor-for-structured-binary-data/

Video: Youtube

The speaker introduced and demonstrated his project, the "poke" binary editor, which he thinks is approaching a first release. It has a fairly powerful and expressive language which is used for both interactive commands and scripts. Type definitions are somewhat C-like, but poke adds constraints, offset/size types with units, and types of arbitrary bit width.

The expected usage seems to be that you write a script ("pickle") that defines the structure of a binary file format, use poke interactively or through another script to map the structures onto a specific file, and then read or edit specific fields in the file.

Categories: FLOSS Project Planets

Stack Abuse: Solving Systems of Linear Equations with Python's Numpy

Planet Python - Fri, 2019-10-04 08:57

The Numpy library can be used to perform a variety of mathematical/scientific operations such as matrix cross and dot products, finding sine and cosine values, Fourier transform and shape manipulation, etc. The word Numpy is short-hand notation for "Numerical Python".

In this article, you will see how to solve a system of linear equations using Python's Numpy library.

What is a System of Linear Equations?

Wikipedia defines a system of linear equations as:

In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables.

The ultimate goal of solving a system of linear equations is to find the values of the unknown variables. Here is an example of a system of linear equations with two unknown variables, x and y:

Equation 1:

4x + 3y = 20
-5x + 9y = 26

To solve the above system of linear equations, we need to find the values of the x and y variables. There are multiple ways to solve such a system, such as Elimination of Variables, Cramer's Rule, Row Reduction Technique, and the Matrix Solution. In this article we will cover the matrix solution.

In the matrix solution, the system of linear equations to be solved is represented in the form of matrix AX = B. For instance, we can represent Equation 1 in the form of a matrix as follows:

A = [[ 4  3]
     [-5  9]]

X = [[x]
     [y]]

B = [[20]
     [26]]

To find the value of x and y variables in Equation 1, we need to find the values in the matrix X. To do so, we can take the dot product of the inverse of matrix A, and the matrix B as shown below:

X = inverse(A).B

If you are not familiar with how to find the inverse of a matrix, take a look at this link to understand how to manually find the inverse of a matrix. To understand the matrix dot product, check out this article.

Solving a System of Linear Equations with Numpy

From the previous section, we know that to solve a system of linear equations, we need to perform two operations: matrix inversion and a matrix dot product. The Numpy library supports both operations. If you have not already installed the Numpy library, you can do so with the following pip command:

$ pip install numpy

Let's now see how to solve a system of linear equations with the Numpy library.

Using the inv() and dot() Methods

First, we will find inverse of matrix A that we defined in the previous section.

Let's first create the matrix A in Python. To create a matrix, the array method of the Numpy module can be used. A matrix can be considered as a list of lists where each list represents a row.

In the following script we create a list named m_list, which further contains two lists: [4,3] and [-5,9]. These lists are the two rows in the matrix A. To create the matrix A with Numpy, the m_list is passed to the array method as shown below:

import numpy as np

m_list = [[4, 3], [-5, 9]]
A = np.array(m_list)

To find the inverse of a matrix, the matrix is passed to the linalg.inv() method of the Numpy module:

inv_A = np.linalg.inv(A)
print(inv_A)

The next step is to find the dot product between the inverse of matrix A, and the matrix B. It is important to mention that matrix dot product is only possible between the matrices if the inner dimensions of the matrices are equal i.e. the number of columns of the left matrix must match the number of rows in the right matrix.
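As a quick illustration of that inner-dimension rule (a small sketch, not part of the original article's code):

```python
import numpy as np

left = np.ones((2, 3))   # 2 rows, 3 columns
right = np.ones((3, 4))  # 3 rows, 4 columns

# Inner dimensions match (3 == 3), so the product is defined and is 2x4:
print(left.dot(right).shape)  # (2, 4)

# Swapping the operands makes the inner dimensions 4 and 2, so dot() raises:
try:
    right.dot(left)
except ValueError:
    print("shapes not aligned")
```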

To find the dot product with the Numpy library, the dot() method of a Numpy array is used. The following script finds the dot product between the inverse of matrix A and the matrix B, which is the solution of Equation 1.

B = np.array([20, 26])
X = np.linalg.inv(A).dot(B)
print(X)


[2. 4.]

Here, 2 and 4 are the respective values for the unknowns x and y in Equation 1. To verify, if you plug 2 in place of the unknown x and 4 in the place of the unknown y in equation 4x + 3y, you will see that the result will be 20.

Let's now solve a system of three linear equations, as shown below:

Equation 2:

4x + 3y + 2z = 25
-2x + 2y + 3z = -10
3x - 5y + 2z = -4

The above equations can be solved using the Numpy library as follows:

A = np.array([[4, 3, 2], [-2, 2, 3], [3, -5, 2]])
B = np.array([25, -10, -4])
X = np.linalg.inv(A).dot(B)
print(X)

In the script above, the linalg.inv() and dot() methods are chained together. The variable X contains the solution for Equation 2, and is printed as follows:

[ 5. 3. -2.]

The value for the unknowns x, y, and z are 5, 3, and -2, respectively. You can plug these values in Equation 2 and verify their correctness.
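That check can also be done numerically: substituting the solution back into A.dot(X) should reproduce B. A small sketch reusing the article's matrices:

```python
import numpy as np

A = np.array([[4, 3, 2], [-2, 2, 3], [3, -5, 2]])
B = np.array([25, -10, -4])
X = np.linalg.inv(A).dot(B)

# A.dot(X) should reproduce the right-hand side B, up to rounding error
print(np.allclose(A.dot(X), B))  # True
```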

Using the solve() Method

In the previous two examples, we used the linalg.inv() and dot() methods to find the solution of a system of equations. However, the Numpy library contains the linalg.solve() method, which can be used to directly find the solution of a system of linear equations:

A = np.array([[4, 3, 2], [-2, 2, 3], [3, -5, 2]])
B = np.array([25, -10, -4])
X2 = np.linalg.solve(A, B)
print(X2)


[ 5. 3. -2.]

You can see that the output is the same as before.

A Real-World Example

Let's see how a system of linear equation can be used to solve real-world problems.

Suppose, a fruit-seller sold 20 mangoes and 10 oranges in one day for a total of $350. The next day he sold 17 mangoes and 22 oranges for $500. If the prices of the fruits remained unchanged on both the days, what was the price of one mango and one orange?

This problem can be easily solved with a system of two linear equations.

Let's say the price of one mango is x and the price of one orange is y. The above problem can be converted like this:

20x + 10y = 350
17x + 22y = 500

The solution for the above system of equations is shown here:

A = np.array([[20, 10], [17, 22]])
B = np.array([350, 500])
X = np.linalg.solve(A, B)
print(X)

And here is the output:

[10. 15.]

The output shows that the price of one mango is $10 and the price of one orange is $15.


This article explains how to solve a system of linear equations using Python's Numpy library. You can either chain the linalg.inv() and dot() methods to solve a system of linear equations, or you can simply use the solve() method. The solve() method is the preferred way.

Categories: FLOSS Project Planets

Codementor: Django vs Ruby on Rails: Web Frameworks Comparison

Planet Python - Fri, 2019-10-04 07:26
There are more than 90 web development frameworks out there. No wonder it’s hard to choose the one that’ll suit your project best. Still, there are at least two major frameworks that are widely...
Categories: FLOSS Project Planets