Feeds

Julian Andres Klode: An - EPYC - Focal Upgrade

Planet Debian - Sat, 2020-04-25 15:28

Ubuntu “Focal Fossa” 20.04 was released two days ago, so I took the opportunity yesterday and this morning to upgrade my VPS from Ubuntu 18.04 to 20.04. The VPS provides:

  • SMTP via Postfix
  • Spam filtering via rspamd
  • HTTP(S) via nginx and letsencrypt (certbot)
  • Weechat relay
  • OpenVPN server
  • Shadowsocks proxy
  • Unbound recursive DNS resolver, for the spam filtering

I rebooted one more time than necessary, though, as my cloud provider Hetzner recently started offering 2nd generation EPYC instances which I upgraded to from my Skylake Xeon based instance. I switched from the CX21 for 5.83€/mo to the CPX11 for 4.15€/mo. This involved a RAM downgrade - from 4GB to 2GB, but that’s fine, the maximum usage I saw was about 1.3 GB when running dose-distcheck (running hourly); and it’s good for everyone that AMD is giving Intel some good competition, I think.

Anyway, to get back to the distribution upgrade - it was fairly boring. I started yesterday by taking a copy of the server and launching it locally in a lxd container, and then tested the upgrade in there; to make sure I’m prepared for the real thing :)

I got a confusing prompt from postfix as to which site I’m operating (which is a normal prompt, but I don’t know why I see it on an upgrade); and prompts about a few config files I had changed locally.

As the server is managed by ansible, I just installed the distribution config files and dropped my changes (setting DPkg::Options { "--force-confnew"; }; in apt.conf), and then, after the upgrade, ran ansible to redeploy the changes (after checking what changes it would make and adjusting a few things).

There are two remaining flaws:

  1. I run rspamd from the upstream repository, and that’s not built for focal yet. So I’m still using the bionic binary, and have to keep bionic’s icu 60 and libhyperscan4 around for it.

    This is still preventing CI of the ansible config from passing for focal, because it won’t have the needed bionic packages around.

  2. I run weechat from the upstream repository, and apt can’t tell the versions apart. Well, it can for the repositories, because they have Size fields, but the status file does not. Hence, it merges the installed version with the first repository it sees.

    What happens is that it installs from weechat.org, but then it believes the installed version is from archive.ubuntu.com and replaces it each dist-upgrade.

    I worked around it by moving the weechat.org repo to the front of sources.list, so that it gets merged with that one instead of the archive.ubuntu.com one, as it should be, but that’s a bit ugly.

I also should start the migration to EC certificates for TLS, and 0-RTT handshakes, so that the initial visit experience is faster. I guess I’ll have to move away from certbot for that, but I have not investigated this recently.

Categories: FLOSS Project Planets

PyBites: When to Write Classes in Python And Why it Matters

Planet Python - Sat, 2020-04-25 12:30

When people come to Python one of the things they struggle with is OOP (Object Oriented Programming). Not so much the syntax of classes, but more when and when not to use them. If that's you, read on.

In this article I will give you some insights that will get you clarity on this.

Classes are incredibly useful and robust, but you need to know when to use them. Here are some considerations.

1. You need to keep state

For example, if you need to manage a bunch of students and grades, or when you build a game that keeps track of attempts, score etc (Hangman example).

Basically, when you have data and behavior (= variables and methods) that go together, you would use a class.
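For instance, here is a minimal sketch of such a class (the game and attribute names are just illustrative, not from a specific exercise):

class Game:
    """Bundles game state (data) with the actions on it (behavior)."""

    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts
        self.attempts = 0
        self.score = 0

    def guess(self, answer, guess):
        """Register one guess and update the state accordingly."""
        self.attempts += 1
        if guess == answer:
            self.score += 1
            return True
        return False

    @property
    def game_over(self):
        return self.attempts >= self.max_attempts


game = Game()
game.guess("python", "java")
game.guess("python", "python")
print(game.attempts, game.score, game.game_over)  # 2 1 False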

2. Bigger projects - classes favor code organization and reusability

I often use the example of a Report class. You can have a base class with shared attributes like report name, location and rows. But when you go into specifics like formats (xml, json, html), you could override a generate_report method in the subclass.
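A hedged sketch of what that could look like (class and method names here are mine, just to illustrate the idea):

import json


class Report:
    """Base class holding the attributes all reports share."""

    def __init__(self, name, location, rows):
        self.name = name
        self.location = location
        self.rows = rows

    def generate_report(self):
        raise NotImplementedError


class JsonReport(Report):
    def generate_report(self):
        return json.dumps({"name": self.name, "rows": self.rows})


class HtmlReport(Report):
    def generate_report(self):
        items = "".join(f"<li>{row}</li>" for row in self.rows)
        return f"<h1>{self.name}</h1><ul>{items}</ul>"


report = HtmlReport("sales", "/tmp/sales.html", ["row 1", "row 2"])
print(report.generate_report())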

Or think about vehicles:

Vehicles include wagons, bicycles, motor vehicles (motorcycles, cars, trucks, buses), railed vehicles (trains, trams), watercraft (ships, boats), amphibious vehicles (screw-propelled vehicle, hovercraft), aircraft (airplanes, helicopters) and spacecraft (Wikipedia)

When you see hierarchies like this, using classes leads to better code organization, less duplication, and reusable code.

This becomes especially powerful if you have hundreds of subclasses and you need to make a fundamental change. You can make a single change in the base class (parent) and all child classes pick up the change (keeping things DRY).

Note: although I like inheritance, composition is often preferred because it is more flexible. When I get to a real world use case, I will write another article about it ...

3. Encapsulation

You can separate internal vs external interfaces, hiding implementation details. You can better isolate or protect your data, giving consumers of your classes only certain access rights (think API design).

It's like driving a car and not having to know about the mechanics. When I start my car, I can just operate with the common interface I know and should operate with: gas, brake, clutch, steering wheel, etc. I don't have to know how the engine works.

Your classes can hide this complexity from the consumer as well, which makes your code easier to understand (elegant).

Another key benefit of this is that all related data is grouped together. It's nice to talk about person.name, person.age, person.height, etc. Imagine having to keep all this similar data in separate variables, it'll get messy and unmaintainable very quickly.
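For example, a hypothetical Person class grouping that data (here using a dataclass to cut the boilerplate):

from dataclasses import dataclass


@dataclass
class Person:
    name: str
    age: int
    height: float  # in meters


bob = Person(name="Bob", age=35, height=1.81)
print(bob.name, bob.age, bob.height)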

4. Enforcing a contract

We gave an example here of abstract base classes (ABCs), which let you force derived classes to implement certain behaviors.

By applying the abstractmethod decorator to a method in your base class, you force subclasses to implement this method.
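A minimal sketch of such a contract:

from abc import ABC, abstractmethod


class Animal(ABC):
    @abstractmethod
    def sound(self):
        """Every concrete Animal subclass must implement this."""


class Dog(Animal):
    def sound(self):
        return "woof"


print(Dog().sound())  # woof
# Animal(), or a subclass that does not implement sound(), raises TypeError at instantiation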

BONUS: Better understanding of Python

Everything in Python is an object. Understanding classes and objects makes you better prepared to use Python's data model and full feature set, which will lead to cleaner and more "pythonic" code.

... the Python data model, and it describes the API that you can use to make your own objects play well with the most idiomatic language features. You can think of the data model as a description of Python as a framework. It formalizes the interfaces of the building blocks of the language itself, such as sequences, iterators, functions, classes, context managers, and so on. - Fluent Python
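For instance, just implementing __len__ and __getitem__ is enough to make your own object work with len(), indexing, iteration and the in operator; a small sketch:

class Deck:
    """A toy sequence that plugs into Python's data model."""

    def __init__(self, cards):
        self._cards = list(cards)

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]


deck = Deck(["A", "K", "Q", "J"])
print(len(deck))    # 4
print(deck[0])      # A
print("Q" in deck)  # True, the in operator falls back to __getitem__
for card in deck:   # iteration works too
    print(card)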

Lastly, a lot of important design patterns are drawn from OOP (just Google "object oriented design patterns"). Even if you don't use classes day to day, you will read a lot of code that uses them!

(Funny trivia: today we extracted classes from standard library modules.)

Main takeaway

Classes are great if you need to keep state, because they containerize data (variables) and behavior (methods) that act on that data and should logically be grouped together.

This leads to code that is better organized (cleaner) and easier to reuse.

Avoid classes

With that said, OOP is not always the best solution. Here are some thoughts when to avoid classes:

  1. The most straightforward case: if your class has just a constructor and one method, just use a function (and watch this great talk: Stop writing classes).

  2. Small (one off) command line scripts probably don't need classes.

  3. If you can accomplish the same with a context manager or a generator, that might be cleaner and more "Pythonic" (see the sketch below).
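To illustrate that third point, here is a (contrived) class next to a generator doing the same job:

# class version: more ceremony than the problem needs
class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1


# generator version: same behavior, less code
def countdown(start):
    while start > 0:
        yield start
        start -= 1


print(list(Countdown(3)))  # [3, 2, 1]
print(list(countdown(3)))  # [3, 2, 1]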

I hope this helps you decide when to use classes and when not to.

Regardless, you should have them in your arsenal.

And there is no better way to learn more about them than through some of our resources:

  1. Check out our articles: How to Write a Python Class and How to Write a Python Subclass.

  2. Don't spend more than 10-15 min on it though. The best way to learn is to ACTUALLY CODE! So start our OOP learning path today and start to write classes / code OOP.

  3. Once you get past the basics, read my article about Python's magic / special methods: Enriching Your Python Classes With Dunder (Magic, Special) Methods

  4. As we said before, games make for good OOP practice. Try to create a card game (and submit it here).

Share your wins below in the comments; we hope to see you in the forums on our platform ...

-- Bob

(Photo by Daniel McCullough on Unsplash)

Categories: FLOSS Project Planets

PyBites: Building a Stadia Tracker Site Using Django

Planet Python - Sat, 2020-04-25 11:20
My First Django Project Index

I've been writing code for about 15 years (on and off) and Python for about 4 or 5 years. With Python it's mostly small scripts and such. I’ve never considered myself a ‘real programmer’ (Python or otherwise).

About a year ago, I decided to change that (for Python at the very least) when I set out to do 100 Days Of Web in Python from Talk Python To Me. Part of that course were two sections taught by Bob regarding Django. I had tried to learn Flask before and found it ... overwhelming to say the least.

Sure, you could get a ‘hello world’ app in 5 lines of code, but then what? If you wanted to do just about anything it required ‘something’ else.

I had tried Django before, but wasn't able to get over the 'hump' of deploying. Watching the Django section in the course made it just click for me. Finally, a tool to help me make AND deploy something! But what?

The Django App I wanted to create

A small project I had done previously was to write a short script for my Raspberry Pi to tell me when LA Dodger (Baseball) games were on (it also has beloved Dodger Announcer Vin Scully say his catch phrase, “It’s time for Dodger baseball!!!”).

I love the Dodgers. But I also love baseball. I love baseball so much I have on my bucket list a trip to visit all 30 MLB stadia. Given my love of baseball, and my new found fondness of Django, I thought I could write something to keep track of visited stadia. I mean, how hard could it really be?

What does it do?

My Django site uses the MLB API to search for games and allows a user to indicate a game seen in person. This allows them to track which stadia they've been to. My site is composed of 4 apps:

  • Users
  • Content
  • API
  • Stadium Tracker

The API is written using Django Rest Framework (DRF) and is super simple to implement. It’s also really easy to make changes to your models if you need to.

The Users app was inspired by Will S Vincent (a member of the Django Software Foundation, author, and podcaster). He (and others) recommend creating a custom user model to more easily extend the User model later on. Almost all of what’s in my Users app is directly taken from his recommendations.

The Content App was created to allow me to update the home page, and about page (and any other content based page) using the database instead of updating html in a template.

The last App, and the reason for the site itself, is the Stadium Tracker! I created a search tool that allows a user to find a game on a specific day between two teams. Once found, the user can add that game to ‘Games Seen’. This will then update the list of games seen for that user AND mark the location of the game as a stadium visited. The best part is that because the game is from the MLB API I can do some interesting things:

  1. I can get the actual stadium visited, which allows the user to indicate historic (i.e. retired) stadia
  2. I can get details of the game (final score, hits, runs, errors, stories from MLB, etc) and display them on a details page.

That's great and all, but what does it look like?

The Search Tool

Stadia Listing National League West

American League West

What’s next?

I had created a roadmap at one point and was able to get through some (but not all) of those items. Items left to do:

  • Get Test coverage to at least 80% across the app (currently sits at 70%)
  • Allow user accounts to be based on social networks (right now I’m looking at Twitter and Instagram), probably with the Django Allauth package
  • Add the ability to search for minor league teams and track their stadia (this is already part of the MLB API, I just never implemented it)
  • Allow users to search a range of dates for teams
  • Update the theme ... it’s the default MUI CSS which is nice, but I’d rather it was something a little bit different
  • Convert Swagger implementation from django-rest-swagger to drf-yasg

Final Thoughts

Writing this app did several things for me.

First, it removed some of the tutorial paralysis that I felt. Until I wrote this I didn’t think I was a web programmer (and I still don’t really), and therefore had no business writing a web app.

Second, it taught me how to use git more effectively. This directly led to me contributing to Django itself (in a very small way via updates to documentation). It also allowed me to feel comfortable enough to write my first post on this very blog.

Finally, it introduced me to the wonderful ecosystem around Django. There is so much to learn, but the great thing is that EVERYONE is learning something. There isn’t anyone that knows it all, which makes it easier to ask questions! It also helps me feel more confident answering questions when asked.

The site is deployed on Heroku and can be seen here. The code for the site can be seen here.

-- Ryan

(Cover photo by Jose Morales on Unsplash)

Categories: FLOSS Project Planets

Reinhard Tartler: Building Packages with Buildah in Debian

Planet Debian - Sat, 2020-04-25 08:57
1 Building Debian Packages with buildah

Building packages in Debian seems to be a solved problem. But is it? At the bottom, installing the dpkg-dev package provides all the basic tools needed. Assuming that you already succeeded with creating the necessary packaging metadata (i.e., debian/changelog, debian/control, debian/copyright, etc., and there are great helper tools for this such as dh-make, dh-make-golang, etc.), it should be as simple as invoking the dpkg-buildpackage tool. So what's the big deal here?

The issue is that dpkg-buildpackage expects to be called with an appropriately setup build context, that is, it needs to be called in an environment that satisfies all build dependencies on the system. Let's say you are building a package for Debian unstable on your Debian stable system (this is the common scenario for the official Debian build machines), you would need your build to link against libraries in unstable, not stable. So how to tell the package build process where to find its dependencies?

The answer (in Debian and many other Linux distributions) is that you do not, at all. This is actually a somewhat surprising answer for software developers without a Linux distribution development background[1]. Instead, chroots "simulate" an environment that has all the dependencies we want to build against at the system locations, that is, /usr/lib, etc.

Chroots are basically full system installations in a subdirectory that include system and application libraries. In order to use them, a package build needs to use the chroot(2) system call, which is a privileged operation. Creating these system installations is also a somewhat finicky process. In Debian, we have tools that make this process easier; the most popular ones are probably pbuilder(1)[2] and sbuild(1)[3]. Still, they are somewhat clumsy to use and add significant levels of abstraction, in the sense that they do quite a bit of magic behind the scenes to hide the fact that privileged operations (root is needed to run chroot(2), etc.) are required. They are also somewhat brittle: for instance, if a build process is aborted (SIGKILL, or system crash), you may end up with temporary directories and files under user IDs other than your own that again may require root privileges to clean up.

What if there was an easy way to do all of the above without any process running as root? Enter rootless buildah.

Modern Linux container technologies allow unprivileged users to "untie" a process from the system (cf. the unshare(2) system call). This means that a regular user may run a process (and its child processes) in an "environment" where system calls behave differently and provide a configurable amount of isolation levels. This article demonstrates a novel build tool buildah, which is:
  • easy to set up a build environment from the command line
  • secure, as no process needs to run as root
  • simple in architecture, requires no running daemon like for example docker
  • convenient to use: you can debug your debian/rules interactively
Architecturally, buildah is written in golang and compiled as a (mostly) statically linked executable. It builds on top of a number of libraries written in golang, including github.com/containers/storage and github.com/containers/image. The overlay functionality is provided by the fuse-overlayfs(1) utility.

1.1 Preparation

The kernel in Debian bullseye (and in buster, and recent Ubuntu kernels, to the best of my knowledge) does support user namespaces, but leaves them disabled by default. Here is how to enable them:
echo kernel.unprivileged_userns_clone = 1 | sudo tee -a /etc/sysctl.d/containers.conf
sudo sysctl -p /etc/sysctl.d/containers.conf
I have to admit that I'm not entirely sure why the Debian kernels don't enable user namespaces by default. I've found a reference on stackexchange[4] that claims this disables some "hardening" features in the Debian kernel. I also understand this step is not necessary if you choose to compile and run a vanilla upstream kernel. I'd appreciate a better reference and am happy to update this text. With user namespaces enabled, let's create a container from the Debian sid image:
$ c=$(buildah from docker.io/debian:sid)
Getting image source signatures
Copying blob 2bbc6b8c460d done
Copying config 9b90abe801 done
Writing manifest to image destination
Storing signatures
This command downloads the image from docker.io and stores it locally in your home directory. Let's install essential utilities for building Debian packages:
1: buildah run $c apt-get update -qq
2: buildah run $c apt-get install dpkg-dev -y
3: buildah config --workingdir /src $c
4: buildah commit $c dpkg-dev
The commands on lines 1 and 2 execute the installation of compilers and dpkg development tools such as dpkg-buildpackage, etc. The buildah config command on line 3 arranges that whenever you start a shell in the container, the current working directory in the container is /src. Don't worry about this location not existing yet; we will make sources from your host system available there. The last command creates an OCI image with the name dpkg-dev. BTW, the name you use for the image in the commit command can be used in podman (but not the containers). See [5] and [6] for a comparison between podman and buildah. You can list the local images with:
buildah images -a
This output might look like this:
| REPOSITORY                | TAG    | IMAGE ID     | CREATED        | SIZE   |
| localhost/dpkg-dev        | latest | b85c34f95d3e | 16 seconds ago | 406 MB |
| docker.io/library/debian  | sid    | 9b90abe801db | 11 hours ago   | 124 MB |
1.2 Running a package build

Now we have a working container with the reference in the variable $c. To use it conveniently with source packages that I have stored in /srv/scratch/packages/containers, let's introduce a shell alias r like this:
alias r='buildah run --tty -v /srv/scratch/packages/containers:/src $c '
This allows you to easily execute commands in that container:
r pwd
r bash
The last command will give you an interactive shell that we'll be using for building packages!
siretart@x1:~/scratch/packages/containers/libpod$ r bash

root@x1:/src# cd golang-github-openshift-imagebuilder

root@x1:/src/golang-github-openshift-imagebuilder# dpkg-checkbuilddeps
dpkg-checkbuilddeps: error: Unmet build dependencies: debhelper (>= 11) dh-golang golang-any golang-github-containers-storage-dev (>= 1.11) golang-github-docker-docker-dev (>= 18.09.3+dfsg1) golang-github-fsouza-go-dockerclient-dev golang-glog-dev

root@x1:/src/golang-github-openshift-imagebuilder# apt-get build-dep -y .
Note, using directory '.' to get the build dependencies
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
[...]

root@x1:/src/golang-github-openshift-imagebuilder# dpkg-checkbuilddeps

root@x1:/src/golang-github-openshift-imagebuilder#
Now we have an environment that has all build dependencies available, and we are ready to build the package:
root@x1:/src/golang-github-openshift-imagebuilder# dpkg-buildpackage -us -uc -b
Assuming the package builds, the build results are placed in /src inside the container, and are visible at ~/scratch/packages/containers on the host. There you can inspect, extract or even install them. The latter part allows you to interactively rebuild packages against updated dependencies, without the need to set up an apt archive or similar.

2 Availability

The buildah(1) tool has been available in debian/bullseye since 2019-12-17. Since it is a golang binary, it only links dynamically against system libraries such as libselinux and libseccomp, which are all available in buster-backports. I'd expect buildah to just work on a debian/buster system as well, provided you install those system libraries and possibly the backported Linux kernel.

Keeping the package at the latest upstream version is challenging because of its fast development pace that picks up new dependencies on golang libraries and new versions of existing libraries with every new upstream release. In general, this requires updating several source packages, and in many cases also uploading new source packages to the Debian archive that need to be processed by the Debian ftp-masters.

As an exercise, I suggest to install the buildah package from bullseye, 'git clone' the packaging repository from https://salsa.debian.org/go-team/packages/golang-github-containers-buildah and build the latest version yourself. Note, I would expect the above to even work on a Fedora laptop.

The Debian packages have not made their way into the Ubuntu distributions yet, but I expect them to be included in the Ubuntu 20.10 release. In the meantime, Ubuntu users can install the packages provided by the upstream maintainers in the "Project Atomic" PPA at https://launchpad.net/~projectatomic/+archive/ubuntu/ppa

3 Related Tools

The buildah tool is accompanied by two "sister" tools:

The Skopeo package provides tooling to work with remote image registries. It allows you to download and upload images to remote registries, and to convert container images between different formats. It has been available in Debian since 2020-04-20.

The podman tool is a 'docker' replacement. It provides a command-line interface that mimics the original docker command to an extent that a user familiar with docker might want to place this in their ~/.bashrc file:
alias docker='podman'
Unfortunately, at the time of writing podman is still being processed by the ftp-masters (since 01-03-2020). At this point, I recommend building the package from our salsa repository at https://salsa.debian.org/debian/libpod

4 Conclusion

Building packages in the right build context is a fairly technical issue for which many tools have been written. They come with different trade-offs when it comes to usability. Containers promise a secure alternative to the tried and proven chroot-based approaches, and buildah makes using this technology very easy.

I'd love to get into a conversation with you on how these tools work for you, and would like to encourage participation and assistance with keeping this complicated software stack up-to-date in Debian (and by extension, in derived distributions such as Ubuntu, etc.).

Footnotes:
[1] At my day-job, we build millions of lines of C++ code on Solaris 10 and AIX 6, where concepts such as "chroots" are restricted to the super user 'root' and are therefore not available to developers, not even through wrappers. Instead, libraries and headers are installed into "refroots", that is, subdirectories that mimic the structure of the "sysroot" directories used in the embedded Linux community for cross-compiling packages, and we use Makefiles that set include flags (-I rules and -L flags) to tell the compiler and linker where to look.

[2] http://wiki.debian.org/pbuilder

[3] http://wiki.debian.org/sbuild

[4] https://security.stackexchange.com/questions/209529/what-does-enabling-kernel-unprivileged-userns-clone-do

[5] https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/

[6] https://podman.io/blogs/2018/10/31/podman-buildah-relationship.html
Categories: FLOSS Project Planets

Talk Python to Me: #261 Monitoring and auditing machine learning

Planet Python - Sat, 2020-04-25 04:00
Traditionally, when we have depended upon software to make a decision with real-world implications, that software was deterministic. It had some inputs, a few if statements, and we could point to the exact line of code where the decision was made. And the same inputs lead to the same decisions.

Nowadays, with the rise of machine learning and neural networks, this is much more blurry. How did the model decide? Have the model and inputs drifted apart, so the decisions are outside what it was designed for?

These are just some of the questions discussed with our guest, Andrew Clark, on this episode of Talk Python To Me.

Links from the show

  • Andrew on Twitter: https://twitter.com/aclarkdata1
  • Andrew on LinkedIn: https://www.linkedin.com/in/andrew-clark-b326b767/
  • Monitaur: https://monitaur.ai/
  • scikit-learn: https://scikit-learn.org/stable/
  • networkx: https://networkx.github.io/
  • Missing Number Package: https://github.com/ResidentMario/missingno
  • alibi package: https://github.com/SeldonIO/alibi
  • shap package: https://github.com/slundberg/shap
  • aequitas package: https://github.com/dssg/aequitas
  • audit-ai package: https://github.com/pymetrics/audit-ai
  • great_expectations package: https://github.com/great-expectations/great_expectations

Sponsors

  • Linode: https://talkpython.fm/linode
  • Reuven's Weekly Python Exercises: https://talkpython.fm/exercise
  • Talk Python Training: https://talkpython.fm/training
Categories: FLOSS Project Planets

Contributing Public Transport Metadata

Planet KDE - Sat, 2020-04-25 04:00

In the last post I described how we handle public transport line metadata in KPublicTransport, and what we use that for. Here’s now how you can help to review and improve these information in Wikidata and OpenStreetMap, where it not only benefits KPublicTransport, but everyone.

OpenStreetMap

Let’s consider Berlin’s subway line U1 as an example on how things are ideally represented in OSM and Wikidata.

As a single bi-directional line, there are three relevant elements in OSM for this:

  • a route relation for each direction of travel, and
  • a route_master relation grouping those per-direction routes together.

We have some heuristics to merge route elements without a route_master as well, but things get a lot more reliable if this is set up properly and all route elements are members of the corresponding route_master relation.

On the route_master relation, there’s a number of fields we use:

  • ref for the (short) line name.
  • colour for the line color in #RRGGBB notation. Note the British English spelling.
  • Any of route_master, line, passenger or service to determine the mode of transportation (bus, tram, subway, rapid transit, regional or long-distance train, etc). This is unfortunately not always an exact science, the lines between those can be blurry, and there are various exotic special cases (e.g. the San Francisco cable cars or the Wuppertal Schwebebahn). See the OSM wiki for details.

Editing these fields (called “tags” in OSM speak) is fairly straightforward, compared to setting up entire routes from scratch, and most of the time is all that’s needed.

Wikidata

Next to the information in OSM, we ideally have two related items in Wikidata for each line:

  • One item representing the line, an instance of a railway line, tram line, etc., or even better, a more specific subclass thereof. Q99691 in the above example.
  • One item representing a set of lines belonging to the same “product”, i.e. typically a network of lines of the same mode of transportation in a given city. This is often an instance of rapid transit, tram system, etc., or anything in that type hierarchy. Q68646 for the Berlin subway network in our example.

On those items we are particularly interested in the following properties:

Linking Wikidata and OpenStreetMap

Once we have elements in both OSM and Wikidata we still need to link them together. Sometimes that’s even all that’s missing, and is therefore particularly easy to contribute.

On the OSM side, there is a wikidata tag that should be set on the route_master relation and contain the Wikidata item identifier (eg. Qxxx) of the item representing that line. In some cases, links to the product item exist instead; not ideal, but better than nothing if there are no per-line items in Wikidata.

On the Wikidata side, there is the OSM relation ID (P402) property that should be set on the item representing a line, and contain the relation number from the OSM side (58767 in our example).
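If you prefer to check this linkage programmatically rather than by hand, something along these lines works against the public APIs (a rough sketch in Python; the endpoint URLs and the exact JSON layout are assumptions on my part, so adjust as needed):

import requests

LINE_QID = "Q99691"        # Wikidata item for the line (example from above)
OSM_RELATION_ID = "58767"  # OSM relation for the same line (example from above)

# Wikidata side: which OSM relation does the item point to (P402)?
wd = requests.get(
    f"https://www.wikidata.org/wiki/Special:EntityData/{LINE_QID}.json"
).json()
claims = wd["entities"][LINE_QID]["claims"]
p402 = claims["P402"][0]["mainsnak"]["datavalue"]["value"]

# OSM side: which Wikidata item does the relation point to (wikidata tag)?
osm = requests.get(
    f"https://api.openstreetmap.org/api/0.6/relation/{OSM_RELATION_ID}.json"
).json()
wikidata_tag = osm["elements"][0]["tags"].get("wikidata")

print("Wikidata P402:", p402, "| OSM wikidata tag:", wikidata_tag)
print("links consistent:", p402 == OSM_RELATION_ID and wikidata_tag == LINE_QID)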

Conflicting Information

Even with everything already set up and linked properly, reviewing the individual information is useful. Since some of the information is duplicated between Wikidata and OSM, it can go out of sync. Particularly prone to that seem to be colors and the mode of transportation.

In our example this is indeed the case for the color (at the time of writing):

  • OSM has it set to #52B447
  • Wikidata has it set to #65BD00

At least both are a similar shade of green, but sometimes the differences are much more significant than this.

Line Logos

The visually most impactful part are the line and product logos. Those are Wikimedia Commons files referenced via the logo image (P154) property in Wikidata. In our example that’s Berlin_U1.svg and U-Bahn.svg.

KPublicTransport’s code is looking for the following criteria before considering a logo for use:

  • The file type should be SVG or PNG. SVG is preferred, as that has the best chance to scale down reasonably when displayed in an application.
  • The file size should not exceed 10 kB, as we need to download this on demand, likely on a mobile data link. For the commonly fairly simply structured logos done in SVG this is rarely a problem, but there are files out there in the multi-100kB range as well that could benefit from optimizations.
  • A license that does not require individual attribution, such as CC0. That is the majority, however a number of files use variations of the CC-BY license family as well. Sadly we cannot use those, as providing appropriate individual attribution to the author is simply not practical for every tiny icon we display in a list somewhere.

When adding new logos, or improving existing ones, those are probably criteria you want to keep in mind. In case the logos changed over time, this can be modelled in Wikidata as well: consider adding start time (P580) and end time (P582) qualifiers to the logo property (if known), or at the very least ensure that the current variant of the logo has preferred rank (the little up/down arrows on the left side).

Contribute!

Reviewing, fixing and completing this information around your home or other places of interest to you is a very easy way to help; all you need is a web browser. Doing this will directly improve the experience of using KDE Itinerary or KTrip for yourself and others, as well as helping everyone else who is building things with this data.

Categories: FLOSS Project Planets

Codementor: iBuildApp: Android app maker review

Planet Python - Sat, 2020-04-25 00:49
You can develop android apps like Live Lounge (https://liveloungeapk.vip/) with iBuildApp maker
Categories: FLOSS Project Planets

This week in KDE: so many videos for you

Planet KDE - Sat, 2020-04-25 00:41

Version 20.04.0 of KDE’s apps has been released! Go check it out; there’s amazing stuff in there.

Work proceeds on the Breeze Evolution task for Plasma 5.19. In particular, the System Tray visual overhaul subtask is nearly complete and our tray popups are looking better than ever:

http://s1.webmshare.com/7q3MK.webm

Other work is proceeding nicely as well!

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

Just keep being awesome, and rest when you need it. These are hard times. Don’t beat yourself up for not doing more; it’s enough. We’ll get through it.

Categories: FLOSS Project Planets

Junichi Uekawa: Troubleshooting mdns issues.

Planet Debian - Fri, 2020-04-24 20:18
Troubleshooting mdns issues. I've noticed that mdns wasn't working on crostini. It seems like avahi-browse -a returns empty results. Chrome browser also doesn't seem to resolve .local host names, so that might be related.

Categories: FLOSS Project Planets

Maui Weekly Report 4

Planet KDE - Fri, 2020-04-24 20:01

Today, we bring you a new report on the Maui Project progress.

Are you a developer and want to start developing cross-platform and convergent apps, targeting, among other things, the upcoming Linux Mobile devices? Then join us on Telegram: https://t.me/mauiproject.

If you are interested in testing this project and helping out with translations or documentation, you are also more than welcome.

The Maui Project is free software from the KDE Community developed by the Nitrux team.
This post contains some code snippets to give you an idea of how to use MauiKit. For more detailed documentation, get in touch with us or subscribe to the news feed to keep up to date with the upcoming tutorial.

We are present on Twitter and Mastodon:
https://twitter.com/maui_project
https://mastodon.technology/@mauiproject

MauiKit Controls ToolBar

New properties have been added for better controlling the layout of the toolbar: farLeftContent and farRightContent. The two new properties work like the existing leftContent, middleContent and rightContent.

By default, the items inside the toolbar will be placed from left to right using the leftContent property, if no positioning is explicitly used.

Page

The Page component features involving the toolbars, floating and auto-hiding, have been improved. The floating property remains mutually exclusive with the pull-back toolbar, meaning a toolbar can either pull back or float, but not both.

New features were added to support the same header behavior for the footer:  autohideFooter and floatingFooter, and also new features for better controlling the floating and auto-hiding actions:  autohideFooterMargins and autohideHeaderMargins, autohideFooterDelay, autohideHeaderDelay, etc.

Doodle

The doodle component has been added; it has basic features and is still a work in progress. What is it? It allows us to capture an image source or a UI item as an image and add doodles and annotations to it; these can later be saved as images, or the doodles alone can be extracted and processed as notes or whatever else.

It is being used for testing purposes on Pix to doodle on pictures, on Library to doodle over PDFs, and finally on Nota to doodle over text files. It can be expanded to support the iPad and other tablets with a stylus, and so on.

TabBar

The TabBar is now scrollable with the mouse wheel on desktops when the tabs do not fit the available width.

Apps

Pix

I am testing the new doodle component.

All clickable items now respect the single click property from the system.

A detailed view was added by using the MauiKit control AltBrowser, which allows switching between a grid and a list view.

Below is a video showing Pix using MauiKit Pages with auto-hiding and floating toolbars and the new details view.

https://nxos.org/wp-content/uploads/2020/04/Peek-2020-04-22-12-44.mp4

Cinema

Cinema is a new video player and collection manager. The app development will be documented in a series of tutorials.

You can follow the tutorial here. A first post has been published at:

A MauiKit App. Episode 1

Vvave

All the clickable items now respect the system single-click property.

Index

The split view handles are now clearer and easier to discover for resizing the split views.

Nota

Now each editor tab can have its own embedded terminal. This increases productivity when using Nota for hacking.

https://nxos.org/wp-content/uploads/2020/04/Peek-2020-04-22-12-47.mp4

In Progress

CSD is being tested on Station, Buho and Vvave.

The implementation for drawing the window controls correctly based on the system, for better integration, is going to be refactored by making use of psifidotos' work; you can read about it here:

https://psifidotos.blogspot.com/2020/04/article-rounded-plasma-and-qt-csds.html

Testing

The optional CSD property is now being loaded from a global config file at ~/.config/org.kde.maui/mauiproject.conf with group GLOBAL and key CSD (bool). It can still be overridden by the app, forcing CSD on or off.

Doodles for the iPad and other tablets with a stylus or pencil, for quickly taking notes and sharing them. The doodle can take an image as the source, or an item that gets captured as an image; the doodle and the source can be converted and saved into an image, or just the doodle can be taken and processed further, for example with image processing to extract the text and save it as plain text (for class notes, etc.).



Categories: FLOSS Project Planets

Ahmed Bouchefra: Connecting Python 3 and Electron/Node.JS: Building Modern Desktop Apps

Planet Python - Fri, 2020-04-24 20:00

In this post, you’ll learn about the possible ways that you can use to connect or integrate Python with Node.js and Electron with simple examples.

We’ll introduce Electron for Python developers, a great tool if you want to build GUIs for your Python apps with modern web technologies based on HTML, CSS and JavaScript. We’ll also see different ways to connect Python and Electron such as child_process, python-shell and an HTTP (Flask) server.

Prerequisites

This tutorial is designed for Python developers that want to build desktop applications and GUIs with modern web technologies, HTML, CSS, and JS and related frameworks.

To be able to follow this tutorial comfortably, you will need to have a few prerequisites:

  • Knowledge of Python is a must.
  • You should be comfortable with working with web technologies like JavaScript, HTML, and CSS.
  • You should also be familiar with installing and using Node.js packages from npm.
Step 1 - Setting up a Development Environment

In this section, we’ll set up a development environment for running our examples. We need to have Node.js together with NPM and Python 3 installed on our machine.

Installing Node.js and NPM

There are various ways that you can follow to install Node.js and NPM on your development machine, such as using:

  • the official binaries for your target operating system,
  • the official package manager for your system,
  • NVM (Node Version Manager) for installing and managing multiple versions of Node.js in the same machine.

Let’s keep it simple, simply go to the official website and download the binaries for your target operating system then follow the instructions to install Node.js and NPM on your system.

Setting up a Python Virtual Environment

There is a big chance that you already have Python 3 installed on your development machine. In case it’s not installed, the simplest way is to go to the official website and grab the binaries for your target system.

You can make sure you have Python 3 installed on your system by opening a command-line interface and running the following command:

$ python --version
Python 3.7.0

Now, let’s set up a virtual environment.

Creating a Virtual Environment

In this section, you’ll use venv to create an isolated virtual environment for running your example and install the required packages.

A virtual environment allows you to create an environment for isolating the dependencies of your current project. This will allow you to avoid conflicts between the same packages that have different versions.

In Python 3, you can make use of the venv module to create a virtual environment.

Now, head over to your terminal and run the following command to create a virtual environment:

$ python -m venv env

Next, you need to activate the environment using the following command:

$ source env/bin/activate

On Windows, you can activate the virtual environment using the Scripts\activate.bat file as follows:

$ env\Scripts\activate.bat

That’s it. You now have your virtual environment activated, and you can install the packages for your example.

Step 4 - Creating a Node.js Application

Now that we have set up our development environment for Python and Electron development by installing Node.js together with npm and creating a Python virtual environment, let's proceed to create our Electron application.

Let’s start by creating a folder for our project and create a package.json file inside it using the following commands:

$ mkdir python-nodejs-example
$ cd python-nodejs-example
$ npm init -y

The init command of npm will generate a package.json file with the following default values:

{ "name": "python-nodejs-example", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC" }

You can customize the values in this file as you see fit in your project, and you can also simply go with the default values for this simple example.

Next, we need to create two files, index.html and index.js, inside the project's folder:

$ touch index.js
$ touch index.html

How to Communicate Between Electron and Python

In this section, we’ll see the various available ways that you can use to achieve communications between Electron and Python processes.

What is IPC?

According to Wikipedia:

In computer science, inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow the processes to manage shared data. Typically, applications can use IPC, categorized as clients and servers, where the client requests data and the server responds to client requests.

IPC refers to a set of mechanisms supported by operating systems to enable different, local or remote, processes to communicate with each other. For example, in our case, we want to allow communications between the Electron process and the Python process.

Let’s see some ways to achieve IPC.

Spawning a Python Process Using child_process

Node.js provides the child_process module (https://nodejs.org/api/child_process.html), which allows you to spawn child processes.

Let’s use it to spawn a Python process and run a simple calc.py script.

We’ll make use of simplecalculator to do simple calculations in Python, so we first run the following command to install it:

$ sudo pip install simplecalculator

First, inside your project’s folder, create a py folder and create a calc.py file inside of it:

$ mkdir py & cd py $ touch calc.py

Open the calc.py file and add the following Python code which performs a calculation and prints the result to the standard output:

from sys import argv

from calculator.simple import SimpleCalculator


def calc(text):
    """based on the input text, return the operation result"""
    try:
        c = SimpleCalculator()
        c.run(text)
        return c.log[-1]
    except Exception as e:
        print(e)
        return 0.0


if __name__ == '__main__':
    print(calc(argv[1]))

Next, create a renderer.js file, and add the following code to spawn a Python process and execute the py/calc.py script:

// look up the UI elements defined in index.html
let input = document.querySelector('#input');
let result = document.querySelector('#result');
let btn = document.querySelector('#btn');

function sendToPython() {
  var python = require('child_process').spawn('python', ['./py/calc.py', input.value]);

  python.stdout.on('data', function (data) {
    console.log("Python response: ", data.toString('utf8'));
    result.textContent = data.toString('utf8');
  });

  python.stderr.on('data', (data) => {
    console.error(`stderr: ${data}`);
  });

  python.on('close', (code) => {
    console.log(`child process exited with code ${code}`);
  });
}

btn.addEventListener('click', () => {
  sendToPython();
});

btn.dispatchEvent(new Event('click'));

Next, open the index.html file and update it as follows:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Calling Python from Electron!</title>
</head>
<body>
  <h1>Simple Python Calculator!</h1>
  <p>Input something like <code>1 + 1</code>.</p>
  <input id="input" value="1 + 1"></input>
  <input id="btn" type="button" value="Send to Python!"></input>
  </br>
  Got <span id="result"></span>
  <script src="./renderer.js"></script>
</body>
</html>

Using python-shell

After seeing how to use child_process to do communication between Electron and Python, let's now see how to use python-shell.

python-shell is an npm package that provides an easy way to run Python scripts from Node.js with basic and efficient inter-process communication and error handling.

You can use python-shell for:

  • Spawning Python scripts,
  • Switching between text, JSON and binary modes,
  • Doing data transfers through stdin and stdout streams,
  • Getting stack traces in case of errors.

Go to your terminal, and run the following command to install python-shell from npm:

$ npm install --save python-shell

At the time of this writing, python-shell v1.0.8 is installed in our project.

Next, open the renderer.js file and update the sendToPython() function as follows:

function sendToPython() {
  var { PythonShell } = require('python-shell');

  let options = {
    mode: 'text',
    args: [input.value]
  };

  PythonShell.run('./py/calc.py', options, function (err, results) {
    if (err) throw err;
    // results is an array consisting of messages collected during execution
    console.log('results: ', results);
    result.textContent = results[0];
  });
}

Using Client-Server Communication

Let’s now see another way to achieve communication betweeen Python and Electron using an HTTP server.

Head back to your terminal and run the following command to install Flask and Flask-Cors:

$ pip install flask
$ pip install Flask-Cors

Next, in the py folder of your project, create a server.py file and add the following code to run a Flask server that simply performs a calculation and returns the result as an HTTP response:

import sys

from flask import Flask
from flask_cors import cross_origin

from calculator.simple import SimpleCalculator


def calcOp(text):
    """based on the input text, return the operation result"""
    try:
        c = SimpleCalculator()
        c.run(text)
        return c.log[-1]
    except Exception as e:
        print(e)
        return 0.0


app = Flask(__name__)


@app.route("/<input>")
@cross_origin()
def calc(input):
    return calcOp(input)


if __name__ == "__main__":
    app.run(host='127.0.0.1', port=5001)
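Before wiring this up to Electron, you can sanity-check the endpoint from Python; a quick sketch assuming the server above is already running locally and the requests package is installed:

import requests

# "1 + 1" becomes the <input> part of the route; requests URL-encodes the spaces
response = requests.get("http://127.0.0.1:5001/1 + 1")
print(response.text)  # should print the calculation result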

Next, open the renderer.js file and add the following code to spawn Python and run the server.py file:

let input = document.querySelector('#input')
let result = document.querySelector('#result')
let btn = document.querySelector('#btn')

function sendToPython() {
  var { PythonShell } = require('python-shell');

  let options = {
    mode: 'text'
  };

  PythonShell.run('./py/server.py', options, function (err, results) {
    if (err) throw err;
    // results is an array consisting of messages collected during execution
    console.log('response: ', results);
  });
}

function onclick() {
  fetch(`http://127.0.0.1:5001/${input.value}`).then((data) => {
    return data.text();
  }).then((text) => {
    console.log("data: ", text);
    result.textContent = text;
  }).catch(e => {
    console.log(e);
  })
}

sendToPython();

btn.addEventListener('click', () => {
  onclick();
});

btn.dispatchEvent(new Event('click'))

Recap

In this tutorial, we’ve introduced Electron for Python developers which can be a great tool if they want to build GUIs for their Python apps with modern web technologies based on HTML, CSS and JavaScript. We’ve also seen different ways to connect Python and Electron such as child_process, python-shell and an HTTP (Flask) server.

Categories: FLOSS Project Planets

Nikola: Automating Nikola rebuilds with GitHub Actions

Planet Python - Fri, 2020-04-24 18:24

In this guide, we’ll set up GitHub Actions to rebuild a Nikola website and host it on GitHub Pages.

See also: Travis CI version of this guide.

Why?

By using GitHub Actions to build your site, you can easily blog from anywhere you can edit text files. Which means you can blog with only a web browser and GitHub.com. You also won’t need to install Nikola and Python to write. You won’t need a real computer either — a mobile phone could probably access GitHub.com and write something.

Caveats
  • The build might take a couple minutes to finish (1:30 for the demo site; YMMV)

  • When you commit and push to GitHub, the site will be published unconditionally. If you don’t have a copy of Nikola for local use, there is no way to preview your site.

What you need
  • A computer for the initial setup that can run Nikola. You can do it with any OS (Linux, macOS, *BSD, but also Windows).

  • A GitHub account (free)

Setting up Nikola

Start by creating a new Nikola site and customizing it to your liking. Follow the Getting Started guide. You might also want to add support for other input formats, namely Markdown, but this is not a requirement.

After you’re done, you must configure deploying to GitHub in Nikola. There are a few important things you need to take care of:

  • Make your first deployment from your local computer and make sure your site works right. Don’t forget to set up .gitignore.

  • The GITHUB_COMMIT_SOURCE and GITHUB_REMOTE_NAME settings are overridden, so you can use values appropriate for your local builds.

  • Ensure that the correct branch for GitHub Pages is set on GitHub.com.

If everything works, you can make some change to your site (so you see that rebuilding works), but don’t commit it just yet.

Setting up GitHub Actions

Next, we need to set up GitHub Actions. This is really straightforward.

On your source branch, create a file named .github/workflows/main.yml with the following contents:

github-workflow.yml (Source)

on: [push]
jobs:
  nikola_build:
    runs-on: ubuntu-latest
    name: 'Deploy Nikola to GitHub Pages'
    steps:
    - name: Check out
      uses: actions/checkout@v2
    - name: Build and Deploy Nikola
      uses: getnikola/nikola-action@v2

There might be a newer version of the action available, you can check the latest version in the getnikola/nikola-action repo on GitHub.

By default, the action will install the latest stable release of Nikola[extras]. If you want to use the bleeding-edge version from master, or want to install some extra dependencies, you can provide a requirements.txt file in the repository.

Commit everything to GitHub:

git add .
git commit -am "Automate builds with GitHub Actions"

Hopefully, GitHub will build your site and deploy. Check the Actions tab in your repository or your e-mail for build details. If there are any errors, make sure you followed this guide to the letter.

Categories: FLOSS Project Planets
