FLOSS Project Planets

More foss in the north

Planet KDE - Fri, 2019-06-21 16:35

Today is midsummer eve. In Sweden, this is probably slightly larger than Christmas. Everyone goes someplace to meet someone and enjoy a day of food, dance and entertainment. And you’re supposed to have flowers on your head as shown below!

This year, midsummer falls on June 21, which means it is exactly four months until the first foss-north event outside of Gothenburg. That’s right – foss-north is going to Stockholm on October 21, and the theme will be IoT and Security. Make sure to save the date!

We have a venue and three great speakers lined up. There will be a CFP during July and the final speakers will be announced towards September. We’re also looking for sponsors (hint hint nudge nudge).

Now I’m off to enjoy the last hour of midsummer and enjoy the shortest night of the year. Take care and I’ll see you in Stockholm this autumn!

Categories: FLOSS Project Planets

Simon Josefsson: OpenPGP smartcard under GNOME on Debian 10 Buster

Planet Debian - Fri, 2019-06-21 14:09

Debian buster is almost released, and today I celebrate midsummer by installing a pre-release of it on my Lenovo X201 laptop. Everything went smoothly, except for the usual issues with smartcards under GNOME. I use an FST-01G running Gnuk, but the same issue applies to all OpenPGP cards, including YubiKeys. I wrote about this problem for earlier releases; see Smartcards on Debian 9 Stretch and Smartcards on Debian 8 Jessie. Some things have changed – GnuPG‘s internal ccid support now works, and dirmngr is installed by default when you install Debian with GNOME. I thought I’d write a new post for the new release.

After installing Debian and logging into GNOME, I start a terminal and attempt to use the smartcard as follows.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$

The reason is that the scdaemon package is not installed. Install it as follows.

jas@latte:~$ sudo apt-get install scdaemon

After this, gpg --card-status works. It is now using GnuPG’s internal CCID library, which appears to be working. The pcscd package is no longer required to get things working — however, installing it also works, and you might need pcscd if you use other applications that talk to the smartcard.

jas@latte:~$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.14-67252015:0
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 658
KDF setting ......: off
Signature key ....: A3CC 9C87 0B9D 310A BAD4  CF2F 5172 2B08 FE47 45A2
      created ....: 2019-03-20 23:40:49
Encryption key....: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60
      created ....: 2019-03-20 23:40:26
Authentication key: CA7E 3716 4342 DF31 33DF  3497 8026 0EE8 A9B9 2B2B
      created ....: 2019-03-20 23:40:37
General key info..: sub  ed25519/51722B08FE4745A2 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec#  ed25519/D73CF638C53C06BE  created: 2019-03-20  expires: 2019-10-22
ssb>  ed25519/80260EE8A9B92B2B  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/51722B08FE4745A2  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  cv25519/02923D7EE76EBD60  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
jas@latte:~$

As before, using the key does not work right away:

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No public key
gpg: signing failed: No public key
jas@latte:~$

This is because GnuPG does not have the public key that corresponds to the private key inside the smartcard.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$

You may retrieve your public key from the clouds as follows. With Debian Buster, the dirmngr package is installed by default, so there is no need to install it. Alternatively, if you configured your smartcard with a public key URL that works, you may type “fetch” into the gpg --card-edit interactive interface. This could be considered slightly more reliable (at least from a self-hosting point of view), because it uses your configured URL for retrieving the public key rather than trusting clouds.

jas@latte:~$ gpg --recv-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg: key D73CF638C53C06BE: 1 signature not checked due to a missing key
gpg: key D73CF638C53C06BE: public key "Simon Josefsson <simon@josefsson.org>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$
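
For illustration, the fetch route mentioned above might look something like the following session. This is an illustrative sketch, with output abbreviated and approximate; the exact messages depend on your GnuPG version:

jas@latte:~$ gpg --card-edit
...
gpg/card> fetch
gpg: requesting key from 'https://josefsson.org/key-20190320.txt'
gpg: key D73CF638C53C06BE: public key "Simon Josefsson <simon@josefsson.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg/card> quit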

Now signing with the smartcard works! Yay! By the way: compare the output size with the output size in the previous post to understand the size advantage of Ed25519 over RSA.

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwCEWWKTN8c/ddRHjaa4khlieP//S8vO5OkpZGMQ4GGTFFFkWn5nTzj3X
kGvXlfP6MLWsTCCFDFycAjARscUM/5MnXTF9aSG4ScVa3sDiB2//nPSVz13Mkpbo
nlzSezowRZrhn+Ky7/O6M7XljzzJvtJhfPvOyS+rpyqJlD+buumL+/eOPywA
=+WN7
-----END PGP MESSAGE-----

As before, encrypting to myself does not work smoothly because of the trust setting on the public key. Witness the problem here:

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 02923D7EE76EBD60: There is no assurance this key belongs to the named user
sub  cv25519/02923D7EE76EBD60 2019-03-20 Simon Josefsson <simon@josefsson.org>
 Primary key fingerprint: B1D2 BD13 75BE CB78 4CF4  F8C4 D73C F638 C53C 06BE
      Subkey fingerprint: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N)
gpg: signal Interrupt caught ... exiting

jas@latte:~$

You update the trust setting with the gpg --edit-key command.

jas@latte:~$ gpg --edit-key simon@josefsson.org
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret subkeys are available.

pub  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expires: 2019-10-22  usage: SC
     trust: unknown       validity: unknown
ssb  cv25519/02923D7EE76EBD60
     created: 2019-03-20  expires: 2019-10-22  usage: E
     card-no: FFFE 67252015
ssb  ed25519/80260EE8A9B92B2B
     created: 2019-03-20  expires: 2019-10-22  usage: A
     card-no: FFFE 67252015
ssb  ed25519/51722B08FE4745A2
     created: 2019-03-20  expires: 2019-10-22  usage: S
     card-no: FFFE 67252015
[ unknown] (1). Simon Josefsson <simon@josefsson.org>

gpg> trust
pub  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expires: 2019-10-22  usage: SC
     trust: unknown       validity: unknown
ssb  cv25519/02923D7EE76EBD60
     created: 2019-03-20  expires: 2019-10-22  usage: E
     card-no: FFFE 67252015
ssb  ed25519/80260EE8A9B92B2B
     created: 2019-03-20  expires: 2019-10-22  usage: A
     card-no: FFFE 67252015
ssb  ed25519/51722B08FE4745A2
     created: 2019-03-20  expires: 2019-10-22  usage: S
     card-no: FFFE 67252015
[ unknown] (1). Simon Josefsson <simon@josefsson.org>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expires: 2019-10-22  usage: SC
     trust: ultimate      validity: unknown
ssb  cv25519/02923D7EE76EBD60
     created: 2019-03-20  expires: 2019-10-22  usage: E
     card-no: FFFE 67252015
ssb  ed25519/80260EE8A9B92B2B
     created: 2019-03-20  expires: 2019-10-22  usage: A
     card-no: FFFE 67252015
ssb  ed25519/51722B08FE4745A2
     created: 2019-03-20  expires: 2019-10-22  usage: S
     card-no: FFFE 67252015
[ unknown] (1). Simon Josefsson <simon@josefsson.org>
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$

Confirm with gpg --list-keys that the key is now trusted; encrypting to yourself should then work.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
sub   ed25519 2019-03-20 [A] [expires: 2019-10-22]
sub   ed25519 2019-03-20 [S] [expires: 2019-10-22]
sub   cv25519 2019-03-20 [E] [expires: 2019-10-22]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2019-10-22
-----BEGIN PGP MESSAGE-----

hF4DApI9fuduvWASAQdA4FIwM27EFqNK1I5eZERaZVDAXJDmYLZQHjZD8TexT3gw
7SDaeTLm7s0QSyKtsRugRpex6eSVhfA3WG8fUOyzbNv4o7AC/TQdhZ2TDtXZGFtY
0j8BRYIjVDbYOIp1NM3kHnMGHWEJRsTbtLCitMWmLdp4C98DE/uVkwjw98xEJauR
/9ZNmmvzuWpaHuEJNiFjORA=
=tAXh
-----END PGP MESSAGE-----
jas@latte:~$

The issue with OpenSSH and GNOME Keyring still exists as in previous releases.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ echo $SSH_AUTH_SOCK
/run/user/1000/keyring/ssh
jas@latte:~$

The trick we used last time still works and, as far as I can tell, it is still the only recommended method to disable the gnome-keyring ssh component. Notice how we also configure GnuPG’s gpg-agent to enable its SSH agent support.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf

Log out of GNOME and log in again. Now the environment variable points to gpg-agent’s socket, and SSH authentication using the smartcard works.

jas@latte:~$ echo $SSH_AUTH_SOCK
/run/user/1000/gnupg/S.gpg-agent.ssh
jas@latte:~$ ssh-add -L
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzCFcHHrKzVSPDDarZPYqn89H5TPaxwcORgRg+4DagE cardno:FFFE67252015
jas@latte:~$

Topics for further discussion and research this time around include:

  1. Should scdaemon (and possibly pcscd) be pre-installed on Debian desktop systems?
  2. Could gpg --card-status attempt to import the public key and secret key stub automatically? Alternatively, there could be some new command that automates the bootstrapping of a new smartcard.
  3. Should GNOME keyring support smartcards?
  4. Why is GNOME keyring used by default for SSH rather than gpg-agent?
  5. Should gpg-agent enable SSH agent support by default?
  6. What could be done to automatically infer the trust setting for a smartcard based private key?

Thanks for reading and happy smartcarding!

Categories: FLOSS Project Planets

LabPlot getting prettier and also support for online datasets

Planet KDE - Fri, 2019-06-21 12:06
Introduction
Hello everyone! I'm participating in Google Summer of Code for the second time, working on KDE's LabPlot, just like last year. I'm very happy that I can work again with my former and current mentor Kristóf Fábián, and with Alexander Semke, an invaluable member of the LabPlot team, who is like a second mentor to me. First, let me introduce my current project:
"There are many internet pages providing data sets for educational and academic purposes concerning various fields of science, and not only (astrophysics, statistics, medicine, etc.). Some tools used in the scientific field provide some "wrappers" for such online sources and allow the user to easily investigate these data sets and work with them in all kinds of applications, whilst the technical details and methodology like the fetching of data from the server and parsing are done completely transparent for the user. The user doesn’t even know what happens in the “background”.
The goal of this project is to add similar functionality to LabPlot. This would make LabPlot more fit for educational purposes, students and teachers could use LabPlot for visualizing and analyzing data connected to the currently studied field. And also could bring LabPlot into the life of the average student."
If the synopsis caught your attention and you are interested in the project itself, you can check out my proposal to find out in detail what the project is really about.

Bonding period
Let's start with the bonding period. I used this time to investigate and analyze existing solutions for uploading and downloading with KNS3, along with its API documentation, to check out the welcome screens of various other applications for inspiration, and to study some simpler caching implementations.

I communicated with my mentor and others from the LabPlot team to properly design the project and the course of implementation. I also tried to get involved in the KDE community.

First month
At the end of the first month I can state that, fortunately, I was able to make quite good progress. Everything proposed for the first month has been successfully implemented, and I could also proceed to other tasks. Now let's see what's been done.
Dealing with datasets
The very first step was to implement a new widget, called ImportDatasetWidget which could provide the functionality to:
  • list the available categories and subcategories of datasets
  • list the available datasets for a certain subcategory
  • refresh the list of datasets and delete the downloaded metadata files

ImportDatasetWidget

The user can select from the categories and subcategories of the available datasets, as visible in the picture above. To visualize these I used a QTreeWidget. When the user clicks on a subcategory, every dataset belonging to it is listed in a QListView. The user can also search for a particular category or subcategory, since we estimate that there will be a considerable number of datasets by the end of the project; the same goes for the dataset list.
We had to create metadata files in order to record additional information about the datasets and to divide them into categories and subcategories. There is one metadata file that contains every category and subcategory and a list of datasets for every subcategory; additionally, there is a metadata file for every dataset, containing various data about the dataset itself.
In the "Datasets" section we highlight every dataset whose metadata is locally available (in the labplot directory located in the user's home directory). When the user clicks on the "Clear cache" button, every file is deleted from the above-mentioned directory. The "Refresh" button makes it possible to refresh the locally available metadata file which contains the categories and subcategories. In order to make it possible to import datasets into LabPlot and save them into spreadsheets, I had to implement a helper class: DatasetHandler. This class processes a dataset's metadata file, configures the Spreadsheet into which the data will be loaded, downloads the dataset, processes it (based on the preferences present in the metadata file), then loads its content into the spreadsheet.
 ImportDatasetWidget basic functionality

There is also an "Add new Dataset" button. This makes it possible for users to add their own datasets to LabPlot's list. When the button is clicked, a new dialog is shown to the user: DatasetMetadataManagerDialog.
Adding new dataset to the collection
This dialog provides an interface so the user can easily set the options necessary for the dataset's metadata file. The user doesn't have to create the metadata file themselves; the dialog does this based on the data provided by the user. The dialog also adds the new dataset's name to the categories' metadata file. Therefore the user can easily add new datasets, and later load them just as easily using the ImportDatasetWidget.

 Adding new dataset
While implementing these functionalities we were faced with some problems which still need to be solved. We wanted to make downloading and uploading via store.kde.org possible, using the KNS3 library, so users could really add new datasets to the basic list and could download only those metadata files which are needed. This was our intention; however, KNS3 gave us a hard time. It provides its functionality only through two dialogs, which we would prefer not to use. We want to incorporate the dialogs' functionality into LabPlot somehow; we haven't figured this out yet, but we are thinking about it. Another problem with KNS3 is that, according to KDE's mailing list, uploading with KNS3 is disabled for an indefinite amount of time due to errors caused by the library. Therefore the question arises: do we need downloading if uploading is not possible?

Initial Welcome Screen

Having implemented the dataset part quite quickly, and given the difficulties caused by KNS3, I started to design and create a prototype for our Welcome Screen. The first step was to decide which technology we wanted to adopt in order to implement it. We considered creating a widget-based GUI or using QML for this purpose. We chose QML because, despite being more complex and more cumbersome to work with, it offers greater freedom to implement ideas. Then we had to think through what functionality the welcome screen should provide. We came up with:
  • Recently opened projects
  • Help section: Documentation, FAQ, etc.
  • Exploring datasets
  • Example projects
  • Latest release information 
  • News section
The current state of the welcome screen

Every part, except the Examples section, is fully functional. In the recent projects section the user can choose from the recently opened projects and load one with a single click. The help section navigates the user to the Documentation, FAQ, Features and Support parts of LabPlot's web page. In the release section the user can read about the latest release. The news section is connected to the RSS feed of LabPlot's webpage, so the user can see the new posts of the web page. A central piece is the "Start exploring data" section, where the user can browse the available datasets, display information about them, and open them with only one click. The functionality of the welcome screen is presented in the following video:



Finally, I'd like to say a few words about the next steps. The first will be implementing the "Examples" section of the welcome screen. In order to do so, we'll have to design a metadata file for the example projects, and we'll also need to create some example projects to have something to work with. Once the welcome screen is functionally complete, we can proceed to refining the design itself, making it more pleasant and adapting it to the colors of the user's theme. We still have to figure out what to do with the KNS3 library: should we use it or not, and if yes, then how and in what manner. We also have to collect more datasets in order to provide the users of LabPlot with a considerable dataset collection.

This is it for now. I will continue to work on the project alongside Kristóf and Alexander; I think we form quite a good team, and I'm thankful to them for their guidance. When anything new is finished and running, I'll let you know.
See you soon! Bye!
Categories: FLOSS Project Planets

Day 26

Planet KDE - Fri, 2019-06-21 12:01

I spent my first two weeks of GSoC (and the three weeks before it started) trying to figure out how Khipu’s code works, but I didn’t get it. I was putting my effort into trying to plot vectors, but there was an enormous code structure I would have needed to understand first, and that was unexpected for me. So one of my mentors, Tomaz, suggested that I could change my project and try to refactor Khipu instead. This last week I started on a new interface, because the current interface is very simple and can be better.
I’m studying QML and have already started the new interface, which you can see below:

I’m at the end of my semester at college, so I need to split my time between GSoC and my college tasks. For now I’m going slowly, but next month I have my vacation and I’ll be able to dedicate all of my time to the project.
My mentors have helped me a lot so far, and I would like to thank them for their patience, and to apologize to KDE for my initial project and for spending the first weeks on something that didn’t produce anything.

Categories: FLOSS Project Planets

Codementor: Building Restful API with Flask, Postman & PyTest - Part 2 (Read Time: 10 Mins)

Planet Python - Fri, 2019-06-21 11:25
Today we shall cover the creation of mock endpoints in Postman, to help in the designing and prototyping of API endpoints for the expense manager project using Flask and pytest in part 3.
Categories: FLOSS Project Planets

OPTASY: How to Upgrade to Drupal 9: Just Identify and Remove Any Deprecated Code from Your Website

Planet Drupal - Fri, 2019-06-21 11:01

This is no news anymore: preparing to upgrade to Drupal 9 is just a matter of... cleaning your website of all deprecated code. 

No major disruption from Drupal 8. No more compatibility issues to expect (with dread)...

“Ok, but how do I know if my website's using any deprecated APIs or functions? How do I check for deprecations, identify them and then... update my code?”

Two legitimate questions that must be “haunting” you these days, whether you're a:
 

Categories: FLOSS Project Planets

Plasma Vision

Planet KDE - Fri, 2019-06-21 10:19

The Plasma Vision was written a couple of years ago: a short text saying what Plasma is and hopes to create, and defining our approach to making a useful and productive work environment for your computer. Because of creative differences it was never promoted or used properly, but in my quest to make KDE look as up to date in its presence on the web as it does on the desktop, I’ve got the Plasma sprinters who are meeting in Valencia this week to agree to adding it to the KDE Plasma webpage.

 

Categories: FLOSS Project Planets

OpenSense Labs: Drupal in the age of FinTech

Planet Drupal - Fri, 2019-06-21 09:41
"There are hundreds of startups with a lot of brains and money working on various alternatives to traditional banking" - Jamie Dimon, CEO, JPMorgan Chase

FinTech, and the disruption it can cause to traditional banking systems, is now a hot topic of debate at banking conferences. Global venture capital funds are super-bullish on this front and are increasing their investments in FinTech companies. Thanks to the burgeoning demand for FinTech in recent times, more crowdsourcing platforms are letting artists and fledgling entrepreneurs crowdsource capital from a large constituency of online donors or investors.


For instance, peer-to-peer (P2P) lending, the high-tech equivalent of borrowing money from friends, helps in raising a loan from an online community at a mutually negotiated interest rate. Also, digital wallet providers allow people to zip money across borders using handheld devices, even without any bank accounts.

The amalgamation of these technologies, which goes under the umbrella term FinTech, is expected to transform the way all of us use banking and financial services. And Drupal can act as the perfect content management framework for building a great FinTech platform.

A portmanteau of financial technology


Financial technology, commonly referred to as FinTech, describes the evolving intersection of financial services and technology. It covers innovations in the way people transact business, ranging from digital money to double-entry bookkeeping.

The lines between technology and the financial services are blurring

Since the advent of the internet revolution, and later the mobile internet revolution, financial technology has grown multifold. Originally referring to technology applied to the back office of banks or trading firms, FinTech now covers a broad variety of technological interventions in personal and commercial finance.

According to EY’s FinTech Adoption Index, one-third of consumers use two or more FinTech services, and more and more of these consumers are also aware of FinTech being a part of their daily lives.

FinTech encompasses startups, technology companies, and even legacy providers. Startups use technology to offer existing financial services at affordable costs and to provide new tech-driven solutions. Incumbent financial enterprises look to acquire or work with startups to drive digital innovation. Technology companies offer payment tools. All of these can be seen as FinTech. Clearly, the lines between technology and financial services are blurring.

Origins of FinTech (Source: 16Best)

In broad lines, the financial industry has seen a gargantuan shift over the years in the way it leverages rapid technological advancements. 16Best has compiled a brief history of FinTech which shows how the gap between financial services and technology has been bridged over the years.

The gap between financial services and technology has been bridged over the years.

In 1918, the Fedwire Funds Service began offering electronic funds transfer. And while the Great Depression was ravaging the world’s economies, IBM provided some solace with its 801 Bank Proof machine, which offered the means for faster cheque processing. Subsequently, credit cards and ATMs came into existence in the ‘50s and ‘60s.

In 1971, the first all-electronic trading emerged in the form of NASDAQ. And in 1973, SWIFT (the Society for Worldwide Interbank Financial Telecommunications) built a unified messaging framework between banks for handling money movement.

1997 saw the emergence of mobile payment through a Coca-Cola vending machine. Fast forward to the 2000s and the present decade, and a slew of innovations crashed into the finance sector with the introduction of digital wallets, contactless payments and cryptocurrencies.

FinTech is definitely re-inventing a quicker and more durable wheel as the world continues to witness a superabundance of new ventures refining financial services with technology.

Merits of FinTech


Financial technology has taken financial services to a whole new level with the cluster of merits it offers. Here are some of the major benefits of FinTech:

  • Robo Advisors: These are one of the biggest areas of FinTech. These online investment services put users through a slew of questions and then rely on algorithms to come up with an investment plan for them.
  • Online Lending: This encompasses all aspects of borrowing, from personal loans to refinancing student loans, and improves money lending.
  • Mobile Payments: There is a growing demand for mobile payment options, given the stupendous rise of mobile devices over the years.
Total revenue of global mobile payment market from 2015 to 2019 (in billion U.S. dollars) | Statista

  • Personal Finance and Savings: A plethora of FinTech organisations in the micro-saving department have been helping people save their change for rainy days, and many of them reward customers for doing so. For instance, Digit allows you to automate the process of saving extra cash.


  • Online Banking and Budgeting: Online banks like Simple reward users for using their ‘automatic savings’ service and also offer a cost-effective alternative to a traditional bank. Leveraging online tools, they help users plan budgets and handle their money smartly from their mobile devices, with minimal effort, to meet their savings goals.

  • Insurance: New insurance models have been strengthening the FinTech space. Metromile, for instance, sells pay-per-mile car insurance.


  • Regtech: Regulation technology, which utilises IT to enhance regulatory processes, is one of the significant sectors around which numerous FinTech app ideas have come to light. Regtech is useful for trading in financial markets, monitoring payment transactions and identifying clients, among other things. For instance, PassFort helps in standardising online compliance processes.

How is Drupal powering FinTech?

Organisations offering FinTech solutions need to maintain a robust online presence, and Drupal has been powering the FinTech landscape with its enormous capabilities.

The launch of TPG Capital


TPG Capital is one of the major enterprise-level FinTech companies which has leveraged the power of Drupal 8.

One of the primary objectives for TPG’s marketing circuit was to harness Drupal’s flexibility as a digital empowerment platform. They wanted the ability to make alterations to content on the fly and to try out new messaging approaches. At the same time, the financial industry’s stringent legal and regulatory requirements called for a flexible TPG platform that would meet the specific needs of the sector while offering top-notch security.

Drupal came out as the right choice for a CMS that would facilitate TPG’s goal of mirroring their cutting-edge business practices and incorporating modern website design and branding.

A digital agency built a responsive, mobile-first site. It featured newer CSS features like Flexbox and CSS animations, and minimised the site’s dependence on Compass by introducing Autoprefixer. Moreover, a Drupal 8 version of the Swiftype integration was built for the search component and contributed back to the Drupal community.

The launch of Tech Coast Angels


Tech Coast Angels is one of the biggest angel investment organisations in the US.

Tech Coast Angels selected Drupal as their CMS of choice for its excellent features: user authentication, account management, roles and access control, custom dashboards, intricate web forms for membership and funding applications, workflow management, and email notifications.

A digital agency made performance improvements to both the Drupal application and the server environments, which brought down costs to a huge extent by minimising the hardware requirements necessary to run the Drupal codebase in both staging and production environments.

With Drupal being one of the most security-focussed CMSs, it helped a great deal in making security-related amendments to the site. Views caching was enabled, and unnecessary modules were turned off on the production server.

Market trends


The Pulse of FinTech 2018 by KPMG shows that global investment activity in FinTech companies has been steadily rising, with 2018 turning out to be the most profitable year yet. It is only going to grow more in the coming years.

In the coming years, the main trends in the asset and wealth management, banking, insurance and transactions and payments services industries can be seen in the illustration above.

Conclusion

FinTech is a great alternative to traditional banks, and it excels where traditional banks lag behind. In addition to offering robust financial services leveraging technological advancements, organisations offering FinTech solutions need a superb digital presence to offer a great digital experience. Drupal can be an awesome content store for an enterprise-level FinTech platform.

Drupal experts at OpenSense Labs have been powering the digital transformation pursuits of organisations by offering a suite of services.

Contact us at hello@opensenselabs.com to build a FinTech web application for your business using Drupal.

Categories: FLOSS Project Planets

3D – Interactions with Qt, KUESA and Qt Design Studio, Part 1

Planet KDE - Fri, 2019-06-21 09:37

This is the first in a series of blog posts about 3D and the interaction with Qt, KUESA and Qt 3D Studio, and other things that pop up when we’re working on something.

I’m a 3D designer, mostly working in Blender. Sometimes I come across interesting problems, and I’ll try to share those here. For example, trying to display things on low-end hardware, where memory is sometimes limited, meaning every polygon and triangle counts; or where the renderer doesn’t do what the designer wants it to; that sort of thing. The problem that I’ll cover today is how to easily create a reflection in KUESA or Qt 3D Studio.

Neither KUESA nor Qt 3D Studio will give you free reflections. If you know a little about 3D, you know that true reflections require ray tracing software, not OpenGL. So, I wondered if there would be an easy way to create this effect. I mean, all that a reflection is, is a mirror of an object projected onto a plane, right? So, I wondered, could this be imitated?

To recreate this, I’d need to create an exact mirror of the object, duplicate it below the original, and have a floor that is partially transparent. I’ve created a simple scene to show you how this technique works – a scene with two cubes, a ground plane and a point light.
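
Purely as an illustration of the geometry involved — a sketch in Python, not code from KUESA or Qt 3D Studio — mirroring a mesh across a ground plane just negates the up-axis coordinate of every vertex:

# Hypothetical sketch: fake a reflection by mirroring vertices across
# the ground plane y = 0; the mirrored copy is then viewed through a
# partially transparent floor.
def mirror_across_ground(vertices):
    """Return the vertices reflected across the y = 0 plane."""
    return [(x, -y, z) for (x, y, z) in vertices]

cube_top = [(1.0, 2.0, 1.0), (-1.0, 2.0, -1.0)]
print(mirror_across_ground(cube_top))  # [(1.0, -2.0, 1.0), (-1.0, -2.0, -1.0)]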

Here’s the result of this scene. It’s starting to look like something, but I want to compare it to a ‘real’ reflection.

For comparison, the above is a cube on a reflective, rough surface, showing the result using ray tracing. You can see here that the reflection is different from our example above – the main issue is that the reflection gradually fades out the further away it gets from the contact point.

How to resolve this? It can be mimicked by creating an image texture for the alpha channel that fades out the mirrored model towards the top (or rather the bottom) of the reflection. I can further enhance the illusion by ensuring that the floor is rough, allowing the texture of the surface to assist the illusion of a reflection.
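
As a sketch of that idea (an assumed helper, not from the post), such a fade texture is just a vertical gradient in the alpha channel — opaque at the contact edge and transparent at the far edge of the reflection:

# Hypothetical sketch: one alpha value per texture row, fading linearly
# from fully opaque (at the contact point) to fully transparent.
def reflection_alpha_column(height):
    return [1.0 - row / (height - 1) for row in range(height)]

print(reflection_alpha_column(5))  # [1.0, 0.75, 0.5, 0.25, 0.0]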

Another difference between the shots is the blurriness at the edge of the mesh. This could be approximated by creating duplicates of the mesh and, for each one, increasing the size and reducing the opacity. Depending on the complexity of the model, this may add too many polygons to render while only adding a subtle effect.

So, given that this is a very simple example, and not one that would translate well to something a client might ask for, how can I apply this technique to a more complex model, such as the car below? I’ll chat about that in the next post.

The post 3D – Interactions with Qt, KUESA and Qt Design Studio, Part 1 appeared first on KDAB.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20190622 ('HongKong') released

GNU Planet! - Fri, 2019-06-21 09:36

GNU Parallel 20190622 ('HongKong') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

GNU Parallel turns 10 years old in a year, on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  I want to make a shout-out for @GnuParallel, it's a work of beauty and power
    -- Cristian Consonni @CristianCantoro

New in this release:

  • --shard can now take a column name and optionally a perl expression, similar to --group-by and replacement strings.
  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
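
As a small illustration of that last point — my example, not from the release notes — a sequential shell loop and a parallel equivalent:

# compress every log file, one at a time
for f in *.log; do gzip "$f"; done

# same work, but run as parallel jobs (by default one per CPU core)
parallel gzip ::: *.log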

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

mailutils @ Savannah: Version 3.7

GNU Planet! - Fri, 2019-06-21 09:15

Version 3.7 of GNU mailutils is available for download.

This version introduces a new mailbox format: dotmail. Dotmail is a replacement for the traditional mbox format, proposed by Kurt Hackenberg. A dotmail mailbox is a single disk file, where messages are stored sequentially. Each message ends with a single dot on a line of its own (similar to the format used in the SMTP DATA command). A dot appearing at the start of a line within a message is doubled, to prevent it from being interpreted as the end-of-message marker.
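
To illustrate that escaping rule, here is a rough sketch in Python — an illustration of the format only, not mailutils code:

def dot_stuff(message_lines):
    """Escape a message body the way dotmail (and SMTP DATA) stores it."""
    stored = []
    for line in message_lines:
        # a leading dot is doubled so it cannot be read as the end marker
        if line.startswith('.'):
            line = '.' + line
        stored.append(line)
    stored.append('.')  # a single dot terminates the message
    return stored

print(dot_stuff(['Hello', '.hidden dot', 'Bye']))
# ['Hello', '..hidden dot', 'Bye', '.']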

For a complete list of changes, please see the NEWS file.

Categories: FLOSS Project Planets

wishdesk.com: Responsive design in Drupal 8: great core & contributed modules

Planet Drupal - Fri, 2019-06-21 08:23
Drupal 8 has been built with mobile devices in mind. It has responsive default themes, responsive admin interfaces, and powerful opportunities for mobile-friendly design. Great Drupal 8 modules are very helpful in implementing any ideas in this area.
Categories: FLOSS Project Planets

New website, new company, new partners, new code

Planet KDE - Fri, 2019-06-21 06:00

The obvious change to announce is the new website design. But there is much more to talk about.

Website overhaul

The old website, reachable primarily on the domain subdiff.de, was a pure blog built with Jekyll, and the design was some random theme I picked up on GitHub. It was a quick thing to do back in the days when I needed a blog up fast for community interaction as a KWin and Plasma developer. But on the back burner my goal had already been for quite some time to rebuild the website with a more custom and professional design. Additionally, I wanted this website to be not only a blog but also a landing page with some general information about my work.

The opportunity arose now, and after several months of research and coding I finished the website rebuild. This all took longer because it seemed to me like an ideal occasion to learn about modern web development techniques, so I didn't settle for the first plain solution I came across but invested some more time into selecting and learning a suitable technology stack.

In the end I decided to use Gridsome (https://gridsome.org), a static site generator leveraging Vue.js (https://vuejs.org) for the frontend and GraphQL (https://graphql.org) as the data backend when generating the site. Gridsome is thereby a prime example of the JAMstack (https://jamstack.org), a most modern and very sensible way of building small to medium sized websites with only a few selected dynamic elements through JavaScript APIs while keeping everything else static. After all that learning, decision taking and finally coding, I'm now really happy with this solution and I definitely want to write about it in greater detail in the future. Feature-wise the current website provides what I think are the necessary basics. It could still be extended in several ways, but for now I will stick to these basics and only look into new features when I get an urge to do it.

Freelancer business

Since January I work as a freelancer. In Germany this means that I basically had to start a company, so I did that. I called it subdiff : software system, and the brand is still the domain name you are currently browsing; I used it before as this website's domain name and as an online nickname. It is derived from a mathematical concept, and on the other side it stands for a slogan I find sensible on a practical level in work and life:

Subtract the nonsense, differentiate what's left.

Part of Valve's Open Source Group

As a freelancer I am contracted by Valve to work on certain gaming-related XServer projects and to improve KWin in this regard and for general desktop usage. In the XServer there are two main projects at the moment. The technical details of one of them are currently being discussed in a work-in-progress patch series on GitLab (https://gitlab.freedesktop.org/xorg/xserver/merge_requests/211), but I want to write accessible articles about both projects here on the blog in the near future as well. In KWin I have several large projects I will look into which would benefit KWin on X11 and Wayland alike. The most relevant one is reworking the compositing pipeline (https://phabricator.kde.org/T11071). You can expect more info about this project and the other ones in KWin in future blog posts too.

New code

While there are some big projects in the pipeline, I was also able to commit some major changes to KWin and Plasma in the last few months. The largest one was for sure XWayland drag-and-drop support in KWin (https://phabricator.kde.org/R108:548978bfe1f714e51af6082933a512d28504f7e3).

In the best-case scenario the user won't even notice this feature, because drag-and-drop between any relevant windows will just work from now on in our Wayland session. Inside KWin, though, the technical solution enabling this was built up from the ground, in such a way that we should later be able to easily support something like middle-click paste between XWayland and Wayland native windows. There were two other major initiatives of mine that I was able to merge: finalizing the work of basing every display representation in KWin on the generic AbstractOutput class, and teaching Plasma's display management library, daemon and settings panel to save display-individual values (https://phabricator.kde.org/T10028) in a consistent way by introducing a new communication channel between these components. While the results of both enhancements are again supposed to be unnoticeable by the user, they should improve the code structure and increase overall stability. There is more work lined up for display management which will then directly affect the interface; take a look at this task (https://phabricator.kde.org/T11095) to see what I have planned.

So there is interesting work ahead. Luckily, this week I am with my fellow KWin and Plasma developers at the Plasma and Usability sprint in Valencia to discuss and plan work on such projects. The sprint officially started yesterday and the first day was already very productive. We strive to keep up that momentum till the end of the sprint next week, and I plan on writing an article about the sprint results afterwards. In the meantime you can follow @kdecommunity on Twitter (https://twitter.com/kdecommunity) if you want to receive timely updates on our sprint while it's happening.

Final remarks and prospect

I try to keep the articles in this blog rather prosaic and technical, but there are so many things moving forward and evolving right now that I want to spend a few paragraphs at the end on the opposite. In every aspect there is just immense potential when looking at our open source graphics stack, consisting of KDE Plasma with KWin, at the moment still good old X but in the future Wayland, and the Linux graphics drivers below. While the advantages of free and open source software for the people were always obvious, how rapidly this type of software became the backbone of our global economy signifies that it is immensely valuable for companies alike. In this context the opportunities for making use of our software offerings and improving them are endless, while the technical challenges we face when doing that are interesting. By this we can do our part so that the open source community will grow and foster.

As a reader of these sentences you are already in a prime position to take part in this great journey by becoming an active member of the community through contributing. Maybe you already do this, for example by coding, designing, researching, donating or just by giving us feedback on how our technology can become better. But if you are not yet, this is a great time to get involved and bring in your individual talents and motivation to build up something great together, for ourselves and everybody. You can find out more on how to do that by visiting KDE's Get Involved page (https://community.kde.org/Get_Involved) or by joining the ongoing discussion about KDE's future goals (http://blog.lydiapintscher.de/2019/06/09/evolving-kde-lets-set-some-new-goals-for-kde/).

Categories: FLOSS Project Planets

Ruslan Spivak: Let’s Build A Simple Interpreter. Part 15.

Planet Python - Fri, 2019-06-21 05:45

“I am a slow walker, but I never walk back.” — Abraham Lincoln

And we’re back to our regularly scheduled programming! :)

Before moving on to the topics of recognizing and interpreting procedure calls, let’s make some changes to improve our error reporting a bit. Up until now, if there was a problem getting a new token from the text, parsing the source code, or doing semantic analysis, a stack trace would be thrown right into your face with a very generic message. We can do better than that.

To provide better error messages pinpointing where in the code an issue happened, we need to add some features to our interpreter. Let’s do that and make some other changes along the way. This will make the interpreter more user friendly and give us an opportunity to flex our muscles after a “short” break in the series. It will also give us a chance to prepare for new features that we will be adding in future articles.

Goals for today:

  • Improve error reporting in the lexer, parser, and semantic analyzer. Instead of stack traces with very generic messages like “Invalid syntax”, we would like to see something more useful like “SyntaxError: Unexpected token -> Token(TokenType.SEMI, ‘;’, position=23:13)”
  • Add a "--scope" command line option to turn scope output on/off
  • Switch to Python 3. From here on out, all code will be tested on Python 3.7+ only

Let’s get cracking and start flexing our coding muscles by changing our lexer first.


Here is a list of the changes we are going to make in our lexer today:

  1. We will add error codes and custom exceptions: LexerError, ParserError, and SemanticError
  2. We will add new members to the Lexer class to help to track tokens’ positions: lineno and column
  3. We will modify the advance method to update the lexer’s lineno and column variables
  4. We will update the error method to raise a LexerError exception with information about the current line and column
  5. We will define token types in the TokenType enumeration class (Support for enumerations was added in Python 3.4)
  6. We will add code to automatically create reserved keywords from the TokenType enumeration members
  7. We will add new members to the Token class: lineno and column, to keep track of the token’s line number and column number, respectively, in the text
  8. We will refactor the get_next_token method code to make it shorter and have a generic code that handles single-character tokens


1. Let’s define some error codes first. These codes will be used by our parser and semantic analyzer. Let’s also define the following error classes: LexerError, ParserError, and SemanticError for lexical, syntactic, and, correspondingly, semantic errors:

from enum import Enum


class ErrorCode(Enum):
    UNEXPECTED_TOKEN = 'Unexpected token'
    ID_NOT_FOUND     = 'Identifier not found'
    DUPLICATE_ID     = 'Duplicate id found'


class Error(Exception):
    def __init__(self, error_code=None, token=None, message=None):
        self.error_code = error_code
        self.token = token
        # add exception class name before the message
        self.message = f'{self.__class__.__name__}: {message}'


class LexerError(Error):
    pass


class ParserError(Error):
    pass


class SemanticError(Error):
    pass


ErrorCode is an enumeration class, where each member has a name and a value:

>>> from enum import Enum
>>>
>>> class ErrorCode(Enum):
...     UNEXPECTED_TOKEN = 'Unexpected token'
...     ID_NOT_FOUND     = 'Identifier not found'
...     DUPLICATE_ID     = 'Duplicate id found'
...
>>> ErrorCode
<enum 'ErrorCode'>
>>>
>>> ErrorCode.ID_NOT_FOUND
<ErrorCode.ID_NOT_FOUND: 'Identifier not found'>


The Error base class constructor takes three arguments:

  • error_code: ErrorCode.ID_NOT_FOUND, etc

  • token: an instance of the Token class

  • message: a message with more detailed information about the problem

As I’ve mentioned before, LexerError is used to indicate an error encountered in the lexer, ParserError is for syntax-related errors during the parsing phase, and SemanticError is for semantic errors.


2. To provide better error messages, we want to display the position in the source text where the problem happened. To be able to do that, we need to start tracking the current line number and column in our lexer as we generate tokens. Let’s add lineno and column fields to the Lexer class:

class Lexer(object):
    def __init__(self, text):
        ...
        # self.pos is an index into self.text
        self.pos = 0
        self.current_char = self.text[self.pos]
        # token line number and column number
        self.lineno = 1
        self.column = 1


3. The next change we need to make is to increment lineno and reset column in the advance method when encountering a newline, and also to increase the column value on each advance of the self.pos pointer:

def advance(self):
    """Advance the `pos` pointer and set the `current_char` variable."""
    if self.current_char == '\n':
        self.lineno += 1
        self.column = 0

    self.pos += 1
    if self.pos > len(self.text) - 1:
        self.current_char = None  # Indicates end of input
    else:
        self.current_char = self.text[self.pos]
        self.column += 1

With those changes in place, every time we create a token we will pass the current lineno and column from the lexer to the newly created token.


4. Let’s update the error method to throw a LexerError exception with a more detailed error message telling us the current character that the lexer choked on and its location in the text.

def error(self):
    s = "Lexer error on '{lexeme}' line: {lineno} column: {column}".format(
        lexeme=self.current_char,
        lineno=self.lineno,
        column=self.column,
    )
    raise LexerError(message=s)


5. Instead of having token types defined as module level variables, we are going to move them into a dedicated enumeration class called TokenType. This will help us simplify certain operations and make some parts of our code a bit shorter.

Old style:

# Token types
PLUS  = 'PLUS'
MINUS = 'MINUS'
MUL   = 'MUL'
...

New style:

class TokenType(Enum):
    # single-character token types
    PLUS          = '+'
    MINUS         = '-'
    MUL           = '*'
    FLOAT_DIV     = '/'
    LPAREN        = '('
    RPAREN        = ')'
    SEMI          = ';'
    DOT           = '.'
    COLON         = ':'
    COMMA         = ','
    # block of reserved words
    PROGRAM       = 'PROGRAM'  # marks the beginning of the block
    INTEGER       = 'INTEGER'
    REAL          = 'REAL'
    INTEGER_DIV   = 'DIV'
    VAR           = 'VAR'
    PROCEDURE     = 'PROCEDURE'
    BEGIN         = 'BEGIN'
    END           = 'END'      # marks the end of the block
    # misc
    ID            = 'ID'
    INTEGER_CONST = 'INTEGER_CONST'
    REAL_CONST    = 'REAL_CONST'
    ASSIGN        = ':='
    EOF           = 'EOF'


6. We used to manually add items to the RESERVED_KEYWORDS dictionary whenever we had to add a new token type that was also a reserved keyword. If we wanted to add a new STRING token type, we would have to

  • (a) create a new module level variable STRING = ‘STRING’
  • (b) manually add it to the RESERVED_KEYWORDS dictionary

Now that we have the TokenType enumeration class, we can remove the manual step (b) above and keep token types in one place only. This is the “two is too many” rule in action - going forward, the only change you need to make to add a new keyword token type is to put the keyword between PROGRAM and END in the TokenType enumeration class, and the _build_reserved_keywords function will take care of the rest:

def _build_reserved_keywords():
    """Build a dictionary of reserved keywords.

    The function relies on the fact that in the TokenType
    enumeration the beginning of the block of reserved keywords is
    marked with PROGRAM and the end of the block is marked with
    the END keyword.

    Result:
        {'PROGRAM': <TokenType.PROGRAM: 'PROGRAM'>,
         'INTEGER': <TokenType.INTEGER: 'INTEGER'>,
         'REAL': <TokenType.REAL: 'REAL'>,
         'DIV': <TokenType.INTEGER_DIV: 'DIV'>,
         'VAR': <TokenType.VAR: 'VAR'>,
         'PROCEDURE': <TokenType.PROCEDURE: 'PROCEDURE'>,
         'BEGIN': <TokenType.BEGIN: 'BEGIN'>,
         'END': <TokenType.END: 'END'>}
    """
    # enumerations support iteration, in definition order
    tt_list = list(TokenType)
    start_index = tt_list.index(TokenType.PROGRAM)
    end_index = tt_list.index(TokenType.END)
    reserved_keywords = {
        token_type.value: token_type
        for token_type in tt_list[start_index:end_index + 1]
    }
    return reserved_keywords


RESERVED_KEYWORDS = _build_reserved_keywords()


As you can see from the function’s documentation string, the function relies on the fact that a block of reserved keywords in the TokenType enum is marked by PROGRAM and END keywords.

The function first turns TokenType into a list (the definition order is preserved), and then it gets the starting index of the block (marked by the PROGRAM keyword) and the end index of the block (marked by the END keyword). Next, it uses dictionary comprehension to build a dictionary where the keys are string values of the enum members and the values are the TokenType members themselves.

>>> from spi import _build_reserved_keywords
>>> from pprint import pprint
>>> pprint(_build_reserved_keywords())  # 'pprint' sorts the keys
{'BEGIN': <TokenType.BEGIN: 'BEGIN'>,
 'DIV': <TokenType.INTEGER_DIV: 'DIV'>,
 'END': <TokenType.END: 'END'>,
 'INTEGER': <TokenType.INTEGER: 'INTEGER'>,
 'PROCEDURE': <TokenType.PROCEDURE: 'PROCEDURE'>,
 'PROGRAM': <TokenType.PROGRAM: 'PROGRAM'>,
 'REAL': <TokenType.REAL: 'REAL'>,
 'VAR': <TokenType.VAR: 'VAR'>}


7. The next change is to add new members to the Token class, namely lineno and column, to keep track of a token's line number and column number in the text.

class Token(object):
    def __init__(self, type, value, lineno=None, column=None):
        self.type = type
        self.value = value
        self.lineno = lineno
        self.column = column

    def __str__(self):
        """String representation of the class instance.

        Example:
            >>> Token(TokenType.INTEGER, 7, lineno=5, column=10)
            Token(TokenType.INTEGER, 7, position=5:10)
        """
        return 'Token({type}, {value}, position={lineno}:{column})'.format(
            type=self.type,
            value=repr(self.value),
            lineno=self.lineno,
            column=self.column,
        )

    def __repr__(self):
        return self.__str__()


8. Now, onto the get_next_token method changes. Thanks to enums, we can reduce the amount of code that deals with single-character tokens by writing generic code that generates them and doesn't need to change when we add a new single-character token type:

Instead of a lot of code blocks like these:

if self.current_char == ';':
    self.advance()
    return Token(SEMI, ';')

if self.current_char == ':':
    self.advance()
    return Token(COLON, ':')

if self.current_char == ',':
    self.advance()
    return Token(COMMA, ',')
...

We can now use this generic code to take care of all current and future single-character tokens

# single-character token
try:
    # get enum member by value, e.g.
    # TokenType(';') --> TokenType.SEMI
    token_type = TokenType(self.current_char)
except ValueError:
    # no enum member with value equal to self.current_char
    self.error()
else:
    # create a token with a single-character lexeme as its value
    token = Token(
        type=token_type,
        value=token_type.value,  # e.g. ';', '.', etc
        lineno=self.lineno,
        column=self.column,
    )
    self.advance()
    return token

Arguably it’s less readable than a bunch of if blocks, but it’s pretty straightforward once you understand what’s going on here. Python enums allow us to access enum members by values and that’s what we use in the code above. It works like this:

  • First we try to get a TokenType member by the value of self.current_char
  • If the operation throws a ValueError exception, that means we don’t support that token type
  • Otherwise we create a correct token with the corresponding token type and value.
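
You can see the lookup-by-value behaviour directly in the REPL; the '@' below is just an arbitrary character our grammar does not support:

>>> from spi import TokenType
>>> TokenType(';')
<TokenType.SEMI: ';'>
>>> TokenType('@')
Traceback (most recent call last):
  ...
ValueError: '@' is not a valid TokenType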

This block of code will handle all current and new single character tokens. All we need to do to support a new token type is to add the new token type to the TokenType definition and that’s it. The code above will stay unchanged.

The way I see it, this generic code is a win-win: we learned a bit more about Python enums, specifically how to access enumeration members by value; we wrote generic code to handle all single-character tokens; and, as a side effect, we reduced the amount of repetitive code needed to handle them.

The next stop is parser changes.


Here is a list of changes we’ll make in our parser today:

  1. We will update the parser’s error method to throw a ParserError exception with an error code and current token
  2. We will update the eat method to call the modified error method
  3. We will refactor the declarations method and move the code that parses a procedure declaration into a separate method.

1. Let’s update the parser’s error method to throw a ParserError exception with some useful information

def error(self, error_code, token):
    raise ParserError(
        error_code=error_code,
        token=token,
        message=f'{error_code.value} -> {token}',
    )
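
Assuming the ErrorCode enumeration defined earlier alongside the error classes maps UNEXPECTED_TOKEN to the text 'Unexpected token', a failed parse will now be reported along these lines:

ParserError: Unexpected token -> Token(TokenType.SEMI, ';', position=13:13)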


2. And now let’s modify the eat method to call the updated error method

def eat(self, token_type):
    # compare the current token type with the passed token
    # type and if they match then "eat" the current token
    # and assign the next token to the self.current_token,
    # otherwise raise an exception.
    if self.current_token.type == token_type:
        self.current_token = self.get_next_token()
    else:
        self.error(
            error_code=ErrorCode.UNEXPECTED_TOKEN,
            token=self.current_token,
        )


3. Next, let's update the declarations method's documentation string and move the code that parses a procedure declaration into a separate method, procedure_declaration:

def declarations(self):
    """
    declarations : (VAR (variable_declaration SEMI)+)? procedure_declaration*
    """
    declarations = []
    if self.current_token.type == TokenType.VAR:
        self.eat(TokenType.VAR)
        while self.current_token.type == TokenType.ID:
            var_decl = self.variable_declaration()
            declarations.extend(var_decl)
            self.eat(TokenType.SEMI)

    while self.current_token.type == TokenType.PROCEDURE:
        proc_decl = self.procedure_declaration()
        declarations.append(proc_decl)

    return declarations

def procedure_declaration(self):
    """procedure_declaration :
         PROCEDURE ID (LPAREN formal_parameter_list RPAREN)? SEMI block SEMI
    """
    self.eat(TokenType.PROCEDURE)
    proc_name = self.current_token.value
    self.eat(TokenType.ID)
    params = []

    if self.current_token.type == TokenType.LPAREN:
        self.eat(TokenType.LPAREN)
        params = self.formal_parameter_list()
        self.eat(TokenType.RPAREN)

    self.eat(TokenType.SEMI)
    block_node = self.block()
    proc_decl = ProcedureDecl(proc_name, params, block_node)
    self.eat(TokenType.SEMI)
    return proc_decl

These are all the changes in the parser. Now, we'll move on to the semantic analyzer.


And finally here is a list of changes we’ll make in our semantic analyzer:

  1. We will add a new error method to the SemanticAnalyzer class to throw a SemanticError exception with some additional information
  2. We will update visit_VarDecl to signal an error by calling the error method with a relevant error code and token
  3. We will also update visit_Var to signal an error by calling the error method with a relevant error code and token
  4. We will add a log method to both the ScopedSymbolTable and SemanticAnalyzer, and replace all print statements with calls to self.log in the corresponding classes
  5. We will add a command line option "--scope" to turn scope logging on and off (it will be off by default) to control how "noisy" we want our interpreter to be
  6. We will add empty visit_Num and visit_UnaryOp methods


1. First things first. Let’s add the error method to throw a SemanticError exception with a corresponding error code, token and message:

def error(self, error_code, token):
    raise SemanticError(
        error_code=error_code,
        token=token,
        message=f'{error_code.value} -> {token}',
    )


2. Next, let’s update visit_VarDecl to signal an error by calling the error method with a relevant error code and token

def visit_VarDecl(self, node):
    type_name = node.type_node.value
    type_symbol = self.current_scope.lookup(type_name)

    # We have all the information we need to create a variable symbol.
    # Create the symbol and insert it into the symbol table.
    var_name = node.var_node.value
    var_symbol = VarSymbol(var_name, type_symbol)

    # Signal an error if the table already has a symbol
    # with the same name
    if self.current_scope.lookup(var_name, current_scope_only=True):
        self.error(
            error_code=ErrorCode.DUPLICATE_ID,
            token=node.var_node.token,
        )

    self.current_scope.insert(var_symbol)


3. We also need to update the visit_Var method to signal an error by calling the error method with a relevant error code and token

def visit_Var(self, node):
    var_name = node.value
    var_symbol = self.current_scope.lookup(var_name)
    if var_symbol is None:
        self.error(error_code=ErrorCode.ID_NOT_FOUND, token=node.token)

Now semantic errors will be reported as follows:

SemanticError: Duplicate id found -> Token(TokenType.ID, 'a', position=21:4)

Or

SemanticError: Identifier not found -> Token(TokenType.ID, 'b', position=22:9)


4. Let’s add the log method to both the ScopedSymbolTable and SemanticAnalyzer, and replace all print statements with calls to self.log:

def log(self, msg):
    if _SHOULD_LOG_SCOPE:
        print(msg)

As you can see, the message will be printed only if the global variable _SHOULD_LOG_SCOPE is set to True. The --scope command line option that we will add in the next step will control the value of the _SHOULD_LOG_SCOPE variable.
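
For this to work, _SHOULD_LOG_SCOPE needs a module-level default; a one-line sketch, assuming it sits near the top of spi.py:

_SHOULD_LOG_SCOPE = False  # flipped to True by the --scope command line option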


5. Now, let's update the main function and add a command line option "--scope" to turn scope logging on and off (it's off by default)

parser = argparse.ArgumentParser(
    description='SPI - Simple Pascal Interpreter'
)
parser.add_argument('inputfile', help='Pascal source file')
parser.add_argument(
    '--scope',
    help='Print scope information',
    action='store_true',
)
args = parser.parse_args()

global _SHOULD_LOG_SCOPE
_SHOULD_LOG_SCOPE = args.scope

Here is an example with the switch on:

$ python spi.py idnotfound.pas --scope
ENTER scope: global
Insert: INTEGER
Insert: REAL
Lookup: INTEGER. (Scope name: global)
Lookup: a. (Scope name: global)
Insert: a
Lookup: b. (Scope name: global)
SemanticError: Identifier not found -> Token(TokenType.ID, 'b', position=6:9)

And with scope logging off (default):

$ python spi.py idnotfound.pas
SemanticError: Identifier not found -> Token(TokenType.ID, 'b', position=6:9)


6. Finally, add empty visit_Num and visit_UnaryOp methods so that the analyzer has explicit handlers for number literals and unary operators when it walks expressions

def visit_Num(self, node):
    pass

def visit_UnaryOp(self, node):
    pass

These are all the changes to our semantic analyzer for now.

See GitHub for Pascal files with different errors to try your updated interpreter on and see what error messages the interpreter generates.


That is all for today. You can find the full source code for today’s article interpreter on GitHub. In the next article we’ll talk about how to recognize (i.e. how to parse) procedure calls. Stay tuned and see you next time!


Categories: FLOSS Project Planets

Candy Tsai: Outreachy Week 5: What is debci?

Planet Debian - Fri, 2019-06-21 05:33

The theme for this week in Outreachy is “Think About Your Audience”. So I’m currently thinking about you.

Or not?

After being asked sooo many times what I am doing for this internship, I think I never explained it well enough for others to understand. Let me give it a try here.

debci is short for “Debian Continuous Integration”, so I’ll start with a short definition of what “Continuous Integration” is then!

Continuous Integration (CI)

Since there should be quite some articles talking about this topic, here is a quick explanation that I found on Microsoft Azure (link):

Continuous Integration (CI) is the process of automating the build and testing of code every time a team member commits changes to version control.

A scenario would be: whenever I push code to Debian's Salsa GitLab, it automatically runs the tests we have written in our code. This is to make sure that the new code changes don't break the stuff that used to work.

The debci Project

Before Debian puts out a new release, all of the packages that have tests written for them need to be tested. debci is a platform for testing packages and provides a UI to see whether they pass or not. The goal is to make sure that the packages will pass their tests before a major Debian release. For example, when the ruby-defaults package is updated, we not only want to test ruby-defaults but also all the packages that depend on it. In short, debci helps make sure the packages are working correctly.

For my internship, I am working on improving the user experience of the debci site's UI. The biggest task is to let developers easily test their packages against packages from different suites and architectures.

The terms that keep popping up in debci are:

  • suite
  • architecture
  • pin-packages
  • trigger

There are three obvious suites on the debci site right now, namely unstable, testing and stable. There is also an experimental suite that a user can test their packages against. An architecture is something like amd64 or arm64.

The life of a package is something like this:

  1. When a package is updated/added, it goes into unstable
  2. After it has stayed 2-10 days in unstable without any big issues, it moves into testing
  3. It becomes stable after a release
Normally a package moves from unstable > testing > stable

Let's say a user wants to test the ruby-defaults package in unstable on amd64 along with a package C from experimental. Here package C would be a pin-package, that is, a package the user wants to test alongside. Last but not least, trigger is just the name of the test job; one can choose to use it or not.
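
To make these four terms concrete, here's a rough sketch of what a single request bundles together; the field names below are made up for illustration and are not debci's actual API schema:

{
  "package": "ruby-defaults",
  "suite": "unstable",
  "arch": "amd64",
  "pin-packages": [["package-c", "experimental"]],
  "trigger": "my-test-run"
}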

Currently, there is an API you can use to make this request with curl or something similar, but it's not very friendly since not everyone is familiar with what the request should look like. Therefore, not a lot of people are willing to use it, and I have also seen requests for an improvement on this in the #debci channel. An easy-to-use UI might be the solution to make requesting these tests easier. Knowing that what I am working on is useful for others is an important key to keeping myself motivated.

The debci Community

The debci community is very small but active. The people that are directly involved are my mentors: terceiro and elbrus. Sometimes people drop by the IRC channel to ask questions, but basically that's it. This works pretty well for me, because I'm usually not at ease and will keep a low profile if the community gets too large.

I'm not familiar with the whole Debian community, but I have also been hanging around the #debian-outreach channel. It felt warm to know that someone realized there was an intern from Taiwan for this round of Outreachy. As far as I have experienced, everyone I have chatted with was nice and eager to share which Debian-related communities were close to me.

Week 5: Modularizing & Testing

This week I worked on adding tests and tried pulling out the authentication code to make the code a bit more DRY.

  • Learned how to setup tests in Ruby
  • Came up with test cases
  • Learned more about how classes work in Ruby
  • Separated the authentication code

And… probably also writing this blog post! I found that blogging takes up more time than I thought it should.

Categories: FLOSS Project Planets

Sven Hoexter: logstash json filter error

Planet Debian - Fri, 2019-06-21 04:08

If you've got a logstash filter that contains a json filter/decoding step like this:

filter {
  json {
    source => "log"
  }
}

you may end up with an error message like the following:

[2019-06-21T09:47:58,243][WARN ][logstash.filters.json ] Error parsing json {:source=>"log", :raw=>{"file"=>{"path"=>"/var/lib/docker/containers/abdf3db21fca8e1dc17c888d4aa661fe16ae4371355215157cf7c4fc91b8ea4b/abdf3db21fca8e1dc17c888d4aa661fe16ae4371355215157cf7c4fc91b8ea4b-json.log"}}, :exception=>java.lang.ClassCastException}

It might just be telling you that the field log already contains parsed JSON (note that the :raw value in the error is a hash, not a string), so no decoding is required.

Categories: FLOSS Project Planets

Talk Python to Me: #217 Notebooks vs data science-enabled scripts

Planet Python - Fri, 2019-06-21 04:00
On this episode, I met up with Rong Lu and Katherine Kampf from Microsoft while I was at BUILD this year. We cover a bunch of topics around data science and talk about two opposing styles of data science development and related tooling: Notebooks vs Python code files and editors.
Categories: FLOSS Project Planets

Agiledrop.com Blog: Burnout: Symptoms of developer burnout & ways to tackle it

Planet Drupal - Fri, 2019-06-21 03:06

Burnout is becoming an increasingly prevalent problem, especially in a field as fast-paced as development. In this post, we'll take a look at how you can spot the symptoms of burnout in your developers and what measures you can take to tackle it.

Categories: FLOSS Project Planets

OpenSense Labs: Disseminating Knowledge: Drupal for Education and E-learning

Planet Drupal - Fri, 2019-06-21 02:56
Shankar | Fri, 06/21/2019 - 12:26

"Information is a source of learning. But unless it is organized, processed, and available to the right people in a format for decision making, it is a burden, not a benefit." - C. William Pollard, Chairman, Fairwyn Investment Company

Have you always secretly wanted to spend your evenings writing symphonies, learning about filmography or assessing climate change? Studying niche subjects have traditionally been for niche students. But e-learning platforms have changed all that with the provision for learning almost any subject online.


Corporate e-learning has witnessed a stupendous 900% growth in the last decade or so. With more and more e-learning platforms flourishing, organisations are striving to be the best to stand apart from the rest. Drupal has been a great asset in powering education and e-learning with its powerful capabilities that can help enterprises offer a wonderful digital experience. Let’s trace the roots of e-learning before diving deep into the ocean of possibilities with Drupal for building an amazing e-learning platform.

Before the internet era

Source: eFront

A brief history of e-learning can be traced through the compilation made by eFront. Even before the internet existed, distance education was being offered. In 1840, Isaac Pitman taught shorthand via correspondence where completed assignments were sent to him via mail and he would, then, send his students more work.

Fast forward to the 20th century: the first testing machine, which enabled students to test themselves, was invented in 1924. The teaching machine was invented in 1954 by a Harvard professor to allow schools to administer programmed instruction to students. In 1960, the first computer-based training (CBT) program, Programmed Logic for Automatic Teaching Operations (PLATO), was introduced.

At a CBT systems seminar in 1999, the term 'e-learning' was first used. Eventually, with the internet and computers becoming the core of businesses, the 2000s saw the adoption of e-learning by organisations to train employees. Today, a plenitude of e-learning solutions is available in the form of MOOCs (Massive Open Online Courses), social platforms and Learning Management Systems, among others.

E-learning: Learn anywhere, anytime

In essence, e-learning refers to the computer-based educational tool or system that allows you to learn anywhere and at any time. It is the online method of building skills and knowledge across the complete workforce and with customers and partners. It comes with numerous formats like the self-paced courses, virtual live classrooms or informal learning.


Technological advancements have diminished the geographical gap with the use of tools that can make you feel as if you are inside the classroom. E-learning provides the ability to share material in all sorts of formats such as videos, slideshows, and PDFs. It is possible to conduct webinars (live online classes) and communicate with professors via chat and message forums.

There is a superabundance of different e-learning systems (otherwise known as Learning Management Systems, or LMS) and methods which enable the courses to be delivered. With the right kind of tools, several processes can be automated, such as the marking of tests or the creation of engaging content. E-learning offers learners the ability to fit learning around their lifestyles, thereby enabling even the busiest of persons to further a career and gain new qualifications.

Merits and Demerits

Some of the major benefits are outlined below:

  • No restrictions: E-learning facilitates learning without having to organise when and where everyone, who is interested in learning a course, can be present.
  • Interactive and fun: Designing a course to make it interactive and fun with the use of multimedia or gamification enhances engagement and the relative lifetime of the course.
  • Affordable: E-learning is cost-effective. For instance, while textbooks become obsolete and force learners to keep paying exorbitant amounts for new editions, e-learning content can simply be updated.

Some of the concerns that need to be taken care of:

  • Practical skills: It is considered tougher to pick up skills like building a wooden table, pottery, and car engineering from online resources as these require hands-on experience.
  • Isolation: Although e-learning enables a person to remotely access a classroom in his or her own time, learners may feel cut off. Tools such as video conferencing, social media and discussion forums can help them actively engage with professors and other students.
  • Health concerns: Since a computer or mobile device is a must, health-related issues like eyestrain, bad posture, and other physical problems may be troublesome. However, this can be mitigated by sending learners proper guidelines beforehand, covering correct sitting posture, desk height, and regular breaks.
Building Yardstick LMS with Drupal

OpenSense Labs built Yardstick LMS, a learning management system, for Yardstick Educational Initiatives which caters to the students of various schools of Dubai.

Yardstick LMS Homepage

The architecture of the project involved a lot of custom development:

1. Yardstick Core

This is the core module of the Yardstick LMS where the process of creating, updating and deleting the nodes take place.

2. Yardstick Quiz

We built this custom module for the whole functionality of the quiz component. It generates a quiz, a quiz palette and, once the quiz is completed, a quiz report, provided the report is configured to be visible.


We could generate three kinds of reports: 

  • An individual-level quiz where one’s performance is evaluated
  • A sectional-level report where performance for each section is evaluated
  • Grade-level report where performance for all the sections is compared and evaluated.

For the quiz, we had different sub-components like questions, options, marks, the average time to answer, learning objective, skill level score, and concept. The same question could be used for different quizzes, thereby minimising data redundancy. Also, images, videos or text could be added to questions.


3. Yardstick Bulk User Import

This module was built to assist administrators in creating users all at once by importing a CSV file. There is also an option to send all the users an invitation mail with their login credentials.


4. Yardstick Custom Login

We provided a custom login feature where the same school credentials could be used to log into the Yardstick system. That is, we used an endpoint for verifying the login credentials and, upon success, users were logged in.

5. Yardstick Validation

This module offers all the validation across the site, whether related to access permissions or time-based checks.

6. Yardstick Challenge

It offers users an option to submit tasks assigned to them, providing a text area and a file upload widget.


On the end user side, there is a seamless flow but as we go deeper, it becomes challenging. Yardstick LMS has an intricate structure.

We had two kinds of login:

  • Normal login using Yardstick credentials
  • And the other for school-specific login like the Delhi Public School (DPS) users.
Yardstick LMS custom login for DPS users

For DPS users, we used the same login form but a different functionality for validating credentials. DPS school gave us an endpoint where we sent a POST request with username and password. If the username and password were correct, then that endpoint returned the user information.

If a username was returned, we checked whether it already existed in our Yardstick system. If it did not, we programmatically created a new user with the information received from the endpoint and created a user session. If it did exist, we updated the password on our system.
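
As a rough sketch of that flow, here is a minimal Python illustration; the endpoint URL and field names are made up, an in-memory dict stands in for Yardstick's user table, and the real implementation lives in Drupal rather than in a standalone script like this:

import requests

DPS_ENDPOINT = 'https://dps.example.com/api/login'  # hypothetical URL

local_users = {}  # stand-in for the Yardstick user store


def dps_login(username, password):
    # Ask the school's endpoint to validate the credentials.
    resp = requests.post(
        DPS_ENDPOINT, data={'username': username, 'password': password}
    )
    if resp.status_code != 200:
        return None  # credentials rejected by the school

    info = resp.json()  # assumed to contain the user's details

    if username not in local_users:
        # No local account yet: create one from the returned info.
        local_users[username] = {'profile': info, 'password': password}
    else:
        # Account exists: keep the local password in sync.
        local_users[username]['password'] = password

    # A real implementation would start a user session here.
    return local_users[username]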


We designed Yardstick LMS in such a way that multiple schools can be governed at the same time. All the students of various schools will be learning the same content thereby building uniformity.

The core part of our system dwells in the modules. A module is a content type that can store various pieces of information like components, concept, description, objective and syllabus, among others.

Several different components can be added like Task, Quiz, Video task, Extension, Feedback, Inspiration, pdf lesson plan, Real life application, and Scientific principles.

Yardstick LMS Real life application component page

Schools could opt for different modules for different grades. When a module was subscribed to by a school, a clone of the master module was created, and this school copy was visible only to that school. The school version could be modified by the school admin as per their needs and preferences, while the master module remained the same. While creating a subscription, the administrator had to provide a date from which the components would be accessible to the students. The school admin could set different dates for different components, and only the components whose date had passed were accessible.

Flow Diagram of module subscription to school

Also, we provided an option to create a dynamic feedback form for the modules for analysis. Yardstick Admin had the option to design and create a feedback form as per their requirement and could assign it to a particular module. Different types of elements could be utilised for designing the form like rating, captcha, email, range slider, text field, checkboxes, radio buttons and so on.


Students and teachers need to submit their feedback for each of the modules. On the basis of this, the Yardstick team tries to improve the content of the system.


Also, various roles were defined for users such as Yardstick Administrator, School Administrator, Teacher, and Student.

1. Yardstick Admin

Yardstick Admin can perform all the operations. He or she can create new users, grant permissions and revoke them as well.

2. School Admin

School Admins can handle all the operations related to their own school. They manage the modules and their components and can import users for their school. All school reports and task submissions are visible to School Admins.

3. Teachers

Teachers can view modules and components assigned to their classes, provide remarks to students for multiple components, and view all kinds of reports.

4. Students

They can attempt quizzes, submit tasks, view components and view their own reports.

What’s the future of e-learning?

According to a report on Research and Markets, the e-learning market is anticipated to generate revenue of $65.41 billion by 2023 with a growth rate of 7.07% during the forecast period.

The report goes on to state that with the advent of cloud infrastructure, peer-to-peer problem solving and open content creation, more business opportunities would pop up for service providers in the global e-learning market. The introduction of cloud-based learning and AR/VR mobile-based learning will be a major factor in driving the growth of e-learning.


According to Technavio, the growth of the market is due to the learning process enhancements in the academic sector.

Global self-paced e-learning market 2019-2023 | Source: Technavio

Following are major trends to look forward to:

  • Microlearning, which emphasises on the design of microlearning activities through micro-steps in digital media environments, will be on the rise.
  • Gamification, which is the use of game thinking and game mechanics in a non-game context to keep the users engrossed and help them solve more problems, will see increased adoption rates.
  • Personalised learning, which is the tailoring of pedagogy, curriculum and learning environments to meet the demands of learners, can be a driving force.
  • Automatic learning, like the one shown in the movie The Matrix where a person is strapped onto a high-tech chair and a series of martial arts training programs are downloaded into his brain, can be a possibility.
Conclusion

It’s a world which is replete with possibilities. As one of the most intelligent species to walk on this earth, we perpetually innovate with the way we want to lead a better lifestyle. We learn new things to gain more knowledge. And in the process, we find ways of improving our learning experience. E-learning is one such tech marvel that promises to be a force to reckon with. It is not a disrupting technology but something that is going to get bigger and bigger in the years to come.

As a content management framework, Drupal offers a magnificent platform to build a robust e-learning system. With years of experience in Drupal Development, OpenSense Labs can help in providing an amazing digital experience. 

Contact us at hello@opensenselabs.com to build an e-learning system using Drupal and transform the educational experience.

Categories: FLOSS Project Planets

Learn PyQt: What's the difference between PyQt5 & PySide2? What should you use, and how to migrate.

Planet Python - Fri, 2019-06-21 00:24

If you start building Python applications with Qt5 you'll soon discover that there are in fact two packages which you can use to do this — PyQt5 and PySide2.

In this short guide I'll run through why exactly this is, whether you need to care (spoiler: you really don't), what the few differences are and how to work around them. By the end you should be comfortable re-using code examples from both PyQt5 and PySide2 tutorials to build your apps, regardless of which package you're using yourself.

Background

Why are there two packages?

PyQt has been developed by Phil Thompson of Riverbank Computing Ltd. for a very long time — supporting versions of Qt going back to 2.x. Back in 2009 Nokia, who owned the Qt toolkit at the time, wanted to have Python bindings for Qt available under the LGPL license (like Qt itself). Unable to come to agreement with Riverbank (who would lose money from this, so fair enough) they then released their own bindings as PySide (also, fair enough).

If you know why it's called PySide I would love to find out.

The two interfaces were comparable at first, but PySide development ultimately lagged behind PyQt. This was particularly noticeable following the release of Qt 5 — the Qt5 version of PyQt (PyQt5) was available from mid-2016, while the first stable release of PySide2 came 2 years later.

It is this delay which explains why many Qt 5 on Python examples use PyQt5 rather than PySide2 — it's not necessarily better, but it existed. However, the Qt project has recently adopted PySide as the official Qt for Python release, which should ensure its viability and increase its popularity going forward.

                          PyQt5                     PySide2
Current stable version    5.12                      5.12
(2019-06-23)
First stable release      Apr 2016                  Jul 2018
Developed by              Riverbank Computing Ltd.  Qt
License                   GPL or commercial         LGPL
Platforms                 Python 3                  Python 3 and Python 2.7
                                                    (Linux and MacOS only)

Which should you use? Well, honestly, it doesn't really matter.

Both packages are wrapping the same library — Qt5 — and so have 99.9% identical APIs (see below for the few differences). Code that is written for one can often be used as-is with the other, simply changing the imports from PyQt5 to PySide2. Anything you learn for one library will be easily applied to a project using the other.
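
As a minimal illustration of how mechanical the switch usually is (assuming the code uses no custom signals or slots, which are covered below):

# PyQt5
from PyQt5.QtWidgets import QApplication, QPushButton

# the same code under PySide2: only the import line changes
from PySide2.QtWidgets import QApplication, QPushButton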

Also, no matter which one you choose to use, it's worth familiarising yourself with the other so you can make the best use of all available online resources — using PyQt5 tutorials to build your PySide2 applications for example, and vice versa.

In this short chapter I'll run through the few notable differences between the two packages and explain how to write code which works seamlessly with both. After reading this you should be able to take any PyQt5 example online and convert it to work with PySide2.

Licensing

The key difference in the two versions — in fact the entire reason PySide2 exists — is licensing. PyQt5 is available under a GPL or commercial license, and PySide2 under a LGPL license.

If you are planning to release your software itself under the GPL, or you are developing software which will not be distributed, the GPL requirement of PyQt5 is unlikely to be an issue. However, if you plan to distribute your software commercially you will either need to purchase a commercial license from Riverbank for PyQt5 or use PySide2.

Qt itself is available under a Qt Commercial License, GPL 2.0, GPL 3.0 and LGPL 3.0 licenses.

Python versions
  • PyQt5 is Python 3 only
  • PySide2 is available for Python 3 and Python 2.7, but Python 2.7 builds are only available for 64-bit versions of MacOS and Linux; Windows 32-bit is supported on Python 3 only.
UI files

Both packages use slightly different approaches for loading .ui files exported from Qt Creator/Designer. PyQt5 provides the uic submodule which can be used to load UI files directly, to produce an object. This feels pretty Pythonic (if you ignore the camelCase).

import sys
from PyQt5 import QtWidgets, uic

app = QtWidgets.QApplication(sys.argv)
window = uic.loadUi("mainwindow.ui")
window.show()
app.exec()

The equivalent with PySide2 is one line longer, since you need to create a QUiLoader object first. Unfortunately the APIs of these two interfaces are different too (.load vs .loadUi) and take different parameters.

import sys
from PySide2 import QtCore, QtGui, QtWidgets
from PySide2.QtUiTools import QUiLoader

loader = QUiLoader()

app = QtWidgets.QApplication(sys.argv)
window = loader.load("mainwindow.ui", None)
window.show()
app.exec_()

To load a UI onto an object in PyQt5, for example in your QMainWindow.__init__, you can call uic.loadUi passing in self (the target widget) as the second parameter.

import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5 import uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        uic.loadUi("mainwindow.ui", self)

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

The PySide2 loader does not support this — the second parameter to .load is the parent widget of the widget you're creating. This prevents you adding custom code to the __init__ block of the widget, but you can work around this with a separate function.

import sys
from PySide2 import QtWidgets
from PySide2.QtUiTools import QUiLoader

loader = QUiLoader()

def mainwindow_setup(w):
    w.setWindowTitle("MainWindow Title")

app = QtWidgets.QApplication(sys.argv)
window = loader.load("mainwindow.ui", None)
mainwindow_setup(window)
window.show()
app.exec_()

Converting UI files to Python

Both libraries provide identical scripts to generate Python importable modules from Qt Designer .ui files. For PyQt5 the script is named pyuic5 —

pyuic5 mainwindow.ui -o MainWindow.py

You can then import the Ui_MainWindow object, subclass using multiple inheritance from the base class you're using (e.g. QMainWindow) and then call self.setupUi(self) to set the UI up.

import sys
from PyQt5 import QtWidgets
from MainWindow import Ui_MainWindow

class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setupUi(self)

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()

For PySide2 it is named pyside2-uic —

pyside2-uic mainwindow.ui -o MainWindow.py

The subsequent setup is identical.

import sys
from PySide2 import QtWidgets
from MainWindow import Ui_MainWindow

class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setupUi(self)

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

For more information on using Qt Designer with either PyQt5 or PySide2 see the Qt Creator tutorial.

exec() or exec_()

The .exec() method is used in Qt to start the event loop of your QApplication or dialog boxes. In Python 2.7 exec was a keyword, meaning it could not be used for variable, function or method names. The solution used in both PyQt4 and PySide was to rename uses of .exec() to .exec_() to avoid this conflict.

Python 3 removed the exec keyword, freeing the name up to be used. As PyQt5 targets only Python 3 it could remove the workaround, and .exec() calls are named just as in Qt itself. However, the .exec_() names are maintained for backwards compatibility.

PySide2 is available on both Python 3 and Python 2.7 and so still uses .exec_(). Its Python 2.7 support is, however, only available for 64-bit Linux and Mac.

If you're targeting both PySide2 and PyQt5 use .exec_()

Slots and Signals

Defining custom slots and signals uses slightly different syntax between the two libraries. PySide2 provides this interface under the names Signal and Slot while PyQt5 provides these as pyqtSignal and pyqtSlot respectively. The behaviour of both is identical for defining slots and signals.

The following PyQt5 and PySide2 examples are identical —

my_custom_signal = pyqtSignal()    # PyQt5
my_custom_signal = Signal()        # PySide2

my_other_signal = pyqtSignal(int)  # PyQt5
my_other_signal = Signal(int)      # PySide2

Or for a slot —

@pyqtSlot()
def my_custom_slot():
    pass

@Slot()
def my_custom_slot():
    pass
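
Both decorators can also be given the argument types the slot expects, and this part of the syntax is the same in the two libraries:

@pyqtSlot(int, str)   # PyQt5
def my_custom_slot(index, name):
    pass

@Slot(int, str)       # PySide2
def my_custom_slot(index, name):
    pass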

If you want to ensure consistency across PyQt5 and PySide2 you can use the following import pattern for PyQt5 to use the Signal and @Slot style there too.

from PyQt5.QtCore import pyqtSignal as Signal, pyqtSlot as Slot

You could of course do the reverse from PySide2.QtCore import Signal as pyqtSignal, Slot as pyqtSlot although that's a bit confusing.

Supporting both in libraries

You don't need to worry about this if you're writing a standalone app, just use whichever API you prefer.

If you're writing a library, widget or other tool you want to be compatible with both PyQt5 and PySide2 you can do so easily by adding both sets of imports.

import sys

if 'PyQt5' in sys.modules:
    # PyQt5
    from PyQt5 import QtGui, QtWidgets, QtCore
    from PyQt5.QtCore import pyqtSignal as Signal, pyqtSlot as Slot
else:
    # PySide2
    from PySide2 import QtGui, QtWidgets, QtCore
    from PySide2.QtCore import Signal, Slot

This is the approach used in our custom widgets library, where we support PyQt5 and PySide2 with a single library import. The only caveat is that you must ensure PyQt5 is imported before this library (as in, on the line above or earlier), to ensure it is in sys.modules.

An alternative would be to use an environment variable to switch between them — see QtPy later.

If you're doing this in multiple files it can get a bit cumbersome. A nice solution to this is to move the import logic to its own file, e.g. named qt.py in your project root. This module imports the Qt modules (QtCore, QtGui, QtWidgets, etc.) from one of the two libraries, and then you import into your application from there.

The contents of the qt.py are the same as we used earlier —

import sys

if 'PyQt5' in sys.modules:
    # PyQt5
    from PyQt5 import QtGui, QtWidgets, QtCore
    from PyQt5.QtCore import pyqtSignal as Signal, pyqtSlot as Slot
else:
    # PySide2
    from PySide2 import QtGui, QtWidgets, QtCore
    from PySide2.QtCore import Signal, Slot

You must remember to add any other PyQt5 modules you use (browser, multimedia, etc.) in both branches of the if block. You can then import Qt5 into your own application with —

from .qt import QtGui, QtWidgets, QtCore

…and it will work seamlessly across either library.

QtPy

If you need to target more than just Qt5 support (e.g. including PyQt4 and PySide v1) take a look at QtPy. This provides a standardised PySide2-like API for PyQt4, PySide, PyQt5 and PySide2. Using QtPy you can control which API to load from your application using the QT_API environment variable e.g.

import os
os.environ['QT_API'] = 'pyside2'

from qtpy import QtGui, QtWidgets, QtCore  # imports PySide2

That's really it

There's not much more to say — the two are really very similar. With the above tips you should feel comfortable taking code examples or documentation from PyQt5 and using it to write an app with PySide2. If you do stumble across any PyQt5 or PySide2 examples which you can't easily convert, drop a note in the comments and I'll update this page with advice.

Categories: FLOSS Project Planets
