Feeds

OpenSense Labs: CMS and Static Site Generators

Planet Drupal - Mon, 2021-01-18 11:46
Gurpreet Kaur, Mon, 01/18/2021 - 22:16

Websites have entered a new playing field now, at least compared to what they used to be a few decades ago. They are not one-dimensional anymore. They represent a multitude of different business agendas that are essential for growth and visibility.

Websites are not just limited to words; their world has widened progressively. From animations to social media integration, websites today can do it all. A major reason for these advancements is the software websites are built on. And that is going to be the highlight of this blog.

We will talk about Content Management Systems and Static Site Generators and shed light on their uses, their suitability and whether they can work in sync. So let’s begin.

Understanding a CMS

Source: Opensource.com

Commencing with the veterans: CMSs, or Content Management Systems, have been around for almost two decades (Drupal, one of the world leaders in web content management, was initially released on 15th January 2001). Despite being that old, the conventions they are built on and the features added over the years have kept CMSs as modern as can be.

From easing the workload off of bloggers’ shoulders to making newspaper editors happy; from catering to corporations and their digital marketing teams to helping numerous government departments go online and become transparent, a CMS has a wide audience.

If I had to define a CMS, I would simply call it the one-stop destination for all your website’s content needs. It manages, organises and publishes web content. What is more impressive is that content authors can create, edit, contribute and publish on their own; they do not need to depend on developers for that. A CMS offers a collaborative environment to build and present websites, allowing multiple users to work with it at once. Terms like Web Content Management and Digital Experience Platform are being thrown around today, and they are nothing but a modern variant of a CMS.

Getting into the meaning of a CMS a little further, you would hear about two components that are essentially its breakdown.

  • First would be the Content Management Application. This makes marketers, merchandisers and content creators self-reliant. They can do the contextual heavy-lifting on their own with a CMA, without writing any code, so nobody from IT needs to be involved. 
  • Next is the Content Delivery Application. This is basically the foundation for your content; the back-end aspect that places your content into templates to be further presented as one website. So, what your audiences see is provided by the CDA. 

Both of these together make a CMS whole for your use. 

Moving further, after the meaning, it is time to get a brief understanding of the various categories of a CMS. Based upon different categorisations, there are seven in all.

Based on the CMS’ role 

Traditional 

Most often, a traditional CMS is used on really simple marketing sites. I have used the term simple to describe it because it is just that, be it the layout or general functionality. You can create and edit your content using a WYSIWYG or HTML editor and it would display the content as per the CSS you have used.

With a traditional CMS, your entire site is encompassed by one software. The frontend and the backend are closely connected through it, hence, it is also referred to as a Coupled CMS. 

Decoupled 

Unlike its traditional counterpart, a decoupled CMS separates the frontend from the backend. This means they work independently of each other, and a change in the presentation layer does not necessarily affect the backend repository. Through decoupling, you get the features of more than one piece of software to base your site’s architecture on. 

Headless 

A headless CMS is more or less similar to a decoupled one. When you take up a headless CMS, your content always remains the same; however, each of your clients, be it an app, a device or a browser, is responsible for the presentation of the content. 

The code in this instance is not in the CMS; rather, an API is used for communication and data sharing between the two pieces of software. This way, developers can consume content through an API while content authors add content at the same time. If you are looking for the ‘one size fits all’ approach, this is where you will find your answer. 
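To make the idea concrete, here is a minimal, hedged sketch of a client consuming content from a headless CMS over REST; the endpoint URL and field names are hypothetical placeholders, not any particular CMS’s API:

# Sketch: fetching content from a headless CMS over its API.
# The endpoint and field names below are hypothetical.
import requests

response = requests.get("https://cms.example.com/api/articles/42")
response.raise_for_status()
article = response.json()

# The client -- an app, device or browser -- decides how to present it.
print(article["title"])
print(article["body"])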

Based on cost and ownership 

Open source 

Open source CMSs are the ones that are free of cost, at least initially. You do not need to pay any licence fee for installation; however, you may incur costs for add-on templates and similar features. 

Open Source CMSs are pretty popular today, the reason being their thriving communities of developers. Veterans redistribute and modify the code, which not only leads to perpetual software improvements but also helps newbies make progress. 

Proprietary 

A proprietary CMS is the exact opposite of an open source CMS, meaning it is commercial and mandates a licensing fee along with annual or monthly payments. In return for the payments, you get an out-of-the-box system to meet all your company’s requirements, continuous support and built-in functionality.

Based on the location 

On premises 

As the name suggests, this is a CMS that has a physical presence within the company’s premises. The high degree of control it offers to its users is the reason for its popularity. However, the humongous investment and the chances of human error dampen its potential. 

Cloud-based 

The name gives it away. Such a CMS is hosted on the cloud and delivered through the web. It is essentially the combination of web hosting, web software components and technical support. It provides fast implementation and deployment along with accessibility from across the globe on any device.

Why choose a CMS? 

Moving further, let’s now delve into the multitude of features packed inside a CMS, making it a suitable choice for you and your organisation’s virtual needs.

If I had to broadly categorise all the features of a CMS, I would end up with three major categories, which will sum up the true potential of this software. 

Content and its production needs

Producing content is the primary reason anyone takes on a CMS. That is true if you are a blogger, and it is equally true if you work for an educational institution and manage its online persona. When it comes to your site, it is the content that speaks for itself, and it needs to be pristine, for lack of a better word. CMSs help you achieve the level of control over your content production that you desire.

  • Starting with editing, the WYSIWYG editor could be deemed the heart and soul of a CMS. It lets you format text in paragraphs with quotes, superscripts and underlines, as well as insert images and videos. Your authors certainly would not have to work around code. 
  • Focusing on media, images are an important part of it. Every CMS has room for them: they can be uploaded directly from your computer or archives, either within the content or on the page itself. The same is true for PDFs, animations and videos. Videos can also be embedded through YouTube. 
  • Furthermore, CMSs also support multilingual and multi-channel sites. This takes pressure off the content authors and makes localised projects easy to run. 
Content and its presentation needs

Presentation is all about design, how it is done and how it would be showcased to the end user. There are a lot of design considerations that a CMS can help you with. 

  • A CMS would have you sorted with the right font and its size and the right colours and contrast. 
  • A CMS would have you sorted with the right responsiveness for your site. 
  • A CMS would have you sorted with the right URLs and URL logic. 
  • A CMS would have you sorted with the right templating tools to change your layout. 
  • A CMS would have you sorted with the right hierarchy for your site as well as provide the right prominence to the aspects that need it. 
  • Finally, a CMS would have your site sorted for all the right accessibility protocols to make it universally accessible. 
Content and its distribution needs

Once the content is produced, its distribution comes into play. This has a direct impact on your site's visibility. And CMSs ensure that you get the most out of it. 

  • The foremost part of distribution needs is metadata. This helps in tagging, categorising and describing your content. It includes everything from keyword insertion to identifying the distribution channels and placing access restrictions on the content. 
  • Secondly, CMSs also come equipped with automated marketing tools like analytics and A/B testing that help you understand user behaviour and capitalise on it. You would just have to define the parameters, and the automation would do the rest, be it publishing on your site or email marketing. 
Content and its management needs

Then comes the management of your content: a perpetual process that eases the work of editors and developers and streamlines the builds and updates of a website. 

  • For one, a CMS helps you plan and execute the publishing of your content. You can actually schedule what to post, when and where. You can also decide when something is visible to the audience and when it is not, such as an event post: once the event has happened, it no longer needs to be on your site, and a CMS helps with that. 
  • CMSs also help you define user roles and implement them. This ensures that sensitive information is only accessible to users with the right clearance. A manager and a director are going to have different roles, as do a premium member and a regular member of your site. 
  • Finally, a CMS helps you avoid instances where you delete something important and its recovery becomes impossible. Version control and revisions are features your CMS has to have if you want the power to bring back lost content. 

Apart from these main categories, CMSs are also renowned for their security, their scalability and their user friendliness. There is one more thing to add: a CMS can go above and beyond its capabilities by integrating with third parties and combining their features with its own; a headless CMS is an example of the same. Drupal is one of the most popular CMSs when it comes to going headless. Read our blog, Decoupled Drupal Architecture, to know more about it.

Understanding a new vogue: Static Site Generators 

Before understanding a static site generator, let’s shed some light on static sites, since these are what it builds. A static site is designed in a way that it remains static, fixed and constant during its design, its storage on a server and even upon its delivery to the user’s web browser. This is the attribute that differentiates it from a dynamic site: it never changes, and from the developer’s desktop to the end user’s, it remains as-is.

Coming to Static Site Generators, or SSGs: in the most basic of terms, they apply data and content to templates and create a view of a webpage. This view is then shown to the end users of a site. 

Now let’s get a little technical. An SSG only creates static sites, and it does so by creating a series of HTML pages that get deployed to an HTTP server. There are only files and folders, which means no database and no server-side rendering.

Developers using an SSG create a static site and deploy it to the server, so when a user requests a page, all the server has to do is find the matching file and return it to the user. 

If I talk about the difference between an SSG and a conventional web application stack or a CMS, I would say that it lies in the view of webpages. While an SSG keeps all the views a site could possibly need at hand well in advance, a traditional stack waits until a page has been requested and then generates the view.
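As an illustration of that pre-building idea, here is a toy generator in a few lines of Python; this is only a sketch of the concept, not how any real SSG is implemented:

# Toy static site generator: render every view ahead of any request.
from pathlib import Path
from string import Template

template = Template("<html><head><title>$title</title></head>"
                    "<body><h1>$title</h1><p>$body</p></body></html>")

pages = [
    {"slug": "about", "title": "About Us", "body": "Who we are."},
    {"slug": "contact", "title": "Contact", "body": "How to reach us."},
]

out_dir = Path("public")
out_dir.mkdir(exist_ok=True)
for page in pages:
    # Each HTML file is written before any visitor ever asks for it.
    (out_dir / f"{page['slug']}.html").write_text(template.substitute(page))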

Why did SSG come along?

Static Site Generators act differently from a CMS; they are more aligned with the needs of static sites. However, their emergence has a bigger story to tell. 

Yes, CMSs are quite popular today, yet there is a drawback to that: with their rising acclaim, some of them have become more prone to cyberattacks. The lead in security hacks goes to WordPress, with almost 90% of all hacks being experienced by it, as reported by ITPRO in 2020. But Drupal is considered the most secure CMS, as can be seen in Sucuri’s 2019 Website Threat Research Report.

Then there is the issue of performance. CMS sites operate mainly upon their servers, meaning the servers do the heavy lifting. Every request means the server takes charge of assembling the page from templates and content. It also means that for every user visiting your site, the PHP code has to start up, communicate with the database, create an HTTP response based on the recovered data and send it back, until finally an HTML file is returned to the user’s browser to display the content after interpretation. All of this may impede the performance of a site built on a CMS when compared to one powered by a static site generator. But it’s not as if CMSs give you low-performance websites; they do have provisions for delivering high-performance websites. It depends upon which CMS you go with. If web performance is your concern, Drupal can be your go-to option.

An SSG is a solution to these two conundrums, hence, it emerged with a bang. 

What can a Static Site Generator do for you?

Static Site Generators solve a lot of the issues that a CMS cannot; consequently, they can provide a lot for your site’s well-being. 

SSG means better security 

With an SSG, the need for a server doing active work is nearly non-existent, and this is the reason it provides more security. As we have already established, an SSG site is rendered well in advance, and its ready-to-serve infrastructure helps shut out malicious attempts against your site. This infrastructure essentially eliminates the need for servers to perform any logic or work. 

Apart from this, with an SSG you would not need to access databases, execute logical operations or alter resources for each individual view. As a result, you get a simpler hosting infrastructure as well as enhanced security, because of the lack of physical servers required for fulfilling requests. 

SSG means elevated performance 

A website’s performance is concerned with its speed and request time, and an SSG delivers in this area as well. Whenever a page is requested, a whole set of mechanisms is involved in getting it displayed for the visitor: the distance the request has to cover, the systems it has to interact with, along with the work those systems do. All of this takes up time, eating into your performance. 

Since an SSG site does not mandate such a lengthy iteration per visitor request, it reduces the travel time. This is done by delivering the work directly from a CDN, a distributed network of caches, which helps avoid system interaction. As a result, your performance soars. 

SSG means higher scalability 

When an SSG builds a site, it is often considered pre-built. I mean, that is what building all the views in advance of an actual request could be defined as, right? So, with a pre-built site, you have less work on your hands. For instance, a spike in traffic would not require you to add more computing power to handle each additional request, since you have already done all the work beforehand. You would also be able to cache everything in the CDN and serve it directly to the user. As a result, SSG sites offer scalability by default. 

When should you choose a Static site generator?

Now that you know how an SSG can benefit you, it is time to understand the scenarios that would mandate taking up a static site generator and all its advantages. 

When building a complex site is the goal 

If you want your website to deliver more complexity, in terms of the kind of features it provides, an SSG becomes a good choice. Many come equipped with ready-to-go client-side features. 

When creating and displaying content is the only goal

Here an SSG is a suitable choice because it would generate pages and URLs for you. And these pages would give you 100% control over what is being displayed, meaning the output would always be in your hands; content pages need that. 

When generating numerous pages is the goal 

A static site generator can create pages at great speed. It might not be a matter of seconds, but it is quite fast. So, when creating websites that would need a lot of pages, an SSG’s speed comes in quite handy. 

When templating needs are complex as well 

An SSG is a powerful piece of software; it has the ability to handle your site’s visual style and content along with its behaviour and functionality. This feature becomes fruitful when building a website with diverse templating needs. Vue- and React-based SSGs would definitely help you get the versatility you need on your website, along with the standard practice of code reuse. 

I would like to add just one more thing, and that is the fact that your team must be familiar with the static site generator that you are going to end up using. There are a lot on the market. If your team is familiar with .NET, use an SSG powered by it. On the other hand, if it finds JavaScript more familiar territory, go with an SSG based on that. Let your development team be a part of the discussion when the suitability of a static site generator is being decided. 

Are Static Site Generators always the right option? 

Coming from the suitability, you would think that an SSG is a great choice. Don’t get me wrong, it is. However, it isn’t universal software. There are instances when it may not be the right choice. So, let’s delve into these scenarios.

Not when you do not have development experience 

Static Site Generators become a tad bit difficult for amateur developers. Your developers ought to have experience to reap all the benefits. The building process is considered more difficult than that of a CMS, and even finding plugins for pre-built pages can become a chore. Furthermore, there isn’t a huge community out there to help you with development if you are a beginner. 

Not when you need a site built urgently 

You have to understand that urgency and SSGs are not the best of friends. From learning the build process to developing the template code, everything needs time. 

  • There are development scripts to be made; 
  • There is the complication of customising themes; 
  • There is the additional process of creating and setting up Markdown files. 

All of these add up to more time requirements for the development process. Think of it like this: you are going to be doing all the grunt work beforehand, and that necessitates more time. 

Not when you need server-side functionality 

When partnering with an SSG, you would be parting with some, if not many, interactive functions on your site. For instance, user logins would be difficult to create, as would web forms and discussion forums. However, there are certain options like lunr.js search and Disqus commenting to help you with your site’s interactivity. I would say that these options are pretty limited.

Not when your site has to have hundreds of pages

You might think that I am contradicting myself; however, I am not. Static site generators can create a website with a thousand pages, yet the process can become tedious and awkward. For a thousand or so pages, content editing and publishing becomes cumbersome. Along with this, real-time updates could get delayed, and, like I mentioned before, build times rise consequently.

Not when website consistency is a priority 

Lastly, SSG sites offer a lot of flexibility. That should be a good thing; however, it does have a side effect on your site’s consistency. This is because anything found in the Markdown files can be rendered as page content. Consequently, users get the chance to include scripts, widgets and other undesired items. 

Can a CMS and an SSG work together? 

Yes, a CMS and an SSG can work together, and pretty efficiently at that. However, that partnership is only possible with a headless CMS. This is because a headless CMS gives room for other frontend technology to come into play, and in this case that technology is found in static site generators. 

A headless CMS is pretty versatile; choosing a static site to act as its head could help you get most of the benefits that both the static site and the headless CMS come with. This partnership indeed has a lot to offer. Let’s find out what that is. 

Proffers easy deployment via APIs

SSGs are quite straightforward to use, especially with an API, which is the connecting force between the SSG and the CMS. Pulling data from an API for generating and deploying a static PWA to any web host or Content Delivery Network is a breeze. 

Proffers ease to the marketing team 

When you work only with an SSG, you would face difficulties, as it puts a lot of restrictions on the marketing team. This isn’t a problem when you partner with a CMS. 

Proffers easy editing and workflow 

Conventionally, SSGs do not have a WYSIWYG editor or workflow capabilities for tracking and collaborating on content. You might think that these are only needed for dynamic sites, but that isn’t the case; static sites need them too. Since CMSs have those capabilities, they become ideal for preparing content before actually running the SSG; the perfect contextual partnership. 

Proffers easy updates to sites 

With a CMS, you can easily change and update the content. With an SSG, the same changes can be pulled through the APIs, and a new static site can be generated every time they occur. All the developers have to do is set up a tool for content pulling and generation, as sketched below. As a result, your site would always be up-to-date, and requests would not need to be processed individually whenever users visit your site. 
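A hedged sketch of such a rebuild tool is shown below; the CMS endpoint and field names are invented for illustration:

# Sketch: pull fresh content from the CMS API and regenerate static pages.
import requests
from pathlib import Path

posts = requests.get("https://cms.example.com/api/posts").json()
out_dir = Path("public")
out_dir.mkdir(exist_ok=True)
for post in posts:
    html = f"<html><body><h1>{post['title']}</h1><p>{post['body']}</p></body></html>"
    (out_dir / f"{post['id']}.html").write_text(html)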

To check out some examples of how CMS and SSG can come together, read how Drupal and Gatsby can be leveraged for developing blazing fast websites. You can also go through the benefits of going ultra-minimalistic with the combination of Metalsmith and Drupal.

Conclusion 

In the end, all I want to say is that both a CMS and an SSG have their own sets of features and capabilities that make them excellent at what they do, making their users more than happy. However, when it comes to getting the best out of both of them, only one kind of CMS, the headless kind, can help you reap the benefits of this dynamic. It is up to you to decide whether you want to use them together or individually.  
 

Categories: FLOSS Project Planets

Python Pool: CV2 Normalize() in Python Explained With Examples

Planet Python - Mon, 2021-01-18 09:46

Hello geeks, and welcome! In this article, we will cover cv2 normalize(). Along with that, we will also look at its syntax for an overall better understanding. Then we will see the application of all the theory through a couple of examples. cv2 is a cross-platform library designed to solve all computer vision-related problems. We will look at its application and workings later in this article. But first, let us look at the definition of the function.

In general, normalization means removing redundancy from data and eliminating unwanted characteristics. So, image normalization can be understood as changing an image’s range of pixel intensity. Often it is linked with increasing contrast, which helps in better image segmentation.

Syntax

cv.normalize(img, norm_img)

This is the general syntax of our function. Here the term “img” represents the image file to be normalized, and “norm_img” represents the destination array that receives the normalized result. As we move ahead in this article, we will develop a better understanding of this function.

How Does Cv2 Normalize Work?

We have discussed the definition and general syntax of Cv2 Normalize. In this section, we will try to get a brief idea about how it works. With the help of this, we can remove noise from an image. We bring the image into a range of intensity values that makes it less stressful and more natural to our senses. Primarily, it does the job of making the subject image a bit clearer. It does so with the help of several parameters that we will discuss in detail in the next sections.
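Under NORM_MINMAX, this boils down to simple min-max scaling. Here is a tiny NumPy sketch of the same idea, with made-up sample values:

# Min-max scaling by hand -- the idea behind cv2.NORM_MINMAX.
import numpy as np

pixels = np.array([50, 100, 150, 200], dtype=np.float32)
alpha, beta = 0, 255  # target intensity range
scaled = (pixels - pixels.min()) / (pixels.max() - pixels.min()) * (beta - alpha) + alpha
print(scaled)  # [  0.  85. 170. 255.]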

Application of Cv2 Normalize

In this section, we will see what difference the cv2 Normalize code makes. To achieve this, we will first use the Cv2 imshow to display an image, after which we will use the normalize function and compare the 2 images to spot the difference.

import cv2

img = cv2.imread('3.jpeg', 1)
cv2.imshow("sample", img)
cv2.waitKey(5000)

Output:

Here we have successfully used the imshow() function to display our image. As I have already covered the imshow() function, I will not go in-depth about it here. Our picture is not very clear, and its overall appearance can be improved considerably. Now let us use our function and see the difference.

import cv2 as cv
import numpy as ppool

img = cv.imread("3.jpeg")
norm = ppool.zeros((800, 800))
final = cv.normalize(img, norm, 0, 255, cv.NORM_MINMAX)
cv.imshow('Normalized Image', final)
cv.imwrite('city_normalized.jpg', final)
cv.waitKey(5000)

Output:

See what our function does; the change is quite evident. When you compare it with the previous one, you can notice that it is far clearer and has better contrast.

Now let us try to decode and understand the code that helped us achieve it. Here, at first, we have imported cv2. After that, we have imported the NumPy module. Then we have used the imread() function to read our image. After that, we have used the NumPy function zeros, which gives a new 800*800 array to hold the result. Then we have used the cv normalize syntax: first we have our image, second the destination array, then 0 and 255, the lower and upper limits of the output range, meaning values beyond them will not be stored. Then, at last, we have used cv.NORM_MINMAX; in this case, the lower value is alpha and the higher value is beta, so the function works between them.

How to get the original image back?

Using the normalize function creates a separate new file for the subject image. Our original image remains unchanged, and hence to view it, we can use the imshow() function again.

Conclusion

In this article, we covered the Cv2 normalize(). We looked at its syntax and example. We tried to understand what difference this function can make to your image through example. As in our case, by applying this, we were able to achieve a much clearer picture. In the end, we can conclude that cv2 normalize() helps us by changing the pixel intensity and increasing the overall contrast.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section.

The post CV2 Normalize() in Python Explained With Examples appeared first on Python Pool.

Categories: FLOSS Project Planets

Python Pool: What is Python Syslog? Explained with Different methods

Planet Python - Mon, 2021-01-18 09:45

Hello geeks, and welcome! In today’s article, we will cover Python Syslog. Along with that, for proper understanding, we will look at different methods and also some sample code. Before moving ahead, let us first understand Syslog through its definition. The syslog module provides an interface to the Unix Syslog library. Here, Unix is an OS developed for multiuser and multitasking computers. A handler named SysLogHandler is also available in logging.handlers, a pure Python library that can speak to a Syslog server.

Different Methods for Python Syslog

1. SYSLOG.SYSLOG(MESSAGE, PRIORITY)

This function sends a string message to the system logger. Here, the logger keeps track of events when the software runs. The priority argument is optional, defaults to LOG_INFO, and determines the priority.

2. SYSLOG.OPENLOG

This is used to set logging options for subsequent syslog() calls. It takes an ident argument of string type, which is prepended to every message.

3. SYSLOG.CLOSELOG

This method is used for the purpose of resetting the syslog module and closing the log.

4. SYSLOG.SETLOGMASK

This method is used to set the priority mask to maskpri, and it returns the previous mask value. Logging attempts for priorities not included in maskpri are ignored.
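For instance, here is a short sketch of masking out everything less severe than LOG_ERR, using only standard library calls:

# Sketch: ignore any message less severe than LOG_ERR.
import syslog

previous = syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_ERR))
syslog.syslog(syslog.LOG_ERR, "this is logged")
syslog.syslog(syslog.LOG_INFO, "this is ignored by the mask")
syslog.setlogmask(previous)  # restore the old mask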

Sample Code Covering Various Syslog Methods

In this section we will look at some sample codes in which we have will use the various methods discussed in the above section.

import syslog
import sys

syslog.openlog(sys.argv[0])
syslog.syslog(syslog.LOG_NOTICE, "notice-1")
syslog.syslog(syslog.LOG_NOTICE, "notice-2")
syslog.closelog()

Above, we can see the sample code. Here, at first, we have imported the syslog module. Along with that, we have imported sys, which provides access to system-specific parameters. Next, we have used openlog with sys.argv[0]; sys.argv is a list that contains the command-line arguments passed to the script, and its first element is the script name. Next, we have two syslog method calls, and we close with a closelog command.

SysLogHandler

As discussed at the start of the article, this is a handler available in logging.handlers. It supports sending messages to a remote or local Unix Syslog. Let us look at an example for a better understanding.

import logging
import logging.handlers
import sys

logger = logging.getLogger()
logger.setLevel(logging.INFO)
syslog = logging.handlers.SysLogHandler(address=("localhost", 8000))
logger.addHandler(syslog)
print(logger.handlers)

Output

Here, at first, we have imported the logging module, an in-built module of Python. Then we have imported logging.handlers, which sends the log records to the appropriate destination. Next, we have imported sys, as discussed above. Then, in the next step, we have created a logger object with getLogger(). Then we have used the setLevel command; all messages below this level are ignored. Then we have used our SysLogHandler command. Next, we have used addHandler to add that specific handler to our logger, “syslog” in our case. Finally, we have just used a print statement to print that handler.

Difference Between Syslog and Logging

This section will discuss the basic difference between Syslog and logging in Python. We have discussed Syslog in detail here, but before comparing the 2, let us look at the definition of logging. It is an in-built module of Python that helps the programmer keep track of events that are taking place. The basic difference between the 2 is that Syslog is more powerful, whereas logging is easy and used for simple purposes. Another advantage of Syslog over logging is that it can send log lines to a different computer to have them logged there.

Conclusion

In this article, we covered Python Syslog. We looked at its definition, its use, and the different methods associated with it. In the end, we can conclude that it provides us with an interface to the Unix Syslog library.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section. Done reading this why not look at the FizzBuzz challenge next.

The post What is Python Syslog? Explained with Different methods appeared first on Python Pool.

Categories: FLOSS Project Planets

Python Pool: How to Solve “unhashable type: list” Error in Python

Planet Python - Mon, 2021-01-18 09:44

Hello geeks, and welcome! In this article, we will be covering “unhashable type: list.” It is a type of error that we come across when writing code in Python. Our main objective is to look at this error; along with that, we will also try to troubleshoot and get rid of it. We will achieve all this with a couple of examples. But first, let us try to get a brief overview of why this error occurs.

Python dictionaries only accept hashable data types as keys. Here, hashable data types mean those values whose hash remains the same during their lifetime. But when we use the list data type, which is non-hashable, as a key, we get this kind of error.
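As a quick illustration of the difference (a sketch, separate from the examples that follow), hash() accepts a tuple but rejects a list:

# hash() works on a tuple but raises the familiar error on a list.
print(hash((2, 10)))  # tuples are hashable

try:
    hash([2, 10])
except TypeError as err:
    print(err)  # unhashable type: 'list'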

The error: “unhashable type: list”

In this section, we will look at the reason due to which this error occurs. We will take into account everything discussed so far. Let us see this through an example:

numb = {1: 'one', [2, 10]: 'two and ten', 11: 'eleven'}
print(numb)

Output:

TypeError: unhashable type: 'list'

Here above, we have considered a straightforward example. We have used a dictionary to create a number dictionary and then tried to print it. But instead of the output, we get an error because we have used a list as a key. In the next section, we will see how to eliminate the error.

But before that, let us also look at another example.

country = [{"name": "India", [28, 7]: "states and area",
            "name": "France", [27, 48]: "states and area"}]
print(country)

Output:

TypeError: unhashable type: 'list'

Here in the above example, we can see that we come across the same problem. In this dictionary, we have taken the number of states and their ranking worldwide as the data. Now let’s quickly jump to the next section and eliminate these errors.

Troubleshooting: “unhashable type: list”

In this section, we will try to get rid of the errors. Let us start with the first example. To rectify it, all we have to do is use a tuple.

numb = {1: 'one', tuple([2, 10]): 'two and ten', 11: 'eleven'}
print(numb)

Output:

{1: 'one', (2, 10): 'two and ten', 11: 'eleven'}

With just a slight code change, we can rectify this. Here we have used a tuple, which is a hashable data type. Similarly, we can rectify the error in the second example.

country = [{"name": "India", tuple([28, 7]): "states and area",
            "name": "France", tuple([27, 48]): "states and area"}]
print(country)

Output:

[{'name': 'France', (28, 7): 'states and area', (27, 48): 'states and area'}]

Again, with the help of a tuple, we are able to rectify it. Note that because the "name" key appears twice in the dictionary, the later value, "France", overwrites "India" in the output. It is a simple error and can be rectified easily.

Difference between hashable and unhashable types

In this section, we see the basic difference between the 2 types. We also classify the various data types that we use while coding in Python under these 2 types.

  • Hashable: for these data types, the value remains constant throughout. Some data types that fall under this category are int, float, tuple, bool, string and bytes.
  • Unhashable: for these data types, the value is not constant and can change. Some data types that fall under this category are list, set, dict and bytearray.

Conclusion

In this article, we covered the unhashable type: list error. We looked at why it occurs and the methods by which we can rectify it. To achieve this, we looked at a couple of examples. In the end, we can conclude that this error arises when we use an unhashable data type as a dictionary key.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section. Done reading this why not read about the cPickle module.

The post How to Solve “unhashable type: list” Error in Python appeared first on Python Pool.

Categories: FLOSS Project Planets

Real Python: Make Your First Python Game: Rock, Paper, Scissors!

Planet Python - Mon, 2021-01-18 09:00

Game programming is a great way to learn how to program. You use many tools that you’ll see in the real world, plus you get to play a game to test your results! An ideal game to start your Python game programming journey is rock paper scissors.

In this tutorial, you’ll learn how to:

  • Code your own rock paper scissors game
  • Take in user input with input()
  • Play several games in a row using a while loop
  • Clean up your code with Enum and functions
  • Define more complex rules with a dictionary

Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you’ll need to take your Python skills to the next level.

What Is Rock Paper Scissors?

You may have played rock paper scissors before. Maybe you’ve used it to decide who pays for dinner or who gets first choice of players for a team.

If you’re unfamiliar, rock paper scissors is a hand game for two or more players. Participants say “rock, paper, scissors” and then simultaneously form their hands into the shape of a rock (a fist), a piece of paper (palm facing downward), or a pair of scissors (two fingers extended). The rules are straightforward:

  • Rock smashes scissors.
  • Paper covers rock.
  • Scissors cut paper.

Now that you have the rules down, you can start thinking about how they might translate to Python code.

Play a Single Game of Rock Paper Scissors in Python

Using the description and rules above, you can make a game of rock paper scissors. Before you dive in, you’re going to need to import the module you’ll use to simulate the computer’s choices:

import random

Awesome! Now you’re able to use the different tools inside random to randomize the computer’s actions in the game. Now what? Since your users will also need to be able to choose their actions, the first logical thing you need is a way to take in user input.

Take User Input

Taking input from a user is pretty straightforward in Python. The goal here is to ask the user what they would like to choose as an action and then assign that choice to a variable:

user_action = input("Enter a choice (rock, paper, scissors): ")

This will prompt the user to enter a selection and save it to a variable for later use. Now that the user has selected an action, the computer needs to decide what to do.

Make the Computer Choose

A competitive game of rock paper scissors involves strategy. Rather than trying to develop a model for that, though, you can save yourself some time by having the computer select a random action. Python’s random module makes it straightforward to have the computer choose a pseudorandom value.

You can use random.choice() to have the computer randomly select between the actions:

possible_actions = ["rock", "paper", "scissors"]
computer_action = random.choice(possible_actions)

This allows a random element to be selected from the list. You can also print the choices that the user and the computer made:

print(f"\nYou chose {user_action}, computer chose {computer_action}.\n")

Printing the user and computer actions can be helpful to the user, and it can also help you debug later on in case something isn’t quite right with the outcome.

Determine a Winner

Read the full article at https://realpython.com/python-rock-paper-scissors/ »
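Based purely on the rules listed earlier, a minimal winner check could look like the sketch below; the full tutorial at the link above develops its own version:

def determine_winner(user_action, computer_action):
    """Decide one round using the classic rules."""
    if user_action == computer_action:
        return f"Both players selected {user_action}. It's a tie!"
    # Map each action to the action it defeats.
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    if beats[user_action] == computer_action:
        return f"You win! {user_action.capitalize()} beats {computer_action}."
    return f"You lose. {computer_action.capitalize()} beats {user_action}."

print(determine_winner(user_action, computer_action))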


Categories: FLOSS Project Planets

Evgeni Golov: building a simple KVM switch for 30€

Planet Debian - Mon, 2021-01-18 08:25

Prompted by tweets from Lesley and Dave, I thought about KVM switches again and came up with a rather cheap solution to my individual situation (YMMV, as usual).

As I've written last year, my desk has one monitor, keyboard and mouse and two computers. Since writing that post I got a new (bigger) monitor, but also a USB switch again (a DIGITUS USB 3.0 Sharing Switch) - this time one that doesn't freak out my dock \o/

However, having to switch the used computer in two places (USB and monitor) is rather inconvenient, and getting a KVM switch that can do 4K@60Hz was out of the question.

Luckily, hackers gonna hack, everything, and not only receipt printers (😉). There is a tool called ddcutil that can talk to your monitor and change various settings. And udev can execute commands when (USB) devices connect… You see where this is going?

After installing the package (available both in Debian and Fedora), we can inspect our system with ddcutil detect. You might have to load the i2c_dev module (thanks Philip!) before this works -- it seems to be loaded automatically on my Fedora, but you never know 😅.

$ sudo ddcutil detect
Invalid display
   I2C bus:  /dev/i2c-4
   EDID synopsis:
      Mfg id:            BOE
      Model:
      Serial number:
      Manufacture year:  2017
      EDID version:      1.4
   DDC communication failed
   This is an eDP laptop display. Laptop displays do not support DDC/CI.

Invalid display
   I2C bus:  /dev/i2c-5
   EDID synopsis:
      Mfg id:            AOC
      Model:             U2790B
      Serial number:
      Manufacture year:  2020
      EDID version:      1.4
   DDC communication failed

Display 1
   I2C bus:  /dev/i2c-7
   EDID synopsis:
      Mfg id:            AOC
      Model:             U2790B
      Serial number:
      Manufacture year:  2020
      EDID version:      1.4
   VCP version:          2.2

The first detected display is the built-in one in my laptop, and those don't support DDC anyways. The second one is a ghost (see ddcutil#160) which we can ignore. But the third one is the one we can (and will control). As this is the only valid display ddcutil found, we don't need to specify which display to talk to in the following commands. Otherwise we'd have to add something like --display 1 to them.

A ddcutil capabilities will show us what the monitor is capable of (or what it thinks, I've heard some give rather buggy output here) -- we're mostly interested in the "Input Source" feature (Virtual Control Panel (VCP) code 0x60):

$ sudo ddcutil capabilities
…
   Feature: 60 (Input Source)
      Values:
         0f: DisplayPort-1
         11: HDMI-1
         12: HDMI-2
…

Seems mine supports it, and I should be able to switch the inputs by jumping between 0x0f, 0x11 and 0x12. You can see other values defined by the spec in ddcutil vcpinfo 60 --verbose, some monitors are using wrong values for their inputs 🙄. Let's see if ddcutil getvcp agrees that I'm using DisplayPort now:

$ sudo ddcutil getvcp 0x60
VCP code 0x60 (Input Source): DisplayPort-1 (sl=0x0f)

And try switching to HDMI-1 using ddcutil setvcp:

$ sudo ddcutil setvcp 0x60 0x11

Cool, cool. So now we just need a way to trigger input source switching based on some event…

There are three devices connected to my USB switch: my keyboard, my mouse and my Yubikey. I do use the mouse and the Yubikey while the laptop is not docked too, so these are not good indicators that the switch has been turned to the laptop. But the keyboard is!

Let's see what vendor and product IDs it has, so we can write an udev rule for it:

$ lsusb
…
Bus 005 Device 006: ID 17ef:6047 Lenovo ThinkPad Compact Keyboard with TrackPoint
…

Okay, so let's call ddcutil setvcp 0x60 0x0f when the USB device 0x17ef:0x6047 is added to the system:

$ sudo vim /etc/udev/rules.d/99-ddcutil.rules
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="17ef", ATTR{idProduct}=="6047", RUN+="/usr/bin/ddcutil setvcp 0x60 0x0f"
$ sudo udevadm control --reload

And done! Whenever I connect my keyboard now, it will force my screen to use DisplayPort-1.

On my workstation, I deployed the same rule, but with ddcutil setvcp 0x60 0x11 to switch to HDMI-1 and my cheap not-really-KVM-but-in-the-end-KVM-USB-switch is done, for the price of one USB switch (~30€).

Note: if you want to use ddcutil with a Lenovo Thunderbolt 3 Dock (or any other dock using Displayport Multi-Stream Transport (MST)), you'll need kernel 5.10 or newer, which fixes a bug that prevents ddcutil from talking to the monitor using I²C.

Categories: FLOSS Project Planets

Chris Moffitt: Case Study: Automating Excel File Creation and Distribution with Pandas and Outlook

Planet Python - Mon, 2021-01-18 08:25
Introduction

I enjoy hearing from readers that have used concepts from this blog to solve their own problems. It always amazes me when I see examples where only a few lines of python code can solve a real business problem and save organizations a lot of time and money. I am also impressed when people figure out how to do this with no formal training - just with some hard work and willingness to persevere through the learning curve.

This example comes from Mark Doll. I’ll turn it over to him to give his background:

I have been learning/using Python for about 3 years to help automate business processes and reporting. I’ve never had any formal training in Python, but found it to be a reliable tool that has helped me in my work.

Read on for more details on how Mark used Python to automate a very manual process of collecting and sorting Excel files to email to 100’s of users.

The Problem

Here’s Mark’s overview of the problem:

A business need arose to send out emails with Excel attachments to a list of ~500 users and presented us with a large task to complete manually. Making this task harder was the fact that we had to split data up by user from a master Excel file to create their own specific file, then email that file out to the correct user.

Imagine the time it would take to manually filter, cut and paste the data into a file, then save it and email it out - 500 times! Using this Python approach we were able to automate the entire process and save valuable time.

I have seen this type of problem multiple times in my experience. If you don’t have experience with a programming language, then it can seem daunting. With Python, it’s very feasible to automate this tedious process. Here’s a graphical view of what Mark was able to do:

Solving the Problem

The first step is getting the imports in place:

import datetime
import os
import shutil
from pathlib import Path

import pandas as pd
import win32com.client as win32

Now we will set up some strings with the current date and our directory structure:

## Set Date Formats
today_string = datetime.datetime.today().strftime('%m%d%Y_%I%p')
today_string2 = datetime.datetime.today().strftime('%b %d, %Y')

## Set Folder Targets for Attachments and Archiving
attachment_path = Path.cwd() / 'data' / 'attachments'
archive_dir = Path.cwd() / 'archive'
src_file = Path.cwd() / 'data' / 'Example4.xlsx'

Let’s take a look at the data file we need to process:

df = pd.read_excel(src_file)
df.head()

The next step is to group all of the CUSTOMER_ID transactions together. We start by doing a groupby on CUSTOMER_ID.

customer_group = df.groupby('CUSTOMER_ID')

It might not be apparent to you what customer_group is in this case. A loop shows how we can process this grouped object:

for ID, group_df in customer_group:
    print(ID)

A1000
A1001
A1002
A1005

Here’s the last group_df that shows all of the transactions for customer A1005:

We have everything we need to create an Excel file for each customer and store in a directory for future use:

## Write each ID, Group to Individual Excel files and use ID to name each file with Today's Date
attachments = []
for ID, group_df in customer_group:
    attachment = attachment_path / f'{ID}_{today_string}.xlsx'
    group_df.to_excel(attachment, index=False)
    attachments.append((ID, str(attachment)))

The attachments list contains the customer ID and the full path to the file:

[('A1000', 'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1000_01162021_12PM.xlsx'),
 ('A1001', 'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1001_01162021_12PM.xlsx'),
 ('A1002', 'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1002_01162021_12PM.xlsx'),
 ('A1005', 'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1005_01162021_12PM.xlsx')]

To make the processing easier, we convert the list to a DataFrame:

df2 = pd.DataFrame(attachments, columns=['CUSTOMER_ID', 'FILE'])

The final data prep stage is to generate a list of files with their email addresses by merging the DataFrames together:

email_merge = pd.merge(df, df2, how='left')
combined = email_merge[['CUSTOMER_ID', 'EMAIL', 'FILE']].drop_duplicates()

Which gives this simple DataFrame:

We’ve gathered the list of customers, their emails and the attachments. Now we need to send an email with Outlook. Refer to this article for additional explanation of this code:

# Email Individual Reports to Respective Recipients
class EmailsSender:
    def __init__(self):
        self.outlook = win32.Dispatch('outlook.application')

    def send_email(self, to_email_address, attachment_path):
        mail = self.outlook.CreateItem(0)
        mail.To = to_email_address
        mail.Subject = today_string2 + ' Report'
        mail.Body = """Please find today's report attached."""
        mail.Attachments.Add(Source=attachment_path)
        # Use this to show the email
        #mail.Display(True)
        # Uncomment to send
        #mail.Send()

We can use this simple class to generate the emails and attach the Excel file.

email_sender = EmailsSender()
for index, row in combined.iterrows():
    email_sender.send_email(row['EMAIL'], row['FILE'])

The last step is to move the files to our archive directory:

# Move the files to the archive location
for f in attachments:
    shutil.move(f[1], archive_dir)

Summary

This example does a nice job of automating a highly manual process where someone likely did a lot of copying and pasting and manual file manipulation. I hope the solution that Mark developed can help you figure out how to automate some of the more painful parts of your job.

I encourage you to use this example to identify similar challenges in your day to day work. Maybe you don’t have to work with 100’s of files but you might have a manual process you run once a week. Even if that process only takes 1 hour, use that as a jumping off point to figure out how to use Python to make it easier. There is no better way to learn Python than to apply it to one of your own problems.

Thanks again to Mark for taking the time to walk us through this content example!

Categories: FLOSS Project Planets

Andre Roberge: Don't you want to win a free book?

Planet Python - Mon, 2021-01-18 08:02

 At the end of Day 2 of the contest, still only one entry. If this keeps up, by next Monday there will not be a draw for a prize, and we will have a winner by default.


The submission was based on the use of __slots__. In playing around with similar cases, I found an AttributeError message that I had not seen before. Here's a code sample.

class F:
    __slots__ = ["a"]
    b = 1

f = F()
f.b = 2
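For reference, running this snippet on a stock CPython produces an error along these lines (the exact wording may vary by version):

AttributeError: 'F' object attribute 'b' is read-only

That happens because b is a class attribute and, with __slots__ defined, the instance has no __dict__ in which to shadow it.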

What happens if I execute this code using Friendly-traceback? Normally, there would be an explanation provided below the list of variables. Here we see nothing.



Let's inspect by using the friendly console.





I'll have to take care of this later today. Perhaps you know of other error messages specific to the use of __slots__. If so, and if you are quick enough, you could enter the contest. ;-)

Categories: FLOSS Project Planets

Zato Blog: Why Zato and Python make sense for complex API integrations

Planet Python - Mon, 2021-01-18 07:35

This article is an excerpt from the broader set of changes to our documentation in preparation for Zato.

High-level overview

Zato is a highly scalable, Python-based integration platform for APIs, SOA and microservices. It is used to connect distributed systems or data sources and to build API-first, backend applications. The platform is designed and built specifically with Python users in mind.

Zato is used for enterprise, business integrations, data science, IoT and other scenarios that require integrations of multiple systems.

Real-world, production Zato environments include:

  • A platform for processing payments from consumer devices

  • A system for a telecommunication operator integrating CRM, ERP, Billing and other systems as well as applications of the operator's external partners

  • A data science system for processing of information related to securities transactions (FIX)

  • A platform for public administration systems, helping achieve healthcare data interoperability through the integration of independent data sources, databases and health information exchanges (HIE)

  • A global IoT platform integrating medical devices

  • A platform to process events produced by early warning systems

  • Backend e-commerce systems managing multiple suppliers, marketplaces and process flows

  • B2B platforms to accept and process multi-channel orders in cooperation with backend ERP and CRM systems

  • Platforms integrating real-estate applications, collecting data from independent data sources to present unified APIs to internal and external applications

  • A system for the management of hardware resources of an enterprise cloud provider

  • Online auction sites

  • E-learning platforms

Zato offers connectors to all the popular technologies, such as REST, SOAP, AMQP, IBM MQ, SQL, Odoo, SAP, HL7, Redis, MongoDB, WebSockets, S3 and many more.

Running on premises, in the cloud, or under Docker, Kubernetes and other container technologies, Zato services are optimised for high performance - it is easily possible to run hundreds and thousands of services on typical server instances as offered by Amazon, Google Cloud, Azure or other cloud providers.

Zato servers offer high availability and no-downtime deployment. Servers form clusters that are used to scale systems both horizontally and vertically.

The software is 100% Open Source, with commercial and community support available.

A platform and language for interesting, reusable and atomic services

Zato promotes the design of, and helps you build, solutions composed of services which are interesting, reusable and atomic (IRA):

  • I for Interesting - each service should make its clients want to use it more and more. People should immediately see the value of using the service in their processes. An interesting service is one that strikes everyone as immediately useful in wider contexts, preferably with few or no conditions, prerequisites and obligations. An interesting service is aesthetically pleasing, both in terms of its technical usage as well as in its potential applicability in fields broader than originally envisaged. If people check the service and say "I know, we will definitely use it" or "Why don't we use it" you know that the service is interesting. If they say "Oh no, not this one again" or "No, thanks, but no" then it is the opposite.
  • R for Reusable - services can be used in different, independent business processes
  • A for Atomic - each service fulfils a single, atomic business need

Each service is deployed independently and, as a whole, they constitute an implementation of business processes taking place in your company or organisation.

With Zato, developers use Python to focus exclusively on the business logic, and the platform takes care of scalability, availability, communication protocols, messaging, security and routing. This lets developers concentrate on what is the very core of systems integrations: making sure their services are IRA.
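As a minimal, hedged sketch of what such a service looks like, the class below uses Zato's Service base class; the service name and payload fields are made up for illustration:

# Sketch of a Zato service: the platform routes requests to handle(),
# whatever protocol the caller used. Names and fields are illustrative.
from zato.server.service import Service

class GetUserDetails(Service):
    def handle(self):
        user_id = self.request.payload.get('user_id')  # hypothetical input field
        # Only business logic lives here; protocols, security and scaling
        # are taken care of by the platform.
        self.response.payload = {'user_id': user_id, 'status': 'active'}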

Python is the perfect choice for API integrations, SOA and microservices, because it hits the sweet spot under several key headings:

  • It is a very high level language, with syntax close to how grammar of various spoken languages works, which makes it easy to translate business requirements into implementation
  • Yet, it is a solid, mainstream and full-featured, real programming language rather than a domain-specific one which means that it offers to developers a great degree of flexibility and choice in expressing their needs
  • Many Python developers have a strong web programming / open source background, which means that it takes little effort to take a step further, towards API integrations and backend servers. In turn, this means that it is easy to find good people for API projects.
  • Many Python developers have knowledge of multiple programming languages - this is very useful in the context of integration projects where one is typically faced with dozens of technologies, vendors or integration methods and techniques
  • Lower maintenance costs - thanks to the language's unique design, Python programmers tend to produce code that is easy to read and understand. From the perspective of multi-year maintenance, reading and analysing code, rather than writing it, is what most programmers do most of the time so it makes sense to use a language which makes it easy to carry out the most common tasks.

In short, Python can be construed as executable pseudo-code with many of its users already having roots in modern server-side programming so Zato, both from a technical and strategic perspective, is a natural choice for complex and sophisticated API solutions as a platform built in the language and designed for Python developers from day one.

More than services

Systems integrations commonly require two more features that Zato offers as well:

  • File transfer - allows you to move batch data between locations and to distribute it among systems and APIs

  • Single Sign-On (SSO) - a convenient REST interface lets you easily provide authentication and authorisation to users across multiple systems

Next steps
  • Start the tutorial to learn more technical details about Zato, including its architecture, installation and usage. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single, consistent API to its callers.

  • Visit the support page if you would like to discuss anything about Zato with its creators

Categories: FLOSS Project Planets

Python Pool: 6 Ways to Plot a Circle in Matplotlib

Planet Python - Mon, 2021-01-18 07:28

Hello coders!! In this article, we will learn how to make a circle using matplotlib in Python. A circle is a figure of round shape with no corners. There are various ways in which one can plot a circle in matplotlib. Let us discuss them in detail.

Method 1: matplotlib.patches.Circle():
  • SYNTAX:
    • class matplotlib.patches.Circle(xy, radius=r, **kwargs)
  • PARAMETERS:
    • xy: (x,y) center of the circle
    • r: radius of the circle
  • RESULT: a circle of radius r with center at (x,y)
import matplotlib.pyplot as plt

figure, axes = plt.subplots()
cc = plt.Circle((0.5, 0.5), 0.4)
axes.set_aspect(1)
axes.add_artist(cc)
plt.title('Colored Circle')
plt.show()

Output & Explanation:

Here, we have used the Circle() class of matplotlib (via plt.Circle) to draw the circle. We adjusted the ratio of the y unit to the x unit using the set_aspect() method, set the radius of the circle to 0.4, and made the coordinate (0.5, 0.5) the center of the circle.

Method 2: Using the parametric equation of the circle:

The parametric equation of a circle is:

  • x = r cos θ
  • y = r sin θ

r: radius of the circle; θ is the angle parameter, swept from 0 to 2π to trace the full circle

This equation can be used to draw the circle using matplotlib.

import numpy as np
import matplotlib.pyplot as plt

angle = np.linspace(0, 2 * np.pi, 150)
radius = 0.4
x = radius * np.cos(angle)
y = radius * np.sin(angle)

figure, axes = plt.subplots(1)
axes.plot(x, y)
axes.set_aspect(1)
plt.title('Parametric Equation Circle')
plt.show()

Output & Explanation:

In this example, we used the parametric equation of the circle to plot the figure using matplotlib. For this example, we took the radius of the circle as 0.4 and set the aspect ratio as 1.

Method 3: Scatter Plot to plot a circle:

A scatter plot is a graphical representation that uses dots to represent the values of two numeric variables. Each dot on the x-y plane indicates the value of an individual data point.

  • SYNTAX:
    • matplotlib.pyplot.scatter(x_axis_data, y_axis_data, s=None, c=None, marker=None, cmap=None, vmin=None, vmax=None, alpha=None, linewidths=None, edgecolors=None)
  • PARAMETERS:
    • x_axis_data-  x-axis data
    • y_axis_data- y-axis data
    • s- marker size
    • c- color or sequence of colors for markers
    • marker- marker style
    • cmap- cmap name
    • linewidths- width of marker border
    • edgecolor- marker border-color
    • alpha- blending value
import matplotlib.pyplot as plt

plt.scatter(0, 0, s=7000)
plt.xlim(-0.85, 0.85)
plt.ylim(-0.95, 0.95)
plt.title("Scatter plot of points Circle")
plt.show()

Output & Explanation:

Here, we have used the scatter plot to draw the circle. The xlim() and the ylim() methods are used to set the x limits and the y limits of the axes respectively. We’ve set the marker size as 7000 and got the circle as the output.

Method 4: Matplotlib hollow circle:

import matplotlib.pyplot as plt

plt.scatter(0, 0, s=10000, facecolors='none', edgecolors='blue')
plt.xlim(-0.5, 0.5)
plt.ylim(-0.5, 0.5)
plt.show()

Output & Explanation:

To make the circle hollow, we set the facecolors parameter to 'none'. To differentiate the circle from the plane, we set the edgecolors parameter to blue for better visualization.

Method 5: Matplotlib draw circle on image:

import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.cbook as cb

with cb.get_sample_data('C:\\Users\\Prachee\\Desktop\\cds\\img1.jpg') as image_file:
    image = plt.imread(image_file)

fig, ax = plt.subplots()
im = ax.imshow(image)
patch = patches.Circle((100, 100), radius=80, transform=ax.transData)
im.set_clip_path(patch)
ax.axis('off')
plt.show()

Output & Explanation:

In this example, we first loaded our data and then used the axes.imshow() method, which displays data as an image. We then set the radius and the center of the circle, and with the set_clip_path() method we set the artist's clip path so that only the circular region of the image is shown.

Method 6: Matplotlib transparent circle:

import matplotlib.pyplot as plt

figure, axes = plt.subplots()
cc = plt.Circle((0.5, 0.5), 0.4, alpha=0.1)
axes.set_aspect(1)
axes.add_artist(cc)
plt.title('Colored Circle')
plt.show()

Output & Explanation:

To make the circle transparent, we changed the value of the alpha parameter, which controls the transparency of the figure.

Conclusion:

With this, we come to the end of this article. These are the various ways in which one can plot a circle using matplotlib in Python.

However, if you have any doubts or questions, do let me know in the comment section below. I will try to help you as soon as possible.

Happy Pythoning!

The post 6 Ways to Plot a Circle in Matplotlib appeared first on Python Pool.

Categories: FLOSS Project Planets

"CodersLegacy": Python GUI Frameworks

Planet Python - Mon, 2021-01-18 06:50

This article covers the most popular GUI Frameworks in Python.

One of Python’s strongest selling points is the vast number of GUI libraries available for GUI development. GUI development can be a tricky task, but thanks to the tools these Python GUI frameworks provide us, things become much simpler.

While some of the GUI libraries below are similar and compete directly with each other, each library has its own pros and cons. Some libraries are designed for a specific situation, as Kivy is for touchscreen devices, so you don't have to settle on just one.

There are a large number of GUI frameworks in Python and we couldn’t possibly cover all of them. Hence we’ll just be discussing 5 of the most popular and important GUI frameworks in Python.

Tkinter GUI

I decided to start with Tkinter as it’s probably the oldest and most well known GUI framework in Python.

Tkinter dates back to the early 1990s (the Tk toolkit it wraps was released in 1991) and quickly gained popularity due to its simplicity and ease of use compared to other GUI toolkits at the time. In fact, Tkinter is included in the Python standard library, meaning you don't have to download and install it separately.

Other plus points include the fact that Tkinter has a pretty small memory footprint and a quick start-up time. If you were to convert a Tkinter application into an exe with something like PyInstaller, its size would be smaller than the equivalents built with the other GUI libraries.
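To illustrate that simplicity, here is a minimal sketch of a Tkinter window (the title and label text are arbitrary choices of mine):

import tkinter as tk

# Create the main window and give it a title.
root = tk.Tk()
root.title("Hello Tkinter")

# Add a single label widget with some padding around it.
tk.Label(root, text="Hello, world!").pack(padx=20, pady=20)

# Start the event loop; this blocks until the window is closed.
root.mainloop()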

The main downside to Tkinter is its rather dated, old-fashioned look. If your goal is to create a sleek and modern-looking GUI, Tkinter probably isn't the best choice. Another possible downside is that Tkinter has fewer "special" widgets than the others, such as a video-player widget. Such widgets are rarely used, but still important.

You can begin learning it with our very own Tkinter Tutorial series.

PyQt5

PyQt5 is the Python binding of the popular Qt GUI framework which is written in C++.

PyQt5's main plus points are its cross-platform ability and modern-looking GUIs. Personally, I've noticed quite a few people switching from Tkinter to PyQt5 to be able to create more stylish GUIs.
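For comparison, a minimal PyQt5 window can be sketched like this (assuming PyQt5 is installed; the label text is arbitrary):

import sys

from PyQt5.QtWidgets import QApplication, QLabel

# Every PyQt5 application needs exactly one QApplication instance.
app = QApplication(sys.argv)

label = QLabel("Hello, PyQt5!")
label.show()

# exec_() starts the event loop; exit with its return code when done.
sys.exit(app.exec_())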

Another of PyQt5's plus points is Qt Designer, a drag-and-drop tool where you don't have to code each widget individually. Instead, you simply "drag" a widget and "drop" it onto the screen to build a GUI. It's similar to Windows Forms (VB.NET) and Scene Builder (JavaFX).

PyQt5's downsides include its relatively large package size and slow start-up speed. Furthermore, PyQt was released under the GPL license. This means you cannot distribute any software containing PyQt code without bundling the source code with it as well. For someone selling commercial software, this is a significant setback: you'll have to buy a separate commercial license, which gives you the right to withhold the source code.

The license issue isn't something that should bother the average programmer, though. You can begin learning PyQt from our very own tutorial series here!

If you’ve narrowed down your GUI of choice between Tkinter and PyQt5 and are having a hard time picking one, I suggest you read this comparison article that compares both in a very detailed manner.

PySide2

We're bringing up PySide right after PyQt5 due to their strong connection. PySide is also a Python binding of the popular Qt GUI framework, so the syntax is almost exactly the same, with some very minor differences.

The reason PyQt is used more nowadays is that its development was faster than PySide's. When Qt5 was released, PyQt published its binding for it (called PyQt5) in 2016, whereas it took PySide an extra two years to release PySide2 in 2018. If both had been released at the same time, things might look a bit different today.

All the plus points for PyQt5 also apply to PySide2, with one extra addition. Unlike PyQt5, PySide was released under the LGPL license, allowing you to keep the source code of your distributed programs private. This makes selling commercial applications easier than it would be with PyQt5.

You can learn more about PyQt5 vs PySide2 from this article here.

Kivy

Kivy is an opensource multi-platform GUI development library for Python and can run on iOS, Android, Windows, OS X, and Linux.

The Kivy framework is well known for its support for touchscreen devices and its clean, modern-looking GUIs. Its widgets have the interactive, multi-touch ability that's required for any decent GUI on a touchscreen device such as a mobile phone.
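A minimal Kivy app looks like the following sketch (assuming Kivy is installed; the button text is arbitrary):

from kivy.app import App
from kivy.uix.button import Button

class HelloApp(App):
    # build() returns the root widget of the application.
    def build(self):
        return Button(text="Hello, Kivy!")

if __name__ == "__main__":
    HelloApp().run()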

One possible downside to GUIs created with Kivy is their non-native look, which may or may not be what you want. Other issues include the smaller community and sparser documentation compared to more popular GUI libraries like Tkinter.

If you're mostly developing for the desktop, it's better to stick to one of the Qt options. Mobile support is Kivy's greatest draw, after all.

wxPython

wxPython is an open-source, cross-platform GUI toolkit for Python. Just as PyQt5 is based on the Qt GUI framework, wxPython is based on a GUI framework called wxWidgets, written in C++.

Its purpose is to allow Python developers to create native user interfaces for their GUI applications on a wide variety of operating systems.

This native ability makes GUIs created with wxPython look very natural on whatever operating system they run on, although some people prefer a single look and style that is exactly the same across all platforms.
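Here too a minimal sketch shows the shape of a wxPython program (assuming wxPython is installed; the frame title is arbitrary):

import wx

# A wx.App instance must exist before any windows are created.
app = wx.App()

frame = wx.Frame(None, title="Hello wxPython")
frame.Show()

# Start the event loop.
app.MainLoop()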

This marks the end of the Python GUI Frameworks article. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article content can be asked in the comments section below.

The post Python GUI Frameworks appeared first on CodersLegacy.

Categories: FLOSS Project Planets

Codementor: Selenium 4 With Python: All You Need To Know

Planet Python - Mon, 2021-01-18 04:41
Selenium 4 is driving a lot of curiosity as it follows a different architecture compared to its predecessor. In this blog, we will see how to work with Python in Selenium 4.
Categories: FLOSS Project Planets

Craig Small: Percent CPU for processes

Planet Debian - Mon, 2021-01-18 04:39

The ps program gives a snapshot of the processes running on your Unix-like system. On most Linux installations, this will be the ps program from the procps project.

While you can get a lot of information from the tool, many of the fields need further explanation or can give "wrong" or confusing information; or, to put it another way, they provide the right information that looks wrong.

One of these confusing fields is the %CPU or pcpu field. You can see this as the third field with the ps aux command. You only really need the u option to see it, but ps aux is a pretty common invocation.

More than 100%?

This post was inspired by procps issue 186 where the submitter said that the sum of the processes cannot be more than the number of CPUs times 100%. If you have 1 CPU then the sum of %CPU for all processes should be 100% or less, have 16 CPUs then 1600% is your maximum number.

Some people put the oddity of over 100% CPU down to some rounding error gone wrong, and at first I thought that too; except I know we get a lot of reports about the top header CPU load not lining up with the process load, and that's because "they're different".

The trick here is: what is ps reporting a percentage of? Or, perhaps to give a better clue, a percentage of when?

PCPU Calculations

So to get to the bottom of this, let’s look at the relevant code. In ps/output.c we have a function pr_pcpu that prints the percent CPU. The relevant lines are:

total_time = pp->utime + pp->stime;
if(include_dead_children)
    total_time += (pp->cutime + pp->cstime);

seconds = cook_etime(pp);

if(seconds)
    pcpu = (total_time * 1000ULL / Hertz) / seconds;

OK, ignoring the include_dead_children line (you get this from the S option; it means you also count the CPU time of the process's reaped children) and the scaling (process times are in jiffies, and we keep the CPU value as 0 to 999 for reasons), you can reduce this down to:

%CPU = (utime + stime) / elapsed_time

So we take the amount of time the process has kept the CPU(s) busy, either in userland or in the kernel, add the two together, then divide the sum by the elapsed time. The utime and stime counters increment like a car's odometer: if a process uses one jiffy of CPU time in userland, the counter goes to 1; if it does it again a few seconds later, the counter goes to 2.

To give an example, if a process has run for ten seconds and within those ten seconds the CPU has been busy in userland for that process, then we get 10/10 = 100% which makes sense.
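The arithmetic is easy to sketch in Python (the function and variable names here are mine, not procps identifiers, and the real ps works in jiffies rather than seconds):

def pcpu(cpu_seconds, elapsed_seconds):
    """CPU time consumed divided by wall-clock time since the process started."""
    if elapsed_seconds == 0:
        return 0.0
    return 100.0 * cpu_seconds / elapsed_seconds

print(pcpu(10, 10))  # 100.0 - busy for its entire 10-second life
print(pcpu(10, 20))  # 50.0  - busy for half of its 20-second life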

Not all Start times are the same

Let's take another example: a process still consumes ten seconds of CPU time but has been running for twenty seconds, so the answer is 10/20, or 50%. On our single-CPU example system, both of these processes cannot have been running flat out at the same time, otherwise we would have 150% CPU utilisation, which is not possible.

However, let’s adjust this slightly. We have assumed uniform utilisation. But take the following scenario:

  • At time T: Process P1 starts and uses 100% CPU
  • At time T+10 seconds: Process P1 stops using CPU but still runs, perhaps waiting for I/O or sleeping.
  • Also at time T+10 seconds: Process P2 starts and uses 100% CPU
  • At time T+20 we run the ps command and look at the %CPU column

The output for ps -o times,etimes,pcpu,comm would look something like:

TIME ELAPSED %CPU COMMAND
  10      20   50 P1
  10      10  100 P2

What we will see is P1 has 10/20 or 50% CPU and P2 has 10/10 or 100% CPU. Add those up, and you have 150% CPU, magic!

The key here is the ELAPSED column. P1's figure gives you the CPU utilisation across 20 seconds of system time, and P2's across only 10 seconds. Add them together directly and you get the wrong answer.

What’s the point of %CPU?

The %CPU column probably gives results that a lot of people are not expecting, so what's the point of it? Don't use it to see why the CPU is running hot right now; as shown above, two processes can work the CPU hard at different times. What it is useful for is seeing how "busy" a process is, but be warned: it's an average over the process's whole lifetime. It's helpful for something that starts busy, but if a process idles or hardly uses CPU for a week and then goes bananas, you won't see that.

The top program, because a lot of its statistics are deltas from the last refresh, is a much better program for this sort of information about what is happening right now.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Claudia Regio

Planet Python - Mon, 2021-01-18 01:05

This week we welcome Claudia Regio (@ClaudiaRegio) as our PyDev of the Week! Claudia is a program manager for Python Data Science with a focus on Python Notebooks in Visual Studio Code at Microsoft. She also blogs on Microsoft’s dev blog.

Let’s spend some time getting to know Claudia better!

Can you tell us a little about yourself (hobbies, education, etc):

I am originally from Italy and moved to the Greater Seattle Area when I was 4 years old. Growing up I lived and breathed squash, and had it not been for COVID I still would be! I have been and always will be a huge math nerd and have been tutoring in math for over 10 years now. I attended the University of Washington where I majored in Applied Physics and received two minors in Comprehensive Mathematics and Applied Mathematics while a member of both the Delta Zeta Sorority and the UW Men’s Squash Team.

After graduating I pursued a Data Science Certificate from the University of Washington to enhance my data analysis + data science skills while working at T-Mobile as a Systems Network Architecture Engineer.

Two years after working in that role, I transitioned to Program Manager at Microsoft for the Python Extension in VS Code, focusing on the development of the Data Science & AI components and features.

Why did you start using Python?

The courses in my data science certificate got me started on Python back in 2017.

What other programming languages do you know and which is your favorite?

I learned Java during my time in college and while I enjoyed Java being a strongly typed language, no language beats Python when it comes to data science.

What projects are you working on now?

I am currently managing the Python + Jupyter Extension partnership in VS Code. While our recently released Jupyter Extension provides Jupyter Notebook support for other languages in VS Code Insiders, I focus on the collaboration of the two extensions to create an optimal notebooks experience for data scientists using Python.

Which Python libraries are your favorite (core or 3rd party)?

Scikit-learn will forever have my heart <3

What do you see as the best features of Python Notebooks?

Our best features in Python Notebooks currently include the variable explorer, data viewer, and my personal favorite, Gather (a complimentary VS Code extension).

When experimenting and prototyping in a notebook, it can often become busy as a user explores different approaches. After eventually reaching the desired result (for instance a specific visualization of the data) a user would then need to manually curate the cells involved with this specific flow. This task can be laborious and error-prone, leaving users without a strong approach for aggregating related code. A second scenario that is common among Notebooks users is that software engineers are tasked with turning an existing notebook into a production-ready script. The process of pulling out unneeded imports, graphs, outputs is often highly time consuming and can lead to errors as well. Gather is a complimentary VS Code extension that grabs all the relevant and dependent code required to create the contents of a selected cell and extracts those code dependencies into a new notebook or Python script. This helps save data scientists and developers a lot of notebook cleaning time!

Why should Python developers and data scientists use Visual Studio Code over another editor?

VS Code is a free and open source editor with a family of extensions (from both Microsoft and the open source community), products, and features that aim to make a seamless experience for developers and data scientists. A few examples include:

  • Python (Comes with the Jupyter Extension): Includes features such as IntelliSense, linting, debugging, code navigation, code formatting, Jupyter notebook support, refactoring, variable explorer, test explorer, snippets, and more!
  • Pylance: Language server that supercharges your Python IntelliSense experience with rich type information, helping you write better code faster.
  • Live Share: Enables you to collaboratively edit and debug with others in real-time, regardless of what programming languages you’re using or app types you’re building. It allows you to instantly (and securely) share your current project, and then as needed, share debugging sessions, terminal instances, localhost webapps, voice calls, and more!
  • Gather: A code cleaning tool that uses a static analysis technique to find and then copy all of the dependent code that was used to generate that cell’s result into a new notebook or script.
  • Coding Pack for Python Installer: An installer pack that helps students and new coders quickly get started by installing VS Code, all of the extensions above, as well as Python and common packages such as numpy and pandas.
  • Azure Machine Learning: Easily build, train, and deploy machine learning models to the cloud or the edge with Azure Machine Learning service from the Visual Studio Code interface.
  • Over 350+ community contributed Python-related extensions on the VS Code Marketplace!

It is the partnership constructed amongst these extensions and the open-source community as well as the Developer Division mindset to always build for the customer that creates an unmatchable experience for both developers and data scientists in VS Code.

Is there anything else you’d like to say?

I would like to thank the incredible team I get to work with (David Kutugata, Don Jayamanne, Ian Huff, Jim Griesmer, Joyce Er, Rich Chiodo, Rong Lu) who make this tool come to life and a thank you to all the customers who engage with us and are helping us build the best tool for data scientists!

If anyone would like to provide any additional feedback, feature requests, or help contribute back to the product you can do so here!

Thanks for doing the interview, Claudia!

The post PyDev of the Week: Claudia Regio appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Full Stack Python: How to Transcribe Speech Recordings into Text with Python

Planet Python - Mon, 2021-01-18 00:00

When you have a recording where one or more people are talking, it's useful to have a highly accurate and automated way to extract the spoken words into text. Once you have the text, you can use it for further analysis or as an accessibility feature.

In this tutorial, we'll use a high accuracy speech-to-text web application programming interface called AssemblyAI to extract text from an MP3 recording (many other formats are supported as well).

With the code from this tutorial, you will be able to take an audio file that contains speech such as this example one I recorded and output a highly accurate text transcription like this:

An object relational mapper is a code library that automates the transfer of data stored in relational, databases into objects that are more commonly used in application code or EMS are useful because they provide a high level abstraction upon a relational database that allows developers to write Python code instead of sequel to create read update and delete, data and schemas in their database. Developers can use the programming language. They are comfortable with to work with a database instead of writing SQL... (the text goes on from here but I abbreviated it at this point)

Tutorial requirements

Throughout this tutorial we are going to use the requests package, which we will install in just a moment, to call the AssemblyAI API. Make sure you also have Python 3, preferably 3.6 or newer, installed in your environment.

All code in this blog post is available open source under the MIT license on GitHub under the transcribe-speech-text-script directory of the blog-code-examples repository. Use the source code as you desire for your own projects.

Setting up the development environment

Change into the directory where you keep your Python virtual environments. I keep mine in a subdirectory named venvs within my user's home directory. Create a new virtualenv for this project using the following command.

python3 -m venv ~/venvs/pytranscribe

Activate the virtualenv with the activate shell script:

source ~/venvs/pytranscribe/bin/activate

After the above command is executed, the command prompt will change so that the name of the virtualenv is prepended to the original command prompt format, so if your prompt is simply $, it will now look like the following:

(pytranscribe) $

Remember, you have to activate your virtualenv in every new terminal window where you want to use dependencies in the virtualenv.

We can now install the requests package into the activated but otherwise empty virtualenv.

pip install requests==2.24.0

Look for output similar to the following to confirm the appropriate packages were installed correctly from PyPI.

(pytranscribe) $ pip install requests==2.24.0
Collecting requests==2.24.0
  Using cached https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/requests-2.24.0-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/9f/f0/a391d1463ebb1b233795cabfc0ef38d3db4442339de68f847026199e69d7/urllib3-1.25.10-py2.py3-none-any.whl
Collecting chardet<4,>=3.0.2 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
Collecting idna<3,>=2.5 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl
Installing collected packages: certifi, urllib3, chardet, idna, requests
Successfully installed certifi-2020.6.20 chardet-3.0.4 idna-2.10 requests-2.24.0 urllib3-1.25.10

We have all of our required dependencies installed so we can get started coding the application.

Uploading, initiating and transcribing audio

We have everything we need to start building our application that will transcribe audio into text. We're going to build this application in three files:

  1. upload_audio_file.py: uploads your audio file to a secure place on AssemblyAI's service so it can be accessed for processing. If your audio file is already accessible at a public URL, you don't need this step; you can just follow this quickstart
  2. initiate_transcription.py: tells the API which file to transcribe and to start immediately
  3. get_transcription.py: prints the status of the transcription if it is still processing, or displays the results of the transcription when the process is complete

Create a new directory named pytranscribe to store these files as we write them. Then change into the new project directory.

mkdir pytranscribe cd pytranscribe

We also need to export our AssemblyAI API key as an environment variable. Sign up for an AssemblyAI account, log in to the AssemblyAI dashboard, and copy "Your API token". Then export it in your shell:

export ASSEMBLYAI_KEY=your-api-key-here

Note that you must use the export command in every command line window that you want this key to be accessible. The scripts we are writing will not be able to access the API if you do not have the token exported as ASSEMBLYAI_KEY in the environment you are running the script.

Now that we have our project directory created and the API key set as an environment variable, let's move on to writing the code for the first file that will upload audio files to the AssemblyAI service.

Uploading the audio file for transcription

Create a new file named upload_audio_file.py and place the following code in it:

import argparse
import os

import requests

API_URL = "https://api.assemblyai.com/v2/"


def upload_file_to_api(filename):
    """Checks for a valid file and then uploads it to AssemblyAI so it can
    be saved to a secure URL that only that service can access. When the
    upload is complete we can then initiate the transcription API call.
    Returns the API JSON if successful, or None if file does not exist.
    """
    if not os.path.exists(filename):
        return None

    def read_file(filename, chunk_size=5242880):
        with open(filename, 'rb') as _file:
            while True:
                data = _file.read(chunk_size)
                if not data:
                    break
                yield data

    headers = {'authorization': os.getenv("ASSEMBLYAI_KEY")}
    response = requests.post("".join([API_URL, "upload"]),
                             headers=headers,
                             data=read_file(filename))
    return response.json()

The above code imports the argparse, os and requests packages so that we can use them in this script. API_URL is a constant holding the base URL of the AssemblyAI service. We define the upload_file_to_api function with a single argument, filename, which should be a string containing the absolute path to a file, including the file name itself.

Within the function, we check that the file exists, then use the requests package's chunked transfer encoding support to stream large files to the AssemblyAI API.

The os module's getenv function reads the API that was set on the command line using the export command with the getenv. Make sure that you use that export command in the terminal where you are running this script otherwise that ASSEMBLYAI_KEY value will be blank. When in doubt, use echo $ASSEMBLY_AI to see if the value matches your API key.

To use the upload_file_to_api function, append the following lines of code in the upload_audio_file.py file so that we can properly execute this code as a script called with the python command:

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("filename")
    args = parser.parse_args()
    upload_filename = args.filename
    response_json = upload_file_to_api(upload_filename)
    if not response_json:
        print("file does not exist")
    else:
        print("File uploaded to URL: {}".format(response_json['upload_url']))

The code above creates an ArgumentParser object so that the application can take a single argument from the command line specifying the file we want to access, read and upload to the AssemblyAI service.

If the file does not exist, the script will print a message saying the file couldn't be found. In the happy path, where we do find the correct file at that path, the file is uploaded using the code in the upload_file_to_api function.

Execute the completed upload_audio_file.py script by running it on the command line with the python command. Replace FULL_PATH_TO_FILE with an absolute path to the file you want to upload, such as /Users/matt/devel/audio.mp3.

python upload_audio_file.py FULL_PATH_TO_FILE

Assuming the file is found at the location that you specified, when the script finishes uploading the file, it will print a message like this one with a unique URL:

File uploaded to URL: https://cdn.assemblyai.com/upload/463ce27f-0922-4ea9-9ce4-3353d84b5638

This URL is not public, it can only be used by the AssemblyAI service, so no one else will be able to access your file and its contents except for you and their transcription API.

The part that is important is the last section of the URL, in this example it is 463ce27f-0922-4ea9-9ce4-3353d84b5638. Save that unique identifier because we need to pass it into the next script that initiates the transcription service.

Initiate transcription

Next, we'll write some code to kick off the transcription. Create a new file named initiate_transcription.py. Add the following code to the new file.

import argparse
import os

import requests

API_URL = "https://api.assemblyai.com/v2/"
CDN_URL = "https://cdn.assemblyai.com/"


def initiate_transcription(file_id):
    """Sends a request to the API to transcribe a specific file that was
    previously uploaded to the API. This will not immediately return the
    transcription because it takes a moment for the service to analyze and
    perform the transcription, so there is a different function to
    retrieve the results.
    """
    endpoint = "".join([API_URL, "transcript"])
    json = {"audio_url": "".join([CDN_URL, "upload/{}".format(file_id)])}
    headers = {
        "authorization": os.getenv("ASSEMBLYAI_KEY"),
        "content-type": "application/json"
    }
    response = requests.post(endpoint, json=json, headers=headers)
    return response.json()

We have the same imports as the previous script and we've added a new constant, CDN_URL that matches the separate URL where AssemblyAI stores the uploaded audio files.

The initiate_transcription function essentially just sets up a single HTTP request to the AssemblyAI API to start the transcription process on the audio file at the specific URL passed in. This is why passing in the file_id is important: that completes the URL of the audio file that we are telling AssemblyAI to retrieve.

Finish the file by appending this code so that it can be easily invoked from the command line with arguments.

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("file_id")
    args = parser.parse_args()
    file_id = args.file_id
    response_json = initiate_transcription(file_id)
    print(response_json)

Start the script by running the python command on the initiate_transcription file and pass in the unique file identifier you saved from the previous step.

# the FILE_IDENTIFIER is returned in the previous step and will
# look something like this: 463ce27f-0922-4ea9-9ce4-3353d84b5638
python initiate_transcription.py FILE_IDENTIFIER

The API will send back a JSON response that this script prints to the command line.

{'audio_end_at': None,
 'acoustic_model': 'assemblyai_default',
 'text': None,
 'audio_url': 'https://cdn.assemblyai.com/upload/463ce27f-0922-4ea9-9ce4-3353d84b5638',
 'speed_boost': False,
 'language_model': 'assemblyai_default',
 'redact_pii': False,
 'confidence': None,
 'webhook_status_code': None,
 'id': 'gkuu2krb1-8c7f-4fe3-bb69-6b14a2cac067',
 'status': 'queued',
 'boost_param': None,
 'words': None,
 'format_text': True,
 'webhook_url': None,
 'punctuate': True,
 'utterances': None,
 'audio_duration': None,
 'auto_highlights': False,
 'word_boost': [],
 'dual_channel': None,
 'audio_start_from': None}

Take note of the value of the id key in the JSON response. This is the transcription identifier we need to use to retrieve the transcription result. In this example, it is gkuu2krb1-8c7f-4fe3-bb69-6b14a2cac067. Copy the transcription identifier in your own response because we will need it to check when the transcription process has completed in the next step.

Retrieving the transcription result

We have uploaded and begun the transcription process, so let's get the result as soon as it is ready.

How long it takes to get the results back can depend on the size of the file, so this next script will send an HTTP request to the API and report back the status of the transcription, or print the output if it's complete.

Create a third Python file named get_transcription.py and put the following code into it.

import argparse
import os

import requests

API_URL = "https://api.assemblyai.com/v2/"


def get_transcription(transcription_id):
    """Requests the transcription from the API and returns the JSON
    response."""
    endpoint = "".join([API_URL, "transcript/{}".format(transcription_id)])
    headers = {"authorization": os.getenv('ASSEMBLYAI_KEY')}
    response = requests.get(endpoint, headers=headers)
    return response.json()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("transcription_id")
    args = parser.parse_args()
    transcription_id = args.transcription_id
    response_json = get_transcription(transcription_id)
    if response_json['status'] == "completed":
        for word in response_json['words']:
            print(word['text'], end=" ")
    else:
        print("current status of transcription request: {}".format(
            response_json['status']))

The code above has the same imports as the other scripts. In this new get_transcription function, we simply call the AssemblyAI API with our API key and the transcription identifier from the previous step (not the file identifier). We retrieve the JSON response and return it.

In the main block we take the transcription identifier that is passed in as a command line argument and pass it into the get_transcription function. If the response JSON from get_transcription contains a completed status, we print the results of the transcription. Otherwise, we print the current status, which is either queued or processing until the transcription is complete.
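If you would rather not re-run the script by hand until the work finishes, a small polling loop can wrap get_transcription. This is a sketch of my own rather than part of the original scripts; the 'error' status value and the five-second delay are assumptions on my part:

import time

from get_transcription import get_transcription


def wait_for_transcript(transcription_id, delay=5):
    """Polls the API until the transcription finishes, then returns its text."""
    while True:
        response_json = get_transcription(transcription_id)
        if response_json['status'] == 'completed':
            return " ".join(word['text'] for word in response_json['words'])
        if response_json['status'] == 'error':
            raise RuntimeError("transcription failed")
        time.sleep(delay)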

Call the script using the command line and the transcription identifier from the previous section:

python get_transcription.py TRANSCRIPTION_ID

If the service has not yet started working on the transcript then it will return queued like this:

current status of transcription request: queued

When the service is currently working on the audio file it will return processing:

current status of transcription request: processing

When the process is completed, our script will return the text of the transcription, like you see here:

An object relational mapper is a code library that automates the transfer of data stored in relational, databases into objects that are more commonly used in application code or EMS are useful because they provide a high level ...(output abbreviated)

That's it, we've got our transcription!

You may be wondering what to do if the accuracy isn't where you need it to be for your situation. That is where boosting accuracy for keywords or phrases and selecting a model that better matches your data come in. You can use either of those two methods to boost the accuracy of your recordings to an acceptable level for your situation.

What's next?

We just finished writing some scripts that call the AssemblyAI API to transcribe recordings with speech into text output.

Next, take a look at some of AssemblyAI's more advanced documentation, which goes beyond the basics covered in this tutorial.

Questions? Let me know via an issue ticket on the Full Stack Python repository, on Twitter @fullstackpython or @mattmakai. See something wrong with this post? Fork this page's source on GitHub and submit a pull request.

Categories: FLOSS Project Planets

Promet Source: What is Human-Centered Web Design?

Planet Drupal - Sun, 2021-01-17 21:13
Human-centered design is a concept that gained traction in the 1990s as an approach to developing innovative solutions based on a laser-sharp focus on human needs and human perspectives during every phase of a design or problem-solving process. Building upon the principles of human-centered design, Promet Source has served as a pioneer and leading practitioner of human-centered web design. 
Categories: FLOSS Project Planets

