Everyday Superpowers: Recover from pre-commit-adjacent file losses

Planet Python - Tue, 2021-04-13 09:40

Pre-commit is an immensely useful tool for your projects. I use it for every project I work on.

In rare cases, it might seem as though you've lost many modified files, potentially hours of work. The truth is that your files are safe, and it's easy to restore them.
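As a hedged sketch of what recovery typically looks like: before running hooks, pre-commit stashes your unstaged changes as a patch file in its cache directory, so if a run is interrupted the diff is still on disk. The exact filename below is hypothetical; list the directory to find yours.

```shell
# Find the most recent patch pre-commit saved (newest first).
ls -t ~/.cache/pre-commit/patch*

# Re-apply it to your working tree (filename is an example only).
git apply ~/.cache/pre-commit/patch1618300000-12345
```

If `git apply` complains, `git apply --3way` or applying against a clean working tree usually resolves it.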

Categories: FLOSS Project Planets

Stack Abuse: Plotly Scatter Plot - Tutorial with Examples

Planet Python - Tue, 2021-04-13 08:30

Plotly is a JavaScript-based Python data visualization library, focused on interactive and web-based visualizations. It has the simplicity of Seaborn, with a high-level API, but also the interactivity of Bokeh.

In addition to the core library's functionality, using the built-in Plotly Express with Dash makes it an amazing choice for web-based applications and interactive, data-driven dashboards, which are usually written in Flask.

In this guide, we'll take a look at how to plot a Scatter Plot with Plotly.

Scatter Plots explore the relationship between two numerical variables (features) of a dataset.

Import Data

We'll be working with the Heart Attack Dataset from Kaggle, which contains data on various bodily metrics that we could use as indicators of a heart attack possibility.

Let's import the dataset and print the head() to take a peek:

import pandas as pd

df = pd.read_csv('heart.csv')
print(df.head())

This results in:

   age  cp  trtbps  chol  fbs  restecg  thalachh  exng  oldpeak  slp  caa  output
0   63   3     145   233    1        0       150     0      2.3    0    0       1
1   37   2     130   250    0        1       187     0      3.5    0    0       1
2   41   1     130   204    0        0       172     0      1.4    2    0       1
3   56   1     120   236    0        1       178     0      0.8    2    0       1
4   57   0     120   354    0        1       163     1      0.6    2    0       1

Let's explore the relationships between features such as the thalachh (maximum recorded heart rate), trtbps (resting blood pressure), chol (amount of cholesterol) and output (0 or 1, representing lower or higher chances of experiencing a heart attack respectively).

First, let's go ahead and save our features separately for brevity's sake:

max_heartrate = df['thalachh']
resting_blood_pressure = df['trtbps']
cholesterol_level = df['chol']
output = df['output']

Plot a Scatter Plot with Plotly

Now we can plot a Scatter Plot. Let's first explore the relationship between max_heartrate and cholesterol_level. To plot a Scatter Plot with Plotly, we'll use the scatter() function of the Plotly Express (px) instance:

import plotly.express as px

fig = px.scatter(x=cholesterol_level, y=max_heartrate)
fig.show()

The only required arguments are the x and y features. This will render a Scatter Plot (without axis labels), served from a locally spun-up server in your browser of choice:

Alternatively, if you don't want to define your variables beforehand, Plotly offers the exact same syntax as Seaborn - you specify the data source, and the names of the features you'd like to visualize. This will map the features to labels, and plot them directly without having to specify the features like we did before:

import pandas as pd
import plotly.express as px

df = pd.read_csv('heart.csv')
fig = px.scatter(df, x='chol', y='thalachh')
fig.show()

This results in:

Note: You can also do a mish-mash of these approaches, where you supply your DataFrame as the source, but also use pre-defined variables instead of referencing the feature column-names in the scatter() call:

fig = px.scatter(df, x=cholesterol_level, y=max_heartrate)
fig.show()

This results in a labeled Scatter Plot as well:

There doesn't seem to be much of a correlation between the cholesterol level and maximum heart rate of individuals in this dataset.

Customizing a Plotly Scatter Plot

Now, we rarely visualize plain plots. The point is to visualize certain characteristics of data, intuitively.

In our case, this might include coloring the markers depending on the output feature, or adding hover_data, which specifies what's shown on the markers when someone hovers over them.

Currently, the hover_data isn't very helpful, only showing us the x and y values, which can already be reasonably inferred from observing the resulting plot.

Let's go ahead and change a few of the parameters to make this plot a bit more intuitive:

import pandas as pd
import plotly.express as px

df = pd.read_csv('heart.csv')
fig = px.scatter(df, x='chol', y='thalachh', color='output', hover_data=['sex', 'age'])
fig.show()

We've set the color of each marker to be mapped to the output feature, coloring higher and lower chances of experiencing a heart attack in different colors. We've also included the sex and age of each individual on their markers.

This results in:

Finally, you can also change the size of the markers, either by passing a scalar value (such as 5) to the fig.update_traces() method, or by passing a vector (such as a feature to map the sizes to) to the size argument.

Let's map the oldpeak feature to the size of each marker:

import pandas as pd
import plotly.express as px

df = pd.read_csv('heart.csv')
fig = px.scatter(df, x='chol', y='thalachh', color='output', size='oldpeak', hover_data=['sex', 'age'])
fig.show()

Now, each marker will have a variable size, depending on the values of the oldpeak feature:

Or, if you want to specifically make all markers of the same, fixed size, you can update the Figure's traces:

import pandas as pd
import plotly.express as px

df = pd.read_csv('heart.csv')
fig = px.scatter(df, x='chol', y='thalachh', color='output', hover_data=['sex', 'age'])
fig.update_traces(marker={'size': 10})
fig.show()

This results in:


In this guide, we've taken a look at how to plot a Scatter Plot using Python and Plotly.

If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python:

Data Visualization in Python

Become dangerous with Data Visualization

✅ 30-day no-question money-back guarantee

✅ Beginner to Advanced

✅ Updated regularly for free (latest update in April 2021)

✅ Updated with bonus resources and guides

Data Visualization in Python with Matplotlib and Pandas is a book designed to take absolute beginners to Pandas and Matplotlib, with basic Python knowledge, and allow them to build a strong foundation for advanced work with these libraries - from simple plots to animated 3D plots with interactive buttons.

It serves as an in-depth guide that'll teach you everything you need to know about Pandas and Matplotlib, including how to construct plot types that aren't built into the library itself.

Data Visualization in Python, a book for beginner-to-intermediate Python developers, guides you through simple data manipulation with Pandas, covers core plotting libraries like Matplotlib and Seaborn, and shows you how to take advantage of declarative and experimental libraries like Altair. More specifically, over the span of 11 chapters this book covers 9 Python libraries: Pandas, Matplotlib, Seaborn, Bokeh, Altair, Plotly, GGPlot, GeoPandas, and VisPy.

It serves as a unique, practical guide to Data Visualization, in a plethora of tools you might use in your career.

Categories: FLOSS Project Planets

Python Software Foundation: The PSF is hiring a Python Packaging Project Manager!

Planet Python - Tue, 2021-04-13 07:00

Thanks to a two-year grant commitment from Bloomberg, our second 2021 Visionary Sponsor and a long term committed supporter of the Python ecosystem, The Python Software Foundation (PSF) is hiring a full-time project and community manager for the Python Packaging ecosystem, with a specific focus on the Python Package Index (PyPI).

We are excited about the opportunities that partnerships like these can provide for our community and know that this one will serve as a model for what can be accomplished when organizations make investments in entire ecosystems rather than individual initiatives.

Bloomberg has been a Python Software Foundation sponsor since 2017. We greatly appreciate their support. You can read more about how Bloomberg is supporting the Python ecosystem, and why they’re interested in improving the Python Packaging ecosystem, on the Tech At Bloomberg blog.

About the role

Over time, the Python packaging ecosystem has grown to span numerous software projects, standards, and use cases within the Python community. This growth and specialization of projects has helped bring the Python community to where it is, but has posed challenges in coordinating important changes across the entire ecosystem. Major projects in the space include PyPI, pip, virtualenv, wheel, setuptools, twine, and packaging.python.org.

The project manager will oversee improvements and added functionality that will benefit all Python users while leading the development of PyPI into a sustainable service, working with the PSF Director of Resource Development to fund work throughout the packaging ecosystem. This role will also serve as a community manager, soliciting feedback and facilitating discussions amongst stakeholders in the Python packaging community to generate consensus and establish new specifications.

Over the past three years, the PSF has overseen work on multiple one-off grants[1] in the Python packaging ecosystem, which were successfully fulfilled for improvements to PyPI as well as pip. While these projects have been successful, the PSF is committed to establishing a structure through this role to support such projects sustainably, assessing the backlog of potentially funded work as well as lingering demand in the community for further improvement.

Ultimately, this role will provide a basis for progressing initiatives across multiple packaging projects, taking on additional funding opportunities on a continuous basis, and acting as a known point of contact for members of the community.

Interested in the position?

If you are interested, please see the job post on the Python Job Board for the job description and instructions to apply. The call for resumes will be open until May 18, 2021.

[1]: Past funded work includes: MOSS award to finish and launch the ground-up rewrite of PyPI (2018); Open Technology Fund contract to add 2FA, API keys, translations, improved accessibility, and internationalization to PyPI (2019); Facebook grant to implement automated checks for package uploads and PEP 458 in PyPI (2020); and MOSS award and Chan Zuckerberg Initiative grant to overhaul the user experience and dependency resolver in pip (2020).

Categories: FLOSS Project Planets

Akademy 2021 – Call for Participation!

Planet KDE - Tue, 2021-04-13 05:57

By Allyson Alexandrou

Akademy will be held online from 18th to 25th June and the Call for Participation is open! Submit your talk ideas and abstracts by 2nd May.

While all talk topics are welcome, here are a few talk topics specifically relevant to the KDE Community:

  • Topics related to KDE's current Community Goals.
  • KDE In Action: cases of KDE technology in real life.
  • Overview of what is going on in the various areas of the KDE Community.
  • Collaboration between KDE and other Free Software projects.
  • Release, packaging, and distribution of software by KDE.
  • Increasing our reach through efforts such as accessibility, promotion, translation and localization.
  • Improving our governance and processes, community building.
  • Innovations and best practices in the libraries and technologies used by KDE software.
Why should I submit a talk?

KDE is one of the biggest and most well-established Free Software communities. Talking at Akademy gives you an audience that will be receptive to your ideas and will also offer you their experience and know-how in return.

As an independent developer, you will gain supporters for your project, the insight of experienced developers, and you may even gain active contributors. As a community leader, you will be able to discuss the hot topics associated with managing large groups of volunteers, such as management, inclusivity and conflict resolution. As a CTO, you will be able to explain your company’s mission, its products and services and benefit from the brainshare of one of the most cutting edge community-based Free Software projects.

How do I get started?

With an idea. Even if you do not know exactly how you will focus it, no worries! Submit some basic details about your talk idea. All abstracts can be edited after the initial submission.

What should my talk abstract or proposal include?

This is a great question! To ensure you get your point across both clearly and comprehensively, your abstract should include uses of your idea or product and show what different groups of people get out of it. For example, how can a company, developer, or even a user benefit from using your app? In what ways can you further their experiences?

If you’re still stuck on where to start or what to talk about, take a look at the talks given in previous years at Akademy.

You can find more Akademy videos on KDE's YouTube channel.

Categories: FLOSS Project Planets

Opensource.com: What's new with Drupal in 2021?

Planet Drupal - Tue, 2021-04-13 03:01

The success of open source projects is largely carried by the pillars of the community and group collaborations. Without putting a stake in the ground to achieve strategic initiatives, an open source project can lose focus. Open source strategic initiatives should aim at solving impactful problems through collaboration involving the project's stakeholders.

Categories: FLOSS Project Planets

Kushal Das: Workshop on writing Python modules in Rust April 2021

Planet Python - Tue, 2021-04-13 02:05

I am conducting 2 repeat sessions for a workshop on "Writing Python modules in Rust".

The first session is on 16th April, 1500 UTC onwards, and the repeat session will be on 18th April 0900 UTC. More details can be found in this issue.

You don't have to have any prior Rust knowledge. I will be providing working code, and we will go very slowly to have a working Python module with useful functions in it.

If you are planning to attend or know anyone else who may want to join, then please point them to the issue link.

Categories: FLOSS Project Planets

DEVLOG: Bugfixing the KDE Panel!

Planet KDE - Tue, 2021-04-13 02:03

DEVLOG: Bugfixing the KDE Panel!

Categories: FLOSS Project Planets

François Marier: Deleting non-decryptable restic snapshots

Planet Debian - Mon, 2021-04-12 23:19

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
        github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
        github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
        github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
        github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra/command.go:897
main.main
        github.com/restic/restic/cmd/restic/main.go:98
runtime.main
        runtime/proc.go:204
runtime.goexit
        runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.

Categories: FLOSS Project Planets

Shirish Agarwal: what to write

Planet Debian - Mon, 2021-04-12 23:14

First up, I am alive and well. I have been receiving calls from friends for quite some time, but now that I have become deaf it is a pain, and the hearing aids aren't all that useful. Moreover, as we find ourselves sinking lower and lower each day, it feels absurd to decide what to write and not write about India. Thankfully, I ran across this piece, which tells it in far more detail than I ever could. The only interesting and somewhat positive news I have is from the south of India; otherwise these are sad days, especially for the poor. The saddest story is that this time Covid has reached alarming proportions in India and, surprise surprise, this time the villain for many is my state of Maharashtra, even though it hasn't received its share of GST proceeds for the last two years. And that was Kerala's perspective: a different state, different party, different political ideology altogether.

Kerala Finance Minister Thomas Issac views on GST, October 22, 2020 Indian Express.

I also briefly share the death of the somewhat liberal film censorship in India, unlike Italy, which abolished film censorship altogether. I don't really want to spend too much time on how we have become No. 2 in Covid cases in the world, and perhaps in deaths too. Many people still believe in herd immunity but don't really know what it means. So without taking too much time and effort, I bid adieu. I may post again when I'm hopefully feeling emotionally better and stronger.

Categories: FLOSS Project Planets

Ned Batchelder: Coverage.py and third-party code

Planet Python - Mon, 2021-04-12 22:19

I’ve made a change to coverage.py, and I could use your help testing it before it’s released to the world.

tl;dr: install this and let me know if you don’t like the results: pip install coverage==5.6b1

What’s changed? Previously, coverage.py didn’t understand about third-party code you had installed. With no options specified, it would measure and report on that code, for example in site-packages. A common solution was to use --source=. to only measure code in the current directory tree. But many people put their virtualenv in the current directory, so third-party code installed into the virtualenv would still get reported.

Now, coverage.py understands where third-party code gets installed, and won’t measure code it finds there. This should produce more useful results with less work on your part.
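As a rough sketch of the idea (not coverage.py's actual implementation), the standard library can tell you where third-party code gets installed, so a tool can test whether a measured file lives under one of those directories:

```python
import sysconfig

# Directories where pip installs third-party packages
# (purelib and platlib; these are often the same path).
third_party_dirs = {
    sysconfig.get_paths()['purelib'],
    sysconfig.get_paths()['platlib'],
}

def is_third_party(filename: str) -> bool:
    """Return True if filename appears to be installed third-party code."""
    return any(filename.startswith(d) for d in third_party_dirs)
```

The real logic is more involved (virtualenvs, namespace packages, --source by import name), but this is the core test.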

This was a bit tricky because the --source option can also specify an importable name instead of a directory, and it had to still measure that code even if it was installed where third-party code goes.

As of now, there is no way to change this new behavior. Third-party code is never measured.

This is kind of a big change, and there could easily be unusual arrangements that aren’t handled properly. I would like to find out about those before an official release. Try the new version and let me know what you find out:

pip install coverage==5.6b1

In particular, I would like to know if any of the code you wanted measured wasn’t measured, or if there is code being measured that “obviously” shouldn’t be. Testing on Debian (or a derivative like Ubuntu) would be helpful; I know they have different installation schemes.

If you see a problem, write up an issue. Thanks for helping.

Categories: FLOSS Project Planets

hussainweb.me: Simple Infrastructure as Code (IaC) for Drupal

Planet Drupal - Mon, 2021-04-12 22:01
I have been setting up computers and configuring web servers for a long time now. I started off my computing journey by building computers and setting up operating systems for others. Soon, I started configuring servers, first using shared hosting and then dedicated servers. As virtualization became mainstream, I started configuring cloud instances to run websites. At a certain point, when I was maintaining several projects (some seasonal), it became harder to remember exactly how I had configured a particular server when I needed to upgrade or set it up again. That is why I have been interested in Infrastructure as Code (IaC) for a long time.
Categories: FLOSS Project Planets

Third & Grove: Core Web Vitals in Drupal

Planet Drupal - Mon, 2021-04-12 21:51

In May 2021, Google will begin to recognize Page Experience as a ranking factor, making user experience part of each page's overall ranking score.

Categories: FLOSS Project Planets

Podcast.__init__: Let The Robots Do The Work Using Robotic Process Automation with Robocorp

Planet Python - Mon, 2021-04-12 21:45
One of the great promises of computers is that they will make our work faster and easier, so why do we all spend so much time manually copying data from websites, entering information into web forms, or doing any of the other tedious tasks that take up our time? As developers, our first inclination is to "just write a script" to automate things, but how do you share that with your non-technical co-workers? In this episode Antti Karjalainen, CEO and co-founder of Robocorp, explains how Robotic Process Automation (RPA) can help us all cut down on time-wasting tasks and let the computers do what they're supposed to. He shares how he got involved in the RPA industry, his work with Robot Framework and RPA Framework, how to build and distribute bots, and how to decide if a task is worth automating. If you're sick of spending your time on mind-numbing copy and paste, give this episode a listen and then let the robots do the work for you.

Summary

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • We’ve all been asked to help with an ad-hoc request for data by the sales and marketing team. Then it becomes a critical report that they need updated every week or every day. Then what do you do? Send a CSV via email? Write some Python scripts to automate it? But what about incremental sync, API quotas, error handling, and all of the other details that eat up your time? Today, there is a better way. With Census, just write SQL or plug in your dbt models and start syncing your cloud warehouse to SaaS applications like Salesforce, Marketo, Hubspot, and many more. Go to pythonpodcast.com/census today to get a free 14-day trial.
  • Software is read more than it is written, so complex and poorly organized logic slows down everyone who has to work with it. Sourcery makes those problems a thing of the past, giving you automatic refactoring recommendations in your IDE or text editor while you write (I even have it working in Emacs). It isn’t just another linting tool that nags you about issues. It’s like pair programming with a senior engineer, finding and applying structural improvements to your functions so that you can write cleaner code faster. Best of all, listeners of Podcast.__init__ get 6 months of their Pro tier for free if you go to pythonpodcast.com/sourcery today and use the promo code INIT when you sign up.
  • Your host as usual is Tobias Macey and today I’m interviewing Antti Karjalainen about the RPA Framework for automating your daily tasks and his work at Robocorp to manage your robots in production
  • Introductions
  • How did you get introduced to Python?
  • Can you start by giving an overview of what Robotic Process Automation is?
  • What are some of the ways that RPA might be used?
    • What are the advantages over writing a custom library or script in Python to automate a given task?
    • How does the functionality of RPA compare to automation services like Zapier, IFTTT, etc.?
  • What are you building at Robocorp and what was your motivation for starting the business?
    • Who is your target customer and how does that inform the products that you are building?
  • Can you give an overview of the state of the ecosystem for RPA tools and products and how Robocorp and RPA framework fit within it?
    • How does the RPA Framework relate to Robot Framework?
  • What are some of the challenges that developers and end users often run into when trying to build, use, or implement an RPA system?
  • How is the RPA framework itself implemented?
    • How has the design of the project evolved since you first began working on it?
  • Can you talk through an example workflow for building a robot?
  • Once you have built a robot, what are some of the considerations for local execution or deploying it to a production environment?
  • How can you chain together multiple robots?
  • What is involved in extending the set of operations available in the framework?
  • What are the available integration points for plugging a robot written with RPA Framework into another Python project?
  • What are the dividing lines between RPA Framework and Robocorp?
    • How are you handling the governance of the open source project?
  • What are some of the most interesting, innovative, or unexpected ways that you have seen RPA Framework and the Robocorp platform used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing RPA Framework and the Robocorp business?
  • When is RPA and RPA Framework the wrong choice for automation?
  • What do you have planned for the future of the framework and business?
Keep In Touch

Picks

Closing Announcements
  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers
  • Join the community in the new Zulip chat workspace at pythonpodcast.com/chat

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Categories: FLOSS Project Planets

Russell Coker: Yama

Planet Debian - Mon, 2021-04-12 19:00

I’ve just set up the Yama LSM module on some of my Linux systems. Yama controls ptrace, which is the debugging and tracing API for Unix systems. The aim is to prevent a compromised process from using ptrace to compromise other processes and cause more damage. In most cases a process which can ptrace another process (which usually means having capability SYS_PTRACE, IE being root, or having the same UID as the target process) can interfere with that process in other ways, such as modifying its configuration and data files. But even so I think it has the potential to make things more difficult for attackers without making the system more difficult to use.

If you put “kernel.yama.ptrace_scope = 1” in sysctl.conf (or write “1” to /proc/sys/kernel/yama/ptrace_scope) then a user process can only trace its child processes. This means that “strace -p” and “gdb -p” will fail when run as non-root, but apart from that everything else will work. Generally “strace -p” (tracing the system calls of another process) is of most use to the sysadmin, who can do it as root. The command “gdb -p” and variants of it are commonly used by developers, so Yama wouldn’t be a good choice on a system that is primarily used for software development.

Another option is “kernel.yama.ptrace_scope = 3”, which means no one can ptrace and the setting can’t be disabled without a reboot. This could be a good option for production servers that have no need for software development. It wouldn’t work well for a small server where the sysadmin needs to debug everything, but when dozens or hundreds of servers have their configuration rolled out via a provisioning tool this would be a good setting to include.

See Documentation/admin-guide/LSM/Yama.rst in the kernel source for the details.

When running with capability SYS_PTRACE (i.e. a root shell) you can ptrace anything else, and if necessary you can disable Yama by writing “0” to /proc/sys/kernel/yama/ptrace_scope.
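Putting these pieces together, checking and enabling mode 1 looks like this (commands run as root; the file name under /etc/sysctl.d/ is arbitrary):

```
# Check the current mode
cat /proc/sys/kernel/yama/ptrace_scope

# Enable mode 1 until the next reboot
echo 1 > /proc/sys/kernel/yama/ptrace_scope

# Make the setting persistent across reboots
echo 'kernel.yama.ptrace_scope = 1' > /etc/sysctl.d/99-yama.conf
sysctl -p /etc/sysctl.d/99-yama.conf
```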

I am enabling mode 1 on all my systems because I think it will make things harder for attackers while not making things more difficult for me.

Also note that SE Linux restricts SYS_PTRACE and also restricts cross-domain ptrace access, so the combination with Yama makes things extra difficult for an attacker.

Yama is enabled in the Debian kernels by default so it’s very easy to setup for Debian users, just edit /etc/sysctl.d/whatever.conf and it will be enabled on boot.


Steinar H. Gunderson: Squirrel!

Planet Debian - Mon, 2021-04-12 18:30

“All comments on this article will now be moderated. The bar to pass moderation will be high, it's really time to think about something else. Did you all see that we have an exciting article on spinlocks?” Poor LWN <3



Submit your talks now for Akademy 2021!

Planet KDE - Mon, 2021-04-12 18:27

As you can see in https://akademy.kde.org/2021/cfp the Call for Participation for Akademy 2021 (that will take place online from Friday the 18th to Friday the 25th of June) is already open.


You have until Sunday the 2nd of May 2021 23:59 UTC to submit your proposals, but you will make our (the talks committee's) lives much easier if you start sending proposals *now* instead of sending them all at the last minute in two weeks ;)

I promise I'll buy you a $preferred_beverage$ at next year's Akademy (which we're hoping will happen live) if you send a talk before the end of this week (and you send me a mail about it).


Paolo Amoroso: Free Python Books Went Viral on Hacker News

Planet Python - Mon, 2021-04-12 14:24

My Free Python Books list went viral on Hacker News, ending up on the home page within the first 2-3 entries for several hours.

Free Python Books on the home page of Hacker News.

Mike Andreuzza shared the project’s link to Hacker News on April 10, 2021. Since then the post gathered 154 upvotes. The Free Python Books GitHub repository jumped to almost 700 stars and 80 forks (up from about 95 stars and 20 forks before), reached almost 15K views from over 8K visitors, and went trending on GitHub.

This attention brought new contributions to the project as 3 authors submitted their books and another user reported a broken link. Two people even sent me donations (thanks for the coffee!).

A plot of the views (green) and unique visitors (blue) of the Free Python Books GitHub repository when the project was featured on Hacker News.

Although I had interacted online with Mike a number of times, his submission to Hacker News came out of the blue and was a complete, pleasant surprise for me.

Free Python Books is a project I began when first approaching the language. Books are my preferred learning resources, so I started maintaining a list of the many good free works I run across.

Curation is another learning tool and the list is also a reference source for me.

My deepest thanks to Mike and the many users who appreciate the project. If you haven’t already, check out Free Python Books.

This post by Paolo Amoroso was published on Moonshots Beyond the Cloud.


PythonClub - A Brazilian collaborative blog about Python: Object orientation in another way: Classes and objects

Planet Python - Mon, 2021-04-12 14:00

In the few and very rare livestreams I did on Twitch, the idea came up of writing about object-oriented programming in Python, mainly because of some differences in how it was implemented in this language. Taking advantage of the theme, I will write a series of posts giving a different view of object orientation. And in this first post I will talk about classes and objects.

Using a dictionary

However, before starting with object orientation, I would like to present and discuss some examples that do not use this programming paradigm.

Thinking of a system that needs to handle people's data, Python dictionaries can be used to group a person's data into a single variable, as in the example below:

pessoa = {
    'nome': 'João',
    'sobrenome': 'da Silva',
    'idade': 20,
}

The data can then be accessed through the variable and the name of the desired field, like so:

print(pessoa['nome'])  # Prints João

This way, all of a person's data is grouped in a single variable, which makes programming much easier: there is no need to create one variable per field, and when handling the data of several people it is much easier to tell which person a value refers to, simply by using different variables.

A function to create the dictionary

Practical as this is, the dictionary structure has to be replicated every time you want to use the data of a new person. To avoid repeating code, the creation of this dictionary can be done inside a function placed in a pessoa module (a file, in this case named pessoa.py):

# File: pessoa.py
def nova(nome, sobrenome, idade):
    return {
        'nome': nome,
        'sobrenome': sobrenome,
        'idade': idade,
    }

To create the dictionary that represents a person, just import this module (file) and call the nova function:

import pessoa

p1 = pessoa.nova('João', 'da Silva', 20)
p2 = pessoa.nova('Maria', 'dos Santos', 18)

This guarantees that every dictionary representing a person has the desired fields, properly filled in.

Functions using the dictionary

It is also possible to create functions that perform operations on the data in these dictionaries, such as getting the person's full name, changing their surname, or having a birthday (which increases the person's age by one year):

# File: pessoa.py
def nova(nome, sobrenome, idade):
    ...  # Code abbreviated

def nome_completo(pessoa):
    return f"{pessoa['nome']} {pessoa['sobrenome']}"

def trocar_sobrenome(pessoa, sobrenome):
    pessoa['sobrenome'] = sobrenome

def fazer_aniversario(pessoa):
    pessoa['idade'] += 1

Used as:

import pessoa

p1 = pessoa.nova('João', 'da Silva', 20)
pessoa.trocar_sobrenome(p1, 'dos Santos')
print(pessoa.nome_completo(p1))
pessoa.fazer_aniversario(p1)
print(p1['idade'])

Note that every function implemented here follows the same pattern: it receives the dictionary representing the person as its first argument (with further arguments as needed, or none) and reads or changes that dictionary's values.

An object-oriented version

Before getting to the properly object-oriented version of the previous examples, I will make a small change to ease later understanding. The nova function will be split in two parts: the first creates a dictionary and calls a second function (init), which receives that dictionary as its first argument (following the pattern of the other functions) and fills in its structure with the proper values.

# File: pessoa.py
def init(pessoa, nome, sobrenome, idade):
    pessoa['nome'] = nome
    pessoa['sobrenome'] = sobrenome
    pessoa['idade'] = idade

def nova(nome, sobrenome, idade):
    pessoa = {}
    init(pessoa, nome, sobrenome, idade)
    return pessoa

...  # Remaining functions in the file

But this does not change how it is used:

import pessoa

p1 = pessoa.nova('João', 'da Silva', 20)

A function to create a person

Most programming languages that support the object-oriented programming paradigm use classes to define the structure of objects. Python also uses classes, which are defined with the class keyword followed by a name. Inside this structure, functions can be defined to manipulate objects of that class; in some languages these are also called methods (functions declared within the scope of a class).

To convert the dictionary into a class, the first step is to implement a function that creates the desired structure. This function must be named __init__, and it is quite similar to the init function of the previous code:

class Pessoa:
    def __init__(self, nome, sobrenome, idade):
        self.nome = nome
        self.sobrenome = sobrenome
        self.idade = idade

The differences are that the first parameter is now called self, which is the convention in Python, and that instead of brackets and quotes, a dot and the name of the desired field are used to access the data (here the field can also be called an attribute, since it is a variable of the object). The nova function implemented earlier is no longer necessary: the language itself creates the object and passes it as the first argument to __init__. So to create an object of the Pessoa class, just call the class as if it were a function, ignoring the self argument and supplying the others, as if calling __init__ directly:

p1 = Pessoa('João', 'da Silva', 20)

In this case, since the class itself creates a separate context for the functions (a scope, or namespace), separate files are no longer being used, although that is still possible with the appropriate import. For simplicity, both the declaration of the class and the creation of a Pessoa object can live in the same file, as can the remaining examples in this post.

Other functions

The other functions written earlier for the dictionary can also be defined in the Pessoa class, following the same differences already pointed out:

class Pessoa:
    def __init__(self, nome, sobrenome, idade):
        self.nome = nome
        self.sobrenome = sobrenome
        self.idade = idade

    def nome_completo(self):
        return f'{self.nome} {self.sobrenome}'

    def trocar_sobrenome(self, sobrenome):
        self.sobrenome = sobrenome

    def fazer_aniversario(self):
        self.idade += 1

To call these functions, just access them through the class context, passing the previously created object as the first argument:

p1 = Pessoa('João', 'da Silva', 20)
Pessoa.trocar_sobrenome(p1, 'dos Santos')
print(Pessoa.nome_completo(p1))
Pessoa.fazer_aniversario(p1)
print(p1.idade)

This syntax is quite similar to the earlier version without object orientation. When working with objects, however, these functions can be called with a different syntax: the object first, followed by a dot and the name of the desired function, and the object no longer needs to be passed as the first argument. Since the function is called through an object, Python itself takes care of passing it to the self parameter, so only the remaining arguments need to be supplied:

p1.trocar_sobrenome('dos Santos')
print(p1.nome_completo())
p1.fazer_aniversario()
print(p1.idade)

There are some differences between the two syntaxes, but those will be covered later. For now, the second syntax can be seen as syntactic sugar for the first, that is, a quicker and easier way of doing the same thing, and therefore the recommended one.
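The equivalence of the two call forms can be checked directly. A minimal sketch, reusing the Pessoa class defined above:

```python
class Pessoa:
    def __init__(self, nome, sobrenome, idade):
        self.nome = nome
        self.sobrenome = sobrenome
        self.idade = idade

    def nome_completo(self):
        return f'{self.nome} {self.sobrenome}'

p1 = Pessoa('João', 'da Silva', 20)

# Calling through the class and calling through the object
# produce exactly the same result:
assert Pessoa.nome_completo(p1) == p1.nome_completo()
print(p1.nome_completo())  # João da Silva
```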


As seen in the examples, object-oriented programming is a technique for grouping variables into a single structure and for easing the writing of functions that follow a certain pattern, receiving that structure as an argument; the syntax most used in Python to call an object's functions (methods), however, places the variable holding the structure before the function name, rather than passing it as the first argument.

In Python, the structure or object argument (self) appears explicitly as the first parameter of the function, while in other languages that variable may have a different name (such as this) and does not appear explicitly among the function's parameters, even though it has to exist within the function's context so the object can be manipulated.
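Since self is only the first parameter of the function, the name itself is just a convention. A small illustration (the Contador class and the name este are arbitrary examples, not from the original post):

```python
class Contador:
    # 'self' is just a naming convention: any identifier works as the
    # first parameter, although 'self' is what Python style guides recommend.
    def __init__(este, valor):
        este.valor = valor

    def incrementar(este):
        este.valor += 1

c = Contador(10)
c.incrementar()
print(c.valor)  # 11
```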

This article was originally published on my blog; stop by there, or follow me on DEV to see more articles I have written.


Evolving Web: Drupal Modules our Team Loves, 2021 Edition

Planet Drupal - Mon, 2021-04-12 13:23

It's been a while since we've written a round-up of must-have modules (the last one was back in 2018), so I asked the Evolving Web team about some of their favourites they've been using lately.

Here are a few staples that can benefit pretty much any of your Drupal 9 projects. Add your favourites down in the comments!

TL;DR: Essential Drupal 9 modules Crop API

The Crop API module adds functionality to Drupal’s built-in image tools by allowing editors to crop images according to how they’re used. No more cut-off faces in your thumbnail cards!

Note that to use Crop API, you’ll need a UI module like Image widget crop or Focal point. Our team uses the latter.

“I like the fact that we can give editors control over which part of the image to focus on.”

- Robert Ngo, Evolving Web developer

Not quite what you were looking for? There are several alternatives to Crop API, which you can read about on Drupal.org: Comparison of image cropping and resizing modules.

Reroute Email

This module “intercepts all outgoing emails from a Drupal site and reroutes them to a predefined configurable email address”. 

In other words, if you want to send a test email that doesn’t actually make it to users, the Reroute Email module gives you an easy way to do it. 

Rabbit Hole

Rabbit Hole lets you control how content types are displayed on their own page.

For example, if you have a certain content type that should never be displayed on its own page, you can use Rabbit Hole to display an Access denied message should a user attempt to access its node.

Config Pages

Save time and cut down on coding whenever you need to create a custom page. The Config Pages module lets you create rich page types that your content editors can easily modify via custom fields and drop-downs.


Mailgun

Drupal’s Mailgun module provides integration with the open-source, developer-focused Mailgun email service. The service, which “uses REST APIs to effortlessly send, receive and track emails”, is a mainstay in our team.

“If I need to send emails, this is my go-to.”

- Ivan Doroshenko, Evolving Web developer

Admin Toolbar

If you’re creating a new Drupal 9 site, this should probably be the first module you install. Admin Toolbar lets you access all of Drupal’s admin pages via a convenient mega-menu, saving you countless clicks. 


Pathauto

The Pathauto module helps you keep your URL aliases clean and consistent by automatically generating them according to your desired parameters.


Redirect

Never worry about forgetting a redirect again. If a content editor changes a page’s URL alias, the Redirect module will automatically implement an appropriate redirect.


Metatag

If you’ve ever been disappointed by the way a piece of content looked when you shared it on social media, this must-have module is for you. It lets you customize things like image format and description text across various social media snippet types, so your content looks the way you want it to whether it’s being viewed on Twitter or on Facebook.

For a look at the Metatag module in action, check out our trainer Trevor’s video on perfecting social media previews.

What Drupal 9 module can you not live without?

We’re working on a crowd-sourced list of essential Drupal 9 modules.

Leave a comment with your favourite for a chance to be featured in an upcoming article!
