Feeds
The Drop Times: Practice Makes Perfect
If you are a good swimmer, you will remember how difficult it was, in the beginning, to practice bubbling: the art of releasing air bubbles little by little to prolong your time underwater without surfacing like a bloated balloon or letting water into your nostrils, so that you could pick up a coin or a ring from the bottom of the pool in one quick motion.
At least a few times, water would have gotten into your nasal passage, and you would have felt that chill in your head. Then, suddenly, the trainer would tell you to float holding your breath without bubbling, to start kicking with your head down and hands straight, to start exhaling while kicking and turn your head to take a breath, and then to fashion your arms for maximum thrust. Some cardio exercises might be recommended to increase your lung capacity.
These are all exercises to make you a natural swimmer, utilizing buoyancy and minimal friction against the current. But now, when you swim, you swim. You don't particularly remember bubbling, turning your head to inhale, or keeping your upper body motionless while utilizing your legs alone to push and hands to pull.
The above is true of everything we master. It is the baby steps that matter. And once you are thorough, it comes naturally to you. Reaching that level takes patience and practice.
DrupalCamp NJ and NERD Summit are around the corner. Local DUGs have announced several meetups. Drupal organizations have started training sessions. Now is the right time to get to practice, to immerse oneself in the pool of Drupal.
The best story we published last week was the interview with DrupalCamp NJ speaker John Jameson. He speaks in detail about digital accessibility.
Drupal Netherlands has announced DrupalJam 2023: Connected on June 01, 2023, at De Fabrique in Utrecht.
DrupalCamp Asheville has called for training sessions, and you have time until March 28 to submit the topics for the July 7 event. With their neurodiversity initiative, you have multiple options to present your sessions. DrupalCamp Asheville has also requested help to organize the camp remotely. The DrupalCamp NJ schedule is live now. The DrupalSouth paper submission deadline was today and might have ended already.
Meanwhile, Drupal Brisbane's March 02 event got canceled, although Melbourne Drupal Meetup scheduled for March 08 is on track. DrupalCon Pittsburgh Trivia Night was seeking sponsors. DrupalCon Lille has called for volunteers.
Last Thursday, AmyJune Hineline presented a workshop, "Beyond 99 Red Balloons: A Guide to Alternative Text," at Women Who Code's Connect Empower event. On March 15, 12 pm ET, she will present a webinar for Design4Drupal Boston about Accessible Presentations.
Drupal Bangalore will hold a Meetup on March 18, 2023. Drupal Pune had a meetup on March 3, 2023. Central NJ Web Developers are holding a meetup on March 10, 2023.
We did a sneak peek into Axelerant's guide on accessibility compliance acts and standards. Another blog post we went through, a slightly dated one by Hounder.co, was about the need to migrate Drupal 7 sites to the newest version. The Acquia Digital Freedom Tour is coming to New York as part of Acquia Engage on March 21, 2023. On March 23, they will promote Acquia DAM (Widen) in Los Angeles. Evolving Web has announced training for developers on Drupal Development Workflows from April 03 to April 05, 2023. A11yTalks will discuss the next generation of automated testing tomorrow (March 7).
That is for this week. Thank you.
Sebin A. Jacob
Editor-In-Chief
PyBites: Learning to code is a lot like learning a language
This content was first sent as part of our friends email list. You can subscribe here to get this type of content early and fresh every week …
Let me tell you the story of how I effectively learned foreign languages (there are a lot of parallels with how I became a developer).
I was not a good language learner when I was young; heck, I even flunked 6th grade in part because my English was terrible!
But being bad at something does not mean it’s game over; I was just using the wrong “software”.
My approach was flawed: I tried to memorize things, almost learning from a dictionary, highly inefficient and not fun.
Learning practical things like languages, driving a car or programming doesn’t work like that.
Everything changed when I was introduced to “interrailing“. For those who don’t know, it’s a 3-4 week train pass that lets you travel through an area of (in my case) Europe. Apart from having a well-deserved break over summer, I always used those trips as an opportunity to learn the languages of the countries I visited, mainly French, Spanish and Italian.
Once I decided I would move to Spain that summer, I went all in. I travelled through Spain and fully immersed myself in the language. I tried to speak it with as many people as I could, constantly noted down words I kept forgetting, and bought local newspapers, forcing myself to read as much as I could (not understanding much yet).
After a few weeks of constant deliberate practice – we also covered this on the podcast here and here – I reached a tipping point: I became conversational in a language in which, a few months prior, I could not even say “Hola, ¿qué tal?”
Flash forward 10 or so years and I got on a similar mammoth mission to learn to program, because when I saw the power of automating the boring stuff I knew this would be a game changer for my career. It would be a fun one too, coder’s delight is real!
But… learning to program almost from scratch (at most I was fluent in Excel) is hard.
But I applied the same techniques as in my language learning journey: constant deliberate practice, failing my way forward. The tutorials got boring pretty fast (paralysis by tutorial, anyone?!), things were not clicking, and many times I wanted to give up!
Until I started to build projects and apply what I learned. It was still hard, and I got stuck constantly, but it was my way of fully immersing myself in the learning.
And over time, I had great success. The first code I wrote was really ugly, but I got a tool working, that ultimately would become a staple in the organization I worked for. “Done > perfect” is so true and something you seriously need to embrace if you want to succeed as a developer.
More importantly, building bigger projects I cared about started doing three things for me:
- I inevitably ran into a lot of design issues, things you usually are not confronted with in the “safe” zone of tutorials.
- I got completely hooked because building software was so much more tangible. I was creating a product, something that could help people and that I could talk about later when I had to qualify as a programmer (especially because I did not have a formal CS degree).
- By building complete solutions without being a developer by training, over time I dared to call myself a developer. Without relevant projects shipped, this would never have happened. Tutorial purgatory is insidious: you not only waste a lot of time, there is actually a “0 to 1” issue, and you won’t bridge the chasm!
So if you take one thing away from this email, let it be that if you want to become a developer and want to get there relatively fast, find one or more projects you can fully immerse yourself in.
Don’t worry about design patterns or overly planning everything when you start. Doing so will keep you stuck.
Have people look at your code. Live and breathe code in the context of your apps. Build in public, ship fast.
Have people use your code; there is no better learning than your code hitting the real world (as Tyson said: “everybody has a plan till they get hit in the face”). It changes everything, and you will grow.
I hope my story inspired you and made you realize you might be “two inches removed from gold”. It is possible, using the right approach.
– Bob
What took us years to learn we have distilled in our PDM program, which will significantly shortcut the time it takes to become a proficient developer.
Check out how it works and what people achieve working with us here. For more information about our uniquely practical approach check out this training.
Promet Source: How to Leverage Load Testing to Scale up a Drupal Site
Kubuntu Manual 22.04.2 Release
Hello everyone! It’s a great day with a new release of the Kubuntu Manual to match the recently-released Kubuntu 22.04.2 update. Thank you to the community members who provided feedback and filed bug reports in our GitHub project! You can find the new releases either on the GitHub releases page or our support page.
Real Python: What's a Python Namespace Package, and What's It For?
Python namespace packages are an advanced Python feature. You may have heard them mentioned in relation to the __init__.py file. Specifically, if you don’t include at least an empty __init__.py file in your package, then your package becomes a namespace package.
For the most part, namespace packages and regular packages won’t behave differently when it comes to using them in your project. In fact, you’ve probably accidentally forgotten to include an __init__.py file in at least one package but didn’t notice any side effects. While namespace packages are generally a bit slower to import than regular packages, you won’t usually run into many issues.
In this tutorial, you’ll dive into what Python namespace packages are, where they come from, why they’re created when you don’t include an __init__.py file, and when they might be useful.
Along the way, you’ll get the chance to make your own namespace package structures, and you’ll install and extend an experimental namespace package from PyPI.
Python namespace packages are probably going to be useful for people who manage or architect collections of related packages. That said, you’ll get to experiment with a project that can make namespace packages more accessible to the average user.
This is an advanced tutorial, and to get the most out of it, you should be very familiar with the basics of Python and comfortable with the import system, as well as having some exposure to packaging.
So, what’s a Python namespace package, and what’s it for?
Free Source Code: Click here to download the free source code that you’ll use to explore Python namespace packages.
In Short: Python Namespace Packages Are a Way to Organize Multiple Packages
By way of a quick recap, you’ll first examine the general concept of a Python namespace before tackling namespace packages. A namespace is a way to group objects under a specific name. You can group values, functions, and other objects.
For example, when you import math, you gain access to the math namespace. Inside the math namespace, you can select from a whole host of different objects.
You can also think of a Python dictionary as a namespace. With a Python dictionary, you can take two variables that started out as completely separate, independent variables and include them within the same dictionary namespace:
>>> real_python = {}
>>> home_page = "https://realpython.com"
>>> import_tutorial = "python-import"
>>> real_python["home_page"] = home_page
>>> real_python["import_tutorial"] = import_tutorial

Now, you can reference both the home_page and import_tutorial values from the real_python namespace. Namespace packages work in a similar way, but they combine whole packages instead of values or other Python objects.
This allows you to have two independent packages on PyPI, for example, that still share the same namespace. One single namespace package doesn’t make a whole lot of sense. To really see the advantage of namespace packages, you need at least two packages.
Namespace packages are typically used by businesses that may have extensive libraries of packages that they want to keep under a company namespace. For example, the Microsoft Azure packages are all accessible, once installed, through the azure namespace.
That’s why you’ll create your own company namespace in the next section.
What Does a Namespace Package Look Like?
Imagine you work for the Snake Corporation, and your team wants to make sure that all the packages in its library are always accessible from the snake_corp namespace. So, no matter what package you’re using, as long as it was made by the Snake Corporation, you’ll import from snake_corp.
Without namespace packages, you’d have two options:
- Create a monorepo, which would be a single package called snake_corp with hundreds of modules for all the different libraries and utilities that you might need.
- Create various packages, but prefix them all with snake_corp. For example, you might have snake_corp_dateutil as a package.
The trouble with creating a monorepo is that everyone has to download all the code even if they only use a tiny fraction of it. It also complicates matters in terms of version management and other packaging workflows, especially if different teams are in charge of different subpackages.
The issue with creating various packages with a common prefix is that it can be quite verbose, messy, and inelegant. And, at the end of the day, the Snake Corporation CEO has said that they don’t like that solution. They’d prefer using a monorepo over prefixing all the packages. Besides, using common prefixes is just a convention that doesn’t technically create a common namespace.
What you have in this situation is a perfect use case for namespace packages! Namespace packages allow you to have multiple separate packages with their own packaging workflow, but they can all live in the same snake_corp namespace.
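To make this concrete, here is a small runnable sketch of that setup. The snake_corp name, the subpackages, and their contents are all made up for illustration; the script fakes two independently installed distributions by writing them into a temporary directory and putting both on sys.path, the way pip would arrange two site-packages entries.

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()

# Two independently shipped distributions, each contributing a regular
# subpackage to the shared snake_corp namespace. Note that neither one
# contains a snake_corp/__init__.py -- that absence is exactly what makes
# snake_corp a (PEP 420) namespace package.
dateutil_dir = os.path.join(root, "dist1", "snake_corp", "dateutil")
magic_dir = os.path.join(root, "dist2", "snake_corp", "magic_numbers")
os.makedirs(dateutil_dir)
os.makedirs(magic_dir)

with open(os.path.join(dateutil_dir, "__init__.py"), "w") as f:
    f.write("WEEKDAYS = 7\n")
with open(os.path.join(magic_dir, "__init__.py"), "w") as f:
    f.write("ANSWER = 42\n")

# Put both "site-packages" directories on sys.path, as pip would.
sys.path[:0] = [os.path.join(root, "dist1"), os.path.join(root, "dist2")]

# Both subpackages import from the single snake_corp namespace, even
# though they live in two unrelated directories.
from snake_corp import dateutil, magic_numbers

print(dateutil.WEEKDAYS)     # 7
print(magic_numbers.ANSWER)  # 42
```

The key detail is what is missing: there is no snake_corp/__init__.py in either directory, so the import system merges both directories into one namespace package whose __path__ spans both distributions.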
Note: In the following examples, you’ll be creating a bunch of packages. You’ll note that the containing folders and the package names vary depending on whether you’re referring to the name for when you pip install or import, for example. Check out this table for a rough breakdown of what’s generally used:
Purpose       Typical Casing   Example
PyPI and pip  Kebab            charset-normalizer
Import        Snake            charset_normalizer
Prose         Pascal or none   Charset Normalizer

In the following examples, you’ll be using these conventions. But be aware that these conventions aren’t universal.
Read the full article at https://realpython.com/python-namespace-package/ »
Python for Beginners: Pandas Insert Columns into a DataFrame in Python
We use a pandas dataframe to store and manipulate tabular data in Python. In this article, we will discuss how to insert a new column into a pandas dataframe using the insert() method.
Table of Contents
- The Pandas insert() Method
- Pandas Insert a Column at The Beginning of a DataFrame
- Insert Column at The End of a DataFrame in Python
- Pandas Insert Column at a Specific Index in a DataFrame
- Conclusion
The insert() method is used to insert a column into a dataframe at a specific position. It has the following syntax.
DataFrame.insert(loc, column, value, allow_duplicates=_NoDefault.no_default)

Here,
- The loc parameter takes the index at which the new column is inserted as its input argument.
- The column parameter takes the column name as its input.
- The value parameter takes a list or pandas series as values for the specified column.
- The allow_duplicates parameter is used to decide if we can insert duplicate column names into the dataframe. By default, the insert() method raises a ValueError exception if the dataframe contains a column with the same name that we are trying to insert. If you want to insert duplicate column names into the pandas dataframe, you can set the allow_duplicates parameter to True.
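As a quick illustration of the allow_duplicates behavior described above (a small sketch with made-up data):

```python
import pandas as pd

df = pd.DataFrame({"Roll": [1, 2], "Maths": [100, 80]})

# By default, inserting a column name that already exists raises ValueError.
try:
    df.insert(1, "Maths", [90, 70])
except ValueError as error:
    print("Insert refused:", error)

# With allow_duplicates=True, the duplicate column name is accepted.
df.insert(1, "Maths", [90, 70], allow_duplicates=True)
print(list(df.columns))  # ['Roll', 'Maths', 'Maths']
```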
To insert a column at the beginning of a dataframe, we can use the insert() method. Here, we will set the loc parameter to 0 so that the new column is inserted at the beginning. You can observe this in the following example.
import pandas as pd

myDicts = [{"Roll": 1, "Maths": 100, "Physics": 80, "Chemistry": 90},
           {"Roll": 2, "Maths": 80, "Physics": 100, "Chemistry": 90},
           {"Roll": 3, "Maths": 90, "Physics": 80, "Chemistry": 70},
           {"Roll": 4, "Maths": 100, "Physics": 100, "Chemistry": 90},
           {"Roll": 5, "Maths": 90, "Physics": 90, "Chemistry": 80},
           {"Roll": 6, "Maths": 80, "Physics": 70, "Chemistry": 70}]
df = pd.DataFrame(myDicts)
print("The original dataframe is:")
print(df)
df.insert(0, "Name", ["Aditya", "Joel", "Sam", "Chris", "Riya", "Anne"])
print("The modified dataframe is:")
print(df)

Output:

The original dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
     Name  Roll  Maths  Physics  Chemistry
0  Aditya     1    100       80         90
1    Joel     2     80      100         90
2     Sam     3     90       80         70
3   Chris     4    100      100         90
4    Riya     5     90       90         80
5    Anne     6     80       70         70

In this example, we first converted a list of dictionaries to a dataframe using the DataFrame() function. Then, we inserted the Name column into the created dataframe at index 0 using the insert() method. For this, we passed the value 0 as the first input argument, the string "Name" as the second input argument, and the list of values as the third input argument to the insert() method.
Insert Column at The End of a DataFrame in Python
To insert a column at the end of the dataframe, we can directly assign the column values to the column name in the dataframe as shown below.
import pandas as pd

myDicts = [{"Roll": 1, "Maths": 100, "Physics": 80, "Chemistry": 90},
           {"Roll": 2, "Maths": 80, "Physics": 100, "Chemistry": 90},
           {"Roll": 3, "Maths": 90, "Physics": 80, "Chemistry": 70},
           {"Roll": 4, "Maths": 100, "Physics": 100, "Chemistry": 90},
           {"Roll": 5, "Maths": 90, "Physics": 90, "Chemistry": 80},
           {"Roll": 6, "Maths": 80, "Physics": 70, "Chemistry": 70}]
df = pd.DataFrame(myDicts)
print("The original dataframe is:")
print(df)
df["Name"] = ["Aditya", "Joel", "Sam", "Chris", "Riya", "Anne"]
print("The modified dataframe is:")
print(df)

Output:

The original dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name
0     1    100       80         90  Aditya
1     2     80      100         90    Joel
2     3     90       80         70     Sam
3     4    100      100         90   Chris
4     5     90       90         80    Riya
5     6     80       70         70    Anne

In the above example, we have used the indexing operator to insert a new column at the end of a dataframe.
Instead of the above approach, we can also use the insert() method to insert a column at the end. For this, we will use the following steps.
- First, we will obtain the list of column names using the columns attribute of the dataframe. The columns attribute contains a list of column names.
- Next, we will use the len() function to find the total number of columns in the dataframe. Let it be numCol.
- Once we get the number of columns in the dataframe, we know that the current columns occupy positions 0 to numCol-1. Hence, we will insert the new column into the dataframe at index numCol using the insert() method.
After execution of the above steps, we can insert a column at the end of the dataframe as shown in the following example.
import pandas as pd

myDicts = [{"Roll": 1, "Maths": 100, "Physics": 80, "Chemistry": 90},
           {"Roll": 2, "Maths": 80, "Physics": 100, "Chemistry": 90},
           {"Roll": 3, "Maths": 90, "Physics": 80, "Chemistry": 70},
           {"Roll": 4, "Maths": 100, "Physics": 100, "Chemistry": 90},
           {"Roll": 5, "Maths": 90, "Physics": 90, "Chemistry": 80},
           {"Roll": 6, "Maths": 80, "Physics": 70, "Chemistry": 70}]
df = pd.DataFrame(myDicts)
print("The original dataframe is:")
print(df)
numCol = len(df.columns)
df.insert(numCol, "Name", ["Aditya", "Joel", "Sam", "Chris", "Riya", "Anne"])
print("The modified dataframe is:")
print(df)

Output:

The original dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name
0     1    100       80         90  Aditya
1     2     80      100         90    Joel
2     3     90       80         70     Sam
3     4    100      100         90   Chris
4     5     90       90         80    Riya
5     6     80       70         70    Anne

Pandas Insert Column at a Specific Index in a DataFrame
To insert a column at a specific position in the dataframe, you can use the insert() method as shown below.
import pandas as pd

myDicts = [{"Roll": 1, "Maths": 100, "Physics": 80, "Chemistry": 90},
           {"Roll": 2, "Maths": 80, "Physics": 100, "Chemistry": 90},
           {"Roll": 3, "Maths": 90, "Physics": 80, "Chemistry": 70},
           {"Roll": 4, "Maths": 100, "Physics": 100, "Chemistry": 90},
           {"Roll": 5, "Maths": 90, "Physics": 90, "Chemistry": 80},
           {"Roll": 6, "Maths": 80, "Physics": 70, "Chemistry": 70}]
df = pd.DataFrame(myDicts)
print("The original dataframe is:")
print(df)
df.insert(2, "Name", ["Aditya", "Joel", "Sam", "Chris", "Riya", "Anne"])
print("The modified dataframe is:")
print(df)

Output:

The original dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths    Name  Physics  Chemistry
0     1    100  Aditya       80         90
1     2     80    Joel      100         90
2     3     90     Sam       80         70
3     4    100   Chris      100         90
4     5     90    Riya       90         80
5     6     80    Anne       70         70

In this example, we have inserted the Name column at index 2 of the input dataframe using the insert() method.
Conclusion
In this article, we discussed different ways to insert a column into a pandas dataframe. To learn more about Python programming, you can read this article on how to create an empty dataframe in Python. You might also like this article on working with XML files in Python.
I hope you enjoyed reading this article. Stay tuned for more informative articles.
Happy Learning!
The post Pandas Insert Columns into a DataFrame in Python appeared first on PythonForBeginners.com.
Mike Driscoll: PyDev of the Week: Janos Gabler
This week we welcome Janos Gabler (@JanosGabler) as our PyDev of the Week! Janos is the creator of estimagic, a Python package for nonlinear optimization. You can catch up with Janos on his website or by checking out Janos’ GitHub Profile.
Let’s spend some time getting to know Janos better!
Can you tell us a little about yourself (hobbies, education, etc.)?
I am Janos. I live in Bonn, Germany, where I did a PhD in economics. I am now a postdoc at the University of Bonn and teach “Effective Programming Practices for Economists” and “Scientific Computing”.
My contract runs until October. I am currently deciding what I want to do next. Most likely, I will be looking for jobs in AI or the scientific Python ecosystem, but there is a slight chance of founding a startup.
While I try to avoid yak-shaving at work, I fully embrace it in my hobbies. For example, I like baking, which eventually led me to build my own wood-fired brick oven. I also enjoy woodworking, and the bookshelf I am currently building required me to learn to weld so that I could construct a big bandsaw out of scrap metal, which I needed to resaw the boards for the shelf.
Why did you start using Python?
I started using Python in 2015 for empirical research (using pandas and statsmodels). I had no previous experience in any other programming language and did not expect programming to be something I would enjoy. This changed very quickly!
I was lucky to attend “Effective programming practices for Economists” (the class I am teaching now) right at the beginning of my programming journey. This introduced me to git, unit testing and best practices.
The projects quickly became more challenging. There was a short period when I regretted picking a “slow language”, but I quickly learned how to get around that. First with Cython, then Numba and nowadays JAX.
What other programming languages do you know and which is your favorite?
It speaks for Python that I don’t know any other programming language well. I have some experience in Fortran, Matlab, C, and R, but I did all my computational projects during my PhD in Python.
I guess this also answers the question of which language is my favorite?
Having said that, I enjoy reading code in other languages and would like to learn a functional language like Haskell when I find the time.
What projects are you working on now?
My main focus is deep learning and natural language processing, and I am interested in how AI can make us more productive. In a few years, scientists and programmers will use very different tools than now and will be vastly more effective. Things like GitHub Copilot are just the start. I want to be part of that process, either by working on better language models or by integrating language models into next-generation tools.
On the side, I continue working on estimagic together with amazing contributors. The goal of estimagic is to enable scientists who are not experts in numerical optimization to solve challenging optimization problems in practice. They should not have to care too much about selecting algorithms or setting their tuning parameters. We are therefore developing new algorithms that are more adaptive and automatically adjust some tuning parameters that previously had to be specified by a user.
Which Python libraries are your favorite (core or 3rd party)?
You mean besides estimagic? There are so many libraries I really like and use a lot:
One of my absolute favorites is JAX. First, it gives you automatic differentiation, JIT compilation, and GPU acceleration almost for free if you know numpy. But it does not stop there. vmap lets you vectorize functions, so you can write simple functions that are easy to test and vectorize them later. And due to pytrees (think of them as nested dictionaries if you haven’t heard the term), you can use quite flexible data structures in places where the math (and most libraries) expect one-dimensional numpy arrays.
Pytask is a workflow management system for reproducible research inspired by pytest. It is really easy to use, especially if you already know pytest, and I have used it for all my research projects in my PhD.
One of my favorite core libraries is inspect. It lets you check the signatures of functions at runtime. So if you are wrapping functions, you can look at their signature and call them with the correct arguments.
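A tiny sketch of that wrapping pattern (the function and variable names here are hypothetical, not estimagic's actual API): inspect.signature tells the wrapper which arguments a user-supplied function declares, so the wrapper can pass only those.

```python
import inspect

def call_with_supported_args(func, candidates):
    """Call func with only those keyword arguments its signature declares."""
    params = inspect.signature(func).parameters
    kwargs = {name: value for name, value in candidates.items() if name in params}
    return func(**kwargs)

# Two user-supplied criterion functions with different signatures.
def criterion(x):
    return x ** 2

def criterion_with_data(x, data):
    return x ** 2 + data

# The wrapper picks the right subset of arguments for each function.
available = {"x": 3, "data": 1, "unused": "ignored"}
print(call_with_supported_args(criterion, available))            # 9
print(call_with_supported_args(criterion_with_data, available))  # 10
```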
I am also continuously amazed by the foundational libraries numpy and scipy. None of the things I do would be possible in Python without them. I first really appreciated this at last year’s scipy conference. BTW: I’ll be at the scipy conference in Austin again and happy to chat!
What’s the origin story of estimagic?
In computational economics, we encounter a lot of challenging optimization problems, either to solve economic models or to fit their parameters to data. There were many good open-source optimizers, but they were scattered across different libraries. Most of them forced me to put start parameters into an unlabeled array, making it hard to see which parameter was which. Few provided error handling, logging, and other convenience features.
Estimagic is based on the insight that all of these features can be added to existing optimizers by wrapping them, i.e., without modifying their source code. I wrote a very rudimentary prototype in 2019 where parameters could be provided as a pandas DataFrame (so the index provided names), constraints could be implemented via reparametrization, and the optimization could be monitored in real-time in a dashboard. It was horrible but good enough to excite some people about the idea. Together we built estimagic into something better than I ever would have imagined.
What are some of the most unusual scientific models you have seen estimagic used for?
The most unusual application I heard of was not a scientific model. While giving a tutorial on estimagic, I met an aerospace engineer who works on flying taxis that can take off and land vertically. I love the idea that there might be a flying taxi that contains a part optimized with estimagic!
Is there anything else you’d like to say?
I encourage everyone who uses a small open-source library to contact the authors and provide feedback. As a user, you often get a feature you want or a fix for free. As a maintainer, you get a chance to make your library better. And while you are at it, give them a star on GitHub.
Thanks so much for doing the interview, Janos!
The post PyDev of the Week: Janos Gabler appeared first on Mouse Vs Python.
Salsa Digital Drupal-Related Articles: BenefitMe — coding NZ’s Social Security Act (Rules as Code)
A month as KDE Software Platform Engineer
Precisely one month ago I joined KDE e.V., the non-profit organization behind KDE, as Software Platform Engineer. This is one of three positions in KDE’s “Make a living” initiative.
The exact scope of this position is a bit vague. I like to describe it as “Taking care of everything needed so that people can build and enjoy awesome software”. A large part of that is taking care of foundational libraries such as Qt and KDE Frameworks, but it can be really anything that helps people do awesome things. This is pretty much what I’ve been doing as a volunteer for the last couple of years anyway.
So what have I been up to this past month? A lot, but also not a lot that’s worth mentioning individually right now. As you probably know, we are heading full steam towards using Qt6 for all our products. This is something that started almost four years ago (and I’ve been involved from the start) and is growing ever closer to being finished. Last week we switched Plasma master to use Qt6 exclusively, completing an important milestone for the whole transition. This involved a ton of small to medium-sized changes and fixes across the stack.
Instead of listing all the changes I have done as part of that let’s focus on the outcome instead: I’m typing this post running on a full Plasma session running against Qt6. There are still some rough edges, but overall it’s quite usable already. Definitely enough to get involved and hack on it. I’d show you a screenshot, but that would be pretty boring, it looks exactly the same as before!
So what does the future hold? The transition towards Qt6/KF6 is going to stay my focus for a while, but once that settles down I’m going to focus on other areas of our software platform eventually. If you have ideas for what it would make sense for me to focus on please get in touch.
This position is enabled and financed by KDE e.V. To allow me to keep this position in the long term (and perhaps even hire additional people), please consider donating to KDE e.V.
Python Does What?!: Annotation Inheritance
Type annotations in Python are mostly a static declaration to a type checker like mypy or pyright about the expected types. However, they are also a dynamic data structure which a growing number of libraries, such as the original attrs, dataclasses in the standard library, and even sqlalchemy, use at runtime.

>>> from dataclasses import dataclass
>>>
>>> @dataclass
... class C:
...     a: int
...     b: str
...
>>> C(1, "a")
C(a=1, b='a')

These libraries inspect the annotations of a class to generate __init__ and __eq__, saving a lot of boilerplate code. You could call this type of API "named tuple without the tuple". (To get meta, the typing module has added dataclass_transform, which libraries can use to properly annotate new class decorators with this API.)
These libraries support inheritance of fields.

>>> @dataclass
... class D(C):
...     e: int
...
>>> D(1, "a", 2)
D(a=1, b='a', e=2)

Type checkers also consider class annotations to be inherited. For example, mypy considers this to be correct:

class A:
    a: int

class B(A): pass

B().a

That code fails at runtime, because nothing is actually setting a on the B instance. But what if B was a dataclass?

>>> class A:
...     a: int
...
>>> @dataclass
... class B(A):
...     pass
...
>>> B(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() takes 1 positional argument but 2 were given

It doesn't work, because annotations are not inherited.

>>> A.__annotations__
{'a': <class 'int'>}
>>> B.__annotations__
{}

It's up to the library to look up its inheritance tree and decide whether to include the annotations of parents when generating code. As it happens, dataclasses has made the design decision to only inherit annotations from other dataclasses.
As an aside, class variables which are used to represent default values are inherited.

>>> class A:
...     a = 1
...
>>> @dataclass
... class B(A):
...     a: int
...
>>> B()
B(a=1)
We can write another decorator which grabs annotations from parents and adds them in method resolution order, as if they were inherited.

def inherit_annotations(cls):
    annotations = {}
    # iterate the MRO in reverse order so children override parents
    for parent in cls.__mro__[::-1]:
        # use getattr(): not everything has __annotations__ (e.g. object)
        annotations.update(getattr(parent, "__annotations__", {}))
    cls.__annotations__.update(annotations)
    return cls

Since all dataclasses sees is the __annotations__ dict at runtime, any modifications made before the class decorator runs will be reflected in the generated fields.

>>> @dataclass
... @inherit_annotations
... class B(A): pass
...
>>> B(1)
B(a=1)

Here's a robustified version of the function.
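The linked "robustified" version is not reproduced in this feed. As a sketch of what such a hardening might look like (a hypothetical variant, not the author's exact code): it reads each class's own annotations via __dict__ and assigns a fresh dict rather than updating in place, since on some Python versions a class with no annotations of its own exposes its parent's dict, and mutating that would corrupt the parent class.

```python
def inherit_annotations(cls):
    """Copy annotations from the whole MRO onto cls, children overriding parents.

    Hypothetical hardened sketch: use parent.__dict__ to read only each
    class's own annotations, and assign a fresh dict instead of mutating
    cls.__annotations__, which may be shared with a parent class.
    """
    annotations = {}
    for parent in cls.__mro__[::-1]:  # reverse MRO: children override parents
        annotations.update(parent.__dict__.get("__annotations__", {}))
    cls.__annotations__ = annotations  # assign, never mutate a shared dict
    return cls
```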
I know what you're thinking though: why not just use multiple class decorators? Sure, all but one of the generated __init__s will be overwritten, but that's fine because they all have the same behavior anyway. import attr
from dataclasses import dataclass
@dataclass
@attr.define
class DualCitizen:
a: int
@dataclass
class Dataclassified(DualCitizen):
pass
@attr.define
class Attrodofined(DualCitizen):
pass Looks like perfectly normal class definitions. >>> DualCitizen(1)
DualCitizen(a=1)
>>> Dataclassified(1)
Dataclassified(a=1)
>>> Attrodofined(1)
Attrodofined(1) And it works.
So, type-checkers consider annotations to be inherited, but class decorators which use annotations at runtime only inherit annotations from ancestors with the same decorator. We can work around this either by multiply decorating the ancestors, or by pulling annotations from ancestors into __annotations__.
The Drop Times: Drupal Related Events This Week
Colorfield: Visual regression testing for Drupal migrations with Playwright
Vincent Bernat: DDoS detection and remediation with Akvorado and Flowspec
Akvorado collects sFlow and IPFIX flows, stores them in a ClickHouse database, and presents them in a web console. Although it lacks built-in DDoS detection, it’s possible to create one by crafting custom ClickHouse queries.
DDoS detection

Let's assume we want to detect DDoS attacks targeting our customers. As an example, we consider a DDoS attack as a collection of flows over one minute targeting a single customer IP address, from a single source port, and matching one of these conditions:
- an average bandwidth of 1 Gbps,
- an average bandwidth of 200 Mbps when the protocol is UDP,
- more than 20 source IP addresses and an average bandwidth of 100 Mbps, or
- more than 10 source countries and an average bandwidth of 100 Mbps.
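These thresholds translate directly into a boolean predicate. As a sketch (names are illustrative, mirroring the SQL query the article builds):

```python
def is_ddos(gbps: float, proto: str, sources: int, countries: int) -> bool:
    """Return True when a one-minute aggregate matches any attack condition."""
    return (
        gbps > 1                                # 1 Gbps on average
        or (proto == "UDP" and gbps > 0.2)      # 200 Mbps of UDP
        or (sources > 20 and gbps > 0.1)        # many sources at 100 Mbps
        or (countries > 10 and gbps > 0.1)      # many countries at 100 Mbps
    )
```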
Here is the SQL query to detect such attacks over the last 5 minutes:
SELECT *
FROM (
    SELECT
        toStartOfMinute(TimeReceived) AS TimeReceived,
        DstAddr,
        SrcPort,
        dictGetOrDefault('protocols', 'name', Proto, '???') AS Proto,
        SUM(((((Bytes * SamplingRate) * 8) / 1000) / 1000) / 1000) / 60 AS Gbps,
        uniq(SrcAddr) AS sources,
        uniq(SrcCountry) AS countries
    FROM flows
    WHERE TimeReceived > now() - INTERVAL 5 MINUTE
      AND DstNetRole = 'customers'
    GROUP BY TimeReceived, DstAddr, SrcPort, Proto
)
WHERE (Gbps > 1)
   OR ((Proto = 'UDP') AND (Gbps > 0.2))
   OR ((sources > 20) AND (Gbps > 0.1))
   OR ((countries > 10) AND (Gbps > 0.1))
ORDER BY TimeReceived DESC, Gbps DESC

Here is an example output1 where two of our users are under attack. One from what looks like an NTP amplification attack, the other from a DNS amplification attack:

TimeReceived         DstAddr               SrcPort  Proto  Gbps   sources  countries
2023-02-26 17:44:00  ::ffff:203.0.113.206  123      UDP    0.102  109      13
2023-02-26 17:43:00  ::ffff:203.0.113.206  123      UDP    0.130  133      17
2023-02-26 17:43:00  ::ffff:203.0.113.68   53       UDP    0.129  364      63
2023-02-26 17:43:00  ::ffff:203.0.113.206  123      UDP    0.113  129      21
2023-02-26 17:42:00  ::ffff:203.0.113.206  123      UDP    0.139  50       14
2023-02-26 17:42:00  ::ffff:203.0.113.206  123      UDP    0.105  42       14
2023-02-26 17:40:00  ::ffff:203.0.113.68   53       UDP    0.121  340      65

DDoS remediation

Once detected, there are at least two ways to stop the attack at the network level:
- blackhole the traffic to the targeted user (RTBH), or
- selectively drop packets matching the attack patterns (Flowspec).
The easiest method is to sacrifice the attacked user. While this helps the attacker, this protects your network. It is a method supported by all routers. You can also offload this protection to many transit providers. This is useful if the attack volume exceeds your internet capacity.
This works by advertising with BGP a route to the attacked user with a specific community. The border router modifies the next hop address of these routes to a specific IP address configured to forward the traffic to a null interface. RFC 7999 defines 65535:666 for this purpose. This is known as a “remote-triggered blackhole” (RTBH) and is explained in more detail in RFC 3882.
It is also possible to blackhole the source of the attacks by leveraging unicast Reverse Path Forwarding (uRPF) from RFC 3704, as explained in RFC 5635. However, uRPF can be a serious tax on your router resources. See “NCS5500 uRPF: Configuration and Impact on Scale” for an example of the kind of restrictions you have to expect when enabling uRPF.
On the advertising side, we can use BIRD. Here is a complete configuration file to allow any router to collect them:
log stderr all;
router id 192.0.2.1;

protocol device {
  scan time 10;
}
protocol bgp exporter {
  ipv4 {
    import none;
    export where proto = "blackhole4";
  };
  ipv6 {
    import none;
    export where proto = "blackhole6";
  };
  local as 64666;
  neighbor range 192.0.2.0/24 external;
  multihop;
  dynamic name "exporter";
  dynamic name digits 2;
  graceful restart yes;
  graceful restart time 0;
  long lived graceful restart yes;
  long lived stale time 3600;  # keep routes for 1 hour!
}
protocol static blackhole4 {
  ipv4;
  route 203.0.113.206/32 blackhole {
    bgp_community.add((65535, 666));
  };
  route 203.0.113.68/32 blackhole {
    bgp_community.add((65535, 666));
  };
}
protocol static blackhole6 {
  ipv6;
}

We use BGP long-lived graceful restart to ensure routes are kept for one hour, even if the BGP connection goes down, notably during maintenance.
On the receiver side, if you have a Cisco router running IOS XR, you can use the following configuration to blackhole traffic received on the BGP session. As the BGP session is dedicated to this usage, the community is not used, but you can also forward these routes to your transit providers.
router static
 vrf public
  address-family ipv4 unicast
   192.0.2.1/32 Null0 description "BGP blackhole"
  !
  address-family ipv6 unicast
   2001:db8::1/128 Null0 description "BGP blackhole"
  !
 !
!
route-policy blackhole_ipv4_in_public
  if destination in (0.0.0.0/0 le 31) then
    drop
  endif
  set next-hop 192.0.2.1
  done
end-policy
!
route-policy blackhole_ipv6_in_public
  if destination in (::/0 le 127) then
    drop
  endif
  set next-hop 2001:db8::1
  done
end-policy
!
router bgp 12322
 neighbor-group BLACKHOLE_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv4_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
  address-family ipv6 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv6_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
 !
 vrf public
  neighbor 192.0.2.1
   use neighbor-group BLACKHOLE_IPV4_PUBLIC
   description akvorado-1

When the traffic is blackholed, it is still reported by IPFIX and sFlow. In Akvorado, use ForwardingStatus >= 128 as a filter.
While this method is compatible with all routers, it makes the attack successful as the target is completely unreachable. If your router supports it, Flowspec can selectively filter flows to stop the attack without impacting the customer.
Flowspec

Flowspec is defined in RFC 8955 and enables the transmission of flow specifications in BGP sessions. A flow specification is a set of matching criteria to apply to IP traffic. These criteria include the source and destination prefix, the IP protocol, the source and destination port, and the packet length. Each flow specification is associated with an action, encoded as an extended community: traffic shaping, traffic marking, or redirection.
To announce flow specifications with BIRD, we extend our configuration. The extended community used shapes the matching traffic to 0 bytes per second.
flow4 table flowtab4;
flow6 table flowtab6;

protocol bgp exporter {
  flow4 {
    import none;
    export where proto = "flowspec4";
  };
  flow6 {
    import none;
    export where proto = "flowspec6";
  };
  # […]
}
protocol static flowspec4 {
  flow4;
  route flow4 {
    dst 203.0.113.68/32;
    sport = 53;
    length >= 1476 && <= 1500;
    proto = 17;
  }{
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
  route flow4 {
    dst 203.0.113.206/32;
    sport = 123;
    length = 468;
    proto = 17;
  }{
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
}
protocol static flowspec6 {
  flow6;
}

If you have a Cisco router running IOS XR, the configuration may look like this:
vrf public
 address-family ipv4 flowspec
 address-family ipv6 flowspec
!
router bgp 12322
 address-family vpnv4 flowspec
 address-family vpnv6 flowspec
 neighbor-group FLOWSPEC_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
  address-family ipv6 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
 !
 vrf public
  address-family ipv4 flowspec
  address-family ipv6 flowspec
  neighbor 192.0.2.1
   use neighbor-group FLOWSPEC_IPV4_PUBLIC
   description akvorado-1

Then, you need to enable Flowspec on all interfaces with:

flowspec
 vrf public
  address-family ipv4
   local-install interface-all
  !
  address-family ipv6
   local-install interface-all
  !
 !
!

As with the RTBH setup, you can filter dropped flows with ForwardingStatus >= 128.
DDoS detection (continued)

In the example using Flowspec, the flows were also filtered on the length of the packet:

route flow4 {
  dst 203.0.113.68/32;
  sport = 53;
  length >= 1476 && <= 1500;
  proto = 17;
}{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
};

This is an important addition: legitimate DNS requests are smaller than this and therefore not filtered. With ClickHouse, you can get the 10th and 90th percentiles of the packet sizes with quantiles(0.1, 0.9)(Bytes/Packets).
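For example, a hypothetical exploration query (assuming the same flows schema as the detection query above) to pick sensible length bounds per target:

```sql
-- Illustrative only: packet-size percentiles per destination and source port,
-- to choose the length bounds of a Flowspec rule.
SELECT
    DstAddr,
    SrcPort,
    quantiles(0.1, 0.9)(Bytes/Packets) AS size
FROM flows
WHERE TimeReceived > now() - INTERVAL 5 MINUTE
  AND DstNetRole = 'customers'
GROUP BY DstAddr, SrcPort
```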
The last issue we need to tackle is how to optimize the query: it may take several seconds to collect the data and is likely to consume substantial resources from your ClickHouse database. One solution is to create a materialized view to pre-aggregate the results:
CREATE TABLE ddos_logs (
    TimeReceived DateTime,
    DstAddr IPv6,
    Proto UInt32,
    SrcPort UInt16,
    Gbps SimpleAggregateFunction(sum, Float64),
    Mpps SimpleAggregateFunction(sum, Float64),
    sources AggregateFunction(uniqCombined(12), IPv6),
    countries AggregateFunction(uniqCombined(12), FixedString(2)),
    size AggregateFunction(quantiles(0.1, 0.9), UInt64)
)
ENGINE = SummingMergeTree
PARTITION BY toStartOfHour(TimeReceived)
ORDER BY (TimeReceived, DstAddr, Proto, SrcPort)
TTL toStartOfHour(TimeReceived) + INTERVAL 6 HOUR DELETE;

CREATE MATERIALIZED VIEW ddos_logs_view TO ddos_logs AS
    SELECT
        toStartOfMinute(TimeReceived) AS TimeReceived,
        DstAddr,
        Proto,
        SrcPort,
        sum(((((Bytes * SamplingRate) * 8) / 1000) / 1000) / 1000) / 60 AS Gbps,
        sum(((Packets * SamplingRate) / 1000) / 1000) / 60 AS Mpps,
        uniqCombinedState(12)(SrcAddr) AS sources,
        uniqCombinedState(12)(SrcCountry) AS countries,
        quantilesState(0.1, 0.9)(toUInt64(Bytes/Packets)) AS size
    FROM flows
    WHERE DstNetRole = 'customers'
    GROUP BY TimeReceived, DstAddr, Proto, SrcPort

The ddos_logs table uses the SummingMergeTree engine. When the table receives new data, ClickHouse replaces all the rows with the same sorting key, as defined by the ORDER BY directive, with one row which contains summarized values, using either the sum() function or the explicitly specified aggregate function (uniqCombined and quantiles in our example).2
Finally, we can modify our initial query with the following one:

SELECT *
FROM (
    SELECT
        TimeReceived,
        DstAddr,
        dictGetOrDefault('protocols', 'name', Proto, '???') AS Proto,
        SrcPort,
        sum(Gbps) AS Gbps,
        sum(Mpps) AS Mpps,
        uniqCombinedMerge(12)(sources) AS sources,
        uniqCombinedMerge(12)(countries) AS countries,
        quantilesMerge(0.1, 0.9)(size) AS size
    FROM ddos_logs
    WHERE TimeReceived > now() - INTERVAL 60 MINUTE
    GROUP BY TimeReceived, DstAddr, Proto, SrcPort
)
WHERE (Gbps > 1)
   OR ((Proto = 'UDP') AND (Gbps > 0.2))
   OR ((sources > 20) AND (Gbps > 0.1))
   OR ((countries > 10) AND (Gbps > 0.1))
ORDER BY TimeReceived DESC, Gbps DESC

Gluing everything together

To sum up, building an anti-DDoS system requires following these steps:
- define a set of criteria to detect a DDoS attack,
- translate these criteria into SQL requests,
- pre-aggregate flows into SummingMergeTree tables,
- query and transform the results to a BIRD configuration file, and
- configure your routers to pull the routes from BIRD.
A Python script like the following one can handle the fourth step. For each attacked target, it generates both a Flowspec rule and a blackhole route.
import logging
import os
import socket
import subprocess
import types

from clickhouse_driver import Client as CHClient

# Put your SQL query here!
SQL_QUERY = "…"
# How many anti-DDoS rules do we want at the same time?
MAX_DDOS_RULES = 20

logging.basicConfig()
logger = logging.getLogger("anti-ddos")


def samefile(path1, path2):
    # compare file contents (small helper, not shown in the flattened feed)
    with open(path1) as f1, open(path2) as f2:
        return f1.read() == f2.read()


def empty_ruleset():
    ruleset = types.SimpleNamespace()
    ruleset.flowspec = types.SimpleNamespace()
    ruleset.blackhole = types.SimpleNamespace()
    ruleset.flowspec.v4 = []
    ruleset.flowspec.v6 = []
    ruleset.blackhole.v4 = []
    ruleset.blackhole.v6 = []
    return ruleset


current_ruleset = empty_ruleset()

client = CHClient(host="clickhouse.akvorado.net")
while True:
    results = client.execute(SQL_QUERY)
    seen = {}
    new_ruleset = empty_ruleset()
    for (t, addr, proto, port, gbps, mpps, sources, countries, size) in results:
        if (addr, proto, port) in seen:
            continue
        seen[(addr, proto, port)] = True

        # Flowspec
        if addr.ipv4_mapped:
            address = addr.ipv4_mapped
            rules = new_ruleset.flowspec.v4
            table = "flow4"
            mask = 32
            nh = "proto"
        else:
            address = addr
            rules = new_ruleset.flowspec.v6
            table = "flow6"
            mask = 128
            nh = "next header"
        if size[0] == size[1]:
            length = f"length = {int(size[0])}"
        else:
            length = f"length >= {int(size[0])} && <= {int(size[1])}"
        header = f"""
# Time: {t}
# Source: {address}, protocol: {proto}, port: {port}
# Gbps/Mpps: {gbps:.3}/{mpps:.3}, packet size: {int(size[0])}<=X<={int(size[1])}
# Sources: {sources}, countries: {countries}
"""
        rules.append(
            f"""{header}
route {table} {{
  dst {address}/{mask};
  sport = {port};
  {length};
  {nh} = {socket.getprotobyname(proto)};
}}{{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
}};
"""
        )

        # Blackhole
        if addr.ipv4_mapped:
            rules = new_ruleset.blackhole.v4
        else:
            rules = new_ruleset.blackhole.v6
        rules.append(
            f"""{header}
route {address}/{mask} blackhole {{
  bgp_community.add((65535, 666));
}};
"""
        )

    new_ruleset.flowspec.v4 = list(
        set(new_ruleset.flowspec.v4[:MAX_DDOS_RULES])
    )
    new_ruleset.flowspec.v6 = list(
        set(new_ruleset.flowspec.v6[:MAX_DDOS_RULES])
    )

    # TODO: advertise changes by mail, chat, ...

    current_ruleset = new_ruleset
    changes = False
    for rules, path in (
        (current_ruleset.flowspec.v4, "v4-flowspec"),
        (current_ruleset.flowspec.v6, "v6-flowspec"),
        (current_ruleset.blackhole.v4, "v4-blackhole"),
        (current_ruleset.blackhole.v6, "v6-blackhole"),
    ):
        path = os.path.join("/etc/bird/", f"{path}.conf")
        with open(f"{path}.tmp", "w") as f:
            for r in rules:
                f.write(r)
        changes = (
            changes
            or not os.path.exists(path)
            or not samefile(path, f"{path}.tmp")
        )
        os.rename(f"{path}.tmp", path)

    if not changes:
        continue

    proc = subprocess.Popen(
        ["birdc", "configure"],
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = proc.communicate(None)
    stdout = stdout.decode("utf-8", "replace")
    stderr = stderr.decode("utf-8", "replace")
    if proc.returncode != 0:
        logger.error(
            "{} error:\n{}\n{}".format(
                "birdc reconfigure",
                "\n".join(
                    [" O: {}".format(line) for line in stdout.rstrip().split("\n")]
                ),
                "\n".join(
                    [" E: {}".format(line) for line in stderr.rstrip().split("\n")]
                ),
            )
        )

Until Akvorado integrates DDoS detection and mitigation, the ideas presented in this blog post provide a solid foundation to get started with your own anti-DDoS system. 🛡️
Python GUIs: Working With Classes in Python and PyQt
Python supports object-oriented programming (OOP) through classes, which allow you to bundle data and behavior in a single entity. Python classes allow you to quickly model concepts by creating representations of real objects that you can then use to organize your code.
Most of the currently available GUI frameworks for Python developers, such as PyQt, PySide, and Tkinter, rely on classes to provide apps, windows, widgets, and more. This means that you'll be actively using classes for designing and developing your GUI apps.
In this tutorial, you'll learn how OOP and classes work in Python. This knowledge will allow you to quickly grasp how GUI frameworks are internally organized, how they work, and how you can use their classes and APIs to create robust GUI applications.
Defining Classes in Python

Python classes are templates or blueprints that allow us to create objects through instantiation. These objects will contain data representing the object's state, and methods that will act on that data, providing the object's behavior.
Instantiation is the process of creating instances of a class by calling the class constructor with appropriate arguments.
Attributes and methods make up what is known as the class interface or API. This interface allows us to operate on the objects without needing to understand their internal implementation and structure.
Alright, it is time to start creating our own classes. We'll start by defining a Color class with minimal functionality. To do that in Python, you'll use the class keyword followed by the class name. Then you provide the class body in the next indentation level:
>>> class Color:
...     pass
...
>>> red = Color()
>>> type(red)
<class '__main__.Color'>

In this example, we defined our Color class using the class keyword. This class is empty. It doesn't have attributes or methods. Its body only contains a pass statement, which is Python's way to do nothing.

Even though the class is minimal, it allows us to create instances by calling its constructor, Color(). So, red is an instance of Color. Now let's make our Color class more fun by adding some attributes.
Adding Class and Instance Attributes

Python classes allow you to add two types of attributes: class attributes and instance attributes. A class attribute belongs to its containing class. Its data is common to the class and all its instances. To access a class attribute, we can use either the class or any of its instances.
Let's now add a class attribute to our Color class. For example, say that we need to keep track of how many instances of Color your code creates. Then you can have a color_count attribute:
>>> class Color:
...     color_count = 0
...     def __init__(self):
...         Color.color_count += 1
...
>>> red = Color()
>>> green = Color()
>>> Color.color_count
2
>>> red.color_count
2

Now Color has a class attribute called color_count that gets incremented every time we create a new instance. We can quickly access that attribute using either the class directly or one of its instances, like red.
To follow up with this example, say that we want to represent our Color objects using red, green, and blue attributes as part of the RGB color model. These attributes should have specific values for specific instances of the class. So, they should be instance attributes.
To add an instance attribute to a Python class, you must use the .__init__() special method, which we introduced in the previous code but didn't explain. This method works as the instance initializer because it allows you to provide initial values for instance attributes:
>>> class Color:
...     color_count = 0
...     def __init__(self, red, green, blue):
...         Color.color_count += 1
...         self.red = red
...         self.green = green
...         self.blue = blue
...
>>> red = Color(255, 0, 0)
>>> red.red
255
>>> red.green
0
>>> red.blue
0
>>> Color.red
Traceback (most recent call last):
  ...
AttributeError: type object 'Color' has no attribute 'red'

Cool! Now our Color class looks more useful. It has the usual class attribute and also three new instance attributes. Note that, unlike class attributes, instance attributes can't be accessed through the class itself. They're specific to a concrete instance.
There's something that jumps into sight in this new version of Color. What is the self argument in the definition of .__init__()? This argument holds a reference to the current instance. Using the name self to identify the current instance is a strong convention in Python.

We'll use self as the first or even the only argument to instance methods like .__init__(). Inside an instance method, we'll use self to access other methods and attributes defined in the class. To do that, we must prepend self to the name of the target attribute or method.
For example, our class has an attribute .red that we can access using the syntax self.red inside the class. This will return the number stored under that name. From outside the class, you need to use a concrete instance instead of self.
Providing Behavior With Methods

A class bundles data (attributes) and behavior (methods) together in an object. You'll use the data to set the object's state and the methods to operate on that data or state.
Methods are just functions that we define inside a class. Like functions, methods can take arguments, return values, and perform different computations on an object's attributes. They allow us to make our objects usable.
In Python, we can define three types of methods in our classes:
- Instance methods, which need the instance (self) as their first argument
- Class methods, which take the class (cls) as their first argument
- Static methods, which take neither the class nor the instance
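As a compact sketch of the three kinds side by side (the sections below build this up step by step):

```python
class Color:
    """Sketch of the three method kinds on the tutorial's Color class."""
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red, self.green, self.blue = red, green, blue

    def as_tuple(self):               # instance method: operates on self
        return self.red, self.green, self.blue

    @classmethod
    def from_tuple(cls, rgb):         # class method: receives the class
        return cls(*rgb)

    @staticmethod
    def color_says(message):          # static method: neither self nor cls
        print(message)
```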
Let's now talk about instance methods. Say that we need to get the attributes of our Color class as a tuple of numbers. In this case, we can add an .as_tuple() method like the following:
class Color:
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue

This new method is pretty straightforward. Since it's an instance method, it takes self as its first argument. Then it returns a tuple containing the attributes .red, .green, and .blue. Note how you need to use self to access the attributes of the current instance inside the class.
This method may be useful if you need to iterate over the RGB components of your color objects:
>>> red = Color(255, 0, 0)
>>> red.as_tuple()
(255, 0, 0)
>>> for level in red.as_tuple():
...     print(level)
...
255
0
0

Our as_tuple() method works great! It returns a tuple containing the RGB components of our color objects.
We can also add class methods to our Python classes. To do this, we need to use the @classmethod decorator as follows:
class Color:
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue

    @classmethod
    def from_tuple(cls, rgb):
        return cls(*rgb)

The from_tuple() method takes a tuple object containing the RGB components of a desired color as an argument, creates a valid color object from it, and returns the object back to the caller:
>>> blue = Color.from_tuple((0, 0, 255))
>>> blue.as_tuple()
(0, 0, 255)

In this example, we use the Color class to access the class method from_tuple(). We can also access the method using a concrete instance of this class. However, in both cases, we'll get a completely new object.
Finally, Python classes can also have static methods that we can define with the @staticmethod decorator:
class Color:
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue

    @classmethod
    def from_tuple(cls, rgb):
        return cls(*rgb)

    @staticmethod
    def color_says(message):
        print(message)

Static methods don't operate on either the current instance self or the current class cls. These methods can work as independent functions. However, we typically put them inside a class when they are related to the class, and we need to have them accessible from the class and its instances.
Here's how the method works:
>>> Color.color_says("Hello from the Color class!")
Hello from the Color class!
>>> red = Color(255, 0, 0)
>>> red.color_says("Hello from the red instance!")
Hello from the red instance!

This method accepts a message and prints it on your screen. It works independently from the class or instance attributes. Note that you can call the method using the class or any of its instances.
Writing Getter & Setter Methods

Programming languages like Java and C++ rely heavily on setter and getter methods to retrieve and update the attributes of a class and its instances. These methods encapsulate an attribute, allowing us to get and change its value without directly accessing the attribute itself.
For example, say that we have a Label class with a text attribute. We can make text a non-public attribute and provide getter and setter methods to manipulate the attributes according to our needs:
class Label:
    def __init__(self, text):
        self.set_text(text)

    def text(self):
        return self._text

    def set_text(self, value):
        self._text = str(value)

In this class, the text() method is the getter associated with the ._text attribute, while the set_text() method is the setter for ._text. Note how ._text is a non-public attribute. We know this because it has a leading underscore in its name.
The setter method calls str() to convert any input value into a string. Therefore, we can call this method with any type of object. It will convert any input argument into a string, as you will see in a moment.
If you come from programming languages like Java or C++, you need to know Python doesn't have the notion of private, protected, and public attributes. In Python, you'll use a naming convention to signal that an attribute is non-public. This convention consists of adding a leading underscore to the attribute's name. Note that this naming pattern only indicates that the attribute isn't intended to be used directly. It doesn't prevent direct access, though.
This class works as follows:
>>> label = Label("Python!")
>>> label.text()
'Python!'
>>> label.set_text("PyQt!")
>>> label.text()
'PyQt!'
>>> label.set_text(123)
>>> label.text()
'123'

In this example, we create an instance of Label. The original text is passed to the class constructor, Label(), which automatically calls __init__() to set the value of ._text through the setter method set_text(). You can use text() to access the label's text and set_text() to update it. Remember that any input will be converted into a string, as we can see in the final example above.
Note that the Label class above is just a toy example; don't confuse it with similarly named classes from GUI frameworks like PyQt, PySide, and Tkinter.
The getter and setter pattern is pretty common in languages like Java and C++. Because PyQt and PySide are Python bindings to the Qt library, which is written in C++, you'll be using this pattern a lot in your Qt-based GUI apps. However, this pattern is less popular among Python developers. Instead, they use the @property decorator to hide attributes behind properties.
Here's how most Python developers will write their Label class:

class Label:
    def __init__(self, text):
        self.text = text

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = str(value)

This class defines .text as a property. This property has getter and setter methods. Python calls them automatically when we access the attribute or update its value in an assignment:
>>> label = Label("Python!")
>>> label.text
'Python!'
>>> label.text = "PyQt"
>>> label.text
'PyQt'
>>> label.text = 123
>>> label.text
'123'

Python properties allow you to add function behavior to your attributes while permitting you to use them as normal attributes instead of as methods.
Writing Special Methods

Python supports many special methods, also known as dunder or magic methods, that are part of its class mechanism. We can identify these methods because their names start and end with a double underscore, which is the origin of their other name: dunder methods.
These methods accomplish different tasks in Python's class mechanism. They all have a common feature: Python calls them automatically depending on the operation we run.
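For instance (a hypothetical Palette class, not from the tutorial), defining __len__ and __contains__ makes len() and the in operator work automatically:

```python
class Palette:
    """Hypothetical example: special methods hook into built-in operations."""
    def __init__(self, *colors):
        self.colors = list(colors)

    def __len__(self):                # called automatically by len(palette)
        return len(self.colors)

    def __contains__(self, color):    # called automatically by `color in palette`
        return color in self.colors
```

Calling len(palette) or writing "red" in palette now dispatches to these methods without any explicit call on our part.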
For example, all Python objects are printable. We can print them to the screen using the print() function. Calling print() internally falls back to calling the target object's __str__() special method:
>>> label = Label("Python!")
>>> print(label)
<__main__.Label object at 0x10354efd0>

In this example, we've printed our label object. This action provides some information about the object and the memory address where it lives. However, the actual output is not very useful from the user's perspective.
Fortunately, we can improve this by providing our Label class with an appropriate __str__() method:
class Label:
    def __init__(self, text):
        self.text = text

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = str(value)

    def __str__(self):
        return self.text

The __str__() method must return a user-friendly string representation for our objects. In this case, when we print an instance of Label to the screen, the label's text will be displayed:
>>> label = Label("Python!")
>>> print(label)
Python!

As you can see, Python takes care of calling __str__() automatically when we use the print() function to display our instances of Label.
Another special method that belongs to Python's class mechanism is __repr__(). This method returns a developer-friendly string representation of a given object. Here, developer-friendly implies that the representation should allow a developer to recreate the object itself.
class Label:
    def __init__(self, text):
        self.text = text

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = str(value)

    def __str__(self):
        return self.text

    def __repr__(self):
        return f"{type(self).__name__}(text='{self.text}')"

The __repr__() method returns a string representation of the current object. This string differs from what __str__() returns:
>>> label = Label("Python!")
>>> label
Label(text='Python!')

Now when you access the instance in your REPL session, you get a string representation of the current object. You can copy and paste this representation to recreate the object in an appropriate environment.
Reusing Code With Inheritance

Inheritance is an advanced topic in object-oriented programming. It allows you to create hierarchies of classes where each subclass inherits all the attributes and behaviors from its parent class or classes. Arguably, code reuse is the primary use case of inheritance.
That is, we code a base class with a given functionality and make that functionality available to its subclasses through inheritance. This way, we implement the functionality only once and reuse it in every subclass.
Python classes support single and multiple inheritance. For example, let's say we need to create a button class. This class needs .width and .height attributes that define its rectangular shape. The class also needs a label for displaying some informative text.
We can code this class from scratch, or we can use inheritance and reuse the code of our current Label class. Here's how to do this:
```python
class Button(Label):
    def __init__(self, text, width, height):
        super().__init__(text)
        self.width = width
        self.height = height

    def __repr__(self):
        return (
            f"{type(self).__name__}"
            f"(text='{self.text}', "
            f"width={self.width}, "
            f"height={self.height})"
        )
```

To inherit from a parent class in Python, we need to list the parent class or classes in the subclass definition. To do this, we use a pair of parentheses and a comma-separated list of parent classes. If we use several parent classes, then we're using multiple inheritance, which can be challenging to reason about.
The first line in __init__() calls the __init__() method on the parent class to properly initialize its .text attribute. To do this, we use the built-in super() function. Then we define the .width and .height attributes, which are specific to our Button class. Finally, we provide a custom implementation of __repr__().
Here's how our Button class works:
```python
>>> button = Button("Ok", 10, 5)
>>> button.text
'Ok'
>>> button.text = "Click Me!"
>>> button.text
'Click Me!'
>>> button.width
10
>>> button.height
5
>>> button
Button(text='Click Me!', width=10, height=5)
>>> print(button)
Click Me!
```

As you can conclude from this code, Button has inherited the .text attribute from Label. This attribute is completely functional. Our class has also inherited the __str__() method from Label. That's why we get the button's text when we print the instance.
Using Classes in PyQt GUI Apps

Everything we've learned so far about Python classes is the basis of our future work in GUI development. When it comes to working with PyQt, PySide, Tkinter, or any other GUI framework, we'll rely heavily on our knowledge of classes and OOP because most of these frameworks are based on classes and class hierarchies.
We'll now look at how to use inheritance to create some GUI-related classes. For example, when we create an application with PyQt or PySide, we usually have a main window. To create this window, we typically inherit from QMainWindow. The same pattern works in both PyQt6 and PySide6; only the import line changes.
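As a sketch of what such a subclass typically looks like (assuming PyQt6 is installed; actually running it requires a display, and the startup lines at the end are the usual application boilerplate):

```python
from PyQt6.QtWidgets import QApplication, QMainWindow

class Window(QMainWindow):
    def __init__(self):
        # Let QMainWindow set itself up before we add our own state
        super().__init__()
        self.setWindowTitle("My App")

# Usual startup boilerplate: create the application object,
# show the main window, and enter the event loop
app = QApplication([])
window = Window()
window.show()
app.exec()
```

With PySide6, only the first line changes to import from PySide6.QtWidgets instead.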
In the definition of our Window class, we use the QMainWindow class as the parent class. This tells Python that we want to define a class that inherits all the functionalities that QMainWindow provides.
We can continue adding attributes and methods to our Window class. Some of these attributes can be GUI widgets, such as labels, buttons, comboboxes, checkboxes, line edits, and many others. In PyQt, we can create all these GUI components using classes such as QLabel, QPushButton, QComboBox, QCheckBox, and QLineEdit.
All of them have their own sets of attributes and methods that we can use according to our specific needs when designing the GUI of a given application.
Wrapping Up Classes-Related Concepts

As we've seen, Python allows us to write classes that work as templates for creating concrete objects that bundle together data and behavior. The building blocks of Python classes are:
- Attributes, which hold the data in a class
- Methods, which provide the behaviors of a class
The attributes of a class define the class's data, while the methods provide the class's behaviors, which typically act on that data.
To better understand OOP and classes in Python, we should first discuss some terms that are commonly used in this aspect of Python development:
- Classes are blueprints or templates for creating objects -- just like a blueprint for creating a car, plane, house, or anything else. In programming, this blueprint will define the data (attributes) and behavior (methods) of the object and will allow us to create multiple objects of the same kind.
- Objects or Instances are the realizations of a class. We can create objects from the blueprint provided by the class. For example, you can create John's car from a Car class.
- Methods are functions defined within a class. They provide the behavior of an object of that class. For example, our Car class can have methods to start the engine, turn right and left, stop, and so on.
- Attributes are properties of an object or class. We can think of attributes as variables defined in a class or object. Therefore, we can have:
  - Class attributes, which are specific to a concrete class and common to all the instances of that class. You can access them either through the class or an object of that class. For example, if we're dealing with a single car manufacturer, then our Car class can have a manufacturer attribute that identifies it.
  - Instance attributes, which are specific to a concrete instance. You can access them through the specific instance. For example, our Car class can have attributes to store properties such as the maximum speed, the number of passengers, the car's weight, and so on.
- Instantiation is the process of creating an individual instance from a class. For example, we can create John's car, Jane's car, and Linda's car from our Car class through instantiation. In Python, this process runs through two steps:
  - Instance creation: Creates a new object and allocates memory for storing it.
  - Instance initialization: Initializes all the attributes of the current object with appropriate values.
- Inheritance is a mechanism of code reuse that allows us to inherit attributes and methods from one or multiple existing classes. In this context, we'll hear terms like:
  - Parent class: The class we're inheriting from. This class is also known as the superclass or base class. If we have one parent class, then we're using single inheritance. If we have more than one parent class, then we're using multiple inheritance.
  - Child class: The class that inherits from a given parent. This class is also known as the subclass.
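To ground these terms in code, here's a small sketch; the Car class and its attribute names are illustrative inventions, not part of any library:

```python
class Car:
    # Class attribute: shared by all instances of Car
    manufacturer = "Acme Motors"

    def __init__(self, owner, max_speed, passengers):
        # Instance attributes: specific to each individual object
        self.owner = owner
        self.max_speed = max_speed
        self.passengers = passengers

    def start_engine(self):
        # A method: behavior attached to Car objects
        return f"{self.owner}'s car is starting"

# Instantiation: each call creates and initializes a separate object
johns_car = Car("John", 180, 5)
janes_car = Car("Jane", 220, 2)

print(johns_car.manufacturer)   # shared class attribute
print(janes_car.manufacturer)   # same value through any instance
print(johns_car.start_engine())
print(johns_car.max_speed, janes_car.max_speed)  # per-instance values
```

Here manufacturer lives on the class and is shared, while owner, max_speed, and passengers live on each instance and can differ from object to object.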
Don't feel frustrated or bad if you don't understand all these terms immediately. They'll become more familiar as you use them in your own Python code. Many of our GUI tutorials make use of some or all of these concepts.
Conclusion

Now you know the basics of Python classes. You also learned fundamental concepts of object-oriented programming, such as inheritance, and saw that most GUI frameworks are heavily based on classes. Therefore, knowing about classes will open the door to building your own GUI app using PyQt, PySide, Tkinter, or any other GUI framework for Python.
For an in-depth guide to building GUIs with Python, see my PyQt6 book.
Codementor: Make your own Library in C Programming Language
Season of KDE 2023 With KDE Eco: Setting Up Selenium For Energy Consumption Measurements
I am very thankful to the KDE community for inviting me to be a part of this amazing Open Source project through their annual program Season of KDE.
About me

My name is Nitin Tejuja. I am a Software Engineer working on a platform engineering team where I develop, research, and optimize various technology projects. I have contributed to projects like Slack, GitHub Actions, and Strapi CMS, and I have created my own Node.js library that helps migrate content between different Strapi CMS environments. This is my first time contributing to a project with such a large community. I'm very passionate about solving problems and then optimizing the solutions. I enjoy learning new technologies and building excellent Open Source projects. I also believe that contributing to the KDE community is the best way to learn more about KDE projects and how we can produce environmentally-friendly Free & Open Source software.
What I Am Working On

In this project, we are setting up Selenium using Selenium AT-SPI and replicating an existing unit test written with KdeEcoTest to test the educational software suite GCompris, which provides a number of activities for children aged 2 to 10. KdeEcoTest and Selenium are both useful tools for emulating user behavior in Standard Usage Scenarios, represented as "Emulate Actions with Tool" in the workflow below.
Figure: Steps for preparing Standard Usage Scenario (SUS) scripts to measure the energy consumption of software. (Image from the KDE Eco Handbook, published under a CC-BY-SA-4.0 license.)

By measuring and then comparing the energy needed by KdeEcoTest and Selenium themselves, we can decide whether and how we should use Selenium for measuring the energy consumption of other software. Selenium AT-SPI for Qt is still at an early stage, but it is relevant to KDE as a unit testing tool, and given its flexibility it could become a great tool for energy consumption measurements.
I am writing a guide for installing selenium-webdriver-at-spi and for writing GCompris application scripts with it. The aim is to help developers create their own KDE application tests, either as a system testing tool or an energy measurement tool.
Motivation Behind Working On The KDE Eco Project

I have a strong interest in ecology and want to develop techniques that will reduce the impact of new technology on the planet. At the moment, KDE Eco is using KdeEcoTest for the energy consumption measurements of GCompris, among other applications. In this project, we will write unit tests of GCompris using Selenium. With Selenium we can access application elements by name and through many other features, which provides more flexibility than what KdeEcoTest is currently able to do. Moreover, Selenium works with Wayland, which is not the case with KdeEcoTest.
What I Have Done & Will Be Doing In The Coming Weeks

For the first two weeks of the project, I worked on understanding KdeEcoTest and exploring selenium-webdriver-at-spi before writing the unit test scripts.
In the first week I set up GCompris and explored the application's activities. I then turned to the KdeTestEcoCreator script to understand how the code in the FOSS Energy Efficiency Project (FEEP) repository works.
In the second week I started writing unit test scripts using selenium-webdriver-at-spi, for which I have written an installation guide. It is already possible to use this guide to install a Selenium accessibility server to test your own KDE application!
With everything set up, I am now learning how to write unit test scripts and have written my first Selenium Python script, which simply opens GCompris.
Figure: Opening GCompris with my first Selenium Python script. (Image from Nitin Tejuja, published under a CC-BY-SA-4.0 license.)

In the coming weeks, I will write unit test scripts in Python to perform full testing for several activities in GCompris, which will replicate the behavior already scripted using KdeEcoTest. To enable event handling on activity elements, I need to dive into GCompris's QML code and add QML accessibility code in order to locate and use the GCompris elements.
Once this is completed, both Selenium and KdeEcoTest can be used to measure the energy consumption of GCompris and it will be possible to compare the two emulation tools directly.
Community Bonding (SoK’23)

I am thankful to my mentors Emmanuel Charruau and Harald Sitter for taking time to help me by providing resources and resolving my doubts.
I am also thankful to you for taking the time to read this update. If you would like to access the scripts, they can be found here.
Contact me on Matrix at @nitin.tejuja12:matrix.org.
Valhalla's Things: Bookbinding: photo album
When I paint postcards I tend to start with a draft (usually on lightweight (250 g/m²) watercolour paper), then trace1 the drawing on blank postcards and paint it again.
I keep the drafts for a number of reasons; for the views / architectural ones I’m using a landscape photo album that I bought many years ago, but lately I’ve also sent a few cards with my historical outfits to people who like to be kept updated on that, and I wanted a different book for those, both for better organization and to be able to keep them in the portrait direction.
If you know me, you can easily guess that buying one wasn’t considered as an option.
Since I’m not going to be writing on the pages, I decided to use a relatively cheap 200 g/m² linoprint paper with a nice feel, and I’ve settled on a B6 size (before trimming) to hold A6 postcard drafts.
For the binding I decided to use a technique I learned from a craft book ages ago that doesn’t use tapes, and added a full hard cover in dark grey linen-feel2 paper. For the end-papers I used some random sheets of light blue paper (probably around 100-something g/m²), and that’s the thing where I could have done better, but they work.
Up to now there wasn’t anything I hadn’t done before; what was new was that this book was meant to hold things between the pages, and I needed to provide space for them.
After looking on the internet for solutions, I settled on adding spacers by making a signature composed of paper - spacer - paper - spacer, with the spacers being 2 cm wide, folded in half.
And then, between finishing binding the book and making the cover I utterly forgot to add the head bands. Argh. It’s not the first time I make this error.
I’m happy enough with the result. There are things that are easy to improve on in the next iteration (endpapers and head bands), and something in me is not 100% happy that the spacers aren’t placed between every sheet: there are places with no spacer and places with two of them. But I can’t think of (and couldn’t find) a way to do it otherwise with a sewn book, unless I sew each individual sheet, which sounds way too bulky (the album I’m using for the landscapes was glued, but I didn’t really want to go that way).
The size is smaller than the other one I was using and doesn’t leave a lot of room around the paintings, but that isn’t necessarily a bad thing, because it also means less wasted space.
I believe that one of my next projects will be another similar book in a landscape format, for those postcard drafts that are neither landscapes nor clothing related.
And then maybe another? or two? or…
Traceback (most recent call last):
TooManyProjectsError: project queue is full

1. yes, trace. I can’t draw. I have too many hobbies to spend the required amount of time every day to practice it. I’m going to fake it. 85% of the time I’m tracing from a photo I took myself, so I’m not even going to consider it cheating.↩︎
2. the description of which, on the online shop, made it look like fabric, even if the price was suspiciously low, so I bought a sheet to see what it was. It wasn’t fabric. It feels and looks nice, but I’m not sure how sturdy it’s going to be.↩︎
Enrico Zini: Heart-driven drum loop
I have Python code for reading a heart rate monitor.
I have Python code to generate MIDI events.
Could I resist putting them together? Clearly not.
Here's Jack Of Hearts, a JACK MIDI drum loop generator that uses the heart rate for BPM, and an improvised way to compute heart rate increase/decrease to add variations in the drum pattern.
It's very simple minded and silly. To me it was a fun way of putting unrelated things together, and Python worked very well for it.
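The post doesn't quote any code, but the core mapping can be sketched in a few lines of plain Python; the function name, thresholds, and variation labels below are my own invention, not taken from Jack of Hearts:

```python
def pattern_for(heart_rate, previous_rate):
    """Pick a drum-loop tempo and variation from the heart rate trend.

    The loop's BPM is simply the heart rate itself; the variation is
    chosen from how fast the rate is rising or falling.
    """
    bpm = heart_rate  # one loop beat per heartbeat
    delta = heart_rate - previous_rate
    if delta > 3:
        variation = "busy"    # rate climbing: play a denser pattern
    elif delta < -3:
        variation = "sparse"  # rate falling: thin the pattern out
    else:
        variation = "steady"  # rate stable: keep the base groove
    return bpm, variation

print(pattern_for(72, 70))  # (72, 'steady')
print(pattern_for(90, 80))  # (90, 'busy')
```

In the real program, the chosen pattern would then be emitted as JACK MIDI note events at the computed tempo.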
Test and Code: 194: Test & Code Returns
A brief discussion of why Test & Code has been off the air for a bit, and what to expect in upcoming episodes.
Links:
- Python Testing with pytest, 2nd Edition: https://pythontest.com/pytest-book/
- Getting started with pytest Online Course: https://training.talkpython.fm/courses/getting-started-with-testing-in-python-using-pytest
- Software Testing with pytest Training: https://pythontest.com/training/
- Python Bytes Podcast: https://pythonbytes.fm/

grep @ Savannah: grep-3.9 released [stable]
This is to announce grep-3.9, a stable release.
The NEWS below describes the two main bug fixes since 3.8.
There have been 38 commits by 4 people in the 26 weeks since 3.8.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (2)
Carlo Marcelo Arenas Belón (2)
Jim Meyering (11)
Paul Eggert (23)
Jim
[on behalf of the grep maintainers]
==================================================================
Here is the GNU grep home page:
http://gnu.org/s/grep/
For a summary of changes and contributors, see:
http://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.9
or run this command from a git-cloned grep directory:
git shortlog v3.8..v3.9
Here are the compressed sources:
https://ftp.gnu.org/gnu/grep/grep-3.9.tar.gz (2.7MB)
https://ftp.gnu.org/gnu/grep/grep-3.9.tar.xz (1.7MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/grep/grep-3.9.tar.gz.sig
https://ftp.gnu.org/gnu/grep/grep-3.9.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
f84afbfc8d6e38e422f1f2fc458b0ccdbfaeb392 grep-3.9.tar.gz
7ZF6C+5DtxJS9cpR1IwLjQ7/kAfSpJCCbEJb9wmfWT8= grep-3.9.tar.gz
bcaa3f0c4b81ae4192c8d0a2be3571a14ea27383 grep-3.9.tar.xz
q80RQJ7iPUyvNf60IuU7ushnAUz+7TE7tfSIrKFwtZk= grep-3.9.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify grep-3.9.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify grep-3.9.tar.gz.sig
This release was bootstrapped with the following tools:
Autoconf 2.72a.65-d081
Automake 1.16i
Gnulib v0.1-5861-g2ba7c75ed1
NEWS
* Noteworthy changes in release 3.9 (2023-03-05) [stable]
** Bug fixes
With -P, some non-ASCII UTF8 characters were not recognized as
word-constituent due to our omission of the PCRE2_UCP flag. E.g.,
given f(){ echo Perú|LC_ALL=en_US.UTF-8 grep -Po "$1"; } and
this command, echo $(f 'r\w'):$(f '.\b'), before it would print ":r".
After the fix, it prints the correct results: "rú:ú".
When given multiple patterns the last of which has a back-reference,
grep no longer sometimes mistakenly matches lines in some cases.
[Bug#36148#13 introduced in grep 3.4]