Planet Python


Real Python: Inheritance and Internals: Object-Oriented Programming in Python

Tue, 2023-09-12 10:00

Python includes mechanisms for writing object-oriented code where the data and operations on that data are structured together. The class keyword is how you create these structures in Python. The definition of a class can be based on other classes, allowing the creation of hierarchical structures and promoting code reuse. This mechanism is known as inheritance.

In this course, you’ll learn about:

  • Basic class inheritance
  • Multi-level inheritance, or classes that inherit from classes
  • Classes that inherit directly from more than one class, or multiple inheritance
  • Special methods that you can use when writing classes
  • Abstract base classes for classes that you don’t want to fully implement yet

This course is the second in a three-part series. Part one is an introduction to class syntax, teaching you how to write a class and use its attributes and methods. Part three dives deeper into the philosophy behind writing good object-oriented code.


Categories: FLOSS Project Planets

PyBites: Debunking 7 Myths About Software Development Coaching

Tue, 2023-09-12 08:27

If you give a man a fish, you feed him for a day. If you teach a man to fish, you feed him for a lifetime.

Chinese proverb

Transformative power of guidance

10 years ago I was overweight, maybe not more than +12 kg, but it definitely had a bearing on the quality of my life and (!) future health perspective.

Back then I thought I ate reasonably healthy and did my daily walk (the World Health Organization (WHO) recommends you “do at least 150–300 minutes of moderate-intensity aerobic physical activity”, I definitely did that).

Now I know: I simply ate too much sugar and my total caloric intake was too high. I also wasn’t hitting the gym. No muscles, no “engine” to burn more energy (and even when I started a routine I lost muscle because I did not align my diet for it to be effective).

The difference between THEN “assuming I was doing things ok-ish” yet getting bad results, and NOW, maintaining a lean physique quite effortlessly, not having to be a saint with my diet either, is … coaching.

When I was out of shape, I recognized a significant gap and grew increasingly frustrated with the status quo. Until one day I said: “f* it, I need to address this, or I will live with the negative consequences the rest of my life!”

So I sought help. What I did not know yet at the time was that getting a professional coach would move you from the slow lane to the fast lane.

Success leaves clues.

Jim Rohn

… and a coach will show you those clues!

It’s also kind of reassuring knowing that with a coach you just have to listen to that “one source of truth” (especially in this all too distracting world!). That when you follow their advice and put in the daily effort, you will get similar results (at least relative to the level you are currently at).

So I did just that, dropped 10kg, live in a body I am happy with, and the rest is history.

Effective coaching has that power. It can get you out of a rut, give you clarity about your goals, and make you laser-focused on achieving them.

Just as I needed guidance to navigate my fitness journey, many find the same to be true in their software development careers.

However, some people are skeptical. They see “coaching” more as a tool for athletes and business leaders.

Which brings me to …

Debunking 7 common myths of software coaching

In the rest of this article I will show you why it’s a must for software developers too.

Although there is a fundamental difference between “getting lean” and “landing a developer job”, as an (aspiring) developer, applying the general principles of coaching can help you get to your goals faster.

Myth 1

What will I gain from software coaching that I can’t just figure out on my own?

There are so many (free) resources out there. You can get a whole education just by spending hours consuming them, right?

Wrong. This mentality leads to what’s known as “tutorial paralysis.”

Tutorial paralysis is the phenomenon where individuals become overly reliant on tutorials and educational content. Instead of actively working on projects or problems on their own, they continue to watch or read one tutorial after another, mistakenly believing they are making progress.

In reality, they are stuck in a loop of passive learning without any real-world application. Your time and effort are best spent working on concrete goals, with somebody who reviews your work, gives continuous expert advice, and keeps the higher-level goal in mind.

No “passive” learning method gives you this, and that’s where all of the free resources fall short. They are valuable but only as an add-on to a goal-focused + guided approach.

Even classroom training suffers from this shortcoming: it’s too passive. The information needs to go both ways; feedback on goal-oriented work is what really sticks and where the real learning happens. This isn’t rocket science, we see it every day with the people that work with us.

Myth 2

It’s just about the tech skills.

This definitely is an appealing thought which we entertained for a long time.

Until we reflected back on our careers and made a balance sheet of what “assets” really contributed to our growth. Tech skills were high up there, unmissable, but so were “soft skills” (we like to group them under “mindset” rather).

Things like the ability to communicate well, asserting influence, negotiation skills and grit + persistence. Coaching by a HUMAN is super powerful here, because it’s through human interaction and 1:1 (and group) conversations that we nurture those types of skills.

It also requires a deep trust in the person you work with, because this stuff is often very personal. Coaching is built on trust and that’s also where you can get very deep.

You might actually not know what deeper issues you have stashed away that are consciously (or unconsciously) holding you back. By working closely with a coach you trust, and by working on complex things together (which again will exercise both tech and soft skills), deeper things get addressed that you were not even aware of in the first place. This is important, and it’s where we have seen people’s progress go through the roof.

Myth 3

I am too much of a beginner or too advanced for coaching.

Coaching is for all levels.

For a beginner, coaching provides an incredible boost of motivation and a foundation in the basics.

But for those more advanced, its value doesn’t diminish. In fact, even top professionals in various fields, from sports to business, continuously seek coaching to refine their skills and gain new insights.

With a more advanced person, coaching can be about fine-tuning, autonomous growth, and strategic course correction.

You might think: “My situation is so unique, I doubt a coach can help me”. Again, coaches are humans so they can (and will) adjust their styles and levels to each person they work with and at all phases of the coaching journey. It’s the perfect customized learning form, and this is the reason we think it’s so highly effective.

Regardless of your skill level, whether you’re a beginner or advanced, coaching can be tailored to meet your specific needs.

Myth 4

Fitness milestones are very tangible, for software devs this is not the case.

True, right? Fitness is all about the nominal weight progression (measured daily on the scale), a six-pack for the more fitness aficionados, calories tracked, number of cheat meals. All very tangible indeed.

But in software we can get very specific too:

  • Number of quality projects on your GitHub, packages shipped to PyPI.
  • Code quality can be measured, both by how you write code and the general “care” you put into your projects (e.g. adding a test suite + proper documentation – why is FastAPI so popular?)
  • Number of successful code reviews or pull requests merged.
  • Number of tech blog posts (or YouTube videos or other content pieces) published every year
  • Number of meaningful contributions to open-source projects (“greens” on GitHub profile).
  • Etc.

Everybody that we’ve worked with has improved on multiple aspects above, both because their tech skills improved but also their confidence to start (or continue) putting their work out there. As the saying goes:

The harder I work the luckier I get!

– Samuel Goldwyn

Myth 5

It takes too much time and/or with enough time I will figure it out myself.

The beauty of coaching is that results show up after months (not years), sometimes even weeks!

This is because it changes the way you think, and everything starts with thought. And this will compound over time, because a new mindset will pay dividends moving forward. So no, it does not take too much time per se.

The “I will figure it out” is a bit more insidious, because yes, you can get very far by yourself.

However, there is a category of unknown unknowns that is hard, if not impossible, to really see and grasp without having worked with more experienced people in your field. They will open your eyes.

Going through PDM was eye-opening. Once shown what’s possible, you can’t unsee the potential—or the challenges.

PDM Client (a year after finishing the program)

Myth 6

A coach will do the work for me, so I won’t learn as effectively.

When I started as a coach I fell into the trap of doing too much for my clients, specifically writing parts of the code. It’s typical for the engineer in us: we love to code, hence we will do so whenever we can.

But that’s not the most effective approach for people who need to learn and really understand. Hence I changed and now prefer to show the way. There is no better experience (for both coach and client) than “showing just enough”, enabling clients to find answers by themselves.

It’s often said that coaches unlock people’s potential. This is interesting, it means that you already have it in you, but you mostly need the help of a coach to get it out. Coaching people is much more about enabling them to succeed and this is again a very human endeavor!

This also means that a coach does not have to have all the answers; they are learning with you. They cannot (and should not) be specialists in all fancy new technologies (falling into the trap of shiny-object syndrome); they are much better when they have a wide scope and a generalist skillset (for the reading list: Range).

Myth 7

Coaching / working with somebody is expensive.

Yes, the upfront cost of coaching might seem high. However, what you should consider is the Return on Investment (ROI) it offers.

Think about it this way: if a coach accelerates your learning and career progression by even a year or two, how much is that worth in salary, job opportunities, or personal growth? The insights, skills, and connections you gain through coaching can be invaluable. These benefits can lead to better job positions, increased earning potential, and greater job satisfaction—outcomes that far exceed the initial cost of coaching.

A prime example is Matt, a participant in our PDM program. Through coaching, he not only developed his technical skills but also saw a significant boost in his earnings:

Furthermore, there’s the non-tangible ROI. The increased confidence, clearer direction, reduced stress, and the elimination of potentially years of wandering aimlessly in your career, wondering if you’re doing the right things.

Every change requires an investment. Investing in coaching isn’t just about spending money; it’s about investing in your future self. The prospect of making exponential leaps in your career and personal growth makes the cost of coaching pale in comparison.

It takes courage to invest in your growth. But the regret of missed opportunities and unfulfilled potential can be a much greater expense in the long run.


Just as the old proverb goes: “We don’t give you the fish, we teach you how to fish.” By embarking on a coaching journey, you’re not only acquiring immediate skills but also fostering a transformative mindset that will be the cornerstone of your future growth and success.

Beyond tangible results like completed projects and an enriched GitHub profile, the true value lies in the new approach and perspective you’ll adopt. An approach that empowers you to tackle challenges more efficiently and capitalize on opportunities more effectively, setting you up for long-term success in your career.

Are you ready to leap forward in your development journey?

Check out our Python coaching options

I went from being unsure about my skills and feeling like an imposter to launching an MVP (Minimal Viable Product); a cloud based video trans-coding solution. At the outset of the program they gave me a survey of my goals and desired outcomes and molded my time with them to suit me and those goals.

Aaron J (PDM Client in Canada)

Python Bytes: #352 Helicopter Time Comes to Python

Tue, 2023-09-12 04:00
Topics covered in this episode:

  • Heliclockter - Like datetime, but more timezone-aware
  • Wagtail 5
  • Git log customization
  • MiniJinja template engine
  • Extras
  • Joke

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training
  • Python People Podcast
  • Patreon Supporters

Join us on YouTube to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Brian #1: Heliclockter - Like datetime, but more timezone-aware

  • Suggested by Peter Nilsson
  • The library exposes 3 classes:
      • datetime_tz, a datetime ensured to be timezone-aware.
      • datetime_local, a datetime ensured to be timezone-aware in the local timezone.
      • datetime_utc, a datetime ensured to be timezone-aware in the UTC+0 timezone.

Michael #2: Wagtail 5

  • Wagtail is the leading open-source Python CMS, based on Django.
  • Anything you can do in Python or Django, you can do in Wagtail.
  • Wagtail 5.0 provides even more options for your content creation experience:
      • Dark mode has arrived
      • SVG support
      • Enhanced accessibility checker
      • Delete more safely
      • Some breaking changes, because this release removes some of the old code paths that were maintained to give people more time to adapt their code to the new upgrades
      • Add custom validation logic to your Wagtail projects. You can now attach errors to specific child blocks in StreamField.

Brian #3: Git log customization

  • Justin Joyce
  • Just a simple git log --oneline makes the log so much more readable, but don’t stop there.
  • --graph helps to show different branches
  • -10 shows the last 10 commits.
  • And this beauty in .gitconfig makes git lg mostly do what you want most of the time:

    [alias]
    lg = log --graph -10 --format='%C(yellow)%h%Creset %s %Cgreen(%cr) %C(bold blue)[HTML_REMOVED]%Creset'

Michael #4: MiniJinja template engine

  • MiniJinja is a powerful but minimal-dependency template engine for Rust, compatible with Jinja/Jinja2
  • Comes with integration back into Python via the minijinja-py package.
  • MiniJinja has a stronger sandbox than Jinja2 and might perform ever so slightly better in some situations.
  • However, you should be aware that due to the marshalling that needs to happen in either direction, there is a certain amount of loss of information.
  • Compiles to WebAssembly

Extras

Brian:

  • The pytest Primary Power course is ready.
      • To celebrate wrapping up the first course, pytest Primary Power is $49, the bundle is $99.
      • Bundle: This + next 2 courses + access to repo, discussion forum, Slack, and Discord

Michael:

  • New HTMX, language course, and data science course coming at Talk Python. Add your name to the list to get notified.
  • I’ll be at PyBay 2023 on Oct 8, 2023
      • Use "friendofspeaker" for a 20% discount on the regular tickets.
  • Follow up from docstrings:
      • From Rhet
      • John Hagen:
          • You can certainly omit the type information from the docstring when you are using type hints. This is the way I’ve seen almost all modern usages of Google-style docstrings nowadays. They still have some examples that include the type information because the original standard pre-dated Python 3 type annotations.
          • This also shows off the next point that you brought up: can I document all of the exceptions that a function could raise? Google docstrings have the "Raises:" block for this, and I find it pretty nice and concise for when this is needed.
          • Also, PyCharm can be configured to autocomplete and render Google-style docstrings: Tools | Python Integrated Tools | Docstrings | Docstring Format: Google
          • What’s nice about this is that PyCharm will then render the Google-style docstrings in the Quick Doc function (Ctrl+Q), making the headers bold and larger and lists look nice, so it’s easy to read.

Joke: Fully optimized my algorithm

Stack Abuse: Remove Trailing Newlines in Python

Mon, 2023-09-11 16:00

Handling string data is a task that most software has to do in some capacity. These strings aren't always properly formatted, like those that have a trailing newline that doesn't actually add any value to the string and could be removed. This Byte will introduce you to the basics of removing trailing newlines and how to use the rstrip() method to achieve this.

Why Remove Trailing Newlines?

Trailing newlines, or any kind of trailing whitespace, can cause issues in your code. They might seem harmless, but they can introduce bugs that are hard to track down. For instance, if you're comparing two strings, one with a trailing newline and one without, Python will consider these two strings as different even though they are fundamentally the same. By removing trailing newlines, you can make sure that your string comparisons and other operations behave as you expect.
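For instance, the comparison problem described above can be sketched in a couple of lines:

```python
a = "hello"
b = "hello\n"

print(a == b)           # False -- the trailing newline counts as a character
print(a == b.rstrip())  # True once the trailing whitespace is removed
```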

Removing a Trailing Newline

A newline is represented by the character \n in Python. Let's say we have a string with a trailing newline:

s = "Hello, World!\n"
print(s)

When we run and print this string, we'll see that the output appears on two lines:

Hello, World!

The trailing newline causes the cursor to move to the next line after printing the string. If we want to remove this trailing newline, we can use string slicing:

s = "Hello, World!\n"
s = s[:-1]
print(s)

Now, the output will be:

Hello, World!

Using s[:-1], we're creating a new string that includes every character from the string s except the last one. This effectively removes the trailing newline.
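One caveat the slicing approach hides: s[:-1] drops the last character unconditionally, whether or not it is actually a newline. A quick illustration:

```python
s = "Hello, World!"  # note: no trailing newline this time
print(s[:-1])        # 'Hello, World' -- the '!' was removed by mistake
```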

However, there are better ways to handle trailing whitespace, which we'll see in the next section.

Using rstrip() to Remove Trailing Newlines

While string slicing works, Python provides a more intuitive way to remove trailing newlines using the rstrip() method. This method returns a copy of the original string with trailing whitespaces removed. Here's how you can use it:

s = "Hello, World!\n"
s = s.rstrip()
print(s)

This will produce the same output as before:

Hello, World!

The rstrip() method is particularly useful when you want to remove all trailing whitespaces, not just newlines. It's also more readable than string slicing, making your code easier to understand.
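To see the difference in scope: rstrip() with no arguments removes all trailing whitespace (spaces, tabs, and newlines), while rstrip('\n') limits itself to newline characters:

```python
s = "Hello, World! \t\n\n"

print(repr(s.rstrip()))            # all trailing whitespace gone
print(repr("hi \n".rstrip('\n')))  # only the newline removed, the space kept
```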

Using splitlines() to Remove Trailing Newlines

The Python splitlines() method can be a handy tool when dealing with strings that contain newline characters. This method splits a string into a list where each element is a line of the original string.

Here's a simple example:

text = "Hello, World!\n"
print(text.splitlines())

This code would output:

['Hello, World!']

As you can see, splitlines() effectively removes the trailing newline from the string. However, it might not work exactly as needed since it returns a list, not a string. If you want to get a string without the newline, you'll need to join the list elements back together.

text = "Hello, World!\n"
print(''.join(text.splitlines()))

This code would output:

Hello, World!

Just like string slicing, while this method works, it isn't nearly as intuitive/readable as the rstrip() method, which is what I'd recommend.

Handling Trailing Newlines in File Reading

When reading text from a file in Python, you're more likely to encounter trailing newlines. This is because each line in a text file ends with a newline character, and some text editors (e.g. Atom) will even append a newline character at the end of the file for you if one doesn't already exist.

Here's an example of how you can handle this:

with open('file.txt', 'r') as file:
    lines = file.read().splitlines()

# Do something with the lines...
print(lines)

The splitlines() method is appropriate here because it not only divides the file contents into lines but also removes the trailing newlines. The result is a list of lines without newline characters.

Handling Trailing Newlines in User Input

Another scenario in which newlines are common is user input. Depending on how you're taking input from the user, the newline may always exist. This is commonly because the user has to hit "Enter" to submit the input, which is also the key to create newlines in most text input components. In these cases, it would be wise to always sanitize your input with rstrip() just in case.
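One detail worth knowing: Python's built-in input() already strips the trailing newline for you, but readline-style APIs (sys.stdin.readline(), socket files, subprocess pipes) keep it. A small sketch, using StringIO to stand in for stdin:

```python
import io

# Simulate a user typing "Alice" and pressing Enter, as delivered
# by a readline-style API rather than input().
stream = io.StringIO("Alice\n")

raw = stream.readline()
print(repr(raw))           # the newline is still attached
print(repr(raw.rstrip()))  # sanitized
```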


Trailing newlines can often be a problem when dealing with strings in any language, but luckily Python provides several methods to handle them. In this Byte, we explored how to use string slicing, rstrip(), and splitlines() to remove trailing newlines from a string, and we also discussed how to handle trailing newlines when reading from a file or receiving user input.


death and gravity: When to use classes in Python? When you repeat the same functions

Mon, 2023-09-11 15:41

Are you having trouble figuring out when to use classes or how to organize them?

Have you repeatedly searched for "when to use classes in Python", read all the articles and watched all the talks, and still don't know whether you should be using classes in any given situation?

Have you read discussions about it that for all you know may be right, but they're so academic you can't parse the jargon?

Have you read articles that all treat the "obvious" cases, leaving you with no clear answer when you try to apply them to your own code?

My experience is that, unfortunately, the best way to learn this is to look at lots of examples.

Most guidelines tend to either be too vague if you don't already know enough about the subject, or too specific and saying things you already know.

This is one of those things that once you get it seems obvious and intuitive, but it's not, and is quite difficult to explain properly.

So, instead of prescribing a general approach, let's look at:

  • one specific case where you may want to use classes
  • examples from real-world code
  • some considerations you should keep in mind
The heuristic #

If you repeat similar sets of functions, consider grouping them in a class.

That's it.

In its most basic form, a class is when you group data with functions that operate on that data; sometimes, there is no data, but it can still be useful to group the functions into an abstract object that exists only to make things easier to use / understand.

Depending on whether you choose which class to use at runtime, this is sometimes called the strategy pattern.


As Wikipedia puts it, "A heuristic is a practical way to solve a problem. It is better than chance, but does not always work. A person develops a heuristic by using intelligence, experience, and common sense."

So, this is not the correct thing to do all the time, or even most of the time.

Instead, I hope that this and other heuristics can help build the right intuition for people on their way from "I know the class syntax, now what?" to "proper" object-oriented design.

Example: Retrievers #

My feed reader library retrieves and stores web feeds (Atom, RSS and so on).

Usually, feeds come from the internet, but you can also use local files. The parsers for various formats don't really care where a feed is coming from, so they always take an open file as input.

reader supports conditional requests – that is, only retrieve a feed if it changed. To do this, it stores the ETag HTTP header from a response, and passes it back as the If-None-Match header of the next request; if nothing changed, the server can respond with 304 Not Modified instead of sending back the full content.

Let's have a look at how the code to retrieve feeds evolved over time; this version omits a few details, but it will end up with a structure similar to that of the full version. In the beginning, there was a function – URL and old ETag in, file and new ETag out:

def retrieve(url, etag=None):
    if any(url.startswith(p) for p in ('http://', 'https://')):
        headers = {}
        if etag:
            headers['If-None-Match'] = etag
        response = requests.get(url, headers=headers, stream=True)
        response.raise_for_status()
        if response.status_code == 304:
            response.close()
            return None, etag
        etag = response.headers.get('ETag', etag)
        response.raw.decode_content = True
        return response.raw, etag
    # fall back to file
    path = extract_path(url)
    return open(path, 'rb'), None

We use Requests to get HTTP URLs, and return the underlying file-like object.1

For local files, we support both bare paths and file URIs; for the latter, we do a bit of validation – file:feed and file://localhost/feed are OK, but file://invalid/feed and unknown:feed2 are not:

def extract_path(url):
    url_parsed = urllib.parse.urlparse(url)
    if url_parsed.scheme == 'file':
        if url_parsed.netloc not in ('', 'localhost'):
            raise ValueError("unknown authority for file URI")
        return urllib.request.url2pathname(url_parsed.path)
    if url_parsed.scheme:
        raise ValueError("unknown scheme for file URI")
    # no scheme, treat as a path
    return url

Problem: can't add new feed sources #

One of reader's goals is to be extensible. For example, it should be possible to add new feed sources like an FTP server (ftp://...) or Twitter without changing reader code; however, our current implementation makes it hard to do so.

We can fix this by extracting retrieval logic into separate functions, one per protocol:

def http_retriever(url, etag):
    headers = {}
    # ...
    return response.raw, etag

def file_retriever(url, etag):
    path = extract_path(url)
    return open(path, 'rb'), None

...and then routing to the right one depending on the URL prefix:

# sorted by key length (longest first)
RETRIEVERS = {
    'https://': http_retriever,
    'http://': http_retriever,
    # fall back to file
    '': file_retriever,
}

def get_retriever(url):
    for prefix, retriever in RETRIEVERS.items():
        if url.lower().startswith(prefix.lower()):
            return retriever
    raise ValueError("no retriever for URL")

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    return retriever(url, etag)

Now, plugins can register retrievers by adding them to RETRIEVERS (in practice, there's a method for that, so users don't need to care about it staying sorted).
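reader's actual registration method differs, but purely as a sketch of the bookkeeping involved, a hypothetical register_retriever() could keep the longest-prefix-first ordering like this (all retriever functions here are placeholders):

```python
def http_retriever(url, etag):
    """Placeholder standing in for the real HTTP retriever."""

def file_retriever(url, etag):
    """Placeholder standing in for the real file retriever."""

RETRIEVERS = {
    'https://': http_retriever,
    'http://': http_retriever,
    '': file_retriever,
}

def register_retriever(prefix, retriever):
    # Insert, then rebuild the dict sorted longest-prefix-first, so that
    # lookup matches the most specific prefix before the '' fallback.
    RETRIEVERS[prefix] = retriever
    ordered = sorted(RETRIEVERS.items(), key=lambda kv: len(kv[0]), reverse=True)
    RETRIEVERS.clear()
    RETRIEVERS.update(ordered)

def ftp_retriever(url, etag):
    """Hypothetical plugin retriever."""

register_retriever('ftp://', ftp_retriever)
print(list(RETRIEVERS))  # ['https://', 'http://', 'ftp://', '']
```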

Problem: can't validate URLs until retrieving them #

To add a feed, you call add_feed() with the feed URL.

But what if you pass an invalid URL? The feed gets stored in the database, and you get an "unknown scheme for file URI" error on the next update. However, this can be confusing – a good API should signal errors near the action that triggered them. This means add_feed() needs to validate the URL without actually retrieving it.

For HTTP, Requests can do the validation for us; for files, we can call extract_path() and ignore the result. Of course, we should select the appropriate logic in the same way we select retrievers, otherwise we're back where we started.

Now, there's more than one way of doing this. We could keep a separate validator registry, but that may accidentally become out of sync with the retriever one.

URL_VALIDATORS = {
    'https://': http_url_validator,
    'http://': http_url_validator,
    '': file_url_validator,
}

Or, we could keep a (retriever, validator) pair in the retriever registry. This is better, but it's not all that readable (what if we need to add a third thing?); also, it makes customizing behavior that affects both the retriever and validator harder.

RETRIEVERS = {
    'https://': (http_retriever, http_url_validator),
    'http://': (http_retriever, http_url_validator),
    '': (file_retriever, file_url_validator),
}

Better yet, we can use a class to make the grouping explicit:

class HTTPRetriever:

    def retrieve(self, url, etag):
        headers = {}
        # ...
        return response.raw, etag

    def validate_url(self, url):
        session = requests.Session()
        session.get_adapter(url)
        session.prepare_request(requests.Request('GET', url))

class FileRetriever:

    def retrieve(self, url, etag):
        path = extract_path(url)
        return open(path, 'rb'), None

    def validate_url(self, url):
        extract_path(url)

We then instantiate them, and update retrieve() to call the methods:

http_retriever = HTTPRetriever()
file_retriever = FileRetriever()

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    return retriever.retrieve(url, etag)

validate_url() works just the same:

def validate_url(url):
    retriever = get_retriever(url)
    retriever.validate_url(url)

And there you have it – if you repeat similar sets of functions, consider grouping them in a class.

Not just functions, attributes too #

Say you want to update feeds in parallel, using multiple threads.

Retrieving feeds is mostly waiting around for I/O, so it will benefit the most from it. Parsing, on the other hand, is pure Python, CPU bound code, so threads won't help due to the global interpreter lock.

However, because we're streaming the response body, I/O is not done when the retriever returns the file, but when the parser finishes reading it.3 We can consume the response in retrieve() by reading it into a temporary file and returning that instead.

We'll allow any retriever to opt into this behavior by using a class attribute:

class HTTPRetriever:
    slow_to_read = True

class FileRetriever:
    slow_to_read = False

If a retriever is slow to read, retrieve() does the swap:

```python
def retrieve(url, etag=None):
    retriever = get_retriever(url)
    file, etag = retriever.retrieve(url, etag)
    if file and retriever.slow_to_read:
        temp = tempfile.TemporaryFile()
        shutil.copyfileobj(file, temp)
        file.close()
        temp.seek(0)  # rewind so the parser reads from the start
        file = temp
    return file, etag
```
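The copy-then-rewind step can be exercised on its own. A sketch, using an in-memory stream as a stand-in for a streamed HTTP response (the helper name is made up):

```python
import io
import shutil
import tempfile

def drain_to_tempfile(file):
    # Read the whole (possibly slow) stream into a local temporary file,
    # so that I/O happens here rather than in the parser; rewind before
    # returning so the reader starts from the beginning.
    temp = tempfile.TemporaryFile()
    shutil.copyfileobj(file, temp)
    file.close()
    temp.seek(0)
    return temp

response_body = io.BytesIO(b"<feed/>")  # stand-in for response.raw
local = drain_to_tempfile(response_body)
print(local.read())  # b'<feed/>'
```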

Example: Flask's tagged JSON #

The Flask web framework provides an extendable compact representation for non-standard JSON types called tagged JSON (code). The serializer class delegates most conversion work to methods of various JSONTag subclasses (one per supported type):

  • check() checks if a Python value should be tagged by that tag
  • tag() converts it to tagged JSON
  • to_python() converts a JSON value back to Python (the serializer uses each tag's key attribute to find the correct tag)

Interestingly, tag instances have an attribute pointing back to the serializer, likely to allow recursion – when (un)packing a possibly nested collection, you need to recursively (un)pack its values. Passing the serializer to each method would have also worked, but when your functions take the same arguments...
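To make the shape concrete, here's a stripped-down sketch of the same pattern — the class names, the tag key, and the single supported type are all made up for illustration, not Flask's actual API:

```python
import datetime
import json

class DatetimeTag:
    # Hypothetical tag, mirroring the JSONTag interface described above.
    key = ' d'

    def __init__(self, serializer):
        self.serializer = serializer  # back-reference, as in Flask

    def check(self, value):
        return isinstance(value, datetime.datetime)

    def tag(self, value):
        return {self.key: value.isoformat()}

    def to_python(self, value):
        return datetime.datetime.fromisoformat(value)

class MiniTaggedSerializer:
    def __init__(self):
        self.tags = {tag.key: tag for tag in [DatetimeTag(self)]}

    def dumps(self, value):
        for tag in self.tags.values():
            if tag.check(value):
                value = tag.tag(value)
                break
        return json.dumps(value)

    def loads(self, s):
        value = json.loads(s)
        # A one-key dict whose key is a registered tag is tagged JSON.
        if isinstance(value, dict) and len(value) == 1:
            key = next(iter(value))
            if key in self.tags:
                return self.tags[key].to_python(value[key])
        return value
```

Round-tripping a datetime through dumps() and loads() recovers the original object, while plain JSON values pass through untouched.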

Formalizing this #

OK, the retriever code works. But, how should you communicate to others (readers, implementers, interpreters, type checkers) that an HTTPRetriever is the same kind of thing as a FileRetriever, and as anything else that can go in RETRIEVERS?

Duck typing #

Here's the definition of duck typing:

A programming style which does not look at an object's type to determine if it has the right interface; instead, the method or attribute is simply called or used ("If it looks like a duck and quacks like a duck, it must be a duck.") [...]

This is what we're doing now! If it retrieves like a retriever and validates URLs like a retriever, then it's a retriever.

You see this all the time in Python. For example, json.dump() takes a file-like object; now, the full text file interface has lots of methods and attributes, but dump() only cares about write(), and will accept any object implementing it:

```python
>>> class MyFile:
...     def write(self, s):
...         print(f"writing: {s}")
...
>>> f = MyFile()
>>> json.dump({'one': 1}, f)
writing: {
writing: "one"
writing: :
writing: 1
writing: }
```

The main way to communicate this is through documentation:

Serialize obj [...] to fp (a .write()-supporting file-like object)

Inheritance #

Nevertheless, you may want to be more explicit about the relationships between types. The easiest option is to use a base class, and require retrievers to inherit from it.

```python
class Retriever:

    slow_to_read = False

    def retrieve(self, url, etag):
        raise NotImplementedError

    def validate_url(self, url):
        raise NotImplementedError
```

This allows you to check the type with isinstance(), provide default methods and attributes, and will help type checkers and autocompletion, at the expense of forcing a dependency on the base class.

```python
>>> class MyRetriever(Retriever): pass
>>> retriever = MyRetriever()
>>> retriever.slow_to_read
False
>>> isinstance(retriever, Retriever)
True
```

What it won't do is check that subclasses actually define the methods:

```python
>>> retriever.validate_url('myurl')
Traceback (most recent call last):
  ...
NotImplementedError
```

Abstract base classes #

This is where abstract base classes come in. The decorators in the abc module allow defining abstract methods and properties that must be overridden:

```python
from abc import ABC, abstractmethod, abstractproperty

class Retriever(ABC):

    @abstractproperty
    def slow_to_read(self):
        return False

    @abstractmethod
    def retrieve(self, url, etag):
        raise NotImplementedError

    @abstractmethod
    def validate_url(self, url):
        raise NotImplementedError
```

This is checked at runtime (but only that methods and attributes are present, not their signatures or types):

```python
>>> class MyRetriever(Retriever): pass
>>> MyRetriever()
Traceback (most recent call last):
  ...
TypeError: Can't instantiate abstract class MyRetriever with abstract methods retrieve, slow_to_read, validate_url
>>> class MyRetriever(Retriever):
...     slow_to_read = False
...     def retrieve(self, url, etag): ...
...     def validate_url(self, url): ...
...
>>> MyRetriever()
<__main__.MyRetriever object at 0x1037aac50>
```


You can also use ABCs to register arbitrary types as "virtual subclasses"; this allows them to pass isinstance() checks without inheritance, but won't check for required methods:

```python
>>> class MyRetriever: pass
>>> Retriever.register(MyRetriever)
<class '__main__.MyRetriever'>
>>> isinstance(MyRetriever(), Retriever)
True
```

Protocols #

Finally, we have protocols, aka structural subtyping, aka static duck typing. Introduced in PEP 544, they go in the opposite direction – what if, instead of declaring what the type of something is, we declare what methods it has to have to be of a specific type?

You define a protocol by inheriting from typing.Protocol:

```python
from typing import IO, Protocol

class Retriever(Protocol):

    @property
    def slow_to_read(self) -> bool: ...

    def retrieve(self, url: str, etag: str | None) -> tuple[IO[bytes] | None, str | None]: ...

    def validate_url(self, url: str) -> None: ...
```

...and then use it in type annotations:

```python
def mount_retriever(prefix: str, retriever: Retriever) -> None:
    raise NotImplementedError
```

Some other code (not necessarily yours, not necessarily aware the protocol even exists) defines an implementation:

```python
class MyRetriever:
    slow_to_read = False

    def validate_url(self):
        pass
```

...and then uses it with annotated code:

```python
mount_retriever('my', MyRetriever())
```

A type checker like mypy will check if the provided instance conforms to the protocol – not only that methods exist, but that their signatures are correct too – all without the implementation having to declare anything.

```shell
$ mypy
error: Argument 2 to "mount_retriever" has incompatible type "MyRetriever"; expected "Retriever"  [arg-type]
note: "MyRetriever" is missing following "Retriever" protocol member:
note:     retrieve
note: Following member(s) of "MyRetriever" have conflicts:
note:     Expected:
note:         def validate_url(self, url: str) -> None
note:     Got:
note:         def validate_url(self) -> Any
Found 1 error in 1 file (checked 1 source file)
```


If you decorate your protocol with runtime_checkable, you can use it in isinstance() checks, but like ABCs, it only checks that methods are present.
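For instance, a runtime_checkable protocol passes isinstance() as soon as the methods exist, even when their signatures don't match (the names here are illustrative, not from the article):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsRetrieve(Protocol):
    def retrieve(self, url, etag): ...

class GoodEnough:
    def retrieve(self):  # wrong signature, but isinstance() won't notice
        return None

print(isinstance(GoodEnough(), SupportsRetrieve))  # True
print(isinstance(object(), SupportsRetrieve))      # False
```

Signature checking remains the job of a static type checker like mypy.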

Counter-example: modules #

If a class has no state and you don't need inheritance, you can use a module instead:

```python
# module.py
slow_to_read = False

def retrieve(url, etag):
    raise NotImplementedError

def validate_url(url):
    raise NotImplementedError
```

From a duck typing perspective, this is a valid retriever, since it has all the expected methods and attributes. So much so, that it's also compatible with protocols:

```python
import module

mount_retriever('mod', module)
```

```shell
$ mypy
Success: no issues found in 1 source file
```

I tried to keep the retriever example stateless, but real-world classes rarely are (it may be immutable state, but it's state nonetheless). Also, you're limited to exactly one implementation per module, which is usually too much like Java for my taste.

Try it out #

If you're doing something and you think you need a class, do it and see how it looks. If you think it's better, keep it, otherwise, revert the change. You can always switch in either direction later.

If you got it right the first time, great! If not, by having to fix it you'll learn something, and next time you'll know better.

Also, don't beat yourself up.

Sure, there are nice libraries out there that use classes in just the right way, after spending lots of time to find the right abstraction. But abstraction is difficult and time consuming, and in everyday code good enough is just that – good enough – you don't need to go to the extreme.




  1. This code has a potential bug: if we were using a persistent session instead of a transient one, the connection would never be released, since we're not closing the response after we're done with it. In the actual code, we're doing both, but the only way to do so reliably is to return a context manager; I omitted this because it doesn't add anything to our discussion about classes. [return]

  2. We're handling unknown URI schemes here because bare paths don't have a scheme, so anything that didn't match a known scheme must be a bare path. Also, on Windows (not supported yet), the drive letter in a path like c:\feed.xml is indistinguishable from a scheme. [return]

  3. Unless the response is small enough to fit in the TCP receive buffer. [return]

Categories: FLOSS Project Planets

Martin Fitzpatrick: PyQt6 Book now available in Korean: 파이썬과 Qt6로 GUI 애플리케이션 만들기

Mon, 2023-09-11 13:39

I am very happy to announce that my Python GUI programming book Create GUI Applications with Python & Qt6 / PyQt6 Edition is now available in Korean from Acorn Publishing.

It's more than a little mind-blowing to see a book I've written translated into another language -- not least one I cannot remotely understand! When I started writing this book a few years ago I could never have imagined it would end up on book shelves in Korea, never mind in Korean. This is just fantastic.

파이썬과 Qt6로 GUI 애플리케이션 만들기 파이썬 애플리케이션 제작 실습 가이드

If you're in Korea, you can also pick up a copy at any of the following bookstores: Kyobobook, YES24, or Aladin.

Thanks again to Acorn Publishing for translating my book and making it available to readers in Korea.


Martin Fitzpatrick: Getting Started With Git and GitHub in Your Python Projects

Mon, 2023-09-11 13:39

Using a version control system (VCS) is crucial for any software development project. These systems allow developers to track changes to the project's codebase over time, removing the need to keep multiple copies of the project folder.

VCSs also facilitate experimenting with new features and ideas without breaking existing functionality in a given project. They also enable collaboration with other developers who can contribute code, documentation, and more.

In this article, we'll learn about Git, the most popular VCS out there. We'll learn everything we need to get started with this VCS and start creating our own repositories. We'll also learn how to publish those repositories to GitHub, another popular tool among developers nowadays.

Installing and Setting Up Git

To use Git in our coding projects, we first need to install it on our computer. To do this, we need to navigate to Git's download page and choose the appropriate installer for our operating system. Once we've downloaded the installer, we need to run it and follow the on-screen instructions.

We can check if everything is working correctly by opening a terminal or command-line window and running git --version.

Once we've confirmed the successful installation, we should provide Git with some personal information. You'll only need to do this once for every computer. Now go ahead and run the following commands with your own information:

```shell
$ git config --global user.name "YOUR NAME"
$ git config --global user.email YOUR@EMAIL.com
```

The first command adds your full name to Git's config file. The second command adds your email. Git will use this information in all your repositories.

If you publish your projects to a remote server like GitHub, then your email address will be visible to anyone with access to that repository. If you don't want to expose your email address this way, then you should create a separate email address to use with Git.

As you'll learn in a moment, Git uses the concept of branches to manage its repositories. A branch is a copy of your project's folder at a given time in the development cycle. The default branch of new repositories is named either master or main, depending on your current version of Git.

You can change the name of the default branch by running the following command:

```shell
$ git config --global init.defaultBranch <branch_name>
```

This command will set the name of Git's default branch to branch_name. Remember that this is just a placeholder name. You need to provide a suitable name for your installation.

Another useful setting is the default text editor Git will use to type in commit messages and other messages in your repo. For example, if you use an editor like Visual Studio Code, then you can configure Git to use it:

```shell
# Visual Studio Code
$ git config --global core.editor "code --wait"
```

With this command, we tell Git to use VS Code to process commit messages and any other message we need to enter through Git.

Finally, to inspect the changes we've made to Git's configuration files, we can run the following command:

```shell
$ git config --global -e
```

This command will open the global .gitconfig file in our default editor. There, we can fix any error we have made or add new settings. Then we just need to save the file and close it.

Understanding How Git Works

Git works by allowing us to take a snapshot of the current state of all the files in our project's folder. Each time we save one of those snapshots, we make a Git commit. Then the cycle starts again, and Git creates new snapshots, showing how our project looked at any given moment.

Git was created in 2005 by Linus Torvalds, the creator of the Linux kernel. Git is an open-source project that is licensed under the GNU General Public License (GPL) v2. It was initially made to facilitate kernel development due to the lack of a suitable alternative.

The general workflow for making a Git commit to saving different snapshots goes through the following steps:

  1. Change the content of our project's folder.
  2. Stage or mark the changes we want to save in our next commit.
  3. Commit or save the changes permanently in our project's Git database.

As the third step mentions, Git uses a special database called a repository. This database is kept inside your project's directory under a folder called .git.

Version-Controlling a Project With Git: The Basics

In this section, we'll create a local repository and learn how to manage it using the Git command-line interface (CLI). On macOS and Linux, we can use the default terminal application to follow along with this tutorial.

On Windows, we recommend using Git Bash, which is part of the Git For Windows package. Go to the Git Bash download page, get the installer, run it, and follow the on-screen instructions. Make sure to check the Additional Icons -> On the Desktop option to get direct access to Git Bash on your desktop so that you can quickly find and launch the app.

Alternatively, you can also use either Windows' Command Prompt or PowerShell. However, some commands may differ from the commands used in this tutorial.

Initializing a Git Repository

To start version-controlling a project, we need to initialize a new Git repository in the project's root folder or directory. In this tutorial, we'll use a sample project to facilitate the explanation. Go ahead and create a new folder in your file system. Then navigate to that folder in your terminal by running these commands:

```shell
$ mkdir sample_project
$ cd sample_project
```

The first command creates the project's root folder or directory, while the second command allows you to navigate into that folder. Don't close your terminal window. You'll be using it throughout the next sections.

To initialize a Git repository in this folder, we need to use the git init command like in the example below:

```shell
$ git init
Initialized empty Git repository in /.../sample_project/.git/
```

This command creates a subfolder called .git inside the project's folder. The leading dot in the folder's name means that this is a hidden directory, so you may not see anything in your file manager. You can check the existence of .git with the ls -a command, which lists all files in a given folder, including the hidden ones.

Checking the Status of Our Project

Git provides the git status command to allow us to identify the current state of a Git repository. Because our sample_project folder is still empty, running git status will display something like this:

```shell
$ git status
On branch main

No commits yet

nothing to commit (create/copy files and use "git add" to track)
```

When we run git status, we get detailed information about the current state of our Git repository. This command is pretty useful, and we'll turn back to it in multiple moments.

As an example of how useful the git status command is, go ahead and create a file called hello.py inside the project's folder using the following commands:

```shell
$ touch hello.py
$ git status
On branch main

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	hello.py

nothing added to commit but untracked files present (use "git add" to track)
```

With the touch command, we create a new hello.py file under our project's folder. Then we run git status again. This time, we get information about the presence of an untracked file called hello.py. We also get some basic instructions on how to add this file to our Git repo. Providing these guidelines or instructions is one of the neatest features of git status.

Now, what is all that about untracked files? In the following section, we'll learn more about this topic.

Tracking and Committing Changes

A file in a Git repository can be either tracked or untracked. Any file that wasn't present in the last commit is considered an untracked file. Git doesn't keep a history of changes for untracked files in your project's folder.

In our example, we haven't made any commits to our Git repo, so hello.py is naturally untracked. To start tracking it, run the git add command as follows:

```shell
$ git add hello.py
$ git status
On branch main

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
	new file:   hello.py
```

This git add command has added hello.py to the list of tracked files. Now it's time to save the file permanently using the git commit command, with an appropriate commit message provided with the -m option:

```shell
$ git commit -m "Add hello.py"
[main (root-commit) 5ac6586] Add hello.py
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 hello.py

$ git status
On branch main
nothing to commit, working tree clean
```

We have successfully made our first commit, saving hello.py to our Git repository. The git commit command requires a commit message, which we can provide through the -m option. Commit messages should clearly describe what we have changed in our project.

After the commit, our main branch is completely clean, as you can conclude from the git status output.

Now let's start the cycle again by modifying hello.py, staging the changes, and creating a new commit. Go ahead and run the following commands:

```shell
$ echo "print('Hello, World!')" > hello.py
$ cat hello.py
print('Hello, World!')
$ git add hello.py
$ git commit -m "Create a 'Hello, World!' script on hello.py"
[main 2f33f7e] Create a 'Hello, World!' script on hello.py
 1 file changed, 1 insertion(+)
```

The echo command adds the statement print('Hello, World!') to our hello.py file. You can confirm this addition with the cat command, which lists the content of one or more target files. You can also open hello.py in your favorite editor and update the file there if you prefer.

We can also use the git stage command to stage or add files to a Git repository and include them in our next commit.

We've made two commits to our Git repo. We can list our commit history using the git log command as follows:

```shell
$ git log --oneline
2f33f7e (HEAD -> main) Create a 'Hello, World!' script on hello.py
5ac6586 Add hello.py
```

The git log command allows us to list all our previous commits. In this example, we've used the --oneline option to list commits in a single line each. This command takes us to a dedicated output space. To leave that space, we can press the letter Q on our keyboard.

Using a .gitignore File to Skip Unneeded Files

While working with Git, we will often have files and folders that we must not save to our Git repo. For example, most Python projects include a venv/ folder with a virtual environment for that project. Go ahead and create one with the following command:

```shell
$ python -m venv venv
```

Once we've added a Python virtual environment to our project's folder, we can run git status again to check the repo state:

```shell
$ git status
On branch main

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	venv/

nothing added to commit but untracked files present (use "git add" to track)
```

Now the venv/ folder appears as an untracked file in our Git repository. We don't need to keep track of this folder because it's not part of our project's codebase. It's only a tool for working on the project. So, we need to ignore this folder. To do that, we can add the folder to a .gitignore file.

Go ahead and create a .gitignore file in the project's folder. Add the venv/ folder to it and run git status:

```shell
$ touch .gitignore
$ echo "venv/" > .gitignore
$ git status
On branch main

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	.gitignore

nothing added to commit but untracked files present (use "git add" to track)
```

Now git status doesn't list venv/ as an untracked file, which means that Git is ignoring that folder. If we take a look at the output, then we'll see that .gitignore is now listed as an untracked file. We must commit our .gitignore file to the Git repository. This will prevent other developers working with us from having to create their own local .gitignore files.

We can also list multiple files and folders in our .gitignore file one per line. The file even accepts glob patterns to match specific types of files, such as *.txt. If you want to save yourself some work, then you can take advantage of GitHub's gitignore repository, which provides a rich list of predefined .gitignore files for different programming languages and development environments.
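For a typical Python project, a minimal .gitignore might look like this (a sketch — adjust the entries to your own tooling):

```
venv/
__pycache__/
*.pyc
.env
build/
dist/
```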

We can also set up a global .gitignore file on our computer. This global file will apply to all our Git repositories. If you decide to use this option, then go ahead and create a .gitignore_global file in your home folder.
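For the global file to take effect, Git also needs to know where it lives. Assuming the file sits in your home folder, you can point Git at it like this:

```shell
git config --global core.excludesFile ~/.gitignore_global
```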

Working With Branches in Git

One of the most powerful features of Git is that it allows us to create multiple branches. A branch is a copy of our project's current status and commit history. Having the option to create and handle branches allows us to make changes to our project without messing up the main line of development.

We'll often find that software projects maintain several independent branches to facilitate the development process. A common branch model distinguishes between four different types of branches:

  1. A main or master branch that holds the main line of development
  2. A develop branch that holds the last developments
  3. One or more feature branches that hold changes intended to add new features
  4. One or more bugfix branches that hold changes intended to fix critical bugs

However, the branching model to use is up to you. In the following sections, we'll learn how to manage branches using Git.

Creating New Branches

Working all the time on the main or master branch isn't a good idea. We can end up creating a mess and breaking the code. So, whenever we want to experiment with a new idea, implement a new feature, fix a bug, or just refactor a piece of code, we should create a new branch.

To kick things off, let's create a new branch called hello on our Git repo under the sample_project folder. To do that, we can use the git branch command followed by the branch's name:

```shell
$ git branch hello
$ git branch --list
* main
  hello
```

The first command creates a new branch in our Git repo. The second command allows us to list all the branches that currently exist in our repository. Again, we can press the letter Q on our keyboard to get back to the terminal prompt.

The star symbol denotes the currently active branch, which is main in the example. We want to work on hello, so we need to activate that branch. In Git's terminology, we need to check out to hello.

Checking Out to a New Branch

Although we have just created a new branch, in order to start working on it, we need to switch to or check out to it by using the git checkout command as follows:

```shell
$ git checkout hello
Switched to branch 'hello'

$ git branch --list
  main
* hello

$ git log --oneline
2f33f7e (HEAD -> hello, main) Create a 'Hello, World!' script on hello.py
5ac6586 Add hello.py
```

The git checkout command takes the name of an existing branch as an argument. Once we run the command, Git takes us to the target branch.

We can derive a new branch from whatever branch we need.

As you can see, git branch --list indicates which branch we are currently on by placing a * symbol in front of the relevant branch name. If we check the commit history with git log --oneline, then we'll get the same as we get from main because hello is a copy of it.

The git checkout command can take a -b flag that we can use to create a new branch and immediately check out to it in a single step. That's what most developers use while working with Git repositories. In our example, we could have run git checkout -b hello to create the hello branch and check out to it with one command.

Let's make some changes to our project and create another commit. Go ahead and run the following commands:

```shell
$ echo "print('Welcome to PythonGUIs!')" >> hello.py
$ cat hello.py
print('Hello, World!')
print('Welcome to PythonGUIs!')
$ git add hello.py
$ git commit -m "Extend our 'Hello, World' program with a welcome message."
[hello be62476] Extend our 'Hello, World' program with a welcome message.
 1 file changed, 1 insertion(+)
```

The final command committed our changes to the hello branch. If we compare the commit history of both branches, then we'll see the difference:

```shell
$ git log --oneline -1
be62476 (HEAD -> hello) Extend our 'Hello, World' program with a welcome message.

$ git checkout main
Switched to branch 'main'

$ git log --oneline -1
2f33f7e (HEAD -> main) Create a 'Hello, World!' script on hello.py
```

In this example, we first run git log --oneline with -1 as an argument. This argument tells Git to give us only the last commit in the active branch's commit history. To inspect the commit history of main, we first need to check out to that branch. Then we can run the same git log command.

Now say that we're happy with the changes we've made to our project in the hello branch, and we want to update main with those changes. How can we do this? We need to merge hello into main.

Merging Two Branches Together

To add the commits we've made in a separate branch back to another branch, we can run what is known as a merge. For example, say we want to merge the new commits in hello into main. In this case, we first need to switch back to main and then run the git merge command using hello as an argument:

```shell
$ git checkout main
Already on 'main'

$ git merge hello
Updating 2f33f7e..be62476
Fast-forward
 hello.py | 1 +
 1 file changed, 1 insertion(+)
```

To merge a branch into another branch, we first need to check out the branch we want to update. Then we can run git merge. In the example above, we first check out to main. Once there, we can merge hello.

Deleting Unused Branches

Once we've finished working in a given branch, we can delete the entire branch to keep our repo as clean as possible. Following our example, now that we've merged hello into main, we can remove hello.

To remove a branch from a Git repo, we use the git branch command with the --delete option. To successfully run this command, make sure to switch to another branch first:

```shell
$ git checkout main
Already on 'main'

$ git branch --delete hello
Deleted branch hello (was be62476).

$ git branch --list
* main
```

Deleting unused branches is a good way to keep our Git repositories clean, organized, and up to date. Also, deleting them as soon as we finish the work is even better because having old branches around may be confusing for other developers collaborating with our project. They might end up wondering why these branches are still alive.
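Putting the whole branch lifecycle together, here's a sketch you can run end to end in a throwaway repository (the feature/greeting branch, greeting.py file, and demo identity are all made up for illustration):

```shell
set -e
# Create a disposable repository so we don't touch any real project.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Initial commit"
base="$(git branch --show-current)"   # main or master, depending on your Git
git checkout -q -b feature/greeting   # create the branch and switch to it
echo "print('Hi!')" > greeting.py
git add greeting.py
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add greeting.py"
git checkout -q "$base"
git merge -q feature/greeting         # fast-forward the base branch
git branch --delete feature/greeting  # clean up the finished branch
```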

Using a GUI Client for Git

In the previous sections, we've learned to use the git command-line tool to manage Git repositories. If you prefer to use GUI tools, then you'll find a bunch of third-party GUI frontends for Git. While they won't completely replace the need for using the command-line tool, they can simplify your day-to-day workflow.

You can get a complete list of standalone GUI clients available on the Git official documentation.

Most popular IDEs and code editors, including PyCharm and Visual Studio Code, come with basic Git integration out-of-the-box. Some developers will prefer this approach as it is directly integrated with their development environment of choice.

If you need something more advanced, then GitKraken is probably a good choice. This tool provides a standalone, cross-platform GUI client for Git that comes with many additional features that can boost your productivity.

Managing a Project With GitHub

If we publish a project on a remote server with support for Git repositories, then anyone with appropriate permissions can clone our project, creating a local copy on their computer. Then, they can make changes to our project, commit them to their local copy, and finally push the changes back to the remote server. This workflow provides a straightforward way to allow other developers to contribute code to your projects.

In the following sections, we'll learn how to create a remote repository on GitHub and then push our existing local repository to it. Before we do that, though, head over to github.com and create an account there if you don't have one yet. Once you have a GitHub account, you can set up the connection to that account so that you can use it with Git.

Setting Up a Secure Connection to GitHub

In order to work with GitHub via the git command, we need to be able to authenticate ourselves. There are a few ways of doing that. However, using SSH is the recommended way. The first step in the process is to generate an SSH key, which you can do with the following command:

```shell
$ ssh-keygen -t ed25519 -C "GitHub - YOUR@EMAIL.com"
```

Replace the placeholder email address with the address you've associated with your GitHub account. Once you run this command, you'll get three different prompts in a row. You can respond to them by pressing Enter to accept the default option. Alternatively, you can provide custom responses.

Next, we need to copy the contents of our id_ed25519.pub file. To do this, you can run the following command:

```shell
$ cat ~/.ssh/id_ed25519.pub
```

Select the command's output and copy it. Then go to your GitHub Settings page and click the SSH and GPG keys option. There, select New SSH key, set a descriptive title for the key, make sure that the Key Type is set to Authentication Key, and paste the contents of id_ed25519.pub into the Key field. Finally, click the Add SSH key button.

At this point, you may be asked to provide some kind of Two-Factor Authentication (2FA) code. So, be ready for that extra security step.

Now we can test our connection by running the following command:

```shell
$ ssh -T git@github.com
The authenticity of host 'github.com (IP ADDRESS)' can't be established.
ECDSA key fingerprint is SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
```

Make sure to check whether the key fingerprint shown on your output matches GitHub's public key fingerprint. If it matches, then enter yes and press Enter to connect. Otherwise, don't connect.

If the connection is successful, we will get a message like this:

```shell
Hi USERNAME! You have successfully authenticated, but GitHub does not provide shell access.
```

Congrats! You've successfully connected to GitHub via SSH using a secure SSH key. Now it's time to start working with GitHub.

Creating and Setting Up a GitHub Repository

Now that you have a GitHub account with a proper SSH connection, let's create a remote repository on GitHub using its web interface. Head over to the GitHub page and click the + icon next to your avatar in the top-right corner. Then select New repository.

Give your new repo a unique name and choose who can see this repository. To continue with our example, we can give this repository the same name as our local project, sample_project.

To avoid conflicts with your existing local repository, don't add .gitignore, README, or LICENSE files to your remote repository.

Next, set the repo's visibility to Private so that no one else can access the code. Finally, click the Create repository button at the end of the page.

If you create a Public repository, make sure also to choose an open-source license for your project to tell people what they can and can't do with your code.

You'll get a Quick setup page as your remote repository has no content yet. Right at the top, you'll have the choice to connect this repository via HTTPS or SSH. Copy the SSH link and run the following command to tell Git where the remote repository is hosted:

```shell
$ git remote add origin
```

This command adds a new remote repository called origin to our local Git repo.

The name origin is commonly used to denote the main remote repository associated with a given project. This is the default name Git uses to identify the main remote repo.

Git allows us to add several remote repositories to a single local one using the git remote add command. This allows us to have several remote copies of our local Git repo.

Pushing a Local Git Repository to GitHub

With a new and empty GitHub repository in place, we can go ahead and push the content of our local repo to its remote copy. To do this, we use the git push command providing the target remote repo and the local branch as arguments:

```shell
$ git push -u origin main
Enumerating objects: 9, done.
Counting objects: 100% (9/9), done.
Delta compression using up to 8 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (9/9), 790 bytes | 790.00 KiB/s, done.
Total 9 (delta 0), reused 0 (delta 0), pack-reused 0
To
 * [new branch]      main -> main
branch 'main' set up to track 'origin/main'.
```

This is the first time we push something to the remote repo sample_project, so we use the -u option to tell Git that we want to set the local main branch to track the remote main branch. The command's output provides a pretty detailed summary of the process.

Note that if you don't add the -u option, then Git will ask what you want to do. A safe workaround is to copy and paste the commands GitHub suggests, so that you don't forget -u.

Using the same command, we can push any local branch to any remote copy of our project's repo. Just change the repo and branch name: git push -u remote_name branch_name.

Now let's head over to our browser and refresh the GitHub page. We will see all of our project files and commit history there.

Now we can continue developing our project and making new commits locally. To push our commits to the remote main branch, we just need to run git push. This time, we don't have to use the remote or branch name because we've already set main to track origin/main.
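If you prefer driving this workflow from Python, the commit-and-push cycle described above can be scripted by shelling out to the git CLI. The sketch below is an illustration, not part of the original tutorial: it assumes git is on your PATH, and it stands in for GitHub with a local bare repository so you can run it without any network access.

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in `cwd` and return its stdout (assumes git is on PATH)."""
    result = subprocess.run(
        ["git", *args], cwd=cwd, check=True, capture_output=True, text=True
    )
    return result.stdout.strip()

# A local bare repository plays the role of the remote (GitHub) copy.
remote = tempfile.mkdtemp(prefix="remote_")
work = tempfile.mkdtemp(prefix="sample_project_")
git("init", "--bare", cwd=remote)

# Set up the local repo and make an initial commit.
git("init", cwd=work)
git("config", "user.email", "you@example.com", cwd=work)
git("config", "user.name", "Your Name", cwd=work)
with open(os.path.join(work, "hello.py"), "w") as f:
    f.write("print('Hello, World!')\n")
git("add", ".", cwd=work)
git("commit", "-m", "Initial commit", cwd=work)

# Connect the remote and push, tracking the branch with -u as in the tutorial.
branch = git("symbolic-ref", "--short", "HEAD", cwd=work)  # main or master
git("remote", "add", "origin", remote, cwd=work)
git("push", "-u", "origin", branch, cwd=work)

# The remote now has our branch.
print(git("ls-remote", "--heads", "origin", cwd=work))
```

Because the "remote" here is just a directory, you can experiment freely with push and pull without touching a real GitHub repository.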

Pulling Content From a GitHub Repository

We can do basic file editing and make commits within GitHub itself. For example, if we click the file and then click the pencil icon at the top of the file, we can add another line of code and commit those changes to the remote main branch directly on GitHub.

Go ahead and add the statement print("Your Git Tutorial is Here...") to the end of the file. Then go to the end of the page and click the Commit changes button. This makes a new commit on your remote repository.

This remote commit won't appear in your local commit history. To download it and update your local main branch, use the git pull command:

```shell
$ git pull
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), 696 bytes | 174.00 KiB/s, done.
From
   be62476..605b6a7  main       -> origin/main
Updating be62476..605b6a7
Fast-forward
  | 1 +
 1 file changed, 1 insertion(+)
```

Again, the command's output provides all the details about the operation. Note that git pull will download the remote branch and update the local branch in a single step.

If we want to download the remote branch without updating the local one, then we can use the git fetch command. This practice gives us the chance to review the changes and commit them to our local repo only if they look right.

For example, go ahead and update the remote copy of the file by adding another statement like print("Let's go!!"). Commit the changes. Then get back to your local repo and run the following command:

```shell
$ git fetch
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), 731 bytes | 243.00 KiB/s, done.
From
   605b6a7..ba489df  main       -> origin/main
```

This command downloaded the latest changes from origin/main to our local repo. Now we can compare the remote copy of the file to the local copy. To do this, we can use the git diff command as follows:

```shell
$ git diff main origin/main
diff --git a/ b/
index be2aa66..4f0e7cf 100644
--- a/
+++ b/
@@ -1,3 +1,4 @@
 print('Hello, World!')
 print('Welcome to PythonGUIs!')
 print("Your Git Tutorial is Here...")
+print("Let's go!!")
```

In the command's output, you can see that the remote branch adds a line containing print("Let's go!!") to the end of the file. This change looks good, so we can use git pull to merge the change into our local branch automatically.

Exploring Alternatives to GitHub

While GitHub is the most popular public Git server and collaboration platform in use, it is far from being the only one. GitLab and Bitbucket are popular commercial alternatives similar to GitHub. While they have paid plans, both offer free plans, with some restrictions, for individual users.

If you would like to use a completely open-source platform instead, Codeberg might be a good option. It's a community-driven alternative with a focus on supporting Free Software. Therefore, in order to use Codeberg, your project needs to use a compatible open-source license.

Optionally, you can also run your own Git server. While you could just use barebones git for this, software such as GitLab Community Edition (CE) and Forgejo provide you with both the benefits of running your own server and the experience of using a service like GitHub.


By now, you're able to use Git for version-controlling your projects. Git is a powerful tool that will make you much more efficient and productive, especially as the scale of your project grows over time.

While this guide introduced you to most of its basic concepts and common commands, Git has many more commands and options that you can use to be even more productive. Now, you know enough to get up to speed with Git.

Categories: FLOSS Project Planets

Martin Fitzpatrick: Working With Classes in Python

Mon, 2023-09-11 13:39

Python supports object-oriented programming (OOP) through classes, which allow you to bundle data and behavior in a single entity. Python classes allow you to quickly model concepts by creating representations of real objects that you can then use to organize your code.

In this tutorial, you'll learn how OOP and classes work in Python. This knowledge will allow you to quickly grasp how to use classes and their APIs to create robust Python applications.

Defining Classes in Python

Python classes are templates or blueprints that allow us to create objects through instantiation. These objects will contain data representing the object's state, and methods that will act on the data providing the object's behavior.

Instantiation is the process of creating instances of a class by calling the class constructor with appropriate arguments.

Attributes and methods make up what is known as the class interface or API. This interface allows us to operate on the objects without needing to understand their internal implementation and structure.

Alright, it is time to start creating our own classes. We'll start by defining a Color class with minimal functionality. To do that in Python, you'll use the class keyword followed by the class name. Then you provide the class body in the next indentation level:

```python
>>> class Color:
...     pass
...
>>> red = Color()
>>> type(red)
<class '__main__.Color'>
```

In this example, we defined our Color class using the class keyword. This class is empty. It doesn't have attributes or methods. Its body only contains a pass statement, which is Python's way to do nothing.

Even though the class is minimal, it allows us to create instances by calling its constructor, Color(). So, red is an instance of Color. Now let's make our Color class more fun by adding some attributes.

Adding Class and Instance Attributes

Python classes allow you to add two types of attributes. You can have class and instance attributes. A class attribute belongs to its containing class. Its data is common to the class and all its instances. To access a class attribute, we can use either the class or any of its instances.

Let's now add a class attribute to our Color class. For example, say that we need to keep track of how many instances of Color our code creates. Then we can have a color_count attribute:

```python
>>> class Color:
...     color_count = 0
...     def __init__(self):
...         Color.color_count += 1
...
>>> red = Color()
>>> green = Color()
>>> Color.color_count
2
>>> red.color_count
2
```

Now Color has a class attribute called color_count that gets incremented every time we create a new instance. We can quickly access that attribute using either the class directly or one of its instances, like red.

To follow up with this example, say that we want to represent our Color objects using red, green, and blue attributes as part of the RGB color model. These attributes should have specific values for specific instances of the class. So, they should be instance attributes.

To add an instance attribute to a Python class, you must use the .__init__() special method, which we introduced in the previous code but didn't explain. This method works as the instance initializer because it allows you to provide initial values for instance attributes:

```python
>>> class Color:
...     color_count = 0
...     def __init__(self, red, green, blue):
...         Color.color_count += 1
...         self.red = red
...         self.green = green
...         self.blue = blue
...
>>> red = Color(255, 0, 0)
>>> red.red
255
>>> red.green
0
>>> red.blue
0
>>> Color.red
Traceback (most recent call last):
    ...
AttributeError: type object 'Color' has no attribute 'red'
```

Cool! Now our Color class looks more useful. It still has the color_count class attribute, and it also has three new instance attributes. Note that, unlike class attributes, instance attributes can't be accessed through the class itself. They're specific to a concrete instance.

One thing jumps into sight in this new version of Color: what is the self argument in the definition of .__init__()? This argument holds a reference to the current instance. Using the name self to identify the current instance is a strong convention in Python.

We'll use self as the first or even the only argument to instance methods like .__init__(). Inside an instance method, we'll use self to access other methods and attributes defined in the class. To do that, we must prepend self to the name of the target attribute or method.

For example, our class has an attribute .red that we can access using the self.red syntax inside the class. This will return the number stored under that name. From outside the class, you need to use a concrete instance instead of self.
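One subtlety worth knowing about class versus instance attributes: assigning through an instance creates an instance attribute that shadows any class attribute of the same name, on that instance only. Here's a minimal sketch (the representation attribute here is illustrative):

```python
class Color:
    representation = "RGB"  # class attribute, shared by all instances

red = Color()
blue = Color()

print(red.representation)    # 'RGB' -- looked up on the class

# Assigning through the instance creates a new *instance* attribute
# that shadows the class attribute on `red` only.
red.representation = "HSL"

print(red.representation)    # 'HSL' -- the instance attribute wins
print(blue.representation)   # 'RGB' -- other instances are unaffected
print(Color.representation)  # 'RGB' -- the class attribute is unchanged
```

This is why mutating shared state through a class attribute, as color_count does, must go through the class (Color.color_count += 1) rather than through self.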

Providing Behavior With Methods

A class bundles data (attributes) and behavior (methods) together in an object. You'll use the data to set the object's state and the methods to operate on that data or state.

Methods are just functions that we define inside a class. Like functions, methods can take arguments, return values, and perform different computations on an object's attributes. They allow us to make our objects usable.

In Python, we can define three types of methods in our classes:

  1. Instance methods, which need the instance (self) as their first argument
  2. Class methods, which take the class (cls) as their first argument
  3. Static methods, which take neither the class nor the instance

Let's now talk about instance methods. Say that we need to get the attributes of our Color class as a tuple of numbers. In this case, we can add an .as_tuple() method like the following:

```python
class Color:
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue
```

This new method is pretty straightforward. Since it's an instance method, it takes self as its first argument. Then it returns a tuple containing the attributes .red, .green, and .blue. Note how you need to use self to access the attributes of the current instance inside the class.

This method may be useful if you need to iterate over the RGB components of your color objects:

```python
>>> red = Color(255, 0, 0)
>>> red.as_tuple()
(255, 0, 0)
>>> for level in red.as_tuple():
...     print(level)
...
255
0
0
```

Our as_tuple() method works great! It returns a tuple containing the RGB components of our color objects.

We can also add class methods to our Python classes. To do this, we need to use the @classmethod decorator as follows:

```python
class Color:
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue

    @classmethod
    def from_tuple(cls, rgb):
        return cls(*rgb)
```

The from_tuple() method takes a tuple object containing the RGB components of a desired color as an argument, creates a valid color object from it, and returns the object back to the caller:

```python
>>> blue = Color.from_tuple((0, 0, 255))
>>> blue.as_tuple()
(0, 0, 255)
```

In this example, we use the Color class to access the class method from_tuple(). We can also access the method using a concrete instance of this class. However, in both cases, we'll get a completely new object.
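Class methods like from_tuple() are commonly used as alternative constructors. As an illustration, here's a hypothetical from_hex() class method (not part of the tutorial's Color class) that builds a color from a hex string:

```python
class Color:
    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue

    @classmethod
    def from_hex(cls, hex_code):
        # Accept strings like "#ff0000" and split them into RGB components.
        hex_code = hex_code.lstrip("#")
        return cls(*(int(hex_code[i:i + 2], 16) for i in (0, 2, 4)))

red = Color.from_hex("#ff0000")
print(red.as_tuple())  # (255, 0, 0)
```

Because the method receives the class itself as cls, it also works correctly for subclasses: calling SomeSubclass.from_hex(...) returns an instance of the subclass.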

Finally, Python classes can also have static methods that we can define with the @staticmethod decorator:

```python
class Color:
    representation = "RGB"

    def __init__(self, red, green, blue):
        self.red = red
        self.green = green
        self.blue = blue

    def as_tuple(self):
        return self.red, self.green, self.blue

    @classmethod
    def from_tuple(cls, rgb):
        return cls(*rgb)

    @staticmethod
    def color_says(message):
        print(message)
```

Static methods don't operate on either the current instance (self) or the current class (cls). These methods can work as independent functions. However, we typically put them inside a class when they are related to the class, and we need to have them accessible from the class and its instances.

Here's how the method works:

```python
>>> Color.color_says("Hello from the Color class!")
Hello from the Color class!
>>> red = Color(255, 0, 0)
>>> red.color_says("Hello from the red instance!")
Hello from the red instance!
```

This method accepts a message and prints it on your screen. It works independently from the class or instance attributes. Note that you can call the method using the class or any of its instances.

Writing Getter & Setter Methods

Programming languages like Java and C++ rely heavily on setter and getter methods to retrieve and update the attributes of a class and its instances. These methods encapsulate an attribute allowing us to get and change its value without directly accessing the attribute itself.

For example, say that we have a Label class with a text attribute. We can make text a non-public attribute and provide getter and setter methods to manipulate the attributes according to our needs:

```python
class Label:
    def __init__(self, text):
        self.set_text(text)

    def text(self):
        return self._text

    def set_text(self, value):
        self._text = str(value)
```

In this class, the text() method is the getter associated with the ._text attribute, while the set_text() method is the setter for ._text. Note how ._text is a non-public attribute. We know this because it has a leading underscore on its name.

The setter method calls str() to convert any input value into a string. Therefore, we can call this method with any type of object. It will convert any input argument into a string, as you will see in a moment.

If you come from programming languages like Java or C++, you need to know Python doesn't have the notion of private, protected, and public attributes. In Python, you'll use a naming convention to signal that an attribute is non-public. This convention consists of adding a leading underscore to the attribute's name. Note that this naming pattern only indicates that the attribute isn't intended to be used directly. It doesn't prevent direct access, though.
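You can see this convention in action with the Label class from above: nothing technically stops you from reaching ._text directly, but doing so bypasses the class's API and its guarantees.

```python
class Label:
    def __init__(self, text):
        self.set_text(text)

    def text(self):
        return self._text

    def set_text(self, value):
        self._text = str(value)  # the setter guarantees a string

label = Label("Python!")

# Direct access works -- Python doesn't enforce privacy --
# but the leading underscore asks you to go through the API instead.
print(label._text)   # 'Python!'

label._text = 42     # allowed, but skips the str() conversion in set_text()
print(label.text())  # 42 -- no longer a string, breaking the class's promise
```

This is the practical reason to respect the underscore convention: bypassing the setter silently breaks the invariant (here, "._text is always a string") that the class is trying to maintain.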

This class works as follows:

```python
>>> label = Label("Python!")
>>> label.text()
'Python!'
>>> label.set_text("Classes!")
>>> label.text()
'Classes!'
>>> label.set_text(123)
>>> label.text()
'123'
```

In this example, we create an instance of Label. The original text is passed to the class constructor, Label(), which automatically calls __init__(). In turn, __init__() sets the value of ._text by calling the setter method set_text(). You can use text() to access the label's text and set_text() to update it. Remember that any input will be converted into a string, as we can see in the final example above.

The getter and setter pattern is pretty common in languages like Java and C++. However, this pattern is less popular among Python developers. Instead, they use the @property decorator to hide attributes behind properties.

Here's how most Python developers will write their Label class:

```python
class Label:
    def __init__(self, text):
        self.text = text

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = str(value)
```

This class defines .text as a property. This property has getter and setter methods. Python calls them automatically when we access the attribute or update its value in an assignment:

```python
>>> label = Label("Python!")
>>> label.text
'Python!'
>>> label.text = "Class"
>>> label.text
'Class'
>>> label.text = 123
>>> label.text
'123'
```

Python properties allow you to add function behavior to your attributes while permitting you to use them as normal attributes instead of as methods.
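Properties also let you expose computed, read-only values through attribute syntax. As a quick sketch, a hypothetical Rectangle class (not part of the tutorial) can derive its area on the fly:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    @property
    def area(self):
        # Computed on every access; with no setter defined, it's read-only.
        return self.width * self.height

rect = Rectangle(4, 5)
print(rect.area)  # 20

rect.width = 10
print(rect.area)  # 50 -- always recomputed from the current state

try:
    rect.area = 100  # no setter, so assignment raises AttributeError
except AttributeError as error:
    print(error)
```

Because .area is derived rather than stored, it can never get out of sync with .width and .height, which is a common reason to prefer a property over a plain attribute.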

Writing Special Methods

Python supports many special methods, also known as dunder or magic methods, that are part of its class mechanism. We can identify these methods because their names start and end with a double underscore, which is the origin of their other name: dunder methods.

These methods accomplish different tasks in Python's class mechanism. They all have a common feature: Python calls them automatically depending on the operation we run.
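For instance, built-in operations like len() and the in operator fall back to the corresponding dunder methods on your objects. A small sketch with a hypothetical Playlist class:

```python
class Playlist:
    def __init__(self, songs):
        self.songs = list(songs)

    def __len__(self):
        # Called automatically when you run len(playlist)
        return len(self.songs)

    def __contains__(self, song):
        # Called automatically when you run `song in playlist`
        return song in self.songs

mix = Playlist(["song_a", "song_b"])
print(len(mix))         # 2 -- Python calls mix.__len__()
print("song_a" in mix)  # True -- Python calls mix.__contains__("song_a")
```

You never call these methods directly; implementing them is enough to make your objects plug into Python's built-in syntax and functions.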

For example, all Python objects are printable. We can print them to the screen using the print() function. Calling print() internally falls back to calling the target object's __str__() special method:

```python
>>> label = Label("Python!")
>>> print(label)
<__main__.Label object at 0x10354efd0>
```

In this example, we've printed our label object. This action provides some information about the object and the memory address where it lives. However, the actual output is not very useful from the user's perspective.

Fortunately, we can improve this by providing our Label class with an appropriate __str__() method:

```python
class Label:
    def __init__(self, text):
        self.text = text

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = str(value)

    def __str__(self):
        return self.text
```

The __str__() method must return a user-friendly string representation for our objects. In this case, when we print an instance of Label to the screen, the label's text will be displayed:

```python
>>> label = Label("Python!")
>>> print(label)
Python!
```

As you can see, Python takes care of calling __str__() automatically when we use the print() function to display our instances of Label.

Another special method that belongs to Python's class mechanism is __repr__(). This method returns a developer-friendly string representation of a given object. Here, developer-friendly implies that the representation should allow a developer to recreate the object itself.

```python
class Label:
    def __init__(self, text):
        self.text = text

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = str(value)

    def __str__(self):
        return self.text

    def __repr__(self):
        return f"{type(self).__name__}(text='{self.text}')"
```

The __repr__() method returns a string representation of the current object. This string differs from what __str__() returns:

```python
>>> label = Label("Python!")
>>> label
Label(text='Python!')
```

Now when you access the instance on your REPL session, you get a string representation of the current object. You can copy and paste this representation to recreate the object in an appropriate environment.
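To see what "recreate the object" means in practice, you can feed the repr() string back to eval(). The sketch below uses a simplified version of the Label class; note that eval() is fine for this kind of REPL experimentation but should never be used on untrusted input:

```python
class Label:
    def __init__(self, text):
        self._text = str(text)

    def __repr__(self):
        return f"{type(self).__name__}(text='{self._text}')"

label = Label("Python!")
text_repr = repr(label)
print(text_repr)  # Label(text='Python!')

# The repr is valid Python code, so evaluating it rebuilds
# an equivalent object.
clone = eval(text_repr)
print(repr(clone) == text_repr)  # True
```

This round-trip property, repr(obj) evaluating back to an equivalent object, is the informal contract that a developer-friendly __repr__() aims for.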

Reusing Code With Inheritance

Inheritance is an advanced topic in object-oriented programming. It allows you to create hierarchies of classes where each subclass inherits all the attributes and behaviors from its parent class or classes. Arguably, code reuse is the primary use case of inheritance.

Yes, we code a base class with a given functionality and make that functionality available to its subclass through inheritance. This way, we implement the functionality only once and reuse it in every subclass.

Python classes support single and multiple inheritance. For example, let's say we need to create a button class. This class needs .width and .height attributes that define its rectangular shape. The class also needs a label for displaying some informative text.

We can code this class from scratch, or we can use inheritance and reuse the code of our current Label class. Here's how to do this:

```python
class Button(Label):
    def __init__(self, text, width, height):
        super().__init__(text)
        self.width = width
        self.height = height

    def __repr__(self):
        return (
            f"{type(self).__name__}"
            f"(text='{self.text}', "
            f"width={self.width}, "
            f"height={self.height})"
        )
```

To inherit from a parent class in Python, we need to list the parent class or classes in the subclass definition. To do this, we use a pair of parentheses and a comma-separated list of parent classes. If we use several parent classes, then we're using multiple inheritance, which can be challenging to reason about.

The first line in __init__() calls the __init__() method on the parent class to properly initialize its .text attribute. To do this, we use the built-in super() function. Then we define the .width and .height attributes, which are specific to our Button class. Finally, we provide a custom implementation of __repr__().

Here's how our Button class works:

```python
>>> button = Button("Ok", 10, 5)
>>> button.text
'Ok'
>>> button.text = "Click Me!"
>>> button.text
'Click Me!'
>>> button.width
10
>>> button.height
5
>>> button
Button(text='Click Me!', width=10, height=5)
>>> print(button)
Click Me!
```

As you can conclude from this code, Button has inherited the .text attribute from Label. This attribute is completely functional. Our class has also inherited the __str__() method from Label. That's why we get the button's text when we print the instance.
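Inheritance also shows up when you check types at runtime: every Button is a Label, but not the other way around. A quick sketch using simplified versions of both classes:

```python
class Label:
    def __init__(self, text):
        self.text = text

class Button(Label):
    def __init__(self, text, width, height):
        super().__init__(text)
        self.width = width
        self.height = height

button = Button("Ok", 10, 5)

print(isinstance(button, Button))  # True
print(isinstance(button, Label))   # True -- instances of a subclass
                                   # count as instances of the parent
print(issubclass(Button, Label))   # True

# The method resolution order shows where Python looks up attributes:
# Button first, then Label, then object.
print(Button.__mro__)
```

This is why code written against Label (for example, a function that prints any label) works unchanged with Button instances.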

Wrapping Up Classes-Related Concepts

As we've seen, Python allows us to write classes that work as templates that you can use to create concrete objects that bundle together data and behavior. The building blocks of Python classes are:

  • Attributes, which hold the data in a class
  • Methods, which provide the behaviors of a class

The attributes of a class define the class's data, while the methods provide the class's behaviors, which typically act on that data.

To better understand OOP and classes in Python, we should first discuss some terms that are commonly used in this aspect of Python development:

  • Classes are blueprints or templates for creating objects -- just like a blueprint for creating a car, plane, house, or anything else. In programming, this blueprint will define the data (attributes) and behavior (methods) of the object and will allow us to create multiple objects of the same kind.

  • Objects or Instances are the realizations of a class. We can create objects from the blueprint provided by the class. For example, you can create John's car from a Car class.

  • Methods are functions defined within a class. They provide the behavior of an object of that class. For example, our Car class can have methods to start the engine, turn right and left, stop, and so on.

  • Attributes are properties of an object or class. We can think of attributes as variables defined in a class or object. Therefore, we can have:

    • class attributes, which are specific to a concrete class and common to all the instances of that class. You can access them either through the class or an object of that class. For example, if we're dealing with a single car manufacturer, then our Car class can have a manufacturer attribute that identifies it.
    • instance attributes, which are specific to a concrete instance. You can access them through the specific instance. For example, our Car class can have attributes to store properties such as the maximum speed, the number of passengers, the car's weight, and so on.
  • Instantiation is the process of creating an individual instance from a class. For example, we can create John's car, Jane's car, and Linda's car from our Car class through instantiation. In Python, this process runs through two steps:

    1. Instance creation: Creates a new object and allocates memory for storing it.
    2. Instance initialization: Initializes all the attributes of the current object with appropriate values.
  • Inheritance is a mechanism of code reuse that allows us to inherit attributes and methods from one or multiple existing classes. In this context, we'll hear terms like:

    • Parent class: The class we're inheriting from. This class is also known as the superclass or base class. If we have one parent class, then we're using single inheritance. If we have more than one parent class, then we're using multiple inheritance.
    • Child class: The class that inherits from a given parent. This class is also known as the subclass.
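The two instantiation steps can be observed directly: Python calls __new__() to create the instance and then __init__() to initialize it. Overriding __new__() is rarely needed in everyday code; the sketch below does it only to make the steps visible, using the Car example from the terminology above:

```python
class Car:
    def __new__(cls, *args, **kwargs):
        # Step 1: create the bare instance and allocate memory for it.
        print("1. Creating the instance (__new__)")
        return super().__new__(cls)

    def __init__(self, owner):
        # Step 2: initialize the new instance's attributes.
        print("2. Initializing the instance (__init__)")
        self.owner = owner

car = Car("John")  # prints both messages, in order
print(car.owner)   # John
```

In day-to-day code you only write __init__(); Python handles the creation step for you behind the scenes.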

Don't feel frustrated or bad if you don't understand all these terms immediately. They'll become more familiar as you use them in your own Python code.


Now you know the basics of Python classes. You also learned fundamental concepts of object-oriented programming, such as inheritance.

Categories: FLOSS Project Planets

Martin Fitzpatrick: Getting started with VS Code for Python

Mon, 2023-09-11 13:39

Setting up a working development environment is the first step for any project. Your development environment setup will determine how easy it is to develop and maintain your projects over time. That makes it important to choose the right tools for your project. This article will guide you through how to set up Visual Studio Code, which is a popular free-to-use, cross-platform code editor developed by Microsoft, in order to develop Python applications.

Visual Studio Code is not to be confused with Visual Studio, which is a separate product also offered by Microsoft. Visual Studio is a fully-fledged IDE that is mainly geared towards Windows application development using C# and the .NET Framework.

Setup a Python environment

In case you haven't already done this, Python needs to be installed on the development machine. You can grab the specific installer for either Windows or macOS from the official Python website. Python is also available for installation via the Microsoft Store on Windows devices.

Make sure that you select the option to Add Python to PATH during installation (via the installer).

If you are on Linux, you can check if Python is already installed on your machine by typing python3 --version in a terminal. If it returns an error, you need to install it from your distribution's repository. On Ubuntu/Debian, this can be done by typing sudo apt install python3. Both pip (or pip3) and venv are distributed as separate packages on Ubuntu/Debian and can also be installed by typing sudo apt install python3-pip python3-venv.

Setup Visual Studio Code

First, head over to the Visual Studio Code website and grab the installer for your specific platform.

If you are on a Raspberry Pi (with Raspberry Pi OS), you can also install VS Code by simply typing sudo apt install code. On Linux distributions that support Snaps, you can do it by typing sudo snap install code --classic.

Once VS Code is installed, head over to the Extensions tab in the sidebar on the left by clicking on it or by pressing CTRL+SHIFT+X. Search for the 'Python' extension published by Microsoft and click on Install.

The Extensions tab in the left-hand sidebar.

Usage and Configuration

Now that you have finished setting up VS Code, you can go ahead and create a new Python file. Remember that the Python extension only works if you open a .py file or have selected the language mode for the active file as Python.

To change the language mode for the active file, simply press CTRL+K once and then press M after releasing the previous keys. This kind of keyboard shortcut is called a chord in VS Code. You can see more of them by pressing CTRL+K CTRL+S (another chord).

The Python extension in VS Code allows you to directly run a Python file by clicking on the 'Play' button on the top-right corner of the editor (without having to type python in the terminal).

You can also do it by pressing CTRL+SHIFT+P to open the Command Palette and running the > Python: Run File in Terminal command.

Finally, you can configure VS Code's settings by going to File > Preferences > Settings or by pressing CTRL+COMMA. In VS Code, each individual setting has a unique identifier, which you can see by clicking on the cog wheel that appears to the left of each setting and clicking on 'Copy Setting ID'. This ID is what will be referred to while talking about a specific setting. You can also search for this ID in the search bar under Settings.

Linting and Formatting Support (Optional)

Linters make it easier to find errors and check the quality of your code. On the other hand, code formatters help keep the source code of your application compliant with PEP (Python Enhancement Proposal) standards, which make it easier for other developers to read your code and collaborate with you.

For VS Code to provide linting support for your projects, you must first install a preferred linter like flake8 or pylint.

```bash
pip install flake8
```

Then, go to Settings in VS Code and toggle the relevant setting (e.g. python.linting.flake8Enabled) for the Python extension depending on what you installed. You also need to make sure that python.linting.enabled is toggled on.

A similar process must be followed for code formatting. First, install something like autopep8 or black.

```bash
pip install autopep8
```

You then need to tell VS Code which formatter to use by modifying python.formatting.provider and toggle on editor.formatOnSave so that it works without manual intervention.

If pip warns that the installed modules aren't in your PATH, you may have to specify the path to their location in VS Code (under Settings). Follow the method described under Working With Virtual Environments to do that.

Now, when you create a new Python file, VS Code automatically gives you a list of Problems (CTRL+SHIFT+M) in your program and formats the code on saving the file.

Identified problems in the source code, along with a description and line/column numbers.

You can also find the location of identified problems from the source overview on the right-hand side, inside the scrollbar.

Working With Virtual Environments

Virtual environments are a way of life for Python developers. Most Python projects require the installation of external packages and modules (via pip). Virtual environments allow you to separate one project's packages from your other projects, which may require a different version of those same packages. Hence, it allows all those projects to have the specific dependencies they require to work.
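Under the hood, the python -m venv command uses the standard-library venv module, which you can also call from Python. The sketch below creates a throwaway environment in a temporary directory just to show the mechanism:

```python
import tempfile
import venv
from pathlib import Path

# Create a virtual environment in a temporary directory.
# with_pip=False keeps this example fast; real environments
# usually want pip available, so they omit this flag.
env_dir = Path(tempfile.mkdtemp()) / ".venv"
venv.create(env_dir, with_pip=False)

# Every virtual environment is marked by a pyvenv.cfg file at its root,
# which is how tools like VS Code detect it.
print((env_dir / "pyvenv.cfg").exists())  # True
```

In practice you'll almost always use python -m venv .venv in your project folder, as described below, but knowing it's just the venv module can help when scripting project setup.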

The Python extension makes it easier for you by automatically activating the desired virtual environment for the in-built terminal and Run Python File command after you set the path to the Python interpreter. By default, the path is set to use the system's Python installation (without a virtual environment).

To use a virtual environment for your project/workspace, you need to first make a new one by opening a terminal (View > Terminal) and typing python -m venv .venv. Then, you can set the default interpreter for that project by opening the Command Palette (CTRL+SHIFT+P) and selecting > Python: Select Interpreter.

You should now either close the terminal pane in VS Code and open a new one or type source .venv/bin/activate into the existing one to start using the virtual environment. Then, install the required packages for your project by typing pip install <package_name>.

VS Code, by default, looks for tools like linters and code formatters in the current Python environment. If you don't want to keep installing them over and over again for each new virtual environment you make (unless your project requires a specific version of that tool), you can specify the path to their location under Settings in VS Code:

  • flake8 - python.linting.flake8Path
  • autopep8 - python.formatting.autopep8Path

To find the global location of these packages on macOS and Linux, type which flake8 and which autopep8 in a terminal. If you are on Windows, you can use where <command_name>. Both these commands assume that flake8 and autopep8 are in your PATH.

Understanding Workspaces in VS Code

VS Code has a concept of Workspaces. Each 'project folder' (or the root/top folder) is treated as a separate workspace. This allows you to have project-specific settings and enable/disable certain extensions for that workspace. It is also what allows VS Code to quickly recover the UI state (e.g. files that were previously kept open) when you open that workspace again.

In VS Code, each workspace (or folder) has to be 'trusted' before certain features like linters, autocomplete suggestions and the in-built terminal are allowed to work.

In the context of Python projects, if you tend to keep your virtual environments outside the workspace (where VS Code is unable to detect it), you can use this feature to set the default path to the Python interpreter for that workspace. To do that, first Open a Folder (CTRL+K CTRL+O) and then go to File > Preferences > Settings > Workspace to modify python.defaultInterpreterPath.

Setting the default interpreter path for the workspace.

In VS Code settings you can search for settings by name using the bar at the top.

You can also use this approach to do things like use a different linter for that workspace or disable the code formatter for it. The workspace-specific settings you change are saved in a .vscode folder inside that workspace, which you can share with others.
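As a sketch, a workspace's .vscode/settings.json might look like the following. The interpreter path here assumes a virtual environment named .venv inside the workspace on macOS/Linux; adjust the path for Windows:

```json
{
    "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
    "python.formatting.provider": "none"
}
```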

If your VS Code is not recognizing libraries you are using in your code, double check the correct interpreter is being used. You can find which Python version you're using on the command line by running which python or which python3 on macOS/Linux, or where python or where python3 on Windows.

Working With Git in VS Code (Optional)

Using version control is essential when developing applications. VS Code does have in-built support for Git, but it is pretty barebones, not allowing much more than tracking the changes you have currently made and committing/pushing those changes once you are done.

For the best experience, it is recommended to use the GitLens extension. It lets you view your commit history, check who made the changes and much more. To set it up, you first need to have Git set up on your machine (go here) and then install GitLens from the Extensions tab in the sidebar on the left. You can now use those Git-related features by going to the Git tab in the sidebar (CTRL+SHIFT+G).

There are more Git-related extensions you could try as well, like Git History and GitLab Workflow. Give them a whirl too!

Community-driven & open source alternatives

While VS Code is open source (MIT-licensed), the distributed versions include some Microsoft-specific proprietary modifications, such as telemetry (app tracking). If you would like to avoid this, there is also a community-driven distribution of Visual Studio Code called VSCodium that provides freely-licensed binaries without telemetry.

Due to legal restrictions, VSCodium is unable to use the official Visual Studio Marketplace for extensions. Instead, it uses a separate vendor neutral, open source marketplace called Open VSX Registry. It doesn't have every extension, especially proprietary ones, and some are not kept up-to-date but both the Python and GitLens extensions are available on it.

You can also use the open source Jedi language server for the Python extension, rather than the bundled Pylance language server/extension, by configuring the python.languageServer setting. You can then completely disable Pylance by going to the Extensions tab. Note that, if you are on VSCodium, Jedi is used by default (as Pylance is not available on Open VSX Registry) when you install the Python extension.


Having the right tools and making sure they're set up correctly will greatly simplify your development process. While Visual Studio Code starts as a simple tool, it is flexible and extendable with plugins to suit your own preferred workflow. In this tutorial we've covered the basics of setting up your environment, and you should now be ready to start developing your own applications with Python!

Categories: FLOSS Project Planets

Martin Fitzpatrick: PyQt6, PySide6, PyQt5 and PySide2 Books -- updated for 2022!

Mon, 2023-09-11 13:39

Hello! Today I have released new digital editions of my PyQt5, PyQt6, PySide2 and PySide6 book Create GUI Applications with Python & Qt.

This update adds over 200 pages of Qt examples and exercises - the book is now 780 pages long! - and continues to be updated and extended. The latest additions include:

  • Built-in dialogs, including QMessageBox and QFileDialog
  • Working with multiple windows, cross-window communication
  • Using QThreadPool.start() to execute Python functions
  • Long-running threads with QThread
  • Using custom widgets in Qt Designer
  • Recurring & single shot timers
  • Managing data files, working with paths
  • Packaging with PyInstaller on Windows, macOS & Linux
  • Creating distributable installers on Windows, macOS & Linux

This update marks the 5th Edition of the book.

As always, if you've previously bought a copy of the book you get these updates for free! Just go to the downloads page and enter the email you used for the purchase. If you have problems getting this update just get in touch.


Categories: FLOSS Project Planets

Martin Fitzpatrick: DiffCast: Hands-free Python Screencast Creator

Mon, 2023-09-11 13:39

Programming screencasts are a popular way to teach programming and demo tools. Typically people will open up their favorite editor and record themselves tapping away. But this has a few problems. A good setup for coding isn't necessarily a good setup for video -- with text too small, a window too big, or too many distractions. Typing code in live means mistakes, which means more time editing or confusion for the people watching.

DiffCast eliminates that, automatically generating screencasts from Python source files you prepare ahead of time.

DiffCast is written in Python with PyQt6. Source code available on Github

Given the following Python source files:

```python
print('Hello, world!')
```

```python
name = input('What is your name?\n')
print(f'Hello, {name}!')
```

```python
friends = ['Emelia', 'Jack', 'Bernardina', 'Jaap']

name = input('What is your name?\n')
if name in friends:
    print(f'Hello, {name}!')
else:
    print("I don't know you.")
```

```python
friends = ['Emelia', 'Jack', 'Bernardina', 'Jaap']

while True:
    name = input('What is your name?\n')
    if name in friends:
        print(f'Hello, {name}!')
    else:
        print("I don't know you.")
        friends.append(name)
```

DiffCast will generate the following screencast (editor can be captured separately).

The editor view is configured to be easily readable in video, without messing with your IDE settings. Edits happen at a regular speed, without mistakes, making them easy to follow. Each diffcast is completely reproducible, with the same files producing the same output every time. You can set defined window sizes, or remove the titlebar to record.

Designed for creating reproducible tutoring examples, library demos or demonstrating code to a class. You can also step forwards and backwards through each file manually, using the control panel.

Finally, you can write out edits to another file, and show the file location in a file listing for context. Run the intermediate files to demonstrate the effect of the changes.

DiffCast is open source (GPL licensed) & free to use. For bug reports, feature requests see the Github project.

Categories: FLOSS Project Planets

Martin Fitzpatrick: PySide6 tutorial now available

Mon, 2023-09-11 13:39

Hello! With the release of Qt6 versions of PyQt and PySide the course was getting a little crowded. So, today I've split the PySide tutorials into their own standalone PySide2 course and PySide6 course.

The tutorials have all been updated for PySide2 & PySide6, with some additional improvements based on the latest editions of the book.

This first update includes the following PySide tutorials --

  • Getting started creating Python GUIs with PySide
  • Using Qt Designer with PySide
  • Extended UI features in PySide
  • Multithreading PySide applications & QProcess
  • Qt Model Views
  • PySide plotting & graphics with Matplotlib/PyQtGraph
  • Bitmap graphics and custom widgets
  • Packaging (PySide2 only)

That's all for now!

You can also still access the PyQt5 tutorial and PyQt6 tutorial.

Categories: FLOSS Project Planets

Martin Fitzpatrick: PyQt6 Book now available: Create GUI Applications with Python & Qt6

Mon, 2023-09-11 13:39

Hello! Today I have released the first PyQt6 edition of my book Create GUI Applications, with Python & Qt6.

This update follows the 4th Edition of the PyQt5 book updating all the code examples and adding additional PyQt6-specific detail. The book contains 600+ pages and 200+ complete code examples taking you from the basics of creating PyQt applications to fully functional apps.


Categories: FLOSS Project Planets

Martin Fitzpatrick: PySide6 Book now available: Create GUI Applications with Python & Qt6

Mon, 2023-09-11 13:39

Hello! This morning I released the first Qt6 edition of my PySide book Create GUI Applications, with Python & Qt6.

This update follows the 4th Edition of the PySide2 book updating all the code examples and adding additional PySide6-specific detail. The book contains 600+ pages and 200+ complete code examples taking you from the basics of creating PySide applications to fully functional apps.

If you have any questions or difficulty getting hold of this update, just get in touch.


Categories: FLOSS Project Planets

Martin Fitzpatrick: Using MicroPython and uploading libraries on Raspberry Pi Pico

Mon, 2023-09-11 13:39

MicroPython is an implementation of the Python 3 programming language, optimized to run on microcontrollers. It's one of the options available for programming your Raspberry Pi Pico and a nice friendly way to get started with microcontrollers.

MicroPython can be installed easily on your Pico, by following the instructions on the Raspberry Pi website (click the Getting Started with MicroPython tab and follow the instructions).

After that point you might get a bit stuck. The Pico documentation covers connecting to the Pico from a Pi, so if you want to code from your own computer you'll need something else. One option is the Thonny IDE, which you can use to write and upload code to your Pico. It's got a nice friendly interface for working with Python.

But what if you don't want to change your IDE, or want a way to communicate with your Pico from the command line? You're in luck: there is a simple tool for accessing the MicroPython REPL on your Pico and uploading custom Python scripts or libraries you may wish to use: rshell.

In this tutorial I'll take you through working with MicroPython in rshell, coding live and uploading custom scripts to your Pico.

Installing rshell

Rshell itself is built on Python (not MicroPython) and can be installed and run locally on your main machine. You can install it like any other Python library.

```bash
python -m pip install rshell
```

Unfortunately, the current version of rshell does not always play nicely with the Raspberry Pi Pico. If you have problems you can install a fixed version from the pico branch of the rshell repository. You can install this directly from GitHub with the following --

bash python -m pip install

This will download the latest version of the pico branch (as a .zip) and install this in your Python environment.

Once installed, you will have access to a new command line tool rshell.

The rshell interface

To use rshell from the command line, enter rshell at your command prompt. You will see a welcome message and the prompt will turn green, to indicate you're in rshell mode.

The rshell interface on Windows 10

The rshell interface on macOS

If the previous pip install worked but the rshell command doesn't, then you may have a problem with your Python paths.

To see the commands available in rshell, enter help and press Enter.

```
help

Documented commands (type help <topic>):
========================================
args    cat  connect  date  edit  filesize  help  mkdir  rm    shell
boards  cd   cp       echo  exit  filetype  ls    repl   rsync
```

Use the exit command to exit rshell.

You can exit rshell at any time by entering exit or pressing Ctrl-C. Once exited the prompt will turn white.

The basic file operation commands are shown below.

  • cd <dirname> change directory
  • cp <from> <to> copy a file
  • ls list current directory
  • rm <filename> remove (delete) a file
  • filesize <filename> give the size of a file in bytes

If you type ls and press Enter you will see a listing of your current folder on your host computer. The same goes for any of the other file operations, until we've connected a board and opened its file storage -- so be careful! We'll look at how to connect to a MicroPython board and work with the files on it next.

Connecting to your Pico with rshell

Enter boards to see a list of MicroPython boards connected to your computer. If you don't have any connected boards, you'll see the message No boards connected.

If your board isn't connected, plug your Pico in now. You can use the connect command to connect to the board, but for that you'll need to know which port it is on. Save yourself some effort and just restart rshell to connect automatically. To do this, type exit and press Enter (or press Ctrl-C) to exit, and then restart rshell by entering rshell again at the prompt.

If a board is connected when you start rshell you will see something like the following...

```
C:\Users\Gebruiker>rshell
Connecting to COM4 (buffer-size 128)...
Trying to connect to REPL  connected
```

Or an equivalent on macOS...

```
Martins-Mac: ~ mfitzp$ rshell
Connecting to /dev/cu.usbmodem0000000000001 (buffer-size 128)
Trying to connect to REPL  connected
```

...which shows you've connected to the MicroPython REPL on the Raspberry Pi Pico. Once connected the boards command will return some information about the connected board, like the following.

```
pyboard @ COM4 connected Epoch: 1970 Dirs:
```

The name on the left is the type of board (the Pico appears as pyboard) and the connected port (here COM4). The label at the end, Dirs:, will list any files on the Pico -- currently none.

Starting a REPL

With the board connected, you can enter the Pico's REPL by entering the repl command. This will return something like the following

```
repl
Entering REPL. Use Control-X to exit.
> MicroPython v1.14 on 2021-02-14; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
```

You are now writing Python on the Pico! Try entering print("Hello!") at the REPL prompt.

```
MicroPython v1.14 on 2021-02-14; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>> print("Hello!")
Hello!
```

As you can see, MicroPython works just like normal Python. If you enter help() and press Enter, you'll get some basic help information about MicroPython on the Pico. Helpfully, you also get a small reference to how the pins on the Pico are numbered and the different ways you have to control them.

```
>>> help()
Welcome to MicroPython!
For online help please visit
For access to the hardware use the 'machine' module. RP2 specific commands are in the 'rp2' module.

Quick overview of some objects:
  machine.Pin(pin) -- get a pin, eg machine.Pin(0)
  machine.Pin(pin, m, [p]) -- get a pin and configure it for IO mode m, pull mode p
    methods: init(..), value([v]), high(), low(), irq(handler)
  machine.ADC(pin) -- make an analog object from a pin
    methods: read_u16()
  machine.PWM(pin) -- make a PWM object from a pin
    methods: deinit(), freq([f]), duty_u16([d]), duty_ns([d])
  machine.I2C(id) -- create an I2C object (id=0,1)
    methods: readfrom(addr, buf, stop=True), writeto(addr, buf, stop=True)
             readfrom_mem(addr, memaddr, arg), writeto_mem(addr, memaddr, arg)
  machine.SPI(id, baudrate=1000000) -- create an SPI object (id=0,1)
    methods: read(nbytes, write=0x00), write(buf), write_readinto(wr_buf, rd_buf)
  machine.Timer(freq, callback) -- create a software timer object
    eg: machine.Timer(freq=1, callback=lambda t:print(t))

Pins are numbered 0-29, and 26-29 have ADC capabilities
Pin IO modes are: Pin.IN, Pin.OUT, Pin.ALT
Pin pull modes are: Pin.PULL_UP, Pin.PULL_DOWN

Useful control commands:
  CTRL-C -- interrupt a running program
  CTRL-D -- on a blank line, do a soft reset of the board
  CTRL-E -- on a blank line, enter paste mode

For further help on a specific object, type help(obj)
For a list of available modules, type help('modules')
```

You can run help() in the REPL any time you need a reminder.

While we're here, let's flash the LED on the Pico board.

Enter the following at the REPL prompt...

```python
from machine import Pin

led = Pin(25, Pin.OUT)
led.toggle()
```

Every time you call led.toggle() the LED will toggle from ON to OFF or OFF to ON.

To exit the REPL at any time, press Ctrl-X.

Uploading a file

MicroPython comes with a lot of built-in support for simple devices and communication protocols -- enough to build some quite fun things just by hacking in the REPL. But there are also a lot of libraries available for working with more complex hardware. To use these, you need to be able to upload them to your Pico! Once you can upload files, you can also edit your own code locally on your own computer and upload it from there.

To keep things simple, let's create our own "library" that adjusts the brightness of the LED on the Pico board -- exciting, I know. This library contains a single function, ledon, which accepts a single parameter, brightness, between 0 and 65535.

```python
from machine import Pin, PWM

led = PWM(Pin(25))

def ledon(brightness=65535):
    led.duty_u16(brightness)
```

Don't worry if you don't understand it, we'll cover how this works later. The important bit now is getting this on your Pico.

Take the code above and save it in a file named on your main computer, in the same folder you're executing rshell from. We'll upload this file to the Pico next.

Start rshell if you are not already in it -- look for the green prompt. Enter boards at the prompt to get a list of connected boards.

```
pyboard @ COM4 connected Epoch: 1970 Dirs:
```

To see the directory contents of the pyboard device, you can enter:

```bash
ls /pyboard
```

You should see nothing listed. The path /pyboard works like a virtual folder, meaning you can copy files to this location to have them uploaded to your Pico. It is only available while a pyboard is connected. To upload a file, we copy it to this location. Enter the following at the prompt.

```bash
cp /pyboard/
```

After you press Enter you'll see a message confirming the copy is taking place

```
C:\Users\Gebruiker> cp /pyboard
Copying 'C:\Users\Gebruiker/' to '/pyboard/' ...
```

After you press Enter you'll see a message confirming that the copy is taking place:

```
C:\Users\Gebruiker> boards
pyboard @ COM4 connected Epoch: 1970 Dirs: / /pyboard/
```

You can also enter ls /pyboard to see the listing directly.

```
C:\Users\Gebruiker> ls /pyboard
```

If you ever need to upload multiple files, just repeat the upload steps until everything is where it needs to be. You can always drop in and out of the REPL to make sure things work.

Using uploaded libraries

Now we've uploaded our library, we can use it from the REPL. To get to the MicroPython REPL enter the repl command in rshell as before. To use the library we uploaded, we can import it, just like any other Python library.

```
MicroPython v1.14 on 2021-02-14; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>> import picoled
>>> picoled.ledon(65535)
>>> picoled.ledon(30000)
>>> picoled.ledon(20000)
>>> picoled.ledon(10000)
```

Or to pulse the brightness of the LED...

```
>>> import picoled
>>> import time
>>> while True:
...     for a in range(0, 65536, 10000):
...         picoled.ledon(a)
...         time.sleep(0.1)
```

Auto-running Python

So far we've been uploading code and running it manually, but once you start building projects you'll want your code to run automatically.

When it starts, MicroPython runs two scripts by default: and, in that order. By uploading your own script with the name, it will run automatically every time the Raspberry Pi Pico starts.

Let's update our "library" to become an auto script that runs at startup. Save the following code to a script named

```python
from machine import Pin, PWM
from time import sleep

led = PWM(Pin(25))

def ledon(brightness=65535):
    led.duty_u16(brightness)

while True:
    for a in range(0, 65536, 10000):
        ledon(a)
        sleep(0.1)
```

In rshell, run the following command to copy the file to on the board.

```bash
cp /pyboard/
```

Don't copy this file to -- the loop will block the REPL startup and you won't be able to connect to your Pico to delete it again! If you do this, use the Resetting Flash memory instructions to clear your Pico. You will need to re-install MicroPython afterwards.

Once the file is uploaded, restart your Pico -- either unplug and re-plug it, or press Ctrl-D in the REPL -- and the LED will start pulsing automatically. The script will continue running until it finishes, or the Pico is reset. You can replace the script at any time to change the behavior, or delete it with:

```bash
rm /pyboard/
```

What's next?

Now you can upload libraries to your Pico you can get experimenting with the many MicroPython libraries that are available.

If you're looking for some more things to do with MicroPython on your Pico, there are some MicroPython examples available from Raspberry Pi themselves, and also the MicroPython documentation for language/API references.

Categories: FLOSS Project Planets

Real Python: Object-Oriented Programming (OOP) in Python 3

Mon, 2023-09-11 10:00

Object-oriented programming (OOP) is a method of structuring a program by bundling related properties and behaviors into individual objects. In this tutorial, you’ll learn the basics of object-oriented programming in Python.

Conceptually, objects are like the components of a system. Think of a program as a factory assembly line of sorts. At each step of the assembly line, a system component processes some material, ultimately transforming raw material into a finished product.

An object contains data, like the raw or preprocessed materials at each step on an assembly line. In addition, the object contains behavior, like the action that each assembly line component performs.

In this tutorial, you’ll learn how to:

  • Define a class, which is like a blueprint for creating an object
  • Use classes to create new objects
  • Model systems with class inheritance

Note: This tutorial is adapted from the chapter “Object-Oriented Programming (OOP)” in Python Basics: A Practical Introduction to Python 3.

The book uses Python’s built-in IDLE editor to create and edit Python files and interact with the Python shell, so you’ll see occasional references to IDLE throughout this tutorial. If you don’t use IDLE, you can run the example code from the editor and environment of your choice.

Get Your Code: Click here to download the free sample code that shows you how to do object-oriented programming with classes in Python 3.

What Is Object-Oriented Programming in Python?

Object-oriented programming is a programming paradigm that provides a means of structuring programs so that properties and behaviors are bundled into individual objects.

For example, an object could represent a person with properties like a name, age, and address and behaviors such as walking, talking, breathing, and running. Or it could represent an email with properties like a recipient list, subject, and body and behaviors like adding attachments and sending.

Put another way, object-oriented programming is an approach for modeling concrete, real-world things, like cars, as well as relations between things, like companies and employees or students and teachers. OOP models real-world entities as software objects that have some data associated with them and can perform certain operations.

Note: You can also check out the Python Basics: Object-Oriented Programming video course to reinforce the skills that you’ll develop in this section of the tutorial.

The key takeaway is that objects are at the center of object-oriented programming in Python. In other programming paradigms, objects only represent the data. In OOP, they additionally inform the overall structure of the program.

How Do You Define a Class in Python?

In Python, you define a class by using the class keyword followed by a name and a colon. Then you use .__init__() to declare which attributes each instance of the class should have:

```python
class Employee:
    def __init__(self, name, age): = name
        self.age = age
```

But what does all of that mean? And why do you even need classes in the first place? Take a step back and consider using built-in, primitive data structures as an alternative.

Primitive data structures—like numbers, strings, and lists—are designed to represent straightforward pieces of information, such as the cost of an apple, the name of a poem, or your favorite colors, respectively. What if you want to represent something more complex?

For example, you might want to track employees in an organization. You need to store some basic information about each employee, such as their name, age, position, and the year they started working.

One way to do this is to represent each employee as a list:

```python
kirk = ["James Kirk", 34, "Captain", 2265]
spock = ["Spock", 35, "Science Officer", 2254]
mccoy = ["Leonard McCoy", "Chief Medical Officer", 2266]
```

There are a number of issues with this approach.

First, it can make larger code files more difficult to manage. If you reference kirk[0] several lines away from where you declared the kirk list, will you remember that the element with index 0 is the employee’s name?

Second, it can introduce errors if employees don’t have the same number of elements in their respective lists. In the mccoy list above, the age is missing, so mccoy[1] will return "Chief Medical Officer" instead of Dr. McCoy’s age.

A great way to make this type of code more manageable and more maintainable is to use classes.

Read the full article at »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Stack Abuse: Maximum and Minimum Values for Integers in Python

Mon, 2023-09-11 09:31

Integers are one of the fundamental data types that you'll encounter. They're used in just about every application, and understanding their limits can be crucial for avoiding errors or even optimizing your code. In this Byte, we'll peek into the world of integers, exploring how to find their maximum and minimum values and why you might need to know these values.

Integers in Python

Python is a dynamically typed language, which means that the Python interpreter infers the type of an object at runtime. This is different from statically-typed languages where you have to explicitly declare the type of all variables. For integers, Python provides the int type. Here's a simple example:

```python
x = 10
print(type(x))  # <class 'int'>
```

This is a basic usage of an integer in Python. But what if we try to assign a really, really large value to an integer?

```python
x = 10**100
print(x)
# 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
print(type(x))  # <class 'int'>
```

Even with such a large number, Python still treats it as an integer! This is because Python's int type can handle large integers, limited only by the amount of memory available.
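A quick check you can run yourself: going one past the largest value a signed 64-bit integer can hold doesn't overflow, Python just keeps counting.

```python
n = 2**63 - 1   # largest value a signed 64-bit integer can hold
n = n + 1       # one past that limit
print(type(n))  # <class 'int'> -- still a plain int, no overflow
print(n)        # 9223372036854775808
```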

Why Would You Need to Know Max/Min Integer Values?

So you might be wondering why you'd ever need to know the maximum or minimum values of an integer in Python. After all, Python's int type can handle pretty large numbers, right? Well, while it's true that Python's int type can handle large numbers, there are still cases where knowing the maximum or minimum values can be useful.

For instance, when interfacing with C libraries or when dealing with file formats or network protocols that have specific integer size requirements, it's important to know the limits of your integers. Also, knowing the limits of your integers can be useful for debugging and optimization.
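One way to see those size requirements bite is the standard-library struct module, which packs Python values into fixed-size C-style binary fields. This sketch shows that a Python int which is perfectly valid on its own can still be rejected when it must fit in 32 bits:

```python
import struct

# Pack a Python int into a 4-byte signed C integer, as a binary
# file format or network protocol might require.
packed = struct.pack('<i', 2**31 - 1)   # largest value that fits
print(packed)  # b'\xff\xff\xff\x7f'

# One more than that is still a valid Python int, but it no longer
# fits in 32 bits, so struct refuses to pack it.
try:
    struct.pack('<i', 2**31)
except struct.error as err:
    print("Out of range:", err)
```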

Another common use for min/max values is in certain algorithms. Let's say you're trying to find the minimum number in a set. For the sake of the initial comparison, you'd likely want to set your min value to the highest number possible so that the first value you compare it to will be lower. In a language like JavaScript, we'd use:

```javascript
let min = Infinity;
```

But unfortunately, Python doesn't have a built-in way to do that.

How to Find Maximum/Minimum Integer Values

In Python, the sys module provides a constant, sys.maxsize, that represents the maximum integer that can be used for things like indexing Python's built-in data structures. Here's how you can access it:

```python
import sys
print(sys.maxsize)  # 9223372036854775807
```

Note: The value of sys.maxsize can vary between platforms and Python versions, but it's generally 2**31 - 1 on a 32-bit platform and 2**63 - 1 on a 64-bit platform.

But what about the minimum value? Python doesn't have a built-in way to find the minimum value of an integer. However, since Python's integers can be negative, the minimum value is simply -sys.maxsize - 1.

```python
import sys
print(-sys.maxsize - 1)  # -9223372036854775808
```

Finding the Min/Max Values for Floats, Including Infinity

Floating-point numbers in Python have their limits, just like integers. However, these limits are fairly large and suffice for most applications. Knowing these limits becomes essential when you're dealing with expansive datasets or high-precision calculations.

You can find the maximum and minimum float values using the sys.float_info object, which is part of Python's sys module. This object provides details about the floating-point type, including its maximum and minimum representable positive finite values.

```python
import sys

print("Max finite float value:", sys.float_info.max)
print("Min positive finite float value:", sys.float_info.min)
```

When you execute this code, you'll likely see output similar to the following:

```
Max finite float value: 1.7976931348623157e+308
Min positive finite float value: 2.2250738585072014e-308
```

Note: Again, the exact values may differ based on your system's architecture and the version of Python you are using.

Interestingly, Python also provides a way to represent positive and negative infinity for float types, which effectively serve as bounds beyond the finite limits. You can define these infinities using float('inf') for positive infinity and float('-inf') for negative infinity.

Here's a quick example:

positive_infinity = float('inf')
negative_infinity = float('-inf')

print("Positive Infinity:", positive_infinity)
print("Negative Infinity:", negative_infinity)

Running this code snippet will display:

Positive Infinity: inf
Negative Infinity: -inf

These special float values can come in handy for initializing variables in algorithms, where you need a value guaranteed to be higher or lower than any other number.
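For example, the minimum-finding pattern from the introduction translates directly to Python using float('inf') as the starting value:

```python
def find_min(values):
    # Start at positive infinity so any real number compares lower.
    minimum = float('inf')
    for v in values:
        if v < minimum:
            minimum = v
    return minimum

print(find_min([42, -7, 13]))  # -7
```

In practice you'd reach for the built-in min() for this, but the pattern of seeding a comparison with float('inf') or float('-inf') shows up in many hand-rolled algorithms.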

Python 2 vs Python 3

When it comes to integer and float limits, there's a significant difference between Python 2 and Python 3.

In Python 2, there were two types of integers: int and long. The int type does have a limit, but the long type could handle arbitrarily large numbers. In Python 3, however, these two types were merged into a single int type, which can handle arbitrarily large numbers just like the long type in Python 2.

As for floats, there's no difference between Python 2 and Python 3. Both versions use the IEEE 754 standard for floating-point arithmetic, which defines the max and min values we discussed in the previous section.


Conclusion

While Python's dynamic typing system makes it easy to work with numbers, it's still important to know these limits, especially when dealing with very large numbers or high-precision calculations. I hope this Byte has shed some light on a topic that often goes unnoticed but is still important in Python programming.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Jelle Zijlstra

Mon, 2023-09-11 08:30

This week, we welcome Jelle Zijlstra as our PyDev of the Week! Jelle is a core developer of the Python programming language. If you’d like to see what exactly Jelle is working on, you should check out his GitHub profile, Quora, or the Python Packaging Index.

Let’s take a few moments to get to know Jelle better!

Can you tell us a little about yourself (hobbies, education, etc):

I grew up in the Netherlands wanting to be a biologist, but decided to go to the United States for college. Then I took a computer science class, took a few more, and decided that software engineering offered better career prospects than biology. I still graduated with a degree in biology, but found a job as a software engineer.

Still, biology remains one of my main hobbies: I maintain a big database of biological nomenclature at I started out doing this in spreadsheets, and now it is a SQLite database. Learning to program has been hugely valuable for building up a high-quality, interconnected database.

Before the pandemic I used to walk much of the way to work, but then I was stuck at home, so I decided to go on walks anyway to exercise. That quickly spiraled into ridiculously long walks around the San Francisco Bay Area, up to 57 miles in a day. I write about my walks at

But the reason you’re interviewing me is my open source work. At my job a few years ago I got to start a new service that was going to be in Python 3, so I got to use shiny new tools like typing and asyncio. I ran into some minor issues with typing, started contributing to the relevant open source projects, and got sucked in. Now I help maintain several important parts of the Python ecosystem, including typing-extensions, typeshed, Black, mypy, and CPython.

Why did you start using Python?

The first language I learned in college was C. I’m still not sure whether I think C is a good first language to teach to students, but it’s definitely been valuable in giving me an understanding of how basic concepts such as pointers and memory work. Later in that class, we did a little bit of web programming in PHP. PHP isn’t a great language, but for my personal programming projects (such as maintaining my mammal database), it was still a better fit than C—no segfaults and much more library support. But I had to use Python in another class, and I quickly realized it was a lot better than PHP. Then I started a job mostly using Python, so it remained my main language.

What other programming languages do you know and which is your favorite?

When I interviewed for my current job, during each interview I picked the language that felt like the best fit for the question, and at the end of the day it turned out I had used a different language in each interview! I think it was C, Python, JavaScript, and OCaml. More recently, the main non-Python languages I’ve used have been TypeScript and C++, plus a bit of Swift. I think languages are valuable when they teach you something new about programming. OCaml was the most valuable language I learned in college because it taught me a completely new style of programming. Among languages I have dabbled in since, Haskell and Rust have been most useful.

What projects are you working on now?

My most recent major open-source contribution has been implementing PEP 695 for Python 3.12, which is coming out in October. It’s a major syntactic change that makes it a lot easier to write generics in Python. I wrote up a detailed account of the implementation at

At my job, I am now working on Poe, an AI chat app. Outside of work, I’ve been focusing recently on my mammal database instead of open-source software.

Which Python libraries are your favorite (core or 3rd party)?

typing. It’s been an invaluable tool to keep our huge Python codebase at work manageable.

How did you get into doing core Python development?

I attended a CPython sprint at PyCon in Portland and Cleveland before the pandemic and contributed a few things; for example, you can thank me for `@contextlib.asynccontextmanager`. However, the CPython core always felt a little remote and hard to get into, with the unusual workflow at the time (e.g., using Gerrit) and long release cycle. But then one day, Guido van Rossum emailed me to ask whether I was interested in becoming a core dev, so I said yes and after spending a few months becoming more familiar with the workflow (which is now a lot less unusual), I was voted in as a core dev. Guido asked me because at the time he was basically maintaining by himself, and obviously he is very busy. Now, we have several other people helping out in that area.

What are some new features in Python that you’re excited about?

I’m excited about the new syntax for generics and type aliases that I helped get into Python 3.12. Longer term, Python 3.13 should ship with deferred evaluation of annotations (PEP 649), which will make a lot of code that uses type annotations more ergonomic. We’re also likely to ship support for inline TypedDict types, another nice usability improvement.

Thanks so much for doing the interview, Jelle!

The post PyDev of the Week: Jelle Zijlstra appeared first on Mouse Vs Python.


Talk Python to Me: #429: Taming Flaky Tests

Mon, 2023-09-11 04:00
We write tests to show us when there are problems with our code. But what if there are intermittent problems with the tests themselves? That can be a big hassle. In this episode, we have Gregory Kapfhammer and Owain Parry on the show to share their research and advice for taming flaky tests.

Links from the show:

- Gregory Kapfhammer
- Owain Parry on Twitter: @oparry9
- Radon
- pytest-xdist
- awesome-pytest
- Tenacity
- Stamina
- Flaky Test Management
- Flaky Test Management (Datadog)
- Flaky Test Management (Spotify)
- Flaky Test Management (Google)
- Detecting Test Pollution
- Surveying the developer experience of flaky tests paper
- Build Kite CI/CD
- Flake It: Finding and Fixing Flaky Test Cases
- Unflakable
- CircleCI Test Detection
- Watch this episode on YouTube

Stay in touch with us: Subscribe to us on YouTube; Follow Talk Python on Mastodon: talkpython; Follow Michael on Mastodon: mkennedy

Sponsors: PyCharm; Sentry Error Monitoring, Code TALKPYTHON; Talk Python Training

Stack Abuse: Parsing Boolean Values with Argparse in Python

Sun, 2023-09-10 18:29

In this Byte, we'll be looking into Python's argparse module and how we can use it to parse boolean values from the command line. This is often used when you're writing a command line tool or even a more complex application. Understanding argparse and its functionalities can make your life a lot easier when taking input from the command line.

The Argparse Module

Argparse is a Python module that makes it easy to write user-friendly command-line interfaces. The argparse module can handle positional arguments, optional arguments, and even sub-commands. It can also generate usage and help messages, and throw errors when users give the program invalid arguments.

Argparse has been part of the Python standard library since version 2.7, so you don't need to install anything extra to start using it. It's designed to replace the older optparse module, and offers a more flexible interface.

Basic Usage of Argparse

To start using argparse, you first need to import it:

import argparse

Next, you create a parser object:

parser = argparse.ArgumentParser(description='My first argparse program.')

The ArgumentParser object holds all the information necessary to parse the command line into Python data types. The description argument to ArgumentParser is a text to display before the argument help (the --help text).

To add arguments to your program, you use the add_argument() method:

parser.add_argument('integers', metavar='N', type=int, nargs='+',
                    help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
                    const=sum, default=max,
                    help='sum the integers (default: find the max)')

Finally, you can parse the command-line arguments with parse_args():

args = parser.parse_args()
print(args.accumulate(args.integers))

This will parse the command line, convert each argument to the appropriate type, and then invoke any actions you've specified.
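A handy detail for experimenting: parse_args() also accepts an explicit list of strings, which lets you exercise the accumulator example above without touching the real command line:

```python
import argparse

parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
                    help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
                    const=sum, default=max,
                    help='sum the integers (default: find the max)')

# Passing a list to parse_args() bypasses sys.argv -- useful in tests.
args = parser.parse_args(['1', '2', '3', '--sum'])
print(args.accumulate(args.integers))  # 6
```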

Parsing Boolean Values

Now, let's see how we can parse boolean values with argparse. For this, we'll use the built-in store_true or store_false actions. These actions store the values True and False respectively.

import argparse

parser = argparse.ArgumentParser(description='Parse boolean values.')
parser.add_argument('--flag', dest='flag', action='store_true',
                    help='Set the flag value to True.')
parser.add_argument('--no-flag', dest='flag', action='store_false',
                    help='Set the flag value to False.')
parser.set_defaults(flag=True)

args = parser.parse_args()
print('Flag:', args.flag)

In this script, we've set up two command-line options: --flag and --no-flag. The --flag option sets args.flag to True, and the --no-flag option sets it to False. If neither option is given, the set_defaults() method sets args.flag to True.

Here's how it works in the command line:

$ python --flag
Flag: True
$ python --no-flag
Flag: False
$ python
Flag: True

As you can see, argparse makes it easy to parse boolean values, and even to set default values. This can be very useful in a lot of applications, from simple scripts to more complex command-line interfaces.
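If you're on Python 3.9 or newer, argparse can generate the paired --flag/--no-flag options for you via argparse.BooleanOptionalAction, so a single add_argument call replaces the two calls above (a sketch; on older versions you need the manual approach):

```python
import argparse

parser = argparse.ArgumentParser(description='Parse boolean values.')
# One call creates both --flag and --no-flag (Python 3.9+).
parser.add_argument('--flag', action=argparse.BooleanOptionalAction,
                    default=True, help='Enable or disable the flag.')

print(parser.parse_args(['--no-flag']).flag)  # False
print(parser.parse_args([]).flag)             # True
```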

Other Ways to Pass Boolean Values via CLI

In addition to the store_true and store_false actions, there are other ways you can pass boolean values using the command line interface. One common alternative is to pass the boolean value as a string, and then convert this string to a boolean in your script.

Let's consider the following example:

import argparse

def str2bool(v):
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    else:
        raise argparse.ArgumentTypeError('Boolean value expected.')

parser = argparse.ArgumentParser()
parser.add_argument('--flag', type=str2bool, help='Boolean flag')
args = parser.parse_args()
print(args.flag)

Here we've defined a helper function str2bool that converts a string to a boolean value. It works by checking for common "truthy" and "falsey" values.

We then use this function as the type for our argument. This allows us to pass boolean values as strings, like this:

$ python --flag yes
True
$ python --flag no
False

Conclusion

In this Byte, we've shown some examples of how to parse boolean values with the argparse module in Python. We've looked at the basic usage of argparse, how to parse boolean values, and some alternatives for handling boolean values. Whether you're writing a simple script for personal use or a complex application to be used by others, argparse is a useful tool for making your script more flexible and user-friendly.
