Feeds

Python Software Foundation: Announcing Python Software Foundation Fellow Members for Q2 2024! 🎉

Planet Python - Thu, 2024-10-24 06:00

 

The PSF is pleased to announce its second batch of PSF Fellows for 2024! Let us welcome the new PSF Fellows for Q2! The following people continue to do amazing things for the Python community:

Leonard Richardson

Blog

Winnie Ke 

Facebook, LinkedIn

Thank you for your continued contributions. We have added you to our Fellow roster.

The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.

Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members are available online: https://www.python.org/psf/fellows/. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. Quarter 3 nominations are currently in review. We are accepting nominations for Quarter 4 through November 20th, 2024.

Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.

Categories: FLOSS Project Planets

Talk Python to Me: #482: Pre-commit Hooks for Python Devs

Planet Python - Thu, 2024-10-24 04:00
Do you struggle to make sure your code is always correct before you check it in? What about your team members' code? That one person who never wants to run the linter? Tired of dealing with tons of conflicts and spurious git changes? You need git pre-commit hooks. We're lucky to have Stefanie Molin on this episode who has done a bunch of writing and teaching of git hooks.

Episode sponsors

  • Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
  • Bluehost: https://talkpython.fm/bluehost
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • Stefanie Molin: https://stefaniemolin.com/?featured_on=talkpython
  • Talk Python Blog: https://talkpython.fm/blog/
  • How to Set Up Pre-Commit Hooks: https://stefaniemolin.com/articles/devx/pre-commit/setup-guide/?featured_on=talkpython
  • Common Pre-Commit Errors and How to Solve Them: https://stefaniemolin.com/articles/devx/pre-commit/troubleshooting-guide/?featured_on=talkpython
  • A Behind-the-Scenes Look at How Pre-Commit Works: https://stefaniemolin.com/articles/devx/pre-commit/behind-the-scenes/?featured_on=talkpython
  • Pre-Commit Hook Creation Guide: https://stefaniemolin.com/articles/devx/pre-commit/hook-creation-guide/?featured_on=talkpython
  • (Pre-)Commit to Better Code Workshop: https://stefaniemolin.com/workshops/pre-commit-workshop/?featured_on=talkpython
  • exif-stripper: https://stefaniemolin.com/articles/devx/pre-commit/exif-stripper/?featured_on=talkpython
  • exif-stripper on GitHub: https://github.com/stefmolin/exif-stripper?featured_on=talkpython
  • docstring-validation-using-pre-commit-hook: https://numpydoc.readthedocs.io/en/latest/validation.html#docstring-validation-using-pre-commit-hook
  • Data Morph: Moving Beyond the Datasaurus Dozen: https://stefaniemolin.com/articles/data-science/introducing-data-morph/?featured_on=talkpython
  • Data Morph on GitHub: https://github.com/stefmolin/data-morph?featured_on=talkpython
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=EzlzX1OL92w
  • Episode transcripts: https://talkpython.fm/episodes/transcript/482/pre-commit-hooks-for-python-devs

Stay in touch with us

  • Subscribe to us on YouTube: https://talkpython.fm/youtube
  • Follow Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
  • Follow Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Categories: FLOSS Project Planets

Parabola GNU/Linux-libre: manual intervention required for local pacman repositories

GNU Planet! - Thu, 2024-10-24 02:11

NOTE: pacman v7 is currently in [libre-testing], but it will be promoted to libre soon

from arch:

With the release of [version 7.0.0], pacman has added support for downloading packages as a separate user with dropped privileges.

For users with local repos, however, this might mean that the download user does not have access to the files in question. This can be fixed by assigning the files and folders to the alpm group and ensuring the executable bit (+x) is set on the folders in question.

$ chown :alpm -R /path/to/local/repo

Remember to [merge the .pacnew] files to apply the new default.

Pacman also introduced [a change] to improve checksum stability for git repos that utilize .gitattributes files. This might require a one-time checksum change for PKGBUILDs that use git sources.

Categories: FLOSS Project Planets

amazee.io: Push Your Code. We’ll Handle The Rest.

Planet Drupal - Wed, 2024-10-23 20:00
What does “pushing code” mean for your business? Read on to discover how easy it is to get your code from development to production without headaches.
Categories: FLOSS Project Planets

Valhalla's Things: Asemic Writing, a Zine

Planet Debian - Wed, 2024-10-23 20:00
Posted on October 24, 2024
Tags: madeof:atoms, madeof:bits, craft:zine

I have no idea either.

Happy Maladay[1] to those who celebrate it, I guess.

If you care about the how, it started as china ink on tracing paper, with the help of a template (and a correction sheet for one page where I used the wrong line on the template).

A rubber stamp was carved with the author’s signature and stamped on white paper because the ink from the pad wasn’t working well on tracing paper.

Then everything was scanned (with the correction on top of the wrong page): asemic_zine_scans.tar.

Imported into Inkscape and traced: asemic_zine_svg.tar.

Printed, cut in half, folded and stapled. The magenta lines weren’t by design, but are there because my printer is currently[2] cursed.

And finally, asemic_zine.pdf was created, joining the pages together with pdfjam, for convenience in case somebody wants to download the full thing.

All the .tar and .pdf downloads from this page are released under the WTFPL, or All Rites Reversed.

  1. it’s still technically Maladay when I write this, even if by the time you’ll get this it’s probably the 6th of The Aftermath.↩︎

  2. I mean, all printers are always cursed, but at different times they can be cursed in different and novel ways.↩︎

Categories: FLOSS Project Planets

Acquia Developer Portal Blog: Crafting A Winning Content Strategy for Your Drupal Site

Planet Drupal - Wed, 2024-10-23 18:21

Learn to develop an effective content strategy for your Drupal site with expert advice. Drive traffic, engage users, and boost conversions with strategic content planning.

Introduction

A content strategy is essential for any modern Drupal site as it aligns content with business goals, optimizes user experience, ensures quality and consistency, and leverages Drupal's advanced content management features for efficient governance and workflow management. Your organization’s content strategy will incorporate SEO best practices to amplify online visibility and prepare the organization for future digital trends and multi-channel distribution. 

Ultimately, a well-crafted content strategy is a linchpin in transforming a Drupal CMS into a dynamic asset that supports engagement, lead conversion, and customer loyalty, thus driving an organization's digital success.

Categories: FLOSS Project Planets

Calamares towards 3.3.11

Planet KDE - Wed, 2024-10-23 18:00

I’m going to change up the Calamares release process a little. It’s been slow going as a community-maintained project – which isn’t to say that that is a bad thing. Just slow. I’ve decided to make releases marginally more predictable than “when [ade] has a relaxed kind of Tuesday” and have marked a couple of issues with the Calamares 3.3 milestone. When the milestone is empty again, then there will be a release. After the next release, I’ll put a couple more issues on the milestone, and the recipe can be repeated.

Categories: FLOSS Project Planets

EBN lives?

Planet KDE - Wed, 2024-10-23 18:00

Many years ago I was involved in software-quality research – the SQO-OSS project and things like that. That work begat the code-quality checking scripts that we in the KDE community called “the EBN”, or EnglishBreakfastNetwork. I was a tea-drinker then. The EBN stuff has been surpassed by Klazy and many other software-quality-checking tools. But the EBN domain carries on. Although I haven’t got anything to put on it I just renewed the domain again for two years – just in case.

Categories: FLOSS Project Planets

PyPy: A DSL for Peephole Transformation Rules of Integer Operations in the PyPy JIT

Planet Python - Wed, 2024-10-23 11:00

As is probably apparent from the sequence of blog posts about the topic in the last year, I have been thinking about and working on integer optimizations in the JIT compiler a lot. This work was mainly motivated by Pydrofoil, where integer operations matter a lot more than for your typical Python program.

In this post I'll describe my most recent change, which is a new small domain specific language that I implemented to specify peephole optimizations on integer operations in the JIT. It uses pattern matching to specify how (sequences of) integer operations should be simplified and optimized. The rules are then compiled to RPython code that then becomes part of the JIT's optimization passes.

To make it less likely to introduce incorrect optimizations into the JIT, the rules are automatically proven correct with Z3 as part of the build process (for a more hands-on intro to how that works you can look at the knownbits post). In this blog post I want to motivate why I introduced the DSL and give an introduction to how it works.

Motivation

This summer, after I wrote my scripts to mine JIT traces for missed optimization opportunities, I started implementing a few of the integer peephole rewrites that the script identified. Unfortunately, doing so led to the problem that the way we express these rewrites up to now is very imperative and verbose. Here's a snippet of RPython code that shows some rewrites for integer multiplication (look at the comments to see what the different parts actually do). You don't need to understand the code in detail, but basically it's in very imperative style and there's quite a lot of boilerplate.

def optimize_INT_MUL(self, op):
    arg0 = get_box_replacement(op.getarg(0))
    b0 = self.getintbound(arg0)
    arg1 = get_box_replacement(op.getarg(1))
    b1 = self.getintbound(arg1)
    if b0.known_eq_const(1):
        # 1 * x == x
        self.make_equal_to(op, arg1)
    elif b1.known_eq_const(1):
        # x * 1 == x
        self.make_equal_to(op, arg0)
    elif b0.known_eq_const(0) or b1.known_eq_const(0):
        # 0 * x == x * 0 == 0
        self.make_constant_int(op, 0)
    else:
        for lhs, rhs in [(arg0, arg1), (arg1, arg0)]:
            lh_info = self.getintbound(lhs)
            if lh_info.is_constant():
                x = lh_info.get_constant_int()
                if x & (x - 1) == 0:
                    # x * (2 ** c) == x << c
                    new_rhs = ConstInt(highest_bit(lh_info.get_constant_int()))
                    op = self.replace_op_with(op, rop.INT_LSHIFT, args=[rhs, new_rhs])
                    self.optimizer.send_extra_operation(op)
                    return
                elif x == -1:
                    # x * -1 == -x
                    op = self.replace_op_with(op, rop.INT_NEG, args=[rhs])
                    self.optimizer.send_extra_operation(op)
                    return
            else:
                # x * (1 << y) == x << y
                shiftop = self.optimizer.as_operation(get_box_replacement(lhs), rop.INT_LSHIFT)
                if shiftop is None:
                    continue
                if not shiftop.getarg(0).is_constant() or shiftop.getarg(0).getint() != 1:
                    continue
                shiftvar = get_box_replacement(shiftop.getarg(1))
                shiftbound = self.getintbound(shiftvar)
                if shiftbound.known_nonnegative() and shiftbound.known_lt_const(LONG_BIT):
                    op = self.replace_op_with(
                        op, rop.INT_LSHIFT, args=[rhs, shiftvar])
                    self.optimizer.send_extra_operation(op)
                    return
        return self.emit(op)

Adding more rules to these functions is very tedious and gets super confusing when the functions get bigger. In addition I am always worried about making mistakes when writing this kind of code, and there is no feedback at all about which of these rules are actually applied a lot in real programs.

Therefore I decided to write a small domain specific language with the goal of expressing these rules in a more declarative way. In the rest of the post I'll describe the DSL (most of that description is adapted from the documentation about it that I wrote).

The Peephole Rule DSL

Simple transformation rules

The rules in the DSL specify how integer operations can be transformed into other, cheaper integer operations. A rule always consists of a name, a pattern, and a target. Here's a simple rule:

add_zero: int_add(x, 0) => x

The name of the rule is add_zero. It matches operations in the trace of the form int_add(x, 0), where x will match anything and 0 will match only the constant zero. After the => arrow is the target of the rewrite, i.e. what the operation is rewritten to, in this case x.

The rule language has a list of which of the operations are commutative, so add_zero will also optimize int_add(0, x) to x.

Variables in the pattern can repeat:

sub_x_x: int_sub(x, x) => 0

This rule matches against int_sub operations where the two arguments are the same (either the same box, or the same constant).

Here's a rule with a more complicated pattern:

sub_add: int_sub(int_add(x, y), y) => x

This pattern matches int_sub operations, where the first argument was produced by an int_add operation. In addition, one of the arguments of the addition has to be the same as the second argument of the subtraction.

The constants MININT, MAXINT and LONG_BIT (which is either 32 or 64, depending on which platform the JIT is built for) can be used in rules, they behave like writing numbers but allow bit-width-independent formulations:

is_true_and_minint: int_is_true(int_and(x, MININT)) => int_lt(x, 0)

It is also possible to have a pattern where some arguments need to be a constant, without specifying which constant. Those patterns look like this:

sub_add_consts: int_sub(int_add(x, C1), C2) # incomplete
    # more goes here
    => int_sub(x, C)

Variables in the pattern that start with a C match against constants only. However, in this current form the rule is incomplete, because the variable C that is being used in the target operation is not defined anywhere. We will see how to compute it in the next section.

Computing constants and other intermediate results

Sometimes it is necessary to compute intermediate results that are used in the target operation. To do that, there can be extra assignments between the rule head and the rule target:

sub_add_consts: int_sub(int_add(x, C1), C2)
    C = C2 - C1
    => int_sub(x, C)

The right hand side of such an assignment is a subset of Python syntax, supporting arithmetic using +, -, *, and certain helper functions. However, the syntax allows you to be explicit about unsignedness for some operations. E.g. >>u exists for unsigned right shifts (and I plan to add >u, >=u, <u, <=u for comparisons).
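To give an intuition for what >>u means outside the DSL, here is a small illustrative Python snippet (not part of the DSL or its implementation; the helper name and the LONG_BIT constant are assumptions for the example) contrasting Python's arithmetic right shift with a logical, unsigned one on a 64-bit value:

LONG_BIT = 64
MASK = (1 << LONG_BIT) - 1  # 0xFFFF_FFFF_FFFF_FFFF

def urshift(value, n):
    # reinterpret the two's-complement bit pattern as unsigned, then shift
    return (value & MASK) >> n

print(-1 >> 1)         # -1: Python's >> is an arithmetic shift and keeps the sign
print(urshift(-1, 1))  # 9223372036854775807, i.e. 2**63 - 1: the logical shift fills with zeros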

Here's an example of a rule that uses >>u:

urshift_lshift_x_c_c: uint_rshift(int_lshift(x, C), C)
    mask = (-1 << C) >>u C
    => int_and(x, mask)

Checks

Some rewrites are only true under certain conditions. For example, int_eq(x, 1) can be rewritten to x, if x is known to store a boolean value. This can be expressed with checks:

eq_one: int_eq(x, 1)
    check x.is_bool()
    => x

A check is followed by a boolean expression. The variables from the pattern can be used as IntBound instances in checks (and also in assignments) to find out what the abstract interpretation of the JIT knows about the value of a trace variable (IntBound is the name of the abstract domain that the JIT uses for integers, despite the fact that it also stores knownbits information nowadays).

Here's another example:

mul_lshift: int_mul(x, int_lshift(1, y))
    check y.known_ge_const(0) and y.known_le_const(LONG_BIT)
    => int_lshift(x, y)

It expresses that x * (1 << y) can be rewritten to x << y but checks that y is known to be between 0 and LONG_BIT.

Checks and assignments can be repeated and combined with each other:

mul_pow2_const: int_mul(x, C)
    check C > 0 and C & (C - 1) == 0
    shift = highest_bit(C)
    => int_lshift(x, shift)

In addition to calling methods on IntBound instances, it's also possible to access their attributes, like in this rule:

and_x_c_in_range: int_and(x, C)
    check x.lower >= 0 and x.upper <= C & ~(C + 1)
    => x

Rule Ordering and Liveness

The generated optimizer code will give preference to applying rules that produce a constant or a variable as a rewrite result. Only if none of those match do rules that produce new result operations get applied. For example, the rules sub_x_x and sub_add are tried before trying sub_add_consts, because the former two rules optimize to a constant and a variable respectively, while the latter produces a new operation as the result.

The rule sub_add_consts has a possible problem, which is that if the intermediate result of the int_add operation in the rule head is used by some other operations, then the sub_add_consts rule does not actually reduce the number of operations (and might actually make things slightly worse due to increased register pressure). However, currently it would be extremely hard to take that kind of information into account in the optimization pass of the JIT, so we optimistically apply the rules anyway.

Checking rule coverage

Every rewrite rule should have at least one unit test where it triggers. To ensure this, the unit test file that mainly checks integer optimizations in the JIT has an assert at the end of a test run, that every rule fired at least once.

Printing rule statistics

The JIT can print statistics about which rule fired how often in the jit-intbounds-stats logging category, using the PYPYLOG mechanism. For example, to print the category to stdout at the end of program execution, run PyPy like this:

PYPYLOG=jit-intbounds-stats:- pypy ...

The output of that will look something like this:

int_add
    add_reassoc_consts 2514
    add_zero 107008
int_sub
    sub_zero 31519
    sub_from_zero 523
    sub_x_x 3153
    sub_add_consts 159
    sub_add 55
    sub_sub_x_c_c 1752
    sub_sub_c_x_c 0
    sub_xor_x_y_y 0
    sub_or_x_y_y 0
int_mul
    mul_zero 0
    mul_one 110
    mul_minus_one 0
    mul_pow2_const 1456
    mul_lshift 0
...

Termination and Confluence

Right now there are unfortunately no checks that the rules actually rewrite operations towards "simpler" forms. There is no cost model, and also nothing that prevents you from writing a rule like this:

neg_complication: int_neg(x) # leads to infinite rewrites
    => int_mul(-1, x)

Doing this would lead to endless rewrites if there is also another rule that turns multiplication with -1 into negation.

There is also no checking for confluence (yet?), i.e. the property that all rewrites starting from the same input trace always lead to the same output trace, no matter in which order the rules are applied.

Proofs

It is very easy to write a peephole rule that is not correct in all corner cases. Therefore all the rules are proven correct with Z3 before being compiled into actual JIT code, by default. When the proof fails, a (hopefully minimal) counterexample is printed. The counterexample consists of values for all the inputs that fulfil the checks, values for the intermediate expressions, and then two different values for the source and the target operations.

E.g. if we try to add the incorrect rule:

mul_is_add: int_mul(a, b) => int_add(a, b)

We get the following counterexample as output:

Could not prove correctness of rule 'mul_is_add'
in line 1
counterexample given by Z3:
counterexample values:
a: 0
b: 1
operation int_mul(a, b) with Z3 formula a*b
has counterexample result value: 0
BUT
target expression: int_add(a, b) with Z3 formula a + b
has counterexample value: 1

If we add conditions, they are taken into account and the counterexample will fulfil the conditions:

mul_is_add: int_mul(a, b)
    check a.known_gt_const(1) and b.known_gt_const(2)
    => int_add(a, b)

This leads to the following counterexample:

Could not prove correctness of rule 'mul_is_add'
in line 46
counterexample given by Z3:
counterexample values:
a: 2
b: 3
operation int_mul(a, b) with Z3 formula a*b
has counterexample result value: 6
BUT
target expression: int_add(a, b) with Z3 formula a + b
has counterexample value: 5
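For readers curious how such a check can be reproduced by hand, here is a minimal, hand-written sketch using the Python z3 API. This is not the rule compiler's actual proof code; the 64-bit width and the example rewrites are assumptions chosen purely for illustration:

# Illustrative sketch only: proving/refuting integer rewrites with Z3's Python API.
# Assumes the z3-solver package is installed.
from z3 import BitVec, prove

LONG_BIT = 64
x = BitVec("x", LONG_BIT)

# A correct rewrite such as int_mul(x, 8) => int_lshift(x, 3) can be proven valid:
prove(x * 8 == x << 3)   # prints: proved

# An incorrect rewrite is refuted with a counterexample model instead:
prove(x * 2 == x + 3)    # prints: counterexample, plus a value of x that breaks it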

Some IntBound methods cannot be used in Z3 proofs because their control flow is too complex. If that is the case, they can have Z3-equivalent formulations defined (in every case this is done, it's a potential proof hole if the Z3 friendly reformulation and the real implementation differ from each other, therefore extra care is required to make very sure they are equivalent).

It's possible to skip the proof of individual rules entirely by adding SORRY_Z3 to its body (but we should try not to do that too often):

eq_different_knownbits: int_eq(x, y)
    SORRY_Z3
    check x.known_ne(y)
    => 0

Checking for satisfiability

In addition to checking whether the rule yields a correct optimization, we also check whether the rule can ever apply. This ensures that there are some runtime values that would fulfil all the checks in a rule. Here's an example of a rule violating this:

never_applies: int_is_true(x)
    check x.known_lt_const(0) and x.known_gt_const(0) # impossible condition, always False
    => x

Right now the error messages if this goes wrong are not completely easy to understand. I hope to be able to improve this later:

Rule 'never_applies' cannot ever apply
in line 1
Z3 did not manage to find values for variables x such that the following condition becomes True:
And(x <= x_upper,
    x_lower <= x,
    If(x_upper < 0, x_lower > 0, x_upper < 0))

Implementation Notes

The implementation of the DSL is done in a relatively ad-hoc manner. It is parsed using rply, there's a small type checker that tries to find common problems in how the rules are written. Z3 is used via the Python API, like in the previous blog posts that are using it. The pattern matching RPython code is generated using an approach inspired by Luc Maranget's paper Compiling Pattern Matching to Good Decision Trees. See this blog post for an approachable introduction.

Conclusion

Now that I've described the DSL, here are the rules that are equivalent to the imperative code in the motivation section:

mul_zero: int_mul(x, 0)
    => 0

mul_one: int_mul(x, 1)
    => x

mul_minus_one: int_mul(x, -1)
    => int_neg(x)

mul_pow2_const: int_mul(x, C)
    check C > 0 and C & (C - 1) == 0
    shift = highest_bit(C)
    => int_lshift(x, shift)

mul_lshift: int_mul(x, int_lshift(1, y))
    check y.known_ge_const(0) and y.known_le_const(LONG_BIT)
    => int_lshift(x, y)

The current status of the DSL is that it got merged to PyPy's main branch. I rewrote a part of the integer rewrites into the DSL, but some are still in the old imperative style (mostly for complicated reasons, the easily ported ones are all done). Since I've only been porting optimizations that had existed prior to the existence of the DSL, performance numbers of benchmarks didn't change.

There are a number of features that are still missing and some possible extensions that I plan to work on in the future:

  • All the integer operations that the DSL handles so far are the variants that do not check for overflow (or where overflow was proven to be impossible to happen). In regular Python code the overflow-checking variants int_add_ovf etc are much more common, but the DSL doesn't support them yet. I plan to fix this, but don't completely understand yet how the correctness proofs for them should be done.

  • A related problem is that I don't understand what it means for a rewrite to be correct if some of the operations are only defined for a subset of the input values. E.g. division isn't defined if the divisor is zero. In theory, a division operation in the trace should always be preceded by a check that the divisor isn't zero. But sometimes other optimizations move the check around and the connection to the division gets lost or muddled. What optimizations can we still safely perform on the division? There's lots of prior work on this question, but I still don't understand what the correct approach in our context would be.

  • Ordering comparisons like int_lt, int_le and their unsigned variants are not ported to the DSL yet. Comparisons are an area where the JIT is not super good yet at optimizing away operations. This is a pretty big topic and I've started a project with Nico Rittinghaus to try to improve the situation a bit more generally.

  • A more advanced direction of work would be to implement a simplified form of e-graphs (or ae-graphs). The JIT has like half of an e-graph data structure already, and we probably can't afford a full one in terms of compile time costs, but maybe we can have two thirds or something?

Acknowledgements

Thank you to Max Bernstein and Martin Berger for super helpful feedback on drafts of the post!

Categories: FLOSS Project Planets

The Python Show: 48 - Writing About Python with David Mertz

Planet Python - Wed, 2024-10-23 10:24

In this episode of the Python Show Podcast, David Mertz is our guest. David is a prolific writer about the Python programming language. From his extremely popular IBM developerWorks articles to his multiple books on the Python language, David has been a part of the Python community for decades.

We ended up chatting about:

  • The history of Python

  • Book writing

  • Conference speaking

  • The PSF

  • and more!

Show Links
Categories: FLOSS Project Planets

Real Python: Python Thread Safety: Using a Lock and Other Techniques

Planet Python - Wed, 2024-10-23 10:00

Python threading allows you to run parts of your code concurrently, making the code more efficient. However, when you introduce threading to your code without knowing about thread safety, you may run into issues such as race conditions. You solve these with tools like locks, semaphores, events, conditions, and barriers.
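As a quick, hand-rolled illustration of the kind of problem the tutorial addresses (this sketch is not taken from the article; the names and numbers are arbitrary), incrementing a shared counter from several threads is a classic race condition, and wrapping the update in a threading.Lock makes it safe:

import threading
from concurrent.futures import ThreadPoolExecutor

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    for _ in range(100_000):
        counter += 1  # read-modify-write; another thread may interleave between the steps

def safe_increment():
    global counter
    for _ in range(100_000):
        with lock:  # only one thread at a time may run the update
            counter += 1

with ThreadPoolExecutor(max_workers=4) as executor:
    for _ in range(4):
        executor.submit(safe_increment)

print(counter)  # reliably 400000 with the lock; the unsafe version may lose updates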

By the end of this tutorial, you’ll be able to identify safety issues and prevent them by using the synchronization primitives in Python’s threading module to make your code thread-safe.

In this tutorial, you’ll learn:

  • What thread safety is
  • What race conditions are and how to avoid them
  • How to identify thread safety issues in your code
  • What different synchronization primitives exist in the threading module
  • How to use synchronization primitives to make your code thread-safe

To get the most out of this tutorial, you’ll need to have basic experience working with multithreaded code using Python’s threading module and ThreadPoolExecutor.

Get Your Code: Click here to download the free sample code that you’ll use to learn about thread safety techniques in Python.

Take the Quiz: Test your knowledge with our interactive “Python Thread Safety: Using a Lock and Other Techniques” quiz. You’ll receive a score upon completion to help you track your learning progress:

In this quiz, you'll test your understanding of Python thread safety. You'll revisit the concepts of race conditions, locks, and other synchronization primitives in the threading module. By working through this quiz, you'll reinforce your knowledge about how to make your Python code thread-safe.

Threading in Python

In this section, you’ll get a general overview of how Python handles threading. Before discussing threading in Python, it’s important to revisit two related terms that you may have heard about in this context:

  • Concurrency: The ability of a system to handle multiple tasks by allowing their execution to overlap in time but not necessarily happen simultaneously.
  • Parallelism: The simultaneous execution of multiple tasks that run at the same time to leverage multiple processing units, typically multiple CPU cores.

Python’s threading is a concurrency framework that allows you to spin up multiple threads that run concurrently, each executing pieces of code. This improves the efficiency and responsiveness of your application. When running multiple threads, the Python interpreter switches between them, handing the control of execution over to each thread.

By running the script below, you can observe the creation of four threads:

Python threading_example.py

import threading
import time
from concurrent.futures import ThreadPoolExecutor

def threaded_function():
    for number in range(3):
        print(f"Printing from {threading.current_thread().name}. {number=}")
        time.sleep(0.1)

with ThreadPoolExecutor(max_workers=4, thread_name_prefix="Worker") as executor:
    for _ in range(4):
        executor.submit(threaded_function)

In this example, threaded_function prints the values zero to two that your for loop assigns to the loop variable number. Using a ThreadPoolExecutor, four threads are created to execute the threaded function. ThreadPoolExecutor is configured to run a maximum of four threads concurrently with max_workers=4, and each worker thread is named with a “Worker” prefix, as in thread_name_prefix="Worker".

In print(), the .name attribute on threading.current_thread() is used to get the name of the current thread. This will help you identify which thread is executed each time. A call to sleep() is added inside the threaded function to increase the likelihood of a context switch.

You’ll learn what a context switch is in just a moment. First, run the script and take a look at the output:

Shell

$ python threading_example.py
Printing from Worker_0. number=0
Printing from Worker_1. number=0
Printing from Worker_2. number=0
Printing from Worker_3. number=0
Printing from Worker_0. number=1
Printing from Worker_2. number=1
Printing from Worker_1. number=1
Printing from Worker_3. number=1
Printing from Worker_0. number=2
Printing from Worker_2. number=2
Printing from Worker_1. number=2
Printing from Worker_3. number=2

Each line in the output represents a print() call from a worker thread, identified by Worker_0, Worker_1, Worker_2, and Worker_3. The number that follows the worker thread name shows the current iteration of the loop each thread is executing. Each thread takes turns executing the threaded_function, and the execution happens in a concurrent rather than sequential manner.

For example, after Worker_0 prints number=0, it’s not immediately followed by Worker_0 printing number=1. Instead, you see outputs from Worker_1, Worker_2, and Worker_3 printing number=0 before Worker_0 proceeds to number=1. You’ll notice from these interleaved outputs that multiple threads are running at the same time, taking turns to execute their part of the code.

This happens because the Python interpreter performs a context switch. This means that Python pauses the execution state of the current thread and passes control to another thread. When the context switches, Python saves the current execution state so that it can resume later. By switching the control of execution at specific intervals, multiple threads can execute code concurrently.

You can check the context switch interval of your Python interpreter by typing the following in the REPL:

Python

>>> import sys
>>> sys.getswitchinterval()
0.005

The output of calling the getswitchinterval() is a number in seconds that represents the context switch interval of your Python interpreter. In this case, it’s 0.005 seconds or five milliseconds. You can think of the switch interval as how often the Python interpreter checks if it should switch to another thread.

An interval of five milliseconds doesn’t mean that threads switch exactly every five milliseconds, but rather that the interpreter considers switching to another thread at these intervals.
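If you want to experiment with this setting yourself, the interpreter also exposes a setter; the value below is just an arbitrary example, not a recommendation:

Python

>>> import sys
>>> sys.setswitchinterval(0.001)  # ask the interpreter to consider switching more often
>>> sys.getswitchinterval()
0.001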

The switch interval is defined in the Python docs as follows:

Read the full article at https://realpython.com/python-thread-lock/ »


Categories: FLOSS Project Planets

www-ru @ Savannah: A Conversation About Free Software

GNU Planet! - Wed, 2024-10-23 09:17

Computers and networks assist us in the struggle for freedom: they help us devote time and energy to important public initiatives, organize protests, and defend against censorship.

But are our computers free? And are we free as users?

We will discuss these questions at the Open Space with Gleb Erofeev, an activist of the free software movement and a volunteer with the GNU project, which was launched in 1983 by philosopher and activist Richard Stallman.

The GNU project team develops free software and engages in techno-ethical activism in order to give users control over their computers and to root out the injustice that proprietary software brings to society.

Address: Pleteshkovsky per., 8s1 (Baumanskaya metro station).

Participation is free. Donations in support of the space are welcome.

Categories: FLOSS Project Planets

Lullabot: How to Avoid Reinventing the Menu On a Drupal Project

Planet Drupal - Wed, 2024-10-23 09:01

The navigation menu is a crucial element of any website, guiding users through content, enhancing their experience, and playing a vital role in the site's overall usability and success.

Creating a menu that's accessible, responsive, and easy to navigate is a non-trivial task, no matter how simple or complex your navigation is. However, people usually underestimate the effort required to create a navigation menu that provides a good user experience, and it can take several iterations to get it right.

Categories: FLOSS Project Planets

Real Python: Quiz: Python Class Constructors: Control Your Object Instantiation

Planet Python - Wed, 2024-10-23 08:00

In this quiz, you’ll test your understanding of Python Class Constructors.

By working through this quiz, you’ll revisit the internal instantiation process, object initialization using .__init__(), and fine-tuning object creation by overriding .__new__().
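As a tiny refresher before you take the quiz (this example is not part of the quiz itself; the class name is made up), here's the order in which the two hooks run:

Python

>>> class Point:
...     def __new__(cls, *args, **kwargs):
...         print("1. __new__ creates the instance")
...         return super().__new__(cls)
...     def __init__(self, x, y):
...         print("2. __init__ initializes it")
...         self.x, self.y = x, y
...
>>> point = Point(2, 3)
1. __new__ creates the instance
2. __init__ initializes it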


Categories: FLOSS Project Planets

1xINTERNET blog: Web accessibility: why it matters and how to achieve it

Planet Drupal - Wed, 2024-10-23 08:00

Web accessibility is crucial for ensuring people of all abilities can engage with digital content without barriers. With the European Accessibility Act approaching, both the public and private sector must comply with its requirements. Learn how!

Categories: FLOSS Project Planets

Skynet Technologies USA LLC Blogs: How to enhance speed and security optimization in Drupal 11 website development?

Planet Drupal - Wed, 2024-10-23 07:51
Drupal needs no introduction! It is a robust content management system (CMS) widely used for developing powerful, scalable, and secure websites. However, like any web platform, optimizing for speed and security is essential to ensure smooth performance, protect data, and improve user experience. Drupal 11 is all about enhanced scalability and security, which makes this version more dynamic and…
Categories: FLOSS Project Planets

LN Webworks: Top 5 Reasons Why Drupal is the Best Choice for E-commerce Websites

Planet Drupal - Wed, 2024-10-23 06:06

Companies are using the newest technologies to improve their e-commerce websites' customer service and outreach. With an e-commerce website, you can offer better features that enhance the overall experience of visitors on the site.

And what better CMS of choice than Drupal for your large-scale websites? Drupal offers fresh features for e-commerce businesses that help their customers have a better experience shopping online. On top of that, there is also a Drupal e-commerce module that helps you engage more with the audience that visits your website and convert them. 

In this blog, you will learn more about why you should use Drupal for your e-commerce websites and how it can be the best decision for your business. 

Categories: FLOSS Project Planets

Jonathan Dowland: Why hardware synths?

Planet Debian - Wed, 2024-10-23 05:51

Russell wrote a great comment on my last post (thanks!):

What benefits do these things offer when a general purpose computer can do so many things nowadays? Is there a USB keyboard that you can connect to a laptop or phone to do these things? I presume that all recent phones have the compute power to do all the synthesis you need if you have the right software. Is it just a lack of software and infrastructure for doing it on laptops/phones that makes synthesisers still viable?

I've decided to turn my response into a post of its own.

The issue is definitely not compute power. You can indeed attach a USB keyboard to a computer and use a plethora of software synthesisers, including very faithful emulations of all the popular classics. The raw compute power of modern hardware synths is comparatively small: I’ve been told the modern Korg digital synths are on a par with a raspberry pi. I’ve seen some DSPs which are 32 bit ARMs, and other tools which are roughly equivalent to arduinos.

I can think of four reasons hardware synths remain popular with some despite the above:

  1. As I touched on in my original synth post, computing dominates my life outside of music already. I really wanted something separate from that to keep mental distance from work.

  2. Synths have hard real-time requirements. They don't have raw power in compute terms, but they absolutely have to do their job within microseconds of being instructed to, with no exceptions. Linux still has a long way to go for hard real-time.

  3. The Linux audio ecosystem is… complex. Dealing with pipewire, pulseaudio, jack, alsa, oss, and anything else I've forgotten, as well as their failure modes, is too time consuming.

  4. The last point is to do with creativity and inspiration. A good synth is more than the sum of its parts: it's an instrument, carefully designed and its components integrated by musically-minded people who have set out to create something to inspire. There are plenty of synths which aren't good instruments, but have loads of features: they’re boxes of "stuff". Good synths can't do it all: they often have limitations which you have to respond to, work around or with, creatively. This was expressed better than I could by Trent Reznor in the video archetype of a synthesiser:

Categories: FLOSS Project Planets

10 Tips to Make Your QML Code Faster and More Maintainable

Planet KDE - Wed, 2024-10-23 03:00

In recent years, a lot has been happening to improve performance, maintainability and tooling of QML. Some of those improvements can only take full effect when your code follows modern best practices. Here are 10 things you can do in order to modernize your QML code and take full advantage of QML’s capabilities.

1. Use qt_add_qml_module CMake API

Qt6 introduced a new CMake API to create QML modules. Not only is this more convenient than what previously had to be done manually, but it is also a prerequisite for being able to exploit most of the following tips.

By using the qt_add_qml_module, your QML code is automatically processed by qmlcachegen, which not only creates QML byte code ahead of time, but also converts parts of your QML code to C++ code, improving performance. How much of your code can be compiled to C++ depends on the quality of the input code. The following tips are all about improving your code in that regard.

add_executable(myapp main.cpp)

qt_add_qml_module(myapp
    URI "org.kde.myapp"
    QML_FILES
        Main.qml
)

2. Use declarative type registration

When creating custom types in C++ and registering them with qmlRegisterType and friends, they are not visible to the tooling at compile time. qmlcachegen doesn’t know which types exist and which properties they have. Hence, it cannot translate to C++ the code that’s using them. Your experience with the QML Language Server will also suffer since it cannot autocomplete types and property names.

To fix this, your types should be registered declaratively using the QML_ELEMENT (and its friends, QML_NAMED_ELEMENT, QML_SINGLETON, etc) macros.

qmlRegisterType<MyThing>("org.kde.myapp", 1, 0, "MyThing");

becomes

class MyThing : public QObject
{
    Q_OBJECT
    QML_ELEMENT
};

The URI and version information are inferred from the qt_add_qml_module call.

3. Declare module dependencies

Sometimes your QML module depends on other modules. This can be due to importing it in the QML code, or more subtly by using types from another module in your QML-exposed C++ code. In the latter case, the dependency needs to be declared in the qt_add_qml_module call.

For example, exposing a QAbstractItemModel subclass to QML adds a dependency to the QtCore (that’s where QAbstractItemModel is registered) to your module. This does not only happen when subclassing a type but also when using it as a parameter type in properties or invokables.

Another example is creating a custom QQuickItem-derived type in C++, which adds a dependency on the Qt Quick module.

To fix this, add the DEPENDENCIES declaration to qt_add_qml_module:

qt_add_qml_module(myapp
    URI "org.kde.myapp"
    QML_FILES
        Main.qml
    DEPENDENCIES
        QtCore
)

4. Qualify property types fully

MOC needs types in C++ property definitions to be fully qualified, i.e. include the full namespace, even when inside that namespace. Not doing this will cause issues for the QML tooling.

namespace MyApp {

class MyHelper : public QObject
{
    Q_OBJECT
};

class MyThing : public QObject
{
    Q_OBJECT
    QML_ELEMENT
    Q_PROPERTY(MyHelper *helper READ helper CONSTANT)        // bad
    Q_PROPERTY(MyApp::MyHelper *helper READ helper CONSTANT) // good
    ...
};

}

5. Use types

In order for qmlcachegen to generate efficient code for your bindings, it needs to know the type for properties. Avoid using ‘property var’ wherever possible and use concrete types. This may be built-in types like int, double, or string, or any declaratively-defined custom type. Sometimes you want to be able to use a type as a property type in QML but don’t want the type to be creatable from QML directly. For this, you can register them using the QML_UNCREATABLE macro.

property var size: 10 // bad
property int size: 10 // good

property var thing // bad
property MyThing thing // good

6. Avoid parent and other generic properties

qmlcachegen can only work with the property types it knows at compile time. It cannot make any assumptions about which concrete subtype a property will hold at runtime. This means that, if a property is defined with type Item, it can only compile bindings using properties defined on Item, not any of its subtypes. This is particularly relevant for properties like ‘parent’ or ‘contentItem’. For this reason, avoid using properties like these to look up items when not using properties defined on Item (properties like width, height, or visible are okay) and use look-ups via IDs instead.

Item {
    id: thing
    property int size: 10

    Rectangle {
        width: parent.size // bad, Item has no 'size' property
        height: thing.height // good, lookup via id
        color: parent.enabled ? "red" : "black" // good, Item has 'enabled' property
    }
}

7. Annotate function parameters with types

In order for qmlcachegen to compile JavaScript functions, it needs to know the function’s parameter and return type. For that, you need to add type annotations to the function:

function calculateArea(width: double, height: double) : double {
    return width * height
}

When using signal handlers with parameters, you should explicitly specify the signal parameters by supplying a JS function or an arrow expression:

MouseArea {
    onClicked: event => console.log("clicked at", event.x, event.y)
}

Not only does this make qmlcachegen happy, it also makes your code far more readable.

8. Use qualified property lookup

QML allows you to access properties from objects several times up in the parent hierarchy without explicitly specifying which object is being referenced. This is called an unqualified property look-up and is generally considered bad practice, since it leads to brittle code that is hard to reason about. qmlcachegen also cannot properly reason about such code, so it cannot properly compile it. You should only use qualified property lookups.

Item {
    id: root
    property int size: 10

    Rectangle {
        width: size // bad, unqualified lookup
        height: root.size // good, qualified lookup
    }
}

Another area that needs attention is accessing model roles in a delegate. Views like ListView inject their model data as properties into the context of the delegate where they can be accessed with expressions like ‘foo’, ‘model.foo’, or ‘modelData.foo’. This way, qmlcachegen has no information about the types of the roles and cannot do its job properly. To fix this, you should use required properties to fetch the model data:

ListView {
    model: MyModel
    delegate: ItemDelegate {
        text: name // bad, lookup from context
        icon.name: model.iconName // more readable, but still bad

        required property bool active // good, using required property
        checked: active
    }
}

9. Use pragma ComponentBehavior: Bound

When defining components, either explicitly via Component {} or implicitly when using delegates, it is common to want to refer to IDs outside of that component, and this generally works. However, theoretically any component can be used outside of the context it is defined in and, when doing that, IDs might refer to another object entirely. For this reason, qmlcachegen cannot properly compile such code.

To address this, we need to learn about pragma ComponentBehavior. Pragmas are file-wide switches that influence the behavior of QML. By specifying pragma ComponentBehavior: Bound at the top of the QML file, we can bind any components defined in this file to their surroundings. As a result, we cannot use the component in another place anymore but can now safely access IDs outside of it.

pragma ComponentBehavior: Bound
import QtQuick

Item {
    id: root
    property int delegateHeight: 10

    ListView {
        model: MyModel
        delegate: Rectangle {
            height: root.delegateHeight // good with ComponentBehavior: Bound, bad otherwise
        }
    }
}

A side effect of this is that accessing model data now must happen using required properties, as described in the previous point. Learn more about ComponentBehavior here.

10. Know your tools

A lot of these pitfalls are not obvious, even to seasoned QML programmers, especially when working with existing codebases. Fortunately, qmllint helps you find most of these issues and avoids introducing them. By using the QML Language Server, you can incorporate qmllint directly into your preferred IDE/editor such as Kate or Visual Studio Code.

While qmlcachegen can help boost your QML application’s performance, there are performance problems it cannot help with, such as scenes that are too complex, slow C++ code, or inefficient rendering. To investigate such problems, tools like the QML profiler, Hotspot for CPU profiling, Heaptrack for memory profiling, and GammaRay for analyzing QML scenes are very helpful.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post 10 Tips to Make Your QML Code Faster and More Maintainable appeared first on KDAB.

Categories: FLOSS Project Planets

Michael Ablassmeier: qmpbackup 0.33

Planet Debian - Tue, 2024-10-22 20:00

In the last weeks qmpbackup has seen a bit more improvements.

  • Adds support for CEPH/RBD backed devices.
  • Allows to use unique bitmaps for having multiple, separate backup chains.
  • Adds support for jsonified filename configurations, as often used on Proxmox systems.
  • Adds support for saving attached pflash/nvram devices (storing UEFI related settings)
  • qmprestore can now merge the backup chain into a new image file, and the new snapshotrebase command can rebase the images and, after committing, create an internal qcow snapshot, so one can easily switch between different VM states in the backup.

I've been running it lately to back up virtual machines on Proxmox systems where the Proxmox Backup Server is not an option.

Categories: FLOSS Project Planets
