FLOSS Project Planets

Making crochet animation in Krita and Kdenlive

Planet KDE - Mon, 2019-03-25 10:02

I’ve only recently become anywhere decent at crochet. That is, making fabric stuff from yarn with a crochet needle. My stepmother taught me how to do the basic slipknot and chain stitch in finger crochet when I was a little girl, but my attempts at getting any better at it failed for the longest time.

The issue was that whenever I tried to learn crochet from a book, the instructions were always incredibly vague. To the point that when I finally learned a stitch beyond the chain stitch, it was not the single/double stitch I was trying to learn; instead it ended up being a sort of weird slip stitch that led to a very stretchy fabric resembling knitted work rather than crochet.

Videos, on the other hand, tend to have a bit too much information going on for me: the tutor's voice, any problems with the lighting, any blurriness.

So I really wanted to do some animations that show how a given stitch is done, without too much noise surrounding it.

First, I made a sketch in Krita at 1600×900 pixels. The sketch is super rough as you can see. The idea is to get the motion down first.

I then cleaned it up a little and started playing with aspect ratios.

Here’s one that’s more zoomed in, giving more attention to the actual important part of the image. I do feel that this decreased the sense of how the hands move, which is why I decided against using this one for the final animation.

Another option was a square ratio that was super-zoomed in. This was somewhat inspired by the gifsets on tumblr and cooking videos.

I then shared these on mastodon.art, as I had done when I made a turntable of Kiki:

The general feedback was that the square images were nicest for actually getting the stitch. Because of my worries about the hands, I ended up deciding to continue with the full view version, and make smaller square ones for social media. The idea being that the full version can be viewed full screen and the stitch should be easy to read then.

The Second Day

I spent the second day first building Krita with the Address Sanitizer enabled, so that I could catch memory-related bugs. We've been implementing a lockless hashmap for Krita's canvas tiles, which could have some scary bugs, and the Address Sanitizer might be able to find them: basically, it crashes the program as soon as it detects a memory-related bug.

I didn’t find hashmap bugs, but I did find the following bugs:

As well as several other bugs. The big downside to the Address Sanitizer (and GDB as well, really) is that Krita will take up twice as much RAM, which became a bit of an issue, as discussed later.

When animating, I spent more time looking at my hands and the motion I made, and I noticed that I was missing the almost natural first step of making a loop with my fingers. So I added that. I also tried to make the hand motion at the end a little more natural, letting the yarn hand pull at the knot as well.

There’s also pauses now, letting people identify the separate steps, and I colored the thread red.

The feedback I got on mastodon was that the pauses helped a surprising amount. The other feedback was that it was still hard to tell what was going on when twisting the loop or pulling the yarn into a knot at the end. I decided then that I should add text, and also try to make the animation smoother so the motion is easier to follow.

The Third Day

So, then I started inking the image. I first increased the size from 1600 × 900 to 3840 × 2160, which is 4K. The idea being that if I got it that high-res, I wouldn't have to worry about the inking lines looking awful. You see, video codecs tend to be optimized for gradients and smooth areas, so images with a lot of contrast, such as traditional raster animation, tend to be at a disadvantage when converted to video. Having a high resolution offsets this issue as well as other compression errors.

I also doubled the frame count to smooth the motion out. The thumbs and the rest of the hand are separate. For the crochet needle I made one basic frame and then copied and rotated it all over the place. This part, as well as the hook hand, made me wish for tweening on transformation masks, but that's something that requires a smarter person than me to finish.

At the end, I tried colouring the hands so it wouldn't be a mess of lines. I ended up using too much RAM, so I tried to reduce the usage by changing the layer type or color space. After all, the line art didn't need four channels when all it needed was alpha, so I converted those layers to greyscale. This had the side effect of turning the onion skins grey too… I also tried using animated fill layers for the hands, but upon playback the fill layers, and nothing else, lagged by a single frame, which was a bit of a letdown.

So the following bugs were filed…

Eventually I scaled the canvas down to 3200 × 1600, which is still quite huge, but at these sizes it made a full gig of difference. Anyway, at the end of the day, I had this:

The hands were easier than they look: for the turning motion I was able to use my knowledge of orthographic projection to gauge the proportions.

The turntable of Kiki, the Krita mascot, was a previous animation project that let me practice orthographic projection, knowledge I could then reuse for the hands here.

And the next day, I also animated the yarn, coloured everything, and used transparency masks to ensure that the needle and yarn were masked at the correct moments. There’s a little bit of stretch and squash happening on the yarn in the pulling moment that I hope really sells the motion.

The final animation as screenshotted in Krita. The file is 2.2 GiB; my RAM and swap (8 GiB and 4 GiB respectively) are almost full. The blue frames are the inbetweens and frames that only contain color. The yellow, purple, green and orange frames represent the different steps. The red frames are the most extreme poses of a motion. I can't tell left from right without thinking, so I just named the two hands the yarn hand and the hook hand.

Editing the video

Because Krita was having so much trouble, and I still wanted to add text, I decided to finish this in KDENLive. At first I couldn't get anything rendered from Krita's side at all. Earlier that week I had installed earlyoom to keep my whole desktop from locking up indefinitely, as it is wont to do when I ask a little too much of my computer. Krita refused to render the frames and gave no feedback as to why, but I suspect it was because it ran out of memory. So I just tried to render out frames, and whenever Krita errored out, I rendered again, starting from the last frame Krita had managed. This is sort of what that feature was designed for.

I brought the animation into KDENLive. I have KDENLive 18.12.3 on this device.

KDENLive, while it can import frame sequences, doesn't seem to like them much. It didn't allow smooth playback for them, filter effects didn't seem to work (I wanted to slow the animation down so it would play back at 16 fps instead of 24), nor did it have a menu option to transcode. So I had to go into the commandline and convert the different steps into mp4s. When using the log docker or a terminal, Krita will spit out its ffmpeg commandline entry before rendering, so I could copy-paste that and use it as a base.

KDENLive also had some other issues. It would sometimes just kinda 'error' when resizing or moving clips, complete with error noise, and afterwards refuse to do anything until I restarted it. Another issue was a 'ghost' clip that didn't show up in the timeline but was affecting the render, making it a 19-second animation instead of 11; I had to work around it by defining a zone and rendering only that. Then, later, KDENLive would just randomly show said clip on startup. The final issue is that sometimes during playback KDENLive has a memory spike, or slowly consumes all memory, meaning I couldn't preview the animation correctly, as earlyoom would kick in and kill KDENLive.

Adding the text was painless, however, and KDENLive's list of render settings is a blessing. So I rendered the file to webm and uploaded it.

Of course, 10 hours afterwards, someone said: 'well, that last step is too short'. The reason the last step feels too short is that the text for it is too long; if I hadn't put in any text, this wouldn't have been an issue. I adjusted it again this morning (a full day afterwards) because someone else had the same problem. I suspect that each time a line of text ends, there's a significant lag while our eyes do a 'carriage return', so that's something that ought to be taken into account.

Webvtt

Because I am dyslexic, and also Dutch(we’re a little language obsessed in the Netherlands), I value captions and subtitles a lot, and try to always make them for my videos.

Webvtt is the official webfriendly format, but typical subtitle creation software like Aegisub doesn't support it. So to test it, I made a very simple HTML file with a reference to a webvtt file:

<html>
  <head>
    <title></title>
  </head>
  <body>
    <video controls width="800">
      <source src='crochet_slipknot.webm' type='video/webm'>
      <track label="English subtitles" kind="subtitles" srclang="en" src="crochet_slipknot_subtitles.vtt" default />
    </video>
  </body>
</html>

I then made that file, using KDENLive as a reference to determine the timings. KDENLive's timestamps go mm:ss:frames instead of hh:mm:ss.milliseconds, which was a little bit of a surprise.
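Converting those timestamps by hand is fiddly, so a small helper can do the math. This is a sketch of my own (the kdenlive_to_webvtt name is hypothetical), assuming the 24 fps frame rate used for this animation:

```python
def kdenlive_to_webvtt(timestamp, fps=24):
    """Convert a Kdenlive mm:ss:frames timestamp to a WebVTT hh:mm:ss.mmm one."""
    minutes, seconds, frames = (int(part) for part in timestamp.split(":"))
    total_ms = (minutes * 60 + seconds) * 1000 + round(frames * 1000 / fps)
    hours, rest = divmod(total_ms, 3_600_000)
    mins, rest = divmod(rest, 60_000)
    secs, millis = divmod(rest, 1000)
    return f"{hours:02d}:{mins:02d}:{secs:02d}.{millis:03d}"

print(kdenlive_to_webvtt("00:03:03"))  # 3 frames at 24 fps is exactly 125 ms
```

At 24 fps, frame 3 becomes exactly 125 ms, which is where cue times like 00:00:03.125 come from.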

WEBVTT

REGION
id:title
width:24%
lines:1
regionanchor:0%,0%
viewportanchor:6%,9%

REGION
id:instructions
width:24%
lines:5
regionanchor:0%,0%
viewportanchor:6%,57%

STYLE
::cue {
  background-color:#ddd49f;
  color:black;
}

STYLE
::cue-region(instructions) {
  align:left;
  color:black;
}

STYLE
::cue-region(title) {
  font-size:200%;
}

00:00:00.000 --> 00:00:03.125 region:title
Crochet Slipknot

00:00:03.125 --> 00:00:05.333 region:instructions
Step 1: Make a loop between your fingers.

00:00:05.333 --> 00:00:06.625 region:instructions
Step 2: Put your hook through the loop.

00:00:06.625 --> 00:00:08.625 region:instructions
Step 3: Turn your hook so the loop will twist.

00:00:08.625 --> 00:00:12.375 region:instructions
Step 4: Now push the hook down and pull up the yarn through the twisted loop.

00:00:12.375 --> 00:00:15.000 region:instructions
Done! You're ready to start your project.

You can then open the HTML file in Firefox, and it'll allow you to select the subtitle track for preview. Firefox, despite everything, still doesn't support styling in webvtt files, which is kind of annoying. Chromium didn't either. VLC does support styling, but then doesn't support alignment of the text, which is a little weird.

All of them, however, support regions, which is more than I expected, to be honest… Sadly, peertube, where I uploaded the video, has a player which does not support regions.

But nonetheless, here's the final result:

The final video.

And the square ones of each step:

I made these in Krita in the end, because I couldn't figure out how to get KDENLive to make me a 1:1 project. It was just a case of importing the frames, cropping them, and copy-pasting the layers containing the title.

While doing this I noticed I had masked out the crochet needle incorrectly in the last step, so it always showed the needle in front of the yarn instead of briefly behind it to indicate a turn…

Step 1: Make a loop between your fingers.
Step 2: Put your hook through the loop.
Step 3: Turn your hook so the loop will twist.
Step 4: Now push the hook down and pull up the yarn through the twisted loop.

Afterthoughts

Generally, while animating was fun, I always just kinda… lose all motivation when having to deal with the video editing part.

Video editing mistakes tend to haunt me more than anything, and I am not sure why. Maybe it is because my videos actually get comments, unlike my writing and my art. Maybe it is because fixing a mistake in a video, unlike in writing, always means opening the video editor, dealing with its potential bugs, rerendering, deleting the previous video, copying over all the comments, and reuploading, and then anticipating the next set of people going 'oh, hey there's this mistake over here', at which point the whole cycle starts again…

And I also just kind of have the feeling that because I am an experienced artist and have a good sense of rhythm, I am cursed with the ability to see all the ways in which the video is wrong, but not the experience to fix it with confidence.

Other than that, I would like to share the source files for this one, but the issue is that git isn't very friendly to binary files (which all video and image files are, as far as git is concerned), so I am not sure how to go about sharing them. In total, I think I ended up spending about 15 hours on this, of which 10 were the actual animation.

I do kind of want to continue animating these stitches, but there will be a little bit of a pause in between, I think. Hopefully the reported bugs and Address Sanitizer backtraces mean that others who animate in Krita will have a smoother experience, but I think people will always have to watch their RAM usage.


Real Python: An Intro to Threading in Python

Planet Python - Mon, 2019-03-25 10:00

Python threading allows you to have different parts of your program run concurrently and can simplify your design. If you’ve got some experience in Python and want to speed up your program using threads, then this tutorial is for you!

In this article, you’ll learn:

  • What threads are
  • How to create threads and wait for them to finish
  • How to use a ThreadPoolExecutor
  • How to avoid race conditions
  • How to use the common tools that Python threading provides

This article assumes you’ve got the Python basics down pat and that you’re using at least version 3.6 to run the examples. If you need a refresher, you can start with the Python Learning Paths and get up to speed.

If you’re not sure if you want to use Python threading, asyncio, or multiprocessing, then you can check out Speed Up Your Python Program With Concurrency.

All of the sources used in this tutorial are available to you in the Real Python GitHub repo.



What Is a Thread?

A thread is a separate flow of execution. This means that your program will have two things happening at once. But the different threads do not actually happen at once: they merely appear to.

It’s tempting to think of threading as having two (or more) different processors running on your program, each one doing an independent task at the same time. That’s almost right, but that’s what multiprocessing provides.

threading is similar, but there is only a single processor running your program. The different tasks, called threads, are all run on a single core, with the operating system managing when your program works on which thread.

The best real-world analogy I’ve read is in the intro of Async IO in Python: A Complete Walkthrough, which likens it to a grand-master chess player competing against many opponents at once. It’s all the same grand master, but she needs to switch tasks and maintain where she was (usually called the state) for each game.

Because threading runs on a single CPU, it is good at speeding up some tasks but not all tasks. Tasks that require heavy CPU computation and spend little time waiting for external events will not run faster with threading, so you should look to multiprocessing instead.

Architecting your program to use threading can also provide gains in design clarity. Most of the examples you’ll learn about in this tutorial are not necessarily going to run faster because they use threads. Using threading in them helps to make the design cleaner and easier to reason about.

So, let’s stop talking about threading and start using it!

Starting a Thread

Now that you’ve got an idea of what a thread is, let’s learn how to make one. The Python standard library provides threading, which contains most of the primitives you’ll see in this article. Thread, in this module, nicely encapsulates threads, providing a clean interface to work with them.

To start a separate thread, you create a Thread instance and then tell it to .start():

 1 import logging
 2 import threading
 3 import time
 4
 5 def thread_function(name):
 6     logging.info("Thread %s: starting", name)
 7     time.sleep(2)
 8     logging.info("Thread %s: finishing", name)
 9
10 if __name__ == "__main__":
11     format = "%(asctime)s: %(message)s"
12     logging.basicConfig(format=format, level=logging.INFO,
13                         datefmt="%H:%M:%S")
14
15     logging.info("Main    : before creating thread")
16     x = threading.Thread(target=thread_function, args=(1,))
17     logging.info("Main    : before running thread")
18     x.start()
19     logging.info("Main    : wait for the thread to finish")
20     # x.join()
21     logging.info("Main    : all done")

If you look around the logging statements, you can see that the main section is creating and starting the thread:

x = threading.Thread(target=thread_function, args=(1,))
x.start()

When you create a Thread, you pass it a function and a tuple containing the arguments to that function. In this case, you’re telling the Thread to run thread_function() and to pass it 1 as an argument.

For this article, you’ll use sequential integers as names for your threads. There is threading.get_ident(), which returns a unique identifier for each thread, but these are usually neither short nor easily readable.

thread_function() itself doesn’t do much. It simply logs some messages with a time.sleep() in between them.

When you run this program as it is (with line twenty commented out), the output will look like this:

$ ./single_thread.py
Main    : before creating thread
Main    : before running thread
Thread 1: starting
Main    : wait for the thread to finish
Main    : all done
Thread 1: finishing

You’ll notice that the Thread finished after the Main section of your code did. You’ll come back to why that is and talk about the mysterious line twenty in the next section.

Daemon Threads

In computer science, a daemon is a process that runs in the background.

Python threading has a more specific meaning for daemon. A daemon thread will shut down immediately when the program exits. One way to think about these definitions is to consider the daemon thread a thread that runs in the background without worrying about shutting it down.

If a program is running Threads that are not daemons, then the program will wait for those threads to complete before it terminates. Threads that are daemons, however, are just killed wherever they are when the program is exiting.

Let’s look a little more closely at the output of your program above. The last two lines are the interesting bit. When you run the program, you’ll notice that there is a pause (of about 2 seconds) after __main__ has printed its all done message and before the thread is finished.

This pause is Python waiting for the non-daemonic thread to complete. When your Python program ends, part of the shutdown process is to clean up the threading routine.

If you look at the source for Python threading, you’ll see that threading._shutdown() walks through all of the running threads and calls .join() on every one that does not have the daemon flag set.

So your program waits to exit because the thread itself is waiting in a sleep. As soon as it has completed and printed the message, .join() will return and the program can exit.

Frequently, this behavior is what you want, but there are other options available to us. Let’s first repeat the program with a daemon thread. You do that by changing how you construct the Thread, adding the daemon=True flag:

x = threading.Thread(target=thread_function, args=(1,), daemon=True)

When you run the program now, you should see this output:

$ ./daemon_thread.py
Main    : before creating thread
Main    : before running thread
Thread 1: starting
Main    : wait for the thread to finish
Main    : all done

The difference here is that the final line of the output is missing. thread_function() did not get a chance to complete. It was a daemon thread, so when __main__ reached the end of its code and the program wanted to finish, the daemon was killed.

join() a Thread

Daemon threads are handy, but what about when you want to wait for a thread to stop? What about when you want to do that and not exit your program? Now let’s go back to your original program and look at that commented out line twenty:

# x.join()

To tell one thread to wait for another thread to finish, you call .join(). If you uncomment that line, the main thread will pause and wait for the thread x to complete running.

Did you test this on the code with the daemon thread or the regular thread? It turns out that it doesn’t matter. If you .join() a thread, that statement will wait until either kind of thread is finished.
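A quick way to convince yourself is the following sketch of mine (the worker function and results list are hypothetical names, not from the tutorial's examples): even with daemon=True, .join() blocks until the thread has finished.

```python
import threading
import time

results = []

def worker():
    time.sleep(0.2)
    results.append("done")

# daemon=True, yet .join() still blocks until the thread has finished
x = threading.Thread(target=worker, daemon=True)
x.start()
x.join()
print(results)  # the worker has definitely run by this point
```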

Working With Many Threads

The example code so far has only been working with two threads: the main thread and one you started with the threading.Thread object.

Frequently, you’ll want to start a number of threads and have them do interesting work. Let’s start by looking at the harder way of doing that, and then you’ll move on to an easier method.

The harder way of starting multiple threads is the one you already know:

import logging
import threading
import time

def thread_function(name):
    logging.info("Thread %s: starting", name)
    time.sleep(2)
    logging.info("Thread %s: finishing", name)

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    threads = list()
    for index in range(3):
        logging.info("Main    : create and start thread %d.", index)
        x = threading.Thread(target=thread_function, args=(index,))
        threads.append(x)
        x.start()

    for index, thread in enumerate(threads):
        logging.info("Main    : before joining thread %d.", index)
        thread.join()
        logging.info("Main    : thread %d done", index)

This code uses the same mechanism you saw above to start a thread, create a Thread object, and then call .start(). The program keeps a list of Thread objects so that it can then wait for them later using .join().

Running this code multiple times will likely produce some interesting results. Here’s an example output from my machine:

$ ./multiple_threads.py
Main    : create and start thread 0.
Thread 0: starting
Main    : create and start thread 1.
Thread 1: starting
Main    : create and start thread 2.
Thread 2: starting
Main    : before joining thread 0.
Thread 2: finishing
Thread 1: finishing
Thread 0: finishing
Main    : thread 0 done
Main    : before joining thread 1.
Main    : thread 1 done
Main    : before joining thread 2.
Main    : thread 2 done

If you walk through the output carefully, you’ll see all three threads getting started in the order you might expect, but in this case they finish in the opposite order! Multiple runs will produce different orderings. Look for the Thread x: finishing message to tell you when each thread is done.

The order in which threads are run is determined by the operating system and can be quite hard to predict. It may (and likely will) vary from run to run, so you need to be aware of that when you design algorithms that use threading.
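While the ordering is unpredictable, the timing shows why threading helps with waiting-heavy work, as discussed earlier: threads that are all sleeping do so concurrently, so the whole run takes about as long as one task, not the sum of all of them. A minimal timing sketch of my own (fake_io_task is a hypothetical stand-in for an I/O-bound task, shortened to 0.2 seconds):

```python
import threading
import time

def fake_io_task():
    # Stands in for a network request or disk read: all waiting, no CPU work.
    time.sleep(0.2)

start = time.monotonic()
threads = [threading.Thread(target=fake_io_task) for _ in range(5)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
elapsed = time.monotonic() - start
# Five 0.2 s waits overlap, so the whole run takes about 0.2 s, not 1.0 s.
print(f"elapsed: {elapsed:.2f}s")
```

A CPU-bound loop in place of time.sleep() would show no such speedup, which is the multiprocessing case mentioned above.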

Fortunately, Python gives you several primitives that you’ll look at later to help coordinate threads and get them running together. Before that, let’s look at how to make managing a group of threads a bit easier.

Using a ThreadPoolExecutor

There’s an easier way to start up a group of threads than the one you saw above. It’s called a ThreadPoolExecutor, and it’s part of the standard library in concurrent.futures (as of Python 3.2).

The easiest way to create it is as a context manager, using the with statement to manage the creation and destruction of the pool.

Here’s the __main__ from the last example rewritten to use a ThreadPoolExecutor:

import concurrent.futures

[rest of code]

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        executor.map(thread_function, range(3))

The code creates a ThreadPoolExecutor as a context manager, telling it how many worker threads it wants in the pool. It then uses .map() to step through an iterable of things, in your case range(3), passing each one to a thread in the pool.

The end of the with block causes the ThreadPoolExecutor to do a .join() on each of the threads in the pool. It is strongly recommended that you use ThreadPoolExecutor as a context manager when you can so that you never forget to .join() the threads.

Note: Using a ThreadPoolExecutor can cause some confusing errors.

For example, if you call a function that takes no parameters, but you pass it parameters in .map(), the thread will throw an exception.

Unfortunately, ThreadPoolExecutor will hide that exception, and (in the case above) the program terminates with no output. This can be quite confusing to debug at first.
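One way to surface such a hidden exception (a sketch of mine, not part of the tutorial's example; takes_no_args is a hypothetical name) is to keep the Future object that .submit() returns: calling .result() on it re-raises anything the thread threw.

```python
import concurrent.futures

def takes_no_args():
    return "ok"

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    # Oops: passing an argument to a function that takes none.
    future = executor.submit(takes_no_args, "unexpected")

try:
    future.result()  # re-raises the TypeError from inside the thread
    error_message = None
except TypeError as exc:
    error_message = str(exc)

print("caught:", error_message)
```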

Running your corrected example code will produce output that looks like this:

$ ./executor.py
Thread 0: starting
Thread 1: starting
Thread 2: starting
Thread 1: finishing
Thread 0: finishing
Thread 2: finishing

Again, notice how Thread 1 finished before Thread 0. The scheduling of threads is done by the operating system and does not follow a plan that’s easy to figure out.

Race Conditions

Before you move on to some of the other features tucked away in Python threading, let’s talk a bit about one of the more difficult issues you’ll run into when writing threaded programs: race conditions.

Once you’ve seen what a race condition is and looked at one happening, you’ll move on to some of the primitives provided by the standard library to prevent race conditions from happening.

Race conditions can occur when two or more threads access a shared piece of data or resource. In this example, you’re going to create a large race condition that happens every time, but be aware that most race conditions are not this obvious. Frequently, they only occur rarely, and they can produce confusing results. As you can imagine, this makes them quite difficult to debug.

Fortunately, this race condition will happen every time, and you’ll walk through it in detail to explain what is happening.

For this example, you’re going to write a class that updates a database. Okay, you’re not really going to have a database: you’re just going to fake it, because that’s not the point of this article.

Your FakeDatabase will have .__init__() and .update() methods:

class FakeDatabase:
    def __init__(self):
        self.value = 0

    def update(self, name):
        logging.info("Thread %s: starting update", name)
        local_copy = self.value
        local_copy += 1
        time.sleep(0.1)
        self.value = local_copy
        logging.info("Thread %s: finishing update", name)

FakeDatabase is keeping track of a single number: .value. This is going to be the shared data on which you’ll see the race condition.

.__init__() simply initializes .value to zero. So far, so good.

.update() looks a little strange. It’s simulating reading a value from a database, doing some computation on it, and then writing a new value back to the database.

In this case, reading from the database just means copying .value to a local variable. The computation is just to add one to the value and then .sleep() for a little bit. Finally, it writes the value back by copying the local value back to .value.

Here’s how you’ll use this FakeDatabase:

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    database = FakeDatabase()
    logging.info("Testing update. Starting value is %d.", database.value)
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        for index in range(2):
            executor.submit(database.update, index)
    logging.info("Testing update. Ending value is %d.", database.value)

The program creates a ThreadPoolExecutor with two threads and then calls .submit() on each of them, telling them to run database.update().

.submit() has a signature that allows both positional and named arguments to be passed to the function running in the thread:

.submit(function, *args, **kwargs)

In the usage above, index is passed as the first and only positional argument to database.update(). You’ll see later in this article where you can pass multiple arguments in a similar manner.
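As a quick illustration of that signature, here is a sketch of mine (the greet function and its parameters are hypothetical, not from the tutorial) passing both positional and named arguments through .submit():

```python
import concurrent.futures

def greet(greeting, name, punctuation="!"):
    return f"{greeting}, {name}{punctuation}"

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    # "Hello" and "Thread" are positional; punctuation is a named argument.
    future = executor.submit(greet, "Hello", "Thread", punctuation="?")

print(future.result())  # -> Hello, Thread?
```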

Since each thread runs .update(), and .update() adds one to .value, you might expect database.value to be 2 when it’s printed out at the end. But you wouldn’t be looking at this example if that was the case. If you run the above code, the output looks like this:

$ ./racecond.py
Testing update. Starting value is 0.
Thread 0: starting update
Thread 1: starting update
Thread 0: finishing update
Thread 1: finishing update
Testing update. Ending value is 1.

You might have expected that to happen, but let’s look at the details of what’s really going on here, as that will make the solution to this problem easier to understand.

One Thread

Before you dive into this issue with two threads, let’s step back and talk a bit about some details of how threads work.

You won’t be diving into all of the details here, as that’s not important at this level. We’ll also be simplifying a few things in a way that won’t be technically accurate but will give you the right idea of what is happening.

When you tell your ThreadPoolExecutor to run each thread, you tell it which function to run and what parameters to pass to it: executor.submit(database.update, index).

The result of this is that each of the threads in the pool will call database.update(index). Note that database is a reference to the one FakeDatabase object created in __main__. Calling .update() on that object calls an instance method on that object.

Each thread is going to have a reference to the same FakeDatabase object, database. Each thread will also have a unique value, index, to make the logging statements a bit easier to read.

When the thread starts running .update(), it has its own version of all of the data local to the function. In the case of .update(), this is local_copy. This is definitely a good thing. Otherwise, two threads running the same function would always confuse each other. It means that all variables that are scoped (or local) to a function are thread-safe.
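You can see this isolation in a small sketch of my own (the worker function and results dict are hypothetical names): each thread gets its own local_copy, so neither disturbs the other's local computation.

```python
import threading
import time

results = {}

def worker(name, start_value):
    local_copy = start_value  # local to this call, so private to this thread
    time.sleep(0.1)           # give the other thread plenty of time to run
    local_copy += 1
    results[name] = local_copy

t1 = threading.Thread(target=worker, args=("one", 0))
t2 = threading.Thread(target=worker, args=("two", 100))
t1.start()
t2.start()
t1.join()
t2.join()
print(results)  # each thread incremented only its own local_copy
```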

Now you can start walking through what happens if you run the program above with a single thread and a single call to .update().

The image below steps through the execution of .update() if only a single thread is run. The statement is shown on the left followed by a diagram showing the values in the thread’s local_copy and the shared database.value:

The diagram is laid out so that time increases as you move from top to bottom. It begins when Thread 1 is created and ends when it is terminated.

When Thread 1 starts, FakeDatabase.value is zero. The first line of code in the method, local_copy = self.value, copies the value zero to the local variable. Next it increments the value of local_copy with the local_copy += 1 statement. You can see local_copy in Thread 1 getting set to one.

Next, time.sleep() is called, which makes the current thread pause and allows other threads to run. Since there is only one thread in this example, this has no effect.

When Thread 1 wakes up and continues, it copies the new value from local_copy to FakeDatabase.value, and then the thread is complete. You can see that database.value is set to one.

So far, so good. You ran .update() once and FakeDatabase.value was incremented to one.

Two Threads

Getting back to the race condition, the two threads will be running concurrently but not at the same time. They will each have their own version of local_copy and will each point to the same database. It is this shared database object that is going to cause the problems.

The program starts with Thread 1 running .update():

When Thread 1 calls time.sleep(), it allows the other thread to start running. This is where things get interesting.

Thread 2 starts up and does the same operations. It’s also copying database.value into its private local_copy, and this shared database.value has not yet been updated:

When Thread 2 finally goes to sleep, the shared database.value is still unmodified at zero, and both private versions of local_copy have the value one.

Thread 1 now wakes up and saves its version of local_copy and then terminates, giving Thread 2 a final chance to run. Thread 2 has no idea that Thread 1 ran and updated database.value while it was sleeping. It stores its version of local_copy into database.value, also setting it to one:

The two threads have interleaving access to a single shared object, overwriting each other’s results. Similar race conditions can arise when one thread frees memory or closes a file handle before the other thread is finished accessing it.

Why This Isn’t a Silly Example

The example above is contrived to make sure that the race condition happens every time you run your program. Because the operating system can swap out a thread at any time, it is possible to interrupt a statement like x = x+1 after it has read the value of x but before it has written back the incremented value.

The details of how this happens are quite interesting, but not needed for the rest of this article, so feel free to skip over this hidden section.

How Does This Really Work?

The code above isn’t quite as out there as you might originally have thought. It was designed to force a race condition every time you run it, but that makes it much easier to solve than most race conditions.

There are two things to keep in mind when thinking about race conditions:

  1. Even an operation like x += 1 takes the processor many steps. Each of these steps is a separate instruction to the processor.

  2. The operating system can swap which thread is running at any time. A thread can be swapped out after any of these small instructions. This means that a thread can be put to sleep to let another thread run in the middle of a Python statement.
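The two points above can be made concrete by forcing the swap by hand. This sketch (my own addition, not part of the tutorial’s code) uses a pair of Event objects to pause one thread between its read and its write, reproducing the lost update deterministically:

```python
import threading

value = 0
ready = threading.Event()
resume = threading.Event()

def racy_increment(pause):
    global value
    local = value          # LOAD: read the shared value
    if pause:              # simulate the OS swapping this thread out here
        ready.set()        # tell the other thread it can run now
        resume.wait()      # ...and wait until it has finished
    local += 1             # MODIFY the private copy
    value = local          # STORE: write back, possibly clobbering

t = threading.Thread(target=racy_increment, args=(True,))
t.start()
ready.wait()               # thread 1 has read value and is now paused
racy_increment(False)      # thread 2 performs a complete increment
resume.set()               # let thread 1 finish its stale write
t.join()
print(value)               # 1, not 2: thread 2's update was lost
```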

Let’s look at this in detail. The REPL below shows a function that takes a parameter and increments it:

>>> def inc(x):
...     x += 1
...
>>> import dis
>>> dis.dis(inc)
  2           0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 (1)
              4 INPLACE_ADD
              6 STORE_FAST               0 (x)
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

The REPL example uses dis from the Python standard library to show the smaller steps that the processor does to implement your function. It does a LOAD_FAST of the data value x, it does a LOAD_CONST 1, and then it uses the INPLACE_ADD to add those values together.

We’re stopping here for a specific reason. This is the point in .update() above where time.sleep() forced the threads to switch. It is entirely possible that, every once in a while, the operating system would switch threads at that exact point even without sleep(), but the call to sleep() makes it happen every time.

As you learned above, the operating system can swap threads at any time. You’ve walked down this listing to the statement marked 4. If the operating system swaps out this thread and runs a different thread that also modifies x, then when this thread resumes, it will overwrite x with an incorrect value.

Technically, this example won’t have a race condition because x is local to inc(). It does illustrate how a thread can be interrupted during a single Python operation, however. The same LOAD, MODIFY, STORE set of operations also happens on global and shared values. You can explore with the dis module and prove that yourself.
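Here is one way to do that exploration, capturing the disassembly of a function that modifies a global (the exact opcode names vary between Python versions, e.g. INPLACE_ADD versus BINARY_OP, but the load and store of the global are separate instructions either way):

```python
import dis
import io

counter = 0

def inc_global():
    global counter
    counter += 1          # load global, add, store global: three steps

out = io.StringIO()
dis.dis(inc_global, file=out)   # capture the disassembly as text
text = out.getvalue()
print(text)
```

A thread can be swapped out anywhere between the LOAD_GLOBAL and the STORE_GLOBAL, which is exactly the window where a race can occur.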

It’s rare to get a race condition like this to occur, but remember that an infrequent event taken over millions of iterations becomes likely to happen. The rarity of these race conditions makes them much, much harder to debug than regular bugs.

Now back to your regularly scheduled tutorial!

Now that you’ve seen a race condition in action, let’s find out how to solve them!

Basic Synchronization Using Lock

There are a number of ways to avoid or solve race conditions. You won’t look at all of them here, but there are a couple that are used frequently. Let’s start with Lock.

To solve your race condition above, you need to find a way to allow only one thread at a time into the read-modify-write section of your code. The most common way to do this is called Lock in Python. In some other languages this same idea is called a mutex. Mutex comes from MUTual EXclusion, which is exactly what a Lock does.

A Lock is an object that acts like a hall pass. Only one thread at a time can have the Lock. Any other thread that wants the Lock must wait until the owner of the Lock gives it up.

The basic functions to do this are .acquire() and .release(). A thread will call my_lock.acquire() to get the lock. If the lock is already held, the calling thread will wait until it is released. There’s an important point here. If one thread gets the lock but never gives it back, your program will be stuck. You’ll read more about this later.

Fortunately, Python’s Lock will also operate as a context manager, so you can use it in a with statement, and it gets released automatically when the with block exits for any reason.

Let’s look at the FakeDatabase with a Lock added to it. The calling function stays the same:

class FakeDatabase:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def locked_update(self, name):
        logging.info("Thread %s: starting update", name)
        logging.debug("Thread %s about to lock", name)
        with self._lock:
            logging.debug("Thread %s has lock", name)
            local_copy = self.value
            local_copy += 1
            time.sleep(0.1)
            self.value = local_copy
            logging.debug("Thread %s about to release lock", name)
        logging.debug("Thread %s after release", name)
        logging.info("Thread %s: finishing update", name)

Other than adding a bunch of debug logging so you can see the locking more clearly, the big change here is to add a member called ._lock, which is a threading.Lock() object. This ._lock is initialized in the unlocked state and locked and released by the with statement.

It’s worth noting here that the thread running this function will hold on to that Lock until it is completely finished updating the database. In this case, that means it will hold the Lock while it copies, updates, sleeps, and then writes the value back to the database.

If you run this version with logging set to warning level, you’ll see this:

$ ./fixrace.py
Testing locked update. Starting value is 0.
Thread 0: starting update
Thread 1: starting update
Thread 0: finishing update
Thread 1: finishing update
Testing locked update. Ending value is 2.

Look at that. Your program finally works!

You can turn on full logging by setting the level to DEBUG by adding this statement after you configure the logging output in __main__:

logging.getLogger().setLevel(logging.DEBUG)

Running this program with DEBUG logging turned on looks like this:

$ ./fixrace.py
Testing locked update. Starting value is 0.
Thread 0: starting update
Thread 0 about to lock
Thread 0 has lock
Thread 1: starting update
Thread 1 about to lock
Thread 0 about to release lock
Thread 0 after release
Thread 0: finishing update
Thread 1 has lock
Thread 1 about to release lock
Thread 1 after release
Thread 1: finishing update
Testing locked update. Ending value is 2.

In this output you can see Thread 0 acquires the lock and is still holding it when it goes to sleep. Thread 1 then starts and attempts to acquire the same lock. Because Thread 0 is still holding it, Thread 1 has to wait. This is the mutual exclusion that a Lock provides.

Many of the examples in the rest of this article will have WARNING and DEBUG level logging. We’ll generally only show the WARNING level output, as the DEBUG logs can be quite lengthy. Try out the programs with the logging turned up and see what they do.

Deadlock

Before you move on, you should look at a common problem when using Locks. As you saw, if the Lock has already been acquired, a second call to .acquire() will wait until the thread that is holding the Lock calls .release(). What do you think happens when you run this code:

import threading

l = threading.Lock()
print("before first acquire")
l.acquire()
print("before second acquire")
l.acquire()
print("acquired lock twice")

When the program calls l.acquire() the second time, it hangs waiting for the Lock to be released. In this example, you can fix the deadlock by removing the second call, but deadlocks usually happen from one of two subtle things:

  1. An implementation bug where a Lock is not released properly
  2. A design issue where a utility function needs to be called by functions that might or might not already have the Lock

The first situation happens sometimes, but using a Lock as a context manager greatly reduces how often. It is recommended that you use context managers wherever possible, as they help avoid situations where an exception causes the .release() call to be skipped.

The design issue can be a bit trickier in some languages. Thankfully, Python threading has a second object, called RLock, that is designed for just this situation. It allows a thread to .acquire() an RLock multiple times before it calls .release(). That thread is still required to call .release() the same number of times it called .acquire(), but it should be doing that anyway.
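A quick sketch of that reentrant behavior (the outer/inner function names are my own, chosen to mimic the utility-function scenario described above):

```python
import threading

rlock = threading.RLock()

def outer():
    with rlock:            # first acquisition by this thread
        return inner()     # calls a helper that also wants the lock

def inner():
    with rlock:            # re-acquired by the same thread: no deadlock
        return "done"

print(outer())  # prints "done"
```

With a plain Lock, the call to inner() would hang forever on the second .acquire().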

Lock and RLock are two of the basic tools used in threaded programming to prevent race conditions. There are a few others that work in different ways. Before you look at them, let’s shift to a slightly different problem domain.

Producer-Consumer Threading

The Producer-Consumer Problem is a standard computer science problem used to look at threading or process synchronization issues. You’re going to look at a variant of it to get some ideas of what primitives the Python threading module provides.

For this example, you’re going to imagine a program that needs to read messages from a network and write them to disk. The program does not request a message when it wants. It must be listening and accept messages as they come in. The messages will not come in at a regular pace, but will be coming in bursts. This part of the program is called the producer.

On the other side, once you have a message, you need to write it to a database. The database access is slow, but fast enough to keep up to the average pace of messages. It is not fast enough to keep up when a burst of messages comes in. This part is the consumer.

In between the producer and the consumer, you will create a Pipeline that will be the part that changes as you learn about different synchronization objects.

That’s the basic layout. Let’s look at a solution using Lock. It doesn’t work perfectly, but it uses tools you already know, so it’s a good place to start.

Producer-Consumer Using Lock

Since this is an article about Python threading, and since you just read about the Lock primitive, let’s try to solve this problem with two threads using a Lock or two.

The general design is that there is a producer thread that reads from the fake network and puts the message into a Pipeline:

SENTINEL = object()

def producer(pipeline):
    """Pretend we're getting a message from the network."""
    for index in range(10):
        message = random.randint(1, 101)
        logging.info("Producer got message: %s", message)
        pipeline.set_message(message, "Producer")

    # Send a sentinel message to tell consumer we're done
    pipeline.set_message(SENTINEL, "Producer")

To generate a fake message, the producer gets a random number between one and one hundred. It calls .set_message() on the pipeline to send it to the consumer.

The producer also uses a SENTINEL value to signal the consumer to stop after it has sent ten values. This is a little awkward, but don’t worry, you’ll see ways to get rid of this SENTINEL value after you work through this example.

On the other side of the pipeline is the consumer:

def consumer(pipeline):
    """Pretend we're saving a number in the database."""
    message = 0
    while message is not SENTINEL:
        message = pipeline.get_message("Consumer")
        if message is not SENTINEL:
            logging.info("Consumer storing message: %s", message)

The consumer reads a message from the pipeline and writes it to a fake database, which in this case is just printing it to the display. If it gets the SENTINEL value, it returns from the function, which will terminate the thread.

Before you look at the really interesting part, the Pipeline, here’s the __main__ section, which spawns these threads:

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")
    # logging.getLogger().setLevel(logging.DEBUG)

    pipeline = Pipeline()
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        executor.submit(producer, pipeline)
        executor.submit(consumer, pipeline)

This should look fairly familiar as it’s close to the __main__ code in the previous examples.

Remember that you can turn on DEBUG logging to see all of the logging messages by uncommenting this line:

# logging.getLogger().setLevel(logging.DEBUG)

It can be worthwhile to walk through the DEBUG logging messages to see exactly where each thread acquires and releases the locks.

Now let’s take a look at the Pipeline that passes messages from the producer to the consumer:

class Pipeline:
    """
    Class to allow a single element pipeline between producer and consumer.
    """
    def __init__(self):
        self.message = 0
        self.producer_lock = threading.Lock()
        self.consumer_lock = threading.Lock()
        self.consumer_lock.acquire()

    def get_message(self, name):
        logging.debug("%s:about to acquire getlock", name)
        self.consumer_lock.acquire()
        logging.debug("%s:have getlock", name)
        message = self.message
        logging.debug("%s:about to release setlock", name)
        self.producer_lock.release()
        logging.debug("%s:setlock released", name)
        return message

    def set_message(self, message, name):
        logging.debug("%s:about to acquire setlock", name)
        self.producer_lock.acquire()
        logging.debug("%s:have setlock", name)
        self.message = message
        logging.debug("%s:about to release getlock", name)
        self.consumer_lock.release()
        logging.debug("%s:getlock released", name)

Woah! That’s a lot of code. A pretty high percentage of that is just logging statements to make it easier to see what’s happening when you run it. Here’s the same code with all of the logging statements removed:

class Pipeline:
    """
    Class to allow a single element pipeline between producer and consumer.
    """
    def __init__(self):
        self.message = 0
        self.producer_lock = threading.Lock()
        self.consumer_lock = threading.Lock()
        self.consumer_lock.acquire()

    def get_message(self, name):
        self.consumer_lock.acquire()
        message = self.message
        self.producer_lock.release()
        return message

    def set_message(self, message, name):
        self.producer_lock.acquire()
        self.message = message
        self.consumer_lock.release()

That seems a bit more manageable. The Pipeline in this version of your code has three members:

  1. .message stores the message to pass.
  2. .producer_lock is a threading.Lock object that restricts access to the message by the producer thread.
  3. .consumer_lock is also a threading.Lock that restricts access to the message by the consumer thread.

__init__() initializes these three members and then calls .acquire() on the .consumer_lock. This is the state you want to start in. The producer is allowed to add a new message, but the consumer needs to wait until a message is present.

.get_message() and .set_message() are nearly opposites. .get_message() calls .acquire() on the .consumer_lock. This is the call that will make the consumer wait until a message is ready.

Once the consumer has acquired the .consumer_lock, it copies out the value in .message and then calls .release() on the .producer_lock. Releasing this lock is what allows the producer to insert the next message into the pipeline.

Before you go on to .set_message(), there’s something subtle going on in .get_message() that’s pretty easy to miss. It might seem tempting to get rid of message and just have the function end with return self.message. See if you can figure out why you don’t want to do that before moving on.

Here’s the answer. As soon as the consumer calls .producer_lock.release(), it can be swapped out, and the producer can start running. That could happen before .release() returns! This means that there is a slight possibility that when the function returns self.message, that could actually be the next message generated, so you would lose the first message. This is another example of a race condition.

Moving on to .set_message(), you can see the opposite side of the transaction. The producer will call this with a message. It will acquire the .producer_lock, set the .message, and then call .release() on the .consumer_lock, which will allow the consumer to read that value.

Let’s run the code that has logging set to WARNING and see what it looks like:

$ ./prodcom_lock.py
Producer got data 43
Producer got data 45
Consumer storing data: 43
Producer got data 86
Consumer storing data: 45
Producer got data 40
Consumer storing data: 86
Producer got data 62
Consumer storing data: 40
Producer got data 15
Consumer storing data: 62
Producer got data 16
Consumer storing data: 15
Producer got data 61
Consumer storing data: 16
Producer got data 73
Consumer storing data: 61
Producer got data 22
Consumer storing data: 73
Consumer storing data: 22

At first, you might find it odd that the producer gets two messages before the consumer even runs. If you look back at the producer and .set_message(), you will notice that the only place it will wait for a Lock is when it attempts to put the message into the pipeline. This is done after the producer gets the message and logs that it has it.

When the producer attempts to send this second message, it will call .set_message() the second time and it will block.

The operating system can swap threads at any time, but it generally lets each thread have a reasonable amount of time to run before swapping it out. That’s why the producer usually runs until it blocks in the second call to .set_message().

Once a thread is blocked, however, the operating system will always swap it out and find a different thread to run. In this case, the only other thread with anything to do is the consumer.

The consumer calls .get_message(), which reads the message and calls .release() on the .producer_lock, thus allowing the producer to run again the next time threads are swapped.

Notice that the first message was 43, and that is exactly what the consumer read, even though the producer had already generated the 45 message.

While it works for this limited test, it is not a great solution to the producer-consumer problem in general because it only allows a single value in the pipeline at a time. When the producer gets a burst of messages, it will have nowhere to put them.

Let’s move on to a better way to solve this problem, using a Queue.

Producer-Consumer Using Queue

If you want to be able to handle more than one value in the pipeline at a time, you’ll need a data structure for the pipeline that allows the number to grow and shrink as data backs up from the producer.

Python’s standard library has a queue module which, in turn, has a Queue class. Let’s change the Pipeline to use a Queue instead of just a variable protected by a Lock. You’ll also use a different way to stop the worker threads by using a different primitive from Python threading, an Event.

Let’s start with the Event. The threading.Event object allows one thread to signal an event while many other threads can be waiting for that event to happen. The key usage in this code is that the threads that are waiting for the event do not necessarily need to stop what they are doing, they can just check the status of the Event every once in a while.

The triggering of the event can be many things. In this example, the main thread will simply sleep for a while and then .set() it:

 1 if __name__ == "__main__":
 2     format = "%(asctime)s: %(message)s"
 3     logging.basicConfig(format=format, level=logging.INFO,
 4                         datefmt="%H:%M:%S")
 5     # logging.getLogger().setLevel(logging.DEBUG)
 6
 7     pipeline = Pipeline()
 8     event = threading.Event()
 9     with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
10         executor.submit(producer, pipeline, event)
11         executor.submit(consumer, pipeline, event)
12
13         time.sleep(0.1)
14         logging.info("Main: about to set event")
15         event.set()

The only changes here are the creation of the event object on line 8, passing the event as a parameter on lines 10 and 11, and the final section on lines 13 to 15, which sleep briefly, log a message, and then call .set() on the event.

The producer also did not have to change too much:

1 def producer(pipeline, event):
2     """Pretend we're getting a number from the network."""
3     while not event.is_set():
4         message = random.randint(1, 101)
5         logging.info("Producer got message: %s", message)
6         pipeline.set_message(message, "Producer")
7
8     logging.info("Producer received EXIT event. Exiting")

It now will loop until it sees that the event was set on line 3. It also no longer puts the SENTINEL value into the pipeline.

consumer had to change a little more:

 1 def consumer(pipeline, event):
 2     """Pretend we're saving a number in the database."""
 3     while not event.is_set() or not pipeline.empty():
 4         message = pipeline.get_message("Consumer")
 5         logging.info(
 6             "Consumer storing message: %s (queue size=%s)",
 7             message,
 8             pipeline.qsize(),
 9         )
10
11     logging.info("Consumer received EXIT event. Exiting")

While you got to take out the code related to the SENTINEL value, you did have to do a slightly more complicated while condition. Not only does it loop until the event is set, but it also needs to keep looping until the pipeline has been emptied.

Making sure the queue is empty before the consumer finishes prevents another fun issue. If the consumer does exit while the pipeline has messages in it, there are two bad things that can happen. The first is that you lose those final messages, but the more serious one is that the producer can get caught attempting to add a message to a full queue and never return.

This happens if the event gets triggered after the producer has checked the .is_set() condition but before it calls pipeline.set_message().

If that happens, it’s possible for the consumer to wake up and exit with the queue still completely full. The producer will then block in .set_message(), waiting for space in the queue, but the consumer has exited and will never remove a message.

The rest of the consumer should look familiar.

The Pipeline has changed dramatically, however:

 1 class Pipeline(queue.Queue):
 2     def __init__(self):
 3         super().__init__(maxsize=10)
 4
 5     def get_message(self, name):
 6         logging.debug("%s:about to get from queue", name)
 7         value = self.get()
 8         logging.debug("%s:got %d from queue", name, value)
 9         return value
10
11     def set_message(self, value, name):
12         logging.debug("%s:about to add %d to queue", name, value)
13         self.put(value)
14         logging.debug("%s:added %d to queue", name, value)

You can see that Pipeline is a subclass of queue.Queue. Queue has an optional parameter when initializing to specify a maximum size of the queue.

If you give a positive number for maxsize, it will limit the queue to that number of elements, causing .put() to block until there are fewer than maxsize elements. If you don’t specify maxsize, then the queue will grow to the limits of your computer’s memory.
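You can see the maxsize limit in action without any threads at all by using the non-blocking .put_nowait(), which raises queue.Full instead of waiting (a small standalone sketch, not part of the tutorial’s program):

```python
import queue

q = queue.Queue(maxsize=2)
q.put(1)
q.put(2)
try:
    q.put_nowait(3)       # queue already holds maxsize items
    overflowed = False
except queue.Full:        # the non-blocking put signals fullness immediately
    overflowed = True
first, second = q.get(), q.get()
print(overflowed, first, second)  # True 1 2 (FIFO order preserved)
```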

.get_message() and .set_message() got much smaller. They basically wrap .get() and .put() on the Queue. You might be wondering where all of the locking code that prevents the threads from causing race conditions went.

The core devs who wrote the standard library knew that a Queue is frequently used in multi-threading environments and incorporated all of that locking code inside the Queue itself. Queue is thread-safe.

Running this program looks like the following:

$ ./prodcom_queue.py
Producer got message: 32
Producer got message: 51
Producer got message: 25
Producer got message: 94
Producer got message: 29
Consumer storing message: 32 (queue size=3)
Producer got message: 96
Consumer storing message: 51 (queue size=3)
Producer got message: 6
Consumer storing message: 25 (queue size=3)
Producer got message: 31

[many lines deleted]

Producer got message: 80
Consumer storing message: 94 (queue size=6)
Producer got message: 33
Consumer storing message: 20 (queue size=6)
Producer got message: 48
Consumer storing message: 31 (queue size=6)
Producer got message: 52
Consumer storing message: 98 (queue size=6)
Main: about to set event
Producer got message: 13
Consumer storing message: 59 (queue size=6)
Producer received EXIT event. Exiting
Consumer storing message: 75 (queue size=6)
Consumer storing message: 97 (queue size=5)
Consumer storing message: 80 (queue size=4)
Consumer storing message: 33 (queue size=3)
Consumer storing message: 48 (queue size=2)
Consumer storing message: 52 (queue size=1)
Consumer storing message: 13 (queue size=0)
Consumer received EXIT event. Exiting

If you read through the output in my example, you can see some interesting things happening. Right at the top, you can see the producer got to create five messages and place four of them on the queue. It got swapped out by the operating system before it could place the fifth one.

The consumer then ran and pulled off the first message. It printed out that message as well as how deep the queue was at that point:

Consumer storing message: 32 (queue size=3)

This is how you know that the fifth message hasn’t made it into the pipeline yet. The queue is down to size three after a single message was removed. You also know that the queue can hold ten messages, so the producer thread didn’t get blocked by the queue. It was swapped out by the OS.

Note: Your output will change from run to run. That’s the fun part of working with threads!

As the program starts to wrap up, you can see the main thread generate the event, which causes the producer to exit immediately. The consumer still has a bunch of work to do, so it keeps running until it has cleaned out the pipeline.

Try playing with different queue sizes and calls to time.sleep() in the producer or the consumer to simulate longer network or disk access times respectively. Even slight changes to these elements of the program will make large differences in your results.

This is a much better solution to the producer-consumer problem, but you can simplify it even more. The Pipeline really isn’t needed for this problem. Once you take away the logging, it just becomes a queue.Queue.

Here’s what the final code looks like using queue.Queue directly:

import concurrent.futures
import logging
import queue
import random
import threading
import time

def producer(queue, event):
    """Pretend we're getting a number from the network."""
    while not event.is_set():
        message = random.randint(1, 101)
        logging.info("Producer got message: %s", message)
        queue.put(message)

    logging.info("Producer received event. Exiting")

def consumer(queue, event):
    """Pretend we're saving a number in the database."""
    while not event.is_set() or not queue.empty():
        message = queue.get()
        logging.info(
            "Consumer storing message: %s (size=%d)", message, queue.qsize()
        )

    logging.info("Consumer received event. Exiting")

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    pipeline = queue.Queue(maxsize=10)
    event = threading.Event()
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        executor.submit(producer, pipeline, event)
        executor.submit(consumer, pipeline, event)

        time.sleep(0.1)
        logging.info("Main: about to set event")
        event.set()

That’s easier to read and shows how using Python’s built-in primitives can simplify a complex problem.

Lock and Queue are handy classes to solve concurrency issues, but there are others provided by the standard library. Before you wrap up this tutorial, let’s do a quick survey of some of them.

Threading Objects

There are a few more primitives offered by the Python threading module. While you didn’t need these for the examples above, they can come in handy in different use cases, so it’s good to be familiar with them.

Semaphore

The first Python threading object to look at is threading.Semaphore. A Semaphore is a counter with a few special properties. The first one is that the counting is atomic. This means that there is a guarantee that the operating system will not swap out the thread in the middle of incrementing or decrementing the counter.

The internal counter is incremented when you call .release() and decremented when you call .acquire().

The next special property is that if a thread calls .acquire() when the counter is zero, that thread will block until a different thread calls .release() and increments the counter to one.

Semaphores are frequently used to protect a resource that has a limited capacity. An example would be if you have a pool of connections and want to limit the size of that pool to a specific number.
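A minimal sketch of that connection-pool idea (the function names and counters are my own; the peak tracker just demonstrates that the limit holds):

```python
import threading
import time

max_connections = 3
pool = threading.Semaphore(max_connections)
active = 0
peak = 0
count_lock = threading.Lock()

def use_connection():
    global active, peak
    with pool:                       # blocks if 3 threads are already inside
        with count_lock:             # protect the shared counters
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)             # pretend to use the connection
        with count_lock:
            active -= 1

threads = [threading.Thread(target=use_connection) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, however many threads you start
```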

Timer

A threading.Timer is a way to schedule a function to be called after a certain amount of time has passed. You create a Timer by passing in a number of seconds to wait and a function to call:

t = threading.Timer(30.0, my_function)

You start the Timer by calling .start(). The function will be called on a new thread at some point after the specified time, but be aware that there is no promise that it will be called exactly at the time you want.

If you want to stop a Timer that you’ve already started, you can cancel it by calling .cancel(). Calling .cancel() after the Timer has triggered does nothing and does not produce an exception.

A Timer can be used to prompt a user for action after a specific amount of time. If the user does the action before the Timer expires, .cancel() can be called.

Barrier

A threading.Barrier can be used to keep a fixed number of threads in sync. When creating a Barrier, the caller must specify how many threads will be synchronizing on it. Each thread calls .wait() on the Barrier. They all will remain blocked until the specified number of threads are waiting, and then they are all released at the same time.

Remember that threads are scheduled by the operating system so, even though all of the threads are released simultaneously, they will be scheduled to run one at a time.

One use for a Barrier is to allow a pool of threads to initialize themselves. Having the threads wait on a Barrier after they are initialized will ensure that none of the threads start running before all of the threads are finished with their initialization.
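A small sketch of that initialization pattern (the worker bodies are placeholders; appending to a list stands in for the post-initialization work):

```python
import threading

barrier = threading.Barrier(3)   # wait for exactly 3 threads
results = []

def worker(name):
    # ...pretend to do per-thread initialization here...
    barrier.wait()               # block until all 3 workers have arrived
    results.append(name)         # only runs once every thread is ready

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2]
```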

Conclusion: Threading in Python

You’ve now seen much of what Python threading has to offer and some examples of how to build threaded programs and the problems they solve. You’ve also seen a few instances of the problems that arise when writing and debugging threaded programs.

If you’d like to explore other options for concurrency in Python, check out Speed Up Your Python Program With Concurrency.

If you’re interested in doing a deep dive on the asyncio module, go read Async IO in Python: A Complete Walkthrough.

Whatever you do, you now have the information and confidence you need to write programs using Python threading!


Categories: FLOSS Project Planets

James Duncan

Planet Apache - Mon, 2019-03-25 08:45

If you live in the European Union, please consider participating in the #SaveYourInternet campaign.

Categories: FLOSS Project Planets

leftmouseclickin: Plot the Aroon Oscillator values with python

Planet Python - Mon, 2019-03-25 08:39

Welcome to the ongoing Forex and Stock python project. In this chapter we will create a method to plot the Aroon Oscillator values with python. The Aroon Oscillator is a line that can fall between -100 and 100. A high Oscillator value is an indication of an uptrend, while a low Oscillator value is an indication of a downtrend. When Aroon Up remains high because of consecutive new highs, the Oscillator value will be high, following the uptrend. Below is the revised version of the project; we have created a method to plot the Aroon Oscillator values and, of course, a button that calls that method.
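To make the indicator itself concrete, here is a small self-contained sketch of how an Aroon Oscillator value can be computed from a plain list of closing prices. This is not part of the project below, which fetches ready-made values from Alpha Vantage; the function name and the 25-bar default period are just illustrative.

```python
def aroon_oscillator(prices, period=25):
    """Return Aroon Up minus Aroon Down for the most recent window."""
    # Look at the last `period + 1` prices (the current bar plus `period` back).
    window = prices[-(period + 1):]
    # Bars elapsed since the highest high and the lowest low in the window.
    since_high = len(window) - 1 - window.index(max(window))
    since_low = len(window) - 1 - window.index(min(window))
    aroon_up = (period - since_high) / period * 100
    aroon_down = (period - since_low) / period * 100
    return aroon_up - aroon_down  # ranges from -100 to 100

# A steadily rising series keeps making new highs, so the oscillator pins at 100.
print(aroon_oscillator(list(range(30)), period=25))  # 100.0
```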

import json
from tkinter import *
import tkinter.ttk as tk
from alpha_vantage.foreignexchange import ForeignExchange
from alpha_vantage.techindicators import TechIndicators
from alpha_vantage.timeseries import TimeSeries
import matplotlib.pyplot as plt
from alpha_vantage.sectorperformance import SectorPerformances

win = Tk()  # Create tk instance
win.title("Real Forex n Stock")  # Add a title
win.resizable(0, 0)  # Disable resizing the GUI
win.configure(background='white')  # change window background color

selectorFrame = Frame(win, background="white")  # create the top frame to hold base and quote currency combobox
selectorFrame.pack(anchor="nw", pady=2, padx=10)
currency_label = Label(selectorFrame, text="Select base currency / quote currency :", background="white")
currency_label.pack(anchor="w")  # the currency pair label
selector1Frame = Frame(win, background="white")  # create the middle frame to hold the stock comboboxes
selector1Frame.pack(anchor="nw", pady=2, padx=10)
stock_label = Label(selector1Frame, text="Select Stock / Time Interval / Series type / Moving average type / Fast Period / Slow Period :", background="white")
stock_label.pack(anchor="w")  # the stock label

curr1 = tuple()  # the tuple which will be populated by base and quote currency
currency_list = ['AUD', 'BCH', 'BNB', 'BND', 'BTC', 'CAD', 'CHF', 'CNY', 'EOS', 'EUR', 'ETH', 'GBP', 'HKD', 'JPY', 'LTC', 'NZD', 'MYR', 'TRX', 'USD', 'USDT', 'XLM', 'XRP']  # major world currencies

# populate the combo box for both the base and quote currency
for key in currency_list:
    curr1 += (key,)

# populate the stock symbol tuple
f = open("stock.txt", "r")
curr2 = tuple()
for line in f.readlines():
    curr2 += (line.replace('\n', ''),)
f.close()

# Create a combo box for base currency
base_currency = StringVar()  # create a string variable
based = tk.Combobox(selectorFrame, textvariable=base_currency)
based['values'] = curr1
based.pack(side=LEFT, padx=3)

# Create a combo box for quote currency
quote_currency = StringVar()  # create a string variable
quote = tk.Combobox(selectorFrame, textvariable=quote_currency)
quote['values'] = curr1
quote.pack(side=LEFT, padx=3)

# Create a combo box for stock items
stock_symbol = StringVar()  # create a string variable
stock = tk.Combobox(selector1Frame, textvariable=stock_symbol)
stock['values'] = curr2
stock.current(0)
stock.pack(side=LEFT, padx=3)

interval = tk.Combobox(selector1Frame)
interval['values'] = ('1min', '5min', '15min', '30min', '60min', 'daily', 'weekly', 'monthly')
interval.current(0)
interval.pack(side=LEFT, padx=3)

price_type = tk.Combobox(selector1Frame)
price_type['values'] = ('close', 'open', 'high', 'low')
price_type.current(0)
price_type.pack(side=LEFT, padx=3)

mattype_list = ['Simple Moving Average (SMA)', 'Exponential Moving Average (EMA)', 'Weighted Moving Average (WMA)', 'Double Exponential Moving Average (DEMA)', 'Triple Exponential Moving Average (TEMA)', 'Triangular Moving Average (TRIMA)', 'T3 Moving Average', 'Kaufman Adaptive Moving Average (KAMA)', 'MESA Adaptive Moving Average (MAMA)']
matype_type = tk.Combobox(selector1Frame, width=37)
matype_type['values'] = tuple(mattype_list)  # same entries as mattype_list, so .index() below always matches
matype_type.current(0)
matype_type.pack(side=LEFT, padx=3)

# fill up the fast period and slow period combo boxes with integers ranging from 2 to 10,000
fa = tuple()
for i in range(2, 10001):
    fa += (i,)
fast_pe = tk.Combobox(selector1Frame)
fast_pe['values'] = fa
fast_pe.current(0)
fast_pe.pack(side=LEFT, padx=3)
slow_pe = tk.Combobox(selector1Frame)
slow_pe['values'] = fa
slow_pe.current(0)
slow_pe.pack(side=LEFT, padx=3)

# create text widget area
s = StringVar()  # create string variable which will be used to fill up the Forex data

# create currency frame and text widget to display the incoming forex data
currencyFrame = Frame(win)
currencyFrame.pack(side=TOP, fill=X)
currency = Label(currencyFrame)
currency.pack(fill=X)
text_widget = Text(currency, fg='white', background='black')
text_widget.pack(fill=X)
s.set("Click the find button to find out the currency exchange rate")
text_widget.insert(END, s.get())

buttonFrame = Frame(win)  # create a bottom frame to hold the buttons
buttonFrame.pack(side=BOTTOM, fill=X, pady=6, padx=10)

# first get the api key and secret from the file
f = open("alpha.txt", "r")
api_key = f.readline()
f.close()
api_key = api_key.replace('\n', '')

def get_exchange_rate():  # this method will display the incoming forex data after the api call
    try:
        cc = ForeignExchange(key=api_key)
        from_ = based.get()
        to_ = quote.get()
        countVar = StringVar()  # used to hold the character count
        text_widget.tag_remove("search", "1.0", "end")  # clear the highlighted currency pair
        if from_ != '' and to_ != '' and from_ != to_:
            data, _ = cc.get_currency_exchange_rate(from_currency=from_, to_currency=to_)
            exchange_rate = dict(json.loads(json.dumps(data)))
            count = 1
            sell_buy = str(count) + ".) Pair : " + exchange_rate['1. From_Currency Code'] + "(" + exchange_rate['2. From_Currency Name'] + ")" + " / " + exchange_rate['3. To_Currency Code'] + "(" + exchange_rate['4. To_Currency Name'] + ") : " + str(exchange_rate['5. Exchange Rate']) + '\n'
            text_widget.delete('1.0', END)  # clear all the previous text first
            s.set(sell_buy)
            text_widget.insert(INSERT, s.get())  # display forex rate in text widget
            pos = text_widget.search(from_, "1.0", stopindex="end", count=countVar)
            text_widget.tag_configure("search", background="green")
            end_pos = float(pos) + float(0.7)
            text_widget.tag_add("search", pos, str(end_pos))  # highlight the background of the searched currency pair
            pos = float(pos) + 2.0
            text_widget.see(str(pos))
    except:
        print("An exception occurred")

def plot_stock_echange():
    try:
        stock_symbol_text = stock.get()  # get the selected symbol
        if stock_symbol_text != '':
            ts = TimeSeries(key=api_key, output_format='pandas')
            data, meta_data = ts.get_intraday(symbol=stock_symbol_text, interval='1min', outputsize='full')
            data['4. close'].plot()
            stock_title = 'Intraday Times Series for the ' + stock_symbol_text + ' stock (1 min)'
            plt.title(stock_title)
            plt.show()
    except:
        print("An exception occurred")

def plot_stock_technical():
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            data, meta_data = ti.get_bbands(symbol=stock_symbol_text, interval=interval.get(), series_type=price_type.get(), matype=mattype_list.index(matype_type.get()), time_period=int(interval.get().replace('min', '')))
            data.plot()
            stock_title = 'BBbands indicator for ' + stock_symbol_text + ' ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except:
        print("An exception occurred")

def plot_op():  # plot the Absolute Price Oscillator (APO)
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            data, meta_data = ti.get_apo(symbol=stock_symbol_text, interval=interval.get(), series_type=price_type.get(), matype=mattype_list.index(matype_type.get()), fastperiod=fast_pe.get(), slowperiod=slow_pe.get())
            data.plot()
            stock_title = 'Absolute Price Oscillator (APO) for ' + stock_symbol_text + ' ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except ValueError:
        print("An exception occurred")

def plot_adxr():  # plot the average directional movement index rating
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            data, meta_data = ti.get_adxr(symbol=stock_symbol_text, interval=interval.get(), time_period=int(interval.get().replace('min', '')))
            data.plot()
            stock_title = 'Average directional movement index rating for ' + stock_symbol_text + ' at ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except ValueError:
        print("An exception occurred")

def plot_adx():  # plot the average directional movement index
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            data, meta_data = ti.get_adx(symbol=stock_symbol_text, interval=interval.get(), time_period=int(interval.get().replace('min', '')))
            data.plot()
            stock_title = 'Average directional movement index for ' + stock_symbol_text + ' at ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except ValueError:
        print("An error exception occurred")

def plot_sector_performance():
    sp = SectorPerformances(key=api_key, output_format='pandas')
    data, meta_data = sp.get_sector()
    data['Rank A: Real-Time Performance'].plot(kind='bar')
    plt.title('Real Time Performance (%) per Sector')
    plt.tight_layout()
    plt.grid()
    plt.show()

def plot_ad():
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            data, meta_data = ti.get_ad(symbol=stock_symbol_text, interval=interval.get())
            data.plot()
            stock_title = 'Chaikin A/D line values for ' + stock_symbol_text + ' ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except:
        print("An exception occurred")

def plot_aroon():  # plot the aroon values
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            data, meta_data = ti.get_aroon(symbol=stock_symbol_text, interval=interval.get(), series_type=price_type.get(), time_period=int(interval.get().replace('min', '')))
            data.plot()
            stock_title = 'The Aroon Up and the Aroon Down lines for ' + stock_symbol_text + ' ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except ValueError:
        print("An exception occurred")

def plot_aroonosc():  # plot the aroon oscillator values
    try:
        stock_symbol_text = stock.get()  # get the selected stock symbol
        if stock_symbol_text != '':
            ti = TechIndicators(key=api_key, output_format='pandas')
            # get_aroonosc (not get_aroon) returns the oscillator values
            data, meta_data = ti.get_aroonosc(symbol=stock_symbol_text, interval=interval.get(), series_type=price_type.get(), time_period=int(interval.get().replace('min', '')))
            data.plot()
            stock_title = 'The aroon oscillator values for ' + stock_symbol_text + ' ' + interval.get()
            plt.title(stock_title)
            plt.show()
    except ValueError:
        print("An exception occurred")

action_vid = tk.Button(buttonFrame, text="Calculate Exchange Rate", command=get_exchange_rate)  # button used to find out the exchange rate of a currency pair
action_vid.pack(side=LEFT, padx=2)
action_stock_plot = tk.Button(buttonFrame, text="Plot Stock", command=plot_stock_echange)  # button used to plot the intra-minute stock value
action_stock_plot.pack(side=LEFT, padx=2)
action_technical_plot = tk.Button(buttonFrame, text="Plot Technical", command=plot_stock_technical)  # button used to plot the 60 minutes stock technical value
action_technical_plot.pack(side=LEFT, padx=2)
action_sector_plot = tk.Button(buttonFrame, text="Plot Sector Performance", command=plot_sector_performance)  # button used to plot the sector performance graph
action_sector_plot.pack(side=LEFT, padx=2)
action_ad_plot = tk.Button(buttonFrame, text="Plot AD Line", command=plot_ad)  # button used to plot the A/D line graph
action_ad_plot.pack(side=LEFT, padx=2)
action_ad_op = tk.Button(buttonFrame, text="Plot APO Line", command=plot_op)  # button used to plot the APO line graph
action_ad_op.pack(side=LEFT, padx=3)
action_adxr = tk.Button(buttonFrame, text="Plot ADXR Line", command=plot_adxr)  # button used to plot the average directional movement index rating
action_adxr.pack(side=LEFT, padx=3)
action_adx = tk.Button(buttonFrame, text="Plot ADX Line", command=plot_adx)  # button used to plot the average directional movement index
action_adx.pack(side=LEFT, padx=3)
action_aroon = tk.Button(buttonFrame, text="Plot Aroon Line", command=plot_aroon)  # button used to plot the aroon values
action_aroon.pack(side=LEFT, padx=3)
action_aroonosc = tk.Button(buttonFrame, text="Plot Aroon Oscillator Line", command=plot_aroonosc)  # button used to plot the aroon oscillator values
action_aroonosc.pack(side=LEFT, padx=3)

win.iconbitmap(r'ico.ico')
win.mainloop()

If you run the above program and click the new button to call the method, you will see the following.

Plot the Aroon Oscillator line with #python @stock pic.twitter.com/IO6tCoNmIw

— TechLikin (@ChooWhei) March 25, 2019

Like, share or follow me on Twitter.

Categories: FLOSS Project Planets

Digital Echidna: Thoughts on all things digital: New Search Overrides Module

Planet Drupal - Mon, 2019-03-25 08:03
You can make the most elegant, relevance-based site search appliance possible -- but, still, sometimes you’re going to want to ‘game’ the system. Manipulating site search results sounds nefarious, but really it’s all about providing the most…
Categories: FLOSS Project Planets

Bits from Debian: Google Platinum Sponsor of DebConf19

Planet Debian - Mon, 2019-03-25 07:30

We are very pleased to announce that Google has committed to support DebConf19 as a Platinum sponsor.

"The annual DebConf is an important part of the Debian development ecosystem and Google is delighted to return as a sponsor in support of the work of the global community of volunteers who make Debian and DebConf a reality" said Cat Allman, Program Manager in the Open Source Programs and Making & Science teams at Google.

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner, sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

With this additional commitment as Platinum Sponsor for DebConf19, Google helps make our annual conference possible, and directly supports the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Google, for your support of DebConf19!

Become a sponsor too!

DebConf19 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf19 website at https://debconf19.debconf.org.

Categories: FLOSS Project Planets

James Duncan

Planet Apache - Mon, 2019-03-25 06:00

Apple could probably start a massive cycle of MacBook Pro sales by building their Magic Keyboard into the next version. Done.

I’d buy one myself in a heartbeat. The Magic Keyboard is one of my all-time favorites at this point. So much so that I’m considering getting a Studio Neat Canopy so that I can use one with my iPad.

Yes, I know of a few people who prefer or have adapted to the current generation of MacBook keyboards. But, count up the number of people who are holding onto an older laptop and add them to the number of people who have complained vocally about the current generation.

Draw your own conclusions.

Categories: FLOSS Project Planets

Petter Reinholdtsen: PlantUML for text based UML diagram modelling - nice free software

Planet Debian - Mon, 2019-03-25 04:35

As part of my involvement with the Nikita Noark 5 core project, I have been proposing improvements to the API specification created by The National Archives of Norway and helped migrate the text from a version-control-unfriendly binary format (docx) to Markdown in git. Combined with the migration to a public git repository (on github), this has made it possible for anyone to suggest improvements to the text.

The specification is filled with UML diagrams. I believe the original diagrams were modelled using Sparx Systems Enterprise Architect, and exported as EMF files for import into docx. This approach makes it very hard to track changes using a version control system. To improve the situation I have been looking for a good text based UML format with associated command line free software tools on Linux and Windows, to allow anyone to send in corrections to the UML diagrams in the specification. The tool must be text based to work with git, and command line to be able to run it automatically to generate the diagram images. Finally, it must be free software to allow anyone, even those that can not accept a non-free software license, to contribute.

I did not know much about free software UML modelling tools when I started. I have used dia and inkscape for simple modelling in the past, but neither are available on Windows, as far as I could tell. I came across a nice list of text mode uml tools, and tested out a few of the tools listed there. The PlantUML tool seemed most promising. After verifying that the package is available in Debian and finding its Java source under a GPL license on github, I set out to test if it could represent the diagrams we needed, ie the ones currently in the Noark 5 Tjenestegrensesnitt specification. I am happy to report that it could represent them, even though it has a few warts here and there.

After a few days of modelling I completed the task this weekend. A temporary link to the complete set of diagrams (original and from PlantUML) is available in the github issue discussing the need for a text based UML format, but please note that I lack a sensible tool to convert EMF files to PNGs, so the "original" rendering is not as good as the original was in the published PDF.

Here is an example UML diagram, showing the core classes for keeping metadata about archived documents:

@startuml
skinparam classAttributeIconSize 0
!include media/uml-class-arkivskaper.iuml
!include media/uml-class-arkiv.iuml
!include media/uml-class-klassifikasjonssystem.iuml
!include media/uml-class-klasse.iuml
!include media/uml-class-arkivdel.iuml
!include media/uml-class-mappe.iuml
!include media/uml-class-merknad.iuml
!include media/uml-class-registrering.iuml
!include media/uml-class-basisregistrering.iuml
!include media/uml-class-dokumentbeskrivelse.iuml
!include media/uml-class-dokumentobjekt.iuml
!include media/uml-class-konvertering.iuml
!include media/uml-datatype-elektronisksignatur.iuml
Arkivstruktur.Arkivskaper "+arkivskaper 1..*" <-o "+arkiv 0..*" Arkivstruktur.Arkiv
Arkivstruktur.Arkiv o--> "+underarkiv 0..*" Arkivstruktur.Arkiv
Arkivstruktur.Arkiv "+arkiv 1" o--> "+arkivdel 0..*" Arkivstruktur.Arkivdel
Arkivstruktur.Klassifikasjonssystem "+klassifikasjonssystem [0..1]" <--o "+arkivdel 1..*" Arkivstruktur.Arkivdel
Arkivstruktur.Klassifikasjonssystem "+klassifikasjonssystem [0..1]" o--> "+klasse 0..*" Arkivstruktur.Klasse
Arkivstruktur.Arkivdel "+arkivdel 0..1" o--> "+mappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Arkivdel "+arkivdel 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Klasse "+klasse 0..1" o--> "+mappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Klasse "+klasse 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Mappe --> "+undermappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Mappe "+mappe 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Mappe
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Dokumentbeskrivelse
Arkivstruktur.Basisregistrering -|> Arkivstruktur.Registrering
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Basisregistrering
Arkivstruktur.Registrering "+registrering 1..*" o--> "+dokumentbeskrivelse 0..*" Arkivstruktur.Dokumentbeskrivelse
Arkivstruktur.Dokumentbeskrivelse "+dokumentbeskrivelse 1" o-> "+dokumentobjekt 0..*" Arkivstruktur.Dokumentobjekt
Arkivstruktur.Dokumentobjekt *-> "+konvertering 0..*" Arkivstruktur.Konvertering
Arkivstruktur.ElektroniskSignatur -[hidden]-> Arkivstruktur.Dokumentobjekt
@enduml

The format is quite compact, with little redundant information. The text expresses entities and relations, and there is little layout related fluff. One can reuse content by using include files, allowing for consistent naming across several diagrams. The include files can be standalone PlantUML too. Here is the content of media/uml-class-arkivskaper.iuml:

@startuml
class Arkivstruktur.Arkivskaper {
  +arkivskaperID : string
  +arkivskaperNavn : string
  +beskrivelse : string [0..1]
}
@enduml

This is what the complete diagram for the PlantUML notation above look like:

A cool feature of PlantUML is that the generated PNG files include the entire original source diagram as text. The source (with include statements expanded) can be extracted using for example exiftool. Another cool feature is that parts of the entities can be hidden after inclusion. This allows using include files with all attributes listed, even for UML diagrams that should not list any attributes.

The diagram also shows some of the warts. Sometimes the layout engine places text labels on top of each other, and sometimes it places the class boxes too close to each other, not leaving room for the labels on the relationship arrows. The former can be worked around by placing extra newlines in the labels (ie "\n"). I did not do it here to be able to demonstrate the issue. I have not found a good way around the latter, so I normally try to reduce the problem by changing from vertical to horizontal links to improve the layout.

All in all, I am quite happy with PlantUML, and very impressed with how quickly its lead developer responds to questions. So far I have received an answer to my questions within a few hours of sending an email. I definitely recommend looking at PlantUML if you need to make UML diagrams. Note, PlantUML can draw a lot more than class relations. Check out the documentation for a complete list. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Codementor: R vs Python | Best Programming Language for Data Science and Analysis

Planet Python - Mon, 2019-03-25 04:09
This comparison blog on R vs Python will provide you with a crisp knowledge about the two most favorite languages for the data scientists and data analysts.
Categories: FLOSS Project Planets

Made With Mu: Mu 1.1.0-alpha.1 Released

Planet Python - Mon, 2019-03-25 04:00

We have just released the first “alpha” of the upcoming 1.1 version of Mu. To try it, follow the links on Mu’s download page. Mu is a team effort, so many thanks to all the volunteers who have contributed in innumerable ways to make this happen.

This is the first of several “alpha” releases for the next version of Mu. Over the coming weeks we intend to release newer versions based upon feedback from you, our users. Please don’t hesitate to get in touch if you have comments, suggestions or have found a problem (bug). The more feedback we get, the better. While we carefully read all feedback, we can’t always respond to nor address such feedback (remember, we’re all volunteers). If you’re interested in what we’re up to, all our development work happens in the open on GitHub and you can even chat directly with us (we’re friendly, so come say, “Hi”).

What exactly do we mean by “alpha”..?

Essentially, “alpha” means we’re adding new features, and there may be bugs. When we’re happy with the new features we’ll switch to “beta” releases, where we only polish and fix problems with the features created so far. When we feel the “beta” release is stable enough, well tested and checked by a large number of different sorts of user, then we’ll release the “final” 1.1 version which will replace the current version we recommend you download from the website. The current 1.0 version of Mu will no longer be supported and our advice will be for you to upgrade.

So, what’s new in this first “alpha” release?

The first new feature is the ability to install third party Python packages. There are almost 200,000 community created packages on the Python Package Index (PyPI ~ pronounced Pie-Pee-eye).

The example above shows us installing a silly Python module called Arrr which converts English into Pirate speak. First we open the admin dialogue, select the “Third Party Packages” tab and simply type into the text area the name of the package we want to install. Then Mu shows us its progress downloading and installing the new package (and any of its dependencies). Afterwards we can use the arrr module in both our code and from the REPL. Finally, if we open the “Third Party Packages” tab in the admin dialogue again, we can see Arrr listed as an installed package. To uninstall it, simply remove its entry in the text area and Mu does the rest. Once uninstalled arrr is no longer available to us. Importantly, the packages installed by Mu are only available to Python run from Mu. We do this because the universal feedback from school system administrators is that they want Mu to be completely isolated from anything else that may be installed on the computer.

Next, we have a really useful feature for beginner programmers who want help writing tidy looking code. There is an amazing tool, called Black, which analyses code and reformats it into readable code (it’s called “Black” because you can have any type of reformatting, so long as it’s Black’s). The creator of Black is a talented developer called Łukasz Langa who has a son learning to code with, you guessed it, Mu. I recently had the great pleasure of catching up with Łukasz when he was in London and he suggested Black might be helpful for beginners. In the time it took me to catch my train home, Łukasz had created the code needed to make this work. Some testing with teachers and beginners confirmed how useful this feature would be.

In the example above we see some “messy” looking Python code. There’s no gap between the import statement and the rest of the code, items in a list are single-quoted strings (which makes it hard to use apostrophes) and a second list is too wide for the editor. Once the new “Tidy” button is clicked, Black immediately reformats the code to correct these visual niggles. It’s important to note that the code itself remains unchanged, it has merely been reformatted. Furthermore, if your code contains invalid Python then Black will signal this to Mu and the “Check” function will run to highlight the problems which should be fixed before running the “Tidy” function again.

The final new feature is a completely new mode created by Danish computer scientist and teacher Martin Dybdal. The new mode targets the small, network enabled boards made by Espressif: ESP8266 and ESP32. These boards are special because they connect to the Internet via WiFi, are very cheap, easily obtained and make great development boards for simple Internet of Things projects. Happily MicroPython runs on these boards making them ideal for educational use.

With the help of Murilo Polese we were able to test the new mode and refine the capabilities to the following features:

  • A “Run” button which will use a serial connection to push the code in the current tab onto the board to be run immediately.
  • A files button which works in exactly the same way as the micro:bit mode: drag Python files to or from the board. If one of the files is called main.py then it will run automatically when the board starts.
  • A REPL to connect to MicroPython running on the attached board.
  • A Plotter, which turns tuples of numeric data read from the board via the serial connection into pretty plots. This works in exactly the same way as in the other modes which use the plotter.

This release also includes quite a number of bug fixes and behind-the-scenes tidy ups. You can find out the full details in the project CHANGELOG.

What’s still to come..? The following is in the pipeline:

  • A new mode for simple web development with Python, HTML and CSS. Beginners will be able to very quickly create and view simple dynamic websites (such as a personal blog).
  • A REPL for PyGameZero mode, so you can interactively poke around with the code of your game!
  • Some new admin settings which will allow you to configure how Mu looks and behaves (especially important for dyslexic users for whom the most effective colour scheme for reading code is a personal discovery).
  • A new mode for the Calliope board (subject to extensive testing). The Calliope is Germany’s answer to the micro:bit, and it runs MicroPython!

As always, constructive critique, ideas and bug reports are most welcome.

Watch this space!

Categories: FLOSS Project Planets

Tryton News: Release of GooCalendar 0.5

Planet Python - Mon, 2019-03-25 03:00

@ced wrote:

We are glad to announce the release of GooCalendar version 0.5.

GooCalendar is a Python library that implements a calendar widget using GooCanvas.

In addition to bug-fixes, this release contains this following improvements:

  • A single day view is added.
  • Support for Python 2 is removed.
  • It now uses GTK+ 3.

GooCalendar is available on PyPI: https://pypi.org/project/GooCalendar/0.5/

Posts: 1

Participants: 1

Read full topic

Categories: FLOSS Project Planets

Iustin Pop: The last 10 percent

Planet Debian - Mon, 2019-03-25 01:52

Gamification is everywhere these days, but sometimes it’s well implemented, sometimes not. In this particular case—the Garmin environment—it introduced badges around a year or two ago for all kinds of things (whether real achievements or not), most of them interesting or at least funny, like having an activity while below 0°C, etc.

The other nice part of all this was that it allows you to easily compare badges with connections. Which results, err can result in races to both get the badges your connections have but you don’t, and get ones that they don’t. All this because it also has a leaderboard based on the total points accumulated—and some badges are worth way more points than basic ones.

One thing I found out this way was that some of my connections had the “achieve step goal 30 days in a row” (4 points!), or even 60 days (8 points!!!). But how can this be, since by default Garmin increases the goal as long as you hit each day’s goals? This was the reason I couldn’t get it before, as the target inexorably increases, and would reach something like 3 hours of walking needed per day at 60 days.

So, my conclusion was that this is only possible by switching to a fixed goal (N steps per day), and I did so. I set myself a moderate goal (I don't walk much, sadly, especially when I commute by bike) and started working towards it. This was back in December. And I still don't have the badge, argh!
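The badge mechanic described above (a fixed daily goal met N days in a row) can be sketched as a simple streak count. This is illustrative Python only; the step counts and goal are made up, and Garmin's actual logic is not public:

```python
def longest_streak(daily_steps, goal):
    """Length of the longest run of consecutive days meeting a fixed step goal."""
    best = current = 0
    for steps in daily_steps:
        # A day at or above the goal extends the streak; a miss resets it to zero.
        current = current + 1 if steps >= goal else 0
        best = max(best, current)
    return best

# Hypothetical week of step totals against a 7,000-step goal:
print(longest_streak([8040, 7500, 9100, 6990, 7200, 8300], 7000))  # -> 3
```

Day four (6990 steps) falls 10 steps short and resets the streak, which is exactly the kind of near miss described below.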

In the first iteration, I went all the way up to 29 days, had 40 steps left for the day, and was almost celebrating, but at the end of the day I completely forgot about it—it would have taken just 2 minutes of moving. 29 days of carefully checking my goal each evening, all gone because I went to sleep early on vacation… All that stood between me and my goal were 40 lousy steps.

I said no problem, I'll start again. So once more I go, all the way up to 28 days, when sadly external factors intervened and I really couldn't hit my goal on the 28th day. At least it wasn't my fault this time, or at least not entirely—had I met my goal early in the day instead of leaving it for the evening, I would still have gotten it.

So, I'm on the third iteration now. All went well until day 24 or so, when I caught a bit of a cold. That was Wednesday last week, but still, between everything (I got out of the house that day), it was easy to get the steps. Then, on Thursday, I was really ill and stayed in the house, with a running nose and a headache and all that, when I realised: this is a “you-will-not-get-the-badge” event type!! Damn you, dungeon master! I needed to move. So there I was, indoors, slowly pacing between hot teas, for one hour, then after some sleep for another hour, until I hit my goal. Yay!

Today (Monday), I’m at 27 days out of 30. If I hit my goal today, tomorrow and Wednesday, I’ll finally have the damn badge and can both increase my target steps and start working towards the 60 day badge.

But three days is a lot. Many things could happen, including the worst case: my watch could stop working or I could lose it. And I don’t have a backup… I need to be careful…

So yeah, sometimes gamification really works :)

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Miro Hrončok

Planet Python - Mon, 2019-03-25 01:05

This week we welcome Miro Hrončok (@hroncok) as our PyDev of the Week! Miro teaches at Czech Technical University and helps out with the local PyLadies chapter. He is also involved with the Special Interest Group for Python in Fedora as he works for Red Hat in addition to his teaching position. You can check out some of the projects he is involved in over on Github or check out his website. Let’s take a few moments to get to know Miro better!

Can you tell us a little about yourself (hobbies, education, etc):

I’m a guy from Prague, Czech Republic, in my late twenties, yet both of my parents are from Košice, Slovakia, so I’m kinda both Czech and Slovak. I’ve studied Pascal at a gymnasium and later did my bachelor’s and master’s in Computer Science/Software Engineering at the Faculty of Information Technology, Czech Technical University in Prague. Most of my hobbies are related to computers and technology, but apart from that I have two Irish Wolfhounds and I love to ski.

One of my dogs when she was little

My technological interest has always been connected to Free and Open Source Software (and Hardware), starting with the Czech Linux community when I was a teenager, co-founding the RepRap 3D Printing Lab during my early years at the university and joining Fedora and later Red Hat, now working in the Python Maintenance team, also pro-active in the Czech Python community.

Why did you start using Python?

Python somehow sprang to the surface everywhere I went. Whether it was some basic Linux utilities or several RepRap-oriented apps, it happened to be written in Python (or Perl). I liked the Python syntax a bit more, and since I was already familiar with multiple languages such as C, C++, Java, Pascal and even PHP, I decided to give Python a try. By accident, I got the Czech translation of Dive Into Python 3 by Mark Pilgrim for free at some Czech Linux conference, so that was my primary source of wisdom.

What other programming languages do you know and which is your favorite?

Apart from the already mentioned languages, I mostly work in (Bourne Again) Shell. I don’t think I have a favorite programming language other than Python, but I love both TeX and OpenSCAD (not “programming” languages per se).

I’m really excited about Rust, yet I still haven’t found the right project to learn it on, and I’ve somehow lost the patience to learn from hello-world examples since I finished university.

Haskell is another language I’d like to learn one day.

OpenSCAD

What projects are you working on now?

My primary responsibility at Red Hat, and probably also my main hobby, is Fedora. Fedora is a large project; I started as a packager (aiming to bring in the RepRap apps) and became a Fedora Ambassador several months later, mentored by my Czech friend and colleague Jiří Eischmann and several other Fedora Ambassadors, mostly from Germany.

Later I became a packager sponsor (able to “sponsor” more packagers into the Fedora Project) and recently joined the Fedora Packaging Committee and the Fedora Engineering Steering Committee.

As part of my job, I’m a member of Fedora’s Python special interest group, and as the “main Fedora” person on Red Hat’s Python Maintenance team I mostly work on various aspects of Python in Fedora, trying to make Fedora the best system for a Python developer.

Other than Fedora, I contribute to various upstream projects (mostly small patches) including pytest, betamax, Printrun and CPython itself.

I also maintain or co-maintain various projects created by the Czech Python community; most importantly, I’m the main maintainer of Elsa, an opinionated wrapper around Frozen-Flask.

Those are mostly programming projects, but I also teach. We have 3D Printing and Advanced Python classes at the university and I also participate in the Czech PyLadies beginners course.

What non-Python open source projects do you enjoy using?

I guess I’ve already mentioned a lot. Apart from the obvious things like Git, I really enjoy the Geany text editor (written in C) when working on my Xfce desktop; for terminal use I’ve recently grown fond of micro (written in Go). As for utility tools, I love ripgrep (a `grep -r` or `git grep` replacement written in Rust).

Why did you go into teaching at a university?

When we started the 3D Printing Lab (IIRC I was in the second year of my bachelor’s), we were searching for ways to get other students familiar with the RepRap project and our activities. We decided to create a new optional class about 3D printing. We got somebody to officially lead the class, but it was in fact taught by students. Later on, it was easy to get involved with other things, and when I learned that they were looking for help in the Technical Documentation class (LaTeX, graphviz, gnuplot and later even groff), I chipped in. I guess I tend to participate in things I like.

When I was finishing my master’s, I got the idea to teach advanced Python as well, and I talked my team lead Petr Viktorin into it. We managed to open a “Red Hat sponsored” course for master’s students, with open source study materials under a Creative Commons license.

Since I teach mostly lads at the university, I’ve also joined the PyLadies beginners courses, to help balance the situation at least a bit by introducing Python to ladies.

And since I’ve always worked from home, being able to teach once or twice a week gives me more opportunities to socialize.

What challenges (if any) do you see in the Czech Republic for people wanting to learn programming?

I honestly don’t. There are plenty of freely available resources in both English and Czech, plenty of public events where you can learn more, countless initiatives for kids to learn, even courses for ex-miners in the Ostrava region. All you need is the will to invest your time in programming.

I don’t really know what the state of informatics classes at elementary and secondary schools is; it probably varies from institution to institution and can be anything from seriously horrible to excellent.

My advice: If your school cannot teach you programming, look beyond.

Come to the Czech PyCon to meet all the enthusiastic people from my homeland and see how we do things here.

I mean it, seriously, come. The next one is in Ostrava in June 2019 (and if you read this later, check the website for future years).

PyCon 2017

What’s the best thing about working at Red Hat?

Working with excellent people on stuff that I’d probably work on for free anyway, and getting paid for it.

I’ve worked at Red Hat for six years now, and most of that time I’ve just done what I thought was right. Don’t get me wrong, I talk to other people as well, but there is no client demanding a feature right now (or at least the client is layered away from me), and no big bad boss telling me I can only contribute to Free Software projects on Fridays. Everything I do for Red Hat is Free Software—in the communities.

Thanks for doing the interview, Miro!

Categories: FLOSS Project Planets

Russ Allbery: Review: The Love Song of Numo and Hammerfist

Planet Debian - Sun, 2019-03-24 22:22

Review: The Love Song of Numo and Hammerfist, by Maddox Hahn

Publisher: Maddox Hahn Copyright: 2018 ISBN: 1-73206-630-2 Format: Kindle Pages: 329

Numo is a drake, a type of homunculus created by alchemy from a mandrake root. He is, to be more precise, a stoker: a slave whose purpose is to stoke the hypocaust of his owning family. Numo's life is wood and fires and the colors of flames, not running messages to the arena for his master. (That may be part of the message his master was sending.) Falling desperately in love at first sight with an infandus fighting in the arena is definitely not part of his normal job.

Hammerfist is an infandus, the other type of homunculus. They aren't made from mandrake root. They're made from humans who have been sentenced to transmogrification. Hammerfist has had a long and successful career in the arena, but she's starting to suffer from the fall, which means she's remembering that she used to be human. This leads to inevitable cognitive decline and eventually death. In Hammerfist's case, it also leads to plotting revolution against the alchemists who make homunculi and use them as slaves.

Numo is not the type to plot revolution. His slave lobe is entirely intact, which means the idea of disobeying his owners is hard to even understand. But he is desperately in love with Hammerfist (even though he doesn't understand what love is), and a revolution would make her happy, so he'll gamely give it a try.

Numo is not a very good revolutionary, but the alchemists are also not very bright, and have more enemies than just the homunculi. And Numo is remarkably persistent and stubborn once he wraps his head around an idea.

Okay, first, when I say that you need a high tolerance for body horror to enjoy this book, I am Seriously Not Kidding. I don't think I have ever read a book with a higher density of maiming, mutilation, torture, mind control, vivisection, and horrific biological experiments. I spent most of this book wincing, and more than a few parts were more graphic than I wanted to read. Hahn's style is light and bubbly and irrepressible and doesn't dwell on the horror, which helps, but if you have a strong visual imagination and body integrity violations bother you, this may not be the book for you.

That said, although this book is about horrible things, this is not a horror novel. It's a fantasy about politics and revolution, about figuring out how to go forward after horrible things happen to you, about taking dramatic steps to take control of your own life, about the courage to choose truth over a familiar lie, and about how sympathy and connection and decency may be more important than love. It's also a book full of gruesome things described in prose like this:

Her eyes were as red as bellowed embers. Her blood-spattered mane stood up a foot or more from her head and neck, cresting between her shoulders like a glorious wave of shimmering heat. Her slobbering mouth was an orangey oven of the purest fire, a font of wondrousness gaping open down to the little iron plate stamped above her pendulous bosoms.

and emotions described like this:

And he'd had enough. Numo was taut as a wire, worn as a cliff face, tired as a beermonger on the solstice. One more gust of wind and he'd snap like a shoddy laundry pole.

This is the book for simile and metaphor lovers. Hahn achieves a rhythm with off-beat metaphor and Numo's oddly-polite mental voice that I found mesmerizing and weirdly cheery.

Except for Numo and Hammerfist, nearly everyone in this book is awful, even if they don't seem so at first. (And Hammerfist is often so wrapped up in depression and self-loathing as to be kind of awful herself.) Next to the body horror, that was the aspect of this story I struggled with the most. But Numo's stubborn determination and persistent decency pulled me through, helped by the rare oasis of a supporting character I really liked. Bollix is wonderful (although I'm rather grumpy about how her story turns out). Sangja isn't exactly wonderful — he can be as awful to others as most of the people in this story — but for me he was one of the most sympathetic characters and the one I found myself rooting for.

(I'm going to be coy about Sangja's nature and role, since I think it's a spoiler, but I greatly appreciated the way Hahn portrayed Sangja in this book. He so perfectly and exactly fits the implications of his nature in this world, and the story is entirely matter-of-fact about it.)

Hahn said somewhere on-line (which I cannot now find and therefore cannot quote exactly) that part of the motivation for this story was the way the beast becomes human at the end of Beauty and the Beast stories, against all of our experience in the real world. Harm and change aren't magically undone; they're something you still have to live with past the end of the story. This is, therefore, not a purely positive good-triumphs type of story, but I found the ending touching and oddly satisfying (although I wish the cost hadn't been so high).

I am, in general, dubious of the more extravagant claims about the power of self-publishing to bypass gatekeepers, mostly because I think traditional publishing gatekeepers do a valuable job for the reader. This book is one of the more convincing exceptions I've seen. It's a bit of a sprawling mess in places and it doesn't pull together the traditional quest line, which combined with the body horror outside the horror genre makes it hard for me to imagine a place for it in a traditional publishing line-up. But it's highly original, weirdly delightful, and so very much itself that I'm glad I read it even if I had to wince through it.

This is, to be honest, not really my thing, and I'm not sure I'd read another book just like it. But I think some people with more interest in body horror than I do are going to love this book, and I'm not at all unhappy I read it. If you want your devoted, odd, and angstful complex love story mixed with horrific images, gallows humor, and unexpected similes, well, there aren't a lot of books out there that meet that description. This is one. Give it a try.

Rating: 6 out of 10

Categories: FLOSS Project Planets

FSF Blogs: LibrePlanet Day 2: Welcoming everyone to the world of free software

GNU Planet! - Sun, 2019-03-24 18:50

One of the most important questions that free software is facing in the year 2019 is: how do we make the world of free software accessible to broader audiences? Vast numbers of people are using software every day -- how do we relate our message to something that is important to them, and then welcome them into our community? In order to achieve our mission, we need to invite people in and get them to use, create, and proliferate ethical software, until all technology is free.

Many of the best talks at LibrePlanet 2019 echoed a message for the free software community to focus on building a culture that's respectful and encouraging for new people, respecting a wide variety of personalities and values. The first way to get people invested in the culture of free software is to make it fun, and that was the focus of the morning keynote, "Freedom is fun!", delivered by free software veteran Bdale Garbee. A prominent name in the free software world for decades, Bdale talked about how he has a habit of turning all of his hobbies into free software projects, starting with model rockets.

He detailed how some of the most prominent changes made to free software are made by people working through one particular problem and creating a unique solution that is valuable to them. The joy of experimenting and the magic of constantly improving systems through "people scratching their unique itches" provide far greater benefits than any company could ever create through the closed model of proprietary software. Bdale also stressed the value of inviting new people, as well as thanking people for their contributions. He urged all free software users and contributors to have fun, to use their hobbies and interests as a way to experiment and develop, and not to hoard these new ideas... instead, share them!

Other morning sessions included Kate Chapman's presentation on the history and future of Free Software Award-winning OpenStreetMap, as well as Micah Altman's introduction to the important possibilities presented by the free software redistricting application DistrictBuilder. In another talk, on "Right to Repair and the DMCA," Nathan Proctor outlined how our right to repair and maintain products is a free software issue as well as an environmental issue. The throwaway culture is a direct consequence of the manufacturer's choice to restrict diagnostics and repair of any product owned by the individual.

In her talk on the "meta-rules for codes of conduct," Katheryn Sutter returned to the theme of inclusiveness by prying open many distinctions in the ways that free software enthusiasts and other communities can communicate well, and communicate poorly, and enumerating some of the ways we can come together in respectful and productive ways. Although we all agree on many values within the free software community, we may disagree on others, and it's hard to create codes of conduct that will satisfy everyone, across a variety of experiences and backgrounds. The idea of the code of conduct, then, is not to eliminate all disagreement or stifle participants; Sutter emphasized that the goal is to "create and protect safe places for conflict."

Sunday afternoon, Free Software Award winner and community-building champion Deborah Nicholson delivered a talk on "Free software/utopia," using examples from her work on GNU MediaGoblin, Outreachy, and other free software initiatives to demonstrate how to create and maintain projects that attract and welcome newcomers, and reward the time and care invested by contributors. She highlighted her efforts to consciously sustain a positive development environment; in her opinion, it's better for a project to lose one big contributor whose behavior is detrimental to the community than any small contributor who treats others with consideration.

Next came the Lightning Talks session, which provided attendees with an opportunity to give a five-minute presentation on their work and their ideas. Topics ranged from how we're facing an existential crisis because fewer hardware and firmware products fully support the use of free software, to how Purism managed to design a fully functioning Librem 5 Dev Kit with 100% free software. Projects shared included Mission Possible, a primary school program teaching children the four freedoms the FSF promotes, and Vegan on a Desert Island, a free software video game project answering the common question of what a vegan would do when stranded on a desert island. Blueprint, a team of UC Berkeley students working pro bono for nonprofits, talked about the mobile app that they're building for the Free Software Foundation. All of these talks gave a glimpse into the knowledge and creativity shared by the free software community -- the scheduled conference talks only scratch the surface of all of the multifaceted work that our supporters do every day!

Finally, the day ended on a bracing and sobering note with a keynote speech from Micky Metts, a prominent free software activist and member of the Agaric Design Collective, the MayFirst.org leadership committee, and Drupal. In her speech "How can we prevent the Orwellian 1984 digital world?", Micky talked about what's truly at stake if we fail in our efforts to make all software free: corporate technological entities already are intruding into our private lives in some truly terrifying ways, and the situation will only get worse if our movement doesn't grow and change for the better. Ultimately, free software must form the foundation of a movement to regain our personal power.

Over 289 people participated in LibrePlanet 2019, which was powered by 53 amazing volunteers who ensured that everything from video streaming to IRC chats went smoothly. We also gave away raffle prizes generously donated by Vikings GmbH, Technoethical, Aleph Objects, ThinkPenguin, JMP, Altus Metrum, LLC, and Aeronaut, and we're extremely grateful to our generous sponsors, including Red Hat and Private Internet Access.

Between Saturday and Sunday, there were 66 speakers and over 40 sessions. Videos will be posted soon at https://media.libreplanet.org, so keep an eye out for announcements -- whether you were here in Cambridge, watched the livestream, or missed LibrePlanet entirely, there's so much more you'll want to see! And if you were at the conference, please fill out the feedback form, so we can make next year's LibrePlanet even better.

Photo credits: Copyright © 2019 Free Software Foundation, by Madi Muhlberg, photos licensed under CC-BY 4.0.

Categories: FLOSS Project Planets


Sam Hartman: Questioning and Finding Purpose

Planet Debian - Sun, 2019-03-24 16:04
This is copied over from my spiritual blog. I'm nervous doing that, especially at a point when I'm more vulnerable than usual in the Debian community. Still, this is who I am, and I want to be proud of that rather than hide it. And Debian and the free software community are about far more than just the programs we write. So here goes:

The Libreplanet opening keynote had me in tears. It was a talk by Dr. Tarek Loubani. He described his work as an emergency physician in Gaza and how 3d printers and open hardware are helping save lives.


They didn't have enough stethoscopes; that was one of the critical needs. So, they imported a 3d printer, used that to print another 3d printer, and then began iterative designs of 3d-printable stethoscopes. By the time they were done, they had a device that performed as well as or better than a commercially available model. What was amazing is that the residents of Gaza could print their own; this didn't introduce dependencies on some external organization. Instead, open/free hardware was used to help give people a sense of dignity, control of some part of their lives, and the ability to better save those who depended on them.


Even more basic supplies were unavailable. The lack of tourniquets caused the death of some significant fraction of casualties in the 2014 war. The same solution, 3d-printed tourniquets, had an even more dramatic result.


Dr. Loubani talked about how he felt powerless to change the world around him. He talked about how he felt like an insignificant ant.


By this point I was feeling my own sense of hopelessness and insignificance. In the face of someone saving lives like that, I felt like I was only playing at changing the world. What good is teaching love and connection when we face that level of violence? Claiming that sexual freedom is worth fighting for seems like a joke in the worst possible taste in the face of what he is doing. I felt like an imposter.


Then he went on to talk about how we are all ants, but it is the combination of all our insignificant actions that eventually change the world. He talked about how the violence he sees is an intimate act: he talked about the connection between a sniper and their victim. We die one at a time; we can work to make things better one at a time.


He never othered or judged those committing violence. Not as he talked about his fellow doctor and friend who was shot, radioed that he could not breathe, and eventually died pinned down by gunfire so that no one could rescue him. Not as he talked about how he himself was shot. Not as he helped the audience connect with grief-stricken family members facing the death of their loved ones. He never withdrew compassion.


To me I heard hope that what I try to teach can matter; it can connect. If he can face that violence and take a stand against it while still maintaining compassion, then this stuff I believe actually can work. Facing the world and making real changes without giving up compassion and empathy seems more possible: I’ve seen it done.


Somewhere in this talk, I regained a connection with my own value. People like him are helping save people. However, the violence will continue until we have the love, empathy and compassion to understand and connect with each other and find better options. In my own way I’m doing that. Every time I help someone see a different way of looking at things, I make it easier for them to start with empathy first rather than fear.


Everything I’ve written about sex is still true. That journey can bring us closer to accepting ourselves, stepping past fear and shame. Once we accept our own desires and our own need, we’re in a better position to meet in the Strength of Love and advocate for our own needs while offering compassion to others. Once we know what we can find when we have empathy and connection, we can learn to strive for it.


So I will find joy in being my own little ant. Insignificant and divine: take your pick as it’s all the same in the end.


Bringing that Round to Debian

Debian is back in the center of my compassion work. I’m running for Debian Project Leader (DPL). I served on the Debian Technical Committee for over a year, hoping to help bring understanding of diverse positions to our technical dispute resolution process, but that turned out to be the wrong place for it. Everyone seems to believe that the DPL is currently at the center of most of the work of helping people connect. I hope to fix that: more than one person should be driving that work.


After the keynote I found myself sitting between Micky Metts and Henry Poole. Micky asked me what I did that I loved. “Ah, she’s not expecting this answer,” I thought to myself as I talked about my spiritual work and how it overlaps with my Debian work. It turns out that she was delighted by the answer and we had a great time chatting about self empowerment. I’m looking forward to her keynote later today.


Then Henry asked how I was going to accomplish bringing empathy into Debian. I talked about my hopes and dreams and went through some of the specifics I’ve discussed in my platform and what I’ve had success with so far. He talked about similarities and overlaps with work his company does and how he works to teach people about free software.


Especially after that keynote it was joyful to sit between two luminaries and be able to share hopes for empathy, compassion and connection. I felt like I had found validation and energy again.

Categories: FLOSS Project Planets

PyPy Development: PyPy v7.1 released; now uses utf-8 internally for unicode strings

Planet Python - Sun, 2019-03-24 15:45
The PyPy team is proud to release version 7.1.0 of PyPy, which includes two different interpreters:
  • PyPy2.7, which is an interpreter supporting the syntax and the features of Python 2.7
  • PyPy3.6-beta: this is the second official release of PyPy to support 3.6 features, although it is still considered beta quality.
The interpreters are based on much the same codebase, thus the double release.

This release, coming fast on the heels of 7.0 in February, finally merges the internal refactoring of unicode representation as UTF-8. Removing the internal conversions between strings and unicode led to a nice speed bump. We merged the utf-8 changes to the py3.5 branch (Python 3.5.3) but will concentrate on 3.6 going forward.

We also improved the ability to use the buffer protocol with ctypes structures and arrays.

The CFFI backend has been updated to version 1.12.2. We recommend using CFFI rather than C extensions to interact with C, and cppyy for interacting with C++ code.
You can download the v7.1 releases here: http://pypy.org/download.html

We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work.

We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on pypy, or general help with making RPython’s JIT even better.
What is PyPy? PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7 and 3.6. It’s fast (PyPy and CPython 2.7.x performance comparison) due to its integrated tracing JIT compiler.

We also welcome developers of other dynamic languages to see what RPython can do for them.
This PyPy release supports:
 
  • x86 machines on most common operating systems (Linux 32/64 bits, Mac OS X 64 bits, Windows 32 bits, OpenBSD, FreeBSD)
  • big- and little-endian variants of PPC64 running Linux
  • ARM32, although we do not supply downloadable binaries at this time
  • s390x running Linux
What else is new? PyPy 7.0 was released in February, 2019. There are many incremental improvements to RPython and PyPy, for more information see the changelog.

Please update, and continue to help us make PyPy better.


Cheers, The PyPy team
Categories: FLOSS Project Planets

EuroPython: EuroPython 2019: Presenting our conference logo for Basel

Planet Python - Sun, 2019-03-24 14:27

We’re pleased to announce our official conference logo for EuroPython 2019, July 8-14, in Basel, Switzerland:

The logo is inspired by graphical elements from the Basel Jean Tinguely Museum and Basel Rhine Swimming. It was again created by our designer Jessica Peña Moro from Simétriko, who had already helped us in previous years with the conference design.

Some more updates:

  • We’re working on launching the website, the CFP and ticket sales in April. This year, we’d like to see more CFP entries for advanced topics.
  • We are also preparing the sponsorship packages for the website launch. Launch sponsors will again receive a 10% discount on the package price. If you’re interested in becoming a launch sponsor, please contact our sponsor team at sponsoring@europython.eu.

Enjoy,

EuroPython Society Board
https://www.europython-society.org/

Categories: FLOSS Project Planets

Reuven Lerner: Python’s “else” clause for loops

Planet Python - Sun, 2019-03-24 13:30

Let’s say that we have a list of tuples, with each tuple containing some numbers. For example:

>>> mylist = [(3, 5), (2, 4, 6, 8), (4, 10, 17), (15, 14, 11), (3, 3, 2)]

I want to write a program that asks the user to enter a number. If one of the tuples adds up to the user’s input, we’ll print the tuple out. Sounds good, right? Here’s how we could write it:

lookfor = int(input("Enter a number: "))
for one_tuple in mylist:
    if sum(one_tuple) == lookfor:
        print(f"{one_tuple} adds up to {lookfor}!")

In other words: We iterate over the list of tuples, and use the built-in “sum” function to sum its contents. When we find a tuple whose sum matches the user’s input, “lookfor”, we print the tuple.

And sure enough, if we run this short program with our previously defined “mylist”, we get the following:

Enter a number: 8
(3, 5) adds up to 8!
(3, 3, 2) adds up to 8!

But what if the user enters a number, and no tuples match that sum? We don’t say anything; we basically just leave the user hanging. It would be nice to let them know that none of the tuples in our list matched their sum, right?

This can traditionally be done using a “flag” variable, one whose value starts off with a “False” value (i.e., we haven’t found what we’re searching for) and which is set to “True” if and when we do find what we want. Here’s how we could do that in Python:

found = False
lookfor = int(input("Enter a number: "))
for one_tuple in mylist:
    if sum(one_tuple) == lookfor:
        found = True
        print(f"{one_tuple} adds up to {lookfor}!")
if not found:
    print(f"Sorry, didn't find any tuple summing to {lookfor}")

And sure enough, this works just fine:

Enter a number: 3
Sorry, didn't find any tuple summing to 3

Python provides us with a different way to handle this situation. It’s a bit odd at first, and newcomers find it hard to understand — but in cases like this, it helps to shorten the code.

What I’m talking about is the “else” clause for loops. You’re undoubtedly familiar with the “else” clause for “if” statements. You know, the block that gets executed if the main “if” clause isn’t true.

With a loop — and this works with both “for” and “while” loops — the “else” clause is executed if the loop reaches its natural end. That is, if Python never encounters a “break” statement in your loop, then the “else” clause will run. This allows us to remove the “found” flag variable, as follows:

lookfor = int(input("Enter a number: "))
for one_tuple in mylist:
    if sum(one_tuple) == lookfor:
        print(f"{one_tuple} adds up to {lookfor}!")
        break
else:
    print(f"Sorry, didn't find any tuple summing to {lookfor}")

In other words: If we found a tuple that adds up to “lookfor”, then we print it out and exit the loop immediately, using “break”. But if we reach the natural end of the loop (i.e., never encounter a “break”), then the “else” clause is executed.

Aha, but you might see a problem with our implementation: Whereas in the first and second versions, we printed all of the matching tuples, in this version, we only print the first matching tuple. Even though two tuples have sums of 8, our “break” statement exits the loop once it finds the first one.

The “else” clause on a loop is thus only useful if you’re planning to exit out of it with a “break” statement at some point. If there’s no “break”, then there’s no real value in having an “else” statement.

Also realize that there’s a difference between the code in an “else” block and the code that immediately follows a loop. Code in the “else” block will only execute if the “break” wasn’t encountered.
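Since “else” works on “while” loops too, here is a small sketch of the same search-with-“else” pattern using “while” (the names here are illustrative, not taken from the article):

```python
numbers = [4, 9, 16]
target = 25

i = 0
while i < len(numbers):
    if numbers[i] == target:
        message = f"Found {target} at index {i}"
        break
    i += 1
else:
    # Runs only because the "while" condition became false
    # without the loop ever hitting "break".
    message = f"{target} is not in the list"

print(message)  # 25 is not in the list
```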

I’ve found that many newcomers to Python are confused about these “else” blocks, partly because they expect “else” to be connected to an “if”. They thus try to align the “else” with the “if” statement’s indentation, which doesn’t work. In many ways, I do wish that Python’s core developers had chosen a different word, rather than reusing “else”… but that didn’t happen, and so now we have to learn that “else” can be used in two different contexts.

The post Python’s “else” clause for loops appeared first on Reuven Lerner's Blog.

Categories: FLOSS Project Planets

Pages