Feeds

Matthew Garrett: Making unphishable 2FA phishable

Planet Debian - Wed, 2022-11-30 16:53
One of the huge benefits of WebAuthn is that it makes traditional phishing attacks impossible. An attacker sends you a link to a site that looks legitimate but isn't, and you type in your credentials. With SMS or TOTP-based 2FA, you type in your second factor as well, and the attacker now has both your credentials and a legitimate (if time-limited) second factor token to log in with. WebAuthn prevents this by verifying that the site it's sending the secret to is the one that issued it in the first place - visit an attacker-controlled site and said attacker may get your username and password, but they won't be able to obtain a valid WebAuthn response.

But what if there was a mechanism for an attacker to direct a user to a legitimate login page, resulting in a happy WebAuthn flow, and obtain valid credentials for that user anyway? This seems like the lead-in to someone saying "The Aristocrats", but unfortunately it's (a) real, (b) RFC-defined, and (c) implemented in a whole bunch of places that handle sensitive credentials. The villain of this piece is RFC 8628, and while it exists for good reasons it can be used in a whole bunch of ways that have unfortunate security consequences.

What is the RFC 8628-defined Device Authorization Grant, and why does it exist? Imagine a device that you don't want to type a password into - either it has no input devices at all (eg, some IoT thing) or it's awkward to type a complicated password (eg, a TV with an on-screen keyboard). You want that device to be able to access resources on behalf of a user, so you want to ensure that that user authenticates the device. RFC 8628 describes an approach where the device requests the credentials, and then presents a code to the user (either on screen or over Bluetooth or something), and starts polling an endpoint for a result. The user visits a URL and types in that code (or is given a URL that has the code pre-populated) and is then guided through a standard auth process. The key distinction is that if the user authenticates correctly, the issued credentials are passed back to the device rather than the user - on successful auth, the endpoint the device is polling will return an oauth token.
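The shape of that flow is easiest to see in code. Below is a minimal simulation of the device-side polling loop, with an in-process stub standing in for the provider's token endpoint; in a real RFC 8628 flow, poll_token_endpoint would be an HTTPS POST carrying the device code, and the code and token values here are made up for illustration.

```python
import time

# Stub responses standing in for the provider's token endpoint.
# A real implementation would POST the device code over HTTPS.
_responses = iter(["authorization_pending", "authorization_pending", "TOKEN-abc123"])

def poll_token_endpoint(device_code):
    """Hypothetical stand-in for the RFC 8628 token endpoint."""
    return next(_responses)

def device_flow(device_code, interval=0.01):
    # The device displays a user code, then polls until the user
    # completes authentication in their browser.
    while True:
        result = poll_token_endpoint(device_code)
        if result != "authorization_pending":
            # On successful auth the token is returned to the *device*,
            # not to the user's browser session.
            return result
        time.sleep(interval)

token = device_flow("WDJB-MJHT")
print(token)
```

The key property, and the root of the problem described below, is that whoever started the polling loop receives the token, regardless of who completed the login.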

But what happens if it's not a device that requests the credentials, but an attacker? What if said attacker obfuscates the URL in some way and tricks a user into clicking it? The user will be presented with their legitimate ID provider login screen, and if they're using a WebAuthn token for second factor it'll work correctly (because it's genuinely talking to the real ID provider!). The user will then typically be prompted to approve the request, but in every example I've seen the language used here is very generic and doesn't describe what's going on or ask the user to confirm what they're actually granting. AWS simply says "An application or device requested authorization using your AWS sign-in" and has a big "Allow" button, giving the user no indication at all that hitting "Allow" may give a third party their credentials.

This isn't novel! Christoph Tafani-Dereeper has an excellent writeup on this topic from last year, which builds on Nestori Syynimaa's earlier work. But whenever I've talked about this, people seem surprised at the consequences. WebAuthn is supposed to protect against phishing attacks, but this approach subverts that protection by presenting the user with a legitimate login page and then handing their credentials to someone else.

RFC 8628 actually recognises this vector and presents a set of mitigations. Unfortunately nobody actually seems to implement these, and most of the mitigations are based around the idea that this flow will only be used for physical devices. Sadly, AWS uses this for initial authentication for the aws-cli tool, so there's no device in that scenario. Another mitigation is that there's a relatively short window where the code is valid, and so sending a link via email is likely to result in it expiring before the user clicks it. An attacker could avoid this by directing the user to a domain under their control that triggers the flow and then redirects the user to the login page, ensuring that the code is only generated after the user has clicked the link.

Can this be avoided? The best way to do so is to ensure that you don't support this token issuance flow anywhere, or if you do then ensure that any tokens issued that way are extremely narrowly scoped. Unfortunately if you're an AWS user, that's probably not viable - this flow is required for the cli tool to perform SSO login, and users are going to end up with broadly scoped tokens as a result. The logs are also not terribly useful.

The infuriating thing is that this isn't necessary for CLI tooling. The reason this approach is taken is that you need a way to get the token to a local process even if the user is doing authentication in a browser. This can be avoided by having the process listen on localhost, and then have the login flow redirect to localhost (including the token) on successful completion. In this scenario the attacker can't get access to the token without having access to the user's machine, and if they have that they probably have access to the token anyway.
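A rough standard-library sketch of that localhost pattern: the CLI process listens on a loopback port and waits for the login flow to redirect back with the token. The path, parameter name, and token value below are invented for illustration, and a plain urllib request stands in for the browser redirect.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

captured = {}

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The login flow redirects the browser here with the token attached.
        query = parse_qs(urlparse(self.path).query)
        captured["token"] = query.get("token", [""])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this window.")

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CallbackHandler)  # port 0 = pick a free port
port = server.server_address[1]
thread = threading.Thread(target=server.handle_request)  # serve exactly one request
thread.start()

# Simulate the identity provider redirecting the browser back to localhost.
urllib.request.urlopen(f"http://127.0.0.1:{port}/callback?token=sekrit").read()
thread.join()
server.server_close()
print(captured["token"])
```

Because only a process on the user's own machine can bind that loopback port, a remote attacker never sees the token.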

There's no real moral here other than "Security is hard". Sorry.

Categories: FLOSS Project Planets

FSF Blogs: Fifteen years of LibrePlanet: Register now to join us on March 18 and 19

GNU Planet! - Wed, 2022-11-30 16:38
The fifteenth edition of the Free Software Foundation's (FSF) annual conference is only a couple of months away. Registration is open now.
Categories: FLOSS Project Planets

The Python Coding Blog: Argh! What are args and kwargs in Python? [Intermediate Python Functions Series #4]

Planet Python - Wed, 2022-11-30 16:23

In the first three articles in this series, you familiarised yourself with the key terms when dealing with functions. You also explored positional and keyword arguments and optional arguments with default values. In this article, you’ll look at different types of optional arguments. Rather unfortunately, these are often referred to by the obscure names args and kwargs in Python.

Overview Of The Intermediate Python Functions Series

Here’s an overview of the seven articles in this series:

  1. Introduction to the series: Do you know all your functions terminology well?
  2. Choosing whether to use positional or keyword arguments when calling a function
  3. Using optional arguments by including default values when defining a function
  4. [This article] Using any number of optional positional and keyword arguments: *args and **kwargs
  5. Using positional-only arguments and keyword-only arguments: the “rogue” forward slash / or asterisk * in function signatures
  6. Type hinting in functions
  7. Best practices when defining and using functions
Using Any Number of Optional Positional Arguments: *args

The topic of *args may seem weird and difficult. However, it’s neither that strange nor that hard. The name is a bit off-putting, but once you understand what’s going on, *args will make sense.

Let’s dive in by looking at this code:

def greet_people(number, *people):
    for person in people:
        print(f"Hello {person}! How are you doing today?\n" * number)

Did you spot the asterisk in front of the parameter name people? You’ll often see the parameter name *args used for this, such as:

def greet_people(number, *args):

However, what makes this “special” is not the name ‘args’ but the asterisk * in front of the parameter name. You can use any parameter name you want. In fact, it’s best practice to use parameter names that describe the data rather than obscure terms. This is why I chose people in this example.

Let’s go back to the function you defined earlier. Consider the following three function calls:

# 1.
greet_people(3, "James", "Stephen", "Kate")

# 2.
greet_people(2, "Bob")

# 3.
greet_people(5)

All three function calls are valid:

  • The first function call has four arguments: 3, "James", "Stephen", and "Kate"
  • The second function call has two arguments: 2 and "Bob"
  • The last function call has one argument: 5

To understand how this is possible, let’s dig a bit deeper into what the parameter people is. Let’s print it out and also print out its type. I’ll comment out the rest of the function’s body for the time being. I’m showing the output from the three function calls as comments in the code snippet below:

def greet_people(number, *people):
    print(people)
    print(type(people))
    # for person in people:
    #     print(f"Hello {person}! How are you doing today?\n" * number)

# 1.
greet_people(3, "James", "Stephen", "Kate")
# OUTPUT:
# ('James', 'Stephen', 'Kate')
# <class 'tuple'>

# 2.
greet_people(2, "Bob")
# OUTPUT:
# ('Bob',)
# <class 'tuple'>

# 3.
greet_people(5)
# OUTPUT:
# ()
# <class 'tuple'>

The local variable people inside the function is a tuple. Its contents are all the arguments you used in the function calls from the second argument onward. The first argument when you call greet_people() is the positional argument assigned to the first parameter: number. All the remaining arguments are collected in the tuple named people.

In the first function call, the first argument is the integer 3. Then there are three more arguments: "James", "Stephen", and "Kate". Therefore, people is a tuple containing these three strings.

In the second function call, the required positional argument is 2. Then, there’s only one additional argument: "Bob". Therefore, the tuple people contains just one item.

In the final function call, there are no additional arguments. The only argument is the required one which is assigned to number. Therefore, people is an empty tuple. It’s empty, but it still exists!

Let’s go back to the function definition you wrote at the start of this section and look at the output from the three function calls:

def greet_people(number, *people):
    for person in people:
        print(f"Hello {person}! How are you doing today?\n" * number)

# 1.
greet_people(3, "James", "Stephen", "Kate")
# 2.
greet_people(2, "Bob")
# 3.
greet_people(5)

The output from this code is:

Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?

Hello Stephen! How are you doing today?
Hello Stephen! How are you doing today?
Hello Stephen! How are you doing today?

Hello Kate! How are you doing today?
Hello Kate! How are you doing today?
Hello Kate! How are you doing today?

Hello Bob! How are you doing today?
Hello Bob! How are you doing today?

Since people is a tuple, you can iterate through it using a for loop. This way, you can perform the same action for each of the optional arguments assigned to the tuple people.

The first function call prints three blocks of output (the ones with James, Stephen, and Kate.) The second function call outputs the lines with Bob in them. The final function call doesn’t print anything since the tuple people is empty.

Therefore, when you add an *args to your function definition, you’re allowing any number of optional positional arguments to be added to the function call. Note that I used the term ‘positional’ in the last sentence. These arguments are collected into the args variable using their position in the function call. All the arguments that come after the required positional arguments are optional positional arguments.

Some rules when using *args

Let’s make a small change to the function definition from earlier:

def greet_people(*people, number):
    for person in people:
        print(f"Hello {person}! How are you doing today?\n" * number)

You’ve swapped the position of the parameters number and *people compared to the previous example. Let’s try this function call:

greet_people("James", "Kate", 5)

This raises the following error:

Traceback (most recent call last):
  File "...", line 5, in <module>
    greet_people("James", "Kate", 5)
TypeError: greet_people() missing 1 required keyword-only argument: 'number'

Note that the error is not raised by the function definition but by the function call. There’s a hint as to what the problem is in the error message. Let’s summarise the problem, and then I’ll expand further. All parameters which follow the *args parameter in the function definition must be used with keyword arguments.

What? And Why?

The parameter *people tells the function that it can accept any number of arguments which will be assigned to the tuple people. Since this could be any number, there’s no way for the program to know when you wish to stop adding these optional arguments and move on to arguments that are assigned to the next parameters, in this case, number.

Let’s fix the function call, and then we’ll come back to this explanation:

def greet_people(*people, number):
    for person in people:
        print(f"Hello {person}! How are you doing today?\n" * number)

greet_people("James", "Kate", number=5)

This code now works and gives the following output:

Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?

Hello Kate! How are you doing today?
Hello Kate! How are you doing today?
Hello Kate! How are you doing today?
Hello Kate! How are you doing today?
Hello Kate! How are you doing today?

Since the last argument is a named (keyword) argument, it’s no longer ambiguous that the value 5 should be assigned to the parameter name number. The program can’t read your mind! Therefore, it needs to be told when the optional positional arguments end since you can have any number of them. Naming all subsequent arguments removes all ambiguity and fixes the problem.

*args summary

Before moving on, let’s summarise what we’ve learned about *args.

  • You can add a parameter with an asterisk * in front of it when defining a function to show that you can use any number of positional arguments in the function call. You can use none, one, or more arguments matched to the *args parameter
  • All the arguments which match the *args parameter are collected in a tuple
  • There’s nothing special about the name args. You can (in fact, you should) use a more descriptive parameter name in your code. Just add the asterisk * before the parameter name
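One related trick, not covered above, is that the asterisk also works in the other direction at the call site: it unpacks an existing sequence into separate positional arguments. The sketch below uses a variant of greet_people() that returns the greetings instead of printing them, so the result is easy to inspect:

```python
def greet_people(number, *people):
    # A variant of the article's function that returns the greetings
    # rather than printing them.
    return [f"Hello {person}! How are you doing today?"
            for person in people
            for _ in range(number)]

# The * at the call site unpacks the list into separate positional
# arguments, so this is the same as greet_people(2, "James", "Stephen", "Kate")
guests = ["James", "Stephen", "Kate"]
greetings = greet_people(2, *guests)
print(len(greetings))
```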
Using Any Number of Optional Keyword Arguments: **kwargs

When you hear about *args in Python, you’ll often hear them mentioned in the same breath as **kwargs. They always seem to come as a pair: “Args and Kwargs in Python!” So let’s see what **kwargs are with the following example:

def greet_people(**people):
    for person, number in people.items():
        print(f"Hello {person}! How are you doing today?\n" * number)

As with ‘args’, there’s nothing special about the name ‘kwargs’. What makes a kwargs a kwargs (!) is the double asterisk ** in front of the parameter name. You can use a more descriptive parameter name when defining a function with **kwargs.

Let’s use the following three function calls as examples in this section:

# 1.
greet_people(James=5, Mark=2, Ishaan=1)

# 2.
greet_people(Stephen=4)

# 3.
greet_people()

Let’s explore the variable people inside the function. As you did earlier, you’ll print out its contents and its type. The rest of the function body is commented out, and the output from the three function calls is shown as comments:

def greet_people(**people):
    print(people)
    print(type(people))
    # for person, number in people.items():
    #     print(f"Hello {person}! How are you doing today?\n" * number)

# 1.
greet_people(James=5, Mark=2, Ishaan=1)
# OUTPUT:
# {'James': 5, 'Mark': 2, 'Ishaan': 1}
# <class 'dict'>

# 2.
greet_people(Stephen=4)
# OUTPUT:
# {'Stephen': 4}
# <class 'dict'>

# 3.
greet_people()
# OUTPUT:
# {}
# <class 'dict'>

The variable people is a dictionary. You used keyword (named) arguments in the function calls, not positional ones. Notice how the keywords you used when naming the arguments are the same as the keys in the dictionary. The argument is the value associated with that key.

For example, in the function call greet_people(James=5, Mark=2, Ishaan=1), the keyword argument James=5 became an item in the dictionary with the string "James" as key and 5 as its value, and so on for the other named arguments. You can include as many keyword arguments as you wish when you call a function with **kwargs.

You may be wondering where the name ‘kwargs’ comes from. Possibly you guessed this already: KeyWord ARGumentS.

Here’s the original function definition and the three function calls again:

def greet_people(**people):
    for person, number in people.items():
        print(f"Hello {person}! How are you doing today?\n" * number)

# 1.
greet_people(James=5, Mark=2, Ishaan=1)
# 2.
greet_people(Stephen=4)
# 3.
greet_people()

This code gives the following output:

Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?
Hello James! How are you doing today?

Hello Mark! How are you doing today?
Hello Mark! How are you doing today?

Hello Ishaan! How are you doing today?

Hello Stephen! How are you doing today?
Hello Stephen! How are you doing today?
Hello Stephen! How are you doing today?
Hello Stephen! How are you doing today?

Since people is a dictionary, you can loop through it using the dictionary method items(). The first function call prints out three sets of greetings for James, Mark, and Ishaan. The number of times each greeting is printed depends on the argument used. The second call displays four greetings for Stephen. The final call doesn’t display anything since the dictionary is empty.

**kwargs summary

In summary:

  • You can add a parameter with a double asterisk ** in front of it when defining a function to show that you can use any number of keyword arguments in the function call
  • All the arguments which match the **kwargs parameter are collected in a dictionary
  • The keyword becomes the key in the dictionary. The argument becomes the value associated with that key
  • There’s nothing special about the name kwargs. As long as you add the double asterisk ** before the parameter name, you can use a more descriptive name
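As with the single asterisk, a double asterisk also works in the other direction at the call site, unpacking an existing dictionary into separate keyword arguments. The sketch below uses a variant of greet_people() that returns the greetings instead of printing them, so the result is easy to inspect:

```python
def greet_people(**people):
    # A variant of the article's function that returns the greetings
    # rather than printing them.
    return {person: f"Hello {person}! How are you doing today?\n" * number
            for person, number in people.items()}

# The ** at the call site unpacks the dictionary into separate keyword
# arguments, so this is the same as greet_people(James=2, Stephen=1)
counts = {"James": 2, "Stephen": 1}
greetings = greet_people(**counts)
print(sorted(greetings))
```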
Combining *args and **kwargs

You now know about *args. You also know about **kwargs. Let’s combine both args and kwargs in Python functions.

Let’s look at this code. You have two teams (represented using the dictionaries red_team and blue_team) and the function adds members to one of the teams. Each member starts off with some number of points:

red_team = {}
blue_team = {}

def add_team_members(team, **people):
    for person, points in people.items():
        team[person] = points

add_team_members(red_team, Stephen=10, Kate=8, Sharon=12)

print(f"{red_team = }")
print(f"{blue_team = }")

What do you think the output will be?

The function definition has two parameters. The second one has the double asterisk ** in front of it, which makes it a ‘kwargs’. This means you can pass any number of keyword arguments which will be assigned to a dictionary with the name people.

Now, let’s look at the function call. There is one positional argument, red_team. There are also three keyword arguments. Remember that you can have as many keyword arguments as you want after the first required positional argument.

In the function definition’s body, people is a dictionary. Therefore, you can loop through it using items(). The variables person and points will contain the key and the value of each dictionary item. In the first iteration of the for loop, person will contain the string "Stephen" and points will contain 10. In the second for loop iteration, person will contain "Kate" and points will be 8. And "Sharon" and 12 will be used in the third loop iteration.

Here’s the output of the code above:

red_team = {'Stephen': 10, 'Kate': 8, 'Sharon': 12}
blue_team = {}

Only red_team has changed. blue_team is still the same empty dictionary you initialised at the beginning. That’s because you passed red_team as the first argument in add_team_members().

Some rules when using *args and **kwargs

As you’ve seen in the previous articles in this series and earlier in this one, there are always some rules on how to order the different types of parameters and arguments. Let’s look at a few of these rules here.

Let’s start with this example:

red_team = {}
blue_team = {}

def add_team_members(**people, team):
    for person, points in people.items():
        team[person] = points

You’ll get an error when you run this code even without a function call:

  File "...", line 4
    def add_team_members(**people, team):
                                   ^^^^
SyntaxError: arguments cannot follow var-keyword argument

Note: the error message in Python versions before 3.11 is different. The error says that the variable keyword parameter – that’s the **kwargs – must come after the other parameters.

Let’s make a change to the function before you explore some other options. You can check whether the name of the team member is already in the team and only add it if it’s not already there:

red_team = {"Stephen": 4}
blue_team = {}

def add_team_members(team, **people):
    for person, points in people.items():
        if person not in team.keys():
            team[person] = points
        else:
            print(f"{person} is already in the team")

add_team_members(red_team, Stephen=10, Kate=8, Sharon=12)

print(f"{red_team = }")
print(f"{blue_team = }")

The output from this code is:

Stephen is already in the team
red_team = {'Stephen': 4, 'Kate': 8, 'Sharon': 12}
blue_team = {}

Notice how "Stephen" is already in the dictionary with a value of 4 so the function doesn’t update it. Now, you can add another team and modify the function so that you can add team members to more than one team at a time in a single function call. People can be in more than one team:

red_team = {}
blue_team = {}
green_team = {}

def add_team_members(*teams, **people):
    for person, points in people.items():
        for team in teams:
            if person not in team.keys():
                team[person] = points
            else:
                print(f"{person} is already in the team.")

add_team_members(red_team, blue_team, Stephen=10, Kate=8, Sharon=12)
add_team_members(red_team, green_team, Mary=3, Trevor=15)
add_team_members(blue_team, Ishaan=8)

print(f"{red_team = }")
print(f"{blue_team = }")
print(f"{green_team = }")

You’re using both *args and **kwargs in this function. When you call the function, you can first use any number of positional arguments (without a keyword) followed by any number of keyword arguments:

  • The positional arguments are assigned to the tuple teams
  • The keyword arguments are assigned to the dictionary people

The output from this code is:

red_team = {'Stephen': 10, 'Kate': 8, 'Sharon': 12, 'Mary': 3, 'Trevor': 15}
blue_team = {'Stephen': 10, 'Kate': 8, 'Sharon': 12, 'Ishaan': 8}
green_team = {'Mary': 3, 'Trevor': 15}

You’ll note that Stephen, Kate, and Sharon are in both the red team and the blue team. Mary and Trevor are in the red and green teams. Ishaan is just in the blue team.

Let’s get back to talking about the rules of what you can and cannot do. You can change the function call from the one you used earlier:

red_team = {}
blue_team = {}
green_team = {}

def add_team_members(*teams, **people):
    for person, points in people.items():
        for team in teams:
            if person not in team.keys():
                team[person] = points
            else:
                print(f"{person} is already in the team.")

add_team_members(Stephen=10, Kate=8, Sharon=12, red_team, blue_team)

The output from this code is the following error:

  File "...", line 13
    add_team_members(Stephen=10, Kate=8, Sharon=12, red_team, blue_team)
                                                    ^
SyntaxError: positional argument follows keyword argument

You cannot place keyword (named) arguments before positional arguments when you call the function. This makes sense since *teams is listed before **people in the function signature. So, can you swap these over when you define a function? Let’s find out:

red_team = {}
blue_team = {}
green_team = {}

def add_team_members(**people, *teams):
    for person, points in people.items():
        for team in teams:
            if person not in team.keys():
                team[person] = points
            else:
                print(f"{person} is already in the team.")

The answer is “No”:

  File "...", line 5
    def add_team_members(**people, *teams):
                                   ^
SyntaxError: arguments cannot follow var-keyword argument

This is an error with the function definition, not the function call (there is no function call in this code!). Therefore, you must include the *args before the **kwargs when you define a function.

Final Words

There are more combinations of “normal” positional arguments, “normal” keyword arguments, *args, and **kwargs you could try. But we’ll draw a line here in this article as the main objective was to give you a good idea of what these types of arguments are and how you can use them.

Now that you know about args and kwargs in Python functions, you can move on to yet another type of argument. In the next article, you’ll read about positional-only arguments and keyword-only arguments.

Next Article: <Link will be posted here when the next article in the series is posted>



Categories: FLOSS Project Planets

ImageX: The Joy of Giving Back to Drupal: Celebrating Contribution

Planet Drupal - Wed, 2022-11-30 15:23
Drupal boasts a vibrant community of people passionate about making the project thrive in every way. As active members of this community, we at the ImageX team are excited to give back to Drupal. Outside of code contribution, ImageX has sponsored numerous DrupalCon events, and team members regularly help organize the Vancouver DrupalCafe and are lead organizers for the largest Drupal event in Europe (outside of DrupalCon), DrupalCamp Kyiv.  Our experts have shared their skills through sessions and webinars, along with Promote Drupal and other Committees at DrupalCons.
Categories: FLOSS Project Planets

Everyday Superpowers: Refactor Python for more satisfaction

Planet Python - Wed, 2022-11-30 14:38

This is a blogified version of my 18-minute PyJamas talk, Refactor refactoring—How changing your views on refactoring can make your job more satisfying.

Read more...
Categories: FLOSS Project Planets

texinfo @ Savannah: Texinfo 7.0.1 released

GNU Planet! - Wed, 2022-11-30 13:38

We have released version 7.0.1 of Texinfo, the GNU documentation format. This is a minor bug-fix release.

It's available via a mirror (xz is much smaller than gz, but gz is available too just in case):
http://ftpmirror.gnu.org/texinfo/texinfo-7.0.1.tar.xz
http://ftpmirror.gnu.org/texinfo/texinfo-7.0.1.tar.gz

Please send any comments to bug-texinfo@gnu.org.

Full announcement:
https://lists.gnu.org/archive/html/bug-texinfo/2022-11/msg00237.html

Categories: FLOSS Project Planets

Bits from Debian: New Debian Developers and Maintainers (September and October 2022)

Planet Debian - Wed, 2022-11-30 10:00

The following contributors got their Debian Developer accounts in the last two months:

  • Abraham Raji (abraham)
  • Phil Morrell (emorrp1)
  • Anupa Ann Joseph (anupa)
  • Mathias Gibbens (gibmat)
  • Arun Kumar Pariyar (arun)
  • Tino Didriksen (tinodidriksen)

The following contributors were added as Debian Maintainers in the last two months:

  • Gavin Lai
  • Martin Dosch
  • Taavi Väänänen
  • Daichi Fukui
  • Daniel Gröber
  • Vivek K J
  • William Wilson
  • Ruben Pollan

Congratulations!

Categories: FLOSS Project Planets

Real Python: Advent of Code: Solving Your Puzzles With Python

Planet Python - Wed, 2022-11-30 09:00

Advent of Code is an online Advent calendar where you’ll find new programming puzzles offered each day from December 1 to 25. While you can solve the puzzles at any time, the excitement when new puzzles unlock is really something special. You can participate in Advent of Code in any programming language—including Python!

With the help of this tutorial, you’ll be ready to start solving puzzles and earning your first gold stars.

In this tutorial, you’ll learn:

  • What an online Advent calendar is
  • How solving puzzles can advance your programming skills
  • How you can participate in Advent of Code
  • How you can organize your code and tests when solving Advent of Code puzzles
  • How test-driven development can be used when solving puzzles

Advent of Code puzzles are designed to be approachable by anyone with an interest in problem-solving. You don’t need a heavy computer science background to participate. Instead, Advent of Code is a great arena for learning new skills and testing out new features of Python.

Source Code: Click here to download the free source code that shows you how to solve Advent of Code puzzles with Python.

Puzzling in Programming?

Working on puzzles may seem like a waste of your available programming time. After all, it seems like you’re not really producing anything useful and you’re not advancing your current projects forward.

However, there are several advantages to taking some time off to practice with programming puzzles:

  • Programming puzzles are usually better specified and more contained than your regular job tasks. They offer you the chance to practice logical thinking on problems that are less complex than the ones you typically need to handle in your day job.

  • You can often challenge yourself with several similar puzzles. This allows you to build procedural memory, much like muscle memory, and get experience with structuring certain kinds of code.

  • Puzzles are often designed with an eye towards a solution. They allow you to learn about and apply algorithms that are tried and tested and are an important part of any programmer’s toolbox.

  • For some puzzle solutions, even the greatest supercomputers can be too slow if the algorithm is inefficient. You can analyze the performance of your solution and get experience to help you understand when a straightforward method is fast enough and when a more optimized procedure is necessary.

  • Most programming languages are well-suited for solving programming puzzles. This gives you a great opportunity to compare different programming languages for different tasks. Puzzles are also a great way of getting to know a new programming language or trying out some of the newest features of your favorite language.

On top of all of this, challenging yourself with a programming puzzle is often pretty fun! When you add it all up, setting aside some time for puzzles can be very rewarding.

Exploring Options for Solving Programming Puzzles Online

Luckily, there are many websites where you can find programming puzzles and try to solve them. There are often differences in the kinds of problems these websites present, how you submit your solutions, and what kind of feedback and community the sites can offer. You should therefore take some time to look around and find those that appeal the most to you.

In this tutorial, you’ll learn about Advent of Code, including what kind of puzzles you can find there and which tools and tricks you can employ to solve them. However, there are also other places where you can get started solving programming puzzles:

  • Exercism has learning tracks in many different programming languages. Each learning track offers coding challenges, small tutorials about different programming concepts, and mentors who give you feedback on your solutions.

  • Project Euler has been around for a long time. The site offers hundreds of puzzles, usually formulated as math problems. You can solve the problems in any programming language, and once you’ve solved a puzzle, you get access to a community thread where you can discuss your solution with others.

  • Code Wars offers tons of coding challenges, which they call katas. You can solve puzzles in many different programming languages with their built-in editor and automated tests. Afterward, you can compare your solutions to others’ and discuss strategies in the forums.

  • HackerRank has great features if you’re looking for a job. They offer certifications in many different skills, including problem-solving and Python programming, as well as a job board that lets you show off your puzzle-solving skills as part of your job applications.

There are many other sites available where you can practice your puzzle-solving skills. In the rest of this tutorial, you’ll focus on what Advent of Code has to offer.

Preparing for Advent of Code: 25 Fresh Puzzles for Christmas

It’s time for Advent of Code! Eric Wastl started it in 2015, and since then a new Advent calendar of twenty-five programming puzzles has been published every December. The puzzles have become more and more popular over the years: more than 235,000 people have solved at least one of the puzzles from 2021.

Note: Traditionally, an Advent calendar is a calendar used to count the days of Advent while waiting for Christmas. Over the years, Advent calendars have become more commercial and have lost some of their Christian connection.

Most Advent calendars start on December 1 and end on December 24, Christmas Eve, or December 25, Christmas Day. Nowadays, there are all kinds of Advent calendars available, including LEGO calendars, tea calendars, and cosmetics calendars.

In traditional Advent calendars, you open one door every day to reveal what’s inside. Advent of Code mimics this by giving you access to one new puzzle each day from December 1 to December 25. For each puzzle you solve, you’ll earn gold stars that are yours to keep.

In this section, you’ll get more familiar with Advent of Code and see a glimpse of your first puzzle. Later, you’ll look at the details of how you can solve these puzzles and practice solving a few of the puzzles yourself.

Advent of Code Puzzles

Advent of Code is an online Advent calendar where a new puzzle is published every day from December 1 to December 25. Each puzzle becomes available at midnight, US Eastern Time. An Advent of Code puzzle has a few typical characteristics:

  • Each puzzle consists of two parts, but the second part isn’t revealed until you finish the first part.
  • You’ll earn one gold star (⭐) for each part that you finish. This means you can earn two stars per day and fifty stars if you solve all the puzzles for one year.
  • The puzzle is the same for everyone, but you need to solve it based on personalized input that you get from the Advent of Code site. This means that your answer to a puzzle will be different from someone else’s, even if you use the same code to calculate it.

You can participate in a global race to be the first to solve each puzzle. However, this is usually pretty crowded with highly skilled, competitive programmers. Advent of Code is probably going to be more fun if you use it as practice for yourself or if you challenge your friends and coworkers to a small, friendly competition.

Read the full article at https://realpython.com/python-advent-of-code/ »


Categories: FLOSS Project Planets

Python for Beginners: Drop Elements From a Series in Python

Planet Python - Wed, 2022-11-30 09:00

Pandas series is very useful for handling data having ordered key-value pairs. In this article, we will discuss different ways to drop elements from a pandas series.

Table of Contents
  1. Drop Elements From a Pandas Series Using the drop() Method
  2. Drop a Single Element From a Pandas Series
  3. Delete Multiple Elements From a Pandas Series
  4. Drop Elements Inplace From a Pandas Series
  5. Delete an Element From a Series if the Index Exists
  6. Drop NaN Values From a Pandas Series
  7. Drop NaN Values Inplace From a Pandas Series
  8. Drop Duplicates From a Pandas Series
  9. Drop Duplicates Inplace in a Pandas Series
  10. Drop All Duplicate Values From a Pandas Series
  11. Conclusion
Drop Elements From a Pandas Series Using the drop() Method

We can drop elements from a pandas series using the drop() method. It has the following syntax.

Series.drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')

Here, 

  • The labels parameter takes the index of the elements that we need to delete from the series. You can pass a single index label or a list of indices to the labels parameter. 
  • The axis parameter is used to decide if we want to delete a row or column. For a pandas series, the axis parameter isn’t used. It is defined in the function just to ensure the compatibility of the drop() method with pandas dataframes.
  • The index parameter is used to select the index of the elements to delete for given labels in the dataframe. The index parameter is redundant for series objects. However, you can use the index parameter instead of the labels parameter. 
  • The columns parameter is used to select the columns to delete in a dataframe. The “columns” parameter is also redundant here. You can use labels or index parameters to drop elements from a series. 
  • The level parameter is used to delete elements from a series when the series has a MultiIndex. It takes the level or list of levels from which the elements need to be deleted for the specified labels. 
  • By default, the drop() method returns a new series object after deleting elements from the original series. In this process, the original series isn’t modified. If you want to modify the original series instead of creating a new series, you can set the inplace parameter to True.
  • The drop() method raises an exception whenever it runs into an error while dropping the elements from the series. For example, if an index or label that we want to delete doesn’t exist in the series, the drop() method raises a python KeyError exception. To suppress such errors while deleting an element from the series, you can set the errors parameter to “ignore”.

After execution, the drop() method returns a new series if the inplace parameter is set to False. Otherwise, it returns None. 
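The level parameter only matters for a series with a MultiIndex. The sketch below uses made-up data and index names purely for illustration: dropping a label at a named level removes every entry carrying that label at that level.

```python
import pandas as pd

# A small series with a two-level MultiIndex (hypothetical example data)
idx = pd.MultiIndex.from_tuples(
    [("a", 1), ("a", 2), ("b", 1)], names=["letter", "number"]
)
series = pd.Series([10, 20, 30], index=idx)

# Drop every entry whose "number" level equals 1,
# i.e. ("a", 1) and ("b", 1); only ("a", 2) remains
result = series.drop(labels=1, level="number")
print(result)
```

Here both entries with 1 at the "number" level are removed in a single call, which is more convenient than listing the full index tuples.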

Drop a Single Element From a Pandas Series

To drop a single element from a series, you can pass the index of the element to the labels parameter in the drop() method as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series=series.drop(labels=11)
print("The modified series is:")
print(series)

Output:

The original series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object
The modified series is:
3        a
23       b
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In the above example, we first created a Series object using the Series() constructor. Then we dropped the element having index 11 using the drop() method. For this, we have passed the value 11 to the drop() method. After execution of the drop() method, you can observe that the element with index 11 has been removed from the output series.

Instead of the labels parameter, you can also use the index parameter in the drop() method as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series=series.drop(index=11)
print("The modified series is:")
print(series)

Output:

The original series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object
The modified series is:
3        a
23       b
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In this example, we have used the index parameter instead of the labels parameter. However, the resultant series after execution of the drop() method is the same in both cases.

Delete Multiple Elements From a Pandas Series

To drop multiple elements from a series, you can pass a python list of indices of the elements to be deleted to the labels parameter. For instance, if you want to delete elements at indices 11, 16, and 2 of the given Series, you can pass the list [11,16,2] to the labels parameter in the drop() method as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series=series.drop(labels=[11,16,2])
print("The modified series is:")
print(series)

Output:

The original series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object
The modified series is:
3      a
23     b
14    ab
45    bc
65     d
dtype: object

In this example, we have passed the list [11, 16, 2] as input to the labels parameter. Hence, after execution of the drop() method, the elements at index 11, 16, and 2 are deleted from the original series object.

Instead of the labels parameter, you can pass the list of indices to the index parameter as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series=series.drop(index=[11,16,2])
print("The modified series is:")
print(series)

Output:

The original series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object
The modified series is:
3      a
23     b
14    ab
45    bc
65     d
dtype: object

Drop Elements Inplace From a Pandas Series

By default, the drop() method returns a new series and doesn’t delete specified elements from the original series. To drop elements inplace from a pandas series, you can set the inplace parameter to True as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series.drop(index=[11,16,2],inplace=True)
print("The modified series is:")
print(series)

Output:

The original series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object
The modified series is:
3      a
23     b
14    ab
45    bc
65     d
dtype: object

In all the previous examples, the drop() method returned a new Series object. In this example, we have set the inplace parameter to True in the drop() method. Hence, the elements are deleted from the original series and it is modified. In this case, the drop() method returns None.
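You can see the difference between the two modes by capturing the return value in each case. This is a minimal sketch with made-up data:

```python
import pandas as pd

series = pd.Series(["a", "b", "c"], index=[3, 23, 11])

# Without inplace, drop() returns a new series; the original is untouched
new_series = series.drop(labels=11)
print(len(series), len(new_series))  # original keeps 3 elements, copy has 2

# With inplace=True, the original series itself is modified
# and the return value is None
ret = series.drop(labels=11, inplace=True)
print(ret, len(series))  # None 2
```

Because the inplace call returns None, chaining further method calls onto it (for example, series.drop(..., inplace=True).head()) would raise an AttributeError.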

Delete an Element From a Series if the Index Exists

While deleting elements from a series using the drop() method, it is possible that we might pass an index to the labels or index parameter that is not present in the Series object. If the value passed to the labels or index parameter isn’t present in the Series, the drop() method runs into a KeyError exception as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series.drop(index=1117,inplace=True)
print("The modified series is:")
print(series)

Output:

KeyError: '[1117] not found in axis'

In the above example, we have passed the value 1117 to the index parameter. As the value 1117 is not present in the Series, we get a KeyError exception.

To avoid errors and drop elements from a series if the index exists, you can use the errors parameter. By default, the errors parameter is set to "raise". Due to this, the drop() method raises an exception every time it runs into an error. To suppress the exception, you can set the errors parameter to “ignore” as shown in the following example.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The original series is:")
print(series)
series.drop(index=1117,inplace=True,errors="ignore")
print("The modified series is:")
print(series)

Output:

The original series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object
The modified series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In the above example, we have passed the value 1117 to the index parameter. As 1117 is not present in the series index, the drop() method would have run into a KeyError exception. However, we have set the errors parameter to "ignore" in the drop() method. Hence, it suppresses the error. You can also observe that the series returned by the drop() method is the same as the original series.

Suggested Reading: If you are into machine learning, you can read this article on regression in machine learning. You might also like this article on clustering mixed data types in Python.

Drop NaN Values From a Pandas Series

NaN values are special floating-point numbers in Python that are used to represent missing values. Most of the time, NaN values carry no useful information in a dataset, and we need to remove them.
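You can verify both of these properties with the standard library alone. The short sketch below shows that NaN is an ordinary float and that it doesn't even compare equal to itself, which is why dedicated checks like math.isnan() (or pandas' isna()) are needed to detect it:

```python
import math

nan = float("nan")

print(type(nan))        # <class 'float'>
print(nan == nan)       # False: NaN never compares equal, even to itself
print(math.isnan(nan))  # True
```

The self-inequality is exactly why you can't filter NaN values out of a series with an ordinary equality comparison.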

You can drop NaN values from a pandas series using the dropna() method. It has the following syntax.

Series.dropna(*, axis=0, inplace=False, how=None)

Here,  

  • The axis parameter is used to decide if we want to delete nan values from a row or column from the series. For a pandas series, the axis parameter isn’t used. It is defined just to ensure the compatibility of the dropna() method with pandas dataframes.
  • By default, the dropna() method returns a new series object after deleting nan values from the original series. In this process, the original series isn’t modified. If you want to delete the nan values from the original series instead of creating a new series, you can set the inplace parameter to True.
  • The “how” parameter is not used for a series. 

After execution, the dropna() method returns a new series if the inplace parameter is set to False. Otherwise, it returns None. 

You can drop nan values from a pandas series as shown in the following example.

import pandas as pd
import numpy as np
letters=["a","b","c",np.nan,"ab","abc",np.nan,"abcd","bc","d"]
series=pd.Series(letters)
print("The original series is:")
print(series)
series=series.dropna()
print("The modified series is:")
print(series)

Output:

The original series is:
0       a
1       b
2       c
3     NaN
4      ab
5     abc
6     NaN
7    abcd
8      bc
9       d
dtype: object
The modified series is:
0       a
1       b
2       c
4      ab
5     abc
7    abcd
8      bc
9       d
dtype: object

In the above example, you can observe that the original series has two NaN values. After execution, the dropna() method deletes both the NaN values with their indices and returns a new series.

Drop NaN Values Inplace From a Pandas Series

If you want to drop NaN values from the original series instead of creating a new series, you can set the inplace parameter to True in the dropna() method as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c",np.nan,"ab","abc",np.nan,"abcd","bc","d"]
series=pd.Series(letters)
print("The original series is:")
print(series)
series.dropna(inplace=True)
print("The modified series is:")
print(series)

Output:

The original series is:
0       a
1       b
2       c
3     NaN
4      ab
5     abc
6     NaN
7    abcd
8      bc
9       d
dtype: object
The modified series is:
0       a
1       b
2       c
4      ab
5     abc
7    abcd
8      bc
9       d
dtype: object

Here, we have set the inplace parameter to True. Hence, the dropna() method modified the original series instead of creating a new one. In this case, the dropna() method returns None after execution.

Drop Duplicates From a Pandas Series

In data preprocessing, we often need to remove duplicate values from the given data. To drop duplicate values from a pandas series, you can use the drop_duplicates() method. It has the following syntax.

Series.drop_duplicates(*, keep='first', inplace=False)

Here,

  • The keep parameter is used to decide what values we need to keep after removing the duplicates. To drop all the duplicate values except for the first occurrence, you can set the keep parameter to “first” which is its default value. To drop all the duplicate values except for the last occurrence, you can set the keep parameter to “last”. If you want to drop all the duplicate values, you can set the keep parameter to False.
  • By default, the drop_duplicates() method returns a new series object after deleting duplicate values from the original series. In this process, the original series isn’t modified. If you want to delete the duplicate values from the original series instead of creating a new series, you can set the inplace parameter to True.

After execution, the drop_duplicates() method returns a new series if the inplace parameter is set to False. Otherwise, it returns None. You can observe this in the following example.

import pandas as pd
import numpy as np
letters=["a","b","a","a","ab","abc","ab","abcd","bc","abc","ab"]
series=pd.Series(letters)
print("The original series is:")
print(series)
series=series.drop_duplicates()
print("The modified series is:")
print(series)

Output:

The original series is:
0        a
1        b
2        a
3        a
4       ab
5      abc
6       ab
7     abcd
8       bc
9      abc
10      ab
dtype: object
The modified series is:
0       a
1       b
4      ab
5     abc
7    abcd
8      bc
dtype: object

In the above example, you can observe that the strings “a”, “ab”, and “abc” are present multiple times in the series. Hence, when we invoke the drop_duplicates() method on the series object, all the duplicates except the first occurrence of each string are removed from the series.

Looking at the indices, you can observe that the first occurrence of each repeated element has been retained. If you want to preserve the last occurrence of the elements having duplicate values, you can set the keep parameter to "last" as shown below.

import pandas as pd
import numpy as np
letters=["a","b","a","a","ab","abc","ab","abcd","bc","abc","ab"]
series=pd.Series(letters)
print("The original series is:")
print(series)
series=series.drop_duplicates(keep="last")
print("The modified series is:")
print(series)

Output:

The original series is:
0        a
1        b
2        a
3        a
4       ab
5      abc
6       ab
7     abcd
8       bc
9      abc
10      ab
dtype: object
The modified series is:
1        b
3        a
7     abcd
8       bc
9      abc
10      ab
dtype: object

In the above example, we have set the keep parameter to "last". Hence, you can observe that the drop_duplicates() method preserves the last occurrence of the elements that have duplicate values.

Drop Duplicates Inplace in a Pandas Series

By default, the drop_duplicates() method doesn’t modify the original series object. It returns a new series. If you want to modify the original series by deleting the duplicates, you can set the inplace parameter to True in the drop_duplicates() method as shown below.

import pandas as pd
import numpy as np
letters=["a","b","a","a","ab","abc","ab","abcd","bc","abc","ab"]
series=pd.Series(letters)
print("The original series is:")
print(series)
series.drop_duplicates(inplace=True)
print("The modified series is:")
print(series)

Output:

The original series is:
0        a
1        b
2        a
3        a
4       ab
5      abc
6       ab
7     abcd
8       bc
9      abc
10      ab
dtype: object
The modified series is:
0       a
1       b
4      ab
5     abc
7    abcd
8      bc
dtype: object

In this example, we have set the inplace parameter to True. Hence, the drop_duplicates() method modified the original series instead of creating a new one. In this case, the drop_duplicates() method returns None after execution.

Drop All Duplicate Values From a Pandas Series

To drop all the duplicates from a pandas series, you can set the keep parameter to False as shown below.

import pandas as pd
import numpy as np
letters=["a","b","a","a","ab","abc","ab","abcd","bc","abc","ab"]
series=pd.Series(letters)
print("The original series is:")
print(series)
series=series.drop_duplicates(keep=False)
print("The modified series is:")
print(series)

Output:

The original series is:
0        a
1        b
2        a
3        a
4       ab
5      abc
6       ab
7     abcd
8       bc
9      abc
10      ab
dtype: object
The modified series is:
1       b
7    abcd
8      bc
dtype: object

In this example, we have set the keep parameter to False in the drop_duplicates() method. Hence, you can observe that all the elements having duplicate values are removed from the series.

Conclusion

In this article, we have discussed different ways to drop elements from a pandas series. To know more about pandas module, you can read this article on how to sort a pandas dataframe. You might also like this article on how to drop columns from a pandas dataframe.

The post Drop Elements From a Series in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

OpenSense Labs: An overview of Automatic Updates in Drupal 10

Planet Drupal - Wed, 2022-11-30 07:46
An overview of Automatic Updates in Drupal 10 Maitreayee Bora Wed, 11/30/2022 - 18:16

Between November 2020 and October 2021, 5,212 organizations worldwide experienced data breaches (source: Statista).

And the number is steadily increasing. 

While every business that operates online faces some cyber threats, there are many ways to prevent data breaches or at least minimize their impact.

Delays in applying security updates can result in compromised sites, as seen with Drupalgeddon.

Manually updating a Drupal site can be expensive, difficult, and time-consuming. 

The goal of the Automatic Updates Initiative is to provide safe and secure automatic updates for Drupal sites. It aims to address security concerns while replacing the troublesome manual update process.

Explained: Drupal Automatic Updates

Drupal’s Automatic Updates focus on resolving some of the most difficult usability concerns in maintaining Drupal websites. It is listed as one of the Drupal Core Strategic Initiatives for Drupal 9. 

It comprises of updates on production, development, and staging environments, with some integrations required in existing CI/CD processes. 

Automatic Updates in Drupal offers some major benefits to its users such as a reduction in the total cost of ownership (TCO) for Drupal websites and also a decrease in the maintenance cost.

Presently, we get to see a stable release that comprises features such as public safety alerts and readiness checks which will be discussed below. 

Importance of updating website

Here is the importance of updating a website. Take a look below:

  • Helps in increasing brand exposure

Updating a website by replacing outdated information with fresh content increases brand exposure. Neglecting to update content, on the other hand, can hold back the brand exposure that is so important.

  • Increases security

One of the major reasons for updating a website can be security concerns. For example, if a website is hacked then it can bring trouble for both the business and clients. But if we frequently update our website with the latest security features then such troubles of website hacking can be avoided. 

  • Provides mobile-friendly facility

By updating our website to a mobile-friendly website we enable our users to go through our website across various devices and platforms with ease and comfort. This leads to an increase in website traffic also resulting in a better company reputation.

Key Features of the Automatic Updates Module 

Here’s a list of features in the Automatic Updates module.  

  • Update readiness checks

We might not always be able to update every website. In such instances, readiness checks, one of the key features of Automatic Updates, help identify whether a website is ready to update automatically after a new release is offered to the Drupal community. 

For instance, websites that have un-run database updates, insufficient disk space for updating, or read-only file systems won’t be able to get automatic updates. If a website fails readiness checks and a public service announcement (PSA) is released, it is important to resolve the readiness issue so that the website can be updated immediately.

  • In-place updates
  1. After the PSA service provides a notification to a Drupal site owner of an available update, and also the readiness checks happen to confirm that the website is ready to be updated, the website administrator is then able to update through the Update form.
     
  2. Tarball-based installations are well supported by this particular module, though it doesn’t cover some of the requirements for secure updating, rollback, and so on, which will come under the core solution.
     
  3. This module doesn’t support contrib updates or composer-based site installations. And also, the work on composer integration has begun already and is in progress.
  • Public service announcements (PSAs)

Infrequent announcements are made, especially for critical security releases of core and contrib modules. After a PSA is released, site owners need to review their websites to ensure they are up to date with the latest releases. The website also needs to be in a good position to update quickly if any fixes are given to the community.



Conclusion

The Drupal community never fails to make an honest effort to make its software and websites safer and more user-friendly. The Automatic Updates initiative is a great example of this, and it has made tremendous progress so far. 
 

Categories: FLOSS Project Planets

Russell Coker: Links November 2022

Planet Debian - Wed, 2022-11-30 05:26

Here’s the US Senate Statement of Frances Haugen who used to work for Facebook countering misinformation and espionage [1]. She believes that Facebook is capable of dealing with the online radicalisation and promotion of bad things on its platform but is unwilling to do so for financial reasons. We need strong regulation of Facebook and it probably needs to be broken up.

Interesting article from The Atlantic about filtered cigarettes being more unhealthy than unfiltered [2]. Every time I think I know how evil tobacco companies are I get surprised by some new evidence.

Cory Doctorow wrote an insightful article about resistance to “rubber hose cryptanalysis” [3].

Cory Doctorow wrote an interesting article “When Automation Becomes Enforcement” with a new way of thinking about Snapchat etc [4].

Cory Doctorow wrote an insightful and informative article Big Tech Isn’t Stealing News Publishers’ Content, It’s Stealing Their Money [5] which should be read by politicians from all countries that are trying to restrict quoting news on the Internet.

Interesting article about Santiago Genoves, who could be considered a pioneer of reality TV for deliberately creating disputes between a group of young men and women on a raft in the Atlantic for 3 months [6].

Matthew Garrett wrote an interesting review of the Freedom Phone, seems that it’s not good for privacy and linked to some companies doing weird stuff [7]. Definitely worth reading.

Cory Doctorow wrote an interesting and amusing article about backdoors for machine learning [8]

Petter Reinholdtsen wrote an informative post on how to make a bootable USB stick image from an ISO file [9]. Apparently Lenovo provides ISO images to update laptops that don’t have DVD drives. :(

Barry Gander wrote an interesting article about the fall of Rome and the decline of the US [10]. It’s a great concern that the US might fail in the same way as Rome.

Ethan Siegel wrote an interesting article about Iapetus, a moon of Saturn that is one of the strangest objects in the solar system [11].

Cory Doctorow’s article Revenge of the Chickenized Reverse-Centaurs has some good insights into the horrible things that companies like Amazon are doing to their employees and how we can correct that [12].

Charles Stross wrote an insightful blog post about Billionaires [13]. They can’t do much for themselves with the extra money beyond about $10m or $100m (EG Steve Jobs was unable to extend his own life much when he had cancer) and their money is trivial when compared to the global economy. They are however effective parasites capable of performing great damage to the country that hosts them.

Cory Doctorow has an interesting article about how John Deere is being evil again [14]. This time with potentially catastrophic results.

Related posts:

  1. Links September 2022 Tony Kern wrote an insightful document about the crash of...
  2. Links November 2021 The Guardian has an amusing article by Sophie Elmhirst about...
  3. Links September 2020 MD5 cracker, find plain text that matches MD5 hash [1]....
Categories: FLOSS Project Planets

Qt for MCUs 2.3 released

Planet KDE - Wed, 2022-11-30 04:19

Since the very first release of Qt for MCUs, your feedback and requests have been driving the development of Qt for MCUs. Today, we are happy to announce the release of version 2.3, which includes several of the most requested features and improvements. This new version adds the Loader QML type to Qt Quick Ultralite, support for partial framebuffers to substantially reduce the overall memory requirements of your applications, support for building applications using MinGW on Windows, and much more!

Categories: FLOSS Project Planets

Morpht: Data migration for the Book module

Planet Drupal - Wed, 2022-11-30 04:14
A tutorial on how to preserve the hierarchical structure of book pages during a migration from Drupal 7 to Drupal 9.
Categories: FLOSS Project Planets

The Drop Times: Behind the New Logo for NEDCamp: A Short Conversation with John Picozzi

Planet Drupal - Wed, 2022-11-30 03:59
TDT asked John Picozzi, one of the organizers of NEDCamp, about the process and idea behind creating a new logo for the camp. Read on to find out what we learned from him.
Categories: FLOSS Project Planets

John Ludhi/nbshare.io: Pandas Read and Write Excel File

Planet Python - Wed, 2022-11-30 02:38
Pandas Read and Write Excel File

Make sure you have the openpyxl package installed. Otherwise, you will get the following error:
...

ModuleNotFoundError: No module named 'openpyxl'

Install the package with the following command:

pip install openpyxl

Pandas print excel sheet names

In [1]:
import pandas as pd

Pandas has an ExcelFile method, which returns a Pandas Excel object.

In [2]:
excel = pd.ExcelFile("stocks.xlsx")
excel.sheet_names
Out[2]:
['Sheet12']

Note that you might run into the following error:

ValueError: Worksheet index 0 is invalid, 0 worksheets found

which usually means the Excel file is corrupt. To fix this error, copy the data into another Excel file and save it.

ExcelFile has many attributes and methods. For example, excel.__dict__ will print the state of the Excel object in dictionary format.

In [3]:
excel.__dict__
Out[3]:
{'io': 'stocks.xlsx',
 '_io': 'stocks.xlsx',
 'engine': 'openpyxl',
 'storage_options': None,
 '_reader': <pandas.io.excel._openpyxl.OpenpyxlReader at 0x7f4cb232c8e0>}

To convert the data into a Pandas dataframe, we will use the ExcelFile.parse() method.

Pandas Read Excel Files

In [4]:
excel = pd.ExcelFile("stocks.xlsx")
df = excel.parse()
In [5]:
df.head()
Out[5]:
  Unnamed: 0 Unnamed: 1 Unnamed: 2           Unnamed: 3
0        NaN      Stock      Price                 Date
1        NaN       INTC       28.9  2022-11-29 00:00:00
2        NaN       AAPL     141.17  2022-11-29 00:00:00

Since the first row and column of our Excel sheet are empty, the headers appear as Unnamed and the first column as NaN, respectively.

Let us fix this by specifying that the header starts at row 1.

In [6]:
excel.parse(header=1)
Out[6]:
   Unnamed: 0 Stock   Price       Date
0         NaN  INTC   28.90 2022-11-29
1         NaN  AAPL  141.17 2022-11-29

To fix the column indexing, we can use the "usecols" option as shown below.

In [7]: excel.parse(usecols=[1,2,3], header=1)
Out[7]:
  Stock   Price       Date
0  INTC   28.90 2022-11-29
1  AAPL  141.17 2022-11-29
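Besides integer positions, read_excel also accepts "usecols" as Excel-style column letters. A self-contained sketch (writing a small workbook to an in-memory buffer so the example runs without a stocks.xlsx on disk; the data is illustrative):

```python
import io

import pandas as pd

df = pd.DataFrame({"Stock": ["INTC", "AAPL"],
                   "Price": [28.90, 141.17],
                   "Date": ["2022-11-29", "2022-11-29"]})

# Write the frame starting at cell A1, so Stock/Price/Date land in columns A-C.
buf = io.BytesIO()
df.to_excel(buf, index=False, engine="openpyxl")
buf.seek(0)

# "A:B" selects spreadsheet columns A and B, i.e. Stock and Price.
subset = pd.read_excel(buf, usecols="A:B")
print(list(subset.columns))  # → ['Stock', 'Price']
```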

To specify the stock symbol as our index column, we can use the "index_col" option.

In [8]: excel.parse(index_col="Stock", usecols=[1,2,3], header=1)
Out[8]:
        Price       Date
Stock
INTC    28.90 2022-11-29
AAPL   141.17 2022-11-29

We can also use the pd.read_excel() method to achieve the same result.

In [9]: pd.read_excel("stocks.xlsx", index_col="Stock", usecols=[1,2,3], header=1)
Out[9]:
        Price       Date
Stock
INTC    28.90 2022-11-29
AAPL   141.17 2022-11-29

Instead of specifying each column number, we can use the range() function to specify the columns which contain the data.

In [10]: excel.parse(usecols=range(1,4), header=1)
Out[10]:
  Stock   Price       Date
0  INTC   28.90 2022-11-29
1  AAPL  141.17 2022-11-29

Let us save the DataFrame into a variable.

In [11]: dfef = pd.read_excel("stocks.xlsx", usecols=range(1,4), header=1)

In [12]: dfef.head()
Out[12]:
  Stock   Price       Date
0  INTC   28.90 2022-11-29
1  AAPL  141.17 2022-11-29

Pandas write Dataframe to Excel File

We can write the DataFrame to an Excel file using the DataFrame.to_excel() method.

In [13]: dfef.to_excel("stocktmp.xlsx")

In [14]: !ls -lrt stocktmp.xlsx
-rw-r--r-- 1 root root 5078 Nov 30 05:21 stocktmp.xlsx
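to_excel() writes a single sheet; to write several sheets into one file, pandas provides ExcelWriter. A hedged sketch (the sheet names and data are made up, and an in-memory buffer stands in for a real file path):

```python
import io

import pandas as pd

prices = pd.DataFrame({"Stock": ["INTC", "AAPL"], "Price": [28.90, 141.17]})
volumes = pd.DataFrame({"Stock": ["INTC", "AAPL"], "Volume": [1000, 2000]})

buf = io.BytesIO()  # a real path like "report.xlsx" works the same way
with pd.ExcelWriter(buf, engine="openpyxl") as writer:
    prices.to_excel(writer, sheet_name="prices", index=False)
    volumes.to_excel(writer, sheet_name="volumes", index=False)
buf.seek(0)

# sheet_name=None reads every sheet back into a dict of DataFrames.
sheets = pd.read_excel(buf, sheet_name=None)
print(list(sheets))  # → ['prices', 'volumes']
```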
Categories: FLOSS Project Planets

Russ Allbery: Review: The Fed Unbound

Planet Debian - Wed, 2022-11-30 00:08

Review: The Fed Unbound, by Lev Menand

Publisher: Columbia Global Reports
Copyright: 2022
ISBN: 1-7359137-1-5
Format: Kindle
Pages: 156

The Fed Unbound is a short non-fiction exploration of US Federal Reserve actions to reduce systemic risk caused by shadow banking. Its particular focus is the role of the Fed from the 2008 financial crisis to the present, including the COVID shock, but it includes a history of what Menand calls the "American Monetary Settlement," the political compromise that gave rise to the Federal Reserve.

In Menand's view, a central cause of instability in the US financial system (and, given the influence of the dollar system, the world financial system as well) is shadow banking: institutions that act as banks without being insured like banks or subject to bank regulations. A bank, in this definition, is an organization that takes deposits. I'm simplifying somewhat, but what distinguishes a deposit from a security or an investment is that deposits can be withdrawn at any time, or at least on very short notice. When you want to spend the money in your checking account, you don't have to wait for a three-month maturity period or pay an early withdrawal penalty. You simply withdraw the money, with immediate effect. This property is what makes deposits "money," rather than something that you can usually (but not always) sell for money, such as stocks or bonds.

Most people are familiar with the basic story of how banks work. Essentially no bank simply takes people's money and puts it in a vault until the person wants it again. If that were the case, you would need to pay the bank to store your money. Instead, a bank takes in deposits and then lends some portion of that money out to others. Those loans, for things like cars or houses or credit card spending, come due over time, with interest. The interest rate the bank charges on the loans is much higher than the rate it has to pay on its deposits, and it pockets the difference.

The problem with this model, of course, is that the bank doesn't have your money, so if all the depositors go to the bank at the same time and ask for their money, the bank won't be able to repay them and will collapse. (See, for example, the movie It's a Wonderful Life, or Mary Poppins, or any number of other movies or books.) Retail banks are therefore subject to stringent regulations designed to promote public trust and to ensure that traditional banking is a boring (if still lucrative) business. Banks are also normally insured, which in the US means that if they do experience a run, federal regulators will step in, shut down the bank in an orderly fashion, and ensure every depositor gets their money back (at least up to the insurance limit).

Alas, if you thought people would settle for boring work that makes a comfortable profit, you don't know the financial industry. Highly-regulated insured deposits are less lucrative than acting like a bank without all of those restrictions and rules and deposit insurance payments. As Menand relates in his brief history of US banking, financial institutions constantly invent new forms of deposits with similar properties but without all the pesky rules: eurodollars (which have nothing to do with the European currency), commercial paper, repo, and many others. These forms of deposits are primarily used by large institutions like corporations. The details vary, but they tend to be prone to the same fundamental instability as bank deposits: if there's a run on the market, there may not be enough liquidity for everyone to withdraw their money at once. Unlike bank deposits, though, there is no insurance, no federal regulator to step in and make depositors whole, and much less regulation to ensure that runs are unlikely.

Instead, there's the Federal Reserve, which has increasingly become the bulwark against liquidity crises among shadow banks. This happened in 2008 during the financial crisis (which Menand argues can be seen as a shadow bank run sparked by losses on mortgage securities), and again at a larger scale in 2020 during the initial COVID crisis.

Menand is clear that these interventions from the Federal Reserve were necessary. The alternative would have been an uncontrolled collapse of large sections of the financial system, with unknown consequences. But the Fed was not intended to perform those types of interventions. It has no regulatory authority to reform the underlying financial systems to make them safer, remove executives who failed to maintain sufficient liquidity for a crisis, or (as is standard for all traditional US banks) prohibit combining banking and more speculative investment on the same balance sheet. What the Federal Reserve can do, and did, is function as a buyer of last resort, bailing out shadow banks by purchasing assets with newly-created money. This works, in the sense that it averts the immediate crisis, but it creates other distortions. Most importantly, constant Fed intervention doesn't create an incentive to avoid situations that require that intervention; if anything, it encourages more dangerous risk-taking.

The above, plus an all-too-brief history of the politics of US banking, is the meat of this book. It's a useful summary, as far as it goes, and I learned a few new things. But I think The Fed Unbound is confused about its audience.

This type of high-level summary and capsule history seems most useful for people without an economics background and who haven't been following macroeconomics closely. But Menand doesn't write for that audience. He assumes definitions of words like "deposits" and "money" that are going to be confusing or even incomprehensible to the lay reader.

For example, Menand describes ordinary retail banks as creating money, even saying that a bank loans money by simply incrementing the numbers in a customer's deposit account. This is correct in the technical economic definition of money (fractional reserve banking effectively creates new money), but it's going to sound to someone not well-versed in the terminology as if retail banks can create new dollars out of the ether. That power is, of course, reserved for the Federal Reserve, and indeed is largely the point of its existence. Much of this book relies on a very specific definition of money and the money supply that will only be familiar to those with economics training.
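The textbook arithmetic behind that money-creation claim can be sketched in a few lines (the 10% reserve ratio and $100 deposit are illustrative assumptions, not figures from the book):

```python
# Each round, a bank keeps 10% of a deposit as reserves and lends out the
# rest, which is re-deposited at another bank; summing the rounds gives the
# total money supply created from one initial deposit.
initial_deposit = 100.0
reserve_ratio = 0.10

money_supply = 0.0
deposit = initial_deposit
for _ in range(10_000):  # enough rounds for the series to converge
    money_supply += deposit
    deposit *= 1 - reserve_ratio

# The geometric series converges to initial_deposit / reserve_ratio.
print(round(money_supply))  # → 1000
```

One $100 deposit thus supports roughly $1,000 of "money" in deposit accounts, even though only $100 in base money ever existed — which is exactly why the distinction confuses lay readers.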

Similarly, the history of the Federal Reserve is interesting but slight, and at no point does Menand explain clearly how the record-keeping between it and retail banks works, or what the Fed's "balance sheet" means in practice. I realize this book isn't trying to be a detailed description or history of the Federal Reserve system, but the most obvious audience is likely to flounder at the level of detail Menand provides.

Perhaps, therefore, this book is aimed at an audience already familiar with macroeconomics? But, if so, I'm not sure it says anything new. I follow macroeconomic policy moderately closely and found most of Menand's observations obvious and not very novel. There were tidbits here and there that I hadn't understood, but my time would probably have been better invested in another book. Menand proposes some reforms, but they mostly consist of "Congress should do its job and not create situations where the Federal Reserve has to act without the right infrastructure and political oversight," and, well, yes. It's hard to disagree with that, and it's also hard to see how it will ever happen. It's far too convenient to outsource such problems to central banking, where they are hidden behind financial mechanics that are incomprehensible to the average voter.

This is an important topic, but I don't think this is the book to read about it. If you want a clearer and easier-to-understand account of the Federal Reserve's role in shadow banking crises, read Crashed instead. If you want to learn more about how retail bank regulation works, and hear a strong case for why the same principles should be applied to shadow banks, see Sheila Bair's Bull by the Horns. I'm still looking for a great history and explainer of the Federal Reserve system as a whole, but that is not this book.

Rating: 5 out of 10

Categories: FLOSS Project Planets

scikit-learn: Interview with Meekail Zain, scikit-learn Team Member

Planet Python - Tue, 2022-11-29 19:00
Author: Reshama Shaikh, Meekail Zain

Posted by Sangam SwadiK

Meekail Zain is a computer science PhD student at University of Georgia (USA), a member of Quinn Research Group and a software engineer at Quansight. Meekail officially joined the scikit-learn team as a maintainer in October 2022.

  1. Tell us about yourself.

    I’m currently attending the University of Georgia, pursuing a PhD in computer science. My area of research predominantly focuses on deep learning, generative modeling, and statistical approaches to clustering. I’m in my third year, and at the time of writing am about to begin my comprehensive exams.

  2. How did you first become involved in open source and scikit-learn?

    I got my first computer when I was writing my master's thesis, and back then a friend installed Linux on it for me. Since then, I have been a near-exclusive Linux user and have learned to love the advantages of open source.

  3. We would love to learn of your open source journey.

    My journey really kicked off when I went to work at Quansight and received funding through the NASA Roses grant to be able to dedicate time to contributing to scikit-learn. It was a huge jump from what I had known up until that point. I learned Python very informally in order to be able to use PyTorch to develop/deploy models for my research, and had little-to-no experience with things like continuous integration or strong API. At first I felt incredibly intimidated and unqualified, but at the same time absolutely thrilled that I was in a position to learn so many new things! I started working on really simple changes to get used to the contribution workflow — things like removing excess whitespace and fixing typos — and then graduated to slightly more complex tasks. Eventually I got to the point where I started to “understand” small corners of the codebase and could actually offer help on new issues because of that familiarity. After that, I started reviewing others’ pull requests (PRs) and offering feedback in an unofficial capacity, as well as taking on more challenging tasks across the codebase. That process of growth and escalation is still ongoing, and truly I hope it never ends.

  4. To which OSS projects and communities do you contribute?

    NumPy, scikit-learn, and SciPy. Right now it is heavily skewed towards scikit-learn, with NumPy being second most, but I’m hoping to take some more time to work on SciPy in the near future!

  5. What advice or tips you have for people starting out in your field of work?

    Find a way to enjoy the feeling of being surrounded by things that you haven’t yet mastered. If you aim for growth — and indeed I think we all should — then you’ll find that you spend the majority of your time surrounded by things that you don’t quite understand, and the natural reaction to that is frustration and intimidation. If you can somehow convince yourself to also be excited by such an environment, you’ll find yourself growing every single day. Nobody starts off knowing everything :)

  6. What do you find alluring about OSS?

    This is a tough one, there are many amazing points. If I had to select just a few, it would be (in no particular order):

    • The growth potential
    • The community
    • The impact

    I’ve already discussed the growth potential so I’ll leave it at that.

    The community is fantastic as well! On every project the community base has its own unique personality of sorts, and they are all wonderful! It’s amazing being able to see recurring users that post interesting issues, or take a stab at opening more complex PRs (pull requests). There’s a strong sense of companionship with the people that are also trying to improve the same project as you! It’s akin to a very niche club in high school. It’s a wonderful experience finding people obsessed with the same cool project as you are.

    Finally, the impact. At the end of the day, the work we do has some serious consequences. Each project is essential to so many different workflows and enables brilliant researchers and software engineers to build complex systems and solutions to cutting edge problems. It’s sometimes surreal to think about how essential some of these projects really are.

  7. What pain points do you observe in community-led OSS?

    Consensus is difficult. This is a double-edged sword, since it carries some benefits too. With community-led OSS, changes at every scale need to meet some kind of consensus. This ensures that the changes are well thought out and provides a layer of safety, since the chance of uncaught mistakes propagating goes down with the number of people carefully reviewing changes (for the most part).

    For example, in scikit-learn a PR with changes to code needs to meet a lazy consensus where two official reviewers (currently just core developers) explicitly approve, and no other official reviewer officially disapproves. Going a bit further up, a new feature request in a project could require the consensus of several core developers that are well-versed in the topic area. Large systemic changes manifest in the form of SLEPs (scikit-learn enhancement proposals) which require a ⅔ consensus across all core developers. Above even that, there are cross-community discussions where the idea of a “consensus” itself isn’t always really clear.

    This system is a critical one, but there are important issues intrinsic to it that need to be addressed. For example, who gets to contribute to a consensus at each scale? What qualifications does one need, and how do we codify that? There’s also the intrinsic tradeoff where the stronger the consensus required, the less likely it is that changes will be adopted. This is by design since wide-reaching changes need to be held to high standards, but it does also mean that occasionally even for narrow-scoped problems no solution will be reached despite options being raised that are better than the status quo.

  8. If we discuss how far OS has evolved in 10 years, what would you like to see happen?

    I can’t speak to its evolution in the past 10 years, since I am still fairly new to OSS overall, but I would like to see systematic, data-driven analysis of contributors' needs. Different OSS projects have issued contributor surveys in the past, but in general I think a lot of emphasis is placed on the feedback given from users in meta issues or over community calls. While that is definitely helpful, there’s a lot of extrapolation that takes place when projects try to determine the needs of their contributor base like this.

    Some questions I would love to see studied include:

    • What distribution does the expertise of the contributor base follow?
    • What are the greatest bottlenecks at each level of expertise?
    • Aside from expertise, are there other socio-economic or general demographics that exhibit consistent bottlenecks? (e.g. access to hardware)
    • How do we create informed and effective DEI policies from this information?

    OSS projects thrive and prosper based on their community, so I would love to see more systematic research on community needs and pain points.
  9. What are your favorite resources, books, courses, conferences, etc?

    I absolutely adore “Probability and Statistics” by Evans and Rosenthal. It does a fantastic job of constructing a lot of otherwise daunting statistical concepts from very elementary ideas. It is my favorite book to recommend to eager students that do not have a rigorous foundation in probability and statistics, since this book does a great job of building up the reader’s intuition and making everything feel natural and derived, rather than arbitrarily defined.

    Regarding conferences, I have to go with SciPy! I was definitely scared going into the conference thinking that I would be the least-qualified person in every room and that I’d have nothing to talk about. I realized very quickly that there is always something to talk about, and qualifications don’t matter. It’s a gathering of super passionate people that are each eager to talk about the things that interest them, so regardless of whether you’re an expert or a beginner, they will happily explain things to you. Every single attendee has some area, no matter how specific, that they can talk about for hours. That genuine interest and excitement felt rejuvenating and reminded me why I love OSS so much.

  10. What are your hobbies, outside of work and open source?

    I really enjoy hiking, camping and playing DnD (Dungeons & Dragons)! Camping especially is an important hobby for me since whenever I have a computer in reach I feel inclined to check my GitHub notifications, so the occasional total disconnect for a weekend is a fantastic tool for me to give myself a break with no pressure of “I could work on that new feature right now…”

    If you have ever had difficulty with relaxing because of that little voice in your head that says “How dare you relax? You could be doing this and that right now!” then I highly recommend going camping, even just for one night! When that voice strikes during camping, I retort “Ah but you see, I don’t have my laptop, so I can’t work on that right now. All I can do right now is relax.” and suddenly the anxiety washes away :)

Categories: FLOSS Project Planets

Plasma Mobile Gear ⚙ 22.11 is Out

Planet KDE - Tue, 2022-11-29 19:00
Updates in Plasma Mobile for September to November 2022

The Plasma Mobile team is happy to announce the results of all the project's development work carried out between September and November 2022.

Plasma Mobile Gear

We have decided to migrate the releases of Plasma Mobile applications to KDE Gear, starting with KDE Gear 23.04. This means that Plasma Mobile Gear will be discontinued in the future, and Plasma Mobile applications will follow the release schedule of most other KDE applications, simplifying packaging. To prepare for this, an ongoing effort was made to ensure all applications have proper Bugzilla categories created.

Akademy

Akademy 2022 was held in Barcelona, and Devin and Bhushan presented some of the work in the project in the following talk:

Several Plasma Mobile BoF (birds-of-a-feather) meetings were also held. More details about them can be read over at Devin's blog.

Shell

Plasma 5.27 will be released on February 9th, 2023. This will be the last Plasma 5 release, with work after that being focused on Plasma 6!

Action Drawer

Devin added a feature that lets you tap the media player to open the audio source's app window. He also fixed the settings quicksetting so that it now always opens the mobile settings application. Several issues with the mobile data quicksetting not accounting for certain scenarios were also fixed, and he also worked on fixing the marquee labels in the WiFi quicksetting, stopping them from overflowing when multiple network devices are attached.

Navigation Panel

Devin fixed the close button not being usable while an application is launching.

Halcyon (Homescreen)

Devin fixed some performance issues when scrolling in the grid app list. This should improve performance a lot on slower devices.

Yari fixed support for the Meta key, and it should now properly bring up the homescreen when pressed.

Lockscreen

Devin did some performance refactoring, and also set the clock text to bold to improve contrast.

KScreen

Aleix fixed wakeups while the screen is off and the device is rotated. Rotations are now only tracked when the display is on.

KWin

Xaver added support for device panel orientations. This means that devices like the OnePlus 5 (which has an upside-down mounted display) will now have the orientation correct by default, and not inverted for touch input.

Other

The bug that led to shell configurations sometimes being wiped at start has been fixed in the upcoming Plasma 5.26.4 release.

Seshan worked on an updated design for the power menu, and it now includes a logout button.

Weather

Devin spent time addressing feedback through the KDEReview process, in preparation for moving the application to the KDE Gear release cycle. These changes include:

  • The settings dialog being switched to use a window in desktop mode
  • The scrollbars being added to views
  • Re-implementing the location list reordering to be much nicer to use
  • Many bugfixes

Recorder

Devin also spent time on the Recorder app, addressing feedback through the KDEReview process in preparation for moving the application to the KDE Gear release cycle. These changes include:

  • The Recorder page now uses a fullscreen layout
  • The recording player layout has been reworked to be easier to use
  • The settings dialog is a window in desktop mode
  • Recordings now start immediately when the record button is pressed
  • You can now export recordings to a different location
  • A bug that added suffixes to recorded file names for no reason was corrected
  • Many bugfixes and UX improvements

Clock

Devin fixed an issue where looping timers could have multiple ongoing notifications and the user was not able to dismiss them.

Terminal

Devin did some bug fixing work on the terminal application. He fixed command deletion not being saved in certain cases, and also fixed the bug that made the whole window close when Ctrl-D was pressed.

Dialer

Based on feedback from the previous incoming call screen updates, Alexey introduced support for changing the answer controls: he provided buttons, plus a selection of asymmetric and symmetric answer swipes. He also implemented call duration and caller ID support for the incoming call screen, with updates to both the daemon and the GUI logic.

Marco, along with Alexey, fixed an issue where there was no ringtone when the phone was in lock screen mode and no additional notification was shown. Marco also helped Alexey improve KWin's logic for parsing application privileges, such as the lock screen overlay.

Volker introduced initial support for the Qt6 CI builds.

Devin ported Dialer settings to the new form components.

Spacebar

Michael added attachment previews to notifications. He also made it so that image attachment previews are shown in the chat list. Another thing he implemented is support for tapback reactions. There is now a confirmation dialog before deleting a chat conversation, to prevent accidental deletion. Michael also made it so that MMS messages can be downloaded even when Wi-Fi is also connected.

Discover

Aleix worked on a more helpful homepage that better displays featured applications.

Tokodon

Carl ported Tokodon's settings to the new form components. He also updated the timeline by automatically cropping and rounding the images, improving the typography and fixing some sizing issues.

Volker fixed multiple bugs in the timeline and reduced the transfer volume on a "warm" start-up by 80%. For the technical details, you might want to read his blog post: Secure and efficient QNetworkAccessManager use

NeoChat

Tobias has made a lot of progress on end-to-end encryption. You can read more about it in his blog post: NeoChat, encryption, and thanks for all the olms

But that's not all, aside from the end-to-end encryption implementation, there was also a lot of changes to NeoChat's configuration settings. James and Carl ported many settings to the new form components. James additionally created a new settings component for managing your notifications settings directly from NeoChat. Gary Wang made it possible to configure a proxy for NeoChat. Tobias improved the settings on Android (hiding the irrelevant settings).

Tobias implemented a basic developer tool that allows inspecting raw matrix events.

Carl added a confirmation dialog when signing out and Tobias added another confirmation dialog when enabling end-to-end encryption.

Tobias rewrote the account switcher to make it easier to switch between accounts.

Kasts

Bart added support for streaming to Kasts and episodes can now be listened to without the need to download them first. For people that don't care about downloading episodes, there is a new setting allowing you to select streaming over downloading. If this setting is activated, it will show streaming buttons on the UI instead of download and play buttons.

Settings

Devin did some major fixes to the cellular network settings module, ensuring that the toggle state always matches the one used in the shell. He also improved behavior for when there is no SIM, as well as added more helpful messages if an APN is not configured. Some UI issues on the APN page were also fixed.

Devin also fixed accent colors being set from the wallpaper not working in the colors settings module.

Raven

Devin fixed some issues with the new account setup. At Akademy we discussed sharing code between Kalendar and Raven.

Łukasz fixed the edit button showing on the time page even when no entries are listed.

Audiotube

Jonah implemented a lyrics view in the player, and made it possible to filter recent search queries. He also added real album cover images instead of monochrome icons in all song lists.

Mathis made a few UI improvements, including rounded images and new list headers.

Actions for each song (like add to queue, etc.) are now in a popup menu. This allows you to favorite songs without having to play them.

Contributing

Want to help with the development of Plasma Mobile? Take Plasma Mobile for a spin! Check out the device support for each distribution and find the version which will work on your phone.

Our documentation gives information on how and where to report issues. Also, consider joining our Matrix channel, and let us know what you would like to work on!

Categories: FLOSS Project Planets

Spyder IDE: Improvements to the Spyder IDE installation experience

Planet Python - Tue, 2022-11-29 19:00

Juan Sebastian Bautista, C.A.M. Gerlach and Carlos Cordoba also contributed to this post.

Spyder 5.4.0 was released recently, featuring some major enhancements to its Windows and macOS standalone installers. You'll now get more detailed feedback when new versions are available, and you can download and start the update to them from right within Spyder, instead of having to install them manually. In this post, we'll go over how these new update features work and how you can start using them!

Before proceeding, we want to acknowledge that this work was made possible by a Small Development Grant awarded to Spyder by NumFOCUS, which has enabled us to hire a new developer (Juan Sebastian Bautista Rojas) to be in charge of all the implementation details.

Before these improvements, Spyder already had a mechanism to detect more recent versions, but that functionality was very simple. There was a pop-up dialog warning that a new version was available, but users had to follow a link to manually download the installer and then run it themselves:

Once you upgrade to Spyder 5.4.0 or above, you'll get this message on future Spyder updates:

Spyder will now be able to automatically download and install a new version for you, much like many other popular applications.

After clicking "Yes" on that dialog, Spyder will display another dialog with the status and percent completion of the download.

If it is closed, the download will continue in the background, with its progress shown in a new status bar widget.

After the download completes, Spyder will ask if you want to update immediately, cancel the update or defer it to when you close Spyder, to avoid interrupting your current workflow.

If you choose to update immediately, or once you close Spyder if you deferred the update, our installer will be started automatically. On Windows, the installer has a series of automated prompts to close the current instance, uninstall the previous version and finally install the new one:

On macOS, Spyder will automatically mount the new version's DMG, so you can simply drag and drop it into the Applications folder.

We hope these improvements will make updating to future Spyder versions smoother and more straightforward, so we can bring you new features and enhancements more easily in the future!

Categories: FLOSS Project Planets
