Feeds

Valhalla's Things: Bookbinding: photo album

Planet Debian - Sun, 2023-03-05 19:00
Posted on March 6, 2023

When I paint postcards I tend to start with a draft (usually on lightweight (250 g/m²) watercolour paper), then trace1 the drawing onto blank postcards and paint it again.

I keep the drafts for a number of reasons; for the views / architectural ones I’m using a landscape photo album that I bought many years ago, but lately I’ve also sent a few cards with my historical outfits to people who like to be kept updated on that, and I wanted a different book for those, both for better organization and to be able to keep them in the portrait direction.

If you know me, you can easily guess that buying one was never considered an option.

Since I’m not going to be writing on the pages, I decided to use a relatively cheap 200 g/m² linoprint paper with a nice feel, and I’ve settled on a B6 size (before trimming) to hold A6 postcard drafts.

For the binding I’ve decided to use a technique I learned from a craft book ages ago that doesn’t use tapes, and added a full hard cover in dark grey linen-feel2 paper. For the endpapers I used some random sheets of light blue paper (probably around 100-something g/m²); that’s where I could have done better, but they work.

Up to this point there was nothing I hadn’t done before; what was new was that this book was meant to hold things between the pages, and I needed to provide space for them.

After looking on the internet for solutions, I settled on adding spacers by making a signature composed of paper - spacer - paper - spacer, with the spacers being 2 cm wide, folded in half.

And then, between finishing binding the book and making the cover I utterly forgot to add the head bands. Argh. It’s not the first time I make this error.

I’m happy enough with the result. There are things that are easy to improve on in the next iteration (endpapers and head bands). Something in me is also not 100% happy that the spacers aren’t placed between every sheet: there are places with no spacer and places with two of them. But I can’t think of (and couldn’t find) a way to do otherwise with a sewn book, short of sewing each individual sheet, which sounds way too bulky (the album I’m using for the landscapes was glued, but I didn’t really want to go that way).

The size is smaller than the other one I was using and doesn’t leave a lot of room around the paintings, but that isn’t necessarily a bad thing, because it also means less wasted space.

I believe that one of my next projects will be another similar book in a landscape format, for those postcard drafts that are neither landscapes nor clothing related.

And then maybe another? or two? or…

Traceback (most recent call last):
TooManyProjectsError: project queue is full
  1. yes, trace. I can’t draw. I have too many hobbies to spend the required amount of time every day to practice it. I’m going to fake it. 85% of the time I’m tracing from a photo I took myself, so I’m not even going to consider it cheating.↩︎

  2. the description of which, on the online shop, made it look like fabric, even if the price was suspiciously low, so I bought a sheet to see what it was. It wasn’t fabric. It feels and looks nice, but I’m not sure how sturdy it’s going to be.↩︎

Categories: FLOSS Project Planets

Enrico Zini: Heart-driven drum loop

Planet Debian - Sun, 2023-03-05 17:53

I have Python code for reading a heart rate monitor.

I have Python code to generate MIDI events.

Could I resist putting them together? Clearly not.

Here's Jack Of Hearts, a JACK MIDI drum loop generator that uses the heart rate for BPM, and an improvised way to compute heart rate increase/decrease to add variations in the drum pattern.

It's very simple minded and silly. To me it was a fun way of putting unrelated things together, and Python worked very well for it.

Categories: FLOSS Project Planets

Test and Code: 194: Test & Code Returns

Planet Python - Sun, 2023-03-05 16:15

A brief discussion of why Test & Code has been off the air for a bit, and what to expect in upcoming episodes.

Links:

  • Python Testing with pytest, 2nd Edition: https://pythontest.com/pytest-book/
  • Getting started with pytest Online Course: https://training.talkpython.fm/courses/getting-started-with-testing-in-python-using-pytest
  • Software Testing with pytest Training: https://pythontest.com/training/
  • Python Bytes Podcast: https://pythonbytes.fm/
Categories: FLOSS Project Planets

grep @ Savannah: grep-3.9 released [stable]

GNU Planet! - Sun, 2023-03-05 11:09

This is to announce grep-3.9, a stable release.

The NEWS below describes the two main bug fixes since 3.8.

There have been 38 commits by 4 people in the 26 weeks since 3.8.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (2)
  Carlo Marcelo Arenas Belón (2)
  Jim Meyering (11)
  Paul Eggert (23)

Jim
 [on behalf of the grep maintainers]
==================================================================

Here is the GNU grep home page:
    http://gnu.org/s/grep/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.9
or run this command from a git-cloned grep directory:
  git shortlog v3.8..v3.9

Here are the compressed sources:
  https://ftp.gnu.org/gnu/grep/grep-3.9.tar.gz   (2.7MB)
  https://ftp.gnu.org/gnu/grep/grep-3.9.tar.xz   (1.7MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/grep/grep-3.9.tar.gz.sig
  https://ftp.gnu.org/gnu/grep/grep-3.9.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  f84afbfc8d6e38e422f1f2fc458b0ccdbfaeb392  grep-3.9.tar.gz
  7ZF6C+5DtxJS9cpR1IwLjQ7/kAfSpJCCbEJb9wmfWT8=  grep-3.9.tar.gz
  bcaa3f0c4b81ae4192c8d0a2be3571a14ea27383  grep-3.9.tar.xz
  q80RQJ7iPUyvNf60IuU7ushnAUz+7TE7tfSIrKFwtZk=  grep-3.9.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify grep-3.9.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify grep-3.9.tar.gz.sig

This release was bootstrapped with the following tools:
  Autoconf 2.72a.65-d081
  Automake 1.16i
  Gnulib v0.1-5861-g2ba7c75ed1

NEWS

* Noteworthy changes in release 3.9 (2023-03-05) [stable]

** Bug fixes

  With -P, some non-ASCII UTF8 characters were not recognized as
  word-constituent due to our omission of the PCRE2_UCP flag. E.g.,
  given f(){ echo Perú|LC_ALL=en_US.UTF-8 grep -Po "$1"; } and
  this command, echo $(f 'r\w'):$(f '.\b'), before it would print ":r".
  After the fix, it prints the correct results: "rú:ú".
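Python's re module illustrates the same distinction, since its \w is Unicode-aware by default, which is roughly what the PCRE2_UCP flag enables for grep -P (a Python analogy, not grep's actual code path):

```python
import re

# By default, Python's \w is Unicode-aware, so "ú" counts as
# word-constituent -- the behaviour grep -P gains from PCRE2_UCP.
assert re.findall(r"r\w", "Perú") == ["rú"]

# Under re.ASCII, \w falls back to [a-zA-Z0-9_], mirroring the old
# buggy behaviour where non-ASCII characters were not word-constituent.
assert re.findall(r"r\w", "Perú", re.ASCII) == []
```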

  When given multiple patterns the last of which has a back-reference,
  grep no longer mistakenly matches lines in some cases.
  [Bug#36148#13 introduced in grep 3.4]

Categories: FLOSS Project Planets

Enrico Zini: Generating MIDI events with JACK and Python

Planet Debian - Sun, 2023-03-05 06:14

I had a go at trying to figure out how to generate arbitrary MIDI events and send them out over a JACK MIDI channel.

Setting up JACK and Pipewire

Pipewire has a JACK interface, which in theory means one could use JACK clients out of the box without extra setup.

In practice, one needs to tell JACK clients which set of libraries to use to communicate with the server, and Pipewire's JACK server is not the default choice.

To tell JACK clients to use Pipewire's server, you can either:

  • on a client-by-client basis, wrap the commands with pw-jack
  • to change the system default: cp /usr/share/doc/pipewire/examples/ld.so.conf.d/pipewire-jack-*.conf /etc/ld.so.conf.d/ and run ldconfig (see the Debian wiki for details)
Programming with JACK

Python has a JACK client library that worked flawlessly for me so far.

Everything with JACK is designed around minimizing latency. Everything happens around a callback that gets called from a separate thread and that gets a buffer to fill with events.

All the heavy processing needs to happen outside the callback, and the callback is only there to do the minimal amount of work needed to shovel the data your application produced into JACK channels.
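That division of labour can be sketched in plain Python (hypothetical names; the real JACK client library registers a process callback that JACK invokes, rather than a method you call yourself):

```python
import threading
from collections import deque


class MidiSource:
    """Producer/consumer split: heavy work outside, shovelling inside."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._pending: deque = deque()

    def produce(self, events) -> None:
        # Application thread: free to block, allocate, and compute.
        with self._lock:
            self._pending.extend(events)

    def on_process(self, max_events: int) -> list:
        # "Realtime" thread: take only what fits in this buffer and leave.
        out = []
        with self._lock:
            while self._pending and len(out) < max_events:
                out.append(self._pending.popleft())
        return out


src = MidiSource()
src.produce(["note_on", "note_off"])
```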

Generating MIDI messages

The Mido library can be used to parse and create MIDI messages and it also worked flawlessly for me so far.

One needs to study a bit what kind of MIDI message one needs to generate (like "note on", "note off", "program change") and what arguments they get.

It also helps to read about the General MIDI standard which defines mappings between well-known instruments and channels and instrument numbers in MIDI messages.
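Mido abstracts all of this away, but the underlying wire format is simple enough to sketch by hand; each message is a status byte (message type in the high nibble, channel in the low nibble) followed by data bytes:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    # Status byte 0x9n: "note on" on channel n (zero-based).
    return bytes([0x90 | channel, note, velocity])


def note_off(channel: int, note: int) -> bytes:
    # Status byte 0x8n: "note off"; release velocity sent as 0 here.
    return bytes([0x80 | channel, note, 0])


def program_change(channel: int, program: int) -> bytes:
    # Status byte 0xCn: select an instrument, e.g. a General MIDI number.
    return bytes([0xC0 | channel, program])


# In General MIDI, channel 9 (zero-based) is reserved for percussion.
DRUM_CHANNEL = 9
```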

A timed message queue

To keep a queue of events that happen over time, I implemented a Delta List that indexes events by their future frame number.

I called the humble container for my audio experiments pyeep and here's my delta list implementation.
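In outline, a delta list stores each event's frame offset relative to the previous event, so advancing time only has to touch the head of the list. A simplified sketch of the idea (not the pyeep implementation, which is linked above and differs in detail):

```python
class DeltaList:
    """Events indexed by future frame number, stored as deltas."""

    def __init__(self) -> None:
        # Each entry is [frames_after_previous_entry, event].
        self.events: list = []

    def add(self, frames: int, event) -> None:
        # Find the slot, converting the absolute frame count to a delta.
        idx = 0
        for entry in self.events:
            if frames < entry[0]:
                break
            frames -= entry[0]
            idx += 1
        self.events.insert(idx, [frames, event])
        # The entry after us is now relative to us, not our predecessor.
        if idx + 1 < len(self.events):
            self.events[idx + 1][0] -= frames

    def pop_ready(self, nframes: int) -> list:
        """Advance time by nframes frames, returning the events inside."""
        ready = []
        elapsed = 0
        while self.events and elapsed + self.events[0][0] < nframes:
            delta, event = self.events.pop(0)
            elapsed += delta
            ready.append((elapsed, event))  # offset within this block
        if self.events:
            self.events[0][0] -= nframes - elapsed
        return ready
```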

A JACK player

The simple JACK MIDI player backend is also in pyeep.

It needs to protect the delta list with a mutex since we are working across thread boundaries, but it tries to do as little work under lock as possible, to minimize the risk of locking the realtime thread for too long.

The play method converts delays in seconds to frame counts, and the on_process callback moves events from the queue to the jack output.
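The frame conversion and the locking discipline look roughly like this (hypothetical names, not the actual pyeep backend; the real callback also has to respect event offsets within the buffer):

```python
import threading


class TimedQueue:
    def __init__(self, samplerate: int) -> None:
        self.samplerate = samplerate
        self._lock = threading.Lock()
        self._queue: list = []  # (frames_until_due, event) pairs

    def play(self, event, delay_sec: float = 0.0) -> None:
        # Application thread: convert wall-clock delay to audio frames.
        frames = round(delay_sec * self.samplerate)
        with self._lock:
            self._queue.append((frames, event))

    def take_due(self, nframes: int) -> list:
        # Realtime callback: hold the lock only long enough to swap data.
        with self._lock:
            due = [e for f, e in self._queue if f < nframes]
            self._queue = [(f - nframes, e) for f, e in self._queue
                           if f >= nframes]
        return due
```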

Here's an example script that plays a simple drum pattern:

#!/usr/bin/python3
# Example JACK midi event generator
#
# Play a drum pattern over JACK
import time

from pyeep.jackmidi import MidiPlayer

# See:
# https://soundprogramming.net/file-formats/general-midi-instrument-list/
# https://www.pgmusic.com/tutorial_gm.htm

DRUM_CHANNEL = 9

with MidiPlayer("pyeep drums") as player:
    beat: int = 0
    while True:
        player.play("note_on", velocity=64, note=35, channel=DRUM_CHANNEL)
        player.play("note_off", note=38, channel=DRUM_CHANNEL, delay_sec=0.5)
        if beat == 0:
            player.play("note_on", velocity=100, note=38, channel=DRUM_CHANNEL)
            player.play("note_off", note=36, channel=DRUM_CHANNEL, delay_sec=0.3)
        if beat + 1 == 2:
            player.play("note_on", velocity=100, note=42, channel=DRUM_CHANNEL)
            player.play("note_off", note=42, channel=DRUM_CHANNEL, delay_sec=0.3)
        beat = (beat + 1) % 4
        time.sleep(0.3)

Running the example

I ran the jack_drums script, and of course not much happened.

First I needed a MIDI synthesizer. I installed fluidsynth and ran it on the command line with no arguments. It registered with JACK, ready to do its thing.

Then I connected things together. I used qjackctl, opened the graph view, and connected the MIDI output of "pyeep drums" to the "FLUID Synth input port".

fluidsynth's output was already automatically connected to the audio card and I started hearing the drums playing! 🥁️🎉️

Categories: FLOSS Project Planets

Season of KDE Blog #1

Planet KDE - Sun, 2023-03-05 06:12

I was first introduced to Linux when my outdated computer could not run Windows anymore. I had previously come across some information about Linux while surfing the internet, but my initial impression was that it was highly sophisticated and exclusively for programmers, or mainly hackers :). I finally started using Linux Mint, and was moved by how quick and configurable it was 🤩.

After a few distro-hops, I finally settled on KDE Neon, amazed by how easily customizable it was ❣️

Motivation for applying to SoK’23

I was very much content with KDE Plasma, but I always found some minor annoying bugs here and there. It was also around this time that I got started with programming and was looking into ways to get better at it, so I decided that I would contribute to KDE 😇. Like every other beginner, I didn’t know how to navigate KDE’s large repositories. I then learned about the Season of KDE program, which helps onboard new contributors to KDE, so I decided to apply.

My Project

My project for SoK’23 is improving the accessibility of Tokodon by writing Appium tests. It involves two parts: the first was making Tokodon work without an internet connection so that it is ready for tests, and the second is writing GUI tests to improve accessibility. You can find my full project proposal here: SoK Proposal for Tokodon.

Work done so far 🤗

Week 1-2:

In my first week, I researched how I could run Tokodon without network connectivity. I tried to reverse engineer the existing unit tests and created a new start file, offline-main.cpp; by the end of the second week, I could start Tokodon without network connectivity, albeit with some broken UI.

Week 3-4:

The next step was writing an Appium test for the search functionality. For this, I first fixed the broken search UI by reversing the already-written unit test for search, after which I wrote my first test for the search GUI. The final result is shown in the gif below:

Week 5-6:

In these weeks, with the help of the maintainers of Tokodon, I fixed the breaking pipelines of tokodon-offline and also wrote another Appium test covering the different types of timeline statuses.

What I will be doing in the upcoming weeks

In the upcoming weeks I plan to add more Appium tests and fix broken UI elements in tokodon-offline.

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in February 2023

Planet Debian - Sun, 2023-03-05 03:53

Welcome to the February 2023 report from the Reproducible Builds project. As ever, if you are interested in contributing to our project, please visit the Contribute page on our website.

FOSDEM 2023 was held in Brussels on the 4th & 5th of February and featured a number of talks related to reproducibility. In particular, Akihiro Suda gave a talk titled Bit-for-bit reproducible builds with Dockerfile discussing deterministic timestamps and deterministic apt-get (original announcement). There was also an entire ‘track’ of talks on Software Bill of Materials (SBOMs). SBOMs are an inventory for software with the intention of increasing the transparency of software components (the US National Telecommunications and Information Administration (NTIA) published a useful Myths vs. Facts document in 2021).


On our mailing list this month, Larry Doolittle was puzzled why the Debian verilator package was not reproducible [], but Chris Lamb pointed out that this was due to the use of Python’s datetime.fromtimestamp over datetime.utcfromtimestamp [].


James Addison also was having issues with a Debian package: in this case, the alembic package. Chris Lamb was also able to identify the Sphinx documentation generator as the cause of the problem, and provided a potential patch that might fix it. This was later filed upstream [].


Anthony Harrison wrote to our list twice, first by introducing himself and their background and later to mention the increasing relevance of Software Bill of Materials (SBOMs):

As I am sure everyone is aware, there is a growing interest in [SBOMs] as a way of improving software security and resilience. In the last two years, the US through the Exec Order, the EU through the proposed Cyber Resilience Act (CRA) and this month the UK has issued a consultation paper looking at software security and SBOMs appear very prominently in each publication. []


Tim Retout wrote a blog post discussing AlmaLinux in the context of CentOS, RHEL and supply-chain security in general []:

Alma are generating and publishing Software Bill of Material (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What’s more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?

Debian

F-Droid & Android

diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats.

This month, Chris Lamb released versions 235 and 236; Mattia Rizzolo later released version 237.

Contributions include:

  • Chris Lamb:
    • Fix compatibility with PyPDF2 (re. issue #331) [][][].
    • Fix compatibility with ImageMagick version 7.1 [].
    • Require at least version 23.1.0 to run the Black source code tests [].
    • Update debian/tests/control after merging changes from others [].
    • Don’t write test data during a test [].
    • Update copyright years [].
    • Merged a large number of changes from others.
  • Akihiro Suda edited the .gitlab-ci.yml configuration file to ensure that versioned tags are pushed to the container registry [].

  • Daniel Kahn Gillmor provided a way to migrate from PyPDF2 to pypdf (#1029741).

  • Efraim Flashner updated the tool metadata for isoinfo on GNU Guix [].

  • FC Stegerman added support for Android resources.arsc files [], improved a number of file-matching regular expressions [][] and added support for Android dexdump []; they also fixed a test failure (#1031433) caused by Debian’s black package having been updated to a newer version.

  • Mattia Rizzolo:
    • updated the release documentation [],
    • fixed a number of Flake8 errors [][],
    • updated the autopkgtest configuration to only install aapt and dexdump on architectures where they are available [], making sure that the latest diffoscope release is a good fit for the upcoming Debian bookworm freeze.
reprotest

Reprotest version 0.7.23 was uploaded to both PyPI and Debian unstable, including the following changes:

  • Holger Levsen improved a lot of documentation [][][], tidied the documentation as well [][], and experimented with a new --random-locale flag [].

  • Vagrant Cascadian adjusted reprotest to no longer randomise the build locale and use a UTF-8 supported locale instead […] (re. #925879, #1004950), and to also support passing --vary=locales.locale=LOCALE to specify the locale to vary [].

Separate to this, Vagrant Cascadian started a thread on our mailing list questioning the future development and direction of reprotest.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, the following changes were made by Holger Levsen:

  • Add three new OSUOSL nodes [][][] and decommission the osuosl174 node [].
  • Change the order of listed Debian architectures to show the 64-bit ones first [].
  • Reduce the frequency that the Debian package sets and dd-list HTML pages update [].
  • Sort “Tested suite” consistently (and Debian unstable first) [].
  • Update the Jenkins shell monitor script to only query disk statistics every 230min [] and improve the documentation [][].
Other development work

disorderfs version 0.5.11-3 was uploaded by Holger Levsen, fixing a number of issues with the manual page [][][].


Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

Categories: FLOSS Project Planets

Brian Okken: Test & Code Returns

Planet Python - Sat, 2023-03-04 19:00
Did I get the title right? I’m not sure. Maybe I should have replaced “returns” with reimagined, revisited, continued, expanded, focused, or something even more descriptive that could help me with direction as I keep producing more episodes of this thing. This post is a reflection on why I stopped in August 2022. I’d also like to talk about where the podcast is going in the future. But I’m not really sure.
Categories: FLOSS Project Planets

OSM Hack Weekend Karlsruhe 2023

Planet KDE - Sat, 2023-03-04 05:00

Last weekend I attended an OSM hack weekend, hosted by Geofabrik in Karlsruhe, to progress some OSM-related topics around KDE Itinerary and to better connect KDE and OSM.

Platform section highlighting

The most visible result is that our train station map can now also highlight relevant platform sections, at least when platform sections are mapped in OSM. That’s rare in Germany at least, with Karlsruhe being one of the few exceptions, and thus providing the perfect opportunity for real-life testing.

Train station map in KDE Itinerary highlighting relevant platform sections.

This is currently only using schedule data, we still have to use the seat reservation and coach layout data when available in Itinerary for maximum usefulness.

Raw data tiles

Another noticeable result are the performance optimizations of the map geometry reassembly. That’s the process of putting the raw data tile data back together for displaying, which is 12x faster now.

That’s basically down to multiple occurrences of the following two issues:

  • std::vector is very picky when it comes to choosing to move rather than copy its content, e.g. when growing. Content being movable isn’t enough; the move operations also need to promise, using noexcept, that no exceptions will be thrown by them. In particular, implicitly generated move operations do not necessarily provide this, so even just explicitly declared but defaulted move constructors/operators can make a big difference.
  • Don’t do insertions (or removals) on a sorted vector in a loop. Collect all those changes and apply them in bulk instead (using the erase-remove idiom, or by appending new elements and sorting once at the end). The algorithmic complexity is O(n²) for the former and O(n log n) for the latter; for the amount of data we are working with here, that is a relevant difference.
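The examples above concern C++ std::vector, but the algorithmic point is language-neutral; here is the same comparison sketched in Python (both return the same result, only the complexity differs):

```python
import bisect


def insert_each(sorted_base: list, new_items: list) -> list:
    # O(n) element shifting per insertion -> O(n * m) overall.
    out = list(sorted_base)
    for item in new_items:
        bisect.insort(out, item)
    return out


def insert_bulk(sorted_base: list, new_items: list) -> list:
    # Append everything, then sort once: O((n + m) log (n + m)).
    out = list(sorted_base) + list(new_items)
    out.sort()
    return out
```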

Geometry reassembly became easy to measure and optimize in isolation as we now have a standalone command-line raw data tile reassembly tool. That’s part of the longer term goal to evolve KDE’s historically somewhat Marble-specific raw data tile infrastructure to something more in line with OSM standards and thus more useful for a wider range of applications.

And there is interest in that, but especially the exact selection/clipping and reassembly procedures need work (and documentation).

Data questions

I also managed to get input on two data/data modelling questions we ran into with the train station map:

  • How does the meaning of the level tag propagate to geometry in a multi-polygon relation, e.g. if inner polygons are themselves again used to represent different elements? Turns out it is not supposed to do that automatically, inner geometry needs to be tagged separately. That then fortunately required just a few fixes in the data, rather than adding complex logic to the floor level separation code.
  • The uic_ref tag on German railway stations seems to consistently contain the very similar looking IBNR number rather than the UIC station number it is supposed to contain. I had hoped I had missed something here, but it looks like this will indeed need a mass modification fix after all.
Tools and workflows

I am still quite new to the OSM world, so hanging out with experienced OSM people is also always a good opportunity to learn about their tools and workflows.

While I had been using JOSM for online data editing before I hadn’t realized you could also save modified extracts locally, in a format we can already load anyway even. That is a massive help for testing both code and data changes, and for creating dedicated test cases.

This wasn’t entirely obvious to discover in the Flatpak version of JOSM, as it has no host filesystem access and no proper file access portal integration. Switching on native file dialogs (Edit > Preferences > Display > Use native file choosers) and enabling host file system access in the Flatpak KCM works around that.

Infrastructure coordination

While our source code and data might be free, the infrastructure to distribute that costs real money. For map data in KDE applications those are the raster tiles servers and geo coding services from OSM, and the elevation and raw data tile servers from KDE.

OSM serves close to 20,000 raster tiles per second on average (about 70 of those for Marble or Marble-based applications); KDE’s more special-purpose tiles are requested a bit less than 3 times a second.

At that scale it’s important to ensure all applications are well-behaved, i.e. properly identify themselves, minimize requests, and use the most efficient way available to retrieve the data they need.

Most KDE applications using map data have received some form of compliance fixes in the past weeks for this; if your application didn’t, please get in touch.

But even the most efficient use doesn’t eliminate the need for powerful infrastructure, and this is something you can support with your donations to organizations like KDE and OSM.

Outlook

It’s not long until the next in-person meeting with OSM people, I’ll be speaking at the FOSSGIS conference in Berlin in just ten days from now, about how we use OSM data in KDE Itinerary (in German).

Categories: FLOSS Project Planets

This week in KDE: Plasma 6 begins

Planet KDE - Fri, 2023-03-03 22:38

As has been reported in various other places already, this week the “master” branches of Plasma-aligned software repos were ported to Qt 6. Work is ongoing, but the actual change-over is happening very quickly, and adventurous people are already able to run Plasma 6 in a usable state! This builds on years of work to port old code away from deprecated APIs and libraries, work that was quietly happening in the background all along, pushed forward by people like Nicolas Fella, Friedrich Kossebau, Volker Krause, and many others. It can be fairly thankless and boring-looking work, but it’s incredibly important, and it’s the foundation of how quickly this technical transition has been able to happen. So I find myself feeling quite optimistic about our chances of shipping a solid and high-quality Plasma 6 this year!

…And that’s the reason for this being a somewhat light week in terms of other things. But fear not! Plasma 5.27 continues to be maintained and bugfixed!

New Features

There’s now an option to change the visual intensity of the outline drawn around Breeze-decorated windows, or to disable them entirely. Currently this is slated to be released in Plasma 6.0, but we’re considering backporting it to 5.27 as well. Stay tuned! (Akseli Lahtinen, Plasma 6.0. Link):

User Interface Improvements

The new portal-based “Open With” dialog is no longer used by non-portal-using apps; they now get the older dialog again. This is still the future design direction we want to go in, but we plan to roll the new dialog out again only once it has all the features of the old dialog, so that nothing is lost in the transition (me: Nate Graham, Plasma 5.27.3. Link)

Linked buttons in Breeze-themed GTK apps like Rhythmbox now look better (Ivan Tkachenko, Plasma 5.27.3. Link):

Notifications in the history pop-up are now sorted chronologically, rather than by a somewhat difficult to understand combination of type and urgency (Joshua Goins, Plasma 6.0. Link)

The way sizes and positions of KDE app windows are remembered for multi-screen setups is now fundamentally more robust, so you should see fewer circumstances of windows having the wrong size and position when using multiple screens, especially when the specific screens change (me: Nate Graham, Frameworks 5.104. Link)

It’s now possible to directly delete items that are already in the trash (Méven Car, Frameworks 5.104. Link)

Significant Bugfixes

(This is a curated list of e.g. HI and VHI priority bugs, Wayland showstoppers, major regressions, etc.)

When using an NVIDIA graphics card, after you reboot or wake the system from sleep, external screens are no longer usually inappropriately disabled, and also icons and text throughout Plasma are no longer sometimes missing (Ivan Tkachenko, Plasma 5.27.2. Link 1 and link 2)

Fixed a case where KWin could crash when switching window decoration themes (Vlad Zahorodnii, Plasma 5.27.2. Link)

In the Plasma Wayland session, when the clipboard history has been set to only one item, it’s now possible to copy text with a single copy action, not two (David Redondo, Plasma 5.27.3. Link)

Desktop icons on the active activity should no longer inappropriately re-arrange themselves when the set of connected screens changes. However, during the investigation we discovered that the code for storing desktop icon positions is inherently problematic and in need of a fundamental rewrite, just like the one we did for multi-screen arrangement in Plasma 5.27. This will be done for Plasma 6.0, and will hopefully make Plasma’s long history of being bad at remembering desktop icon positions just that: history (Marco Martin, Plasma 5.27.3. Link)

Gwenview now only registers its MPRIS interface when it’s doing something (i.e. playing a slideshow) that’s controllable over MPRIS, which should prevent it from sometimes hijacking your global media playback shortcuts while it’s running normally (Joshua Goins, Gwenview 23.04. Link)

Other bug-related information of interest:

Automation & Systematization

We now have a new tutorial on how to create cursor themes (Magno Lomardo, Link)

We now have a tutorial on uploading your KDE app to the Microsoft Store (Thiago Sueto, Link)

Added an autotest to make sure that “preferred” apps that are not actually installed are omitted from the Task Manager as expected (Fushan Wen, Link)

Added an autotest to ensure that we’re handling System Tray icons from apps correctly (Fushan Wen, Link)

Plasma now has a “codemap” file to help people learn what and where things are (Bharadwaj Raju, Link)

…And everything else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

If you’re a user, upgrade to Plasma 5.27! If your distro doesn’t offer it and won’t anytime soon, consider switching to a different one that ships software closer to its developers’ schedules.

If you’re a developer, consider working on known Plasma 5.27 regressions! You might also want to check out our 15-Minute Bug Initiative. Working on these issues makes a big difference quickly!

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

And finally, KDE can’t work without financial support, so consider making a donation today! This stuff ain’t cheap and KDE e.V. has ambitious hiring goals. We can’t meet them without your generous donations!

Categories: FLOSS Project Planets

Matt Brown: Retrospective: Feb 2023

Planet Debian - Fri, 2023-03-03 20:03

February ended up being a very short work month as I made a last minute decision to travel to Adelaide for the first 2 weeks of the month to help my brother with some house renovations he was undertaking. I thought I might be able to keep up with some work and my writing goals in the evenings while I was there, but days of hard manual labour are such an unfamiliar routine for me that I didn’t have any energy left to make good on that intention.

The majority of my time and focus for the remaining one and a half weeks of the month went to catching up on the consulting work that I had pushed back while in Adelaide.

So while it doesn’t make for a thrilling first month to look back and report on, overall I’m not unhappy with what I achieved given the time available. Next month, I hope to be able to report some more exciting progress on the product development front as well.

Monthly Scoring Rubric

I’m evaluating each goal using a 10 point scale based on execution velocity and risk level, rather than absolute success (which is what I will look at in the annual/mid-year review). If velocity is good and risk is low or well managed the score is high, if either the velocity is low, or risk is high then the score is low. E.g:

  • 10 - perfect execution with low-risk, on track for significantly overachieving the goal.
  • 7 - good execution with low or well managed risk, highly likely to achieve the goal.
  • 5 - execution and risk are OK, should achieve the goal if all goes well.
  • 3 - execution or risk have problems, goal is at risk.
  • 0 - stalled, with no obvious path to recovery or success.
Goals

Consulting - 6/10

Goal: Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.

  • I have one active local engagement assisting a software team with migrating their application from a single to multi-region architecture.
  • Two promising international engagements which were close to starting both cancelled based on newly issued company policies freezing their staffing/outsourcing budgets due to the current economic climate.

I’m happy with where this is at - I hit 90% of my target hours in February (taking into account 2 weeks off) and the feedback I’m receiving is positive. The main risk is the future pipeline of engagements, particularly if the cancellations indicate a new pattern. I’m not overly concerned yet, as all the opportunities to date have been from direct or referred contacts in my personal network, so there’s plenty of potential to more actively solicit work to create a healthier pipeline.

Product Development - 3/10

Goal: Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential.

  • Accelerating electrification - I continued to keep up with industry news and added some interesting reports to my reading queue, but made no significant progress towards identifying a specific product opportunity.

  • Farm management SaaS - no activity or progress at all.

  • co2mon.nz - I put significant thought and planning into how to approach a second iteration of this product. I started writing and completed 80% of a post to communicate the revised business plan, but it’s not ready for publication yet, and even if it was, the real work towards it would need to actually happen to score more points here.

I had high hopes to make at least some progress in all three areas in February, but it just didn’t happen due to lack of time. The good news is that since the low score here is purely execution driven, there’s no new risks or blockers that will hinder much better progress here in March.

Professional Network Development - 8/10

Goal: To build a professional relationship with at least 30 new people this year.

This is off to a strong start: I made 4 brand new connections and re-established contact with 9 existing contacts I’d not talked to for a while. I’ve found the conversations energising and challenging, and I’m looking forward to keeping this up.

Writing - 2/10

Goal: To publish a high-quality piece of writing on this site at least once a week.

Well off track as already noted. I am enjoying the writing process and I continue to find it useful in developing my thoughts and forcing me to challenge my assumptions, but coupling the writing process with the thinking/planning that is a prerequisite to get those benefits definitely makes my output a lot slower than I was expecting.

The slower speed, combined with the obvious time constraints of this month, is not a great double whammy to be starting with, but with some planning and preparation it should have been avoidable by keeping a backlog of pre-written content for use in weeks where I’m on holiday or otherwise busy.

It’s worth noting that among all the useful feedback I received, this writing target was often called out as overly ambitious, or likely to be counterproductive to producing quality writing. The feedback makes sense - for now I’m not planning to change the goal (I might at my 6-month review point), but I am going to be diligent about adhering to my quality standard, which in turn means I’m choosing to accept missing a weekly post here and there and taking a lower score on the goal overall.

I apologise if you’ve been eagerly waiting for writing that never arrived over February!

Community - 5/10

Goal: To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

  • I was an invited participant of the monthly KiwiSRE meet-up which was discussing SRE team models, and in particular I was able to speak to my experiences as described in an old CRE blog post on this topic.

  • I joined the program committee for SREcon23 APAC which is scheduled for mid-June in Singapore. I also submitted two talk proposals of my own (not sharing the details for now, since the review process is intended to be blind) which I’m hopeful might make the grade with my fellow PC members!

Feedback

As always, I’d love to hear from you if you have thoughts or feedback triggered by anything I’ve written above. In particular, it would be useful to know whether you find this type of report interesting to read and/or what you’d like to see added/removed or changed.

Categories: FLOSS Project Planets

Zyxware Technologies: Drupal Updates vs Upgrades vs Migrations: What's the Difference and When Do You Need Them?

Planet Drupal - Fri, 2023-03-03 19:30
In the world of Drupal, the words updates, upgrades, and migrations often cause confusion. This article tries to explain what these words mean and the difference between them in the context of maintaining a Drupal website. By clearly understanding these terms, you can take appropriate actions and ensure that your Drupal site is always up-to-date and secure.
Categories: FLOSS Project Planets

Pythonicity: Decorator overuse

Planet Python - Fri, 2023-03-03 19:00
Decorators versus blocks and partial functions.

Decorators are a beloved feature of Python, but like any good thing can be overused. The key is acknowledging that decorators are just functions.

As the Python glossary puts it: “A function returning another function, usually applied as a function transformation using the @wrapper syntax. Common examples for decorators are classmethod() and staticmethod().”

The decorator syntax is merely syntactic sugar; the following two function definitions are semantically equivalent:

def f(arg): ...
f = staticmethod(f)

@staticmethod
def f(arg): ...

Renamed

So the critical feature of the @ syntax is to retain the defined object’s name; otherwise it is just a function call. Which leads to the first example of overuse: defining a new object just to change the name. Consider this example adapted from a popular project.

class Response:
    def __bool__(self):
        return self.ok

    @property
    def ok(self): ...

Since a property wraps a function, it is natural to put the implementation in the wrapped function instead. Then it becomes clear that the property does not share the same name, so why bother with @?

class Response:
    def __bool__(self): ...

    ok = property(__bool__)

A related scenario is where the local name of the function is irrelevant, which is typical in wrapped functions:

@functools.wraps(wrapped, assigned=WRAPPER_ASSIGNMENTS, updated=WRAPPER_UPDATES)

This is a convenience function for invoking update_wrapper() as a function decorator when defining a wrapper function. It is equivalent to partial(update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated). For example:

>>> from functools import wraps
>>> def my_decorator(f):
...     @wraps(f)
...     def wrapper(*args, **kwds):
...         print('Calling decorated function')
...         return f(*args, **kwds)
...     return wrapper

The “convenience function” is useless indirection when the wrapper is immediately returned. Even the documentation points out that wraps is just a partial function. The example could be simply:

def my_decorator(f):
    def wrapper(*args, **kwds):
        print('Calling decorated function')
        return f(*args, **kwds)
    return update_wrapper(wrapper, f)

Giving partial(update_wrapper, wrapped=f) a short name does not make it any clearer conceptually.

With blocks

Another sign is if the decorator’s functionality only executes code before or after the wrapped function. Context managers are inherently more flexible by providing the same functionality for any code block. In some cases a function boundary is natural to bookend, e.g., logging or timing. The question is whether the function block is too broad a context to manage.

Decorators were introduced in version 2.4; context managers in 2.5. All ancient history now, but decorators had a ~2 year head start. For example, transactions are a seminal use case for context managers, but Django pre-dates 2.5, so it had a transaction decorator first. This is how transactions are currently presented:

atomic is usable both as a decorator:

from django.db import transaction

@transaction.atomic
def viewfunc(request):
    # This code executes inside a transaction.
    do_stuff()

and as a context manager:

from django.db import transaction

def viewfunc(request):
    # This code executes in autocommit mode (Django's default).
    do_stuff()

    with transaction.atomic():
        # This code executes inside a transaction.
        do_more_stuff()

So it has both, but the decorator is presented first, and is it a good example? Seems likely that a full request would have setup and teardown work that is unrelated to a database transaction. It is uncontroversial to want try blocks to be as narrow as possible. Surely there is no benefit to a request operation rolling back a vacuous transaction, nor a response operation rolling back a transaction that was committable.

Any context manager can be trivially transformed into a decorator; the converse is not true. And even if the function block is coincidentally perfect, a with block has negligible impact on readability. It is just indentation.
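One concrete illustration of the trivial transformation: context managers built with contextlib.contextmanager also inherit from contextlib.ContextDecorator, so the very same object works as a with block and as a decorator. A minimal sketch (the traced and work names are invented for illustration):

```python
from contextlib import contextmanager

events = []

# Generator-based context managers returned by @contextmanager also
# inherit from contextlib.ContextDecorator, so one object serves both uses.
@contextmanager
def traced(name):
    events.append(f"enter {name}")
    try:
        yield
    finally:
        events.append(f"exit {name}")

# As a block:
with traced("block"):
    events.append("body")

# As a decorator -- no hand-written wrapper function needed:
@traced("func")
def work():
    events.append("work")
    return 42
```

Calling work() then records "enter func", "work", "exit func" around the body; the converse direction, turning an arbitrary decorator back into a context manager, has no such general recipe.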

Partial functions

Next is a lack of appreciation of partially bound functions. Many decorator examples go out of their way to write an unnecessary def statement, in order to make using a decorator look natural. The below example is common in Python tutorials.

import functools

def repeat(num_times):
    def decorator_repeat(func):
        @functools.wraps(func)
        def wrapper_repeat(*args, **kwargs):
            for _ in range(num_times):
                value = func(*args, **kwargs)
            return value
        return wrapper_repeat
    return decorator_repeat

@repeat(num_times=4)
def greet(name):
    print(f"Hello {name}")

greet("World")

Hello World
Hello World
Hello World
Hello World

First the obligatory observation that abstracting a for loop in Python is not necessarily a good idea. But assuming that is the goal, it is still worth questioning why repeating 4 times is coupled to the name greet. Is print supposed to represent the “real” function in this example, or should the wrapped function be named greet_4x? It is much simpler to start with the basic functionality and postpone how to wrap it.

def repeat(num_times, func, *args, **kwargs):
    for _ in range(num_times):
        value = func(*args, **kwargs)
    return value

def greet(name):
    print(f"Hello {name}")

repeat(4, greet, "World")

Hello World
Hello World
Hello World
Hello World

We can stop there really. But even assuming that the goal is to bind the repetition, using partial functions is still simpler.

from functools import partial

greet_4x = partial(repeat, 4, greet)
greet_4x("World")

Hello World
Hello World
Hello World
Hello World

Not exactly the same without wraps, but that would be trivial to add. Furthermore, wraps matters less here because partial objects can be easily introspected. Now onto the next - and dubious - assumption: that we really want it used as a decorator. This requires assuming the body of greet is not a simple call to an underlying wrapped function, and yet for some reason the repetition is supposed to be coupled to the wrapper function’s name anyway. Still simpler:

repeats = partial(partial, repeat, 4)

@repeats
def greet(name):
    print(f"Hello {name}")

greet("World")

Hello World
Hello World
Hello World
Hello World

Nested partials may appear a little too clever, but they are just the flatter version of the original nested repeat functions. And again, none of this indirection is necessary.
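To make the introspection point concrete, here is a sketch reusing the flat repeat from above (binding print rather than greet, and four_times is an invented name): the binding is plainly visible as attributes, unlike a closure cell hidden in a nested def.

```python
from functools import partial

def repeat(num_times, func, *args, **kwargs):
    "Call func(*args, **kwargs) num_times times, returning the last value."
    for _ in range(num_times):
        value = func(*args, **kwargs)
    return value

four_times = partial(repeat, 4, print)

# partial objects expose their components directly:
print(four_times.func.__name__)  # repeat
print(four_times.args)           # (4, <built-in function print>)

four_times("World")              # prints "World" four times
```

There is nothing comparable to inspect on a wrapper_repeat closure without digging through __closure__ cells.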

For loops

A real-world example of repeat is retrying functions until success, optionally with delays. A popular one uses examples like:

@backoff.on_exception(backoff.expo, requests.exceptions.RequestException)
def get_url(url):
    return requests.get(url)

The same pattern (ahem) repeats. The decorated function is a trivial wrapper around the “real” function. Why not:

get_url = backoff.on_exception(
    backoff.expo, requests.exceptions.RequestException
)(requests.get)

Furthermore, for loops can be customized via the __iter__ protocol, just as with blocks are customizable. The author’s waiter package demonstrates the same functionality with for loops and undecorated functions.
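A hand-rolled sketch of the iterator angle (this illustrates the idea only; it is not the actual waiter API): a class whose __iter__ yields attempt numbers turns retrying into a plain for loop over any undecorated function.

```python
import time

class Retries:
    """Iterate to retry: yields attempt numbers 0..attempts-1,
    sleeping `delay` seconds between attempts."""
    def __init__(self, attempts, delay=0.0):
        self.attempts = attempts
        self.delay = delay

    def __iter__(self):
        for attempt in range(self.attempts):
            if attempt:
                time.sleep(self.delay)
            yield attempt

# A stand-in fallible function that succeeds on its third call.
calls = 0
def flaky():
    global calls
    calls += 1
    if calls < 3:
        raise OSError("transient failure")
    return "ok"

result = None
for attempt in Retries(5):
    try:
        result = flaky()
    except OSError:
        continue
    break

print(result, calls)  # ok 3
```

The retry policy stays decoupled from any particular function name, and the except clause is visible at the call site rather than buried in a decorator argument.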

Advocacy

So before assuming a decorator is the right abstraction, start with whether a def function is the right abstraction. Building out functionality in this progression works well:

  1. code blocks: with and for, which are customizable
  2. flat functions
  3. nested functions: using partial
  4. decorated functions
Categories: FLOSS Project Planets

Louis-Philippe Véronneau: Goodbye Bullseye — report from the Montreal 2023 BSP

Planet Debian - Fri, 2023-03-03 16:30

Hello World! I haven't really had time to blog here since the start of the semester, as I've been pretty busy at work1.

All this to say, this report for the Bug Squashing Party we held in Montreal last weekend is a little late, sorry :)

First of all, I'm pleased to announce our local community seems to be doing great and has recovered from the pandemic-induced lull. May COVID stay away from our bodies forever.

This time around, a total of 9 people made it to what has become somewhat of a biennial tradition2. We worked on a grand total of 14 bugs and even managed to close some!

It looks like I was too concentrated on bugs to take a picture of the event... To redeem myself, I hereby offer you a picture of a cute-but-hairless cat I met on Sunday morning:

You should try to join an upcoming BSP or to organise one if you can. It's loads of fun and you'll be helping the project make the next release happen sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Goodbye Bullseye!

  1. Which I guess is a good thing, since it means I actually have work this semester :O 

  2. See our previous BSPs in 2017, 2019 and 2021

Categories: FLOSS Project Planets

Go Deh: Function purity and idempotence

Planet Python - Fri, 2023-03-03 15:43

 Someone mentioned idempotence at work. I looked it up and noted that it too is a property of functions, like function purity.

I decided to see if I could write functions with combinations of those properties and embedded tests for those properties.

Running my resultant program produces this result:

Created on Fri Mar  3 18:04:09 2023

@author: Paddy3118

pure_idempotent.py
    Explores Purity and idempotence with Python examples

Definitions:
    Pure:
    * Answer relies solely on inputs. Same out for same in.
      I.E: `f(x) == f(x) == f(x) == ...`
    * No side-effects.

    Idempotent:
    * The first answer from any input, if used as input to
      subsequent runs of the function, will all yield the same answer.
      I.E: `f(x) == f(f(x)) == f(f(f(x))) == ...`
    * Any side effect of a first function execution is *preserved* on
      subsequent runs of the function using the previous answer.

    Side effect:
    * A function is said to have side effects if it relies upon or modifies
      state outside of that *given* by its arguments. Modifying mutable
      arguments is also a side effect.

#--------

def remove_twos(arg: list[int]) -> list[int]:
    "Returns a copy of the list with all twos removed."
    return [x for x in arg if x != 2]

Function is:
  Pure
  Idempotent

#--------

def return_first_int(arg: int) -> int:
    "Return the int given in its first call"
    global external_state

    if external_state is None:
        external_state = arg
    return external_state

Function is:
  Impure! External state changed
  Idempotent

#--------

def plus_one(arg: int) -> int:
    "Add one to arg"
    return arg + 1

Function is:
  Pure
  Non-idempotent! Output changes for nested calls

#--------

def epoc_plus_seconds(secs: float) -> float:
    "Return time since epoch + seconds"
    time.sleep(0.1)
    return time.time() + secs

Function is:
  Impure! Output changes for same input
  Non-idempotent! Output changes for nested calls
Code

The code that produces the above (but not its arbitrary colourising), is the following:

# -*- coding: utf-8 -*-
"""Created on Fri Mar  3 18:04:09 2023

@author: Paddy3118

pure_idempotent.py
    Explores Purity and idempotence with Python examples

Definitions:
    Pure:
    * Answer relies solely on inputs. Same out for same in.
      I.E: `f(x) == f(x) == f(x) == ...`
    * No side-effects.

    Idempotent:
    * The first answer from any input, if used as input to
      subsequent runs of the function, will all yield the same answer.
      I.E: `f(x) == f(f(x)) == f(f(f(x))) == ...`
    * Any side effect of a first function execution is *preserved* on
      subsequent runs of the function using the previous answer.

    Side effect:
    * A function is said to have side effects if it relies upon or modifies
      state outside of that *given* by its arguments. Modifying mutable
      arguments is also a side effect.
"""
import inspect

print(__doc__)

# %% Pure, idempotent.
print('\n#--------')

def remove_twos(arg: list[int]) -> list[int]:
    "Returns a copy of the list with all twos removed."
    return [x for x in arg if x != 2]

print(f"\n{inspect.getsource(remove_twos)}")
arg0 = [1, 2, 3, 2, 4, 5, 2]
print('Function is:')
print('  Pure' if remove_twos(arg0.copy()) == remove_twos(arg0.copy())
      else '  Impure')
print('  Idempotent' if (answer1 := remove_twos(arg0)) == remove_twos(answer1)
      else 'Non-idempotent')

# %% Impure, idempotent.
print('\n#--------')

def return_first_int(arg: int) -> int:
    "Return the int given in its first call"
    global external_state

    if external_state is None:
        external_state = arg
    return external_state

print(f"\n{inspect.getsource(return_first_int)}")
# Purity
external_state = initial_state = None
arg0 = 1
same_output = return_first_int(arg0) == return_first_int(arg0)
same_state = external_state == initial_state
print('Function is:')
if same_output and same_state:
    print('  Pure')
else:
    if not same_output:
        print('  Impure! Output changes for same input')
    if not same_state:
        print('  Impure! External state changed')
# Idempotence
external_state = None
answer1, state1 = return_first_int(arg0), external_state
answer2, state2 = return_first_int(answer1), external_state
same_output = answer1 == answer2
same_state = state1 == state2
if same_output and same_state:
    print('  Idempotent')
else:
    if not same_output:
        print('  Non-idempotent! Output changes for nested calls')
    if not same_state:
        print('  Non-idempotent! External state changes for nested calls')

# %% Pure, non-idempotent.
print('\n#--------')

def plus_one(arg: int) -> int:
    "Add one to arg"
    return arg + 1

print(f"\n{inspect.getsource(plus_one)}")
# Purity
arg0 = 1
same_output = plus_one(arg0) == plus_one(arg0)
print('Function is:')
if same_output:
    print('  Pure')
else:
    print('  Impure! Output changes for same input')
# Idempotence
answer1 = plus_one(arg0)
answer2 = plus_one(answer1)
same_output = answer1 == answer2
if same_output:
    print('  Idempotent')
else:
    print('  Non-idempotent! Output changes for nested calls')

# %% Impure, non-idempotent.
print('\n#--------')
import time

def epoc_plus_seconds(secs: float) -> float:
    "Return time since epoch + seconds"
    time.sleep(0.1)
    return time.time() + secs

print(f"\n{inspect.getsource(epoc_plus_seconds)}")
# Purity
arg0 = 1
same_output = epoc_plus_seconds(arg0) == epoc_plus_seconds(arg0)
print('Function is:')
if same_output:
    print('  Pure')
else:
    print('  Impure! Output changes for same input')
# Idempotence
answer1 = epoc_plus_seconds(arg0)
answer2 = epoc_plus_seconds(answer1)
same_output = answer1 == answer2
if same_output:
    print('  Idempotent')
else:
    print('  Non-idempotent! Output changes for nested calls')

END.

Categories: FLOSS Project Planets

PyBites: Jim Hodapp on coaching software engineers and the power of Rust

Planet Python - Fri, 2023-03-03 12:44

Watch here:

Or listen here:

This week we have Jim Hodapp on our podcast.

We talk about his career journey going from software engineer + manager to full-time developer coach, some of the tactics he uses with his clients, and why coaching is a powerful tool for software engineers.

Then we pivot to a more technical discussion about Rust, his passion for the language, why it’s an interesting language to consider, also for Python developers, and to his developer community Rust Never Sleeps.

We hope you enjoy this interview and that it inspires (and challenges) you to keep learning new things and expand your horizons.

Links:

– Jim’s website

– Embedded Rust WiFi crate for RP2040 microcontrollers

– Rust web application to monitor home ambient air conditions

– Jim’s coaching/Rust community

– Connect with Jim: Pybites / Twitter / LinkedIn

– Mentioned books: The Staff Engineer’s Path / Thich Nhat Hanh Essential Writings / The School of Life

Categories: FLOSS Project Planets

EuroPython: EuroPython February 2023 Newsletter

Planet Python - Fri, 2023-03-03 10:57

Dobrý den!

It’s March already, the days are flying by and EuroPython in Prague will soon be here! So, what’s been going on?

🐍 EuroPython 2023 Conference Update

🇨🇿 Prague

Since our last newsletter, where we announced our venue will be in Prague, we’ve put together a page containing links and details about the city, its infrastructure and the sorts of things you could explore outside the conference. You can find it here: https://ep2023.europython.eu/where (and we welcome suggestions for additions to this guide).

🧨 Call for Proposals (CFP)

EuroPython 2023 Call for Proposals (CFP) will be open between Monday, 6 March 2023 and Sunday, 19 March 2023.

https://ep2023.europython.eu/cfp. More details will be published soon when we open our CFP.

EuroPython reflects the colourful and diverse backgrounds, cultures and interests of our community, so you (yes, you!) should go for it: propose something and represent!

No matter your level of Python or public speaking experience, EuroPython's job is to help you bring yourself to our community so we all flourish and benefit from each other's experience and contribution.

If you’re thinking, “but they don’t mean me”, then we especially mean YOU.

  1. If you’re from a background that isn't usually well-represented in most Python groups, get involved - we want to help you make a difference.
  2. If you’re from a background that is well-represented in most Python groups, get involved - we want your help making a difference.
  3. If you’re worried about not being technical enough, get involved - your fresh perspective will be invaluable.
  4. If you think you’re an imposter, join us - many of the EuroPython organisers feel this way.
  5. This is a volunteer led community, so join us - you will be welcomed with friendship, support and compassion.

You are welcome to share your questions and ideas with our programme team at programme@europython.eu

👩‍🏫 Speaker Mentorship

As a diverse and inclusive community EuroPython offers support for potential speakers who want help preparing for their contribution. To achieve this end we have set up the speaker mentorship programme.

We are looking for both mentors and mentees to be a part of the programme.

To become a mentor, you need to fill in the application form here. If you are a mentee in need of help contributing to EuroPython, especially if you are from an underrepresented or marginalised group in the tech industry, please fill in the form here. We will get in touch with you to update you on working with a mentor and how to participate in the workshops.

Along with this, we will also run an Ask Me Anything session for the CFP and a workshop for first-time speakers.

More details on https://ep2023.europython.eu/mentorship

🎙️ Keynotes

We are thrilled to announce our first keynote speaker for 2023, the New York Times bestselling author Andrew Smith.

Andrew has recently finished a book, due out later this year, about what it feels like to learn how to code. His language of choice was Python and, as part of his research, he became (and continues to be) involved in several different aspects of the Python and wider FLOSS community.

Andrew often appears before live audiences and on radio and TV, and has written and presented a number of films and radio series, including the 60-minute BBC TV documentaries Being Neil Armstrong and To Kill a Mockingbird at 50, and the three-part BBC Radio 4 history of the lives of submariners, People of the Abyss. The last decade has seen his focus shift more squarely to the digital revolution and its social implications, with high profile magazine and comment pieces appearing in The Economist’s 1843 magazine, The Financial Times and the US and UK editions of The Guardian. Smith also features in Stanford University's History of the Internet podcast series.

Find out more about Andrew via his website: https://www.andrewsmithauthor.com/

Andrew interviewing Buzz Aldrin

🎫 Ticket Sales

As usual, there will be several ticket options, so you can choose the one most suitable for you. Ticket sales are expected to start on 21 March. We are aiming to keep the ticket prices at an affordable level for all tiers, despite cost increases and inflation. Check out our ticket page to find more information: https://ep2023.europython.eu/tickets

💶 Financial Aid

As part of our commitment to the Python community, we offer special grants for people in need of financial aid to attend EuroPython. These grants include a free ticket grant, a travel and accommodation grant of up to € 400, and a visa application fee grant of up to € 80.

We will review grant applications and award grants in two rounds this year. The submission deadline for the first round will be on 23 April 2023 and the deadline for the second round will be on 21 May 2023. If you submit your application in the first round, you will automatically be considered in the second round as well. So, apply early to increase your chances.

The Financial Aid Programme is now open for application. For more information and a link to the application form, check out https://europython.eu/finaid.

🥇 Speakers Placement Programme

We arrange speaking opportunities for mentees at a local event or meetup before or after EuroPython to help boost their confidence. If you are an event or meetup organiser looking for speakers, please fill in this form and we would be happy to introduce a mentee to you if there’s a match.

📞 Call for Trans*Code Volunteers

EuroPython Society champions diversity & inclusion. Following the success and fun we had at our Trans*Code event in Dublin, we are hosting a Trans*Code event again at EuroPython 2023 in Prague - an informal hackday & workshop event which aims to help draw attention to transgender issues and opportunities.

The event is open to trans and non-binary folk, allies, coders, designers and visionaries of all sorts. Check the interviews with our 2022 Trans*Code participants to get an idea and enjoy the warmth.

This year, we are again privileged to have Naomi Ceder on board to help and advise us with the organisation. We want to make EuroPython 2023 an exceptionally welcoming place for trans people and folks from under-represented groups in tech. We need more volunteers to achieve our goal! If you identify as trans or non-binary and would like to volunteer your experience and time to help us organise the event, please write to trans_code@europython.eu; or to Naomi Ceder at naomi@europython.eu (if you need to discuss something more private). If you are an ally, help us spread the word and lend us your support.

🎉 EPS New Fellows

We are overjoyed to announce the EuroPython Society Fellows in the first quarter of 2023: Naomi Ceder, Cheuk Ting Ho, Francesco Pierfederici and Jakub Musko. We are grateful for the significant contribution every one of them has made to our EuroPython community. You can read about their achievement here: www.europython-society.org/europython-fellow/

🐍 Upcoming Events in Europe

🦊 Project Feature - FoxDot

This amazing project helps you live-code music using Python, turning your favourite programming language into a musical instrument.

https://github.com/Qirky/FoxDot

FoxDot does this with a programming environment that provides a fast and user-friendly abstraction over SuperCollider. It also comes with its own IDE, which means it can be used straight out of the box; all you need is Python and SuperCollider and you're ready to go!

We had a very nice lightning talk about this project at EuroPython 2019 https://www.youtube.com/watch?v=N7q4lB49IGM and a full length one at Pycon US 2019 https://www.youtube.com/watch?v=YUIPcXduR8E

🎗️ Clacks of Remembrance

In the Terry Pratchett novel "Going Postal", a telegraph-style system known as "Clacks" was used to pass the name of a deceased character endlessly back and forth, keeping their memory alive. But where the book had "GNU John Dearheart" -- the prefix being a basic code to instruct clacksmen to pass on, not file, and return the message -- we add meta headers to our base template in a silent but appropriately geeky tribute and act of remembrance to those in our EuroPython family we have lost.

We invite you to take a moment to “View page source” and remember our departed friends.

🤭 PyJok.es

$ pip install pyjokes
Collecting pyjokes
  Downloading pyjokes-0.6.0-py2.py3-none-any.whl (26 kB)
Installing collected packages: pyjokes
Successfully installed pyjokes-0.6.0
$ pyjoke

Why do sin and tan work? Just cos.

Add your own jokes to PyJokes (a project invented at a EuroPython sprint) via this issue: https://github.com/pyjokes/pyjokes/issues/10

Categories: FLOSS Project Planets

Sven Hoexter: exfat-fuse 1.4 in experimental

Planet Debian - Fri, 2023-03-03 10:39

I know a few people hold on to the exFAT fuse implementation due to its support for timezone offsets, so here is a small update for you. Andrew released 1.4.0, which includes the timezone offset support that was so far only part of the git master branch. It also fixes a security issue, CVE-2022-29973, which is very minor from my point of view. In addition, it's the first build with fuse3 support. If you still use this driver, pick it up in experimental (we're in the bookworm freeze right now) and give it a try. I'm personally not using it anymore beyond a very basic "does it mount" test.

Categories: FLOSS Project Planets

Python for Beginners: Add Column to Pandas DataFrame in Python

Planet Python - Fri, 2023-03-03 09:00

Pandas dataframes are used to handle tabular data in Python. Sometimes, we need to create new columns in a dataframe for analysis. This article discusses how to add a column to a pandas dataframe in Python.

Table of Contents
  1. Add An Empty Column to a Pandas DataFrame
    1. Create An Empty Column Using a Series in the Pandas DataFrame
  2. Add Columns at The End of a DataFrame in Python
    1. Add Columns at The End of a DataFrame Using Direct Assignment
  3. Add Multiple Columns at the End of a Pandas DataFrame
  4. Add Columns to DataFrame Using the assign() Method in Python
  5. Add Columns at a Specific Index in a Pandas DataFrame
  6. Add a Column Based on Another Column in a Pandas DataFrame
  7. Conclusion
Add An Empty Column to a Pandas DataFrame

You might think that assigning an empty list to a new column name would add an empty column to a pandas dataframe. However, this isn't true. If you assign an empty list to a dataframe column, the program will run into an error. You can observe this in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Name"]=[]
print("The modified dataframe is:")
print(df)

Output:

ValueError: Length of values (0) does not match length of index (6)

In this example, you can observe that we have assigned an empty list to create the "Name" column in the dataframe. As the list is empty, the program runs into a Python ValueError exception.

In contrast, we can assign a scalar value like a string or a number to a dataframe column. In this case, the value is broadcast to all the rows of the dataframe. You can observe this in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Name"]="Aditya"
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name
0     1    100       80         90  Aditya
1     2     80      100         90  Aditya
2     3     90       80         70  Aditya
3     4    100      100         90  Aditya
4     5     90       90         80  Aditya
5     6     80       70         70  Aditya

In this example, I have assigned the value "Aditya" to create the "Name" column. Although it's just a single value, it has been broadcast to all the rows of the dataframe. We will use this property of a dataframe to add an empty column to the pandas dataframe.

To add an empty column to a pandas dataframe, assign an empty string to the new column name using the indexing operator, with the following syntax.

dataframe[column_name]=""

After executing the above statement, a new empty column will be added to the dataframe. You can observe this in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Name"]=""
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry Name
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70

In the above example, we have assigned an empty string to the "Name" column. The "Name" column in the output dataframe might look empty to you, but that isn't quite accurate: each value in the "Name" column is actually an empty string.
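To confirm that the column really holds empty strings rather than missing values, you can check it directly. A minimal sketch (using a smaller, illustrative dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Roll": [1, 2, 3]})
df["Name"] = ""  # broadcasts the empty string to every row

# Every value is an empty string, not a missing value
print(df["Name"].eq("").all())   # True
print(df["Name"].isna().any())   # False
```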

If you want the new column to have NaN values instead of the empty string, you can assign np.nan, pd.NA, or None value to the dataframe column as shown below.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Name"]=pd.NA
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry  Name
0     1    100       80         90  <NA>
1     2     80      100         90  <NA>
2     3     90       80         70  <NA>
3     4    100      100         90  <NA>
4     5     90       90         80  <NA>
5     6     80       70         70  <NA>

In this example, we have passed the pd.NA value instead of the empty string to the "Name" column. You can observe this in the output.

Create An Empty Column Using a Series in the Pandas DataFrame

Instead of assigning a scalar value, you can create an empty series and assign it to a column of the dataframe to add a new column. For this, we will create an empty series. Then, we will assign the series to the dataframe column as shown below.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Name"]=pd.Series()
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry  Name
0     1    100       80         90   NaN
1     2     80      100         90   NaN
2     3     90       80         70   NaN
3     4    100      100         90   NaN
4     5     90       90         80   NaN
5     6     80       70         70   NaN

In this example, we have created a pandas series using the Series() function. Then, we assigned it to the "Name" column in the dataframe.
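Assigning a Series behaves differently from assigning a list: pandas aligns it on the index instead of checking its length, and any row without a matching index label gets NaN. A minimal sketch of this (the names and indices below are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Roll": [1, 2, 3, 4]})

# The Series only covers index labels 0 and 2; rows 1 and 3 become NaN
df["Name"] = pd.Series(["Aditya", "Sam"], index=[0, 2])
print(df)
#    Roll    Name
# 0     1  Aditya
# 1     2     NaN
# 2     3     Sam
# 3     4     NaN
```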

Add Columns at The End of a DataFrame in Python

We can add a new column to a dataframe using different approaches. Let us discuss all these approaches one by one.

Add Columns at The End of a DataFrame Using Direct Assignment

To add a new column at the end of a dataframe, you can directly assign a list of values to it using the following syntax.

dataframe[column_name]=list_of_values

After executing the above statement, the new column will be added to the dataframe. You can observe this in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya", "Anne"]
df["Name"]=names
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name
0     1    100       80         90  Aditya
1     2     80      100         90    Joel
2     3     90       80         70     Sam
3     4    100      100         90   Chris
4     5     90       90         80    Riya
5     6     80       70         70    Anne

In this example, we created the "Name" column in the pandas dataframe by assigning a list of names to the dataframe.

In the above example, if the list doesn't contain as many values as there are rows in the dataframe, the program will run into a Python ValueError exception as shown below.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya"]
df["Name"]=names
print("The modified dataframe is:")
print(df)

Output:

ValueError: Length of values (5) does not match length of index (6)

In this example, the original dataframe has six rows. However, the list we assigned to the "Name" column has only five values. Due to this, the program runs into a ValueError exception.

Add Multiple Columns at the End of a Pandas DataFrame

To add multiple columns to the dataframe at the same time, you can unpack a list of value lists into the new columns, as shown in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya", "Anne"]
heights=[180,170,164,177,167,175]
df["Name"],df["Height"]= [names,heights]
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name  Height
0     1    100       80         90  Aditya     180
1     2     80      100         90    Joel     170
2     3     90       80         70     Sam     164
3     4    100      100         90   Chris     177
4     5     90       90         80    Riya     167
5     6     80       70         70    Anne     175

In the above example, we have used list unpacking to assign a list of lists to multiple columns in the pandas dataframe.
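An alternative that avoids the unpacking step is to assign to a list of column labels directly; in recent pandas versions (1.2 and later) this also works for columns that don't exist yet, with one row of values expected per dataframe row. A minimal sketch under that assumption:

```python
import pandas as pd

df = pd.DataFrame({"Roll": [1, 2, 3]})
names = ["Aditya", "Joel", "Sam"]
heights = [180, 170, 164]

# zip pairs the lists row-wise: ("Aditya", 180), ("Joel", 170), ("Sam", 164)
df[["Name", "Height"]] = list(zip(names, heights))
print(df)
```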

Instead of directly assigning the list to the dataframe, you can use the loc attribute of the dataframe to add a column to the dataframe as shown below.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya", "Anne"]
heights=[180,170,164,177,167,175]
df.loc[:,"Name"],df.loc[:,"Height"]= [names,heights]
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name  Height
0     1    100       80         90  Aditya     180
1     2     80      100         90    Joel     170
2     3     90       80         70     Sam     164
3     4    100      100         90   Chris     177
4     5     90       90         80    Riya     167
5     6     80       70         70    Anne     175

Add Columns to DataFrame Using the assign() Method in Python

The pandas module provides us with the assign() method to add a new column to a dataframe. The assign() method, when invoked on a dataframe, takes the new column's name as a keyword argument whose value is the list of column values. After execution, it returns the modified dataframe.

You can observe this in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya", "Anne"]
df=df.assign(Name=names)
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name
0     1    100       80         90  Aditya
1     2     80      100         90    Joel
2     3     90       80         70     Sam
3     4    100      100         90   Chris
4     5     90       90         80    Riya
5     6     80       70         70    Anne

In the above example, we have passed the list of names as the input to the Name parameter in the assign() method. Hence, the assign() method adds a new column with the column name "Name" to the dataframe and returns the modified dataframe.
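Note that assign() returns a new dataframe and leaves the original untouched, which is why the result is assigned back to df above. A quick sketch of this behaviour (with a small illustrative dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Roll": [1, 2, 3]})
df2 = df.assign(Name=["Aditya", "Joel", "Sam"])

print("Name" in df.columns)   # False: the original is unchanged
print("Name" in df2.columns)  # True: only the returned copy has it
```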

You can also add multiple columns to the dataframe using the assign() method. For this, we pass one keyword argument per new column, each with its associated list of values. After execution of the assign() method, we will get the modified dataframe as shown in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya", "Anne"]
heights=[180,170,164,177,167,175]
df=df.assign(Name=names, Height=heights)
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry    Name  Height
0     1    100       80         90  Aditya     180
1     2     80      100         90    Joel     170
2     3     90       80         70     Sam     164
3     4    100      100         90   Chris     177
4     5     90       90         80    Riya     167
5     6     80       70         70    Anne     175

Add Columns at a Specific Index in a Pandas DataFrame

In all the above examples, the new column is added at the end of the dataframe. However, we can also add a column at a specified position in the dataframe. To add a column to a dataframe at a specific index, we can use the insert() method. It has the following syntax.

dataframe.insert(index, column_name, column_values)

Here, 

  • The index parameter takes the index at which the new column has to be inserted as its input argument. 
  • The column_name parameter takes the name of the column to be inserted into the dataframe.
  • The column_values parameter takes the list containing values in the new column.

After execution, the insert() method inserts a new column into the dataframe at the specified position. Remember that the original dataframe is modified when we add a column into a data frame using the insert() method.
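Another consequence of insert() working in place is that it refuses to add a column that already exists, raising a ValueError unless allow_duplicates=True is passed. A small sketch of this behaviour (with an illustrative dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Roll": [1, 2, 3]})
df.insert(1, "Name", ["Aditya", "Joel", "Sam"])

try:
    # Inserting the same column name again raises ValueError
    df.insert(1, "Name", ["X", "Y", "Z"])
except ValueError as e:
    print("ValueError:", e)
```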

To add a column at a specific index in a pandas dataframe, we will invoke the insert() method on the dataframe. Here, we will pass the index, column name, and values as the first, second, and third input arguments respectively. After execution of the insert() method, we will get the output dataframe.

You can observe this in the following example.

import pandas as pd
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
names=["Aditya","Joel", "Sam", "Chris", "Riya", "Anne"]
df.insert(2, "Name", names)
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths    Name  Physics  Chemistry
0     1    100  Aditya       80         90
1     2     80    Joel      100         90
2     3     90     Sam       80         70
3     4    100   Chris      100         90
4     5     90    Riya       90         80
5     6     80    Anne       70         70

Add a Column Based on Another Column in a Pandas DataFrame

Instead of a completely new column, we can also add a column based on an existing column in a dataframe. For this, we can use the apply() method. The pandas apply method, when invoked on the column of a dataframe, takes a function as its input argument. It then executes the function with each value of the column as its input and creates a new series object using the function outputs.

We can then assign the series object returned by the apply() method to add a new column based on another column in the pandas dataframe as shown below.

import pandas as pd
def grade_calculator(marks):
    if marks>90:
        return "A"
    elif marks>80:
        return "B"
    else:
        return "C"
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Physics Grade"]=df["Physics"].apply(grade_calculator)
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry Physics Grade
0     1    100       80         90             C
1     2     80      100         90             A
2     3     90       80         70             C
3     4    100      100         90             A
4     5     90       90         80             B
5     6     80       70         70             C
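For larger dataframes, a vectorised alternative to apply() is pd.cut, which bins all the marks in one pass instead of calling a Python function per row. The bin edges below are chosen to mirror the grade_calculator thresholds (marks > 90 is "A", marks > 80 is "B", otherwise "C"):

```python
import pandas as pd

df = pd.DataFrame({"Physics": [80, 100, 80, 100, 90, 70]})

# Right-closed bins: (-inf, 80] -> C, (80, 90] -> B, (90, inf) -> A
df["Physics Grade"] = pd.cut(
    df["Physics"],
    bins=[-float("inf"), 80, 90, float("inf")],
    labels=["C", "B", "A"],
)
print(df)
```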

Instead of using the apply() method, we can also use the map() method to add a new column in a dataframe based on an existing column as shown in the following example.

import pandas as pd
def grade_calculator(marks):
    if marks>90:
        return "A"
    elif marks>80:
        return "B"
    else:
        return "C"
myDicts=[{"Roll":1,"Maths":100, "Physics":80, "Chemistry": 90},
         {"Roll":2,"Maths":80, "Physics":100, "Chemistry": 90},
         {"Roll":3,"Maths":90, "Physics":80, "Chemistry": 70},
         {"Roll":4,"Maths":100, "Physics":100, "Chemistry": 90},
         {"Roll":5,"Maths":90, "Physics":90, "Chemistry": 80},
         {"Roll":6,"Maths":80, "Physics":70, "Chemistry": 70}]
df=pd.DataFrame(myDicts)
print("The input dataframe is:")
print(df)
df["Physics Grade"]=df["Physics"].map(grade_calculator)
print("The modified dataframe is:")
print(df)

Output:

The input dataframe is:
   Roll  Maths  Physics  Chemistry
0     1    100       80         90
1     2     80      100         90
2     3     90       80         70
3     4    100      100         90
4     5     90       90         80
5     6     80       70         70
The modified dataframe is:
   Roll  Maths  Physics  Chemistry Physics Grade
0     1    100       80         90             C
1     2     80      100         90             A
2     3     90       80         70             C
3     4    100      100         90             A
4     5     90       90         80             B
5     6     80       70         70             C

Conclusion

In this article, we have discussed different methods to add a column to a pandas dataframe. To learn more about pandas dataframes, you can read this article on how to check for not null values in pandas. You might also like this article on how to select multiple columns in a pandas dataframe.

I hope you enjoyed reading this article. Stay tuned for more informative articles.

Happy Learning!

The post Add Column to Pandas DataFrame in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

The Drop Times: Stay Tuned for Interview with John Jameson | DrupalCamp NJ

Planet Drupal - Fri, 2023-03-03 08:55
Our interview with DrupalCamp NJ speaker John Jameson, a Digital Accessibility Developer at Princeton University, will be published tomorrow.
Categories: FLOSS Project Planets

Pages