FLOSS Project Planets
Daniel Roy Greenfeld: TIL: Fractional Indexing
In the past when I've done this for web pages and various other interfaces, it has been a mess. I've built ungainly sort orders in numeric or alphanumeric batches. Inevitably there is a conflict, often sooner rather than later, so sorting a list of things often means updating all the elements to preserve the order in the datastore. I've learned to mark each element with a big value, but it's ugly and ungainly.
Fortunately for me, going forward, I now know about Fractional Indexing.
References:
- https://www.figma.com/blog/realtime-editing-of-ordered-sequences/
- https://observablehq.com/@dgreensp/implementing-fractional-indexing
- https://github.com/httpie/fractional-indexing-python
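The core idea is easy to sketch: give each item a sort key strictly between its neighbours' keys, so inserting or reordering never touches the other rows. Here is a toy illustration using float midpoints (real implementations, like the fractional-indexing-python package linked above, use variable-length string keys to avoid running out of precision):

```python
def key_between(a: float, b: float) -> float:
    """Return a sort key strictly between a and b."""
    return (a + b) / 2

# Two existing items with sort keys 1.0 and 2.0
first, second = 1.0, 2.0

# Insert a new item between them without renumbering anything
middle = key_between(first, second)

print(middle)  # → 1.5
assert first < middle < second
```

Sorting by these keys preserves the intended order, and only the inserted row's key ever needs to be written to the datastore.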
Daniel Roy Greenfeld: TIL: Python Dictionary Merge Operator
Until today I did this:
# Make first dict
num_map = {
    'one': '1',
    'two': '2',
    'three': '3',
    'four': '4',
    'five': '5',
    'six': '6',
    'seven': '7',
    'eight': '8',
    'nine': '9'
}
# Add second dict
num_map.update({str(x): str(x) for x in range(1, 10)})
print(num_map)

The operator way

Now, thanks to Audrey Roy Greenfeld, I know I can do this:
# Make first dict while adding second dict
num_map = {
    'one': '1',
    'two': '2',
    'three': '3',
    'four': '4',
    'five': '5',
    'six': '6',
    'seven': '7',
    'eight': '8',
    'nine': '9'
} | {str(x): str(x) for x in range(1, 10)}
print(num_map)

Daniel Roy Greenfeld: TIL: Python's defaultdict takes a factory function
I've never really paid attention to this object, but maybe I should have. It takes a single argument: a callable. If you pass in a Python type, the default value is that type called with no arguments. For example, if I use int as the instantiating argument, then it gives us a zero.
>>> from collections import defaultdict
>>>
>>> mydict = defaultdict(int)
>>> print(mydict['anykey'])
0

Note that defaultdict also acts like a regular dictionary, in that you can set keys. So mydict['me'] = 'danny' will work as you expect it to with a standard dictionary.
It gets more interesting if we pass in a more dynamic function. In the example below we use random.randint and a lambda to make the default value be a random number between 1 and 100.
>>> from random import randint
>>>
>>> random_values = defaultdict(lambda: randint(1, 100))

Let's try it out!
>>> for i in range(5):
...     print(random_values[i])
...
29
90
56
42
70
>>> print(random_values)
defaultdict(<function <lambda> at 0x72d292bb6de0>, {0: 29, 1: 90, 2: 56, 3: 42, 4: 70})

Attribution goes to Laksman Prasad, who pointed this out and encouraged me to take a closer look at defaultdict.
Daniel Roy Greenfeld: TIL: How to reset Jupyter notebook passwords
Attribution for this goes to Johno Whitaker.
Daniel Roy Greenfeld: TIL: Arity
I'm excited to have learned there's a word for the count of arguments to a function/method/class: arity. Throughout my career I would have called this any of the following:
- number_of_args
- param_count
- numargs
- intArgumentCount
Thanks to Simon Willison for using it in a library or two and making me look up the word.
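As a quick aside, Python can report a callable's arity at runtime via the standard library's inspect module:

```python
import inspect

def greet(name, greeting="Hello"):
    return f"{greeting}, {name}!"

# The arity: how many parameters the function accepts
arity = len(inspect.signature(greet).parameters)
print(arity)  # → 2
```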
Daniel Roy Greenfeld: TIL: Using hx-swap-oob with FastHTML
Until now I didn't use this HTMX technique, but today Audrey Roy Greenfeld and I dove in together to figure it out. Note that we use language that may not match HTMX's description, sometimes it's better to put things into our own words so we understand it better.
from fasthtml.common import *

app, rt = fast_app()

def mk_row(name, email):
    return Tbody(
        # Only the Tr element and its children is being
        # injected, the Tbody isn't being injected
        Tr(Td(name), Td(email)),
        # This tells HTMX to inject this row at the end of
        # the #contacts-tbody DOM element
        hx_swap_oob="beforeend:#contacts-tbody",
    ),

@rt
def index():
    return Div(
        H2("Contacts"),
        Table(
            Thead(Tr(Th("Name"), Th("Email"))),
            Tbody(
                Tr(Td("Audrey"), Td("mommy@example.com")),
                Tr(Td("Uma"), Td("kid@example.com")),
                Tr(Td("Daniel"), Td("daddy@example.com")),
                # Identifies the contacts-tbody DOM element
                id="contacts-tbody",
            ),
        ),
        H2("Add a Contact"),
        Form(
            Label("Name", Input(name="name", type="text")),
            Label("Email", Input(name="email", type="email")),
            Button("Save"),
            hx_post="/contacts",
            # Don't swap out the contact form
            hx_swap='none',
            # Reset the form and put focus onto the name field
            hx_on__after_request="this.reset();this.name.focus();"
        )
    )

@rt
def contacts(name: str, email: str):
    print(f"Adding {name} and {email} to table")
    return mk_row(name, email)

serve()

To verify the behavior, view the rendered elements in your browser of choice before, during, and after submitting the form.
Daniel Roy Greenfeld: TIL: Using Python to remove prefixes and suffixes
Starting in Python 3.9, s.removeprefix() and s.removesuffix() were added as str built-ins, which easily covers all the versions of Python I currently support.
Usage for removeprefix():

>>> 'Spam, Spam'.removeprefix('Spam')
', Spam'
>>> 'Spam, Spam'.removeprefix('This is not in the prefix')
'Spam, Spam'

Usage for removesuffix():

>>> 'Spam, Spam'.removesuffix('Spam')
'Spam, '
>>> 'Spam, Spam'.removesuffix('This is not in the suffix')
'Spam, Spam'

Daniel Roy Greenfeld: Using locust for load testing
Locust is a Python library that makes it relatively straightforward to write load tests in Python. This heavily commented code example explains each section of code. To use locust:
- Install locust: pip install locust
- Copy the file below into the directory where you want to run locust
- In that directory, at the command-line, type: locust
- Open http://localhost:8089/
For reference, this is the test site used to create the above locustfile. I'll admit that the above test is incomplete; a lot more tasks could be added to hit other web routes. To use it:
- Install FastHTML: pip install python-fasthtml
- Copy the file into the directory you want to run it
- In that directory, at the command-line, type: python cats.py
- Open http://localhost:5001/
- 2024-11-08 Use SequentialTaskSet as recommended by Audrey Roy Greenfeld
- 2024-11-08 Fixed a few bugs in cats.py
Daniel Roy Greenfeld: TIL: Autoreload for Jupyter notebooks
Add these commands to the top of a notebook within a Python cell. Thanks to Jeremy Howard for the tip.
%load_ext autoreload
%autoreload 2

Daniel Roy Greenfeld: TIL: run vs source
Run

A run launches a child process in a new bash within bash, so variables last only for the lifetime of the command. This is why launching Python environments doesn't use run.
./list-things.sh

Source

A source runs in the current bash, so variables last beyond the running of a script. This is why launching Python environments uses source.
source ~/.venv/bin/activate

Talking Drupal: Talking Drupal #481 - Drupal Marketing & Drupal CMS
Today we are talking about Drupal Marketing, how it applies to Drupal CMS, and what the future of Drupal and Drupal CMS marketing looks like with guest Suzanne Dergacheva. We’ll also cover Drupal 11.1 as our module of the week.
For show notes visit: https://www.talkingDrupal.com/481
Topics:
- Drupal marketing moves
- New brand
- Marketing people at the DA
- Goal of marketing
- How does this impact Drupal CMS
- Drupal CMS marketing
- How will you educate people about the differences between core and CMS
- Any challenges
- How do you like the new homepage
- Next steps to move the brand forward
- Case studies
- Why did you volunteer
- If someone wants to get involved how can they
- Brand Portal
- Drupal.org homepage
- Case study guidelines
- Webinar with Suzanne and Rosie Gladden about Key Strategies for Expanding Drupal’s Reach
- Advent Calendar - Freelock.com - 24 days of Drupal automations
Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Suzanne Dergacheva - evolvingweb.com pixelite
MOTW Correspondent: Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you been wanting a version of Drupal with improvements to the recipes system, the ability to write hooks as classes, and an icon management API? The new Drupal 11.1 release has all of that and more.
- Module name/project name:
- Brief history
- How old: created on Dec 16 by catch of Tag1 and Third & Grove
- Module features and usage
- We’ve talked a number of times on this show about the recipes system, particularly because it’s at the heart of Drupal CMS. In Drupal 11.1, recipes can define whether or not to use strict comparison for provided configuration, and there are a ton of new config actions. These allow your recipe to place blocks, take user input, enable layout builder for content types, clone configuration entities, and more. It’s a huge leap forward, and I think you’ll quickly see a number of recipes that require Drupal 11.1 or newer.
- Hooks have long been a powerful Drupalism that allow for deep customization of how your website functions. These hooks can now be written as classes, thanks to the new Hook attribute on methods. This will bring many of the object-oriented benefits of modern Drupal to the hooks system, and should also make it easier for developers new to Drupal to understand the code to create these customizations.
- A new Icon Management API allows themes and modules to define icon packs, with unique identifiers for each included icon.
- Drupal 11.1 also includes PHP 8.4 support. I haven’t been able to find any data on speed improvements compared to PHP 8.3, but there are interesting new features like property hooks, asymmetric visibility, new functions for finding array items, and more.
- There are plans to use Workspaces for content moderation, so the UI for Workspaces is now in a separate module. For new site builds, if you want your editors to be able to use Workspaces, you’ll need to remember to enable this new UI module as well.
- New installs of Drupal 11.1 will also see improvements to the initial experience. These include defaulting to admin-created user accounts only, not adding the body field by default when creating new content types, and more.
- Drupal 11.1 also includes a new views entity reference filter, opt-in render caching for forms, and improved browser and CDN caching for JavaScript and CSS, among a host of other improvements.
- A number of these improvements will also find their way into the upcoming 10.4 release, ensuring, for example, that recipes built to use the new config actions can be used with the Long-Term Support (LTS) version of Drupal, which will be supported until the stable release of Drupal 12 in mid- to late-2026.
texinfo @ Savannah: Texinfo 7.2 released
We have released version 7.2 of Texinfo, the GNU documentation format.
It's available via a mirror (xz is much smaller than gz, but gz is available too just in case):
http://ftpmirror.gnu.org/texinfo/texinfo-7.2.tar.xz
http://ftpmirror.gnu.org/texinfo/texinfo-7.2.tar.gz
Please send any comments to bug-texinfo@gnu.org.
Full announcement:
https://lists.gnu.org/archive/html/bug-texinfo/2024-12/msg00043.html
DXPR: A Christmas Message: Empowering Communities with AI for a Brighter Digital Future
This Christmas, I want to share a vision for the year ahead—one rooted in the principles of openness, collaboration, and empowerment. Just as the spirit of giving inspires acts of kindness, the open-source community, including Drupal, shows us how collective effort can create tools that serve everyone. At this pivotal moment in the evolution of artificial intelligence, I believe it’s our responsibility to ensure that AI becomes a force for good.
AI and the changing dynamics of influence

Artificial intelligence is rapidly transforming how communication happens. Governments and corporations use AI to dominate narratives, leveraging its power for hybrid warfare, information warfare, and highly targeted campaigns. These tools amplify their voices and shape public opinion at an unprecedented scale.
But while some benefit from this technological leap, countless others are left behind. Grassroots movements, small organizations, and individuals working for positive change often lack access to the same advanced tools. This disparity risks creating a digital landscape where only the most powerful can influence and persuade.
AI has the potential to level the playing field—but only if we act now to make it accessible to everyone, not just those with vast resources. The Drupal community has long championed the idea that technology should empower rather than exclude, and this belief continues to inspire our work.
AI as a tool for empowerment

AI offers powerful capabilities for creating, translating, and distributing content. But to truly empower communities, we must focus on making these tools both affordable and usable for all.
Here’s where AI can make the greatest impact:
- Empowering human rights advocates: AI tools can protect their causes, amplify their messages, and counter deceitful propaganda campaigns effectively.
- Breaking language barriers: Advanced localization features allow for accurate and culturally resonant translations, opening up global audiences.
- Countering misinformation: By identifying and responding to false narratives quickly, AI can help protect the credibility of those working for truth.
- Streamlining communication: Automation of repetitive tasks, such as content generation or scheduling, frees up time for more impactful work.
These applications make AI a practical and transformative tool, not just for large organizations, but for anyone looking to make a difference.
AI’s role: a realistic perspective

Let’s be clear: AI will continue to play a significant role in shaping narratives, both for good and ill. It will be used for propaganda, hybrid warfare, and to amplify echo chambers. We cannot completely control this reality.
However, we can ensure that AI is also a force for good—a tool that enables collaboration, fosters mutual understanding, and empowers those working for positive change. By giving more people access to these tools, we can shift the balance away from dominance and toward dialogue.
This isn’t about revolutionizing AI’s role overnight; it’s about giving more people the resources they need to participate in the conversation.
Looking ahead with optimism

AI is here to stay, and its impact will only grow. While challenges remain, the potential for AI to empower individuals and communities is enormous. By democratizing these tools, we can help bridge divides, amplify diverse voices, and foster a digital world that values collaboration over competition.
In this rapidly evolving landscape, the key to a fairer future is accessibility. With the right tools, anyone—whether a grassroots organizer, a small business, or a passionate advocate—can create, influence, and inspire. The Drupal community and the spirit of open-source collaboration remind us that technology can serve everyone, not just a privileged few.
As we celebrate this Christmas season, let’s also look forward to a new year filled with opportunity, where AI tools bring us closer together and empower us all to shape a brighter future. The work we are doing right now is shaping a world of tomorrow that is changing so rapidly.
Category: Drupal Community. Jurriaan Roelofs

Sahil Dhiman: Debian Mirrors Hierarchy
After finding that AlmaLinux's sync capacity is around 140Gbps at Tier 0 (or Tier 1, however you look at it), I wanted to find the source and hierarchy of the Debian mirroring system.
There are two main types of mirrors in Debian - Debian package mirrors (for package installs and updates) and Debian CD mirrors (for ISOs and other media). Let’s talk about package mirrors (and their hierarchy) first.
Package mirror hierarchy

The trace file was a good starting point for checking the upstream of a package mirror in Debian. It resides at <URL>/debian/project/trace/_traces and shows the flow of data. Sample trace file from jing.rocks’s mirror. It showed that the canonical source for packages is ftp-master.debian.org. Checking via https://db.debian.org/machines.cgi showed it’s fasolo.d.o, hosted at Brown University, US. This serves as the “Master Archive Server”, making it the Tier 0 mirror. Its entry mentions that it has 1Gbps shared LAN connectivity (dated information?), but it only has to push to 3 other machines/sites.
Side note - .d.o is .debian.org
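To make the trace-file convention concrete, here is a small Python sketch that parses the "Key: value" lines such a file contains (the sample content and field names below are illustrative assumptions, not a real capture):

```python
def parse_trace(text: str) -> dict:
    """Parse the simple 'Key: value' lines in a Debian mirror trace file.

    The first line is a timestamp, so it is skipped.
    """
    info = {}
    for line in text.splitlines()[1:]:
        key, sep, value = line.partition(": ")
        if sep:
            info[key.strip()] = value.strip()
    return info

# Illustrative trace-file content (shape assumed from the convention above)
sample = (
    "Sat Dec 28 10:00:00 UTC 2024\n"
    "Used ftpsync version: 20180513\n"
    "Running on host: mirror.example.org\n"
)

print(parse_trace(sample)["Running on host"])  # → mirror.example.org
```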
As shown on https://mirror-master.debian.org/status/mirror-hierarchy.html, the three other mirror sites are:
- syncproxy2.eu.debian.org, i.e. smit.d.o, hosted by the University of Twente, Netherlands, with 2x10Gbps connectivity.
- syncproxy4.eu.debian.org, i.e. schmelzer.d.o, hosted by Conova in Austria with 2x10Gbps connectivity.
- syncproxy2.wna.debian.org - the https://db.debian.org/machines.cgi entry mentions it being hosted at UBC, but the IP seems to be pointing to an OSUOSL IP range as of now. IIRC, a few months ago syncproxy2.wna.d.o was made to point to another host due to some issue (?). mirror-osuosl.d.o seems to be serving as syncproxy2.wna.d.o now. Bandwidth isn’t explicitly mentioned, but from my experience with the bandwidths other free software projects hosted at OSUOSL have, it would be at least 10Gbps and maybe more for Debian.
These form the Debian Tier 1 mirror network, as all the other mirrors sync from them. So Debian has at least 50Gbps+ capacity at Tier 1. A normal Debian user might never directly interact with any of these 3 machines, but every Debian package they run/download/install flows through them. Though, I’m unsure what wna stands for (syncproxy2.wna.d.o). NA probably is North America and W is west (coast)? If you know, do let me know.
After Tier 1, there are a few more sync proxies (detailed below). There are at least 45 mirrors at Tier 2. Most country mirrors, i.e. ftp.<country>.debian.org, are at Tier 2 too (barring a few like ftp.au.d.o, ftp.nz.d.o etc.).
Coming back to Sync proxies at Tier 2:
- syncproxy3.wna.debian.org - gretchaninov.d.o, which is marked as syncproxy2 on db.d.o (dated information). It’s hosted at the University of British Columbia, Canada, where a lot of Debian infrastructure, including Salsa, is hosted.
- syncproxy.eu.debian.org - Croatian Academic and Research Network managed machine. CNAME/redirects to debian.carnet.hr. Seems to be directly managed by hosting organization.
- syncproxy.au.debian.org - mirror-anu.d.o hosted by Australian National University with 100Mbps connectivity. Closest sync proxy for all Australian mirrors.
- syncproxy4.wna.debian.org - syncproxy-aws-wna-01.d.o, hosted in AWS, in the US (according to GeoIP). IPv6 only (a CNAME to syncproxy-aws-wna-01.debian.org., which only has an AAAA record, no A record). An m6g.2xlarge instance, which has speeds of up to 10Gbps.
Coming back to https://mirror-master.debian.org/status/mirror-hierarchy.html, one can see the chain extend to Tier 6, as in the case of this mirror in AU, which should add some latency between updates being pushed at ftp-master.d.o and reaching them. Ideally this shouldn’t be a problem, as https://www.debian.org/mirror/ftpmirror#when mentions “The main archive gets updated four times a day”.
In my case, I get my updates from the NITC mirror, so my updates flow from US > US > TW > IN > me in IN.
CDNs also have to internally manage cache purging, unlike normal mirrors which directly serve static files. Both deb.debian.org (sponsored by Fastly) and cdn-aws.deb.debian.org (sponsored by Amazon CloudFront) sync from the following CDN backends:
- mirror.accumu.d.o hosted by Academic Computer Club in Umeå, Sweden.
- mirror-skroutz.d.o hosted by Skroutz Internet Services in Greece.
- schmelzer.d.o hosted by Conova in Austria.
See deb.d.o trace file and cdn-aws.deb.d.o trace file.
(Thanks to Philipp Kern for the heads up here.)
CD image mirror hierarchy

Till now, I have only talked about Debian package mirrors. When you see a /debian directory on various mirrors, it’s usually for package installs and updates. If you want to grab the latest (and greatest) Debian ISO, you go to a Debian CD (as they’re still called) mirror site.
casulana.d.o is mentioned as the CD builder site, hosted by Bytemark, while pettersson-ng.d.o is mentioned as the CD publishing server, hosted at the Academic Computer Club in Umeå, Sweden. The primary download site for Debian CDs when you click download on the debian.org homepage, https://cdimage.debian.org/debian-cd/, is hosted here as well. This essentially becomes the Tier 0 mirror for Debian CDs. All Debian CD mirrors are downstream of it.
pettersson-ng.d.o / cdimage.d.o (SE) ---> to the world

A visualisation of the flow of Debian CDs from cdimage.d.o

The Academic Computer Club’s mirror setup uses a combination of multiple machines (called frontends and offloading servers) to load balance requests. Their documented setup is a highly recommended read. Also, in that document, they mention, “All machines are reachable via both IPv4 and IPv6 and connected with 10 or 25 gigabit Ethernet, external bandwidth available is 200 gigabit/s.”
For completeness’ sake, the following mirrors (or mirror systems) exist for Debian too:
- Debian Ports mirrors.
- Debian Archive mirrors to get old Debian versions.
- Debian Security has a bunch of official mirrors (as mentioned here) behind security.d.o. It resolves to Fastly IP ranges, so these could be Fastly or Debian operated mirrors. Taking a look at https://db.debian.org/machines.cgi tells me seger.d.o in DE is security-master, which I’m assuming is the source for the following security mirrors:
- lobos.d.o in DE
- mirror-csail.d.o in US
- mirror-anu.d.o in AU
- santoro.d.o in BR
- schumann.d.o in DE
- setoguchi.d.o in JP
- villa.d.o in DE
- wieck.d.o in DE
Debian relies heavily on various organizations to distribute and update Debian. Compiling the above information made me thankful to all these organizations. Many thanks to the DSA and mirror teams as well for managing all of this.
I relied heavily on https://db.debian.org/machines.cgi, which seems to be manually updated, so things might have changed along the way. If anything looks amiss, feel free to ping me.
Juri Pakaste: New Swift Package: tui-fuzzy-finder
Speaking of new Swift libraries, I released another one: tui-fuzzy-finder is a terminal UI library for Swift that provides an incremental search and selection UI that imitates the core functionality of fzf very closely.
I have a ton of scripts that wrap fzf. Some of them try to provide some kind of command line interface with options. Most of them work with pipes, where I fetch data from somewhere, parse it with jq, feed it to fzf, use the selection again as part of a parameter for something else, etc. It's all great, except that I really don't love shell scripting.
With tui-fuzzy-finder I want to be able to write tools like that in a language I do actually enjoy a great deal. The package provides both a command line tool and a library, but the purpose of the command line tool is just to allow me to test the library, as writing automatic tests for terminal control is difficult. Competing with fzf in the general purpose CLI tool space is a non-goal.
I haven't implemented the preview features of fzf, nor key binding configuration. I'm not ruling either of those out, but I have not needed them yet and don't plan to work on them before a need arises.
Documentation at Swift Package Index.
Juri Pakaste: New Swift Package: provision-info
I released a new Swift library! provision-info is a Swift package for macOS. Its purpose is to parse and show information about provisioning profile files. There's a command line tool and Swift library. The library part might work on iOS, too, but I have not tried. It relies on Apple's Security framework so no Linux.
It's not actually that new, but it's been sitting in a GitHub repo without any releases or changes for nearly three years. I needed the library in a tool at work a couple of weeks ago, so I added a couple of features and finally made the first releases.
The CLI tool allows you to print out the basic metadata fields, the entitlements, the device IDs and the certificates in a profile file. You get them in plain text or as JSON. The library exposes the same data as Swift types.
There's documentation for the Swift APIs at Swift Package Index's excellent documentation hosting service. The command line tool prints out help with --help.
Freelock Blog: Automatically moderate comments using AI
When you allow the general Internet to post comments, or any other kind of content, you're inviting spam and abuse. We see far more spam comments than anything relevant or useful -- but when there is something relevant or useful, we want to hear it!
With the AI module and the Events, Conditions, and Actions module, you can set up automatic comment moderation.
Like any use of AI, setting an appropriate prompt is crucial to getting a decent result. Here's the one we're trying out:
Joey Hess: the twenty-fifth year of my free software career
I've been lucky to be able to spend twenty! five! years! developing free software and making a living on it, and this was a banner year for that career.
To start with, there was the Distribits conference. There's a big ecosystem of tools and projects that are based on git-annex, especially in scientific data management, and this was the first conference focused on that. Basically every talk involved git-annex in some way. It's been a while since I was at a conference where my software was in the center like that -- reminded me of Debconf days.
I gave a talk on how git-annex was probably basically feature complete. I have been very busy ever since adding new features to it, because in mapping out git-annex's feature set, I discovered new possibilities.
Meeting people and getting a better feel for the shape of that ecosystem, both technically and funding wise, led to several big developments in funding later in the year. Going into the year, I had an ongoing source of funding from several projects at Dartmouth that use git-annex, but after 10 years, some of that was winding up.
That all came together in my essentially writing a grant proposal to the OpenNeuro project at Stanford, to spend 6 months building out a whole constellation of features. The summer became a sprint to get it all done. Significant amounts of very productive design work were done while swimming in the river. That was great.
(Somehow in there, I ended up onstage at FOSSY in Portland, in a keynote panel on Open Source and AI. This required developing a nuanced understanding of the mess of the OSI's Open Source AI definition, but I was mostly on the panel as the unqualified guy.)
Capping off the year, I have a new maintenance contract with Forschungszentrum Jülich. This covers the typical daily grind kind of tasks, like bug triage, keeping on top of security, release preparation, and updating dependencies, which is the kind of thing I've never been able to find dedicated funding for before.
A career in free software is a succession of hurdles. How to do something new and worthwhile? How to make any income while developing it at all? How to maintain your independent vision when working on it for hire? How to deal with burn-out? How to grow a project to be more than a one developer affair? And on and on.
How does a free software project keep paying the bills once it's feature complete? Maybe I am starting to get a glimpse of an answer.
The Drop Times: Drupal4Gov Earns Nonprofit Status: Empowering Government Through Open Source
Real Python: How to Remove Items From Lists in Python
Removing items from a Python list is a common task that you can accomplish with various techniques. Whether you need to remove an item by its position or value, Python has you covered. In this tutorial, you’ll explore different approaches to removing items from a list, including using .pop(), the del statement, and .remove().
The .remove() method allows you to delete the first occurrence of a specified value, while .pop() can remove an item by its index and return it. The del statement offers another way to remove items by index, and you can also use it to delete slices of a list. The approach you choose will depend on your specific needs.
By the end of this tutorial, you’ll understand that:
- To remove an item from a list in Python, you can use various approaches like .pop(), del, .remove(), and .clear().
- To remove items from a certain position in a list, you use the .pop() method.
- To delete items and slices from a list in Python, you use the del statement.
- You use the .remove() method to delete the first occurrence of a specified value from a list.
- To remove all the items from a list, you use .clear().
- You can also remove duplicate items using a loop, dictionary, or set.
To get the most out of this tutorial, you should be familiar with basic Python list topics like creating lists, adding items to a list, and accessing items in a list.
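As a quick sketch of the approaches listed above, here is the del statement removing both a single item and a slice, using a book list like the one this tutorial works with:

```python
books = ["Dragonsbane", "The Hobbit", "Wonder", "Jaws"]

# Delete a single item by index
del books[0]

# Delete a slice: everything from index 1 onward
del books[1:]

print(books)  # → ['The Hobbit']
```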
How to Remove Specific Items From a List

One common operation you’ll perform on a Python list is to remove specific list items. You may need to remove items based on their position in the list, or their value.
To illustrate how you can accomplish this task, suppose you’re creating a website for a public library. Your web app will allow users to save a list of books they would like to read. It should also allow them to edit and remove books from the list, as well as sort the list.
You can use a Python list to store the user’s reading list as a collection of book titles. For example, the reading list might look something like this:
>>> books = ["Dragonsbane", "The Hobbit", "Wonder", "Jaws"]

Now that you have a list of books, you have several ways to remove a single, specific book from the list. One approach is to use the .pop() method.
Removing Items Using the .pop() Method

Sometimes, you may need to remove items at a certain position in a list. For example, in a public library app, users might select books to remove by ticking checkboxes in the user interface. Your app will delete each selected item based on its index, which is the item’s position in the list.
If you know the index of the item you want to remove, then you can use the .pop() method. This method takes the item’s index as an optional argument and then removes and returns the item at that index. If you don’t pass an index argument to the method call, then .pop() will remove and return the last item in the list.
Note that Python lists use zero-based indexing for positioning, which means that the first element in a list is at index 0, the second element is at index 1, and so on. With that in mind, here’s an example of how you can use .pop() to remove and display the first element in your books list:
>>> books.pop(0)
'Dragonsbane'

You invoke the .pop() method on the books list with an index of 0, indicating the first element in the list. This call removes the first title, Dragonsbane, from the list and then returns it.
If you check the content of your list after running this code, then you’ll notice that Dragonsbane isn’t there anymore:
>>> books
['The Hobbit', 'Wonder', 'Jaws']

Here, you display the book list again after the .pop() call. You can see that your list is now one element shorter because .pop() removed the first title.
As you learned earlier in the tutorial, .pop() removes an item and also returns its value, which you can then use for other operations. For example, suppose the library app also allows users to store a separate list of books they’ve read. Once the user has read a book, they can remove it from the initial book list and transfer the title to the read list:
>>> books = ["Dragonsbane", "The Hobbit", "Wonder", "Jaws"]
>>> read_books = []
>>> read = books.pop(0)
>>> read_books.append(read)
>>> read_books
['Dragonsbane']
>>> books
['The Hobbit', 'Wonder', 'Jaws']

On the second line in the example, you create a new, empty list called read_books to store the names of the books the user has read. Next, you use the .pop() method to remove the first title from the original book list and store it in a variable. Then, you use .append() to add the stored title to the read_books list.
Read the full article at https://realpython.com/remove-item-from-list-python/ »