Feeds
LostCarPark Drupal Blog: Drupal Advent Calendar day 11 - Event Track
It’s time to open another door of the Drupal Advent Calendar! Behind today’s door we find the Events track of Drupal CMS, and we hand over to track lead Martin Anderson-Clutz to tell us some more about it.
Managing dates and times is a common need for Drupal sites, and the Drupal CMS Events recipe aims to make this easier than ever. To be clear, the intended use case is a site that posts multiple, short events. Events like Drupalcamps that comprise sessions, a schedule, and more should look at the Drupal Event Platform instead.
A Drupal CMS site showing DrupalCon Singapore as an event listing

A Smart Date field provides an intuitive way to enter dates and…
Django Weblog: Django 6.x Steering Council Candidates
Thank you to the 12 individuals who have chosen to stand for election. This page contains their candidate statements submitted as part of the Django 6.x Steering Council elections.
To our departing Steering Council members, Adam Johnson, Andrew Godwin, James Bennett, Simon Charette – thank you for your contributions to Django and its governance ❤️.
Those eligible to vote in this election will receive information on how to vote shortly. Please check for an email with the subject line “Django 6.x Steering Council Voting”. Voting will be open until 23:59 on December 17, 2024 Anywhere on Earth.
Any questions? Reach out via email to foundation@djangoproject.com.
All candidate statements

To make it simpler to review all statements, here they are as a list of links. Voters: please take a moment to read all statements before voting!
- Andrew Miller (he/him) — Cambridge, UK
- Carlton Gibson (he/him) — Spain
- Emma Delescolle (she/her) — Belgium
- Frank Wiles (he/him) — Lawrence, Kansas, USA
- Jake Howard (he/him) — UK
- Lily Foote (she/her) — United Kingdom
- Mark Walker — Chester, UK
- Ryan Cheley (he/him) — California, US
- Ryan Hiebert —
- Sage Abdullah (he/him) — Jakarta, Indonesia / Bristol, UK
- Tim Graham — Philadelphia, PA USA
- Tim Schilling (he/him) — United States
Andrew Miller (he/him) — Cambridge, UK
Hi there! For those who haven’t come across me yet: I’m very active on the Discord, which I joined a couple of years ago, where I serve as a moderator and generally help out. I have also authored a Working Group proposal that is almost ready to go live, pending Board approval. Finally, I organise the monthly Django Social in Cambridge.
However, perhaps most relevant to my nomination for the Steering Council are the blog posts I have written this year. They have been short and snappy, prodding at and explaining different aspects of using Django, the contributing process, and the community.
I am nominating myself for the Steering Council to ensure that Django has a secure future. Personally, I have used Django for the last 12 years and it has been integral to my software engineering career. The last two and a half years have been the best in terms of getting involved in the community, and they have increased my passion for improving Django itself and seeing it have a future beyond my personal usage.
While there is energy in the community, the technical vision has stagnated and needs a reboot. As Django is about to celebrate its 20th birthday, I want to see Django celebrate its 30th and 40th birthdays and still be relevant to the world of web development. But what does that mean for us now as a community, and how do we ensure that future? In short, I believe the next Steering Council needs to experiment with a range of ideas and gauge the community’s reaction to them. These ideas will form the first iteration of processes that future Steering Councils can progress and mature.
To me these ideas need to focus on the following high level goals:
- Transparency & Consistency of communication
- Clearer, simpler Governance
- Vision of where Django could be in 10 or 20 years from now.
- Strengthening the community through teams that provide growth for each and every member
Each of these goals has plenty of actionable items… for example:
- Communication: Coordinate with the Board to recognise the work of the wider ecosystem of packages on the website and in other resources.
- Governance: Deeply examine the DEP process and simplify it where needed, so that writing a DEP feels closer to writing a forum post.
- Vision: Identify potential landmark features for the 6.x release cycle and beyond. Even propose what features might be in the Django 11.x cycle (10 years’ time).
- Teams: Start to create career tracks within the community; this would include Djangonaut Space, Google Summer of Code, existing teams, and new teams yet to be formed.
Do I expect this next Steering Council to achieve all of the goals above in one go? While these goals are idealistic, I expect this next Council to lay the foundations for future Councils to thrive, creating the on-ramps for a larger, vibrant community of Djangonauts and ensuring Django’s future is bright and secure.
Feel free to reach out to me if you have further questions about anything above.
Carlton Gibson (he/him) — Spain

I'm running for the Steering Council to help push Django forward over the 6.x release cycle.
We’re at an exciting time for the framework. There’s a whole fresh wave of new contributors keen to experiment. I think we should lean into that. My particular interest here is in helping to support, promote, and leverage the third-party ecosystem better than we have done. I wrote at some length on that in my recent Thoughts on Django’s Core, if you’d like the details.
Beyond that, I want to help our mentoring effort. There’s a big gap between starting to contribute and staying to maintain. We’ve got all the resources we need to turn the new generation of Django’s contributors into its next generation of maintainers. That’s where I increasingly see my time and focus being spent over the coming years.
I was unsure whether to run for election or not. Whilst I was never part of the old Django Core, as a former Fellow and maintainer of packages such as DRF, django-filter, and crispy forms, I’m certainly towards the older-guard side of things that we’ve heard much about in recent posts. We’re at a delicate time. With the governance updates needed, I feel that I still have lots to offer, and can be helpful in advancing those. As I say, I think we’re at an exciting time for the framework. I’d be honoured to serve if chosen.
Emma Delescolle (she/her) — Belgium

For a longer version of this statement you can read this post on my blog
For a video on similar topics, you can watch my recent Djangonaut Space session on YouTube
As a member of the Django community for the past 10 years, I've had the privilege of witnessing firsthand the project's growth and evolution.
Over the decade, I've seen many exciting changes and improvements that have shaped Django into the powerful tool it is today. However, I've also noticed a gradual slowing down of this evolution in recent years.
I have also benefited from said growth, and from Django's reliability and stability, having run a business whose main activity revolves around Django for that same span of years, whether creating, reviewing, maintaining or updating software. My application to the Steering Council is one of the ways in which I can give back to the community.
With my candidacy as a member of the Django Steering Council, I want to highlight my focus on ensuring Django remains relevant and sustainable for the next 20 years.
Lowering the barrier to contribution and involving a more diverse set of contributors

Most code contributions merged into Django are bug fixes or cleanups. I believe this trend is not due to an unusual abundance of bugs within the project but rather due to an unsustainable barrier to contributing new features or code improvements. Contributing to Django requires a significant amount of time, mental energy and effort, which can be discouraging to most. And often, those who have bitten the bullet and gone through it once do not go through it a second or third time.
Myself and others have noted, more or less recently, that the process of contributing code to Django, including but not limited to DEPs, is daunting. The words "brutal" and "bureaucratic" have been used by myself and others to describe the process.
If elected, I aim to identify areas that hinder effective code contributions to Django and work towards simplifying the process of contributing code to the project; while keeping the right balance to also protect the time, energy and sanity of the Fellows and the review team.
Dealing with the realities of an aging code-base

As Django approaches its 20th anniversary, it's essential to acknowledge the aging code-base and technical debt accumulated over time. My goal is to initiate a review process of the existing code-base, carefully evaluating technical debt and identifying areas where improvements can be made without disrupting existing functionality.
Missing batteries and deadlines

One of the core principles of Django has always been its commitment to being a "batteries included" framework. However, in recent years, I've noticed that many of these essential features and tools have remained stagnant, without new additions or replacements emerging to support the evolving needs of our community.
Furthermore, the third-party application ecosystem, once thriving and a jewel of the community, has become harder and harder to navigate and discover. It has also become more time-consuming for developers to evaluate a large set of third-party applications to solve a specific need.
As a member of the steering council I would like to work on bringing better visibility and discoverability of those 3rd-party packages and evaluate whether any such package should be brought into Django, either Django core or a spiritual successor to contrib or some other way. Some packages that come to mind are django-csp, django-cors and django-upgrade but this is in no way an exhaustive list.
Feature requests and roadmap

I plan to use my position to champion "feature requests" – a critical aspect of the council's role that has never been utilized to date. Feature requests are also key to setting a roadmap for Django and providing guidance to potential contributors on where to start their journey.
Code ownership and groups

My belief is that, as an unexpected side-effect of the dissolution of the core team and the high barrier to contribution, expertise in specific areas of Django has begun to erode. However, it can be regained through targeted efforts. People involved in the aforementioned code review process would be perfect candidates for these roles, as they'd already have taken a deep dive into thoroughly understanding specific areas of the framework.
Moreover, frequent contributors to an area of the framework are often well-positioned to take on a leading role in "owning" that part of the project. However, this implies recurring contributions to said area. I believe that we need to find ways to incentivize people to become area specialists, which brings us back to the need for lowering the barrier to contribution.
More generally, I think that the project can benefit from those specialized groups, starting with an ORM group.
Closing thoughts

I believe that everything listed here can technically be achieved during the 6.x cycle if I'm elected but... things take time in the Django world. So, I don't want to over-promise either.
Frank Wiles (he/him) — Lawrence, Kansas, USA

The community does a really great job of reaching consensus post-BDFLs, but occasionally decisions do need to be made and a direction chosen.
I would like to think my long history with Django, and my wide and varied use of it as a consultant, gives me a unique perspective: not just as a consumer of Django but as a manager/executive helping others make decisions around how and when to use Django. The decisions that are made impact many people and organizations in sometimes subtle and non-obvious ways. I have a ton of skin in this particular game personally.
Django has been a huge part of what has driven my career and I would be honored to help steer for a bit.
Jake Howard (he/him) — UK

For those who don't know me, I've been using Django professionally for almost a decade, spending over half of that focusing on performance and security. I'm also on the Core team for Wagtail CMS.
Django has a great reputation for being "batteries included" and for "perfectionists"; however, that reputation is starting to age. Now, people think of Django as clunky, slow, and only useful for building big monoliths. Many developers choose leaner frameworks, and end up having to re-implement Django's batteries themselves, instead of starting with Django and focusing on building their application.
For Django to progress, it needs to recharge its batteries. The ticket backlog, as well as many developers' dreams, is filled with great feature ideas just looking for a little push in the right direction. Not just the big features like 2FA, Background Tasks or even type hints, but also quality-of-life improvements to templates, views or even the user model. Achieving this requires more than just code - it takes people.
From personal experience, I've seen the friction from trying to add even small features to Django, and the mountains to climb to contribute large features. To encourage new contributors, that needs to change - just because it's the way it's always been, doesn't mean it has to continue. Django is a big, complex, highly depended on project, but that doesn't mean it needs to move at a snail's pace for everything, nor does every contribution need to be 100% perfect first time. Open source projects are built on passion, which is built up over time but destroyed in seconds. By fostering and enabling that passion, the Django contributor community can flourish.
By the time Django hits 7.0, I'd love to see it more modern, more sustainable, and living up to the ideas we all have for it.
Lily Foote (she/her) — United Kingdom

Hi! I'm Lily and I've been a contributor to Django for about a decade, mainly working on the ORM. My biggest contributions were adding check constraints and db_default. I've also contributed as a mentor within the Django Community. I was a navigator for the pilot of Djangonaut Space (and a backup navigator in following sessions) and a Google Summer of Code mentor for the Composite Primary Keys project. I also joined the triage and review team in 2023.
As a member of the Steering Council I want to enable more people to contribute to the Django codebase and surrounding projects. I think in recent years there has been too much friction in getting a change to Django agreed. I have seen several forum threads fail to gain consensus and I've experienced this frustration myself too. I also think the DEP process needs an overhaul to make creating a DEP much easier and significantly less intimidating, making it easier to move from a forum discussion to a decision when otherwise the status quo of doing nothing would win.
I believe a more proactive Steering Council will enable more proposals to move forward and I look forward to being a part of this. I will bring my years of experience of the Django codebase and processes to the Steering Council to provide the technical leadership we need.
Mark Walker — Chester, UK

I'm running for the Steering Council so that I might be able to help others. I wouldn’t be in the position I am today without someone very helpful on StackOverflow many years ago who took the time to help me with my first endeavour with Python.
Over the years I’ve strived to help others in their journey with Python and Django, an aim aided by becoming a navigator for Djangonaut Space and the technical lead of the Django CMS Association. Through all of this I’ve acted as a facilitator to help people both professionally and in open source, something which ties in with discussions going on about the SC being the facilitator for the continued growth of the Django community and framework itself.
Ryan Cheley (he/him) — California, US

Hello, I’m Ryan Cheley and I’ve decided to stand for the Django 6.x Steering Council.
My journey with the Django community began in March 2022 when I started contributing pull requests to DjangoPackages. My initial contributions quickly led to deeper involvement, and I was grateful and honored to be asked to be a maintainer following DjangoCon US 2022.
At the DjangoCon US 2022 Sprints, I worked on a SQLite-related bug in Django's ORM. This proved so valuable that I spoke about the experience at DjangoCon US 2023, delivering my talk “Contributing to Django or How I learned to stop worrying and just try to fix a bug in the ORM”.
Building on this experience, I returned to DCUS 2024 to present on “Error Culture” where I took a deep dive into the widespread but often overlooked issue of how organizations manage error alerts in technology and programming domains.
My commitment to the Django ecosystem extends beyond code contributions. I've served as a Navigator for two sessions of Djangonaut Space, helping guide newcomers through their first contributions to Django. This role has allowed me to give back to the community while developing my mentorship skills.
As one of the admins for Django Commons I work with some amazing folks to help provide an organization that works to improve the maintainer experience.
Additionally, I've made various contributions to Django Core, including both code improvements and documentation enhancements.
Throughout my involvement with Django, I've consistently shown a commitment to both technical excellence and community building. My experience spans coding, documentation, mentorship, and public speaking, reflecting my holistic approach to contributing to the Django ecosystem.
My focus will be in creating sustainable and inclusive leadership structures. This would, in turn, not only provide help and support for current Django leadership, but also develop and empower future leaders.
The avenues to meet these goals include gathering diverse candidates, providing mentorship opportunities, clearly communicating expectations, and removing financial barriers to participation.
As a member of the Django Steering Council (SC) for the 6.x series, I hope to be able to accomplish the following with my fellow SC Members:
- Establish a governance structure that allows the SC to be successful going forward by:
- Providing Mentorship for future potential SC members from the Community
- Reviewing the 18-month requirements for eligibility for SC
- Communicating the expectations for SC role in Community
- Working to increase the diversity of those that are willing and able to stand for the SC in the 7.x series and going forward
- Collaborate with Working Groups to
- ease burden of fellows in a meaningful way via the Fellowship Working Group
- work with Social Media Working Group to promote new or upcoming features
- Write up weekly / monthly reports, similar to the fellows reports
- Work with the Django Software Foundation (DSF) Board to establish a stipend for 7.x SC members going forward to support their work and allow more diverse participation
- Implement a roadmap for Django, drawing input and inspiration from the Community, specifically from these sources:
- Adam G Hill post
- Thibaud Colas Forum post
- Paolo Melchiorre post
- Timo Zimmerman post
- Roadmap work from early 2024
- Work on and complete DEPs to
- Remove Dead Batteries, similar to Python PEP 594
- Determine the long term viability of Trac, research alternatives, and come up with triggers that would lead to a migration if/when necessary.
- Review and approve or reject all current draft DEPs
The Django community has done so much for me. I’m hoping that with my involvement on the Steering Council I’m able to work to do my part to ensure the long term success and viability of the Django community and leave it in a better place than I found it.
Ryan Hiebert

I've worked professionally with Django and Python for the past 13 years. I've mostly lurked on the mailing lists and forums, but I have been around maintaining some smaller projects, most notably django-safemigrate, aldjemy, hirefire, tox-travis, and backports.csv. I had the privilege of giving a talk at DjangoCon 2024 about Passkeys and Django.
Django has excelled in three areas. We take a batteries-included approach that empowers new developers, we have strong community governance, and we are conservative about the changes we make to maintain stability. These have been critical to Django's success, but the combination has made it challenging for Django to keep up with the changing technology landscape.
To allow Django to meet the changing needs of our users, both now and in the future, we need to think carefully about the important parts of each of those priorities, and tune the tension between them to allow the Django community to thrive.
Django should transition away from including batteries directly, and toward enabling add-on batteries. We should favor proposals that empower interoperability between a variety of third party batteries (e.g. the Background Workers DEP), and disfavor proposals that wish to bless a particular solution in core, no matter how wonderful the solution is (e.g. HTMX).
Django should be encouraging work that aims to expose third-party packages in our official documentation and communication channels, especially those that implement core interoperability interfaces. This will make room for new ideas and more approaches.
Django should seek to make a clear boundary around a smaller core where our preference for stability is the more important factor in empowering our diverse community.
Django should favor changes that bring it into alignment with Python community standards. It should favor this even over the "one way to do it" principle. By encouraging Python standards, Django will better meet its responsibility as an entryway for new Python developers, equipping them to grow in Python generally. For example, Django could encourage using appropriate standards in pyproject.toml over extending Django-centric idioms like adding to settings.py.
Django should encourage proposals that seek to lower the footprint of a new project. Projects like Nanodjango should inspire us to make starting with Django trivial and minimal, and to make each step a newcomer might take to grow as small as possible, so they need to meet only the challenges required by the work they are doing.
Django should favor proposals to begin to include correct types, even to the point of carefully making any necessary breaking changes to help make the types correct and usable.
The DSF should, when financially feasible, fund non-core batteries that can empower the community. It may be appropriate for the DSF to make some requirements about the necessary governance required of these projects in order to qualify for funding.
The Steering Council should strongly consider recommending changes to its decision making process to make it more feasible to make and reverse decisions as it faces new challenges. Stability is maintained by active, careful, and persistent effort, not indecision.
By making decisions with these principles in mind, we can help our community maintain the root of our goals: A stable community-governed base, empowering a diverse community that excels in the fast-paced world of web development, and being a gateway for new developers.
Sage Abdullah (he/him) — Jakarta, Indonesia / Bristol, UK

Django's best strength is that it's built by its community – but that's also a weakness. The reality of a project of Django's scale that's been around for so long, and has so many contributors, is that making substantial changes becomes increasingly difficult. You may have heard talks about how daunting it can be to get a PR merged into Django, or how hard it is to get a feature accepted.
It doesn't have to be that way.
In 2019, I added the cross-database JSONField as part of Google Summer of Code (GSoC). Many of Django's big features have come from GSoC, and some of the contributors stay involved in the community – this year, I became a GSoC mentor for Django. As a core team member of Wagtail (a Django-based CMS), I have seen the same pattern with our participation in such outreach programs. Django can do a lot more in making community contributions more accessible and sustainable, and I think I can help.
Here's what I think the steering council should do:
- Organize a living roadmap for Django. Rather than waiting for a DEP to be proposed and acted on, the steering council should actively help the community in highlighting issues and feature requests that are high priority or most wanted.
- Maximize the potential of mentorship programs. With a roadmap in place, the steering council could help find mentors and contributors who can take on the work. Programs like GSoC, Djangonaut Space, or other initiatives can flourish if we connect the ideas with the right people.
- Communicate and document progress. To allow continuous feedback and improvement, the steering council should engage with the community and document the progress of their activities, as well as the current state of Django.
Django is at a turning point. It's time for the steering council to take a more active role with the community in shaping the future of Django, and I believe I can help make that happen.
Tim Graham — Philadelphia, PA USA

My deep knowledge of Django comes as a user and documentation contributor since 2009, and from working on Django as a Django Fellow from 2014-2019.
Since 2019, I've been contracted to develop and maintain several third-party database backends for Django, including CockroachDB, Google Cloud Spanner, Snowflake, and MongoDB.
I remain active on the Django Internals section of the forum and the Django ticket tracker, as well as writing and reviewing patches for Django.
Tim Schilling (he/him) — United States

If elected to the Steering Council, I would strive to grow our contributor base and improve the support structures in the community. I'd like to do the work to make everyone else's lives easier.
I expect this to move slowly, but I do expect this to move. The three most important goals to me are the following:
- Meet as the Steering Council regularly and post a record of the discussion and actions.
- Check in on our various teams and individuals. For example, the Translations team isn't a formal team yet, but it should be.
- Encourage and support feature development based on community recommendations.
I will need help with this role in understanding the context and history of technical decisions in Django. The community can support me and others like me by continuing to engage in those technical discussions on the forum. Having folks provide context and clarity will be invaluable.
If elected, I would step down from the DEFNA board and step away as a DjangoCon US organizer. That would leave me being involved with the Steering Council, Djangonaut Space, and Django Commons, all of which overlap in my goal to foster community growth.
I expect there to be technical change in the next term of the Steering Council. However, my particular focus will be on the people. By engaging the community more and encouraging new people, we can strengthen the foundation of our community to support our ambitious goals of the future.
More detailed opinions can be found at: Steering Council 6.x Thoughts · Better Simple.
A list of my involvements can be found at: Tim Schilling · Better Simple
That’s it, you’ve read it all 🌈! Be sure to vote if you’re eligible, by using the link shared over email. To support the future of Django, donate to the Django Software Foundation on our website or via GitHub Sponsors.
Paolo Melchiorre: My first DSF board meeting
Thoughts and insights about my first meeting as a Django Software Foundation board member.
FSF Events: Free Software Directory meeting on IRC: Friday, December 13, starting at 12:00 EST (17:00 UTC)
PyCoder’s Weekly: Issue #659 (Dec. 10, 2024)
#659 – DECEMBER 10, 2024
In this video course, you’ll learn about two popular coding styles in Python: look before you leap (LBYL) and easier to ask forgiveness than permission (EAFP). You can use these styles to deal with errors & exceptional situations in your code. You’ll dive into the LBYL vs EAFP discussion in Python.
REAL PYTHON course
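The two styles the course contrasts can be sketched in a few lines (the `config` dict here is an invented example):

```python
config = {"host": "localhost"}  # hypothetical settings dict with no "port" key

# LBYL: check the condition before acting.
if "port" in config:
    port = config["port"]
else:
    port = 8000

# EAFP: just act, and handle the failure if it happens.
try:
    port = config["port"]
except KeyError:
    port = 8000

print(port)  # → 8000, since "port" is missing either way
```

EAFP tends to be the more Pythonic choice when the "exceptional" case really is rare, since it avoids a double lookup and a race between the check and the access.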
This article explores PyPy, an alternative Python implementation written in Python, through two computational tasks: estimating π using the Monte Carlo method and calculating prime numbers with the Sieve of Eratosthenes.
CRISTIANOPIZZAMIGLIO.COM • Shared by Cristiano Pizzamiglio
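The two benchmark tasks are classics; minimal sketches of them (my own, not the article's code) look like this:

```python
import random


def estimate_pi(samples: int = 100_000) -> float:
    """Monte Carlo: the fraction of random points in the unit square
    that land inside the quarter circle approaches pi/4."""
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples


def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: repeatedly cross out multiples."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]  # 0 and 1 are not prime
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]


print(primes_up_to(20))  # → [2, 3, 5, 7, 11, 13, 17, 19]
print(round(estimate_pi(), 2))  # roughly 3.14
```

Tight loops like these are exactly where PyPy's JIT tends to shine relative to CPython.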
Posit Connect Cloud allows users to publish, host, and manage interactive apps, dashboards, Python models, APIs, and more. It offers a centralized, self-service platform for sharing Python-based data science. Amplify your impact by streamlining deployment and fostering sharing and collaboration →
POSIT sponsor
This step-by-step article covers how to build a scalable chat backend using Django, Wikipedia data, OpenAI embeddings, and FAISS.
YANN MALET
In this tutorial, you’ll explore the differences between an expression and a statement in Python. You’ll learn how expressions evaluate to values, while statements can cause side effects. You’ll also explore the gray areas between them, which will enhance your Python programming skills.
REAL PYTHON
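The distinction is easy to see in a short snippet (the values are invented for illustration):

```python
# Expressions evaluate to a value and can nest inside other expressions.
total = sum(n * 2 for n in range(3))  # generator expression + call → 6

# Statements perform an action; an `if` statement has no value of its own.
if total > 5:
    label = "big"
else:
    label = "small"

# The conditional *expression* sits in the gray area: it yields a value inline.
label_inline = "big" if total > 5 else "small"

print(total, label, label_inline)  # → 6 big big
```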
Gregory is the maintainer of the Python Standalone Builds project but due to changes in his life doesn’t have as much time to contribute. The folks at Astral have offered to take the project on. This note explains why.
GREGORY SZORC
ZenRows handles all anti-bot bypass for you, from rotating proxies and headless browsers to CAPTCHAs and AI. Get a complete web scraping toolkit to extract all the data you need with a single API call. Try ZenRows now for free →
ZENROWS sponsor
Seth is the go-to security guy at Python and other open source projects. He, and others, have noticed an increase in automatically generated security reports, which are often wrong. This post talks about what is going on and what you can do if you maintain your own repository.
SETH LARSON
PEP 705 introduced the typing.ReadOnly type qualifier to allow defining read-only typing.TypedDict items. This PEP proposes using ReadOnly in annotations of class and protocol attributes, as a single concise way to mark them read-only.
PYTHON.ORG
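A minimal sketch of the qualifier PEP 705 introduced (the `Movie` type is invented; `ReadOnly` needs Python 3.13+, or `typing_extensions` on older versions):

```python
try:
    from typing import ReadOnly, TypedDict  # ReadOnly landed in Python 3.13
except ImportError:
    from typing import TypedDict
    from typing_extensions import ReadOnly  # backport for older versions


class Movie(TypedDict):
    title: ReadOnly[str]  # a type checker rejects assignments to this key
    year: int             # this key remains writable


movie: Movie = {"title": "Blade Runner", "year": 1982}
movie["year"] = 2049           # fine: `year` is mutable
# movie["title"] = "Dune"      # a type checker reports an error here
```

The qualifier is enforced by static type checkers only; at runtime a TypedDict is still a plain dict.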
This is a list of the talks from PyData NYC 2024. Topics include “Explaining Machine Learning Models with LLMs”, “Using NASA EarthData Cloud & Python to Model Climate Risks”, “Unlocking the Power of Hybrid Search”, and more.
YOUTUBE.COM video
“Package resolution is a key part of the Python user experience as the means of extending Python’s core functionality.” This PEP covers how package indexes should be merged as a spec for package management tools.
PYTHON.ORG
Ned needed a way to check if a string consisted entirely of zeros and ones. He posted some possible answers and got a bunch more back in return. This post shows all the answers and how he validated them.
NED BATCHELDER
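Two of the kinds of answers discussed can be sketched like this (the function names are mine):

```python
def is_binary_set(s: str) -> bool:
    # Every distinct character must be "0" or "1"; "" passes vacuously.
    return set(s) <= {"0", "1"}


def is_binary_strip(s: str) -> bool:
    # str.strip("01") peels 0s and 1s off both ends; if anything is left,
    # some other character appeared somewhere in the string.
    return not s.strip("01")


# The two approaches agree on a mix of cases.
for candidate in ["0101", "1", "", "012", "0a1"]:
    assert is_binary_set(candidate) == is_binary_strip(candidate)

print(is_binary_set("0101"), is_binary_set("012"))  # → True False
```

Part of the fun of the post is exactly this kind of cross-validation: checking each clever one-liner against a straightforward reference implementation.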
This is a collection of the talks from PyCon Australia 2024. Topics include “How Smart is AI?”, “Three Django Apps in a Trenchcoat”, “Rethinking Data Catalogs”, and more.
YOUTUBE.COM video
When do you need a Staff Engineer? What’s the difference between a Staff Engineer and an Engineering Manager? This article covers these questions and more.
ALEX EWERLÖF
This article talks about juv, a tool that builds on top of uv and brings better environment management to Jupyter notebooks.
ERIC J. MA
GITHUB.COM/ANCIENTNLP • Shared by Anonymous
Use Python 3.13 REPL With Django Shell
GITHUB.COM/SELECTNULL • Shared by Sasha Matijasic
leopards: Query Your Python Lists

Events

Weekly Real Python Office Hours Q&A (Virtual)
December 11, 2024
REALPYTHON.COM
December 13, 2024
MEETUP.COM
December 14, 2024
MEETUP.COM
December 14, 2024
MEETUP.COM
December 18, 2024
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #659.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Freelock Blog: Use AI to write alt text for your images
Hot off the presses! A brand new module, AI Image Alt Text, uses your configured AI engine to write Alt text for your images, based on AI vision models. When you turn this on, you get a "Generate with AI" button next to image fields, where you can easily get AI to analyze your image and come up with alternative text.
With some quick tests, I'm finding it's describing the image better than I typically do.
Stephan Lachnit: Creating a bootable USB stick that can still be used as normal storage
As someone who daily drives rolling release operating systems, having a bootable USB stick with a live install of Debian is essential. It has saved me at least a couple of times, letting me revert broken packages and make my device bootable again. However, creating a bootable USB stick usually uses the entire storage of the USB stick, which seems unnecessary given that USB sticks easily have 64GiB or more these days while live ISOs still don’t use more than 8GiB.
In this how-to, I will explain how to create a bootable USB stick that also holds a FAT32 partition, which allows the USB stick to be used as normal storage.
Creating the partition table

The first step is to create the partition table on the new drive. There are several tools to do this; I recommend the ArchWiki page on this topic for details. For best compatibility, MBR should be used instead of the more modern GPT. For simplicity I just went with GParted since it has an easy GUI, but feel free to use any other tool. The layout should look like this:
Type  │ Partition │ Suggested size
──────┼───────────┼───────────────
EFI   │ /dev/sda1 │ 8GiB
FAT32 │ /dev/sda2 │ remainder

Notes:
- The disk names are just an example and have to be adjusted for your system.
- Don’t set disk labels, they don’t appear on the new install anyway and some UEFIs might not like it on your boot partition.
- If you used GParted, create the EFI partition as FAT32 and set the esp flag.
The next step is to download your ISO and mount it. I personally use a Debian Live ISO for this. To mount the ISO, use the loop mounting option:
mkdir -p /tmp/mnt/iso
sudo mount -o loop /path/to/image.iso /tmp/mnt/iso

Similarly, mount your USB stick:
mkdir -p /tmp/mnt/usb
sudo mount /dev/sda1 /tmp/mnt/usb

Copy the ISO contents

To copy the contents from the ISO to the USB stick, use this command:
sudo cp -r -L /tmp/mnt/iso/. /tmp/mnt/usb

The -L option is required to dereference the symbolic links present in the ISO, since FAT32 does not support symbolic links. Note that you might get warnings about cyclic symbolic links. Nothing can be done to fix those, except hoping that they won’t cause a problem. For me this was never a problem though.
Finishing touches

Unmount the ISO and USB stick:
sudo umount /tmp/mnt/usb
sudo umount /tmp/mnt/iso

Now the USB stick should be bootable and have a usable data partition that can be used on essentially every operating system. I recommend testing that the stick is bootable to make sure it actually works in case of an emergency.
Real Python: Documenting Python Projects With Sphinx and Read the Docs
Sphinx is a document generation tool that’s become the de facto standard for Python projects. It uses the reStructuredText (RST) markup language to define document structure and styling, and it can output in a wide variety of formats, including HTML, ePub, man pages, and much more. Sphinx is extendable and has plugins for incorporating pydoc comments from your code into your docs and for using MyST Markdown instead of RST.
Read the Docs is a free document hosting site where many Python projects host their documentation. It integrates with GitHub, GitLab, and Bitbucket to automatically pull new documentation sources from your repositories and build their Sphinx sources.
In this video course, you’ll learn how to:
- Write your documentation with Sphinx
- Structure and style your document with RST syntax
- Incorporate your pydoc comments into your documentation
- Host your documentation on Read the Docs
With these skills, you’ll be able to write clear, reliable documentation that’ll help your users get the most out of your project.
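As a quick illustration (the project name, theme, and extension list here are placeholders, not taken from the course), a minimal Sphinx conf.py might look something like this:

```python
# conf.py -- minimal Sphinx configuration (illustrative placeholder values)
project = "myproject"
author = "Your Name"
release = "0.1.0"

extensions = [
    "sphinx.ext.autodoc",  # pull pydoc comments from your code into the docs
    "myst_parser",         # optional: write pages in MyST Markdown instead of RST
]

html_theme = "alabaster"   # Sphinx's default theme
```

Running sphinx-quickstart generates a starter conf.py like this for you, which you then tailor to your project.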
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Mike Driscoll: Checking Python Code with GitHub Actions
When you are working on your personal or work projects in Python, you usually want to have a way to enforce code standards. You can use tools like Flake8, PyLint or Ruff to lint your code. You might use Mypy to verify type checking. There are lots of other tools at your disposal. But it can be hard to remember to do that every time you want to create a pull request (PR) in GitHub or GitLab.
That is where continuous integration (CI) tools come in. GitHub has something called GitHub Actions that allow you to run tools or entire workflows on certain types of events.
In this article, you will learn how to create a GitHub Action that runs Ruff on a PR. You can learn more about Ruff in my introductory article.
Creating a Workflow

In the root folder of your code, you will need to create a folder called .github/workflows. Note the period at the beginning and the fact that you need a subfolder named workflows too. Inside of that, you will create a YAML file.
Since you are going to run Ruff to lint and format your Python files, you can call this YAML file ruff.yml. Put the following in your new file:
name: Ruff

on: [workflow_dispatch, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.13"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ruff
      # Include `--output-format=github` to enable automatic inline annotations.
      - name: Run Ruff
        run: ruff check --output-format=github .
        continue-on-error: false
      - name: Run Ruff format
        run: ruff format --check .
        continue-on-error: false

Note: This example comes from my textual-cogs repo.
Let’s talk about what this does. This line is pretty important:
on: [workflow_dispatch, pull_request]

It tells GitHub to run this workflow when there’s a pull request and also with “workflow_dispatch”. That second option lets you go to the Actions tab in your GitHub repo, select the workflow and run it manually. If you do not include that, you cannot run it manually at all. This is useful for testing purposes, but you can remove it if you do not need it.
The next part tells GitHub to run the build on ubuntu-linux:
jobs:
  build:
    runs-on: ubuntu-latest

If you have a GitHub subscription, you can also get Windows as a runner, which means that you can run these actions on Windows in addition to Linux.
The steps section is the meat of this particular workflow:
steps:
  - uses: actions/checkout@v4
  - name: Install Python
    uses: actions/setup-python@v4
    with:
      python-version: "3.13"
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install ruff
  # Include `--output-format=github` to enable automatic inline annotations.
  - name: Run Ruff
    run: ruff check --output-format=github .
    continue-on-error: false
  - name: Run Ruff format
    run: ruff format --check .
    continue-on-error: false

Here it uses the built-in checkout@v4 action, which comes with GitHub. There are many others you can use to enhance your workflows and add new functionality. Next, you get set up with Python 3.13 and then you install your dependencies.
Once your dependencies are installed, you can run your tools. In this case, you are installing and running Ruff. For every PR, you run ruff check --output-format=github ., which will do all sorts of linting on your Python code. If any errors are found, it will add inline comments with the error, which is what that --output-format flag is for.
You also have a separate section to run Ruff format, which will format your code to follow the Black formatter (for the most part).
Wrapping Up

You can add lots of other tools to your workflows too. For example, you might add a Mypy workflow, or some test workflows to run on PR or perhaps before merging to your main branch. You might even want to check your code complexity before allowing a merge too!
With a little work, you will soon be able to use GitHub Actions to keep your code cleaner and make it more uniform too. Adding automated testing is even better! Give it a try and let me know what you think.
The post Checking Python Code with GitHub Actions appeared first on Mouse Vs Python.
Specbee: Why every Drupal developer needs to know about Services and Dependency Injection
Standards and the presumption of conformity
If you have been following the progress of the Cyber Resilience Act (CRA), you may have been intrigued to hear that the next step following publication of the Act as law in the Official Journal is the issue of a European Standards Request (ESR) to the three official European Standards Bodies (ESBs). What is that about? Well, a law like the CRA is extremely long and complex and conforming to it will involve a detailed analysis and a lot of legal advice.
Rather than forcing everyone individually to do that, the ESBs are instead sent a list of subjects that need proving and are asked to recommend a set of standards that, if observed, will demonstrate conformity with the law. This greatly simplifies things for everyone and leads to what the lawmakers call a “presumption of conformity.” You could go comply with the law based on your own research, but realistically that’s impossible for almost everyone so you will instead choose to observe the harmonized standards supplied by the ESBs.
This change of purpose for standards is very significant. They have evolved from merely being a vehicle to promote interoperability in a uniform market – an optional tool for private companies that improves their product for their consumers – to being a vehicle to prove legal compliance – a mandatory responsibility for all citizens and thus a public responsibility. This new role creates new challenges as the standards system was not originally designed with legal conformance in mind. Indeed, we are frequently reminded that standardization is a matter for the private sector.
So for example, the three ESBs (ETSI, CENELEC and CEN) all have “IPR rules” that permit the private parties who work within them to embed in the standards steps that are patented by those private companies. This arrangement is permitted by the European law that created the mechanism, Regulation 1025/2012 (in Annex II §4c). All three ESBs expressly tolerate this behaviour as long as the patents are then licensed to implementers of the standards on “Fair, Reasonable and Non Discriminatory” (FRAND) terms. None of those words is particularly well defined, and the consequence is that to implement the standards that emerge from the ESBs you may well need to retain counsel to understand your patent obligations and enable you to enter into a relationship with Europe’s largest commercial entities to negotiate a license to those patents.
Setting aside the obvious problems this creates for Open Source software (where the need for such relationships broadly inhibits implementation), it is also a highly questionable challenge to our democracy. At the foundation of our fundamental rights is the absolute requirement that first, every citizen may know the law that governs them and secondly every citizen is freely able to comply if they choose. The Public.Resource.Org case shows us this principle also extends to standards that are expressly or effectively necessary for compliance with a given law.
But when these standards are allowed to have patents intentionally embodied within them by private actors for their own profit, citizens find themselves unable to practically conform to the law without specialist support and a necessary private relationship with the patent holders. While some may have considered this to be a tolerable compromise when the goal of standards was merely interoperability, it is clearly an abridgment of fundamental rights to condition compliance with the law on identifying and negotiating a private licensing arrangement for patents, especially those embedded intentionally in standards.
Just as Regulation 1025/2012 will need updating to reflect the court ruling on availability of standards, so too should it be updated to require that harmonized standards will only be accepted from the ESBs if they are supplied on FRAND terms where all restrictions on use are waived by the contributors. Without this change, standards will serve only the benefit of dominant actors and not the public.
Emmanuel Kasper: Too many open files in Minikube Pod
Right now I am playing with minikube to run a three-node, highly available Kubernetes control plane. I am using the docker driver of minikube, so each Kubernetes node component is running inside a docker container, instead of using full blown VMs.
In my experience this works better than Kind, as using Kind you cannot correctly restart a cluster deployed in highly available mode.
This is the topology of the cluster:
$ minikube node list
minikube	192.168.49.2
minikube-m02	192.168.49.3
minikube-m03	192.168.49.4

Each Kubernetes node is actually a docker container:
$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS         PORTS                                                                                                                                  NAMES
977046487e5e   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32812->22/tcp, 127.0.0.1:32811->2376/tcp, 127.0.0.1:32810->5000/tcp, 127.0.0.1:32809->8443/tcp, 127.0.0.1:32808->32443/tcp   minikube-m03
8be3549f0c4c   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32807->22/tcp, 127.0.0.1:32806->2376/tcp, 127.0.0.1:32805->5000/tcp, 127.0.0.1:32804->8443/tcp, 127.0.0.1:32803->32443/tcp   minikube-m02
4b39f1c47c23   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32802->22/tcp, 127.0.0.1:32801->2376/tcp, 127.0.0.1:32800->5000/tcp, 127.0.0.1:32799->8443/tcp, 127.0.0.1:32798->32443/tcp   minikube

The whole list of pods running in the cluster looks like this:
$ minikube kubectl -- get pods --all-namespaces
kube-system   coredns-6f6b679f8f-85n9l   0/1   Running   1 (44s ago)   3m32s
kube-system   coredns-6f6b679f8f-pnhxv   0/1   Running   1 (44s ago)   3m32s
kube-system   etcd-minikube              1/1   Running   1 (44s ago)   3m37s
kube-system   etcd-minikube-m02          1/1   Running   1 (42s ago)   3m21s
...
kube-system   kube-proxy-84gm6           0/1   Error     1 (31s ago)   3m4s

One of the containers is in Error status, so let us check the logs:
$ minikube kubectl -- logs -n kube-system kube-proxy-84gm6
E1210 11:50:42.117036       1 run.go:72] "command failed" err="failed complete: too many open files"

This can be fixed by increasing the number of inotify watchers:
# cat > /etc/sysctl.d/minikube.conf <<EOF
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
# sysctl --system
...
* Applying /etc/sysctl.d/minikube.conf ...
* Applying /etc/sysctl.conf ...
...
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512

Restart the cluster:
$ minikube stop
$ minikube start
$ minikube kubectl -- get pods --all-namespaces | grep Error
$

PyCharm: The State of Python 2024
This is a guest post from Michael Kennedy, the founder of Talk Python and a PSF Fellow.
Hey there. I’m Michael Kennedy, the founder of Talk Python and a Python Software Foundation (PSF) Fellow. I’m thrilled that the PyCharm team invited me to share my thoughts on the state of Python in 2024. If you haven’t heard of Talk Python, it’s a podcast and course platform that has been exploring the Python space for almost 10 years now.
In this blog post, I want to bring that experience to analyze the results of the Python Developers Survey 2023 of 25,000 Python developers, run by PSF and JetBrains, alongside other key industry milestones of the past year. While the name implies it’s just 2023, votes were actually collected a few months into 2024, so this is indeed the latest PSF community survey.
You should know that, at my core, I’m a web and API developer. So we’ll be focusing in on the emerging trends for web developers in the Python space. Read on to discover the top Python trends, and I hope you can use these insights as a powerful guide for your professional programming journey in the year ahead.
Python continues its growth in popularity

Broadly speaking, Python is in a great space at the moment. A couple of years ago, Python became the most popular language on Stack Overflow. Then, Python jumped to the number one language on the TIOBE index. Presently, Python is twice as popular as the second most popular language on the index (C++)! Most significantly, in November 2024, GitHub announced that Python is now the most used language on GitHub.
The folks at GitHub really put their finger on a key reason for this popularity:
[Python’s] continued growth over the past few years – alongside that of Jupyter Notebooks – may suggest that [Python developers’] activity on GitHub is going beyond traditional software development.

Indeed. It is the wide and varied usage of Python that is helping pull folks into the community. And Python’s massive library of over 500,000 open-source packages on PyPI keeps them coming back.
Some key Python trends of 2024Let’s dive into some of the most important trends. We’re going to heavily lean on the Python Developer’s Survey results.
As you explore these insights and apply them to your projects, having the right tools can make all the difference. Try PyCharm for free and stay equipped with everything you need for data science and web development.
Try PyCharm for free

Trend 1: Python usage with other languages drops as general adoption grows

There’s a saying in web development that you never just learn one language. For example, if you only know Python and want to get started with Flask, you still have a lot to learn. You’ll need to use HTML (via Jinja templates most likely) and you’ll need CSS. Then if you want to use a database, as most sites do, you’ll need to know some query language like SQL, and so on.
Our first trend has to do with the combination of such languages for Python developers.
Does it surprise you that Python is used with JavaScript and HTML? Of course not! But look at the year-over-year trends. We have JavaScript going from 40% to 37% and now at 35%. The trends for HTML, CSS, and SQL all follow a very similar curve.
Does that mean people are not using HTML as much? Probably not. Remember GitHub’s quote above: The massive growth of Python is partially due to twice as many non-web developers moving to Python. So HTML’s stats aren’t going down, per se. It’s just that the other areas are growing that much faster.
One other area I’d like to touch on in this section is Rust. We’ve seen the incredible success of libraries moving to Rust such as Pydantic and uv. Many of the blog posts and conference talks are abuzz with such tools. So it may feel like Rust is rushing to overtake Python. But in this survey, we see Rust barely changing YoY. Another data point is the TIOBE index I mentioned at the top. There, Rust only grew by 1.5%, whereas Python grew by 7%. We may love our new Rust tools in Python, but we love writing Python even more.
Trend 2: 41% of Python developers have under 2 years of experience

It’s easy to assume that beginners make up a relatively small portion of this developer community. But when your community is experiencing rapid growth in significant segments, that might not be true. And this is indeed what the survey suggests. A whopping 41% of Python developers have been using Python for less than 2 years.
This has a major impact on many things. If you’re writing a tutorial or documentation for an open-source project, you may be inclined to write instructions without much context such as “Next, just create a virtual environment here…” It’s also easy to assume that everyone knows what a SQL injection attack is and how to avoid it when talking to Postgres from Python.
Realizing that almost half of the people you’re talking to, or building for, are pretty much brand new to Python or have come to Python from areas outside of traditional programming roles, you can’t take for granted that they will understand such things.
Keeping this important data point in mind allows us to build an even stronger community that fosters people joining from multiple backgrounds. They’ll have a better initial experience and we’ll all be better for it.
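To make the SQL injection point above concrete, here is a hedged sketch using the standard library's sqlite3 module (standing in for Postgres, where the same idea applies with a driver such as psycopg):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "nobody' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# so this returns every row in the table.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(leaked)  # [('alice',)] -- the injection succeeded
print(safe)    # [] -- no user is literally named "nobody' OR '1'='1"
```

The habit to teach newcomers is simple: never build SQL with f-strings or concatenation; always pass values through the driver's placeholders.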
Trend 3: Python learning expands through diverse channels

How do you stay on top of a rapidly changing technology such as Python and its broader ecosystem? We truly live in an age of riches when it comes to staying connected and getting news on specialized topics such as Python.
Respondents to the survey indicated several news channels, including YouTube channels, podcasts, blogs, and events. They shared popular and loved sources in each area. For YouTube, people frequently turn to ArjanCodes and mCoding. Listeners of podcasts recommend my own Talk Python To Me as their top resource, as well as the extremely popular Lex Fridman Podcast. While there are many high-quality Python blogs out there, people put Dan Bader’s Real Python and Michael Herman’s Test Driven sites at the top. Finally, respondents said they got massive value from the popular Python conferences such as PyCon, EuroPython, and others.
If you’re currently learning Python, one more resource I’ll toss in that you can find right here on the website at jetbrains.com/guide/python. You’ll find many quick videos teaching about many different frameworks. A bunch of them are done by my friend Paul Everitt, who is an excellent teacher.
Trend 4: The Python 2 vs. 3 divide is in the distant past

For a long time in the Python community, we had a tough transition. Python 3 was released on December 3, 2008. Yet, even in the mid 2010s, there was an ongoing debate over whether moving to Python 3 was a good idea. It was an incredibly wasteful time looking back, yet this went on for years.
In fact, Guido van Rossum, the creator of Python, had to make a proclamation that there would be no more updates to Python 2 ever (beyond security patches) as a way to move people to Python 3. Here is Guido at PyCon 2014:
If you look at the Python 2 vs. 3 usage now, it’s clear this weird time for Python is in the past. Have you noticed how fast progress has been on new features and faster Python since we all moved to Python 3?
Today, it’s a bit more relevant to consider which Python 3 versions are in play for most users.
Python 3.13 came out in October 2024, after these results were collected. So at the time of polling, 3.12 was the very latest version.
You can see that over 70% of people use one of the 3 latest releases and the majority of them are 1 release behind the latest. If you want my advice, it’s very safe to use the latest major version after a point release (e.g. 3.13.1). Talk Python and its many related websites and APIs have already been running 3.13.0 for a couple of weeks and there have been zero issues. But our apps don’t power airplanes, nuclear power stations or health-critical systems, so we can get away with a little less stability.
Trend 5: Flask, Django, and FastAPI remain top Python web frameworks

This trend is endlessly fascinating. Here’s the situation: You’re starting a new web project, and you know you’re doing it in Python. You might wonder: Is Django still relevant? Do people still use Flask? After all, we’ve had some new frameworks land recently that have really gained a lot of market share. For example, FastAPI has grown very quickly as of late and its praise is well deserved.
However, the answer is yes on both accounts – people do still use Flask and they still love Django. Let’s look at the numbers.
There are three frameworks whose usage goes well beyond any others: Flask, Django, and FastAPI. They are nearly tied in usage.
Flask in particular seems to have had a big resurgence lately. I interviewed Flask’s maintainer David Lord over on Talk Python about the State of Flask and Pallets in 2024 if you’re interested in fully diving into Flask with David.
While these big three frameworks may seem nearly tied, digging deeper reveals that they are all popular for quite different reasons and are used differently across disciplines.
Notice that 63% of web developers use Django compared to 42% using Flask. On the other hand, data scientists prefer Flask and FastAPI each over Django (36% and 31% vs 26%). Web developers use Django 2.4x as often as data scientists. Surely this is a reflection of what types of web projects data scientists typically create compared to web devs.
If you’d like to dive deeper into the Django results, check out Valeria Letusheva’s The State of Django 2024.
Trend 6: Most Python web apps run on hyperscale clouds

When considering where to host and deploy your web app or API, there are three hyperscale clouds and then a wide variety of other options. Most people indicated they chose one of the large cloud providers: AWS, Azure, or GCP.
For every one of the large clouds, we see YoY growth and 78% of the respondents are using at least one of the large cloud providers. That is somewhat surprising. Looking below that group, we see even more interesting trends.
Number 4 is PythonAnywhere, a seriously no-fuss provider for simple hosting of Python web apps. It allows you to set up Python web apps and other projects with very minimal Linux experience! Interesting fact: Talk Python launched on PythonAnywhere back in 2015 before moving to Digital Ocean in 2016.
Two other things jump out here. First, Heroku has long been a darling of Python developers. They saw a big drop YoY from 13% to 7%. While this might look very dramatic, it’s likely due to the fact that Heroku canceled their very popular free tier at the end of 2022. Secondly, Hetzner, a popular German hosting company that offers ridiculously affordable IaaS servers, makes the list for the first time. This is likely due to Hetzner opening their first data centers in the USA last year.
Trend 7: Containers over VMs over hardware

As web apps increase in complexity and container technology continues to improve, we see most cloud developers publish their apps within containers (bare Docker, Kubernetes, and the like).
Close behind that, we see IaaS continuing to shine with many developers choosing virtual machines (VMs).
Trend 8: uv takes Python packaging by storm

This final trend I’m adding in did not make the survey, as it was released to the public after the questions were finalized, but it’s too important to omit.
Astral, the company behind Ruff, released a new package manager based on Rust that has immediately taken the Python community by storm. It’s fast – really fast:
It bundles many of the features of multiple, popular tools into one. If you’re using pip, pip-tools, venv, virtualenv, poetry, or others, you’ll want to give uv a look.
Parting thoughts

I hope you found this deep look inside the Python space insightful. These findings are interesting, but they are also a powerful guide to your next year of professional programming decisions.
I encourage you to add yourself to the mailing list for the survey to get notified when the next one goes live and when the next set of results are published. Visit the survey results page and then go all the way to the bottom and enter your email address where it says “participate in future surveys.”
Finally, if you’ve enjoyed this deep dive into Python 2024, you might want to check out the video of my keynote from PyCon Philippines 2024.
Start developing with PyCharm

PyCharm provides everything you need for productive data science and web development right out of the box, from web development and databases to Jupyter notebooks and interactive tables for data projects – all in one powerful IDE.
Try PyCharm for free

About the author

Michael Kennedy

Michael is the founder of Talk Python and a PSF Fellow. Talk Python is a podcast and course platform that has been exploring the Python ecosystem for nearly 10 years. At his core, Michael is a web and API developer.
LN Webworks: 5 Reasons Drupal Is the Best Choice For E-commerce Development
Companies are using the newest technologies to improve their e-commerce websites’ customer service and outreach. With an e-commerce website, you can offer better features to your audience that enhance visitors’ overall experience on the site.
And what better CMS of choice than Drupal for your large-scale websites? Drupal development services offer fresh features for e-commerce businesses that give their customers a better online shopping experience. On top of that, there is also a Drupal e-commerce module that helps you engage more with the audience that visits your website and convert them.
In this blog, you will learn more about why you should use Drupal For your e-commerce websites and how it can be the best decision for your business.
Without further ado, let's get started.
Made With Mu: Announcement: The Sun is setting on Mu in 2025
This decision has been coming for a while.
In summary: We (the core development team) have decided together to retire Mu. This blog post explains what happens next, what you can expect from us, and what the end result should look like. As always, we invite constructive feedback, ideas and critique.
In more detail: When Mu was first created, there was a need for something Mu-ish and we were able to quickly build what teachers, beginners and those who support them asked for. Almost ten years later, there are lots of well funded and supported tools that fulfil the reason Mu was created. Furthermore, we (the core team) are volunteers and our lives have moved on to new jobs, new interests and new communities. However, we all feel very fond of Mu, and proud of what we have achieved, so we want to honour the community that has coalesced around Mu and ensure the retirement is smooth, predictable and doesn’t leave folks surprised or uncertain.
Currently, here’s what’s going to happen:
- Sometime in the new year we will release a new version of Mu, based on the current state of the master branch with a few bug fixes. This will be a “legacy” release of Mu in that we deliberately pin our versions of libraries to older versions so the many folks using Mu on equipment that is old and/or out-of-date can still continue to use it. (Aside: many educational institutions and other places that use Mu have limited budgets and use computing equipment that is not able, for whatever reason, to use the latest and greatest version of the libraries upon which we depend.)
- Soon after, we will release the final version of Mu. To the best of our ability and availability of effort, this will be updated to use the latest versions of all the libraries upon which we depend. This is to ensure the final release of Mu is usable for perhaps a few more years before starting to look out of date.
- After this final release we will put all the repositories in the Mu organisation on GitHub into an archived and read-only mode.
- The Mu related websites (codewith.mu and madewith.mu) will be updated to point to the latest releases, and reflect the retired status of Mu. After one year, these domain names will not be renewed.
- Before releasing Mu in steps 1 and 2, we will ensure that any links used by Mu point to the site as archived by the Wayback Machine at archive.org - thus ensuring things like help will continue to work into the future.
- At each point in the process described above, we will publish a post on this blog so folks can follow our progress.
- Finally, I (Nicholas) in collaboration with the other core developers (Carlos, Tiago, Tim and Vasco), will write a retrospective blog post on my personal blog to reflect on the journey that we’ve been on with Mu: the good, the bad and the ugly (it has been quite a ride, and we want to end on a high note!).
We expect this process to be finished by the end of the first quarter of 2025.
As is always the case with Mu, we welcome constructive feedback, ideas and critique. Should we receive feedback, we’ll try to address it and, when appropriate, integrate it. Feedback should be made via our discussions page on GitHub.
Nicholas, Carlos, Tiago, Tim and Vasco.
LostCarPark Drupal Blog: Drupal Advent Calendar day 10 - Privacy
Welcome to the tenth door of the Drupal Advent Calendar. Today we hand over to Jürgen Haas to tell us about the Privacy track of the Starshot project.
Right after DrupalCon Portland, it was clear to us at LakeDrops that we would support the Starshot initiative wherever we could. And when the tracks had been announced, I applied for the privacy track, not only because it was still open but also because that topic is so close to my heart. In my view, the internet will only remain a benefit to us, the users, if it respects our privacy, a human right in more and more countries of the world…
TagsRuss Allbery: 2024 book haul
I haven't made one of these posts since... last year? Good lord. I've therefore already read and reviewed a lot of these books.
Kemi Ashing-Giwa — The Splinter in the Sky (sff)
Moniquill Blackgoose — To Shape a Dragon's Breath (sff)
Ashley Herring Blake — Delilah Green Doesn't Care (romance)
Ashley Herring Blake — Astrid Parker Doesn't Fail (romance)
Ashley Herring Blake — Iris Kelly Doesn't Date (romance)
Molly J. Bragg — Scatter (sff)
Sarah Rees Brennan — Long Live Evil (sff)
Michelle Browne — And the Stars Will Sing (sff)
Steven Brust — Lyorn (sff)
Miles Cameron — Beyond the Fringe (sff)
Miles Cameron — Deep Black (sff)
Haley Cass — Those Who Wait (romance)
Sylvie Cathrall — A Letter to the Luminous Deep (sff)
Ta-Nehisi Coates — The Message (non-fiction)
Julie E. Czerneda — To Each This World (sff)
Brigid Delaney — Reasons Not to Worry (non-fiction)
Mar Delaney — Moose Madness (sff)
Jerusalem Demsas — On the Housing Crisis (non-fiction)
Michelle Diener — Dark Horse (sff)
Michelle Diener — Dark Deeds (sff)
Michelle Diener — Dark Minds (sff)
Michelle Diener — Dark Matters (sff)
Elaine Gallagher — Unexploded Remnants (sff)
Bethany Jacobs — These Burning Stars (sff)
Bethany Jacobs — On Vicious Worlds (sff)
Micaiah Johnson — Those Beyond the Wall (sff)
T. Kingfisher — Paladin's Faith (sff)
T.J. Klune — Somewhere Beyond the Sea (sff)
Mark Lawrence — The Book That Wouldn't Burn (sff)
Mark Lawrence — The Book That Broke the World (sff)
Mark Lawrence — Overdue (sff)
Mark Lawrence — Returns (sff collection)
Malinda Lo — Last Night at the Telegraph Club (historical)
Jessie Mihalik — Hunt the Stars (sff)
Samantha Mills — The Wings Upon Her Back (sff)
Lyda Morehouse — Welcome to Boy.net (sff)
Cal Newport — Slow Productivity (non-fiction)
Naomi Novik — Buried Deep and Other Stories (sff collection)
Claire O'Dell — The Hound of Justice (sff)
Keanu Reeves & China Miéville — The Book of Elsewhere (sff)
Kit Rocha — Beyond Temptation (sff)
Kit Rocha — Beyond Jealousy (sff)
Kit Rocha — Beyond Solitude (sff)
Kit Rocha — Beyond Addiction (sff)
Kit Rocha — Beyond Possession (sff)
Kit Rocha — Beyond Innocence (sff)
Kit Rocha — Beyond Ruin (sff)
Kit Rocha — Beyond Ecstasy (sff)
Kit Rocha — Beyond Surrender (sff)
Kit Rocha — Consort of Fire (sff)
Geoff Ryman — HIM (sff)
Melissa Scott — Finders (sff)
Rob Wilkins — Terry Pratchett: A Life with Footnotes (non-fiction)
Gabrielle Zevin — Tomorrow, and Tomorrow, and Tomorrow
(mainstream)
That's a lot of books, although I think I've already read maybe a third of them? Which is better than I usually do.
Gunnar Wolf: Some tips for those who still administer Drupal7-based sites
My main day-to-day responsibility in my workplace is, and has been for 20 years, to take care of the network infrastructure for UNAM’s Economics Research Institute. One of the most visible parts of this responsibility is to ensure we have a working Web presence, and that it caters for the needs of our academic community.
I joined the Institute in January 2005. Back then, our designer pushed static versions of our webpage, completely built on her computer. This was standard practice at the time, and lasted through some redesigns, but I soon started advocating for the adoption of a Content Management System. After evaluating some alternatives, I recommended adopting Drupal. It took us quite a while to make the change: even though I clearly recall starting work toward adopting it as early as 2006, according to the Internet Archive, we switched to a Drupal-backed site around June 2010. We started using it somewhere in version 6’s lifecycle.
As for my Debian work, by late 2012 I started getting involved in the maintenance of the drupal7 package, and by April 2013 I became its primary maintainer. I kept the drupal7 package up to date in Debian until ≈2018; the supported build methods for Drupal 8 are not compatible with Debian (mainly, bundling third-party libraries and updating them without coordination with the rest of the ecosystem), so towards the end of 2016, I announced I would not package Drupal 8 for Debian.
By March 2016, we migrated our main page to Drupal 7. By then, we already had several other sites for our academics’ projects, but my narrative follows our main Web site. I did manage to migrate several Drupal 6 (D6) sites to Drupal 7 (D7); it was quite an involved process, never transparent to the user, and we did face backlash from many of our users over long downtimes (or partial downtimes, with sites only half-available). For our main site, we took the opportunity to do a complete redesign and deployed a fully new site.
You might note that March 2016 is after the release of D8 (November 2015). I don’t recall many of the specifics of this decision, but if I’m not mistaken, building the new site was a months-long process — not only for the technical work of setting it up, but for the legwork of getting all of the needed information from the different areas that needed to be represented in the Institute. Not only that: Drupal sites often include tens of contributed themes and modules; the technological shift the project underwent between its 7 and 8 releases was too deep, and modules took a long time (if at all — many themes and modules were outright abandoned) to become available for the new release.
Naturally, the Drupal Foundation wanted to evolve and deprecate the old codebase. But the pain of migrating from D7 to D8 is too great, and many sites have remained on version 7 — eight years after D8’s release, almost 40% of Drupal installs are for version 7, and a similar proportion runs a currently-supported release (10 or 11). And while the Drupal Foundation did a great job of providing very-long-term support for D7, I understand the burden is becoming too much, so close to a year ago (and after pushing back the D7 end-of-life date several times), they finally announced that support will end this upcoming January 5.
Drupal 7 must go!
I found the following usage graphs quite interesting: the usage statistics for all Drupal versions follow a very positive slope, peaking around 2014 during the best years of D7, and somewhat stagnating afterwards, staying since 2015 at the 25,000–28,000 sites mark (I’m very tempted to copy the graphs, but builtwith’s terms of use are very clear in not allowing it). There is a sharp drop in the last year — I attribute it to the people who are leaving D7 for other technologies after its end-of-life announcement. This becomes clearer looking only at D7’s usage statistics: D7 peaks at ≈15,000 installs in 2016, stays there for close to five years, and then drops sharply to under 7,500 sites in the span of one year.
D8 has a more “regular” rise, peak, and fall, peaking at ~8,500 sites between 2020 and 2021, and down to close to 2,500 for some months already; D9 has a very brief peak of almost 9,000 sites in 2023 and is now close to half of that. Currently, the Drupal king appears to be D10, still on a positive slope and with over 9,000 sites. Drupal 11 is still just a blip on builtwith’s radar, with… 3 registered sites as of September 2024 :-Þ
After writing this last paragraph, I came across the statistics found on the Drupal website; the methodology for acquiring its data is completely different: while builtwith’s methodology is their trade secret, you can read more about how Drupal’s data is gathered (and agree or disagree with it 😉), but at least you have a page detailing 12 years so far of reported data, producing the following graph (which can be shared under the CC BY-SA license 😃):
This graph is disaggregated into minor versions, and I don’t want to come up with yet another graph for it 😉 but it supports (most of) the narrative I presented above… although I do miss the recent drop builtwith reported in D7’s numbers!
And what about Backdrop?
During the D8 release cycle, a group of Drupal developers were not happy with the depth of the architectural changes being adopted, particularly the transition to the Symfony PHP component framework, and forked the D7 codebase to create the Backdrop CMS: a modern version of Drupal, without dropping the known and tested architecture it had. The Backdrop developers keep working closely with the Drupal community, and although its usage numbers are way smaller than Drupal’s, it seems sustainable and lively. Of course, as with the numbers I presented in the previous section, you can see Backdrop’s numbers in builtwith… they are way, way lower.
I have found it to be a very warm and welcoming community, eager to receive new members. And, thanks to its contributed D2B Migrate module, I found it is quite easy to migrate a live site from Drupal 7 to Backdrop.
Migration by playbook!
So… Well, I’m an academic. And (if it’s not obvious to you after reading this far 😉), one of the things I must do in my job is to write. So I decided to write an article inviting my colleagues to consider Backdrop for their D7 sites in Cuadernos Técnicos Universitarios de la DGTIC, a young journal in our university for showcasing technical academic work. And now that my article has been accepted and published, I’m happy to share it with you — of course, if you can read Spanish 😉 But anyway…
Given that I have several sites to migrate, and that I’m trying to get my colleagues to follow suit, I decided to automate the migration by writing an Ansible playbook to do the heavy lifting. Of course, the playbook’s users will probably need to tweak it a bit to their personal needs. I’m also far from an Ansible expert, so I’m sure there is ample room for improvement in my style.
But it works. Quite well, I must add.
But with this size of database…
I did stumble across a big pebble, though. I am working on the migration of one of my users’ sites, and found that its database is… huge. I checked the mysqldump output, and it came to close to 3GB of data. And given that D2B_migrate is meant to work via a Web interface (my playbook works around it by using a client I wrote with Perl’s WWW::Mechanize), I repeatedly stumbled over PHP’s maximum POST size, maximum upload size, maximum memory size…
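For reference, these are the standard php.ini directives that govern the limits just mentioned. The values shown are purely illustrative; shrinking the dump, as described below, is the nicer fix:

```ini
; Illustrative values only; a ~3GB upload would need roughly this much headroom
post_max_size = 4G
upload_max_filesize = 4G
memory_limit = 4G
```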
I asked for help in Backdrop’s Zulip chat site, and my attention was redirected from fixing PHP to something more obvious: why is the database so large? So I took a quick look at the database (or rather: my first look was at the database server’s filesystem usage). MariaDB stores each table as a separate file on disk, so I looked for the nine largest tables:
# ls -lhS | head
total 3.8G
-rw-rw---- 1 mysql mysql 2.4G Dec 10 12:09 accesslog.ibd
-rw-rw---- 1 mysql mysql 224M Dec  2 16:43 search_index.ibd
-rw-rw---- 1 mysql mysql 220M Dec 10 12:09 watchdog.ibd
-rw-rw---- 1 mysql mysql 148M Dec  6 14:45 cache_field.ibd
-rw-rw---- 1 mysql mysql  92M Dec  9 05:08 aggregator_item.ibd
-rw-rw---- 1 mysql mysql  80M Dec 10 12:15 cache_path.ibd
-rw-rw---- 1 mysql mysql  72M Dec  2 16:39 search_dataset.ibd
-rw-rw---- 1 mysql mysql  68M Dec  2 13:16 field_revision_field_idea_principal_articulo.ibd
-rw-rw---- 1 mysql mysql  60M Dec  9 13:19 cache_menu.ibd

A single table, the access log, is over 2.4GB long. The largest tables that follow it are search indexes, logs and caches; I can perfectly live without their data in our new site! But I don’t want to touch the slightest bit of this site until I’m satisfied with the migration process, so I found a way to exclude those tables in a non-destructive way: given that D2B_migrate works with a mysqldump output, and given that the dump locks each table before inserting its data and unlocks it when done, I can just do the following:
$ perl -e '$output = 1; while (<>) { $output=0 if /^LOCK TABLES `(accesslog|search_index|watchdog|cache_field|cache_path)`/; $output=1 if /^UNLOCK TABLES/; print if $output }' < /tmp/d7_backup.sql > /tmp/d7_backup.eviscerated.sql
$ ls -hl /tmp/d7_backup.sql /tmp/d7_backup.eviscerated.sql
-rw-rw-r-- 1 gwolf gwolf 216M Dec 10 12:22 /tmp/d7_backup.eviscerated.sql
-rw------- 1 gwolf gwolf 2.1G Dec  6 18:14 /tmp/d7_backup.sql

Five seconds later, I’m done! The database is now a tenth of its size, and D2B_migrate is happy to take it. And I’m a big step closer to ending my reliance on (this bit of) legacy code for my highly-visible sites 😃
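For those more at home in Python than Perl, the same LOCK/UNLOCK trick can be sketched as a small filter. The table names here are illustrative; adjust them to whatever your own dump shows:

```python
import re

# Tables whose data we want to drop from the dump (names illustrative)
SKIP = ("accesslog", "search_index", "watchdog", "cache_field", "cache_path")
LOCK_RE = re.compile(r"^LOCK TABLES `(%s)`" % "|".join(SKIP))


def eviscerate(lines):
    """Yield dump lines, skipping the data blocks of unwanted tables.

    mysqldump wraps each table's INSERTs in LOCK TABLES ... / UNLOCK TABLES,
    so we can toggle output off at a matching LOCK line and back on at UNLOCK.
    """
    output = True
    for line in lines:
        if LOCK_RE.match(line):
            output = False
        elif line.startswith("UNLOCK TABLES"):
            output = True
        if output:
            yield line
```

Used like `out.writelines(eviscerate(dump_file))`, this mirrors the Perl one-liner: the UNLOCK line itself is kept, while the LOCK line and everything inside a skipped table’s block is dropped.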
Trey Hunner: Lazy self-installing Python scripts with uv
I frequently find myself writing my own short command-line scripts in Python that help me with day-to-day tasks.
It’s so easy to throw together a single-file Python command-line script and throw it in my ~/bin directory!
Well… it’s easy, unless the script requires anything outside of the Python standard library.
Recently I’ve started using uv, and my primary use for it has been fixing Python’s “just manage the dependencies automatically” problem.
I’ll share how I’ve been using uv… but first let’s look at the problem.
A script without dependencies
If I have a Python script that I want to be easily usable from anywhere on my system, I typically follow these steps:
- Add an appropriate shebang line above the first line in the file (e.g. #!/usr/bin/env python3)
- Set an executable bit on the file (chmod a+x my_script.py)
- Place the script in a directory that’s in my shell’s PATH variable (e.g. cp my_script.py ~/bin/my_script)
For example, here’s a script I use to print out 80 zeroes (or a specific number of zeroes) to check whether my terminal’s font size is large enough when I’m teaching:
#!/usr/bin/env python3
import sys

numbers = sys.argv[1:] or [80]
for n in numbers:
    print("0" * int(n))

This file lives at /home/trey/bin/0 so I can run the command 0 from my system prompt to see 80 0 characters printed in my terminal.
This works great! But this script doesn’t have any dependencies.
The problem: a script with dependencies
Here’s a Python script that normalizes the audio of a given video file and writes a new audio-normalized version of the video to a new file:
"""Normalize audio in input video file."""
from argparse import ArgumentParser
from pathlib import Path

from ffmpeg_normalize import FFmpegNormalize


def normalize_audio_for(video_path, audio_normalized_path):
    """Return audio-normalized video file saved in the given directory."""
    ffmpeg_normalize = FFmpegNormalize(
        audio_codec="aac",
        audio_bitrate="192k",
        target_level=-17,
    )
    ffmpeg_normalize.add_media_file(str(video_path), audio_normalized_path)
    ffmpeg_normalize.run_normalization()


def main():
    parser = ArgumentParser()
    parser.add_argument("video_file", type=Path)
    parser.add_argument("output_file", type=Path)
    args = parser.parse_args()
    normalize_audio_for(args.video_file, args.output_file)


if __name__ == "__main__":
    main()

This script depends on the ffmpeg-normalize Python package and the ffmpeg utility. I already have ffmpeg installed, but I prefer not to globally install Python packages. I install all Python packages within virtual environments and I install global Python scripts using pipx.
At this point I could choose to either:
- Create a virtual environment, install ffmpeg-normalize in it, and put a shebang line referencing that virtual environment’s Python binary at the top of my script file
- Turn my script into a pip-installable Python package with a pyproject.toml that lists ffmpeg-normalize as a dependency and use pipx to install it
That first solution requires me to keep track of virtual environments that exist for specific scripts to work. That sounds painful.
The second solution involves making a Python package and then upgrading that Python package whenever I need to make a change to this script. That’s definitely going to be painful.
The solution: let uv handle it
A few months ago, my friend Jeff Triplett showed me that uv can work within a shebang line and can read a special comment at the top of a Python file that tells uv which Python version to run a script with and which dependencies it needs.
Here’s a shebang line that would work for the above script:
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "ffmpeg-normalize",
# ]
# ///

That tells uv that this script should be run on Python 3.12 and that it depends on the ffmpeg-normalize package.
Neat… but what does that do?
Well, the first time this script is run, uv will create a virtual environment for it, install ffmpeg-normalize into that venv, and then run the script:
$ normalize
Reading inline script metadata from `/home/trey/bin/normalize`
Installed 4 packages in 5ms
usage: normalize [-h] video_file output_file
normalize: error: the following arguments are required: video_file, output_file

Every time the script is run after that, uv finds and reuses the same virtual environment:
$ normalize
Reading inline script metadata from `/home/trey/bin/normalize`
usage: normalize [-h] video_file output_file
normalize: error: the following arguments are required: video_file, output_file

Each time uv runs the script, it quickly checks that all listed dependencies are properly installed with their correct versions.
Another script I use this for is caption, which uses whisper (via the OpenAI API) to quickly caption my screencasts just after I record and edit them. The caption quality very rarely needs more than a very minor edit or two (for my personal accent of English, at least), even for technical terms like “dunder method”, and via the API the captions generate very quickly.
uv everywhere?
I haven’t yet fully embraced uv everywhere.
I don’t manage my Python projects with uv, though I do use it to create new virtual environments (with --seed to ensure the pip command is available) as a virtualenvwrapper replacement, along with direnv.
I have also started using uv tool as a pipx replacement and I’ve considered replacing pyenv with uv.
uv instead of pipx
When I want to install a command-line tool that happens to be Python-powered, I used to do this:
$ pipx install countdown-cli

Now I do this instead:
$ uv tool install countdown-cli

Either way, I end up with a countdown script in my PATH that automatically uses its own separate virtual environment for its dependencies:
$ countdown --help
Usage: countdown [OPTIONS] DURATION

  Countdown from the given duration to 0.

  DURATION should be a number followed by m or s for minutes or seconds.

  Examples of DURATION:

  - 5m (5 minutes)
  - 45s (45 seconds)
  - 2m30s (2 minutes and 30 seconds)

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

uv instead of pyenv
For years, I’ve used pyenv to manage multiple versions of Python on my machine.
$ pyenv install 3.13.0

Now I could do this:
$ uv python install --preview 3.13.0

Or I could make a ~/.config/uv/uv.toml file containing this:
preview = true

And then run the same thing without the --preview flag:
$ uv python install 3.13.0

This puts a python3.13 binary in my ~/.local/bin directory, which is on my PATH.
Why “preview”? Well, without it uv doesn’t (yet) place python3.13 in my PATH by default, as this feature is currently in testing/development.
Self-installing Python scripts are the big win
I still prefer pyenv for its ability to install custom Python builds and I don’t have a preference between uv tool and pipx.
The biggest win that I’ve experienced from uv so far is the ability to run an executable script and have any necessary dependencies install automagically.
This doesn’t mean that I never make a Python package out of my Python scripts anymore… but I do so much more rarely. I used to create a Python package out of a script as soon as it required third-party dependencies. Now my “do I really need to turn this into a proper package” bar is set much higher.
Talking Drupal: Talking Drupal #479 - Drupal CMS Media Management
Today we are talking about Drupal CMS Media Management, How media management has evolved, and Why managing our media is so important with our guest Tony Barker. We’ll also cover URL Embed as our module of the week.
For show notes visit: https://www.talkingDrupal.com/479
Topics
- What do we mean by media management in Drupal CMS
- How is it different from media in Drupal today
- Why is media management important
- How are you applying these changes to Drupal
- What phase are you in
- Will this be ready for Drupal CMS release in January
- What types of advanced media will be supported
- Do you see it growing to replace some DAMs
- Are there future goals
- How did you get involved
- How can people get involved
- Track 15 Proposal for Media Management
- Issue to publish research on other CMS and the questionnaire results
- Vision for media management https://www.drupal.org/project/drupal_cms/issues/3488393
- Contributed module file upload field for media https://www.drupal.org/project/media_widget and these related modules
- Slack: #starshot-media-management and #starshot
- Drupal Core strategy for 2025-2028
Tony Barker - annertech.com tonypaulbarker
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Suzanne Dergacheva - evolvingweb.com pixelite
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted a simple way to insert oEmbed content on your Drupal site? There’s a module for that.
- Module name/project name: URL Embed
- Brief history
- How old: created in Sep 2014 by the venerable Dave Reid, though recent releases are by Mark Fullmer of the University of Texas at Austin
- Versions available: 2.0.0-alpha3 and 3.0.0-beta1, the latter of which works with Drupal 10.1 or 11. That said, it does declare a dependency on the Embed project, which unfortunately doesn’t yet have a Drupal 11-ready release
- Maintainership
- Actively maintained
- Security coverage technically, but needs a stable release
- Test coverage
- Documentation guide
- Number of open issues: 63 open issues, 4 of which are bugs against the current branch
- Usage stats:
- 7,088 sites
- Module features and usage
- A content creator using this module only needs to provide a URL to the content they want to embed, as the name suggests
- The module provides both a CKEditor plugin and a formatter for link fields. Note that you will also need to enable a provided filter plugin for any text formats where you want users to use the CKEditor button
- Probably the critical distinction between how this module works and other elements of the media system is that this bypasses the media library, and as such is better suited to “one off” uses of remote content like videos, social media posts, and more
- It’s also worth mentioning that the module provides a hook to modify the parameters that will be passed to the oEmbed host, for example to set the number of posts to return from Twitter
- I could definitely see this as a valuable addition to the Event Platform that we’ve talked about previously on the podcast, but the lack of a Drupal 11-ready release for the Embed module is an obvious concern. So, if any of our listeners want to take that on, it would be a valuable contribution to the community