Feeds
Mike Driscoll: Checking Python Code with GitHub Actions
When you are working on your personal or work projects in Python, you usually want to have a way to enforce code standards. You can use tools like Flake8, PyLint or Ruff to lint your code. You might use Mypy to verify type checking. There are lots of other tools at your disposal. But it can be hard to remember to do that every time you want to create a pull request (PR) in GitHub or GitLab.
That is where continuous integration (CI) tools come in. GitHub has something called GitHub Actions, which allows you to run tools or entire workflows on certain types of events.
In this article, you will learn how to create a GitHub Action that runs Ruff on a PR. You can learn more about Ruff in my introductory article.
Creating a Workflow

In the root folder of your code, you will need to create a folder called .github/workflows. Note the period at the beginning and the fact that you need a subfolder named workflows too. Inside of that, you will create a YAML file.
Since you are going to run Ruff to lint and format your Python files, you can call this YAML file ruff.yml. Put the following in your new file:
name: Ruff
on: [workflow_dispatch, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.13"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ruff
      # Include `--output-format=github` to enable automatic inline annotations.
      - name: Run Ruff
        run: ruff check --output-format=github .
        continue-on-error: false
      - name: Run Ruff format
        run: ruff format --check .
        continue-on-error: false

Note: This example comes from my textual-cogs repo.
Let’s talk about what this does. This line is pretty important:
on: [workflow_dispatch, pull_request]

It tells GitHub to run this workflow when there's a pull request and also on "workflow_dispatch". That second option lets you go to the Actions tab in your GitHub repo, select the workflow, and run it manually. If you do not include it, you cannot run the workflow manually at all. This is useful for testing purposes, but you can remove it if you do not need it.
The next part tells GitHub to run the build on the latest Ubuntu Linux runner:
jobs:
  build:
    runs-on: ubuntu-latest

If you have a GitHub subscription, you can also get Windows as a runner, which means that you can run these actions on Windows in addition to Linux.
The steps section is the meat of this particular workflow:
steps:
  - uses: actions/checkout@v4
  - name: Install Python
    uses: actions/setup-python@v4
    with:
      python-version: "3.13"
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install ruff
  # Include `--output-format=github` to enable automatic inline annotations.
  - name: Run Ruff
    run: ruff check --output-format=github .
    continue-on-error: false
  - name: Run Ruff format
    run: ruff format --check .
    continue-on-error: false

Here it uses the built-in checkout@v4 action, which comes with GitHub. There are many other actions you can use to enhance your workflows and add new functionality. Next, you get set up with Python 3.13, and then you install your dependencies.
Once your dependencies are installed, you can run your tools. In this case, you are installing and running Ruff. For every PR, you run ruff check --output-format=github ., which will do all sorts of linting on your Python code. If any errors are found, it will add inline comments with the error, which is what that --output-format flag is for.
You also have a separate step that runs ruff format --check, which verifies that your code matches Ruff's formatting style (for the most part, the same style as the Black formatter) without rewriting any files.
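To see what these two steps catch, here is a small, hypothetical Python file that would fail both: ruff check flags the unused import (rule F401 under Ruff's default rule set), and ruff format --check reports the non-Black spacing. The file itself still runs fine, which is exactly why CI checks like these are useful.

```python
import os  # unused import: `ruff check` flags this as F401 under default rules


def greet( name ):  # extra spaces inside the parentheses: `ruff format --check` reports this file
    return "Hello, "+name  # missing spaces around "+" is another formatting nit Black-style tools fix


print(greet("Ada"))
```

Running the two Ruff commands locally before pushing gives you the same feedback the workflow would add as inline PR annotations.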
Wrapping Up

You can add lots of other tools to your workflows too. For example, you might add a Mypy workflow, or some test workflows to run on PRs or perhaps before merging to your main branch. You might even want to check your code complexity before allowing a merge!
With a little work, you will soon be able to use GitHub Actions to keep your code cleaner and make it more uniform too. Adding automated testing is even better! Give it a try and let me know what you think.
The post Checking Python Code with GitHub Actions appeared first on Mouse Vs Python.
Specbee: Why every Drupal developer needs to know about Services and Dependency Injection
Standards and the presumption of conformity
If you have been following the progress of the Cyber Resilience Act (CRA), you may have been intrigued to hear that the next step following publication of the Act as law in the Official Journal is the issue of a European Standards Request (ESR) to the three official European Standards Bodies (ESBs). What is that about? Well, a law like the CRA is extremely long and complex and conforming to it will involve a detailed analysis and a lot of legal advice.
Rather than forcing everyone individually to do that, the ESBs are instead sent a list of subjects that need proving and are asked to recommend a set of standards that, if observed, will demonstrate conformity with the law. This greatly simplifies things for everyone and leads to what the lawmakers call a “presumption of conformity.” You could go comply with the law based on your own research, but realistically that’s impossible for almost everyone so you will instead choose to observe the harmonized standards supplied by the ESBs.
This change of purpose for standards is very significant. They have evolved from merely being a vehicle to promote interoperability in a uniform market – an optional tool for private companies that improves their product for their consumers – to being a vehicle to prove legal compliance – a mandatory responsibility for all citizens and thus a public responsibility. This new role creates new challenges as the standards system was not originally designed with legal conformance in mind. Indeed, we are frequently reminded that standardization is a matter for the private sector.
So for example, the three ESBs (ETSI, CENELEC and CEN) all have “IPR rules” that permit the private parties who work within them to embed in the standards steps that are patented by those private companies. This arrangement is permitted by the European law that created the mechanism, Regulation 1025/2012 (in Annex II §4c). All three ESBs expressly tolerate this behaviour as long as the patents are then licensed to implementers of the standards on “Fair, Reasonable and Non Discriminatory” (FRAND) terms. None of those words is particularly well defined, and the consequence is that to implement the standards that emerge from the ESBs you may well need to retain counsel to understand your patent obligations and enable you to enter into a relationship with Europe’s largest commercial entities to negotiate a license to those patents.
Setting aside the obvious problems this creates for Open Source software (where the need for such relationships broadly inhibits implementation), it is also a highly questionable challenge to our democracy. At the foundation of our fundamental rights is the absolute requirement that first, every citizen may know the law that governs them and secondly every citizen is freely able to comply if they choose. The Public.Resource.Org case shows us this principle also extends to standards that are expressly or effectively necessary for compliance with a given law.
But when these standards are allowed to have patents intentionally embodied within them by private actors for their own profit, citizens find themselves unable to practically conform to the law without specialist support and a necessary private relationship with the patent holders. While some may have considered this to be a tolerable compromise when the goal of standards was merely interoperability, it is clearly an abridgment of fundamental rights to condition compliance with the law on identifying and negotiating a private licensing arrangement for patents, especially those embedded intentionally in standards.
Just as Regulation 1025/2012 will need updating to reflect the court ruling on availability of standards, so too should it be updated to require that harmonized standards will only be accepted from the ESBs if they are supplied on FRAND terms where all restrictions on use are waived by the contributors. Without this change, standards will serve only the benefit of dominant actors and not the public.
Emmanuel Kasper: Too many open files in Minikube Pod
Right now I am playing with minikube to run a three-node, highly available Kubernetes control plane. I am using the docker driver of minikube, so each Kubernetes node component runs inside a docker container instead of a full-blown VM.
In my experience this works better than Kind, since with Kind you cannot correctly restart a cluster deployed in highly available mode.
This is the topology of the cluster:
$ minikube node list
minikube        192.168.49.2
minikube-m02    192.168.49.3
minikube-m03    192.168.49.4

Each Kubernetes node is actually a docker container:
$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS         PORTS                                                                                                                                  NAMES
977046487e5e   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32812->22/tcp, 127.0.0.1:32811->2376/tcp, 127.0.0.1:32810->5000/tcp, 127.0.0.1:32809->8443/tcp, 127.0.0.1:32808->32443/tcp   minikube-m03
8be3549f0c4c   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32807->22/tcp, 127.0.0.1:32806->2376/tcp, 127.0.0.1:32805->5000/tcp, 127.0.0.1:32804->8443/tcp, 127.0.0.1:32803->32443/tcp   minikube-m02
4b39f1c47c23   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32802->22/tcp, 127.0.0.1:32801->2376/tcp, 127.0.0.1:32800->5000/tcp, 127.0.0.1:32799->8443/tcp, 127.0.0.1:32798->32443/tcp   minikube

The full list of pods running in the cluster looks like this:
$ minikube kubectl -- get pods --all-namespaces
kube-system   coredns-6f6b679f8f-85n9l   0/1   Running   1 (44s ago)   3m32s
kube-system   coredns-6f6b679f8f-pnhxv   0/1   Running   1 (44s ago)   3m32s
kube-system   etcd-minikube              1/1   Running   1 (44s ago)   3m37s
kube-system   etcd-minikube-m02          1/1   Running   1 (42s ago)   3m21s
...
kube-system   kube-proxy-84gm6           0/1   Error     1 (31s ago)   3m4s

One container is in Error status, so let us check its logs:
$ minikube kubectl -- logs -n kube-system kube-proxy-84gm6
E1210 11:50:42.117036       1 run.go:72] "command failed" err="failed complete: too many open files"

This can be fixed by increasing the number of inotify watchers:
# cat > /etc/sysctl.d/minikube.conf <<EOF
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
# sysctl --system
...
* Applying /etc/sysctl.d/minikube.conf ...
* Applying /etc/sysctl.conf ...
...
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512

Restart the cluster:
$ minikube stop
$ minikube start
$ minikube kubectl -- get pods --all-namespaces | grep Error
$

PyCharm: The State of Python 2024
This is a guest post from Michael Kennedy, the founder of Talk Python and a PSF Fellow.
Hey there. I’m Michael Kennedy, the founder of Talk Python and a Python Software Foundation (PSF) Fellow. I’m thrilled that the PyCharm team invited me to share my thoughts on the state of Python in 2024. If you haven’t heard of Talk Python, it’s a podcast and course platform that has been exploring the Python space for almost 10 years now.
In this blog post, I want to bring that experience to analyze the results of the Python Developers Survey 2023 of 25,000 Python developers, run by PSF and JetBrains, alongside other key industry milestones of the past year. While the name implies it’s just 2023, votes were actually collected a few months into 2024, so this is indeed the latest PSF community survey.
You should know that, at my core, I’m a web and API developer. So we’ll be focusing on the emerging trends for web developers in the Python space. Read on to discover the top Python trends, and I hope you can use these insights as a powerful guide for your professional programming journey in the year ahead.
Python continues its growth in popularity

Broadly speaking, Python is in a great space at the moment. A couple of years ago, Python became the most popular language on Stack Overflow. Then, Python jumped to the number one language on the TIOBE index. Presently, Python is twice as popular as the second most popular language on the index (C++)! Most significantly, in November 2024, GitHub announced that Python is now the most used language on GitHub.
The folks at GitHub really put their finger on a key reason for this popularity:
[Python’s] continued growth over the past few years – alongside that of Jupyter Notebooks – may suggest that [Python developers’] activity on GitHub is going beyond traditional software development.

Indeed. It is the wide and varied usage of Python that is helping pull folks into the community. And Python’s massive library of over 500,000 open-source packages on PyPI keeps them coming back.
Some key Python trends of 2024

Let’s dive into some of the most important trends. We’re going to lean heavily on the Python Developers Survey results.
As you explore these insights and apply them to your projects, having the right tools can make all the difference. Try PyCharm for free and stay equipped with everything you need for data science and web development.
Trend 1: Python usage with other languages drops as general adoption grows

There’s a saying in web development that you never learn just one language. For example, if you only know Python and want to get started with Flask, you still have a lot to learn. You’ll need to use HTML (via Jinja templates most likely) and you’ll need CSS. Then if you want to use a database, as most sites do, you’ll need to know some query language like SQL, and so on.
Our first trend has to do with the combination of such languages for Python developers.
Does it surprise you that Python is used with JavaScript and HTML? Of course not! But look at the year-over-year trends. We have JavaScript going from 40% to 37% and now at 35%. The trends for HTML, CSS, and SQL all follow a very similar curve.
Does that mean people are not using HTML as much? Probably not. Remember GitHub’s quote above: The massive growth of Python is partially due to twice as many non-web developers moving to Python. So HTML’s stats aren’t going down, per se. It’s just that the other areas are growing that much faster.
One other area I’d like to touch on in this section is Rust. We’ve seen the incredible success of libraries moving to Rust such as Pydantic and uv. Many of the blog posts and conference talks are abuzz with such tools. So it may feel like Rust is rushing to overtake Python. But in this survey, we see Rust barely changing YoY. Another data point is the TIOBE index I mentioned at the top. There, Rust only grew by 1.5%, whereas Python grew by 7%. We may love our new Rust tools in Python, but we love writing Python even more.
Trend 2: 41% of Python developers have under 2 years of experience

It’s easy to assume that beginners make up a relatively small portion of this developer community. But when your community is experiencing rapid growth in significant segments, that might not be true. And this is indeed what the survey suggests. A whopping 41% of Python developers have been using Python for less than 2 years.
This has a major impact on many things. If you’re writing a tutorial or documentation for an open-source project, you may be inclined to write instructions without much context such as “Next, just create a virtual environment here…” It’s also easy to assume that everyone knows what a SQL injection attack is and how to avoid it when talking to Postgres from Python.
Realizing that almost half of the people you’re talking to, or building for, are pretty much brand new to Python or have come to Python from areas outside of traditional programming roles, you can’t take for granted that they will understand such things.
Keeping this important data point in mind allows us to build an even stronger community that fosters people joining from multiple backgrounds. They’ll have a better initial experience and we’ll all be better for it.
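To make the SQL injection point above concrete, here is a minimal sketch using the standard library's sqlite3 module (the post mentions Postgres; the principle is identical there, though Postgres drivers use their own parameter placeholders such as %s):

```python
import sqlite3

# In-memory database with a small illustrative users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# UNSAFE: string interpolation lets the payload rewrite the query itself.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: a parameterized query treats the payload as a plain literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # the injected OR clause matches every row: [('alice',), ('bob',)]
print(safe)    # no user is literally named "alice' OR '1'='1": []
```

This is exactly the kind of thing a two-years-or-less Pythonista may never have seen spelled out, and a one-line example in a tutorial can save them from shipping the unsafe version.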
Trend 3: Python learning expands through diverse channels

How do you stay on top of a rapidly changing technology such as Python and its broader ecosystem? We truly live in an age of riches when it comes to staying connected and getting news on specialized topics such as Python.
Respondents to the survey indicated several news channels, including YouTube channels, podcasts, blogs, and events. They shared popular and loved sources in each area. For YouTube, people frequently turn to ArjanCodes and mCoding. Listeners of podcasts recommend my own Talk Python To Me as their top resource, as well as the extremely popular Lex Fridman Podcast. While there are many high-quality Python blogs out there, people put Dan Bader’s Real Python and Michael Herman’s Test Driven sites at the top. Finally, respondents said they got massive value from the popular Python conferences such as PyCon, EuroPython, and others.
If you’re currently learning Python, one more resource I’ll toss in is one you can find right here on the JetBrains website at jetbrains.com/guide/python. You’ll find many quick videos teaching many different frameworks. A bunch of them are done by my friend Paul Everitt, who is an excellent teacher.
Trend 4: The Python 2 vs. 3 divide is in the distant past

For a long time, the Python community had a tough transition on its hands. Python 3 was released on December 3, 2008. Yet, even in the mid-2010s, there was an ongoing debate over whether moving to Python 3 was a good idea. Looking back, it was an incredibly wasteful period, and it went on for years.
In fact, Guido van Rossum, the creator of Python, had to make a proclamation that there would be no more updates to Python 2 ever (beyond security patches) as a way to move people to Python 3. Here is Guido at PyCon 2014:
If you look at the Python 2 vs. 3 usage now, it’s clear this weird time for Python is in the past. Have you noticed how fast progress has been on new features and faster Python since we all moved to Python 3?
Today, it’s a bit more relevant to consider which Python 3 versions are in play for most users.
Python 3.13 came out in October 2024 after these results were concluded. So at the time of polling, 3.12 was the very latest version.
You can see that over 70% of people use one of the 3 latest releases and the majority of them are 1 release behind the latest. If you want my advice, it’s very safe to use the latest major version after a point release (e.g. 3.13.1). Talk Python and its many related websites and APIs have already been running 3.13.0 for a couple of weeks and there have been zero issues. But our apps don’t power airplanes, nuclear power stations or health-critical systems, so we can get away with a little less stability.
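The "wait for a point release" rule of thumb above can be expressed as a tiny, hypothetical helper that works against sys.version_info or any (major, minor, micro) tuple:

```python
import sys


def stable_enough(version, min_micro=1):
    """Return True once a Python X.Y line has had at least one point release.

    This encodes the rule of thumb from the text: adopt the latest version
    only after its first bug-fix release (e.g. 3.13.1, not 3.13.0).
    """
    major, minor, micro = version[:3]
    return micro >= min_micro


print(stable_enough((3, 13, 0)))        # False: a brand-new .0 release
print(stable_enough((3, 13, 1)))        # True: the first point release is out
print(stable_enough(sys.version_info))  # check the interpreter you are running
```

Obviously this is just the heuristic from the paragraph above in code form; risk tolerance varies, as the airplane and nuclear power station caveat suggests.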
Trend 5: Flask, Django, and FastAPI remain top Python web frameworks

This trend is endlessly fascinating. Here’s the situation: you’re starting a new web project, and you know you’re doing it in Python. You might wonder: is Django still relevant? Do people still use Flask? After all, some new frameworks have landed recently that have gained a lot of market share. For example, FastAPI has grown very quickly as of late, and its praise is well deserved.
However, the answer is yes on both accounts – people do still use Flask and they still love Django. Let’s look at the numbers.
There are three frameworks whose usage goes well beyond any others: Flask, Django, and FastAPI. They are nearly tied in usage.
Flask in particular seems to have had a big resurgence lately. I interviewed Flask’s maintainer David Lord over on Talk Python about the State of Flask and Pallets in 2024 if you’re interested in fully diving into Flask with David.
While these big three frameworks may seem nearly tied, digging deeper reveals that they are all popular for quite different reasons and are used differently across disciplines.
Notice that 63% of web developers use Django compared to 42% using Flask. On the other hand, data scientists prefer Flask and FastAPI each over Django (36% and 31% vs 26%). Web developers use Django 2.4x as often as data scientists. Surely this is a reflection of what types of web projects data scientists typically create compared to web devs.
If you’d like to dive deeper into the Django results, check out Valeria Letusheva’s The State of Django 2024.
Trend 6: Most Python web apps run on hyperscale clouds

When considering where to host and deploy your web app or API, there are three hyperscale clouds and then a wide variety of other options. Most people indicated they chose one of the large cloud providers: AWS, Azure, or GCP.
For every one of the large clouds, we see YoY growth and 78% of the respondents are using at least one of the large cloud providers. That is somewhat surprising. Looking below that group, we see even more interesting trends.
Number 4 is PythonAnywhere, a seriously no-fuss provider for simple hosting of Python web apps. It allows you to set up Python web apps and other projects with very minimal Linux experience! Interesting fact: Talk Python launched on PythonAnywhere back in 2015 before moving to Digital Ocean in 2016.
Two other things jump out here. First, Heroku has long been a darling of Python developers. It saw a big drop YoY from 13% to 7%. While this might look very dramatic, it’s likely due to the fact that Heroku canceled their very popular free tier at the end of 2022. Secondly, Hetzner, a popular German hosting company that offers ridiculously affordable IaaS servers, makes the list for the first time. This is likely due to Hetzner opening their first data centers in the USA last year.
Trend 7: Containers over VMs over hardware

As web apps increase in complexity and container technology continues to improve, we see most cloud developers publish their apps within containers (bare Docker, Kubernetes, and the like).
Close behind that, we see IaaS continuing to shine with many developers choosing virtual machines (VMs).
Trend 8: uv takes Python packaging by storm

This final trend did not make the survey, as uv was released to the public after the questions were finalized, but it’s too important to omit.
Astral, the company behind Ruff, released a new package manager based on Rust that has immediately taken the Python community by storm. It’s fast – really fast:
It bundles many of the features of multiple, popular tools into one. If you’re using pip, pip-tools, venv, virtualenv, poetry, or others, you’ll want to give uv a look.
Parting thoughts

I hope you found this deep look inside the Python space insightful. These findings are interesting, but they are also a powerful guide for your next year of professional programming decisions.
I encourage you to add yourself to the mailing list for the survey to get notified when the next one goes live and when the next set of results are published. Visit the survey results page and then go all the way to the bottom and enter your email address where it says “participate in future surveys.”
Finally, if you’ve enjoyed this deep dive into Python 2024, you might want to check out the video of my keynote from PyCon Philippines 2024.
Start developing with PyCharm

PyCharm provides everything you need for productive data science and web development right out of the box, from web development and databases to Jupyter notebooks and interactive tables for data projects – all in one powerful IDE.
Try PyCharm for free

About the author

Michael Kennedy

Michael is the founder of Talk Python and a PSF Fellow. Talk Python is a podcast and course platform that has been exploring the Python ecosystem for nearly 10 years. At his core, Michael is a web and API developer.
LN Webworks: 5 Reasons Drupal Is the Best Choice For E-commerce Development
Companies are using the newest technologies to improve their e-commerce websites' customer service and outreach. With an e-commerce website, you can offer better features to your audience that enhance visitors' overall experience on the site.
And what better CMS choice than Drupal for your large-scale websites? Drupal development services offer fresh features for e-commerce businesses that help their customers have a better experience shopping online. On top of that, there is also a Drupal e-commerce module that helps you engage with the audience that visits your website and convert them.
In this blog, you will learn more about why you should use Drupal For your e-commerce websites and how it can be the best decision for your business.
Without further ado, let's get started.
Made With Mu: Announcement: The Sun is setting on Mu in 2025
This decision has been coming for a while.
In summary: We (the core development team) have decided together to retire Mu. This blog post explains what happens next, what you can expect from us, and what the end result should look like. As always, we invite constructive feedback, ideas and critique.
In more detail: When Mu was first created, there was a need for something Mu-ish and we were able to quickly build what teachers, beginners and those who support them asked for. Almost ten years later, there are lots of well funded and supported tools that fulfil the reason Mu was created. Furthermore, we (the core team) are volunteers and our lives have moved on to new jobs, new interests and new communities. However, we all feel very fond of Mu, and proud of what we have achieved, so we want to honour the community that has coalesced around Mu and ensure the retirement is smooth, predictable and doesn’t leave folks surprised or uncertain.
Currently, here’s what’s going to happen:
- Sometime in the new year we will release a new version of Mu, based on the current state of the master branch with a few bug fixes. This will be a “legacy” release of Mu in that we deliberately pin our versions of libraries to older versions so the many folks using Mu on equipment that is old and/or out-of-date can still continue to use it. (Aside: many educational institutions and other places that use Mu have limited budgets and use computing equipment that is not able, for whatever reason, to use the latest and greatest version of the libraries upon which we depend.)
- Soon after, we will release the final version of Mu. To the best of our ability and availability of effort, this will be updated to use the latest versions of all the libraries upon which we depend. This is to ensure the final release of Mu is usable for perhaps a few more years before starting to look out of date.
- After this final release we will put all the repositories in the Mu organisation on GitHub into an archived and read-only mode.
- The Mu related websites (codewith.mu and madewith.mu) will be updated to point to the latest releases, and reflect the retired status of Mu. After one year, these domain names will not be renewed.
- Before releasing Mu in steps 1 and 2, we will ensure that any links used by Mu point to the site as archived by the Wayback Machine at archive.org - thus ensuring things like help will continue to work into the future.
- At each point in the process described above, we will publish a post on this blog so folks can follow our progress.
- Finally, I (Nicholas) in collaboration with the other core developers (Carlos, Tiago, Tim and Vasco), will write a retrospective blog post on my personal blog to reflect on the journey that we’ve been on with Mu: the good, the bad and the ugly (it has been quite a ride, and we want to end on a high note!).
We expect this process to be finished by the end of the first quarter of 2025.
As is always the case with Mu, we welcome constructive feedback, ideas and critique. Should we receive feedback, we’ll try to address it and, when appropriate, integrate it. Feedback should be made via our discussions page on GitHub.
Nicholas, Carlos, Tiago, Tim and Vasco.
LostCarPark Drupal Blog: Drupal Advent Calendar day 10 - Privacy
Welcome to the tenth door of the Drupal Advent Calendar. Today we hand over to Jürgen Haas to tell us about the Privacy track of the Starshot project.
Right after DrupalCon Portland, it was clear to us at LakeDrops that we would support the Starshot initiative wherever we could. And when the tracks had been announced, I applied for the privacy track, not only because it was still open but also because that topic is so close to my heart. In my view, the internet will only remain a benefit to us, the users, if it respects our privacy, a human right in more and more countries of the world…
Russ Allbery: 2024 book haul
I haven't made one of these posts since... last year? Good lord. I've therefore already read and reviewed a lot of these books.
Kemi Ashing-Giwa — The Splinter in the Sky (sff)
Moniquill Blackgoose — To Shape a Dragon's Breath (sff)
Ashley Herring Blake — Delilah Green Doesn't Care (romance)
Ashley Herring Blake — Astrid Parker Doesn't Fail (romance)
Ashley Herring Blake — Iris Kelly Doesn't Date (romance)
Molly J. Bragg — Scatter (sff)
Sarah Rees Brennan — Long Live Evil (sff)
Michelle Browne — And the Stars Will Sing (sff)
Steven Brust — Lyorn (sff)
Miles Cameron — Beyond the Fringe (sff)
Miles Cameron — Deep Black (sff)
Haley Cass — Those Who Wait (romance)
Sylvie Cathrall — A Letter to the Luminous Deep (sff)
Ta-Nehisi Coates — The Message (non-fiction)
Julie E. Czerneda — To Each This World (sff)
Brigid Delaney — Reasons Not to Worry (non-fiction)
Mar Delaney — Moose Madness (sff)
Jerusalem Demsas — On the Housing Crisis (non-fiction)
Michelle Diener — Dark Horse (sff)
Michelle Diener — Dark Deeds (sff)
Michelle Diener — Dark Minds (sff)
Michelle Diener — Dark Matters (sff)
Elaine Gallagher — Unexploded Remnants (sff)
Bethany Jacobs — These Burning Stars (sff)
Bethany Jacobs — On Vicious Worlds (sff)
Micaiah Johnson — Those Beyond the Wall (sff)
T. Kingfisher — Paladin's Faith (sff)
T.J. Klune — Somewhere Beyond the Sea (sff)
Mark Lawrence — The Book That Wouldn't Burn (sff)
Mark Lawrence — The Book That Broke the World (sff)
Mark Lawrence — Overdue (sff)
Mark Lawrence — Returns (sff collection)
Malinda Lo — Last Night at the Telegraph Club (historical)
Jessie Mihalik — Hunt the Stars (sff)
Samantha Mills — The Wings Upon Her Back (sff)
Lyda Morehouse — Welcome to Boy.net (sff)
Cal Newport — Slow Productivity (non-fiction)
Naomi Novik — Buried Deep and Other Stories (sff collection)
Claire O'Dell — The Hound of Justice (sff)
Keanu Reeves & China Miéville — The Book of Elsewhere (sff)
Kit Rocha — Beyond Temptation (sff)
Kit Rocha — Beyond Jealousy (sff)
Kit Rocha — Beyond Solitude (sff)
Kit Rocha — Beyond Addiction (sff)
Kit Rocha — Beyond Possession (sff)
Kit Rocha — Beyond Innocence (sff)
Kit Rocha — Beyond Ruin (sff)
Kit Rocha — Beyond Ecstasy (sff)
Kit Rocha — Beyond Surrender (sff)
Kit Rocha — Consort of Fire (sff)
Geoff Ryman — HIM (sff)
Melissa Scott — Finders (sff)
Rob Wilkins — Terry Pratchett: A Life with Footnotes (non-fiction)
Gabrielle Zevin — Tomorrow, and Tomorrow, and Tomorrow
(mainstream)
That's a lot of books, although I think I've already read maybe a third of them? Which is better than I usually do.
Gunnar Wolf: Some tips for those who still administer Drupal7-based sites
My main day-to-day responsibility in my workplace is, and has been for 20 years, to take care of the network infrastructure for UNAM’s Economics Research Institute. One of the most visible parts of this responsibility is to ensure we have a working Web presence, and that it caters for the needs of our academic community.
I joined the Institute in January 2005. Back then, our designer pushed static versions of our webpage, completely built on her computer. This was standard practice at the time, and lasted through some redesigns, but I soon started advocating for the adoption of a Content Management System. After evaluating some alternatives, I recommended adopting Drupal. It took us quite a while to make the change: even though I clearly recall starting work toward adopting it as early as 2006, according to the Internet Archive, we switched to a Drupal-backed site around June 2010. We started using it somewhere in version 6’s lifecycle.
As for my Debian work, by late 2012 I started getting involved in the maintenance of the drupal7 package, and by April 2013 I became its primary maintainer. I kept the drupal7 package up to date in Debian until ≈2018; the supported build methods for Drupal 8 are not compatible with Debian (mainly, bundling third-party libraries and updating them without coordination with the rest of the ecosystem), so towards the end of 2016, I announced I would not package Drupal 8 for Debian.
By March 2016, we migrated our main page to Drupal 7. By then, we already had several other sites for our academics’ projects, but my narrative follows our main Web site. I did manage to migrate several Drupal 6 (D6) sites to Drupal 7 (D7); it was quite an involved process, never transparent to the user, and we did get the backlash of long downtimes (or partial downtimes, with sites only half-available) from many of our users. For our main site, we took the opportunity to do a complete redesign and deployed a fully new site.
You might note that March 2016 is after the release of D8 (November 2015). I don’t recall many of the specifics for this decision, but if I’m not mistaken, building the new site was a several-months-long process — not only for the technical work of setting it up, but for the legwork of getting all of the needed information from the different areas that needed to be represented in the Institute. Not only that: Drupal sites often include tens of contributed themes and modules; the technological shift the project underwent between its 7 and 8 releases was too deep, and modules took a long time to become available for the new release (if they were ported at all — many themes and modules were outright dumped).
Naturally, the Drupal Foundation wanted to evolve and deprecate the old codebase. But the pain of migrating from D7 to D8 is too big, and many sites have remained on version 7 — eight years after D8’s release, almost 40% of Drupal installs are for version 7, and a similar proportion runs a currently-supported release (10 or 11). And while the Drupal Foundation did a great job at providing very-long-term support for D7, I understand the burden is becoming too much, so close to a year ago (and after pushing back the D7 end-of-life date several times), they finally announced support will finish this upcoming January 5.
Drupal 7 must go!I found the following usage graphs quite interesting: the usage statistics for all Drupal versions follow a very positive slope, peaking around 2014 during the best years of D7, and somewhat stagnating afterwards, staying since 2015 at the 25000–28000 sites mark (I’m very tempted to copy the graphs, but builtwith’s terms of use are very clear in not allowing it). There is a sharp drop in the last year, which I attribute to the people that are leaving D7 for other technologies after its end-of-life announcement. This becomes clearer looking only at D7’s usage statistics: D7 peaks at ≈15000 installs in 2016, stays there for close to 5 years, and has a sharp drop to under 7500 sites in the span of one year.
D8 has a more “regular” rise, peak, and fall, peaking at ~8500 sites between 2020 and 2021, and down to close to 2500 for some months already; D9 had a very brief peak of almost 9000 sites in 2023 and is now close to half of that. Currently, the Drupal king appears to be D10, still on a positive slope and with over 9000 sites. Drupal 11 is still just a blip on builtwith’s radar, with… 3 registered sites as of September 2024 :-Þ
After writing this last paragraph, I came across the statistics found on the Drupal webpage; the methodology for acquiring its data is completely different: while builtwith’s methodology is their trade secret, you can read more about how Drupal’s data is gathered (and agree or disagree with it 😉), but at least you have a page detailing 12 years so far of reported data, producing the following graph (which can be shared under the CC BY-SA license 😃):
This graph is disaggregated into minor versions, and I don’t want to come up with yet another graph for it 😉 but it supports (most of) the narrative I presented above… although I do miss the recent drop builtwith reported in D7’s numbers!
And what about Backdrop?During the D8 release cycle, a group of Drupal developers were not happy with the depth of the architectural changes that were being adopted, particularly the transition to the Symfony PHP component framework, and forked the D7 codebase to create the Backdrop CMS: a modern version of Drupal, without dropping the known and tested architecture it had. The Backdrop developers keep working closely with the Drupal community, and although its usage numbers are way smaller than Drupal’s, it seems to be sustainable and lively. Of course, as I presented their numbers in the previous section, you can see in builtwith that Backdrop’s numbers are way, way lower.
I have found it to be a very warm and welcoming community, eager to receive new members. And, thanks to its contributed D2B Migrate module, I found it is quite easy to migrate a live site from Drupal 7 to Backdrop.
Migration by playbook!So… Well, I’m an academic. And (if it’s not obvious to you after reading so far 😉), one of the things I must do in my job is to write. So I decided to write an article to invite my colleagues to consider Backdrop for their D7 sites in Cuadernos Técnicos Universitarios de la DGTIC, a young journal in our university for showcasing technical academic work. And now that my article has been accepted and published, I’m happy to share it with you — of course, if you can read Spanish 😉 But anyway…
Given I have several sites to migrate, and that I’m trying to get my colleagues to follow suit, I decided to automate the migration by writing an Ansible playbook to do the heavy lifting. Of course, the playbook’s users will probably need to tweak it a bit to their personal needs. I’m also far from an Ansible expert, so I’m sure there is ample room for improvement in my style.
But it works. Quite well, I must add.
But with this size of database…I did stumble across a big pebble, though. I am working on the migration of one of my users’ sites, and found that its database is… huge. I checked the mysqldump output, and it got me close to 3GB of data. And given that D2B_migrate is meant to work via a Web interface (my playbook works around it by using a client I wrote with Perl’s WWW::Mechanize), I repeatedly stumbled over PHP’s maximum POST size, maximum upload size, maximum memory size…
I asked for help in Backdrop’s Zulip chat site, and my attention was taken off fixing PHP to something more obvious: Why is the database so large? So I took a quick look at the database (or rather: my first look was at the database server’s filesystem usage). MariaDB stores each table as a separate file on disk, so I looked for the nine largest tables:
# ls -lhS|head
total 3.8G
-rw-rw---- 1 mysql mysql 2.4G Dec 10 12:09 accesslog.ibd
-rw-rw---- 1 mysql mysql 224M Dec  2 16:43 search_index.ibd
-rw-rw---- 1 mysql mysql 220M Dec 10 12:09 watchdog.ibd
-rw-rw---- 1 mysql mysql 148M Dec  6 14:45 cache_field.ibd
-rw-rw---- 1 mysql mysql  92M Dec  9 05:08 aggregator_item.ibd
-rw-rw---- 1 mysql mysql  80M Dec 10 12:15 cache_path.ibd
-rw-rw---- 1 mysql mysql  72M Dec  2 16:39 search_dataset.ibd
-rw-rw---- 1 mysql mysql  68M Dec  2 13:16 field_revision_field_idea_principal_articulo.ibd
-rw-rw---- 1 mysql mysql  60M Dec  9 13:19 cache_menu.ibd

A single table, the access log, is over 2.4GB long. Three of the following tables are cache tables. I can perfectly live without their data in our new site! But I don’t want to touch the slightest bit of this site until I’m satisfied with the migration process, so I found a way to exclude those tables in a non-destructive way: given D2B_migrate works with a mysqldump output, and given that the generated dump locks each table before restoring its contents and unlocks it after its job is done, I can just do the following:
$ perl -e '$output = 1; while (<>) { $output=0 if /^LOCK TABLES `(accesslog|search_index|watchdog|cache_field|cache_path)`/; $output=1 if /^UNLOCK TABLES/; print if $output}' < /tmp/d7_backup.sql > /tmp/d7_backup.eviscerated.sql; ls -hl /tmp/d7_backup.sql /tmp/d7_backup.eviscerated.sql
-rw-rw-r-- 1 gwolf gwolf 216M Dec 10 12:22 /tmp/d7_backup.eviscerated.sql
-rw------- 1 gwolf gwolf 2.1G Dec  6 18:14 /tmp/d7_backup.sql

Five seconds later, I’m done! The database is now a tenth of its size, and D2B_migrate is happy to take it. And I’m a big step closer to finishing my reliance on (this bit of) legacy code for my highly-visible sites 😃
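For readers who don’t speak Perl, the same non-destructive filter can be sketched in Python. This is purely illustrative (not part of the actual playbook); the table names are the same ones excluded in the one-liner above, and the logic mirrors it: stop emitting when a LOCK TABLES for an excluded table appears, and resume with (and including) the next UNLOCK TABLES.

```python
import re

# Tables whose row data we want to drop from the dump
# (the same set excluded in the perl one-liner above).
SKIP_TABLES = ("accesslog", "search_index", "watchdog", "cache_field", "cache_path")
LOCK_RE = re.compile(r"^LOCK TABLES `(%s)`" % "|".join(SKIP_TABLES))

def eviscerate(lines):
    """Yield dump lines, skipping everything between a LOCK TABLES
    statement for an excluded table and the matching UNLOCK TABLES."""
    emitting = True
    for line in lines:
        if LOCK_RE.match(line):
            emitting = False          # start skipping this table's data
        elif line.startswith("UNLOCK TABLES"):
            emitting = True           # keep the UNLOCK line itself
        if emitting:
            yield line

# Tiny demonstration on a fake two-table dump:
dump = [
    "LOCK TABLES `node` WRITE;",
    "INSERT INTO `node` VALUES (1);",
    "UNLOCK TABLES;",
    "LOCK TABLES `accesslog` WRITE;",
    "INSERT INTO `accesslog` VALUES (1);",
    "UNLOCK TABLES;",
]
kept = list(eviscerate(dump))  # the accesslog rows are gone, node survives
```

For a real 2GB dump you would stream the file (`eviscerate(open("d7_backup.sql"))`) rather than hold it in memory, just as the perl version reads from stdin.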
Trey Hunner: Lazy self-installing Python scripts with uv
I frequently find myself writing my own short command-line scripts in Python that help me with day-to-day tasks.
It’s so easy to throw together a single-file Python command-line script and throw it in my ~/bin directory!
Well… it’s easy, unless the script requires anything outside of the Python standard library.
Recently I’ve started using uv, and my primary use for it has been fixing Python’s “just manage the dependencies automatically” problem.
I’ll share how I’ve been using uv… but first let’s look at the problem.
A script without dependenciesIf I have a Python script that I want to be easily usable from anywhere on my system, I typically follow these steps:
- Add an appropriate shebang line above the first line in the file (e.g. #!/usr/bin/env python3)
- Set the executable bit on the file (chmod a+x my_script.py)
- Place the script in a directory that’s in my shell’s PATH variable (e.g. cp my_script.py ~/bin/my_script)
For example, here’s a script I use to print out 80 zeroes (or a specific number of zeroes) to check whether my terminal’s font size is large enough when I’m teaching:
#!/usr/bin/env python3
import sys

numbers = sys.argv[1:] or [80]

for n in numbers:
    print("0" * int(n))

This file lives at /home/trey/bin/0 so I can run the command 0 from my system prompt to see 80 0 characters printed in my terminal.
This works great! But this script doesn’t have any dependencies.
The problem: a script with dependenciesHere’s a Python script that normalizes the audio of a given video file and writes a new audio-normalized version of the video to a new file:
"""Normalize audio in input video file."""
from argparse import ArgumentParser
from pathlib import Path

from ffmpeg_normalize import FFmpegNormalize


def normalize_audio_for(video_path, audio_normalized_path):
    """Return audio-normalized video file saved in the given directory."""
    ffmpeg_normalize = FFmpegNormalize(
        audio_codec="aac",
        audio_bitrate="192k",
        target_level=-17,
    )
    ffmpeg_normalize.add_media_file(str(video_path), audio_normalized_path)
    ffmpeg_normalize.run_normalization()


def main():
    parser = ArgumentParser()
    parser.add_argument("video_file", type=Path)
    parser.add_argument("output_file", type=Path)
    args = parser.parse_args()
    normalize_audio_for(args.video_file, args.output_file)


if __name__ == "__main__":
    main()

This script depends on the ffmpeg-normalize Python package and the ffmpeg utility. I already have ffmpeg installed, but I prefer not to globally install Python packages. I install all Python packages within virtual environments and I install global Python scripts using pipx.
At this point I could choose to either:
- Create a virtual environment, install ffmpeg-normalize in it, and put a shebang line referencing that virtual environment’s Python binary at the top of my script file
- Turn my script into a pip-installable Python package with a pyproject.toml that lists ffmpeg-normalize as a dependency and use pipx to install it
That first solution requires me to keep track of virtual environments that exist for specific scripts to work. That sounds painful.
The second solution involves making a Python package and then upgrading that Python package whenever I need to make a change to this script. That’s definitely going to be painful.
The solution: let uv handle itA few months ago, my friend Jeff Triplett showed me that uv can work within a shebang line and can read a special comment at the top of a Python file that tells uv which Python version to run a script with and which dependencies it needs.
Here’s a shebang line that would work for the above script:
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "ffmpeg-normalize",
# ]
# ///

That tells uv that this script should be run on Python 3.12 and that it depends on the ffmpeg-normalize package.
Neat… but what does that do?
Well, the first time this script is run, uv will create a virtual environment for it, install ffmpeg-normalize into that venv, and then run the script:
$ normalize
Reading inline script metadata from `/home/trey/bin/normalize`
Installed 4 packages in 5ms
usage: normalize [-h] video_file output_file
normalize: error: the following arguments are required: video_file, output_file

Every time the script is run after that, uv finds and reuses the same virtual environment:
$ normalize
Reading inline script metadata from `/home/trey/bin/normalize`
usage: normalize [-h] video_file output_file
normalize: error: the following arguments are required: video_file, output_file

Each time uv runs the script, it quickly checks that all listed dependencies are properly installed with their correct versions.
Another script I use this for is caption, which uses whisper (via the OpenAI API) to quickly caption my screencasts just after I record and edit them. The caption quality very rarely needs more than a very minor edit or two (for my personal accent of English, at least), even for technical terms like “dunder method”, and via the API the captions generate very quickly.
uv everywhere?I haven’t yet fully embraced uv everywhere.
I don’t manage my Python projects with uv, though I do use it to create new virtual environments (with --seed to ensure the pip command is available) as a virtualenvwrapper replacement, along with direnv.
I have also started using uv tool as a pipx replacement and I’ve considered replacing pyenv with uv.
uv instead of pipxWhen I want to install a command-line tool that happens to be Python powered, I used to do this:
$ pipx install countdown-cli

Now I do this instead:
$ uv tool install countdown-cli

Either way, I end up with a countdown script in my PATH that automatically uses its own separate virtual environment for its dependencies:
$ countdown --help
Usage: countdown [OPTIONS] DURATION

  Countdown from the given duration to 0.

  DURATION should be a number followed by m or s for minutes or seconds.

  Examples of DURATION:

  - 5m (5 minutes)
  - 45s (45 seconds)
  - 2m30s (2 minutes and 30 seconds)

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

uv instead of pyenvFor years, I’ve used pyenv to manage multiple versions of Python on my machine.
$ pyenv install 3.13.0

Now I could do this:
$ uv python install --preview 3.13.0

Or I could make a ~/.config/uv/uv.toml file containing this:
preview = true

And then run the same thing without the --preview flag:
$ uv python install 3.13.0

This puts a python3.13 binary in my ~/.local/bin directory, which is on my PATH.
Why “preview”? Well, without it uv doesn’t (yet) place python3.13 in my PATH by default, as this feature is currently in testing/development.
Self-installing Python scripts are the big winI still prefer pyenv for its ability to install custom Python builds and I don’t have a preference between uv tool and pipx.
The biggest win that I’ve experienced from uv so far is the ability to run an executable script and have any necessary dependencies install automagically.
This doesn’t mean that I never make a Python package out of my Python scripts anymore… but I do so much more rarely. I used to create a Python package out of a script as soon as it required third-party dependencies. Now my “do I really need to turn this into a proper package” bar is set much higher.
Talking Drupal: Talking Drupal #479 - Drupal CMS Media Management
Today we are talking about Drupal CMS Media Management, how media management has evolved, and why managing our media is so important with our guest Tony Barker. We’ll also cover URL Embed as our module of the week.
For show notes visit: https://www.talkingDrupal.com/479
Topics- What do we mean by media management in Drupal CMS
- How is it different from media in Drupal today
- Why is media management important
- How are you applying these changes to Drupal
- What phase are you in
- Will this be ready for Drupal CMS release in January
- What types of advanced media will be supported
- Do you see it growing to replace some DAMs
- Are there future goals
- How did you get involved
- How can people get involved
- Track 15 Proposal for Media Management
- Issue to publish research on other CMS and the questionnaire results
- Vision for media management https://www.drupal.org/project/drupal_cms/issues/3488393
- Contributed module file upload field for media https://www.drupal.org/project/media_widget and these related modules
- Slack: #starshot-media-management and #starshot
- Drupal Core strategy for 2025-2028
Tony Barker - annertech.com tonypaulbarker
HostsNic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Suzanne Dergacheva - evolvingweb.com pixelite
MOTW CorrespondentMartin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted a simple way to insert oEmbed content on your Drupal site? There’s a module for that.
- Module name/project name:
- Brief history
- How old: created in Sep 2014 by the venerable Dave Reid, though recent releases are by Mark Fullmer of the University of Texas at Austin
- Versions available: 2.0.0-alpha3 and 3.0.0-beta1, the latter of which works with Drupal 10.1 or 11. That said, it does declare a dependency on the Embed project, which unfortunately doesn’t yet have a Drupal 11-ready release
- Maintainership
- Actively maintained
- Security coverage technically, but needs a stable release
- Test coverage
- Documentation guide
- Number of open issues: 63 open issues, 4 of which are bugs against the current branch
- Usage stats:
- 7,088 sites
- Module features and usage
- A content creator using this module only needs to provide a URL to the content they want to embed, as the name suggests
- The module provides both a CKEditor plugin and a formatter for link fields. Note that you will also need to enable a provided filter plugin for any text formats where you want users to use the CKEditor button
- Probably the critical distinction between how this module works and other elements of the media system is that this bypasses the media library, and as such is better suited to “one off” uses of remote content like videos, social media posts, and more
- It’s also worth mentioning that the module provides a hook to modify the parameters that will be passed to the oEmbed host, for example to set the number of posts to return from Twitter
- I could definitely see this as a valuable addition to the Event Platform that we’ve talked about previously on the podcast, but the lack of a Drupal 11-ready release for the Embed module is an obvious concern. So, if any of our listeners want to take that on, it would be a valuable contribution to the community
Thorsten Alteholz: My Debian Activities in November 2024
This was my hundred-twenty-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
- [DLA 3968-1] netatalk security update to fix four CVEs related to heap buffer overflow and writing arbitrary files. The patches have been prepared by the maintainer.
- [DLA 3976-1] tgt update to fix one CVE related to not using a proper seed for rand()
- [DLA 3977-1] xfpt update to fix one CVE related to a stack-based buffer overflow
- [DLA 3978-1] editorconfig-core update to fix two CVEs related to buffer overflows.
I also continued to work on a fix for glewlwyd, which is more difficult than expected. Besides I started to work on ffmpeg and haproxy.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian ELTSThis month was the seventy-sixth ELTS month. During my allocated time I uploaded or worked on:
- [ELA-1259-1]editorconfig-core security update for two CVEs in Buster to fix buffer overflows.
I also started to work on a fix for kmail-account-wizard. Unfortunately, preparing a testing environment takes some time and I did not finish testing this month. Besides, I started to work on ffmpeg and haproxy.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian PrintingUnfortunately I didn’t find any time to work on this topic.
Debian MatomoUnfortunately I didn’t find any time to work on this topic.
Debian AstroThis month I uploaded new packages or new upstream or bugfix versions of:
I also sponsored an upload of calceph.
Debian IoTThis month I uploaded new upstream or bugfix versions of:
Debian MobcomThis month I uploaded new packages or new upstream or bugfix versions of:
- … libosmo-cc (package prepared by Nathan)
- … libosmo-abis
This month I uploaded new upstream or bugfix versions of:
I also did some NMUs of opensta, kdrill, glosstex, irsim, pagetools, afnix, and cpm to fix some RC bugs.
FTP masterThis month I accepted 266 and rejected 16 packages. The overall number of packages that got accepted was 269.
Python Engineering at Microsoft: 2024 Python in VS Code Wrapped
As the year comes to a close, we would like to take time to reflect and celebrate the incredible progress the Python extension for VS Code has made in the past year. Inspired by Spotify Wrapped, we’ve compiled highlights from our year, showcasing top voted requests and countless lines of code written. Keep reading to get an inside look into all things @vscode-python and @vscode-python-debugger wrapped!
Pull Requests and issuesWe saw an impressive 604 pull requests merged in 2024, bringing in features such as Django tests, improved environment discovery, the Native REPL, and more! Moreover, our community helped close 894 issues, covering both bug fixes and feature requests.
We broke down our top addressed issues based on up-votes:
- Feature suggestion: Run Django unittests received 337 up-votes
- Select pyenv environment based on folder .python-version file received 135 up-votes
- Add “just my code” global setting received 34 up-votes
- Bring back Python 3.6 debugging support by breaking debugging out into its own extension received 34 up-votes
This year, we were thrilled to see contributions from 43 unique contributors: 22 internal contributors and 21 from the community!
We would like to give a special shout-out to our community contributors: @andybbruno, @aydar-kamaltdinov, @baszalmstra, @bersbersbers, @brokoli777, @covracer, @DavidArchibald, @DetachHead, @edgarrmondragon, @flying-sheep, @joar, @LouisGobert, @mnoah1, @nickwarters, @PopoDev, @renan-r-santos, @shanesaravia, @soda92, @T-256, @tomoki, @vishrutss.
The community’s support and collaboration have been invaluable, and we look forward to continuing to find ways to engage and collaborate with you all in 2025!
Codebase changesThis year, we made significant efforts to make our codebase more maintainable by rewriting what we could in Python and cleaning up old code as new code was introduced. As a result, 51,972 lines of code were added, and 48,629 lines of code were removed.
Tools ExtensionsAlthough our repository stats do not account for our tools extensions, we want to highlight the growth and progress in these repositories.
As part of our ongoing efforts to enhance the Python extension, we previously focused on separating linting and formatting tools into their own extensions. This year, we observed substantial growth across all of these extensions.
Black formatter saw an impressive growth of 89.45% and is our most installed tools extension, while autopep8 usage surged by 149.65%. Pylint experienced a notable increase of 70.71%, and flake8 grew by 88.63%. Our mypy type checker topped the download growth chart with an astounding growth of 236.18%.
We’re incredibly proud of what we’ve achieved together this year and can’t wait to continue merging exciting updates in 2025. Thank you for being a part of our community!
Engage with us on our GitHub repositories, @vscode-python and @vscode-python-debugger, and let us know what new features you would like to see next! You can stay up to date on latest releases by following us on X @pythonvscode and on our Python DevBlog.
The post 2024 Python in VS Code Wrapped appeared first on Python.
Drupal Association blog: Sachiko Muto: Empowering Open Source for the Future
We’re thrilled to introduce Sachiko Muto, one of the newest members elected to the Drupal Association Board in October. Sachiko is the Chair of OpenForum Europe and a senior researcher at RISE Research Institutes of Sweden. She originally joined OFE in 2007, serving for several years as Director, responsible for government relations, and later as CEO. Sachiko holds degrees in Political Science from the University of Toronto and the London School of Economics and received her doctorate in standardisation policy from TU Delft.
We’re excited to have Sachiko on the Board, and she recently shared her insights on this new chapter in her journey:
What are you most excited about when it comes to joining the Drupal Association Board?
I’m thrilled to join the Drupal Association Board at a time when open source is gaining significant momentum as a foundation for digital public infrastructure in Europe and beyond. I’m particularly excited about the potential to further position Drupal as a critical tool for digital sovereignty and public value, especially as it powers platforms like europa.eu.
What do you hope to accomplish during your time on the board?
During my time on the board, I hope to strengthen the connections between Drupal and broader digital public policy initiatives, ensuring that we’re at the forefront of supporting transparent, secure, and user-centric digital infrastructure. I’m particularly committed to promoting Drupal’s adoption as a foundational tool for digital public infrastructure in the public sector. Additionally, I aim to explore ways to enhance Drupal’s long-term sustainability by encouraging public sector users not only to adopt but also to actively contribute to Drupal’s ecosystem, reinforcing its growth and resilience.
What specific skill or perspective do you contribute to the board?
With a background in public policy, open technologies, and as a senior researcher at RISE, I bring a perspective focused on the societal impact of open standards and open source. My work has consistently emphasized the strategic role of open source in public digital infrastructure, so I’m here to champion that within Drupal’s mission. I also hope to contribute insights from my experience in European funding and public sector collaboration to bolster Drupal’s impact in these areas.
Share a favorite quote or piece of advice that has inspired you.
With so many directions we could go, try to pursue only those that involve working with people you want to be around. I also learned early on in my career that communicating a policy message becomes much easier—and more powerful—when you’re confident it’s the right thing to do.
We’re excited to see the amazing contributions Sachiko will make during her time on the Drupal Association Board. Thank you, Sachiko, for dedicating your time and expertise to serving the Drupal community! Connect with Sachiko on LinkedIn.
The Drupal Association Board of Directors comprises 13 members: nine are nominated for staggered three-year terms, two are elected by Drupal Association members, one seat is reserved for the Drupal Project Founder, Dries Buytaert, and another is reserved for the immediate past chair. The Board meets twice in person and four times virtually each year. It oversees policy development, executive director management, budget approvals, financial reporting, and fundraising efforts.
Real Python: Python News Roundup: December 2024
The Python community has kept up its momentum this month, delivering a host of exciting updates. From the promising improvements to template strings in PEP 750 to the release of Python 3.14.0a2, innovation is front and center. Developers are exploring new tools like partial validation in Pydantic 2.10, while popular projects adapt to the end of life of Python 3.8.
December also welcomes the return of the beloved Advent of Code, challenging programmers and problem solvers with daily puzzles. And for those planning ahead, the PyCon 2025 call for proposals is closing soon, marking a final opportunity to contribute to the community’s largest annual gathering.
Whether you’re diving into cutting-edge features, revisiting Python fundamentals, or connecting with fellow developers, there’s something for everyone this month. Let’s round up the most important developments of last month and see how they can inspire your next Python project!
PEP 750 – Template Strings UpdatedThe authors of PEP 750, which you might have first encountered introducing tag strings, have revised and significantly updated their proposal.
The central point of the PEP is now renamed as template strings, or t-strings, and aligns its syntax with the beloved f-strings:
planet = "World"
greeting = t"Hello, {planet}!"

Looks very familiar! The main practical difference from f-strings is that t-strings are lazily evaluated, which opens the door to common string interpolation scenarios that f-strings can’t handle, such as:
- Performance-optimized logging
- Internationalization
- Template reuse
- HTML templating
All of the above are examples of string interpolation that requires lazy evaluation.
Note: Template reuse can come in handy, for example, when you’re creating template prompts in LLM-powered applications and application frameworks.
If the PEP gets accepted and implemented, then you’ll be able to use t-strings where you currently have to resort to str.format(), a Template object from the string module, or even the older modulo (%) string formatting approach.
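The eager-versus-lazy distinction that motivates t-strings can already be felt today with the logging module. The sketch below is purely illustrative and is not from the PEP; the Lazy wrapper is a hypothetical helper. An f-string argument is evaluated before log.debug() ever sees it, while logging’s deferred %-formatting only runs if the message is actually emitted:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG messages will be discarded
log = logging.getLogger("demo")

calls = {"count": 0}

def expensive_repr():
    """Stand-in for a costly computation only needed when the message is logged."""
    calls["count"] += 1
    return "...lots of data..."

# Eager: the f-string is evaluated *before* log.debug sees it,
# so expensive_repr() runs even though the message is discarded.
log.debug(f"state: {expensive_repr()}")

class Lazy:
    """Defer a computation until something actually str()-ifies it."""
    def __init__(self, fn):
        self.fn = fn
    def __str__(self):
        return self.fn()

# Lazy: logging only formats the message if the DEBUG level is enabled,
# so expensive_repr() is never called here.
log.debug("state: %s", Lazy(expensive_repr))

print(calls["count"])  # prints 1: the eager call ran, the lazy one didn't
```

A t-string like t"state: {expensive_repr()}" would let a logging framework make this deferral automatic, without wrapper classes or %-style placeholders.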
As the Python community continues to innovate, PEP 750 is a testament to the ongoing efforts to enhance language features while keeping them intuitive for developers. If you want to dive deeper into this proposal and its potential impact, then you can read more in the detailed examples section of the PEP.
Python 3.14.0a2 ReleasedThe Python 3.14 team rolled out the second alpha release of Python 3.14, version 3.14.0a2. This release is part of the ongoing development process for Python 3.14, with a total of seven alpha releases planned.
Read the full article at https://realpython.com/python-news-december-2024/ »
Mike Driscoll: JupyterLab 101 Book is Now Available
JupyterLab, the latest iteration of the Jupyter Notebook, is a versatile tool that empowers you to share your code in an easily understandable format.
Hundreds of thousands of people around the world use Jupyter Notebooks or variations of the Notebook architecture for any or all of the following:
- teaching
- presentations
- learning a computer language
- numerical simulations
- statistical modeling
- data visualization
- machine learning
- and much more!
Jupyter Notebooks can be emailed, put on GitHub, or run online. You may also add HTML, images, Markdown, videos, LaTeX, and custom MIME types to your Notebooks. Finally, Jupyter Notebooks support big data integration.
JupyterLab 101 will get you up to speed on the newest user interface for Jupyter Notebooks and the other tools that JupyterLab supports. You now have a tabbed interface that you can use to edit multiple Notebooks, open terminals in your browser, create a Python REPL, and more. JupyterLab also includes a debugger utility to help you figure out your coding issues.
Rest assured, JupyterLab supports all the same programming languages as Jupyter Notebook. The main difference lies in the user interface, and this guide is here to help you navigate it effectively and efficiently.
After reading JupyterLab 101, you will be an expert in JupyterLab and produce quality Notebooks quickly!
Where to Purchase
Purchase on Gumroad, Leanpub or Amazon
The post JupyterLab 101 Book is Now Available appeared first on Mouse Vs Python.
The Drop Times: All Eyes on DrupalCon Singapore 2024
This week, all eyes are on DrupalCon Singapore 2024 — one of the most anticipated events in the Drupal community. Kicking off on Monday, the event has already garnered an enthusiastic response from attendees worldwide. Known for its engaging sessions, DrupalCon Singapore offers a unique opportunity to explore the present and future of Drupal. The Drop Times is here to provide comprehensive coverage with real-time updates, insightful news stories, and key highlights from the conference floor.
DrupalCon is more than just an event; it's a pivotal moment in the Drupal community’s journey. It draws bright new talent while also sharpening the skills of seasoned contributors. This year’s event is especially significant, as it comes just weeks before the official launch of the all-new Drupal CMS on January 15, 2025. Attendees can expect exclusive insights into new developments and get a sneak peek at what’s on the horizon for Drupal.
In addition to our DrupalCon coverage, we’re experimenting with a fresh, streamlined approach to our newsletter. Moving forward, you’ll receive a crisp, concise package of the most important stories from the past week. Our goal is to ensure you stay informed without feeling overwhelmed.
So, let’s stay tuned for all the action from Singapore and beyond. Big changes are on the way for Drupal, and you won’t want to miss a thing!
DrupalCon Singapore 2024
- DrupalCon Singapore 2024: Spotlight on Sponsors Driving Innovation and Community Growth
- Meet the Speakers: DrupalCon Singapore 2024 Part 1
- Meet the Speakers: DrupalCon Singapore 2024 Part II
- Meet the Speakers: DrupalCon Singapore 2024 Part III
- Contribution Day at DrupalCon Singapore 2024: A Day of Collaboration and Innovation
- Networking Opportunities at DrupalCon Singapore 2024: Maximise Your Experience
Discover Drupal
Events
- Drupal Sapporo Meetup to Explore Integration with Open Source Applications on 26 December
- DrupalCon Atlanta 2025 T-Shirt Design Contest: Submit Your Entries by Dec 31
- TDT Is the Official Media Partner for DrupalCon Singapore 2024
- Banking on Open Source: Webinar Explores Open Source Success in Finance Sector
Organization News
- ImageX Recognized as a Great Place to Work for 5 Years in a Row
- Axelerant’s IDMC Digital Overhaul Earns Splash Awards Asia Nomination
- GovFlix: DrupalCon Singapore 2024 Splash Awards Finalist
- QED42’s ADA Project Finalist at Splash Awards 2024
- Transforming Digital Luxury Experience: Aqua Expeditions’ Website Revamp Journey
death and gravity: reader 3.16 released – Archived feed
Hi there!
I'm happy to announce version 3.16 of reader, a Python feed reader library.
What's new?
Here are the highlights since reader 3.15.
Archived feed
It is now possible to archive selected entries to a special "archived" feed, so they can be preserved once the original feed is deleted; this is similar to the Tiny Tiny RSS feature of the same name (which is where I got the idea in the first place).
At a high level, this is available in the web app as an "archive all" button that archives the currently visible entries for a feed; in the redesign, the plan is to archive important entries by default as part of the "delete feed" confirmation page. It may also be a good idea to make this the default through a plugin before then, so that archiving does not require user interaction.
At a low level, this is enabled by archive_entries() (I finally gave up and added a utils module 😅), and the copy_entry() method.
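A minimal sketch of using the new function, assuming archive_entries() takes the reader instance plus an iterable of entries (the feed URL is hypothetical; check the docs for the exact signatures):

```python
from reader import make_reader
from reader.utils import archive_entries

reader = make_reader("db.sqlite")
feed_url = "https://example.com/feed.xml"

# Copy the feed's entries to the special "archived" feed,
# so they survive deleting the original feed afterwards.
archive_entries(reader, reader.get_entries(feed=feed_url))
reader.delete_feed(feed_url)
```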
Entry source
reader now parses and stores the entry source, containing metadata about the source feed if the entry is a copy – this can be the case when a feed aggregates articles from other feeds. The source URL can be used for filtering entries, and its title is part of the feed title during searches.
This was a side-project for the archived feed functionality, since the source of archived entries gets set to their original feed.
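For illustration only, filtering by source might look like the following sketch; the source keyword argument to get_entries() is my assumption of the parameter name, and the URL is hypothetical:

```python
from reader import make_reader

reader = make_reader("db.sqlite")

# Only entries that were copied from this source feed (assumed filter).
for entry in reader.get_entries(source="https://example.com/original.xml"):
    print(entry.title)
```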
Bug fixes
Also during archived feed implementation, I found out foreign keys were disabled in threads other than the one that created the reader instance, so e.g. deleting a feed from another thread would not delete its entries. This is now fixed.
The bug existed since multi-threaded use became allowed in version 2.15. Serving the web application with the serve command is known to be affected; serving it without threads (e.g. the default uWSGI configuration) should not be affected.
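The underlying SQLite behavior is easy to reproduce with the standard library: the foreign_keys pragma is off by default and applies per connection, so every new connection (for example, one opened by another thread) must re-enable it. The schema below is a simplified stand-in, not reader's actual schema:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# PRAGMA below is not silently ignored inside an open transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE feeds (url TEXT PRIMARY KEY);
    CREATE TABLE entries (
        id TEXT,
        feed TEXT REFERENCES feeds(url) ON DELETE CASCADE
    );
    INSERT INTO feeds VALUES ('f');
    INSERT INTO entries VALUES ('e', 'f');
""")

# Foreign keys are OFF by default on each new connection: deleting the
# feed does NOT cascade, leaving an orphaned entry behind.
conn.execute("DELETE FROM feeds")
orphans = conn.execute("SELECT count(*) FROM entries").fetchone()[0]

# Enable the pragma (every new connection must do this!) and retry.
conn.execute("INSERT INTO feeds VALUES ('f')")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("DELETE FROM feeds")
remaining = conn.execute("SELECT count(*) FROM entries").fetchone()[0]

print(orphans, remaining)  # 1 0
```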
Attention
Your database may be in an inconsistent state. The migration to 3.16 will check for this on first use; issues will be reported as a FOREIGN KEY constraint failed storage integrity error; see the changelog for details.
That's it for now. For more details, see the full changelog.
Want to contribute? Check out the docs and the roadmap.
Learned something new today? Share this with others, it really helps!
What is reader?
reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.
reader allows you to:
- retrieve, store, and manage Atom, RSS, and JSON feeds
- mark articles as read or important
- add arbitrary tags/metadata to feeds and articles
- filter feeds and articles
- full-text search articles
- get statistics on feed and user activity
- write plugins to extend its functionality
...all these with:
- a stable, clearly documented API
- excellent test coverage
- fully typed Python
To find out more, check out the GitHub repo and the docs, or give the tutorial a try.
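As a quick taste of the API, the core loop looks roughly like this (a sketch; the feed URL is hypothetical):

```python
from reader import make_reader

reader = make_reader("db.sqlite")  # SQLite-backed storage
reader.add_feed("https://example.com/feed.xml")
reader.update_feeds()

# get_entries() supports filtering, e.g. unread entries only.
for entry in reader.get_entries(read=False):
    print(entry.title)
```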
Why use a feed reader library?
Have you been unhappy with existing feed readers and wanted to make your own, but:
- never knew where to start?
- it seemed like too much work?
- you don't like writing backend code?
Are you already working with feedparser, but:
- want an easier way to store, filter, sort and search feeds and entries?
- want to get back type-annotated objects instead of dicts?
- want to restrict or deny file-system access?
- want to change the way feeds are retrieved by using Requests?
- want to also support JSON Feed?
- want to support custom information sources?
... while still supporting all the feed types feedparser does?
If you answered yes to any of the above, reader can help.
The reader philosophy
- reader is a library
- reader is for the long term
- reader is extensible
- reader is stable (within reason)
- reader is simple to use; API matters
- reader features work well together
- reader is tested
- reader is documented
- reader has minimal dependencies
So you can:
- have full control over your data
- control what features it has or doesn't have
- decide how much you pay for it
- make sure it doesn't get closed while you're still using it
- really, it's easier than you think
Obviously, this may not be your cup of tea, but if it is, reader can help.
1xINTERNET blog: Shaping the Future of Search in Drupal CMS: Interview with Search Track Leads
Endorsed by Dries Buytaert in his keynote at DrupalCon Singapore, the Search Track is driving improvements in how users navigate and interact with content. Read the full interview with the Search Track Leads as they share their achievements, challenges, and future plans.