Feeds
Real Python: Basic Input and Output in Python
For a program to be useful, it often needs to communicate with the outside world. In Python, the input() function allows you to capture user input from the keyboard, while you can use the print() function to display output to the console.
These built-in functions allow for basic user interaction in Python scripts, enabling you to gather data and provide feedback. If you want to go beyond the basics, then you can even use them to develop applications that are not only functional but also user-friendly and responsive.
By the end of this tutorial, you’ll know how to:
- Take user input from the keyboard with input()
- Display output to the console with print()
- Use readline to improve the user experience when collecting input on UNIX-like systems
- Format output using the sep and end keyword arguments of print()
To get the most out of this tutorial, you should have a basic understanding of Python syntax and familiarity with using the Python interpreter and running Python scripts.
Get Your Code: Click here to download the free sample code that you’ll use to learn about basic input and output in Python.
Take the Quiz: Test your knowledge with our interactive “Basic Input and Output in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Basic Input and Output in Python
In this quiz, you'll test your understanding of Python's built-in functions for user interaction, namely input() and print(). These functions allow you to capture user input from the keyboard and display output to the console, respectively.
Reading Input From the Keyboard
Programs often need to obtain data from users, typically through keyboard input. In Python, one way to collect user input from the keyboard is by calling the input() function:
The input() function pauses program execution to allow you to type in a line of input from the keyboard. Once you press the Enter key, all characters typed are read and returned as a string, excluding the newline character generated by pressing Enter.
If you add text in between the parentheses, effectively passing a value to the optional prompt argument, then input() displays the text you entered as a prompt:
Python

>>> name = input("Please enter your name: ")
Please enter your name: John Doe
>>> name
'John Doe'

Adding a meaningful prompt will assist your user in understanding what they’re supposed to input, which makes for a better user experience.
The input() function always reads the user’s input as a string. Even if you type characters that resemble numbers, Python will still treat them as a string:
Python

 1 >>> number = input("Enter a number: ")
 2 Enter a number: 50
 3
 4 >>> type(number)
 5 <class 'str'>
 6
 7 >>> number + 100
 8 Traceback (most recent call last):
 9   File "<python-input-1>", line 1, in <module>
10     number + 100
11     ~~~~~~~^~~~~
12 TypeError: can only concatenate str (not "int") to str

In the example above, you wanted to add 100 to the number entered by the user. However, the expression number + 100 on line 7 doesn’t work because number is a string ("50") and 100 is an integer. In Python, you can’t combine a string and an integer using the plus (+) operator.
You wanted to perform a mathematical operation using two integers, but because input() always returns a string, you need a way to read user input as a numeric type. So, you’ll need to convert the string to the appropriate type:
Python

>>> number = int(input("Enter a number: "))
Enter a number: 50

>>> type(number)
<class 'int'>

>>> number + 100
150

In this updated code snippet, you use int() to convert the user input to an integer right after collecting it. Then, you assign the converted value to the name number. That way, the calculation number + 100 has two integers to add. The calculation succeeds and Python returns the correct sum.
Note: When you convert user input to a numeric type using functions like int() in a real-world scenario, it’s crucial to handle potential exceptions to prevent your program from crashing due to invalid input.
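As a rough sketch of that defensive pattern (the prompt text and messages below are illustrative, not part of the tutorial), you could wrap the conversion in a try ... except block and keep asking until the input is valid:

Python

while True:
    try:
        # int() raises ValueError if the text isn't a valid integer
        number = int(input("Enter a number: "))
        break
    except ValueError:
        print("That wasn't a valid integer. Please try again.")

print(number + 100)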
The input() function lets you collect information from your users. But once your program has calculated a result, how do you display it back to them? Up to this point, you’ve seen results displayed automatically as output in the interactive Python interpreter session.
However, if you ran the same code from a file instead, then Python would still calculate the values, but you wouldn’t see the results. To display output in the console, you can use Python’s print() function, which lets you show text and data to your users.
Writing Output to the Console
In addition to obtaining data from the user, a program will often need to present data back to the user. In Python, you can display data to the console with the print() function.
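As a quick, hedged sketch of what that looks like, including the sep and end keyword arguments mentioned earlier (the values are made up for illustration):

Python

name = "John Doe"
balance = 150

print("Hello,", name)                # arguments are separated by a space by default
print("Balance", balance, sep=": ")  # sep controls the separator: Balance: 150
print("Processing", end="... ")      # end replaces the trailing newline
print("done")                        # continues on the same line: Processing... done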
Read the full article at https://realpython.com/python-input-output/ »
Web Wash: First Look at Drupal CMS (Drupal Starshot)
In the above video, we’ll take our first look at Drupal CMS Beta, part of the Drupal Starshot initiative. This initiative aims to provide a downloadable packaged version of Drupal with pre-installed and configured contributed modules.
In the show notes below, you’ll learn about the Drupal Starshot Initiative, Drupal CMS, and how to install it using DDEV.
We’ll then explore Drupal CMS’s functionality and examine some modules included in the packaged solution.
PyCharm: The State of Data Science 2024: 6 Key Data Science Trends
Generative AI and LLMs have been hot topics this year, but are they affecting trends in data science and machine learning? What new trends in data science are worth following? Every year, JetBrains collaborates with the Python Software Foundation to carry out the Python Developer Survey, which can offer some useful insight into these questions.
The results from the latest iteration of the survey, collected between November 2023 and February 2024, included a new Data Science section. This allowed us to get a more complete picture of data science trends over the past year and highlighted how important Python remains in this domain.
While 48% of Python developers are involved in data exploration and processing, the percentage of respondents using Python for data analysis dropped from 51% in 2022 to 44% in 2023. The percentage of respondents using Python for machine learning dropped from 36% in 2022 to 34% in 2023. At the same time, 27% of respondents use Python for data engineering, and 8% use it for MLOps – two new categories that were added to the survey in 2023.
Let’s take a closer look at the trends in the survey results to put these numbers into context and get a better sense of what they mean. Read on to learn about the latest developments in the fields of data science and machine learning to prepare yourself for 2025.
Data processing: pandas remains the top choice, but Polars is gaining ground
Data processing is an essential part of data science. pandas, a project that is 15 years old, is still at the top of the list of the most commonly used data processing tools. It is used by 77% of respondents who do data exploration and processing. As a mature project, its API is stable, and many working examples can be found on the internet. It’s no surprise that pandas is still the obvious choice. As a NumFOCUS sponsored project, pandas has proven to the community that it is sustainable and its governance model has gained user trust. It is a great choice for beginners who may still be learning the ropes of data processing, as it’s a stable project that does not undergo rapid changes.
On the other hand, Polars, which pitches itself as DataFrames for the new era, has been in the spotlight quite a bit both last year and this year, thanks to the advantages it provides in terms of speed and parallel processing. In 2023, a company led by the creator of Polars, Ritchie Vink, was formed to support the development of the project. This ensures Polars will be able to maintain its rapid pace of development. In July of 2024, version 1.0 of Polars was released. Later, Polars expanded its compatibility with other popular data science tools like Hugging Face and NVIDIA RAPIDS. It also provides a lightweight plotting backend, just like pandas.
So, for working professionals in data science, there is an advantage to switching to Polars. As the project matures, it can become a load-bearing tool in your data science workflow and can be used to process more data faster. In the 2023 survey, 10% of respondents said that they are using Polars as their data processing tool. It is not hard to imagine this figure being higher in this year’s survey.
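To give a flavor of how similar day-to-day code looks in the two libraries, here is a small, hypothetical comparison (the column names and data are invented for illustration):

import pandas as pd
import polars as pl

data = {"city": ["Oslo", "Lima", "Pune"], "temp_c": [4, 22, 31]}

# pandas: filter rows and compute a mean
pdf = pd.DataFrame(data)
print(pdf[pdf["temp_c"] > 10]["temp_c"].mean())

# Polars: the same operation using its expression API
pldf = pl.DataFrame(data)
print(pldf.filter(pl.col("temp_c") > 10)["temp_c"].mean())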
Whether you are a working professional or just starting to process your first dataset, it is important to have an efficient tool at hand that can make your work more enjoyable. With PyCharm, you can inspect your data as interactive tables, which you can scroll, sort, filter, convert to plots, or use to generate heat maps. Moreover, you can get analytics for each column and use AI assistance to explain DataFrames or create visualizations. Apart from pandas and Polars, PyCharm provides this functionality for Hugging Face datasets, NumPy, PyTorch, and TensorFlow.
Try PyCharm for free
An interactive table in PyCharm 2024.2.2 Pro provides tools for inspecting pandas and Polars DataFrames
The popularity of Polars has led to the creation of a new project called Narwhals. Independent from pandas and Polars, Narwhals aims to unite the APIs of both tools (and many others). Since it is a very young project (started in February 2024), it hasn’t yet shown up on our list of the most popular data processing tools, but we suspect it may get there in the next few years.
Also worth mentioning are Spark (16%) and Dask (7%), which are useful for processing large quantities of data thanks to their parallel processes. These tools require a bit more engineering capability to set up. However, as the amount of data that projects depend on increasingly exceeds what a traditional Python program can handle, these tools will become more important and we may see these figures go up.
Data visualization: Will HoloViz Panel surpass Plotly Dash and Streamlit within the next year?
Data scientists have to be able to create reports and explain their findings to businesses. Various interactive visualization dashboard tools have been developed for working with Python. According to the survey results, the most popular of them is Plotly Dash.
In the data science community, Plotly is best known for its interactive graphing libraries, which give R users an experience comparable to ggplot2, the highly popular visualization library for the R language. Ever since Python became popular for data science, Plotly has also provided a Python library, which brings a similar experience to Python. In recent years, Dash, a Python framework for building reactive web apps developed by Plotly, has become an obvious choice for those who are used to Plotly and need to build an interactive dashboard. However, Dash’s API requires some basic understanding of the elements used in HTML when designing the layout of an app. For users who have little to no frontend experience, this could be a hurdle they need to overcome before making effective use of Dash.
Second place for “best visualization dashboard” goes to Streamlit, which has now joined forces with Snowflake. It doesn’t have as long of a history as Plotly, but it has been gaining a lot of momentum over the past few years because it’s easy to use and comes packaged with a command line tool. Although Streamlit is not as customizable as Plotly, building the layout of the dashboard is quite straightforward, and it supports multipage apps, making it possible to build more complex applications.
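As a rough illustration of that ease of use, a tiny dashboard can be sketched in a few lines (the data here is invented, and you would launch the script with the streamlit run command):

import pandas as pd
import streamlit as st

st.title("Monthly signups")

df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "signups": [120, 150, 180]})

months = st.slider("Months to show", 1, 3, 3)  # a simple interactive widget
st.line_chart(df.head(months).set_index("month"))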
However, in the 2024 results these numbers may change a little. There are up-and-coming tools that could catch up to – or even surpass – these apps in popularity. One of them is HoloViz Panel. As one of the libraries in the HoloViz ecosystem, it is sponsored by NumFOCUS and is gaining traction among the PyData community. Panel lets users generate reports in the HTML format and also works very well with Jupyter Notebook. It offers templates to help new users get started, as well as a great deal of customization options for expert users who want to fine-tune their dashboards.
ML models: scikit-learn is still prominent, while PyTorch is the most popular for deep learning
Because generative AI and LLMs have been such hot topics in recent years, you might expect deep learning frameworks and libraries to have completely taken over. However, this isn’t entirely true. There is still a lot of insight that can be extracted from data using traditional statistics-based methods offered by scikit-learn, a well-known machine learning library mostly maintained by researchers. Sponsored by NumFOCUS since 2020, it remains the most important library in machine learning and data science. SciPy, another Python library that provides support for scientific calculations, is also one of the most used libraries in data science.
Having said that, we cannot ignore the impact of deep learning and the increase in popularity of deep learning frameworks. PyTorch, a machine learning library created by Meta, is now under the governance of the Linux Foundation. In light of this change, we can expect PyTorch to continue being a load-bearing library in the open-source ecosystem and to maintain its level of active community involvement. As the most used deep learning framework, it is loved by Python users – especially those who are familiar with numpy, since “tensors”, the basic data structures in PyTorch, are very similar to numpy arrays.
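For readers coming from NumPy, here is a quick sketch of that similarity (standard API usage, not tied to the survey data):

import numpy as np
import torch

arr = np.array([[1.0, 2.0], [3.0, 4.0]])

tensor = torch.from_numpy(arr)       # shares memory with the NumPy array
print(tensor.shape, tensor.dtype)    # torch.Size([2, 2]) torch.float64

# Familiar NumPy-style operations work on tensors, too
print((tensor * 2).sum())
print(tensor.numpy())                # convert back when needed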
You can inspect PyTorch tensors in PyCharm 2024.2.2 Pro just like you inspect NumPy arrays
Unlike TensorFlow, which uses a static computational graph, PyTorch uses a dynamic one – and this makes profiling in Python a blast. To top it all off, PyTorch also provides a profiling API, making it a good choice for research and experimentation. However, if your deep learning project needs to be scalable in deployment and needs to support multiple programming languages, TensorFlow may be a better choice, as it is compatible with many languages, including C++, JavaScript, Python, C#, Ruby, and Swift. Keras is a tool that makes TensorFlow more accessible and is itself a popular choice among deep learning frameworks.
Another framework we cannot ignore for deep learning is Hugging Face Transformers. Hugging Face is a hub that provides many state-of-the-art pre-trained deep learning models that are popular in the data science and machine learning community, which you can download and train further yourself. Transformers is a library maintained by Hugging Face and the community for state-of-the-art machine learning with PyTorch, TensorFlow, and JAX. We can expect Hugging Face Transformers to gain more users in 2024 due to the popularity of LLMs.
With PyCharm you can identify and manage Hugging Face models in a dedicated tool window. PyCharm can also help you to choose the right model for your use case from the large variety of Hugging Face models directly in the IDE.
One new library that is worth paying attention to in 2024 is Scikit-LLM, which allows you to tap into OpenAI models like ChatGPT and integrate them with scikit-learn. This is very handy when text analysis is needed, and you can perform analysis using models from scikit-learn with the power of modern LLMs.
MLOps: The future of data science projects
One aspect of data science projects that is essential but frequently overlooked is MLOps (machine learning operations). In the workflow of a data science project, data scientists need to manage data, retrain the model, and have version control for all the data and models used. Sometimes, when a machine learning application is deployed in production, performance and usage also need to be observed and monitored.
In recent years, MLOps tools designed for data science projects have emerged. One of the issues that has been bothering data scientists and data engineers is versioning the data, which is crucial when your pipeline constantly has data flowing in.
Data scientists and engineers also need to track their experiments. Since the machine learning model will be retrained with new data and hyperparameters will be fine-tuned, it’s important to keep track of model training and experiment results. Right now, the most popular tool is TensorBoard. However, this may be changing soon. TensorBoard.dev has been deprecated, which means users are now forced to deploy their own TensorBoard installations locally or share results using the TensorBoard integration with Google Colab. As a result, we may see a drop in the usage of TensorBoard and an uptick in that of other tools like MLflow and PyTorch.
Another MLOps step that is necessary for ensuring that data projects run smoothly is shipping the development environment for production. The use of Docker containers, a common development practice among software engineers, seems to have been adopted by the data science community. This ensures that the development environment and the production environment remain consistent, which is important for data science projects involving machine learning models that need to be deployed as applications. We can see that Docker is a popular tool among Python users who need to deploy services to the cloud.
This year, Docker containers came in slightly ahead of Anaconda in the “Python installation and upgrade” category.
Charts: 2023 survey results and 2022 survey results
Big data: How much is enough?
One common misconception is that we will need more data to train better, more complex models in order to improve prediction. However, this is not the case. Since models can be overfitted, more is not always better in machine learning. Different tools and approaches will be required depending on the use case, the model, and how much data is being handled at the same time.
The challenge of handling a huge amount of data in Python is that most Python libraries rely on the data being stored in memory. We could just deploy cloud computing resources with huge amounts of memory, but even this approach has its limitations and can be slow and costly.
When handling huge amounts of data that are hard to fit in memory, a common solution is to use distributed computing resources. Computation tasks and data are distributed over a cluster to be performed and handled in parallel. This approach makes data science and machine learning operations scalable, and the most popular engine for this is Apache Spark. Spark can be used with PySpark, the Python API library for it.
As of Spark 2.0, anyone using Spark RDD API is encouraged to switch to Spark SQL, which provides better performance. Spark SQL also makes it easier for data scientists to handle data because it enables SQL queries to be executed. We can expect PySpark to remain the most popular choice in 2024.
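As a small sketch of what that looks like from Python (the table and column names below are made up for this example), you can register a DataFrame as a temporary view and query it with SQL through PySpark:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

df = spark.createDataFrame(
    [("alice", 34), ("bob", 45), ("carol", 29)],
    ["name", "age"],
)
df.createOrReplaceTempView("people")

# Spark SQL lets you express the transformation as a plain SQL query
spark.sql("SELECT name FROM people WHERE age > 30").show()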
Another popular tool for managing data in clusters is Databricks. If you are using Databricks to work with your data in clusters, now you can benefit from the powerful integration of Databricks and PyCharm. You can write code for your pipelines and jobs in PyCharm, then deploy, test, and run it in real time on your Databricks cluster without any additional configuration.
Communities: Events shifting focus toward data science
Many newcomers to Python are using it for data science, and thus more Python libraries have been catering to data science use cases. In that same vein, Python events like PyCon and EuroPython are beginning to include more tracks, talks, and workshops that focus on data science, while events that are specific to data science, like PyData and SciPy, remain popular, as well.
Final thoughts
Data science and machine learning are becoming increasingly active, and together with the popularity of AI and LLMs, more and more new open source tools have become available for use in data science. The landscape of data science continues to change rapidly, and we are excited to see what becomes most popular in the 2024 survey results.
Enhance your data science experience with PyCharm
Modern data science demands skills for a wide range of tasks, including data processing and visualization, coding, model deployment, and managing large datasets. As an integrated development environment (IDE), PyCharm helps you efficiently build this skill set. It provides intelligent coding assistance, top-tier debugging, version control, integrated database management, and seamless Docker integration. For data science, PyCharm supports Jupyter notebooks, as well as key scientific and machine learning libraries, and it integrates with tools like the Hugging Face models library, Anaconda, and Databricks.
Start using PyCharm for your data science projects today and enjoy its latest improvements, including features for inspecting pandas and Polars DataFrames, and for the layer by layer inspection of PyTorch tensors, which is handy when exploring data and building deep learning models.
Try PyCharm for free
Qt for MCUs 2.9 Released
We are excited to announce Qt for MCUs 2.9, which comes with many key features to enable Qt for MCUs to support more use cases in the IoT, Consumer, and Automotive segments. Here are a few of the major highlights from the 2.9 release.
LostCarPark Drupal Blog: Drupal Advent Calendar day 2 - Starshot Installer
It’s day 2 of the Drupal Advent calendar and today we’re taking a look at the first step to any new website built with Drupal CMS, the site installer.
The previous Drupal installer wasn’t terrible, but it required a lot of steps and typically needed a lot more work finding and installing modules once the initial install was complete.
The new installer has tried to simplify the process as much as possible, and offers a friendlier interface.
The primary question it asks is what are the main goals of your site:
At present, there are six options, but these are expected to be expanded in the future…
Python Bytes: #412 Closing the loop
I think the donation notification works
A few months ago, I blogged about a change for Plasma 6.2 to show a once-a-year system notification asking for a donation, starting on December 1st. Various reasons and justifications were given in that post, so I won’t repeat them here. Instead, since December 1st was yesterday in most of the world, it’s time to check in on the day 1 experience! So let’s get right into it:
Did it work?
Well, I woke up to an email inbox that looked like this:
And by the end of the day, the graph on https://kde.org/community/donations/previousdonations (which by the way only counts direct PayPal donations and still doesn’t include those made using Donorbox or direct bank transfer) wound up looking like this:
Yes that’s right, KDE e.V. received double the prior two months’ PayPal donations in a single day!!!
Do people hate us now?
So far, indications point to no! I scoured https://www.reddit.com/r/kde and https://discuss.kde.org all day yesterday and literally only found one non-positive comment about it, dwarfed by a large volume of mildly to highly positive ones. I wasn’t looking at Mastodon or other social media, but a colleague reported something similar.
In addition, a large number of the donations themselves were accompanied by positive messages from the donors. Here are some of my favorites:
KDE is more than just software, it’s a family. Least I can donate, but it’s coming from someone that pirates every other thing or uses the free alternative.
Thanks for all your incredible work over the years.
KDE Plasma is a big part of why I have grown to love Linux as my daily driver
Thanks for all you have done for the linux desktop community
Thanks for Plasma! Couldn’t work without it! (Visually impaired user).
Thanks for your efforts to make the world a little more independent from Big Tech
Love the work, KDE is my daily driver and I’m glad I can help
Just got the Notification to donate in KDE and after thinking about it for a bit decided to donate for the first time, since I’ve been using Linux and specifically KDE for almost a year now. Thanks for your hard work!
Thanks for all of the work and effort put into making KDE the best DE ever!
So, yeah. On the contrary, it feels like our users really, really love us!
Is this repeatable?
It’s too early to say at this point, but I hope so. It will be interesting to see how fast the donations drop off. Will it be relatively fast because everyone who was going to donate after seeing the notification already saw it yesterday? Or will the drop-off take a while because there are more notification-based potential donors who haven’t turned on their Plasma 6.2-using computer yet, or who opened the donations page in a browser tab to action later? We don’t know; we’ll have to wait and see.
However it’s also worth mentioning that these donations are coming entirely from people using distros that include Plasma 6.2. Right now that’s pretty much limited to fast-paced distros like Arch, Fedora KDE, KDE Neon, OpenSUSE Tumbleweed, and their derivatives. Notably, it excludes traditional heavy hitters like Kubuntu and Debian. So there are reasons to expect the donation notification to reach even more eyeballs in 2025 than it has this year.
Now that you’re rich, are you going to buy a bunch of leopard-print Porsche steering wheel covers and other KDE e.V. board junkets?
No board junkets. It’s too early to make a projection based on the performance of a single day, and especially if the donations drop off quickly, this isn’t “Thunderbird money” yet. But it does look quite possible that all these donations may push KDE e.V. into ending up with a balanced budget for the 2024 financial year. That would be pretty fantastic, as we weren’t predicting a balanced budget until 2025 or 2026, instead originally expecting a deficit of over €50k in 2024. And that was already an improvement over the €110k deficit in 2023.
Balancing the budget early is huge, and opens up opportunities. As you may know, German nonprofits like KDE e.V. are required to avoid stockpiling money (hence the intentional deficits), so moving into the realm of positive cashflow means we’ll need to increase our expenditures. Thankfully, KDE e.V. has become very good at spending money over the past few years, largely by expanding our hiring on personnel in technical roles: basically sponsoring community members to improve our products directly.
The easiest way to spend more money is to simply lean into that harder: hire another person, sponsor another project, stuff like that — pretty much what I mentioned in the original post. More money means more tech work financed by KDE itself, directly increasing our institutional ability to control our own destiny. It’s pretty great stuff if you ask me. But again, this is a collective board decision, not up to me alone. And if you disagree with me that this is the right use for KDE’s money, that’s fine too, and I’ll mention that I’m up for re-election on the board next year, so please do feel free to run or vote against me if you’re a KDE e.V. member! The organization works best with a board that reflects its membership’s preferences. I have zero desire to occupy that seat if I’m not representing people properly.
Anyway, it works. It appears to really work. My conclusion is that KDE has built up enough goodwill that our user community loves and trusts us, which made this outpouring of financial support possible. It’s humbling and kind of overwhelming. But it all strengthens my conviction that KDE is pointing in the right direction and amounts to a strong positive force for humanity!
Want to help out? In addition to donating your money which is what we’ve been talking about, an arguably more impactful approach is to donate your time directly, bypassing any institutional middleman that buys time with money! It’s not hard to get started, and there are loads of resources and mentorship opportunities. So help make the world a better place through KDE today!
Junichi Uekawa: Graph for my furusato tax.
Russ Allbery: Review: Long Live Evil
Review: Long Live Evil, by Sarah Rees Brennan
Series: Time of Iron #1
Publisher: Orbit
Copyright: July 2024
ISBN: 0-316-56872-4
Format: Kindle
Pages: 433

Long Live Evil is a portal fantasy (or, arguably more precisely, a western take on an isekai villainess fantasy) and the first book of a series. If the author's name sounds familiar, it's possibly because of In Other Lands, which got a bunch of award nominations in 2018. She has also written a lot of other YA fantasy, but this is her first adult epic fantasy novel.
Rae is in the hospital, dying of cancer. Everything about that experience, from the obvious to the collapse of her friendships, absolutely fucking sucks. One of the few bright points is her sister's favorite fantasy series, Time of Iron, which her sister started reading to her during chemo sessions. Rae mostly failed to pay attention until the end of the first book and the rise of the Emperor. She fell in love with the brooding, dangerous anti-hero and devoured the next two books. The first book was still a bit hazy, though, even with the help of a second dramatic reading after she was too sick to read on her own.
This will be important later.
After one of those reading sessions, Rae wakes up to a strange woman in her hospital room who offers her an option. Rather than die a miserable death that bankrupts her family, she can go through a door to Eyam, the world of Time of Iron, and become the character who suits her best. If she can steal the Flower of Life and Death from the imperial greenhouse on the one day a year that it blooms, she will wake up, cured. If not, she will die. Rae of course goes through, and wakes in the body of Lady Rahela, the Beauty Dipped in Blood, the evil stepsister. One of the villains, on the night before she is scheduled to be executed.
Rae's initial panic slowly turns to a desperate glee. She knows all of these characters. She knows how the story will turn out. And she has a healthy body that's not racked with pain. Maybe she's not the heroine, but who cares, the villains are always more interesting anyway. If she's going to be cast as the villain, she's going to play it to the hilt. It's not like any of these characters are real.
Stories in which the protagonists are the villains are not new (Nimona and Hench come to mind just among books I've reviewed), but they are having a moment. Assistant to the Villain by Hannah Nicole Maehrer came out last year, and this book and Django Wexler's How to Become the Dark Lord and Die Trying both came out this year. This batch of villain books all take different angles on the idea, but they lean heavily on humor. In Long Live Evil, that takes the form of Rae's giddy embrace of villainous scheming, flouncing, and blatant plot manipulation, along with her running commentary on the various characters and their in-story fates.
The setup here is great. Rae is not only aware that she's in a story, she knows it's full of cliches and tropes. Some of them she loves, some of them she thinks are ridiculous, and she isn't shy about expressing both of those opinions. Rae is a naturally dramatic person, and it doesn't take her long to lean into the opportunities for making dramatic monologues and villainous quips, most of which involve modern language and pop culture references that the story characters find baffling and disconcerting.
Unfortunately, the base Time of Iron story is, well, bad. It's absurd grimdark epic fantasy with paper-thin characters and angst as a central character trait. This is clearly intentional for both in-story and structural reasons. Rae enjoys it precisely because it's full of blood and battles and over-the-top brooding, malevolent anti-heroes, and Rae's sister likes the impossibly pure heroes who suffer horrible fates while refusing to compromise their ideals. Rae is also about to turn the story on its head and start smashing its structure to try to get herself into position to steal the Flower of Life and Death, and the story has to have a simple enough structure that it doesn't get horribly confusing once smashed. But the original story is such a grimdark parody, and so not my style of fantasy, that I struggled with it at the start of the book.
This does get better eventually, as Rae introduces more and more complications and discovers some surprising things about the other characters. There are several delightful twists concerning the impossibly pure heroine of the original story that I will not spoil but that I thought retroactively made the story far more interesting. But that leads to the other problem: Rae is both not very good at scheming, and is flippant and dismissive of the characters around her. These are both realistic; Rae is a young woman with cancer, not some sort of genius mastermind, and her whole frame for interacting with the story is fandom discussions and arguments with her sister. Early in the book, it's rather funny. But as the characters around her start becoming more fleshed out and complex, Rae's inability to take them seriously starts to grate. The grand revelation to Rae that these people have their own independent existence comes so late in the book that it's arguably a spoiler, but it was painfully obvious to everyone except Rae for hundreds of pages before it got through Rae's skull.
Those are my main complaints, but there was a lot about this book that I liked. The Cobra, who starts off as a minor villain in the story, is by far the best character of the book. He's not only more interesting than Rae, he makes everyone else in the book, including Rae, more interesting characters through their interactions. The twists around the putative heroine, Lady Rahela's stepsister, are a bit too long in coming but are an absolute delight. And Key, the palace guard that Rae befriends at the start of the story, is the one place where Rae's character dynamic unquestionably works. Key anchors a lot of Rae's scenes, giving them a sense of emotional heft that Rae herself would otherwise undermine.
The narrator in this book does not stick with Rae. We also get viewpoint chapters from the Cobra, the Last Hope, and Emer, Lady Rahela's maid. The viewpoints from the Time of Iron characters can be a bit eye-roll-inducing at the start because of how deeply they follow the grimdark aesthetic of the original story, but by the middle of the book I was really enjoying the viewpoint shifts. This story benefited immensely from being seen from more angles than Rae's chaotic manipulation. By the end of the book, I was fully invested in the plot line following Cobra and the Last Hope, to the extent that I was a bit disappointed when the story would switch back to Rae.
I'm not sure this was a great book, but it was fun. It's funny in places, but I ended up preferring the heartfelt parts to the funny parts. It is a fascinating merger of gleeful fandom chaos and rather heavy emotional portrayals of both inequality and the experience of terminal illness. Rees Brennan is a stage four cancer survivor and that really shows; there's a depth, nuance, and internal complexity to Rae's reactions to illness, health, and hope that feels very real. It is the kind of book that can give you emotional whiplash; sometimes it doesn't work, but sometimes it does.
One major warning: this book ends on a ridiculous cliffhanger and does not in any sense resolve its main plot arc. I found this annoying, not so much because of the wait for the second volume, but because I thought this book was about the right length for the amount of time I wanted to spend in this world and wish Rees Brennan had found a way to wrap up the story in one book. Instead, it looks like there will be three books. I'm in for at least one more, since the story was steadily getting better towards the end of Long Live Evil, but I hope the narrative arc survives being stretched out across that many words.
This one's hard to classify, since it's humorous fantasy on the cover and in the marketing, and that element is definitely present, but I thought the best parts of the book were when it finally started taking itself seriously. It's metafictional, trope-subverting portal fantasy full of intentional anachronisms that sometimes fall flat and sometimes work brilliantly. I thought the main appeal of it would be watching Rae embrace being a proper villain, but then the apparent side characters stole the show. Recommended, but you may have to be in just the right mood.
Content notes: Cancer, terminal illness, resurrected corpses, wasting disease, lots of fantasy violence and gore, and a general grimdark aesthetic.
Rating: 7 out of 10
Charles: Hello World
As the computer science tradition demands, we must start with a Hello World.
Though I have to say this hello world took quite a long time to reach the internet. I’ve been thinking about setting up this website for way over a year, but there are always too many things to decide - what Static Site Generator will I use? Where should I get a domain from? Which registrar would be better now? What if I want to set up a mail server, is it good enough? Oh, and what about the theme, which one to choose? Can I get one simple enough to not fetch javascript or css from external sources?
This was taking so long that even my friends were saying “Please, just share your screen and let’s do it now!”. Well, rejoice friends, now it’s done!
Oliver Davies' daily list: Override Node Options and Drupal 11
Last week, I released a new version of the Override Node Options module - version 8.x-2.9.
This version makes the module compatible with Drupal 11 and, as there are no breaking changes, it's still compatible with Drupal 9 and 10.
It's great to see the module used on many Drupal 8+ websites and distributions such as LocalGov Drupal.
Whilst the overall number of installations has been consistent, the number of Drupal 7 installations has decreased whilst the Drupal 8+ version installations have increased.
With a Drupal 11-compatible version now available, I hope it continues to increase.
unifont @ Savannah: Unifont 16.0.02 Released
1 December 2024

Unifont 16.0.02 is now available. This is a minor release with many glyph improvements. See the ChangeLog file for details.
Download this release from GNU server mirrors at:
https://ftpmirror.gnu.org/unifont/unifont-16.0.02/
or if that fails,
https://ftp.gnu.org/gnu/unifont/unifont-16.0.02/
or, as a last resort,
ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.02/
These files are also available on the unifoundry.com website:
https://unifoundry.com/pub/unifont/unifont-16.0.02/
Font files are in the subdirectory
https://unifoundry.com/pub/unifont/unifont-16.0.02/font-builds/
A more detailed description of font changes is available at
https://unifoundry.com/unifont/index.html
and of utility program changes at
https://unifoundry.com/unifont/unifont-utilities.html
Information about Hangul modifications is at
https://unifoundry.com/hangul/index.html
and
http://unifoundry.com/hangul/hangul-generation.html
Guido Günther: Free Software Activities November 2024
Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.43 release, the initial file chooser portal, phosh-osk-stub now handling digit, number, phone, and PIN input purposes via special layouts, as well as Phoc mostly catching up with wlroots 0.18 and the current development version targeting 0.19.
phosh
- When taking a screenshot via keybinding or power button long press, save screenshots to clipboard and disk (MR)
- Robustify Screenshot CI job (MR)
- Update CI pipeline (MR)
- Fix notification banners not being shown when they aren't tall enough (MR). Another 4y old bug hopefully out of the way.
- Add rfkill mock and docs (MR). Useful for HKS testing.
- Release 0.43~rc1 and 0.43
- Drop libsoup workaround (MR)
- Ensure notification only takes its actual height (MR)
- Move wlroots 0.18 update forward (MR). Needs a bit more work before we can make it default.
- Catch up with wlroots development branch (MR) allowing us to test current wlroots again.
- Some of the above already applies to main so schedule it for 0.44 (MR)
- Add layouts for PIN, number and phone input purpose (MR)
- Release 0.43~rc1
- Ensure translation get picked up, various cleanups and release 0.43.0 (MR)
- Make desktop file match app-id (MR)
- Fix typo and reduce number of strings to translate (MR)
- Add translator comments (MR). This, the above and additional fixes in p-m-s were prompted by i18n feedback from Alexandre Franke, thanks a lot!
- Release 0.43.0
- Initial version of the adaptive file chooser dialog using gtk-rs. See demo.
- Allow to activate via double click (for non-touch use) (MR)
- Use pfs to provide a file chooser portal (MR)
- Slightly improve point release handling (MR)
- Improve string freeze announcements and add phosh-tour (MR)
- Upload Phosh 0.43.0~rc1 and 0.43.0 (MR, MR, MR, MR, MR, MR, MR, MR, MR, MR, MR)
- meta-phosh: Add Recommend: for xdg-desktop-portal-phosh (MR)
- phosh-osk-data got accepted, create repo, brush up packaging and upload to unstable (MR)
- phosh-osk-stub: Recommend data packager (MR)
- Phosh: drop reverts (MR)
- varnam-schemes: Fix autopkgtest (MR)
- varnam-schemes: Improve packaging (MR)
- Prepare govarnam 1.9.1 (MR)
- ussd: Set input purpose and switch to AdwDialog (MR, Screenshot)
- Drop libhandy leftover (MR)
- Improve docs and cleanup markdown (MR)
- Mention gbp push in intro (MR)
- Use application instead of productname entities to improve reading flow (MR)
- Drop mention of wlr_renderer_begin_with_buffer (MR)
- Add mock for gsd-rfkill (MR)
- Sync notification categories with the portal spec (MR)
- Add categories for SMS (MR)
- Add a pubdate so it's clear the specs aren't stale (MR) (got fixed in a different and better way, thanks Matthias!)
- Allow to set filters in file chooser portal demo (MR)
- Robustify file generation (MR)
- Unbreak tests on non intel/amd architectures (e.g. arm64) (MR)
This is not code by me but reviews I did on other peoples code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!
- flathub: livi runtime and gst update (MR)
- phosh: Split linters into their own test suite (MR)
- phosh: QuickSettings follow-up (MR)
- phosh: Accent color fixes (MR)
- phosh: Notification animation (MR)
- phosh: end-session dialog timeout fix (MR)
- phosh: search daemon (MR)
- phosh-ev: Migrate to newer gtk-rs and async_channel (MR)
- phosh-mobile-settings: Update gmobile (MR)
- phosh-mobile-settings: Make panel-switcher scrollable (MR)
- phosh-mobile-settings: i18n comments (MR)
- gbp doc updates (MR)
- gbp handle suite names with number prefix (MR)
- Debian libvirt dependency changes (MR)
- Chatty: misc improvements (MR)
- iio-sensor-proxy: buffer driver without trigger (MR)
- gbp doc improvements (MR)
- gbp: More doc improvements (MR)
- gbp: Clean on failure (MR)
- gbp: DEP naming consistency (MR)
If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.
Comments?Join the Fediverse thread
Colin Watson: Free software activity in November 2024
Most of my Debian contributions this month were sponsored by Freexian.
You can also support my work directly via Liberapay.
Conferences
I attended MiniDebConf Toulouse 2024, and the MiniDebCamp before it. Most of my time was spent with the Freexian folks working on debusine; Stefano gave a talk about its current status with a live demo (frantically fixed up over the previous couple of days, as is traditional) and with me and others helping to answer questions at the end. I also caught up with some people I haven’t seen in ages, ate a variety of delicious cheeses, and generally had a good time. Many thanks to the organizers and sponsors!
After the conference, Freexian collaborators spent a day and a half doing some planning for next year, and then went for an afternoon visiting the Cité de l’espace.
Rust team
I upgraded these packages to new upstream versions, as part of upgrading pydantic and rpds-py:
- rust-archery
- rust-jiter (noticing an upstream test bug in the process)
- rust-pyo3 (fixing CVE-2024-9979)
- rust-pyo3-build-config
- rust-pyo3-ffi
- rust-pyo3-macros
- rust-pyo3-macros-backend
- rust-regex
- rust-regex-automata
- rust-serde
- rust-serde-derive
- rust-serde-json
- rust-speedate
- rust-triomphe
Last month, I mentioned that we still need to work out what to do about the multipart vs. python-multipart name conflict in Debian (#1085728). We eventually managed to come up with an agreed plan; Sandro has uploaded a renamed binary package to experimental, and I’ve begun work on converting reverse-dependencies (asgi-csrf, fastapi, python-curies, and starlette done so far). There’s a bit more still to do, but I expect we can finish it soon.
I fixed problems related to adding Python 3.13 support in:
- coreapi
- git-repo-updater
- offlineimap3
- ptyprocess
- pytest-testinfra
- python-formencode
- python-iniparse
- python-line-profiler
- python-parameterized
- python-sdjson (contributed upstream)
- python-testfixtures (contributed upstream)
- python-venusian
- sphinx-a4doc (contributed upstream, and I also made a small antlr4 improvement)
- webpy
I fixed some packaging problems that resulted in failures any time we add a new Python version to Debian:
I fixed other build/autopkgtest failures in:
- psycopg3 (contributed upstream)
- python-aiohttp
- python-cotengrust
- python-distutils-extra
- python-formencode
- python-openapi-schema-validator
- python-openapi-spec-validator
- sphinxcontrib-towncrier
I packaged python-quart-trio, needed for a new upstream version of python-urllib3, and contributed a small packaging tweak upstream.
I backported a twisted fix that caused problems in other packages, including breaking debusine‘s tests.
I disentangled some upstream version confusion in python-catalogue, and upgraded to the current upstream version.
I upgraded these packages to new upstream versions:
- aioftp (fixing a Python 3.13 failure)
- ansible-core
- ansible
- debugpy
- jsonpickle
- manuel
- psycopg2
- pydantic-core
- pydantic
- pydantic-settings
- pymssql (fixing a Python 3.13 failure)
- pyodbc (fixing a Python 3.13 failure)
- python-argh (fixing a Python 3.13 failure)
- python-boltons (fixing a Python 3.13 failure)
- python-channels-redis
- python-colorlog (fixing a Python 3.13 failure)
- python-django-pgtrigger
- python-line-profiler
- python-pathvalidate (fixing a Python 3.13 failure)
- python-plac (fixing a Python 3.13 failure)
- python-precis-i18n
- python-pure-eval (fixing a Python 3.13 failure)
- python-pythonjsonlogger (contributing a small packaging fix upstream, as well as a test fix to jupyter-events)
- python-rdata (fixing a Python 3.13 failure)
- python-semantic-release
- python-telethon (fixing a Python 3.13 failure, and contributing some test fixes upstream)
- python-tornado (fixing CVE-2024-52804)
- python-trio (fixing a Python 3.13 failure)
- python-trustme
- python-typeguard
- python-urllib3 (fixing CVE-2024-37891 and a Python 3.13 failure, and requiring some shenanigans with its hypercorn test-dependency)
- python-zipp
- quart (fixing CVE-2024-49767)
- rpds-py (fixing a build failure)
- sen
- sqlparse
- stravalib
- transaction
- waitress
- zope.interface
I contributed Incus support to needrestart upstream.
In response to Helmut’s Cross building talk at MiniDebConf Toulouse, I fixed libfilter-perl to support cross-building (5b4c2e10, f9788c27).
I applied a patch to move aliased files from / to /usr in iprutils (#1087733).
I adjusted debconf to use the new /usr/lib/apt/apt-extracttemplates path (#1087523).
I upgraded putty to 0.82.
gettext @ Savannah: GNU gettext 0.23 released
Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.23.tar.gz
New in this release:
- Internationalized data formats:
- XML:
- The escaping of characters such as & < > has been changed:
- No escaping is done any more by xgettext, when creating a POT file.
- Instead, extra escaping can be requested for the msgfmt pass, when merging into an XML file.
- The default value of 'escape' in the <gt:escapeRule> was "yes"; now it is "no".
- This means that existing translations of older POT files may no longer fully apply. As a maintainer of a package that has translatable XML files, you need to regenerate the POT file and pass it on to your translators.
- XML schemas for .its and .loc files are now provided.
- The value of the xml:lang attribute, inserted by msgfmt, now conforms to W3C standards.
- 'msgfmt --xml' accepts an option --replace-text, which causes the output to be a mono-lingual XML file instead of a multi-lingual XML file.
- xgettext and 'msgfmt --xml' now support DocBook XML files.
- Desktop: xgettext now produces POT files with correct line numbers.
- Programming languages support:
- Python:
- xgettext now assumes source code for Python 3 rather than Python 2. This affects the interpretation of escape sequences in string literals.
- xgettext now recognizes the f-string syntax.
- Scheme:
- xgettext now supports the option '-L Guile' as an alternative to '-L Scheme'. They are nearly equivalent. They differ in the interpretation of escape sequences in string literals: While 'xgettext -L Scheme' assumes the R6RS and R7RS syntax of string literals, 'xgettext -L Guile' assumes the syntax of string literals understood by Guile 2.x and 3.0 (without command-line option '--r6rs' or '--r7rs', and before a '#!r6rs' directive is seen).
- xgettext now recognizes comments of the form '#; <expression>'.
- Java: xgettext now has an improved recognition of format strings when the String.formatted method is used.
- JavaScript:
- xgettext now parses template literals inside JSX correctly.
- xgettext has a new option --tag that customizes the behaviour of tagged template literals.
- C#:
- The build system and tools now also support 'dotnet' (.NET) as C# implementation. In order to declare a preference for 'dotnet' over 'mono', you can use the configure option '--enable-csharp=dotnet'.
- xgettext now recognizes strings with embedded expressions (a.k.a. interpolated strings).
- awk: xgettext now recognizes string concatenation by juxtaposition.
- Smalltalk: xgettext now recognizes the string concatenation operator ','.
- Vala: xgettext now has an improved recognition of format strings when the string.printf method is used.
- Glade: xgettext has improved support for GtkBuilder 4.
- Tcl: With the recently released Tcl 9.0, characters outside the Unicode BMP in Tcl message catalogs (.msg files) will work regardless of the locale's encoding.
- Perl:
- xgettext now reports warnings instead of fatal errors.
- xgettext now recognizes strings with embedded expressions (a.k.a. interpolated strings).
- PHP:
- xgettext now recognizes strings with embedded expressions.
- xgettext now scans Heredoc and Nowdoc strings correctly.
- xgettext now regards the format string directives %E, %F, %g, %G, %h, %H as valid.
- Runtime behaviour:
- In the C.UTF-8 locale, like in the C locale, the *gettext() functions now return the msgid untranslated. This is relevant for GNU systems, Linux with musl libc, FreeBSD, NetBSD, OpenBSD, Cygwin, and Android.
- Documentation:
- The section "Preparing Strings" now gives more advice how to deal with string concatenation and strings with embedded expressions.
- xgettext:
- Most of the diagnostics emitted by xgettext are now labelled as "warning" or "error".
- msgmerge:
- The option '--sorted-output' is now deprecated.
- libgettextpo library:
- This library is now multithread-safe.
- The function 'po_message_set_format' now supports resetting a format string mark.
Real Python: Python String Formatting: Available Tools and Their Features
String formatting is essential in Python for creating dynamic and well-structured text by inserting values into strings. This tutorial covers various methods, including f-strings, the .format() method, and the modulo operator (%). Each method has unique features and benefits for different use cases. The string formatting mini-language provides additional control over the output format, allowing for aligned text, numeric formatting, and more.
F-strings provide an intuitive and efficient way to embed expressions inside string literals. The .format() method offers flexibility for lazy interpolation and is compatible with Python’s formatting mini-language. The modulo operator is an older technique still found in legacy code. Understanding these methods will help you choose the best option for your specific string formatting needs.
By the end of this tutorial, you’ll understand that:
- String formatting in Python involves inserting and formatting values within strings using interpolation.
- Python supports different types of string formatting, including f-strings, the .format() method, and the modulo operator (%).
- F-strings are generally the most readable and efficient option for eager interpolation in Python.
- Python’s string formatting mini-language offers features like alignment, type conversion, and numeric formatting.
- While f-strings are more readable and efficient compared to .format() and the % operator, the .format() method supports lazy evaluation.
To get the most out of this tutorial, you should be familiar with Python’s string data type and the available string interpolation tools. Having a basic knowledge of the string formatting mini-language is also a plus.
Get Your Code: Click here to download the free sample code you’ll use to learn about Python’s string formatting tools.
Take the Quiz: Test your knowledge with our interactive “Python String Formatting: Available Tools and Their Features” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python String Formatting: Available Tools and Their Features
You can take this quiz to test your understanding of the available tools for string formatting in Python, as well as their strengths and weaknesses. These tools include f-strings, the .format() method, and the modulo operator.
Interpolating and Formatting Strings in Python
String interpolation involves generating strings by inserting other strings or objects into specific places in a base string or template. For example, here’s how you can do some string interpolation using an f-string:
Python

>>> name = "Bob"
>>> f"Hello, {name}!"
'Hello, Bob!'

In this quick example, you first have a Python variable containing a string object, "Bob". Then, you create a new string using an f-string. In this string, you insert the content of your name variable using a replacement field. When you run this last line of code, Python builds a final string, 'Hello, Bob!'. The insertion of name into the f-string is an interpolation.
Note: To dive deeper into string interpolation, check out the String Interpolation in Python: Exploring Available Tools tutorial.
When you do string interpolation, you may need to format the interpolated values to produce a well-formatted final string. To do this, you can use different string interpolation tools that support string formatting. In Python, you have these three tools:
- F-strings
- The str.format() method
- The modulo operator (%)
The first two tools support the string formatting mini-language, a feature that allows you to fine-tune your strings. The third tool is a bit old and has fewer formatting options. However, you can use it to do some minimal formatting.
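As a quick, hedged illustration of that minimal formatting, the modulo operator relies on C-style conversion specifiers (the values below are arbitrary):

Python

>>> user = "Jane"
>>> balance = 987.6543
>>> "Hello, %s! Your balance is $%.2f" % (user, balance)
'Hello, Jane! Your balance is $987.65'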
Note: The built-in format() function is yet another tool that supports the format specification mini-language. This function is typically used for date and number formatting, but you won’t cover it in this tutorial.
In the following sections, you’ll start by learning a bit about the string formatting mini-language. Then, you’ll dive into using this language, f-strings, and the .format() method to format your strings. Finally, you’ll learn about the formatting capabilities of the modulo operator.
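For a quick preview of where you’re headed, here’s a small sketch of the .format() method combined with a format specifier from the mini-language (the example values are made up):

Python

>>> template = "Balance: ${:,.2f}"
>>> template.format(1234.5)
'Balance: $1,234.50'

Because the template is a regular string, you can define it first and fill in the values later, which is the lazy interpolation mentioned above.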
Using F-Strings to Format Strings
Python 3.6 added a string interpolation and formatting tool called formatted string literals, or f-strings for short. As you’ve already learned, f-strings let you embed Python objects and expressions inside your strings. To create an f-string, you must prefix the string with an f or F and insert replacement fields in the string literal. Each replacement field must contain a variable, object, or expression:
Python

>>> f"The number is {42}"
'The number is 42'

>>> a = 5
>>> b = 10
>>> f"{a} plus {b} is {a + b}"
'5 plus 10 is 15'

In the first example, you define an f-string that embeds the number 42 directly into the resulting string. In the second example, you insert two variables and an expression into the string.
Formatted string literals are a Python parser feature that converts f-strings into a series of string constants and expressions. These are then joined up to build the final string.
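Conceptually, and glossing over the actual bytecode, the second f-string above behaves roughly like string constants joined with the formatted result of each expression:

>>> a, b = 5, 10
>>> format(a) + " plus " + format(b) + " is " + format(a + b)
'5 plus 10 is 15'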
Using the Formatting Mini-Language With F-StringsWhen you use f-strings to create strings through interpolation, you need to use replacement fields. In f-strings, you can define a replacement field using curly brackets ({}) as in the examples below:
Python >>> debit = 300.00 >>> credit = 450.00 >>> f"Debit: ${debit}, Credit: ${credit}, Balance: ${credit - debit}" 'Debit: $300.0, Credit: $450.0, Balance: $150.0' Copied! Read the full article at https://realpython.com/python-string-formatting/ »
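As a hedged preview of where that tutorial goes next, adding a format specifier such as :.2f inside the replacement fields would make the dollar amounts above display consistently with two decimal places:

>>> debit = 300.00
>>> credit = 450.00
>>> f"Debit: ${debit:.2f}, Credit: ${credit:.2f}, Balance: ${credit - debit:.2f}"
'Debit: $300.00, Credit: $450.00, Balance: $150.00'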
Real Python: How to Check if a Python String Contains a Substring
To check if a string contains another string in Python, use the in membership operator. This is the recommended method for confirming the presence of a substring within a string. The in operator is intuitive and readable, making it a straightforward way to evaluate substring existence.
Additionally, you can use string methods like .count() and .index() to gather more detailed information about substrings, such as their frequency and position. For more complex substring searches, use regular expressions with the re module. When you're dealing with tabular data, pandas provides efficient tools for searching for substrings within DataFrame columns.
By the end of this tutorial, you’ll understand that:
- The in membership operator is the recommended way to check if a Python string contains a substring.
- Converting input text to lowercase generalizes substring checks by removing case sensitivity.
- The .count() method counts occurrences of a substring, while .index() finds the first occurrence’s position.
- Regular expressions in the re module allow for advanced substring searches based on complex conditions.
- The .str.contains() method in pandas identifies which DataFrame entries contain a specific substring.
Understanding these methods and tools enables you to effectively check for substrings in Python strings, catering to various needs from simple checks to complex data analysis.
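As a brief sketch of the last two takeaways (the data is made up, and pandas is assumed to be installed), re.search() handles pattern-based checks and the pandas .str.contains() method flags matching entries:

>>> import re
>>> bool(re.search("secret", "The SECRET is out!", re.IGNORECASE))
True

>>> import pandas as pd
>>> names = pd.Series(["Secret Gadgets Ltd", "Open Widgets Inc"])
>>> names.str.contains("secret", case=False)
0     True
1    False
dtype: bool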
Get Your Code: Click here to download the free sample code that you’ll use to check if a string contains a substring.
How to Confirm That a Python String Contains Another StringIf you need to check whether a string contains a substring, use Python’s membership operator in. In Python, this is the recommended way to confirm the existence of a substring in a string:
Python >>> raw_file_content = """Hi there and welcome. ... This is a special hidden file with a SECRET secret. ... I don't want to tell you The Secret, ... but I do want to secretly tell you that I have one.""" >>> "secret" in raw_file_content True Copied!The in membership operator gives you a quick and readable way to check whether a substring is present in a string. You may notice that the line of code almost reads like English.
Note: If you want to check whether the substring is not in the string, then you can use not in:
Python >>> "secret" not in raw_file_content False Copied!Because the substring "secret" is present in raw_file_content, the not in operator returns False.
When you use in, the expression returns a Boolean value:
- True if Python found the substring
- False if Python didn’t find the substring
You can use this intuitive syntax in conditional statements to make decisions in your code:
Python >>> if "secret" in raw_file_content: ... print("Found!") ... Found! Copied!In this code snippet, you use the membership operator to check whether "secret" is a substring of raw_file_content. If it is, then you’ll print a message to the terminal. Any indented code will only execute if the Python string that you’re checking contains the substring that you provide.
Note: Python always considers the empty string a substring of any other string, so checking for the empty string in a string returns True:
Python >>> "" in "secret" True Copied!This may be surprising because Python considers emtpy strings as false, but it’s an edge case that is helpful to keep in mind.
The membership operator in is your best friend if you just need to check whether a Python string contains a substring.
However, what if you want to know more about the substring? If you read through the text stored in raw_file_content, then you’ll notice that the substring occurs more than once, and even in different variations!
Which of these occurrences did Python find? Does capitalization make a difference? How often does the substring show up in the text? And what’s the location of these substrings? If you need the answer to any of these questions, then keep on reading.
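As a quick sketch of two of those questions, the .count() and .index() string methods mentioned in the takeaways report how often the lowercase substring appears in raw_file_content and where it first shows up:

>>> raw_file_content.count("secret")
2
>>> raw_file_content.index("secret")
66

Both methods are case-sensitive, so the "SECRET" and "Secret" variations aren't included in these results.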
Generalize Your Check by Removing Case SensitivityPython strings are case sensitive. If the substring that you provide uses different capitalization than the same word in your text, then Python won’t find it. For example, if you check for the lowercase word "secret" on a title-case version of the original text, the membership operator check returns False:
Python >>> title_cased_file_content = """Hi There And Welcome. ... This Is A Special Hidden File With A Secret Secret. ... I Don't Want To Tell You The Secret, ... But I Do Want To Secretly Tell You That I Have One.""" >>> "secret" in title_cased_file_content False Copied!Despite the fact that the word secret appears multiple times in the title-case text title_cased_file_content, it never shows up in all lowercase. That’s why the check that you perform with the membership operator returns False. Python can’t find the all-lowercase string "secret" in the provided text.
Humans have a different approach to language than computers do. This is why you’ll often want to disregard capitalization when you check whether a string contains a substring in Python.
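A common way to do that, and a sketch of where the article is heading, is to lowercase the text before applying the membership check:

>>> "secret" in title_cased_file_content.lower()  # case-insensitive check
True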
Read the full article at https://realpython.com/python-string-contains-substring/ »
Real Python: Python Exceptions: An Introduction
Python exceptions provide a mechanism for handling errors that occur during the execution of a program. Unlike syntax errors, which are detected by the parser, Python raises exceptions when an error occurs in syntactically correct code. Knowing how to raise, catch, and handle exceptions effectively helps to ensure your program behaves as expected, even when encountering errors.
In Python, you handle exceptions using a try … except block. This structure allows you to execute code normally while responding to any exceptions that may arise. You can also use else to run code if no exceptions occur, and the finally clause to execute code regardless of whether an exception was raised.
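As an illustrative sketch of how those clauses fit together (the values are hypothetical, not taken from the article):

>>> try:
...     number = int("42")            # code that might raise an exception
... except ValueError:
...     print("Not a number!")        # runs only if int() raised a ValueError
... else:
...     print(f"Parsed {number}")     # runs only if no exception was raised
... finally:
...     print("Done.")                # runs no matter what
...
Parsed 42
Done.

Catching a specific exception like ValueError, rather than using a bare except, keeps unrelated errors visible.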
By the end of this tutorial, you’ll understand that:
- Exceptions in Python occur when syntactically correct code results in an error.
- You can handle exceptions using the try, except, else, and finally keywords.
- The try … except block lets you execute code and handle exceptions that arise.
- Python 3 introduced more built-in exceptions compared to Python 2, making error handling more granular.
- It’s bad practice to catch all exceptions at once using except Exception or the bare except clause.
- Combining try, except, and pass allows your program to continue silently without handling the exception.
- Using try … except is not inherently bad, but you should use it judiciously to handle only known issues appropriately.
In this tutorial, you’ll get to know Python exceptions and all relevant keywords for exception handling by walking through a practical example of handling a platform-related exception. Finally, you’ll also learn how to create your own custom Python exceptions.
Get Your Code: Click here to download the free sample code that shows you how exceptions work in Python.
Take the Quiz: Test your knowledge with our interactive “Python Exceptions: An Introduction” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python Exceptions: An IntroductionIn this quiz, you'll test your understanding of Python exceptions. You'll cover the difference between syntax errors and exceptions and learn how to raise exceptions, make assertions, and use the try and except block.
Understanding Exceptions and Syntax ErrorsSyntax errors occur when the parser detects an incorrect statement. Observe the following example:
Python Traceback >>> print(0 / 0)) File "<stdin>", line 1 print(0 / 0)) ^ SyntaxError: unmatched ')' Copied!The arrow indicates where the parser ran into the syntax error. Additionally, the error message gives you a hint about what went wrong. In this example, there was one bracket too many. Remove it and run your code again:
Python >>> print(0 / 0) Traceback (most recent call last): File "<stdin>", line 1, in <module> ZeroDivisionError: division by zero Copied!This time, you ran into an exception error. This type of error occurs whenever syntactically correct Python code results in an error. The last line of the message indicates what type of exception error you ran into.
Instead of just writing exception error, Python details what type of exception error it encountered. In this case, it was a ZeroDivisionError. Python comes with various built-in exceptions as well as the possibility to create user-defined exceptions.
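As a minimal sketch of that last option (the class name is made up for illustration), a user-defined exception is simply a class that inherits from Exception:

>>> class InvalidNumberError(Exception):
...     """Raised when a number falls outside the allowed range."""
...
>>> try:
...     raise InvalidNumberError("the number should not exceed 5")
... except InvalidNumberError as error:
...     print(f"Caught: {error}")
...
Caught: the number should not exceed 5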
Raising an Exception in PythonThere are scenarios where you might want to stop your program by raising an exception if a condition occurs. You can do this with the raise keyword:
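As a minimal, hedged sketch, the simplest form of the statement raises an exception with no message at all:

>>> raise Exception
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception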
You can even complement the statement with a custom message. Assume that you’re writing a tiny toy program that expects only numbers up to 5. You can raise an error when an unwanted condition occurs:
Python low.py
number = 10

if number > 5:
    raise Exception(f"The number should not exceed 5. ({number=})")

print(number)
Copied!In this example, you raised an Exception object and passed it an informative custom message. You built the message using an f-string and a self-documenting expression.
When you run low.py, you’ll get the following output:
Python Traceback Traceback (most recent call last): File "./low.py", line 3, in <module> raise Exception(f"The number should not exceed 5. ({number=})") Exception: The number should not exceed 5. (number=10) Copied!The program comes to a halt and displays the exception to your terminal or REPL, offering you helpful clues about what went wrong. Note that the final call to print() never executed, because Python raised the exception before it got to that line of code.
With the raise keyword, you can raise any exception object in Python and stop your program when an unwanted condition occurs.
Debugging During Development With assert Read the full article at https://realpython.com/python-exceptions/ »
This Week in KDE Apps: OptiImage first release, Itinerary redesign and more
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps.
This week, we are continuing to polish our applications for the KDE Gear 24.12.0 release, but we're already starting work on the 25.04 release coming next year. We also made the first release of OptiImage, an image size optimizer.
Meanwhile, as part of the 2024 end-of-year fundraiser, you can "Adopt an App" in a symbolic effort to support your favorite KDE app. This week, we are particularly grateful to Yogesh Girikumar, Luca Weiss and 1peter10 for supporting Itinerary; Tobias Junghans and Curtis for Konsole; Daniel Bagge and Xavier Guillot for Filelight; F., Christian Terboven, Kevin Krammer and Sean M. for the Kontact suite; Tanguy Fardet, dabe, lengau and Joshua Strobl for NeoChat; Pablo Rauzy for KWrite; PJ. for LabPlot; Dominik Barth for Kasts; Kevin Krammer for Ruqola; Florent Tassy, elbekai and retrokestrel for Gwenview; MathiusD and Dadanaut for Elisa, Andreas Kilgus and @rph.space for Konqueror; trainden@lemmy.blahaj.zone for KRDC; Marco Rebhan and Travis McCoy for Ark; and domportera for Krfb.
Getting back to all that's new in the KDE App scene, let's dig in!
GCompris Educational game for childrenGCompris 4.3 is out and contains bug fixes and graphics improvements in multiple activities.
KDE Itinerary Digital travel assistantWe redesigned the timeline of your trips and the query result pages when searching for a public transport connection to work better with a small screen while still showing all the relevant information. (Carl Schwan, 25.04.0. Link 1 and link 2)
Checking for updates and downloading map data are now scoped to a trip and will only query data from the internet that's related to that trip. (Volker Krause, 25.04.0. Link 1 and link 2)
The export buttons used in Itinerary are not blurry anymore. (Carl Schwan, 24.12.0. Link)
Add an extractor for GoOut tickets (an event platform in Poland, Czechia and Slovakia) as well as luma and the pkpasses from Flixbus.de. (David Pilarcik, 24.12.0. Link, link 2 and link 3)
Optimize querying a location by sorting the list of countries only once instead of hundreds of times. (Carl Schwan, 24.12.0. Link)
OptiImage Image optimizer to reduce the size of imagesOptiImage 1.0.0 is out! This is the initial release of this image size optimizer and you can read all the details on the announcement blog post.
Karp KDE arranger for PDFsKarp now uses the QPDF library directly instead of invoking a separate process, which improves speed while making PDF operations more reliable. (Tomasz Bojczuk. Link)
Kate Advanced Text EditorIt's once again possible to scroll, select text, and click on links inside documentation tooltips in Kate. (Leia uwu, 24.12. Link) Leia also improved the tooltip positioning logic so that tooltips no longer obscure the hovered word. (Leia uwu, 25.04. Link)
KMail A feature-rich email applicationThe mail folder selection dialog now remembers which folders were collapsed and expanded between invocations.
Ruqola Rocket Chat ClientRuqola 2.3.2 is out and includes many fixes for RocketChat 7.0!
Spectacle Screenshot Capture UtilityOn Wayland, the "Window Under Cursor" mode has been renamed to "Select Window", since there you need to select the window yourself. (Noah Davis, 25.04.0. Link)
Tokodon Browse the FediverseBetter icon for Android, which is also adaptable depending on your theme. (Alois Spitzbart, 24.12. Link)
Streaming timeline events and notifications now work for servers using GoToSocial. (snow flurry, 24.12. Link)
Slightly improved the performance of the timeline, with particular focus on the media. (Joshua Goins, 24.12. Link)
In the status composer, user info is now shown, which is useful if you post from multiple accounts. The look of the text box has also been updated. (Joshua Goins, 24.12. Link)
Added the ability to configure your notification policy. This allows you to reject or allow notifications e.g. for new accounts. (Joshua Goins, 25.03. Link)
Improved the appearance of the search page on desktop. (Joshua Goins, 24.12. Link)
Added preliminary support for Iceshrimp.NET instances. (Joshua Goins, 24.12. Link)
Added an error log in the UI to keep track of network errors. (Joshua Goins, 25.03. Link)
Kirigami AddonsKirigami Addons 1.6.0 is out! You can read the full announcement on my (Carl's) blog. This week we also made the following changes:
Sped up the loading of Kirigami pages that use FormComboboxDelegate. This is particularly noticeable for the country combobox in Itinerary, but it affects more applications. (Carl Schwan, Kirigami Addons 1.6.0. Link)
Added new RadioSelector and FormRadioSelectorDelegate components to Kirigami Addons. In the screenshot below, you can see how they're used in Itinerary. (Mathis Brüchert, Kirigami Addons 1.6.0. Link)
…And Everything ElseThis blog only covers the tip of the iceberg! If you’re hungry for more, check out Nate's blog about Plasma and be sure not to miss his This Week in Plasma series, where every Saturday he covers all the work being put into KDE's Plasma desktop environment.
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get InvolvedThe KDE organization has become important in the world, and your time and contributions have helped us get there. As we grow, we're going to need your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer either. There are many things you can do: you can help hunt and confirm bugs, even maybe solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.
You can also help us by donating. Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world.
To get your application mentioned here, please ping us in invent or in Matrix.
LostCarPark Drupal Blog: Drupal Advent Calendar day 1 - Starshot: a Brief Introduction to Drupal CMS
Welcome to this year’s Drupal Advent Calendar. This time, the focus is on the most important Drupal initiative in quite some time.
Code-named Starshot, it aims to take Drupal to a new level of user-friendliness and ease of use.
Over recent versions, Drupal has become incredibly powerful, and it now powers many enterprise websites for major corporations, governments, and NGOs around the world.
Starshot was announced by the founder of the Drupal project, Dries Buytaert, at DrupalCon Portland, in April of this year. This proposed a new default installation of Drupal with many extra features, and…