Feeds
Driving the global conversation about “Open Source AI”
The Open Source Initiative (OSI) continues the work of exploring complexities surrounding the development and use of artificial intelligence in Deep Dive: AI – Defining Open Source AI, with the goal of collaboratively establishing a clear and defensible definition of “Open Source AI.” OSI is bringing together global experts to establish a shared set of principles that can recreate for AI practitioners the permissionless, pragmatic, and simplified collaboration that the Open Source Definition has enabled for software.
Building community momentum and support

We’ve gathered a significant amount of support from groups all over the world. Most recently, Google has increased its financial commitments to support this urgent initiative. Timothy Jordan, Director of Open Source and Developer Relations at Google, stated “Google is excited to continue our support of the Open Source Initiative and, more broadly, of open source developers. We look forward to the open collaboration involved in drafting the Definition of Open Source AI and hope it will help accelerate innovation in this space.”
For Catherine Stihler, executive director of Creative Commons, “It’s critical to develop shared definitions about what it means to contribute to the commons, including through open source. The participatory process organized by the OSI is an important way to find the common values shared by the widest variety of organizations and people around the world.”
Mark Collier, COO of the OpenInfra Foundation, said: “The next decade of open infrastructure will be built hand-in-hand with AI. The OpenInfra Foundation and the community engaged with its projects, including OpenStack, Kata Containers, and StarlingX, is focused on defining how AI will play its role. We’re excited to participate in OSI’s process to find—as soon as possible—a common baseline and definition that all of us can rely on to further the values of ‘open’ to the AI field.”
Other organizations, like GitHub, Amazon, OSS Capital, Weaviate and Sourcegraph also believe in this effort and are supporting the process with generous donations. OSI also welcomes individual donations.
“Deep Dive: Defining Open Source AI” webinars

After gathering a group of people from Mozilla Foundation, Creative Commons, Wikimedia Foundation, Internet Archive, Linux Foundation Europe, OSS Capital, and the OSI board in June 2023 in San Francisco, OSI is kicking off our webinar series to hear from more experts.
The presentation series identifies foundational principles of “Open” in the context of AI and will contribute to the conversations and collective thinking. The speakers were selected for their focus on precise problem areas in AI and for the clear, expertise-based suggestions for solutions they can offer.
Webinars will be held Tuesday through Thursday between September 26 and October 12 (daily schedule coming soon). Each session will include a live Q&A with attendees. Registration is free, and a single registration gives you access to all webinars in the series.
Comment on the Draft of the “Open Source AI Definition”
A draft of the Open Source AI Definition will be available for public discussion at All Things Open, on October 17. Interested parties can review the full schedule of the global drafting and review process.
The post Driving the global conversation about “Open Source AI” appeared first on Voices of Open Source.
PyBites: Empower Your Python Ambitions – From Idea Paralysis to Real-World Projects
In this podcast episode we talk about the significance of building real-world Python applications.
Listen here:
Or watch here:
Bob highlights the importance of breaking away from tutorial paralysis and creating genuine software solutions to understand and confront real-world complexities.
He also emphasizes the career benefits of showcasing tangible Python projects on your portfolio / GitHub / resume.
As an actionable step, listeners are introduced to the Pybites Portfolio Assessment tool.
Through a fictional character, Alex, listeners are guided on how to use the tool to identify their passions, strengths, and weaknesses, and ultimately leverage Python to realize their goals through real-world app building.
Take the assessment here (your submission will be emailed to Pybites). If you go the manual pen + paper route, then just send it via email – good luck!
Stack Abuse: Traversing a List in Reverse Order in Python
If you've been coding in Python for a while, you're probably familiar with how to traverse a list in the usual way - from the first element to the last. But what about when you need to traverse a list in reverse order? It might not be something you need to do every day, but when you do, it's good to know how.
In this Byte, we'll explore three different methods to traverse a list in reverse order in Python.
Why Traverse a List in Reverse Order?

There are several situations where traversing a list in reverse order can be beneficial. Imagine that you're dealing with a list of events that are stored in chronological order, and you want to process them from most recent to oldest. Or maybe you're implementing an algorithm that requires you to work backwards through a list. Whatever the reason, luckily there are a few ways to accomplish this in Python.
Method 1: Using the reversed() Function

Python has a built-in function called reversed() that returns a reverse iterator. An iterator is an object that contains a countable number of values and can be iterated (looped) upon. Here's a quick example:
my_list = [1, 2, 3, 4, 5]
for i in reversed(my_list):
    print(i)

Output:
5
4
3
2
1

The reversed() function doesn't actually modify the original list. Instead, it gives you a reverse iterator that you can loop over. Otherwise, if it were to actually reverse the list and return a new one, this could become a very expensive operation.
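To see this for yourself, here's a small sketch (added for illustration, not from the original article) showing that reversed() hands back an iterator and leaves the original list untouched:

my_list = [1, 2, 3, 4, 5]
rev = reversed(my_list)
print(rev)        # a list_reverseiterator object, not a new list
print(list(rev))  # [5, 4, 3, 2, 1]
print(my_list)    # [1, 2, 3, 4, 5] - the original is unchanged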
Method 2: Using List Slicing

Another way to traverse a list in reverse order is by using list slicing. The slicing operator [:] can take three parameters - start, stop, and step. By setting the step parameter to -1, we can traverse the list in reverse order.
my_list = [1, 2, 3, 4, 5]
for i in my_list[::-1]:
    print(i)

Output:
5
4
3
2
1

It might not be as readable as using the reversed() function, but it's more compact and is powerful enough to be used in many other use cases (i.e. traversing every other item from the end).
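For instance, here's a quick sketch (added for illustration) of that "every other item from the end" use case, using a step of -2:

my_list = [1, 2, 3, 4, 5]
print(my_list[::-2])  # [5, 3, 1] - every other item, starting from the end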
Method 3: Using a For Loop with range()

The third method involves using a for loop with the range() function. The range() function can take three parameters - start, stop, and step. By setting the start parameter to the last index of the list, the stop parameter to -1 (to go all the way to the beginning of the list), and the step parameter to -1 (to go backwards), we can traverse the list in reverse order.
my_list = [1, 2, 3, 4, 5]
for i in range(len(my_list) - 1, -1, -1):
    print(my_list[i])

Output:
5
4
3
2
1

The range function may have been one of the first ways you learned to generate a list of incrementing numbers, often used for iterating. But you may not have known about the start, stop, and step parameters. Now that you do, you can probably think of other use-cases for this function.
Comparing the Methods

Now that we have seen a few different methods to reverse a list in Python, let's compare them in terms of readability, performance, and use cases.
The reversed() function is the most Pythonic way to reverse a list. It's simple, readable, and efficient. It's especially useful when you need to iterate over the list in reverse order, but don't want to modify the original list.
fruits = ['apple', 'banana', 'cherry']
for fruit in reversed(fruits):
    print(fruit)

Output:
cherry
banana
apple

List slicing is another Pythonic method. It's concise and readable, but it creates a copy of the list, which might not be desirable if you're working with a large list due to memory usage.
fruits = ['apple', 'banana', 'cherry']
print(fruits[::-1])

Output:
['cherry', 'banana', 'apple']

The for loop with range() is a more traditional approach. It's a bit more verbose and less Pythonic, but it gives you more control over the iteration, such as skipping elements.
fruits = ['apple', 'banana', 'cherry']
for i in range(len(fruits)-1, -1, -1):
    print(fruits[i])

Output:
cherry
banana
apple

Working with Nested Lists

Reversing a nested list can be a bit tricky. Let's say we have a list of lists and we want to reverse each inner list. Here's how you can do it with list comprehension and the reversed() function.
nested_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
reversed_nested_list = [list(reversed(l)) for l in nested_list]
print(reversed_nested_list)

Output:
[[3, 2, 1], [6, 5, 4], [9, 8, 7]]

You may notice that this did not reverse the order of the inner lists themselves. If you want to do that, you can simply use reversed() again on the outer list.
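A minimal sketch of that combined approach (added here for illustration) might look like this:

nested_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# Reverse each inner list, and also reverse the order of the lists themselves
fully_reversed = [list(reversed(inner)) for inner in reversed(nested_list)]
print(fully_reversed)  # [[9, 8, 7], [6, 5, 4], [3, 2, 1]]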
Traversing in Reverse Order with Conditions

Sometimes, you may want to traverse a list in reverse order and perform actions only on certain elements. Let's say we want to print only the odd numbers in reverse order. We can do this using a for loop with range() and an if statement:
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
for i in range(len(numbers) - 1, -1, -1):
    if numbers[i] % 2 != 0:
        print(numbers[i])

Output:
9
7
5
3
1

Another way to achieve this with range is to use a step of -2, instead of -1, and to drop the if condition.
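Here's a rough sketch of that variant (added for illustration); note that it only works because the odd values happen to sit at every other index in this particular list:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
# Start at the last index and step back two positions at a time
for i in range(len(numbers) - 1, -1, -2):
    print(numbers[i])  # prints 9, 7, 5, 3, 1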
Conclusion

Traversing a list in reverse order is a common operation in Python, and there are several ways to achieve this. The reversed() function and list slicing are Pythonic methods that are simple and readable, while a for loop with range() gives you more control over the iteration. When working with nested lists or applying conditions, these methods can be combined and adapted to fit your needs.
Qt on macOS 14 Sonoma
As is customary, Apple announced their latest operating system versions during WWDC this June, including the latest major version of macOS, named after the wine region located in California's Sonoma County.
Home Charging
I recently wrote about my experiences driving a fully electric car. Today, the electrician dropped by and installed a charging box in my garage. Finally, I can do 11 kW charging from home, at the lowest possible price.
The box was installed super late because the Easee box that I initially ordered was banned from sale in Sweden this spring, so now I'm trying Zaptec. I guess the next step is to integrate it with Home Assistant and see where it gets me!
Russell Coker: Links August 2023
This is an interesting idea from Bruce Schneier, an “AI Dividend” paid to every person for their contributions to the input of ML systems [1]. We can’t determine whose input was most used, so sharing the money equally seems fair. It could end up as yet another justification for a Universal Basic Income.
The Long Now foundation has an insightful article about preserving digital data [2]. It covers the history of lost data and the new challenges archivists face with proprietary file formats.
Tesla gets fined for having special “Elon mode” [3], turns out that being a billionaire isn’t an exemption from road safety legislation.
Wired has an interesting article about Marcus Hutchins, how he prevented a serious bot attack and how he had a history in crime when he was a teenager [5]. It’s good to see that some people can reform.
The IEEE has a long and informative article about what needs to be done to transition to electric cars [6]. It’s a lot of work and we should try and do it as fast as possible.
Linus Tech Tips has an interesting video about a new cooling system for laptops (and similar use cases for moving tens of watts from a thin space) [7]. This isn’t going to be useful for servers or desktops as big heavy heatsinks work well for them. But for something to put on top of a laptop CPU, or to have several of them connected to a laptop CPU by heat pipes, it could be very useful. The technology of piezoelectric cooling devices is interesting on its own; I expect we will see more of that in the future.
- [1] https://www.schneier.com/blog/archives/2023/07/the-ai-dividend.html
- [2] https://tinyurl.com/28k5jdjm
- [3] https://tinyurl.com/2cdm5a8k
- [4] https://tinyurl.com/yylsokta
- [5] https://tinyurl.com/y7vzgqo6
- [6] https://tinyurl.com/2bzqn6mg
- [7] https://www.youtube.com/watch?v=vdD0yMS40a0
Zero to Mastery: Python Monthly Newsletter 💻🐍
Droptica: Save Time Building Complex Drupal Websites with Code Generation and No Code Tools
The long time to build a system is often pointed out as a drawback of using Drupal in web development. However, creating complex websites using this technology doesn’t have to be time-consuming at all. In this blog post, I’ll present you with a list of modules and tools that clearly reduce the time to build systems on Drupal.
General information about Drupal

Drupal is a system written in PHP and built from modules. There are dozens of them in Drupal's core, and several thousand are available for free download from the Drupal.org website.
A PHP developer can also create a custom module for Drupal and add any functionality. Developers often take this route: it seems easier (though it isn't faster) to write the required functionality from scratch than to familiarize yourself with the existing modules that could provide it.
The key to reaping the benefits of choosing Drupal as a technology is to treat it and its modules like LEGO blocks from which you build a system.
Treating Drupal only as a base and adding all the needed functionality as custom code is a path to increased project costs. In the long run, it also leads to abandoning Drupal as a base solution for building systems, because no one likes to pay more than they would for alternative options available on the market, and nowadays there is a lot to choose from in the web development world.
When looking at other technologies, it’s worth paying attention to how many different technologies you need to use to achieve what Drupal offers. Very often, you need to use many frameworks, libraries, or systems and combine them all. With these connections, problems and errors often arise (e.g., website A didn't correctly send data to website B's API, etc.), which take time to debug and fix.
Systems on Drupal are most often built as a single application with a single codebase (headless Drupal will also appear later in the text), and this simplifies application maintenance, development, and the new version implementation quite a bit. This is a significant advantage for applications with regular deployments (e.g., once a week), which reduces their time, eliminates potential problems, and facilitates application maintenance costs.
Code generation and no code tools to speed up work in Drupal

I’ve divided the tools that will help you speed up the time of creating complex web pages on Drupal into several groups. You’ll find here descriptions, screenshots, and short videos. Based on these examples, you’ll see how quickly you can build websites in Drupal.
Code and data generators

Code and database generators can significantly reduce the programmer’s work time. Every web developer working with Drupal should become familiar with these tools.
Module Builder

Module Builder is a module for Drupal that generates the files needed to make a module. Some elements are repetitive, and constantly writing them from scratch unnecessarily takes precious minutes. With the help of Module Builder, you’ll reduce the time of creating custom modules.
Drush Generate

Drush is a tool for managing Drupal from the command line. One of the handy commands available in Drush is “generate.” Like Module Builder, this command helps you create the code needed when building modules and saves you time.
Devel Generate

Devel Generate is part of the Devel module. This tool can generate test data. This is very useful when testing how the system behaves or looks when a large amount of data comes in. By reaching for this module, you save time creating test content and can focus on testing the application. I especially recommend this solution to testers working with Drupal.
No code tools and modules

Drupal offers certain modules so that you don't have to write your own custom ones. You can generate data structures and application logic without writing a single line of code. Some of these modules are already in Drupal core. Combining these tools with code generators (not everything can be achieved by clicking, and sometimes you need to write code) gives you a considerable advantage when implementing applications and websites on Drupal over other solutions.
Fields module

Fields is a module that is part of Drupal core. It allows you to extend entities with additional attributes - for example, you can add a “Phone” field to a user profile to store phone number information, or you can add a “File upload” field to the “Page” content type to enable you to insert downloadable PDF files.
Views module

The Views module is also part of Drupal core. It allows you to take data from a database and display it in a formatted way. You can extend its capabilities using many additional modules, such as exporting data to CSV format.
Entity Construction Kit (ECK)

Drupal core has several entity types, including Node, User, and Taxonomy. Sometimes, you need to build your own entity instead of using, for example, a new content type. You can do it by creating a new custom module (for example, using the Module Builder module mentioned above) or you can use the Entity Construction Kit (ECK) module. With its help, without writing code, you can create a new data structure in the database and use it with, e.g., Fields and Views modules. In this case, you can also perfectly see another advantage of Drupal - modules work together rather than being separate elements.
Event - Condition - Action (ECA)

The ECA module allows you to create actions on various events, such as “send an email if someone adds a comment.” The module's capabilities are vast, and if an option is missing, it can be expanded with additional actions or conditions.
Webform

A form on a web page and in an application is a common form of interaction with users. It’s often essential for website administrators to be able to easily create new forms without waiting for a development team. Marketing departments need to add them to landing pages for campaigns, and HR departments need them to collect data from employees in various types of surveys. The examples are many. The Webform module makes it easy to build contact forms and other forms of this kind.
Feeds module

The Feeds module retrieves data from external sources and saves it to a database in Drupal. The simplest use of the module is to retrieve data from RSS, but you can also configure it for other sources, such as XML files. All data import configuration is done by clicking around the administration interface. So, there is no need to involve a programmer in this. An example of using this module could be, for example, importing job postings to a company website from an external management system or importing recent blog posts to a corporate intranet system (built on Drupal).
Content building tools

Nowadays, building new subpages on a website involves not only adding text but also inserting many components that will make a web page attractive and convenient to use for the visitor. An editor needs tools to build complex sites and a system that doesn’t limit them in creating content.
There are several such solutions in Drupal. Depending on the needs of content managers, you can choose from the many options available. Here are some examples.
Layout Builder

The Layout Builder module is found in Drupal core. It allows you to manage the layout of elements for a content type (e.g., all articles) or specific content. The module is regularly developed, and its capabilities can be extended with additional modules.
Paragraphs module

Paragraphs is an additional module that extends the possibilities of building a data structure with the Fields module. It’s the basis of the Droopler system - a tool for quickly building company and corporate websites. We have built over a dozen ready-made components that editors can use when creating content there.
Other tools for content

In addition to the solutions above, there are several other content-building tools available for Drupal.
Integrations with external applications

Today, the number of applications that companies and organizations use is growing. Drupal fits perfectly into such an environment because it can easily integrate with external systems.
Drupal can pass stored data to other systems or accept data from applications. It has a RESTful Web Services module that allows simple and complex configurations.
These integration options open up the possibility of using Drupal as a headless CMS. One example can be found in our case study of a project for PZPN, documenting the creation of a system where the frontend is separated from the backend.
No code and code generation tools - summary

The examples described above are only a tiny part of the capabilities of Drupal modules. There are many more, and all these tools make it possible for Drupal to reduce the time needed to build websites or web applications.
If you plan to create a complex website, Drupal is worth considering. Is this technology suitable for your project? Take advantage of a free consultation at our Drupal agency, during which we will help analyze your case.
LN Webworks: Drupal: The Cutting-Edge CMS Now an Incredible Base for Ecommerce Websites
Research suggests that there are currently more than 26 million ecommerce websites available for consumers to explore and more are being created every day. An effective way for e-commerce companies to outperform such fierce competition is to choose a cutting-edge and robust e-commerce platform. As more and more entrepreneurs have this realization, there is an increasing shift toward Drupal commerce. The following reasons paint a clear picture and perfectly elucidate why more and more e-commerce companies are availing of Drupal development services.
Andrew Cater: Building a mirror of various Red Hat oriented "stuff"
I've already described in brief how I built a mirror that currently mirrors Debian and Ubuntu on a daily basis. That was relatively straightforward: I know how to install Debian and configure a basic system without a GUI, the ftpsync scripts are well maintained, and I can pull some archives and get one pushed to me so that I've always got up-to-date copies of Debian and Ubuntu.
I wanted to do something similar using Rocky Linux to pull in archives for Almalinux, Rocky Linux, CentOS, CentOS Stream and (optionally) Fedora.
(This was originally set up using Red Hat Enterprise Linux on a developer's subscription and rebuilt using Rocky Linux so that the machine could be passed on to someone else if necessary. Red Hat 9.1 has moved to x86_64-v2 - on the machine I have (HP Microserver Gen8), 9.1 fails immediately. It has been rebuilt to use Rocky 8.8.)
This is a minimal install of Rocky as console only - the machine it's on only has 4G of memory so won't run a GUI reliably. It will run Cockpit so can be remotely administered. One user to run everything - mirror.
Minimal install of Rocky 8.7 from DVD .iso. SELinux is enabled, SSH works for remote access. SELinux had to be tweaked to allow /srv/ the appropriate permissions to be served by nginx. /srv is a large LVM volume rather than a RAID 6 - I didn't have enough disks
Adding nginx, enabling Cockpit and editing the Rocky Linux mirroring scripts resulted in something straightforward to reproduce.
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;

location / {
    autoindex on;
    autoindex_exact_size off;
    autoindex_format html;
    autoindex_localtime off;

    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;
}
Rocky Linux mirroring scripts

Systemd unit file for the service:
[Unit]
Description=Rocky Linux Mirroring script
[Service]
Type=simple
User=mirror
Group=mirror
ExecStart=/usr/local/bin/rockylinux
[Install]
WantedBy=multi-user.target
Systemd timer unit file:

[Unit]
Description=Run Rocky Linux mirroring script daily
[Timer]
OnCalendar=*-*-* 08:13:00
OnCalendar=*-*-* 22:13:00
Persistent=true
[Install]
WantedBy=timers.target
#
# mirrorsync - Synchronize a Rocky Linux mirror
# By: Dennis Koerner <koerner@netzwerge.de>
#
# The latest version of this script can be found at:
# https://github.com/rocky-linux/rocky-tools
#
# Please read https://docs.rockylinux.org/en/rocky/8/guides/add_mirror_manager
# for further information on setting up a Rocky mirror.
#
# Copyright (c) 2021 Rocky Enterprise Software Foundation
This is a very long script in total.
The crucial parts I changed just list the mirror to pull from and the place to put the files.
# A complete list of mirrors can be found at
# https://mirrors.rockylinux.org/mirrormanager/mirrors/Rocky
src="mirrors.vinters.com::rocky"
# Your local path. Change to whatever fits your system.
# $mirrormodule is also used in syslog output.
mirrormodule="rocky-linux"
dst="/srv/${mirrormodule}"
filelistfile="fullfiletimelist-rocky"
lockfile="/home/mirror/rocky.lockfile"
logfile="/home/mirror/rocky.log"
The logfile looks something like this; the single time spec file (fullfiletimelist-rocky) is used to check whether another rsync needs to be run:
sent 606,565 bytes received 38,808,194,155 bytes 44,839,746.64 bytes/sec
total size is 1,072,593,052,385 speedup is 27.64
End: Fri 27 Jan 2023 08:27:49 GMT
fullfiletimelist-rocky unchanged. Not updating at Fri 27 Jan 2023 22:13:16 GMT
fullfiletimelist-rocky unchanged. Not updating at Sat 28 Jan 2023 08:13:16 GMT
It was essentially easier to store fullfiletimelist-rocky in /home/mirror than anywhere else.
Very similar small modifications to the Rocky mirroring scripts were used to mirror the other distributions I'm mirroring. (Almalinux, CentOS, CentOS Stream, EPEL and Rocky Linux).
Stack Abuse: Python Naming Conventions for Variables, Functions, and Classes
Python, like any other programming language, has its own set of rules and conventions when it comes to naming variables and functions. These conventions aren't just for aesthetics or to make your code look pretty, they serve a much more important role in making your code more readable and maintainable. If you've read many of my articles on StackAbuse, I talk a lot about writing readable code. By following Pythonic best-practices in naming and formatting your code, you'll make it much more readable for others (and yourself).
In this article, we'll explore the different naming conventions used in Python and understand why they matter.
Why Naming Conventions Matter

Imagine working on a large codebase where variables and functions are named/formatted haphazardly. It would be a nightmare to understand what each variable or function does, let alone debug or add new features. This is one of the reasons why we put so much emphasis on following conventions.
Naming conventions are basically just agreed-upon standards that programmers follow when naming their variables, functions, classes, and other code elements. They provide a level of predictability that makes it easier to understand the purpose of a piece of code. This is especially important when you're working in a team.
Following naming conventions isn't just about making your code understandable to others. It's also about making it easier for your future self. You might understand your code perfectly well now, but you might not remember what everything does six months down the line.
Variable Naming Conventions

In Python, variable names are more than just placeholders for values - they are a vital part of your code's readability. Python's variable naming convention is based on the principle of "readability counts", one of the guiding philosophies of Python.
A variable name in Python should be descriptive and concise, making it easy for anyone reading your code to understand what the variable is used for. It should start with a lowercase letter, and it can include letters, numbers, and underscores. However, it cannot start with a number.
Here are some examples:
name = "John Doe"
age = 30
is_student = False

Note: Python is case sensitive, which means age, Age, and AGE are three different variables.
In Python, we commonly use snake_case for variable names, where each word is separated by an underscore. This is also known as lower_case_with_underscores.
student_name = "John Doe"
student_age = 30
is_student = False

Function Naming Conventions

Like variable names, function names in Python should be descriptive and concise. The function name should clearly indicate what the function does. Python's naming conventions for functions are similar to its conventions for variables.
In Python, we typically use snake_case for function names. Here's an example:
def calculate_sum(a, b):
    return a + b

result = calculate_sum(5, 3)
print(result)  # Output: 8

Note: It's a good practice to use verbs in function names since a function typically performs an action.
In addition to snake_case, Python also uses PascalCase for naming classes, and occasionally camelCase, but we'll focus on those in another section. For now, remember that consistency in your naming convention is important for writing clean, Pythonic code.
Class Naming Conventions

For naming classes in Python, a different set of conventions applies compared to naming variables or functions. In Python, class names typically use PascalCase, also known as UpperCamelCase. This means that the name starts with an uppercase letter and has no underscores between words. Each word in the name should also start with an uppercase letter.
Here's an example to illustrate the naming convention for classes:
class ShoppingCart:
    def __init__(self, items=[]):
        self.items = items

    def add_item(self, item):
        self.items.append(item)

my_cart = ShoppingCart()
my_cart.add_item("apple")

In this example, ShoppingCart is a class that adheres to the PascalCase naming convention.
Note: While function names often use verbs to indicate actions, class names usually employ nouns or noun phrases. This is because a class often represents a thing or a concept rather than an action.
Sometimes you'll encounter classes that contain acronyms or initialisms. In such cases, it's conventional to keep the entire acronym uppercase:
class HTTPResponse:
    def __init__(self, status_code, content):
        self.status_code = status_code
        self.content = content

Just like with functions, the key to good class naming is to be descriptive and concise. The name should clearly convey the class's purpose or functionality. And as always, maintaining consistency in your naming conventions throughout your codebase is vital for readability and maintainability.
Conclusion

In this article, we've explored the importance of naming conventions in Python and how they contribute to code readability and maintainability. We've shown the different types of naming conventions for variables, functions, and classes, such as PascalCase and snake_case.
Python does not enforce these conventions, but adhering to them is considered good practice and can really improve your code's readability, especially when working in teams.
Mike Driscoll: Textual Apps Coming to a Website Near You
Textual is an amazing Python package for creating Text-Based User Interfaces (TUIs) in Python. You can learn more in An Intro to Textual – Creating Text User Interfaces with Python.
However, Textual isn’t only about creating user interfaces for your terminal. The Textual team is also making Textual for the web! Textual Cloud Service will allow developers to run their terminal GUIs in web applications.
When creating a Textual-based web application, you will use a Textual Agent. These agents can be configured to serve single or multiple textual apps using a TCP/IP connection to a cloud service that supports the Websocket protocol.
According to the Textual Cloud Service documentation, the benefits of using their service are as follows:
- Works over proxies – The websocket protocol is designed to cooperate with network infrastructure such as proxies.
- Bypasses firewalls – Firewalls are generally configured to allow outgoing TCP/IP connections.
- Encrypted – Connections are encrypted with industry standards.
- Compressed – Data is compressed on the fly.
Keep an eye on the Textual product so you know when this new feature goes live!
Related Reading

- Python Based Textual Apps Are Coming to the Web – InfoWorld
- An Intro to Textual – Creating Text User Interfaces with Python
The post Textual Apps Coming to a Website Near You appeared first on Mouse Vs Python.
Stack Abuse: Creating a Dictionary with Comprehension in Python
As you've probably come to learn with Python, there are quite a few ways to do an operation, some methods being better than others. One of the features that contribute to its power is the ability to create dictionaries using dictionary comprehension. This Byte will introduce you to this concept and demonstrate how it can make your code more efficient and readable.
Why Use Dictionary Comprehension?

Dictionary comprehension is a concise and memory-efficient way to create and populate dictionaries in Python. It follows the principle of "Do more with less code". It's not just about writing less code, it's also about making the code more readable and easier to understand.
Consider a scenario where you need to create a dictionary from a list. Without dictionary comprehension, you would need to create an empty dictionary and then use a for loop to add elements to it. With dictionary comprehension, you can do this in a single line of code, as we'll see later.
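To make that concrete, here's a small sketch (added for illustration; the words list is hypothetical) comparing the loop-based approach with a comprehension:

words = ["apple", "banana", "cherry"]

# Without dictionary comprehension: an empty dict plus a for loop
lengths = {}
for word in words:
    lengths[word] = len(word)

# With dictionary comprehension: the same result in a single line
lengths = {word: len(word) for word in words}
print(lengths)  # {'apple': 5, 'banana': 6, 'cherry': 6}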
Intro to List Comprehension

Before we dive into dictionary comprehension, let's first understand list comprehension. List comprehension is a syntactic construct available in Python for creating a list from existing lists. It follows the form of the mathematical set-builder notation (set comprehension).
Here's an example:
# Without list comprehension
numbers = [1, 2, 3, 4, 5]
squares = []
for n in numbers:
    squares.append(n**2)
print(squares)  # [1, 4, 9, 16, 25]

# With list comprehension
numbers = [1, 2, 3, 4, 5]
squares = [n**2 for n in numbers]
print(squares)  # [1, 4, 9, 16, 25]

As you can see, list comprehension allows you to create lists in a very concise way.
Link: For a deeper dive into list comprehension, check out our guide, List Comprehensions in Python.
Converting List Comprehension to Dictionary Comprehension

Now that you understand list comprehension, converting it to dictionary comprehension is pretty straightforward. The main difference is that while list comprehension outputs a list, dictionary comprehension outputs a dictionary, obviously 😉.
To convert a list comprehension to a dictionary comprehension, you need to change the brackets [] to braces {}, and add a key before the colon :.
Let's see what this would look like:
# List comprehension
numbers = [1, 2, 3, 4, 5]
squares = [n**2 for n in numbers]
print(squares)  # [1, 4, 9, 16, 25]

# Dictionary comprehension
numbers = [1, 2, 3, 4, 5]
squares = {n: n**2 for n in numbers}
print(squares)  # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

In the dictionary comprehension, n is the key and n**2 is the value. The comprehension iterates over the numbers list, assigns each number to n, and then adds n as a key and n**2 as a value to the squares dictionary.
Simple Examples of Dictionary Comprehension

Dictionary comprehension in Python is an efficient way to create dictionaries. It's a concise syntax that reduces the amount of code you need to write. Let's start with a simple example.
# Creating a dictionary of squares for numbers from 0 to 5
squares = {num: num**2 for num in range(6)}
print(squares)

Output:
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

In this example, the expression num: num**2 is the key-value pair of the new dictionary. The for num in range(6) is the context of the dictionary comprehension, specifying the range of numbers to include in the dictionary.
Advanced Dictionary Comprehension

You can also use dictionary comprehension for more complex operations. Let's take a look at a case where we create a dictionary from a list of words, with the words as keys and their lengths as values.
words = ["Python", "comprehension", "dictionary", "example"]
word_lengths = {word: len(word) for word in words}
print(word_lengths)

Output:
{'Python': 6, 'comprehension': 13, 'dictionary': 10, 'example': 7}

The expression word: len(word) generates the key-value pairs. The for word in words provides the context, iterating over each word in the list.
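You can also add a condition to the comprehension to filter which pairs end up in the dictionary. Here's a small sketch (added for illustration, reusing the same words list) that keeps only the longer words:

words = ["Python", "comprehension", "dictionary", "example"]

# Keep only the words longer than 7 characters
long_words = {word: len(word) for word in words if len(word) > 7}
print(long_words)  # {'comprehension': 13, 'dictionary': 10}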
Conclusion

Dictionary comprehension in Python offers a concise and efficient way to create dictionaries. By understanding how to use it properly, you can write cleaner, more efficient code. As with any tool, the key to using this effectively is understanding its strengths and limitations.
Real Python: Get Started With Django: Build a Portfolio App
Django is a fully featured Python web framework that you can use to build complex web applications. In this tutorial, you’ll jump in and learn Django by completing an example project. You’ll follow the steps to create a fully functioning web application and, along the way, learn what some of the most important features of the framework are and how they work together.
In this tutorial, you’ll:
- Learn about the advantages of using Django
- Investigate the architecture of a Django site
- Set up a new Django project with multiple apps
- Build models and views
- Create and connect Django templates
- Upload images into your Django site
At the end of this tutorial, you’ll have a working portfolio website to showcase your projects. If you’re curious about how the final source code looks, then you can click the link below:
Get Your Code: Click here to download the Python source code for your Django portfolio project.
Learn Django

There are endless web development frameworks out there, so why should you learn Django over any of the others? First of all, it’s written in Python, one of the most readable and beginner-friendly programming languages out there.
Note: This tutorial assumes an intermediate knowledge of the Python language. If you’re new to programming with Python, then check out the Python Basics learning path or the introductory course.
The second reason you should learn Django is the scope of its features. When building a website, you don’t need to rely on any external libraries or packages if you choose Django. This means that you don’t need to learn how to use anything else, and the syntax is seamless because you’re using only one framework.
There’s also the added benefit that Django is straightforward to update, since the core functionality is in one package. If you do find yourself needing to add extra features, there are several external libraries that you can use to enhance your site.
One of the great things about the Django framework is its in-depth documentation. It has detailed documentation on every aspect of Django and also has great examples and even a tutorial to get you started.
There’s also a fantastic community of Django developers, so if you get stuck, there’s almost always a way forward by either checking the docs or asking the community.
Django is a high-level web application framework with loads of features. It’s great for anyone new to web development due to its fantastic documentation, and it’s especially great if you’re also familiar with Python.
Understand the Structure of a Django Website

A Django website consists of a single project that’s split into separate apps. The idea is that each app handles a self-contained task that the site needs to perform. As an example, imagine an application like Instagram. There are several different tasks that it needs to perform:
- User management: Logging in and out, registering, and so on
- The image feed: Uploading, editing, and displaying images
- Private messaging: Sending messages between users and providing notifications
These are each separate pieces of functionality, so if this example were a Django site, then each piece of functionality would be a different Django app inside a single Django project.
Note: A Django project contains at least one app. But even when there are more apps in the Django project, you commonly refer to a Django project as a web app.
The Django project holds some configurations that apply to the project as a whole, such as project settings, URLs, shared templates and static files. Each application can have its own database, and it’ll have its own functions to control how it displays data to the user in HTML templates.
Each application also has its own URLs as well as its own HTML templates and static files, such as JavaScript and CSS.
Django apps are structured so that there’s a separation of logic. It supports the model-view-controller pattern, which is the architecture for most web frameworks. The basic principle is that each application includes three separate files that handle the three main pieces of logic separately:
- Model defines the data structure. This is usually the database description and often the base layer to an application.
- View displays some or all of the data to the user with HTML and CSS.
- Controller handles how the database and the view interact.
If you want to learn more about the MVC pattern, then check out Model-View-Controller (MVC) Explained – With Legos.
In Django, the architecture is slightly different. Although it’s based on the MVC pattern, Django handles the controller part itself. There’s no need to define how the database and views interact. It’s all done for you!
The pattern Django utilizes is called the model-view-template (MVT) pattern. All you need to do is add some URL configurations that the views map to, and Django handles the rest!
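To make the MVT flow a bit more concrete, here is a rough sketch (not taken from the tutorial; the model, view, template path, and URL names are hypothetical) of how the pieces typically fit together inside a Django app:

# models.py - the Model: defines the data structure
from django.db import models

class Project(models.Model):
    title = models.CharField(max_length=100)
    description = models.TextField()

# views.py - the View: fetches data and hands it to a template
from django.shortcuts import render

def project_index(request):
    projects = Project.objects.all()
    return render(request, "projects/index.html", {"projects": projects})

# urls.py - the URL configuration that maps a path to the view
from django.urls import path

urlpatterns = [
    path("projects/", project_index, name="project_index"),
]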
Read the full article at https://realpython.com/get-started-with-django-1/ »
Mike Driscoll: An Intro to Protocol Buffers with Python
Protocol buffers are a data serialization format that is language agnostic. They are analogous to Python’s own pickle format, but one of the advantages of protocol buffers is that they can be used by multiple programming languages.
For example, Protocol buffers are supported in C++, C#, Dart, Go, Java, Kotlin, Objective-C, PHP, Ruby, and more in addition to Python. The biggest con for Protocol buffers is that far too often, the versions have changes that are not backward compatible.
In this article, you will learn how to do the following:
- Creating a Protocol format
- Compiling Your Protocol Buffers File
- Writing Messages
- Reading Messages
Let’s get started!
Creating a Protocol Format

You’ll need your own file to create your application using protocol buffers. For this project, you will create a way to store music albums using the Protocol Buffer format.
Create a new file named music.proto and enter the following into the file:
syntax = "proto2";

package music;

message Music {
  optional string artist_name = 1;
  optional string album = 2;
  optional int32 year = 3;
  optional string genre = 4;
}

message Library {
  repeated Music albums = 1;
}

The first line in this code is your syntax declaration, "proto2". Next is your package declaration, which is used to prevent name collisions.
The rest of the code is made up of message definitions. These are groups of typed fields. There are quite a few differing types that you may use, including bool, int32, float, double, and string.
You can set the fields to optional, repeated, or required. According to the documentation, you rarely want to use required because there’s no way to unset that. In fact, in proto3, required is no longer supported.
Compiling Your Protocol Buffers File

To be able to use your Protocol Buffer in Python, you will need to compile it using a program called protoc. You can get it here. Be sure to follow the instructions in the README file to get it installed successfully.
Now run protoc against your proto file, like this:
protoc --python_out=. .\music.proto --proto_path=.

The command above will convert your proto file to Python code in your current working directory. This new file is named music_pb2.py.
Here are the contents of that file:
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: music.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0bmusic.proto\x12\x05music\"H\n\x05Music\x12\x13\n\x0b\x61rtist_name\x18\x01 \x01(\t\x12\r\n\x05\x61lbum\x18\x02 \x01(\t\x12\x0c\n\x04year\x18\x03 \x01(\x05\x12\r\n\x05genre\x18\x04 \x01(\t\"\'\n\x07Library\x12\x1c\n\x06\x61lbums\x18\x01 \x03(\x0b\x32\x0c.music.Music')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'music_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  DESCRIPTOR._options = None
  _globals['_MUSIC']._serialized_start=22
  _globals['_MUSIC']._serialized_end=94
  _globals['_LIBRARY']._serialized_start=96
  _globals['_LIBRARY']._serialized_end=135
# @@protoc_insertion_point(module_scope)

There’s a lot of magic here that uses descriptors to generate classes for you. Don’t worry about how it works; the documentation doesn’t explain it well either.
Now that your Protocol Buffer is transformed into Python code, you can start serializing data!
Writing Messages

To start serializing your data with Protocol Buffers, you must create a new Python file.
Name your file music_writer.py and enter the following code:
from pathlib import Path

import music_pb2


def overwrite(path):
    write_or_append = "a"
    while True:
        answer = input(f"Do you want to overwrite '{path}' (Y/N) ?").lower()
        if answer not in "yn":
            print("Y or N are the only valid answers")
            continue
        write_or_append = "w" if answer == "y" else "a"
        break
    return write_or_append


def music_data(path):
    p = Path(path)
    write_or_append = "w"
    if p.exists():
        write_or_append = overwrite(path)

    library = music_pb2.Library()

    while True:
        print("Let's add some music!\n")
        # Add a new Music entry to the library for each album entered
        new_music = library.albums.add()
        new_music.artist_name = input("What is the artist name? ")
        new_music.album = input("What is the name of the album? ")
        new_music.year = int(input("Which year did the album come out? "))
        more = input("Do you want to add more music? (Y/N)").lower()
        if more == "n":
            break

    with open(p, f"{write_or_append}b") as f:
        f.write(library.SerializeToString())
    print(f"Music library written to {p.resolve()}")


if __name__ == "__main__":
    music_data("music.pro")

The meat of this program is in your music_data() function. Here you check if the user wants to overwrite their music file or append to it. Then, you create a Library object and prompt the user for the data they want to serialize.
For this example, you ask the user to enter an artist’s name, album, and year. You omit the genre for now, but you can add that yourself if you’d like to.
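If you do want to capture the genre as well, a minimal, hypothetical addition inside the input loop might look like this (the genre field is already declared in music.proto above):

# Hypothetical extra prompt; genre is already a string field in the Music message
new_music.genre = input("What is the genre? ")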
After they enter the year of the album, you ask the user if they would like to add another album. The application will write the data to disk and end if they don’t want to continue adding music.
Here is an example writing session:
Let's add some music!

What is the artist name? Zahna
What is the name of the album? Stronger Than Death
Which year did the album come out? 2023
Do you want to add more music? (Y/N)Y
Let's add some music!

What is the artist name? KB
What is the name of the album? His Glory Alone II
Which year did the album come out? 2023
Do you want to add more music? (Y/N)N

Now, let's learn how to read your data!
Reading Messages

Now that you have your Protocol Buffer Messages written to disk, you need a way to read them.
Create a new file named music_reader.py and enter the following code:
from pathlib import Path

import music_pb2


def list_music(music):
    for album in music.albums:
        print(f"Artist: {album.artist_name}")
        print(f"Album: {album.album}")
        print(f"Year: {album.year}")
        print()


def main(path):
    p = Path(path)
    library = music_pb2.Library()
    with open(p.resolve(), "rb") as f:
        library.ParseFromString(f.read())
    list_music(library)


if __name__ == "__main__":
    main("music.pro")

This code is a little simpler than the writing code was. Once again, you create an instance of the Library class. This time, you loop over the library albums and print out all the data in the file.
If you had added the music in this tutorial to your Protocol Buffers file, when you run your reader, the output would look like this:
Artist: Zahna
Album: Stronger Than Death
Year: 2023

Artist: KB
Album: His Glory Alone II
Year: 2023

Give it a try with your custom music file!
Wrapping Up

Now you know the basics of working with Protocol Buffers using Python. Specifically, you learned about the following:
- Creating a Protocol format
- Compiling Your Protocol Buffers File
- Writing Messages
- Reading Messages
Now is your chance to consider using this type of data serialization in your application. Once you have a plan, give it a try! Protocol buffers are a great way to create data in a programming language-agnostic way.
Further Reading

The post An Intro to Protocol Buffers with Python appeared first on Mouse Vs Python.
EuroPython: EuroPython September 2023 Newsletter
Hello there and welcome to the post conference newsletter! We really hope you enjoyed EuroPython 2023 cause we sure did and are still recovering from all the fun and excitement. 😊
We have some updates to share with you and also wanted to use this newsletter to nostalgically look back at all the good times 🙌 we had in Prague just a month ago. Surrounded by old friends and new in the beautiful city of Prague, EuroPython 2023 was special for a lot of us 🤗 and the community, so we want to highlight some of those experiences!! So without further ado let’s get into the updates 🐍
EuroPython Society

The EPS board is working with our accountant and auditor to get our financial reports in order in the next couple of weeks. As soon as that is finalised, we will be excited to call for the next Annual General Assembly (GA); the actual GA will be held at least 14 days after our formal notice.
General Assembly is a great opportunity to hear about EuroPython Society's developments and updates in the last year & a new board will also be elected at the end of the GA.
All EPS members are invited to attend the GA and have voting rights. Find out how to sign up to become an EPS member for free here: https://www.europython-society.org/application/
More about the EPS Board

The EPS board is made up of up to 9 directors (including 1 chair and 1 vice chair); the board runs the day-to-day business of the EuroPython Society, including running the EuroPython conference series, and supports the community through various initiatives such as our grants programme. The board collectively takes up the fiscal and legal responsibility of the Society.
At the moment, running the annual EuroPython conference is a major task for the EPS. As such, the board members are expected to invest significant time and effort towards overseeing the smooth execution of the conference, ranging from venue selection, contract negotiations, and budgeting, to volunteer management. Every board member has the duty to support one or more EuroPython teams to facilitate decision-making and knowledge transfer.
In addition, the Society prioritises building a close relationship with local communities. Board members should not only be passionate about the Python community but have a high-level vision and plan for how the EPS could best serve the community.
How can you become an EPS 2024 board member?

Any EPS member can nominate themselves for the EPS 2024 board. Nominations will be published prior to the GA.
Though the formal deadline for self-nomination is at the GA, it is recommended that you send in yours as early as possible (yes, now is a good time!) to board@europython.eu. We look forward to your email :)
& for more information check out our Call for Board Candidates!
EPS 2023 General Assembly - Call for Board Candidates

"It feels like yesterday that many of us were together in Prague or online for EuroPython 2023. Each year, the current board of the EuroPython Society (EPS) holds a General Assembly (GA). It is a precious opportunity for all our members to get together annually, and reflect on the learnings..." (EuroPython Society, Raquel Dou)

Conference Numbers

With 142 Talks, 22 Tutorials, 10 Special events, 5 Keynotes, 3 panel discussions happening throughout the week, our “learn more about this” bookmarks list/backlog reached new heights this year! If you know what I mean 😉
Let's take a closer look at our stats, if you too are into that kinda thing.
Thank you Volunteers & Sponsors <3

Year after year EuroPython shines because of the hard work of our amazing team of volunteers.
But beyond the logistics and the schedules, it's your smiles, your enthusiasm, and your genuine willingness to go the extra mile that truly made EuroPython 2023 special. Your efforts have not only fostered a sense of belonging among first time attendees but also exemplified the power of community and collaboration that lies at the heart of this conference.
Once again, thank you for being the backbone of EuroPython, for your dedication, and for showing the world yet again why people who come for the Python language end up staying for the amazing community :)
https://ep2023.europython.eu/thankyou
And a special thank you to all of the Sponsors for all of their support!
Thank you Sponsors 🥳

Conference Photos & Videos

The official conference photos are up on Flickr! Do not forget to tag us when you share your favourite clicks on your socials 😉.
https://ep2023.europython.eu/photos
We know how much you would love to see and share videos of some amazing talks and keynotes we had during the conference. Rest assured we are working with our AV team to have videos edited and ready in a month or so. Stay tuned for that.
In the meantime, if you want to revisit a talk you missed or just want to check out a talk again, all the live streams from across the conference days are still available on our page
https://ep2023.europython.eu/live/forum
We also have some really sweet highlight videos featuring the amazing humans of EuroPython! Check it out on Youtube.
Community write-ups

It warms our hearts to see posts from the community about their experience and stories this year! Here are some of them, please feel free to share yours by tagging us on socials @europython or mailing us at news@europython.eu
- Aleksandra Golofaeva on LinkedIn: "About my experience with EuroPython 2023 in Prague! EuroPython conference is the oldest and most significant event of its kind across Europe. As a newcomer to…"
- Sena S. on LinkedIn: "I do love pythons, how did you guess that 🤔🤭 TL;DR: pythonista shares her own EuroPython 2023 experience from her perspective. EuroPython 2023 happened at…"
- https://sof.dog/europython2023
- Weekly Report, EuroPython 2023 - Łukasz Langa (lukasz.langa.pl): "Our new Security Developer in Residence is out and about, and publishes weekly updates on what he's up to. That inspires me to resume doing the equivalent of those updates. And what better opportunity to do that than on the heels of EuroPython 2023!"

Mariia Lukash wrote to us saying:
"I wanted to express my sincere gratitude for providing me with the opportunity to attend EuroPython 2023 remotely and free of charge. The conference was truly exceptional! The speakers were incredible, and their presentations were both informative and inspiring. I learned so much from each session. This being my first-ever conference experience, I never imagined it would be so captivating and enlightening. Moreover, I was particularly impressed by the sense of community that was evident throughout the event. Once again, thank you for this incredible opportunity. I am truly grateful for the experience, and if the chance arises, I would be delighted to attend future events organized by EuroPython."

Messages like these warm our hearts and push us to do better for the 🐍 community every single year ❤️
Code of Conduct

The Code of Conduct Transparency Report is now published on our website:
https://www.europython-society.org/europython-2023-transparency-report/
EuroPython might be over, but fret not: there are a bunch more amazing Python conferences happening!!!
- PyCon CZ https://cz.pycon.org/2023/ 🇨🇿
- PyCon EE https://pycon.ee/ 🇪🇪
- PyCon PT https://2023.pycon.pt/ 🇵🇹
- PyCon UK https://2023.pyconuk.org/ 🇬🇧
- PyCon ES https://2023.es.pycon.org/ 🇪🇸
- PyCon SE https://www.pycon.se/ 🇸🇪
- PyCon IE https://python.ie/pycon-2023 🇮🇪
- PyLadiesCon https://conference.pyladies.com/ 🌎
- Swiss Python Summit https://www.python-summit.ch/ 🇨🇭
Add your own jokes to PyJokes (a project invented at a EuroPython sprint) via this issue: https://github.com/pyjokes/pyjokes/issues/10
Stack Abuse: Check if an Object has an Attribute in Python
In Python, everything is an object, and each object has attributes. These attributes can be methods, variables, data types, etc. But how do we know what attribute an object has?
In this Byte, we'll discuss why it's important to check for attributes in Python objects, and how to do so. We'll also touch on the AttributeError and how to handle it.
Why Check for Attributes?

Attributes are integral to Python objects as they define the characteristics and actions that an object can perform. However, not all objects have the same set of attributes. Attempting to access an attribute that an object does not have will raise an AttributeError. This is where checking for an attribute before accessing it becomes crucial. It helps to ensure that your code is robust and less prone to runtime errors.
The AttributeError in Python

AttributeError is a built-in exception in Python that is raised when you try to access or call an attribute that an object does not have. Here's a simple example:
class TestClass:
    def __init__(self):
        self.x = 10

test_obj = TestClass()
print(test_obj.y)

The above code will raise an AttributeError because the object test_obj does not have an attribute y. The output will be:
AttributeError: 'TestClass' object has no attribute 'y'

This error can be avoided by checking if an object has a certain attribute before trying to access it.
How to Check if an Object has an Attribute

Python provides a couple of ways to check if an object has a specific attribute. One way is to use the built-in hasattr() function, and the other is to use a try/except block.
Using hasattr() Function

The simplest way to check if an object has a specific attribute in Python is by using the built-in hasattr() function. This function takes two parameters: the object and the name of the attribute you want to check (in string format), and returns True if the attribute exists, False otherwise.
Here's how you can use hasattr():
class MyClass:
    def __init__(self):
        self.my_attribute = 42

my_instance = MyClass()

print(hasattr(my_instance, 'my_attribute'))  # Output: True
print(hasattr(my_instance, 'non_existent_attribute'))  # Output: False

In the above example, hasattr(my_instance, 'my_attribute') returns True because my_attribute is indeed an attribute of my_instance. On the other hand, hasattr(my_instance, 'non_existent_attribute') returns False because non_existent_attribute is not an attribute of my_instance.
Using try/except Block

Another way to check for an attribute is by using a try/except block. You can attempt to access the attribute within the try block. If the attribute does not exist, Python will raise an AttributeError which you can catch in the except block.
Here's an example:
class MyClass:
    def __init__(self):
        self.my_attribute = 42

my_instance = MyClass()

try:
    my_instance.my_attribute
    print("Attribute exists!")
except AttributeError:
    print("Attribute does not exist!")

In this example, if my_attribute exists, the code within the try block will execute without any issues and "Attribute exists!" will be printed. If my_attribute does not exist, an AttributeError will be raised and "Attribute does not exist!" will be printed.
Note: While this method works, it is generally not recommended to use exceptions for flow control in Python. Exceptions should be used for exceptional cases, not for regular conditional checks.
Checking for Multiple Attributes

If you need to check for multiple attributes, you can simply use hasattr() multiple times. However, if you want to check if an object has all or any of a list of attributes, you can use the built-in all() or any() function in combination with hasattr().
Here's an example:
class MyClass:
    def __init__(self):
        self.attr1 = 42
        self.attr2 = 'Hello'
        self.attr3 = None

my_instance = MyClass()

attributes = ['attr1', 'attr2', 'attr3', 'non_existent_attribute']

print(all(hasattr(my_instance, attr) for attr in attributes))  # Output: False
print(any(hasattr(my_instance, attr) for attr in attributes))  # Output: True

In this code, all(hasattr(my_instance, attr) for attr in attributes) returns False because not all attributes in the list exist in my_instance. However, any(hasattr(my_instance, attr) for attr in attributes) returns True because at least one attribute in the list exists in my_instance.
Conclusion

In this Byte, we've explored different ways to check if an object has a specific attribute in Python. We've learned how to use the hasattr() function, how to use a try/except block to catch AttributeError, and how to check for multiple attributes using all() or any().
Qt for MCUs 2.5.1 LTS Released
Qt for MCUs 2.5.1 LTS (Long-Term Support) has been released and is available for download. As a patch release, Qt for MCUs 2.5.1 LTS provides bug fixes and other improvements, and maintains source compatibility with Qt for MCUs 2.5.x. It does not add any new functionality.
Python People: Naomi Ceder
Naomi is an elected fellow of the PSF, and has served as chair of its board of directors.
Topics:
- Building replacement leadership for every endeavor you start
- What the PSF board does
- Keeping Python's growth in increasing diversity
- Learning foreign languages
- PyCon Charlas
- London
- Guitar and music
- The Quick Python Book
- Community building
- Retiring