Planet Python

Updated: 15 hours 30 min ago

Dockerizing Django with Postgres, Gunicorn, and Traefik

Thu, 2023-02-16 17:28
This tutorial details how to configure Django to run on Docker along with Postgres, Gunicorn, Traefik, and Let's Encrypt.
Categories: FLOSS Project Planets

Philippe Normand: WebRTC in WebKitGTK and WPE, status updates, part I

Thu, 2023-02-16 15:30

Some time ago we at Igalia embarked on the journey to ship a GStreamer-powered WebRTC backend. This is a long journey, it is not over, but we made some progress …

Categories: FLOSS Project Planets

Mike Driscoll: The Basics of Python Live Course

Thu, 2023-02-16 08:20

Have you wanted to learn Python but haven’t been able to get started? Perhaps you did start and then got stuck.

That’s okay! Mike Driscoll is giving a new Basics of Python live course on Lighthall. There will be approximately four hours of content in this course.

Register Now!

You will learn about the following topics:

  • The REPL
  • Strings
  • Lists
  • Tuples
  • Dictionaries
  • Sets
  • Booleans and None
  • Conditional statements
  • Loops
  • Comprehensions
  • Exception handling
  • Basic File handling
  • Importing
  • Functions
  • Classes

Once you understand the basic building blocks of Python, you will be able to level up much more easily.

This course costs $50 USD.

What you’ll get:

  • Access to live chat
  • Get your questions answered
  • Meet fellow learners in the community
  • All sessions are recorded and you’ll have access to the recordings


Register Now!

The post The Basics of Python Live Course appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Kushal Das: SamNet Vinterkonferensen 2023

Thu, 2023-02-16 07:03

This Tuesday I attended SamNet Vinterkonferensen, jointly organized by ISOC-SE, SNUS, and DFRI, focusing on technology, the internet, privacy, and decentralization. The organizers cautioned me beforehand that the whole conference would be in Swedish :)

The venue was Internetstiftelsen, which is already one of my favorite small conference venues in Stockholm (we have held many open spaces there over the last year).

After the morning coffee and breakfast, the day started with a talk about "blockchain", which felt more like a 2015 version of the presentation :) After that came a very good, detailed description of IPv6 and its adoption. The third talk was on DNS, from Mikael Kullberg. This presentation was a perfect mix of technical details and fun :)

After the fika break, there was another government talk, about e-identification. And it broke my brain. The level of Swedish was too much, and my brain refused to do any real-time translation/understanding of Swedish afterward. So, I spent the time in the lobby talking to people and writing some code.

The second half started with Tobias Pulls talking about his work on anonymity and the Tor network. There were a few slides with detailed graphs, and I had difficulty understanding them, though Pulls mentioned beforehand that he had worked hard to get all the English terms translated into Swedish. Next, MC took the stage to talk about Tillitis.

I spent the last part of the day listening to folks discussing different DNS/packets/anonymity-related topics.

My goal was to meet more people and listen to more technical discussions in Swedish. So, I count the conference a success :)

Categories: FLOSS Project Planets

Reuven Lerner: Sharpen your Pandas skills with “Bamboo Weekly”

Thu, 2023-02-16 06:50

Data is the future. Heck, you could make a compelling argument that it’s the present.

If you know how to work with data, then your career is virtually assured. You’ll have your pick of cool jobs, interesting projects, good employers, and smart colleagues.

Even better? The most popular language for working with data is Python. Which means that Python skills give you an edge in getting such work.

But Python’s built-in data structures are too big, slow, and clunky for working with data. They’re just not the right tool for the job.

Fortunately, we have Pandas. This library can do it all — importing, exporting, cleaning, and analyzing data.

Pandas is fast, flexible, and powerful. The fact that it’s written in Python means that you can combine it with your own functions and classes, or any of the 400,000+ packages on PyPI. It’s no surprise that companies are switching away from Excel and Matlab, in favor of Pandas.

However, Pandas is a really big, complex library. Mastering it can take years, because there’s just so much to learn and remember.

I’ve been teaching Pandas for years — and I’m constantly learning new ways to do things, often from my students. I’m always reaching for the documentation, because there’s no way to remember all of the methods and options.

If you’ve ever learned a foreign language, then you know the only way to fluency is constant practice. Even when you’re fluent, you need practice to keep your skills sharp.

In the same way, the only way to improve your Pandas skills is constant practice.

But where can you get such practice? And more importantly, where can you get practice with real-world data, on relevant topics?

I’ve always included such real-world data sets in my courses. Whether it’s my corporate training, my online Pandas course, my Pandas Workout book, or even my data bootcamp, I use interesting data sets that we can relate to, and ask questions that are likely to come up in data-analysis projects.

But a course can only cover so much content. And besides, a lot of learning happens over time, as the ideas drip-drip-drip into your brain, helping you to gain fluency.

That’s why I’m excited to announce my newest product, Bamboo Weekly.

Bamboo Weekly is all about improving your Pandas fluency, one week at a time:

  • Every Wednesday, I’ll pose questions having to do with current events. I’ll point you to a public data set you can use to answer those questions.
  • On Thursday, I’ll share my answer with you, helping you to improve your Pandas skills. I’ll go through it in my usual, detailed style, explaining what I’m doing and why I’m doing it that way.

Over time, Bamboo Weekly will sharpen your skills, so that you can get an amazing data-related job — or just write more efficient, idiomatic, debuggable Pandas.

I’ve been publishing Bamboo Weekly for a few weeks already, and the content is currently 100% free of charge, in the archive.

I’ll continue to publish some free editions. But the majority of the issues, as well as the accompanying discussion, will be limited to paid subscribers.

Meanwhile, take a look! This week’s problem is all about earthquakes, analyzing data from the horrific natural disaster that took place on February 6th in Turkey and Syria. I hope that you’ll find the data, and its analysis, as interesting as I did.

Please join me at

Questions? Comments? Thoughts? Just e-mail me at, or at @reuvenmlerner on Twitter.

The post Sharpen your Pandas skills with “Bamboo Weekly” appeared first on Reuven Lerner.

Categories: FLOSS Project Planets

Codementor: Declaration, Definition, and Invocation in the functions of the C programming language

Thu, 2023-02-16 00:27
A continuation of the fundamentals of C programming series.
Categories: FLOSS Project Planets

PyCon: PyCon US 2023 Schedule Launch!

Wed, 2023-02-15 11:23

The 2023 PyCon US schedule has been announced! With two decades of this event series under our belt, this year's conference is sure to be a memorable one. We’re looking forward to celebrating 20 years of gathering together to connect the worldwide Python community!

Here you can now access the dates of our Tutorials, Talk Tracks, Charlas Track, and Sponsor Presentations. Posters will be featured for the second year in a row and displayed during all open hours of the Expo Hall as well as alongside presentations by the authors on Sunday, April 23, 2023. Altogether, there will be a wide array of topics that we hope experienced and newer Python users will find engaging.

Many thanks to all of those that submitted proposals this year! The schedule would not be the same without all your hard work. 

Thank You Committees and Reviewers!

Without the efforts of the committee members that volunteered their time and hard work in launching the PyCon US Call for Proposals, this would not have been possible. Thank you to all of our committee members and reviewers!

Their commitment to managing the process of preparing for CFPs to launch and managing the review process began over 6 months ago. We truly could not have accomplished the final result of launching the schedule today without each of them.

  • Tutorial Committee: Sarah Kuchinsky, Merilys Huhn, Mridul Seth, Mfon Eti-mfon
  • Program Committee: Philippe Gagnon
  • Charlas Committee: Denny Perez, Cristián Maureira-Fredes, Adolfo Fitoria, Arturo Pacheco, Alison Orellana
  • Poster Committee: Kristen McIntyre

In addition, we want to send a huge thank you to the numerous volunteers that reviewed each of the submissions and worked long hours making sure PyCon US has a great line-up. The conference would not take place without the support and time of these volunteers.

Tutorial, Summit, Sponsor Presentation & Event Registration

Registration is now open for Tutorials, Sponsor Presentations, Summits, Mentored Sprints for Diverse Beginners, and the PyLadies Auction.

Be sure to register for the conference here if you have not already. And keep in mind that Tutorials sell out quickly! So if you are planning to attend a Tutorial, be sure to register early. 

NOTE: Please be sure to hit “Check out and Pay” when registering for Tutorials. If you do not complete your invoice and a Tutorial sells out, it will be removed from your cart and you will no longer be able to reserve a spot for that session. 

Celebrating 20 Years of PyCon US!

To commemorate this special anniversary year, we would like to create a video to celebrate the Pythonistas who make every PyCon US possible. 

Whether you're attending PyCon US for the first time ever or for the 20th time, we'd like for you to be a part of the video! Head to this form and submit your favorite memories and stories of PyCon US or what you hope to learn during your time here. 

We'll create a short video from your submissions which will be uploaded to PyCon US's YouTube Channel. We will also stream it on our online platform, and show it in person at PyCon US 2023. 

See you all soon in Salt Lake City!

Categories: FLOSS Project Planets

Real Python: How to Flush the Output of the Python Print Function

Wed, 2023-02-15 09:00

Do you want to build a compact visual progress indicator for your Python script using print(), but your output doesn’t show up when you’d expect it to? Or are you piping the logs of your script to another application, but you can’t manage to access them in real time? In both cases, data buffering is the culprit, and you can solve your troubles by flushing the output of print().

In this tutorial, you’ll learn how to:

  • Flush the output data buffer explicitly using the flush parameter of print()
  • Change data buffering for a single function, the whole script, and even your entire Python environment
  • Determine when you need to flush the data buffer explicitly and when that isn’t necessary
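
As a preview of the first item, the flush parameter, here is a minimal sketch. The countdown helper is our own illustrative example, not code from the tutorial:

```python
from time import sleep

def countdown(n):
    # flush=True pushes each number through the buffer immediately,
    # so it shows up in real time even when output is block-buffered.
    for second in range(n, 0, -1):
        print(second, flush=True)
        sleep(0.1)
    print("Go!", flush=True)

countdown(3)
```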

By repeatedly running a short code snippet that you change only slightly, you’ll see that if you run print() with its default arguments, then its execution is line-buffered in interactive mode, and block-buffered otherwise.

You’ll get a feel for what all of that means by exploring the code practically. But before you dive into changing output stream buffering in Python, it’s helpful to revisit how it happens by default, and understand why you might want to change it.

Free Sample Code: Click here to download the free sample code that you’ll use to dive deep into flushing the output of the Python print function.

Understand How Python Buffers Output

When you make a write call to a file-like object, Python buffers the call by default—and that’s a good idea! Disk write and read operations are slow in comparison to random-access memory (RAM) access. When your script makes fewer system calls for write operations by batching characters in a RAM data buffer and writing them all at once to disk with a single system call, then you can save a lot of time.

To put the use case for buffering into a real-world context, think of traffic lights as buffers for car traffic. If every car crossed an intersection immediately upon arrival, it would end in gridlock. That’s why the traffic lights buffer traffic from one direction while the other direction flushes.

Note: Data buffers are generally size-based, not time-based, which is where the traffic analogy breaks down. In the context of a data buffer, the traffic lights would switch if a certain number of cars were queued up and waiting.

However, there are situations when you don’t want to wait for a data buffer to fill up before it flushes. Imagine that there’s an ambulance that needs to get past the crossroads as quickly as possible. You don’t want it to wait at the traffic lights until there’s a certain number of cars queued up.

In your program, you usually want to flush the data buffer right away when you need real-time feedback on code that has executed. Here are a couple of use cases for immediate flushing:

  • Instant feedback: In an interactive environment, such as a Python REPL or a situation where your Python script writes to a terminal

  • File monitoring: In a situation where you’re writing to a file-like object, and the output of the write operation gets read by another program while your script is still executing—for example, when you’re monitoring a log file

In both cases, you need to read the generated output as soon as it generates, and not only when enough output has assembled to flush the data buffer.

There are many situations where buffering is helpful, and there are some situations where too much buffering can be a disadvantage. Therefore, there are different types of data buffering that you can implement where they fit best:

  • Unbuffered means that there’s no data buffer. Every byte creates a new system call and gets written independently.

  • Line-buffered means that there’s a data buffer that collects information in memory, and once it encounters a newline character (\n), the data buffer flushes and writes the whole line in one system call.

  • Fully-buffered (block-buffered) means that there’s a data buffer of a specific size, which collects all the information that you want to write. Once it’s full, it flushes and sends all its contents onward in a single system call.
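
These modes can be selected explicitly through the buffering parameter of open(). A minimal sketch, with temporary-file handling of our own choosing:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# buffering=1 selects line buffering (text mode only):
# each newline character triggers a flush to disk.
writer = open(path, "w", buffering=1)
writer.write("first line\n")

# A second handle already sees the line, before writer is closed.
with open(path) as reader:
    print(reader.read())  # first line

writer.close()
os.remove(path)
```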

Python uses block buffering as a default when writing to file-like objects. However, it executes line-buffered if you’re writing to an interactive environment.
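
Whether the interactive behavior applies comes down to whether standard output is connected to a terminal, which you can check at runtime. A small sketch:

```python
import sys

# sys.stdout.isatty() reports whether output goes to a terminal.
# Terminal: line-buffered by default. Pipe or file: block-buffered.
if sys.stdout.isatty():
    print("stdout goes to a terminal (line-buffered by default)")
else:
    print("stdout is redirected (block-buffered by default)")
```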

To better understand what that means, write a Python script that simulates a countdown:

from time import sleep

for second in range(3, 0, -1):
    print(second)
    sleep(1)
print("Go!")

By default, each number shows up right when print() is called in the script. But as you develop and tweak your countdown timer, you might run into a situation where all your output gets buffered. Buffering the whole countdown and printing it all at once when the script finishes would lead to a lot of confusion for the athletes waiting at the start line!

So how can you make sure that you won’t run into data buffering issues as you develop your Python script?

Add a Newline for Python to Flush Print Output

If you’re running a code snippet in a Python REPL or executing it as a script directly with your Python interpreter, then you won’t run into any issues with the script shown above.

In an interactive environment, the standard output stream is line-buffered. This is the output stream that print() writes to by default. You’re working with an interactive environment any time that your output will display in a terminal. In this case, the data buffer flushes automatically when it encounters a newline character ("\n"):

When interactive, the stdout stream is line-buffered. (Source)

Read the full article at »


Categories: FLOSS Project Planets

Python for Beginners: Convert Python Dictionary to XML String or File

Wed, 2023-02-15 09:00

XML files are used to store and transmit data between software systems. While developing software in Python, you might need to convert a Python object to XML format for storing or transmitting data. This article discusses how to convert a Python dictionary into an XML string or file.

Table of Contents
  1. What is XML File Format?
  2. Python Dictionary to XML Using the dicttoxml Module
  3. Convert Python Dictionary to XML Using the xmltodict Module
  4. Dictionary to XML With Pretty Printing in Python
  5. Convert Dictionary to XML File in Python
    1. Dictionary to File Using XML Strings in Python
    2. Convert Dictionary Directly to XML File in Python
  6. Conclusion
What is XML File Format?

XML (Extensible Markup Language) is a markup language used for encoding and structuring data in a format that is both human-readable and machine-readable. It provides a flexible way to describe and store data in a form that can be easily exchanged between different computer systems and applications.

  • XML is a self-describing language, meaning that the structure of the data is defined by tags within the document, allowing for the creation of custom markup languages for specific purposes.
  • It is commonly used for exchanging data over the internet, for example in the form of web services, and for storing data in a format that can be easily processed by applications.
  • In an XML document, the data is stored between tags in a hierarchical structure, similar to an HTML document. Each tag can contain other tags, allowing for the creation of complex data structures. The tag names and their structure can be defined by the user, allowing for customization and flexibility in representing the data.

The syntax for storing data in XML format is as follows.

<field_name> value </field_name>

A field can have one or more fields inside itself. For instance, consider the following example.

<?xml version="1.0"?>
<employee>
    <name>John Doe</name>
    <age>35</age>
    <job>
        <title>Software Engineer</title>
        <department>IT</department>
        <years_of_experience>10</years_of_experience>
    </job>
    <address>
        <street>123 Main St.</street>
        <city>San Francisco</city>
        <state>CA</state>
        <zip>94102</zip>
    </address>
</employee>

This is an example of an XML document that contains information about an employee.

  • The document starts with the declaration “<?xml version="1.0"?>” which specifies the version of XML used in the document.
  • The root element of the document is the "employee" element, which contains all the other elements in the document. Within the "employee" element, there are four sub-elements: "name", "age", "job", and "address".
  • The "name" element contains the name of the employee, which is "John Doe". The "age" element contains the employee’s age, which is 35.
  • The "job" element contains information about the employee’s job, including the job title, department, and years of experience.
  • The "address" element contains information about the employee’s address. It has four sub-elements: "street", "city", "state", and "zip".

The corresponding python dictionary for the above file is as follows.

{"employee": {
    'name': 'John Doe',
    'age': '35',
    'job': {
        'title': 'Software Engineer',
        'department': 'IT',
        'years_of_experience': '10'
    },
    'address': {
        'street': '123 Main St.',
        'city': 'San Francisco',
        'state': 'CA',
        'zip': '94102'
    }
}}

Now, we will discuss different ways to convert a Python Dictionary to XML format in Python. 
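
As a point of reference, the standard library alone can handle a basic conversion with xml.etree.ElementTree. The helper name dict_to_element below is our own, illustrative choice:

```python
import xml.etree.ElementTree as ET

def dict_to_element(tag, data):
    # Recursively turn nested dictionaries into an Element tree.
    element = ET.Element(tag)
    if isinstance(data, dict):
        for key, value in data.items():
            element.append(dict_to_element(key, value))
    else:
        element.text = str(data)
    return element

root = dict_to_element("employee", {"name": "John Doe", "age": "35"})
print(ET.tostring(root, encoding="unicode"))
# <employee><name>John Doe</name><age>35</age></employee>
```

The third-party modules below add conveniences on top of this, such as type attributes and pretty printing.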

For this, we will use two modules, namely dicttoxml and dict2xml. Both work in almost the same manner.

You can install the modules using PIP as shown below.

pip3 install dicttoxml dict2xml

If the pip command points to your Python 3 installation, you can also use the following command to install the modules.

pip install dicttoxml dict2xml

We will also use the xmltodict module to convert a dictionary to XML format.

Python Dictionary to XML Using the dicttoxml Module

The dicttoxml module provides us with the dicttoxml() function that we can use to convert a dictionary to an XML string. The dicttoxml() function takes a python dictionary as its input argument and returns an XML string as shown below.

import dicttoxml

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
print("The python dictionary is:")
print(python_dict)
xml_string = dicttoxml.dicttoxml(python_dict)
print("The XML string is:")
print(xml_string)


The python dictionary is:
{'employee': {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
The XML string is:
b'<?xml version="1.0" encoding="UTF-8" ?><root><employee type="dict"><name type="str">John Doe</name><age type="str">35</age><job type="dict"><title type="str">Software Engineer</title><department type="str">IT</department><years_of_experience type="str">10</years_of_experience></job><address type="dict"><street type="str">123 Main St.</street><city type="str">San Francisco</city><state type="str">CA</state><zip type="str">94102</zip></address></employee></root>'

In this example, we first created a python dictionary. Then, we used the dicttoxml() function defined in the dicttoxml module to convert the dictionary to an XML string.

In the above output, you can observe that each item in the XML format also contains the data type of the value. If you don’t want the data type of values in the XML string, you can set the attr_type parameter to False in the dicttoxml() function as shown below.

import dicttoxml

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
print("The python dictionary is:")
print(python_dict)
xml_string = dicttoxml.dicttoxml(python_dict, attr_type=False)
print("The XML string is:")
print(xml_string)


The python dictionary is:
{'employee': {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
The XML string is:
b'<?xml version="1.0" encoding="UTF-8" ?><root><employee><name>John Doe</name><age>35</age><job><title>Software Engineer</title><department>IT</department><years_of_experience>10</years_of_experience></job><address><street>123 Main St.</street><city>San Francisco</city><state>CA</state><zip>94102</zip></address></employee></root>'

In this example, we have set the attr_type parameter to False in the dicttoxml() function. Hence, the output XML string doesn’t contain the data type specifications.

Convert Python Dictionary to XML Using the xmltodict Module

Instead of using the dicttoxml module, you can also use the xmltodict module to convert a python dictionary to an XML string. I have already discussed how to convert an XML string to a python dictionary using the xmltodict module.

To convert a python dictionary to XML format, the xmltodict module provides us with the unparse() function. The unparse() function takes a dictionary as its input and converts it into an XML string as shown below.

import xmltodict

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
print("The python dictionary is:")
print(python_dict)
xml_string = xmltodict.unparse(python_dict)
print("The XML string is:")
print(xml_string)


The python dictionary is:
{'employee': {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
The XML string is:
<?xml version="1.0" encoding="utf-8"?>
<employee><name>John Doe</name><age>35</age><job><title>Software Engineer</title><department>IT</department><years_of_experience>10</years_of_experience></job><address><street>123 Main St.</street><city>San Francisco</city><state>CA</state><zip>94102</zip></address></employee>

In this example, we have used the unparse() method defined in the xmltodict module to convert a python dictionary to an XML string.

Dictionary to XML With Pretty Printing in Python

In all the above examples, the dictionary is converted into an XML string without any formatting. If you want to structure the XML string in a nice format, you can use the dict2xml module to convert the python dictionary into a pretty printed XML string. The dict2xml() function defined in the dict2xml module takes a dictionary as its input argument and returns a pretty printed XML string as shown below.

import dict2xml

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
print("The python dictionary is:")
print(python_dict)
xml_string = dict2xml.dict2xml(python_dict)
print("The XML string is:")
print(xml_string)


The python dictionary is:
{'employee': {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
The XML string is:
<employee>
  <address>
    <city>San Francisco</city>
    <state>CA</state>
    <street>123 Main St.</street>
    <zip>94102</zip>
  </address>
  <age>35</age>
  <job>
    <department>IT</department>
    <title>Software Engineer</title>
    <years_of_experience>10</years_of_experience>
  </job>
  <name>John Doe</name>
</employee>

In this example, we have converted a python dictionary to an XML string using the dict2xml() function defined in the dict2xml module. In the output, you can observe that the XML string is pretty printed. However, it doesn’t contain the header with the XML version declaration. This means that the dict2xml() function only converts the data into XML format; it doesn’t create a complete XML document from the data.

Convert Dictionary to XML File in Python

To convert a dictionary into an XML file, we can use two approaches. In the first approach, we can first convert the dictionary into an XML string and then save it using file operations. Alternatively, we can also directly convert the dictionary into an XML file. Let us discuss both approaches.

Dictionary to File Using XML Strings in Python

To convert a python dictionary into an XML file by first converting it into a string, we will use the following steps.

  • First, we will convert the dictionary into an XML string using the dicttoxml() function defined in the dicttoxml module.
  • Next, we will open a file with a .xml extension in write mode using the open() function. The open() function takes the file name as its first input argument and the mode string “w” as its second argument. After execution, it returns a file pointer.
  • Once we get the file pointer, we will write the XML string into the file using the write() method. The write() method, when invoked on a file pointer, takes the string as its input argument and saves the string into the file.
  • Finally, we will close the file using the close() method.

After execution of the above steps, the python dictionary will be converted into an XML file. You can observe this in the following example.

import dicttoxml

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
xml_string = dicttoxml.dicttoxml(python_dict, attr_type=False)
xml_file = open("person.xml", "w")
xml_file.write(str(xml_string))
xml_file.close()

The output file is:

Output XML File

The dicttoxml() function returns the XML string in byte format. Hence, the XML content is written to the file as a byte-string representation, as you can observe in the above image.
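
The underlying issue is passing bytes through str(), which embeds the b'' prefix in the written text. Decoding first avoids it. The sketch below uses a handwritten byte string rather than actual dicttoxml() output:

```python
xml_bytes = b'<?xml version="1.0" ?><root><name>John</name></root>'

# str() on bytes keeps the b'' wrapper in the result:
print(str(xml_bytes))
# b'<?xml version="1.0" ?><root><name>John</name></root>'

# Decoding produces clean text suitable for a text-mode file:
print(xml_bytes.decode("utf-8"))
# <?xml version="1.0" ?><root><name>John</name></root>
```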

The dicttoxml module doesn’t produce nicely formatted file content. To format the XML in the file, you can use the dict2xml module instead of the dicttoxml module, as shown in the following example.

import dict2xml

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
xml_string = dict2xml.dict2xml(python_dict)
xml_file = open("person.xml", "w")
xml_file.write(str(xml_string))
xml_file.close()


Output XML File

In the above image, you can observe that the dict2xml() method returns a formatted XML string. However, it doesn’t specify the XML version or encoding. Hence, this approach also doesn’t produce a complete XML document.

So, instead of converting the dictionary to XML string and then saving it to a file, we can directly save the XML formatted data to the file using the xmltodict module.

Convert Dictionary Directly to XML File in Python

We can use the unparse() method defined in the xmltodict module to convert a python dictionary directly into an XML file. For this, we will use the following steps.

  • First, we will open a file with a .xml extension in write mode using the open() function. The open() function takes the file name as its first input argument and the mode string “w” as its second argument. After execution, it returns a file pointer.
  • Once we get the file pointer, we will write the python dictionary into the file using the unparse() function defined in the xmltodict module. The unparse() function takes the python dictionary as its first input argument and the file pointer as its output parameter. After execution, it saves the dictionary into the file as XML.
  • Finally, we will close the file using the close() method.

After execution of the above steps, the python dictionary will be converted into an XML file as shown below.

import xmltodict

python_dict = {"employee": {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
xml_file = open("person.xml", "w")
xmltodict.unparse(python_dict, output=xml_file)
xml_file.close()


Output XML File

In the above output, you can observe that we get a valid XML document when we convert a dictionary to an XML file using the xmltodict.unparse() method.
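
One way to sanity-check that the result is a well-formed document is to parse it back with the standard library. The snippet below uses a trimmed, hardcoded version of the employee document rather than the actual person.xml file:

```python
import xml.etree.ElementTree as ET

xml_text = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<employee><name>John Doe</name><age>35</age></employee>'
)

# fromstring() raises ParseError on malformed XML, so a clean
# return value is a quick well-formedness check.
root = ET.fromstring(xml_text)
print(root.tag)                # employee
print(root.find("name").text)  # John Doe
```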


Conclusion

In this article, we have discussed different ways to convert a python dictionary into an XML string or file.

To learn more about python programming, you can read this article on how to convert a dictionary to YAML in Python. You might also like this article on custom json encoders in python.

I hope you enjoyed reading this article. Stay tuned for more informative articles.

Happy Learning!

The post Convert Python Dictionary to XML String or File appeared first on

Categories: FLOSS Project Planets

Kay Hayen: Python 3.11 and Nuitka Progress

Wed, 2023-02-15 06:29

In my all in with Nuitka post and my first post on Python 3.11 and Nuitka, I promised to give you more updates on Python 3.11 and Nuitka in general. So this is where 3.11 support stands, and the TLDR is: experimental support is coming with the Nuitka 1.5 release.

Where we are now

Nuitka 1.4 already contains some preparations for 3.11 support. But from the feedback I had read on how Nuitka has lost its performance lead over the releases, a huge amount of catching up on the quality and depth of integration was put onto the plate, and that became a large part of the 1.4 focus.

These preparations mostly addressed frames, unifying their previously separate code generation. Since generator frames were far too buggy with 3.11, this work had to continue, and interestingly many frame issues were solved by a great unification right after the 1.4 release, in the 1.5 development branch.

And from here, most frame issues got debugged and fixed. This is the area where 3.11 had the biggest impact on Nuitka internals.

The 1.5 development release now gives this kind of output.

Nuitka:WARNING: The Python version '3.11' is not officially supported by
Nuitka:WARNING: Nuitka '1.5rc7', but an upcoming release will change that. In
Nuitka:WARNING: the mean time use Python version '3.10' instead or newer
Nuitka:WARNING: Nuitka.

Using develop should be good. But I expect it to get better with every pre-release until 1.5 happens. Follow it there if you want.

What can you do?

Not a lot yet. As a mere mortal, what you can and should do is consider becoming a subscriber of Nuitka commercial, even if you do not need the IP protection features it mostly offers. All commonly essential packaging and performance features are entirely free, and I have put incredible amounts of work into this, and I need to now make a living off it.

Working on 3.11 support, just so things then continue to work, mostly on my own, deserves financial support. Once a year, for weeks on end, I am re-implementing features that a much larger collective of developers came up with.

But don’t get me wrong. This may sound like complaining here. Not at all. I love these things. I hate to be under some kind of time pressure though, but it seems that is coming to an end, so it’s all good now.

What was done?

So the “basic” tests of Nuitka are finally passing. That is by itself a huge step, and now work on the CPython 3.10 test suite has started: it is executed with Python 3.11, and fixes are applied.

Doing that, some bigger issues were still found, e.g. uncompiled frames that are the parent of compiled frames, when inspected, did not automatically provide an f_back; these now have to be created on the fly. This is the sort of thing that is not commonly observed by code.

Especially the usual trouble makers, test_generators, test_coroutines, and test_asyncgen (with one test part as an exception), are passing. Honestly, these are always the scariest to me. Debugging coroutines and asyncgen has been a fairly large chunk of my time spent on Nuitka in recent years.

Compiled frames appear not to be entirely correctly seen for tracebacks and stacks when uncompiled code looks upward, but that seems negligible right now.

Up to test_inspect, things appear to be working fine, and there is some kind of TODO list now that I maintain in the roadmap document. I add TODOs there, e.g. because I had to disable attribute optimization with Python 3.11, so that is going to harm performance and should be revisited before claiming full support.

As for the inspect module, there will be more functions that need monkey patching, esp. related to frames; e.g. getframeinfo has no tolerance for the lack of bytecode in compiled functions and their frames.

The Process

This was largely explained in my previous post, so I will just state where we are now, skip completed steps, and avoid repeating it too much.

The current phase is to run the CPython 3.10 test suite compiled with 3.11 and compare its results to running with plain 3.11, and expect it to be the same. It is normal for CPython 3.10 tests to fail with 3.11, and this does several things.

Test failures of 3.11 with 3.10 code actually give new test coverage. It has happened in previous new Python releases that only then did it become apparent that there were slight incompatibilities in exceptions, etc.

Also, a test may now pass where it should not, i.e. 3.11 changed behavior and Nuitka still follows 3.10 behavior. So maybe a new check is done or something like that, and Nuitka needs a version-guarded change that makes it behave the same with 3.11. But of course, when using 3.10, it has to retain the older behavior still.
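A version-guarded change of this kind can be sketched as follows. The behaviors and message texts here are entirely hypothetical, purely to illustrate the pattern of gating on the running interpreter version:

```python
import sys

def not_callable_message(type_name):
    # Hypothetical sketch of a version-guarded compatibility shim:
    # emulate whichever wording the running CPython version uses.
    if sys.version_info >= (3, 11):
        # made-up "newer" wording
        return f"'{type_name}' object is not callable"
    # made-up "older" wording
    return f"{type_name} object is not callable"

print(not_callable_message("int"))
```

The real changes in Nuitka's generated code are of course far more involved, but they follow the same shape: one branch per supported interpreter behavior.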

The other case is when Python 3.11 passes, but Nuitka does not. Typically Nuitka then has some sort of change missing in its internal codes, often only a changed error message, exception type, etc., but sometimes it can also be a bigger thing.

During this phase it is hard to know where we stand. I do not really want to do repeated analysis. So e.g. when the exception for using a non-context manager in a with statement fails, I am not really interested to see how many other tests are failing because of that or other things. I am going to fix that first, and only then will I look at what the next failure is.

Since I will get to fix all the things anyway, I tend to address most of the issues I find immediately and delay further execution of the test suite. Only when I understand the issue clearly enough, and see that it will take a lot of time for a corner case, do I add fixing it to the roadmap.

This is what I am doing, e.g., with the single test part failure in test_asyncgen, because I suspect that some change in 3.11 has happened, or that I will revisit the topic in the 3.11 test suite, which often has more dedicated tests to highlight changes. The failure remaining with Nuitka right now does not look important, nor easy to analyze.

In the next phase, the 3.11 test suite is used in the same way. Then we will get to support new features, new behaviors, newly allowed things, and achieve super compatibility with 3.11 as we always do for every CPython release. All the while doing this, the CPython 3.10 test suite will be executed with 3.11 by my internal CI, immediately reporting when things change for the worse.

This phase has not even started however. But probably results of it will be in 1.6 I would assume now.

Intermediate Results

Once Python 3.11 has more or less passed the 3.10 test suite without big issues, experimental support for it will be proclaimed and 1.5 shall be released. The warning from above will be given, but the error that 1.4 gave you will cease, and come back for 3.12 probably.


Very hard to predict. It feels close now. Supporting existing Nuitka is also a side-tracking thing that makes it unclear how much time I will have for it.

And the worst thing with debugging is that I just never know how much time it will take. I have spent almost a day staring at debugging traces for the coroutine code before it finally worked. And during that time it didn’t feel like progressing at all.

I think, looking back at the Python changes since 2.6, which was the first version Nuitka supported (and still does, btw): 3.5 and coroutines, 3.6 and asyncgen, and then 3.10 and match statements, the 3.11 release will probably have been the hardest.

Benefits for older Python too

I mentioned stuff before that I will not repeat; only new things here. So, the frame changes caused me to solve most of the issues by doing cleanups and refactoring that allowed for enhancements present in 1.4, with some more coming to 1.5, covering generators as well.

Most likely, attribute lookups will gain the same JIT approach that Python 3.11 now allows for, and maybe it will be possible to backport that to older Pythons as well. Not sure yet. For now, they are actually worse than with 3.10, while CPython made them faster. Not quite good for benchmarking at this time.

Expected results

I need to repeat this. People tend to expect that gains from Nuitka and enhancements of CPython stack up. The truth of the matter is, no they do not. CPython is now applying some tricks that Nuitka already did, some a decade ago. Not using its bytecode will then become less of a benefit, but that’s OK, this is not what Nuitka is about.

We need to get somewhere else entirely anyway, in terms of speed up. I will be talking about PGO and C types a lot in the coming year, that is at least the hope. The boost of 1.4 will only be the start. Once 3.11 support is sorted out, int will be getting dedicated code too, that’s where things will become interesting.

Final Words

Look ma, I posted about something that is not complete. The temptation to just wait until I finish it was so huge. But I resisted successfully.

Categories: FLOSS Project Planets

Python Software Foundation: The Case for a Second Developer-in-Residence for Python

Wed, 2023-02-15 04:12

As the currently serving sole developer in residence, I’m often asked if there will be more people holding the same position in the future. I strongly believe there should be and that it’s crucial to the long term success of this role. The only open matter is finding sustainable sponsorship for the position.

The current developer in residence

My day-to-day work revolves around an array of maintenance tasks for the Python Software Foundation with focus on CPython. Since I started in July 2021, I’ve done among others:

  • PR review and merging: 627 merges to CPython that led to the closing of 276 issues on the bug tracker, and many more code reviews on Github;
  • release management for the 3.8 and 3.9 branches as well as release notes and announcements for other releases;
  • following the Python security response team reports that led to several security releases of Python;
  • following the buildbot fleet status and reacting to failures, including maintenance of the only buildbot that runs big memory tests;
  • project management of the transition from our previous custom issue tracker to Github Issues;
  • migration from a previous custom CLA management bot to EdgeDB CLA bot;
  • co-administering including responding to moderation requests;
  • co-administering core Python Discord;
  • co-chairing the Python Language Summit at PyCon US;
  • reviewing talk submissions on the Program Committee for PyCon US;
  • facilitating cooperation with other significant Python projects: HPy, PyPy, nogil;
  • public speaking (5 events in 2021, 4 events in 2022).
The missing big picture

While I find this work fulfilling and there are always more things for me to do that other contributors suggest, one facet of the work can overshadow another. I cannot be in all places at once. Most importantly, while removing obstacles for other core developers (often volunteers) is indeed where we should put paid effort, I sometimes get asked: what’s your big project?

At this point I cannot say I had any large personal contribution over the past 18 months of work on CPython, which is ironic, given that I spent more time on it than in the preceding 11 years of core development combined. I had a few attempts at larger changes but inevitably the small busywork eats up my attention.

Adding another developer in residence more than doubles the position’s positive impact

It’s worth noting that the codebase of CPython is over a million lines of code now and even working on it full time does not mean a single person groks it all. That means that what you’re getting from Łukasz, the Developer in Residence, is something else than what you’d get from Magdalena, the Developer in Residence.

That alone means it would be worth having another person with a complementing skill set. But I believe there’s more.

Compared to working solo, having a team of two people paid full time to improve the developer experience for the rest of the core contributors would allow us to take on larger sweeping projects. What we would end up doing would definitely be consulted on with the Steering Council, and we would take suggestions from the role’s sponsor. But there are many possibilities!

We could add official build support for a new platform like iOS. We could improve test coverage of CPython tests, including coverage of trickier bits like the platform-specific code paths, C code, or code involved in CPython’s interpreter startup. We could revamp the buildbot master server to be more performant. We could be taking on implementation of accepted PEPs. We could upgrade to be more informative and easy to use. We could move the rest of the custom CPython bots to Github Actions, decreasing needed maintenance, improving performance and reliability. Those are just some ideas.

There is one more reason why I’m rooting for another person to join this position. Having another developer in residence would buffer any turbulence the other person has. Whenever I’m sick, or travel, or I’m stuck with a particularly stubborn problem, there would reliably be somebody else the other core developers could count on. This is important not only for them but also for me personally as it would decrease anxiety that builds up any time I’m unable to help somebody who needs me.

The ability to split work between two people is something I think about often. In theory there’s a whole team of core developers out there but since they’re mostly volunteers, I’m in no position to tell them what to do. Having a peer paid by the PSF would be different. It would be fair game to share the burden of a gnarly boring task, and that sounds like a wonderful improvement to me.

What if there isn’t another developer in residence?

I’m not saying the other person is required for me to stay productive. If we don’t find the budget for it, the situation is still better than having no developer in residence at all, I’d like to believe. So far I haven’t received much feedback on my work but I’m always open to hearing suggestions.

Categories: FLOSS Project Planets

Codementor: DataTypes in C programming Language

Wed, 2023-02-15 00:12
fundamentals of C Programming series continued...
Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #564 (Feb. 14, 2023)

Tue, 2023-02-14 14:30

#564 – FEBRUARY 14, 2023
View in Browser »

Monorepos in Python

A monorepo is a source control pattern in some organizations where a single code repository is shared for many or all projects. This interview with David Vujic discusses monorepos and the set of Python tools that can help you succeed with this pattern.

Business Process Models With Python and SpiffWorkflow

Can you describe your business processes with flowcharts? What if you could define the steps in a standard notation and implement the workflows in pure Python? This week on the show, Dan Funk from Sartography is here to discuss SpiffWorkflow.

Connect, Integrate & Automate Your Data - From Python or Any Other Application

At CData, we simplify connectivity between the application and data sources that power business, making it easier to unlock the value of data. Our SQL-based connectors streamline data access making it easy to access real-time data from on-premise or cloud databases, SaaS, APIs, NoSQL and Big Data →

Functional Python, Part II: Dial M for Monoid

This article is about “commandeering techniques from richly typed, functional languages into Python for fun and profit.” The focus is on Typeclasses and continuation-passing style.

SQLAlchemy 2.0 Released


DjangoCon Europe 2023 (Edinburgh) Call for Participation


Python 3.11.2, Python 3.10.10 and 3.12.0 Alpha 5 Are Available


Mypy 1.0 Released


Discussions

It Is Becoming Difficult for Me to Be Productive in Python

This discussion is based on Avinash’s article of the same name, where he describes his journey from type-less to typed languages and why he is finding it harder to refactor his Python.

Python Jobs

Software Engineer - Backend/Python (100% Remote) (Anywhere)


Python Video Course Instructor (Anywhere)

Real Python

Python Tutorial Writer (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

Python Parquet and Arrow: Using PyArrow With Pandas

Parquet and Arrow are two Apache projects available in Python via the PyArrow library. Parquet is column-oriented storage format for arrays and tables of data, while Arrow is an in-memory columnar format for data analysis. This article describes how to use them and how they compare to Pandas DataFrames.

How to Split a Python List or Iterable Into Chunks

This tutorial provides an overview of how to split a Python list into chunks. You’ll learn several ways of breaking a list into smaller pieces using the standard library, third-party libraries, and custom code. You’ll also split multidimensional data to synthesize an image with parallel processing.
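As a taste of the topic, here is one common standard-library approach to chunking; this is a minimal sketch, and the tutorial itself covers several more variants:

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    iterator = iter(iterable)
    # islice consumes the shared iterator, so each call picks up
    # where the previous chunk left off.
    while chunk := list(islice(iterator, size)):
        yield chunk

print(list(chunked(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Because it works on the iterator protocol rather than indexing, this version handles generators and other one-shot iterables, not just lists.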

Retool - The Fastest Way For Developers to Build and Launch Mobile Apps

Build and deploy mobile apps to iOS, Android, and as PWAs with no mobile expertise—all you need is JS and SQL. Retool Mobile is the fast way for developers to build business apps for teams on the go, at a warehouse, or in the field. And now teams of up to 5 users can build mobile apps for free →
RETOOL sponsor

Some Reasons to Avoid Cython

Cython lets you seamlessly merge Python syntax with calls into C or C++ code, making it easy to write high-performance extensions, but it is not the best tool in all circumstances. This article goes over some of the limitations and problems with Cython, and suggests alternatives.

Pandas Illustrated: The Definitive Visual Guide to Pandas

“Pandas is an industry standard for analyzing data in Python. With a few keystrokes, you can load, filter, restructure, and visualize gigabytes of heterogeneous information.” Learn all about Pandas with key illustrations to help understand the core concepts.

Standout Features in Django 4.2

Django 4.2 is slated for April and is currently in alpha release. This article covers some standout features that are coming, including psycopg v3 support, database comments, and lookups on field instances.

The Technology Behind GitHub’s New Code Search

For the last year, GitHub has been making large changes to how you can search for code on their site. This article describes what went into building the world’s largest public code search index.

Find Your Next Tech Job Through Hired

Hired is home to thousands of companies, from startups to Fortune 500s, that are actively hiring the best engineers, designers, data scientists, and more. Create a profile to let hiring managers extend interview requests to you. Sign up for free today!
HIRED sponsor

Python Testing Tools Taxonomy

This entry in the Python wiki is an exhaustive list of testing tools and libraries. Content includes unit testing, mocking, fuzz testing, web testing, coverage tools, and much more.

Securely Deploy a FastAPI App With NGINX and Gunicorn

In this tutorial, you’ll learn how to use NGINX, and Gunicorn+Uvicorn to deploy a FastAPI app, and generate a free SSL certificate for it.

Projects & Code

rtx: Runtime Executor (asdf Rust Clone)


django-admin-confirm: Mixin for Confirming Changes


tidypolars: tidyverse (R) Clone in Polars


jupyter-scheduler: Run Jupyter Notebooks as Jobs


pynimate: Python Package for Statistical Data Animations


Events

Dash and Data Visualisation, New Zealand Python User Group

February 15, 2023

Weekly Real Python Office Hours Q&A (Virtual)

February 15, 2023

PyConFr 2023

February 16 to February 20, 2023

Python Northwest

February 16, 2023

PyLadies Dublin

February 16, 2023

Happy Pythoning!
This was PyCoder’s Weekly Issue #564.

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Everyday Superpowers: Use ABCs for less-restrictive type hints.

Tue, 2023-02-14 11:38

Many people, especially those new to type hints, write hints that make things harder for users. This article will give you one tool to make your code easier to use.
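For instance, a hint drawn from collections.abc accepts more caller types than a concrete container does. This is a minimal sketch of the idea, not code from the article:

```python
from collections.abc import Iterable

def total(numbers: Iterable[int]) -> int:
    # An Iterable hint accepts lists, tuples, sets, and generators alike,
    # where a `list[int]` hint would make type checkers reject
    # everything except an actual list.
    return sum(numbers)

print(total([1, 2, 3]))                # 6
print(total(n * n for n in range(3)))  # 5
```

Callers keep their freedom, and the function still only promises to iterate, which is all it actually needs.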

Categories: FLOSS Project Planets

Real Python: Getters and Setters in Python

Tue, 2023-02-14 09:00

If you come from a language like Java or C++, then you’re probably used to writing getter and setter methods for every attribute in your classes. These methods allow you to access and mutate private attributes while maintaining encapsulation. In Python, you’ll typically expose attributes as part of your public API and use properties when you need attributes with functional behavior.

Even though properties are the Pythonic way to go, they can have some practical drawbacks. Because of this, you’ll find some situations where getters and setters are preferable over properties.

In this video course, you’ll:

  • Write getter and setter methods in your classes
  • Replace getter and setter methods with properties
  • Explore other tools to replace getter and setter methods in Python
  • Decide when setter and getter methods can be the right tool for the job

To get the most out of this course, you should be familiar with Python object-oriented programming. It’ll also be a plus if you have basic knowledge of Python properties and descriptors.
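As a taste of the contrast the course covers, here is a simplified sketch (not course material) of the same validated attribute written both ways:

```python
class Circle:
    """Contrast Java-style accessors with a Python property."""

    def __init__(self, radius):
        self._radius = radius

    # Traditional getter/setter pair, as you might write in Java or C++
    def get_radius(self):
        return self._radius

    def set_radius(self, value):
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value

    # The Pythonic alternative: same validation, plain attribute syntax
    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value

circle = Circle(2.0)
circle.set_radius(5.0)  # explicit method call
circle.radius = 3.0     # property: looks like plain attribute access
print(circle.radius)    # 3.0
```

Both forms enforce the same rule; the property just keeps the call sites looking like ordinary attribute access.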

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Ned Batchelder: Late initialization, reconsidered

Tue, 2023-02-14 07:20

A few days ago I posted Late initialization with mypy, and people gave me feedback, and I realized they were right. The placebo solution described there is clever, but too clever. It circumvents the value of static type checking.

The comments on the blog post were telling me this, but what helped most was a Mastodon thread with Glyph, especially when he said:

I am using “correct” to say “type-time semantics consistently matching runtime semantics.”

The placebo works to say “there’s always a thing,” but it’s not a real thing. A more useful and clearer solution is to admit that sometimes there isn’t a thing, and to be explicit about that.

I actually took a slightly middle-ground approach. Some of the “sometimes” attributes already had null implementations I could use, but I removed all of the explicit “Placebo” classes.
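In code terms, the explicit approach looks roughly like this. The names here are made up for illustration and are not from Ned's actual codebase:

```python
from typing import Optional

class Logger:
    def log(self, message: str) -> None:
        print(message)

class Task:
    def __init__(self) -> None:
        # Admit that sometimes there isn't a thing: the attribute is
        # Optional, and the type checker forces every use site to
        # handle the None case explicitly.
        self.logger: Optional[Logger] = None

    def run(self) -> None:
        if self.logger is not None:
            self.logger.log("starting")

task = Task()
task.run()              # no logger yet: nothing happens
task.logger = Logger()
task.run()              # prints "starting"
```

The None checks are mildly verbose, but unlike a do-nothing placebo object, the type-time semantics now match what actually happens at runtime.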

Categories: FLOSS Project Planets

Django Weblog: Django security releases issued: 4.1.7, 4.0.10, and 3.2.18

Tue, 2023-02-14 03:35

In accordance with our security release policy, the Django team is issuing Django 4.1.7, Django 4.0.10, and Django 3.2.18. These releases addresses the security issue detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2023-24580: Potential denial-of-service vulnerability in file uploads

Passing certain inputs to multipart forms could result in too many open files or memory exhaustion, and provided a potential vector for a denial-of-service attack.

The number of file parts parsed is now limited via the new DATA_UPLOAD_MAX_NUMBER_FILES setting.
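In a project's settings module, the new limit can be tuned like any other upload setting; the value below is an arbitrary illustration, so consult the release notes for the actual default:

```python
# settings.py (sketch): cap how many file parts a single multipart
# upload request may contain before Django rejects it.
DATA_UPLOAD_MAX_NUMBER_FILES = 100
```

Forms that legitimately upload many files may need a higher value, while lowering it further tightens the denial-of-service protection.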

Thanks to Jakob Ackermann for the report.

This issue has severity "moderate" according to the Django security policy.

Affected supported versions
  • Django main branch
  • Django 4.2 (currently at pre-release alpha status)
  • Django 4.1
  • Django 4.0
  • Django 3.2

Patches to resolve the issue have been applied to Django's main branch and the 4.2, 4.1, 4.0, and 3.2 release branches. The patches may be obtained from the following changesets:

The following releases have been issued:

The PGP key ID used for this release is Carlton Gibson: E17DF5C82B4F9D00

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to, and not via Django's Trac instance or the django-developers list. Please see our security policies for further information.

Categories: FLOSS Project Planets

Python Bytes: #323 AI search wars have begun

Tue, 2023-02-14 03:00
Watch on YouTube

About the show

Sponsored by Microsoft for Startups Founders Hub.

Connect with the hosts

  • Michael
  • Brian (may be a minute or two late)
  • The show
  • Special guest: Pamela Fox

Join us on YouTube to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Michael #1: camply

  • A tool to find campsites at sold out campgrounds through sites like and Yellowstone
  • Finding reservations at sold out campgrounds can be tough.
  • Searches the APIs of booking services (which index thousands of campgrounds across the USA) to continuously check for cancellations and availabilities to pop up.
  • Once a campsite becomes available, camply sends you a notification to book your spot!
  • Want to camp in a tower in California?

        camply campgrounds --search "Fire Lookout Towers" --state CA

Brian #2: hatch-fancy-pypi-readme

  • Your ✨Fancy✨ Project Deserves a ✨Fancy✨ PyPI Readme! 🧐
  • Hynek Schlawack
  • Include lots of extras in a
      • text fragments
      • files, like or, with custom start, stop, pattern includes, etc.
      • regular expression substitutions
  • Several projects with examples, including black.

Pamela #3: Pyodide dev branch now supports 3.11

  • Python 3.11 PR
  • Benchmark Py3.11 and Py3.10
  • pyodide console
  • TODO list for 0.23.0 alpha release
  • Dis-this: specializing adaptive interpreter
  • Recursion visualizer

Michael #4: EU hates open source?

  • via Pamphile Roy
  • The Cyber Resilience Act (CRA) is an interesting and important proposal for a European law that aims to drive the safety and integrity of software
  • The proposal includes a requirement for self-certification by suppliers of software to attest conformity with the requirements of the CRA including security, privacy and the absence of Critical Vulnerability Events (CVEs).
  • We recognize that the European Commission has framed an exception in recital 10 attempting to ensure these provisions do not accidentally impact Open Source software.
  • However, drawing on more than two decades of experience, we at the Open Source Initiative can clearly see that the current text will cause extensive problems for Open Source software.
  • Since the goal is to avoid harming Open Source software, this goal should be stated at the start of the paragraph as the rationale, replacing the introductory wording about avoiding harm to "research and innovation" to avoid over-narrowing the exception.
  • The reference to "non-commercial" as a qualifier should be substituted. The term “commercial” has always led to legal uncertainty for software and is a term which should not be applied in the context of open source.
  • OSI recommends further work on the Open Source exception to the requirements within the body of the Act to exclude all activities prior to commercial deployment of the software and to clearly ensure that responsibility for CE marks does not rest with any actor who is not a direct commercial beneficiary of deployment.

Brian #5: So, Single (‘) or Double (“) Quotes in Python?

  • Marcin Kozak
  • PEP8 doesn’t recommend anything.
  • The REPL uses single quotes:

        >>> x = "one"
        >>> x
        'one'

  • Black sides with “double quotes”, due to the apostrophe in the string problem.
      • 'Don\'t be so sad.' vs “Don’t be sad.”
  • You get to pick, and don’t be bullied by black-fanatics.
  • There’s always blue, which is just like black, but
      • defaults to single-quotes
      • line length defaults to 79, not black’s 88.
      • preserves whitespace before hash marks for right hanging comments (so multiple lines can line up).

Pamela #6: Frozen-Flask

  • Pamela’s PR for moving to Frozen Flask
  • Stepping down as a maintainer

Extras

Brian:

  • What does everyone think of GitHub pricing?

Michael:

  • Much much better transcripts, for example, this episode.
      • Means our search works way better too
  • The AI search wars have begun - Google Panics Over ChatGPT [The AI Wars Have Begun] video
      • Microsoft Bing rockets to the top of the App Store after announcing ChatGPT integration
      • Google shares lose $100 billion after company's AI chatbot makes an error during demo
  • Free PyCharm for all the Talk Python customers
  • Thanks for the help with finding a good Flutter dev.
  • Important Talk Python episode: Fusion Ignition Breakthrough and Python

Pamela:

  • Github pyproject.toml support.
  • Python Package Template

Jokes:

  • $McTitle
  • Worst input fields
Categories: FLOSS Project Planets

Codementor: Ways to Implement increment operator in C Language..

Tue, 2023-02-14 00:22
fundamentals of C programming series..
Categories: FLOSS Project Planets

Glyph Lefkowitz: Data Classification

Mon, 2023-02-13 19:44

Is there a place for non-@dataclass classes in Python any more?

I have previously — and somewhat famously — written favorably about @dataclass’s venerable progenitor, attrs, and how you should use it for pretty much everything.

At the time, attrs was an additional dependency, a piece of technology that you could bolt on to your Python stack to make your particular code better. While I advocated for it strongly, there are all the usual implicit reasons against using a new thing. It was an additional dependency, it might not interoperate with other convenience mechanisms for type declarations that you were already using (i.e. NamedTuple), it might look weird to other Python programmers familiar with existing tools, and so on. I don’t think that any of these were good counterpoints, but there was nevertheless a robust discussion to be had in addressing them all.

But for many years now, dataclasses have been — and currently are — built in to the language. They are increasingly integrated to the toolchain at a deep level that is difficult for application code — or even other specialized tools — to replicate. Everybody knows what they are. Few or none of those reasons apply any longer.

For example, classes defined with @dataclass are now optimized as a C structure might be when you compile them with mypyc, a trick that is extremely useful in some circumstances, which even attrs itself now has trouble keeping up with.

This all raises the question for me: beyond backwards compatibility, is there any point to having non-@dataclass classes any more? Is there any remaining justification for writing them in new code?

Consider my original example, translated from attrs to dataclasses. First, the non-dataclass version:

class Point3D:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

And now the dataclass one:

from dataclasses import dataclass

@dataclass
class Point3D:
    x: int
    y: int
    z: int

Many of my original points still stand. It’s still less repetitive. In fewer characters, we’ve expressed considerably more information, and we get more functionality (repr, sorting, hashing, etc). There doesn’t seem to be much of a downside besides the strictness of the types, and if typing.Any were a builtin, x: any would be fine for those who don’t want to unduly constrain their code.
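To make the "more functionality" claim concrete, here is a sketch of what the generated methods buy you. Note that sorting and hashing are opted into here with order=True and frozen=True; a bare @dataclass gives you __init__, __repr__, and __eq__ by default, but not ordering or hashability:

```python
from dataclasses import dataclass, asdict

@dataclass(order=True, frozen=True)
class Point3D:
    x: int
    y: int
    z: int

a = Point3D(1, 2, 3)
b = Point3D(1, 2, 3)

# Generated for free: a legible repr, value-based equality,
# field-by-field ordering, hashability, and dict conversion.
print(repr(a))          # Point3D(x=1, y=2, z=3)
print(a == b)           # True
print(a < Point3D(2, 0, 0))  # True: compares (1, 2, 3) < (2, 0, 0)
print(asdict(a))        # {'x': 1, 'y': 2, 'z': 3}
```

None of this requires writing a single dunder method by hand; the hand-rolled version above gives you only __init__.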

The one real downside of the latter over the former right now is the need for an import. Which, at this point, just seems… confusing? Wouldn’t it be nicer to be able to just write this:

class Point3D:
    x: int
    y: int
    z: int

and not need to faff around with decorator semantics and fudging the difference between Mypy (or Pyright or Pyre) type-check-time and Mypyc or Cython compile time? Or even better, to not need to explain the complexity of all these weird little distinctions to new learners of Python, and to have to cover import before class?
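For comparison, today's closest thing to "just a class body of annotations" is the typing.NamedTuple mentioned earlier, which dispenses with the decorator but still requires the import, and carries tuple semantics along with it (a minimal sketch):

```python
from typing import NamedTuple

class Point3D(NamedTuple):
    x: int
    y: int
    z: int

p = Point3D(1, 2, 3)
print(repr(p))   # Point3D(x=1, y=2, z=3)
print(p.x)       # attribute access works...
print(tuple(p))  # ...but so does iteration, because it is a tuple
```

So even the least ceremonious existing spelling still makes a newcomer learn import, inheritance, and tuple-ness before they can define a record type.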

These tools all already treat the @dataclass decorator as a totally special language construct, not really like a decorator at all, so to really explore it you have to explain a special case and then a special case of a special case (the extension hook for that special case of the special case notwithstanding).
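That extension hook is, presumably, typing.dataclass_transform from PEP 681 (Python 3.11+), which lets a third-party decorator tell type checkers "classes decorated with me behave like dataclasses". A minimal sketch of the shape, with a no-op fallback so it runs on older Pythons; simple_data and its argument-order-only __init__ are made up here for illustration:

```python
try:
    from typing import dataclass_transform  # Python 3.11+
except ImportError:
    def dataclass_transform(**kwargs):
        # No-op fallback for older Pythons: type checkers that know
        # PEP 681 still recognize the spelling; at runtime it does nothing.
        def decorator(obj):
            return obj
        return decorator

@dataclass_transform()
def simple_data(cls):
    # Synthesize a positional __init__ from the class's annotations,
    # dataclass-style, without using @dataclass itself.
    field_names = list(cls.__annotations__)

    def __init__(self, *args):
        for name, value in zip(field_names, args):
            setattr(self, name, value)

    cls.__init__ = __init__
    return cls

@simple_data
class Point3D:
    x: int
    y: int
    z: int

p = Point3D(1, 2, 3)
```

Which is to say: to fully explain the special case of the special case, you now also need PEP 681.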

If we didn’t want any new syntax, we would need a from __future__ import dataclassification or some such for a while, but this doesn’t seem like an impossible bar to clear.

There are still some folks who don’t like type annotations at all, and there’s still the possibility of awkward implicit changes in meaning when transplanting code from a place with dataclassification enabled to one without, so perhaps an entirely new unambiguous syntax could be provided. One that more closely mirrors the meaning of parentheses in def, moving inheritance (a feature which, whether you like it or not, is clearly far less central to class definitions than ‘what fields do I have’) off to its own part of the syntax:

data Point3D(x: int, y: int, z: int) from Vector:
    def method(self):
        ...

which, for the “I don’t like types” contingent, could reduce to this in the minimal case:

data Point3D(x, y, z):
    pass
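Until any such syntax exists, the closest runtime approximation of "fields in parentheses, base class off to the side" is dataclasses.make_dataclass, which builds the class from a field list without a class statement. A sketch; the Vector base here is a stand-in I have invented for the example, since none is defined above:

```python
from dataclasses import make_dataclass

class Vector:
    # Hypothetical base class, standing in for the "from Vector" clause.
    def magnitude_squared(self):
        return self.x**2 + self.y**2 + self.z**2

# Roughly: data Point3D(x: int, y: int, z: int) from Vector
Point3D = make_dataclass(
    "Point3D",
    [("x", int), ("y", int), ("z", int)],
    bases=(Vector,),
)

p = Point3D(1, 2, 3)
print(repr(p))                 # Point3D(x=1, y=2, z=3)
print(p.magnitude_squared())   # inherited behavior still works
```

Of course this gains none of the pedagogical benefits, since make_dataclass is even more obscure than the decorator; it only shows the proposed semantics are already expressible.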

Just thinking pedagogically, I find it super compelling to imagine moving from teaching def foo(x, y, z):... to data Foo(x, y, z):... as opposed to @dataclass class Foo: x: int....

I don’t have any desire for semantic changes to accompany this, just to make it possible for newcomers to ignore the circuitous historical route of the @dataclass syntax and get straight into defining their own types with legible reprs from the very beginning of their Python journey.

(And make it possible for me to skip a couple of lines of boilerplate in short examples, as a bonus.)

I’m curious to know what y’all think, though. Shoot me an email or a toot and let me know.

In particular:

  1. Do you think there’s some reason I’m missing why Python’s current method for defining classes via a bunch of dunder methods is still better than dataclasses, or should stick around into the future for reasons beyond “compatibility”?
  2. Do you think “compatibility” is sufficient reason to keep the syntax the way it is forever, and I’m underestimating the cost of adding a keyword like this?
  3. If you do think that a change should be made, would you prefer:
    1. changing the meaning of class itself via a __future__ import,
    2. a new data keyword like the one I’ve proposed,
    3. a new keyword that functions exactly like the one I have proposed, except you really want to bikeshed the word data a bunch,
    4. something more incremental like just putting dataclass and field in builtins,
    5. or an option I haven’t even contemplated here?

If I find I’m not alone in this perhaps I will wander over to the Python discussion boards to have a more substantive conversation...

Thank you to my patrons who are helping me while I try to turn… whatever this is… along with open source maintenance and application development, into a real job. Do you want to see me pursue ideas like this one further? If so, you can support me on Patreon as well!
