Planet Python

Planet Python - http://planetpython.org/

PyCharm: Learning Resources for pytest

Mon, 2024-07-29 04:31

In this blog post, we’ll look at how PyCharm helps you when you’re working with pytest, and we will signpost you to a bunch of resources for learning pytest. While some of these resources were created by us here at JetBrains, others were crafted by the storytellers in the pytest community.

Using pytest in PyCharm

PyCharm has extensive support for pytest, including a dedicated pytest test runner. PyCharm also gives you code completion for the test subject and pytest fixtures, as well as detailed assert failure reports, so you can get to the root of the problem quickly.
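To give a taste of those detailed assert failure reports, here is a minimal, hypothetical test module (the function and file names are illustrative). A plain `assert` is all pytest needs; on failure it prints an annotated comparison of both sides:

```python
# test_calc.py - a minimal, hypothetical example
def add(a, b):
    return a + b

def test_add():
    # pytest rewrites this plain assert into a detailed failure report
    assert add(2, 3) == 5
```

Running this file with pytest (or PyCharm's pytest runner) discovers and executes `test_add` automatically.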

Download PyCharm

Resources for pytest

If you like reading blog posts, we have plenty of those. If videos are more your thing, we have some awesome content there, too. If you prefer a sequential course, we have some pointers for you, and if you prefer to snuggle down and read a book, we’ve got a great recommendation as well. There’s a little something here for everyone!  

Although the majority of these resources don’t assume any prior knowledge of pytest, some delve deeper into the subject than others – so there’s plenty to explore when you’re ready. 

First, let me point you to two important links:

  • The pytest page on the JetBrains Guide serves as a starting point for all the pytest resources we’ve created.
  • Brian Okken maintains a website for all his pytest resources (some of which are free, whereas others are paid). 
Videos about pytest

We have a video tutorial series composed of nine videos on pytest that start from the beginning of pytest-time. You can check out the tutorials on YouTube.

If there’s something specific you want to take a look at, the individual video topics are:

If you want a super-speedy refresher on pytest in PyCharm, you can watch PyCharm and pytest in Under 7 Minutes (beware – it’s fast!).

Tutorials about pytest

If you prefer learning by doing, we have some great pytest tutorials for you. First, the above video series is also available as a written tutorial in the JetBrains Guide.

Alternatively, Brian Okken has produced a detailed tutorial on everything pytest if you want to explore all areas. This tutorial is paid content, but it’s well worth it! 

Blogs and books about pytest

If you prefer reading, we have lots of blog posts for you. Here are some of the pytest resources on the JetBrains Guide:

Additionally, Brian has a blog that covers a diverse range of pytest subjects you can dive into. 

While we’re on the subject of reading, Brian has also written an excellent book on pytest that you can purchase and curl up with if that’s your thing.

Official pytest documentation

Last but not least, the official pytest documentation is another one to bookmark and keep close by as you go on your journey to pytest mastery. 

Conclusion

The Python testing framework pytest is packed with helpful features such as fixtures, mocking, and parametrizing that make testing your applications easier, giving you confidence in the quality of your code. Go ahead and try pytest out and let us know what you learn on your path to Python testing excellence!
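As a small illustration of the parametrizing feature mentioned above, here is a sketch of a parametrized test (the names are illustrative). One test function runs once per tuple of arguments:

```python
import pytest

# Each (value, expected) pair becomes its own test case in the report
@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9), (-1, 1)])
def test_square(value, expected):
    assert value ** 2 == expected
```

pytest reports each parameter set as a separate test, so a single failing input is pinpointed immediately.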

Categories: FLOSS Project Planets

Zato Blog: Automating telecommunications networks with Python and SFTP

Mon, 2024-07-29 03:43
Automating telecommunications networks with Python and SFTP 2024-07-29, by Dariusz Suchojad

In telecommunications, the Secure File Transfer Protocol (SFTP) serves as a critical mechanism for secure and reliable file exchange between different network components, devices, and systems, whether that means updating configurations, monitoring the network, exchanging customer data, or facilitating software updates. Python, in turn, is an ideal tool for automating telecommunications networks thanks to its readability and versatility.

Let's dive into how to employ the two effectively and efficiently using the Zato integration and automation platform.

Dashboard

The first step is to define a new SFTP connection in your Dashboard, as in the screenshots below.

The form lets you provide all the default options that apply to each SFTP connection - remote host, what protocol to use, whether file metadata should be preserved during transfer, logging level and other details that you would typically provide.

Simply fill it out with the same details that you would use for a command line-based SFTP connection.

Pinging

The next thing, right after the creation of a new connection, is to ping it to check if the server is responding.

Pinging opens a new SFTP connection and runs the ping command - in the screenshot above it was ls . - a practically no-op command whose sole purpose is to confirm that commands can in fact be executed over the connection, which proves the configuration is correct.

This will return either details of why a connection could not be established or, if it was successful, the response time.

Cloud SFTP console

Having validated the configuration by pinging it, we can now execute SFTP commands straight in Dashboard from a command console:

Any SFTP command, or even a series of commands, can be sent and responses retrieved immediately. It is also possible to increase the logging level for additional SFTP protocol-level details.

This makes it possible to rapidly prototype file transfer functionality as a series of scripts that can then be moved, as they are, to Python-based services.

Python automation

Now, in Python, your API automation services have access to an extensive array of capabilities - from executing transfer commands individually or in batches to the usage of SFTP scripts previously created in your Dashboard.

Here is how Python can be used in practice:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MySFTPService(Service):
    def handle(self):

        # Connection to use
        conn_name = 'My SFTP Connection'

        # Get a handle to the connection object
        conn = self.out.sftp[conn_name].conn

        # Execute an arbitrary script with one or more SFTP commands, like in web-admin
        my_script = 'ls -la /remote/path'
        conn.execute(my_script)

        # Ping a remote server to check if it responds
        conn.ping()

        # Download an entry, possibly recursively
        conn.download('/remote/path', '/local/path')

        # Like .download but remote path must point to a file (exception otherwise)
        conn.download_file('/remote/path', '/local/path')

        # Makes the contents of a remote file available on output
        out = conn.read('/remote/path')

        # Uploads a local file or directory to remote path
        conn.upload('/local/path', '/remote/path')

        # Writes input data out to a remote file
        data = 'My data'
        conn.write(data, '/remote/path')

        # Create a new directory
        conn.create_directory('/path/to/new/directory')

        # Create a new symlink
        conn.create_symlink('/path/to/new/symlink')

        # Create a new hard-link
        conn.create_hardlink('/path/to/new/hardlink')

        # Delete an entry, possibly recursively, no matter what kind it is
        conn.delete('/path/to/delete')

        # Like .delete but path must be a directory
        conn.delete_directory('/path/to/delete')

        # Like .delete but path must be a file
        conn.delete_file('/path/to/delete')

        # Like .delete but path must be a symlink
        conn.delete_symlink('/path/to/delete')

        # Get information about an entry, e.g. modification time, owner, size and more
        info = conn.get_info('/remote/path')
        self.logger.info(info.last_modified)
        self.logger.info(info.owner)
        self.logger.info(info.size)
        self.logger.info(info.size_human)
        self.logger.info(info.permissions_oct)

        # A boolean flag indicating if path is a directory
        result = conn.is_directory('/remote/path')

        # A boolean flag indicating if path is a file
        result = conn.is_file('/remote/path')

        # A boolean flag indicating if path is a symlink
        result = conn.is_symlink('/remote/path')

        # List contents of a directory - items are in the same format that .get_info uses
        items = conn.list('/remote/path')

        # Move (rename) remote files or directories
        conn.move('/from/path', '/to/path')

        # An alias to .move
        conn.rename('/from/path', '/to/path')

        # Change mode of entry at path
        conn.chmod('600', '/path/to/entry')

        # Change owner of entry at path
        conn.chown('myuser', '/path/to/entry')

        # Change group of entry at path
        conn.chgrp('mygroup', '/path/to/entry')

Summary

Given how important SFTP is in telecommunications, having a convenient and easy way to automate it using Python is an essential ability in a network engineer's skill-set.

Thanks to the SFTP connections in Zato, you can prototype SFTP scripts in Dashboard and employ them in API services right after that. To complement it, a full Python API is available for programmatic access to remote file servers.

Combined, the features make it possible to create scalable and reusable file transfer services in a quick and efficient manner using the most convenient programming language, Python.

More resources

Click here to read more about using Python and Zato in telecommunications
What is a Network Packet Broker? How to automate networks in Python?
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?

More blog posts
Categories: FLOSS Project Planets

Turnkey Linux: Python PEP 668 - working with "externally managed environment"

Sun, 2024-07-28 23:25

Python users on Linux will have noticed that Python is now an "externally managed environment" on newer releases. I suspect that this has caused many frustrations. It certainly did for me - at least initially. Marcos, a long-term friend of TurnKey, recently reached out to me to ask about the best way to work around this when developing on our Odoo appliance.

The issue

Before I go on, for those of you who are not sure what I'm talking about, try installing or updating a python package via pip. It will fail with an error message:

error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python
installation or OS distribution provider. You can override this, at
the risk of breaking your Python installation or OS, by passing
--break-system-packages.
hint: See PEP 668 for the detailed specification.

Whilst this does fix a legitimate issue, it's also a big PITA for developers. To read more about the rationale, please see PEP 668.
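The mechanism behind the message is simple: PEP 668 specifies a marker file named EXTERNALLY-MANAGED in the interpreter's standard-library sysconfig directory, and pip refuses system-wide installs when it is present. A small sketch to check for it yourself:

```python
import pathlib
import sysconfig

# PEP 668: pip looks for this marker file before touching system packages
marker = pathlib.Path(sysconfig.get_path("stdlib")) / "EXTERNALLY-MANAGED"
print("externally managed:", marker.exists())
```

On a Debian 12 system with python3 installed from apt, the marker exists; in most other environments it does not.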

Resolution options

As per the message, installing Python packages via apt is preferable. But what if you need a Python library that isn't available in Debian? Or a newer version than what Debian provides? As noted in the error message, using a Python venv is the next best option. But that means duplicating any apt packages you may already have installed, and therefore bloat. It also means that you miss out on the automated security updates that TurnKey provides for Debian packages. The only remaining option is to "break system packages". That doesn't sound good! It reverts your system to the behavior from before PEP 668 was applied - thus making life more complicated in the future...

Pros and Cons of each approach.

Assuming that you want or need versions and/or packages not available in Debian, what is the best path? Obviously each option noted above has pros and cons, so which way should you go? In his message to me, Marcos nicely laid out the pros and cons of the two suggested approaches, so I'll share them here:

Virtual Environment

Pros:
  • Isolates application dependencies, avoiding conflicts with system packages.
  • Allows for more flexible and up-to-date package management.

Cons:
  • Adds complexity to the setup and maintenance process.
  • Increases the overall footprint and resource requirements of the deployment.

System-Wide Installation

Pros:
  • Simpler setup and integration with the system.
  • Utilizes the standard TurnKey Linux deployment model.

Cons:
  • Potential conflicts with system-managed packages.
  • Limited by the constraints imposed by Debian 12.
Another - perhaps better - option

Another option not noted in the pip error message is to create a virtual environment with the system Python packages passed through. Whilst it's still not perfect, in my opinion it is by far the best option - unless of course you can get by using system packages alone. TBH, I'm a bit surprised that it's not noted in the error message. It's pretty easy to set up: just add the '--system-site-packages' switch when creating your virtual environment. I.e.:

python3 -m venv --system-site-packages /path/to/venv

What you get with this approach

Let's have a look at what you get when using '--system-site-packages'. First let's create an example venv. Note that all of this is running as root from root's home (/root), although for AWS Marketplace users (or other non-TurnKey users) most of these commands should work fine as a "sudo" user (for the apt installs).

root@core ~# mkdir ~/test_venv
root@core ~# python3 -m venv --system-site-packages ~/test_venv
root@core ~# source ~/test_venv/bin/activate
(test_venv) root@core ~#

Now for a demonstration of what happens when you use it.

I'll use a couple of apt packages with my examples:

  • python3-pygments (initially installed)
  • python3-socks (initially not-installed)

Continuing on from creating the venv above, let's confirm the package versions and status:

(test_venv) root@core ~# apt list python3-pygments python3-socks
Listing... Done
python3-pygments/stable,now 2.14.0+dfsg-1 all [installed,automatic]
python3-socks/stable 1.7.1+dfsg-1 all

So we have python3-pygments installed, at version 2.14.0. python3-socks is not installed, but the available version is 1.7.1. Now let's check that the installed package (pygments) is available in the venv and that it's the system version. For those not familiar with grep, the command below does a case-insensitive search for lines that include socks or pygments.

(test_venv) root@core ~# pip list | grep -i 'socks\|pygments'
Pygments 2.14.0

Let's install python3-socks and check the status again:

(test_venv) root@core ~# apt install -y python3-socks
[...]
(test_venv) root@core ~# apt list python3-pygments python3-socks
Listing... Done
python3-pygments/stable,now 2.14.0+dfsg-1 all [installed,automatic]
python3-socks/stable,now 1.7.1+dfsg-1 all [installed]

Ok so python3-socks is installed now. And it's instantly available in our venv:

(test_venv) root@core ~# pip list | grep -i 'socks\|pygments'
Pygments 2.14.0
PySocks 1.7.1

Woohoo! :) And can we still install and/or update packages in our venv with pip?

(test_venv2) root@core ~# pip install --upgrade pygments
Requirement already satisfied: pygments in /usr/lib/python3/dist-packages (2.14.0)
Collecting pygments
  Downloading pygments-2.18.0-py3-none-any.whl (1.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 6.5 MB/s eta 0:00:00
Installing collected packages: pygments
  Attempting uninstall: pygments
    Found existing installation: Pygments 2.14.0
    Not uninstalling pygments at /usr/lib/python3/dist-packages, outside environment /root/test_venv2
    Can't uninstall 'Pygments'. No files were found to uninstall.
Successfully installed pygments-2.18.0

Yes! When using pip install in our venv there are some extra lines related to the system package, but otherwise it's the same as using a standalone venv. Let's double check the new version:

(test_venv) root@core ~# pip list | grep -i 'socks\|pygments'
Pygments 2.18.0
PySocks 1.7.1

So we've updated from the system version of Pygments to 2.18.0, but the system version still exists - and is still 2.14.0:

(test_venv) root@core ~# apt list python3-pygments
Listing... Done
python3-pygments/stable,now 2.14.0+dfsg-1 all [installed,automatic]

So what happens if we remove the pip installed version?:

(test_venv) root@core ~# pip uninstall pygments
Found existing installation: Pygments 2.18.0
Uninstalling Pygments-2.18.0:
  Would remove:
    /root/test_venv2/bin/pygmentize
    /root/test_venv2/lib/python3.11/site-packages/pygments-2.18.0.dist-info/*
    /root/test_venv2/lib/python3.11/site-packages/pygments/*
Proceed (Y/n)? y
Successfully uninstalled Pygments-2.18.0

This time there is no mention of the system package. Let's double check the system and the venv:

(test_venv) root@core ~# apt list python3-pygments
Listing... Done
python3-pygments/stable,now 2.14.0+dfsg-1 all [installed,automatic]
(test_venv) root@core ~# pip list | grep -i 'pygments'
Pygments 2.14.0

Yep, the system package is still there and it's still in the venv!

The best of both worlds

So using '--system-site-packages' is essentially the best of both worlds. Where possible you can use system packages via apt, but you still have all the advantages of a virtual environment. In my opinion it's the best option by far! What do you think? Feel free to share your thoughts and feedback below.

Blog Tags: python, virtual environment, venv, apt, pip, debian, linux
Categories: FLOSS Project Planets

Kushal Das: Multi-factor authentication in django

Fri, 2024-07-26 10:24

Multi-factor authentication is a must-have feature in any modern web application, especially support for both TOTP (think applications on your phone) and FIDO2 (say, YubiKeys). I created a small Django demo, mfaforgood, which shows how to enable both.

I am using django-mfa3 for all the hard work, and specifically a PR branch from my friend Giuseppe De Marco.

I also fetched the cbor-js package into the repository so that hardware tokens for FIDO2 work. I hope this example will help you add MFA support to your Django application.

Major points of the code
  • Adding example templates from MFA project, with admin theme and adding cbor-js to the required templates.
  • Adding mfa to INSTALLED_APPS.
  • Adding mfa.middleware.MfaSessionMiddleware to MIDDLEWARE.
  • Adding MFA_DOMAIN and MFA_SITE_TITLE to settings.py.
  • Also adding STATICFILES_DIRS.
  • Adding mfa.views.MFAListView as the Index view of the application.
  • Also adding mfa URLs.
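Put together, the settings changes from the list above look roughly like this fragment of settings.py (the domain, title, and static-directory values are placeholders, not taken from the demo):

```python
# settings.py (fragment) - values below are placeholders
INSTALLED_APPS = [
    # ... your existing apps ...
    "mfa",
]

MIDDLEWARE = [
    # ... your existing middleware ...
    "mfa.middleware.MfaSessionMiddleware",
]

MFA_DOMAIN = "example.com"        # the domain users log in on
MFA_SITE_TITLE = "My Django App"  # shown during key registration

STATICFILES_DIRS = ["static"]     # e.g. where cbor-js is served from
```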

After logging in for the first time, one can enable MFA in the following screen.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #214: Build Captivating Display Tables in Python With Great Tables

Fri, 2024-07-26 08:00

Do you need help making data tables in Python look interesting and attractive? How can you create beautiful display-ready tables as easily as charts and graphs in Python? This week on the show, we speak with Richard Iannone and Michael Chow from Posit about the Great Tables Python library.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Python Software Foundation: Notice of Python Software Foundation Bylaws change, effective 10 August 2024

Fri, 2024-07-26 07:33
There has been a lot of attention directed at our Bylaws over the last few weeks, and as a result of that conversation, the Board was alerted to a defect in our Bylaws that exposes the Foundation to an unbounded financial liability.

Specifically, Bylaws Article XIII as originally written compels the Python Software Foundation to extend indemnity coverage to individual Members (including our thousands of “Basic Members”) in certain cases, and to advance legal defense expenses to individual Members with surprisingly few restrictions.

Further, the Bylaws compel the Foundation to take out insurance to cover these requirements; however, insurance of this nature is not actually available for 501(c)(3) nonprofit corporations such as the Python Software Foundation to purchase, and thus it is impossible in practice to comply with this requirement.

In the unlikely but not impossible event of the Foundation being called upon to advance such expenses, the potential financial burden would be virtually unlimited, and there would be no recourse to insurance.

As this is an existential threat to the Foundation, the Board has agreed that it must immediately reduce the Foundation’s exposure, and has opted to exercise its ability to amend the Bylaws by a majority vote of the Board directors, rather than by putting it to a vote of the membership, as allowed by Bylaws Article XI.

Acting on legal advice, the full Board has voted unanimously to amend its Bylaws to no longer extend an offer to indemnify, advance legal expenses, or insure Members when they are not serving at the request of the Foundation. The amended Bylaws still allow for indemnification of a much smaller set of individuals acting on behalf of the PSF such as Board Members and officers, which is in line with standard nonprofit governance practices and for which we already hold appropriate insurance.

The full text of the changes can be viewed at https://github.com/python/psf-bylaws/compare/a35a607...298843b

These changes shall become effective on Saturday 10 August 2024, 15 days from the date of this notice.

Any questions about these changes may be sent to psf@python.org. We gladly welcome further suggestions or recommendations for future Bylaws amendments.

Thank you,

The PSF Board of Directors
Categories: FLOSS Project Planets

Python Software Foundation: Python’s Supportive and Welcoming Environment is Tightly Coupled to Its Progress

Fri, 2024-07-26 05:49
Python is as popular as it is today because we have gone above and beyond to make this a welcoming community. Being a friendly and supportive community is part of how we are perceived by the wider world and is integral to the wide popularity of Python. We won a “Wonderfully Welcoming Award” last year at GitHub Universe. Over and over again, the tech press refers to Python as a supportive community. We aren’t the fastest, the newest or the best-funded programming language, but we are the most welcoming and supportive. Our philosophy is a big part of why Python is a fantastic choice for not only new programmers, glue programmers, and folks who split their time between research and programming but for everyone who wants to be part of a welcoming community.

We believe to be “welcoming” means to do our best to provide all participants with a safe, civil, and respectful environment when they are engaging with our community - on our forums, at PyCon events, and other spaces that have committed to following our Code of Conduct. That kind of environment doesn’t happen by accident - a lot of people have worked hard over a long time to figure out the best ways to nurture this welcoming quality for the Python community. That work has included drafting and improving the Code of Conduct, crafting and implementing processes for enforcing it, and moderating the various online spaces where it applies. And most importantly the huge, collective effort of individuals across the community, each putting in consistent effort to show up in all the positive ways that make the Python community the warm and welcoming place that we know.

The recent slew of conversations, initially kicked off in response to a bylaws change proposal, has been pretty alienating for many members of our community. They haven’t all posted publicly to explain their feelings, but they have found other ways to let the PSF know how they are feeling.
  • After the conversation on PSF-Vote had gotten pretty ugly, forty-five people out of ~1000 unsubscribed. (That list has since been put on announce-only)
  • We received a lot of Code of Conduct reports or moderation requests about the PSF-vote mailing list and the discuss.python.org message board conversations. (Several reports have already been acted on or closed and the rest will be soon).
  • PSF staff received private feedback that the blanket statements about “neurodiverse people”, the bizarre motives ascribed to the people in charge of the PSF and various volunteers and the sideways comments about the kinds of people making reports were also very off-putting.
As an open source code community, we do most things out in the open which is a fantastic strategy for code. (Many eyes, shallow bugs, etc.) We also try to be transparent about what is going on here at the Foundation and are always working to improve visibility into our policies, current resource levels, spending priorities and aspirations. Sometimes staff and volunteers are a little too busy “doing the work" to “talk about the work” but we do our best to be responsive, especially in the areas that people want to know more about. That said, sometimes things do need to be kept confidential, for privacy, legal, or other good reasons.

Some examples:
  • Most Code of Conduct reports – Oftentimes, these reports have the potential to affect both the reporter and the reported person’s reputations and livelihoods so our practice is to keep them confidential when possible to protect everyone involved. Some of you have been here long enough to remember the incident at PyCon US in 2013, an example of the entire internet discussing a Code of Conduct violation that led to negative repercussions for everyone involved, but especially for the person who reported the behavior.
  • Legal advice and proceedings – It is an unfortunate fact of the world that the legal system(s) we operate under sometimes require us to keep secret information we might otherwise prefer to disclose, often because doing so could open us up to liability in a way that would create significant risk to the PSF or it could potentially put us in violation of laws or regulation. It’s our responsibility to follow legal guidance about how to protect the Foundation, our resources, and our mission in these situations.
  • Mental health, personal history, or disability status – Community members should not, for example, have to disclose their status as neurodivergent or share their history with abuse so that others can decide if they are allowed to be offended. Community members should also not be speculating about other individuals’ characteristics or experience in this regard.
We have a moral imperative – as one of the very best places to bring new people into tech and into open source – to keep being good at welcoming new people. If we do not rise and continue to rise every day to this task, then we are not fulfilling our own mission, “to support and facilitate the growth of a diverse and international community of Python programmers.” Technical skills are a game-changer for the people who acquire them and joining a vast global network of people with similar interests opens many doors. Behavior that contributes to a hostile environment around Python or throws up barriers and obstacles to those who would join the Python community must be addressed because it endangers what we have built here.

Part of the care-taking of a diverse community “where everyone feels welcome” sadly often means asking some people to leave – or at least take a break. This is known as the paradox of tolerance. We can not tolerate intolerance and we will not allow combative and aggressive behavior to ruin the experience in our spaces for everyone else. People do make honest mistakes and don’t always understand the impact that their words have had. All we ask is that as community members we all do our best to adhere to the Code of Conduct we’ve committed to as a community, and that we gracefully accept feedback when our efforts fall short. Sometimes that means learning that the words, assumptions or tone you’re using aren’t coming across the way you’ve intended. When a person’s words and actions repeatedly come in conflict with our community norms and cause harm, and that pattern hasn’t changed in response to feedback – then we have to ask people to take a break or as a last resort to leave the conversation.

Our forum, mailing lists and events will continue to be moderated. We want to thank everyone who contributed positively to the recent conversations and everyone who made the hard choice to write to us to point out off-putting, harmful, unwelcoming or offensive comments. We especially want to thank all the volunteers who serve on the Python Discourse moderation team and our Code of Conduct Working Group. We know it’s been a long couple of weeks, and although your work may occasionally be draining and unpleasant, it is also absolutely essential and endlessly appreciated by the vast majority of the community. Thank you for everything you do!


Sincerely,
Deb Nicholson
Dawn Wages
Tania Allard
KwonHan Bae
Kushal Das
Georgi Ker
Jannis Leidel
Cristián Maureira-Fredes
Christopher Neugebauer
Denny Perez
Cheuk Ting Ho
Simon Willison

Categories: FLOSS Project Planets

Talk Python to Me: #472: State of Flask and Pallets in 2024

Fri, 2024-07-26 04:00
Flask is one of the most important Python web frameworks and powers a bunch of the internet. David Lord, Flask's lead maintainer, is here to give us an update on the state of Flask and Pallets in 2024. If you care about where Flask is and where it's going, you'll definitely want to listen in.

Episode sponsors

  • Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • David on Mastodon: https://mas.to/@davidism
  • David on X: https://twitter.com/davidism
  • State of Pallets 2024 FlaskCon Talk: https://www.youtube.com/watch?v=TYeMf0bCbr8
  • FlaskCon: https://flaskcon.com/2024/
  • FlaskCon 2024 Talks: https://www.youtube.com/playlist?list=PL-MSuSC-Kjb6n0HsxU_knxCOLuToQm44z
  • Pallets Discord: https://discord.com/invite/pallets
  • Pallets Eco: https://github.com/pallets-eco
  • JazzBand: https://jazzband.co
  • Pallets GitHub Org: https://github.com/pallets
  • Jinja: https://github.com/pallets/jinja
  • Click: https://github.com/pallets/click
  • Werkzeug: https://github.com/pallets/werkzeug
  • MarkupSafe: https://github.com/pallets/markupsafe
  • ItsDangerous: https://github.com/pallets/itsdangerous
  • Quart: https://github.com/pallets/quart
  • pypistats: https://pypistats.org/packages/flask
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=EvNx5fwcib0
  • Episode transcripts: https://talkpython.fm/episodes/transcript/472/state-of-flask-and-pallets-in-2024

Stay in touch with us

  • Subscribe to us on YouTube: https://talkpython.fm/youtube
  • Follow Talk Python on Mastodon: @talkpython on fosstodon.org
  • Follow Michael on Mastodon: @mkennedy on fosstodon.org
Categories: FLOSS Project Planets

Real Python: Quiz: Getting Started With Testing in Python

Thu, 2024-07-25 08:00

In this quiz, you’ll test your understanding of testing your Python code.

Testing in Python is a huge topic and can come with a lot of complexity, but it doesn’t need to be hard. You can get started creating simple tests for your application in a few easy steps and then build on it from there.

With this quiz, you can check your understanding of the fundamentals of Python testing. Good luck!

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]


Real Python: Quiz: Python Basics: Lists and Tuples

Thu, 2024-07-25 08:00

In Python Basics: Lists and Tuples, you’ve met two new and important data structures:

  • Lists
  • Tuples

Both of these data types are sequences, meaning they are objects that contain other objects in a certain order. They each have some important distinguishing properties and come with their own set of methods for interacting with objects of each type.

In this quiz, you’ll test your knowledge of:

  • Creating lists and tuples
  • Indexing and slicing lists and tuples
  • Iterating over these containers
  • Understanding their differences, specifically the impact of mutability
  • Adding and removing items from a list

Then, you can move on to other Python Basics courses.
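If you want a quick warm-up before taking the quiz, the core difference between the two types is mutability. Here is a small, self-contained example (the variable names are my own, purely illustrative):

```python
# Lists are mutable: they can be changed in place.
colors = ["red", "green"]
colors.append("blue")       # grow the list
colors[0] = "crimson"       # replace an item by index
print(colors)               # ['crimson', 'green', 'blue']

# Tuples are immutable: item assignment raises TypeError.
point = (3, 4)
try:
    point[0] = 5
except TypeError:
    print("tuples can't be modified in place")

# Instead, you build a new tuple from the old one.
moved = (point[0] + 1, point[1])
print(moved)                # (4, 4)
```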



EuroPython Society: EuroPython 2024 Code of Conduct Transparency Report

Thu, 2024-07-25 02:00

The 2024 version of the EuroPython conference took place both online and in person in July 2024. This was the second conference under our new Code of Conduct (CoC), and we had Code of Conduct working group members continuously available both online and in person.

Reports

We had 4 Code of Conduct working group members continuously available both online and in person. Over the course of the conference the Code of Conduct team was made aware of the following issues:

  • A disabled person had requested reserved seating for talks, but when he arrived the first day, there was none. He reported this to a CoC member, who filed a report with Ops. It turned out that while the request had been gathered on the web form, there was no mechanism to get that information to the people involved. Once they were informed, the issue was quickly resolved, and the reporter expressed satisfaction with the way it was handled.
  • One person was uncomfortable with having their last name shown on Discord. They were informed that they could change that as soon as the registration bot ran, limiting the exposure to a minute or so, or that they could come to the registration desk for assistance. The report came via email and there was no response to the email suggesting those options.
  • An attendee reported that one talk's slides included a meme that seemed to reflect a racist trope. The CoC team reviewed that talk's slides, and agreed that the meme might be interpreted that way. A member of the CoC team contacted the presenter who immediately agreed to remove that meme before uploading the slides, and the video team was alerted to edit that meme out of the talk video before final publication.
  • There were multiple reports that the toilet signage was confusing and causing people to be uncomfortable with choosing a toilet. Once this was reported the signage was adjusted to make the gender designation visible and no further reports were received. It should be noted that none of the complaints objected to the text of the signs, just to the fact that covering of gender markers led to people entering a toilet they didn't want to.
  • The CoC team also were presented with a potential lightning talk topic that had caused complaints at another conference due to references to current wars that some viewers found disturbing. Since lightning talks are too short for content warnings to be effective, and since they are not reviewed in any detail by the programme committee, the CoC team counselled the prospective presenter against using the references that had been problematic at a prior conference. Given that advice, the presenter elected not to submit that topic.

PyPy: Abstract interpretation in the Toy Optimizer

Wed, 2024-07-24 10:48

This is a cross-post from Max Bernstein from his excellent blog, where he writes about programming languages, compilers, optimizations, and virtual machines. He's looking for a (dynamic language runtime or compiler related) job too.

CF Bolz-Tereick wrote some excellent posts in which they introduce a small IR and optimizer and extend it with allocation removal. We also did a live stream together in which we did some more heap optimizations.

In this blog post, I'm going to write a small abstract interpreter for the Toy IR and then show how we can use it to do some simple optimizations. It assumes that you are familiar with the little IR, which I have reproduced unchanged in a GitHub Gist.

Abstract interpretation is a general framework for efficiently computing properties that must be true for all possible executions of a program. It's a widely used approach both in compiler optimizations and in offline static analysis for finding bugs. I'm writing this post to pave the way for CF's next post on proving abstract interpreters correct for range analysis and known bits analysis inside PyPy.

Before we begin, I want to note a couple of things:

  • The Toy IR is in SSA form, which means that every variable is defined exactly once. This means that abstract properties of each variable are easy to track.
  • The Toy IR represents a linear trace without control flow, meaning we won't talk about meet/join or fixpoints. They only make sense if the IR has a notion of conditional branches or back edges (loops).

Alright, let's get started.

Welcome to abstract interpretation

Abstract interpretation means a couple different things to different people. There's rigorous mathematical formalism thanks to Patrick and Radhia Cousot, our favorite power couple, and there's also sketchy hand-wavy stuff like what will follow in this post. In the end, all people are trying to do is reason about program behavior without running it.

In particular, abstract interpretation is an over-approximation of the behavior of a program. Correctly implemented abstract interpreters never lie, but they might be a little bit pessimistic. This is because instead of using real values and running the program---which would produce a concrete result and some real-world behavior---we "run" the program with a parallel universe of abstract values. This abstract run gives us information about all possible runs of the program.[1]

Abstract values always represent sets of concrete values. Instead of literally storing a set (in the world of integers, for example, it could get pretty big...there are a lot of integers), we group them into a finite number of named subsets.[2]

Let's learn a little about abstract interpretation with an example program and example abstract domain. Here's the example program:

v0 = 1
v1 = 2
v2 = add(v0, v1)

And our abstract domain is "is the number positive" (where "positive" means nonnegative, but I wanted to keep the words distinct):

          top
         /   \
  positive   negative
         \   /
         bottom

The special top value means "I don't know" and the special bottom value means "empty set" or "unreachable". The positive and negative values represent the sets of all positive and negative numbers, respectively.

We initialize all the variables v0, v1, and v2 to bottom and then walk our IR, updating our knowledge as we go.

# here
v0:bottom = 1
v1:bottom = 2
v2:bottom = add(v0, v1)

In order to do that, we have to have transfer functions for each operation. For constants, the transfer function is easy: determine if the constant is positive or negative. For other operations, we have to define a function that takes the abstract values of the operands and returns the abstract value of the result.

In order to be correct, transfer functions for operations have to be compatible with the behavior of their corresponding concrete implementations. You can think of them having an implicit universal quantifier forall in front of them.

Let's step through the constants at least:

v0:positive = 1
v1:positive = 2
# here
v2:bottom = add(v0, v1)

Now we need to figure out the transfer function for add. It's kind of tricky right now because we haven't specified our abstract domain very well. I keep saying "numbers", but what kinds of numbers? Integers? Real numbers? Floating point? Some kind of fixed-width bit vector (int8, uint32, ...) like an actual machine "integer"?

For this post, I am going to use the mathematical definition of integer, which means that the values are not bounded in size and therefore do not overflow. Actual hardware memory constraints aside, this is kind of like a Python int.

So let's look at what happens when we add two abstract numbers:

   add     | top     positive  negative  bottom
-----------+-----------------------------------
  top      | top     top       top       bottom
  positive | top     positive  top       bottom
  negative | top     top       negative  bottom
  bottom   | bottom  bottom    bottom    bottom

As an example, let's try to add two numbers a and b, where a is positive and b is negative. We don't know anything about their values other than their signs. They could be 5 and -3, where the result is 2, or they could be 1 and -100, where the result is -99. This is why we can't say anything about the result of this operation and have to return top.

The short of this table is that we only really know the result of an addition if both operands are positive or both operands are negative. Thankfully, in this example, both operands are known positive. So we can learn something about v2:

v0:positive = 1
v1:positive = 2
v2:positive = add(v0, v1)
# here

This may not seem useful in isolation, but analyzing more complex programs even with this simple domain may be able to remove checks such as if (v2 < 0) { ... }.
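That transfer table translates directly into code. Here is a minimal sketch of the sign domain in Python (the Sign class and its constant names are mine, written in the same style as the Parity class that appears later in the post):

```python
class Sign:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

    def add(self, other):
        # bottom is contagious: adding to the empty set yields the empty set
        if self is BOTTOM or other is BOTTOM:
            return BOTTOM
        # matching signs survive addition...
        if self is POSITIVE and other is POSITIVE:
            return POSITIVE
        if self is NEGATIVE and other is NEGATIVE:
            return NEGATIVE
        # ...but mixed or unknown signs tell us nothing
        return TOP


TOP = Sign("top")
POSITIVE = Sign("positive")
NEGATIVE = Sign("negative")
BOTTOM = Sign("bottom")
```

With this, POSITIVE.add(POSITIVE) is POSITIVE, which is exactly the fact the analysis records for v2, while POSITIVE.add(NEGATIVE) collapses to TOP just like the 5 + (-3) versus 1 + (-100) example.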

Let's take a look at another example using a sample absval (absolute value) IR operation:

v0 = getarg(0)
v1 = getarg(1)
v2 = absval(v0)
v3 = absval(v1)
v4 = add(v2, v3)
v5 = absval(v4)

Even though we have no constant/concrete values, we can still learn something about the states of values throughout the program. Since we know that absval always returns a positive number, we learn that v2, v3, and v4 are all positive. This means that we can optimize out the absval operation on v5:

v0:top = getarg(0)
v1:top = getarg(1)
v2:positive = absval(v0)
v3:positive = absval(v1)
v4:positive = add(v2, v3)
v5:positive = v4

Other interesting lattices include:

  • Constants (where the middle row is pretty wide)
  • Range analysis (bounds on min and max of a number)
  • Known bits (using a bitvector representation of a number, which bits are always 0 or 1)

For the rest of this blog post, we are going to do a very limited version of "known bits", called parity. This analysis only tracks the least significant bit of a number, which indicates if it is even or odd.

Parity

The lattice is pretty similar to the positive/negative lattice:

        top
       /   \
    even    odd
       \   /
       bottom

Let's define a data structure to represent this in Python code:

class Parity:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

And instantiate the members of the lattice:

TOP = Parity("top")
EVEN = Parity("even")
ODD = Parity("odd")
BOTTOM = Parity("bottom")

Now let's write a forward flow analysis of a basic block using this lattice. We'll do that by assuming that a method on Parity is defined for each IR operation. For example, Parity.add, Parity.lshift, etc.

def analyze(block: Block) -> None:
    parity = {v: BOTTOM for v in block}

    def parity_of(value):
        if isinstance(value, Constant):
            return Parity.const(value)
        return parity[value]

    for op in block:
        transfer = getattr(Parity, op.name)
        args = [parity_of(arg.find()) for arg in op.args]
        parity[op] = transfer(*args)

For every operation, we compute the abstract value---the parity---of the arguments and then call the corresponding method on Parity to get the abstract result.

We need to special case Constants due to a quirk of how the Toy IR is constructed: the constants don't appear in the instruction stream and instead are free-floating.

Let's start by looking at the abstraction function for concrete values---constants:

class Parity:
    # ...

    @staticmethod
    def const(value):
        if value.value % 2 == 0:
            return EVEN
        else:
            return ODD

Seems reasonable enough. Let's pause on operations for a moment and consider an example program:

v0 = getarg(0)
v1 = getarg(1)
v2 = lshift(v0, 1)
v3 = lshift(v1, 1)
v4 = add(v2, v3)
v5 = dummy(v4)

This function (which is admittedly a little contrived) takes two inputs, shifts each left by one bit, and adds the results. It then passes the sum into a dummy function, which you can think of as "return" or "escape".

To do some abstract interpretation on this program, we'll need to implement the transfer functions for lshift and add (dummy will just always return TOP). We'll start with add. Remember that adding two even numbers returns an even number, adding two odd numbers returns an even number, and mixing even and odd returns an odd number.

class Parity:
    # ...

    def add(self, other):
        if self is BOTTOM or other is BOTTOM:
            return BOTTOM
        if self is TOP or other is TOP:
            return TOP
        if self is EVEN and other is EVEN:
            return EVEN
        if self is ODD and other is ODD:
            return EVEN
        return ODD

We also need to fill in the other cases where the operands are top or bottom. In this case, they are both "contagious"; if either operand is bottom, the result is as well. If neither is bottom but either operand is top, the result is as well.

Now let's look at lshift. Shifting any number left by a non-zero number of bits will always result in an even number, but we need to be careful about the zero case! Shifting by zero doesn't change the number at all. Unfortunately, since our lattice has no notion of zero, we have to over-approximate here:

class Parity:
    # ...

    def lshift(self, other):
        # self << other
        if other is ODD:
            return EVEN
        return TOP

This means that we will miss some opportunities to optimize, but it's a tradeoff that's just part of the game. (We could also add more elements to our lattice, but that's a topic for another day.)
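One nice property of this setup is that we can check the transfer functions against the concrete operators by brute force. The snippet below is my own sanity check, not part of the Toy IR; it restates the Parity pieces so it runs standalone:

```python
class Parity:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

    def add(self, other):
        if self is BOTTOM or other is BOTTOM:
            return BOTTOM
        if self is TOP or other is TOP:
            return TOP
        if self is EVEN and other is EVEN:
            return EVEN
        if self is ODD and other is ODD:
            return EVEN
        return ODD

    def lshift(self, other):
        # self << other
        if other is ODD:
            return EVEN
        return TOP


TOP = Parity("top")
EVEN = Parity("even")
ODD = Parity("odd")
BOTTOM = Parity("bottom")


def abstract(n):
    # abstraction function: map a concrete integer to its parity
    return EVEN if n % 2 == 0 else ODD


def contains(parity, n):
    # does the abstract value describe the concrete number?
    return parity is TOP or parity is abstract(n)


# Soundness: the abstract result must cover every concrete result.
for a in range(-8, 8):
    for b in range(8):  # shift amounts must be non-negative
        assert contains(abstract(a).add(abstract(b)), a + b)
        assert contains(abstract(a).lshift(abstract(b)), a << b)
print("transfer functions are sound on the sampled range")
```

If lshift returned EVEN unconditionally, the b = 0 case would trip the assertion, which is exactly the over-approximation discussed above.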

Now, if we run our abstract interpretation, we'll collect some interesting properties about the program. If we temporarily hack on the internals of bb_to_str, we can print out parity information alongside the IR operations:

v0:top = getarg(0)
v1:top = getarg(1)
v2:even = lshift(v0, 1)
v3:even = lshift(v1, 1)
v4:even = add(v2, v3)
v5:top = dummy(v4)

This is pretty awesome, because we can see that v4, the result of the addition, is always even. Maybe we can do something with that information.

Optimization

One way that a program might check if a number is odd is by checking the least significant bit. This is a common pattern in C code, where you might see code like y = x & 1. Let's introduce a bitand IR operation that acts like the & operator in C/Python. Here is an example of use of it in our program:

v0 = getarg(0)
v1 = getarg(1)
v2 = lshift(v0, 1)
v3 = lshift(v1, 1)
v4 = add(v2, v3)
v5 = bitand(v4, 1)  # new!
v6 = dummy(v5)

We'll hold off on implementing the transfer function for it---that's left as an exercise for the reader---and instead do something different.

Instead, we'll see if we can optimize operations of the form bitand(X, 1). If we statically know the parity as a result of abstract interpretation, we can replace the bitand with a constant 0 or 1.

We'll first modify the analyze function (and rename it) to return a new Block containing optimized instructions:

def simplify(block: Block) -> Block:
    parity = {v: BOTTOM for v in block}

    def parity_of(value):
        if isinstance(value, Constant):
            return Parity.const(value)
        return parity[value]

    result = Block()
    for op in block:
        # TODO: Optimize op
        # Emit
        result.append(op)
        # Analyze
        transfer = getattr(Parity, op.name)
        args = [parity_of(arg.find()) for arg in op.args]
        parity[op] = transfer(*args)
    return result

We're approaching this the way that PyPy does things under the hood, which is all in roughly a single pass. It tries to optimize an instruction away, and if it can't, it copies it into the new block.

Now let's add in the bitand optimization. It's mostly some gross-looking pattern matching that checks if the right hand side of a bitwise and operation is 1 (TODO: the left hand side, too). CF had some neat ideas on how to make this more ergonomic, which I might save for later.[3]

Then, if we know the parity, optimize the bitand into a constant.

def simplify(block: Block) -> Block:
    parity = {v: BOTTOM for v in block}

    def parity_of(value):
        if isinstance(value, Constant):
            return Parity.const(value)
        return parity[value]

    result = Block()
    for op in block:
        # Try to simplify
        if isinstance(op, Operation) and op.name == "bitand":
            arg = op.arg(0)
            mask = op.arg(1)
            if isinstance(mask, Constant) and mask.value == 1:
                if parity_of(arg) is EVEN:
                    op.make_equal_to(Constant(0))
                    continue
                elif parity_of(arg) is ODD:
                    op.make_equal_to(Constant(1))
                    continue
        # Emit
        result.append(op)
        # Analyze
        transfer = getattr(Parity, op.name)
        args = [parity_of(arg.find()) for arg in op.args]
        parity[op] = transfer(*args)
    return result

Remember: because we use union-find to rewrite instructions in the optimizer (make_equal_to), later uses of the same instruction get the new optimized version "for free" (find).
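If the union-find machinery is unfamiliar, here is a minimal sketch of the idea (a deliberate simplification of my own; the real Toy IR classes carry operation names, arguments, and more):

```python
class Value:
    def __init__(self, name):
        self.name = name
        self._forwarded = None  # set when this value gets rewritten

    def find(self):
        # chase the forwarding pointers to the current representative
        v = self
        while v._forwarded is not None:
            v = v._forwarded
        return v

    def make_equal_to(self, other):
        # from now on, find() on this value answers `other` instead
        self.find()._forwarded = other


v5 = Value("v5")
zero = Value("0")
v5.make_equal_to(zero)
assert v5.find() is zero  # later uses see the optimized value "for free"
```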

Let's see how it works on our IR:

v0 = getarg(0)
v1 = getarg(1)
v2 = lshift(v0, 1)
v3 = lshift(v1, 1)
v4 = add(v2, v3)
v6 = dummy(0)

Hey, neat! bitand disappeared and the argument to dummy is now the constant 0 because we know the lowest bit.

Wrapping up

Hopefully you have gained a little bit of an intuitive understanding of abstract interpretation. Last year, being able to write some code made me more comfortable with the math. Now being more comfortable with the math is helping me write the code. It's a nice upward spiral.

The two abstract domains we used in this post are simple and not very useful in practice, but it's possible to get very far using slightly more complicated abstract domains. Common domains include: constant propagation, type inference, range analysis, effect inference, liveness, etc. For example, here is a sample lattice for constant propagation:

"-inf"; bottom -> "-2"; bottom -> "-1"; bottom -> 0; bottom -> 1; bottom -> 2; bottom -> "+inf"; "-inf" -> negative; "-2" -> negative; "-1" -> negative; 0 -> top; 1 -> nonnegative; 2 -> nonnegative; "+inf" -> nonnegative; negative -> nonzero; nonnegative -> nonzero; nonzero->top; {rank=same; "-inf"; "-2"; "-1"; 0; 1; 2; "+inf"} {rank=same; nonnegative; negative;} } -->

It has multiple levels to indicate more and less precision. For example, you might learn that a variable is either 1 or 2 and be able to encode that as nonnegative instead of just going straight to top.
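To make the "multiple levels" idea concrete, here is a hedged sketch (my own encoding, not code from the post) of abstraction and join over that constant-propagation lattice, in which 0 sits directly below top:

```python
# Each non-constant element's immediate parent in the lattice.
PARENT = {
    "negative": "nonzero",
    "nonnegative": "nonzero",
    "nonzero": "top",
}


def ancestors(x):
    # the chain of lattice elements from x (inclusive) up to top
    chain = [x]
    if isinstance(x, int):
        if x < 0:
            chain.append("negative")
        elif x > 0:
            chain.append("nonnegative")
        else:
            chain.append("top")  # 0 abstracts straight to top here
            return chain
        x = chain[-1]
    while x != "top":
        x = PARENT[x]
        chain.append(x)
    return chain


def join(a, b):
    # least element that sits above both a and b
    above_a = ancestors(a)
    for x in ancestors(b):
        if x in above_a:
            return x
    return "top"
```

So join(1, 2) yields "nonnegative" rather than jumping straight to "top", which is exactly the extra precision the intermediate levels buy.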

Check out some real-world abstract interpretation in open source projects:

If you have some readable examples, please share them so I can add them.

Acknowledgements

Thank you to CF Bolz-Tereick for the toy optimizer and helping edit this post!

  1. In the words of abstract interpretation researchers Vincent Laviron and Francesco Logozzo in their paper Refining Abstract Interpretation-based Static Analyses with Hints (APLAS 2009):

    The three main elements of an abstract interpretation are: (i) the abstract elements ("which properties am I interested in?"); (ii) the abstract transfer functions ("which is the abstract semantics of basic statements?"); and (iii) the abstract operations ("how do I combine the abstract elements?").

    We don't have any of these "abstract operations" in this post because there's no control flow but you can read about them elsewhere! 

  2. These abstract values are arranged in a lattice, a mathematical structure with several properties; the most important are that it has a top, a bottom, a partial order, and a meet operation, and that values can only move in one direction on the lattice.

    Using abstract values from a lattice promises two things:

    • The analysis will terminate
    • The analysis will be correct for any run of the program, not just one sample run

  3. Something about __match_args__ and @property... 


Real Python: Hugging Face Transformers: Leverage Open-Source AI in Python

Wed, 2024-07-24 10:00

Transformers is a powerful Python library created by Hugging Face that allows you to download, manipulate, and run thousands of pretrained, open-source AI models. These models cover multiple tasks across modalities like natural language processing, computer vision, audio, and multimodal learning. Using pretrained open-source models can reduce costs, save the time needed to train models from scratch, and give you more control over the models you deploy.

In this tutorial, you’ll learn how to:

  • Navigate the Hugging Face ecosystem
  • Download, run, and manipulate models with Transformers
  • Speed up model inference with GPUs

Throughout this tutorial, you’ll gain a conceptual understanding of Hugging Face’s AI offerings and learn how to work with the Transformers library through hands-on examples. When you finish, you’ll have the knowledge and tools you need to start using models for your own use cases. Before starting, you’ll benefit from having an intermediate understanding of Python and popular deep learning libraries like PyTorch and TensorFlow.

Get Your Code: Click here to download the free sample code that shows you how to use Hugging Face Transformers to leverage open-source AI in Python.

Take the Quiz: Test your knowledge with our interactive “Hugging Face Transformers” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

Hugging Face Transformers

In this quiz, you'll test your understanding of the Hugging Face Transformers library. This library is a popular choice for working with transformer models in natural language processing tasks, computer vision, and other machine learning applications.

The Hugging Face Ecosystem

Before using Transformers, you’ll want to have a solid understanding of the Hugging Face ecosystem. In this first section, you’ll briefly explore everything that Hugging Face offers with a particular emphasis on model cards.

Exploring Hugging Face

Hugging Face is a hub for state-of-the-art AI models. It’s primarily known for its wide range of open-source transformer-based models that excel in natural language processing (NLP), computer vision, and audio tasks. The platform offers several resources and services that cater to developers, researchers, businesses, and anyone interested in exploring AI models for their own use cases.

There’s a lot you can do with Hugging Face, but the primary offerings can be broken down into a few categories:

  • Models: Hugging Face hosts a vast repository of pretrained AI models that are readily accessible and highly customizable. This repository is called the Model Hub, and it hosts models covering a wide range of tasks, including text classification, text generation, translation, summarization, speech recognition, image classification, and more. The platform is community-driven and allows users to contribute their own models, which facilitates a diverse and ever-growing selection.

  • Datasets: Hugging Face has a library of thousands of datasets that you can use to train, benchmark, and enhance your models. These range from small-scale benchmarks to massive, real-world datasets that encompass a variety of domains, such as text, image, and audio data. Like the Model Hub, 🤗 Datasets supports community contributions and provides the tools you need to search, download, and use data in your machine learning projects.

  • Spaces: Spaces allows you to deploy and share machine learning applications directly on the Hugging Face website. This service supports a variety of frameworks and interfaces, including Streamlit, Gradio, and Jupyter notebooks. It is particularly useful for showcasing model capabilities, hosting interactive demos, or for educational purposes, as it allows you to interact with models in real time.

  • Paid offerings: Hugging Face also offers several paid services for enterprises and advanced users. These include the Pro Account, the Enterprise Hub, and Inference Endpoints. These solutions offer private model hosting, advanced collaboration tools, and dedicated support to help organizations scale their AI operations effectively.

These resources empower you to accelerate your AI projects and encourage collaboration and innovation within the community. Whether you’re a novice looking to experiment with pretrained models, or an enterprise seeking robust AI solutions, Hugging Face offers tools and platforms that cater to a wide range of needs.

This tutorial focuses on Transformers, a Python library that lets you run just about any model in the Model Hub. Before using Transformers, you’ll need to understand what model cards are, and that’s what you’ll do next.

Understanding Model Cards

Model cards are the core components of the Model Hub, and you’ll need to understand how to search and read them to use models in Transformers. Model cards are nothing more than files that accompany each model to provide useful information. You can search for the model card you’re looking for on the Models page:

Hugging Face Models page

On the left side of the Models page, you can search for model cards based on the task you’re interested in. For example, if you’re interested in zero-shot text classification, you can click the Zero-Shot Classification button under the Natural Language Processing section:

Hugging Face Models page filtered for zero-shot text classification models

In this search, you can see 266 different models for zero-shot text classification, a paradigm where language models assign labels to text without explicit training or seeing any examples. In the upper-right corner, you can sort the search results based on model likes, downloads, creation dates, updated dates, and popularity trends.

Each model card button tells you the model’s task, when it was last updated, and how many downloads and likes it has. When you click a model card button, say the one for the facebook/bart-large-mnli model, the model card will open and display all of the model’s information:

A Hugging Face model card

Even though a model card can display just about anything, Hugging Face has outlined the information that a good model card should provide. This includes detailed information about the model, its uses and limitations, the training parameters and experiment details, the dataset used to train the model, and the model’s evaluation performance.

A high-quality model card also includes metadata such as the model’s license, references to the training data, and links to research papers that describe the model in detail. In some model cards, you’ll also get to tinker with a deployed instance of the model via the Inference API. You can see an example of this in the facebook/bart-large-mnli model card:

Tinker with Hugging Face models using the Inference API Read the full article at https://realpython.com/huggingface-transformers/ »



Real Python: Quiz: Logging in Python

Wed, 2024-07-24 08:00

In this quiz, you’ll test your understanding of Python’s logging module.

Logging is a very useful tool in a programmer’s toolbox. It can help you develop a better understanding of the flow of a program and discover scenarios that you might not have thought of while developing.

Logs provide developers with an extra set of eyes that are constantly looking at the flow an application is going through.

They can store information, like which user or IP accessed the application. If an error occurs, then they can provide more insights than a stack trace by telling you what the state of the program was before it arrived and the line of code where the error occurred.
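A minimal example of the kind of setup the quiz covers (the logger name and messages are mine, purely illustrative):

```python
import logging

# Configure the root logger once, at application startup.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("myapp")
logger.info("user %s logged in from %s", "alice", "203.0.113.7")

try:
    1 / 0
except ZeroDivisionError:
    # logger.exception logs at ERROR level and appends the traceback,
    # giving you the program state alongside the failing line.
    logger.exception("unexpected error while computing ratio")
```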



Real Python: Quiz: Python Protocols: Leveraging Structural Subtyping

Wed, 2024-07-24 08:00

Test your understanding of how to create and use Python protocols while providing type hints for your functions, variables, classes, and methods.

Take this quiz after reading our Python Protocols: Leveraging Structural Subtyping tutorial.



Real Python: Quiz: Hugging Face Transformers

Wed, 2024-07-24 08:00

In this quiz, you’ll test your understanding of Hugging Face Transformers. This library is a popular choice for working with transformer models in natural language processing tasks, computer vision, and other machine learning applications.



Django Weblog: Django 5.1 release candidate 1 released

Wed, 2024-07-24 06:51

Django 5.1 release candidate 1 is the final opportunity for you to try out a kaleidoscope of improvements before Django 5.1 is released.

The release candidate stage marks the string freeze and the call for translators to submit translations. Provided no major bugs are discovered that can't be solved in the next two weeks, Django 5.1 will be released on or around August 7. Any delays will be communicated on the Django forum.

Please use this opportunity to help find and fix bugs (which should be reported to the issue tracker). You can grab a copy of the release candidate package from our downloads page or on PyPI.

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E


scikit-learn: Interview with Adam Li, scikit-learn Team Member

Tue, 2024-07-23 20:00
Author: Reshama Shaikh , Adam Li

BIO: Adam is currently a Postdoctoral Research Scientist at Columbia University in the Causal Artificial Intelligence Lab, directed by Dr. Elias Bareinboim. He is an NSF-funded Computing Innovation Research Fellow. He did his PhD in biomedical engineering, specializing in computational neuroscience and machine learning at Johns Hopkins University working with Dr. Sridevi V. Sarma in the Neuromedical Control Systems group. He also jointly obtained a MS in Applied Mathematics and Statistics with a focus in statistical learning theory, optimization and matrix analysis. He was fortunate to be a NSF-GRFP fellow, Whitaker International Fellow, Chateaubriand Fellow and ARCS Chapter Scholar during his time at JHU. Adam officially joined the scikit-learn team as a maintainer in July 2024.

Link to scikit-learn contributions (issues, pull requests):

  1. Tell us about yourself.

    I currently live in New York City, where I work on theoretical and applied AI research through the lens of causal inference, statistical modeling, dynamical systems and signal processing. My current research is focused on telling a causal story, specifically in the case where one has multiple distributions of data from the same causal system. For example, one may have access to brain recordings from monkeys and humans. Given these heterogeneous datasets, I am interested in answering: what causal relationships can we learn? This is known as the causal discovery problem, where given data, one attempts to learn what causes what. Another problem that I work on that is highly relevant to generative AI is the problem of causal representation learning. Here, I develop theory and train deep neural networks to understand causality among latent factors. Specifically, we demonstrate how to leverage multiple datasets and a causal neural network to generate data that is causally realistic. This can enable more robust data generation from general latent variable models.

  2. How did you first become involved in open source and scikit-learn?

    I first got involved in open source as a user. I was making the switch from Matlab to Python and started using packages like numpy and scipy pretty regularly. In my PhD research, I dealt with a lot of electrophysiological data (i.e. EEG brain recordings). I was writing hundreds of lines of code to load and preprocess data, and it was always changing based on different constraints. That was when I discovered MNE-BIDS, a Python package within the MNE framework for reading and writing brain recording data in a structured format. This changed my life because now my preprocessing and data loading code was a few lines of code that adhered to an open standard tested by thousands of researchers. I realized the value of open source, and began contributing in my spare time.

  3. We would love to learn of your open source journey.

    I first started contributing to open source in the MNE organization. This package implements data structures for the processing and analysis of neural recording data (e.g. MEG, EEG, iEEG data). I contributed over 70 pull requests to the MNE-BIDS package, and was subsequently invited to be a maintainer for MNE-BIDS and MNE-Python. Later on, I participated in Google Summer of Code to port the connectivity submodule of MNE-Python to a new package, known as MNE-Connectivity. I added new data structures and algorithms to improve feature development for connectivity analysis of neural recording data. Later still, I worked with a team on porting a neural network architecture from Matlab to the MNE framework to automatically classify ICA-derived components. This became known as MNE-ICALabel. These experiences taught me how to work in the large, asynchronous team environment that is common in OSS, and how to begin contributing to an OSS project. This led me to scikit-learn.

    I first got involved in scikit-learn as a user who was heavily interested in the decision tree models in scikit-learn (random forests, randomized trees). I was interested in contributing a new oblique decision tree model that was a generalization of the existing random forest model. However, the code could not easily be added to scikit-learn, and the decision on whether to include it is still open. Throughout this process, I learned about the challenges and intricacies of maintaining an OSS project as large as scikit-learn. It is not trivial to simply add new features to a large OSS project, because code comes with a maintenance cost and should fit with the current internal design. At that point in time, there were very few maintainers who were able to maintain the tree submodule, and as such new features are included conservatively.

    I was eager to improve the project to enable more exciting features for the community, so I began contributing to scikit-learn, starting with smaller issues such as documentation improvements and minor bug fixes to get acquainted with the codebase. I also refactored various Cython code to begin upgrading the codebase, especially in the tree submodule. Throughout this process, I identified other projects the maintainer team was working on, and contributed there as well. For example, I added metadata routing to a variety of different functions and estimators in scikit-learn. I also began reviewing PRs for the tree submodule and metadata routing, where I had expertise. In addition, I added missing-value support for extremely randomized tree models (called ExtraTrees in scikit-learn). This allows users to pass in data that contains missing values (encoded as np.nan) to ExtraTrees. Around this time, I was invited to join the maintainer team of scikit-learn. More recently, I have taken on the project of adding categorical data support to the decision tree models, which will make random forests and extremely randomized tree models more performant and capable of handling real-world settings where categorical data is common.

  4. To which OSS projects and communities do you contribute?

    I currently primarily contribute to scikit-learn, PyWhy (a community for causal inference in Python), and also develop my own OSS project: treeple. Treeple is an exciting package that implements different decision tree models beyond those offered in scikit-learn with an efficient Cython implementation stemming from the scikit-learn tree internals.

  5. What do you find alluring about OSS?

    OSS is so exciting because of the impact it has. Everyone from private projects to other OSS projects will use OSS. Any fixes to documentation, performance improvements, or new features can impact the workflows of potentially millions of people. This is what makes contributing to OSS so exciting. Moreover, this impact ensures that best practices are usually carried out in these projects, and it’s a great playground to learn from the best while giving back to the larger community.

  6. What pain points do you observe in community-led OSS?

    Right now, community-led OSS moves very slowly in most places. This is for a number of very good reasons: i) not releasing buggy features that may impact millions of people, and ii) backwards compatibility. One of the challenges of maintaining a high-quality OSS project is that you would like to satisfy your users, who may all utilize different components of the project from different versions. As such, many community-led OSS projects take a conservative approach when implementing new features and new ideas. However, there may be many exciting, better features that are already known by the community but still lack an OSS implementation.

    I think this can be partially solved by increased funding for OSS, so that OSS maintainers and developers are able to dedicate more time to maintaining and improving their projects. In addition, I think this can be improved if more developers in the community contribute to these OSS projects. I hope, though, that I have convinced you that contributing to OSS is impactful and highly educational.

  7. If we discuss how far OSS has evolved in 10 years, what would you like to see happen?

    I think more interoperability and integrated workflows will make projects that utilize OSS more streamlined and efficient. For example, right now there are different array libraries (e.g. numpy, cupy, xarray, pytorch, etc.), which all support some manner of an n-dimensional array, but with slightly different APIs. This makes it very painful to transition across libraries that use different arrays. In addition, there are multiple dataframe libraries, such as pandas and polars, and the same problem of API consistency arises there.

    Some progress has been made on the Array API front to allow different array libraries to serve as backends given a common API. This will enable GPU acceleration for free, without a single code change, which is great! It will be exciting because users will eventually only have to write code one way, and can then leverage whichever array/dataframe library has the right trade-offs for their use case.
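The idea behind an array-agnostic API can be sketched as follows. This is an illustrative duck-typing example (not the official Array API standard, and `standardize` is a hypothetical helper): by only calling methods the array object itself provides, the same code can in principle run against any library whose arrays mimic this slice of the NumPy API.

```python
import numpy as np


def standardize(x):
    # Only use methods the array object itself provides, so the same
    # code could run against other libraries that mimic this part of
    # the NumPy API -- the spirit of the Array API effort.
    return (x - x.mean()) / x.std()


result = standardize(np.array([1.0, 2.0, 3.0]))
print(result)
```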

  8. What are your hobbies, outside of work and open source?

    I enjoy running, trying new restaurants and bars, cooking and reading. I’m currently training for a half-marathon, where my goal is to run under 8 minutes per mile. I’m also trying to perfect a salad with an Asian-themed dressing. In a past life, I was a bboy (breakdancer) for ten years, until I stopped in graduate school because I got busy (and old).

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #639 (July 23, 2024)

Tue, 2024-07-23 15:30

#639 – JULY 23, 2024
View in Browser »

Asyncio gather() Handle Exceptions

The asyncio.gather() function takes an optional argument return_exceptions which changes the behavior of the coroutine when an error occurs. This article shows you how to use it.
JASON BROWNLEE
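The behavior the article covers can be sketched in a few lines (coroutine names `ok` and `boom` are illustrative): with return_exceptions=True, gather() collects exceptions into the results list instead of raising the first one.

```python
import asyncio


async def ok():
    return "ok"


async def boom():
    raise ValueError("boom")


async def main():
    # With return_exceptions=True, exceptions are returned in the
    # results list (in order) instead of being raised and cancelling
    # the remaining awaitables.
    return await asyncio.gather(ok(), boom(), return_exceptions=True)


results = asyncio.run(main())
print(results)
```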

Python Protocols: Leveraging Structural Subtyping

In this tutorial, you’ll learn about Python’s protocols and how they can help you get the most out of using Python’s type hint system and static type checkers.
REAL PYTHON
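A minimal sketch of structural subtyping (the `Quacker` and `Duck` names are illustrative, not from the tutorial): a class satisfies a Protocol by having the right methods, without inheriting from it.

```python
from typing import Protocol


class Quacker(Protocol):
    def quack(self) -> str: ...


class Duck:
    # Duck never inherits from Quacker; having a matching quack()
    # method is enough for static type checkers to accept it.
    def quack(self) -> str:
        return "quack"


def speak(animal: Quacker) -> str:
    return animal.quack()


print(speak(Duck()))
```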

Prod Alerts? You Should be Autoscaling

Let Judoscale solve your scaling issues. We support Django, Flask, and FastAPI, and we also autoscale your Celery and RQ task queues. Traffic spike? Scaled up. Quiet night? Scaled down. Work queue backlog? No problem →
JUDOSCALE sponsor

Trying Out Free-Threaded Python on macOS

Free-threaded mode is available as a compile option in the beta of Python 3.13. Simon decided to try it out on his Mac.
SIMON WILLISON

Quiz: Python’s Built-in Functions: A Complete Exploration

Take this quiz to test your knowledge about the available built-in functions in Python. By taking this quiz, you’ll deepen your understanding of how to use these functions and the common programming problems they cover, from mathematical computations to Python-specific features.
REAL PYTHON

Python 3.13.0 Beta 4 Released

CPYTHON DEV BLOG

Articles & Tutorials

Exercises Course: Introduction to Web Scraping With Python

In this course, you’ll practice the main steps of the web scraping process. You’ll write a script that uses Python’s requests library to scrape and parse data from a website. You’ll also interact with HTML forms using tools like Beautiful Soup and Mechanical Soup to extract specific information.
REAL PYTHON course

When Generators Get Cleaned Up

When playing with generators in asynchronous code, Sam ran into some things he didn’t quite expect with how and when resources get cleaned up. This article shows you what you can and can’t control, and in some cases whether you should.
SAM EISENHANDLER
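The core mechanism the article explores can be shown with a small sketch (the `events` list is just for demonstration): closing a generator raises GeneratorExit at the paused yield, which runs any finally block.

```python
events = []


def numbers():
    try:
        yield 1
        yield 2
    finally:
        # Runs when the generator is exhausted *or* cleaned up early.
        events.append("cleaned up")


g = numbers()
first = next(g)
g.close()  # raises GeneratorExit at the paused yield; finally runs now
print(first, events)
```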

Posit Connect - Help Your Data Science Team Share and Collaborate

Tired of emailing files around & trying to use general-purpose collaboration tools? Posit Connect makes it easy to share, collaborate, & get feedback on your data science work including Jupyter notebooks, Streamlit, & other analytics applications.
POSIT sponsor

Different Ways I Start Writing New Code

In this article, the author talks about the different approaches they use to create new code depending on the mood, the feature or bug at hand, and their confidence level. Five different approaches are covered.
JUHA-MATTI SANTALA

A Python Epoch Timestamp Timezone Trap

Date and times are problematic and there are all sorts of problems you can get into when you use them in Python. This post talks about one challenge when using epoch timestamp numbers.
JOËL PERRAS
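One such trap can be sketched as follows: converting an epoch number without an explicit timezone yields a naive datetime in the machine's local time, so the result varies from machine to machine.

```python
from datetime import datetime, timezone

ts = 0  # the Unix epoch, as a plain number

# The trap: without tz, fromtimestamp() uses the *local* timezone and
# returns a naive datetime, so the value depends on where the code runs.
naive = datetime.fromtimestamp(ts)

# Passing an explicit timezone gives a reproducible, aware datetime.
aware = datetime.fromtimestamp(ts, tz=timezone.utc)
print(aware.isoformat())  # 1970-01-01T00:00:00+00:00
```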

How to Convert a Python Script Into a Web App

This post shows you how to take a Python script and turn it into a web application using FastAPI. It includes information about how to choose endpoints and how to do authentication.
AHMED LEMINE

Instrumenting Python GIL With eBPF

eBPF is a tool available on Linux to dig into the profile of your code. In this article, Nikolay uses it to inspect when the GIL gets invoked in CPython.
NIKOLAY SIVOK

Approximate Counting in Django and Postgres

Pagination in Django uses the rather slow SELECT COUNT(*) SQL call. This article looks at how to speed up counting with Django and PostgreSQL.
NIK TOMAZIC

Assignment vs. Mutation in Python

In Python, “change” can mean two different things. Assignment changes which object a variable points to. Mutation changes the object itself.
TREY HUNNER
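The distinction fits in a few lines:

```python
a = [1, 2, 3]
b = a           # assignment: b now points to the same list object as a
b.append(4)     # mutation: the shared object changes, seen via both names
print(a)        # [1, 2, 3, 4]

a = [0]         # assignment: a points to a brand-new object; b is unaffected
print(b)        # [1, 2, 3, 4]
```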

Flask vs Django in 2024

This post compares Flask and Django, from what they specialize in to what it takes to do a personal web site in each.
WILL VINCENT

Making an Iterator Out of a Function

You can use the Python built-in function iter with two arguments to create an iterator from a function.
RODRIGO GIRÃO SERRÃO
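A short sketch of the two-argument form (the dice-rolling function is illustrative): iter(callable, sentinel) calls the function repeatedly until it returns the sentinel value.

```python
import random

random.seed(42)  # make the sketch deterministic


def roll():
    return random.randint(1, 6)


# iter(callable, sentinel) keeps calling roll() until it returns the
# sentinel value 6; the sentinel itself is consumed and not yielded.
rolls = list(iter(roll, 6))
print(rolls)
```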

Developing GraphQL APIs in Django With Strawberry

This tutorial details how to integrate GraphQL with Django using Strawberry.
TESTDRIVEN.IO • Shared by Michael Herman

Projects & Code

cbfa: Class-Based Views for FastAPI

GITHUB.COM/POMPONCHIK • Shared by Evgeniy Blinov (pomponchik)

Satyrn: macOS App for Jupyter Notebooks

SATYRN.APP

Datasketch: Probabilistic Data Structures

PYPI.ORG

pyNES: Python Programming for Nintendo 8 Bits

GITHUB.COM/GUTOMAIA

staged-script: Divide Automation Scripts Into Stages

GITHUB.COM/SANDIALABS • Shared by Jason M. Gates

Events

Weekly Real Python Office Hours Q&A (Virtual)

July 24, 2024
REALPYTHON.COM

SPb Python Drinkup

July 25, 2024
MEETUP.COM

PyOhio 2024

July 27 to July 28, 2024
PYOHIO.ORG

PyDelhi User Group Meetup

July 27, 2024
MEETUP.COM

Lightning Talks

July 27, 2024
MEETUP.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #639.

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Mike Driscoll: ANN: ObjectListView3 for wxPython

Tue, 2024-07-23 11:44

ObjectListView is a third-party wxPython widget that wraps the wx.ListCtrl. I have used it for over 10 years in quite a few different GUI applications because it works much nicer than wx.ListCtrl does. Unfortunately, ObjectListView was never integrated into wxPython core like some other amazing third-party packages were, and so it has become broken over the past couple of years such that you can’t use it with the latest version of Python / wxPython.

So, I decided to fork the project, as its defects were simple enough that I could resolve them quickly. The new version of ObjectListView is now called ObjectListView3.

You can get the new version of ObjectListView3 here:

Note that this version of ObjectListView works with Python 3.11+ and wxPython 4.2+. It may work with earlier versions as well.

Installation

If you’d like to install ObjectListView3, you can use Python’s pip to do so like this:

python -m pip install ObjectListView3

Now let’s look at an example of using ObjectListView3!

Sample Usage

The following is an application that starts out by listing two books in an ObjectListView3 widget. When you press the “Update OLV” button, it will add three more books to the ObjectListView3 widget.

This demonstrates how to create the ObjectListView3 widget and update it after the application is running:

import wx

from ObjectListView3 import ObjectListView, ColumnDefn


class Book(object):
    """
    Model of the Book object

    Contains the following attributes:
    'ISBN', 'Author', 'Manufacturer', 'Title'
    """

    def __init__(self, title, author, isbn, mfg):
        self.isbn = isbn
        self.author = author
        self.mfg = mfg
        self.title = title


class MainPanel(wx.Panel):
    def __init__(self, parent):
        super().__init__(parent=parent, id=wx.ID_ANY)
        self.products = [
            Book("wxPython in Action", "Robin Dunn", "1932394621", "Manning"),
            Book("Hello World", "Warren and Carter Sande", "1933988495", "Manning"),
        ]

        self.dataOlv = ObjectListView(
            self, wx.ID_ANY, style=wx.LC_REPORT | wx.SUNKEN_BORDER
        )
        self.setBooks()

        # Allow the cell values to be edited when single-clicked
        self.dataOlv.cellEditMode = ObjectListView.CELLEDIT_SINGLECLICK

        # Create an update button
        updateBtn = wx.Button(self, wx.ID_ANY, "Update OLV")
        updateBtn.Bind(wx.EVT_BUTTON, self.updateControl)

        # Create some sizers
        mainSizer = wx.BoxSizer(wx.VERTICAL)
        mainSizer.Add(self.dataOlv, 1, wx.ALL | wx.EXPAND, 5)
        mainSizer.Add(updateBtn, 0, wx.ALL | wx.CENTER, 5)
        self.SetSizer(mainSizer)

    def updateControl(self, event):
        """
        Update the ObjectListView
        """
        product_dict = [
            {
                "title": "Core Python Programming",
                "author": "Wesley Chun",
                "isbn": "0132269937",
                "mfg": "Prentice Hall",
            },
            {
                "title": "Python Programming for the Absolute Beginner",
                "author": "Michael Dawson",
                "isbn": "1598631128",
                "mfg": "Course Technology",
            },
            {
                "title": "Learning Python",
                "author": "Mark Lutz",
                "isbn": "0596513984",
                "mfg": "O'Reilly",
            },
        ]
        data = self.products + product_dict
        self.dataOlv.SetObjects(data)

    def setBooks(self, data=None):
        self.dataOlv.SetColumns(
            [
                ColumnDefn("Title", "left", 220, "title"),
                ColumnDefn("Author", "left", 200, "author"),
                ColumnDefn("ISBN", "right", 100, "isbn"),
                ColumnDefn("Mfg", "left", 180, "mfg"),
            ]
        )
        self.dataOlv.SetObjects(self.products)


class MainFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(
            self,
            parent=None,
            id=wx.ID_ANY,
            title="ObjectListView Demo",
            size=(800, 600),
        )
        panel = MainPanel(self)


class OLVDemoApp(wx.App):
    def __init__(self, redirect=False, filename=None):
        super().__init__(redirect, filename)

    def OnInit(self):
        # Create and show the top-level frame
        frame = MainFrame()
        frame.Show()
        return True


def main():
    """
    Run the demo
    """
    app = OLVDemoApp()
    app.MainLoop()


if __name__ == "__main__":
    main()

When you initially run this code, you will see the following application:

When you press the “Update OLV” button, the application will update to look like the following:

Wrapping Up

The ObjectListView3 widget makes working with tabular data easy in wxPython. You also get nice editing, sorting and more from this widget via different mixins or styles. Give ObjectListView3 a try when you create your next wxPython GUI application.

The post ANN: ObjectListView3 for wxPython appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Pages