Planet Python

Planet Python - http://planetpython.org/

Stack Abuse: [Fixed] The "ValueError: list.remove(x): x not in list" Error in Python

Sat, 2023-08-19 12:55
Introduction

In Python, or any high-level language for that matter, we commonly have to remove items from a list/array. However, you might occasionally encounter an error like ValueError: list.remove(x): x not in list. This error occurs when you try to remove an item that doesn't actually exist in the list.

In this Byte, we'll dive into why this error happens and how you can handle it.

Understanding the Error

The remove() function in Python is used to remove the first occurrence of a value from a list. However, if the value you're trying to remove doesn't exist in the list, Python will raise a ValueError.
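For example, remove() deletes only the first matching element, so any later duplicates stay in the list:

```python
fruits = ['apple', 'banana', 'apple']
fruits.remove('apple')  # Only the first 'apple' is removed
print(fruits)  # ['banana', 'apple']
```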

fruits = ['apple', 'banana', 'cherry']
fruits.remove('orange')

This will output: ValueError: list.remove(x): x not in list.

Verifying Element Existence Before Removal

One way to prevent the ValueError is to check if the item exists in the list before trying to remove it. This can be done using the in keyword.

fruits = ['apple', 'banana', 'cherry']
if 'orange' in fruits:
    fruits.remove('orange')

In this case, 'orange' is not in the list, so the remove() function is not called, and therefore no error is raised.

Try/Except Error Handling

Another way to handle this error is to use a try/except block. This allows you to attempt to remove the item, and if it doesn't exist, Python will execute the code in the except block instead of raising an error.

fruits = ['apple', 'banana', 'cherry']
try:
    fruits.remove('orange')
except ValueError:
    print('Item not found in list')

In this case, because 'orange' is not in the list, Python prints 'Item not found in list' instead of raising a ValueError.

The difference between this method and the previous one shown is really about personal preference and readability. Both work perfectly well, so choose the one you prefer most.

Multiple Item Removal

When it comes to removing multiple items from a list, things can get a bit trickier. If you try to remove multiple items in a loop and one of them doesn't exist, a ValueError will be raised.

fruits = ['apple', 'banana', 'cherry']
items_to_remove = ['banana', 'orange']
for item in items_to_remove:
    fruits.remove(item)

This will output: ValueError: list.remove(x): x not in list.

To handle this, you can apply the techniques we've already shown inside the loop, for example by wrapping each removal in a try/except block (a membership check with the in keyword would work just as well).

fruits = ['apple', 'banana', 'cherry']
items_to_remove = ['banana', 'orange']
for item in items_to_remove:
    try:
        fruits.remove(item)
    except ValueError:
        print(f'Error removing {item} from list')

In this case, Python will remove 'banana' from the list, and when it tries to remove 'orange' and fails, it will print 'Error removing orange from list' instead of raising a ValueError.

Using List Comprehension

Instead of explicitly removing the item with list.remove(x), you can use list comprehension to create a new list that excludes the element you want to remove. This can be a more efficient way of handling the removal of an item from a list. Here's an example:

my_list = [1, 2, 3, 4, 5]
x = 3
my_list = [i for i in my_list if i != x]
print(my_list)

This will output:

[1, 2, 4, 5]

In this code, we're creating a new list that includes all items from my_list except x.

My main issue with this method is that it's not very obvious what you're trying to achieve from looking at the code, which can be confusing to collaborators (or even your future self).
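One upside, though, is that the same filtering approach handles multiple removals with no error handling at all, since values that aren't present are simply never matched:

```python
fruits = ['apple', 'banana', 'cherry']
items_to_remove = {'banana', 'orange'}  # 'orange' isn't in fruits, and that's fine
fruits = [f for f in fruits if f not in items_to_remove]
print(fruits)  # ['apple', 'cherry']
```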

Handling Nested Lists

When dealing with nested lists, the list.remove(x) method can only remove the entire sublist, not a specific element within the sublist. If you try to remove a specific element within the sublist, you'll encounter the ValueError: list.remove(x): x not in list error.

For example:

my_list = [[1, 2], [3, 4]]
x = 2
my_list.remove(x)

This will raise a ValueError because x is not an element in my_list, it's an element within a sublist of my_list.
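To remove the element itself, you have to target the sublist that contains it. A minimal sketch, removing only the first occurrence across the sublists:

```python
my_list = [[1, 2], [3, 4]]
x = 2
for sublist in my_list:
    if x in sublist:
        sublist.remove(x)  # Remove x from the sublist that holds it
        break              # Stop after the first occurrence
print(my_list)  # [[1], [3, 4]]
```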

Verify Correct Value Type

It's important to ensure that the value you're trying to remove from the list is of the correct type. If it's not, you'll encounter a ValueError. For example, trying to remove a string from a list of integers will raise this error.

my_list = [1, 2, 3, 4, 5]
x = '2'
my_list.remove(x)

This is an easy thing to overlook. For example, when you're taking input from a user, everything comes in as a string, and you may forget to convert '2' to the integer 2.
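The fix is simply to convert the input to the list's element type before removing:

```python
my_list = [1, 2, 3, 4, 5]
user_input = '2'         # e.g. what input() would return
value = int(user_input)  # Convert to the element type first
if value in my_list:
    my_list.remove(value)
print(my_list)  # [1, 3, 4, 5]
```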

ValueError with the index() Method

The index() method in Python can be used to find the index of a specific element in a list. However, if the element is not in the list, you'll get the ValueError.

For example:

my_list = [1, 2, 3, 4, 5]
x = 6
print(my_list.index(x))

This will raise a ValueError because x is not in my_list.

Again, to avoid the ValueError when using the index() method, you can follow the same methods as above and first check if the element is in the list. If it is, you can then find its index.

my_list = [1, 2, 3, 4, 5]
x = 6
if x in my_list:
    print(my_list.index(x))
else:
    print(f"{x} is not in the list.")

In this case, the output will be "6 is not in the list." instead of a ValueError.

One-liner If/Else Error Handling

In Python, we have the flexibility to handle errors in a single line of code using a conditional expression (an inline if/else). This is particularly handy when dealing with these kinds of exceptions. Here's how to do it:

my_list = ['apple', 'banana', 'cherry']
value_to_remove = 'banana'
my_list.remove(value_to_remove) if value_to_remove in my_list else None

The advantage here is that the code is more compact and doesn't take up as many lines. Keep in mind, though, that using a conditional expression purely for its side effect is often considered less readable than a plain if statement.

Applying index() Method Correctly

Another common scenario where you might encounter ValueError: x not in list is when you're using the index() method. The index() method returns the index of the specified element in the list. But if the element is not in the list, it raises the error.

Here's how you can apply the index() method correctly:

my_list = ['apple', 'banana', 'cherry']
value_to_search = 'banana'
try:
    index_value = my_list.index(value_to_search)
    print(f'Value found at index: {index_value}')
except ValueError:
    print('Value not in list')

In this example, we use a try/except block to catch the ValueError if the value is not found in the list. If the value is found, the index of the value is printed.

Note: Remember, the index() method only returns the first occurrence of the value in the list. If you need to find all occurrences, you'll need to use a different approach.
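One such approach is to collect every matching index with enumerate() in a list comprehension:

```python
my_list = ['a', 'b', 'a', 'c', 'a']
indices = [i for i, v in enumerate(my_list) if v == 'a']
print(indices)  # [0, 2, 4]
```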

Conclusion

Python provides a number of ways to handle the ValueError: list.remove(x): x not in list exception. By verifying element existence before removal, using try/except blocks, or employing a one-liner if/else statement, you can ensure that your code runs without issues. Additionally, by applying the index() method correctly, you can avoid a ValueError when searching for a specific element in a list.

Categories: FLOSS Project Planets

PyCharm: Live stream: Who Is Behind Django? An Interview with the DSF President

Sat, 2023-08-19 10:37

We’ve all seen and used the famous Django framework, but there’s a lot more to a successful project than commits. The Django Software Foundation is a hallmark achievement in the Python community, but who’s behind it? In this interview, we’ll be talking with DSF President Chaim Kirby about its history, what it does, what it needs, and what’s next.

Date: August 22

Time: 15:00 UTC (check your timezone)

Register

Chaim Kirby, Django Software Foundation President

An engineering leader with over 20 years of experience building complex web applications in the health tech sector, Chaim Kirby is the President of the Django Software Foundation. In addition to his current role as Head of Engineering at Budgie-Health, Chaim also serves as an administrator of PySlackers, a Python-focused Slack channel with over 40,000 registered users. He’s been working with Django since v1.1 and has been a community participant since 2014.


Go Deh: OEISify

Sat, 2023-08-19 06:04

Best read on larger than a phone screen

The title is a rather mean pun on ossify, but it stuck.

 

I had been creating series of integers from prior work, and had been looking them up, manually, on The On-Line Encyclopedia of Integer Sequences, OEIS. Work progressed and I was creating a table of related sequences, so I looked to automate the OEIS searches.

I found a description of the textual internal format here, which seemed straightforward and "awkable" - meaning that the format is written in a way that a short awk program could parse. I did find a couple of old Python libraries that said they did this, but I wanted to flex my parsing skills and this seemed an enticing exercise.
Mile high looking down

Checking that internal format, the example results seem to be of the form:
%<TAG> A<number> <value>
To show search results in the internal format you need to make a slight change to a url:
https://oeis.org/search?fmt=text&q=3%2C11%2C26%2C50%2C85 (test)

Reading the internal format in more detail, I saw that the %S, %T and %U lines all give members of the sequence - there seems to be no reason to keep a distinction, so I could parse and return all the values in order. The %p, %t, %o lines, when you concatenate all of the same type, give program sources in three different languages. They can be treated similarly. The %K keywords are best returned as a list of strings.
For the other keywords, if one appears more than once in a sequence definition then subsequent values are appended as new lines, accumulating a final value.
Extra Info given

When you look at a search result in text format you get a Search: and a Showing line. I might want to return this data, and maybe also derive a Next entry from the details of Showing - what is needed to generate the next page of results for the given search.
The Regexp

Yep, I started with a regexp, so I put a lot of sample text into regex101 as the TEST STRING, then generated the regexp used in the program.
The Output

A Python dict with all string keys. Each found A-series will be a nested dict of keys corresponding to the %-letter tags. Keys Search, Showing and Next are about the search and so appear in the top dict.

The Code: oeis_request.py
#!/bin/env python3
# -*- coding: utf-8 -*-
"""
Module to search and return sequences from:
    The On-Line Encyclopedia of Integer Sequences® (OEIS®)
    https://oeis.org/
    https://oeis.org/eishelp1.html

Standalone Usage:
    oeis_request.py <query> <start>
    oeis_request.py --help
    oeis_request.py --test

Created on Sat Aug 12 07:57:53 2023

@author: paddy3118, paddy3118.blogspot.com
"""
from pprint import pp
import re
import ssl
import sys
from typing import Any
import urllib.request
import urllib.parse
import json  # pylint: disable=W0611

URL = 'https://oeis.org/search'
DBG = not True

finditer = re.compile(r"""
    # https://regex101.com/r/iNL0ve/2
    (?: Search: \s+ (?P<SEARCH> .* ))
    | (?: (?P<SHOWING> ^Showing\s.*))
    | (?: ^% (?P<_TAG>  (?P<SEQ>  [STU] ) |  (?P<PROG>  [pto] ) |  (?P<REST> [A-Za-z]) ) \s+
             (?P<ANUMBER> A\d+) \s+ (?P<VALUE> .*) )
    """, re.MULTILINE | re.VERBOSE).finditer

OEISTYPE = dict[str, str | dict[str, str | list[int | str]]]


def oeis_request(query: str, start: int=0) -> OEISTYPE:
    """
    Search OEIS for query returning a page of results from position start as a Python dict

    Parameters
    ----------
    query : str
        The OEIS query in a format consistent with https://oeis.org/hints.html.
    start : int, optional
        page offset when multiple pages of results are available. The default is 0.

    Returns
    -------
    dict[str,                   # key0, top level keys: SEARCH, SHOWING, NEXT
         str                    # string value,
         | dict[                # Or a nested dict for each sequence definition:
                str,            # key1, sequence level string keys,
                str |           # And either a string value,
                list[int        # A list for the int Sequence,
                     | str]]]   # or a list of the str Keywords; as value.

        Returned Python datastructure representing the (page of) sequences found.
        Top level keys are the 'SEARCH' and what results are 'SHOWING', together
        with the sequence numbers of returned sequences like 'A123456' - which
        all have one sub-dict as its value. Top level key 'NEXT', if present,
        has as its value the query and start value pre-computed to retrieve the
        next page of results from oeis_request.

        All possible keys of a sequence sub-dict are the values of global variable
        RECORD_MAP
    """
    # https://stackoverflow.com/a/60671292/10562
    # Security risk as ssl cert checking is skipped for this function
    # pylint: disable=protected-access
    tmp, ssl._create_default_https_context = ssl._create_default_https_context, \
                                             ssl._create_unverified_context
    # pylint: enable=protected-access

    data = {}
    data['fmt'] = 'text'
    data['q'] = query
    if start:
        data['start'] = start
    url_values = urllib.parse.urlencode(data)
    url = URL + '?' + url_values
    if DBG:
        sys.stderr.write(url + '\n')
        sys.stderr.flush()
    with urllib.request.urlopen(url) as req:
        txt = req.read().decode('utf8')

    # Restored:
    ssl._create_default_https_context = tmp  # pylint: disable=protected-access

    return oeis_parser(txt)


def oeis_parser(text: str) -> dict[str, Any]:  # pylint: disable=too-many-branches
    """
    Parse text formatted like https://oeis.org/eishelp1.html

    Parameters
    ----------
    text : str
        text returned from OEIS search.

    Returns
    -------
    dict[str, Any]
        parsed data
    """
    data = {}
    for matchobj in finditer(text):
        group = matchobj.groupdict()
        try:
            anumber = group['ANUMBER']
            if anumber:  # Skip None
                adict = data.setdefault(anumber, {})
        except KeyError:
            anumber = ''
        try:
            value = group['VALUE']
        except KeyError:
            value = ''

        if val := group[key0 := 'SEARCH']:
            data[key0] = val
        elif val := group[key0 := 'SHOWING']:
            data[key0] = val
        elif val := group[key1 := 'SEQ']:
            # Concat all %S, %T and %U lines as list[int]
            adict.setdefault(key1, []).extend(int(v) for v in
                                              value.strip().strip(',').split(','))
        elif val := group['PROG']:
            # Concat all individual %p %t %o lines as list[str]
            adict.setdefault(RECORD_MAP[val], []).append(value.strip())
        elif (val := group['REST']) and val == 'K':
            # Split and concat Keywords, %K lines as list[str]
            adict.setdefault(RECORD_MAP[val], []).extend(value.strip().split(','))
        elif val := group['REST']:
            # Concat other %_ lines as list[str]
            adict.setdefault(RECORD_MAP[val], []).append(value.strip())
        else:
            # Should never arrive here from the regexp!
            assert False, f"Got {group = }"

    # fixup
    for key, subd in data.items():
        if key[0] == 'A' and issubclass(type(subd), dict):
            for record, value in subd.items():
                if record != 'Keywords' and issubclass(type(value), list) \
                   and issubclass(type(value[0]), str):
                    subd[record] = '\n'.join(value)

    next_from_showing(data)

    return data


def next_from_showing(parsed: OEISTYPE) -> None:
    """
    Parse "SHOWING" key to generate "NEXT" value that would request the next page

    If already at the end then no next key is in data

    parsed is altered in-place

    Parameters
    ----------
    parsed : OEISTYPE
        Parsed data to insert a NEXT key/value.

    Returns
    -------
    None.
        argument updated in-place
    """
    matchobj = re.match(r"^Showing\s+(\d+)-(\d+)\s+of\s+(\d+)\s*$",
                        parsed.get('SHOWING', ''))
    if matchobj:
        _start, stop, end = [int(x) for x in matchobj.groups()]
        if stop < end:
            search = parsed['SEARCH'].split(':', 1)[-1]
            parsed['NEXT'] = [search, stop]
            return

    if 'NEXT' in parsed:
        del parsed['NEXT']


record_info = """\
%I A000001 Identification line (required)
%S A000001 First line of sequence (required)
%T A000001 2nd line of sequence.
%U A000001 3rd line of sequence.
%N A000001 Name (required)
%D A000001 Reference Detailed reference line.
%H A000001 Link to other site.
%F A000001 Formula .
%Y A000001 Cross-references to other sequences.
%A A000001 Author (required)
%O A000001 Offset (required)
%E A000001 Etc Extensions, errors, etc.
%e A000001 Examples examples to illustrate initial terms.
%p A000001 Maple program.
%t A000001 Mathematica program.
%o A000001 OtherProgram in another language.
%K A000001 Keywords (required)
%C A000001 Comments.""".strip().splitlines()

RECORD_MAP = {words[0][1]: words[2]
              for line in record_info
              for words in [line.split()]}
# pp(RECORD_MAP, sort_dicts=False)

HELP = f"""\
{__doc__}

function oeis_request
=====================

{oeis_request.__doc__}"""

if __name__ == '__main__':
    if '--help' in sys.argv:
        print(HELP)
        sys.exit(0)

    if '--test' in sys.argv[1:]:
        _req, _start = '1,2,3,4,5,6,6,7,7,8', 0
    elif len(sys.argv) == 3:
        _req, _start = sys.argv[1:]  # pylint: disable=W0632
    else:
        print(HELP)
        sys.exit(1)

    _data = oeis_request(_req, _start)  # pylint: disable=E0601
    pp(_data, width=512, sort_dicts=False)
    # print(json.dumps(_data, indent=2))

 
Test output

$ ./oeis_request.py 3,11,26,50,85 0
{'SEARCH': 'seq:3,11,26,50,85',
 'SHOWING': 'Showing 1-1 of 1',
 'A051925': {'Identification': '#84 Jun 26 2022 03:06:23',
             'SEQ': [0, 0, 3, 11, 26, 50, 85, 133, 196, 276, 375, 495, 638, 806, 1001, 1225, 1480, 1768, 2091, 2451, 2850, 3290, 3773, 4301, 4876, 5500, 6175, 6903, 7686, 8526, 9425, 10385, 11408, 12496, 13651, 14875, 16170, 17538, 18981, 20501, 22100, 23780],
             'Name': 'a(n) = n*(2*n+5)*(n-1)/6.',
             'Comments.': 'Related to variance of number of inversions of a random permutation of n letters.\n'
                          'Zero followed by partial sums of A005563. - _Klaus Brockhaus_, Oct 17 2008\n'
                          'a(n)/12 is the variance of the number of inversions of a random permutation of n letters. See evidence in Mathematica code below. - _Geoffrey Critzer_, May 15 2010\n'
                          'The sequence is related to A033487 by A033487(n-1) = n*a(n) - Sum_{i=0..n-1} a(i) = n*(n+1)*(n+2)*(n+3)/4. - _Bruno Berselli_, Apr 04 2012\n'
                          "Deleting the two 0's leaves row 2 of the convolution array A213750. - _Clark Kimberling_, Jun 20 2012\n"
                          'For n>=4, a(n-2) is the number of permutations of 1,2...,n with the distribution of up (1) - down (0) elements 0...0110 (the first n-4 zeros), or, the same, a(n-2) is up-down coefficient {n,6} (see comment in A060351). - _Vladimir Shevelev_, Feb 15 2014',
             'Reference': 'V. N. Sachkov, Probabilistic Methods in Combinatorial Analysis, Cambridge, 1997.',
             'Link': 'Vincenzo Librandi, <a href="/A051925/b051925.txt">Table of n, a(n) for n = 0..1000</a>\nJ. Wang and H. Li, <a href="http://dx.doi.org/10.1016/S0012-365X(01)00301-6">The upper bound of essential chromatic numbers of hypergraphs</a>, Discr. Math. 254 (2002), 555-564.\n<a href="/index/Rec#order_04">Index entries for linear recurrences with constant coefficients</a>, signature (4,-6,4,-1).',
             'Formula': 'a(n) = A000330(n) - n. - _Andrey Kostenko_, Nov 30 2008\nG.f.: x^2*(3-x)/(1-x)^4. - _Colin Barker_, Apr 04 2012\na(n) = 4*a(n-1) - 6*a(n-2) + 4*a(n-3) - a(n-4). - _Vincenzo Librandi_, Apr 27 2012\nE.g.f.: (x^2/6)*(2*x + 9)*exp(x). - _G. C. Greubel_, Jul 19 2017',
             'Mathematica': 'f[{x_, y_}] := 2 y - x^2; Table[f[Coefficient[ Series[Product[Sum[Exp[i t], {i, 0, m}], {m, 1, n - 1}]/n!, {t, 0, 2}], t, {1, 2}]], {n, 0, 41}]*12 (* _Geoffrey Critzer_, May 15 2010 *)\nCoefficientList[Series[x^2*(3-x)/(1-x)^4,{x,0,50}],x] (* _Vincenzo Librandi_, Apr 27 2012 *)',
             'OtherProgram': '(PARI) {print1(a=0, ","); for(n=0, 42, print1(a=a+(n+1)^2-1, ","))} \\\\ _Klaus Brockhaus_, Oct 17 2008\n(Magma) I:=[0, 0, 3, 11]; [n le 4 select I[n] else 4*Self(n-1)-6*Self(n-2)+4*Self(n-3)-Self(n-4): n in [1..50]]; // _Vincenzo Librandi_, Apr 27 2012',
             'Cross-references': 'Cf. A000330, A005563, A033487.',
             'Keywords': ['nonn', 'easy'],
             'Offset': '0,3',
             'Author': '_N. J. A. Sloane_, Dec 19 1999'}}
$

A test with paged results

Highlighting the NEXT value of what is needed to get the next page of results:

$ ./oeis_request.py 1,2,3,4,5 0
{'SEARCH': 'seq:1,2,3,4,5',
 'SHOWING': 'Showing 1-10 of 7513',
 'A000027': {'Identification': 'M0472 N0173 #637 Aug 14 2023 15:10:40',
             ...
 'A007953': {'Identification': '#280 Jun 18 2023 11:41:19',
             ...
 'A000961': ...
 'A002260': {'Identification': '#205 Feb 03 2023 18:43:52',
             ...
 'NEXT': ['1,2,3,4,5', 10]}
$

Talk Python to Me: #427: 10 Tips and Ideas for the Beginner to Expert Python Journey

Sat, 2023-08-19 04:00
Getting started in Python is pretty easy. There's even a t-shirt that jokes about it: "I learned Python, it was a good weekend." But to go from knowing how to create variables and write loops to building amazing things like FastAPI or Instagram, well, there is a little gap between those two things. On this episode we welcome Eric Matthes to the show. He has thought a lot about teaching Python and comes to share his 10 tips for going from Python beginner to expert.

Links from the show:
  Eric on LinkedIn: https://www.linkedin.com/in/eric-matthes-598765205/
  Mostly Python Newsletter: https://mostlypython.substack.com
  Python Crash Course Book: https://nostarch.com/python-crash-course-3rd-edition
  Watch this episode on YouTube: https://www.youtube.com/watch?v=m_DgGjMg4hM
  Episode transcripts: https://talkpython.fm/episodes/transcript/427/10-tips-and-ideas-for-the-beginner-to-expert-python-journey

Stay in touch with us:
  Subscribe to us on YouTube: https://talkpython.fm/youtube
  Follow Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
  Follow Michael on Mastodon: https://fosstodon.org/web/@mkennedy

Sponsors:
  Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
  Talk Python Training: https://talkpython.fm/training

Brett Cannon: State of standardized lock files for Python: August 2023

Fri, 2023-08-18 20:30

Since people seemed to like my June 2023 post on the state of WASI support for CPython, I thought I would do one for another of my long-gestating projects: coming up with a standardized lock file format for Python packaging.

💡 When I say "lock file" I'm talking about pinning your dependencies and their versions and writing them to a file, the way pip-compile from pip-tools takes a requirements.in file and produces a requirements.txt file. I am not talking about file locking like fcntl.flock().

On the VS Code team, we have taken the position that we much prefer working with standards over anything that's tool-specific when dealing with anything in our core Python extension. As such, I have been helping out in trying to standardize things in Python packaging. Probably the most visible thing I helped with was establishing pyproject.toml via PEP 518. I also drove the creation of the [project] table in pyproject.toml via PEP 621.

For me, the next thing to standardize was a lock file format. Historically, people either manually pinned their dependencies to a specific version or they used a requirements.txt file. The former is rather tedious and often misses indirect dependencies, and the latter isn't actually a standard but a pip feature. Both of those things together made me want to come up with a file format that made environment reproducibility possible by making it easy for people to get the exact same package versions installed. I also wanted to take the opportunity to help people do installations in a more secure fashion on top of reproducibility, as it takes 3 extra flags to pip to make it install things securely.

That led me to write PEP 665. The goal was to create a lock file format around wheels which would facilitate people installing packages in a consistent, secure way. Unfortunately, after 10 months of working on the PEP, it was ultimately rejected. I personally believe the rejection was due to lack of sdist support – which goes against the "secure" goal I had since they can basically do anything during install time – and due to a lack of understanding around how important lock files are for security purposes (let alone just making sure you can replicate your environment in other places).

And so I decided I needed a proof-of-concept lock file format in order to show the importance of this. That would require being able to do a few things:

  1. Get a list of top-level dependencies that need to be installed from the user
  2. Communicate with a Python package index server like PyPI to find out what packages (and their wheels) are available
  3. Resolve the dependency graph to know what needs to ultimately be installed
  4. Create a lock file for that dependency graph of wheel files
  5. Install the wheel files listed in the lock file

Step 1 is somewhat taken care of by pyproject.toml and project.dependencies, although if you're not writing code that's meant to eventually end up in a wheel it's a bit of an abuse of that data (there's been a discussion about how projects not destined to become a wheel should write down their dependencies, but I don't know if it's going to go anywhere). Step 2 is taken care of via the simple repository API, which can be either HTML or JSON-based (I created mousebender to smooth over the details between the two types of API response formats, and that project is also where I'm hosting all of the work related to the proof-of-concept I want to end up with).

Step 3 is where I'm currently at. Working with a resolver like resolvelib means you need the initial set of requirements, the constraints it has to operate under (e.g., platform details), and the ability to update the requirements the resolver is working with as it learns about more edges in the dependency graph. As I mentioned above, I can cheat about the initial set of requirements by grabbing them from pyproject.toml. The constraints are covered by packaging.markers and packaging.tags (and I wrote the latter module, so I'm "lucky" to be overly familiar with what's required for this situation). So that leaves updating requirements as you discover new edges of the dependency graph for step 3.

But how do you end up with new edges of the dependency graph? Well, every dependency has its own dependencies. So what you end up doing is, once you think you know what wheel you want to install, you get the metadata for that wheel and read what requirements it has. That might sound simple, but the core metadata specification says wheel metadata gets written to a METADATA file that is formatted using email headers; not quite so easy as reading some JSON. Plus it has a lot of types of fields, the parsing requirements per field have changed over the years, etc. As such, the idea came up of putting some code into the packaging project - which I'm a co-maintainer of - so there could be a baseline core metadata parser which handled parsing this metadata, both in a forgiving and a strict manner (for this project I need strict parsing of the dependency information).

I got the forgiving parsing done in packaging 23.4.0 via packaging.metadata. But today I got the strict parsing merged which also provides a higher-level API using richer object representations. All told, this part took me over 2.5 years to complete.

And with that, someone can tell me what their dependencies are, PyPI can tell me what wheels it has, and I can read what dependencies those wheels have. The next step is taking resolvelib and creating a resolver to generate the dependency graph. I'm planning to make the design of my resolver code flexible so that you can do interesting things like resolve for the oldest dependencies as well as the newest (handy for testing the range of versions you claim you support), or the most and least specific wheels (so you can see what your actual platform restrictions are), and to be able to specify the platform details so you can resolve for a different platform than the one you're running on (handy if your production environment is different from your development one). Those last two are important to me for work purposes, as they would allow me to create a resolver that only supported pure Python wheels, which is necessary for WASI since there isn't extension module support for that platform (yet).


CodersLegacy: apipkg Tutorial: Enhanced Lazy Loading in Python

Fri, 2023-08-18 16:41

Welcome to this tutorial on using the apipkg library in Python! In this tutorial, we’ll explore how to use the apipkg library to efficiently manage your imports and only import modules when they are actually accessed. This technique is known as lazy loading or deferred importing.

There are other libraries which can accomplish similar tasks, such as importlib. But apipkg has something unique about it that takes lazy loading to the next level. Let's find out what!

Table Of Contents
  1. What is apipkg?
  2. Installation
  3. Setting Up the Project Structure
  4. Implementing Lazy Loading with apipkg in Python
  5. Running the Example
  6. Conclusion

    What is apipkg?

    The apipkg library is a Python package that provides a way to control the importing behavior of your modules. It allows you to define mappings between attribute names and module paths. This means that you can delay the actual import of a module until a specific attribute or function from that module is accessed.

    This can lead to improved performance and reduced memory consumption in your applications. It is also possible that some modules never need to be imported (if that particular feature wasn’t needed by the user). This leads to even more performance benefits.

    Most importantly, lazy loading modules reduces the startup time for your application. This is a big deal in applications where startup time is critical.
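The core idea can be approximated with nothing but the standard library: keep only the module's name around and perform the real import on first use. This is a rough conceptual sketch, not how apipkg is implemented internally:

```python
import importlib

def lazy_module(name):
    """Return a zero-argument callable that imports `name` on first call."""
    cache = {}
    def get():
        if 'mod' not in cache:
            cache['mod'] = importlib.import_module(name)  # real import happens here
        return cache['mod']
    return get

get_json = lazy_module('json')      # no import has happened yet
print(get_json().dumps([1, 2, 3]))  # 'json' is imported on this first access
```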

    Installation

    Before we begin, make sure you have the apipkg library installed. You can install it using pip:

    pip install apipkg

    Setting Up the Project Structure

    Let’s start by creating the necessary files and folder structure for this tutorial.

    tutorial/
    ├── main.py
    └── package/
        ├── __init__.py
        ├── moduleA.py
        └── moduleB.py

    Here’s a brief overview of the purpose of each file:

    • main.py: This is the main script where we’ll use the lazy loading technique with the apipkg library.
    • package/__init__.py: This file will initialize the mappings for lazy loading using the apipkg library.
    • package/moduleA.py: This module contains a function for addition.
    • package/moduleB.py: This module contains a function for multiplication.

    The __init__.py file is essential for creating a package, not just for use with apipkg. Without it, we wouldn't be able to import these modules.

    Implementing Lazy Loading with apipkg in Python

    Let’s go through the code step by step to understand how lazy loading works using the apipkg library.

    Step 1: Importing the Required Modules

    In the main.py file, start by importing the necessary modules:

    import package
    import time

    We’re importing the package module, which will utilize the lazy loading technique, and the time module to add delays for demonstration purposes.

    Step 2: Initializing Lazy Loading in __init__.py

    In the package/__init__.py file, we’ll use the apipkg library to define the mappings for lazy loading:

    import apipkg

    apipkg.initpkg(__name__, {
        'path': {
            'add': "package.moduleA:add",
            'mul': "package.moduleB:mul"
        }
    })

    Here, we’re specifying that the attribute add should be imported from the moduleA when accessed, and similarly, the attribute mul should be imported from the moduleB when accessed.

    Note: The name “path” here is arbitrary.

    Step 3: Implementing Lazy Loading Functions

    In package/moduleA.py and package/moduleB.py, implement the functions for addition and multiplication, respectively:

    # moduleA.py
    print("Module A")

    def add(num1, num2):
        return num1 + num2

    # moduleB.py
    print("Module B")

    def mul(num1, num2):
        return num1 * num2

    The print statements will help us visualize when the modules are actually imported. You will understand this better when we actually run the code.


    Step 4: Using Lazy Loaded Functions

    In main.py, let’s use the lazy loaded functions with some delays to observe the behavior:

    time.sleep(1)
    print(package.path.add(2, 4))
    time.sleep(1)
    print(package.path.mul(2, 4))

    Here, we’re calling the add and mul functions from the package module. Remember that the actual import of the corresponding modules will only occur when these functions are accessed.

    Running the Example

    Now that we have everything set up, you can run the main.py script using your Python interpreter (or run it from an IDE):

    python main.py

    As you run the script, you will observe from the output that the “Module A” and “Module B” print statements from moduleA.py and moduleB.py only appear when the respective functions are called. This demonstrates the lazy loading behavior implemented using the apipkg library.

    Here is the output:

    Module A
    6
    Module B
    8

    If these modules had not been lazy loaded, both print statements would have appeared at the very beginning of the output. However, this is not the case: “Module B” was only printed when that module was first accessed.

    Conclusion

    This marks the end of the Python apipkg Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comments section below.

    The post apipkg Tutorial: Enhanced Lazy Loading in Python appeared first on CodersLegacy.

    Categories: FLOSS Project Planets

    Stack Abuse: Resolving "ModuleNotFoundError: No module named encodings" in Python

    Fri, 2023-08-18 10:55
    Introduction

    Python is a powerful and versatile programming language, but sometimes you may encounter errors that seem perplexing. One such error is the "ModuleNotFoundError: No module named encodings". This error can occur due to various reasons, and in this Byte, we will explore how to resolve it.

    Why did I get this error?

    Before we get into the solution, let's first understand the error. The "ModuleNotFoundError: No module named encodings" error usually occurs when Python cannot locate the encodings module. This module is crucial for Python to function properly because it contains the necessary encodings that Python uses to convert bytes into strings and vice versa.

    >>> import encodings
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ModuleNotFoundError: No module named 'encodings'

    This error message indicates that Python is unable to locate the 'encodings' module.

    Note: The encodings module is a built-in Python module, and its absence can be due to incorrect installation or configuration of Python.
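A frequent trigger for this error is a stale or incorrect PYTHONHOME (or PYTHONPATH) environment variable pointing the interpreter away from its standard library. Before editing your PATH, it can be worth checking for these. The commands below are a sketch for Unix-like shells:

```shell
# See whether either variable is set (no output means they are unset)
env | grep -E 'PYTHONHOME|PYTHONPATH' || true

# Temporarily clear them and confirm the stdlib can be found again
unset PYTHONHOME PYTHONPATH
python3 -c "import encodings; print(encodings.__file__)"
```

If the import succeeds after unsetting these variables, the fix is to remove (or correct) them wherever they are exported, such as your shell profile.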

    Setting Python in your System's PATH

    One of the common reasons for this error is that Python is not correctly set up in your system's PATH. The PATH is an environment variable on Unix-like operating systems, DOS, OS/2, and Microsoft Windows, specifying a set of directories where executable programs are located.

    To add Python to the system's PATH, you need to locate your Python installation directory and add it to the PATH environment variable.

    In Unix-like operating systems, you can add Python to your PATH by editing the .bashrc or .bash_profile file in your home directory. Add the following line, replacing /path/to/python with the actual path to your Python installation:

    $ echo 'export PATH="/path/to/python:$PATH"' >> ~/.bashrc

    In Windows, you can add Python to your PATH by editing the system environment variables:

    1. Right-click on 'Computer' and click on 'Properties'.
    2. Click on 'Advanced system settings'.
    3. Click on 'Environment Variables'.
    4. In the system variables section, find the 'Path' variable, select it, and click on 'Edit'.
    5. In the 'Variable value' field, append the path to your Python installation with a semicolon (;) before it.
    6. Click 'OK' to close all dialog boxes.

    After adding Python to your PATH, you should be able to import the encodings module without any issues.

    Conclusion

    In this Byte, we've explored the 'ModuleNotFoundError: No module named encodings' error in Python and discussed a common solution - adding Python to your system's PATH.

    If you're still facing issues, you may need to reinstall Python. This process varies depending on your operating system, so be sure to look up specific instructions for your OS.

    Categories: FLOSS Project Planets

    Stack Abuse: Generating Random Hex Colors in Python

    Fri, 2023-08-18 09:14
    Introduction

    Let's say you're building a visualization tool and need to generate a few colors - how would you do that? Or maybe you have a base color and need to generate one that's similar. These are just a few use-cases in which you'd need to figure out how to generate random hexadecimal colors.

    In this Byte, we'll walk you through a few ways to create these colors in Python.

    Generating Random Hex Colors

    Luckily the task is pretty straightforward, thanks to the built-in random module. Each color in the hexadecimal format is represented by a six-digit combination of numbers and letters (from A to F), where each pair of digits represents the intensity of red, green, and blue colors, respectively.

    Here's a simple way to generate random hexadecimal colors:

    import random

    def generate_random_color():
        return '#{:06x}'.format(random.randint(0, 0xFFFFFF))

    print(generate_random_color())

    When you run this code, you'll get a random color each time. The {:06x} is a format specification for six hexadecimal digits, prefixed with a '#'.
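To see why the zero padding matters, compare the formatting with and without it; small random values would otherwise produce strings too short to be valid colors:

```python
# '{:06x}' left-pads the hex representation with zeros to six digits
print('#{:06x}'.format(0x2A))   # a valid 6-digit color: '#00002a'
print('#{:x}'.format(0x2A))     # too short to be a color: '#2a'
```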

    Using random.choice() in Older Python Versions

    If you're using an older version of Python, you can still generate random hexadecimal strings with the random.choice() function. Here's how:

    import random

    def generate_random_color():
        return '#' + ''.join([random.choice('0123456789ABCDEF') for _ in range(6)])

    print(generate_random_color())

    This code generates a random color by choosing a random character from the string '0123456789ABCDEF' six times.

    Random Hex Colors with Formatted String Literals

    Python 3.6 introduced a new way of handling string formatting - Formatted String Literals, or f-strings. You can use these to generate random hex colors as well:

    import random

    def generate_random_color():
        r = random.randint(0, 255)
        g = random.randint(0, 255)
        b = random.randint(0, 255)
        return f'#{r:02x}{g:02x}{b:02x}'

    print(generate_random_color())

    This code generates random values for red, green, and blue, and then formats them as two-digit hexadecimal numbers.

    Generating Random Hex Colors within a Range

    Sometimes, you might want to restrict the range of your colors to get a specific look or theme. Let's say you want to create only shades of blue. You can do that by keeping the red and green channels at zero (or some other low value) and only generating random values for the blue channel. Here's one way to do that:

    import random

    def generate_random_blue():
        return f'#0000{random.randint(0, 255):02x}'

    print(generate_random_blue())

    You can even specify ranges for red, green, and blue to get a color within a specific spectrum:

    def generate_random_color_within_range(r_min, r_max, g_min, g_max, b_min, b_max):
        r = random.randint(r_min, r_max)
        g = random.randint(g_min, g_max)
        b = random.randint(b_min, b_max)
        return f'#{r:02x}{g:02x}{b:02x}'

    print(generate_random_color_within_range(100, 150, 50, 100, 200, 255))

    With this, you'll get a random color within the defined RGB ranges, allowing you to control the overall hue, saturation, and brightness.
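If you want perceptual control rather than raw RGB ranges, another option is to randomize in HSV space using the standard library's colorsys module and convert back to RGB. The sketch below fixes saturation and value to get pastel-like colors; the function name and the 0.4/0.95 constants are arbitrary choices of ours:

```python
import colorsys
import random

def generate_random_pastel():
    # colorsys works with floats in [0, 1]; randomize only the hue
    h = random.random()
    r, g, b = colorsys.hsv_to_rgb(h, 0.4, 0.95)
    # Scale each channel to 0-255 and format as two hex digits
    return '#{:02x}{:02x}{:02x}'.format(int(r * 255), int(g * 255), int(b * 255))

print(generate_random_pastel())
```

Varying only the hue keeps brightness and saturation consistent across the generated palette, which is often what visualization tools want.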

    Conclusion

    Whether you're looking for a bit of randomness or a more controlled shade, generating random hex colors in Python is pretty easy. From classic techniques to the newer f-strings, Python offers many ways to achieve this.

    Categories: FLOSS Project Planets

    Stack Abuse: Generating Random Hexadecimal Strings in Python

    Fri, 2023-08-18 08:46
    Introduction

    Hexadecimal strings and colors are used quite frequently in the world of programming. Whether you're generating a unique identifier or picking a random color for a UI element, being able to generate random hexadecimal strings and colors is a nice tool to have in your arsenal.

    In this Byte, we'll explore how you can generate these random hexadecimal strings in Python.

    Creating Random Hexadecimal Strings in Python

    A hexadecimal string is a string of characters from the set [0-9, a-f]. They're often used in computing to represent binary data in a form that's more human-readable.

    Here's how you can generate a random hexadecimal string in Python:

    import os

    def random_hex_string(length=6):
        return os.urandom(length).hex()

    print(random_hex_string())

    When you run this script, it will output a random hexadecimal string. Note that the length argument is a number of bytes, not characters: each byte becomes two hex digits, so the default of 6 produces a 12-character string.

    Byte Objects to String Conversion

    In the above code, we're using the os.urandom() function to generate a string of random bytes, and then converting it to a hexadecimal string with the .hex() method. This is because os.urandom() actually returns a bytes object, which is a sequence of integers in the range 0 <= x < 256, which can then be converted to a hex string.

    import os

    random_bytes = os.urandom(6)
    print(type(random_bytes))  # <class 'bytes'>

    hex_string = random_bytes.hex()
    print(type(hex_string))  # <class 'str'>

    In the above code, we first print the type of the object returned by os.urandom(), which is <class 'bytes'>. Then, after converting it to a hex string with the .hex() method, we print the type again, which is now <class 'str'>.

    Random Hex Strings with random.choices()

    Another way to generate a random hexadecimal string in Python is by using the random.choices() function. This function returns a list of elements chosen from the input iterable, with replacement. Here's how you can use it:

    import random
    import string

    def random_hex_string(length=6):
        return ''.join(random.choices(string.hexdigits, k=length))

    print(random_hex_string())

    When you run this script, it will output a random hexadecimal string of the specified length. It's not necessarily better than the previous method, but for readability, some people may prefer this over using os.urandom(). Keep in mind that string.hexdigits contains both lowercase and uppercase letters, so the output can mix cases.

    Random Hex Strings with secrets.token_hex()

    Python's secrets module, introduced in Python 3.6, provides functions for generating secure random numbers for managing secrets. One of these functions is secrets.token_hex(), which generates a secure random text string in hexadecimal. Here's how you can use it:

    import secrets

    def random_hex_string(length=6):
        return secrets.token_hex(length)

    print(random_hex_string())

    When you run this script, it will output a secure random hexadecimal string. As with os.urandom(), the argument to token_hex() is a byte count, so the result contains twice that many hex characters.

    Note: The secrets module should be used for generating data for secret tokens, authentication, security tokens, and related cases, where security is a concern.
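One gotcha shared by os.urandom() and secrets.token_hex() is worth spelling out: both take a *byte* count, not a character count, and every byte becomes two hex digits:

```python
import os
import secrets

# Both produce strings twice as long as the number of bytes requested
assert len(os.urandom(6).hex()) == 12
assert len(secrets.token_hex(6)) == 12

# So for an n-character hex string, request n // 2 bytes (n must be even)
print(secrets.token_hex(3))  # a 6-character string
```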

    Conclusion

    In this Byte, we've covered how to generate random hexadecimal strings in Python. Whether you're using the latest version of Python or an older one, there are several ways to achieve this.

    Categories: FLOSS Project Planets

    Stack Abuse: Using For and While Loops for User Input in Python

    Fri, 2023-08-18 06:00
    Introduction

    Python, a high-level, interpreted programming language, is known for its simplicity and readability. One of the many features that make Python so powerful is the for and while loops. These loops provide the ability to execute a block of code repeatedly, which can be particularly useful when dealing with user inputs.

    In this Byte, we will explore how to use for and while loops for user input in Python.

    User Input with For Loops

    The for loop in Python is used to iterate over a sequence (such as a list, tuple, dictionary, string, or range) or other iterable objects. Iterating over a sequence is called traversal. Let's see how we can use a for loop to get user input.

    for i in range(3):
        user_input = input("Please enter something: ")
        print("You entered: ", user_input)

    When you run this code, it will ask the user to input something three times because we have used range(3) in our for loop. The input() function reads a line from input (usually user), converts it into a string, and returns that string.

    Integer Input using for Loops

    You might be wondering how you can get integer inputs from users. Well, Python has got you covered. You can use the int() function to convert the user input into an integer. Here's how you can do it:

    for i in range(3):
        user_input = int(input("Please enter a number: "))
        print("You entered: ", user_input)

    In this code, the int(input()) function reads a line from input, converts it into a string, then converts that string into an integer, and finally returns that integer.

    Note: Be careful when using int(input()). If the user enters something that can't be converted into an integer, Python will raise a ValueError. For a production system, you need to do more input validation and cleaning.
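One common validation pattern is to keep re-prompting until the input actually parses as an integer. Here's a sketch of that idea (the function name and messages are our own):

```python
def read_int(prompt="Please enter a number: "):
    # Loop until int() succeeds, re-prompting on invalid input
    while True:
        raw = input(prompt)
        try:
            return int(raw)
        except ValueError:
            print(f"{raw!r} is not a valid integer, please try again.")
```

Because the ValueError is caught inside the loop, a typo simply triggers another prompt instead of crashing the program.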

    List Comprehension as an Alternative

    While for loops are powerful, Python provides an even more concise way to create lists based on existing lists. This is known as list comprehension. List comprehension can sometimes be a more efficient way to handle lists, especially when dealing with user input. Let's see how we can use list comprehension to get user input:

    user_input = [input("Please enter something: ") for i in range(3)]
    print("You entered: ", user_input)

    In this code, we use list comprehension to create a new list that contains three elements entered by the user. This list is then printed out.

    This is sometimes preferred as list comprehension can be a more compact and readable alternative to for loops when dealing with user input in Python.

    User Input with While Loops in Python

    while loops in Python are a fundamental control flow structure that allows us to repeat a block of code until a certain condition is met. This can be particularly useful when we want to handle user input in a repetitive or continuous manner. Let's take a look at a simple example:

    while True:
        user_input = input("Please enter some text: ")
        if user_input == "quit":
            break
        print(f'You entered: {user_input}')

    In the above code, we're using a while loop to continuously ask the user for input. The loop will only terminate when the user enters the word "quit". Here's what the output might look like:

    Please enter some text: Hello
    You entered: Hello
    Please enter some text: quit

    Note: Remember that the input() function in Python always returns a string. If you want to work with other data types, you'll need to convert the user's input accordingly.

    Numeric Input using While Loops

    Now, let's say we want to get numeric input from the user. We can do this by using the int() function to convert the user's input into an integer. Here's an example:

    while True:
        user_input = input("Please enter a number: ")
        if user_input == "quit":
            break
        number = int(user_input)
        print(f'You entered the number: {number}')

    In this case, if the user enters anything other than a number, Python will raise a ValueError. To handle this, we can use a try/except block:

    while True:
        user_input = input("Please enter a number: ")
        if user_input == "quit":
            break
        try:
            number = int(user_input)
            print(f'You entered the number: {number}')
        except ValueError:
            print("That's not a valid number!")

    Conclusion

    In this Byte, we've explored how to use for and while loops in Python to take in user input. We've seen how these loops allow us to repeat a block of code until a certain condition is met, which can be particularly useful when we want to take user input continuously. We also saw how to handle numeric input and how to use try/except blocks to handle errors.

    Categories: FLOSS Project Planets

    CodersLegacy: Python Lazy Loading with Importlib

    Fri, 2023-08-18 03:26

    Welcome to this tutorial on Python Lazy Loading with importlib! In this tutorial, we’ll explore how to use the importlib library to achieve lazy loading of modules in your Python code.

    Table Of Contents
    1. Introduction to Importlib
    2. What is Lazy Loading?
    3. Benefits of Lazy Loading
    4. Using Importlib for Lazy Loading
    5. Using LazyLoader for Enhanced Lazy Loading

    Introduction to Importlib

    The importlib library is a part of the Python standard library and provides programmatic access to Python’s import system. It allows you to dynamically load and manage modules and packages in your code. One of the features provided by importlib is the ability to perform lazy loading, which is particularly useful when you want to optimize resource usage and application startup times.

    What is Lazy Loading?

    Lazy loading is a design concept that involves deferring the loading of modules until they are actually needed at runtime. In the traditional approach, modules are imported and loaded at the beginning of the program’s execution, regardless of whether they will be used or not.

    This upfront loading can lead to increased memory consumption and longer startup times, especially in larger applications where numerous modules might be imported.

    Lazy loading takes a different approach. It allows modules to be loaded only when they are required by a specific part of the code. This means that modules are loaded on-demand, right before they are about to be used, rather than at the start of the program. This approach optimizes resource usage and improves the overall performance of your application.

    Benefits of Lazy Loading

    Lazy loading offers several significant benefits, especially in scenarios where modules are not always needed or when you’re dealing with resource-intensive applications. Here are some of the key advantages:

    1. Reduced Startup Time: By loading only the modules that are actually required during the program’s execution, you can significantly reduce the initial startup time. This is particularly important for applications with complex dependencies and large codebases.
    2. Lower Memory Consumption: Since only the necessary modules are loaded into memory, you can keep memory consumption under control. Unnecessary modules won’t occupy memory space until they are actually used.
    3. Improved Performance: Lazy loading can lead to improved overall performance by ensuring that the application focuses its resources on the modules that are actively in use. This can result in faster response times and smoother user experiences.
    4. Optimized Resource Usage: Applications with modular structures might have parts that are seldom or never used. With lazy loading, these less frequently used modules won’t be loaded into memory until they are needed, optimizing the application’s resource usage.
    5. Dynamic Loading: Lazy loading provides a way to dynamically load and use modules at runtime. This can be especially beneficial when dealing with plugins, extensions, or user-generated content where the full set of modules isn’t known in advance.
    Using Importlib for Lazy Loading

    The importlib library allows you to load modules and packages dynamically. To use importlib for lazy loading, follow these steps:

    First, you need to import the importlib module:

    import importlib

    Instead of using the traditional import statement, you can use the importlib.import_module() function to load a module dynamically when needed. Here’s an example:

    def load_module(module_name):
        module = importlib.import_module(module_name)
        return module

    # Somewhere else in your code
    my_module = load_module('my_module')

    By using importlib.import_module(), you are effectively performing lazy loading because the module is loaded only when the load_module() function is called.

    Loading Modules Based on User Input

    One common scenario where dynamic loading is useful is when you want to load a module based on user input. For example, consider an application that offers different functionality modules, and the user selects the module they want to use. Here’s how you can use importlib to achieve this:

    import importlib

    def load_module_by_name(module_name):
        try:
            module = importlib.import_module(module_name)
            return module
        except ImportError:
            print(f"Module {module_name} not found.")
            return None

    # Get user input for the desired module
    user_input = input("Enter the name of the module you want to load: ")
    loaded_module = load_module_by_name(user_input)

    if loaded_module:
        # Use the loaded module
        loaded_module.some_function()

    Conditional Loading with If Statements

    Another use case for dynamic loading is loading modules conditionally based on certain runtime conditions. This can help avoid unnecessary module loading and optimize the application’s performance. Here’s an example of how to conditionally load a module using an if statement:

    import importlib

    def load_module_conditionally(condition):
        if condition:
            module_name = "module_a"  # Replace with the actual module name
            module = importlib.import_module(module_name)
            return module
        else:
            return None

    # Check a condition to determine if the module should be loaded
    should_load_module = True  # Replace with your actual condition
    loaded_module = load_module_conditionally(should_load_module)

    Using LazyLoader for Enhanced Lazy Loading

    The LazyLoader class provided by the importlib library offers an enhanced way to achieve lazy loading. LazyLoader postpones the execution of the loader of a module until the module has an attribute accessed.

    import importlib.util
    import sys

    def lazy(fullname):
        try:
            return sys.modules[fullname]
        except KeyError:
            spec = importlib.util.find_spec(fullname)
            loader = importlib.util.LazyLoader(spec.loader)
            spec.loader = loader
            module = importlib.util.module_from_spec(spec)
            # Insert the module into sys.modules; the lazy loader defers
            # actual execution until an attribute is first accessed.
            sys.modules[fullname] = module
            loader.exec_module(module)
            return module

    To use the lazy() function, simply call it with the full name of the module you want to lazily load. For example:

    lazy_module = lazy('my_lazy_module')

    Calling the lazy() function that we have created will not import the module immediately. It will instead wait until an attribute or method from the module is accessed, and then it will import the module (if it already hasn’t been).

    For example:

    lazy_module.method() # This triggers the import

    It is not recommended to use this feature unless start-up time is critical to your application.

    This marks the end of the “Python Lazy Loading with Importlib” Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comments section below.

    The post Python Lazy Loading with Importlib appeared first on CodersLegacy.

    Categories: FLOSS Project Planets

    Anwesha Das: Ansible 8.3.0 out now

    Thu, 2023-08-17 10:59

    I released Ansible 8.3.0 on 15 August 2023. This is an Ansible stable release. You can read the full Changelog here

    You can install it via pip.

    python3 -m pip install ansible==8.3.0 --user

    You can have a look at the announcement.

    Fun fact: while working on the release, GitHub ran below 30 KiB/s the whole day.

    To follow all our updates on the Ansible project and community, subscribe to Bullhorn, our weekly newsletter. Join us in our community discussion in this Matrix room.

    Categories: FLOSS Project Planets

    Stack Abuse: How to Select Columns in Pandas Based on a String Prefix

    Thu, 2023-08-17 10:40
    Introduction

    Pandas is a powerful Python library for working with and analyzing data. One operation that you might need to perform when working with data in Pandas is selecting columns based on their string prefix. This can be useful when you have a large DataFrame and you want to focus on specific columns that share a common prefix.

    In this Byte, we'll explore a few methods to achieve this, including creating a series to select columns and using DataFrame.loc.

    Select All Columns Starting with a Given String

    Let's start with a simple DataFrame:

    import pandas as pd

    data = {
        'item1': [1, 2, 3],
        'item2': [4, 5, 6],
        'stuff1': [7, 8, 9],
        'stuff2': [10, 11, 12]
    }
    df = pd.DataFrame(data)
    print(df)

    Output:

       item1  item2  stuff1  stuff2
    0      1      4       7      10
    1      2      5       8      11
    2      3      6       9      12

    To select columns that start with 'item', you can use list comprehension:

    selected_columns = [column for column in df.columns if column.startswith('item')]
    print(df[selected_columns])

    Output:

       item1  item2
    0      1      4
    1      2      5
    2      3      6

    Creating a Series to Select Columns

    Another approach to select columns based on their string prefix is to create a Series object from the DataFrame columns, and then use the str.startswith() method. This method returns a boolean Series where a True value means that the column name starts with the specified string.

    selected_columns = pd.Series(df.columns).str.startswith('item')
    print(df.loc[:, selected_columns])

    Output:

       item1  item2
    0      1      4
    1      2      5
    2      3      6

    Using DataFrame.loc to Select Columns

    The DataFrame.loc indexer is primarily label-based, but it may also be used with a boolean array. (The older .ix indexer, which had a number of problems, was deprecated and has since been removed from pandas.) .loc will raise a KeyError when the requested items are not found.

    Consider the following example:

    selected_columns = df.columns[df.columns.str.startswith('item')]
    print(df.loc[:, selected_columns])

    Output:

       item1  item2
    0      1      4
    1      2      5
    2      3      6

    Here, we first create a boolean array that is True for columns starting with 'item'. Then, we use this array to select the corresponding columns from the DataFrame using the .loc indexer. This method is more efficient than the previous ones, especially for large DataFrames, as it avoids creating an intermediate list or Series.

    Applying DataFrame.filter() for Column Selection

    The filter() function in pandas DataFrame provides a flexible and efficient way to select columns based on their names. It is especially useful when dealing with large datasets with many columns.

    The filter() function allows us to select columns based on their labels. We can use the like parameter to specify a string pattern that matches the column names. However, if we want to select columns based on a string prefix, we can use the regex parameter.

    Here's an example:

    import pandas as pd

    # Create a DataFrame
    df = pd.DataFrame({
        'product_id': [101, 102, 103, 104],
        'product_name': ['apple', 'banana', 'cherry', 'date'],
        'product_price': [1.2, 0.5, 0.75, 1.3],
        'product_weight': [150, 120, 50, 60]
    })

    # Select columns that start with 'product'
    df_filtered = df.filter(regex='^product')
    print(df_filtered)

    This will output:

       product_id product_name  product_price  product_weight
    0         101        apple           1.20             150
    1         102       banana           0.50             120
    2         103       cherry           0.75              50
    3         104         date           1.30              60

    In the above code, the ^ symbol is a regular expression that matches the start of a string. Therefore, '^product' will match all column names that start with 'product'.
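For contrast, the like parameter mentioned earlier does plain substring matching rather than regular-expression matching, so it matches anywhere in the column name, not only at the start. A small sketch with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({
    'product_id': [101, 102],
    'product_name': ['apple', 'banana'],
    'byproduct': ['peel', 'skin'],   # contains 'product', but not as a prefix
})

# like= keeps any column whose name *contains* the substring,
# so 'byproduct' is included -- unlike regex='^product'
print(df.filter(like='product').columns.tolist())
print(df.filter(regex='^product').columns.tolist())
```

This makes regex the right choice when you specifically need prefix matching.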

    Note: The filter() function returns a new DataFrame containing the selected columns, so modifications to it will not normally affect the original DataFrame.

    Conclusion

    In this Byte, we explored different ways to select columns in a pandas DataFrame based on a string prefix. We learned how to create a Series and use it to select columns, how to use the DataFrame.loc function, and how to apply the DataFrame.filter() function. Of course, each of these methods has its own advantages and use cases. The choice of method depends on the specific requirements of your data analysis task.

    Categories: FLOSS Project Planets

    Matt Layman: Deployment Checklist - Building SaaS with Python and Django #168

    Wed, 2023-08-16 20:00
    In this episode, I added the deployment checklist and improved the security of the app. Then we moved to work to set up the database to use DATABASE_URL and prepare to use Postgres.
    Categories: FLOSS Project Planets

    Mike Driscoll: Global Interpreter Lock Optional in Python 3.13

    Wed, 2023-08-16 18:44

    Python’s Global Interpreter Lock (GIL) may finally be coming to an end. The Python Steering Council recently announced that they are accepting PEP 703. This PEP proposes adding a build configuration (--disable-gil) to CPython, which will turn off the GIL.

    The Python Global Interpreter Lock, or GIL, is a mutex (lock) that only ever allows the Python interpreter to run in a single thread at a time.

    You can read more about this topic in the What is the Python GIL? article that was published last month.
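The practical effect of the GIL is easy to demonstrate: for CPU-bound work, two threads take about as long as running the same work serially, because only one thread executes Python bytecode at a time. The following is a rough sketch; exact timings vary by machine and interpreter version:

```python
import threading
import time

def countdown(n):
    # Pure-Python CPU-bound work; the running thread holds the GIL
    while n:
        n -= 1

N = 5_000_000

start = time.perf_counter()
countdown(N)
countdown(N)
serial = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=countdown, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# With the GIL, the threaded version is typically no faster than the serial one
print(f"serial: {serial:.2f}s, threaded: {threaded:.2f}s")
```

A GIL-free build would let the two threads run the countdowns truly in parallel, which is exactly what PEP 703's --disable-gil option aims to allow.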

    What About Python 3.12?

    Of course, the earliest you’ll get to try this new build config flag is in early builds for Python 3.13, which probably won’t be available until late in 2023 or early 2024.

    In the meantime, you might want to read up on PEP 684, which allows sub-interpreters to work in Python 3.12. You can start playing around with those in the release candidate builds for 3.12 now.

    Wrapping Up

    These are exciting times in Python land. While there aren't any new core features on the scale of 3.10's structural pattern matching, there are still some important improvements and changes coming to the language that will impact Python in profound ways.

    The post Global Interpreter Lock Optional in Python 3.13 appeared first on Mouse Vs Python.

    Categories: FLOSS Project Planets

    Real Python: Python Polars: A Lightning-Fast DataFrame Library

    Wed, 2023-08-16 10:00

    In the world of data analysis and manipulation, Python has long been the go-to language. With extensive and user-friendly libraries like NumPy, pandas, PySpark, and Dask, there’s a solution available for almost any data-driven task. Among these libraries, one name that’s been generating a significant amount of buzz lately is Polars.

    Polars is a high-performance DataFrame library, designed to provide fast and efficient data processing capabilities. Inspired by the reigning pandas library, Polars takes things to another level, offering a seamless experience for working with large datasets that might not fit into memory.

    In this tutorial, you’ll learn:

    • Why Polars is so performant and attention-grabbing
    • How to work with DataFrames, expressions, and contexts
    • What the lazy API is and how to use it
    • How to integrate Polars with external data sources and the broader Python ecosystem

    After reading, you’ll be equipped with the knowledge and resources necessary to get started using Polars for your own data tasks. Before reading, you’ll benefit from having a basic knowledge of Python and experience working with tabular datasets. You should also be comfortable with DataFrames from any of the popular DataFrame libraries.

    Get Your Code: Click here to download the free sample code that shows you how to optimize your data processing with the Python Polars library.

    The Python Polars Library

    Polars has caught a lot of attention in a short amount of time, and for good reason. In this first section, you’ll get an overview of Polars and a preview of the library’s powerful features. You’ll also learn how to install Polars along with any dependencies that you might need for your data processing task.

    Getting to Know Polars

    Polars combines the flexibility and user-friendliness of Python with the speed and scalability of Rust, making it a compelling choice for a wide range of data processing tasks. So, what makes Polars stand out among the crowd? There are many reasons, one of the most prominent being that Polars is lightning fast.

    The core of Polars is written in Rust, a language that operates at a low level with no external dependencies. Rust is memory-efficient and gives you performance on par with C or C++, making it a great language to underpin a data analysis library. Polars also ensures that you can utilize all available CPU cores in parallel, and it supports large datasets without requiring all data to be in memory.

    Note: If you want to take a deeper dive into Polars’ features, check out this Real Python Podcast episode with Liam Brannigan. Liam is a Polars contributor, and he offers a nice firsthand perspective on Polars’ capabilities.

    Another standout feature of Polars is its intuitive API. If you’re already familiar with libraries like pandas, then you’ll feel right at home with Polars. The library provides a familiar yet unique interface, making it easy to transition to Polars. This means you can leverage your existing knowledge and codebase while taking advantage of Polars’ performance gains.

    Polars’ query engine leverages Apache Arrow to execute vectorized queries. Exploiting the power of columnar data storage, Apache Arrow is a development platform designed for fast in-memory processing. This is yet another rich feature that gives Polars an outstanding performance boost.

    These are just a few key details that make Polars an attractive data processing library, and you’ll get to see these in action throughout this tutorial. Up next, you’ll get an overview of how to install Polars.

    Installing Python Polars

    Before installing Polars, make sure you have Python and pip installed on your system. Polars supports Python versions 3.7 and above. To check your Python version, open a terminal or command prompt and run the following command:

    $ python --version

    If you have Python installed, then you’ll see the version number displayed below the command. If you don’t have Python 3.7 or above installed, follow these instructions to get the correct version.

    Polars is available on PyPI, and you can install it with pip. Open a terminal or command prompt, create a new virtual environment, and then run the following command to install Polars:

    (venv) $ python -m pip install polars

    This command will install the latest version of Polars from PyPI onto your machine. To verify that the installation was successful, start a Python REPL and import Polars:

>>> import polars as pl

    If the import runs without error, then you’ve successfully installed Polars. You now have the core of Polars installed on your system. This is a lightweight installation of Polars that allows you to get started without extra dependencies.

    Polars has other rich features that allow you to interact with the broader Python ecosystem and external data sources. To use these features, you need to install Polars with the feature flags that you’re interested in. For example, if you want to convert Polars DataFrames to pandas DataFrames and NumPy arrays, then run the following command when installing Polars:

    Read the full article at https://realpython.com/polars-python/ »

    [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

    Categories: FLOSS Project Planets

    PyBites: Break Out of Tutorial Hell, Build up The Python Coding Habit on Our Platform 💪

    Wed, 2023-08-16 04:51

    Ready to level up your Python skills?

    Stop tutorial paralysis and start implementing

    Here are 10 reasons coding on our platform (CodeChalleng.es) is so effective for (aspiring) Python programmers.

    1⃣ Real World Problems

    PyBites allows you to learn Python by solving real world problems, not just tutorial toy examples.

    This fosters active learning and helps you retain language features.

    2⃣ Deep Understanding

    From aspiring web developers to data scientists, our platform offers a wide variety of 400 bite-sized exercises to help you gain fluency in Python.

    Exercises range from core Python to FastAPI, data analysis / science, bioinformatics, regular expressions, algorithms, OOP, string manipulation, web scraping and much more.

    3⃣ Expanding Skills

People who code with us love how PyBites expands their Python skills, from list comprehensions, lambdas, and context managers to regexes, decorators, and many more useful concepts.

    4⃣ Quality Training

    We focus on teaching clean, Pythonic code in the context of real world scenarios.

    Our platform is known for its clarity and effectiveness in bridging the Python skills gap.

    You also learn the important skill of coding towards the tests, looking at error output and debugging.

    5⃣ Consistency

    Pybites gives you clear direction so you can focus on what really matters: writing Python code.

    No more tutorial paralysis or getting stuck on information overload. You code consistently.

    Just wanted to drop a quick line here to say that Pybites helped me land my job as a Software Developer! Unlike other platforms, Pybites gives you a taste of real-world problems through bite-sized challenges; this is exactly what I needed to prepare for technical interviews. Seriously, big thanks to Bob and Julian for creating this and helping so many become better developers, you guys rock!!!
    – Sangeeta Jadoonanan

    6⃣ Get Certified

With us, learning is fun! Earn Ninja belts, badges, and certificates that will boost your LinkedIn profile.

    Show the world that you know Python, and you know how to use it in the real world because that is what we simulate.

    7⃣ Comfort Zone

    Growth happens outside your comfort zone. Our platform is designed to push you beyond your limits, essential for becoming proficient.

Many people who have coded on our platform admitted that they were pushed outside of their comfort zone, which was at times uncomfortable, but they also attributed their rapid growth to it!

    8⃣ Daily Workouts

    Keep your Python skills sharp with our daily challenges. It’s like a gym membership for your coding abilities.

    To see consistent physical results you work out multiple times a week. Likewise to become an ace at Python, you code daily. With our gamification (e.g. streak calendar) we make this easier.

    9⃣ Community Support

    You’re not alone! We have a welcoming and friendly community of Pythonistas ready to support and learn with you.

🔟 Stay Motivated

    Our gamification system is designed to keep you motivated consistently. With scoreboards, badges, and a 100 days of code challenge, you’re bound to stay committed!

    So why wait? Join almost 45K Pythonistas, sign up for FREE and start your Python journey NOW.

    https://codechalleng.es

    We also offer school, team and interviewing tiers. Pricing information is here.

    Happy coding

Here’s what people who have coded on our platform are saying:

    I’d tried various other learning platforms, and enjoyed them to a greater or lesser extent, but for me the Pybites platform with its gamified instant gratification and superb depth and breadth is the only one that’s kept me coming back for more consistently for months at a time.
    – Chris Patti

    I’ve found my first job as a Java/Python developer and I sincerely believe that it’s thanks to my daily training on the PyBites platform.
    – Francois Noel

    Every bite of Py has been educational and I am continuously being pushed way past my comfort zone and have grown accustomed to an ever changing and dynamic environment. At this rate, I’ll be a pro in no time!
    – Martin Uribe

    I think PyBites is the greatest thing since sliced bread. I’ve used Hackerrank, Codesignal, Geeks2Geeks, TopCoder, Euler, Leetcode, etc, etc, etc. All of them have their good points and their uses, and some I still use regularly, but *PyBites* is the one that’s most useful to me, clearest, and best put together. PyBites is helping me fill in gaps in my Python skills, and level up.
    – Andrew Jarcho

    I absolutely love the codechalleng.es website, the ethos of learning by doing is fantastic and your platform is one of the best I have used, no more ‘tutorial paralysis’ here!
    – Lee Cullip

    PyBites exercises are a fantastic way to learn by doing. I’ve gained much more experience and confidence in my coding doing the bites on this site than I have in a couple of years of using books and video tutorials. I wish this site had existed when I first started learning Python, it would have made it much easier and more fun.
    – Rebecca Mackley

    I have spent time on a number of “coding tutorial” sites, often-times feeling like it’s the same old task-oriented exercises (i.e. take an input and print all even numbers between 0 and the input to the console). PyBites is the first platform that has felt much more “well-rounded”, challenging me to solve actual problems, as well as implement “full” code [as opposed to snippets]. Great work!
    – Dan Haight

    I’m a newbie in Python and after taking some tutorials and work on them, I felt that I needed more. More practice, more structured and focused practice…as I have discovered here: deliberate practice. After a month working in the platform (just a month) I feel more confident in writing code and solving problems. I know that it is only the beginning of a great and not easy journey, but in this month I have realized that you have to work outside your comfort zone in order to grow and learn new skills.
    – Alberto Sastre

    In my journey, I’ve had a fair share of experiences with different sites teaching Python. Out of those sites, PyBites code challenge is by far the best I’ve ever experienced. What’s so good about it is it holds your hand just enough for you to do your own research and study. At the end of the day, what raises you as a coder is not the tutorials, that hold your hands throughout the whole way, but the ones that let go of your hands when they know you can stand on your feet, albeit it’s very difficult sometimes.
    – Daniel Elder

I did a bit of the bites while doing the 100 days but then never came back. I watched lots of tutorials to get concepts and I noticed that when I learned the most was when I was using Python at work in real situations, solving real problems. This is exactly what PyBites is.
    – Pedro Junqueira

    Categories: FLOSS Project Planets

    PyCoder’s Weekly: Issue #590 (Aug. 15, 2023)

    Tue, 2023-08-15 15:30

    #590 – AUGUST 15, 2023
    View in Browser »

    How to Annotate Methods That Return self

    In this tutorial, you’ll learn how to use the Self type hint in Python to annotate methods that return an instance of their own class. You’ll gain hands-on experience with type hints and annotations of methods that return an instance of their class, making your code more readable and maintainable.
    REAL PYTHON
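A minimal sketch of the idea (the `Stack` class is hypothetical; `typing.Self` requires Python 3.11+, so this defers annotation evaluation for older interpreters):

```python
from __future__ import annotations  # postpone annotation evaluation

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import Self  # Python 3.11+; only needed by type checkers


class Stack:
    """A stack whose mutating methods return the instance for chaining."""

    def __init__(self) -> None:
        self._items: list[int] = []

    def push(self, item: int) -> Self:
        self._items.append(item)
        # Annotating the return type as Self (instead of "Stack") keeps
        # the type precise for subclasses, too.
        return self


print(Stack().push(1).push(2)._items)  # [1, 2]
```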

    Creating a Context Manager in Python

    Objects with __enter__ and __exit__ methods can be used as context managers in Python. This article (and screencast) explains most of what you’ll want to know when creating your own context managers.
    TREY HUNNER • Shared by Trey Hunner
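A minimal hand-rolled example of the protocol (the `Indenter` class is hypothetical, for illustration only):

```python
class Indenter:
    """Context manager that tracks nesting depth via __enter__/__exit__."""

    def __init__(self):
        self.level = 0
        self.lines = []

    def __enter__(self):
        self.level += 1
        return self  # the value bound by "with ... as"

    def __exit__(self, exc_type, exc_value, traceback):
        self.level -= 1
        return False  # don't suppress exceptions

    def emit(self, text):
        self.lines.append("    " * self.level + text)


ind = Indenter()
with ind:
    ind.emit("outer")
    with ind:
        ind.emit("inner")

print(ind.lines)  # ['    outer', '        inner']
```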

    If App Signup is Giving You Headaches, Here’s Your Answer: FusionAuth’s Modern Guide to OAuth

    Our Modern Guide to OAuth is chock full of real-world examples, without fluffy BS. We made it, you learn it. It’s a win-win. And when you’re ready, you can spin up an instance of FusionAuth for FREE in just five minutes →
    FUSIONAUTH sponsor

    PEP 723: Embedding pyproject.toml in Single-File Scripts

    This PEP proposes a metadata format which a single-file script can use to specify dependency and tool information for IDEs and external development tools. It replaces PEP 722.
    PYTHON.ORG

    PyPI: 2FA Enforcement for New User Registrations

    PYPI.ORG

    PSF Announces New PyPI Safety & Security Engineer

    PYTHON SOFTWARE FOUNDATION

    Python 3.12.0 RC 1 Released

    CPYTHON DEV BLOG

Articles & Tutorials

Prompt Engineering: A Practical Example

    Learn prompt engineering techniques with a practical, real-world project to get better results from large language models. This tutorial covers zero-shot and few-shot prompting, delimiters, numbered steps, role prompts, chain-of-thought prompting, and more. Improve your LLM-assisted projects today.
    REAL PYTHON

    Why Static Languages Suffer From Complexity

    An extremely detailed, deep dive on how static type systems impact the consistency of languages. Hirrolot compares a variety of lesser known languages to see the consequences of their decisions. See also the associated Hacker News discussion.
    HIRROLOT

    Companies like GitLab, Snowflake, and Slack Scan Their Code for Vulnerabilities Using Semgrep

    Scan your code and dependencies for security vulnerabilities for free with Semgrep - the trusted OSS tool used by top companies like GitLab, Snowflake, and Slack. No security expertise needed, simply add your project and let Semgrep do the work in just minutes →
    SEMGREP sponsor

    Python’s list: A Deep Dive With Examples

    In this tutorial, you’ll dive deep into Python’s lists. You’ll learn how to create them, update their content, populate and grow them, and more. Along the way, you’ll code practical examples that will help you strengthen your skills with this fundamental data type in Python.
    REAL PYTHON

    A Complete Comparison of Sorting Algorithms

A comprehensive comparison of the performance of 9 major sorting algorithms and how well they perform under varying circumstances. The aim of the article is to show you where each type of algorithm shines, and where it does badly.
    CODERSLEGACY.COM • Shared by Raahim Siddiqi

    Support Bootstrap-Alerts in Python-Markdown

    If you’re using Markdown on your blog, or any website, a conversion pipeline allows you to create your own rules and widgets. This article shows you how to integrate Bootstrap Alert boxes into a Markdown workflow.
    FLORIAN DAHLITZ • Shared by Florian Dahlitz

    How a Simple Import Can Modify the Interpreter

    This article shows a sample module that swaps the values for 8 and 9, not something generally recommended. Learn how side effects from an import can impact your code and just what the integer object cache is.
    KEN SCHUTTE
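The swap trick relies on poking at CPython’s cache of small integers, an implementation detail (not a language guarantee). A safe way to observe that cache without modifying anything (the `int("...")` calls avoid compile-time constant folding, which would otherwise hide the effect):

```python
# CPython caches small ints (-5 through 256): equal values in that
# range share a single object.
a = int("256")
b = int("256")
print(a is b)  # True on CPython: both names point at the cached object

c = int("257")
d = int("257")
print(c is d)  # False on CPython: 257 is outside the cache
```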

    Does Elegance Matter?

    In this article, the author explains why he thinks that elegance should be a fundamental driver when you are writing (Python) code, and gives a few tips on how to write elegant code.
    MATHSPP.COM • Shared by Rodrigo Girão Serrão

    Why Python Is Amazing

This opinion piece by Jos is a counter to the “it’s a terrible language” posts you come across once in a while. Read why Jos thinks Python is amazing.
    JOS VISSER

    Llama From Scratch

    This blog post provides step by step instructions on how to implement llama from scratch, using a dramatically scaled-down version for training.
    BRIAN KITANO

    Learn How to Deploy Scientific AI Models to Edge Environments, Using OpenVINO Model Server

    🤔 Can cell therapy and AI be used together? Learn how to efficiently build and deploy scientific AI models using open-source technologies with Beckman Coulter Life Sciences at our upcoming DevCon OpenVINO webinar. #DevCon2023
    INTEL CORPORATION sponsor

    Adversarial Attacks on Aligned LLMs

    Deep CS paper on how to abuse Large Language Models and work around restrictions where the model is refusing to answer.
    ZOU, WANG, KOLTER, & FREDRIKSON

    2022 PSF Annual Report

    The annual report from the Python Software Foundation details all the changes and events at the PSF last year.
    PYTHON.ORG

Projects & Code

dpv: Alternative to pyenv-virtualenv and virtualenvwrapper

    GITHUB.COM/CAIOARIEDE • Shared by Caio Ariede

    nodice-cli: Word List Generator With No Dependencies

    GITHUB.COM/AVNIGO

    briefcase: Convert Python to a Native Application

    GITHUB.COM/BEEWARE

    django_simple_notification: REST Notification System

    GITHUB.COM/MAHMOUDNASSER01

    pyOCD: Python for Arm Cortex-M Microcontrollers

    GITHUB.COM/PYOCD

Events

Weekly Real Python Office Hours Q&A (Virtual)

    August 16, 2023
    REALPYTHON.COM

    PyData Bristol Meetup

    August 17, 2023
    MEETUP.COM

    PyLadies Dublin

    August 17, 2023
    PYLADIES.COM

    DjangoConAU 2023

    August 18 to August 19, 2023
    DJANGOCON.COM.AU

    PyCon AU 2023

    August 18 to August 23, 2023
    PYCON.ORG.AU

    Chattanooga Python User Group

    August 18 to August 19, 2023
    MEETUP.COM

    PyCon Latam 2023

    August 24 to August 27, 2023
    PYLATAM.ORG

    Happy Pythoning!
    This was PyCoder’s Weekly Issue #590.
    View in Browser »

    [ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

    Categories: FLOSS Project Planets

    Stack Abuse: Counting Non-NaN Values in DataFrame Columns

    Tue, 2023-08-15 10:41
    Introduction

    Data cleaning is an important step in any data science project. In Python, Pandas DataFrame is a commonly used data structure for data manipulation and analysis.

    In this Byte, we will focus on handling non-NaN (Not a Number) values in DataFrame columns. We will learn how to count and calculate total non-NaN values, and also treat empty strings as NA values.

    Counting Non-NaN Values in DataFrame Columns

    Pandas provides the count() function to count the non-NaN values in DataFrame columns. Let's start by importing the pandas library and creating a simple DataFrame.

import pandas as pd
import numpy as np

data = {'Name': ['Tom', 'Nick', 'John', np.nan],
        'Age': [20, 21, 19, np.nan]}
df = pd.DataFrame(data)
print(df)

    Output:

   Name   Age
0   Tom  20.0
1  Nick  21.0
2  John  19.0
3   NaN   NaN

    Now, we can count the non-NaN values in each column using the count() method:

    print(df.count())

    Output:

Name    3
Age     3
dtype: int64

Calculating Total Non-NaN Values in DataFrame

    If you want to get the total number of non-NaN values in the DataFrame, you can use the count() function combined with sum().

    print(df.count().sum())

    Output:

    6

    This indicates that there are a total of 6 non-NaN values in the DataFrame.

    Treating Empty Strings as NA Values

    In some cases, you might want to treat empty strings as NA values. You can use the replace() function to replace empty strings with np.nan.

data = {'Name': ['Tom', 'Nick', '', 'John'],
        'Age': [20, 21, '', 19]}
df = pd.DataFrame(data)
print(df)

    Output:

   Name Age
0   Tom  20
1  Nick  21
2
3  John  19

    Now, replace the empty strings with np.nan:

df.replace('', np.nan, inplace=True)
print(df)

    Output:

   Name   Age
0   Tom  20.0
1  Nick  21.0
2   NaN   NaN
3  John  19.0

    Note: This operation changes the DataFrame in-place. If you want to keep the original DataFrame intact, don't use the inplace=True argument.
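For comparison, here is a sketch of the non-in-place variant: replace() returns a new DataFrame and leaves the original untouched.

```python
import pandas as pd
import numpy as np

data = {'Name': ['Tom', 'Nick', '', 'John'],
        'Age': [20, 21, '', 19]}
df = pd.DataFrame(data)

# Without inplace=True, replace() returns a new DataFrame; df is unchanged
cleaned = df.replace('', np.nan)

print(df['Name'].tolist())           # ['Tom', 'Nick', '', 'John']
print(cleaned['Name'].isna().sum())  # 1
```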

    Using notna() to Count Non-Missing Values

    A slightly more direct way to filter and count non-NaN values is with the notna() method.

    Let's start with a simple DataFrame:

import pandas as pd

data = {'Name': ['John', 'Anna', None, 'Mike', 'Sarah'],
        'Age': [28, None, None, 32, 29],
        'City': ['New York', 'Los Angeles', None, 'Chicago', 'Boston']}
df = pd.DataFrame(data)
print(df)

    This will output:

    Name   Age         City
0   John  28.0     New York
1   Anna   NaN  Los Angeles
2   None   NaN         None
3   Mike  32.0      Chicago
4  Sarah  29.0       Boston

    You can see that our DataFrame has some missing values (NaN or None).

    Now, if you want to count the non-missing values in the 'Name' column, you can use notna():

    print(df['Name'].notna().sum())

    This will output:

    4

    The notna() function returns a Boolean Series where True represents a non-missing value and False represents a missing value. The sum() function is then used to count the number of True values, which represent the non-missing values.

    Conclusion

    In this Byte, we've learned how to count non-NaN values in DataFrame columns. Handling missing data is an important step in data preprocessing. The notna() function, among other functions in Pandas, provides a straightforward way to count non-missing values in DataFrame columns.

    Categories: FLOSS Project Planets

    Real Python: Process Images Using the Pillow Library and Python

    Tue, 2023-08-15 10:00

    When you look at an image, you see the objects and people in it. However, when you read an image programmatically with Python or any other language, the computer sees an array of numbers. In this video course, you’ll learn how to manipulate images and perform basic image processing using the Python Pillow library.

    Pillow and its predecessor, PIL, are the original Python libraries for dealing with images. Even though there are other Python libraries for image processing, Pillow remains an important tool for understanding and dealing with images.

    To manipulate and process images, Pillow provides tools that are similar to ones found in image processing software such as Photoshop. Some of the more modern Python image processing libraries are built on top of Pillow and often provide more advanced functionality.
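As a small taste of the “array of numbers” idea from above, here is a sketch that builds an image in memory with Pillow and reads it back with NumPy (assumes both libraries are installed; the image is synthetic, no file needed):

```python
from PIL import Image
import numpy as np

# Create a small solid-red image in memory
img = Image.new("RGB", (4, 4), color=(255, 0, 0))

# Reading it programmatically yields an array of numbers
arr = np.asarray(img)
print(arr.shape)           # (4, 4, 3): height, width, RGB channels
print(arr[0, 0].tolist())  # [255, 0, 0]
```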

    In this video course, you’ll learn how to:

    • Read images with Pillow
    • Perform basic image manipulation operations
    • Use Pillow for image processing
    • Use NumPy with Pillow for further processing
    • Create animations using Pillow

    In this video course, you’ll get an overview of what you can achieve with the Python Pillow library through some of its most common methods. Once you gain confidence using these methods, then you can use Pillow’s documentation to explore the rest of the methods in the library. If you’ve never worked with images in Python before, this is a great opportunity to jump right in!


    Categories: FLOSS Project Planets
