Feeds
The Russian Lullaby: How to set up a local development environment (LDE) for Drupal
You may be interested in setting up a working environment for Drupal-based projects, or perhaps you have new members on your development team. Either way, configuring the correct development environment is a fundamental part of working with Drupal. By reading this how-to guide, you will implement a complete, ready-to-go Drupal working environment for versions 8, 9, and 10 of our favorite CMS/framework. Do you want to start?…
Picture from Unsplash, user Mathyas Kurmann, @mathyaskurmann.
This content has been constructed as a …
Seth Michael Larson: Releases on the Python Package Index are never “done”
Published 2024-01-24 by Seth Larson
Reading time: minutes
PEP 740 is a proposal to add support for digital attestations to PyPI artifacts, for example publish provenance attestations, which can be verified and used by tooling.
William Woodruff has been working on PEP 740, which is in draft on GitHub, and he addressed my feedback this week. During this work, the open-endedness of PyPI releases came up in our discussion, specifically how it is a common gotcha that makes life difficult for folks designing tools and policy across multiple software ecosystems.
What does it mean for PyPI releases to be open-ended? It means that you can always upload new files to an existing release on PyPI, even if the release was created years ago. This is because a PyPI “release” is only a thin layer aggregating a bunch of files on PyPI that happen to share the same version.
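To make that mental model concrete, here is a small illustrative sketch (the file names are made up, not real PyPI data): a release is just a grouping of whatever files share a version, and a file uploaded years later simply joins the group.

```python
from collections import defaultdict

# Hypothetical file records, as PyPI might track them: (filename, version)
files = [
    ("example-1.0-py3-none-any.whl", "1.0"),
    ("example-1.0.tar.gz", "1.0"),
    # A wheel for a newer Python can be uploaded years later and
    # simply joins the existing "1.0" release:
    ("example-1.0-cp313-cp313-win_amd64.whl", "1.0"),
]

# The "release" is nothing more than this grouping by version:
releases = defaultdict(list)
for filename, version in files:
    releases[version].append(filename)

assert len(releases["1.0"]) == 3
```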
We opened this up as a wider discussion on discuss.python.org about this property. Summarizing that discussion:
- New Python releases mean new wheels need to be built for non-ABI3 compatible projects. IMO this is the most compelling reason to keep this property.
- Draft releases seem semi-related, being able to put artifacts into a "queue" before making them public.
- Ordering of which wheel gets evaluated as an installation candidate isn't well defined. It's up to installers, and tends to go from more specific to less specific.
- PyPI doesn't allow single files to be yanked even though PEP 592 allows for yanking at the file level instead of only the release level.
- The "attack" vector is fairly small, this property would mostly only provide additional secrecy for attackers by blending into existing releases.
CPython 3.13.0a3 was released. This is the very first CPython release that contains any SBOM metadata at all, which means we can create an initial draft SBOM document.
Much of the work on CPython's SBOMs was done to fix issues related to pip's vendored dependencies and issues found by downstream distributors of CPython builds like Red Hat. The issues were as follows:
- Don't require internet access to run the SBOM script. We use internet access to automatically generate metadata for pip, but if the internet isn't available we should continue using the metadata that we already have (assuming the file hasn't changed) and then rely on CI, which should always have internet access, to verify the values (the script fails in CI otherwise).
- If the pip wheel is removed, don't raise an unskippable error. Redistributors will typically remove the wheel in favor of their own distribution of pip for ensurepip.
- Enumerate pip's vendored dependencies in the SBOM. This requires parsing the vendor.txt file inside of pip's vendor directory.
All of these issues are related and touch the same place in the codebase, so they resulted in a medium-sized pull request fixing them all together.
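The vendor.txt format is essentially a pinned requirements file ("name==version" lines, plus comments), so enumerating pip's vendored dependencies can be sketched roughly like this (the exact parsing in CPython's SBOM script may differ):

```python
def parse_vendor_txt(text):
    """Parse pip's _vendor/vendor.txt content into (name, version) pairs."""
    dependencies = []
    for line in text.splitlines():
        # Drop trailing comments and surrounding whitespace.
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        name, _, version = line.partition("==")
        dependencies.append((name.strip(), version.strip()))
    return dependencies

# Illustrative input, not pip's actual current vendor.txt:
sample = """\
# vendored dependencies
CacheControl==0.13.1
idna==3.6
"""
assert parse_vendor_txt(sample) == [("CacheControl", "0.13.1"), ("idna", "3.6")]
```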
On the release side, I've addressed feedback from the first round of reviews for generating SBOMs for source code artifacts and uploading them during the release. Once those SBOMs start being generated they'll automatically begin being added to python.org/downloads.
Other items
- Two new Developer-in-Residence roles have been filled at the Python Software Foundation. Welcome, Petr Viktorin as the Deputy Developer-in-Residence and Serhiy Storchaka as the Supporting Developer-in-Residence. We've already gotten a chance to collaborate, and I look forward to even more.
- scikit-learn is considering build reproducibility.
- Wrote my piece for the Python Software Foundation Annual Impact report.
- Submitted to the OpenSSF SOSS Community Day Call for Proposals (see you in Washington!)
- Reviewed a fix by Erlend Aasland for the SBOM generation script.
- I published a blog post which provides guidance on how to remove a maintainer from an open source project in order to reduce its attack surface.
That's all for this week! 👋 If you're interested in more you can read last week's report.
Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.
This work is licensed under CC BY-SA 4.0
Kay Hayen: Nuitka Package Configuration Part 3
This is the third part of a post series under the tag package_config that explains the Nuitka package configuration in more detail. To recap, Nuitka package configuration is the way Nuitka learns about hidden dependencies, needed DLLs, data files, and just generally avoids bloat in the compilation. The details are on a dedicated page on the web site, Nuitka Package Configuration, but reading on will be just fine.
Problem Package
Each post will feature one package that caused a particular problem. In this case, we are talking about the package toga.
Problems like the ones with this package are typically encountered in standalone mode only, but they also affect accelerated mode, since Nuitka doesn't compile everything desired in that case. Some packages, and toga is one of them, look at what OS they are running on, environment variables, etc., and then, in a relatively static fashion, but one that Nuitka cannot see through, load what they call a “backend” module.
We are going to look at that in some detail, and we will see a workaround applied with the anti-bloat engine doing code modification on the fly, making the choice determined at compile time, and visible to Nuitka in this way.
Initial Symptom
The initial symptom reported was that toga suffered from broken version lookups and therefore did not work. We encountered two things that prevented it; one was about the version number. It was trying to do int after resolving the version of toga by itself to None.
Traceback (most recent call last):
  File "C:\py\dist\toga1.py", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\toga\__init__.py", line 1, in <module toga>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\toga\app.py", line 20, in <module toga.app>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\toga\widgets\base.py", line 7, in <module toga.widgets.base>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\travertino\__init__.py", line 4, in <module travertino>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\setuptools_scm\__init__.py", line 7, in <module setuptools_scm>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\setuptools_scm\_config.py", line 15, in <module setuptools_scm._config>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\setuptools_scm\_integration\pyproject_reading.py", line 8, in <module setuptools_scm._integration.pyproject_reading>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "C:\py\dist\setuptools_scm\_integration\setuptools.py", line 62, in <module setuptools_scm._integration.setuptools>
  File "C:\py\dist\setuptools_scm\_integration\setuptools.py", line 29, in _warn_on_old_setuptools
ValueError: invalid literal for int() with base 10: 'unknown'

So, this is clearly something that we consider bloat in the first place: looking up your own version number at runtime. The use of setuptools_scm implies the use of setuptools, for which the version cannot be determined, and that is what crashes.
Step 1 - Analysis of initial crashing
So, the first thing we did was to repair setuptools, so it knows its version. It does this a bit differently, because it cannot use itself. Our compile time optimization failed there, but it would also be overkill. We never came across this before, since we normally avoid setuptools very hard, but it's not good to be incompatible.
- module-name: 'setuptools.version'
  anti-bloat:
    - description: 'workaround for metadata version of setuptools'
      replacements:
        "pkg_resources.get_distribution('setuptools').version": "repr(__import__('setuptools.version').version.__version__)"

We do not have to include all metadata for setuptools here just to get that one item, so we chose to make a simple string replacement that looks the value up at compile time and puts it into the source code automatically. That removes the pkg_resources.get_distribution() call entirely.
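The replacement mechanism is, at its core, a textual substitution: the right-hand expression is evaluated at compile time and the resulting literal is spliced into the source. A rough illustration of the idea, with a made-up version string standing in for whatever the compile-time evaluation would produce:

```python
# The source code as the anti-bloat engine sees it:
source = "version = pkg_resources.get_distribution('setuptools').version"

# At compile time, Nuitka would evaluate the configured expression, e.g.
# repr(__import__('setuptools.version').version.__version__).
# "69.0.3" here is a hypothetical stand-in for that result:
compile_time_value = repr("69.0.3")

# The runtime metadata lookup is replaced by the literal:
patched = source.replace(
    "pkg_resources.get_distribution('setuptools').version",
    compile_time_value,
)

assert patched == "version = '69.0.3'"
assert "pkg_resources" not in patched
```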
With that, setuptools_scm was not crashing anymore. That's good. But we don't really want it to be included: it's good for dynamically detecting the version from git and what not, but including a framework for building C extensions is not a good idea in the general case. Nuitka therefore said this:
Nuitka-Plugins:WARNING: anti-bloat: Undesirable import of 'setuptools_scm' (intending to
Nuitka-Plugins:WARNING: avoid 'setuptools') in 'toga' (at
Nuitka-Plugins:WARNING: 'c:\3\Lib\site-packages\toga\__init__.py:99') encountered. It may
Nuitka-Plugins:WARNING: slow down compilation.
Nuitka-Plugins:WARNING: Complex topic! More information can be found at
Nuitka-Plugins:WARNING: https://nuitka.net/info/unwanted-module.html

So that's informing the user to take action. And in the case of optional imports, i.e. ones where the using code will handle the ImportError just fine and work without it, we can do this:
- module-name: 'toga'
  anti-bloat:
    - description: 'remove setuptools usage'
      no-auto-follow:
        'setuptools_scm': ''
      when: 'not use_setuptools'

Here we say: do not automatically follow imports of setuptools_scm, unless there is other code that still does it. That way, the import still happens if some other part of the code imports the module, but only then. We no longer enforce the non-usage of a module here; we just make that decision based on other uses being present.
With this, the bloat warning and the inclusion of setuptools_scm in the compilation are gone. You always want to make the compilation as small as possible and remove those packages that do not contribute anything but overhead, aka bloat.
The next thing discovered was that toga needs the toga-core distribution for its version check. For that, we use the common solution and declare that we want to include its metadata whenever toga is part of a compilation.
- module-name: 'toga'
  data-files:
    include-metadata:
      - 'toga-core'

So that moved the entire issue of version lookups to resolved.
Step 2 - Dynamic Backend dependency
Now on to the backend issue. What remained was a need for including the platform-specific backend, one that can even be overridden by an environment variable. For full compatibility, we invented something new. Typically, what we would have done is to create a toga plugin for the following snippet.
- module-name: 'toga.platform'
  variables:
    setup_code: 'import toga.platform'
    declarations:
      'toga_backend_module_name': 'toga.platform.get_platform_factory().__name__'
  anti-bloat:
    - change_function:
        'get_platform_factory': "'importlib.import_module(%r)' % get_variable('toga_backend_module_name')"

There is a whole new thing here, a feature that was added specifically to make this easy to do. With the backend selection being complex and partially dynamic code, we didn't want to hard-code it. So we added support for variables and their use in Nuitka Package Configuration.
The first block, variables, defines under declarations a mapping of expressions that will be evaluated at compile time, given the setup code under setup_code.
This then allows us to have a variable with the name of the backend that toga decides to use. We then change the very complex function get_platform_factory, for compilation, into a replacement that Nuitka can statically optimize, so it sees the backend as a dependency and uses it directly at run time, which is what we want.
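The effect of the change_function replacement is that the dynamic factory lookup collapses into a plain importlib.import_module call with a literal module name, which Nuitka can follow statically. A minimal sketch of that end result, using a stdlib module as a stand-in for the real toga backend name:

```python
import importlib

# At compile time, Nuitka resolves 'toga_backend_module_name' to a concrete
# string (on a real system this would be the chosen toga backend module).
# "json" is a stand-in so the sketch runs anywhere:
backend_module_name = "json"

def get_platform_factory():
    # A literal module name: a compiler can see this dependency statically,
    # unlike the original OS/environment-variable-driven selection logic.
    return importlib.import_module(backend_module_name)

factory = get_platform_factory()
assert factory.__name__ == "json"
```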
Final remarks
I hope you will find this information helpful and will join the effort to make packaging for Python work out of the box. Adding support for toga was a bit more complex, but with the new tooling, once an issue is identified as this kind of backend problem, it should be a lot easier.
Lessons learned: we should cover packages that we routinely remove from compilation, like setuptools, but e.g. also IPython. This will have to be added, so that setuptools_scm cannot cloud the view of the actual issues.
Quansight Labs Blog: Captioning: A Newcomer’s Guide
The Drop Times: Technology and People Make Drupal Happen: Fran Garcia
PyCoder’s Weekly: Issue #613 (Jan. 23, 2024)
#613 – JANUARY 23, 2024
View in Browser »
This is a follow-on post to Chris’s article from last year called Fourteen tools at least twelve too many. “Are there still fourteen tools, or are there even more? Has Python packaging improved in a year?”
CHRIS WARRICK
This post describes running Python code on a “soft” air-gapped system, one without direct internet access. Installing packages in a clean environment and moving them to the air-gapped machine has challenges. Read Ibrahim’s take on how he solved the problem.
IBRAHIM AHMED
Get ready to elevate your web development process with the newly released Full Stack FastAPI App Generator by MongoDB, offering a simplified setup process for building modern full-stack web applications with FastAPI and MongoDB →
MONGODB sponsor
After you implement the main functionality of a web project, it’s good to understand how your users interact with your app and where they may run into errors. In this tutorial, you’ll enhance your Flask project by creating error pages and logging messages.
REAL PYTHON
How can you measure the quality of a large language model? What tools can measure bias, toxicity, and truthfulness levels in a model using Python? This week on the show, Jodie Burchell, developer advocate for data science at JetBrains, returns to discuss techniques and tools for evaluating LLMs With Python.
REAL PYTHON podcast
This article presents various aspects you need to consider when choosing a database for your project - querying, performance, ORMs, migrations, etc. It shows how things are approached differently for Postgres vs. DynamoDB and includes examples in Python.
JAN GIACOMELLI • Shared by Jan Giacomelli
Hear from our technical team on how we’ve built Temporal Cloud to deliver world-class latency, performance, and availability for the smallest and largest workloads. Whether you’re using Temporal Cloud or self-host, this series will be full of insights into how to optimize your Temporal Service →
TEMPORAL sponsor
“As with every technology stack, Python has its advantages and limitations. The key to success is to use Python at the right time and in the right place.” This guide talks about what a product owner needs to know to take on a Python project.
PAVLO PYLYPENKO • Shared by Alina
In this video course, you’ll explore how to make HTTP requests using Python’s handy built-in module, urllib.request. You’ll try out examples and go over common errors, all while learning more about HTTP requests and Python in general.
REAL PYTHON course
Nvidia has created GPU-based replacements for NumPy and other tools and promises significant speed-ups, but the comparison may not be accurate. Read on to learn if GPU replacements for CPU-based libraries are really that much faster.
ITAMAR TURNER-TRAURING
Your Django migrations are piling up in your repo? You want to clean them up without a hassle? Check out this new package django-migration-zero that helps make migration management a piece of cake!
RONNY VEDRILLA • Shared by Sarah Boyce
To understand NumPy, you need to understand the ndarray type. This article starts with Python’s native lists and shows you when you need to move to NumPy’s ndarray data type.
STEPHEN GRUPPETTA • Shared by Stephen Gruppetta
PyPy is an alternative implementation of Python, and its C API compatibility layer has some performance issues. This article describes on-going work to improve its performance.
MAX BERNSTEIN
It’s not uncommon to find yourself reading Excel in Python. This article compares several ways to read Excel from Python and how they perform.
HAKI BENITA
This article provides an in-depth walkthrough of how requests are processed in a Flask application.
TESTDRIVEN.IO • Shared by Michael Herman
GITHUB.COM/LEWOUDAR • Shared by Kevin Tewouda
Autometrics-py: Metrics to Debug in Production
GITHUB.COM/AUTOMETRICS-DEV • Shared by Adelaide Telezhnikova
django-cte: Common Table Expressions (CTE) for Django

Events

Weekly Real Python Office Hours Q&A (Virtual)
January 24, 2024
REALPYTHON.COM
January 25, 2024
MEETUP.COM
January 25, 2024
MEETUP.COM
January 27, 2024
MEETUP.COM
January 27, 2024
PYTHON.ORG.BR
Happy Pythoning!
This was PyCoder’s Weekly Issue #613.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
TechBeamers Python: Python Map vs List Comprehension – The Difference Between the Two
In this tutorial, we’ll explain the difference between Python map vs list comprehension. Both map and list comprehensions are powerful tools in Python for applying functions to each element of a sequence. However, they have different strengths and weaknesses, making them suitable for different situations. Here’s a breakdown: What is the Difference Between the Python […]
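As a quick illustration of the comparison (not taken from the article itself), both forms below compute the same result; the comprehension avoids the lambda, while map is convenient when you already have a named function:

```python
nums = [1, 2, 3, 4]

# map applies a function to each element, returning a lazy iterator:
squares_map = list(map(lambda n: n * n, nums))

# A list comprehension expresses the same transformation inline:
squares_comp = [n * n for n in nums]

assert squares_map == squares_comp == [1, 4, 9, 16]
```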
The post Python Map vs List Comprehension – The Difference Between the Two appeared first on TechBeamers.
FSF News: Hayley Tsukayama will speak about grassroots activism at LibrePlanet 2024
ADCI Solutions: A Guide to Creating Pages with Layout Builder
In this post, we explain to all novice Drupal developers and Drupal site owners how to develop a page layout for a Drupal-based site using the Layout Builder.
This is part 2 of the series on the Layout Builder. You can find the first post here: Layout Builder | The power module in a nutshell.
ADCI Solutions: Upgrade Drupal 9 to 10 twice as fast
With Composer and several useful modules, your Drupal 9 site can be upgraded to Drupal 10 as quickly as possible. Here is a step-by-step guide on how to do this and save you time.
Real Python: Python Basics: Lists and Tuples
Python lists are similar to real-life lists. You can use them to store and organize a collection of objects, which can be of any data type. Instead of just storing one item, a list can hold multiple items while allowing manipulation and retrieval of those items. Because lists are mutable, you can think of them as being written in pencil. In other words, you can make changes.
Tuples, on the other hand, are written in ink. They’re similar to lists in that they can hold multiple items, but unlike lists, tuples are immutable, meaning you can’t modify them after you’ve created them.
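A minimal sketch of the pencil-versus-ink distinction:

```python
colors = ["red", "green", "blue"]  # a list: written in pencil
colors[0] = "crimson"              # fine, lists are mutable
colors.append("yellow")
assert colors == ["crimson", "green", "blue", "yellow"]

point = (3, 4)                     # a tuple: written in ink
try:
    point[0] = 5                   # tuples are immutable
except TypeError:
    modified = False               # this branch always runs
assert modified is False
```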
In this video course, you’ll learn:
- What lists and tuples are and how they’re structured
- How lists and tuples differ from other data structures
- How to define and manipulate lists and tuples in your Python code
By the end of this course, you’ll have a solid understanding of Python lists and tuples, and you’ll be able to use them effectively in your own programming projects.
This video course is part of the Python Basics series, which accompanies Python Basics: A Practical Introduction to Python 3. You can also check out the other Python Basics courses.
Note that you’ll be using IDLE to interact with Python throughout this course.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
LN Webworks: Voice Search Optimization & Set Up for Drupal: A Step-by-Step Setup Guide!
Drupal voice search has evolved from being a mere trend to becoming a standard feature for websites today. If you find yourself wondering, 'How do I enable search based on voice recognition on my Drupal website?' — you're in the right place.
Integrating voice search functionality into your Drupal site is not only modern but also enhances user experience and is extremely important for SEO ranking. In this blog post, we'll walk you through the steps to set up search based on voice recognition, making your Drupal site more accessible and user-friendly. But before we dive into the steps, let’s understand…
Python Bytes: #368 That episode where we just ship open source
Specbee: (Not Just Any) Drupal VS WordPress Blogpost - Your Top 5 FAQs Answered
Glyph Lefkowitz: Your Text Editor (Probably) Isn’t Malware Any More
In 2015, I wrote one of my more popular blog posts, “Your Text Editor Is Malware”, about the sorry state of security in text editors in general, but particularly in Emacs and Vim.
It’s nearly been a decade now, so I thought I’d take a moment to survey the world of editor plugins and see where we are today. Mostly, this is to allay fears, since (in today’s landscape) that post is unreasonably alarmist and inaccurate, but people are still reading it.
Problem: vim.org is not available via https.
Is It Fixed? Yep! http://www.vim.org/ redirects to https://www.vim.org/ now.

Problem: Emacs's HTTP client doesn't verify certificates by default.
Is It Fixed? Mostly! The documentation is incorrect and there are some UI problems1, but it doesn’t blindly connect insecurely.

Problem: ELPA and MELPA supply plaintext-HTTP package sources.
Is It Fixed? Kinda. MELPA correctly responds to HTTP only with redirects to HTTPS, and ELPA at least offers HTTPS and uses HTTPS URLs exclusively in the default configuration.

Problem: You have to ship your own trust roots for Emacs.
Is It Fixed? Fixed! The default installation of Emacs on every platform I tried (including Windows) seems to be providing trust roots.

Problem: MELPA offers to install code off of a wiki.
Is It Fixed? Yes. Wiki packages were disabled entirely in 2018.

The big takeaway here is that the main issue, there being no security whatsoever on Emacs and Vim package installation and update, has been fully corrected.
Where To Go Next?
Since I believe that post was fairly influential, in particular in getting MELPA to tighten up its security, let me take another big swing at a call to action here.
More modern editors have made greater strides towards security. VSCode, for example, has enabled the Chromium sandbox and added some level of process separation. Emacs has not done much here yet, but over the years it has consistently surprised me with its ability to catch up to its more modern competitors, so I hope it will surprise me here as well.
Even for VSCode, though, this sandbox still seems pretty permissive — plugins still seem to execute with the full trust of the editor itself — but it's a big step in the right direction. This is a much bigger task than just turning on HTTPS, but I really hope that editors start taking the threat of rogue editor packages seriously before attackers do, and finding ways to sandbox and limit the potential damage from third-party plugins, maybe taking a cue from other tools.
AcknowledgmentsThank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well!
-
the documentation still says “gnutls-verify-error” defaults to nil and that means no certificate verification, and maybe it does do that if you are using raw TLS connections, but in practice, url-retrieve-synchronously does appear to present an interactive warning before proceeding if the certificate is invalid or expired. It still has yet to catch up with web browsers from 2016, in that it just asks you “do you want to do this horribly dangerous thing? y/n” but that is a million times better than proceeding without user interaction. ↩
Seth Michael Larson: Removing maintainers from open source projects
Published 2024-01-23 by Seth Larson
Reading time: minutes
Here's a tough but common situation for open source maintainers:
- You want a project you co-maintain to be more secure by reducing the attack surface.
- There are one or more folks in privileged roles who previously were active contributors, but now aren't active.
- You don't want to take away from or upset the folks who have contributed to the project before you.
These three points feel like they're in contention. This article is here to help resolve this contention and potentially spur some thinking about succession for open source projects.
Why do people do open source?
Most rewards that come from contributing to open source are either intrinsic (helping others, learning new skills, interest in a topic, improving the world) or for recognition (better access to jobs, proof of a skill-set, “fame” from a popular project). Most folks don't get paid to work on open source for their first project, so it's unlikely to be their initial motivation.
Recognition is typically what feels “at stake” when removing a previous maintainer from operational roles on an open source project.
Let's split recognition into two further categories: operational and celebratory. Operational recognition is the category of recognition that has security implications, like access to sensitive information or publishing rights. Celebratory recognition has no security implications; it's there because we want to thank contributors for the work they've done for the project. Here are some examples of the two categories:
Operational:
- Additional access on source control like GitHub (“commit bit”)
- Additional access on package repository like PyPI
- Listing email addresses for security contacts
Celebratory:
- Author and maintainer annotation in package metadata
- Elevating contributors into a triager role
- Maintainer names listed in the README
- Thanking contributors in release notes
- Guest blog posts about the project
You'll notice that the celebratory recognition might be a good candidate for offsetting the removal of incidental operational recognition (like your account being listed on PyPI).
Suggestions for removing maintainers with empathy
Ensure the removal of operational recognition is supplanted by deliberate celebratory recognition. Consider thanking the removed individual publicly in a blog post, release notes, or social media for their contributions and accomplishments. If there isn't already a permanent place to celebrate past maintainers, consider adding a section to the documentation or README.
Don't take action until you've reached out to the individual. Having your access removed without any acknowledgement feels bad and there's no way around that fact. Even if you don't receive a reply, sending a message and waiting some time should be a bare minimum.
Practice regular deliberate celebratory recognition. Thank folks for their contributions, call them out by name in release notes, list active and historical maintainers in the documentation. This fulfills folks that are motivated by recognition and might inspire them to contribute again.
Think more actively about succession. In one of the many potential positive outcomes for an open source project, you will be succeeded by other maintainers and someone else may one day be in the position that you are in today.
How can you prepare that individual to have a better experience than you are right now? I highly recommend Sumana Harihareswara's writing on this topic. There are tips like:
- Actively recruit maintainers by growing and promoting contributors.
- Talk about succession openly while you are still active on the project.
- Give privileges or responsibility to folks that repeatedly contribute positively, starting from triaging or reviewing code.
- Recognize when you are drifting away from a project and make it known to others, even if you intend to contribute in the future.
Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.
This work is licensed under CC BY-SA 4.0
Python Morsels: None in Python
Python's None value is used to represent emptiness. None is the default function return value.
Table of contents
- Python's None value
- None is falsey
- None represents emptiness
- The default function return value is None
- None is like NULL in other programming languages
Python has a special object that's typically used for representing emptiness. It's called None.
If we look at None from the Python REPL, we'll see nothing at all:
>>> name = None
>>>

Though if we print it, we'll see None:
>>> name = None
>>> name
>>> print(name)
None

When checking for None values, you'll usually see Python's is operator used (for identity) instead of the equality operator (==):
>>> name is None
True
>>> name == None
True

Why is that?
Well, None has its own special type, the NoneType, and it's the only object of that type:
>>> type(None)
<class 'NoneType'>

In fact, if we get a reference to that NoneType class, and then we call that class to make a new instance of it, we'll actually get back the same exact instance, always, every time we call it:
>>> NoneType = type(None)
>>> NoneType() is None
True

The NoneType class is a singleton class. So comparing to None with is works because there's only one None value. No object should compare as equal to None unless it is None.
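As the table of contents above notes, None is also the default function return value: a function with no explicit return statement returns None. A quick illustration:

```python
def greet(name):
    print(f"Hello, {name}!")  # no return statement anywhere

result = greet("world")
assert result is None  # functions return None by default
```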
None is falsey
We often rely on the …
Read the full article: https://www.pythonmorsels.com/none/

TechBeamers Python: Is Python Map Faster than Loop?
In this short tutorial, we’ll quickly compare Python map vs loop. We’ll try to assess whether the Python map is faster than the loop or vice-versa. The comparison between using map and a loop (such as a for loop) in Python depends on the specific use case and the nature of the operation you are […]
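A simple way to check such claims yourself is the stdlib timeit module; the sketch below compares map against an equivalent for loop (actual timings vary by Python version and workload, so no winner is asserted here):

```python
import timeit

nums = list(range(1000))

def with_map():
    # map with an already-named function avoids per-element lambda overhead
    return list(map(abs, nums))

def with_loop():
    result = []
    for n in nums:
        result.append(abs(n))
    return result

# Both approaches must produce identical results:
assert with_map() == with_loop()

t_map = timeit.timeit(with_map, number=1000)
t_loop = timeit.timeit(with_loop, number=1000)
print(f"map: {t_map:.3f}s, loop: {t_loop:.3f}s")
```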
The post Is Python Map Faster than Loop? appeared first on TechBeamers.
Glyph Lefkowitz: Okay, I’m A Centrist I Guess
Today I saw a short YouTube video about “cozy games” and started writing a comment, then realized that this was somehow prompting me to write the most succinct summary of my own personal views on politics and economics that I have ever managed. So, here goes.
Apparently all I needed to trim down 50,000 words on my annoyance at how the term “capitalism” is frustratingly both a nexus for useful critique and also reductive thought-terminating clichés was to realize that Animal Crossing: New Horizons is closer to my views on political economy than anything Adam Smith or Karl Marx ever wrote.
Cozy games illustrate that the core mechanics of capitalism are fun and motivating, in a laboratory environment. It’s fun to gather resources, to improve one’s skills, to engage in mutually beneficial exchanges, to collect things, to decorate. It’s tremendously motivating. Even merely pretending to do those things can captivate huge amounts of our time and attention.
In real life, people need to be motivated to do stuff. Not because of some moral deficiency, but because in a large complex civilization it’s hard to tell what needs doing. By the time it’s widely visible to a population-level democratic consensus of non-experts that there is an unmet need — for example, trash piling up on the street everywhere indicating a need for garbage collection — that doesn’t mean “time to pick up some trash”, it means “the sanitation system has collapsed, you’re probably going to get cholera”. We need a system that can identify utility signals more granularly and quickly, towards the edges of the social graph. To allow person A to earn “value credits” of some kind for doing work that others find valuable, then trade those in to person B for labor which they find valuable, even if it is not clearly obvious to anyone else why person A wants that thing. Hence: money.
So, a market can provide an incentive structure that productively steers people towards needs, by aggregating small price signals in a distributed way, via the communication technology of “money”. Authoritarian communist states are famously bad at this, overproducing “necessary” goods in ways that can hold their own with the worst excesses of capitalists, while under-producing “luxury” goods that are politically seen as frivolous.
This is the kernel of truth around which the hardcore capitalist bootstrap grindset ideologues build their fabulist cinematic universe of cruelty. Markets are motivating, they reason, therefore we must worship the market as a god and obey its every whim. Markets can optimize some targets, therefore we must allow markets to optimize every target. Markets efficiently allocate resources, and people need resources to live, therefore anyone unable to secure resources in a market is undeserving of life. Thus we begin at “market economies provide some beneficial efficiencies” and after just a bit of hand-waving over some inconvenient details, we get to “thus, we must make the poor into a blood-sacrifice to Moloch, otherwise nobody will ever work, and we will all die, drowning in our own laziness”. “The cruelty is the point” is a convenient phrase, but among those with this worldview, the prosperity is the point; they just think the cruelty is the only engine that can possibly drive it.
Cozy games are therefore a centrist1 critique of capitalism. They present a world with the prosperity, but without the cruelty. More importantly though, by virtue of the fact that people actually play them in large numbers, they demonstrate that the cruelty is actually unnecessary.
You don’t need to play a cozy game. Tom Nook is not going to evict you from your real-life house if you don’t give him enough bells when it’s time to make rent. In fact, quite the opposite: you have to take time away from your real-life responsibilities and work, in order to make time for such a game. That is how motivating it is to engage with a market system in the abstract, with almost exclusively positive reinforcement.
What cozy games are showing us is that a world with tons of “free stuff” — universal basic income, universal health care, free education, free housing — will not result in a breakdown of our society because “no one wants to work”. People love to work.
If we can turn the market into a cozy game, with low stakes and a generous safety net, more people will engage with it, not fewer. People are not lazy; laziness does not exist. The motivation that people need from a market economy is not a constant looming threat of homelessness, starvation and death for themselves and their children, but a fun opportunity to get a five-star island rating.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well!
1. Okay, I guess “far left” on the current US political compass, but in a just world socdems would be centrists. ↩