Feeds
Jonathan Dowland: The scourge of Electron, the nostalgia of Pidgin
For reasons I won't go into right now, I've spent some of this year working on a refurbished Lenovo Thinkpad Yoga 260. Despite it being relatively underpowered, I love almost everything about it.
Unfortunately the model I bought has 8G RAM, which turned out to be more limiting than I thought it would be. You can do incredible things with 8G of RAM: incredible, wondrous things. And most of my work, whether that's wrangling containers, hacking on OpenJDK, or complex Haskell projects, is manageable.
Where it falls down is driving the modern scourge: Electron, and by proxy, lots of modern IM tools: Slack (urgh), Discord (where one of my main IRC social communities moved to), WhatsApp Web1 and even Signal Desktop.
For that reason, I've (temporarily) looked at alternatives, and I was pleasantly surprised to find serviceable plugins for Pidgin, the stalwart Instant Messenger multiplexer. I originally used Pidgin (then called Gaim) back in the last century, at the time to talk to ICQ, MSN Messenger and AIM (all but ICQ2 long dead). It truly is an elegant weapon from a more civilized age.
Discord from within Pidgin
The plugins are3:
- purple-discord (Debian package purple-discord)
- libpurple-signald (which requires signald)
- whatsapp-purple
Pidgin with all of these plugins loaded runs perfectly well and consumes fractions of the RAM that each of those Electron apps did prior.
A side-effect of moving these into Pidgin (in particular Discord) is a refocussing of the content. Fewer distractions around the text. The lack of auto-link embedding, and other such things, make it a cleaner, purer experience.
This made me think of the Discord community I am in (I'm really only active in one). It used to be an IRC channel of people that I met through a mutual friend. Said friend recently departed Discord, due to the signal to noise ratio being too poor, and the incessant nudge to click on links, engage, engage, engage.
I wonder if the experience — mediated by Pidgin — would be more tolerable to them?
What my hexchat looks like
I'm still active in one IRC channel (and inactive in many more). I could consider moving IRC into Pidgin as well. At the moment, my IRC client of choice is hexchat, which (like Pidgin) is still using GTK2 for the UI. There's something perversely pleasant about that.
- if you go to the trouble of trying to run it as an application distinct from your web browser.↩
- I'm still somewhat surprised ICQ is still going. I might try and recover my old ID.↩
- There may or may not be similar plugins for Slack, but as I (am forced to) use that for corporate stuff, I'm steering clear of them.↩
Real Python: The Real Python Podcast – Episode #183: Exploring Code Reviews in Python and Automating the Process
What goes into a code review in Python? Is there a difference in how a large organization practices code review compared to a smaller one? What do you do if you're a solo developer? This week on the show, Brendan Maginnis and Nick Thapen from Sourcery return to talk about code review and automated code assistance.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Web Review, Week 2023-49
Let’s go for my web review for the week 2023-49.
Mobilizon V4: the maturity stage – Framablog
Tags: tech, framasoft, privacy, social-media, foss
Another important software release. Let’s wish luck to the new maintainers!
https://framablog.org/2023/12/05/mobilisation-v4-the-maturity-stage/
Tags: tech, wikipedia, knowledge, programming
Interesting move, I’m wondering how far this will go. Reuse of those functions in other Wikimedia projects will be critical to its success.
Tags: tech, wikipedia, knowledge, community, sociology
Very interesting study; it shows how toxic comments impact contributions and gives a good idea of the probability of people leaving. In the case of Wikipedia this highlights reasons which contribute to the lack of diversity among the contributors. This is a complex community issue in general; this study does a good thing by shedding some light on the dynamics in the case of Wikipedia.
https://academic.oup.com/pnasnexus/article/2/12/pgad385/7457939?login=false
Tags: tech, python, community
The fact that they felt the need to write such a letter is troubling. What’s going on in the Python Software Foundation really? Something needs to be fixed it seems.
https://pythonafrica.blogspot.com/2023/12/an-open-letter-to-python-software_5.html
Tags: tech, web, mozilla, google
Let’s hope it won’t get there… I wish people would abandon Chrome en masse. I unfortunately don’t see it happening and it’ll just weaken the Web.
https://www.brycewray.com/posts/2023/11/firefox-brink/
Tags: tech, google, browser, attention-economy
Still using Chrome? What are you waiting for to switch to another browser which doesn’t play against your interests?
Tags: tech, automotive, privacy
A senator is stepping up and rightfully pointing the finger at automakers. Let’s hope more will follow.
Tags: tech, DRM, copyright
If you can’t download it without DRM, you just don’t own it; you’re renting it. This is completely different.
https://www.theverge.com/2023/12/5/23989290/playstation-digital-ownership-sucks
Tags: tech, security, bios
Fascinating vulnerability. When the BIOS is at fault with a crappy image parser… you can’t do much to prevent problems from happening.
Tags: tech, ai, gpt, surveillance, spy
Definitely one of the worrying aspects of reducing human labor needs for analyzing texts. Surveillance is on the brink of being increased thanks to it.
https://www.schneier.com/blog/archives/2023/12/ai-and-mass-spying.html
Tags: tech, ai, machine-learning, gpt, copyright
A glimpse into how those generator models can present a real copyright problem… there should be more transparency on the training data sets.
https://www.404media.co/google-researchers-attack-convinces-chatgpt-to-reveal-its-training-data/
Tags: tech, ai, machine-learning, gpt, copyright, criticism
This is clearly an uphill battle. And yes, this is because it’s broken by design, it should be opt-in and not opt-out.
https://neil-clarke.com/block-the-bots-that-feed-ai-models-by-scraping-your-website/
Tags: tech, ai, machine-learning, gpt
The Large Language Model arms race is still going strong. Models are still mostly hidden behind APIs of course, and this is likely consuming lots of energy to run. Results seem interesting though, even though I suspect they’re overinflating the “safety” built into all this. Also be careful with the demo videos, they’ve been reported as heavily edited and misleading…
https://blog.google/technology/ai/google-gemini-ai/#availability
Tags: tech, linguistics, ai, machine-learning, gpt
LLM training had a bias from the start… and now we have a feedback loop, since people are posting generated content online which is then used for training again. Expect idiosyncrasies to increase with time.
https://blog.j11y.io/2023-11-22_multifaceted/
Tags: tech, ai, machine-learning, gpt, translation, speech
Now this is a very interesting use of generator models. I find this more exciting than the glorified chatbots.
https://ai.meta.com/research/seamless-communication/#our-approach
Tags: tech, ai, machine-learning, gpt, foss
Very interesting review; it highlights some notable strengths and weaknesses. Also gives a good idea of the different ways to evaluate such models.
https://arxiv.org/abs/2311.16989
Tags: tech, security, server, self-hosting
Could indeed turn into a nice alternative to fail2ban.
https://blog.ppom.me/en-reaction/
Tags: tech, web, services, webhooks
Interesting attempt at making webhook implementations a bit more standardized. This is indeed needed: currently everyone does them in slightly different ways and sometimes the quality is debatable. If it gets adopted it’d give a good baseline.
https://www.standardwebhooks.com/
Tags: tech, date, time, unix, complexity
Good reminder that raw UNIX timestamps have interesting properties and are often enough, without needing to rely on something more complex. Also gives nice criteria for picking between said timestamps and higher-level types.
https://scorpil.com/post/you-dont-need-datetime/
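A quick sketch of the point in Python (my own illustration; the linked article may use different examples):

```python
from datetime import datetime, timezone

# A UNIX timestamp is just seconds since the epoch: timezone-free,
# totally ordered, and arithmetic on it is plain subtraction.
start = 1701907200            # 2023-12-07 00:00:00 UTC
end = start + 90 * 60         # 90 minutes later
print(end - start)            # 5400 seconds, no calendar math needed

# Convert to a timezone-aware datetime only at display time.
print(datetime.fromtimestamp(start, tz=timezone.utc).isoformat())
# 2023-12-07T00:00:00+00:00
```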
Tags: tech, web, html, css, frontend
Nice examples showing how JavaScript use in the browser can be reduced. HTML and CSS are gaining nice features.
https://www.htmhell.dev/adventcalendar/2023/2/
Tags: tech, tests, documentation
Interesting approach to reduce the amount of undocumented features because we simply forgot to update the documentation. Shows a few inspiring tricks to get there.
https://simonwillison.net/2018/Jul/28/documentation-unit-tests/
Tags: tech, c++
Good walkthrough of the finer points of member initialization in C++. Worth keeping in mind.
https://www.sandordargo.com/blog/2023/11/22/struct-initialization
Tags: tech, programming, type-systems
Somewhat unsurprising; this is often the case: less validation code, but also fewer automated tests. With types you can write unwanted states out of existence.
https://evanhahn.com/when-static-types-make-your-code-shorter/
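A toy Python illustration of that idea (mine, not from the linked post): once the status is an Enum, the invalid-state check simply has nowhere to live.

```python
from enum import Enum

# Stringly-typed version: every consumer has to re-validate the input.
def is_shippable_str(status: str) -> bool:
    if status not in ("pending", "paid", "shipped"):
        raise ValueError(f"bad status: {status}")
    return status == "paid"

# Typed version: invalid statuses cannot be constructed at all,
# so the validation branch (and the tests for it) disappear.
class Status(Enum):
    PENDING = "pending"
    PAID = "paid"
    SHIPPED = "shipped"

def is_shippable(status: Status) -> bool:
    return status is Status.PAID

print(is_shippable(Status.PAID))   # True
```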
Tags: tech, rust, optimization, profiling
The Rust tooling makes it super easy to profile your programs. This is neat.
https://ntietz.com/blog/profiling-rust-programs-the-easy-way/
Tags: tech, graphics, 3d, learning
Very nice collection of tidbits of information for the main topics in computer graphics. A good way to get started or refresh the basics.
https://mrl.cs.nyu.edu/~perlin/graphics/
Tags: tech, web, game, 3d, simulation
Very cool simplified simulator. Gives a good idea of how this roughly works.
https://dalton-nrs.manchester.ac.uk/
Tags: tech, hacking, raspberry-pi, music, networking, usb
Alright, this is definitely a cool hack.
https://blog.rgsilva.com/smartifying-my-hi-fi-system/
Tags: tech, quality, project-management, programming
An interesting interpretation of what was behind the “move fast and break things” motto which is nowadays blindly followed by some. A bit of a long piece, but it does the job and proposes a more complete view and an alternative approach.
https://blog.glyph.im/2023/12/safer-not-later.html
Tags: tech, tdd, design
Definitely this. Again TDD helps to work on the design, but it’s not a silver bullet which will give you the right design on a platter.
https://tidyfirst.substack.com/p/tdd-isnt-design
Tags: tech, culture, blameless, quality, trust
Shows why it’s important to go for a blameless culture, also outside of postmortems. This is definitely not easy but worth it.
https://www.gybe.ca/a-few-words-about-blameless-culture/
Tags: tech, organization, strategy, planning
Good blueprint for building up a strategy in your organization and following up on it (the most important part, really).
https://www.lenareinhard.com/articles/annual-engineering-organization-strategy-planning
Tags: tech, remote-working
Looks like remote work is here to stay for good now.
https://www.cnbc.com/2023/11/30/return-to-office-is-dead-stanford-economist-says-heres-why.html
Tags: business, organization, decision-making
Good point on why you don’t want to drive your organization simply on RFCs. Yet another fad of “this worked for them, let’s do it as well”… per usual this fails to take into account the specificity of the context where it worked.
https://jacobian.org/2023/dec/1/against-rfcs/
Tags: tech, management, decision-making
Nice ideas for decision making in larger groups.
https://jacobian.org/2023/dec/5/how-to-decide/
Bye for now!
LostCarPark Drupal Blog: Drupal Advent Calendar day 8 - Disclosure Menu
It’s time to open door number 8 of the Drupal Advent Calendar, and today we’re joined by Chris Wells (chrisfromredfin) to tell us about the Disclosure Menu module.
The importance of a seamless and inclusive website navigation cannot be overstated. Creating digital environments where everyone feels welcomed and capable is a central ethos in the Drupal community. And that’s why we were surprised that after spending so much time with menus over the years, there still wasn’t a truly accessible menu module available for Drupal.
The disclosure menu in action (with minimum theming)
Our narrative begins with a standard website component: a hoverable menu…
Drupal Core News: Request for comment: Project Update Working Group
The Drupal core committers and Drupal 10 readiness initiative are seeking feedback on a proposed new working group. The group's mission is to focus on contributed modules where a maintainer has not updated to the next major Drupal version. This includes modules where the maintainer has requested assistance as well as modules where the maintainer is no longer active. This effort will benefit the entire Drupal ecosystem.
This group will have elevated privileges on Drupal.org like those that exist for the Security Team and Site Moderators.
Background
Currently the Project Update Bot generates automated compatibility patches for contributed projects. These patches are reviewed and tested by Drupal community members and then set to the "Reviewed & tested by the community" status.
However, for some modules, these patches are not committed in a timely fashion. This creates a barrier to updating to the next Drupal major version for sites that use this module.
There are existing workarounds. One is the Composer Lenient plugin which allows affected sites to install a patched version of the module. However, this is not a substitute for having a compatible version of the module.
Proposal
Establish a working group that has the ability to appoint its members as a temporary maintainer of a project. The only task of the temporary maintainer is to review, test and commit a patch or merge request that makes the module compatible with the new Drupal major version and optionally create a new release. The group will be able to take this action in the following circumstances:
1. The project MUST have a canonical issue for updating to the next major version of Drupal. This issue MUST have a patch or merge request. The issue MUST be marked "Reviewed & tested by the community" and MUST NOT have had feedback from a module maintainer within the past two weeks. The following proposal refers to this as the contributed project issue.
2. An attempt MUST have been made by members of the community to contact the module maintainers via their Drupal.org contact form. Record of this attempt MUST be recorded on the contributed project issue.
3. An attempt SHOULD be made by members of the community to contact the module maintainers via a messaging platform such as the Drupal community Slack. Record of this attempt MUST be recorded on the contributed project issue.
4. If there is no response from the module maintainer for seven (7) days, a member of the community MAY escalate the module to the Project Update Working Group. To escalate a module, create a separate issue in the Project Update Working Group issue queue. This is termed the project update group issue. An attempt SHOULD be made to notify members of the Project Update Working Group via a messaging platform such as the Drupal community Slack.
5. The Project Update Working Group MUST make a subsequent attempt to contact the module maintainers via their Drupal.org contact form. This communication MUST outline that failure to respond within seven (7) days may result in the Project Update Working Group committing the contributed project issue on their behalf. Record of this contact MUST be recorded on the contributed project issue. Any communication between the Project Update Working Group and the module maintainers MUST be recorded on the project update group issue.
6. When the seven-day period from item 5 has elapsed, the maintainer has had two weeks overall to respond. At this point, a member of the Project Update Working Group MUST decide on the next step. The next step is to either intervene or not. If the decision is to intervene, then the group must also decide whether a tagged release is to be made as well as committing the change. When making the decision the Project Update Working Group member MUST do the following:
   1. Take into consideration recent activity from the maintainer in the project.
   2. Take into consideration the age of the contributed project issue.
   3. Take into account the complexity of the patch/merge request. They must work to avoid regressions. The level of automated test coverage for the project SHOULD be used to inform the likelihood of a regression.
   4. Take into account the quality of the reviews.
   5. Take into account the possible lifespan of the module and the needs of the community. For example, if the module duplicates functionality added to core or another module, then they may decide not to intervene.
   6. Consider if the module is looking for new maintainers and if anyone has nominated themself for the role. The Project Update Working Group SHOULD favor supporting a new maintainer over intervention.
   7. The Project Update Working Group SHOULD aim to achieve compatibility with the major version in a backwards-compatible way.
7. If a member of the Project Update Working Group decides to intervene and commit the patch, then the following occurs:
   1. A record of the decision MUST be recorded on the contributed project issue.
   2. The member of the Project Update Working Group MUST nominate to make the commit and/or release. Record of this nomination MUST occur on the contributed project issue.
   3. The member of the Project Update Working Group MUST make a temporary change to the project's maintainers to add themself as a maintainer. Record of this change MUST be made on the contributed project issue.
   4. The member of the Project Update Working Group with temporary maintainer access will then commit the patch or merge request. This MUST be recorded on the contributed project issue.
   5. The member of the Project Update Working Group MUST acknowledge on the contributed project issue that the commit was made.
   6. If it was decided that a release should be made, a member of the Project Update Working Group will create a tag and add a release node for the tag on Drupal.org. The member making this action MUST make a record of this on the contributed project issue. The release MUST follow semantic versioning rules for backwards compatibility. The member SHOULD strive to make a new minor version to allow sites to install a compatible version without updating the major version of Drupal.
   7. If the module maintainer has not requested assistance from the Project Update Working Group, a member of the Project Update Working Group MUST update the project node on Drupal.org to change it to 'No further development'. If the module has opted in to Security team coverage, the member of the Project Update Working Group MAY opt the module out of this coverage.
   8. Any member of the Project Update Working Group MUST then mark the original contributed project issue as fixed. This action SHOULD NOT prevent opening of new issues for the project for major version compatibility.
   9. A member of the Project Update Working Group MUST revoke the temporary maintainer rights within fourteen (14) days. Record of this change MUST be recorded on the contributed project issue.
   10. If the module was marked 'No further development' and if no such issue exists for the contributed project, a member of the Project Update Working Group MUST open a new issue in the project's queue seeking a new maintainer.
   11. If additional compatibility issues are found between the module and the next major version of Drupal, the process above repeats.
The working group will comprise community members who self-nominate. Interested community members must receive two seconding recommendations from other community members. Nomination and seconding will occur publicly on Drupal.org in the Project Update Working Group issue queue. Community members will be able to share their thoughts or concerns on the nominees' applications. Concerns relating to conduct of members of the group MUST follow Drupal's standard Community Working Group processes.
The initial membership of the group will comprise at least five (5) individuals. Members of the group should have a record of maintaining core or contributed projects and have the git-vetted role on Drupal.org. In addition the group may contain provisional members. These members will not have the ability to change project maintainers and will require the support of a full member to carry out their duties.
The initial make up of the group will be vetted by the core committer team and security team. Subsequent appointments will be vetted by the Project Update Working Group with a fourteen day period for veto from the security team and/or core committers.
Membership of the group is for a single major update. For example, from Drupal 10 to Drupal 11. The first major update in which the group is active will be from Drupal 10 to 11. At the end of each major cycle, members can opt to renew their membership for the next major update cycle. As with the original nomination, this process will happen in public and require two seconding recommendations from the community.
Additional lifecycle option
To complement this process, it is proposed that a new Abandoned lifecycle status is added for project info files.
If this is successful, the following changes will be made:
1. The process at (6) above will be amended such that the module's info file is updated to set the lifecycle value to 'abandoned'.
2. A lifecycle link is added that points to the issue in the project's queue where a new maintainer is sought.
Community feedback is sought on the proposed process. Please use this issue to add your input. The feedback period will last until Friday January 12th 2024.
Glyph Lefkowitz: Annotated At Runtime
PEP 0593 added the ability to add arbitrary user-defined metadata to type annotations in Python.
At type-check time, such annotations are… inert. They don’t do anything. Annotated[int, X] just means int to the type-checker, regardless of the value of X. So the entire purpose of Annotated is to provide a run-time API to consume metadata, which integrates with the type checker syntactically, but does not otherwise disturb it.
Yet, the documentation for this central purpose seems, while not exactly absent, oddly incomplete.
The PEP itself simply says:
A tool or library encountering an Annotated type can scan through the annotations to determine if they are of interest (e.g., using isinstance()).
But it’s not clear where “the annotations” are, given that the PEP’s entire “consuming annotations” section does not even mention the __metadata__ attribute where the annotation’s arguments go, which was only even added to CPython’s documentation. Its list of examples just show the repr() of the relevant type.
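For concreteness, here is a minimal probe of that attribute (my snippet, not from the PEP):

```python
from typing import Annotated, get_args

ann = Annotated[int, "hello"]

# The metadata arguments live on the (sparsely documented)
# __metadata__ attribute, as a tuple.
print(ann.__metadata__)   # ('hello',)

# get_args() returns the underlying type followed by the metadata.
print(get_args(ann))      # (int, 'hello')
```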
There’s also a bit of an open question of what, exactly, we are supposed to isinstance()-ing here. If we want to find arguments to Annotated, presumably we need to be able to detect if an annotation is an Annotated. But isinstance(Annotated[int, "hello"], Annotated) is both False at runtime, and also a type-checking error, that looks like this:
```
Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo"
```
The actual type of these objects, typing._AnnotatedAlias, does not seem to have a publicly available or documented alias, so that seems like the wrong route too.
Now, it certainly works to escape-hatch your way out of all of this with an Any, build some version-specific special-case hacks to dig around in the relevant namespaces, access __metadata__ and call it a day. But this solution is … unsatisfying.
What are you looking for?
Upon encountering these quirks, it is understandable to want to simply ask the question “is this annotation that I’m looking at an Annotated?” and to be frustrated that it seems so obscure to straightforwardly get an answer to that question without disabling all type-checking in your meta-programming code.
However, I think that this is a slight misframing of the problem. Code that is inspecting parameters for an annotation is going to do something with that annotation, which means that it must necessarily be looking for a specific set of annotations. Therefore the thing we want to pass to isinstance is not some obscure part of the annotations’ internals, but the actual interesting annotation type from your framework or application.
When consuming an Annotated parameter, there are 3 things you probably want to know:
- What was the parameter itself? (type: The type you passed in.)
- What was the name of the annotated object (i.e.: the parameter name, the attribute name) being passed the parameter? (type: str)
- What was the actual type being annotated? (type: type)
And the things that we have are the type of the Annotated we’re querying for, and the object with annotations we are interrogating. So that gives us this function signature:
```python
def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    ...
```

To extract this information, all we need are get_args and get_type_hints; no need for __metadata__ or get_origin or any other metaprogramming. Here’s a recipe:
```python
def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    for k, v in get_type_hints(annotated, include_extras=True).items():
        all_args = get_args(v)
        if not all_args:
            continue
        actual, *rest = all_args
        for arg in rest:
            if isinstance(arg, kind):
                yield k, arg, actual
```

It might seem a little odd to be blindly assuming that get_args(...)[0] will always be the relevant type, when that is not true of unions or generics. Note, however, that we are only yielding results when we have found the instance type in the argument list; our arbitrary user-defined instance isn’t valid as a type annotation argument in any other context. It can’t be part of a Union or a Generic, so we can rely on it to be an Annotated argument, and from there, we can make that assumption about the format of get_args(...).
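To see concretely why that assumption is safe, compare get_args() on an Annotated versus a union (a quick check of my own, not from the original post):

```python
from typing import Annotated, Optional, get_args

# For Annotated, get_args() yields the underlying type first,
# followed by the metadata arguments.
print(get_args(Annotated[int, "meta"]))   # (int, 'meta')

# For a union, the arguments are all types; an arbitrary instance
# like "meta" is not a valid annotation argument there, so finding
# one among the args implies we are looking at an Annotated.
print(get_args(Optional[int]))            # (int, <class 'NoneType'>)
```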
This can give us back the annotations that we’re looking for in a handy format that’s easy to consume. Here’s a quick example of how you might use it:
```python
@dataclass
class AnAnnotation:
    name: str

def a_function(
    a: str,
    b: Annotated[int, AnAnnotation("b")],
    c: Annotated[float, AnAnnotation("c")],
) -> None:
    ...

print(list(annotated_by(a_function, AnAnnotation)))
# [('b', AnAnnotation(name='b'), <class 'int'>),
#  ('c', AnAnnotation(name='c'), <class 'float'>)]
```

Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how do I do Python metaprogramming, but, like, not super janky”.
James Bennett: Use "pip install" safely
This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.
Managing dependencies should be boring
Last year I wrote a post about “boring” dependency management in Python, where I advocated a setup based entirely around standard Python packaging tools, in that …
Reproducible Builds (diffoscope): diffoscope 253 released
The diffoscope maintainers are pleased to announce the release of diffoscope version 253. This version includes the following changes:
* Improve DOS/MBR extraction by adding support for 7z. (Closes: reproducible-builds/diffoscope#333)
* Process objdump symbol comment filter inputs as the Python "bytes" type (and not str). (Closes: reproducible-builds/diffoscope#358)
* Add a missing RequiredToolNotFound import.
* Update copyright years.

You can find out more by visiting the project homepage.
Python Engineering at Microsoft: Python in Visual Studio Code – December 2023 Release
We’re excited to announce the December 2023 release of the Python and Jupyter extensions for Visual Studio Code!
This release includes the following announcements:
- Configurable debugging options added to Run button menu
- Show Type Hierarchy with Pylance
- Deactivate command support for automatically activated virtual environments in the terminal
- Setting to turn REPL Smart Send on/off and a message when it is unsupported
If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.
Configurable debugging options added to Run button menu
The Python Debugger extension now has configurable debug options under the Run button menu. When you select Python Debugger: Debug using launch.json and there is an existing launch.json in your workspace, it shows all available debug configurations you can pick to start the debugger. In the case you do not have an existing launch.json, you will be prompted to select a debug configuration template to create a launch.json file for your Python application, and then can run your application using this configuration.
Show Type Hierarchy with Pylance
You can now more conveniently explore and navigate through your Python projects’ types relationships when using Pylance. This can be helpful when working with large codebases with complex type relationships.
When you right-click on a symbol, you can select Show Type Hierarchy to open the type hierarchy view. From there you can navigate through the symbol’s subtypes as well as super-types.
Deactivate command support for automatically activated virtual environments in the terminal
The Python extension has a new activation mechanism that activates the selected environment in your default terminal without running any explicit activation commands. This is currently behind an experimental flag and can be enabled through the following User setting: "python.experiments.optInto": ["pythonTerminalEnvVarActivation"] as mentioned in our August 2023 release notes.
However, one problem with this activation mechanism is that it didn’t support the deactivate command because there is no inherent activation script. We received feedback that this is an important part of some users’ workflow, so we have added support for deactivate when the selected default terminal is PowerShell or Command Prompt. We plan to add support for additional terminals in the future.
Setting to turn REPL Smart Send on/off and a message when it is unsupported
When attempting to use Smart Send via Shift+Enter on a Python file that contains unsupported Python code (e.g., Python 2 source code), there is now a warning message and a setting to deactivate REPL Smart Send. Users are also able to change their user and workspace specific behavior for REPL Smart Send via the python.REPL.enableREPLSmartSend setting.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:
- The Pylance extension has adjusted its release cadence to monthly stable releases and nightly pre-release builds, similar to the Python extension release cadence. These changes will allow for more extensive testing on stable builds and a more reliable user experience.
- String inputs for numerical values are now supported in attach debug configurations with the Python Debugger extension (@vscode-python-debugger#115).
- The Python test adapter rewrite experiment has been rolled out to 100% of users. For the time being, you can opt-out by adding "python.experiments.optOutFrom" : "pythonTestAdapter" in your settings.json, but we will soon drop this experimental flag and adopt this new architecture.
We would also like to extend special thanks to this month’s contributors:
- @zyxue Make version extraction more robust in vscode-black-formatter#306
- @loskutov Fixed mis-escaped string literal in vscode-black-formatter#312
- @bhagya-98 Improve setting description for default interpreter setting in vscode-black-formatter#324
- @Kelly-LC Remove Python 3.7 from test actions in vscode-black-formatter#322
- @aarushinair Update dependency packages in vscode-black-formatter#326
- @mvasilkov Fixed typo in README.md in vscode-black-formatter#358
- @kirankumarmanku Added workaround and early returning if file name matched excluded arg in vscode-autopep8#142
- @norasoliman Remove Python 3.7 from test actions in vscode-autopep8#165
- @wuyumin Fix config file setting in README.md in vscode-autopep8#156
- @vinitaparasrampuria Capitalize Python in README.md and settings description in vscode-autopep8#167
- @JamzumSum Improved shell identifier on case-insensitive system in vscode-python#22391
- @trysten Add consoleTitle to launch.json properties schema in vscode-python#22406
- @shanesaravia Resolve test suite discovery import errors due to path ordering in vscode-python#22454
- @johnhany97 Pass along Python interpreter to jedi-language-server in vscode-python#22466
Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – December 2023 Release appeared first on Python.
Dima Kogan: roslaunch and LD_PRELOAD
This is part 2 of our series entitled "ROS people don't know how to use computers". This is about ROS1. ROS2 is presumably broken in some completely different way, but I don't know.
Unlike normal people, the ROS people don't "run" applications. They "launch" "nodes" from "packages" (these are "ROS" packages; obviously). You run
roslaunch PACKAGE THING.launch

Then it tries to find this PACKAGE (using some rules that nobody understands), and tries to find the file THING.launch within this package. The .launch file contains inscrutable xml, which includes other inscrutable xml. And if you dig, you eventually find stuff like
<node pkg="PACKAGE" name="NAME" type="TYPE" args="...." ...>

This defines the thing that runs. Unexpectedly, the executable that ends up running is called TYPE.
I know that my particular program is broken, and needs an LD_PRELOAD (exciting details described in another rant in the near future). But the above definition doesn't have a clear way to add that. Adding it to the type fails (with a very mysterious error message). Reading the docs tells you about launch-prefix, which sounds exactly like what I want. But when I add LD_PRELOAD=/tmp/whatever.so I get
RLException: Roslaunch got a 'No such file or directory' error while attempting to run: LD_PRELOAD=/tmp/whatever.so ..../TYPE .....

But this is how you're supposed to be attaching gdb and such! Presumably it looks at the first token and makes sure it's a file, instead of simply prepending the string to what it passes to the shell. So your options are:
- Do only approved ROS things in the docs (which are limited, since the docs were written by people who don't know how to use computers)
- Be expert-enough to work around it
I'm expert-enough. You do this:
launch-prefix="/lib64/ld-linux-x86-64.so.2 --preload /tmp/whatever.so"

CodersLegacy: Pytest Tutorial: Mastering Unit Testing in Python
Welcome to an ALL-IN-ONE tutorial designed to meet all your testing requirements. Whether you're just starting with the fundamentals to build a solid conceptual foundation or aiming to craft professional-grade test cases for entire projects, this guide has you covered. The focus of this tutorial is the popular "Pytest" library.
Table Of Contents
- Understanding Unit Testing
- Why Pytest?
- Getting Started with pytest
- Pytest Tutorial: Writing your First Test
- Pytest Tutorial: Parameterized Testing
- Pytest Tutorial: Command-Line Options
- How to use Pytest effectively in Larger Projects
- Conclusion
Understanding Unit Testing
In the world of programming, a unit is the smallest part of your code, like a single function or method. Unit testing involves examining these individual parts to ensure they're doing what they're supposed to. It's like putting each piece of the puzzle under a microscope to make sure it fits perfectly and does its job without causing trouble for the whole picture.
Imagine you're constructing a complex building. The traditional approach might be to wait for the entire building to be completed, then test its integrity. Unit testing, on the other hand, has you test the integrity of each floor as you build it.
Here is another scenario:
You have a function that's supposed to add two numbers together. Unit testing for this function involves giving it different pairs of numbers and checking that it consistently produces the correct sum. It's like asking, "Hey, can you add 2 and 3? What about 0 and 0? Or even -1 and 1?" Each time, the unit test checks whether the function gives the right answer. These pairs must be chosen carefully so the function is exercised under a variety of circumstances: one pair might use negative numbers, another positive numbers, and a third might target an "error" case where a string and a number are used as inputs (with the expectation that the call raises an error).
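That last "error" case can be written directly in pytest: instead of asserting a sum, the test asserts that the call raises. A minimal sketch, using the simple two-number add function described above:

```python
import pytest

def add(x, y):
    return x + y

def test_add_rejects_mixed_types():
    # "2" + 3 raises TypeError, so this test passes when the error occurs.
    with pytest.raises(TypeError):
        add("2", 3)
```

pytest.raises turns an expected failure into a passing test: if the block raises TypeError the test succeeds, and if it does not raise, the test fails.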
Why bother with this meticulous process? Because it helps catch bugs early on, before they turn into big, tangled problems.
Unit testing ensures that each building block of your code functions as expected, creating a solid foundation for your software structure. It’s a practice that developers swear by because it not only saves time but also makes your code more reliable.
Why Pytest?
Let's explore some of the key features that make pytest a popular choice for testing in Python.
One of pytest‘s strengths is its ability to automatically discover and run tests in your project. By default, pytest identifies files with names starting with “test_” or ending with “_test.py” and considers them as test modules. It then discovers test functions or methods within these modules.
pytest uses a simplified and expressive syntax for writing tests. Test functions don’t need to be part of a class, and assertions can be made using the assert statement directly. This leads to more readable and concise test code.
Pytest supports parameterized testing, enabling developers to run the same test with multiple sets of inputs. This feature is incredibly beneficial for testing a variety of scenarios without duplicating test code.
And many more such benefits (that we can’t explain without getting too technical).
Getting Started with pytest
To get started with pytest, you need to install it. You can do this using pip, the Python package installer, with the following command:
pip install pytest

Once installed, you can run tests using the pytest command.
Pytest Tutorial: Writing your First Test
Let's start by creating a basic test using pytest. Create a file named test_example.py with the following content:
# test_example.py

def add(x, y):
    return x + y

def test_add():
    assert add(2, 3) == 5
    assert add(0, 0) == 0
    assert add(-1, 1) == 0

In this example, we define a simple add function and a corresponding test function using pytest's assert statement. The test checks whether the add function produces the expected results for different input values.
To execute the tests, run the following command in your terminal:
pytest test_example.py

pytest will discover and run all test functions in the specified file, by looking for functions whose names start with "test". If the tests pass, you'll see output indicating success. Otherwise, pytest will provide detailed information about the failures.
Pytest Tutorial: Parameterized Testing
pytest supports parameterized testing, enabling you to run the same test with multiple sets of inputs. This is achieved using the @pytest.mark.parametrize decorator.
import pytest

def add(x, y):
    return x + y

@pytest.mark.parametrize("input_a, input_b, expected", [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(input_a, input_b, expected):
    result = add(input_a, input_b)
    assert result == expected

In this example, the test_add function is executed three times with different input values, reducing code duplication and making it easier to cover various scenarios.
Pytest Tutorial: Command-Line Options
pytest provides a plethora of command-line options to customize test runs. For example, you can specify the directory or files to test, run specific tests or test classes, and control the verbosity of the output.
Here are key command-line options:
Use the -k option to specify a substring match for test names. For instance:
pytest -k test_module

This command runs all tests containing "test_module" in their names.
Specify a specific directory or file to test:
pytest tests/test_module.py

This executes tests from a specific file or directory, allowing targeted testing.
Adjust the verbosity level with the -v option to get more detailed output:
pytest -v

This displays test names and results, which is useful for understanding test execution flow.
Increase verbosity for even more detailed information:
pytest -vv

This provides additional information about skipped tests and setup/teardown stages.
Utilize custom markers to categorize and selectively run tests. For example:
pytest -m slow

This command runs tests marked with @pytest.mark.slow, allowing you to separate out and focus on tests categorized as slow-running.
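Markers themselves are just decorators on test functions. A minimal sketch (the test body is illustrative, and the pytest.ini snippet in the comment is the usual way to register a custom marker):

```python
import pytest

# A hypothetical long-running test tagged with the custom "slow" marker.
# To silence PytestUnknownMarkWarning, register the marker in pytest.ini:
#   [pytest]
#   markers = slow: marks tests as slow
@pytest.mark.slow
def test_big_computation():
    # Stand-in for an expensive computation.
    assert sum(range(1_000_000)) == 499_999_500_000
```

Running pytest -m slow then selects only tests carrying this marker, while pytest -m "not slow" skips them.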
Select tests based on their outcome, such as only running failed tests:
pytest --lf

This runs only the tests that failed in the last test run.
Speed up test runs by leveraging parallel execution:
pytest -n auto

This command runs tests in parallel, utilizing all available CPU cores (it requires the pytest-xdist plugin).
Generate detailed reports in various formats, such as HTML or XML:
pytest --html=report.html

This command produces an HTML report for a more visual representation of test results, aiding result analysis and sharing with stakeholders (it requires the pytest-html plugin).
These command-line options empower developers to fine-tune their testing processes, making Pytest a flexible and customizable tool for projects of any scale. Whether you need to run specific tests, control output verbosity, or generate comprehensive reports, Pytest’s command-line options provide the versatility needed for efficient and effective testing.
How to use Pytest effectively in Larger Projects
As the size of your project grows, with the number of files and lines of code increasing significantly, the need to organize your code becomes even more important. While it may seem tempting to write all your "test" functions in the same file as your regular code, this is not an ideal solution.
Instead, it is recommended to create a separate file where all of your tests are written. Depending on the size of the project, you can even have multiple test files (e.g. one test file for each class).
Opting for this approach introduces potential complications. When test cases are written in a separate file, a common concern arises: How do we invoke the functions intended for testing?
This requires careful structuring of your project and code to ensure that individual functions and classes of your project can be imported by the pytest files.
Here is a good project structure to follow, where each of the files in src folder represent an independent module (e.g. a single class), and each of the files in the tests folder corresponds to a file in the src folder.
project_root/
|-- src/
|   |-- __init__.py
|   |-- users.py
|   |-- services.py
|
|-- tests/
|   |-- __init__.py
|   |-- test_users.py
|   |-- test_services.py
|
|-- main.py

The __init__.py file is an important addition to the src folder, where all of our project files are stored (excluding the main driver code). When this file is present in a folder, Python recognizes that folder as a package, which enables other files to import from it. You do not have to put anything in this file; leave it empty (though it can be customized with special statements).
It is also necessary to put an __init__.py file in the tests folder, in order for imports between it and the src folder to succeed.
Example scenario: Importing the users.py file from test_users.py.
from src.users import *

Conclusion
This marks the end of the Pytest tutorial.
By incorporating unit testing into your development workflow, you can catch and fix bugs early, improve code maintainability, and ensure that your software functions as intended. With pytest, the journey of mastering unit testing in Python becomes not only effective but also enjoyable. So, go ahead, write those tests, and build robust, reliable Python applications with confidence!
The post Pytest Tutorial: Mastering Unit Testing in Python appeared first on CodersLegacy.
Python Insider: Python 3.12.1 is now available
Python 3.12.1 is now available.
https://www.python.org/downloads/release/python-3121/
Python 3.12 is the newest major release of the Python programming language, and it contains many new features and optimizations. 3.12.1 is the latest maintenance release, containing more than 400 bugfixes, build improvements and documentation changes since 3.12.0.
- More flexible f-string parsing, allowing many things previously disallowed (PEP 701).
- Support for the buffer protocol in Python code (PEP 688).
- A new debugging/profiling API (PEP 669).
- Support for isolated subinterpreters with separate Global Interpreter Locks (PEP 684).
- Even more improved error messages. More exceptions potentially caused by typos now make suggestions to the user.
- Support for the Linux perf profiler to report Python function names in traces.
- Many large and small performance improvements (like PEP 709 and support for the BOLT binary optimizer), delivering an estimated 5% overall performance improvement.
- New type annotation syntax for generic classes (PEP 695).
- New override decorator for methods (PEP 698).
- The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623.
- In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2).
- The deprecated smtpd and distutils modules have been removed (see PEP 594 and PEP 632). The setuptools package continues to provide the distutils module.
- A number of other old, broken and deprecated functions, classes and methods have been removed.
- Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning, making them more visible. (They will become syntax errors in the future.)
- The internal representation of integers has changed in preparation for performance enhancements. (This should not affect most users as it is an internal detail, but it may cause problems for Cython-generated code.)
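As a quick sketch of the first item above (PEP 701), a 3.12 f-string expression may now reuse the enclosing quote character. Since the new form is a syntax error on older interpreters, this sketch compiles it at run time and guards on the version:

```python
import sys

if sys.version_info >= (3, 12):
    # PEP 701: the inner string literals may reuse the outer double quotes.
    # eval() is used only so this file still parses on older Pythons.
    result = eval('f"{"nested" + "!"}"')
    print(result)  # prints: nested!
else:
    print("PEP 701 f-string syntax requires Python 3.12+")
```

On 3.12 the expression inside the braces is evaluated as ordinary Python, so the f-string produces "nested!"; pre-3.12 tokenizers reject the reused quotes outright.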
For more details on the changes to Python 3.12, see What’s new in Python 3.12.
More resources
- Online Documentation.
- PEP 693, the Python 3.12 Release Schedule.
- Report bugs via GitHub Issues.
- Help fund Python directly or via GitHub Sponsors, and support the Python community.
Enjoy the new releases
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.
Your release team,
Thomas Wouters
Ned Deily
Steve Dower
Łukasz Langa