Feeds
LN Webworks: A Beginner’s Guide To Hreflang and Multilingual SEO
Strengthening your SEO game is essential, but are you taking the right measures to accomplish it? If not, this blog is here to help. With multilingual SEO, serving a global website in each visitor's preferred language becomes easy.
But that's not all: there is more to it than meets the eye. To make your website's content accessible to everyone, you need to be clear about which SEO technicalities to use; ignoring them can adversely affect your website's ranking.
What Is Multilingual SEO?
Multilingual SEO is a technique that improves the accessibility of your website by removing language barriers. It localizes your website's content into different languages based on visitor preference. In short, it allows you to reach audiences well beyond your own region or country.
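One of the core SEO technicalities here is the hreflang annotation. As a sketch (the URLs below are hypothetical), each language version of a page declares its alternates in the HTML head so that search engines can serve the right version to each visitor:

```html
<!-- Hypothetical example: alternate language versions of one page. -->
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="es" href="https://example.com/es/" />
<link rel="alternate" hreflang="de" href="https://example.com/de/" />
<!-- Fallback for visitors whose language is not listed above. -->
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```

Each language version should list all of its alternates, including itself, so the annotations stay mutually consistent.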
LostCarPark Drupal Blog: Drupal Advent Calendar day 9 - Media Management
Welcome back to the ninth day of Drupal Advent Calendar, and behind today’s door we find the Media track of Drupal Starshot. Media Management is an area where Drupal has traditionally not been strong compared to other content management systems, yet it has a lot of very powerful features that Drupal CMS will hopefully refine to make it one of the best media management platforms on the web.
In the Track Leads keynote at DrupalCon Barcelona, Tony Barker, the Track Lead for Media Management in Drupal CMS, outlined how he is building the track to help marketers tell their story and connect with…
Python Bytes: #413 python-build-standalone finds a home
Zato Blog: New API Integration Tutorial in Python
Do you know what airports, telecom operators, defense forces and health care organizations have in common?
They all rely heavily on deep-backend software systems which are integrated and automated using principled methodologies, innovative techniques and well-defined implementation frameworks.
If you'd like to learn how to integrate and automate such complex systems correctly, head over to the new API integration tutorial that will show you how to do it in Python too.
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

# ##############################################################################

class MyService(Service):
    """ Returns user details by the person's name.
    """
    name = 'api.my-service'

    # I/O definition
    input = '-name'
    output = 'user_type', 'account_no', 'account_balance'

    def handle(self):

        # For later use
        name = self.request.input.name or 'partner'

        # REST connections
        crm_conn = self.out.rest['CRM'].conn
        billing_conn = self.out.rest['Billing'].conn

        # Prepare requests
        crm_request = {'UserName':name}
        billing_params = {'USER':name}

        # Get data from CRM
        crm_data = crm_conn.get(self.cid, crm_request).data

        # Get data from Billing
        billing_data = billing_conn.post(self.cid, params=billing_params).data

        # Extract the business information from both systems
        user_type = crm_data['UserType']
        account_no = crm_data['AccountNumber']
        account_balance = billing_data['ACC_BALANCE']

        self.logger.info(f'cid:{self.cid} Returning user details for {name}')

        # Now, produce the response for our caller
        self.response.payload = {
            'user_type': user_type,
            'account_no': account_no,
            'account_balance': account_balance,
        }

# ##############################################################################
➤ API programming screenshots
➤ Here's the API integration tutorial again
➤ More API programming examples in Python
Tryton News: Release of Relatorio 0.11.0
We are proud to announce the release of Relatorio version 0.11.0.
Relatorio is a templating library mainly for OpenDocument that also uses OpenDocument as its source format.
This is a feature release which:
- Allow setting zip file generation options on opendocument templates
The package is available at https://pypi.org/project/relatorio/0.11.0/
The documentation is available at https://docs.tryton.org/relatorio/0.11.0/
1 post - 1 participant
Paul Wise: FLOSS Activities November 2024
This month I didn't have any particular focus. I just worked on issues in my info bubble.
Changes
- swh-web: adjust admin contact wording
- zygolophodon: add option to output links
- ArchiveBot: ignore more URL sharing sites, add option to hide filtered jobs
- Debian wiki pages: PortsDocs/New, bugs.debian.org/usertags, Statistics, Teams/Debbugs/ArchitectureTags
Patches: plasma-browser-integration webext
Administration
- Debian IRC: set secret mode for an unused channel that traps people
- Respond to queries from Debian users and contributors on IRC
The SWH work was sponsored. All other work was done on a volunteer basis.
Kaidan 0.10.0: Too Much to Summarize!
We finally made it: Kaidan’s next release with so many features that we cannot summarize them in one sentence!
Most of the work has been funded by NLnet via NGI Assure and NGI Zero Entrust with public money provided by the European Commission. If you want Kaidan’s progress to continue and keep more free software projects alive, please share and sign the open letter for further funding!
Now to the bunch of Kaidan’s new and great features:
Group chats with invitations, user listing, participant mentioning and private/public group chat filtering are supported now. In order to use it, you need an XMPP provider that supports MIX-Core, MIX-PAM and MIX-Admin. Unfortunately, there are not many providers supporting it yet since it is a comparatively recent group chat variant.
You do not need to quote messages just to reply to them any longer. The messages are referenced internally without bloating the conversation. After clicking on a referenced message, Kaidan even jumps to it. In addition, Kaidan allows you to remove unwanted messages locally.
We added an overview of all shared media to quickly find the image you received some time ago. You can define when to download media automatically. Furthermore, connecting to the server is now really fast - no need to wait multiple seconds just to see your latest offline messages anymore.
If you enter a chat address (e.g., to add a contact), its server part is now autocompleted if available. We added filter options for contacts and group chats. After adding labels to them, you can even search by those labels. And if you do not want to get any messages from someone, you can block them.
In case you need to move to a new account (e.g., if you are dissatisfied with your current XMPP provider), Kaidan helps you with that. For example, it transfers your contacts and informs them about the move. The redesigned onboarding user interface including many fixes assists with choosing a new provider and creating an account on it.
We updated Kaidan to the API v2 of XMPP Providers to stay up-to-date with the project’s data. If you are an operator of a public XMPP provider and would like Kaidan’s users to easily create accounts on it, simply ask to add it to the provider list.
The complete list of changes can be found in the changelog section. There is also a technical overview of all currently supported features.
Please note that we currently focus on new features instead of supporting more systems. Once Kaidan has a reasonable feature set, we will work on that topic again. Even if Kaidan is making good progress, keep in mind that it is not yet a stable app.
Changelog
Features:
- Add server address completion (fazevedo)
- Allow to edit account’s profile (jbb)
- Store and display delivery states of message reactions (melvo)
- Send pending message reactions after going online (melvo)
- Enable user to resend a message reaction if it previously failed (melvo)
- Open contact addition as page (mobile) or dialog (desktop) (melvo)
- Add option to open chat if contact exists on adding contact (melvo)
- Use consistent page with search bar for searching its content (melvo)
- Add local message removal (taibsu)
- Allow reacting to own messages (melvo)
- Add login option to chat (melvo)
- Display day of the week or “yesterday” for last messages (taibsu, melvo)
- Add media overview (fazevedo, melvo)
- Add contact list filtering by account and labels (i.e., roster groups) (incl. addition/removal) (melvo, tech-bash)
- Add message date sections to chat (melvo)
- Add support for automatic media downloads (fazevedo)
- Add filtering contacts by availability (melvo)
- Add item to contact list on first received direct message (melvo)
- Add support for blocking chat addresses (lnj)
- Improve notes chat (chat with oneself) usage (melvo)
- Place avatar above chat address and name in account/contact details on narrow window (melvo)
- Reload camera device for QR code scanning as soon as it is plugged in / enabled (melvo)
- Provide slider for QR code scanning to adjust camera zoom (melvo)
- Add contact to contact list on receiving presence subscription request (melvo)
- Add encryption key authentication via entering key IDs (melvo)
- Improve connecting to server and authentication (XEP-0388: Extensible SASL Profile (SASL 2), XEP-0386: Bind 2, XEP-0484: Fast Authentication Streamlining Tokens, XEP-0368: SRV records for XMPP over TLS) (lnj)
- Support media sharing with more clients even for sharing multiple files at once (XEP-0447: Stateless file sharing v0.3) (lnj)
- Display and check media upload size limit (fazevedo)
- Redesign message input field to use rounded corners and resized/symbolic buttons (melvo)
- Add support for moving account data to another account, informing contacts and restoring settings for moved contacts (XEP-0283: Moved) (fazevedo)
- Add group chat support with invitations, user listing, participant mentioning and private/public group chat filtering (XEP-0369: Mediated Information eXchange (MIX), XEP-0405: Mediated Information eXchange (MIX): Participant Server Requirements, XEP-0406: Mediated Information eXchange (MIX): MIX Administration, XEP-0407: Mediated Information eXchange (MIX): Miscellaneous Capabilities) (melvo)
- Add button to cancel message correction (melvo)
- Display marker for new messages (melvo)
- Add enhanced account-wide and per contact notification settings depending on group chat mentions and presence (melvo)
- Focus input fields appropriately (melvo)
- Add support for replying to messages (XEP-0461: Message Replies) (melvo)
- Indicate that Kaidan is busy during account deletion and group chat actions (melvo)
- Hide account deletion button if In-Band Registration is not supported (melvo)
- Embed login area in page for QR code scanning and page for web registration instead of opening start page (melvo)
- Redesign onboarding user interface including new page for choosing provider to create account on (melvo)
- Handle various corner cases that can occur during account creation (melvo)
- Update to XMPP Providers v2 (melvo)
- Hide voice message button if uploading is not supported (melvo)
- Replace custom images for message delivery states with regular theme icons (melvo)
- Free up message content space by hiding unneeded avatars and increasing maximum message bubble width (melvo)
- Highlight draft message text to easily see what is not sent yet (melvo)
- Store sent media in suitable directories with appropriate file extensions (melvo)
- Allow sending media with fewer steps from recording to sending (melvo)
- Add media to be sent in scrollable area above message input field (melvo)
- Display original images (if available) as previews instead of their thumbnails (melvo)
- Display high resolution thumbnails for locally stored videos as previews instead of their thumbnails (melvo)
- Send smaller thumbnails (melvo)
- Show camera status and reload camera once plugged in for taking pictures or recording videos (melvo)
- Add zoom slider for taking pictures or recording videos (melvo)
- Show overlay with description when files are dragged to be dropped on chats for being shared (melvo)
- Show location previews on a map (melvo)
- Open locations in user-defined way (system default, in-app, web) (melvo)
- Delete media that is only captured for sending but not sent (melvo)
- Add voice message recorder to message input field (melvo)
- Add inline audio player (melvo)
- Add context menu entry for opening directory of media files (melvo)
- Show collapsible buttons to send media/locations inside of message input field (melvo)
- Move button for adding hidden message part to new collapsible button area (melvo)
Bugfixes:
- Fix index out of range error in message search (taibsu)
- Fix updating last message information in contact list (melvo)
- Fix multiple corrections of the same message (melvo, taibsu)
- Request delivery receipts for pending messages (melvo)
- Fix sorting roster items (melvo)
- Fix displaying spoiler messages (melvo)
- Fix displaying errors and encryption warnings for messages (melvo)
- Fix fetching messages from server’s archive (melvo)
- Fix various encryption problems (melvo)
- Send delivery receipts for caught-up messages (melvo)
- Do not hide last message date if contact name is too long (melvo)
- Fix displaying emojis (melvo)
- Fix several OMEMO bugs (melvo)
- Remove all locally stored data related to removed accounts (melvo)
- Fix displaying media preview file names/sizes (melvo)
- Fix disconnecting from server when application window is closed including timeout on connection problems (melvo)
- Fix media/location sharing (melvo)
- Fix handling emoji message reactions (melvo)
- Fix moving pinned chats (fazevedo)
- Fix drag and drop for files and pasting them (melvo)
- Fix sending/displaying media in selected order (lnj, melvo)
Notes:
- Kaidan is REUSE-compliant now
- Kaidan requires Qt 5.15 and QXmpp 1.9 now
- Source code (.tar.xz) (sig signed with 04EFAD0F7A4D9724)
- Linux (Flatpak on Flathub)
Or install Kaidan for your distribution:
Freexian Collaborators: Debian Contributions: OpenMPI transitions, cPython 3.12.7+ update uploads, Python 3.13 Transition, and more! (by Anupa Ann Joseph, Stefano Rivera)
Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
Transition management, by Emilio Pozuelo Monfort
Emilio has been helping finish the mpi-defaults switch to mpich on 32-bit architectures, and the openmpi transitions. This involves filing bugs for the reverse dependencies, doing NMUs, and requesting removals for outdated (Not Built from Source) binaries on 32-bit architectures where openmpi is no longer available. Those transitions got entangled with a few others, such as the petsc stack, and were blocking many packages from migrating to testing. These transitions were completed in early December.
cPython 3.12.7+ update uploads, by Stefano Rivera
Python 3.12 had failed to build on mips64el, due to an obscure dh_strip failure. The mips64el porters never figured it out, but the missing build on mips64el was blocking migration to Debian testing. After waiting a month, enough changes had accumulated in the upstream 3.12 maintenance git branch that we could apply them in the hope of changing the output enough to avoid breaking dh_strip. This worked.
Of course there were other things to deal with too. A test started failing due to a Debian-specific patch we carry for python3.x-minimal, and it needed to be reworked. And Stefano forgot to strip the trailing + from PY_VERSION, which confuses some python libraries. This always requires another patch when applying git updates from the maintenance branch. Stefano added a build-time check to catch this mistake in the future. Python 3.12.7 migrated.
Python 3.13 Transition, by Stefano Rivera and Colin Watson
During November the Python 3.13-add transition started. This is the first stage of supporting a new version of Python in the Debian archive (after preparatory work): adding it as a new supported but non-default version. All packages with compiled Python extensions need to be re-built to add support for the new version.
We have covered the lead-up to this transition in the past. Due to preparation, many of the failures we hit were expected and we had patches waiting in the bug tracker. These could be NMUed to get the transition moving. Others had been known about but hadn’t been worked on, yet.
Some other packages ran into new issues, as we got further into the transition than we’d been able to in preparation. The whole Debian Python team has been helping with this work.
The rebuild stage of the 3.13-add transition is now over, but many packages need work before britney will let python3-defaults migrate to testing.
Limiting build concurrency based on available RAM, by Helmut Grohne
In recent years, the concurrency of CPUs has been increasing, as has the demand for RAM by linkers. What has not been increasing as quickly is the RAM supply in typical machines. As a result, we more frequently run into situations where package builds exhaust memory when building at full concurrency. Helmut initiated a discussion about generalizing an approach to this in Debian packages. He researched existing code that limits concurrency, and proposed possible extensions to debhelper and dpkg that would derive concurrency limits from available system RAM. Thus far there is consensus on the need for a more general solution, but ideas are still being collected for the precise solution.
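The idea can be sketched in a few lines. This is a hypothetical helper, not the actual debhelper/dpkg proposal: cap build parallelism at whatever fits in available RAM, falling back to the CPU count where /proc/meminfo is unavailable. The 2 GiB-per-job default is an illustrative assumption.

```python
import os

def concurrency_limit(ram_per_job_mb=2048):
    """Heuristic build concurrency: the smaller of the CPU count and
    the number of jobs that fit in currently available RAM."""
    cpus = os.cpu_count() or 1
    try:
        with open("/proc/meminfo") as f:
            meminfo = dict(line.split(":", 1) for line in f)
        # MemAvailable is reported in kB; convert to MiB.
        avail_mb = int(meminfo["MemAvailable"].split()[0]) // 1024
    except (OSError, KeyError):
        return cpus  # no meminfo (e.g. non-Linux): stay CPU-bound
    return max(1, min(cpus, avail_mb // ram_per_job_mb))
```

For example, on a machine with 8 cores but only 6 GiB of RAM available, the 2 GiB-per-job estimate would limit a link-heavy build to 3 parallel jobs rather than 8.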
MiniDebConf Toulouse at Capitole du Libre
The whole Freexian Collaborator team attended MiniDebConf Toulouse, part of the Capitole du Libre event. Several members of the team gave talks:
- Santiago spoke on Linux Live Patching in Debian, presenting an update on the idea since DebConf 24. This includes the initial requirements for the livepatch package format, that would be used to distribute the livepatches.
- Stefano, Colin, Enrico, and Carles spoke on Using Debusine to Automate QA.
- Santiago and Roberto spoke on How LTS Goes Beyond LTS.
- Helmut spoke on Cross Building.
- Carles gave a lightning talk on po-debconf-manager.
Stefano and Anupa worked as part of the video team, streaming and recording the event’s talks.
Miscellaneous contributions
- Stefano looked into packaging the latest upstream python-falcon version in Debian, in support of the Python 3.13 transition. This appeared to break python-hug, which is sadly looking neglected upstream, and the best course of action is probably its removal from Debian.
- Stefano uploaded videos from various 2024 Debian events to PeerTube and YouTube.
- Stefano and Santiago visited the site for DebConf 2025 in Brest, after the MiniDebConf in Toulouse, to meet with the local team and scout out the venue. The on-going DebConf 25 organization work of last month also included handling the logo and artwork call for proposals.
- Stefano helped the press team to edit a post for bits.debian.org on OpenStreetMap's migration to Debian.
- Carles implemented multiple language support in po-debconf-manager and tested it with Brazilian Portuguese during MiniDebConf Toulouse. The system was also tested and improved by reviewing more than 20 translations to Catalan, creating merge requests for those packages, and providing user support to new users. Additionally, Carles implemented better status transitions, configuration key management and other small improvements.
- Helmut sent 32 patches for cross build failures. The wireplumber one was an interactive collaboration with Dylan Aïssi.
- Helmut continued to monitor the /usr-move, sent a patch for lib64readline8 and continued several older patch conversations. lintian now reports some aliasing issues in unstable.
- Helmut initiated a discussion on the semantics of *-for-host packages. More feedback is welcome.
- Helmut improved the crossqa.debian.net infrastructure to fail running lintian less often on larger packages.
- Helmut continued maintaining rebootstrap, mostly dropping applied patches and continuing discussions of submitted patches.
- Helmut prepared a non-maintainer upload of gzip for several long-standing bugs.
- Colin came up with a plan for resolving the multipart vs. python-multipart name conflict, and began work on converting reverse-dependencies.
- Colin upgraded 42 Python packages to new upstream versions. Some were complex: python-catalogue had some upstream version confusion, pydantic and rpds-py involved several Rust package upgrades as prerequisites, and python-urllib3 involved first packaging python-quart-trio and then vendoring an unpackaged test dependency.
- Colin contributed Incus support to needrestart upstream.
- Lucas set up a machine to rebuild all Ruby reverse dependencies to check what will be broken by adding Ruby 3.3 as an alternative interpreter. The tool used for this is mass-rebuild, and the initial rebuilds have already started. The Ruby interpreter maintainers are planning to experiment with debusine next time.
- Lucas is organizing a Debian Ruby sprint towards the end of January in Paris. The team's plan is to finish any missing bits of the Ruby 3.3 transition at that time, try to push the Rails 7 transition, and fix RC bugs affecting the Ruby ecosystem in Debian.
- Anupa attended a Debian Publicity team meeting in person during MiniDebCamp Toulouse.
- Anupa moderated and posted in the Debian Administrator group on LinkedIn.
Seth Michael Larson: New experimental Debian package for Cosign (Sigstore)
Published 2024-12-09 by Seth Larson
Cosign has a new experimental package available for Debian thanks to the work of Simon Josefsson. Simon and I had an email exchange about Sigstore and Cosign on Debian after the discussion about PEP 761 (Deprecation and discontinuation of PGP signatures).
Debian and other downstream distros of Python and Python packages are incredibly important consumers of verification materials. Because these distros actually verify materials for every build of a package, this increases the confidence for other users using these same artifacts even without those users directly verifying the materials themselves. We need more actors in the ecosystem doing end-to-end verification to dissuade attackers from supply-chain attacks targeting artifact repositories like python.org and PyPI.
Trying Cosign in Docker
I gave the experimental package a try using the Debian Docker image to verify CPython 3.14.0-alpha2's tarball and verification materials:
$ docker run --rm -it debian:bookworm

# Install the basics for later use.
$ apt-get install ca-certificates wget

# Add Simon's experimental package repo
# and install Cosign! :party:
$ echo "deb [trusted=yes] https://salsa.debian.org/jas/cosign/-/jobs/6682245/artifacts/raw/aptly experimental main" | \
    tee --append /etc/apt/sources.list.d/add.list
$ apt-get update
$ apt-get install cosign

$ cosign version
(ASCII-art "COSIGN" banner)
cosign: A tool for Container Signing, Verification and Storage in an OCI registry.

Now we can test Cosign out with CPython's artifacts. We expect Hugo van Kemenade (hugo@python.org) as the release manager for Python 3.14:
# Download the source and Sigstore bundle
$ wget https://www.python.org/ftp/python/3.14.0/Python-3.14.0a2.tgz
$ wget https://www.python.org/ftp/python/3.14.0/Python-3.14.0a2.tgz.sigstore

# Verify with Cosign!
$ cosign verify-blob \
    --certificate-identity hugo@python.org \
    --certificate-oidc-issuer https://github.com/login/oauth \
    --bundle ./Python-3.14.0a2.tgz.sigstore \
    --new-bundle-format \
    ./Python-3.14.0a2.tgz

Verified OK

Overall, this is working as expected from my point of view! Simon had a few open questions, mostly for Cosign's upstream project. I am hopeful that this means we'll begin seeing Sigstore bundles and their derivatives (such as attestations from the Python Package Index) be used for downstream verification by distros like Debian. Exciting times ahead!
New Bundle Format
My first attempt didn't include the --new-bundle-format option and that resulted in an opaque error. Hopefully this user-experience issue will be phased out and Cosign will "default" to the new bundle format? I included this error strictly for folks searching for this error message and wanting to fix their issue.
Error: bundle does not contain cert for verification, please provide public key
main.go:74: error during command execution: bundle does not contain cert for verification, please provide public key

This critical role would not be possible without funding from the Alpha-Omega project.

Have thoughts or questions? Let's chat over email or social:
sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org
Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0
Python⇒Speed: Reducing CO₂ emissions with faster software
What can you as a software developer do to fight climate change? My first and primary answer is getting involved with local politics. However, if you write software that operates at sufficient scale, you can also reduce carbon emissions by making your software faster.
In this article we’ll cover:
- Why more computation uses more electricity.
- Why you probably don’t need to think about this most of the time.
- Reducing emissions by reducing compute time.
- Reducing emissions with parallelism (even with the same amount of compute time!).
- Some alternative scenarios and caveats: embodied emissions and Jevons Paradox.
GNUnet News: GNUnet 0.23.0
We are pleased to announce the release of GNUnet 0.23.0.
GNUnet is an alternative network stack for building secure, decentralized and
privacy-preserving distributed applications.
Our goal is to replace the old insecure Internet protocol stack.
Starting from an application for secure publication of files, it has grown to
include all kinds of basic protocol components and applications towards the
creation of a GNU internet.
This is a new major release. It breaks protocol compatibility with the 0.22.0X versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.22.0X GNUnet network, and interactions between old and new peers will result in issues. In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.23.0 release is still only suitable for early adopters with some reasonable pain tolerance.
Download links
- gnunet-0.23.0.tar.gz (signature)
- gnunet-0.23.0-meson.tar.xz (signature) NEW: Test tarball made using the meson build system.
- gnunet-gtk-0.23.0.tar.gz (signature)
- gnunet-fuse-0.23.0.tar.gz (signature)
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/
Changes
A detailed list of changes can be found in the git log, the NEWS file and the bug tracker. Noteworthy highlights are:
- Code review: A number of issues found during a code review have been addressed.
- util: A GNUNET_OS_ProjectData must now be passed to some APIs that are commonly used by third parties using libgnunetutil (e.g. Taler, GNUnet-Gtk), so as to properly handle cases where the GNUnet installation directory is different from the third-party directory.
- Build System: Improved build times by outsourcing the handbook to prebuilt files and only generating GANA source files manually.
- There are known major design issues in the CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
- There are known moderate implementation limitations in CADET that negatively impact performance.
- There are known moderate design issues in FS that also impact usability and performance.
- There are minor implementation limitations in SET that create unnecessary attack surface for availability.
- The RPS subsystem remains experimental.
In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.
Thanks
This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, TheJackiMonster, oec, ch3, and Martin Schanzenbach.
Dirk Eddelbuettel: pinp 0.0.11 on CRAN: Maintenance
A new version of our pinp package arrived on CRAN today, and is the first release in four years. The pinp package allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.
This release contains no new features or new user-facing changes but reflects the standard package and repository maintenance over the four-year window since the last release: updating of actions, updating of URLs and addressing small packaging changes spotted by ever-more-vigilant R checking code.
The NEWS entry for this release follows.
Changes in pinp version 0.0.11 (2024-12-08)
- Standard package maintenance for continuous integration, URL updates, and packaging conventions
- Correct two minor nags in the Rd file
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Design System December Updates
Hey team!
Back with a series of updates on the Plasma Design System work that we are doing. All videos contain English captions.
Leave your feedback or let us know if you have any questions.
Dries Buytaert: Drupal CMS: the official name for Drupal Starshot
We're excited to announce that "Drupal CMS" will be the official name for the product developed by the Drupal Starshot Initiative.
The name "Drupal CMS" was chosen after user testing with both newcomers and experienced users. This name consistently scored highest across all tested groups, including marketers unfamiliar with Drupal.
Participants appreciated the clarity it brings:
"Having the words CMS in the name will make it clear what the product is. People would know that Drupal was a content management system by the nature of its name, rather than having to ask what Drupal is."

"I'm a designer familiar with the industry, so the term CMS or content management system is the term or phrase that describes this product most accurately in my opinion. I think it is important to have CMS in the title."

The name "Drupal Starshot" will remain an internal code name until the first release of Drupal CMS, after which it will most likely be retired.
Freelock Blog: Show a mix of future and past events
Another automation we did for Programming Librarian, a site for librarians to plan educational programs, involved events. They wanted to always feature 3 events on the home page, and the most important events were in the future.
Real Python: How to Run Your Python Scripts and Code
Running a Python script is a fundamental task for any Python developer. You can execute a Python .py file through various methods depending on your environment and platform. On Windows, Linux, and macOS, use the command line by typing python script_name.py to run your script. You can also use the python command with the -m option to execute modules. This tutorial covers these methods and more, ensuring you can run Python scripts efficiently.
By the end of this tutorial, you’ll understand that:
- Running a Python .py script involves using the python command followed by the script’s filename in the terminal or command prompt.
- Running Python from the command prompt requires you to open the command prompt, navigate to the script’s directory, and execute it using python script_name.py.
- Running a .py file in Windows can be done directly from the command prompt or by double-clicking the file if Python is associated with .py files.
- Running a Python script without Python installed is possible by using online interpreters or converting scripts to executables, but it’s more flexible to install Python and run scripts natively.
To get the most out of this tutorial, you should know the basics of working with your operating system’s terminal and file manager. It’d also be beneficial for you to be familiar with a Python-friendly IDE or code editor and with the standard Python REPL (Read-Eval-Print Loop).
Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
Take the Quiz: Test your knowledge with our interactive “How to Run Your Python Scripts” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Run Your Python Scripts
One of the most important skills you need to build as a Python developer is being able to run Python scripts and code. Test your understanding of the different ways to run your code.
What Scripts and Modules Are
In computing, the term script refers to a text file containing a logical sequence of orders that you can run to accomplish a specific task. These orders are typically expressed in a scripting language, which is a programming language that allows you to manipulate, customize, and automate tasks.
Scripting languages are usually interpreted at runtime rather than compiled. So, scripts are typically run by some kind of interpreter, which is responsible for executing each order in a sequence.
Python is an interpreted language. Because of that, Python programs are commonly called scripts. However, this terminology isn’t completely accurate because Python programs can be way more complex than a simple, sequential script.
In general, a file containing executable Python code is called a script—or an entry-point script in more complex applications—which is a common term for a top-level program. On the other hand, a file containing Python code that’s designed to be imported and used from another Python file is called a module.
So, the main difference between a module and a script is that modules store importable code while scripts hold executable code.
Note: Importable code is code that defines something but doesn’t perform a specific action. Some examples include function and class definitions. In contrast, executable code is code that performs specific actions. Some examples include function calls, loops, and conditionals.
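To make the distinction concrete, here's a minimal sketch of a module and a script side by side. The file names (greetings.py, main.py) and the function are illustrative, not from the tutorial:

```python
# greetings.py -- a module: it only defines things, it performs no actions
def build_greeting(name):
    """Return a greeting string without printing anything."""
    return f"Hello, {name}!"

# main.py -- a script: it performs actions (calling the function, printing)
# In a real project this file would start with: from greetings import build_greeting
message = build_greeting("World")
print(message)
```

Importing greetings has no visible effect, while running main.py prints the greeting. That difference in behavior is exactly the importable-versus-executable split described above.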
In the following sections, you’ll learn how to run Python scripts, programs, and code in general. To kick things off, you’ll start by learning how to run them from your operating system’s command line or terminal.
How to Run Python Scripts From the Command Line
In Python programming, you'll write programs in plain text files. By convention, files containing Python code use the .py extension, and there's no distinction between scripts or executable programs and modules. All of them use the same extension.
Note: On Windows systems, the extension can also be .pyw for those applications that should use the pythonw.exe launcher.
To create a Python script, you can use any Python-friendly code editor or IDE (integrated development environment). To keep moving forward in this tutorial, you’ll need to create a basic script, so fire up your favorite text editor and create a new hello.py file containing the following code:
Python hello.py

    print("Hello, World!")

This is the classic "Hello, World!" program in Python. The executable code consists of a call to the built-in print() function that displays the "Hello, World!" message on your screen.
With this small program in place, you can explore different ways to run it. You'll start by running the program from your command line, which is arguably the most commonly used approach to running scripts.
Using the python Command
To run Python scripts with the python command, you need to open a command-line window and type in the word python followed by the path to your target script:
Windows PowerShell

    PS> python .\hello.py
    Hello, World!

    PS> py .\hello.py
    Hello, World!

Shell

    $ python ./hello.py
    Hello, World!

Read the full article at https://realpython.com/run-python-scripts/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Real Python: Using and Creating Global Variables in Your Python Functions
In Python, global variables are accessible across your entire program, including within functions. Understanding how Python handles global variables is key to writing efficient code. This tutorial will guide you through accessing and modifying global variables in Python functions using the global keyword and the globals() function. You’ll also learn to manage scope and avoid potential conflicts between local and global variables.
You’ll explore how to create global variables inside functions and apply strategies to minimize their use, ensuring your code remains clean and maintainable. After reading this tutorial, you’ll be adept at managing global variables and understanding their impact on your Python code.
By the end of this tutorial, you’ll understand that:
- A global variable in Python is a variable defined at the module level, accessible throughout the program.
- Accessing and modifying global variables inside Python functions can be achieved using the global keyword or the globals() function.
- Python handles name conflicts by searching scopes from local to built-in, potentially causing name shadowing challenges.
- Creating global variables inside a function is possible using the global keyword or globals(), but it’s generally not recommended.
- Strategies to avoid global variables include using constants, passing arguments, and employing classes and methods to encapsulate state.
To follow along with this tutorial, you should have a solid understanding of Python programming, including fundamental concepts such as variables, data types, scope, mutability, functions, and classes.
Get Your Code: Click here to download the free sample code that you’ll use to understand when and how to work with global variables in your Python functions.
Take the Quiz: Test your knowledge with our interactive “Using and Creating Global Variables in Your Python Functions” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Using and Creating Global Variables in Your Python Functions
In this quiz, you'll test your understanding of how to use global variables in Python functions. With this knowledge, you'll be able to share data across an entire program, modify and create global variables within functions, and understand when to avoid using global variables.
Using Global Variables in Python Functions
Global variables are those that you can access and modify from anywhere in your code. In Python, you'll typically define global variables at the module level. So, the containing module is their scope.
Note: You can also define global variables inside functions, as you’ll learn in the section Creating Global Variables Inside a Function.
Once you’ve defined a global variable, you can use it from within the module itself or from within other modules in your code. You can also use global variables in your functions. However, those cases can get a bit confusing because of differences between accessing and modifying global variables in functions.
To understand these differences, consider that Python can look for variables in four different scopes:
- The local, or function-level, scope, which exists inside functions
- The enclosing, or non-local, scope, which appears in nested functions
- The global scope, which exists at the module level
- The built-in scope, which is a special scope for Python’s built-in names
To illustrate, say that you’re inside an inner function. In that case, Python can look for names in all four scopes.
When you access a variable in that inner function, Python first looks inside that function. If the variable doesn’t exist there, then Python continues with the enclosing scope of the outer function. If the variable isn’t defined there either, then Python moves to the global and built-in scopes in that order. If Python finds the variable, then you get the value back. Otherwise, you get a NameError:
Python

    >>> # Global scope
    >>> def outer_func():
    ...     # Non-local scope
    ...     def inner_func():
    ...         # Local scope
    ...         print(some_variable)
    ...     inner_func()
    ...

    >>> outer_func()
    Traceback (most recent call last):
        ...
    NameError: name 'some_variable' is not defined

    >>> some_variable = "Hello from global scope!"
    >>> outer_func()
    Hello from global scope!

When you launch an interactive session, it starts off at the module level, in the global scope. In this example, you have outer_func(), which defines inner_func() as a nested function. From the perspective of this nested function, its own code block represents the local scope, while the outer_func() code block before the call to inner_func() represents the non-local scope.
If you call outer_func() without defining some_variable in either of your current scopes, then you get a NameError exception because the name isn’t defined.
If you define some_variable in the global scope and then call outer_func(), then you get Hello from global scope! on your screen. Internally, Python has searched the local, non-local, and global scopes to find some_variable and print its content. Note that you can define this variable in any of the three scopes, and Python will find it.
This search mechanism makes it possible to use global variables from inside functions. However, while taking advantage of this feature, you can face a few issues. For example, accessing a variable works, but directly modifying a variable doesn’t work:
Python

    >>> number = 42
    >>> def access_number():
    ...     return number
    ...

    >>> access_number()
    42

    >>> def modify_number():
    ...     number = 7
    ...

    >>> modify_number()
    >>> number
    42

The access_number() function works fine. It looks for number and finds it in the global scope. In contrast, modify_number() doesn't work as expected. Why doesn't this function update the value of your global variable, number? The problem is the scope of the variable. You can't directly modify a variable from a high-level scope like global in a lower-level scope like local.
Internally, Python assumes that any name directly assigned within a function is local to that function. Therefore, the local name, number, shadows its global sibling.
In this sense, global variables behave as read-only names. You can access their values, but you can’t modify them.
Note: The discussion about modifying global variables inside functions revolves around assignment operations rather than in-place mutations of mutable objects. You’ll learn about the effects of mutability on global variables in the section Understanding How Mutability Affects Global Variables.
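As a preview of what the global keyword (mentioned in the introduction) does, here's a sketch of how modify_number() could be rewritten so the assignment targets the module-level name instead of creating a local shadow:

```python
number = 42

def modify_number():
    global number  # declare that assignments here target the module-level name
    number = 7     # reassigns the global, instead of creating a local variable

modify_number()
print(number)  # 7
```

With the global declaration in place, the function really does rebind the module-level number, which is why the final print() shows 7 rather than 42.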
Read the full article at https://realpython.com/python-use-global-variable-in-function/ »
Real Python: Asynchronous Tasks With Django and Celery
Integrating Celery with your Django application allows you to offload time-consuming tasks, ensuring smooth user experiences. Celery is a distributed task queue that processes tasks asynchronously, preventing delays in your web app’s response time. By using Celery with Django, you can efficiently manage tasks like sending emails, processing images, and analyzing data without slowing down your application.
Celery works by leveraging a message broker like Redis to communicate between your Django app and Celery workers. This setup enables you to handle tasks outside the main execution thread, improving your app’s performance.
By the end of this tutorial, you’ll understand that:
- Celery is a distributed task queue that handles tasks outside the main Django app flow.
- Python’s Celery excels at offloading work and scheduling tasks independently.
- Using Celery in Django helps maintain app responsiveness during time-intensive tasks.
- Configuring Celery in Django involves setting up a message broker and defining tasks.
- A Celery worker performs tasks asynchronously, freeing up the main app.
- Running a task in Celery requires calling the task with .delay() or .apply_async().
- Celery is not a message queue but uses a message broker like Redis for communication.
You’re in the right place if you’ve never used Celery in a Django app before, or if you’ve peeked into Celery’s documentation but couldn’t find your way around. You’ll learn how to configure Celery in Django to handle tasks asynchronously, ensuring your application remains responsive and efficient.
To focus this tutorial on the essentials, you’ll integrate Celery into an existing Django app. Go ahead and download the code for that app so that you can follow along:
Get Your Code: Click here to download the free sample code you'll use to integrate Celery into your Django app.
Python Celery Basics
Celery is a distributed task queue that can collect, record, schedule, and perform tasks outside of your main program.
Note: Celery dropped support for Windows in version 4, so while you may still be able to get it to work on Windows, you’re better off using a different task queue, such as huey or Dramatiq, instead.
In this tutorial, you’ll focus on using Celery on UNIX systems, so if you’re trying to set up a distributed task queue on Windows, then this might not be the right tutorial for you.
To receive tasks from your program and send results to a back end, Celery requires a message broker for communication. Redis and RabbitMQ are two message brokers that developers often use together with Celery.
In this tutorial, you’ll use Redis as the message broker. To challenge yourself, you can stray from the instructions and use RabbitMQ as a message broker instead.
If you want to keep track of the results of your task runs, then you also need to set up a results back end database.
Note: Connecting Celery to a results back end is optional. Once you instruct Celery to run a task, it’ll do its duty whether you keep track of the task result or not.
However, keeping a record of all task results is often helpful, especially if you’re distributing tasks to multiple queues. To persist information about task results, you need a database back end.
You can use many different databases to keep track of Celery task results. In this tutorial, you’ll work with Redis both as a message broker and as a results back end. By using Redis, you limit the dependencies that you need to install because it can take on both roles.
You won’t do any work with the recorded task results in the scope of this tutorial. However, as a next step, you could inspect the results with the Redis command-line interface (CLI) or pull information into a dedicated page in your Django project.
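As a sketch of what that dual-role Redis setup looks like in Django settings, the two relevant values might be configured as follows. The URLs assume a Redis server on localhost's default port, and the CELERY_ prefix assumes you load settings into Celery with a CELERY namespace; adjust both for your environment:

```python
# settings.py (sketch) -- Redis serves as both message broker and results back end.
# Assumes a local Redis server on the default port 6379; change the URLs as needed.
CELERY_BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
```

Pointing both settings at the same Redis instance is what lets you avoid installing a second service just for result storage.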
Why Use Celery?
There are two main reasons why most developers want to start using Celery:
- Offloading work from your app to distributed processes that can run independently of your app
- Scheduling task execution at a specific time, sometimes as recurring events
Celery is an excellent choice for both of these use cases. It defines itself as “a task queue with focus on real-time processing, while also supporting task scheduling” (Source).
Even though both of these functionalities are part of Celery, they’re often addressed separately:
- Celery workers are worker processes that run tasks independently from one another and outside the context of your main service.
- Celery beat is a scheduler that orchestrates when to run tasks. You can use it to schedule periodic tasks as well.
Celery workers are the backbone of Celery. Even if you aim to schedule recurring tasks using Celery beat, a Celery worker will pick up your instructions and handle them at the scheduled time. What Celery beat adds to the mix is a time-based scheduler for Celery workers.
In this tutorial, you’ll learn how to integrate Celery with Django to perform operations asynchronously from the main execution thread of your app using Celery workers.
You won’t tackle task scheduling with Celery beat in this tutorial, but once you understand the basics of Celery tasks, you’ll be well equipped to set up periodic tasks with Celery beat.
Read the full article at https://realpython.com/asynchronous-tasks-with-django-and-celery/ »
Real Python: Effective Python Testing With pytest
pytest is a popular testing framework for Python that simplifies the process of writing and executing tests. To start using pytest, install it with pip in a virtual environment. pytest offers several advantages over unittest, which ships with Python, such as less boilerplate code, more readable output, and a rich plugin ecosystem.
pytest comes packed with features to boost your productivity. Its fixtures allow for explicit dependency declarations, making tests more understandable and reducing implicit dependencies. Parametrization in pytest helps prevent redundant test code by enabling multiple test cases from a single test function definition. This framework is highly customizable, so you can tailor it to your project’s needs.
By the end of this tutorial, you’ll understand that:
- Using pytest requires installing it with pip in a virtual environment to set up the pytest command.
- pytest allows for less code, easier readability, and more features compared to unittest.
- Managing test dependencies and state with pytest is made efficient through the use of fixtures, which provide explicit dependency declarations.
- Parametrization in pytest helps avoid redundant test code by allowing multiple test scenarios from a single test function.
- Assertion introspection in pytest provides detailed information about failures in the test report.
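To make the fixture and parametrization bullets concrete, here's a small sketch of a test file. The file and fixture names are illustrative, not from the tutorial:

```python
# test_sketch.py -- illustrative names; run with the pytest command
import pytest

@pytest.fixture
def sample_numbers():
    # The fixture is an explicit, named dependency of any test that requests it
    return [1, 2, 3]

def test_sum(sample_numbers):
    assert sum(sample_numbers) == 6

# One test function definition, two generated test cases
@pytest.mark.parametrize("word, expected", [("radar", True), ("python", False)])
def test_is_palindrome(word, expected):
    assert (word == word[::-1]) == expected
```

Running pytest in the file's directory would collect three tests: one using the fixture, and two generated by the parametrize decorator.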
Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you’ll need to take your Python skills to the next level.
Take the Quiz: Test your knowledge with our interactive “Effective Testing with Pytest” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Effective Testing with Pytest
In this quiz, you'll test your understanding of pytest, a Python testing tool. With this knowledge, you'll be able to write more efficient and effective tests, ensuring your code behaves as expected.
How to Install pytest
To follow along with some of the examples in this tutorial, you'll need to install pytest. As with most Python packages, pytest is available on PyPI. You can install it in a virtual environment using pip:
Windows PowerShell

    PS> python -m venv venv
    PS> .\venv\Scripts\activate
    (venv) PS> python -m pip install pytest

Shell

    $ python -m venv venv
    $ source venv/bin/activate
    (venv) $ python -m pip install pytest

The pytest command will now be available in your installation environment.
What Makes pytest So Useful?
If you've written unit tests for your Python code before, then you may have used Python's built-in unittest module. unittest provides a solid base on which to build your test suite, but it has a few shortcomings.
A number of third-party testing frameworks attempt to address some of the issues with unittest, and pytest has proven to be one of the most popular. pytest is a feature-rich, plugin-based ecosystem for testing your Python code.
If you haven’t had the pleasure of using pytest yet, then you’re in for a treat! Its philosophy and features will make your testing experience more productive and enjoyable. With pytest, common tasks require less code and advanced tasks can be achieved through a variety of time-saving commands and plugins. It’ll even run your existing tests out of the box, including those written with unittest.
As with most frameworks, some development patterns that make sense when you first start using pytest can start causing pains as your test suite grows. This tutorial will help you understand some of the tools pytest provides to keep your testing efficient and effective even as it scales.
Less Boilerplate
Most functional tests follow the Arrange-Act-Assert model:
- Arrange, or set up, the conditions for the test
- Act by calling some function or method
- Assert that some end condition is true
Testing frameworks typically hook into your test’s assertions so that they can provide information when an assertion fails. unittest, for example, provides a number of helpful assertion utilities out of the box. However, even a small set of tests requires a fair amount of boilerplate code.
Imagine you’d like to write a test suite just to make sure that unittest is working properly in your project. You might want to write one test that always passes and one that always fails:
Python test_with_unittest.py

    from unittest import TestCase

    class TryTesting(TestCase):
        def test_always_passes(self):
            self.assertTrue(True)

        def test_always_fails(self):
            self.assertTrue(False)

You can then run those tests from the command line using the discover option of unittest:
Shell

    (venv) $ python -m unittest discover
    F.
    ======================================================================
    FAIL: test_always_fails (test_with_unittest.TryTesting)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "...\effective-python-testing-with-pytest\test_with_unittest.py", line 10, in test_always_fails
        self.assertTrue(False)
    AssertionError: False is not true

    ----------------------------------------------------------------------
    Ran 2 tests in 0.006s

    FAILED (failures=1)

As expected, one test passed and one failed. You've proven that unittest is working, but look at what you had to do:
Read the full article at https://realpython.com/pytest-python-testing/ »
Real Python: Python Timer Functions: Three Ways to Monitor Your Code
A timer is a powerful tool for monitoring the performance of your Python code. By using the time.perf_counter() function, you can measure execution time with exceptional precision, making it ideal for benchmarking. Using a timer involves recording timestamps before and after a specific code block and calculating the time difference to determine how long your code took to run.
In this tutorial, you’ll explore three different approaches to implementing timers: classes, decorators, and context managers. Each method offers unique advantages, and you’ll learn when and how to use them to achieve optimal results. Plus, you’ll have a fully functional Python timer that can be applied to any program to measure execution time efficiently.
By the end of this tutorial, you’ll understand that:
- time.perf_counter() is the best choice for accurate timing in Python due to its high resolution.
- You can create custom timer classes to encapsulate timing logic and reuse it across multiple parts of your program.
- Using decorators lets you seamlessly add timing functionality to existing functions without altering their code.
- You can leverage context managers to neatly measure execution time in specific code blocks, improving both resource management and code clarity.
Along the way, you’ll gain deeper insights into how classes, decorators, and context managers work in Python. As you explore real-world examples, you’ll discover how these concepts can not only help you measure code performance but also enhance your overall Python programming skills.
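As a preview of the decorator approach, a timing decorator can be sketched in a few lines. The names below (timed, sum_of_squares) are illustrative; the tutorial builds a more complete version:

```python
import functools
import time

def timed(func):
    """Print how long each call to func takes (a minimal sketch)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f} seconds")
        return result
    return wrapper

@timed
def sum_of_squares(n):
    return sum(i * i for i in range(n))
```

Because the decorator wraps the function transparently, sum_of_squares(1_000_000) behaves exactly as before, except that each call also reports its elapsed time.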
Decorators Q&A Transcript: Click here to get access to a 25-page chat log from our Python decorators Q&A session in the Real Python Community Slack where we discussed common decorator questions.
Python Timers
First, you'll take a look at some example code that you'll use throughout the tutorial. Later, you'll add a Python timer to this code to monitor its performance. You'll also learn some of the simplest ways to measure the running time of this example.
Python Timer Functions
If you check out the built-in time module in Python, then you'll notice several functions that can measure time, including monotonic(), perf_counter(), process_time(), and time().
Python 3.7 introduced several new functions, like thread_time(), as well as nanosecond versions of all the functions above, named with an _ns suffix. For example, perf_counter_ns() is the nanosecond version of perf_counter(). You’ll learn more about these functions later. For now, note what the documentation has to say about perf_counter():
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. (Source)
First, you’ll use perf_counter() to create a Python timer. Later, you’ll compare this with other Python timer functions and learn why perf_counter() is usually the best choice.
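The basic pattern is to record perf_counter() before and after the work and take the difference; a minimal sketch (the summed range is just stand-in work to time):

```python
import time

start = time.perf_counter()
total = sum(range(1_000_000))  # the code being timed
elapsed = time.perf_counter() - start
print(f"Summed in {elapsed:.4f} seconds")
```

Because perf_counter() is a high-resolution clock, subtracting two of its readings gives a reliable duration even for short-running code.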
Example: Download Tutorials
To better compare the different ways that you can add a Python timer to your code, you'll apply different Python timer functions to the same code example throughout this tutorial. If you already have code that you'd like to measure, then feel free to follow the examples with that instead.
The example that you’ll use in this tutorial is a short function that uses the realpython-reader package to download the latest tutorials available here on Real Python. To learn more about the Real Python Reader and how it works, check out How to Publish an Open-Source Python Package to PyPI. You can install realpython-reader on your system with pip:
Shell

    $ python -m pip install realpython-reader

Then, you can import the package as reader.
You’ll store the example in a file named latest_tutorial.py. The code consists of one function that downloads and prints the latest tutorial from Real Python:
Python latest_tutorial.py

    1 from reader import feed
    2
    3 def main():
    4     """Download and print the latest tutorial from Real Python"""
    5     tutorial = feed.get_article(0)
    6     print(tutorial)
    7
    8 if __name__ == "__main__":
    9     main()

realpython-reader handles most of the hard work:
- Line 1 imports feed from realpython-reader. This module contains functionality for downloading tutorials from the Real Python feed.
- Line 5 downloads the latest tutorial from Real Python. The number 0 is an offset, where 0 means the most recent tutorial, 1 is the previous tutorial, and so on.
- Line 7 prints the tutorial to the console.
- Line 9 calls main() when you run the script.
When you run this example, your output will typically look something like this:
Shell

    $ python latest_tutorial.py
    # Python Timer Functions: Three Ways to Monitor Your Code

    A timer is a powerful tool for monitoring the performance of your
    Python code. By using the `time.perf_counter()` function, you can
    measure execution time with exceptional precision, making it ideal
    for benchmarking. Using a timer involves recording timestamps before
    and after a specific code block and calculating the time difference
    to determine how long your code took to run.

    [ ... ]

    ## Read the full article at https://realpython.com/python-timer/ »

Read the full article at https://realpython.com/python-timer/ »