FLOSS Project Planets

Promet Source: A Deep Dive on Lenovo's Multilingual Drupal Site

Planet Drupal - Wed, 2024-01-03 19:44
Takeaway: Drupal’s comprehensive approach to multilingual features has made it a go-to solution for inclusive, global digital platforms. Both Marco Angles and I have worked on the Lenovo site, with me focusing on content management and Marco on development. Collaborating on multilingual projects, we have both witnessed how seamlessly Drupal's capabilities integrate to manage diverse languages.
Categories: FLOSS Project Planets

Matt Layman: Legal and Stripe - Building SaaS with Python and Django #179

Planet Python - Wed, 2024-01-03 19:00
In this episode, we took care of the legal obligations of the site by setting up Terms of Service and a Privacy Policy page. Then we moved on to the next portion of signup, which is to configure Stripe to create customers and prepare, ultimately, to accept subscription payments.
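
For readers new to Stripe's Python library, creating a customer looks roughly like the sketch below. This is a minimal illustration with a placeholder API key, not code from the episode:

import stripe

# Placeholder test key; substitute your own secret key.
stripe.api_key = "sk_test_your_key_here"

# Create a customer record that a subscription can later be attached to.
customer = stripe.Customer.create(email="user@example.com")
print(customer.id)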
Categories: FLOSS Project Planets

Michael J. Ross: Web Is Still in Beta

Planet Drupal - Wed, 2024-01-03 19:00
The Web Is Still in Beta, by Michael J. Ross, 2024-01-04

Back in the early 1990s, when the World Wide Web was being discovered by computer users worldwide — at least those with Internet connections — new websites were being crafted and made public at a rapid pace that accelerated as a growing number of creative or just plain curious people taught themselves how to format text and images using simple HTML. Only later did Cascading Style Sheets (CSS) allow for a much cleaner separation between content and its layout and other visual styling.

At that time, most websites — including those of major corporations — suffered from a clunky appearance that, by today's standards, would be judged as rather primitive or at least unpolished. This was much more pronounced in sites created by overenthusiastic amateurs who couldn't resist spicing up their web pages with jarringly bright colors, annoying auto-playing music tracks, and an assortment of groan-inducing images, such as animated mailboxes, spinning envelopes, or any of the other aesthetic sins characteristic of the personal web pages that composed GeoCities. Even the most staid websites would use various "under construction" images to indicate that a particular page or entire section of the site was still under development.

While few Internet users today would lament the passing of the more garish GIFs and other appalling web page decorations, it is notable that we almost never see the relatively conservative digital construction signs anymore, or even text notifications that a page is unfinished. And what about the web applications, such as Google Maps, that would remain for years in a state of "beta" — which presumably means the app is unfinished and has not reached the stage of an initial release, version 1.0 — and yet were being used by millions of people? Nowadays, simple sites and rudimentary web apps are published with no mention of being in beta or under construction. Why is that?

Is it because all websites are now operationally and aesthetically flawless and all web apps are performing wonderfully, with no need for future planned updates? Clearly not. Instead, it is probably due to a combination of factors, including the following:

  • The state of web flux is now a given. Most if not all of us, especially web designers and developers, learned long ago that the sites and apps that we create will be called upon to meet ever-changing needs, whether necessitated by paying customers, demanding project managers, or just our own evolving sense of what we want the software to do and how it can look even better than before. The functionality and thus complexity of our present-day sites and apps are multiples of what was deemed acceptable three decades earlier — to say nothing of the ever-increasing security vulnerabilities and needed countermeasures. Any expectations of reaching a final state of perfection are simply unrealistic.
  • These days it is easier than ever to build a new website or web app, using a wide range of tools, including tried-and-tested web frameworks, content management systems (such as WordPress and Drupal), and third-party services to do much of the heavy lifting. Through the use of prepackaged themes, products built with a modest or even no budget can be quickly given an attractive look and feel.

The Web is unfinished, and that's a good thing.

Copyright © 2024 Michael J. Ross. All rights reserved.
Categories: FLOSS Project Planets

Michael Ablassmeier: Migrating a system to Hetzner cloud using REAR and kexec

Planet Debian - Wed, 2024-01-03 19:00

I needed to migrate an existing system to a Hetzner cloud VPS. While it is possible to attach KVM consoles and custom ISO images to dedicated servers, I didn’t find any way to do so with regular cloud instances.

For system migrations I usually use REAR, which has never failed me (and has also saved my ass during recovery multiple times). It’s an awesome utility!

It’s possible to do this using the Hetzner recovery console too, but using REAR is very convenient here, because it handles things like re-creating the partition layout and network settings automatically!

The steps are:

  • Create a bootable REAR rescue image on the source system.
  • Register a target system in Hetzner Cloud with at least the same disk size as the source system.
  • Boot the REAR image’s initrd and kernel from the running VPS system using kexec.
  • Make the REAR recovery console accessible via SSH (or use Hetzner's console).
  • Let REAR do its magic and re-partition the system.
  • Restore the system data to the freshly partitioned disks.
  • Let REAR handle the bootloader and network re-configuration.
Example

To create a rescue image on the source system:

apt install rear
echo OUTPUT=ISO > /etc/rear/local.conf
rear mkrescue -v
[..]
Wrote ISO image: /var/lib/rear/output/rear-debian12.iso (185M)

My source system had a 128 GB disk, so I registered an instance on Hetzner cloud with a greater disk size to make things easier.

Now copy the ISO image to the newly created instance and extract its data:

scp rear-debian12.iso root@49.13.193.226:/tmp/
modprobe loop
mount -o loop rear-debian12.iso /mnt/
cp /mnt/isolinux/kernel /tmp/
cp /mnt/isolinux/initrd.cgz /tmp/

Install kexec if not installed already:

apt install kexec-tools

Note down the current gateway configuration; this is required later on to make the REAR recovery console reachable via SSH:

root@testme:~# ip route
default via 172.31.1.1 dev eth0
172.31.1.1 dev eth0 scope link

Reboot the running VPS instance into the REAR recovery image using roughly the same kernel cmdline:

root@testme:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.1.0-13-amd64 root=UUID=5174a81e-5897-47ca-8fe4-9cd19dc678c4 ro consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0

kexec --initrd /tmp/initrd.cgz --command-line="consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0" /tmp/kernel
Connection to 49.13.193.226 closed by remote host.
Connection to 49.13.193.226 closed

Now watch the system on the console booting into the REAR system.

Log in to the recovery console (root without password) and fix its default route to make it reachable:

ip addr
[..]
2: enp1s0 ..

$ ip route add 172.31.1.1 dev enp1s0
$ ip route add default via 172.31.1.1

ping 49.13.193.226
64 bytes from 49.13.193.226: icmp_seq=83 ttl=52 time=27.7 ms

The network configuration might differ; the source system in this example used DHCP, as does the target. If REAR detects a changed static network configuration, it guides you through the setup pretty nicely.

Log in via SSH (REAR will store your SSH public keys in the image) and start the recovery process, following the steps as suggested by REAR:

ssh -l root 49.13.193.226

Welcome to Relax-and-Recover. Run "rear recover" to restore your system !

RESCUE debian12:~ # rear recover
Relax-and-Recover 2.7 / Git
Running rear recover (PID 673 date 2024-01-04 19:20:22)
Using log file: /var/log/rear/rear-debian12.log
Running workflow recover within the ReaR rescue/recovery system
Will do driver migration (recreating initramfs/initrd)
Comparing disks
Device vda does not exist (manual configuration needed)
Switching to manual disk layout configuration (GiB sizes rounded down to integer)
/dev/vda had size 137438953472 (128 GiB) but it does no longer exist
/dev/sda was not used on the original system and has now 163842097152 (152 GiB)
Original disk /dev/vda does not exist (with same size) in the target system
Using /dev/sda (the only available of the disks) for recreating /dev/vda
Current disk mapping table (source => target):
  /dev/vda => /dev/sda
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
[..]
User confirmed recreated disk layout
[..]

This step re-creates your original disk layout and mounts it to /mnt/local/ (this example uses a pretty lame layout, but usually REAR will handle things like LVM/Btrfs just nicely):

mount
/dev/sda3 on /mnt/local type ext4 (rw,relatime,errors=remount-ro)
/dev/sda1 on /mnt/local/boot type ext4 (rw,relatime)

Now clone your source system's data to /mnt/local/ with whatever utility you like to use and exit the recovery step. After confirming everything went well, REAR will set up the bootloader (and all other config details like fstab entries and adjusted network configuration) for you as required:

rear> exit
Did you restore the backup to /mnt/local ? Are you ready to continue recovery ? yes
User confirmed restored files
Updated initramfs with new drivers for this system.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader...
Determining where to install GRUB2 (no GRUB2_INSTALL_DEVICES specified)
Found possible boot disk /dev/sda - installing GRUB2 there
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 7103) and its descendant processes ...
Running exit tasks

Now reboot the recovery console and watch it boot into your target system's configuration.

Being able to use this procedure for complete disaster recovery within Hetzner cloud VPS (using off-site backups) gives me a better feeling, too.

Categories: FLOSS Project Planets

John Goerzen: Live Migrating from Raspberry Pi OS bullseye to Debian bookworm

Planet Debian - Wed, 2024-01-03 18:33

I’ve been getting annoyed with Raspberry Pi OS (Raspbian) for years now. It’s a fork of Debian, but manages to omit some of the most useful things. So I’ve decided to migrate all of my Pis to run pure Debian. These are my reasons:

  1. Raspberry Pi OS has, for years now, specified that there is no upgrade path. That is, to get to a newer major release, it’s a reinstall. While I have sometimes worked around this, for a device that is frequently installed in hard-to-reach locations, this is even more important than usual. It’s common for me to upgrade machines for a decade or more across Debian releases, and there’s no reason it should be so much more difficult with Raspbian.
  2. As I noted in Consider Security First, the security situation for Raspberry Pi OS isn’t as good as it is with Debian.
  3. Raspbian lags behind Debian – often by six months or more for major releases, and days or weeks for bug fixes and security patches.
  4. Raspbian has no direct backports support, though Raspberry Pi 3 and above can use Debian’s backports (per my instructions in Installing Debian Backports on Raspberry Pi).
  5. Raspbian uses a custom kernel without initramfs support.

It turns out it is actually possible to do an in-place migration from Raspberry Pi OS bullseye to Debian bookworm. Here I will describe how. Even if you don’t have a Raspberry Pi, this might still be instructive on how Raspbian and Debian packages work.

WARNINGS

Before continuing, back up your system. This process isn’t for the neophyte and it is entirely possible to mess up your boot device to the point that you have to do a fresh install to get your Pi to boot. This isn’t a supported process at all.

Architecture Confusion

Debian has three ARM-based architectures:

  • armel, for the lowest-end 32-bit ARM devices without hardware floating point support
  • armhf, for the higher-end 32-bit ARM devices with hardware float (hence “hf”)
  • arm64, for 64-bit ARM devices (which all have hardware float)

Although the Raspberry Pi 0 and 1 do support hardware float, they lack support for other CPU features that Debian’s armhf architecture assumes. Therefore, the Raspberry Pi 0 and 1 could only run Debian’s armel architecture.

Raspberry Pi 3 and above are capable of running 64-bit, and can run both armhf and arm64.

Prior to the release of the Raspberry Pi 5 / Raspbian bookworm, Raspbian only shipped the armhf architecture. Well, it was an architecture they called armhf, but it was different from Debian’s armhf in that everything was recompiled to work with the more limited set of features on the earlier Raspberry Pi boards. It was really somewhere between Debian’s armel and armhf archs. You could run Debian armel on those, but it would run more slowly, due to doing floating point calculations without hardware support. Debian’s raspi FAQ goes into this a bit.

What I am going to describe here is going from Raspbian armhf to Debian armhf with a 64-bit kernel. Therefore, it will only work with Raspberry Pi 3 and above. It may theoretically be possible to take a Raspberry Pi 2 to Debian armhf with a 32-bit kernel, but I haven’t tried this and it may be more difficult. I have seen conflicting information on whether armhf really works on a Pi 2. (If you do try it on a Pi 2, ignore everything about arm64 and 64-bit kernels below, and just go with the linux-image-armmp-lpae kernel per the ARMMP page)

There is another wrinkle: Debian doesn’t support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration as the arm64 – like x86_64 – is multiarch (that is, the CPU can natively execute both the 32-bit and 64-bit instructions).

(It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)

Prerequisites and Limitations

In addition to the need for a Raspberry Pi 3 or above in order for this to work, there are a few other things to mention.

If you are using the GPIO features of the Pi, I don’t know if those work with Debian.

I think Raspberry Pi OS modified the desktop environment more than other components. All of my Pis are headless, so I don’t know if this process will work if you use a desktop environment.

I am assuming you are booting from a MicroSD card as is typical in the Raspberry Pi world. The Pi’s firmware looks for a FAT partition (MBR type 0x0c) and looks within it for boot information. Depending on how long ago you first installed an OS on your Pi, your /boot may be too small for Debian. Use df -h /boot to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I’ve done this, but it’s outside the scope of this article.)

You need to have stable power. Once you begin this process, your Pi will mostly be left in a non-bootable state until you finish. (You… did make a backup, right?)

Basic idea

The basic idea here is that since bookworm has almost entirely newer packages than bullseye, we can “just” switch over to it and let the Debian packages replace the Raspbian ones as they are upgraded. Well, it’s not quite that easy, but that’s the main idea.

Preparation

First, make a backup. Even an image of your MicroSD card might be nice. OK, I think I’ve said that enough now.

It would be a good idea to have an HDMI cable (with the appropriate size of connector for your particular Pi board) and an HDMI display handy so you can troubleshoot any bootup issues with a console.

Preparation: access

The Raspberry Pi OS by default sets up a user named pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven’t changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian’s sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:

pi ALL=(ALL) NOPASSWD: ALL

Also, there may be no password set for the root account. It would be a good idea to set one; it makes it easier to log in at the console. Use the passwd command as root to do so.

Preparation: bluetooth

Debian doesn’t correctly identify the Bluetooth hardware address. You can save it off to a file by running hcitool dev > /root/bluetooth-from-raspbian.txt. I don’t use Bluetooth, but this should let you develop a script to bring it up properly.

Preparation: Debian archive keyring

You will next need to install Debian’s archive keyring so that apt can authenticate packages from Debian. Go to the bookworm download page for debian-archive-keyring and copy the URL for one of the files, then download it on the pi. For instance:

wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb

Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site.

Now, you’ll install it with:

dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb

Package first steps

From here on, we are making modifications to the system that can leave it in a non-bootable state.

Examine /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:

deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free

(you might leave off contrib and non-free depending on your needs)

Now, we’re going to tell it that we’ll support arm64 packages:

dpkg --add-architecture arm64

And finally, download the bookworm package lists:

apt-get update

If there are any errors from that command, fix them and don’t proceed until you have a clean run of apt-get update.

Moving /boot to /boot/firmware

The boot FAT partition I mentioned above is mounted at /boot by Raspberry Pi OS, but Debian’s scripts assume it will be at /boot/firmware. We need to fix this. First:

umount /boot
mkdir /boot/firmware

Now, edit fstab and change the reference to /boot to be to /boot/firmware. Now:

mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..

This mounts the filesystem at the new location, and moves all its contents back to where apt believes it should be. Debian’s packages will populate /boot/firmware later.

Installing the first packages

Now we start by installing the first of the needed packages. Eventually we will wind up with roughly the same set Debian uses.

apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware

If you get errors relating to firmware-brcm80211 from any commands, run that install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that’s one of them.

Configuring the bootloader

We need to configure a few things in /etc/default/raspi-firmware before proceeding. Edit that file.

First, uncomment (or add) a line like this:

KERNEL_ARCH="arm64"

Next, in /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:

root=PARTUUID=...

Save off the bit starting with PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:

ROOTPART=PARTUUID=abcdef00

(substituting your real value for abcdef00).

This is necessary because the microSD card device name often changes from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian’s kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian’s kernel. The PARTUUID approach lets it work regardless of the device name.

Purging the Raspbian kernel

Run:

dpkg --purge raspberrypi-kernel

Upgrading the system

At this point, we are going to run the procedure beginning at section 4.4.3 of the Debian release notes. Generally, you will do:

apt-get -u upgrade
apt full-upgrade

Fix any errors at each step before proceeding to the next. Now, to remove some cruft, run:

apt-get --purge autoremove

Inspect the list to make sure nothing important is going to be removed.

Removing Raspbian cruft

You can list some of the cruft with:

apt list '~o'

And remove it with:

apt purge '~o'

I also don’t run Bluetooth, and it seemed to sometimes hang on boot because I didn’t bother to fix it, so I did:

apt-get --purge remove bluez

Installing some packages

This makes sure some basic Debian infrastructure is available:

apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove

Installing firmware

Now run:

apt-get install firmware-linux

Resolving firmware package version issues

If it gives an error about the installed version of a package, you may need to force it to the bookworm version. For me, this often happened with firmware-atheros, firmware-libertas, and firmware-realtek.

Here’s how to resolve it, with firmware-realtek as an example:

  1. Go to https://packages.debian.org/PACKAGENAME – for instance, https://packages.debian.org/firmware-realtek. Note the version number in bookworm – in this case, 20230210-5.

  2. Now, you will force the installation of that package at that version:

    apt-get install firmware-realtek=20230210-5
  3. Repeat with every conflicting package until done.

  4. Rerun apt-get install firmware-linux and make sure it runs cleanly.

Also, in the end you should be able to:

apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux

Dealing with other Raspbian packages

The Debian release notes discuss removing non-Debian packages. There will still be a few of those. Run:

apt list '?narrow(?installed, ?not(?origin(Debian)))'

Deal with them; mostly you will need to force the installation of a bookworm version using the procedure in the section Resolving firmware package version issues above (even if it’s not for a firmware package). For non-firmware packages, you might possibly want to add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away.

If you aren’t going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don’t fix it up as described above.

Set up networking

We’ll be switching to the Debian method of networking, so we’ll create some files in /etc/network/interfaces.d. First, eth0 should look like this:

allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto

And wlan0 should look like this:

allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Raspbian is inconsistent about using eth0/wlan0 or renamed interfaces. Run ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to one named after the enx interface, or the wlan0 file to one named after the wlp interface, and edit the internal references to eth0/wlan0 in this new file to use the long interface name.

If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:

network={
    ssid="NetworkName"
    psk="passwordHere"
}

(This is where Raspberry Pi OS put them).

Deal with DHCP

Raspberry Pi OS used dhcpcd, whereas bookworm normally uses isc-dhcp-client. Verify the system is in the correct state:

apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus

Set up LEDs

To set up the LEDs to trigger on MicroSD activity as they did with Raspbian, follow the Debian instructions. Run apt-get install sysfsutils. Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:

class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1

Prepare for boot

To make sure all the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.

The moment arrives

Cross your fingers and try rebooting into your Debian system:

reboot

For some reason, I found that the first boot into Debian seems to hang for 30-60 seconds during bootstrap. I’m not sure why; don’t panic if that happens. It may be necessary to power cycle the Pi for this boot.

Troubleshooting

If things don’t work out, hook up the Pi to an HDMI display and see what’s up. If I had anticipated a particular problem, I would have documented it here (a lot of the things I documented here are because I ran into them!), so I can’t give specific advice other than to watch boot messages on the console. If you don’t even get kernel messages, then there is some problem with your partition table or the /boot/firmware FAT partition. Otherwise, you’ve at least got the kernel going and can troubleshoot like usual from there.

Categories: FLOSS Project Planets

The Drop Times: A Reddit Discussion on Changes In and Beyond Drupal

Planet Drupal - Wed, 2024-01-03 16:39
Explore the nuanced discussions within the Drupal community as users and developers grapple with the CMS's complexities, AI integration, and its future amidst the rise of alternatives. Insights from seasoned professionals shed light on Drupal's strengths, challenges, and role in a rapidly evolving digital landscape.
Categories: FLOSS Project Planets

FSF Blogs: FSD meeting recap 2023-12-29

GNU Planet! - Wed, 2024-01-03 15:58
Check out the important work our volunteers accomplished at today's Free Software Directory (FSD) IRC meeting.
Categories: FLOSS Project Planets

kevinquillen.com: Update on List field data integrity issues in Drupal 10.2

Planet Drupal - Wed, 2024-01-03 15:44
Last week I wrote up a walkthrough in dealing with a change introduced for List field validation in Drupal 10.2 using a stored procedure to rewrite existing data. After some discussion, this change has been reverted in an upcoming patch release for Drupal 10.2: Regression from #2521800: using machine name element for ListStringItem breaks with existing data
Categories: FLOSS Project Planets

Ben Cook: Saving Utility Companies Years with Computer Vision

Planet Python - Wed, 2024-01-03 13:15

How do utility companies monitor thousands of miles of electrical wire to find small imperfections that threaten the entire system? For the entire history of electrical infrastructure, the only answer has been ‘very slowly.’

Now, Sparrow’s computer vision capabilities, combined with Fast Forward’s thermal imaging system, can accomplish what used to take over a decade in less than a month. Here’s how they do it.

How it Started

Dusty Birge, CEO of Fast Forward, began inspecting power lines using a drone and crowd-sourced labor. He knew that automation would be needed to take the next step forward, but he found people in the artificial intelligence space to be overconfident and unreliable. That is, until he was introduced to Ben Cook, the founder of Sparrow Computing.

Within a month, Ben provided a computer vision model that could accurately identify utility poles, and that has been continually improving ever since.

The Project

Fast Forward rigged a system of 5 thermal cameras to a car and set them to take photos every 15 feet. Sparrow’s computer vision model then ran through the nearly 2 million photos taken and extracted about 50,000 relevant images, detecting any abnormalities in the utility lines.

“Right out of the gate, we presented data to the utilities and just blew them away,” Dusty said in an interview. “The first test runs proved a scalable model that automatically monitored a company’s entire system in under a month, where traditional methods would take over a decade. These test runs successfully spotted anomalies that would have caused blackouts, but were instead able to be fixed promptly.”

The numbers speak plainly for themselves, not just in time but in cost as well. There was one hotspot that Sparrow identified, but that the utility company did not address in time. The power line failed as a result, costing the company around $800. The inspection itself cost only $4. The potential return of this technology is astronomical, and it is already being put to good use.

Going Forward

Fast Forward is already looking ahead to the next phase of this project, translating the same process to daytime cameras. Having had such success in the initial tests, Dusty is pleased to continue working with Ben and Sparrow Computing.

Ben is excited to find more real world problems that can be solved as cameras become cheaper and more ubiquitous. He wants to create more computer vision models that interact with the physical environment and revolutionize companies throughout the Midwest and the world.

To get a closer look at Sparrow Computing, reach Ben at ben@sparrow.dev.

The post Saving Utility Companies Years with Computer Vision appeared first on Sparrow Computing.

Categories: FLOSS Project Planets

Mirek Długosz: Playwright - accessing page object in event handler

Planet Python - Wed, 2024-01-03 12:37

Playwright exposes a number of browser events and provides a mechanism to respond to them. Since many of these events signal errors and problems, most of the time you want to log them, halt program execution, or ignore and move on. Logging is also shown in Playwright documentation about network, which I will use as a base for examples in this article.

Problem statement

Documentation shows event handlers created with lambda expressions, but lambda poses significant problems once you leave the territory of toy examples:

  • they should fit in a single line of code
  • you can’t share them across modules
  • you can’t unit test them in isolation

Usually you want to define event handlers as normal functions. But when you attempt that, you might run into another problem: Playwright invokes the event handler with some event-related data, that data does not contain any reference back to the page object, and the page object might contain some important contextual information.

In other words, we would like to do something similar to code below. Note that this example does not work - if you run it, you will get NameError: name 'page' is not defined.

from playwright.sync_api import sync_playwright
from playwright.sync_api import Playwright

def request_handler(request):
    print(f"{page.url} issued request: {request.method} {request.url}")

def response_handler(response):
    print(f"{page.url} received response: {response.status} {response.url}")

def run_test(playwright: Playwright):
    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.goto("https://mirekdlugosz.com")
    page.on("request", request_handler)
    page.on("response", response_handler)
    page.goto("https://httpbin.org/status/404")
    browser.close()

with sync_playwright() as playwright:
    run_test(playwright)

I can think of three ways of solving that: by defining a function inside a function, with functools.partial, and with a factory function. Let’s take a look at all of them.

Defining a function inside a function

Most Python users are so used to defining functions at the top level of a module or inside a class (we call these “methods”) that they might consider function definitions to be somewhat special. In fact, some other programming languages do restrict where functions can be defined. But in Python you can define them anywhere, including inside other functions.

from playwright.sync_api import sync_playwright
from playwright.sync_api import Playwright

def run_test(playwright: Playwright):
    def request_handler(request):
        print(f"{page.url} issued request: {request.method} {request.url}")

    def response_handler(response):
        print(f"{page.url} received response: {response.status} {response.url}")

    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.goto("https://mirekdlugosz.com")
    page.on("request", request_handler)
    page.on("response", response_handler)
    page.goto("https://httpbin.org/status/404")
    browser.close()

with sync_playwright() as playwright:
    run_test(playwright)

This works because a function body is not evaluated until the function is called, and functions have access to names defined in their enclosing scope. So Python will look up page only when the event handler is invoked by Playwright; since it’s not defined in the function itself, Python will look for it in the function where the event handler was defined (and then the next enclosing function, if there is one, then the module, and eventually builtins).

I think this solution solves the most important part of the problem - it allows you to write event handlers that span multiple lines. Technically it is also possible to share these handlers across modules, but you won’t see that often. They can’t be unit tested in isolation, as they depend on their parent function.

functools.partial

functools.partial documentation may be confusing, as the prose sounds exactly like a description of a standard function, the code equivalent assumes a pretty good understanding of Python internals, and the provided example seems completely unnecessary.

I think about partial this way: it creates a function that has some of the arguments already filled in.
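
A minimal sketch outside of Playwright may help (the power and square names here are purely illustrative):

from functools import partial

def power(base, exponent):
    return base ** exponent

# square() is power() with the exponent argument already filled in
square = partial(power, exponent=2)
print(square(5))  # prints 25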

To be fair, partial is rarely needed. It allows you to write shorter code, as you don’t have to repeat the same arguments over and over again. It may also allow you to provide a saner library API - you can define a single generic and flexible function with a lot of arguments, and a few helper functions intended for external use, each with a small number of arguments.

But it’s invaluable when you have to provide your own function but don’t have control over the arguments it will receive - which is exactly the problem we are facing.

from functools import partial
from playwright.sync_api import sync_playwright
from playwright.sync_api import Playwright

def request_handler(request, page=None):
    print(f"{page.url} issued request: {request.method} {request.url}")

def response_handler(response, page=None):
    print(f"{page.url} received response: {response.status} {response.url}")

def run_test(playwright: Playwright):
    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.goto("https://mirekdlugosz.com")
    local_request_handler = partial(request_handler, page=page)
    local_response_handler = partial(response_handler, page=page)
    page.on("request", local_request_handler)
    page.on("response", local_response_handler)
    page.goto("https://httpbin.org/status/404")
    browser.close()

with sync_playwright() as playwright:
    run_test(playwright)

Notice that our function takes the same arguments as a Playwright event handler, and then some. When it’s time to assign event handlers, we use partial to create a new function, one that only needs the argument we will receive from Playwright - the other one is already filled in. But when the function is executed, it will receive both arguments.

Factory function

Functions in Python may not only define other functions in their bodies, but also return functions. These are called “higher-order functions” and aren’t used often, with the one notable exception of decorators.

from playwright.sync_api import sync_playwright
from playwright.sync_api import Playwright

def request_handler_factory(page):
    def inner(request):
        print(f"{page.url} issued request: {request.method} {request.url}")
    return inner

def response_handler_factory(page):
    def inner(response):
        print(f"{page.url} received response: {response.status} {response.url}")
    return inner

def run_test(playwright: Playwright):
    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.goto("https://mirekdlugosz.com")
    page.on("request", request_handler_factory(page))
    page.on("response", response_handler_factory(page))
    page.goto("https://httpbin.org/status/404")
    browser.close()

with sync_playwright() as playwright:
    run_test(playwright)

The key here is that the inner function has access to all of the enclosing scope, including values passed as arguments to the outer function. This allows us to pass specific values that are only available in the place where the outer function is called.

Summary

The first solution is a little different from the other two, because it does not solve all of the problems set forth. On the other hand, I think it’s the easiest to understand - even beginner Python programmers should intuitively grasp what is happening and why.

In my experience, higher-order functions take some getting used to, while partial is not well-known and may be confusing at first. But they do solve our problem completely.

Categories: FLOSS Project Planets

Programiz: Python Dictionary

Planet Python - Wed, 2024-01-03 11:22
In this tutorial, you will learn about Python dictionaries - how they are created, how to access, add, and remove elements from them, and the various built-in methods associated with dictionaries.
Categories: FLOSS Project Planets

Programiz: Python for loop

Planet Python - Wed, 2024-01-03 10:58
In this article, we'll learn how to use a for loop in Python with the help of examples.
Categories: FLOSS Project Planets

Real Python: Python's Magic Methods: Leverage Their Power in Your Classes

Planet Python - Wed, 2024-01-03 09:00

As a Python developer who wants to harness the power of object-oriented programming, you’ll love to learn how to customize your classes using special methods, also known as magic methods or dunder methods. A special method is a method whose name starts and ends with a double underscore. These methods have special meanings for Python.

Python automatically calls magic methods as a response to certain operations, such as instantiation, sequence indexing, attribute managing, and much more. Magic methods support core object-oriented features in Python, so learning about them is fundamental for you as a Python programmer.

In this tutorial, you’ll:

  • Learn what Python’s special or magic methods are
  • Understand the magic behind magic methods in Python
  • Customize different behaviors of your custom classes with special methods

To get the most out of this tutorial, you should be familiar with general Python programming. More importantly, you should know the basics of object-oriented programming and classes in Python.

Get Your Code: Click here to download the free sample code that shows you how to use Python’s magic methods in your classes.

Getting to Know Python’s Magic or Special Methods

In Python, special methods are also called magic methods, or dunder methods. This latter terminology, dunder, refers to a particular naming convention that Python uses to name its special methods and attributes. The convention is to use double leading and trailing underscores in the name at hand, so it looks like .__method__().

Note: In this tutorial, you’ll find the terms special methods, dunder methods, and magic methods used interchangeably.

The double underscores flag these methods as core to some Python features. They help avoid name collisions with your own methods and attributes. Some popular and well-known magic methods include the following:

Special Method                Description
.__init__()                   Provides an initializer in Python classes
.__str__() and .__repr__()    Provide string representations for objects
.__call__()                   Makes the instances of a class callable
.__len__()                    Supports the len() function

This is just a tiny sample of all the special methods that Python has. All these methods support specific features that are core to Python and its object-oriented infrastructure.

Note: For the complete list of magic methods, refer to the special method section on the data model page of Python’s official documentation.

The Python documentation organizes the methods into several distinct groups.

Take a look at the documentation for more details on how the methods work and how to use them according to your specific needs.

Here’s how the Python documentation defines the term special methods:

A method that is called implicitly by Python to execute a certain operation on a type, such as addition. Such methods have names starting and ending with double underscores. (Source)

There’s an important detail to highlight in this definition. Python implicitly calls special methods to execute certain operations in your code. For example, when you run the addition 5 + 2 in a REPL session, Python internally runs the following code under the hood:

>>> (5).__add__(2)
7

The .__add__() special method of integer numbers supports the addition that you typically run as 5 + 2.

Reading between the lines, you’ll realize that even though you can directly call special methods, they’re not intended for direct use. You shouldn’t call them directly in your code. Instead, you should rely on Python to call them automatically in response to a given operation.

Note: Even though special methods are also called magic methods, some people in the Python community may not like this latter terminology. The only magic around these methods is that Python calls them implicitly under the hood. So, the official documentation refers to them as special methods instead.

Magic methods exist for many purposes. All the available magic methods support built-in features and play specific roles in the language. For example, built-in types such as lists, strings, and dictionaries implement most of their core functionality using magic methods. In your custom classes, you can use magic methods to make callable objects, define how objects are compared, tweak how you create objects, and more.
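
For instance, here is a small sketch of a custom class hooking into built-in functions by defining the matching dunder methods (the Stack class is invented for this illustration, not taken from the article):

class Stack:
    def __init__(self, items=None):
        self._items = list(items or [])

    def push(self, item):
        self._items.append(item)

    def __len__(self):
        # Python calls this implicitly for len(stack)
        return len(self._items)

    def __str__(self):
        # Python calls this implicitly for print(stack) and str(stack)
        return f"Stack({self._items})"

stack = Stack([1, 2])
stack.push(3)
print(len(stack))  # 3
print(stack)       # Stack([1, 2, 3])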

Note that because magic methods have special meaning for Python itself, you should avoid naming custom methods using leading and trailing double underscores. Your custom method won’t trigger any Python action if its name doesn’t match any official special method names, but it’ll certainly confuse other programmers. New dunder names may also be introduced in future versions of Python.

Magic methods are core to Python’s data model and are a fundamental part of object-oriented programming in Python. In the following sections, you’ll learn about some of the most commonly used special methods. They’ll help you write better object-oriented code in your day-to-day programming adventure.

Controlling the Object Creation Process

When creating custom classes in Python, probably the first and most common method that you implement is .__init__(). This method works as an initializer because it allows you to provide initial values to any instance attributes that you define in your classes.
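
As a quick sketch (the Point class here is illustrative, not from the article):

class Point:
    def __init__(self, x, y):
        # Provide initial values for the instance attributes
        self.x = x
        self.y = y

point = Point(2, 3)
print(point.x, point.y)  # 2 3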

Read the full article at https://realpython.com/python-magic-methods/ »

Categories: FLOSS Project Planets

PyCharm: How To Learn Django: A Comprehensive Guide for Beginners

Planet Python - Wed, 2024-01-03 05:56
Learning Django can be an exciting journey for anyone looking to develop web applications, but it can be intimidating at first. In this article, we’ll provide you with a comprehensive guide on how to learn Django effectively. We’ll explore the prerequisites, the time it takes to become proficient, and various resources to help you master […]
Categories: FLOSS Project Planets

CTI Digital: Drupal Core Major Upgrades

Planet Drupal - Wed, 2024-01-03 05:41

Over the past 12 months, our teams have completed numerous Drupal upgrades. We would like to share our experiences and knowledge with anyone who has yet to undergo this process to help make it smoother for you.

Categories: FLOSS Project Planets

Brett Cannon: An experimental pip subcommand for the Python Launcher for Unix

Planet Python - Wed, 2024-01-03 00:49

There are a couple of things I always want to be true when I install Python packages for a project:

  1. I have a virtual environment
  2. Pip is up-to-date

For virtual environments, you would like them to be created as fast as possible and (usually) with the newest version of Python. For keeping pip up-to-date, it would be nice to not have to do that for every single virtual environment you have.

To help make all of this true for myself, I created an experimental Python Launcher for Unix "subcommand": py-pip. The CLI app does the following:

  1. Makes sure there is a globally cached copy of pip, and updates it if necessary
  2. Uses the Python Launcher for Unix to create a virtual environment where it finds a pyproject.toml file
  3. Runs pip using the virtual environment's interpreter

This is all done via a py-pip.pyz file (which you can rename to just py-pip if you want). The py-pip.pyz file available from a release of py-pip can be made executable (e.g. chmod a+x py-pip.pyz). The shebang of the file is already set to #!/usr/bin/env py so it's ready to use the newest version of Python you have installed. Stick that on your PATH and you can then use that instead of py -m pip to run pip itself.

To keep pip up-to-date, the easiest way is to have only a single copy of pip to worry about. Thanks to the pip team releasing a self-contained pip.pyz, along with pip always working with all supported Python versions, it means if we just cache a copy of pip.pyz and keep that up-to-date then we can have that one copy to worry about.
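
Because pip.pyz is a zipapp, any Python interpreter can run it directly. Here is a rough sketch of the idea, with an invented cache path rather than py-pip's actual implementation:

import subprocess
import sys

# One cached pip.pyz serves every environment; the path below is a
# hypothetical example, not where py-pip really stores its copy.
pip_pyz = "/home/user/.cache/pip.pyz"
subprocess.run([sys.executable, pip_pyz, "install", "requests"], check=True)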

Having a single copy of pip also means we don't need to install pip for each virtual environment. That lets us use microvenv and skip the overhead of installing pip in each virtual environment.

Now, this is an experiment. Much like the Python Launcher for Unix, py-pip is somewhat optimized for my own workflow. I am also keeping an eye on PEP 723 and PEP 735 as a way to only install packages that have been written down somewhere instead of ever installing a package à la carte, as I think that's a better practice to follow and might actually trump all of this. But since I have seen others frustrated both by forgetting the virtual environment and by having to keep pip up-to-date, I decided to open source the code.

Categories: FLOSS Project Planets

FSF Blogs: LibrePlanet 2024: May 4 and 5, Wentworth Institute of Technology, Boston, MA

GNU Planet! - Tue, 2024-01-02 20:00
The dates and location of LibrePlanet 2024 have been announced!
Categories: FLOSS Project Planets
