FLOSS Project Planets

The Drop Times: A Stitch in Time Saves Nine

Planet Drupal - Mon, 2023-03-13 02:57

Today, a Telugu-language movie won the Academy Award for Best Original Song at the Oscars. While accepting the award, music composer M. M. Keeravani mentioned that he grew up listening to the Carpenters. Although he meant Karen and Richard Carpenter, the American music sensation of the '70s, three major media houses in Malayalam, another south Indian language, translated it as woodworkers.

It should be a classic example of shoddy journalism, but such mistakes are not uncommon in vernacular media. The phrase 'prima facie' was once misconstrued as a lady's name. In one old story, one hundred eighty-six people sleeping in the railway station were reported washed away in a flash flood, when in reality it was the sleepers on which the rails were laid. The word 'magazines' got mistranslated as the literal monthly magazine in a story about the seizure of arms from the Sri Lankan Tamil militia; the editor saved face by catching it before printing. While reporting a death after a 'hot dog' eating competition, a newspaper thought the man had eaten raging canines. If this is how journalists write, a techie quipped, he would be in danger if he said Python was his bread and butter.

Now excuse me. It is the new normal. Our media houses have lost editorial prowess. Speed before accuracy is the new-age motto. In such a speed-crazy world, having your editorial arm halved would be a significant loss.

We at TDT have witnessed such a loss. As mentioned in the last newsletter, NERD Summit and DrupalCamp NJ will happen this week. As media partners for the two camps, we had many plans to execute, and a significant part of those plans revolved around a young journalist we had just hired, S. Jayesh.

S. Jayesh is a name heard in both Malayalam and Tamil literary circles. He is a poet and short story writer who has translated a few novels from Tamil to Malayalam. I knew him from his previous stints, where he was a workaholic and punctual, more productive than most, but would never do overtime as was the common practice in this part of the world. A polyglot with years of experience in online media, he was hired by us at the end of December.

On February 13, he fell on his back, injuring his head. He was rushed to the hospital, had to undergo two neurosurgeries for a blood clot in his head, and was in a coma for more than two weeks. Fortunately, he has regained consciousness but must remain in the hospital. As he lacks medical insurance, his mother has turned to charity to fund his hospitalization expenses. She is seeking around $18,300 (₹1,500,000). So far, she has collected only 32% of that amount. Even if he gets discharged, it will probably take months for him to rejoin work. So we urge the Drupal community to pitch in, even in small amounts, to help him in his hour of need.

The crowdfunding request is placed on Milaap.org, a fundraising platform for medical emergencies and social causes. The platform charges no intermediary fees, and every penny donated to Jayesh will go into his mother's account for the treatment of her son.

Coming back to the past week's stories. On March 08, Wednesday, we published an Interview with Rick Hood as a primer to the NERD Summit 2023. In this exciting interview, he not only discusses Drupal but also goes into his music production interests and his past boat business.

Evolving Web has announced a training on Drupal Site Building in April. On March 15, Acquia will host a webinar on Securing The Modern Digital Landscape, and on March 16, another webinar on CDP. Tomorrow, Design4Drupal Boston will host AmyJune Hineline for an accessibility webinar.

All but three sessions of DrupalCamp Florida are online on their YouTube channel. MidCamp 2023 has announced its sessions and speakers. DrupalCamp Finland started accepting papers. NERD Summit was still accepting training session submissions as a backup. They have also pushed out a call for volunteers. DrupalCamp Poland has put early bird tickets on sale. DrupalCon Pittsburgh is seeking sponsors to support Women In Tech. The last day to apply for a volunteering opportunity in DrupalCon Lille is tomorrow.

The Project Browser Initiative collects feedback via Google Forms about what information is most valuable to you when "browsing" for modules on drupal.org. In celebration of Women's History Month, the Drupal Association highlighted the work of Nichole Addeo, the Managing Director and Co-founder of Mythic Digital. ICFOSS and Zyxware Technologies joined hands to impart Drupal training for women as part of the 'Back to Work for Women' campaign.

On blogs and training materials, visit Kevin Funk's article in Acquia Developer Portal about utilizing developer workspaces with Acquia Code Studio. Alejandro Moreno Lopez, the Developer Advocate at Pantheon Platform, shared an educational video about the benefits of using Drupal for a Decoupled project.

That is all for the week. Thank you.

Sincerely,
Sebin A. Jacob
Editor-in-Chief

Categories: FLOSS Project Planets

Antoine Beaupré: Framework 12th gen laptop review

Planet Debian - Sun, 2023-03-12 22:01

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, scoring an "exceedingly rare" 10/10 score from ifixit.com.

There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets.

I received my Framework 12th generation "DIY" device in late September 2022 and will update this page as I go along in the process of ordering, burning-in, setting up and using the device over the years.

Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to survive me many years as well.

This is also one of the most powerful devices I have ever laid my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine.

On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and hacking around a bunch of things to get resolution and keybindings working right.

Like others, I have first found significant power management issues, but many issues can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery while those are plugged in.

Finally, the expansion ports are nice, but there's only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock.

Read on for the detailed review. For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well.

This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework. Jump around the table of contents as you see fit after you buy the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers

Those are things I wish I would have known before buying:

  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards, and you do want to fill those expansion slots, otherwise the empty slots snag on things and feel insecure

  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports

  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings

  4. in particular, beware that the HDMI, DisplayPort and particularly the SSD and MicroSD cards take a significant amount of power, even when sleeping, up to 2-6W for the latter two

  5. beware that the MicroSD card is what it says: Micro; normal SD cards won't fit, and while there might be a full-sized one eventually, it's currently only at the prototyping stage

  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you

Current status

I have the Framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in.

I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure.

Specifications

Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.

  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spludger
  • 1 year warranty
  • base price: 1000$CAD, but doesn't give you much, typical builds around 1500-2000$CAD
Actual build

This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration
  • CPU: Intel® Core™ i5-1240P, 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$
Customization
  • Keyboard: US English, included
Expansion Cards
  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage – 1TB $199
  • Sub-total: 384$
Accessories
  • Power Adapter - US/Canada $64.00
Total
  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping
Quick evaluation

This is basically the TL;DR: here, just focusing on broad pros/cons of the laptop.

Pros and cons
  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)

  • the 12th gen has compatibility issues with Debian, followup in the DebianOn page, but basically: brightness hotkeys, power management, wifi, the webcam is okay even though the chipset is the infamous alder lake because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working

  • 12th gen might have issues with thunderbolt docks

  • they used to have some difficulty keeping up with orders: the first two batches shipped, the third batch sold out, and the fourth batch should have shipped in October 2021. Update (August 2022): they rolled out a second line of laptops (12th gen); the first batch shipped, the second batch shipped late, and the September 2022 batch was generally on time (see this spreadsheet for a crowdsourced effort to track those). Supply chain issues seem to be under control as of early 2023: I got the Ethernet expansion card shipped within a week.

  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm) but the design of the Purism laptop (tapered edges) makes it feel thinner

  • no space for a 2.5" drive

  • rather bright LED around power button, but can be dimmed in the BIOS (not low enough to my taste) I got used to it

  • fan quiet when idle, but can be noisy when running, for example if you max a CPU for a while

  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)

  • no RJ-45 port, and attempts at designing one kept failing because the modular plugs are too thin to fit (according to Linux After Dark), so one seemed unlikely in the future. Update: they cracked that nut and ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob

  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)

  • 12th gen Intel has glitchy graphics, seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup

A breeze.

Accessing the board

The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them.

The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead, the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off to get access to the internals. Kind of scary.

I also actually unplugged a connector while lifting the assembly, because I lifted it towards the monitor, while you actually need to lift it to the right. Thankfully, the connector didn't break; it just snapped off and I could plug it back in, no harm done.

Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes

Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and CPU slots) did not actually work, so I wonder how useful those actually are.

After all, they need to point to something and that means a URL, a running website that will answer those requests forever. I bet those will break sooner than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT reform here which designed (with the 100 rabbits folks) an actual paper handbook (PDF).

The first QR code that's immediately visible from the back of the laptop, in an expansion cord slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404.

I was expecting that bar code to lead me to an introduction page, something like "how to set up your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:

https://guides.frame.work/Guide/Framework\+Laptop\+DIY\+Edition\+Quick\+Start\+Guide/57

... which Firefox immediately transforms in:

https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57

I'm puzzled as to why they would send the URL that way, the proper URL is of course:

https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57

(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory

The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual.

Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed.

Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review Keyboard and touch pad

The keyboard feels nice, for a laptop. I'm used to mechanical keyboard and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented.

At first, I felt the keyboard was more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a composition manager, everything feels snappy.

The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works.

Taps don't work, out of the box: that needs to be enabled in Xorg, with something like this:

cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
      Identifier "libinput touch pad catchall"
      MatchIsTouchpad "on"
      MatchDevicePath "/dev/input/event*"
      Driver "libinput"
      Option "Tapping" "on"
      Option "TappingButtonMap" "lmr"
EndSection
EOF

But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button

The power button is a little dangerous. It's quite easy to hit, as it's right next to one expansion card where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button.

So obviously, don't do that. But that's not very helpful.

An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:

[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff

You might have to create the directory first:

mkdir /etc/systemd/logind.conf.d/

Then restart logind:

systemctl restart systemd-logind

And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends...

Note that there's probably half a dozen other ways of doing this, see this, this, or that.
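To sanity-check that logind actually picked up the drop-in, you can dump the merged configuration (a sketch; `systemd-analyze cat-config` exists on reasonably recent systemd versions):

```shell
# Show logind.conf plus all drop-ins, merged in order;
# the HandlePowerKey lines from the drop-in should appear last:
systemd-analyze cat-config systemd/logind.conf
```
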

Special keybindings

There is a series of "hidden" (as in: not labeled on the key) keybindings related to the fn keybinding that I actually find quite useful.

Key   Equivalent   Effect                   Command
---   ----------   ----------------------   ---------------
p     Pause        lock screen              xset s activate
b     Break        ?                        ?
k     ScrLk        switch keyboard layout   N/A

It looks like those are defined in the microcontroller so it would be possible to add some. For example, the SysRq key is almost bound to fn s in there.

Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12 that only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player".

The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark.

A skinny elephant would be performed with alt PrtScr (above F11) KEY, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold the fn only if "function lock" is on, but that's actually the opposite of my experience.

Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all.

This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but so far no progress since July 2022.

In the meantime, the fancy functionality can be supposedly disabled with:

echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf

... and a reboot. This solution is also documented in the upstream guide.
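After the reboot, you can confirm the blacklist actually kept the module out of the kernel (a quick check, not from the upstream guide):

```shell
# Prints a matching line if the module is still loaded,
# otherwise confirms it's gone:
lsmod | grep hid_sensor_hub || echo "hid_sensor_hub not loaded"
```
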

Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.

Kill switches

The Framework has two "kill switches": one for the camera and the other for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted, it just stops feeding the sound.

Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor

The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright.

I have yet to use it in full sunlight.

The dimmed light is very dim, which I like.

Screen backlight

I bind brightness keys to xbacklight in i3, but out of the box I get this error:

sep 29 22:09:14 angela i3[5661]: No outputs have backlight property

It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:

Section "Device"
    Identifier  "Card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
EndSection

This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.
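For reference, the i3 bindings I mean look something like this (a sketch for `~/.config/i3/config`; the 10% step size is an arbitrary choice):

```shell
# i3 config fragment: route the brightness keys through xbacklight
bindsym XF86MonBrightnessUp   exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10
```
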

Multiple monitor support

I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great.

I actually run out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:

  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card

Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test).

But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports.

But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs.

My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:

autorandr -l framework-external-dual-lg-acer

The magic:

autorandr -l horizontal

... also works well.

The worst problem with those monitors right now is that they have a radically smaller resolution than the main screen on the laptop, which means I need to reset the font scaling to normal every time I switch back and forth between those monitors and the laptop, which means I actually need to do this:

autorandr -l horizontal && echo Xft.dpi: 96 | xrdb -merge && systemctl restart terminal xcolortaillog background-image emacs && i3-msg restart

Kind of disruptive.
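One way to make that less disruptive is to wrap the whole dance in a tiny script (a sketch; the profile names and DPI values here are from my setup and are assumptions, adjust to taste):

```shell
#!/bin/sh
# switch-display: apply an autorandr profile, then fix font scaling
# usage: switch-display [horizontal|framework]
profile="${1:-horizontal}"
case "$profile" in
    horizontal) dpi=96 ;;   # external monitors: normal scaling
    *)          dpi=192 ;;  # laptop panel: HiDPI scaling (assumption)
esac
autorandr -l "$profile"
printf 'Xft.dpi: %s\n' "$dpi" | xrdb -merge
i3-msg restart
```
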

Expansion ports

I ordered a total of 10 expansion ports.

I did manage to initialize the 1TB drive as an encrypted storage, mostly to keep photos as this is something that takes a massive amount of space (500GB and counting) and that I (unfortunately) don't work on very often (but still carry around).
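Initializing the expansion drive as encrypted storage is the usual LUKS dance (a sketch only; `/dev/sda` and the `photos` mapping name are placeholders, double-check the device with `lsblk` before formatting anything):

```shell
# WARNING: this destroys all data on the target device.
cryptsetup luksFormat /dev/sda      # create the LUKS container
cryptsetup open /dev/sda photos     # unlock as /dev/mapper/photos
mkfs.ext4 /dev/mapper/photos        # make a filesystem inside it
mount /dev/mapper/photos /mnt       # and mount it
```
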

The expansion ports are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to crimp your fingernails on there and pull hard to take them out. There's a little button next to them to release, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and it's one of those things you can do without looking eventually.

There's only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere to plug my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for this.

I can't wait to see more expansion ports produced. There's an Ethernet expansion card which went out of stock basically the day it was announced, but was eventually restocked.

I would like to see a proper SD-card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can have a MicroSD to SD card adapter, but I have never heard of the reverse). Someone actually found a SD card reader that fits and then someone else managed to cram it in a 3D printed case, which is kind of amazing.

Still, I really like that idea that I can carry all those little adapters in a pouch when I travel and can basically do anything I want. It does mean I need to shuffle through them to find the right one which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one. But that quickly gets undone and instead I have a pouch full of expansion cards.

Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't).

The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests for details.

USB-C charging

One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it in my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least.

(As I wrote this, I had the laptop plugged in the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.)

I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!)

This is awesome. And it works from any of the expansion ports, of course. There's a little led next to the expansion ports as well, which indicate the charge status:

  • red/amber: charging
  • white: charged
  • off: unplugged

I couldn't find documentation about this, but the forum answered.
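The same charge information is also visible from software through sysfs (a sketch; the battery name `BAT1` varies between machines, check `/sys/class/power_supply/` for yours):

```shell
# Charging state and current capacity, as the kernel sees them:
cat /sys/class/power_supply/BAT1/status    # "Charging", "Discharging" or "Full"
cat /sys/class/power_supply/BAT1/capacity  # remaining charge, in percent
```
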

This is something of a recurring theme with the Framework. While it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner manual" that shows you the different parts of the laptop and what they do. Again, something the MNT reform did well.

Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed.

Finally, I wondered what happens when you plug in multiple power sources and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources, but that seems like a bit much to ask.

Multimedia and other devices

Those things also work:

  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below

There's also a light sensor, but it conflicts with the keyboard brightness controls (see above).

There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests

The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere.

It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often pick up lots of noise from the board.

With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking.

On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50 dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40 dB range, close to the Framework.

Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises.

In other words, the hiss of the sound card built in the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop.
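Noise-floor numbers like the ones above can be reproduced with a short recording and sox's `stats` effect (a sketch; the recording device and duration are assumptions, and sox must be installed):

```shell
# Record 5 seconds of "silence" from the default capture device...
arecord -d 5 -f cd silence.wav
# ...then inspect the levels; an "RMS lev dB" around -60 is near-silence,
# around -30 is an audible hiss:
sox silence.wav -n stats
```
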

See also my audio page.

Performance tests Compiling Linux 5.19.11

On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:

5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k 10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps

This was using 16 watts of power, with full screen brightness.

With all 16 cores (make -j16), it takes less than 25 minutes:

19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k 8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps

I had to plug in the normal power supply after a few minutes because the battery would actually run out while using my desk's power grommet (34 watts).

During compilation, fans were spinning really hard, quite noisy, but not painfully so.
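The wattage figures in this section come from powerstat; the invocation looks something like this (a sketch; the exact flags and sample counts are assumptions, and RAPL readings usually require root):

```shell
# Sample power usage via Intel RAPL, once per second, 60 samples:
sudo powerstat -R 1 60
```
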

The laptop was sucking 55 watts of power, steadily:

Time     User  Nice   Sys  Idle    IO  Run  Ctxt/s   IRQ/s  Fork  Exec  Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------- ------- ----- ----- ----- ------
Average   87.9   0.0  10.7   1.4   0.1 17.8  6583.6  5054.3 233.0 223.9 233.1  55.96
GeoMean   87.9   0.0  10.6   1.2   0.0 17.6  6427.8  5048.1 227.6 218.7 227.7  55.96
StdDev     1.4   0.0   1.2   0.6   0.2  3.0  1436.8   255.5  50.0  47.5  49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------- ------- ----- ----- ----- ------
Minimum   85.0   0.0   7.8   0.5   0.0 13.0  3594.0  4638.0 117.0 111.0 120.0  55.52
Maximum   90.8   0.0  12.9   3.5   0.8 38.0 10174.0  5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------- ------- ----- ----- ----- ------
Summary:
CPU: 55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.

memtest86+

I ran Memtest86+ v6.00b3. It shows something like this:

Memtest86+ v6.00b3         | 12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz  78/78°C | Pass  2% #
L1 Cache:   48KB  414 GB/s | Test 46% ##################
L2 Cache: 1.25MB  118 GB/s | Test #3 [Moving inversions, 1s & 0s]
L3 Cache:   12MB   43 GB/s | Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  : 15.7GB 14.9 GB/s | Pattern:
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR)   | Time: 0:27:23  Status: Pass  \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51  | Pass: 1        Errors: 0
--------------------------------------------------------------------------------
Memory SPD Information
----------------------
 - Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)

                                Framework FRANMACP04
 <ESC> Exit  <F1> Configuration  <Space> Scroll Lock        6.00.unknown.x64

So about 30 minutes for a full 16GB memory test.

Software setup

Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work.

I don't understand why I am so naïve sometimes. It's mind-boggling.

Obviously, it didn't happen that way at all, and I spent the better part of the three following days tinkering with the laptop.

Secure boot and EFI

First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS.

At first, I thought it was a problem with my NVMe drive, because it's not listed in the compatible SSD drives from upstream. But I figured out how to enter BIOS (press F2 manically, of course), which showed the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI.

So from there, I disabled secure boot, and booted a grml image to try to recover. And by "boot" I mean, I managed to get to the grml boot loader which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:

Unable to find medium containing a live file system

This, it turns out, was recently fixed in Debian, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix.

I did manage to boot the development version of the Debian installer, which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition.

At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted in the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations

Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion.

I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive.

Oops.

It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how to recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.

Bootstrapping Puppet

Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:

/etc/ssh
/var/lib/puppet

This kept the previous machine's identity, so I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install, after which I had a good base setup, although not exactly as it was before: 1700 packages had been installed manually on angela before the reinstall and were not managed by Puppet.
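Concretely, the restore step amounts to copying those two directories from a backup into the fresh install before the first agent run. A minimal sketch, assuming a backup laid out like the live filesystem (the `restore_identity` helper and the backup layout are my own invention, not part of any Puppet tooling):

```shell
# Copy the machine identity (SSH host keys, Puppet certs) from a backup
# root into a target root, so the reinstalled machine keeps its identity.
# Function name and backup layout are assumptions for illustration;
# adapt the paths to your own backup scheme.
restore_identity() {
    backup="$1"   # e.g. /mnt/backup
    target="$2"   # e.g. / on the freshly installed system
    for d in etc/ssh var/lib/puppet; do
        mkdir -p "$target/$(dirname "$d")"
        cp -a "$backup/$d" "$target/$(dirname "$d")/"
    done
}

# restore_identity /mnt/backup /
# A normal agent run then pulls everything else back in:
# puppet agent --test
```

After that, the server recognizes the node and reinstalls whatever it manages.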

I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.

LVFS support

It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:

apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update

Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:

fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf
fwupdmgr update

Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks

The Framework laptop's display resolution (2256x1504) is high enough that fonts render quite small by default, so welcome to the marvelous world of "scaling".

The Debian wiki page has a few tricks for this.

Console

This will make the console and grub fonts more readable:

cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg

Adding this to your .Xresources will make everything look much bigger:

! 1.5*96
Xft.dpi: 144

Apparently, some of this can also help:

! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter: lcddefault
Xft.hintstyle: hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb

In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me.

The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness".

TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration

The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggers a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive.

The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update.

Note that the following keybindings are active in the BIOS POST check:

Key     Meaning
F2      Enter BIOS setup menu
F12     Enter BIOS boot manager
Delete  Enter BIOS setup menu

WiFi compatibility issues

I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon) so the WiFi card didn't work out of the box. But even after copying the firmware through a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.)

Thankfully, I had a little USB-C dongle with a RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network.

Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the command line" than anything else. NetworkManager worked fine and got WiFi working correctly.

Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.

Battery life

I was getting about 7 hours of battery on the Purism Librem 13v4, and that's after a year or two of wear. Now, I still have about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!) but really, it's not that good for a new-generation laptop. The 12th generation Intel chipset probably improved things compared to the previous Framework laptop, but I don't have an 11th gen Framework to compare with.

(Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.)

The battery life doesn't seem up to, say, Dell XPS 13, ThinkPad X1, and of course not the Apple M1, where I would expect 10+ hours of battery life out of the box.

That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of estimated battery life (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected.
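Those status bar estimates are just the remaining charge divided by the current draw, both straight out of sysfs. A sketch of the arithmetic (the `battery_hours` helper is mine, not a real tool):

```shell
# Estimate battery life the way a status bar does: charge_now / current_now.
# On the Framework both values live under /sys/class/power_supply/BAT1/
# (battery name varies by machine); units are mAh and mA respectively.
battery_hours() {
    awk -v q="$1" -v i="$2" 'BEGIN { printf "%.1f\n", q / i }'
}

# Live usage would look like (BAT1 assumed):
# battery_hours "$(cat /sys/class/power_supply/BAT1/charge_now)" \
#               "$(cat /sys/class/power_supply/BAT1/current_now)"
battery_hours 3500 500   # a 3500mAh battery at a 500mA draw → 7.0
```

At a ~140mA idle draw the same battery estimates out to 25 hours, which is why the number swings so wildly with load.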

Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:

Time     User  Nice   Sys  Idle    IO   Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Average    1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6  39.1 128.0   7.57
GeoMean    1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6  27.9 113.3   7.54
StdDev     1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3  82.1  44.0  80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Minimum    0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2  82.0  17.0  82.0   6.36
Maximum    4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71

Expansion cards matter a lot for battery life (see below for a thorough discussion); my normal setup is 2x USB-C and 1x USB-A (yes, with an empty slot, and yes, to save power).

Interestingly, playing a 720p video in a window takes up more power (10.5W) than in full screen (9.5W), but I blame that on my desktop setup (i3 + compton)... Not sure if mpv hits the VA-API; maybe not in windowed mode. Similar results with 1080p, interestingly, except the windowed version struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me.

Fooling around the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around in Discourse, so I suspect that website is a pretty big battery drain. I see about 10-12W, while I was probably at half that (6-8W) just playing music with mpv in the background...

In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross.

(When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management

Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would take 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare those, because voltage is not necessarily the same between the two platforms.
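To compare the two machines despite the ampere unit, you can multiply by the battery voltage, which sysfs also exposes: `voltage_now` is reported in µV and `current_now` in µA under the power_supply class. A small conversion sketch (the `battery_watts` helper name is made up):

```shell
# Convert power_supply sysfs readings to watts:
# watts = (voltage_now in µV / 1e6) * (current_now in µA / 1e6)
battery_watts() {
    awk -v uv="$1" -v ua="$2" 'BEGIN { printf "%.2f\n", (uv / 1e6) * (ua / 1e6) }'
}

# Live usage would read the sysfs files directly, e.g.:
# battery_watts "$(cat /sys/class/power_supply/BAT1/voltage_now)" \
#               "$(cat /sys/class/power_supply/BAT1/current_now)"
battery_watts 15400000 500000   # a hypothetical 15.4V pack at 500mA → 7.70
```

That makes the 800mA/500mA comparison meaningful even if the two packs run at different voltages.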

  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1

TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it. Try:

  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)
Background on CPU architecture

There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new.

The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16, but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved power saving further, at some performance cost. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers Conference, but it stopped short of saying much about the specific problems Linux is facing on Alder Lake:

Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.

All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores, and Linux has had the scheduler changes to make use of them (but may still be having trouble). In any case, this might not be the source of the power management problems on my laptop; quite the opposite.

Also note that the firmware updates for various chipsets are supposed to improve things eventually.

On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage

I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat).

Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there are only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6. But the next column over does show many CPUs in C10 states...)

As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop.

Expansion cards actually do take up power, and so does the screen, obviously. The fully lit screen takes a solid 2-3W of power compared to the fully dimmed screen. With all expansion cards removed and the laptop idle, I can get it down to 4 watts of power usage, and an amazing 2 watts with the screen turned off.

Caveats

Abusive (10W+) power usage that I initially observed could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running everything: it could very well be that some things (I'm looking at you, Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W/electron, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there.

Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests

Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:

Device        Minimum  Average  Max    Stdev   Note
Screen, 100%  2.4W     2.6W     2.8W   N/A
Screen, 1%    30mW     140mW    250mW  N/A
Backlight 1   290mW    ?        ?      ?       fairly small, all things considered
Backlight 2   890mW    1.2W     3W?    460mW?  geometric progression
Backlight 3   1.69W    1.5W     1.8W?  390mW?  significant power use
Radios        100mW    250mW    N/A    N/A
USB-C         N/A      N/A      N/A    N/A     negligible power drain
USB-A         10mW     10mW     ?      10mW    almost negligible
DisplayPort   300mW    390mW    600mW  N/A     not passive
HDMI          380mW    440mW    1W?    20mW    not passive
1TB SSD       1.65W    1.79W    2W     12mW    significant, probably higher when busy
MicroSD       1.6W     3W       6W     1.93W   highest power usage, possibly even higher when busy
Ethernet      1.69W    1.64W    1.76W  N/A     comparable to the SSD card

So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power even when idle. The USB-A cards are the least concern, sucking up 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI cards draw a few hundred milliwatts. It looks like USB-A connectors have a fundamental flaw: they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:

It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.

Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.)

The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts of power, both just sitting there doing nothing. This confirms claims of 1.4W for the SSD (but not 5W) power usage found elsewhere. The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning.

The Framework team has a beta update for the DisplayPort adapter, but currently only for Windows (LVFS is technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely that at least some of those power management issues will eventually be fixed.

Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W.

The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland

Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR: is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS

I redid the idle tests after the 3.06 beta BIOS update and ended up with these results:

Device            Minimum  Average  Max    Stdev  Note
Baseline          1.96W    2.01W    2.11W  30mW   1 USB-C, screen off, backlight off, no radios
2 USB-C           1.95W    2.16W    3.69W  430mW  USB-C confirmed as mostly passive...
3 USB-C           1.95W    2.16W    3.69W  430mW  ... although with extra stdev
1TB SSD           3.72W    3.85W    4.62W  200mW  unchanged from before upgrade
1 USB-A           1.97W    2.18W    4.02W  530mW  unchanged
2 USB-A           1.97W    2.00W    2.08W  30mW   unchanged
3 USB-A           1.94W    1.99W    2.03W  20mW   unchanged
MicroSD w/o card  3.54W    3.58W    3.71W  40mW   significant improvement! 2-3W power saving!
MicroSD w/ card   3.53W    3.72W    5.23W  370mW  new measurement! increased deviation
DisplayPort       2.28W    2.31W    2.37W  20mW   unchanged
1 HDMI            2.43W    2.69W    4.53W  460mW  unchanged
2 HDMI            2.53W    2.59W    2.67W  30mW   unchanged
External USB      3.85W    3.89W    3.94W  30mW   new result
Ethernet          3.60W    3.70W    4.91W  230mW  unchanged

Note that the table summary is different than the previous table: here we show the absolute numbers while the previous table was doing a confusing attempt at showing relative (to the baseline) numbers.

Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats except for the MicroSD card which has significantly improved.

The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage

I wrote some quick hack to evaluate how much power is used during sleep. Apparently, this is one of the areas that should have improved since the first Framework model, let's find out.

My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:

sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now = 6045 [mAh]

... to this:

sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now = 6037 [mAh]

That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad!
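The arithmetic in these standby tests is the same every time: charge delta over elapsed time. A throwaway helper (my own, not part of systemd) makes the numbers reproducible:

```shell
# Average standby drain from two charge_now readings (mAh) and the
# elapsed minutes between them; prints the mean current in mA.
standby_ma() {
    awk -v before="$1" -v after="$2" -v min="$3" \
        'BEGIN { printf "%.0f\n", (before - after) * 60 / min }'
}

# The Purism numbers above: 6045 -> 6037 mAh over 10 minutes:
standby_ma 6045 6037 10   # → 48
```

Divide the full battery capacity by that current to get the standby estimate (e.g. 6045mAh / 48mA ≈ 126 hours).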

In comparison, here is my really old x220, before:

sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now = 5070 [mWh]

... after:

sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now = 4980 [mWh]

... which is 90mWh in 10 minutes, or a whopping 540mW, which was possibly okay when this battery was new (62000mWh, so about 100 hours, or about 5 days), but this battery is almost dead and has only 5210mWh when full, so only 10 hours of standby.

And here is the Framework performing a similar test, before:

sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full = 3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now = 2861 [mAh]

... after:

sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now = 2812 [mAh]

... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA, much more than the Purism, but half of the X220. At this rate, the battery would last only 12 hours on standby!! That is pretty bad.

Note that this was done with the following expansion cards:

  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN

Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA).

This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA).

Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:

sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now = 2544 [mAh]

... after:

sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now = 2542 [mAh]

... which is 2mAh in 74 seconds, or 97mA, which brings us to a more reasonable 36 hours, or a day and a half. It's still above the X220's power usage, and more than an order of magnitude above the Purism laptop. It's also far from the 0.4% promised by upstream, which would be 14mA for the 3500mAh battery.

It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see from the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life.

You can confirm the current sleep mode with:

# cat /sys/power/mem_sleep
s2idle [deep]

In the above, deep is selected. You can change it on the fly with:

printf s2idle > /sys/power/mem_sleep

Here's another test:

sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now = 1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now = 1613 [mAh]

... better! 6mAh in about 6 minutes works out to 63.5mA, so more than two days of standby.

A longer test:

oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now = 3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now = 3147 [mAh]

That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days.

I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, the expansion card power tests show that the cards do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests (running them with whatever card I had plugged in, without paying attention), so that's likely the cause of the high power usage and fluctuations.

It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post.
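systemd runs executables from /usr/lib/systemd/system-sleep/ with a pre/post argument, so a hook along these lines could detach a greedy USB device before suspend. This is an untested sketch: the `4-2` device id is borrowed from the ethernet card section later in this post and will differ on your bus, and the unbind/bind dance may not suit every driver:

```shell
#!/bin/sh
# Hypothetical /usr/lib/systemd/system-sleep/00-usb-unbind hook.
# systemd-sleep invokes hooks as "$0 pre suspend" / "$0 post suspend".
unbind_usb() {
    # $1 = USB device id (see /sys/bus/usb/devices/), $2 = unbind node
    # (the real path is /sys/bus/usb/drivers/usb/unbind)
    echo "$1" > "${2:-/sys/bus/usb/drivers/usb/unbind}"
}

case "${1:-}" in
    pre)  unbind_usb 4-2 ;;                            # detach before sleep
    post) echo 4-2 > /sys/bus/usb/drivers/usb/bind ;;  # reattach on resume
esac
```

Whether the saved standby power justifies losing the device across suspend is the open question the TODO refers to.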

In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:

oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now = 3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now = 3145 [mAh]

Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means about 10 days of suspend time. This is actually pretty good, and far beyond what I was expecting when I started down this journey.

So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below.

There is also some hope that the Chromebook edition — specifically designed with a specification of 14 days standby time — could bring some firmware improvements back down to the normal line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.

TODO: one final solution here is suspend-then-hibernate, which Windows uses for this

TODO: consider implementing the S0ix sleep states , see also troubleshooting

TODO: consider https://github.com/intel/pm-graph

Standby expansion cards test results

This table is a summary of the more extensive standby-tests I have performed:

Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     sleep=deep nvme.noacpi=1
s2idle       0.29W    18.9mA    ~7    sleep=s2idle nvme.noacpi=1
normal nvme  0.31W    20mA      ~7    sleep=s2idle without nvme.noacpi=1
1 USB-C      0.23W    15mA      ~10
2 USB-C      0.23W    14.9mA          same as above
1 USB-A      0.75W    48.7mA    3     +500mW (!!) for the first USB-A card!
2 USB-A      1.11W    72mA      2     +360mW
3 USB-A      1.48W    96mA      <2    +370mW
1TB SSD      0.49W    32mA      <5    +260mW
MicroSD      0.52W    34mA      ~4    +290mW
DisplayPort  0.85W    55mA      <3    +620mW (!!)
1 HDMI       0.58W    38mA      ~4    +250mW
2 HDMI       0.65W    42mA      <4    +70mW

Conclusions:

  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required

  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)

  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)

  • DisplayPort takes a surprising lot of power (620mW), almost double its average runtime usage (390mW)

  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)

  • and oddly, a second card adds less power usage (70mW?!) than the first, maybe a circuit is used by both?

A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS

Framework recently (2022-11-07) announced that they will publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above result, improving both standby and runtime power usage.

The update came out in December 2022 and I redid my analysis with the following results:

Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     no cards, same as before upgrade
1 USB-C      0.25W    16mA      9     same as before
2 USB-C      0.25W    16mA      9     same
1 USB-A      0.80W    62mA      3     +550mW!! worse than before
2 USB-A      1.12W    73mA      <2    +320mW, on top of the above, bad!
Ethernet     0.62W    40mA      3-4   new result, decent
1TB SSD      0.52W    34mA      4     a bit worse than before (+2mA)
MicroSD      0.51W    22mA      4     same
DisplayPort  0.52W    34mA      4+    upgrade improved by 300mW
1 HDMI       ?        38mA      ?     same
2 HDMI       ?        45mA      ?     a bit worse than before (+3mA)
Normal       1.08W    70mA      ~2    Ethernet, 2 USB-C, USB-A

Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports which is a big problem for my use case. There is a notable improvement on the DisplayPort power consumption which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either.

Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection

The BIOS has an option to limit charge to 80% to mitigate battery wear. There's a way to control the embedded controller from runtime with fw-ectool, partly documented here. The command would be:

sudo ectool fwchargelimit 80

I looked at building this myself but failed to run it. I opened a RFP in Debian so that we can ship this in Debian, and also documented my work there.

Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:

root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------

+++ Battery Care
Plugin: generic
Supported features: none available

+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging
/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)
Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]

One thing that is still missing is the charge threshold data (the "(not available)" above). There's been some work in August to make that accessible; stay tuned? This would also make it possible to implement hysteresis support.
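Once the kernel exposes the threshold, setting it would be a one-line sysfs write. A guarded sketch (the `set_charge_limit` helper is hypothetical; the attribute is precisely the one reported as "(not available)" above, so this is aspirational on the Framework for now):

```shell
# Set the battery charge limit via the standard power_supply sysfs
# attribute, if the kernel/firmware exposes it.
set_charge_limit() {
    pct="$1"
    attr="${2:-/sys/class/power_supply/BAT1/charge_control_end_threshold}"
    if [ -w "$attr" ]; then
        echo "$pct" > "$attr"
    else
        echo "charge threshold not exposed by this kernel/firmware" >&2
        return 1
    fi
}

# set_charge_limit 80   # mirror the BIOS 80% wear-protection setting at runtime
```

On laptops that do expose the attribute (many ThinkPads, for instance), this is how tools like TLP implement their charge thresholds.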

Ethernet expansion card

The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop"....

The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor when compared to the thin Framework laptop.

I had a serious issue when first trying it: the link LEDs just wouldn't come up. I filed a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up while the laptop is running the BIOS POST check, and even while the Linux kernel boots.

I first thought that the problem was related to the powertop service I run at boot time to tweak some power saving settings.

It seems like this:

echo 'on' > '/sys/bus/usb/devices/4-2/power/control'

... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:

echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'
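The echo workaround above has to be reapplied after every reboot. One way to make it persistent (my own sketch, not from the post or the Framework forum; the file name is hypothetical) would be a udev rule keyed on the RTL8156's USB vendor/product IDs rather than the 4-2 port number:

```
# /etc/udev/rules.d/99-framework-ethernet.rules (hypothetical file name)
# force the ethernet expansion card out of USB runtime suspend when plugged in
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0bda", ATTR{idProduct}=="8156", ATTR{power/control}="on"
```

Matching on the IDs means the rule keeps working no matter which expansion slot the card is plugged into.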

Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker, which shows how the USB_AUTOSUSPEND setting force-enables power saving even for drivers that don't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:

By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.

So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:

USB_AUTOSUSPEND=0

... but that disables auto-suspend on all USB devices, which may hurt power usage elsewhere. I have found that a combination of:

USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"

and this on the kernel commandline:

usbcore.quirks=0bda:8156:k

... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:

# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirk is a workaround for the ethernet card suspend bug:
# https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"

# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768

Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is strangely satisfying. But running against my Turris Omnia router, I could pretty much max out a gigabit link fairly easily:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238            sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                 receiver

The card doesn't require any proprietary firmware blobs which is surprising. Other than the power saving issues, it just works.

In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power idle, without link, in the above "quirky" configuration where the card is functional but without autosuspend.

Proprietary firmware blobs

The Framework does need proprietary firmware to operate. Specifically:

  • the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads firmware from the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features are only available with special proprietary firmware, normally shipped in the firmware-misc-nonfree package, but currently missing from it

Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the available firmware results in noticeable lag, which I estimate at 200-500ms, between the keyboard (not the mouse!) and the display. Symptoms also include tearing and shearing of windows; it's pretty nasty.

One workaround is to delete the two affected firmware files:

cd /lib/firmware/i915 && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u

You will get the following warnings during the build, which is good, as it means the problematic firmware is disabled:

W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915

But then it also means that critical firmware isn't loaded, which means, among other things, higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W at full blast.

The proper fix is to use some compositing manager. I ended up using compton with the following systemd unit:

[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela

[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure

[Install]
RequiredBy=graphical-session.target

compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (1-2W extra in a similar setup). I also tried compiz, but it would just crash with:

anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0

When running from the base session, I would get this instead:

compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'

Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum.

Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.

Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get these errors in dmesg:

[ 19.534429] Intel(R) Wireless WiFi driver for Linux
[ 19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[ 19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[ 19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[ 19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[ 19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[ 19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[ 19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[ 19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[ 19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[ 19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[ 19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[ 19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[ 19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[ 19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[ 19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[ 19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[ 19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[ 19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[ 19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[ 19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[ 19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[ 19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[ 19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[ 19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[ 19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm

Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what difference those make, as the WiFi seems to work well without them.

I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them.

There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could improve battery usage as well, in theory. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
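For example, here is a sketch of how those knobs could be persisted (untested on my side; the file name is arbitrary, and the values are the "low power" ones described above):

```
# /etc/modprobe.d/iwlwifi-power.conf (arbitrary file name)
# iwlmvm power_scheme: 1 = always active, 2 = balanced (default), 3 = low power
options iwlmvm power_scheme=3
# iwlwifi: enable power saving; power_level ranges 1-5
options iwlwifi power_save=1 power_level=3
```

The modules would need to be reloaded (or the machine rebooted) for these to take effect, and as noted above, any battery gains would have to be verified with actual power measurements.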

Graphics acceleration

After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears.

Running in a window gives me odd results, as the gears basically grind to a halt:

Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds = 5.022 FPS

Ouch. 5FPS!

But interestingly, once the window is in full screen, it does hit the monitor refresh rate:

300 frames in 5.0 seconds = 60.000 FPS

I'm not really a gamer and I don't normally use any of that fancy graphics acceleration stuff (except maybe my browser does?).

I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when running those tests. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package).

Following this post, I also made sure to have these settings in my about:config in Firefox, or, in user.js:

user_pref("media.ffmpeg.vaapi.enabled", true);

Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird.

The guide also mentions installing the intel-media-driver package, but I could not find that in Debian.

The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.

Chromium / Signal desktop bugs

It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:

export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"

It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!).

Signal Desktop has a similar problem, and doesn't reuse those flags (because of course it doesn't). Instead I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this:

exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"

This was mostly done in this Puppet commit.

I haven't figured out the root cause of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to Wayland.

Graphics card hangs

I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming.

The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29).

It could also be that bug because the error message I get is actually:

Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915])
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled

It's a solid 30-second graphical hang, although the keyboard and everything else seems to keep working. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch to add an extra kernel parameter that supposedly fixes that too. There's all sorts of other workarounds in there, for example this:

echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0" | sudo tee /etc/modprobe.d/i915.conf

from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches

I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches.

The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

Shipping details

I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected because the August batch was late.

People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand.

As of early 2023, their website announces that laptops ship "within 5 days". I have myself ordered a few expansion cards in November 2022, and they shipped on the same day, arriving 3-4 days later.

The supply pipeline

There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied by an email notification:

  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)

This comes from the crowdsourced spreadsheet, which should be updated when the status changes here.

I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my doorstep on September 27th, about 33 days after ordering.

It seems current orders are not processed in "batches", but in real time, see this blog post for details on shipping.

Shipping trivia

I don't know about the others, but my laptop shipped through no less than four different airplane flights. Here are the hops it took:

I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough, but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of FedEx shipping hubs.

There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to embark on an endless, two-week-long road trip across the continental US.

Other resources
Categories: FLOSS Project Planets

digiKam 7.10.0 is released

Planet KDE - Sun, 2023-03-12 20:00
Dear digiKam fans and users, After three months of active maintenance and other bugs triage, the digiKam team is proud to present version 7.10.0 of its open source digital photo manager. See below the list of most important features coming with this release. Bundles Internal Component Updates As with the previous releases, we take care about upgrading the internal components from the Bundles. Microsoft Windows Installer, Apple macOS Package, and Linux AppImage binaries now hosts:

Mike Herchel's Blog: Creating Your First Single Directory Component within Drupal

Planet Drupal - Sun, 2023-03-12 19:09
Creating Your First Single Directory Component within Drupal mherchel Sun, 03/12/2023 - 20:00

The Python Coding Blog: Anatomy of a 2D Game using Python’s turtle and Object-Oriented Programming

Planet Python - Sun, 2023-03-12 14:21

When I was young, we played arcade games in their original form on tall rectangular coin-operated machines with buttons and joysticks. These games had a resurgence as smartphone apps in recent years, useful to keep one occupied during a long commute. In this article, I’ll resurrect one as a 2D Python game and use it to show the “anatomy” of such a game.

You can follow this step-by-step tutorial even if you're unfamiliar with all the topics. In particular, this tutorial relies on Python's turtle module and uses the object-oriented programming (OOP) paradigm. You don't need expertise in either topic, as I'll explain the key concepts you'll need. And if you're already familiar with OOP, you can easily skip the clearly-marked OOP sections.

This is the game you’ll write:

The rules of the game are simple. Click on a ball to bat it up. How long can you last before you lose ten balls?

In addition to getting acquainted with OOP principles, this tutorial will show you how such games are built step-by-step.

Note about content in this article: If you’re already familiar with the key concepts in object-oriented programming, you can skip blocks like this one. If you’re new or relatively new to the topic, I recommend you read these sections as well.

The Anatomy of a 2D Python Game | Summary

I’ll break this game down into eight key steps:

  1. Create a class named Ball and set up what should happen when you create a ball
  2. Make the ball move forward
  3. Add gravity to pull the ball downwards
  4. Add the ability to bat the ball upwards
  5. Create more balls, with each ball created after a certain time interval
  6. Control the speed of the game by setting a frame rate
  7. Add a timer and an end to the game
  8. Add finishing touches to the game

Are you ready to start coding?

A visual summary of the anatomy of a 2D Python game

Here’s another summary of the steps you’ll take to create this 2D Python game. This is a visual summary:

1. Create a Class Named Ball

You’ll work on two separate scripts to create this game:

  • juggling_ball.py
  • juggling_balls_game.py

You’ll use the first one, juggling_ball.py, to create a class called Ball which will act as a template for all the balls you’ll use in the game. In the second script, juggling_balls_game.py, you’ll use this class to write the game.

It’s helpful to separate the class definitions into a standalone module to provide a clear structure and to make it easier to reuse the class in other scripts.

Let’s start working on the class in juggling_ball.py. If you want to read about object-oriented programming in Python in more detail before you dive into this project, you can read Chapter 7 | Object-Oriented Programming. However, I’ll also summarise the key points in this article in separate blocks. And remember that if you’re already familiar with the key concepts in OOP, you can skip these additional note blocks, like the one below.

A class is like a template from which you can create many objects. When you define a class, such as the class Ball in this project, you're not creating a ball. Instead, you're defining the instructions needed to create a ball.

The __init__() special method is normally the first method you define in a class and includes all the steps you want to execute each time you create an object using this class.
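As a minimal standalone illustration (a toy class, not part of the game), here's __init__() running each time an object is created:

```python
class Point:
    def __init__(self, x, y):
        # __init__() runs automatically every time you call Point(...)
        self.x = x
        self.y = y

point = Point(3, 4)
print(point.x, point.y)  # 3 4
```

Calling Point(3, 4) creates the object first, then passes it to __init__() as self along with the other arguments.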

Create the class

Let’s start this 2D Python game. You can define the class and its __init__() special method:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )

The class Ball inherits from turtle.Turtle which means that an object which is a Ball is also a Turtle. You also call the initialisation method for the Turtle class when you call super().__init__().

The __init__() method creates two data attributes, .width and .height. These attributes set the size in pixels of the area in which the program will create the ball.

self is the name you use as a placeholder to refer to the object that you’ll create. Recall that when you define a class, you’re not creating any objects. At this definition stage, you’re creating the template. Therefore, you need a placeholder name to refer to the objects you’ll create in the future. The convention is to use self for this placeholder name.

You create two new data attributes when you write:
self.width = width
self.height = height

An attribute belongs to an object in a class. There are two types of attributes:
– Data attributes
– Methods

You can think of data attributes as variables attached to an object. Therefore, an object “carries” its data with it. The data attributes you create are self.width and self.height.

The other type of attribute is a method. You’ll read more about methods shortly.

The rest of the __init__() method calls Turtle methods to set the initial state of each ball. Here’s a summary of the four turtle.Turtle methods used:

  • .shape() changes the shape of the turtle
  • .color() sets the colour of the turtle (and anything it draws)
  • .penup() makes sure the turtle will not draw a line when it moves
  • .setposition() places the turtle at a specific xy-coordinate on the screen. The centre is (0, 0).

You set the shape of the turtle to be a circle (it’s actually a disc, but the name of the shape in turtle is “circle”). You use a random value between 0 and 1 for each of the red, green, and blue colour values when you call self.color(). And you set the turtle’s position to a random integer within the bounds of the region defined by the arguments of the __init__() method. You use the floor division operator // to ensure you get an integer value when dividing the width and height by 2.
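As a quick standalone illustration of why floor division matters here: / always gives a float, while // gives an integer:

```python
width = 601

print(width / 2)    # 300.5 -> a float, which random.randint() won't accept
print(width // 2)   # 300   -> an integer
print(-width // 2)  # -301  -> floor division rounds down, not towards zero
```

Note the last line: for negative values, // rounds towards negative infinity, which is fine here since you only need whole-number bounds for randint().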

It’s a good idea to define the .__repr__() special method for a class. As you won’t use this explicitly in this project, I won’t add it to the code in the main article. However, it’s included in the final code in the appendix.

Test the class

You can test the class you just created in the second script you’ll work on as you progress through this project, juggling_balls_game.py:

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)

Ball(WIDTH, HEIGHT)
Ball(WIDTH, HEIGHT)

turtle.done()

You create a screen and set its size to 600 x 600 pixels. Next, you create two instances of the Ball class. Even though you define Ball in another script, you import it into the scope of your game program using from juggling_ball import Ball.

Here’s the output from this script:

You call Ball() twice in the script. Therefore, you create two instances of the class Ball. Each one has a random colour and moves to a random position on the screen as soon as it’s created.

An instance of a class is each object you create using that class. The class definition is the template for creating objects. You only have one class definition, but you can have several instances of that class.

Data attributes, which we discussed earlier, are sometimes also referred to as instance variables since they are variables attached to an instance. You’ll also read about instance methods soon.

You may have noticed that when you create the two balls, you can see them moving from the centre of the screen, where they're created, to their 'starting' position. You want more control over when the objects are displayed on the screen.

To achieve this, you can set window.tracer(0) as soon as you create the screen and then use window.update() when you want to display the turtles on the screen. Any changes that happen to the position and orientation of Turtle objects (and Ball objects, too) will occur “behind the scenes” until you call window.update():

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

Ball(WIDTH, HEIGHT)
Ball(WIDTH, HEIGHT)

window.update()

turtle.done()

When you run the script now, the balls will appear instantly in the correct starting positions. The final call to turtle.done() runs the main loop of a turtle graphics program and is needed at the end of the script to keep the program running once the final line is reached.

You’re now ready to create a method in the Ball class to make the ball move forward.

2. Make Ball Move Forward

Let’s shift back to juggling_ball.py where you define the class. You’ll start by making the ball move upwards at a constant speed.

You can set the maximum velocity that a ball can have as a class attribute .max_velocity and then create a data attribute .velocity which will be different for each instance. The value of .velocity is a random number that’s limited by the maximum value defined at the class level:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

You also change the ball’s heading using the Turtle method .setheading() so that the object is pointing upwards.

A class attribute is defined for the class overall, not for each instance. This can be used when the same value is needed for every instance you’ll create of the class. You can access a class attribute like you access instance attributes. For example, you use self.max_velocity in the example above.
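Here's a small standalone sketch (separate from the game code) showing the difference between a class attribute and a data attribute:

```python
import random


class Particle:
    max_velocity = 5  # class attribute: one value, shared by every instance

    def __init__(self):
        # data attribute: a separate (random) value for each instance
        self.velocity = random.randint(1, self.max_velocity)


first = Particle()
second = Particle()

# the class attribute reads the same through the class or any instance
print(Particle.max_velocity, first.max_velocity, second.max_velocity)  # 5 5 5
```

Each instance may end up with a different .velocity, but they all share the single max_velocity defined on the class.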

Next, you can create the method move() which moves the ball forward by the value stored in self.velocity.

A method is a function that’s part of a class. You’ll only consider instance methods in this project. You can think of an instance method as a function that’s attached to an instance of the class.

In Python, you access these using the dot notation. For example, if you have a list called numbers, you can call numbers.append() or numbers.pop(). Both .append() and .pop() are methods of the class list, and every instance of a list carries these methods with it.
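You can see this with the list example in a few lines:

```python
numbers = [1, 2, 3]

numbers.append(4)     # .append() is a method attached to this list instance
print(numbers)        # [1, 2, 3, 4]

last = numbers.pop()  # .pop() removes and returns the last item
print(last)           # 4
print(numbers)        # [1, 2, 3]
```

Both calls use the dot notation, just like the .move() method you're about to write for Ball.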

The method .move() is an instance method, which means the object itself is passed to the method as its first argument. This is why the parameter self is within the parentheses when you define .move().

We’re not using real world units such as metres in this project. For now, you can think of this velocity as a value measured in pixels per frame instead of metres per second. The duration of each frame is equal to the time it takes for one iteration of the while loop to complete.

Therefore, if you call .move() once per frame in the game, you want the ball to move by the number of pixels in .velocity during that frame. Let’s add the .move() method:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)

You can test the new additions to the class Ball in juggling_balls_game.py:

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

ball = Ball(WIDTH, HEIGHT)

while True:
    ball.move()
    window.update()

turtle.done()

You test your code using a single ball for now and you call ball.move() within a while True loop.

Recall that .move() is a method of the class Ball. In the class definition in juggling_ball.py, you use the placeholder name self to refer to any future instance of the class you create. However, now you’re creating an instance of the class Ball, and you name it ball. Therefore, you can access all the attributes using this instance’s name, for example by calling ball.move().

Here’s the output from this script so far:

Note: the speed at which your while loop will run depends on your setup and operating system. We’ll deal with this later in this project. However, if your ball is moving too fast, you can slow it down by dividing its velocity by 10 or 100, say, when you define self.velocity in the __init__() method. If you can’t see any ball when you run this script, the ball may be moving so quickly out of the screen that you need to slow it down significantly.

You can create a ball that moves upwards with a constant speed. The next step in this 2D Python game is to account for gravity to pull the ball down.

3. Add Gravity to Pull Ball Downwards

Last time I checked, when you toss a ball upward, it slows down, reaches a point where it's stationary in the air for the briefest of moments, and then starts falling back down towards the ground.

Let’s add the effect of gravity to the game. Gravity is a force that accelerates an object. This acceleration is given in metres per second squared (m/s^2). The acceleration due to the Earth’s gravity is 9.8m/s^2, and this is always a downward acceleration. Therefore, when an object is moving upwards, gravity will decelerate the object until its velocity is zero. Then it will start accelerating downwards.

In this project, we’re not using real-world units. So you can think of the acceleration due to gravity as a value in pixels per frame squared. “Frame” is a unit of time in this context, as it refers to the duration of the frame.

Acceleration is the rate of change of velocity. In the real world, we use the change of velocity per second. However, in the game you’re using change of velocity per frame. Later in this article, you’ll consider the time it takes for the while loop to run so you can set the frame time.
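You can see this per-frame arithmetic without any graphics. This plain-Python sketch (not part of the game scripts) uses the same starting velocity and gravity values as the game and counts how many frames pass before the ball starts falling:

```python
velocity = 5.0   # pixels per frame, the maximum starting velocity in the game
gravity = 0.07   # velocity lost per frame

frames = 0
while velocity > 0:      # the ball rises while its velocity is positive
    velocity -= gravity  # gravity reduces the velocity each frame
    frames += 1

print(frames)  # 72
```

So a ball launched at full speed rises for 72 frames before its velocity turns negative and it starts to fall.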

You can add a class attribute to define the acceleration due to gravity and define a method called .fall():

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

The method .fall() changes the value of the data attribute .velocity by subtracting the value stored in the class attribute .gravity from the current .velocity. You also call self.fall() within .move() so that each time the ball moves in a frame, it’s also pulled back by gravity. In this example, the value of .gravity is 0.07. Recall that you’re measuring velocity in pixels per frame. Therefore, gravity reduces the velocity by 0.07 pixels per frame in each frame.

You could merge the code in .fall() within .move(). However, creating separate methods gives you more flexibility in the future. Let’s assume you want a version of the game in which something that happens in the game suspends gravity. Having separate methods will make future modifications easier.

You could also choose not to call self.fall() within .move() and call the method directly within the game loop in juggling_balls_game.py.

You need to consider another issue now that the balls will be pulled down towards the ground. At some point, the balls will leave the screen from the bottom edge. Once this happens, you want the program to detect it so you can deal with it. You can create another method, .is_below_lower_edge():

# juggling_ball.py

import random
import turtle

class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

The method .is_below_lower_edge() is an instance method which returns a Boolean value. The method hides the turtle object and returns True if the ball has dipped below the lower edge of the screen. Otherwise, it returns False.

Methods are functions. Therefore, like all functions, they always return a value. You’ll often find methods such as .move() and .fall() that don’t have an explicit return statement. These methods change the state of one or more of the object’s attributes. These methods still return a value. As with all functions that don’t have a return statement, these methods return None.
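You can verify this behaviour with a short, self-contained example. The Mover class here is hypothetical and not part of the game; it only exists to show that a state-changing method with no return statement returns None:

```python
# A state-changing method without a return statement returns None,
# just like any other function. (Mover is a hypothetical example
# class, not part of the game.)
class Mover:
    def __init__(self):
        self.position = 0

    def move(self):  # changes state, no explicit return
        self.position += 1

mover = Mover()
result = mover.move()
print(result)  # None
print(mover.position)  # 1
```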

The purpose of .is_below_lower_edge() is different. Although it’s also changing the object’s state when it calls self.hideturtle(), its main purpose is to return True or False to indicate whether the ball has dropped below the lower edge of the screen.

It’s time to check whether gravity works. You don’t need to change juggling_balls_game.py since the call to ball.move() in the while loop remains the same. Here’s the result of running the script now:

You can see the ball is tossed up in the air. It slows down. Then it accelerates downwards until it leaves the screen. You can also temporarily add the following line in the while loop to check that .is_below_lower_edge() works:

print(ball.is_below_lower_edge())

Since this method returns True or False, you’ll see its decision printed out each time the loop iterates.

This 2D Python game is shaping up nicely. Next, you need to add the option to bat the ball upwards.

4. Add the Ability to Bat Ball Upwards

There are two steps you’ll need to code to bat a ball upwards:

  • Create a method in Ball to bat the ball upwards by adding positive (upward) velocity
  • Call this method each time the player clicks somewhere close to the ball

You can start adding another class attribute .bat_velocity_change and the method .bat_up() in Ball:

# juggling_ball.py

import random
import turtle

class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07
    bat_velocity_change = 8

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

    def bat_up(self):
        self.velocity += self.bat_velocity_change

Each time you call the method .bat_up(), the velocity of the ball increases by the value in the class attribute .bat_velocity_change.

If the ball’s velocity is, say, -10, then “batting it up” will increase the velocity to -2 since .bat_velocity_change is 8. This means the ball will keep falling but at a lower speed.

Suppose the ball’s velocity is -3, then batting up changes this to 5 so the ball will start moving upwards. And if the ball is already moving upwards when you bat it, its upward speed will increase.
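You can work through these scenarios as plain numbers. The starting velocities -10 and -3 come from the examples above; 2 is an assumed value for a ball that’s already moving upwards:

```python
# The batting arithmetic from the scenarios above. The velocity 2
# is an assumed example of a ball already moving upwards.
bat_velocity_change = 8

for velocity in (-10, -3, 2):
    print(velocity, "->", velocity + bat_velocity_change)
    # -10 -> -2, -3 -> 5, 2 -> 10
```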

You can now shift your attention to juggling_balls_game.py. You need to make no further significant changes to the class Ball itself.

In the game, you need to call the ball’s .bat_up() method when the player clicks within a certain distance of the ball. You can use .onclick() from the turtle module for this:

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
batting_tolerance = 40

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

def click_ball(x, y):
    if ball.distance(x, y) < batting_tolerance:
        ball.bat_up()

window.onclick(click_ball)

ball = Ball(WIDTH, HEIGHT)

while True:
    ball.move()
    window.update()

turtle.done()

The variable batting_tolerance determines how close you need to be to the centre of the ball for the batting to take effect.

You define the function click_ball(x, y) with two parameters representing xy-coordinates. If these coordinates fall within the batting tolerance of the ball’s position, then the ball’s .bat_up() method is called.

The call to window.onclick(click_ball) binds click_ball() to mouse clicks on the screen. Each time the player clicks, turtle calls the function and passes the click’s xy-coordinates to it.

When you run this script, you’ll get the following output. You can test the code by clicking close to the ball:

Now, juggling one ball is nice and easy. How about juggling many balls?

5. Create More Instances of Ball Using a Timer

You can make a few changes to juggling_balls_game.py to have balls appear every few seconds. To achieve this, you’ll need to:

  1. Create a list to store all the Ball instances
  2. Create a new Ball instance every few seconds and add it to the list
  3. Move each of the Ball instances in the while loop
  4. Check all the Ball instances within click_ball() to see if the player clicked the ball

Start by tackling the first two of these steps. You define a tuple named spawn_interval_range. The program will create a new Ball every few seconds and add it to the list balls. The code will choose a new time interval from the range set in spawn_interval_range.

Since all the Ball instances are stored in the list balls, you’ll need to:

  • Add a for loop in the game loop so that all Ball instances move in each frame
  • Add a for loop in bat_up() to check all Ball instances for proximity to the click coordinates

You can update the code with these changes:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

spawn_timer = time.time()
spawn_interval = 0

while True:
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()

    window.update()

turtle.done()

You start with a spawn_interval of 0 so that a Ball is created in the first iteration of the while loop. The instance of Ball is created and placed directly in the list balls. Each time a ball is created, the code generates a new spawn_interval from the range set at the top of the script.

You then loop through all the instances of Ball in balls to call their .move() method. You use a similar loop in click_ball().

This is where we can see the benefit of OOP and classes for this type of program. You create each ball using the same template: the class Ball. This means all Ball instances have the same data attributes and can access the same methods. However, the values of their data attributes can be different.

Each ball starts at a different location and has a different colour. The .velocity data attribute for each Ball has a different value, too. And therefore, each ball will move independently of the others.

By creating all balls from the same template, you ensure they’re all similar. But they’re not identical as they have different values for their data attributes.

You can run the script to see a basic version of the game which you can test:

The program creates balls every few seconds, and you can click any of them to bat them up. However, if you play this version long enough, you may notice some odd behaviour. In the next section, you’ll see why this happens and how to fix it.

6. Add a Frame Rate to Control the Game Speed

Have a look at this video of balls created by the current script. In particular, look at what happens to those balls that rise above the top edge of the screen. Does their movement look realistic?

Did you notice how, when the ball leaves the top of the screen, it immediately reappears at the same spot falling at high speed? It’s as though the ball is bouncing off the top of the screen. However, it’s not meant to do this. This is different from what you’d expect if the ball was still rising and falling normally while it was out of sight.

Let’s first see why this happens. Then you’ll fix this problem.

How long does one iteration of the while loop take?

Earlier, we discussed how the ball’s velocity is currently a value in pixels per frame. This means the ball will move by a certain number of pixels in each frame. You’re using the time it takes for each frame of the game as the unit of time.

However, there’s a problem with this logic. There are no guarantees that each frame takes the same amount of time. At the moment, the length of each frame is determined by how long it takes for the program to run one iteration of the while loop.

Look at the lines of code in the while loop. Which one do you think is the bottleneck that’s taking up most of the time?

Let’s try a small experiment. To make this a fair test, you’ll initialise the random number generator using a fixed seed so that the same “random” values are picked each time you run the program.

Let’s time 500 iterations of the while loop:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

random.seed(0)

WIDTH = 600
HEIGHT = 600
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

spawn_timer = time.time()
spawn_interval = 0

start = time.time()
count = 0
while count < 500:
    count += 1
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()

    window.update()

print(time.time() - start)

turtle.done()

You run the while loop until count reaches 500 and print out the number of seconds it takes. When I run this on my system, the output is:

8.363317966461182

It took just over eight seconds to run 500 iterations of the loop.

Now, you can comment out the line with window.update() at the end of the while loop. This will prevent the turtles from being displayed on the screen. All the remaining operations are still executed. The code now outputs the following:

0.004825115203857422

The same 500 iterations of the while loop now take around 5ms to run. That’s almost 2,000 times faster!

Updating the display on the screen is by far the slowest part of all the steps which occur in the while loop. Therefore, if there’s only one ball on the screen and it leaves the screen, the program no longer needs to display it. The loop speeds up significantly. This is why using pixels per frame as the ball’s velocity is flawed. The same pixels per frame value results in a much faster speed when the frame time is a lot shorter!

And even if this extreme change in frame time wasn’t an issue, for example, if you’re guaranteed to always have at least one ball on the screen at any one time, you still have no control over how fast the game runs or whether the frame rate will be constant as the game progresses.

How long do you want one iteration of the while loop to take?

To fix this problem and have more control over how long each frame takes, you can set your desired frame rate and then make sure each iteration of the while loop lasts for as long as needed. This is the fixed frame rate approach for running a game and works fine as long as an iteration in the while loop performs all its operations quicker than the frame time.

You can set the frame rate to 30 frames per second (fps), which is fine on most computers. However, you can choose a slower frame rate if needed. A frame rate of 30fps means that each frame takes 1/30 seconds—that’s 0.0333 seconds per frame.

Now that you’ve set the time each frame of the game should take, you can time how long the operations in the while loop take and add a delay at the end of the loop for the remaining time. This ensures each frame lasts 1/30 seconds.
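This padding logic can be tried out on its own, away from the game. The sketch below runs ten empty “frames” and pads each one out to the frame time; the loop body is empty here purely for demonstration:

```python
# A standalone sketch of the frame-padding logic: measure how long
# the frame's work took, then sleep for the rest of the frame time.
# The frames here are empty; in the game, moving the balls and
# updating the screen would happen inside the loop.
import time

frame_rate = 30
frame_time = 1 / frame_rate

start = time.time()
for _ in range(10):  # ten frames
    frame_start_time = time.time()

    # ...the frame's work would happen here...

    loop_time = time.time() - frame_start_time
    if loop_time < frame_time:
        time.sleep(frame_time - loop_time)

total = time.time() - start
print(round(total, 2))  # roughly 10/30 of a second
```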

You can implement the fixed frame rate in juggling_balls_game.py:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

spawn_timer = time.time()
spawn_interval = 0

while True:
    frame_start_time = time.time()

    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()

    window.update()

    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

turtle.done()

You start the frame timer at the beginning of the while loop. Once all frame operations are complete, you assign the amount of time taken to the variable loop_time. If this is less than the required frame time, you add a delay for the remaining time.

When you run the script now, the game will run more smoothly as you have a fixed frame rate. The velocity of the balls, measured in pixels per frame, is now a consistent value since the frame time is fixed.

You’ve completed the main aspects of the game. However, you need to have a challenge to turn this into a proper game. In the next section, you’ll add a timer and an aim in the game.

7. Add a Timer and an End to the Game

To turn this into a game, you need to:

  • Add a timer
  • Keep track of how many balls are lost

You can start by adding a timer and displaying it in the title bar:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0

while True:
    frame_start_time = time.time()

    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()

    window.title(f"Time: {time.time() - game_timer:3.1f}")
    window.update()

    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

turtle.done()

You start the game timer just before the game loop, and you display the time elapsed in the title bar in each frame of the game. The format specifier :3.1f in the f-string sets the width of the float displayed to three characters and the number of values after the decimal point to one.
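You can see the effect of this format specifier with a couple of standalone values. The 8.36… figure reuses the timing result from the experiment earlier in this article:

```python
# The :3.1f format specifier in action: one digit after the decimal
# point, padded to a minimum width of three characters.
elapsed = 8.363317966461182
print(f"Time: {elapsed:3.1f}")  # Time: 8.4

# The width is a minimum, so longer values are not truncated:
print(f"Time: {123.456:3.1f}")  # Time: 123.5
```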

Next, you can set the limit of balls you can lose before it’s ‘Game Over’! You must check whether a ball has left the screen through the bottom edge. You’ll recall you wrote the method .is_below_lower_edge() for the class Ball. This method returns a Boolean. Therefore, you can use it directly within an if statement:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)
balls_lost_limit = 10

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
balls_lost = 0

while balls_lost < balls_lost_limit:
    frame_start_time = time.time()

    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()
        if ball.is_below_lower_edge():
            window.update()
            balls.remove(ball)
            turtle.turtles().remove(ball)
            balls_lost += 1

    window.title(
        f"Time: {time.time() - game_timer:3.1f} | Balls lost: {balls_lost}"
    )
    window.update()

    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

turtle.done()

You check whether each instance of Ball has left the screen through the bottom edge as soon as you move the ball. If the ball fell through the bottom of the screen:

  • You update the screen so that the ball is no longer displayed. Otherwise, you may still see the top part of the ball at the bottom of the screen
  • You remove the ball from the list of all balls
  • You also need to remove the ball from the list of turtles kept by the turtle module to ensure the objects you no longer need don’t stay in memory
  • You add 1 to the number of balls you’ve lost

You also show the number of balls lost by adding this value to the title bar along with the amount of time elapsed. The while loop will stop iterating once you’ve lost ten balls, which is the value of balls_lost_limit.

You now have a functioning game. But you can add some finishing touches to make it better.

8. Complete the Game With Finishing Touches

When writing these types of games, the “finishing touches” can take as little or as long as you want. You can always do more refining and further tweaks to make the game look and feel better.

You’ll only make a few finishing touches in this article, but you can refine your game further if you wish:

  • Change the background colour to dark grey
  • Add a final screen to show the time taken in the game
  • Ensure the balls are not created too close to the sides of the screen

You can change the background colour using .bgcolor(), which is one of the methods in the turtle module.

To add a final message on the screen, you can update the screen after the while loop and use .write(), which is a method of the Turtle class:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

# Game parameters
WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)
balls_lost_limit = 10

# Setup window
window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.bgcolor(0.15, 0.15, 0.15)
window.tracer(0)

# Batting function
def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

# Game loop
game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
balls_lost = 0

while balls_lost < balls_lost_limit:
    frame_start_time = time.time()

    # Spawn new ball
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    # Move balls
    for ball in balls:
        ball.move()
        if ball.is_below_lower_edge():
            window.update()
            balls.remove(ball)
            turtle.turtles().remove(ball)
            balls_lost += 1

    # Update window title
    window.title(
        f"Time: {time.time() - game_timer:3.1f} | Balls lost: {balls_lost}"
    )

    # Refresh screen
    window.update()

    # Control frame rate
    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

# Game over
final_time = time.time() - game_timer

# Hide balls
for ball in balls:
    ball.hideturtle()
window.update()

# Show game over text
text = turtle.Turtle()
text.hideturtle()
text.color("white")
text.write(
    f"Game Over | Time taken: {final_time:2.1f}",
    align="center",
    font=("Courier", 20, "normal"),
)

turtle.done()

After the while loop, when the game ends, you stop the game timer and clear all the remaining balls from the screen. You create a Turtle object to write the final message on the screen.

To add a border at the edge of the screen and make sure no balls are created too close to the edge, you can go back to juggling_ball.py and modify the region where a ball can be created:

# juggling_ball.py

import random
import turtle

class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07
    bat_velocity_change = 8

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(
                (-self.width // 2) + 20, (self.width // 2) - 20
            ),
            random.randint(
                (-self.height // 2) + 20, (self.height // 2) - 20
            ),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

    def bat_up(self):
        self.velocity += self.bat_velocity_change

And this completes the game! Unless you want to keep tweaking and adding more features. Here’s what the game looks like now:

Final Words

In this project, you created a 2D Python game. You started by creating a class called Ball, which inherits from turtle.Turtle. This means that you didn’t have to start from scratch to create Ball. Instead, you built on top of an existing class.

Here’s a summary of the key stages when writing this game:

  • Create a class named Ball and set up what should happen when you create a ball
  • Make the ball move forward
  • Add gravity to pull the ball downwards
  • Add the ability to bat the ball upwards
  • Create more balls, with each ball created after a certain time interval
  • Control the speed of the game by setting a frame rate
  • Add a timer and an end to the game
  • Add finishing touches to the game

You first create the class Ball and add data attributes and methods. When you create an instance of Ball, the object you create already has all the properties and functionality you need a ball to have.

Once you defined the class Ball, writing the game is simpler because each Ball instance carries with it everything you need it to do.

And now, can you beat your high score in the game?


Appendix

Here are the final versions of juggling_ball.py and juggling_balls_game.py:

juggling_ball.py

# juggling_ball.py

import random
import turtle

class Ball(turtle.Turtle):
    """Create balls to use for juggling"""

    max_velocity = 5
    gravity = 0.07
    bat_velocity_change = 8

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(
                (-self.width // 2) + 20, (self.width // 2) - 20
            ),
            random.randint(
                (-self.height // 2) + 20, (self.height // 2) - 20
            ),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def __repr__(self):
        return f"{type(self).__name__}({self.width}, {self.height})"

    def move(self):
        """Move the ball forward by the amount required in a frame"""
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        """Take the effect of gravity into account"""
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        """
        Check if the object fell through the bottom

        :return: True if the object fell through the bottom.
            False if the object is still above the bottom edge
        """
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

    def bat_up(self):
        """Bat the ball upwards by increasing its velocity"""
        self.velocity += self.bat_velocity_change

juggling_balls_game.py

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

# Game parameters
WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)
balls_lost_limit = 10

# Setup window
window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.bgcolor(0.15, 0.15, 0.15)
window.tracer(0)

# Batting function
def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()

window.onclick(click_ball)

balls = []

# Game loop
game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
balls_lost = 0

while balls_lost < balls_lost_limit:
    frame_start_time = time.time()

    # Spawn new ball
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    # Move balls
    for ball in balls:
        ball.move()
        if ball.is_below_lower_edge():
            window.update()
            balls.remove(ball)
            turtle.turtles().remove(ball)
            balls_lost += 1

    # Update window title
    window.title(
        f"Time: {time.time() - game_timer:3.1f} | Balls lost: {balls_lost}"
    )

    # Refresh screen
    window.update()

    # Control frame rate
    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

# Game over
final_time = time.time() - game_timer

# Hide balls
for ball in balls:
    ball.hideturtle()
window.update()

# Show game over text
text = turtle.Turtle()
text.hideturtle()
text.color("white")
text.write(
    f"Game Over | Time taken: {final_time:2.1f}",
    align="center",
    font=("Courier", 20, "normal"),
)

turtle.done()

The post Anatomy of a 2D Game using Python’s turtle and Object-Oriented Programming appeared first on The Python Coding Book.


Brett Cannon: How virtual environments work

Planet Python - Sun, 2023-03-12 13:18

After needing to do a deep dive on the venv module (which I will explain later in this blog post as to why), I thought I would explain how virtual environments work to help demystify them.

Why do virtual environments exist?

Back in the day, there was no concept of environments in Python: all you had was your Python installation and the current directory. That meant when you installed something you either installed it globally into your Python interpreter or you just dumped it into the current directory. Both of these approaches had their drawbacks.

Installing globally meant you didn't have any isolation between your projects. This led to issues like version conflicts between what one of your projects might need compared to another one. It also meant you had no idea what requirements your project actually had since you had no way of actually testing your assumptions of what you needed. This was an issue if you needed to share your code with someone else as you didn't have a way to test that you weren't accidentally wrong about what your dependencies were.

Installing into your local directory didn't isolate your installs based on Python version or interpreter version (or even interpreter build type, back when you had to compile your extension modules differently for debug and release builds of Python). So while you could install everything into the same directory as your own code (which you did, and thus didn't use src directory layouts for simplicity), there wasn't a way to install different wheels for each Python interpreter you had on your machine so you could have multiple environments per project (I'm glossing over the fact that back in the day you also didn't have wheels or editable installs).

Enter virtual environments. Suddenly you had a way to install projects as a group that was tied to a specific Python interpreter. That got us the isolation/separation of only installing things you depend on (and being able to verify that through your testing), as well as having as many environments as you want to go with your projects (e.g. an environment for each version of Python that you support). So all sorts of wins! It's an important feature to have while doing development (which is why it can be rather frustrating for users when Python distributors leave venv out).

How do virtual environments work?

💡 Virtual environments are different than conda environments (in my opinion; some people disagree with me on this view). The key difference is that conda environments allow projects to install arbitrary shell scripts which are run when you activate a conda environment (which is done implicitly when you use conda run). This is why you are always expected to activate a conda environment, as some conda packages require that those shell scripts be run. I won't be covering conda environments in this post.

Their structure

There are two parts to virtual environments: their directories and their configuration file. As a running example, I'm going to assume you ran the command py -m venv --without-pip .venv in some directory on a Unix-based OS (you can substitute py with whatever Python interpreter you want, including the Python Launcher for Unix).

❗For simplicity I&aposm going to focus on the Unix case and not cover Windows in depth.

A virtual environment has 3 directories and potentially a symlink in the virtual environment directory (i.e. within .venv):

  1. bin (Scripts on Windows)
  2. include (Include on Windows)
  3. lib/pythonX.Y/site-packages where X.Y is the Python version (Lib/site-packages on Windows)
  4. lib64 symlinked to lib if you're using a 64-bit build of Python that's on a POSIX-based OS that's not macOS

The Python executable for the virtual environment ends up in bin as various symlinks back to the original interpreter (e.g. .venv/bin/python is a symlink; Windows has a different story). The site-packages directory is where projects get installed into the virtual environment (including pip if you choose to have it installed into the virtual environment). The include directory is for any header files that might get installed for some reason from a project. The lib64 symlink is for consistency on those Unix OSs where they have such directories.
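You can see this structure for yourself by creating a pip-less environment and listing its top-level contents. This sketch assumes a Unix-like OS, and the /tmp path is an arbitrary choice:

```python
# Create a pip-less virtual environment and list its top-level
# contents. Assumes a Unix-like OS; the /tmp path is illustrative.
import os
import subprocess
import sys

venv_dir = "/tmp/demo-venv"
subprocess.run(
    [sys.executable, "-m", "venv", "--without-pip", venv_dir],
    check=True,
)

contents = sorted(os.listdir(venv_dir))
print(contents)
# typically ['bin', 'include', 'lib', 'pyvenv.cfg'],
# plus 'lib64' on some Linux systems
```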

The configuration file is pyvenv.cfg and it lives at the top of your virtual environment directory (e.g. .venv/pyvenv.cfg). As of Python 3.11, it contains a few entries:

  1. home (the directory where the executable used to create the virtual environment lives; os.path.dirname(sys._base_executable))
  2. include-system-packages (should the global site-packages be included, effectively turning off isolation?)
  3. version (the Python version down to the micro version, but not with the release level, e.g. 3.12.0, but not 3.12.0a6)
  4. executable (the executable used to create the virtual environment; os.path.realpath(sys._base_executable))
  5. command (the CLI command that could have recreated the virtual environment)

On my machine, the pyvenv.cfg contents are:

home = /home/linuxbrew/.linuxbrew/opt/python@3.11/bin
include-system-site-packages = false
version = 3.11.2
executable = /home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.2_1/bin/python3.11
command = /home/linuxbrew/.linuxbrew/opt/python@3.11/bin/python3.11 -m venv --without-pip /tmp/.venvExample

One interesting thing to note is pyvenv.cfg is not a valid INI file according to the configparser module due to lacking any sections. To read fields in the file you are expected to use line.partition("=") and to strip the resulting key and value.
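A minimal reader following that convention might look like this (a sketch; the helper name is mine):

```python
def read_pyvenv_cfg(path):
    """Parse pyvenv.cfg into a dict; it is "key = value" lines with no sections."""
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, sep, value = line.partition("=")
            if sep:  # skip blank or malformed lines with no "="
                fields[key.strip()] = value.strip()
    return fields
```

This deliberately avoids configparser, which would reject the file for lacking a section header.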

And that's all there is to a virtual environment! When you don't install pip they are extremely fast to create: 3 directories, a symlink, and a single file. And they are simple enough that you can probably create one manually.

One point I would like to make is how virtual environments are designed to be disposable and not relocatable. Because of their simplicity, virtual environments are viewed as something you can throw away and recreate quickly (if it takes your OS a long time to create 3 directories, a symlink, and a file consisting of 292 bytes like on my machine, you have bigger problems to worry about than virtual environment relocation 😉). Unfortunately, people tend to conflate environment creation with package installation, when they are in fact two separate things. What projects you choose to install with which installer is actually separate from environment creation and probably influences your "getting started" time the most.

How Python uses a virtual environment

During start-up, Python automatically calls the site.main() function (unless you specify the -S flag). That function calls site.venv() which handles setting up your Python executable to use the virtual environment appropriately. Specifically, the site module:

  1. Looks for pyvenv.cfg in either the same or parent directory as the running executable (which is not resolved, so the location of the symlink is used)
  2. Looks for include-system-site-packages in pyvenv.cfg to decide whether the system site-packages ends up on sys.path
  3. Sets sys._home if home is found in pyvenv.cfg (sys._home is used by sysconfig)
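The first step of that search can be sketched as follows (my approximation of the logic, not the actual site module code):

```python
import os

def find_pyvenv_cfg(executable):
    # Check the directory containing the (unresolved) executable,
    # then its parent -- mirroring how site looks for pyvenv.cfg.
    exe_dir = os.path.dirname(executable)
    for directory in (exe_dir, os.path.dirname(exe_dir)):
        candidate = os.path.join(directory, "pyvenv.cfg")
        if os.path.isfile(candidate):
            return candidate
    return None
```

Because the symlink is not resolved first, .venv/bin/python finds .venv/pyvenv.cfg via the parent-directory check, even though the real interpreter lives elsewhere.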

That's it! It's a surprisingly simple mechanism for what it accomplishes.

One thing to notice here about how all of this works is that virtual environment activation is optional. Because the site module works off of the symlink to the executable in the virtual environment to resolve everything, activation is just a convenience. Honestly, all the activation scripts do is:

  1. Puts the bin/ (or Scripts/) directory at the front of your PATH environment variable
  2. Sets VIRTUAL_ENV to the directory containing your virtual environment
  3. Tweaks your shell prompt to let you know your PATH has been changed
  4. Registers a deactivate shell function which undoes the other steps

In the end, whether you type python after activation or .venv/bin/python makes no difference to Python. Some tooling like the Python extension for VS Code or the Python Launcher for Unix may check for VIRTUAL_ENV to pick up on your intent to use a virtual environment, but it doesn't influence Python itself.
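Those activation steps amount to a small environment-variable transformation, which can be modelled in Python (a sketch of the effect on POSIX, not the actual shell script):

```python
import os

def activated_env(venv_dir, environ):
    # What bin/activate effectively does to your shell environment:
    # prepend the venv's bin/ to PATH and record the venv location.
    env = dict(environ)
    env["PATH"] = os.path.join(venv_dir, "bin") + os.pathsep + env.get("PATH", "")
    env["VIRTUAL_ENV"] = venv_dir
    return env
```

deactivate simply restores the previous PATH and unsets VIRTUAL_ENV; no Python-side state is involved.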

Introducing microvenv

In the Python extension for VS Code, we have an issue where Python beginners end up on Debian or a Debian-based distro like Ubuntu and want to create a virtual environment. Because Debian removed venv from the default Python install, and beginners don't realize there is more to install than python3, they often fail at creating a virtual environment (at least initially, as you can install python3-venv separately; the next version of Debian will have a python3-full package which includes venv and pip, but it will probably take a while for all the instructions online to be updated to suggest that over python3). We believe the lack of venv is a problem, as beginners should be using environments, but asking them to install yet more software can be a barrier to getting started (I'm also ignoring the fact that pip isn't installed by default on Debian either, which further complicates the getting-started experience for beginners).

But venv is not shipped as a separate part of Python's stdlib, so we can't simply install it from PyPI somehow or easily ship it as part of the Python extension to work around this. Since venv is in the stdlib, it's developed along with the version of Python it ships with, so there's no single copy which is fully compatible with all maintained versions of Python (e.g. Python 3.11 added support for using sysconfig to get the directories to create for a virtual environment, various fields in pyvenv.cfg have been added over time, new language features may be used, etc.). While we could ship a copy of venv for every maintained version of Python, we would potentially have to ship for every micro release to guarantee we always had a working copy, and that's a lot of upstream tracking to do. And even if we only shipped copies from minor releases of Python, we would still have to track every micro release in case a bug in venv was fixed.

Hence I have created microvenv. It is a project which provides a single .py file which you use to create a minimal virtual environment. You can either execute it as a script or call its create() function, which is analogous to venv.create(). It's also compatible with all maintained versions of Python. As I (hopefully) showed above, creating a virtual environment is actually straightforward, so I was able to replicate the necessary bits in less than 100 lines of Python code (specifically 87 lines in the 2023.1.1 release). That actually makes it small enough to pass in via python -c, which means it could be embedded in a binary as a string constant and passed as an argument when executing a Python executable as a subprocess. Hopefully that means a tool could guarantee it can always construct a virtual environment somehow.
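As a sketch of how a tool might use it as a fallback only when venv is missing (assuming microvenv is importable and exposes the create() function described above):

```python
from importlib.util import find_spec

def create_environment(path):
    # Prefer the stdlib venv module; fall back to microvenv on
    # distros (e.g. Debian) where venv is not installed by default.
    if find_spec("venv") is not None:
        import venv
        venv.create(path, with_pip=False)
    else:
        import microvenv  # single-file third-party fallback
        microvenv.create(path)
```

Either branch leaves you with the same minimal structure: the three directories plus pyvenv.cfg.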

To keep microvenv simple, small, and maintainable, it does not contain any activation scripts. I personally don't want to be a shell script expert for multiple shells, nor do I want to track the upstream activation scripts (and they do change, in case you were thinking "it shouldn't be that hard to track"). Also, in VS Code we are actually working towards implicitly activating virtual environments by updating your environment variables directly instead of executing any activation shell scripts, so the shell scripts aren't needed for our use case (we are actively moving away from using any activation scripts where we can, as we have run into race condition problems with them when sending the command to the shell; thank goodness for conda run, but we also know people still want an activated terminal).

I'm also skipping Windows support because we have found the lack of venv to be a unique problem for Linux in general, and Debian-based distros specifically.

I honestly don't expect anyone except tool providers to use microvenv, but since it could be useful to others beyond VS Code, I decided it was worth releasing on its own. I also expect anyone using the project to only use it as a fallback when venv is not available (which you can deduce by running py -c "from importlib.util import find_spec; print(find_spec('venv') is not None)"). And before anyone asks why we don't just use virtualenv: its wheel is 8.7MB compared to microvenv at 3.9KB; 0.05% of the size, or 2175x smaller. Granted, a good chunk of what makes up virtualenv's wheel is probably from shipping pip and setuptools in the wheel for fast installation of those projects after virtual environment creation, but we also acknowledge our need for a small, portable, single-file virtual environment creator is rather niche and something virtualenv currently doesn't support (for good reason).

Our plan for the Python extension for VS Code is to use microvenv as a fallback mechanism for our Python: Create Environment command (FYI, we also plan to bootstrap pip via its pip.pyz file from bootstrap.pypa.io by downloading it on demand, which is luckily less than 2MB). That way we can start suggesting to users in various UX flows to create and use an environment when one isn't already being used (as appropriate, of course). We want beginners to learn about environments if they don't already know about them and also remind experienced users when they may have accidentally forgotten to create an environment for their workspace. That way people get the benefit of (virtual) environments with as little friction as possible.

Categories: FLOSS Project Planets

a2ps @ Savannah: a2ps 4.15.1 released [stable]

GNU Planet! - Sun, 2023-03-12 11:06


GNU a2ps is a filter which generates PostScript from various formats,
with pretty-printing features, strong support for many alphabets, and
customizable layout.

See https://www.gnu.org/software/a2ps/ for more information.

This is a bug-fix release. Users of 4.15 should upgrade. See below for more
details.


Here are the compressed sources and a GPG detached signature:
  https://ftpmirror.gnu.org/a2ps/a2ps-4.15.1.tar.gz
  https://ftpmirror.gnu.org/a2ps/a2ps-4.15.1.tar.gz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

8674b90626d6d1505af8b2ae392f2495b589a052  a2ps-4.15.1.tar.gz
l5dwi6AoBa/DtbkeBsuOrJe4WEOpDmbP3mp8Y8oEKyo  a2ps-4.15.1.tar.gz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.
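To compare that base64-encoded digest against the hex output of a tool like sha256sum, it can be re-encoded; for example, using the checksum above:

```python
import base64
import binascii

# The announcement's SHA256 checksum, base64-encoded without padding.
b64 = "l5dwi6AoBa/DtbkeBsuOrJe4WEOpDmbP3mp8Y8oEKyo"
digest = base64.b64decode(b64 + "=" * (-len(b64) % 4))  # restore the stripped padding
hex_digest = binascii.hexlify(digest).decode()
print(hex_digest)  # compare against: sha256sum a2ps-4.15.1.tar.gz
```

The padding restoration is needed because GNU announcements strip the trailing "=" characters from the base64 form.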

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify a2ps-4.15.1.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa2048 2013-12-11 [SC]
        2409 3F01 6FFE 8602 EF44  9BB8 4C8E F3DA 3FD3 7230
  uid   Reuben Thomas <rrt@sc3d.org>
  uid   keybase.io/rrt <rrt@keybase.io>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key rrt@sc3d.org

  gpg --recv-keys 4C8EF3DA3FD37230

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=a2ps&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify a2ps-4.15.1.tar.gz.sig


This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16.5
  Gnulib v0.1-5853-ge0aefd96b6

NEWS

* Noteworthy changes in release 4.15.1 (2023-03-12) [stable]
 * Bug fixes:
   - Use “grep -F” rather than obsolete fgrep.
   - Fix broken a2ps-lpr-wrapper script, and translate to sh for
     portability.


Categories: FLOSS Project Planets

Kushal Das: Everything Curl from Daniel

Planet Python - Sun, 2023-03-12 09:07

Everything Curl is a book about everything related to Curl. I had read the book online before, but now I am proud to have a physical copy signed by the author :)

It was a good evening, along with some amazing fish for dinner, wine, and lots of chat about life in general.

Categories: FLOSS Project Planets

Talk Python to Me: #406: Reimagining Python's Packaging Workflows

Planet Python - Sun, 2023-03-12 04:00
The great power of Python is its over 400,000 packages on PyPI to serve as building blocks for your app. How do you get those needed packages onto your dev machine and managed within your project? What about production and QA servers? I don't even know where to start if you're shipping built software to non-dev end users. There are many variations on how this works today. And where we should go from here has become a hot topic of discussion. So today, that's the topic for Talk Python. I have a great panel of guests: Steve Dower, Pradyun Gedam, Ofek Lev, and Paul Moore.

Links from the show:
  Python Packaging Strategy Discussion - Part 1: https://discuss.python.org/t/python-packaging-strategy-discussion-part-1/22420
  Thoughts on the Python packaging ecosystem: https://pradyunsg.me/blog/2023/01/21/thoughts-on-python-packaging/
  Python Packaging Authority: https://www.pypa.io/en/latest/
  Hatch: https://hatch.pypa.io/latest/
  Pyscript: https://pyscript.net
  Dark Matter Developers: The Unseen 99%: https://www.hanselman.com/blog/dark-matter-developers-the-unseen-99
  Watch this episode on YouTube: https://www.youtube.com/watch?v=z50B6AmQwLw
  Episode transcripts: https://talkpython.fm/episodes/transcript/406/reimagining-pythons-packaging-workflows

Sponsors:
  Cox Automotive: https://talkpython.fm/cox
  Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
  Talk Python Training: https://talkpython.fm/training
Categories: FLOSS Project Planets

ListenData: Complete Guide to Visual ChatGPT

Planet Python - Sat, 2023-03-11 23:26

In this post, we will talk about how to run Visual ChatGPT in Python with Google Colab. ChatGPT has garnered huge popularity recently due to its capability for human-style responses. As of now, it only provides responses in text format, which means it cannot process, generate or edit images. Microsoft recently released a solution to handle images. Now you can ask ChatGPT to generate or edit an image for you.

Demo of Visual ChatGPT

In the image below, you can see what the final output of Visual ChatGPT looks like.

Categories: FLOSS Project Planets

FreeBSD 12.3 EoL

Planet KDE - Sat, 2023-03-11 18:00

FreeBSD 12 is the “previous” series of releases. FreeBSD 13 is the “stable” series as of today (March 2023) and FreeBSD 14 is “current”, i.e. upcoming at some point. The major versions of FreeBSD tend to bring larger changes – such as a newer base compiler, or a sudden improvement in system header-file compatibility. The previous series releases use clang 13, for instance, while stable uses clang 14. FreeBSD 12 is poorly supported by the KDE-FreeBSD team, and that’s on purpose.

Chasing compiler versions, and more importantly, C++ standard library versions, is frustrating work. Maintaining backwards compatibility has real costs – especially when most of KDE development is done with some recent gcc on Linux.

Case in point is qxmpp, which does not build on FreeBSD 12.3, like so:

src/client/QXmppHttpFileSharingProvider.cpp:163:90: error: no viable constructor or deduction guide for deduction of template arguments of 'weak_ptr'
    QObject::connect(state->upload.get(), &QXmppHttpUpload::progressChanged, [stateRef = std::weak_ptr(state), reportProgress = std::move(reportProgress)]()

The error message is re-formatted a little for readability, but it comes down to a lambda with a binding not being able to deduce some type. Chasing that means building and re-building and applying some serious C++ knowledge to the qxmpp codebase.

Building the same code on FreeBSD 12.4 “just works”. The STL has been updated, and so we reach the situation that the cheapest (in terms of developer time on the KDE-FreeBSD team) and easiest way to deal with this problem is just to say:

FreeBSD 12.3 is no longer supported by the KDE-FreeBSD team. Update to 12.4 or 13.1 or later.

For what it’s worth, there are similar problems with STL versioning with libquotient (tuple constructor) and nheko (no concepts header). Right now, that means that we are actively declaring older OS releases as “unsupported” in order to keep the maintenance burden (particularly: developing new patches specifically for older FreeBSD + clang versions which upstream is unlikely to find interesting) low.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: pkgKitten 0.2.3 on CRAN: Minor Update

Planet Debian - Sat, 2023-03-11 13:35

A new release 0.2.3 of pkgKitten arrived on CRAN earlier, and will be uploaded to Debian. pkgKitten makes it simple to create new R packages via a simple function invocation. A wrapper kitten.r exists in the littler package to make it even easier.

This release improves the created ‘Description:’, and updated some of the continuous integration.

Changes in version 0.2.3 (2023-03-11)
  • Small improvement to generated Description: field and Title:

  • Maintenance for continuous integration setup

More details about the package are at the pkgKitten webpage, the pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

This week in KDE: Qt apps survive the Wayland compositor crashing

Planet KDE - Sat, 2023-03-11 00:07

Thanks to the heroic work of David Edmundson, Qt apps (including all KDE software) in Plasma 6 will now survive when the Wayland compositor crashes! This is huge! And work is ongoing to add this functionality to other common app toolkits, such as GTK.

Beyond that, Plasma 6 porting work continues, with more and more people using it daily. Not me yet, because I’m a scaredy-cat about this kind of instability and am waiting for it to converge a bit more. But hopefully soon! Meanwhile, check out what else happened:

New Features

Konsole now works on Windows! In addition to making it possible to potentially distribute the app on Windows, this means that Windows-distributed KDE apps that have an embedded Konsole view like Kate can now actually embed Konsole itself, instead of an inferior terminal view (Waqar Ahmed and Christoph Cullmann, Konsole and Kate 23.04. Link 1 and Link 2)

It’s now possible to configure the Kickoff Application Launcher to use a grid layout for everything, not just the Favorites view (Tanbir Jishan, Plasma 6.0. Link):

User Interface Improvements

Spectacle now always shows a notification when taking a background-mode rectangular region screenshot (with Meta+Shift+PrintScreen by default) and also no longer quits if its main window happened to be open while any such notification disappears (Noah Davis, Spectacle 23.04. Link 1 and link 2)

For new users (not existing users), the system will now sleep after 15 minutes of inactivity by default, and will generate correct power profiles for convertible laptops (Plasma 5.27.3, me: Nate Graham, Link 1 and link 2)

On Discover’s app pages, the rows of buttons now become columns for narrow windows or the mobile interface, and their layout is streamlined and improved as well (Emil Velikov, Plasma 5.27.3. Link):

Welcome Center now has a mobile-friendly layout. The content itself is still fairly desktop-focused, but this will soon change as well! (me: Nate Graham, Plasma 6.0. Link):

In the Plasma Wayland Session, the Ctrl+Alt+Scroll up/down shortcut that switches virtual desktops has been changed to Meta+Alt+Scroll up/down to avoid blocking app-specific shortcuts and also generally comply with the standard that global actions use the Meta key (me: Nate Graham, Plasma 6.0. Link)

Improved how the SDDM login screen works with a touchscreen in the Plasma Wayland session: touch input works at all, tapping the Virtual Keyboard button now opens it, and the keyboard layout list can now be scrolled with a swipe (Aleix Pol Gonzalez and me: Nate Graham, Plasma 5.27.3 and Frameworks 5.104. Link 1, link 2, and link 3)

Significant Bugfixes

(This is a curated list of e.g. HI and VHI priority bugs, Wayland showstoppers, major regressions, etc.)

KRuler now works properly on Wayland, and can now be moved or resized like on X11 (Shenleban Tongying, KRuler 23.04. Link)

Fixed another way the powerdevil power management subsystem could crash with certain multi-screen setups (Aleix Pol Gonzalez, Plasma 5.27.3. Link)

Fixes a way that apps could crash in the Plasma Wayland session when a display goes to sleep (Aleix Pol Gonzalez, Plasma 5.27.3. Link)

Night Color now works on ARM-powered devices that don’t support “Gamma LUTs” but do support “Color Transform Matrices”! It still doesn’t work on NVIDIA GPUs because they don’t support either of them (Vlad Zahorodnii, Plasma 5.27.3. Link)

Red and blue color channels are no longer sometimes swapped while screencasting in the Plasma Wayland session (Aleix Pol Gonzalez, Plasma 5.27.3. Link)

Image buttons in Breeze-themed GTK apps are now displayed correctly (Janet Blackquill, Plasma 5.27.3. Link)

Fixed two major crashes in Plasma related to actions that would show window thumbnails in the Task Manager (Fushan Wen, Frameworks 5.104. Link)

When using KDE apps outside of Plasma, they should no longer get a weird color scheme that interferes with the color scheme set by the target platform (Jan Grulich, Frameworks 5.105. Link)

Other bug-related information of interest:

Automation & Systematization

Re-organized https://develop.kde.org/docs to have a more straightforward structure and make it easier to find stuff (Carl Schwan. Link)

Wrote some documentation about KRunner metadata (Alexander Lohnau, Link)

Web Presence

https://kde.org/for/scientists is now live! This web page showcases the best KDE software to use for professional, technical, scientific purposes, including by NASA and Barcelona’s ALBA Synchrotron.

https://develop.kde.org/frameworks/kirigami has been overhauled to showcase the latest state-of-the-art in KDE’s Kirigami app development framework:

Thanks to Carl Schwan, the KDE Promo team, and many others for making these impactful changes happen!

…And everything else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

If you’re a user, upgrade to Plasma 5.27! If your distro doesn’t offer it and won’t anytime soon, consider switching to a different one that ships software closer to its developers’ schedules.

If you’re a developer, consider working on known Plasma 5.27 regressions! You might also want to check out our 15-Minute Bug Initiative. Working on these issues makes a big difference quickly!

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

And finally, KDE can’t work without financial support, so consider making a donation today! This stuff ain’t cheap and KDE e.V. has ambitious hiring goals. We can’t meet them without your generous donations!

Categories: FLOSS Project Planets

KDE Ships Frameworks 5.104.0

Planet KDE - Fri, 2023-03-10 19:00

Saturday, 11 March 2023

KDE today announces the release of KDE Frameworks 5.104.0.

KDE Frameworks are 83 addon libraries to Qt which provide a wide variety of commonly needed functionality in mature, peer reviewed and well tested libraries with friendly licensing terms. For an introduction see the KDE Frameworks release announcement.

This release is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.

New in this version Baloo
  • extractor: add KAboutData
Breeze Icons
  • Add draw-number
Extra CMake Modules
  • Load translations for application-specific language also on Win and Mac (bug 464694)
  • ECMGenerateExportHeader: fix duplicated addition of deprecation macros code
  • Find wayland.xml from wayland-scanner.pc
KConfig
  • Don't include screen connector names in screen position/size data (bug 460260)
  • Fix multimonitor window size restoration (bug 460260)
  • Sort connector names for multi-screen size/position keys (bug 460260)
KConfigWidgets
  • KConfigDialogManager: Fix logs formatting
KCoreAddons
  • Deprecate KPluginMetaData::initialPreference
  • Convert BugReportUrl in desktoptojson (bug 464600)
  • exportUrlsToPortal: stop fusing remote urls (bug 457529)
  • Show deprecation warning about desktoptojson tool
KDeclarative
  • Guard nullable property access, and bind instead of assigning once
  • AbstractKCM: Rewrite padding expressions to be more readable
  • Add import aliases, bump internal import versions as needed
  • Drop unused QML imports
  • [managedconfigmodule] Fix deprecation comments
  • [configmodule] Deprecate constructor without metadata
  • [configmodule] Deprecate setAboutData
KDocTools
  • Install version header
KFileMetaData
  • Mobi extractor: only extract what is asked (bug 465006)
KGlobalAccel
  • Skip reloading global registry settings instead of asserting
KHolidays
  • Add holidays for Dominican Republic (bug 324683)
  • Kf5 add cuba holidays (bug 461282)
  • holidayregion variable 'regionCode' shadows outer function
KI18n
  • KI18nLocaleData target: add include dir for version header to interface
  • Load translations for application-specific language also on Win and Mac (bug 464694)
KIconThemes
  • Properly mark panel icon group as deprecated
  • Deprecate KIconLoader overloads in KIconButton and KIconDialog
KIdleTime
  • wayland: Guard wayland object destructors (bug 465801)
KIO
  • DeleteOrTrashJob: when trashing a file in trash:/ delete it instead (bug 459545)
  • Set bug report URL for Windows Shares KCM (bug 464600)
  • OpenFileManagerWindowJob: fix opening multiple instances under Wayland [KF5] (bug 463931)
  • Add missing URLs in KCMs for reporting bugs (bug 464600)
  • kshorturifilter: return directly if cmd is empty
  • [kprocessrunner] Use aliased desktop file name for xdg-activation
Kirigami
  • Dialog: Don't let user interact with footer during transitions
  • For styling and recoloring, use down property instead of pressed
  • Fix mistyping of Kirigami.Settings.isMobile
KItemModels
  • KDescendantProxyModel: Do not remove indexes from mapping before announcing the removal
KNewStuff
  • DownloadItemsSheet: Fix scrolling (bug 448800)
KPackage Framework
  • Check pluginId contains '/' before using it as package type (bug 449727)
KPeople
  • Install version header
KRunner
  • KF5KRunnerMacros: Add compat code and warning for in KF6 renamed configure_krunner_test macro
KService
  • Fix deprecation ifdef
  • Deprecate KService::serviceTypes and KService::hasServiceType
  • application: Add X-SnapInstanceName
  • Add method to query supported protocols for a service
KTextEditor
  • Improve cstyle performance (bug 466531)
  • Improve performance of rendering spaces with dyn wrap disabled (bug 465841)
  • documentSaveCopyAs: Use async job api (bug 466571)
  • Optimize rendering spaces with dyn wrapping (bug 465841)
KWindowSystem
  • Remove extra semicolon
  • Deprecated KWindowSystem::allowExternalProcessWindowActivation
  • [kstartupinfo] Deprecate setWindowStartupId
  • [kstartupinfo] Deprecate KStartupInfo::currentStartupIdEnv
  • [kstartupinfo] Fix API docs for currentStartupIdEnv
NetworkManagerQt
  • settings: fix -Wlto-type-mismatch in NetworkManager::checkVersion decl
Prison
  • KPrisonScanner target: add include dir for version header to interface
Purpose
  • Place Purpose::Menu headers into C++ namespace subdir, w/ compat headers
QQC2StyleBridge
  • ProgressBar: Pause indeterminate animation when invisible
  • Added flat combobox without outline unless hovered
  • TextField: Fix password-protection code from affecting normal text fields (bug 453828)
  • Drawer: Fix RTL by copying sizing code from upstream Default style
  • Drawer: Use simpler sizing expressions from upstream Default style
  • Don't check for selectByMouse on a non-existent root for TextArea
  • use again the palette coming from Kirigami.Theme (bug 465054)
  • Only enable TextArea context menu when able to select by mouse
Security information

The released code has been GPG-signed using the following key:

  pub   rsa2048/58D0EE648A48B3BB 2016-09-05
        David Faure <faure@kde.org>
        Primary key fingerprint: 53E6 B47B 45CE A3E0 D5B7 4577 58D0 EE64 8A48 B3BB

Categories: FLOSS Project Planets

Matt Layman: Learn Django or Ruby on Rails?

Planet Python - Fri, 2023-03-10 19:00
I got a question from a patron on Patreon. The question is a common one, so I thought I’d share it along with my response. Was there a reason why you picked the Django for your web development? Did you consider RoR? Does it matter which stack I use at the end of the day? My primary reason for getting into Django was twofold. First, I’ve been working with Python for a long time and have a lot of comfort with the language.
Categories: FLOSS Project Planets

KDE Gear 23.04 branches created

Planet KDE - Fri, 2023-03-10 15:27

Make sure you commit anything you want to end up in the KDE Gear 23.04 releases to them

We're already past the dependency freeze.

The Feature Freeze and Beta is next week Thursday 16 of March.

More interesting dates  
  March 30: 23.04 RC (23.03.90) Tagging and Release
  April 13: 23.04 Tagging
  April 20: 23.04 Release

https://community.kde.org/Schedules/KDE_Gear_23.04_Schedule

Categories: FLOSS Project Planets

CodersLegacy: Python Tkinter Project with MySQL Database

Planet Python - Fri, 2023-03-10 14:34

In this Python Project, we will be discussing how to integrate a MySQL Database into our Tkinter application.

Why do we need a MySQL Database?

But first, let us discuss “why” we need a MySQL Database. The need for a database arises when we need to permanently store data somewhere. Larger applications, such as Reporting software, Graphing applications, etc. need some place to store relevant data, from where it can be retrieved at a later date.

A common alternative is the use of “text files” to store data. Although the text file approach is simpler (in the short term), databases have several advantages.

  1. Scalability
  2. Performance
  3. Security
  4. Backups

However, databases might be a bit overkill for simpler applications. It’s really a case-by-case thing, where you need to evaluate which would be more suitable for your application.

Let’s begin with the tutorial.

Pre-requisites

We are assuming you already have a working MySQL installation setup. If not, kindly complete that step before attempting to use any of the code in this tutorial. IT IS IMPORTANT THAT YOU REMEMBER THE USERNAME AND PASSWORD YOU USED DURING ITS INSTALLATION. DON’T FORGET.

Once you have installed MySQL, the next thing we need to do is install the “mysql-connector-python” library in Python.

pip install mysql-connector-python

This library will allow us to connect with the MySQL database, from our Python code, and execute SQL queries.

Initializing a MySQL Connection for our Tkinter Project

Since this is a larger application, I will take a more structured approach than I usually do. We will be creating two files, one called database.py, and one called UI.py. First, we will define some basic functions inside the database.py file, before we proceed to the UI.py file.

Here is the first function, used to initialize a connection to the MySQL Database.

import mysql.connector

def initialize_connection():
    conn = mysql.connector.connect(
        host = "localhost",
        user = "root",
        password = "db1963"
    )
    cursor = conn.cursor()
    return conn, cursor

Here we call the “connect” method with three arguments – host, user, and password.

The “host” parameter specifies the location of the MySQL database server. In this code, the location is set to “localhost”, which means that the MySQL server is running on the same machine where the Python code is being executed.

The “user” parameter specifies the username that is used to log in to the MySQL database. The “password” parameter is used to specify the password associated with the user account. Enter your own username and password here, which you used to setup your MySQL database.

After the connection object is created, the next line of code creates a cursor object. The cursor object is used to execute SQL queries and interact with the MySQL database. We then return both the cursor object, and the connection object from this function.

We will later call this function from the UI.py file.

Creating a MySQL Database

Although we have initialized a MySQL connection, we still need to create a Database for our project. There is already a default database created, called “sys”, but we will be creating a new one called “tutorial” for this application.

The below code creates a new database in a safe manner. It first checks to ensure that there already isn’t a database called “tutorial”. If it does not exist, it will execute the “CREATE DATABASE tutorial” command to create it.

def create_database(cursor):
    cursor.execute("SHOW DATABASES")
    temp = cursor.fetchall()
    databases = [item[0] for item in temp]

    if "tutorial" not in databases:
        cursor.execute("CREATE DATABASE tutorial")

    cursor.execute("USE tutorial")

At the very end it calls the “USE tutorial” command, to switch from the default database to the new one. This line must be called regardless of whether our database exists already or not.
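As an aside, MySQL also supports an IF NOT EXISTS clause, which makes the SHOW DATABASES round-trip unnecessary. A sketch of the same function using it (assuming the same "tutorial" database name):

```python
def create_database(cursor):
    # CREATE DATABASE IF NOT EXISTS is a no-op when the database
    # already exists, so no existence check is needed beforehand.
    cursor.execute("CREATE DATABASE IF NOT EXISTS tutorial")
    cursor.execute("USE tutorial")
```

Both versions behave the same; this one simply pushes the check into the SQL statement itself.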

We still aren’t done with our setup.

Next, we have to create a “Table”. A Table is basically where we are going to be storing data for a certain entity/purpose. In larger applications, a database consists of multiple tables. For example, in a game, there might be a table for “Players”, one table for “NPCs”, one for “Enemies”, one for “Quests”, and so on.

In our application, we will be storing the records for Users, so we will only have one table, for “Users”. The below code creates this table, in the same manner as we created the database.

def create_table(cursor):
    cursor.execute("SHOW TABLES")
    temp = cursor.fetchall()
    tables = [item[0] for item in temp]

    if "users" not in tables:
        cursor.execute("""CREATE TABLE users(
            id INT AUTO_INCREMENT PRIMARY KEY,
            firstName VARCHAR(100),
            lastName VARCHAR(100),
            password VARCHAR(30),
            email VARCHAR(100) UNIQUE,
            gender VARCHAR(1),
            age INT,
            address VARCHAR(200)
        )""")

The above code is a little bit SQL-heavy due to the large table, with several fields. Each field has a name, a datatype, and some optional constraints. We have used three such constraints here. First we have “PRIMARY KEY” which designates the “id” field as a unique identifier we can use to query a specific user.

Likewise, we declare “email” as a unique column. These constraints will automatically prevent duplicates from being entered. We also have AUTO_INCREMENT for “id”, which means we will not be specifying an id ourselves; it will be auto-generated in numeric sequence.

We can also define lengths for each text-based field. The password has a limit of 30, names and email have a limit of 100, and gender has a limit of 1 character.

Finally after all this, we need to call these two methods somewhere.

We will place them both into the initialize_connection() function we created earlier.

def initialize_connection():
    conn = mysql.connector.connect(
        host = "localhost",
        user = "root",
        password = "db1963"
    )
    cursor = conn.cursor()
    create_database(cursor)
    create_table(cursor)
    return conn, cursor

Our setup is now complete!

Creating the Tkinter Project

The below code features a small Tkinter project we put together, which we will integrate the MySQL Database code into. There is nothing extra in the application, just a register and login window. The main window is blank, as it doesn’t really have anything to do with this tutorial. Our only concern is registering a user into our database, and logging them in later with their credentials.

Most of this is just standard tkinter code, so take your time going through it. The only things of note right now are the Login and Register windows, along with the imports we made from the database.py file (second line). We called the initialize_connection() method, which returned the connection and cursor object. We will be needing these later, when calling the login and register functions.

(Screenshot of the application shown after the below code)

import tkinter as tk
from database import *

conn, cursor = initialize_connection()

def center_window(width, height):
    x = (root.winfo_screenwidth() // 2) - (width // 2)
    y = (root.winfo_screenheight() // 2) - (height // 2)
    root.geometry(f'{width}x{height}+{x}+{y}')

class WelcomeWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        self.master.title("Welcome")
        center_window(240, 120)

        login_button = tk.Button(self, text="Login", width=10,
                                 command=self.open_login_window)
        login_button.pack(padx=20, pady=(20, 10))

        register_button = tk.Button(self, text="Register", width=10,
                                    command=self.open_register_window)
        register_button.pack(pady=10)
        self.pack()

    def open_login_window(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        LoginWindow(self.master)

    def open_register_window(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        RegisterWindow(self.master)

class LoginWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        self.master.title("Login")
        self.master.resizable(False, False)
        center_window(240, 150)

        tk.Label(self, text="Username:").grid(row=0, column=0)
        self.username_entry = tk.Entry(self)
        self.username_entry.grid(row=0, column=1, padx=10, pady=10)

        tk.Label(self, text="Password:").grid(row=1, column=0)
        self.password_entry = tk.Entry(self, show="*")
        self.password_entry.grid(row=1, column=1, padx=10, pady=10)

        submit_button = tk.Button(self, text="Submit", width=8, command=self.submit)
        submit_button.grid(row=2, column=1, sticky="e", padx=10, pady=(10, 0))

        back_button = tk.Button(self, text="Back", width=8, command=self.back)
        back_button.grid(row=2, column=0, sticky="w", padx=10, pady=(10, 0))
        self.pack()

    def submit(self):
        pass

    def back(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        WelcomeWindow(self.master)

class RegisterWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        self.master.title("Register")
        self.master.resizable(False, False)
        center_window(320, 350)

        tk.Label(self, text="First Name:").grid(row=0, column=0, sticky="w")
        self.first_name_entry = tk.Entry(self, width=26)
        self.first_name_entry.grid(row=0, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Last Name:").grid(row=1, column=0, sticky="w")
        self.last_name_entry = tk.Entry(self, width=26)
        self.last_name_entry.grid(row=1, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Password:").grid(row=2, column=0, sticky="w")
        self.password_entry = tk.Entry(self, show="*", width=26)
        self.password_entry.grid(row=2, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Email:").grid(row=3, column=0, sticky="w")
        self.email_entry = tk.Entry(self, width=26)
        self.email_entry.grid(row=3, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Gender:").grid(row=4, column=0, sticky="w")
        self.gender_entry = tk.Entry(self, width=10)
        self.gender_entry.grid(row=4, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Age:").grid(row=5, column=0, sticky="w")
        self.age_entry = tk.Entry(self, width=10)
        self.age_entry.grid(row=5, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Address:").grid(row=6, column=0, sticky="w")
        self.address_entry = tk.Text(self, width=20, height=3)
        self.address_entry.grid(row=6, column=1, padx=10, pady=10, sticky="e")

        submit_button = tk.Button(self, text="Submit", width=8, command=self.submit)
        submit_button.grid(row=7, column=1, padx=10, pady=10, sticky="e")

        back_button = tk.Button(self, text="Back", width=8, command=self.back)
        back_button.grid(row=7, column=0, sticky="w", padx=10, pady=(10, 10))
        self.pack()

    def submit(self):
        pass

    def back(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        WelcomeWindow(self.master)

class MainWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        center_window(600, 400)
        self.pack()

root = tk.Tk()
root.eval('tk::PlaceWindow . center')
WelcomeWindow(root)
root.mainloop()

Welcome Page:

Login Page:

Register Page:

It is important to note that in the above code, the “submit” functions for both the Login and Register windows are empty. We will be developing them in the next two sections.

Registering a User

Let’s implement the registration functionality into our code. The first thing we need to do is extract the required data from our tkinter widgets (in the register window) and then pass this data to a “register” function in our database.py file, which we will create soon.

Here is the submit function in the Register window that we have filled out. We just extract all the data into a dictionary, then pass this into a register function, along with the connection and cursor we created earlier with the initialize_connection() function.

def submit(self):
    data = {}
    data["firstName"] = self.first_name_entry.get()
    data["lastName"] = self.last_name_entry.get()
    data["password"] = self.password_entry.get()
    data["email"] = self.email_entry.get()
    data["gender"] = self.gender_entry.get()
    data["age"] = self.age_entry.get()
    data["address"] = self.address_entry.get(1.0, tk.END)

    register(cursor, conn, data)

Next, we will implement this register function in the database.py file.

We have made use of string formatting here to insert all the values from the dictionary into a SQL INSERT INTO command, which inserts values into a Table. The values must be in the same order as the columns were defined in the table. (The first value is NULL, because that is the id, which will be assigned automatically.)

def register(cursor, conn, data):
    cursor.execute(f"""INSERT INTO users values(
        NULL,
        '{data["firstName"]}',
        '{data["lastName"]}',
        '{data["password"]}',
        '{data["email"]}',
        '{data["gender"]}',
        '{data["age"]}',
        '{data["address"]}'
    )""")
    conn.commit()

Lastly, we need to call the “commit” method, which basically implements the change we made. Think of it like “saving” changes to the database. The concept of commits is important when it comes to transaction control, and recovery in databases, which is a whole separate topic.
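Two caveats about the register function are worth noting: interpolating user input into SQL with f-strings is vulnerable to SQL injection, and a failed INSERT (for example, a duplicate email hitting the UNIQUE constraint) leaves the transaction dangling. A safer variant, sketched with mysql-connector's parameterized queries and a rollback on error, might look like this:

```python
def register(cursor, conn, data):
    # %s placeholders let the connector escape each value itself,
    # which protects against SQL injection via the form fields.
    query = ("INSERT INTO users VALUES "
             "(NULL, %s, %s, %s, %s, %s, %s, %s)")
    values = (data["firstName"], data["lastName"], data["password"],
              data["email"], data["gender"], data["age"], data["address"])
    try:
        cursor.execute(query, values)
        conn.commit()
    except Exception:
        # Undo the partial change, e.g. when the UNIQUE email
        # constraint rejects a duplicate registration.
        conn.rollback()
        raise
```

This is a drop-in replacement for the version above; the placeholder syntax is the one mysql-connector-python documents for prepared parameters.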

Our register function is now complete! We can begin registering users, but let’s wait until we have implemented the login functionality.

Logging a User in

Next, we will work on the two functions for login functionality, the submit() function inside the Login Window, and login() function inside the database.py file.

Here is the submit() function. We haven’t defined the login() function yet, but we have already decided beforehand that it will be returning either True, or False, depending on whether the login was successful or not.

def submit(self):
    data = {}
    data["email"] = self.username_entry.get()
    data["password"] = self.password_entry.get()

    if login(cursor, data) == True:
        print("successful login")
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        MainWindow(self.master)
    else:
        print("unsuccessful login")

Here is the login() function, where we execute a “SELECT” command by filtering out users based on the given “email” and “password” using the “WHERE” clause. If the login was successful, we destroy the current window, and open the main window.

def login(cursor, data):
    cursor.execute(f"""SELECT * FROM users
                       WHERE email = '{data["email"]}'
                       AND password = '{data["password"]}' """)

    if cursor.fetchone() != None:
        return True
    return False

We then return a True or False value, depending on whether a record was successfully returned or not. A “None” result means no records were found.

We decided to use “email” as the “username”, since it was a unique field. It’s not a good idea to expect users to remember a numeric id as their login, and no other field is unique enough to be used as a username. You may wish to create a separate “username” field during registration if you want.
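One more security note: this tutorial stores passwords in plain text for simplicity, which you should avoid outside a toy project; store only a salted hash. A minimal sketch with Python's standard library (hash_password and verify_password are illustrative helper names, and the salt size and iteration count are illustrative choices):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed hash tables.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + ":" + digest.hex()

def verify_password(password, stored):
    # Re-hash the candidate password with the stored salt and compare.
    salt_hex, _ = stored.split(":")
    return hash_password(password, bytes.fromhex(salt_hex)) == stored
```

Note that the resulting string is 97 characters long, so the password column in the users table would need to grow well beyond VARCHAR(30) to hold it.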

Testing our Tkinter Project + MySQL Database

Let’s fire up our application now and try it out. The first thing to do is go and register a new user, in the register window.

Next, I will hit submit to register the user, then navigate to the login page and try to log in. If the login succeeds, we should be redirected to the main window.

Here is the complete code so that you can try it out for yourself.

The UI.py file.

import tkinter as tk
from database import *

conn, cursor = initialize_connection()

def center_window(width, height):
    x = (root.winfo_screenwidth() // 2) - (width // 2)
    y = (root.winfo_screenheight() // 2) - (height // 2)
    root.geometry(f'{width}x{height}+{x}+{y}')

class WelcomeWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        self.master.title("Welcome")
        center_window(240, 120)

        login_button = tk.Button(self, text="Login", width=10,
                                 command=self.open_login_window)
        login_button.pack(padx=20, pady=(20, 10))

        register_button = tk.Button(self, text="Register", width=10,
                                    command=self.open_register_window)
        register_button.pack(pady=10)
        self.pack()

    def open_login_window(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        LoginWindow(self.master)

    def open_register_window(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        RegisterWindow(self.master)

class LoginWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        self.master.title("Login")
        self.master.resizable(False, False)
        center_window(240, 150)

        tk.Label(self, text="Username:").grid(row=0, column=0)
        self.username_entry = tk.Entry(self)
        self.username_entry.grid(row=0, column=1, padx=10, pady=10)

        tk.Label(self, text="Password:").grid(row=1, column=0)
        self.password_entry = tk.Entry(self, show="*")
        self.password_entry.grid(row=1, column=1, padx=10, pady=10)

        submit_button = tk.Button(self, text="Submit", width=8, command=self.submit)
        submit_button.grid(row=2, column=1, sticky="e", padx=10, pady=(10, 0))

        back_button = tk.Button(self, text="Back", width=8, command=self.back)
        back_button.grid(row=2, column=0, sticky="w", padx=10, pady=(10, 0))
        self.pack()

    def submit(self):
        data = {}
        data["email"] = self.username_entry.get()
        data["password"] = self.password_entry.get()

        if login(cursor, data) == True:
            print("successful login")
            for widget in self.winfo_children():
                widget.destroy()
            self.destroy()
            MainWindow(self.master)
        else:
            print("unsuccessful login")

    def back(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        WelcomeWindow(self.master)

class RegisterWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        self.master.title("Register")
        self.master.resizable(False, False)
        center_window(320, 350)

        tk.Label(self, text="First Name:").grid(row=0, column=0, sticky="w")
        self.first_name_entry = tk.Entry(self, width=26)
        self.first_name_entry.grid(row=0, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Last Name:").grid(row=1, column=0, sticky="w")
        self.last_name_entry = tk.Entry(self, width=26)
        self.last_name_entry.grid(row=1, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Password:").grid(row=2, column=0, sticky="w")
        self.password_entry = tk.Entry(self, show="*", width=26)
        self.password_entry.grid(row=2, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Email:").grid(row=3, column=0, sticky="w")
        self.email_entry = tk.Entry(self, width=26)
        self.email_entry.grid(row=3, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Gender:").grid(row=4, column=0, sticky="w")
        self.gender_entry = tk.Entry(self, width=10)
        self.gender_entry.grid(row=4, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Age:").grid(row=5, column=0, sticky="w")
        self.age_entry = tk.Entry(self, width=10)
        self.age_entry.grid(row=5, column=1, padx=10, pady=10, sticky="e")

        tk.Label(self, text="Address:").grid(row=6, column=0, sticky="w")
        self.address_entry = tk.Text(self, width=20, height=3)
        self.address_entry.grid(row=6, column=1, padx=10, pady=10, sticky="e")

        submit_button = tk.Button(self, text="Submit", width=8, command=self.submit)
        submit_button.grid(row=7, column=1, padx=10, pady=10, sticky="e")

        back_button = tk.Button(self, text="Back", width=8, command=self.back)
        back_button.grid(row=7, column=0, sticky="w", padx=10, pady=(10, 10))
        self.pack()

    def submit(self):
        data = {}
        data["firstName"] = self.first_name_entry.get()
        data["lastName"] = self.last_name_entry.get()
        data["password"] = self.password_entry.get()
        data["email"] = self.email_entry.get()
        data["gender"] = self.gender_entry.get()
        data["age"] = self.age_entry.get()
        data["address"] = self.address_entry.get(1.0, tk.END)

        register(cursor, conn, data)

    def back(self):
        for widget in self.winfo_children():
            widget.destroy()
        self.destroy()
        WelcomeWindow(self.master)

class MainWindow(tk.Frame):
    def __init__(self, master):
        super().__init__()
        self.master = master
        center_window(600, 400)
        self.pack()

root = tk.Tk()
root.eval('tk::PlaceWindow . center')
WelcomeWindow(root)
root.mainloop()

The database.py file.

import mysql.connector

def initialize_connection():
    conn = mysql.connector.connect(
        host = "localhost",
        user = "root",
        password = "db1963"
    )
    cursor = conn.cursor()
    create_database(cursor)
    create_table(cursor)
    return conn, cursor

def create_database(cursor):
    cursor.execute("SHOW DATABASES")
    temp = cursor.fetchall()
    databases = [item[0] for item in temp]

    if "tutorial" not in databases:
        cursor.execute("CREATE DATABASE tutorial")

    cursor.execute("USE tutorial")

def create_table(cursor):
    cursor.execute("SHOW TABLES")
    temp = cursor.fetchall()
    tables = [item[0] for item in temp]

    if "users" not in tables:
        cursor.execute("""CREATE TABLE users(
            id INT AUTO_INCREMENT PRIMARY KEY,
            firstName VARCHAR(100),
            lastName VARCHAR(100),
            password VARCHAR(30),
            email VARCHAR(100) UNIQUE,
            gender VARCHAR(1),
            age INT,
            address VARCHAR(200)
        )""")

def login(cursor, data):
    cursor.execute(f"""SELECT * FROM users
                       WHERE email = '{data["email"]}'
                       AND password = '{data["password"]}' """)

    if cursor.fetchone() != None:
        return True
    return False

def register(cursor, conn, data):
    print(data)
    cursor.execute(f"""INSERT INTO users values(
        NULL,
        '{data["firstName"]}',
        '{data["lastName"]}',
        '{data["password"]}',
        '{data["email"]}',
        '{data["gender"]}',
        '{data["age"]}',
        '{data["address"]}'
    )""")
    conn.commit()

This marks the end of the Python Tkinter Project with MySQL Database Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comments section below.

The post Python Tkinter Project with MySQL Database appeared first on CodersLegacy.

Categories: FLOSS Project Planets

Anarcat: how to audit for open services with iproute2

Planet Python - Fri, 2023-03-10 14:16

The computer world has a tendency of reinventing the wheel once in a while. I am not a fan of that process, but sometimes I just have to bite the bullet and adapt to change. This post explains how I adapted to one particular change: the netstat to sockstat transition.

I used to do this to show which processes were listening on which port on a server:

netstat -anpe

It was a handy mnemonic as, in France, ANPE was the agency responsible for the unemployed (basically). That would list all sockets (-a), not resolve hostnames (-n, because it's slow), and show processes attached to the socket (-p) with extra info like the user (-e). This still works, but sometimes fails to find the actual process hooked to the port. Plus, it lists a whole bunch of UNIX sockets and non-listening sockets, which are generally irrelevant for such an audit.

What I really wanted to use was really something like:

netstat -pleunt | sort

... which has the "pleut" mnemonic ("rains", but plural, which makes no sense and would be badly spelled anyway). That also only lists listening (-l) and network sockets, specifically UDP (-u) and TCP (-t).

But enough with the legacy, let's try the brave new world of sockstat which has the unfortunate acronym ss.

The equivalent sockstat command to the above is:

ss -pleuntO

It's similar to the above, except we need the -O flag otherwise ss does that confusing thing where it splits the output on multiple lines. But I actually use:

ss -pluntO

... i.e. without the -e as the information it gives (cgroup, fd number, etc) is not much more useful than what's already provided with -p (service and UID).

All of the above also show sockets that are not actually a concern because they only listen on localhost. Those should be filtered out. So now we embark on that wild filtering ride.

This is going to list all open sockets and show the port number and service:

ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu

For example on my desktop, it looks like:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu
[::]:*  users:(("unbound",pid=1864))
22      users:(("sshd",pid=1830))
25      users:(("master",pid=3150))
53      users:(("unbound",pid=1864))
323     users:(("chronyd",pid=1876))
500     users:(("charon",pid=2817))
631     users:(("cups-browsed",pid=2744))
2628    users:(("dictd",pid=2825))
4001    users:(("emacs",pid=3578))
4500    users:(("charon",pid=2817))
5353    users:(("avahi-daemon",pid=1423))
6600    users:(("systemd",pid=3461))
8384    users:(("syncthing",pid=232169))
9050    users:(("tor",pid=2857))
21027   users:(("syncthing",pid=232169))
22000   users:(("syncthing",pid=232169))
33231   users:(("syncthing",pid=232169))
34953   users:(("syncthing",pid=232169))
35770   users:(("syncthing",pid=232169))
44944   users:(("syncthing",pid=232169))
47337   users:(("syncthing",pid=232169))
48903   users:(("mosh-client",pid=234126))
52774   users:(("syncthing",pid=232169))
52938   users:(("avahi-daemon",pid=1423))
54029   users:(("avahi-daemon",pid=1423))
anarcat@angela:~$

But that doesn't filter out the localhost stuff, so there are lots of false positives (like emacs, above). And this is where it gets... not fun, as you need to match "localhost" but we don't resolve names, so you need to do some fancy pattern matching:

ss -pluntO --no-header | \
    sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | \
    grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -e '^172\.' | \
    sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | \
    sort -gu

This is kind of horrible, but it works, those are the actually open ports on my machine:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -e '^172\.' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu
22      users:(("sshd",pid=1830))
500     users:(("charon",pid=2817))
631     users:(("cups-browsed",pid=2744))
4500    users:(("charon",pid=2817))
5353    users:(("avahi-daemon",pid=1423))
6600    users:(("systemd",pid=3461))
21027   users:(("syncthing",pid=232169))
22000   users:(("syncthing",pid=232169))
34953   users:(("syncthing",pid=232169))
35770   users:(("syncthing",pid=232169))
48903   users:(("mosh-client",pid=234126))
52938   users:(("avahi-daemon",pid=1423))
54029   users:(("avahi-daemon",pid=1423))

Surely there must be a better way. It turns out that lsof can do some of this, and it's relatively straightforward. This lists all listening TCP sockets:

lsof -iTCP -sTCP:LISTEN +c 15 | grep -v localhost | sort

In theory, this would do the equivalent on UDP

lsof -iUDP -sUDP:^Idle

... but in reality, it looks like lsof on Linux can't figure out the state of a UDP socket:

lsof: no UDP state names available: UDP:^Idle

... which, honestly, I'm baffled by. It's strange because ss can figure out the state of those sockets, heck it's how -l vs -a works after all. So we need something else to show listening UDP sockets.

The following actually looks pretty good after all:

ss -pluO

That will list localhost sockets of course, so we can explicitly ask ss to resolve those and filter them out with something like:

ss -plurO | grep -v localhost

Oh, and look here! ss supports pattern matching, so we can actually tell it to ignore localhost directly, which removes that horrible sed line we used earlier:

ss -pluntO '! ( src = localhost )'

That actually gives a pretty readable output. One annoyance is we can't really modify the columns here, so we still need some god-awful sed hacking on top of that to get a cleaner output:

ss -nplutO '! ( src = localhost )' | \
    sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | \
    sort -nu

That looks horrible and is basically impossible to memorize. But it sure looks nice:

anarcat@angela:~$ sudo ss -nplutO '! ( src = localhost )' | sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | sort -nu
Port    Netid   Process
22      tcp     sshd
500     udp     charon
546     udp     NetworkManager
631     udp     cups-browsed
4500    udp     charon
5353    udp     avahi-daemon
6600    tcp     systemd
21027   udp     syncthing
22000   udp     syncthing
34953   udp     syncthing
35770   udp     syncthing
48903   udp     mosh-client
52938   udp     avahi-daemon
54029   udp     avahi-daemon

Better ideas welcome.
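One candidate idea: move the field extraction out of sed and into Python. This sketch (parse_ss and audit are illustrative names) parses the output of `ss -Hplunt` (the -H flag is ss's --no-header) and prints port, protocol, and process, skipping loopback listeners:

```python
import re
import subprocess

def parse_ss(text):
    """Extract sorted (port, netid, process) tuples from `ss -Hplunt` output."""
    rows = set()
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue
        # Columns: Netid State Recv-Q Send-Q Local:Port Peer:Port [Process]
        netid, local = fields[0], fields[4]
        addr, _, port = local.rpartition(":")
        if not port.isdigit():
            continue
        # Skip loopback listeners; stripping brackets handles IPv6 forms.
        if addr.strip("[]") in ("127.0.0.1", "::1", "localhost"):
            continue
        m = re.search(r'users:\(\("([^"]+)"', line)
        rows.add((int(port), netid, m.group(1) if m else "?"))
    return sorted(rows)

def audit():
    # Run as root to see process names for other users' sockets.
    out = subprocess.run(["ss", "-Hplunt"], capture_output=True, text=True)
    for port, netid, proc in parse_ss(out.stdout):
        print(f"{port}\t{netid}\t{proc}")
```

It is more lines than the sed pipeline, but each step is readable and the loopback filter is explicit rather than encoded in a regex.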

Categories: FLOSS Project Planets

Antoine Beaupré: how to audit for open services with iproute2

Planet Debian - Fri, 2023-03-10 14:16

The computer world has a tendency of reinventing the while once in a while. I am not a fan of that process, but sometimes I just have to bite the bullet and adapt to change. This post explains how I adapted to one particular change: the netstat to sockstat transition.

I used to do this to show which processes where listening on which port on a server:

netstat -anpe

It was a handy mnemonic as, in France, ANPE was the agency responsible for the unemployed (basically). That would list all sockets (-a), not resolve hostnames (-n, because it's slow), show processes attached to the socket (-p) with extra info like the user (-e). This still works, but sometimes fail to find the actual process hooked to the port. Plus, it lists a whole bunch of UNIX sockets and non-listening sockets, which are generally irrelevant for such an audit.

What I really wanted to use was really something like:

netstat -pleunt | sort

... which has the "pleut" mnemonic ("rains", but plural, which makes no sense and would be badly spelled anyway). That also only lists listening (-l) and network sockets, specifically UDP (-u) and TCP (-t).

But enough with the legacy, let's try the brave new world of sockstat which has the unfortunate acronym ss.

The equivalent sockstat command to the above is:

ss -pleuntO

It's similar to the above, except we need the -O flag otherwise ss does that confusing thing where it splits the output on multiple lines. But I actually use:

ss -plunt0

... i.e. without the -e as the information it gives (cgroup, fd number, etc) is not much more useful than what's already provided with -p (service and UID).

All of the above also show sockets that are not actually a concern because they only listen on localhost. Those one should be filtered out. So now we embark into that wild filtering ride.

This is going to list all open sockets and show the port number and service:

ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu

For example on my desktop, it looks like:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu [::]:* users:(("unbound",pid=1864)) 22 users:(("sshd",pid=1830)) 25 users:(("master",pid=3150)) 53 users:(("unbound",pid=1864)) 323 users:(("chronyd",pid=1876)) 500 users:(("charon",pid=2817)) 631 users:(("cups-browsed",pid=2744)) 2628 users:(("dictd",pid=2825)) 4001 users:(("emacs",pid=3578)) 4500 users:(("charon",pid=2817)) 5353 users:(("avahi-daemon",pid=1423)) 6600 users:(("systemd",pid=3461)) 8384 users:(("syncthing",pid=232169)) 9050 users:(("tor",pid=2857)) 21027 users:(("syncthing",pid=232169)) 22000 users:(("syncthing",pid=232169)) 33231 users:(("syncthing",pid=232169)) 34953 users:(("syncthing",pid=232169)) 35770 users:(("syncthing",pid=232169)) 44944 users:(("syncthing",pid=232169)) 47337 users:(("syncthing",pid=232169)) 48903 users:(("mosh-client",pid=234126)) 52774 users:(("syncthing",pid=232169)) 52938 users:(("avahi-daemon",pid=1423)) 54029 users:(("avahi-daemon",pid=1423)) anarcat@angela:~$

But that doesn't filter out the localhost stuff, lots of false positive (like emacs, above). And this is where it gets... not fun, as you need to match "localhost" but we don't resolve names, so you need to do some fancy pattern matching:

ss -pluntO --no-header | \ sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | \ grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -e '^172\.' | \ sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' |\ sort -gu

This is kind of horrible, but it works, those are the actually open ports on my machine:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a- z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' - e '^172\.' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\ 1\t/;s/,fd=[0-9]*//' | sort -gu 22 users:(("sshd",pid=1830)) 500 users:(("charon",pid=2817)) 631 users:(("cups-browsed",pid=2744)) 4500 users:(("charon",pid=2817)) 5353 users:(("avahi-daemon",pid=1423)) 6600 users:(("systemd",pid=3461)) 21027 users:(("syncthing",pid=232169)) 22000 users:(("syncthing",pid=232169)) 34953 users:(("syncthing",pid=232169)) 35770 users:(("syncthing",pid=232169)) 48903 users:(("mosh-client",pid=234126)) 52938 users:(("avahi-daemon",pid=1423)) 54029 users:(("avahi-daemon",pid=1423))

Surely there must be a better way. It turns out that lsof can do some of this, and it's relatively straightforward. This lists all listening TCP sockets:

lsof -iTCP -sTCP:LISTEN +c 15 | grep -v localhost | sort

In theory, this would do the equivalent on UDP

lsof -iUDP -sUDP:^Idle

... but in reality, it looks like lsof on Linux can't figure out the state of a UDP socket:

lsof: no UDP state names available: UDP:^Idle

... which, honestly, I'm baffled by. It's strange because ss can figure out the state of those sockets, heck it's how -l vs -a works after all. So we need something else to show listening UDP sockets.

The following actually looks pretty good after all:

ss -pluO

That will list localhost sockets of course, so we can explicitly ask ss to resolve those and filter them out with something like:

ss -plurO | grep -v localhost

Oh, and look here! ss supports pattern matching, so we can actually tell it to ignore localhost directly, which removes that horrible sed line we used earlier:

ss -pluntO '! ( src = localhost )'

That actually gives pretty readable output. One annoyance is that we can't really modify the columns here, so we still need some god-awful sed hacking on top of that to get cleaner output:

ss -nplutO '! ( src = localhost )' | \
  sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | \
  sort -nu

That looks horrible and is basically impossible to memorize. But it sure looks nice:

anarcat@angela:~$ sudo ss -nplutO '! ( src = localhost )' | sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | sort -nu
Port  Netid Process
22    tcp   sshd
500   udp   charon
546   udp   NetworkManager
631   udp   cups-browsed
4500  udp   charon
5353  udp   avahi-daemon
6600  tcp   systemd
21027 udp   syncthing
22000 udp   syncthing
34953 udp   syncthing
35770 udp   syncthing
48903 udp   mosh-client
52938 udp   avahi-daemon
54029 udp   avahi-daemon
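To see what the sed chain is doing, it helps to trace it on a single canned line (a hypothetical ss output line, not taken from the machine above). The first expression pulls the port and protocol to the front, the second cuts everything up to the process name, and the third drops the trailing quote and pid junk:

```shell
#!/bin/sh
# A hypothetical ss -nplutO output line, for tracing the sed chain.
line='udp   UNCONN 0 0   0.0.0.0:5353   0.0.0.0:*   users:(("avahi-daemon",pid=1423,fd=12))'
printf '%s\n' "$line" | \
  sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//'
# prints: 5353<TAB>udp<TAB>avahi-daemon
```

Note this relies on GNU sed extensions (\| alternation and \t), so it won't port to BSD sed as-is.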

Better ideas welcome.
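For what it's worth, one such idea: since the interesting fields sit in fixed columns, an awk one-liner over the same ss output is arguably easier to read than the sed chain. This is only a sketch, assuming the default ss -nplutO column layout, with canned input standing in for the real command:

```shell
#!/bin/sh
# Sketch: extract port, protocol, and process name with awk.
# The printf lines are canned stand-ins for:
#   sudo ss -nplutO --no-header '! ( src = localhost )'
printf '%s\n' \
  'udp UNCONN 0 0   0.0.0.0:5353 0.0.0.0:* users:(("avahi-daemon",pid=1423,fd=12))' \
  'tcp LISTEN 0 128 0.0.0.0:22   0.0.0.0:* users:(("sshd",pid=1830,fd=3))' \
| awk '{
    n = split($5, a, ":")   # local address:port, port is the last piece
    split($7, p, "\"")      # users:(("name",... -> name is the 2nd piece
    printf "%s\t%s\t%s\n", a[n], $1, p[2]
  }' | sort -nu
# prints, tab-separated:
#   22    tcp  sshd
#   5353  udp  avahi-daemon
```

Splitting on ":" from the right also keeps IPv6 addresses like [::]:22 working, since only the piece after the last colon is taken.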


ImageX: From Discovery to Post-Launch: The Ultimate Guide to a Web Project

Planet Drupal - Fri, 2023-03-10 12:46
By amanda

In today’s digitally-focused world, it’s nearly impossible to do business without a presence on the world wide web. In fact, your website should be a cornerstone of your marketing efforts and a primary tool for reaching your audience.

For your site to effectively drive your business forward, it needs to be well-constructed and consistently maintained. That’s a big job. And it needs the right partner.

Our four-part process for website development is a proven approach to building an impactful online presence. Our ebook outlines the expectations and considerations for each distinct phase —  Discovery, Design, Development, and Post-Launch — so that you are well-prepared to build a website that will serve your business now and into the future.

