Feeds

BRAINSUM: How it feels to install Drupal 10 as a UX designer

Planet Drupal - Wed, 2023-09-13 03:30
How it feels to install Drupal 10 as a UX designer, by iszabo, Wed, 09/13/2023 - 07:30

Is it easy for a newcomer to install Drupal 10 in a local environment in 2023? I will break down the steps I took to complete the installation, along with my notes and impressions along the way.

Attention, I'm a UX guy :)

Categories: FLOSS Project Planets

Andy Blum's blog: Web Sustainability Guidelines

Planet Drupal - Tue, 2023-09-12 20:00
The W3C has authored web sustainability guidelines! In this post I share my initial reactions and the top 10 actions to jumpstart your sustainability journey.
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppInt64 0.0.2 on CRAN: Small Update

Planet Debian - Tue, 2023-09-12 19:46

The still very new package RcppInt64 (announced a week ago in this post) arrived on CRAN earlier today in its first update, now at 0.0.2. RcppInt64 collects some of the previous conversions between 64-bit integer values in R and C++, and regroups them in a single package by providing a single header. It offers two interfaces: both a more standard as<>() converter from R values along with its companions wrap() to return to R, as well as more dedicated functions ‘from’ and ‘to’.

The package by now has its first user as we rearranged RcppFarmHash to use it. The change today makes bit64 a weak rather than strong dependency as we use it only for tests and illustrations. We also added two missing fields to DESCRIPTION and added badges to README.md.

The brief NEWS entry follows:

Changes in version 0.0.2 (2023-09-12)
  • DESCRIPTION has been extended, badges have been added to README.md

  • Package bit64 is now a Suggests:

Courtesy of my CRANberries, there is a diffstat report relative to the previous release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

ImageX: New Development Settings Page in Drupal 10.1 Simplifies Front-end Experiences

Planet Drupal - Tue, 2023-09-12 18:46

Drupal 10.1 is a truly outstanding release for advancements in the front-end development realm. The first ground-breaking innovation that comes to mind is Single Directory Components (SDC), which burst onto the scene bringing brand-new practices for creating and managing UI components. 

Categories: FLOSS Project Planets

Games, consoles and the Meta mystery

Planet KDE - Tue, 2023-09-12 18:00

For the past few days I was at the seaside.

As my better half had some work that she needed to take with her, I also took my new laptop and when she was doing her thing, I tweaked a few small things here and there. Nothing major though, as we were still on vacation.

Games

Not that I game much these days, especially on PC1, but since I already have a Vega 8 graphics card, it would be wasteful not to at least try it.

By default I am booting into Zen kernel, as it is said to provide a smoother experience on laptops/desktops as well as with gaming.

Lutris (and Steam)

Initially, installing either Lutris or Steam failed with a segfault, but after asking on the forums and digging through the Lutris documentation, I managed to get both installed.

Then I added my Steam, GOG and Itch.io libraries to Lutris. Installing (and launching) games through Lutris is super convenient.

A problem I am still running into, though, is a bit weird. Lutris offers a really nifty option to switch the keyboard to the US layout when launching a game. That is quite handy for me, since my default layout is Neo2.

But the problem is that even after the game (or even Lutris itself) ends, (some?) GTK applications – at least Firefox and GIMP – retain the US layout. So far I could not figure out how to fix this or work around it, apart from simply not using that option.

The few games I tried seem to run fine (e.g. Return to Monkey Island runs perfectly), but I am sure there is some performance tweaking to be made … some other day.

Mouse go whoops!

This was a bit awkward …

I have a Roccat Kone Pure mouse2, which I find to be a great programmable mouse with 12 buttons. It is marketed as a “gaming mouse”, but honestly I use it to bind some keyboard shortcuts to mouse buttons.

As is often the case with these things, to update and modify the firmware on the device itself there is a (no longer maintained, but stable) (semi-)official niche one-man Linux tool – Roccat Tools. The tool still works fine, even though it has not seen an update since 2019.

If you want to write firmware to the (mouse) device, you need write permissions to it, which, obviously and sanely, is not the case by default. The packages do provide the correct udev rules, but you still need to add yourself to the roccat group.

And here is where it got ugly.

I forgot to use --append in the usermod command … yeah, not great. Do not forget to use it.

To fix my stupid mistake, I had to log in as root (luckily I could!) and add myself back to the right groups from memory. I hope I got it right 😅
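For the record, the failure mode looks like this (the roccat group name comes from the Roccat Tools packages mentioned above):

```shell
# DANGEROUS: without --append, -G *replaces* the whole supplementary
# group list, leaving the user in *only* the roccat group:
#   sudo usermod -G roccat "$USER"
#
# CORRECT: --append (-a) adds roccat while keeping existing groups:
#   sudo usermod --append --groups roccat "$USER"

# Before running either, note down your current groups, so you can
# restore them from more than memory if anything goes wrong:
id -nG
```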

Console

Nowadays I rarely use the pure TTY console, and am typically in a terminal emulator like Konsole or Alacritty.

Regardless, when that happens, I do like having a nice environment, so I did at least some basics.

Fonts

Default console fonts are not very pretty and I needed to tweak at least the FONT_MAP to get some common Slovenian characters (č, š, ž) to work anyway, so I went font window-shopping.

There are not many console fonts – too limiting and not sexy enough for designers, I assume – but there are a few, and in the end I settled for Spleen3.

Later I noticed that Spleen is the default console font in OpenBSD since 2019, which brings me hope this font will continue to be maintained.

After refreshing my memory with the Arch wiki: Linux Console, I added the following to /etc/vconsole.conf:

KEYMAP=slovene
FONT=spleen-12x24
FONT_MAP=8859-2

I did also consider the following fonts, and here is why I did not choose them:

  • tamzen – especially its PowerLine variant did make my CLI prettier, but č, š, ž support was lacking;
  • terminus – the venerable Terminus was really close to staying my console font of choice, but I just wanted something more fun.
Mouse

Having a working mouse/touchpad is also a nice thing to have in a console, so I went and installed Consolation. It was as simple as:

yay consolation
systemctl enable consolation.service

GPM probably still works fine, but apparently its code is hard to maintain at this stage and it does not work great with touchpads, so I tried the much more modern Consolation.

It is very flexible, but out of the box it worked fine enough for me, so I did not mess with it right now. I may later though.

Boot splash screen

Since I had some extra time, I decided to also include a splash screen when booting.

I decided for Plymouth, as it seems more powerful and maintained than the alternatives.

After a simple yay --sync plymouth plymouth-kcm I downloaded and selected my preferred theme through the Plymouth KCM.

Then I needed to enable Plymouth by adding it to Dracut with:

/etc/dracut.conf.d/plymouth.conf:

force_drivers+=" amdgpu "
add_dracutmodules+=" plymouth "

… and regenerating the initramfs with sudo dracut-rebuild, then rebooting.

For some reason without forcing the drivers with force_drivers it would override my font settings with defaults again. Hat-tip to dalto for helping me with that issue.

To actually make it apply, I had to pass the splash kernel parameter to GRUB, as described in Arch Wiki: Kernel parameters. For now I decided not to use quiet, but I may enable it later. Eh, I did go for quiet by the end of the day. Looks nicer and I can always lnav /var/log/boot.log to see the boot logs4.
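Concretely, on a stock GRUB setup this means adding the parameters to the default kernel command line in /etc/default/grub and then regenerating the GRUB configuration with grub-mkconfig -o /boot/grub/grub.cfg; a sketch, assuming no other parameters are already set:

```
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
```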

I considered making the boot silent, but at least for now, I decided not to.

Background in console

I tasted blood.

I wanted to pimp my TTY as much as I did back during my Gentoo days.

I wanted a pretty background and frame even in my console!

Sadly, it seems there is an issue with modern kernels (5.17 and newer) and the patch that is needed to get that to work. FBSplash also seems not to have been maintained in a while, to the extent that even Gentoo removed it.

So, I gave up on that piece of nostalgia. Oh, well, good times …

KDE X11 vs Wayland

After using Plasma on Wayland for a few days and then using it on X11 again, I noticed a few more nice things on Wayland:

  • Touchpad gestures exist on Wayland, but not on X11 – I am surprised at how much I look forward to the gestures being polished, now that I have a large trackpad.
  • Some things – e.g. the pop-down hints in Kate while you type – look much nicer on Wayland, but that we knew.

There are some very big caveats when using Plasma on Wayland right now though, but it is being worked on:

  • Applications do not prompt to save unsaved work, causing data loss – KDE bug № 461176 – this is a big big issue, but is being worked on.
  • When the compositor crashes or restarts, non-Qt apps are killed — work is ongoing to fix this and just recently David ”DeD” Edmundson blogged about great progress on how with Wayland this would not just get fixed, but make sessions much more robust than we have ever seen before, with added bonuses of essentially safe-states for apps, upgrading without logging out, etc. Go read DeD’s blog post, that stuff is mind-blowingly amazing!
  • Global Menu is not supported for non-Qt apps – KDE bug № 424485 – I have been using an auto-hiding Global Menu for many years now, to save vertical space, but with a 14" screen, I am OK without it.

For now, I will probably switch between X11 and Wayland, depending on whether the priority of the day is a) to make sure I do not forget to save things I worked on, or b) a more shiny and fluid experience.

Disable hibernate

Since I never suspend to disk (hibernate, S4), I disabled it, so the icon in the Plasma menu goes away.

Following Arch Wiki: KDE, I simply created:

/etc/systemd/sleep.conf.d/00-disable-hibernation.conf with the following in it:

[Sleep]
AllowHibernation=no
AllowSuspendThenHibernate=no
AllowHybridSleep=no

Missing packages

While I was adding a few Plasmoids to my desktop, I noticed some were missing, which also caused the TodoList Plasmoid not to work.

After asking on the forums a bit, I installed the whole Plasma metapackage with yay plasma-meta and restarted Plasma. As for KDE Gear, I prefer to install each application separately.

While I was at it, I added a few more KCM modules as well:

yay --sync colord-kde systemd-kcm kcmsystemd kcm-wacomtablet kcm-polkit-kde sddm-kcm plymouth-kcm

What was also missing was spell-checking. Since Sonnet supports several spell-checking back-ends, I installed the Hunspell dictionaries of the languages I typically use. That should also make them available to LibreOffice.

While I was at it I also did yay --sync languagetool libreoffice-extension-languagetool to enable grammar checking in LibreOffice through the awesome LanguageTool.

Make GTK apps look Qt

To provide some better visual consistency between Qt and GTK applications, I installed kde-gtk-config to be able to choose the GTK theme also within KDE, and breeze-gtk as the theme of my choice. I say “some better visual consistency”, because some applications use GTK2, some GTK3, some GTK4, some LibAdwaita directly, so there are more variables than just “GTK”.

I decided against removing CSD – although I dislike them – because how they are done and set up seems to still be in flux, so fixing it for GTK3 might break some edge cases, while still not fixing it for GTK4, etc.

There is/was also a way to (try to) force GTK applications to use the KDE file chooser etc. through XDG Desktop Portal, but GNOME says that is not a feature, but a debugging tool, so until that gets introduced as a feature, I decided not to mess with it.

This was as far as I cared to push it, as I did not want things to break. If you want to do more, the Arch Wiki on Qt and GTK is a good starting point.

Plasma-ify Firefox

Firefox is the GTK application I use most often, and it also has some quirks of its own, so I spent some extra time on it.

First I made sure the plasma-browser-integration package and Plasma integration Firefox add-on are installed. Those make sure that the browser more neatly integrates with Plasma – e.g., tabs and bookmarks show in KRunner, the Media Player plasmoid (better) shows what is playing in Firefox, native download notifications are used, Firefox integrates with KDE Connect, etc.

For the next step I had to make sure that xdg-desktop-portal and xdg-desktop-portal-kde packages are installed.

Then, following the Arch Wiki: Firefox, I added the following to my ~/.mozilla/firefox/????????.default-release/user.js (NB: ???????? is actually a random-looking set of characters and will be different for you than it is for me):

// Enables XDG Desktop Portal integration (e.g. to use KDE Plasma file picker)
// https://wiki.archlinux.org/title/Firefox#XDG_Desktop_Portal_integration
user_pref("widget.use-xdg-desktop-portal.file-picker", 1);
user_pref("widget.use-xdg-desktop-portal.mime-handler", 1);
user_pref("widget.use-xdg-desktop-portal.settings", 1);
user_pref("widget.use-xdg-desktop-portal.location", 1);
user_pref("widget.use-xdg-desktop-portal.open-uri", 1);

// Enables further KDE integration (to disable duplicated entry in Media Player)
// https://wiki.archlinux.org/title/Firefox#KDE_integration
user_pref("media.hardwaremediakeys.enabled", false);

Now I get both the open and save file dialogues from Plasma also in Firefox. The above forces a few other things to be pulled from KDE Plasma (through XDG Portals).

The easiest way to force Firefox to use server-side window decorations (e.g. how Plasma does it), is to right-click on its toolbar and select Customize Toolbar. There enable the Title Bar checkbox.5

Since I use Sidebery to organise tabs in vertical tree, I want to hide the default tab bar. To do so, I just added the following to ~/.mozilla/firefox/????????.default-release/chrome/userChrome.css:

/* Hides top tabs toolbar, so Tree Style Tabs are the only tabs available */
#TabsToolbar {
  visibility: collapse !important;
}

More complex tweaks: Firefox CSS Hacks

If you need more styling hacks, MrOtherGuy maintains a huge selection of more complex ones, together with instructions on Firefox CSS Hacks.

For help, I found the #FirefoxCSS Matrix channel very helpful.

I ultimately – but after a lot of trial-and-error, because I initially forgot how I did it a decade ago – did not need MrOtherGuy’s Firefox CSS Hacks, but it is a great resource!

At least for now, I decided to use vanilla Firefox, but might move to SUSE’s Firefox KDE fork later on (or not, we will see; there seems to be some movement upstream).

Enable Bluetooth

Although Bluez was installed, I could not see any Bluetooth devices in KDE.

The problem was simply that the Bluetooth service is not enabled in systemd by default on EndeavourOS.

A quick fix was to simply run:

systemctl enable bluetooth.service
systemctl start bluetooth.service

Enable KDE Connect

KDE Connect is a great tool.

Initially it did not find my mobile phone, because of the firewall, which is by default enabled on EndeavourOS.

All it took though was to open up the Firewall KCM and there from the list of pre-defined rules find KDE Connect and add it. Super simple 😁

Emoji

I also noticed that Emoji were missing by default.

To correct that, I installed the otf-openmoji package (with limited success).

Why OpenMoji and not any of the more well known options?

Well, honestly, I have a soft spot for the underdogs. Also some of the designs there just looked cleaner and nicer to me, compared to JoyPixels, Noto or Twemoji.

OpenMoji’s monochrome designs are very clean and nice, but some symbols – like flags – just do not work in monochrome. I hope this gets fixed in the future, so a healthy mix can exist.

For some reason (at least on Arch / EndeavourOS), this Emoji font defaults to the monochrome “Black” version instead of the coloured one. I followed the hint in this comment and added the following to ~/.config/fontconfig/fonts.conf for the “Black” version to be ignored:

<!-- Block OpenMoji Black from the list of fallback fonts. -->
<selectfont>
  <rejectfont>
    <pattern>
      <patelt name="family">
        <string>OpenMoji</string>
      </patelt>
      <patelt name="style">
        <string>Black</string>
      </patelt>
    </pattern>
  </rejectfont>
</selectfont>

… but, this does not seem to work for me. In fact, even deleting the /usr/share/fonts/openmoji/OpenMoji-Black.ttf file did not fix it for me.

What did work for me though was to download the (old bitmap) CBDT version of OpenMoji-color-cbdt.ttf and install that as a user font.

OpenMoji is open for help

If you like OpenMoji – as I do – and actually have the design skills and knowledge – unlike me –, it would be great if you contributed to OpenMoji.

Meta key mystery

And, finally, the Meta (a.k.a. Win) key mystery …

What started happening was that, seemingly at random, the Meta key would just stop working. Which is really annoying, since a lot of shortcuts use it (e.g. Meta+W for the Overview, Meta+Q for the Activity Manager, Meta+Tab for quick Activity switching, …).

I tried figuring it out myself, but could not. After some time, I did figure out a work-around: the key sometimes started working again if I suspended the machine to RAM (sleep) and then woke it up again. Sometimes this would require several cycles.

So I asked for help on the EndeavourOS forums.

After further communal head-scratching, I wrote an e-mail to Slimbook’s support. Their answer came just a tiny bit quicker than someone else’s (correct) suggestion on the forum. Which is to say, both methods were very fast!

Turns out I am an idiot.

Fn+F2 is hard-coded to disable/enable the Meta key … apparently enough gamers like it that way that several manufacturers have that feature.

So what happened was that sometimes, when I pressed Ctrl+F2 to switch to Virtual Desktop 2, I would inadvertently (also) press Fn, as I am still getting used to the new keyboard. Remember, Ctrl and Fn are famously swapped on ThinkPads, which were my main laptops until now.

And as for why putting the laptop to sleep would sometimes fix the issue … well, the suspend button combination on Slimbook is Fn+F1, so I must have sometimes also pressed F2 in the process.

Next time

That was quite a chunk of work that I originally intended to do much later.

The laptop did not wake up from suspend to RAM (a.k.a. S3, sleep) a few times, so I am investigating what this is about. I will likely dedicate a separate blog post just for that.

As for the next blog post, it will be either that or something Btrfs or backup related.

hook out → going to the sea was great, but a bit too short

  1. When I play nowadays, it is usually on Nintendo Switch, as it is very convenient to play either on the TV or hand-held. 

  2. In fact, between me and my brother we have four identical ones, so one pair is being used, while the other pair is being repaired. Throughout the years these mice have seen several repairs and mods by now, with different sets of button switches, wheel encoders etc. Perhaps I should publish my take on them, if anyone is interested (drop me a line). What will I do, once – hopefully far away in the future – our Roccat mice finally fail completely, I do not know, but I suspect a trip down the DIY path with something like Ploopy

  3. I also used spleen32x64 11pt as the font in Yakuake, but kept Hack in Konsole

  4. If you do not see that option, you could just tell KWin to force the window title and decorations for all Firefox windows. You can add a new Window Rule through Plasma’s System Settings

Categories: FLOSS Project Planets

Jo Shields: Building a NAS

Planet Debian - Tue, 2023-09-12 17:33
The status quo

Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).

QNAP TS-453mini product photo

That thing has been in service for about 8 years now, and it’s been… a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP’s OS was not up to the same standard as Synology’s – perhaps best exemplified by “HappyGet 2”, the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless – but a bad omen for overall software quality

The logo for QNAP HappyGet 2 and Blizzard’s StarCraft 2 side by side

Additionally, the embedded Celeron processor in the NAS turned out to be an issue for some cases. It turns out, when playing back videos with subtitles, most Plex clients do not support subtitles properly – instead they rely on the Plex server doing JIT transcoding to bake the subtitles directly into the video stream. I discovered this with some Blu-Ray rips of Game of Thrones – some episodes would play back fine on my smart TV, but episodes with subtitled Dothraki speech would play at only 2 or 3 frames per second.

The final straw was a ransomware attack, which went through all my data and locked every file below a 60MiB threshold. Practically all my music gone. A substantial collection of downloaded files, all gone. Some of these files had been carried around since my college days – digital rarities, or at least digital detritus I felt a real sense of loss at having to replace. This episode was caused by a ransomware targeting specific vulnerabilities in the QNAP OS, not an error on my part.

So, I decided to start planning a replacement with:

  • A non-garbage OS, whilst still being a NAS-appliance type offering (not an off-the-shelf Linux server distro)
  • Full remote management capabilities
  • A small form factor comparable to off-the-shelf NAS
  • A powerful modern CPU capable of transcoding high resolution video
  • All flash storage, no spinning rust

At the time, no consumer NAS offered everything (The Asustor FS6712X exists now, but didn’t when this project started), so I opted to go for a full DIY rather than an appliance – not the first time I’ve jumped between appliances and DIY for home storage.

Selecting the core of the system

There aren’t many companies which will sell you a small motherboard with IPMI. Supermicro is a bust, so is Tyan. But ASRock Rack, the server division of third-tier motherboard vendor ASRock, delivers. Most of their boards aren’t actually compliant Mini-ITX size, they’re a proprietary “Deep Mini-ITX” with the regular screw holes, but 40mm of extra length (and a commensurately small list of compatible cases). But, thankfully, they do have a tiny selection of boards without the extra size, and I stumbled onto the X570D4I-2T, a board with an AMD AM4 socket and the mature X570 chipset. This board can use any AMD Ryzen chip (before the latest-gen Ryzen 7000 series); has built in dual 10 gigabit ethernet; IPMI; four (laptop-sized) RAM slots with full ECC support; one M.2 slot for NVMe SSD storage; a PCIe 16x slot (generally for graphics cards, but we live in a world of possibilities); and up to 8 SATA drives OR a couple more NVMe SSDs. It’s astonishingly well featured, just a shame it costs about $450 compared to a good consumer-grade Mini ITX AM4 board costing less than half that.

I was so impressed with the offering, in fact, that I crowed about it on Mastodon and ended up securing ASRock another sale, with someone else looking into a very similar project to mine around the same timespan.

The next question was the CPU. An important feature of a system expected to run 24/7 is low power, and AM4 chips can consume as much as 130W under load, out of the box. At the other end, some models can require as little as 35W under load – the OEM-only “GE” suffix chips, which are readily found for import on eBay. In their “PRO” variant, they also support ECC (all non-G Ryzen chips support ECC, but only Pro G chips do). The top of the range 8 core Ryzen 7 PRO 5750GE is prohibitively expensive, but the slightly weaker 6 core Ryzen 5 PRO 5650GE was affordable, and one arrived quickly from Hong Kong. Supplemented with a couple of cheap 16 GiB SODIMM sticks of DDR4 PC-3200 direct from Micron for under $50 a piece, that left only cooling as an unsolved problem to get a bootable test system.

The official support list for the X570D4I-2T only includes two rackmount coolers, both expensive and hard to source. The reason for such a small list is the non standard cooling layout of the board – instead of an AM4 hole pattern with the standard plastic AM4 retaining clips, it has an Intel 115x hole pattern with a non-standard backplate (Intel 115x boards have no backplate, the stock Intel 115x cooler attaches to the holes with push pins). As such every single cooler compatibility list excludes this motherboard. However, the backplate is only secured with a mild glue – with minimal pressure and a plastic prying tool it can be removed, giving compatibility with any 115x cooler (which is basically any CPU cooler for more than a decade). I picked an oversized low profile Thermalright AXP120-X67 hoping that its 120mm fan would cool the nearby MOSFETs and X570 chipset too.

Thermalright AXP120-X67, AMD Ryzen 5 PRO 5650GE, ASRock Rack X570D4I-2T, all assembled and running on a flat surface

Testing up to this point

Using a spare ATX power supply, I had enough of a system built to explore the IPMI and UEFI instances, and run MemTest86 to validate my progress. The memory test ran without a hitch and confirmed the ECC was working, although it also showed that the memory was only running at 2933 MT/s instead of the rated 3200 MT/s (a limit imposed by the motherboard, as higher speeds are considered overclocking). The IPMI interface isn’t the best I’ve ever used by a long shot, but it’s minimum viable and allowed me to configure the basics and boot from media entirely via a Web browser.

Memtest86 showing test progress, taken from IPMI remote control window

One sad discovery, however, which I’ve never seen documented before, concerns PCIe bifurcation.

With PCI Express, you have a number of “lanes” which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that’s 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it).

It’s possible, with motherboard and CPU support, to split PCIe groups up – for example an 8x slot could be split into two 4x slots (eg allowing two NVMe drives in an adapter card – NVMe drives these days all use 4x). However with a “Cezanne” Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (ie used for NVMe drives) – the most bifurcation it allows is 8x4x4x, which is useless in a NAS.

Screenshot of PCIe 16x slot bifurcation options in UEFI settings, taken from IPMI remote control window

As such, I had to abandon any ideas of an all-NVMe NAS I was considering: the 16x slot split into four 4x, combined with two 4x connectors fed by the X570 chipset, to a total of 6 NVMe drives. 7.6TB U.2 enterprise disks are remarkably affordable (cheaper than consumer SATA 8TB drives), but alas, I was locked out by my 5650GE. Thankfully I found out before spending hundreds on a U.2 hot swap bay. The NVMe setup would be nearly 10x as fast as SATA SSDs, but at least the SATA SSD route would still outperform any spinning rust choice on the market (including the fastest 10K RPM SAS drives)

Containing the core

The next step was to pick a case and power supply. A lot of NAS cases require an SFX (rather than ATX) size supply, so I ordered a modular SX500 unit from Silverstone. Even if I ended up with a case requiring ATX, it’s easy to turn an SFX power supply into ATX, and the worst result is you have less space taken up in your case, hardly the worst problem to have.

That said, on to picking a case. There’s only one brand with any cachet making ITX NAS cases, Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage. Take the CS280 as an example, the case with the most space for a CPU cooler. Here’s how close together the hotswap bay (right) and power supply (left) are:

Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

With actual cables connected, the cable clearance problem is even worse:

Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

Remember, this is the best of the three cases for internal layout, the one with the least restriction on CPU cooler height. And it’s garbage! Total hot garbage! I decided therefore to completely skip the NAS case market, and instead purchase a 5.25″-to-2.5″ hot swap bay adapter from Icy Dock, and put it in an ITX gamer case with a 5.25″ bay. This is no longer a served market – 5.25″ bays are extinct since nobody uses CD/DVD drives anymore. The ones on the market are really new old stock from 2014-2017: The Fractal Design Core 500, Cooler Master Elite 130, and Silverstone SUGO 14. Of the three, the Fractal is the best rated so I opted to get that one – however it seems the global supply of “new old stock” fully dried up in the two weeks between me making a decision and placing an order – leaving only the Silverstone case.

Icy Dock have a selection of 8-bay 2.5″ SATA 5.25″ hot swap chassis choices in their ToughArmor MB998 series. I opted for the ToughArmor MB998IP-B, to reduce cable clutter – it requires only two SFF-8611-to-SFF-8643 cables from the motherboard to serve all eight bays, which should make airflow less of a mess. The X570D4I-2T doesn’t have any SATA ports on board, instead it has two SFF-8611 OCuLink ports, each supporting 4 PCI Express lanes OR 4 SATA connectors via a breakout cable. I had hoped to get the ToughArmor MB118VP-B and run six U.2 drives, but as I said, the PCIe bifurcation issue with Ryzen “G” chips meant I wouldn’t be able to run all six bays successfully.

NAS build in Silverstone SUGO 14, mid build, panels removed

Silverstone SUGO 14 from the front, with hot swap bay installed

Actual storage for the storage server

My concept for the system always involved a fast boot/cache drive in the motherboard’s M.2 slot, non-redundant (just backups of the config if the worst were to happen) and separate storage drives somewhere between 3.8 and 8 TB each (somewhere from $200-$350). As a boot drive, I selected the Intel Optane SSD P1600X 58G, available for under $35 and rated for 228 years between failures (or 11,000 complete drive rewrite cycles).

So, on to the big expensive choice: storage drives. I narrowed it down to two contenders: new-old-stock Intel D3-S4510 3.84TB enterprise drives, at about $200, or Samsung 870 QVO 8TB consumer drives, at about $375. I did spend a long time agonizing over the specification differences, the ZFS usage reports, the expected lifetime endurance figures, but in reality, it came down to price – $1600 of expensive drives vs $3200 of even more expensive drives. That’s 27TB of usable capacity in RAID-Z1, or 23TB in RAID-Z2. For comparison, I’m using about 5TB of the old NAS, so that’s a LOT of overhead for expansion.
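Those usable-capacity figures are easy to sanity-check: RAID-Z1 gives up one of the eight drives to parity and RAID-Z2 two (decimal terabytes, ignoring ZFS metadata and slop overhead):

```shell
# 8 bays of 3.84 TB drives, minus parity drives:
awk 'BEGIN { printf "RAID-Z1: %.1f TB usable\n", 7 * 3.84 }'   # RAID-Z1: 26.9 TB usable
awk 'BEGIN { printf "RAID-Z2: %.1f TB usable\n", 6 * 3.84 }'   # RAID-Z2: 23.0 TB usable
```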

Storage SSD loaded into hot swap sled

Booting up

Bringing it all together is the OS. I wanted an “appliance” NAS OS rather than self-administering a Linux distribution, and after looking into the surrounding ecosystems, decided on TrueNAS Scale (the beta of the 2023 release, based on Debian 12).

TrueNAS Dashboard screenshot in browser window

I set up RAID-Z1, and with zero tuning (other than enabling auto-TRIM), got the following performance numbers:

                    IOPS    Bandwidth
4k random writes    19.3k   75.6 MiB/s
4k random reads     36.1k   141 MiB/s
Sequential writes   –       2300 MiB/s
Sequential reads    –       3800 MiB/s

Results using fio parameters suggested by Huawei

And for comparison, the maximum theoretical numbers quoted by Intel for a single drive:

                    IOPS    Bandwidth
4k random writes    16k     ?
4k random reads     90k     ?
Sequential writes   –       280 MiB/s
Sequential reads    –       560 MiB/s

Numbers quoted by Intel SSD successors Solidigm.

Finally, the numbers reported on the old NAS with four 7200 RPM hard disks in RAID 10:

                    IOPS    Bandwidth
4k random writes    430     1.7 MiB/s
4k random reads     8006    32 MiB/s
Sequential writes   –       311 MiB/s
Sequential reads    –       566 MiB/s

Performance seems pretty OK. There’s always going to be an overhead to RAID. I’ll settle for the 45x improvement on random writes vs. its predecessor, and 4.5x improvement on random reads. The sequential write numbers are gonna be impacted by the size of the ZFS cache (50% of RAM, so 16 GiB), but the rest should be a reasonable indication of true performance.

It took me a little while to fully understand the TrueNAS permissions model, but I finally got Plex configured to access data from the same place as my SMB shares, which have anonymous read-only access or authenticated write access for myself and my wife, working fine via both Linux and Windows.

And… that’s it! I built a NAS. I intend to add some fans and more RAM, but that’s the build. Total spent: about $3000, which sounds like an unreasonable amount, but it’s actually less than a comparable Synology DiskStation DS1823xs+ which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!

The final system, powered up

(Also posted on PCPartPicker)

Categories: FLOSS Project Planets

Robin Wilson: Pandas-FSDR: a simple function for finding significant differences in pandas DataFrames

Planet Python - Tue, 2023-09-12 16:36

In the spirit of my Previously Unpublicised Code series, today I’m going to share Pandas-FSDR. This is a simple library with one function which finds significant differences between two columns in a pandas DataFrame.

For example, imagine you had the following data frame:

Subject      UK    World
Biology      50    40
Geography    75    80
Computing    100   50
Maths        1500  1600

You may be interested in the differences between the values for the UK and the World (these could be test scores or something similar). Pandas-FSDR will tell you – by running one function you can get output like this:

  • Maths is significantly smaller for UK (1500 for UK compared to 1600 for World)
  • Computing is significantly larger for UK (100 for UK compared to 50 for World)

Differences are calculated in absolute and relative terms, and all thresholds can be altered by changing parameters to the function. The function will even output pre-formatted Markdown text for display in an IPython notebook, inclusion in a dashboard or similar. The output above was created by running this code:

result = FSDR(df, 'UK', 'World', rel_thresh=30, abs_thresh=75)
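The underlying idea, flagging a row when either the absolute or the relative difference crosses a threshold, can be sketched in plain Python. This is a simplification for illustration only; the library's actual rules for computing relative differences may differ:

```python
def significant_rows(data, rel_thresh=30, abs_thresh=75):
    """Return row labels whose two values differ by more than
    abs_thresh (absolute) or rel_thresh percent (relative)."""
    flagged = []
    for label, (a, b) in data.items():
        abs_diff = abs(a - b)
        rel_diff = 100 * abs_diff / max(abs(a), abs(b))
        if abs_diff > abs_thresh or rel_diff > rel_thresh:
            flagged.append(label)
    return flagged

# The example table from above
scores = {
    "Biology": (50, 40),
    "Geography": (75, 80),
    "Computing": (100, 50),
    "Maths": (1500, 1600),
}

print(significant_rows(scores))  # ['Computing', 'Maths']
```

With the default thresholds this flags the same two rows as the library's output above: Maths on the absolute threshold, Computing on the relative one.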

This is a pretty simple function, but I thought it might be worth sharing. I originally wrote it for some contract data science work I did years ago, where I was sharing the output of Jupyter Notebooks with clients directly, and wanted something that would ‘write the text’ of the comparisons for me, so it could be automatically updated when I had new data. If you don’t want it to write anything then it’ll just output a list of row indices which have significant differences.

Anyway, it’s nothing special but someone may find it useful.

The code and full documentation are available in the Pandas-FSDR Github repository

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #594 (Sept. 12, 2023)

Planet Python - Tue, 2023-09-12 15:30

#594 – SEPTEMBER 12, 2023
View in Browser »

Playing With Genetic Algorithms in Python

A Genetic Algorithm (GA) is an AI technique where random code is mutated and tested for fitness iteratively until a solution is found. This article shows you a couple of problems solved using GAs in Python. Associated HN discussion.
JOSEP RUBIÓ PIQUÉ

Generate Beautiful QR Codes With Python

In this tutorial, you’ll learn how to use Python to generate QR codes, from your standard black-and-white QR codes to beautiful ones with your favorite colors. You’ll learn how to format QR codes, rotate them, and even replace the static background with moving images.
REAL PYTHON

Finally—Pandas Practice That Isn’t Boring

You won’t get fluent with Pandas doing boring, irrelevant, toy exercises. Bamboo Weekly poses questions about current events, using real-world data sets—and offers clear, comprehensive solutions in Jupyter notebooks. Challenge yourself, and level up your Pandas skills every Wednesday →
BAMBOO WEEKLY sponsor

I’m Mr. Null. My Name Makes Me Invisible to Computers

NULL is a magic word in many computer languages. This article is by someone who has Null as a last name, and the consequences that entails. See also this Radiolab Podcast Episode for a similar topic.
CHRISTOPHER NULL

2023 Django Developers Survey

DJANGO SOFTWARE FOUNDATION

Python 3.12.0 Release Candidate 2 Available

CPYTHON DEV BLOG

Pandas 2.1.0 Released

PYDATA.ORG

Discussions Why Prefer Indentation Over Block Markers?

STACKEXCHANGE.COM

Articles & Tutorials Launching an HTTP Server in One Line of Python Code

In this tutorial, you’ll learn how to host files with a single command using an HTTP server built into Python. You’ll also extend it by making a miniature web framework able to serve dynamic content from HTML templates. Along the way, you’ll run CGI scripts and use encryption over HTTPS.
REAL PYTHON

Introducing flake8-logging

The Python standard library’s logging module is a go-to for adding observability to applications, but there are right and wrong ways to use it. This article is about a new linter that explicitly looks for problems with your logging calls.
ADAM JOHNSON

Fully Managed Postgres + Great Support

Crunchy Bridge is a different support experience. Our team is passionate about Postgres education giving you all the information and tools to make informed choices. If you’re a developer starting out or a more seasoned expert in databases, you’ll appreciate thorough, timely, and in-depth responses →
CRUNCHY DATA sponsor

Apple Vision Framework via PyObjC for Text Recognition

Learn how to use PyObjC to interface with the Apple Vision Framework and create a script to detect text in images. Become familiar with how PyObjC works and how it maps functions and methods from Objective C to Python.
YASOOB KHALID • Shared by Yasoob Khalid

Class Concepts: Object-Oriented Programming in Python

Python uses object-oriented programming to group data and associated operations together into classes. In this video course, you’ll learn how to write object-oriented code with classes, attributes, and methods.
REAL PYTHON course

Switching to Hatch

Oliver used Poetry for most of his projects, but recently tried out Hatch instead. This blog post covers what it took to get things going and what features he used, including how he ditched tox.
OLIVER ANDRICH

The Python Dictionary Dispatch Pattern

The dictionary dispatch pattern is when you keep references to functions in a dictionary and change code behavior based on keys. Learn how to use this pattern in practice.
JAMES GALLAGHER
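As a quick illustration of the pattern described in the linked article (the handler names here are invented for the example):

```python
def handle_create(payload):
    return f"created {payload}"

def handle_delete(payload):
    return f"deleted {payload}"

# Map keys to functions instead of writing an if/elif chain
HANDLERS = {
    "create": handle_create,
    "delete": handle_delete,
}

def dispatch(action, payload):
    handler = HANDLERS.get(action)
    if handler is None:
        raise ValueError(f"unknown action: {action}")
    return handler(payload)

print(dispatch("create", "user"))  # created user
```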

Analysing and Parsing the Contents of PyPI

High-level statistics gathered from PyPI, including how popular language features are, project sizes (tensorflow accounts for 16% of the data on PyPI!) and growth.
TOM FORBES • Shared by Tom Forbes

Writing a C Compiler in 500 Lines of Python

This post details how to build a C compiler, step-by-step, using Python. A great intro to compilers. The target source is WASM, so learn a bit about that too.
THEIA VOGEL

Filters in Django: filter(A, B) vs filter(A).filter(B)

An advanced dive into the Django ORM, how it handles joins, and what that means for your code.
APIROBOT.ME • Shared by Denis

My Favorite Python Tricks for LeetCode Questions

A collection of intermediate-level Python tricks and tools. Write more Pythonic code!
JJ BEHRENS

What Is Wrong With TOML?

Some YAML people talk about why TOML is too limited.
HITCHDEV.COM

Projects & Code StrictYAML: Type-Safe, Restricted Subset of the YAML

HITCHDEV.COM

dara: Create Interactive Web Apps in Pure Python

GITHUB.COM/CAUSALENS

JobSpy: Scraper for LinkedIn, Indeed & ZipRecruiter

GITHUB.COM/CULLENWATSON

krypton: Data Encryption at Rest and IAM for Python

GITHUB.COM/KRPTN

iommi: Your First Pick for a Django Power Chord

GITHUB.COM/IOMMIROCKS

Events Weekly Real Python Office Hours Q&A (Virtual)

September 13, 2023
REALPYTHON.COM

PyData Amsterdam 2023

September 14 to September 17, 2023
PYDATA.ORG

Python Atlanta

September 14 to September 15, 2023
MEETUP.COM

Kiwi PyCon 2023

September 15 to September 18, 2023
KIWIPYCON.NZ

PyCon CZ 2023

September 15 to September 18, 2023
PYCON.ORG

PyData Seattle: Language Creators Charity Fundraiser

September 19 to September 20, 2023
PYDATA.ORG

PyCon Uganda

September 21 to September 24, 2023
PYCON.ORG

PyCon UK 2023

September 22 to September 26, 2023
PYCONUK.ORG

Happy Pythoning!
This was PyCoder’s Weekly Issue #594.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

unifont @ Savannah: Unifont 15.1.01 Released

GNU Planet! - Tue, 2023-09-12 14:48

12 September 2023 Unifont 15.1.01 is now available.
This is a major release.  This release no longer builds TrueType fonts by default, as announced over the past year.  They have been replaced with their OpenType equivalents.  TrueType fonts can still be built manually by typing "make truetype" in the font directory.

This release also includes a new Hangul Syllables Johab 6/3/1 encoding proposed by Ho-Seok Ee.  New Hangul supporting software for this encoding allows formation of all double-width Hangul syllables, including those with ancient letters that are outside the Unicode Hangul Syllables range.  Details are in the ChangeLog file.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.1.01/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.1.01/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.1.01/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.1.01/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.1.01/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

Categories: FLOSS Project Planets

Malthe Borch: Switching to managed encryption keys

Planet Python - Tue, 2023-09-12 12:39

In most systems I come across, private keys are all over the place, made available as secrets to the workloads that need them. The trouble is not just that sometimes, secrets are not really secret, but more fundamentally, that private keys are something that we don't need to handle directly at all, they're basically too hot to handle 🔥.

Instead, we can use a remote service to manage our private keys such as Azure Key Vault, obviating the need to handle them ourselves.

Ideally, the keys managed by the service are in fact generated there, too, such that at no point in time is the key ever available to you. We can ask the service to sign or decrypt some data, but the actual key is never revealed.

But before we talk about how we can switch to such managed keys, let's look at a few example use-cases:

  • Authentication

    Lots of systems are integrated using public/private key pairs. For example, when we commit to GitHub, many developers use SSH to connect, using key pair-based authentication; or when we connect to a database, the authentication protocol might use key pair-based authentication instead of a password, for an extra layer of transport security.

  • Session signing

    Most web applications require the signing of session cookies, a token that proves to the web application that you're logged in, or simply that you have an "open session". Typically, this is configured using a locally available signing key, but again, we can improve security by using a managed key (and it's possible to be clever here and amortize the cost of using the managed key over a period of time or number of sessions created).

The operation we need in both of these cases is the ability to sign a payload. Managed keys are perfect for this!

The Snowflake client library for Python has an authentication system that supports multiple methods. Working on a data platform solution for Grundfos, a Danish company and also the world's largest pump manufacturer, I contributed a change to extend this authentication system to accommodate managed keys (the existing system supported only concrete private keys such as a file on disk).

For Python, the cryptography package has become the defacto standard for encryption. The RSAPrivateKey interface represents an abstract private key that uses traditional RSA asymmetric encryption.

With the change linked above, it's now possible to implement a custom private key object which defers operations to a managed key service since the Snowflake library now takes bytes | RSAPrivateKey as input.
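To make the delegation idea concrete, here is a minimal, hypothetical sketch using only the standard library. The names (RemoteSigningKey, sign_digest, FakeClient) are invented for illustration and are not part of the Snowflake, cryptography, or Azure APIs; a real implementation would subclass cryptography's RSAPrivateKey and call a Key Vault cryptography client:

```python
import hashlib

class RemoteSigningKey:
    """Sketch of a key object that never holds private key material,
    delegating sign() to a remote managed-key service client."""

    def __init__(self, client):
        self._client = client  # e.g. a Key Vault cryptography client

    def sign(self, data: bytes) -> bytes:
        digest = hashlib.sha256(data).digest()
        # The remote service signs the digest; the private key never
        # leaves the managed key store.
        return self._client.sign_digest(digest)

class FakeClient:
    """Stand-in for the real remote service call."""
    def sign_digest(self, digest: bytes) -> bytes:
        return b"signed:" + digest

key = RemoteSigningKey(FakeClient())
sig = key.sign(b"payload")
print(sig.startswith(b"signed:"))  # True
```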

I'm collaborating with Microsoft to provide this functionality out of the box with the azure-keyvault-keys package, a feature that's likely to go into the next release. In code, this would look like the following:

from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import ManagedRsaKey
from snowflake.connector import connect

key_client = KeyClient(
    "https://<keyvault-name>.vault.azure.net/",
    credential=AZURE_CREDENTIAL
)

ctx = connect(
    # Here goes the usual connection parameters
    ...
    private_key=ManagedRsaKey(KEY_NAME, key_client)
)

The prerequisite to have this work is to use ALTER USER ... SET RSA_PUBLIC_KEY = '<key-data>'. You can download the public key in PEM-format (suitable as key data here) using the portal or Azure CLI:

$ az keyvault key download \
    --vault-name <keyvault-name> \
    -n <key-name> \
    --encoding PEM \
    -f /dev/stdout

Using managed keys is not free. There is a cost for each operation, but for Azure Key Vault for example, using standard RSA 2048-bit keys, these operations are as cheap as reading a secret. As mentioned previously, in some cases, we can be clever and use a surrogate signing key for a period of time. This also reduces the operational dependence on the remote service.

These costs (and infrastructure complexity) should be easily offset by the much lessened burden of compliance with security protocol. If you never saw a private key, there is no risk having compromised it. Access to using a managed key can be granted on a higher level and revoked just as easily.

Categories: FLOSS Project Planets

FSF Events: Free Software Directory meeting on IRC: Friday, September 15, starting at 12:00 EDT (16:00 UTC)

GNU Planet! - Tue, 2023-09-12 12:34
Join the FSF and friends on Friday, September 15, from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
Categories: FLOSS Project Planets

Drupal Association blog: A Farewell From Von

Planet Drupal - Tue, 2023-09-12 12:12

With a heart full of joy, sadness, pride, and premature nostalgia, I will be departing the Drupal Association on 21 September, 2023 and will no longer serve on the leadership team as your Director, Programs. 

Over the last two years at the Drupal Association, I have had the honor to work with so many incredible change-makers in the non-profit and Open Source world. I’ve grown beyond what I ever imagined in my relationship with the free and open web, and I’m so grateful to this community for trusting me with leading many of the Drupal Association’s most critical programs. I’m also deeply grateful for the entire staff at the Drupal Association for trusting me to help build our workplace into one that is rooted in equity, access, and employee agency. Cultivating a healthy culture at a remote global organization is one of things I’m most proud of leaving behind, and I’m confident that the leadership team will continue to nurture our working environment to be one where everyone can thrive. 

Thank you so much to the Drupal community and the DA staff/board for making the last 2 years some of the most fulfilling, empowering, and productive of my career. It's been my pleasure to work hand-in-hand with you all on DrupalCon, Discover Drupal, contribution enablement, and DEI best practices, and I will take all I've learned into the next chapter of my career. It is my hope that I’ve left you all in a good place, and have had a positive impact on your experience in the Drupal ecosystem. I have the utmost faith in my colleagues to continue to deliver high-impact, equitable programs that make Drupal amazing. 

Feel free to find me on the Drupal Community Slack (vonreyes) in my last two weeks, or at vonreyes.carrd.co if you want to stay in touch in the future.


Left to right: Von with Nikki Flores; Von with Iwantha Lekamge; Von with Angie Sabin 
Categories: FLOSS Project Planets

Stack Abuse: Why does Python Code Run Faster in a Function?

Planet Python - Tue, 2023-09-12 11:04
Introduction

Python is not necessarily known for its speed, but there are certain things that can help you squeeze out a bit more performance from your code. Surprisingly, one of these practices is running code in a function rather than in the global scope. In this article, we'll see why Python code runs faster in a function and how Python code execution works.

Python Code Execution

To understand why Python code runs faster in a function, we need to first understand how Python executes code. Python is an interpreted language, which means it reads and executes code line by line. When Python executes a script, it first compiles it to bytecode, an intermediate language that's closer to machine code, and then the Python interpreter executes this bytecode.

def hello_world():
    print("Hello, World!")

import dis
dis.dis(hello_world)

  2           0 LOAD_GLOBAL              0 (print)
              2 LOAD_CONST               1 ('Hello, World!')
              4 CALL_FUNCTION            1
              6 POP_TOP
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

The dis module in Python disassembles the function hello_world into bytecode, as seen above.

Note: The Python interpreter is a virtual machine that executes the bytecode. The default Python interpreter is CPython, which is written in C. There are other Python interpreters like Jython (written in Java), IronPython (for .NET), and PyPy (written in Python and C), but CPython is the most commonly used.

Why Python Code Runs Faster in a Function

Consider a simplified example with a loop that iterates over a range of numbers:

def my_function():
    for i in range(100000000):
        pass

When this function is compiled, the bytecode might look something like this:

SETUP_LOOP              20 (to 23)
LOAD_GLOBAL              0 (range)
LOAD_CONST               3 (100000000)
CALL_FUNCTION            1
GET_ITER
FOR_ITER                 6 (to 22)
STORE_FAST               0 (i)
JUMP_ABSOLUTE           13
POP_BLOCK
LOAD_CONST               0 (None)
RETURN_VALUE

The key instruction here is STORE_FAST, which is used to store the loop variable i.

Now let's consider the bytecode if the loop is at the top level of a Python script:

SETUP_LOOP              20 (to 23)
LOAD_NAME                0 (range)
LOAD_CONST               3 (100000000)
CALL_FUNCTION            1
GET_ITER
FOR_ITER                 6 (to 22)
STORE_NAME               1 (i)
JUMP_ABSOLUTE           13
POP_BLOCK
LOAD_CONST               2 (None)
RETURN_VALUE

Notice the STORE_NAME instruction is used here, rather than STORE_FAST.

The bytecode STORE_FAST is faster than STORE_NAME because in a function, local variables are stored in a fixed-size array, not a dictionary. This array is directly accessible via an index, making variable retrieval very quick. Basically, it's just a pointer lookup into the array and an increase in the reference count of the PyObject, both of which are highly efficient operations.

On the other hand, global variables are stored in a dictionary. When you access a global variable, Python has to perform a hash table lookup, which involves calculating a hash and then retrieving the value associated with it. Though this is optimized, it's still inherently slower than an index-based lookup.
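You can verify this difference directly with the dis module: compile the same loop once as module-level code and once as a function body, then compare the opcodes used to store the loop variable.

```python
import dis

# The same loop, compiled as module-level code...
module_code = compile("for i in range(3):\n    pass", "<string>", "exec")

# ...and as a function body
def looper():
    for i in range(3):
        pass

module_ops = {ins.opname for ins in dis.get_instructions(module_code)}
func_ops = {ins.opname for ins in dis.get_instructions(looper)}

print("STORE_NAME" in module_ops)  # True: dictionary-based store
print("STORE_FAST" in func_ops)    # True: array-based store
```

Only the module-level code object contains STORE_NAME; the function body stores its loop variable with STORE_FAST.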

Benchmarking and Profiling Python Code

Want to test this for yourself? Try benchmarking and profiling your code.

Benchmarking and profiling are important practices in performance optimization. They help you understand how your code behaves and where the bottlenecks are.

Benchmarking is where you time your code to see how long it takes to run. You can use Python's built-in time module, as we'll show later, or use more sophisticated tools like timeit.

Profiling, on the other hand, provides a more detailed view of your code's execution. It shows you where your code spends most of its time, which functions are called, and how often. Python's built-in profile or cProfile modules can be used for this.

Here's an example of how you can profile your Python code:

import cProfile

def loop():
    for i in range(10000000):
        pass

cProfile.run('loop()')

This will output a detailed report of all the function calls made during the execution of the loop function.

Note: Profiling adds quite a bit of overhead to your code execution, so the execution time shown by the profiler will be longer than the actual execution time.

Benchmarking Code in a Function vs. Global Scope

In Python, the speed of code execution can vary depending on where the code is executed - in a function or in the global scope. Let's compare the two using a simple example.

Consider the following code snippet that calculates the factorial of a number:

def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial(20))

Now let's run the same code but in the global scope:

n = 20
result = 1
for i in range(1, n + 1):
    result *= i

print(result)

To benchmark these two pieces of code, we can use the timeit module in Python, which provides a simple way to time small bits of Python code.

import timeit

# Time the function version
function_time = timeit.timeit("factorial(20)", globals=globals(), number=100_000)

# Time the global-scope version: exec a module-level code object so the
# loop really runs with dictionary-based name lookups
code = compile("result = 1\nfor i in range(1, 21):\n    result *= i", "<global>", "exec")
start = timeit.default_timer()
for _ in range(100_000):
    exec(code, {})
global_time = timeit.default_timer() - start

print(f"function: {function_time:.3f}s, global scope: {global_time:.3f}s")

You'll find that the function code executes faster than the global scope code, for the reasons discussed above: local variable access in a function uses the fast locals array rather than dictionary lookups.

Profiling Code in a Function vs. Global Scope

Python provides a built-in module called cProfile for this purpose. Let's use it to profile our factorial code in a function and in the global scope.

import cProfile

def profile(func, *args):
    pr = cProfile.Profile()
    pr.enable()
    func(*args)
    pr.disable()
    pr.print_stats()

# Profile the function version
profile(factorial, 20)

# Profile the global-scope version: exec a module-level code object
code = compile("result = 1\nfor i in range(1, 21):\n    result *= i", "<global>", "exec")
profile(exec, code, {})

From the profiling results, you'll see that the function code is more efficient in terms of time. This comes down to how Python's interpreter, CPython, handles local variables in functions, as discussed earlier.

Optimizing Python Function Performance

Given that Python functions tend to run faster than equivalent code in the global scope, it's worth looking into how we can further optimize our function performance.

Of course, because of what we saw earlier, one strategy is to use local variables instead of global variables. Here's an example:

import time

# Global variable
x = 5

def calculate_power_global():
    for i in range(1000000):
        y = x ** 2  # Accessing global variable

def calculate_power_local(x):
    for i in range(1000000):
        y = x ** 2  # Accessing local variable

start = time.time()
calculate_power_global()
end = time.time()
print(f"Execution time with global variable: {end - start} seconds")

start = time.time()
calculate_power_local(x)
end = time.time()
print(f"Execution time with local variable: {end - start} seconds")

In this example, calculate_power_local will typically run faster than calculate_power_global, because it's using a local variable instead of a global one.

Another optimization strategy is to use built-in functions and libraries whenever possible. Python's built-in functions are implemented in C, which is much faster than Python. Similarly, many Python libraries, such as NumPy and Pandas, are also implemented in C or C++, making them faster than equivalent Python code.

For example, consider the task of summing a list of numbers. You could write a function to do this:

def sum_numbers(numbers):
    total = 0
    for number in numbers:
        total += number
    return total

However, Python's built-in sum function will do the same thing, but faster:

numbers = [1, 2, 3, 4, 5]
total = sum(numbers)

Try timing these two code snippets yourself and figure out which one is faster!
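A quick, self-contained comparison with timeit (absolute numbers will vary by machine, but the built-in consistently wins because it runs in C):

```python
import timeit

setup = "numbers = list(range(1000))"

# Summing with an explicit Python loop
loop_stmt = """
total = 0
for number in numbers:
    total += number
"""

# Summing with the C-implemented built-in
builtin_stmt = "total = sum(numbers)"

loop_time = timeit.timeit(loop_stmt, setup=setup, number=5000)
builtin_time = timeit.timeit(builtin_stmt, setup=setup, number=5000)

print(f"Python loop:  {loop_time:.4f}s")
print(f"Built-in sum: {builtin_time:.4f}s")
```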

Conclusion

In this article, we've explored the interesting world of Python code execution, specifically focusing on why Python code tends to run faster when encapsulated in a function. We briefly looked into the concepts of benchmarking and profiling, providing practical examples of how these processes can be carried out in both a function and the global scope.

We also discussed a few ways to optimize your Python function performance. While these tips can certainly make your code run faster, you should use certain optimizations carefully as it's important to balance readability and maintainability with performance.

Categories: FLOSS Project Planets

Drupal Association blog: Drupal, innovation and the future

Planet Drupal - Tue, 2023-09-12 10:47
The vision

Dries has lined up what’s next for Drupal’s roadmap. Drupal is for ambitious site builders. At this point everyone should know or should have heard his vision.

Rephrasing Dries words, Drupal “[…] has become much bigger than a CMS alone”. But what does “bigger” really mean? Is it a tool to build apps? Could it become an AI toolset? Could it be much more?

For those who don’t know me, I have been involved in Drupal for nearly two decades (pretty much my whole professional career), in roles that span from Software Engineer, Technical Lead, Technical Architect, Solutions Architect, Developer relations, and more recently, Program Manager of the Innovation Program in the Drupal Association (DA).

I was hired by the DA to spearhead an effort to accelerate innovation and grow contribution to Drupal. In my first few months, I've been reading, learning, and listening, gathering my thoughts about the problems we need to solve as a community, and starting to build that “supportive environment, […] allowing time for ideas to flourish” (Dries, keynote 2023). In other words, I'm trying to find, together with my colleagues and the community, what “bigger” means, and how we can plant 1000 seeds and let them bloom.

This is the first part of a series of blog posts where I would like to share my ideas and learnings about the challenges we face, and the opportunities we have to grow as a project and as a community.

Technology, innovation, and the future

Drupal innovation. DrupalCon Pittsburgh echoes are still resonating strongly in my ears. If you were not lucky enough to attend, I will give you a quick one-word summary: INNOVATION. From Dries' keynote to the different sessions, to the conversations around coffee breaks and meals… and even weeks after DrupalCon, people are still talking about this topic. Innovation had a clear presence all over the place.

We left Pittsburgh with some powerful new ideas from the community to explore. But are we doing enough outside of our community to capture the attention and inspire new talent to join Drupal? 

If you want my opinion (and, if you are reading this, this is actually just my thoughts ;-) ), we are at a key moment for Drupal.

Drupal has been in the market for quite a few years. We recently celebrated its 20 year anniversary. Our beloved technology has changed immensely, and thanks to that, Drupal has not just survived, but thrived all these years. Just look at what was around you 20 years ago. The mobile phones, the computers, the cars, or even how we used to build our houses… very little has survived unchanged. In fact, very few technologies have survived since. Drupal, however, has changed, evolved, survived, and thrived.

Drupal is a public good that benefits many people and organisations.

Innovation as I already said was the main topic in this year’s Driesnote, and from what I could hear and read outside of the rooms and on social networks, it was a well-received topic. The keynote was full of hope and made people excited again about Drupal and our future.

Even outside of the keynote, innovation was the trend in many sessions. And, as part of that innovation trend, decoupled had a clear presence, and it could actually be the key for Drupal: the key, amongst other things, to attracting the new developers and young people we need (more on that in my upcoming articles).

You have probably heard or read the StackOverflow survey. If you haven’t, stop what you are doing and go read it.

If you don’t have the time, I’ll give you a couple of snapshots. Technologies that developers want to use (top and bottom of the list):

I have a few takeaways from some of the decoupled sessions and those surveys. On one hand, yes, only 0.43% of respondents want to try Drupal for the first time (though this year's figure is slightly higher, which is a reason for optimism). However, the top technologies that developers want to try are things like ReactJS and NextJS, frameworks that work pretty well alongside Drupal, and that matured years before this trend arrived, making Drupal extremely well suited for this market. I'll come back to this at a later time with more details and thoughts.

Drupal's innovation is invisible, unless we can find a way to reach outside our existing community 

Let me give you another snapshot:

The second takeaway is that Drupal is amongst the most dreaded technologies. Yes, that is rough. However, is that because the poll was answered by people who have not tried the technology since Drupal 7, or even Drupal 6 or 5? With so many changes in the developer experience, performance improvements, and so on, if you haven't tried Drupal in the last few years, you likely have the wrong impression of the framework.

My theory, and what other people have shared with me during the days we spent together at DrupalCon Pittsburgh, is exactly that. Those opinions come from developers who have not tried Drupal for some time. But Drupal has changed an enormous deal since.

Yes, quite a lot of things have changed. Core parts of Drupal have become quite modern and exciting: Symfony components, object orientation, events, … Drupalisms (a pain of the past and a constant source of discussion and beginner friction) have slowly started to fade in favour of Symfony and PHP vocabulary. The result is a more Drupal-agnostic, developer-friendly framework, which should attract much-needed new blood. Or that was the theory…

And the truth is that Drupal nowadays is a powerful, developer friendly, modern framework, full of potential and exciting features and capabilities. But, the young and new developers do not seem to be arriving to Drupal.

Should we maybe make a bigger effort to show how much Drupal has changed, and how powerful and developer friendly it is nowadays? And should we make that clear OUTSIDE of our Drupal communities?

To sum up

I think we can look at those statistics from a different point of view. A point of view which makes me excited and hopeful for the future of Drupal. The most interesting and attractive technologies are precisely those that we have been pushing as a community to support: headless frameworks like Vue, NextJS or ReactJS. If we give just a little push and make Drupal THE CMS of reference for those frameworks, we can guarantee growth and success for the next phase we are already in, while we start thinking about the next one. While we plant the seeds.

The same goes for the problem of attracting younger generations to Drupal.

This is probably not going to be easy. But neither is it an insurmountable task. And it may require not just coding contributions but other contributions as well, like marketing or even collaboration with other communities. We have an amazing and exciting technology that goes beyond CMS. Not just Drupal developers should know about this fact.

So, my message today is an optimistic one. Yes, we don’t have the best statistics in our favour right now to stay optimistic about the PRESENT of Drupal. However, we have some exciting numbers to show what needs to be done to stay VERY optimistic about the FUTURE of Drupal. Let’s prepare for that future, and let’s make Drupal the innovative framework that it has always been by ensuring that everyone outside our community knows about it. 

“Spending the time to capture what Drupal is for could energize and empower people to make better decisions when adopting, building and marketing Drupal”.

I’m here to listen to you. What role would you like for the Drupal Association on this? What would you like the community to do? Do you have ideas? Let’s talk: Find me on Twitter, LinkedIn, or Mastodon.

Categories: FLOSS Project Planets

Brian Okken: pytest Primary Power Course Is Ready

Planet Python - Tue, 2023-09-12 10:33
Everything you need to get started with pytest and use it effectively. Learn about:

  • Test functions: structure test functions effectively
  • Fixtures: setup, teardown, and so much more
  • Builtin fixtures: many common testing tasks are pre-built and ready to use
  • Parametrization: turn one test into many test cases
  • Markers: builtin markers provided by pytest, custom markers for test selection, and combining markers and fixtures to alter pytest behavior

Back To School Special: To celebrate wrapping up the first course, I’m offering pytest Primary Power for $49 for a limited time, and the bundle for $99.
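As a small taste of the topics the course covers, here is a minimal sketch combining a fixture, parametrization, and a custom marker (the test names and the `slow` marker are illustrative, not from the course itself):

```python
import pytest

@pytest.fixture
def numbers():
    # Setup: provide test data; teardown code would go after a yield
    return [1, 2, 3]

def test_sum(numbers):
    # The fixture value is injected by naming it as a parameter
    assert sum(numbers) == 6

@pytest.mark.parametrize("value, squared", [(2, 4), (3, 9), (4, 16)])
def test_square(value, squared):
    # One test function, three test cases
    assert value ** 2 == squared

@pytest.mark.slow  # custom marker; register it in pytest.ini, then select with `pytest -m "not slow"`
def test_big_sum():
    assert sum(range(1_000_000)) == 499999500000
```

Running `pytest -v` on this file reports five passing tests: one from the fixture-based test, three from the parametrized cases, and one marked `slow`.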
Categories: FLOSS Project Planets

GNU Guix: A new Quality Assurance tool for Guix

GNU Planet! - Tue, 2023-09-12 10:30

Maintaining and expanding Guix's collection of packages can be complicated. As a distribution with around 22,000 packages, spanning around 7 architectures and with support for cross-compilation, it's quite common for problems to occur when making changes.

Quality Assurance (QA) is a general term to describe the approach taken to try and ensure something meets expectations. When applied to software, the term testing is normally used. While Guix is software, and has tests, much more than those tests are needed to maintain Guix as a distribution.

So what might quality relate to in the context of Guix as a distribution? This will differ from person to person, but these are some common concerns:

  • Packages successfully building (both now, and without any time bombs for the future)
  • The packaged software functioning correctly
  • Packages building on or for a specific architecture
  • Packages building reproducibly
  • Availability of translations for the package definitions
Tooling to help with Quality Assurance

There's a range of tools to help maintain Guix. The package linters are a set of simple tools, they cover basic things from the naming of packages to more complicated checkers that look for security issues for example.

The guix weather tool looks at substitute availability information and can indicate how many substitutes are available for the current Guix and system. The guix challenge tool is similar, but it highlights package reproducibility issues, which is when the substitutes and local store items (if available) differ.

For translations, Guix uses Weblate which can provide information on how many translations are available.

The QA front-page

Then there's the relatively new Quality Assurance (QA) front-page, the aim of which is to bring together some of the existing Quality Assurance related information, as well as being a good place to do additional QA tasks.

The QA front-page started as a service to coordinate automated testing for patches. When a patch or patch series is submitted to guix-patches@gnu.org, it is automatically applied to create a branch; then once the information is available from the Data Service about this branch, the QA front-page web interface lets you view which packages were modified and submits builds for these changes to the Build Coordinator behind bordeaux.guix.gnu.org to provide build information about the modified packages.

A very similar process applies for branches other than the master branch, the QA front-page queries issues.guix.gnu.org to find out which branch is going to be merged next, then follows the same process for patches.

For both patches and branches the QA front-page displays information about the effects of the changes. When this information is available, it can assist with reviewing the changes and help get patches merged quicker. This is a work in progress though, and there's much more that the QA front-page should be able to do, such as providing clearer descriptions of the changes or of any other problems that should be addressed.

How to get involved?

There's plenty of ways to get involved or contribute to the QA front-page.

If you submit patches to Guix, the QA front-page will attempt to apply the patches and show what's changed. You can click through from issues.guix.gnu.org to qa.guix.gnu.org via the QA badge by the status of the issue.

From the QA front-page, you can also view the list of branches which includes the requests for merging if they exist. Similar to the patch series, for the branch the QA front-page can display information about the package changes and substitute availability.

There's also plenty of ways to contribute to the QA front-page and connected tools. You can find some ideas and information on how to run the service in the README and if you have any questions or patches, please email guix-devel@gnu.org.

Acknowledgments

Thanks to Simon Tournier and Ludovic Courtès for providing feedback on an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Real Python: Inheritance and Internals: Object-Oriented Programming in Python

Planet Python - Tue, 2023-09-12 10:00

Python includes mechanisms for writing object-oriented code where the data and operations on that data are structured together. The class keyword is how you create these structures in Python. The definition of a class can be based on other classes, allowing the creation of hierarchical structures and promoting code reuse. This mechanism is known as inheritance.

In this course, you’ll learn about:

  • Basic class inheritance
  • Multi-level inheritance, or classes that inherit from classes
  • Classes that inherit directly from more than one class, or multiple inheritance
  • Special methods that you can use when writing classes
  • Abstract base classes for classes that you don’t want to fully implement yet
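A minimal sketch of several of the concepts listed above: basic inheritance, multi-level inheritance, and abstract base classes (the `Shape`/`Rectangle`/`Square` names are just illustrative, not taken from the course):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract base class: subclasses must implement area()."""
    def __init__(self, name):
        self.name = name

    @abstractmethod
    def area(self):
        ...

    def describe(self):
        # Works for every subclass, because area() is guaranteed to exist
        return f"{self.name}: area {self.area()}"

class Rectangle(Shape):
    def __init__(self, width, height):
        super().__init__("rectangle")  # call the parent initializer
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    """Multi-level inheritance: Square -> Rectangle -> Shape."""
    def __init__(self, side):
        super().__init__(side, side)
        self.name = "square"

print(Square(3).describe())  # square: area 9
```

Note that `Shape` itself cannot be instantiated: Python raises a `TypeError` because `area()` is abstract, which is exactly the "don't want to fully implement yet" behavior mentioned above.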

This course is the second in a three-part series. Part one is an introduction to class syntax, teaching you how to write a class and use its attributes and methods. Part three dives deeper into the philosophy behind writing good object-oriented code.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

John Goerzen: A Maze of Twisty Little Pixels, All Tiny

Planet Debian - Tue, 2023-09-12 09:40

Two years ago, I wrote Managing an External Display on Linux Shouldn’t Be This Hard. Happily, since I wrote that post, most of those issues have been resolved.

But then you throw HiDPI into the mix and it all goes wonky.

If you’re running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications (which means a logout/in because some of your applications you can’t really re-launch). That is a problem if, like me, you sometimes connect an external display that is HiDPI, sometimes not, or your internal display is HiDPI but others aren’t. Wayland is far better, supporting on-the-fly resizes quite nicely.

I’ve had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that.

On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11 and ran my HiDPI displays at lower resolutions and lived with the fuzziness.

But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right.

I tried both Gnome and KDE. Here are my observations with both:

Gnome

I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home, or maybe even prefer it, to my usual xmonad-style approach. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section.

Gnome continues to have an absolutely terrible picture for configuring things. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even with Extensions, and then there are sometimes options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable.

Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there’s no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote “We’re not going to add a preference just because you want one”. KDE, on the other hand, aside from not mucking with your system’s power settings in this way, has a nice panel with “on AC” and “on battery” and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff.

While Gnome has excellent support for Wayland, it doesn’t (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but no 150%. Well, unless you manage to discover that you can run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" first. (Oh wait, does that make a FIFTH settings tool? Why yes it does.) Despite its name, that allows you to select fractional scaling under Wayland. For X11 apps, they will be blurry, a problem that is optional under KDE (more on that below).

Gnome won’t show the battery life time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn’t exist on KDE.

Both Gnome and KDE support “night light” (warmer color temperatures at night), but Gnome’s often didn’t change when it should have, or changed on one display but not the other.

The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don’t have their icon display anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this.

Unlike KDE, which has a nice inobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I’m about to dd a Debian installer to it, I definitely don’t want it mounted. I learned this the hard way. It is particularly annoying because in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn’t have a nice removable-drives icon like KDE does) and it’s a bunch of annoying clicks, and I didn’t want to use the GUI file manager anyway. Same for unmounting; two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc.

The ssh agent on Gnome doesn’t start up for a Wayland session, though this is easily enough worked around.

The reason I completely soured on Gnome is that after using it for awhile, I noticed my laptop fans spinning up. One core would be constantly busy. It was busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life this implies.

The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly.

KDE

The KDE experience on Wayland was a little bit opposite of Gnome. While with Gnome, things start out looking great but you realize there are some serious issues (especially battery-eating), with KDE things start out looking a tad rough but you realize you can trivially fix them and wind up with a very solid system.

Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn’t muck with my CPU power settings, and lets me easily define “on AC” vs “on battery” settings for things like suspend when idle.

KDE supports fractional scaling, to any arbitrary setting (even with the gsettings thing above, Gnome still only supports it in 25% increments). Then the question is what to do with X11-only applications. KDE offers two choices. The first is “Scaled by the system”, which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays. The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is “Apply scaling themselves”, which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you’ll have to restart the X apps to get them to be the correct size. I appreciate the choice, and use “Apply scaling by themselves” because only a few of my apps aren’t Wayland-aware.

I did encounter a few bugs in KDE under Wayland:

sddm, the display manager, would be slow to stop and cause a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec.

Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine.

The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it.

On Debian, KDE mysteriously installed Pulseaudio instead of Debian’s new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine).

Conclusions

I’m sticking with KDE. Given that I couldn’t figure out how to stop Gnome from deciding to eat enough battery to make my fan come on, the decision wasn’t hard. But even if it weren’t for that, I’d have gone with KDE. Once a couple of things were patched, the experience is solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome.

Update: Corrected the gsettings command

Categories: FLOSS Project Planets

PyBites: Debunking 7 Myths About Software Development Coaching

Planet Python - Tue, 2023-09-12 08:27

If you give a man a fish, you feed him for a day. If you teach a man to fish, you feed him for a lifetime.

Chinese proverb

Transformative power of guidance

10 years ago I was overweight, maybe not more than +12 kg, but it definitely had a bearing on the quality of my life and (!) future health perspective.

Back then I thought I ate reasonably healthy and did my daily walk (the World Health Organization (WHO) recommends you “do at least 150–300 minutes of moderate-intensity aerobic physical activity”, I definitely did that).

Now I know: I simply ate too much sugar and my total caloric intake was too high. I also wasn’t hitting the gym. No muscles, no “engine” to burn more energy (and even when I started a routine I lost muscle because I did not align my diet for it to be effective).

The difference between THEN “assuming I was doing things ok-ish” yet getting bad results, and NOW, maintaining a lean physique quite effortlessly, not having to be a saint with my diet either, is … coaching.

When I was out of shape, I recognized a significant gap and grew increasingly frustrated with the status quo. Until one day I said: “f* it, I need to address this!” or I will live with the negative consequences the rest of my life!

So I sought help. What I did not know yet at the time was that getting a professional coach will move you from the slow to fast lane.

Success leaves clues.

Jim Rohn

… and a coach will show you those clues!

It’s also kind of reassuring knowing that with a coach you just have to listen to that “one source of truth” (especially in this all too distracting world!). That when you follow their advice and put in the daily effort, you will get similar results (at least relative to the level you are currently at).

So I did just that, dropped 10kg, live in a body I am happy with, and the rest is history.

Effective coaching has that power. It can get you out of a rut, it will give you clarity about your goals and makes you laser focused in order to achieve them.

Just as I needed guidance to navigate my fitness journey, many find the same to be true in their software development careers.

However, some people are skeptical. They see “coaching” more as a tool for athletes and business leaders.

Which brings me to …

Debunking 7 common myths of software coaching

In the rest of this article I will show you why it’s a must for software developers too.

Although there is a fundamental difference between “getting lean” and “landing a developer job”, as an (aspiring) developer, applying the general principles of coaching can help you reach your goals faster.

Myth 1

What will I gain from software coaching that I can’t just figure out on my own?

There are so many (free) resources out there. You can get a whole education just by spending hours consuming them, right?

Wrong. This mentality leads to what’s known as “tutorial paralysis.”

Tutorial paralysis is the phenomenon where individuals become overly reliant on tutorials and educational content. Instead of actively working on projects or problems on their own, they continue to watch or read one tutorial after another, mistakenly believing they are making progress.

In reality, they are stuck in a loop of passive learning without any real-world application. Your time and effort is best spent working on concrete goals, with somebody that reviews your work giving continuous expert advice and keeps the higher-level goal in mind.

No “passive” learning method gives you this, and that’s where all of the free resources fall short. They are valuable but only as an add-on to a goal-focused + guided approach.

Even classroom training suffers from this shortcoming; it’s too passive. The information needs to go both ways, and feedback on goal-oriented work is what really sticks and where the real learning happens. This isn’t rocket science; we see it every day with the people that work with us.

Myth 2

It’s just about the tech skills.

This definitely is an appealing thought which we entertained for a long time.

Until we reflected back on our careers and made a balance sheet of what “assets” really contributed to our growth. Tech skills were high up there, unmissable, but so were “soft skills” (we like to group them under “mindset” rather).

Things like the ability to communicate well, asserting influence, negotiation skills and grit + persistence. Coaching by a HUMAN is super powerful here, because it’s through human interaction and 1:1 (and group) conversations that we nurture those types of skills.

It also requires a deep trust in the person you work with, because this stuff is often very personal. Coaching is built on trust and that’s also where you can get very deep.

You might actually not know what deeper issues you have stashed away that are consciously (or unconsciously) holding you back. Working closely with a coach you trust, and through working on complex things together (which again will exercise both tech and soft skills), deeper things get addressed that you were not even aware of in the first place. This is important, and that’s where we have seen people’s progress go through the roof.

Myth 3

I am too much of a beginner or too advanced for coaching.

Coaching is for all levels.

For a beginner, coaching provides an incredible boost of motivation and a foundation in the basics.

But for those more advanced, its value doesn’t diminish. In fact, even top professionals in various fields, from sports to business, continuously seek coaching to refine their skills and gain new insights.

With a more advanced person, coaching can be about fine-tuning, autonomous growth, and strategic course correction.

You might think: “My situation is so unique, I doubt a coach can help me”. Again, coaches are humans so they can (and will) adjust their styles and levels to each person they work with and at all phases of the coaching journey. It’s the perfect customized learning form, and this is the reason we think it’s so highly effective.

Regardless of your skill level, whether you’re a beginner or advanced, coaching can be tailored to meet your specific needs.

Myth 4

Fitness milestones are very tangible, for software devs this is not the case.

True, right? Fitness is all about the nominal weight progression (measured daily on the scale), a six-pack for the more fitness aficionados, calories tracked, number of cheat meals. All very tangible indeed.

But in software we can get very specific too:

  • Number of quality projects on your GitHub, packages shipped to PyPI.
  • Code quality can be measured, both by how you write code and the general “care” you put into your projects (e.g. adding a test suite + proper documentation; why is FastAPI so popular?)
  • Number of successful code reviews or pull requests merged.
  • Number of tech blog posts (or YouTube videos or other content pieces) published every year.
  • Number of meaningful contributions to open-source projects (“greens” on your GitHub profile).
  • Etc.

Everybody that we’ve worked with has improved on multiple aspects above, both because their tech skills improved but also their confidence to start (or continue) putting their work out there. As the saying goes:

The harder I work the luckier I get!

– Samuel Goldwyn

Myth 5

It takes too much time and/or with enough time I will figure it out myself.

The beauty of coaching is that results show up after months (not years), sometimes even weeks!

This is because it changes the way you think, and everything starts with thought. And this will compound over time, because a new mindset will pay dividends moving forward. So no, it does not take too much time per se.

The “I will figure it out” is a bit more insidious, because yes, you can get very far by yourself.

However, there is a category of unknown unknowns that is hard, if not impossible, to really see and grasp without having worked with more experienced people in your field. They will open your eyes.

Going through PDM was eye-opening. Once shown what’s possible, you can’t unsee the potential—or the challenges.

PDM Client (a year after finishing the program)

Myth 6

A coach will do the work for me, so I won’t learn as effectively.

When I started as a coach I fell into the trap of doing too much for my clients, specifically writing parts of the code. It’s typical for the engineer in us: we love to code, hence we will do so whenever we can.

But that’s not the most effective approach for people that need to learn and really understand. Hence I changed my approach and now prefer to show the way. There is no better experience (for both coach and client) than “showing just enough”, enabling clients to find answers by themselves.

It’s often said that coaches unlock people’s potential. This is interesting, it means that you already have it in you, but you mostly need the help of a coach to get it out. Coaching people is much more about enabling them to succeed and this is again a very human endeavor!

This also means that a coach does not have to have all the answers; they are learning with you. They cannot (and should not) be specialists in all fancy new technologies (falling into the trap of shiny new object syndrome); they are much better when they have a wide scope and a generalist skillset (for the reading list: Range).

Myth 7

Coaching / working with somebody is expensive.

Yes, the upfront cost of coaching might seem high. However, what you should consider is the Return on Investment (ROI) it offers.

Think about it this way: if a coach accelerates your learning and career progression by even a year or two, how much is that worth in salary, job opportunities, or personal growth? The insights, skills, and connections you gain through coaching can be invaluable. These benefits can lead to better job positions, increased earning potential, and greater job satisfaction—outcomes that far exceed the initial cost of coaching.

A prime example is Matt, a participant in our PDM program. Through coaching, he not only developed his technical skills but also saw a significant boost in his earnings:

Furthermore, there’s the non-tangible ROI. The increased confidence, clearer direction, reduced stress, and the elimination of potentially years of wandering aimlessly in your career, wondering if you’re doing the right things.

Every change requires an investment. Investing in coaching isn’t just about spending money; it’s about investing in your future self. The prospect of making exponential leaps in your career and personal growth makes the cost of coaching pale in comparison.

It takes courage to invest in your growth. But the regret of missed opportunities and unfulfilled potential can be a much greater expense in the long run.

Conclusion

Just as the old proverb goes: “We don’t give you the fish, we teach you how to fish.” By embarking on a coaching journey, you’re not only acquiring immediate skills but also fostering a transformative mindset that will be the cornerstone of your future growth and success.

Beyond tangible results like completed projects and an enriched GitHub profile, the true value lies in the new approach and perspective you’ll adopt. An approach that empowers you to tackle challenges more efficiently and capitalize on opportunities more effectively, setting you up for long-term success in your career.

Are you ready to leap forward in your development journey?

Check out our Python coaching options

I went from being unsure about my skills and feeling like an imposter to launching an MVP (Minimal Viable Product); a cloud based video trans-coding solution. At the outset of the program they gave me a survey of my goals and desired outcomes and molded my time with them to suit me and those goals.

Aaron J (PDM Client in Canada)
Categories: FLOSS Project Planets

Checkpoint Restore For Graphical Applications using QtWayland compositor handoffs

Planet KDE - Tue, 2023-09-12 04:22

Checkpoint restore allows you to save the state of an application to disk at any given point during its execution, and then recover it at that exact point.

Categories: FLOSS Project Planets

Pages