Feeds

Programiz: Python for loop

Planet Python - Wed, 2024-01-03 10:58
In this article, we'll learn how to use a for loop in Python with the help of examples.
Categories: FLOSS Project Planets

Real Python: Python's Magic Methods: Leverage Their Power in Your Classes

Planet Python - Wed, 2024-01-03 09:00

As a Python developer who wants to harness the power of object-oriented programming, you’ll love to learn how to customize your classes using special methods, also known as magic methods or dunder methods. A special method is a method whose name starts and ends with a double underscore. These methods have special meanings for Python.

Python automatically calls magic methods as a response to certain operations, such as instantiation, sequence indexing, attribute managing, and much more. Magic methods support core object-oriented features in Python, so learning about them is fundamental for you as a Python programmer.

In this tutorial, you’ll:

  • Learn what Python’s special or magic methods are
  • Understand the magic behind magic methods in Python
  • Customize different behaviors of your custom classes with special methods

To get the most out of this tutorial, you should be familiar with general Python programming. More importantly, you should know the basics of object-oriented programming and classes in Python.


Getting to Know Python’s Magic or Special Methods

In Python, special methods are also called magic methods, or dunder methods. This latter terminology, dunder, refers to a particular naming convention that Python uses to name its special methods and attributes. The convention is to use double leading and trailing underscores in the name at hand, so it looks like .__method__().

Note: In this tutorial, you’ll find the terms special methods, dunder methods, and magic methods used interchangeably.

The double underscores flag these methods as core to some Python features. They help avoid name collisions with your own methods and attributes. Some popular and well-known magic methods include the following:

  • .__init__(): Provides an initializer in Python classes
  • .__str__() and .__repr__(): Provide string representations for objects
  • .__call__(): Makes the instances of a class callable
  • .__len__(): Supports the len() function

This is just a tiny sample of all the special methods that Python has. All these methods support specific features that are core to Python and its object-oriented infrastructure.

Note: For the complete list of magic methods, refer to the special method section on the data model page of Python’s official documentation.

The Python documentation organizes the methods into several distinct groups.

Take a look at the documentation for more details on how the methods work and how to use them according to your specific needs.

Here’s how the Python documentation defines the term special methods:

A method that is called implicitly by Python to execute a certain operation on a type, such as addition. Such methods have names starting and ending with double underscores. (Source)

There’s an important detail to highlight in this definition. Python implicitly calls special methods to execute certain operations in your code. For example, when you run the addition 5 + 2 in a REPL session, Python internally runs the following code under the hood:

>>> (5).__add__(2)
7

The .__add__() special method of integer numbers supports the addition that you typically run as 5 + 2.

Reading between the lines, you’ll realize that even though you can directly call special methods, they’re not intended for direct use. You shouldn’t call them directly in your code. Instead, you should rely on Python to call them automatically in response to a given operation.

Note: Even though special methods are also called magic methods, some people in the Python community may not like this latter terminology. The only magic around these methods is that Python calls them implicitly under the hood. So, the official documentation refers to them as special methods instead.

Magic methods exist for many purposes. All the available magic methods support built-in features and play specific roles in the language. For example, built-in types such as lists, strings, and dictionaries implement most of their core functionality using magic methods. In your custom classes, you can use magic methods to make callable objects, define how objects are compared, tweak how you create objects, and more.
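To make that concrete, here's a minimal sketch (the Playlist class is a hypothetical example, not from the article) of a custom class that plugs into built-in syntax with a few magic methods:

class Playlist:
    def __init__(self, *songs):
        self._songs = list(songs)

    def __len__(self):
        # Supports len(playlist)
        return len(self._songs)

    def __str__(self):
        # Supports str(playlist) and print(playlist)
        return f"Playlist with {len(self)} songs"

    def __call__(self, song):
        # Makes instances callable: playlist("song") adds a track
        self._songs.append(song)

mix = Playlist("Song A", "Song B")
mix("Song C")    # Python runs mix.__call__("Song C")
print(len(mix))  # Python runs mix.__len__() and prints 3
print(mix)       # Python runs mix.__str__()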

Note that because magic methods have special meaning for Python itself, you should avoid naming custom methods using leading and trailing double underscores. Your custom method won’t trigger any Python action if its name doesn’t match any official special method names, but it’ll certainly confuse other programmers. New dunder names may also be introduced in future versions of Python.

Magic methods are core to Python’s data model and are a fundamental part of object-oriented programming in Python. In the following sections, you’ll learn about some of the most commonly used special methods. They’ll help you write better object-oriented code in your day-to-day programming adventure.

Controlling the Object Creation Process

When creating custom classes in Python, probably the first and most common method that you implement is .__init__(). This method works as an initializer because it allows you to provide initial values to any instance attributes that you define in your classes.
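As a minimal sketch (the Point class is a hypothetical example, not from the article):

class Point:
    def __init__(self, x, y):
        # Python calls this automatically right after creating the instance
        self.x = x
        self.y = y

point = Point(21, 42)    # Triggers Point.__init__(point, 21, 42) under the hood
print(point.x, point.y)  # 21 42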

Read the full article at https://realpython.com/python-magic-methods/ »


Categories: FLOSS Project Planets

PyCharm: How To Learn Django: A Comprehensive Guide for Beginners

Planet Python - Wed, 2024-01-03 05:56
Learning Django can be an exciting journey for anyone looking to develop web applications, but it can be intimidating at first. In this article, we’ll provide you with a comprehensive guide on how to learn Django effectively. We’ll explore the prerequisites, the time it takes to become proficient, and various resources to help you master […]
Categories: FLOSS Project Planets

CTI Digital: Drupal Core Major Upgrades

Planet Drupal - Wed, 2024-01-03 05:41

Over the past 12 months, our teams have completed numerous Drupal upgrades. We would like to share our experiences and knowledge with anyone who has yet to undergo this process to help make it smoother for you.

Categories: FLOSS Project Planets

John Goerzen: Consider Security First

Planet Debian - Tue, 2024-01-02 19:38

I write this in the context of my decision to ditch Raspberry Pi OS and move everything I possibly can, including my Raspberry Pi devices, to Debian. I will write about that later.

But for now, I wanted to comment on something I think is often overlooked and misunderstood by people considering distributions or operating systems: the huge importance of getting security updates in an automated and easy way.

Background

Let’s assume that these statements are true, which I think are well-supported by available evidence:

  1. Every computer system (OS plus applications) that can do useful modern work has security vulnerabilities, some of which are unknown at any given point in time;

  2. During the lifetime of that computer system, some of these vulnerabilities will be discovered. For a (hopefully large) subset of those vulnerabilities, timely patches will become available.

Now then, it follows that applying those timely patches is a critical part of having a system that is as secure as possible. Of course, you have to do other things as well – good passwords, secure practices, etc – but, fundamentally, if your system lacks patches for known vulnerabilities, you’ve already lost at the security ballgame.

How to stay patched

There is something of a continuum of how you might patch your system. It runs roughly like this, from best to worst:

  1. All components are kept up-to-date automatically, with no intervention from the user/operator

  2. The operator is automatically alerted to necessary patches, and they can be easily installed with minimal intervention

  3. The operator is automatically alerted to necessary patches, but they require significant effort to apply

  4. The operator has no way to detect vulnerabilities or necessary patches

It should be obvious that the first situation is ideal. Every other situation relies on the timeliness of human action to keep up-to-date with security patches. This is a fallible situation; humans are busy, take trips, dismiss alerts, miss alerts, etc. That said, it is rare to find any system living truly all the way in that scenario, as you’ll see.

What is “your system”?

A critical point here is: what is “your system”? It includes:

  • Your kernel
  • Your base operating system
  • Your applications
  • All the libraries needed to run all of the above

Some OSs, such as Debian, make little or no distinction between the base OS and the applications. Others, such as many BSDs, have a distinction there. And in some cases, people will compile or install applications outside of any OS mechanism. (It must be stressed that by doing so, you are taking the responsibility of patching them on your own shoulders.)

How do common systems stack up?
  • Debian, with its support for unattended-upgrades, needrestart, debian-security-support, and such, is largely category 1. It can automatically apply security patches, in most cases can restart the necessary services for the patch to take effect, and will alert you when some processes or the system must be manually restarted for a patch to take effect (for instance, a kernel update). Those cases requiring manual intervention are category 2. The debian-security-support package will even warn you of gaps in the system. You can also use debsecan to scan for known vulnerabilities on a given installation. (A minimal sketch of this setup follows after this list.)

  • FreeBSD has no way to automatically install security patches for things in the packages collection. As with many rolling-release systems, you can’t safely automate these package updates, because an update may bring along more than just security patches: it may be a major upgrade that introduces incompatibilities. Unlike Debian’s practice of backporting fixes and thus producing narrowly-tailored patches, forcing upgrades to newer versions precludes a “minimal intervention” install. Therefore, rolling release systems are category 3.

  • Things such as Snap, Flatpak, AppImage, Docker containers, Electron apps, and third-party binaries often contain embedded libraries and such for which you have no easy visibility into their status. For instance, if there was a bug in libpng, would you know how many of your containers had a vulnerability? These systems are category 4 – you don’t even know if you’re vulnerable. It’s for this reason that my Debian-based Docker containers apply security patches before starting processes, and also run unattended-upgrades and friends.
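As promised in the Debian bullet above, here is a rough sketch of that setup on a current Debian stable (the dpkg-reconfigure step and the file it writes are from my own experience, not from this post):

# Install the pieces mentioned above
apt install unattended-upgrades needrestart debian-security-support
# Enable automatic updates
dpkg-reconfigure -plow unattended-upgrades

The second command writes /etc/apt/apt.conf.d/20auto-upgrades with:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";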

The pernicious library problem

As mentioned in my last category above, hidden vulnerabilities can be a big problem. I’ve been writing about this for years. Back in 2017, I wrote an article focused on Docker containers, but which applies to the other systems like Snap and so forth. I cited a study back then that “Over 80% of the :latest versions of official images contained at least one high severity vulnerability.” The situation is no better now. In December 2023, it was reported that, two years after the critical Log4Shell vulnerability, 25% of apps were still vulnerable to it. Also, only 21% of developers ever update third-party libraries after introducing them into their projects.

Clearly, you can’t rely on these images with embedded libraries to be secure. And since they are black box, they are difficult to audit.

Debian’s policy of always splitting libraries out from packages is hugely beneficial; it allows fine-grained analysis of not just vulnerabilities, but also the dependency graph. If there’s a vulnerability in libpng, you have one place to patch it and you also know exactly what components of your system use it.
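For example, with the shared library split out as its own package, both the patch and the audit are one-liners (a sketch; libpng16-16 is the package name on current Debian stable and varies by release):

apt-get install --only-upgrade libpng16-16   # patch the single shared copy
apt-cache rdepends --installed libpng16-16   # list installed packages that depend on it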

If you use snaps, or AppImages, you can’t know if they contain a deeply embedded vulnerability, nor could you patch it yourself if you even knew. You are at the mercy of upstream detecting and remedying the problem – a dicey situation at best.

Who makes the patches?

Fundamentally, humans produce security patches. Often, but not always, patches originate with the authors of a program and then are integrated into distribution packages. It should be noted that every security team has finite resources; there will always be some CVEs that aren’t patched in a given system for various reasons; perhaps they are not exploitable, or are too low-impact, or have better mitigations than patches.

Debian has an excellent security team; they manage the process of integrating patches into Debian, produce Debian Security Advisories, maintain the Debian Security Tracker (which maintains cross-references with the CVE database), etc.

Some distributions don’t have this infrastructure. For instance, I was unable to find this kind of tracker for Devuan or Raspberry Pi OS. In contrast, Ubuntu and Arch Linux both seem to have active security teams with trackers and advisories.

Implications for Raspberry Pi OS and others

As I mentioned above, I’m transitioning my Pi devices off Raspberry Pi OS (Raspbian). Security is one reason. Although Raspbian is a fork of Debian, and you can install packages like unattended-upgrades on it, they don’t work right because they use the Debian infrastructure, and Raspbian hasn’t modified them to use its own infrastructure. I don’t see any Raspberry Pi OS security advisories, trackers, etc. In short, Raspbian lacks the infrastructure to support those Debian tools anyhow.

Not only that, but Raspbian lags behind Debian in both new releases and new security patches, sometimes by days or weeks.

A future post will include instructions for migrating Raspberry Pis to Debian.

Categories: FLOSS Project Planets

Jacob Adams: Fixing My Shell

Planet Debian - Tue, 2024-01-02 19:00

For an embarrassingly long time, my shell has unnecessarily tried to initialize a console font in every kind of interactive terminal.

This leaves the following error message in my terminal:

Couldn't get a file descriptor referring to the console.

It even shows up twice when running tmux!

Clearly I’ve done something horrible to my configuration, and now I’ve got to clean it up.

How does Shell Initialization Work?

The precise set of files a shell reads at start-up is somewhat complex, and defined by this excellent chart1:

For the purposes of what I’m trying to fix, there are two paths that matter.

  • Interactive login shell startup
  • Interactive non-login shell startup

As you can see from the above, trying to distinguish these two paths in bash is an absolute mess. zsh, in contrast, is much cleaner and allows for a clear distinction between these two cases, with login shell configuration files as a superset of configuration files used by non-login shells.

How did we get here?

I keep my configuration files in a config repository.

Some time ago I got quite frustrated at this whole shell initialization thing, and just linked everything together in one profile file:

.mkshrc -> config/profile
.profile -> config/profile

This appears to have created this mess.

Move to ZSH

I’ve wanted to move to zsh for a while, and took this opportunity to do so. So my new configuration files are .zprofile and .zshrc instead of .mkshrc and .profile (though I’m going to retain those symlinks to allow my old configurations to continue working).

mksh is a nice simple shell, but using zsh here allows for more consistency between my home and $WORK environments, and will allow a lot more powerful extensions.

Updating my Prompt

ZSH prompts use a totally different configuration syntax based on variable expansion. zsh reads its prompt from the PROMPT variable, so I set that to the values zsh needs.

There’s an excellent ZSH prompt generator at https://zsh-prompt-generator.site that I used to get these variables, though I’m sure they’re in the zsh documentation somewhere as well.

I wanted a simple prompt with user (%n), host (%m), and path (%d). I also wanted a % at the end to distinguish this from other shells.

PROMPT="%n@%m%d%% " Fixing mksh prompts

This worked, but surprisingly mksh also looks at PROMPT, leaving my mksh prompt as the literal prompt string without expansion.

Fixing this requires setting up a proper shrc and linking it to .mkshrc and .zshrc.

I chose to move my existing aliases script to this file, as it also broke in non-login shells when moved to profile.

Within this new shrc file we can check what shell we’re running via $0:

if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]

I chose to add plain zsh here in case I run it manually for whatever reason. I also added -zsh to support tmux as that’s what it presents as $0. This also means you’ll need to be careful to quote $0 or you’ll get fun shell errors.

There’s probably a better way to do this, but I couldn’t find something that was compatible with POSIX shell, which is what most of this has to be written in to be compatible with mksh and zsh2.

We can then setup different prompts for each:

if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ] then PROMPT="%n@%m%d%% " else # Borrowed from # http://www.unixmantra.com/2013/05/setting-custom-prompt-in-ksh.html PS1='$(id -un)@$(hostname -s)$PWD$ ' fi Setting Console Font in a Better Way

I’ve been setting console font via setfont in my .profile for a while. I’m not sure where I picked this up, but it’s not the right way. I even tried to only run this in a console with -t but that only checks that output is a terminal, not specifically a console.

if [ -t 1 ]
then
    setfont /usr/share/consolefonts/Lat15-Terminus20x10.psf.gz
fi

This also only runs once the console is logged into, instead of initializing it on boot. The correct way to set this up, on Debian-based systems, is reconfiguring console-setup like so:

dpkg-reconfigure console-setup

From there you get a menu of encoding, character set, font, and then font size to configure for your consoles.

VIM mode

To enable VIM mode for ZSH, you simply need to set:

bindkey -v

This allows you to edit your shell commands with basic VIM keybinds.

Getting back Ctrl + Left Arrow and Ctrl + Right Arrow

Moving around one word at a time with Ctrl and the arrow keys is broken by vim mode unfortunately, so we’ll need to re-enable it:

bindkey "^[[1;5C" forward-word bindkey "^[[1;5D" backward-word Better History Search

But of course we’re now using zsh so we can do better than just the same configuration as we had before.

There’s an excellent substring history search plugin that we can just source without a plugin manager3

source $HOME/config/zsh-history-substring-search.zsh
# Keys are weird, should be ^[[ but it's not
bindkey '^[[A' history-substring-search-up
bindkey '^[OA' history-substring-search-up
bindkey '^[[B' history-substring-search-down
bindkey '^[OB' history-substring-search-down

For some reason my system uses ^[OA and ^[OB as up and down keys. It seems ^[[A and ^[[B are the defaults, so I’ve left them in, but I’m confused as to what differences would lead to this. If you know, please let me know and I’ll add a footnote to this article explaining it.

Back to history search. For this to work, we also need to setup history logging:

export SAVEHIST=1000000
export HISTFILE=$HOME/.zsh_history
export HISTFILESIZE=1000000
export HISTSIZE=1000000

Why did it show up twice for tmux?

Because tmux creates a login shell. Adding:

echo PROFILE

to profile and:

echo SHRC

to shrc confirms this with:

PROFILE
SHRC
SHRC

For now, profile sources shrc so that running twice is expected.

But after this exploration and diagram, it’s clear we don’t need that for zsh. Removing this will break remote bash shells (see above diagram), but I can live without those on my development laptop.

Removing that line results in the expected output for a new terminal:

SHRC

And the full output for a new tmux session or console:

PROFILE
SHRC

So finally we’re back to a normal state!

This post is a bit unfocused but I hope it helps someone else repair or enhance their shell environment.

If you liked this4, or know of any other ways to manage this I could use, let me know at fixmyshell@tookmund.com.

  1. This chart comes from the excellent Shell Startup Scripts article by Peter Ward. I’ve generated the SVG from the graphviz source linked in the article. 

  2. Technically it has to be compatible with Korn shell, but a quick google seems to suggest that that’s actually a subset of POSIX shell. 

  3. I use oh-my-zsh at $WORK but for now I’m going to simplify my personal configuration. If I end up using a lot of plugins I’ll reconsider this. 

  4. Or if you’ve found any typos or other issues that I should fix. 

Categories: FLOSS Project Planets

Gaming only on Linux, one year in

Planet KDE - Tue, 2024-01-02 18:00

I have now been playing games only on Linux for a year, and it has been great.

With the GPU shortage, I had been waiting for prices to come back to reasonable levels before buying a new GPU. Until then, I had always bought NVIDIA GPUs, as I was using Windows to run games and the NVIDIA drivers had a better “reputation” than the AMD/Radeon ones.

With Valve’s Proton seriously taking off thanks to the Steam Deck, I wanted to get rid of the last computer in the house that was running Microsoft Windows, that I had kept only for gaming.

But the NVIDIA drivers story on Linux had never been great, especially on distributions that move kernel versions quickly to follow upstream releases like Fedora. I had tried using the NVIDIA binary drivers on Fedora Kinoite but quickly ran into some of the issues that we have listed in the docs.

At the time, the Universal Blue project did not exist yet (Jorge Castro started it a bit later in the year), otherwise I would have probably used that instead. If you need NVIDIA support today on Fedora Atomic Desktops (Silverblue, Kinoite, etc.), I heavily recommend using the Universal Blue images.

Hopefully this will get better for NVIDIA users in the future with the work on NVK.

So, at the beginning of last year (January 2023), I finally decided to buy an AMD Radeon RX 6700 XT GPU card.

What a delight. Nothing to set up, fully supported out of the box, works perfectly on Wayland. Valve’s Proton does wonders. I can now play on my Linux box all the games that I used to play on Windows, and they run perfectly. Just from last year, I played Age of Wonders 4 and Baldur’s Gate 3 without any major issue, and did it pretty close to the launch dates. Older titles usually work fairly well too.

Sure, some games require minor tweaks, but it is nothing compared to the horrors of managing a Windows machine. And some games require tweaks on Windows as well (looking at you, Cyberpunk 2077). The best experience is definitely with games bought on Steam, which usually work out of the box. For those where that is not the case, protondb is usually a good source for the tweaks needed to make the games work. I try to keep the list of tweaks I use for the games that I play updated on my profile there.

I am running all of this on Fedora Kinoite with the Steam Flatpak. If you want a more console-like or Steam Deck-like experience on your existing computers, I recommend checking out the great work from the Bazzite team.

Besides Steam, I use Bottles, Cartridge and Heroic Games Launcher as needed (all as Flatpaks). I have not looked at Origin or Uplay/Ubisoft Connect games yet.

According to protondb, the games from my entire Steam library that are not supported are mostly multiplayer games that require some specific anti-cheat that is only compatible with Windows.

I would like to say a big THANK YOU to all the open source graphics and desktop developers out there and to (in alphabetical order) AMD, Collabora, Igalia, Red Hat, Valve, and other companies for employing people or funding the work that makes gaming on Linux a reality.

Happy new year and happy gaming on Linux!

Categories: FLOSS Project Planets

Matthew Garrett: Dealing with weird ELF libraries

Planet Debian - Tue, 2024-01-02 14:04
Libraries are collections of code that are intended to be usable by multiple consumers (if you're interested in the etymology, watch this video). In the old days we had what we now refer to as "static" libraries, collections of code that existed on disk but which would be copied into newly compiled binaries. We've moved beyond that, thankfully, and now make use of what we call "dynamic" or "shared" libraries - instead of the code being copied into the binary, a reference to the library function is incorporated, and at runtime the code is mapped from the on-disk copy of the shared object[1]. This allows libraries to be upgraded without needing to modify the binaries using them, and if multiple applications are using the same library at once it only requires that one copy of the code be kept in RAM.

But for this to work, two things are necessary: when we build a binary, there has to be a way to reference the relevant library functions in the binary; and when we run a binary, the library code needs to be mapped into the process.

(I'm going to somewhat simplify the explanations from here on - things like symbol versioning make this a bit more complicated but aren't strictly relevant to what I was working on here)

For the first of these, the goal is to replace a call to a function (eg, printf()) with a reference to the actual implementation. This is the job of the linker rather than the compiler (eg, if you use the -c argument to tell gcc to simply compile to an object rather than linking an executable, it's not going to care about whether or not every function called in your code actually exists or not - that'll be figured out when you link all the objects together), and the linker needs to know which symbols (which aren't just functions - libraries can export variables or structures and so on) are available in which libraries. You give the linker a list of libraries, it extracts the symbols available, and resolves the references in your code with references to the library.

But how is that information extracted? Each ELF object has a fixed-size header that contains references to various things, including a reference to a list of "section headers". Each section has a name and a type, but the ones we're interested in are .dynstr and .dynsym. .dynstr contains a list of strings, representing the name of each exported symbol. .dynsym is where things get more interesting - it's a list of structs that contain information about each symbol. This includes a bunch of fairly complicated stuff that you need to care about if you're actually writing a linker, but the relevant entries for this discussion are an index into .dynstr (which means the .dynsym entry isn't sufficient to know the name of a symbol, you need to extract that from .dynstr), along with the location of that symbol within the library. The linker can parse this information and obtain a list of symbol names and addresses, and can now replace the call to printf() with a reference to libc instead.

(Note that it's not possible to simply encode this as "Call this address in this library" - if the library is rebuilt or is a different version, the function could move to a different location)

Experimentally, .dynstr and .dynsym appear to be sufficient for linking a dynamic library at build time - there are other sections related to dynamic linking, but you can link against a library that's missing them. Runtime is where things get more complicated.

When you run a binary that makes use of dynamic libraries, the code from those libraries needs to be mapped into the resulting process. This is the job of the runtime dynamic linker, or RTLD[2]. The RTLD needs to open every library the process requires, map the relevant code into the process's address space, and then rewrite the references in the binary into calls to the library code. This requires more information than is present in .dynstr and .dynsym - at the very least, it needs to know the list of required libraries.

There's a separate section called .dynamic that contains another list of structures, and it's the data here that's used for this purpose. For example, .dynamic contains a bunch of entries of type DT_NEEDED - this is the list of libraries that an executable requires. There's also a bunch of other stuff that's required to actually make all of this work, but the only thing I'm going to touch on is DT_HASH. Doing all this re-linking at runtime involves resolving the locations of a large number of symbols, and if the only way you can do that is by reading a list from .dynsym and then looking up every name in .dynstr that's going to take some time. The DT_HASH entry points to a hash table - the RTLD hashes the symbol name it's trying to resolve, looks it up in that hash table, and gets the symbol entry directly (it still needs to resolve that against .dynstr to make sure it hasn't hit a hash collision - if it has it needs to look up the next hash entry, but this is still generally faster than walking the entire .dynsym list to find the relevant symbol). There's also DT_GNU_HASH which fulfills the same purpose as DT_HASH but uses a more complicated algorithm that performs even better. .dynamic also contains entries pointing at .dynstr and .dynsym, which seems redundant but will become relevant shortly.

So, .dynsym and .dynstr are required at build time, and both are required along with .dynamic at runtime. This seems simple enough, but obviously there's a twist and I'm sorry it's taken so long to get to this point.

I bought a Synology NAS for home backup purposes (my previous solution was a single external USB drive plugged into a small server, which had uncomfortable single point of failure properties). Obviously I decided to poke around at it, and I found something odd - all the libraries Synology ships were entirely lacking any ELF section headers. This meant no .dynstr, .dynsym or .dynamic sections, so how was any of this working? nm asserted that the libraries exported no symbols, and readelf agreed. If I wrote a small app that called a function in one of the libraries and built it, gcc complained that the function was undefined. But executables on the device were clearly resolving the symbols at runtime, and if I loaded them into ghidra the exported functions were visible. If I dlopen()ed them, dlsym() couldn't resolve the symbols - but if I hardcoded the offset into my code, I could call them directly.

Things finally made sense when I discovered that if I passed the --use-dynamic argument to readelf, I did get a list of exported symbols. It turns out that ELF is weirder than I realised. As well as the aforementioned section headers, ELF objects also include a set of program headers. One of the program header types is PT_DYNAMIC. This typically points to the same data that's present in the .dynamic section. Remember when I mentioned that .dynamic contained references to .dynsym and .dynstr? This means that simply pointing at .dynamic is sufficient, there's no need to have separate entries for them.
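To make the two views concrete, here is roughly how to compare them with readelf (a sketch, assuming a library named libfoo.so):

readelf --section-headers libfoo.so         # build-time view: empty on these Synology libraries
readelf --program-headers libfoo.so         # runtime view: includes the PT_DYNAMIC entry
readelf --dynamic libfoo.so                 # the DYNAMIC data: DT_NEEDED, DT_HASH and friends
readelf --use-dynamic --dyn-syms libfoo.so  # the symbol list, recovered via the program headers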

The same information can be reached from two different locations. The information in the section headers is used at build time, and the information in the program headers at run time[3]. I do not have an explanation for this. But if the information is present in two places, it seems obvious that it should be able to reconstruct the missing section headers in my weird libraries? So that's what this does. It extracts information from the DYNAMIC entry in the program headers and creates equivalent section headers.

There's one thing that makes this more difficult than it might seem. The section header for .dynsym has to contain the number of symbols present in the section. And that information doesn't directly exist in DYNAMIC - to figure out how many symbols exist, you're expected to walk the hash tables and keep track of the largest number you've seen. Since every symbol has to be referenced in the hash table, once you've hit every entry the largest number is the number of exported symbols. This seemed annoying to implement, so instead I cheated, added code to simply pass in the number of symbols on the command line, and then just parsed the output of readelf against the original binaries to extract that information and pass it to my tool.

Somehow, this worked. I now have a bunch of library files that I can link into my own binaries to make it easier to figure out how various things on the Synology work. Now, could someone explain (a) why this information is present in two locations, and (b) why the build-time linker and run-time linker disagree on the canonical source of truth?

[1] "Shared object" is the source of the .so filename extension used in various Unix-style operating systems
[2] You'll note that "RTLD" is not an acronym for "runtime dynamic linker", because reasons
[3] For environments using the GNU RTLD, at least - I have no idea whether this is the case in all ELF environments

Categories: FLOSS Project Planets

Thomas Koch: Good things come ... state folder

Planet Debian - Tue, 2024-01-02 11:05
Posted on January 2, 2024 Tags: debian, free software, life

Just a little while ago (10 years) I proposed the addition of a state folder to the XDG basedir specification and expanded the article XDGBaseDirectorySpecification in the Debian wiki. Recently I learned that version 0.8 (from May 2021) of the spec finally includes a state folder.

Granted, I wasn’t the first to have this idea (2009), nor the one who actually made it happen.

Now, please go ahead and use it! Thank you.

Categories: FLOSS Project Planets

Thomas Koch: Know your tools - simple backup with rsync

Planet Debian - Tue, 2024-01-02 11:05
Posted on June 9, 2022 Tags: debian, free software

I’ve been using rsync for years and still did not know its full powers. I just wanted a quick and dirty simple backup but realised that rsnapshot is not in Debian anymore.

However you can do much of rsnapshot with rsync alone nowadays.

The --link-dest option (manpage) solves the part of creating hardlinks to a previous backup (found here). So my backup program becomes this shell script in ~/backups/backup.sh:

#!/bin/sh
SERVER="${1}"
BACKUP="${HOME}/backups/${SERVER}"
SNAPSHOTS="${BACKUP}/snapshots"
FOLDER=$(date --utc +%F_%H-%M-%S)
DEST="${SNAPSHOTS}/${FOLDER}"
LAST=$(ls -d1 ${SNAPSHOTS}/????-??-??_??-??-??|tail -n 1)
rsync \
    --rsh="ssh -i ${BACKUP}/sshkey -o ControlPath=none -o ForwardAgent=no" \
    -rlpt \
    --delete --link-dest="${LAST}" \
    ${SERVER}::backup "${DEST}"

The script connects to rsync in daemon mode as outlined in section “USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION” in the rsync manpage. This allows to reference a “module” as the source that is defined on the server side as follows:

[backup]
path = /
read only = true
exclude from = /srv/rsyncbackup/excludelist
uid = root
gid = root

The important bit is the read only setting, which protects the server against somebody with access to the ssh key overwriting files on the server via rsync and thus gaining full root access.

Finally, the command prefix in ~/.ssh/authorized_keys runs rsync as a daemon with sudo and the specified config file:

command="sudo rsync --config=/srv/rsyncbackup/config --server --daemon ."

The sudo setup is left as an exercise for the reader as mine is rather opinionated.

Unfortunately I have not managed to configure systemd timers in the way I wanted and therefore opened an issue: “Allow retry of timer triggered oneshot services with failed conditions or asserts”. Any help there is welcome!

Categories: FLOSS Project Planets

Thomas Koch: Missing memegen

Planet Debian - Tue, 2024-01-02 11:05
Posted on May 1, 2022 Tags: debian, free software, life

Back at $COMPANY we had an internal meme-site. I had some reputation in my team for creating good memes. When I watched Episode 3 of Season 2 of Yes, Prime Minister yesterday, I really missed a place to post memes.

This is the full scene. Please watch it or even the full episode before scrolling down to the GIFs. I had a good laugh for some time.

With Debian, I could just download the episode from somewhere on the net with youtube-dl and easily create two GIFs using ffmpeg, with and without subtitle:

ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 tmp/tragic.gif

ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 \
    -vf "subtitles=tmp/sub.srt:force_style='Fontsize=60'" tmp/tragic_with_subtitle.gif

And this sub.srt file:

1
00:00:10,000 --> 00:00:12,000
Tragic.

I believe one needs to install the libavfilter-extra variant to burn the subtitle into the GIF.

Some

space

to

hide

the

GIFs.

The Prime Minister just learned that his predecessor, who was about to publish embarrassing memoirs, died of a sudden heart attack:

I can’t actually think of a meme with this GIF, that the internal thought police community moderation would not immediately take down.

For a moment I thought that it would be fun to have a Meme-Site for Debian members. But it is probably not the right time for this.

Maybe somebody likes the above GIFs though and wants to use them somewhere.

Categories: FLOSS Project Planets

Thomas Koch: lsp-java coming to debian

Planet Debian - Tue, 2024-01-02 11:05
Posted on March 12, 2022 Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement many different plugins for all different programming languages. With LSP an editor just needs to talk LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places of the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming and many people don’t see a problem with running multiple curl-sh pipes to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over tramp. Once this is solved, I could run emacs on my host and only isolate the code and language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

Categories: FLOSS Project Planets

Thomas Koch: Waiting for a STATE folder in the XDG basedir spec

Planet Debian - Tue, 2024-01-02 11:05
Posted on February 18, 2014 Tags: debian, free software

The XDG Base Directory specification proposes default homedir folders for the categories DATA (~/.local/share), CONFIG (~/.config) and CACHE (~/.cache). One category, however, is missing: STATE. This category has been requested several times but nothing happened.

Examples for state data are:

  • history files of shells, repls, anything that uses libreadline
  • logfiles
  • state of application windows on exit
  • recently opened files
  • last time application was run
  • emacs: bookmarks, ido last directories, backups, auto-save files, auto-save-list

The missing STATE category is especially annoying if you’re managing your dotfiles with a VCS (e.g. via VCSH) and you care to keep your homedir tidy.

If you’re as annoyed as me about the missing STATE category, please voice your opinion on the XDG mailing list.

Of course it’s a very long way until applications really use such a STATE directory. But without a common standard it will never happen.

Categories: FLOSS Project Planets

Cleaning up KDE's metadata - the little things matter too

Planet KDE - Tue, 2024-01-02 09:29

Lots of my KDE contributions revolve around plugin code and their metadata, meaning I have a good overview of where and how metadata is used. In this post, I will highlight some recent changes and show you how to utilize them in your Plasma Applets and KRunner plugins!

Applet and Containment metadata

Applets (or Widgets) are one of Plasma's main selling points regarding customizability. In addition to user-visible information like the name, description and categories, there is a need for some technical metadata properties. These include X-Plasma-API-Minimum-Version for the compatible versions, the plugin ID, and the package structure, which should always be “Plasma/Applet”.

For integrating with the system tray, applets had to specify the X-Plasma-NotificationArea and X-Plasma-NotificationAreaCategory properties. The first one says that the applet may be shown in the system tray, and the second one says which category it belongs in. But since we don't want any applets without categories in there, the first value is redundant and may be omitted! Also, it was treated like a boolean value, but only the strings "true" or "false" were expected. I stumbled upon this when correcting the types in metadata files.
This was noticed due to preparations for improving the JSON linting we have in KDE. I will resume working on it and might also blog about it :).

What most applets in KDE specify is X-KDE-MainScript, which determines the entrypoint of the applet. This is usually “ui/main.qml”, but in some cases the value differs. When working with applets, it is confusing to first have to look at the metadata in order to find the correct file. This key was removed in Plasma6 and the file is always ui/main.qml. Since this was the default value for the property, you may even omit it for Plasma5.
The same filename convention is also enforced for QML config modules (KCMs).

What all applets needed to specify was X-Plasma-API. This is typically set to "declarativeappletscript", but we de facto never used its value; we only enforced that it was present. This was historically needed because in Plasma4, applets could be written in other scripting languages. From Plasma6 onward, you may omit this key.

In the Plasma repositories, this allowed me to clean up over 200 lines of JSON data.
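Putting these rules together, a minimal Plasma6 applet metadata.json can now be as small as the following sketch (the ID, name and description are illustrative; see https://develop.kde.org/docs/plasma/widget/properties/ for the authoritative reference):

{
    "KPlugin": {
        "Id": "org.kde.plasma.exampleapplet",
        "Name": "Example Applet",
        "Description": "Minimal applet metadata"
    },
    "KPackageStructure": "Plasma/Applet",
    "X-Plasma-API-Minimum-Version": "6.0"
}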

metadata.desktop files of Plasma5 addons

Just in case you were wondering: We have migrated from metadata.desktop files to only metadata.json files in Plasma6. This makes providing metadata more consistent and efficient. In case your projects still use the old format, you can run desktoptojson -i pathto/metadata.desktop and remove the file afterward.
See https://develop.kde.org/docs/plasma/widget/properties/#kpackagestructure for more detailed information. You can even do the conversion when targeting Plasma5 users!

Default object path for KRunner DBus plugins

Another nice addition is that “/runner” will now be the default object path. This means you can omit this one key. Check out the template to get started: https://invent.kde.org/frameworks/krunner/-/tree/master/templates/runner6python. DBus-Runners that worked without deprecations in Plasma5 will continue to do so in Plasma6! For C++ plugins, I will write a porting-guide-like blog post soonish, because I have started porting my own plugins to work with Plasma6.
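For illustration, a minimal DBus runner desktop file could now look like this sketch (the service name is a placeholder, and the key names are as I recall them from the KRunner documentation; the linked template is the authoritative reference):

[Desktop Entry]
Name=Example Runner
Comment=Sketch of a KRunner DBus plugin
Type=Service
X-Plasma-API=DBus
X-Plasma-DBusRunner-Service=org.example.runner
# X-Plasma-DBusRunner-Path defaults to /runner and can now be omitted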

Finally, I want to wish you all a happy and productive new year!

Categories: FLOSS Project Planets
