Feeds

Antoine Beaupré: Using LSP in Emacs and Debian

Planet Debian - Wed, 2022-04-27 16:36

The Language Server Protocol (LSP) is a neat mechanism that provides a common interface to what used to be language-specific lookup mechanisms (like, say, running a Python interpreter in the background to find function definitions).

There is also ctags, shipped with UNIX since forever, but that doesn't support looking backwards ("who uses this function?"), linting, or refactoring. In short, LSP rocks. So how do I use it right now in my editor of choice (Emacs, in my case) and OS (Debian), please?
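For a sense of what the protocol actually looks like on the wire: every LSP message is a JSON-RPC payload preceded by a Content-Length header. Here is a small illustrative sketch in Python (the file path and position are made up; this is not tied to any particular server):

```python
import json

def frame(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the Content-Length header LSP uses."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

# A typical request an editor might send: "where is this symbol defined?"
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.py"},
        "position": {"line": 10, "character": 4},
    },
}
message = frame(request)
```

The editor writes such messages to the server's stdin and reads framed responses back, which is why any editor can talk to any server.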

Editor (emacs) setup

First, you need to set up your editor. The Emacs LSP mode has pretty good installation instructions which, for me, currently mean:

apt install elpa-lsp-mode

and this .emacs snippet:

(use-package lsp-mode
  :commands (lsp lsp-deferred)
  :hook ((python-mode go-mode) . lsp-deferred)
  :demand t
  :init
  (setq lsp-keymap-prefix "C-c l")
  ;; TODO: https://emacs-lsp.github.io/lsp-mode/page/performance/
  ;; also note re "native compilation": <+varemara> it's the
  ;; difference between lsp-mode being usable or not, for me
  :config
  (setq lsp-auto-configure t))

(use-package lsp-ui
  :config
  (setq lsp-ui-flycheck-enable t)
  (add-to-list 'lsp-ui-doc-frame-parameters '(no-accept-focus . t))
  (define-key lsp-ui-mode-map [remap xref-find-definitions] #'lsp-ui-peek-find-definitions)
  (define-key lsp-ui-mode-map [remap xref-find-references] #'lsp-ui-peek-find-references))

Note: this configuration might have changed since I wrote this, see my init.el configuration for the most recent config.

The main reason for choosing lsp-mode over eglot is that it's in Debian (and eglot is not). (Apparently, eglot has more chance of being upstreamed, "when it's done", but I guess I'll cross that bridge when I get there.)

I already had lsp-mode partially set up in Emacs, so I only had to do this small tweak to switch and change the prefix key (because s-l or mod is used by my window manager). I also had to pin LSP packages to bookworm here so that it properly detects pylsp (the older version in Debian bullseye only supports pyls, which is not packaged in Debian).

This won't do anything by itself: Emacs needs something to talk to. Those are called "servers": separate programs, one per programming language, that actually provide the magic.

Servers setup

The Emacs package provides a way (M-x lsp-install-server) to install some of them, but I prefer to manage those tools through Debian packages if possible, just like lsp-mode itself. Those are the servers I currently know of in Debian:

package                  languages
ccls                     C, C++, ObjectiveC
clangd                   C, C++, ObjectiveC
elpa-lsp-haskell         Haskell
fortran-language-server  Fortran
gopls                    Golang
python3-pyls             Python

There might be more such packages, but those are surprisingly hard to find. I found a few with apt search "Language Server Protocol", but that didn't find ccls, for example, because its description just says "Language Server". Searching for that shorter phrase instead also turned up a few more pyls plugins (e.g. black support).

Note that the Python packages, in particular, need to be upgraded to their bookworm releases to work properly (here). It seems like there's some interoperability problems there that I haven't quite figured out yet. See also my Puppet configuration for LSP.

Finally, note that I have now completely switched away from Elpy to pyls, and I'm quite happy with the results. lsp-mode feels slower than elpy but I haven't done any of the performance tuning and this will improve even more with native compilation. And lsp-mode is much more powerful. I particularly like the "rename symbol" functionality, which ... mostly works.

Remaining work

Puppet and Ruby

I still have to figure out how to actually use this: I mostly spend my time in Puppet these days. There is no server listed in the Emacs lsp-mode language list, but there is one over at the upstream language list: the puppet-editor-services server.

But it's not packaged in Debian, and seems somewhat... involved. It could still be a good productivity boost. The Voxpupuli team have vim install instructions which also suggest installing solargraph, the Ruby language server, also not packaged in Debian.

Bash

I guess I do a bit of shell scripting from time to time nowadays, even though I don't like it. So the bash-language-server may prove useful as well.

Other languages

Here are more language servers available:

Categories: FLOSS Project Planets

Anarcat: building Debian packages under qemu with sbuild

Planet Python - Wed, 2022-04-27 16:29

I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu based configuration.

Why

I want to use qemu mainly because it provides better isolation than a chroot. I sponsor packages sometimes and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt.

I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, vagrant, GNOME Boxes, etc. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots...

I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), or whatever: I didn't feel those offer the level of isolation that is provided by qemu.

The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.

How

Basically, you need this:

sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian

Then, to make this the default, add this to ~/.sbuildrc:

# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';

Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about this, with something like this:

# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img' ];

This configuration will:

  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be the default)
  4. tell autopkgtest to use qemu for builds and for tests
Remaining work

One thing I haven't quite figured out yet is the equivalent of those schroot-specific commands from my quick Debian development guide:

  • sbuild -c unstable-amd64-sbuild - build in the unstable chroot even though another suite is specified (e.g. UNRELEASED, unstable-backports or unstable-security)

  • schroot -c unstable-amd64-sbuild - enter the unstable chroot to make tests, changes will be discarded

  • sbuild-shell unstable - enter the unstable chroot to make permanent changes, which will not be discarded

In other words: "just give me a shell in that VM". It seems to me autopkgtest-virt-qemu should have a magic flag that does that, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there. When autopkgtest massages it just the right way, however, it will do this funky commandline:

qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm

... which is a typical qemu commandline, I regret to announce. I managed to somehow boot a VM similar to the one autopkgtest provisions with this magic incantation:

mkdir tmp
cd tmp
qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
mkdir shared
qemu-system-x86_64 -m 4096 -smp 2 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:$PWD/monitor,server,nowait -serial unix:$PWD/ttyS0,server,nowait -serial unix:$PWD/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=$PWD/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=$PWD/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm

That gives you a VM like autopkgtest which has those peculiarities:

  • the shared directory is, well, shared with the VM
  • port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
  • the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with those)
  • the monitor socket is a qemu control socket (see the QEMU monitor documentation)

So I guess I could make a script out of this but for now this will have to be good enough.

Nitty-gritty details no one cares about

I'm having a hard time making heads or tails of this, but please bear with me.

In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyways. This behavior seems to be handled by the internal "Session Purged" parameter.

At least in lib/Sbuild/Build.pm, we can see this:

my $is_cloned_session = (defined ($session->get('Session Purged'))
                         && $session->get('Session Purged') == 1) ? 1 : 0;
[...]
if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}

The schroot builder defines that parameter as:

$self->set('Session Purged', $info->{'Session Purged'});

... which is ... a little confusing to me. $info is:

my $info = $self->get('Chroots')->get_info($schroot_session);

... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there...

ChrootUnshare.pm is way more explicit:

$self->set('Session Purged', 1);

I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least certainly with things that clean up after themselves. Right?

For some reason, before I added this line to my configuration:

$purge_build_deps = 'never';

... the "Cleanup" step would just completely hang. It was quite bizarre.

Who

Thanks lavamind for the introduction to the sbuild-qemu package.

Categories: FLOSS Project Planets


Mike Driscoll: Python 101 - The REPL (Video)

Planet Python - Wed, 2022-04-27 16:23

In this tutorial, you will learn what a REPL is and why it is useful. I also show you a couple of alternative REPL environments in this tutorial, such as IDLE and IPython.

Related Articles

The post Python 101 - The REPL (Video) appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

FSF News: FSF job opportunity: Licensing and compliance manager

GNU Planet! - Wed, 2022-04-27 15:06
The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a motivated and talented Boston-based individual to be our full-time licensing and compliance manager.
Categories: FLOSS Project Planets

"Morphex's Blogologue": Some more work on an Ethereum (classic) accounting tool

Planet Python - Wed, 2022-04-27 12:07
So, I've hacked some more on the tool I'm building for accounting purposes.

I guess since the last time I've posted on it, there are mainly two things I've been working on, one is valuation of crypto currency, the other is correctness of generated CSVs.

I've followed a simple principle when it comes to valuation of the crypto currency: first in, first out. Other options could be first in, highest value out, or first in, lowest value out.

It looks like in Norway, there is also the option of first in, whichever you want out; meaning you can receive crypto, and for tax purposes, choose which crypto currency you sell first. Which could be useful for tax purposes, to increase or decrease the tax owed due to dealings with crypto currency.
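First-in-first-out matching can be sketched in a few lines of Python. This is a hypothetical, simplified model (one asset, no fees, invented numbers), not the tool's actual code:

```python
from collections import deque

def fifo_gain(buys, sells):
    """Match sells against the oldest buys first (FIFO) and return total gain.

    buys and sells are lists of (amount, unit_price) tuples, oldest first.
    Simplified: a single asset and no transaction fees.
    """
    lots = deque(buys)
    gain = 0.0
    for amount, sell_price in sells:
        while amount > 0:
            lot_amount, cost = lots[0]
            used = min(amount, lot_amount)
            gain += used * (sell_price - cost)
            amount -= used
            if used == lot_amount:
                lots.popleft()            # lot fully consumed
            else:
                lots[0] = (lot_amount - used, cost)  # partially consumed
    return gain

# Buy 2 units at 10, then 1 unit at 20; later sell 2.5 units at 30.
# FIFO consumes the 10-cost lot first: 2*(30-10) + 0.5*(30-20) = 45.0
fifo_gain([(2, 10), (1, 20)], [(2.5, 30)])
```

The alternative strategies mentioned above would just change which lot is consumed first (highest or lowest cost basis instead of oldest).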

But it's a bit to keep track of, and I think I've gotten most of it done.

Another part however, is making sure the values in the CSV / Spreadsheet are correct, and I noticed that somewhere along the line of transactions, the account balance was off, compared to the state in etherscan.io for example. So I started looking, and figured out that it was due to a transaction that was registered, but didn't complete, because it ran out of gas. The max fee to complete the transaction was too low.

But as the transaction is still registered, you still have to pay the gas fee, so for that transaction, the value of the crypto transferred to another account is zero, and the gas fee still needs to be deducted.
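The balance arithmetic for such a failed transaction can be sketched like this (hypothetical numbers in integer base units; real accounting would derive the fee from gas price × gas used in the chain data):

```python
def balance_delta(value_sent, gas_fee, succeeded):
    """Change to the sender's balance for one transaction, in base units.

    A transaction that ran out of gas transfers nothing,
    but the gas fee is still deducted.
    """
    return -(value_sent + gas_fee) if succeeded else -gas_fee

# Trying to send 1_000_000 units with a 2_000 unit fee:
balance_delta(1_000_000, 2_000, succeeded=True)    # -1_002_000: value + fee
balance_delta(1_000_000, 2_000, succeeded=False)   # -2_000: only the fee
```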

You can see more about this on the go-ethereum issue tracker

https://github.com/ethereum/go-ethereum/issues/24768

there is also information in the commit:

https://github.com/morphex/ethereum-classic-taxman/commit/1a...

I think this shows, really well, how well Python integrates with the command line, and how easy it is to get something done in Python. A handful of changed lines, and it is possible to manually exclude a set of comma-separated transactions.

Of course this is also due to knowledge of how Python works, but yes, a great scripting and prototyping language Python is.
Categories: FLOSS Project Planets

Real Python: Why Is It Important to Close Files in Python?

Planet Python - Wed, 2022-04-27 10:00

At some point in your Python coding journey, you learn that you should use a context manager to open files. Python context managers make it easy to close your files once you’re done with them:

with open("hello.txt", mode="w") as file:
    file.write("Hello, World!")

The with statement initiates a context manager. In this example, the context manager opens the file hello.txt and manages the file resource as long as the context is active. In general, all the code in the indented block depends on the file object being open. Once the indented block either ends or raises an exception, then the file will close.
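You can verify that behavior directly: the file object reports itself closed as soon as the with block ends. A small sketch using a throwaway temp directory (so it doesn't litter your working directory):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hello.txt")

with open(path, mode="w") as file:
    file.write("Hello, World!")
    assert not file.closed   # still open while the block is active

assert file.closed           # closed automatically when the block ends
```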

If you’re not using a context manager or you’re working in a different language, then you might explicitly close files with the try … finally approach:

try:
    file = open("hello.txt", mode="w")
    file.write("Hello, World!")
finally:
    file.close()

The finally block that closes the file runs unconditionally, whether the try block succeeds or fails. While this syntax effectively closes the file, the Python context manager offers less verbose and more intuitive syntax. Additionally, it’s a bit more flexible than simply wrapping your code with try … finally.

You probably use context managers to manage files already, but have you ever wondered why most tutorials and four out of five dentists recommend doing this? In short, why is it important to close files in Python?

In this tutorial, you’ll dive into that very question. First, you’ll learn about how file handles are a limited resource. Then you’ll experiment with the consequences of not closing your files.


In Short: Files Are Resources Limited by the Operating System

Python delegates file operations to the operating system. The operating system is the mediator between processes, such as Python, and all the system resources, such as the hard drive, RAM, and CPU time.

When you open a file with open(), you make a system call to the operating system to locate that file on the hard drive and prepare it for reading or writing. The operating system will then return an unsigned integer called a file handle on Windows and a file descriptor on UNIX-like systems, including Linux and macOS:

A Python process making a system call and getting the integer 10 as the file handle

Once you have the number associated with the file, you’re ready to do read or write operations. Whenever Python wants to read, write, or close the file, it’ll make another system call, providing the file handle number. The Python file object has a .fileno() method that you can use to find the file handle:

>>> with open("test_file.txt", mode="w") as file:
...     file.fileno()
...
4

The .fileno() method on the opened file object will return the integer used by the operating system as a file descriptor. Just like how you might use an ID field to get a record from a database, Python provides this number to the operating system every time it reads or writes from a file.
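Since the descriptor really is just an integer the operating system understands, you can even hand it to the low-level os functions yourself, bypassing the Python file object entirely. A small sketch (the descriptor number you get will vary from run to run):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test_file.txt")

with open(path, mode="w") as file:
    fd = file.fileno()          # the integer the OS uses for this file
    assert isinstance(fd, int)
    # Write through the raw descriptor: same file, no Python file object involved
    os.write(fd, b"written via the raw descriptor\n")
```

Mixing buffered writes on the file object with raw os.write calls on the same descriptor can interleave in surprising ways, which is why the high-level interface is usually preferable.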

Read the full article at https://realpython.com/why-close-file-python/ »


Categories: FLOSS Project Planets

Krita 5.0.6 Released

Planet KDE - Wed, 2022-04-27 07:03

Today we release Krita 5.0.6. This is a bug fix release with two crash fixes:

  • A crash when working with vector layers or vector selections and using undo a lot: BUG:447985
  • A crash deleting a vector layer with an animated transparency mask: BUG:452396


Krita is a free and open source project. Please consider supporting the project by joining the development fund, donating or by buying training videos! With your support, we can keep the core team working on Krita full-time.

Download

Windows

If you’re using the portable zip files, just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the Krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version of Krita. For reporting crashes, also get the debug symbols folder.

Note that we are not making 32 bits Windows builds anymore.

Linux

The separate gmic-qt appimage is no longer needed.

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

macOS

Note: if you use macOS Sierra or High Sierra, please check this video to learn how to enable starting developer-signed binaries, instead of just Apple Store binaries.

Android

We consider Krita on ChromeOS as ready for production. Krita on Android is still beta. Krita is not available for Android phones, only for tablets, because the user interface needs a large screen.

Source code md5sum

For all downloads:

Key

The Linux appimage and the source .tar.gz and .tar.xz tarballs are signed. You can retrieve the public key here. The signatures are here (filenames ending in .sig).

The post Krita 5.0.6 Released appeared first on Krita.

Categories: FLOSS Project Planets

Qt Creator 7.0.1 released

Planet KDE - Wed, 2022-04-27 05:26

We are happy to announce the release of Qt Creator 7.0.1!

Categories: FLOSS Project Planets

Abhijeet Pal: Django 4.1 adds async-compatible interface to QuerySet

Planet Python - Wed, 2022-04-27 03:53

The much-awaited pull request for an async-compatible interface to Queryset just got merged into the main branch of Django.

Pull Request - https://github.com/django/django/pull/14843

The Django core team has been progressively adding async support to the framework. Asynchronous views and middlewares were part of the Django 3.1 release and with the latest changes now Django ORM will be able …

Categories: FLOSS Project Planets

Russ Allbery: Review: Sorceress of Darshiva

Planet Debian - Wed, 2022-04-27 00:30

Review: Sorceress of Darshiva, by David Eddings

Series: The Malloreon #4
Publisher: Del Rey
Copyright: December 1989
Printing: November 1990
ISBN: 0-345-36935-1
Format: Mass market
Pages: 371

This is the fourth book of the Malloreon, the sequel series to the Belgariad. Eddings as usual helpfully summarizes the plot of previous books (the one thing about his writing that I wish more authors would copy), this time by having various important people around the world briefed on current events. That said, you don't want to start reading here (although you might wish you could).

This is such a weird book.

One could argue that not much happens in the Belgariad other than map exploration and collecting a party, but the party collection involves meddling in local problems to extract each new party member. It's a bit of a random sequence of events, but things clearly happen. The Malloreon starts off with a similar structure, including an explicit task to create a new party of adventurers to save the world, but most of the party is already gathered at the start of the series since they carry over from the previous series. There is a ton of map exploration, but it's within the territory of the bad guys from the previous series. Rather than local meddling and acquiring new characters, the story is therefore chasing Zandramas (the big bad of the series) and books of prophecy.

This could still be an effective plot trigger but for another decision of Eddings that becomes obvious in Demon Lord of Karanda (the third book): the second continent of this world, unlike the Kingdoms of Hats world-building of the Belgariad, is mostly uniform. There are large cities, tons of commercial activity, and a fairly effective and well-run empire, with only a little local variation. In some ways it's a welcome break from Eddings's previous characterization by stereotype, but there isn't much in the way of local happenings for the characters to get entangled in.

Even more oddly, this continental empire, which the previous series set up as the mysterious and evil adversaries of the west akin to Sauron's domain in Lord of the Rings, is not mysterious to some of the party at all. Silk, the Drasnian spy who is a major character in both series, has apparently been running a vast trading empire in Mallorea. Not only has he been there before, he has houses and factors and local employees in every major city and keeps being distracted from the plot by his cutthroat capitalist business shenanigans. It's as if the characters ventured into the heart of the evil empire and found themselves in the entirely normal city next door, complete with restaurant recommendations from one of their traveling companions.

I think this is an intentional subversion of the normal fantasy plot by Eddings, and I kind of like it. We have met the evil empire, and they're more normal than most of the good guys, and both unaware and entirely uninterested in being the evil empire. But in terms of dramatic plot structure, it is such an odd choice. Combined with the heroes being so absurdly powerful that they have no reason to take most dangers seriously (and indeed do not), it makes this book remarkably anticlimactic and weirdly lacking in drama.

And yet I kind of enjoyed reading it? It's oddly quiet and comfortable reading. Nothing bad happens, nor seems very likely to happen. The primary plot tension is Belgarath trying to figure out the plot of the series by tracking down prophecies in which the plot is written down with all of the dramatic tension of an irritated rare book collector. In the middle of the plot, the characters take a detour to investigate an alchemist who is apparently immortal, featuring a university on Melcena that could have come straight out of a Discworld novel, because investigating people who spontaneously discover magic is of arguably equal importance to saving the world. Given how much the plot is both on rails and clearly willing to wait for the protagonists to catch up, it's hard to argue with them. It felt like a side quest in a video game.

I continue to find the way that Eddings uses prophecy in this series to be highly amusing, although there aren't nearly enough moments of the prophecy giving Garion stage direction. The basic concept of two competing prophecies that are active characters in the world attempting to create their own sequence of events is one that would support a better series than this one. It's a shame that Zandramas, the main villain, is rather uninteresting despite being female in a highly sexist society, highly competent, a different type of shapeshifter (I won't say more to avoid spoilers for earlier books), and the anchor of the other prophecy. It's good material, but Eddings uses it very poorly, on top of making the weird decision to have her talk like an extra in a Shakespeare play.

This book was astonishingly pointless. I think the only significant plot advancement besides map movement is picking up a new party member (who was rather predictable), and the plot is so completely on rails that the characters are commenting about the brand of railroad ties that Eddings used. Ce'Nedra continues to be spectacularly irritating. It's not, by any stretch of the imagination, a good book, and yet for some reason I enjoyed it more than the other books of the series so far. Chalk one up for brain candy when one is in the right mood, I guess.

Followed by The Seeress of Kell, the epic (?) conclusion.

Rating: 6 out of 10

Categories: FLOSS Project Planets

Russell Coker: PIN for Login

Planet Debian - Tue, 2022-04-26 23:18

Windows 10 added a new “PIN” login method, an optional alternative to an Internet-based password through Microsoft or a Domain password through Active Directory. Here is a web page explaining some of the technology (don’t watch the YouTube video) [1]. There are three issues here: whether a PIN is any good in concept, whether the specifics of how it works are any good, and whether we can copy any useful ideas for Linux.

Is a PIN Any Good?

A PIN in concept is a shorter password. I think that less secure methods of screen unlocking (fingerprint, face unlock, and a PIN) can be reasonably used in less hostile environments. For example if you go to the bathroom or to get a drink in a relatively secure environment like a typical home or office you don’t need to enter a long password afterwards. Having a short password that works for short time periods of screen locking and a long password for longer times could be a viable option.

It could also be an option to allow short passwords when the device is in a certain area (determined by GPS or Wifi connection). Android devices have in the past had options to disable passwords when at home.

Is the Windows 10 PIN Any Good?

The Windows 10 PIN is based on TPM security which can provide real benefits, but this is more of a failure of Windows local passwords in not using the TPM than a benefit for the PIN. When you login to a Windows 10 system you will be given a choice of PIN or the configured password (local password or AD password).

As a general rule providing a user a choice of ways to login is bad for security as an attacker can use whichever option is least secure.

The configuration options for Windows 10 allow either group policy in AD or the registry to determine whether PIN login is allowed, but they don’t control when the PIN can be used, which seems like a major limitation to me.

The claim that the PIN is more secure than a password would only make sense if it was a viable option to disable the local password or AD domain password and only use the PIN. That’s unreasonably difficult for home users and usually impossible for people on machines with corporate management.

Ideas For Linux

I think it would be good to have separate options for short term and long term screen locks. This could be implemented by having a screen locking program use two different PAM configurations for unlocking after short term and long term lock periods.
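That split could look something like the following sketch. The service names here are hypothetical; pam_fprintd.so and Debian's common-auth stack are real:

```
# /etc/pam.d/screenlock-short  (hypothetical service name)
# Brief locks: accept a fingerprint, or fall back to the normal password stack.
auth    sufficient    pam_fprintd.so
auth    include       common-auth

# /etc/pam.d/screenlock-long  (hypothetical service name)
# Longer locks: require the full password stack.
auth    include       common-auth
```

The screen locker would then pass one service name or the other to pam_start() depending on how long the screen has been locked.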

Having local passwords based on the TPM might be useful. But if you have the root filesystem encrypted via the TPM using systemd-cryptenroll it probably doesn’t gain you a lot. One benefit of the TPM is limiting the number of incorrect attempts at guessing the password in hardware; the default is allowing 32 wrong attempts and then one every 10 minutes. Trying to do that in software would allow 32 guesses and then a hardware reset, which could average at something like 32 guesses per minute instead of 32 guesses per 320 minutes. Maybe something like fail2ban could help with this (a similar algorithm but for password authentication guesses instead of network access).
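A sketch of that throttling algorithm (illustrative only, not fail2ban's actual implementation; as noted above, a purely software counter can still be defeated by a hardware reset):

```python
import time

class GuessThrottle:
    """Fail2ban-style throttle for password guesses: allow a burst of
    failures, then one attempt per cooldown period. This mirrors the
    TPM policy described above (32 free attempts, then one every 10
    minutes)."""

    def __init__(self, burst=32, cooldown=600, clock=time.monotonic):
        self.burst = burst
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.last_failure = None

    def attempt_allowed(self):
        if self.failures < self.burst:
            return True
        # Burst exhausted: one attempt per cooldown interval.
        return self.clock() - self.last_failure >= self.cooldown

    def record_failure(self):
        self.failures += 1
        self.last_failure = self.clock()

    def record_success(self):
        self.failures = 0
        self.last_failure = None
```

A real implementation would need to persist the counter across reboots (the TPM does this in hardware, which is exactly the property that is hard to replicate in software).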

Having a local login method to use when there is no Internet access and network authentication can’t work could be useful. But if the local login method is easier, then an attacker could disrupt Internet access to force a less secure login method.

Is there a good federated authentication system for Linux? Something to provide comparable functionality to AD but with distributed operation as a possibility?

Categories: FLOSS Project Planets

The Drop Times: Exclusion begets Exclusion!

Planet Drupal - Tue, 2022-04-26 22:41
Daelynn Moyer, Software Engineer and Project Manager at Fast Radius, while discussing microaggression went on to say ‘Exclusion begets exclusion’.
Categories: FLOSS Project Planets

PreviousNext: A modern alternative to Hooks

Planet Drupal - Tue, 2022-04-26 21:22

This post introduces a completely new way of implementing Drupal hooks. You can finally get rid of your .module files, eliminating many calls to \Drupal with dependency injection in hooks.

by daniel.phin / 27 April 2022

Introduction

A pattern emerged in Drupal 8 where hooks would be implemented in a traditional .module file, then quickly handed off to a class via a service call or instantiated via the ClassResolver. Drupal core utilises the ClassResolver hook pattern thoroughly in .module files in the Content Moderation, Layout Builder, and Workspaces modules in order for core hooks to be overridable and partially embrace Dependency Injection (DI).

/**
 * Implements hook_entity_presave().
 */
function content_moderation_entity_presave(EntityInterface $entity) {
  return \Drupal::service('class_resolver')
    ->getInstanceFromDefinition(EntityOperations::class)
    ->entityPresave($entity);
}

With Drupal 9.4, core has been improved to a point where almost all* hook invocations are dispatched via the ModuleHandler service. This now allows third party projects to supplement ModuleHandler via the service decorator pattern.

Hux is one such project taking advantage of this centralisation, allowing hooks to be implemented in a new way:

Sample 1: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\Hook;

/**
 * Sample hooks.
 */
final class SampleHooks {

  #[Hook('entity_access')]
  public function myEntityAccess(EntityInterface $entity, string $operation, AccountInterface $account): AccessResult {
    return AccessResult::neutral();
  }

}

This file is all that's needed to implement hooks. Keep reading to uncover how this works, including alters, hooks overrides, and dependency injection.

Hux

Things you’ll need
  • Drupal 9.4 or later.
    Patches in this issue can be used for Drupal 9.3.
  • PHP 8.0 or later
  • The Hux project
    composer require drupal/hux
Implementing Hooks

Classes and Hooks

To begin implementing hooks, create a new class in the Hooks namespace, in a 'Hooks' directory. The class name can be anything.

Sample 2: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

/**
 * Sample hooks.
 */
final class SampleHooks {
}

Add a public method with the Hook attribute. PHP attributes are new to PHP 8.0 and are similar to annotations already made familiar by Drupal 8. Don’t forget to import the Hook attribute with use.

The method name can be anything. The first parameter of the hook attribute must be the hook without the ‘hook_’ prefix. For example, if implementing hook_entity_access, use Hook('entity_access'). Alters use a different attribute, scroll down for information about alters.

Add the parameters and return typehints specific to the hook being implemented, though these are not enforced or validated by Hux or Drupal.

Sample 3: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\Hook;

/**
 * Sample hooks.
 */
final class SampleHooks {

  #[Hook('entity_access')]
  public function myEntityAccess(EntityInterface $entity, string $operation, AccountInterface $account): AccessResult {
    return AccessResult::neutral();
  }

}

As of April 2022, Drupal’s Coder does not yet recognise PHP attributes. So an untagged development version of Coder is needed if methods need to have documentation without triggering coding standards errors.

Sample 4: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\Hook;

/**
 * Sample hooks.
 */
final class SampleHooks {

  /**
   * Implements hook_entity_access().
   */
  #[Hook('entity_access')]
  public function myEntityAccess(EntityInterface $entity, string $operation, AccountInterface $account): AccessResult {
    return AccessResult::neutral();
  }

}

This is all that is needed to implement hooks using Hux.

Once caches have been cleared, the hook class will be discovered. From then on, you don’t need to tediously clear the cache to add more hooks. Hux will discover hooks automatically, thanks to the super-powers of PHP attributes.

Implementing Alters

Alters work very similarly to Hux Hook implementations. Alters can be implemented alongside Hooks in a hooks class.

Add a public method with the Alter attribute, and import it with use.

The method name can be anything. The first parameter of the alter attribute must be the alter name without the 'hook_' prefix and the '_alter' suffix. For example, if implementing hook_user_format_name_alter, use Alter('user_format_name').

Sample 5: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\Alter;

/**
 * Sample hooks.
 */
final class SampleHooks {

  #[Alter('user_format_name')]
  public function myCustomAlter(string &$name, AccountInterface $account): void {
    $name .= ' altered!';
  }

}

A minority of hooks in Drupal and contrib are alters by name only, such as hook_views_query_alter, and actually go through the regular hook invocation system. For these, the Hook attribute must be used, retaining the '_alter' suffix.

Hook Replacements

You can even declare that a hook implementation replaces another module's implementation, causing the replaced hook to no longer be invoked.

For example, suppose we want to replace the Media module's media_entity_access hook, which is an implementation of hook_entity_access:

Sample 6: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\ReplaceOriginalHook;

/**
 * Sample hooks.
 */
final class SampleHooks {

  #[ReplaceOriginalHook(hook: 'entity_access', moduleName: 'media')]
  public function myEntityAccess(EntityInterface $entity, string $operation, AccountInterface $account): AccessResult {
    return AccessResult::neutral();
  }

}

A callable can optionally be received to directly invoke the replaced hook.

Set originalInvoker parameter to TRUE and add a callable parameter before the original hook parameters:

Sample 7: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\ReplaceOriginalHook;

/**
 * Sample hooks.
 */
final class SampleHooks {

  #[ReplaceOriginalHook(hook: 'entity_access', moduleName: 'media', originalInvoker: TRUE)]
  public function myEntityAccess(callable $originalInvoker, EntityInterface $entity, string $operation, AccountInterface $account): AccessResult {
    $originalResult = $originalInvoker($entity, $operation, $account);
    return AccessResult::neutral();
  }

}

Dependency Injection with Hooks Classes

An advantage of using hooks classes is dependency injection. No longer do you need to reach out to \Drupal::service and friends. Instead, all external dependencies of a hook can be known up front, which also improves the unit-testability of hooks.

Sample 8: my_module/src/Hooks/SampleHooks.php

declare(strict_types=1);

namespace Drupal\my_module\Hooks;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\DependencyInjection\ContainerInjectionInterface;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\hux\Attribute\Hook;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Sample hooks.
 */
final class SampleHooks implements ContainerInjectionInterface {

  public function __construct(
    private EntityTypeManagerInterface $entityTypeManager,
  ) {
  }

  public static function create(ContainerInterface $container): static {
    return new static(
      $container->get('entity_type.manager'),
    );
  }

  #[Hook('entity_access')]
  public function myEntityAccess(EntityInterface $entity, string $operation, AccountInterface $account): AccessResult {
    // Do something with dependencies.
    $this->entityTypeManager->loadMultiple(...);
    return AccessResult::neutral();
  }

}

Continue reading for dependency injection without ContainerInjectionInterface.

Hooks Classes without Auto-discovery

In some cases, you might find that more control is needed over the hooks class, such as wanting the class to live in a different directory, or to declare dependencies without using container injection or being container-aware.

In this case, a service can be declared in a services.yml file, tagging the service with ‘hooks’. Hux will pick up the service and treat it exactly like the auto-discovery method. In fact, auto-discovery does exactly this under the hood, declaring private hooks-tagged services.

services:
  my_module.my_hooks:
    class: Drupal\my_module\MyHooks
    arguments:
      - '@entity_type.manager'
    tags:
      - { name: hooks, priority: 100 }

This approach is ideal if you want to quickly migrate existing .module hooks → ClassResolver implementations to Hooks classes. Simply remove the hooks in .module files, add an entry to a services.yml file, and then add appropriate attributes.

Summary
  • For classes in Hooks/ directories to be discovered, they need at least one public method with a Hux attribute. Without an attribute, these classes/files will be ignored.
  • Once the container is aware of a hooks class, more hooks can be added without cache clears.
  • Each module can have as many hook classes as you desire, named in any way.
  • A hook can be implemented multiple times per module!
  • A hook method can have any name.
  • A hook class has no interface. 
  • Using container injection is completely optional. Alternatively, DI can be achieved by declaring a service manually.
  • Performance is priority. Hux acts as a decorator for core ModuleHandler. After discovery, there is only a very small runtime overhead.
  • * Works with most hooks. hook_theme is a notable example of a hook that does not work, along with theme preprocessors. Though preprocessors are less like hooks and more analogous to callbacks.
Concluding...

Hux is a step towards a cleaner codebase. Eliminate .module files and messy .inc files. In most cases, procedural code (functions in the global namespace) is no longer needed.

An events-based approach to hooks doesn’t need to be the next evolution of hooks in Drupal.

Thanks to Lee Rowlands for the idea of the auto-discovery approach and to clients of PreviousNext which have adopted this approach in the early days.

Consider Hux for your next round of private or contrib development! 🪝

Tagged Hooks, Dependency Injection, Hux
Categories: FLOSS Project Planets

Drupal Core News: Drupal 10 will be released December 14, 2022

Planet Drupal - Tue, 2022-04-26 20:31
The finalized release date for Drupal 10.0.0 is December 14, 2022

As we announced previously, our Drupal 10 release schedule included three possible windows for the Drupal 10 release date. Today, we have finalized that we will use the third and final release window of December 14, 2022. This gives site owners 11 months to update from Drupal 9 to Drupal 10.

Why is the August release date no longer an option?

We've worked hard over the past months to complete the requirements and strategic objectives for Drupal 10. The community has been hard at work removing deprecated code, deprecating unneeded dependencies, updating our JavaScript, and preparing modules to be moved to contrib.

The most critical requirement for Drupal 10 is our CKEditor 5 integration. CKEditor 4 is end-of-life at the end of 2023, so Drupal 10 must use CKEditor 5 instead. We've spent thousands of hours working on Drupal's CKEditor 5 integration and collaborating closely with the CKEditor team. We also sprinted on CKEditor 5 at Drupal Developer Days, including accessibility and upgrade path testing. Through our work, we've discovered additional critical issues that need to be solved in order for CKEditor 5 to be stable, and these issues won't be completed in time for the May 13 beta deadline required for the August release.

What are the advantages of the December release date?

Releasing 10.0.0 in December instead of August gives us more time to stabilize CKEditor. It also means more time for site owners to test moving their content from CKEditor 4 to CKEditor 5 in Drupal 9, so that we can ensure a smooth and safe upgrade path for this major change.

Additionally, with the December release, we have more time to complete strategic requirements for Drupal's themes including updating JavaScript dependencies to the latest major versions, making Olivero and Claro Drupal core's default themes so that Drupal 10 has a fresh new look, and stabilizing the Starterkit Theme Generator to improve the theme developer experience and make maintaining Drupal core easier.

The December release also means we can release Drupal 10 with Symfony 6.2, which will have improvements and bug fixes over the current 6.0 version, and also will reduce the workload for our Security Team.

As we announced previously, Drupal 10 will require PHP 8.1, and the December release means that most hosting service providers will support PHP 8.1, so that sites don't have to wait for platform fixes to start their Drupal 10 upgrades. PHP 8.2 is also scheduled for release in November, and Drupal 10 will include as much forward-compatibility with it as possible. (PHP 8.1 will remain the minimum requirement for Drupal 10 until November 2024.)

The beta deadline for Drupal 10's requirements is September 9, 2022

Under the December schedule, Drupal 9.4.0 will be released on June 15. Drupal 9 and 10 development will continue in 9.5.x and 10.0.x after that date. All requirements for Drupal 10 must be completed by the beta deadline of September 9, 2022. The week of September 12, beta versions of both Drupal 9.5 and Drupal 10 will be released, and the stabilization and testing phase will begin.

While this gives us four extra months to complete the strategic requirements for Drupal 10, there is still a lot to do and we need your help! Read our previous announcement about how you can help, or join the #d10readiness channel in the Drupal community Slack to help with the latest needs. Or, if you're attending DrupalCon Portland, come to the session on Getting ready for Drupal 10 and the contribution events.

However, new deprecations for removal in Drupal 10 must still be completed in Drupal 9.4 by May 12, 2022

Drupal 9 is end-of-life in November 2023, because that is the end of life for both Symfony 4 and CKEditor 4, which are dependencies of Drupal 9. This means site owners will only have 11 months to upgrade their sites from Drupal 9 to Drupal 10.

We can make this easier on site owners by having as many modules Drupal-10-ready as possible on the day 10.0.0 is released. In order to make it easier for modules to add Drupal 10 compatibility now, no new deprecations for Drupal 10 will be added to Drupal 9.5. This means that the public API of Drupal 9.4 will be essentially the same as the API of Drupal 10, so modules can create their Drupal 10 versions using Drupal 9.4.

The only exception is that core modules and themes may still be deprecated and moved to contributed projects as-is, since the upgrade path simply requires installing the contributed version of the project and ensuring that modules, themes, or sites declare the correct dependency on the contributed project.

Much of the migration path for contributed modules and themes can now be automated with upgrade status and Drupal rector.

Since Drupal 9.5 deprecations will not be removed in Drupal 10, we need to finalize what will be deprecated by the beta deadline for Drupal 9.4 on May 12. This includes deprecations of legacy JavaScript APIs and dependencies as well as PHP APIs.

There is still time to improve Drupal 9.4

Drupal 9.4.0-alpha1 will be released the week of May 2, and 9.4.0-beta1 will be released the week of May 16. This means that three weeks remain until the beta deadline for Drupal 9.4, so there is still a little time to complete improvements for it! In particular, with your help, we still have a chance to make Claro stable, and to make it and Olivero the default themes of the Standard profile. If you're attending DrupalCon Portland, check the Birds of a Feather (BoF) schedule for the Claro BoF. Read more on the DrupalCon Portland contribution events page (scroll down for Claro contribution information). Also join the #d10readiness channel in the Drupal community Slack to be involved in Drupal 10-related changes in Drupal 9.

Categories: FLOSS Project Planets

Tim Retout: Exploring StackRox

Planet Debian - Tue, 2022-04-26 16:07

At the end of March, the source code to StackRox was released, following the 2021 acquisition by Red Hat. StackRox is a Kubernetes security tool which is now badged as Red Hat Advanced Cluster Security (RHACS), offering features such as vulnerability management, validating cluster configurations against CIS benchmarks, and some runtime behaviour analysis. In fact, it’s such a diverse range of features that I have trouble getting my head round it from the product page or even the documentation.

Source code is available via the StackRox organisation on GitHub, and the most obviously interesting repositories seem to be:

  • stackrox/stackrox, containing the main application, written in Go
  • stackrox/scanner, the vulnerability scanner, also in Go. From a first glance at the go.mod file, it does not seem to share much code with Clair, which is interesting.
  • stackrox/collector, the runtime analysis component, in C++ but also with hooks into the kernel.

My initial curiosity has been around the ‘collector’, to better understand what runtime behaviour the tool can actually pick up. I was intrigued to find that the actual kernel component is a patched version of Falco’s kernel module/eBPF probes; a few features are disabled compared to Falco, e.g. page faults and signal events.

There’s a list of supported syscalls in driver/syscall_table.c, which seems to have drifted slightly or be slightly behind the upstream Falco version? In particular I note the absence of io_uring, but given RHACS is mainly deployed on Linux 4.18 at the moment (RHEL 8) this is probably a non-issue. (But relevant if anyone were to run it on newer kernels.)

That’s as far as I’ve got for now. Red Hat are making great efforts to reach out to the community; there’s a Slack channel, and office hours recordings, and a community hub to explore further. It’s great to see new free software projects created through acquisition in this way - I’m not sure I remember seeing a comparable example.

Categories: FLOSS Project Planets


Daniel Roy Greenfeld: Live Discussion with Sebastián Ramírez (Tiangolo)

Planet Python - Tue, 2022-04-26 15:37

On April 26th I had a live discussion with Sebastián Ramírez, creator of FastAPI, Typer, SQL Model, and more.

LINKS:

  • https://tiangolo.com
  • https://fastapi.tiangolo.com
  • https://typer.tiangolo.com
  • https://sqlmodel.tiangolo.com
  • https://forethought.ai/
  • https://forethought.ai/careers/
  • https://octopusenergy.com/careers
Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #522 (April 26, 2022)

Planet Python - Tue, 2022-04-26 15:30

#522 – APRIL 26, 2022

Type Hints in Code Supporting Multiple Python Versions

The typing module continues to evolve, with new features in every Python version. This can make it tricky if you’re trying to type code that supports multiple Python versions. Learn just what you can do when you need to support Type Hints in multiple versions.
ADAM JOHNSON
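A common pattern for this is to gate newer typing imports behind TYPE_CHECKING, which only type checkers evaluate (a sketch; UserId and lookup are made-up names for illustration):

```python
from __future__ import annotations  # PEP 563: annotations are not evaluated at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only type checkers execute this branch, so referencing newer
    # typing features here cannot break older interpreters at runtime.
    from typing import TypeAlias  # available from Python 3.10

UserId: "TypeAlias" = int  # quoted, so older runtimes never evaluate it

def lookup(user_id: UserId) -> str:
    return f"user-{user_id}"
```

At runtime the module behaves identically on any supported Python version, while a type checker still sees the full annotations.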

PyCon US 2022: Getting the Most Out of Your Conference Visit

Tips for getting the most out of your visit to PyCon US, the world’s biggest Python conference taking place April 27, 2022 to May 3, 2022 in Salt Lake City, Utah. Whether you’re a first-timer or a seasoned attendee, this guide will help you get ready to have a great PyCon. If you’re attending, stop by the Real Python booth and say hello! :)
REAL PYTHON

Proactively Monitor Python App Uptime with Datadog APM

Datadog APM empowers developer teams to identify anomalies, resolve issues, and improve application performance. Easily identify bottlenecks, errors, heavy traffic issues, slow-running queries, and more with end-to-end application tracing and continuous profiling. Start a free Datadog APM trial →
DATADOG sponsor

Add Additional Attributes to Enum Members in Python

Sometimes you want your Enum objects to reference more than just a single piece of data. You can use a tuple but then you have to de-reference it. This article shows a technique used in http.HTTPStatus that you can do in your own code.
REDOWAN DELOWAR
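The technique boils down to overriding __new__ so each member's tuple is unpacked into extra attributes, the same trick the standard library uses for http.HTTPStatus (the Status class and its members here are made up for illustration):

```python
from enum import Enum

class Status(Enum):
    """Enum members carrying extra attributes, via the same
    __new__ override used by http.HTTPStatus."""

    def __new__(cls, value, phrase, description=""):
        obj = object.__new__(cls)
        obj._value_ = value           # canonical value, used for Status(404) lookups
        obj.phrase = phrase           # extra attribute, no tuple de-referencing needed
        obj.description = description
        return obj

    OK = 200, "OK", "Request fulfilled"
    NOT_FOUND = 404, "Not Found", "Nothing matches the given URI"
```

Because _value_ is set to the first tuple element, value lookups like Status(404) still work, while Status.OK.phrase reads the extra attribute directly.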

Building a Django User Management System

In this video course, you’ll learn how to extend your Django application with a user management system, complete with email sending and third-party authentication.
REAL PYTHON course

PyOhio Call for Proposals Open Through May 2

PYOHIO.ORG • Shared by Dave Forgac

Discussions

Python’s “Type Hints” Are a Bit of a Disappointment to Me

HACKER NEWS

Where Can I See Examples of Large S/W Architecture?

HACKER NEWS

Python Jobs

Gameful Learning Developer (Ann Arbor, MI, USA)

University of Michigan

Data & Operations Engineer (Ann Arbor, MI, USA)

University of Michigan

Python Technical Architect (USA)

Blenderbox

Academic Innovation Software Developer (Ann Arbor, MI, USA)

University of Michigan

Software Development Lead (Ann Arbor, MI, USA)

University of Michigan

Lead Software Engineer (Anywhere)

Right Side Up

Data Engineer (Chicago, IL, USA)

Aquatic Capital Managment

More Python Jobs >>>

Articles & Tutorials

From 30 Lines of Code to 11: Rock Paper Scissors in Python

When you’re a beginner you need projects that allow you to practice basic concepts. But do you ever revisit those projects as a more advanced developer? This article looks at one common beginner Python project — implementing Rock Paper Scissors in Python — and how you could approach the game logic from a more advanced perspective.
DAVIDAMOS.DEV • Shared by David Amos

Python Bidirectional Dictionary

Learn about the Bidict library, a bi-directional dictionary where your keys and your values can both be used to look up an item. This can be a useful tool when dealing with mapped data like country code to country name where you want to look up either side of the relationship.
CHRISTOPHER TAO
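The idea can be sketched with two plain dicts (this BiDict class is illustrative only and is not the Bidict library's API):

```python
class BiDict:
    """Minimal two-way mapping sketch: look up by key, or by value
    through .inverse."""

    def __init__(self, mapping=()):
        self.forward = {}
        self.inverse = {}
        for key, value in dict(mapping).items():
            self[key] = value

    def __getitem__(self, key):
        return self.forward[key]

    def __setitem__(self, key, value):
        # Drop stale entries so both directions stay consistent.
        if key in self.forward:
            del self.inverse[self.forward[key]]
        if value in self.inverse:
            del self.forward[self.inverse[value]]
        self.forward[key] = value
        self.inverse[value] = key

codes = BiDict({"US": "United States", "FR": "France"})
```

The real library adds much more (ordered variants, collision policies), but the invariant is the same: every write updates both directions.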

Join Our Free Online Community With Anaconda Nucleus

Anaconda Nucleus is our education and community engagement platform. The platform features a wealth of data science content ranging from articles to webinars to videos and more. Join to ask questions and engage other data professionals →
ANACONDA sponsor

Use Python to Send Notifications During Model Training

We’ve all been there. Whether you are experimenting with a new fun model or grinding for that Kaggle competition prize pool, it can be hard to leave your models running in peace. Learn how to use Twilio’s API to notify you during your model training.
BRADEN RIGGS

What is Synthetic Data?

Synthetic data is artificially annotated information that is generated by computer algorithms or simulations, commonly used as an alternative to real-world data. Learn where it can be useful and how it helps train your machine learning algorithms.
GRETEL.AI • Shared by Mason Egger

Python 3.11 Preview: Task and Exception Groups

Python 3.11 will be released in October 2022. In this tutorial, you’ll install the latest alpha release of Python 3.11 in order to preview task and exception groups and learn about how they can improve your asynchronous programming in Python.
REAL PYTHON

Create an Interactive Dashboard With Pandas and hvPlot

This article will show you the easiest way to create an interactive dashboard in Python from any Pandas DataFrame. If you already know some Pandas, you can almost immediately use hvPlot .interactive and turn your data into a dashboard.
SOPHIA YANG

How I Integrated Zapier Into My Django Project

Zapier is a no-code tool that takes input from a wide variety of web applications and connects their output to other applications. This article walks you through what you need to do to integrate Zapier with your Django project.
AIDAS BENDORAITIS

Concurrent Web Scraping With Selenium and Docker Swarm

This tutorial shows you how to use Python and Selenium Grid to build a parallel web scraper. By packaging it up in Docker and executing it in a swarm, you can scrape all the things!
MICHAEL HERMAN

Find Your Next Tech Job Through Hired

Hired has 1000s of companies of all sizes who are actively hiring developers, data scientists, mobile engineers, and more. It’s really simple: create a profile with your skills for hiring managers to reach you directly. Sign up today!
HIRED sponsor

Top 10 VSCode Extensions for More Productive Python

Bas’s top 10 VSCode extensions for Python, including tools for indentation management, comments, tests, type hints, docstrings, and more.
BAS STEINS

Common Python Anti-Patterns to Watch Out For

Fifteen code patterns that are problematic in Python and what alternatives to use instead.
KOUSHIK THOTA

Projects & Code

memray: Memory Profiler for Python

GITHUB.COM/BLOOMBERG

git-gud: Command-Line Game to Learn git

GITHUB.COM/BENTHAYER

dunk: Prettier Git Diffs

GITHUB.COM/DARRENBURNS

pandera: Data Validation Library for Pandas Dataframes

GITHUB.COM/PANDERA-DEV

pypyr: make Alternative for Automation Pipelines

GITHUB.COM/PYPYR

Events

Weekly Real Python Office Hours Q&A (Virtual)

April 27, 2022
REALPYTHON.COM

PyCon US 2022

April 27 to May 6, 2022
PYCON.ORG

PyKla Monthly Meetup

April 27, 2022
MEETUP.COM

PyStaDa

April 27, 2022
PYSTADA.GITHUB.IO

SPb Python Drinkup

April 28, 2022
MEETUP.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #522.

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

A win for open is a win for all: Interview with The Open Organization

Open Source Initiative - Tue, 2022-04-26 12:36
We spoke with Bryan Behrenshausen, Community Architect for the Open Organization in the Open Source Program Office at Red Hat, about this inspiring project and get his perspective on all things open source.
Categories: FLOSS Research
