Feeds
PSA: KDecoration API break in Plasma 6.3
Fractional scaling is hard. Anyone that had the misfortune of working on it knows that… so it won’t surprise a lot of people that it’s not all figured out yet! Today I’ll talk about the fractional scaling problems with KWin’s server side decorations, and why we need to do an API break to fix it.
What’s the problem?
This is the simplest part. Many decorations have elements that need to be pixel perfect, like outlines that are only a single pixel wide. When they’re not perfectly scaled, or positioned wrongly, that’s sometimes quite visible and annoying:
What causes these issues?
The source of all evil with fractional scaling is also the cause of most issues here: Integer logical coordinates.
Logical coordinates are a way to represent the size of something on the screen in a mostly display-independent way and are quite useful for the size and position of things like windows or the cursor. They’re calculated in a really simple way:
coordinate_logical = coordinate_pixels / scale

With just that equation, there are no problems just yet - you can just multiply the logical coordinate by the display scale, and you get back the original coordinate in pixels. When you round that logical coordinate, and do some calculations with it, things get weird though… let’s look at the concrete example of a window at scale 1.25, and with a 1 pixel wide outline:
| unit | outline width | window width | outline width | total size | total size in pixels (integer) |
| --- | --- | --- | --- | --- | --- |
| pixels | 1 | 27 | 1 | 29 | 29 |
| fractional logical | 0.8 | 21.6 | 0.8 | 23.2 | 29 |
| integer logical | 1 | 22 | 1 | 24 | 30 |

As you might’ve guessed, KWin’s decoration plugin API is using integer logical coordinates, and this mismatch between the window size vs. the size of its components causes most of the problems. Just doing a straightforward int -> float conversion isn’t enough to fix this though, a few more changes are needed.
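To make the mismatch concrete, here is a small Python sketch (not KWin code, just an illustration of the arithmetic) that reproduces the numbers in the table above:

```python
scale = 1.25
outline_px, window_px = 1, 27          # widths in device pixels

def to_logical(pixels, scale):
    return pixels / scale

# Fractional logical coordinates round-trip back to the expected pixel size:
total_fractional = 2 * to_logical(outline_px, scale) + to_logical(window_px, scale)
print(round(total_fractional * scale))   # 29 pixels, as expected

# Rounding each component to integer logical coordinates accumulates errors:
outline_int = round(to_logical(outline_px, scale))   # 0.8 rounds up to 1
window_int = round(to_logical(window_px, scale))     # 21.6 rounds up to 22
total_int = 2 * outline_int + window_int             # 24 integer logical units
print(round(total_int * scale))          # 30 pixels, one pixel too many
```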
Changes in KWin
KWin will provide decorations with the fractional logical size of windows, provide them with the scale factor they should render for, and use the decoration’s fractional border sizes to position the window and decoration pieces properly in the scene.
Changes in Decorations
Because of the API break, decorations using the C++ API need to be updated to the new KDecoration3 API, or they will not be loaded. A minimalistic port would only need to round all the values, but there will of course still be fractional scaling issues with that.
Assuming you want to make the decoration work properly with fractional scaling, you also need to use the provided scale factor to calculate border sizes, and when painting things with QPainter, you need to take care to snap all geometries to the pixel grid, or anti-aliasing may turn single-pixel lines into a blurry mess.
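As an illustration of what snapping to the pixel grid means, here is a minimal sketch in plain Python with a made-up helper name; a real decoration would do this with QPainter geometries in C++:

```python
def snap_to_pixel_grid(logical, scale):
    """Round a logical coordinate so that it lands exactly on a device pixel."""
    return round(logical * scale) / scale

scale = 1.25
print(snap_to_pixel_grid(0.75, scale))   # 0.8  -> exactly 1 device pixel at scale 1.25
print(snap_to_pixel_grid(10.3, scale))   # 10.4 -> exactly 13 device pixels
```

A line drawn at a snapped coordinate, with a width that is a whole number of device pixels, stays crisp instead of being smeared across two pixel rows by anti-aliasing.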
Note that this work isn’t completed yet, and some additional API changes may happen while we’re breaking the API already. A porting guide with all the changes will be provided before the release of Plasma 6.3.
As Aurorae decorations are just svg files, they are not affected by this API break and will continue to work like before without any changes.
If you have any questions about this change, or about how to port a decoration over to the new API, please reach out to us at #kwin:kde.org on matrix!
GNUnet News: GNUnet 0.22.2
This is a bugfix release for gnunet 0.22.1. It fixes some regressions and minor bugs.
Links
- Source: https://ftpmirror.gnu.org/gnunet/gnunet-0.22.2.tar.gz ( https://ftpmirror.gnu.org/gnunet/gnunet-0.22.2.tar.gz.sig )
- Source (meson): https://buildbot.gnunet.org/releases/gnunet-0.22.2-meson.tar.gz ( https://buildbot.gnunet.org/releases/gnunet-0.22.2-meson.tar.gz.sig )
- Detailed list of changes: https://git.gnunet.org/gnunet.git/log/?h=v0.22.2
- NEWS: https://git.gnunet.org/gnunet.git/tree/NEWS?h=v0.22.2
- The list of closed issues in the bug tracker: https://bugs.gnunet.org/changelog_page.php?version_id=459
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/
Paolo Melchiorre: 2025 Django Software Foundation board nomination
My self-nomination statement for the 2025 Django Software Foundation (DSF) board of directors elections
Community Working Group posts: Nominate someone for the 2025 Aaron Winborn Award
The Drupal Community Working Group is pleased to announce that nominations for the 2025 Aaron Winborn Award are now open. This is your chance to recognize someone for their service, integrity, kindness, and above-and-beyond commitment to the Drupal community.
In addition to receiving a physical award, winners of the award also receive a scholarship and travel stipend for them to attend DrupalCon North America and recognition in a plenary session at the event.
Nominations are now open to everyone in the Drupal community! Whether someone has made an impact locally, regionally, or across the globe, we want you to nominate them. If you know someone who’s made a meaningful difference, big or small, now’s the perfect chance to recognize their contributions.
The Aaron Winborn Award was established to honor the legacy of Aaron Winborn, a long-time Drupal contributor whose battle with Amyotrophic Lateral Sclerosis (ALS), also known as Lou Gehrig's Disease, ended on March 24, 2015. Inspired by a suggestion from Hans Riemenschneider (https://www.drupal.org/u/nonprofit), the Community Working Group, with the support of the Drupal Association, created this award to celebrate individuals who embody Aaron's spirit and dedication.
Nominations are open until Friday, March 21, 2025.
A committee consisting of the Community Working Group members (Conflict Resolution Team) as well as past award winners will select a winner from the nominations.
* Current members of the CWG Conflict Resolution Team and previous winners are not eligible to win the award.
Previous winners of the award are:
- 2015: Cathy Theys
- 2016: Gábor Hojtsy
- 2017: Nikki Stevens
- 2018: Kevin Thull
- 2019: Leslie Glynn
- 2020: Baddý Breidert
- 2021: AmyJune Hineline
- 2022: Angie Byron
- 2023: Randy Fay
- 2024: Mike Anello
Now is your chance to be heard, show support, and recognize an amazing community member!
Please submit a nomination today!
Call for Creators!
If you or someone you know is an amazing creator who’d like to help craft one of our future Aaron Winborn Awards, please reach out to the Drupal Community Working Group.
Talking Drupal: Talking Drupal #473 - Color in CSS with Sass
Today we are talking about Color with CSS, Sass, and bringing it all into Drupal with guest Aubrey Sambor. We’ll also cover Navigation Extra Tools as our module of the week.
For show notes visit: https://www.talkingDrupal.com/473
Topics
- A little career background
- Why Front end
- Do you prefer JS or CSS
- How do colors work today in CSS
- Is this different from the past
- What is gamut
- Can color functions help with contrast
- What color functions make you the most excited
- Is Sass still a thing
- Do you use preprocessors with color functions
- Post CSS in Drupal
- Any modules you can recommend to help with CSS colors
- Any benefit for single directory components or web components
- New England Drupal Camp
- Color in CSS: using new spaces, functions, and techniques to make your site shine
- Text wrap
- Gamut
- Do you still need Sass in 2023
Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Aubrey Sambor - star-shaped.org starshaped
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you been using the new Navigation module in Drupal core, but wanted some of the useful links previously available in the Admin Toolbar Tools submodule? There’s a module for that
- Module name/project name:
- Brief history
- How old: created in Oct 2024, less than a week ago by friend of the podcast James Shields aka lostcarpark
- Versions available: 1.0.0-beta3 which works with Drupal 10.3 and 11
- Maintainership
- Actively maintained, already 3 releases
- Security coverage - too new, but hopefully will have in time
- Test coverage
- Number of open issues: 8 “open” issues, 4 of which are bugs, but all but one of which are now marked as fixed with the latest release
- Usage stats:
- 12 sites
- Module features and usage
- With this module enabled, the new left side Navigation menu available in Drupal core will include links to clear caches (all or a specific cache), run cron, and run database updates
- It’s a good example of a module that does something very specific and very useful, so I wanted to share it with our listeners as quickly as possible
- I know these functions are ones I’ve been missing in my own Drupal 11 dev sites, so I’m looking forward to using this module right away
Improving Xwayland window resizing
One of the quickest ways to determine whether a particular application runs using Xwayland is to resize one of its windows and see how it behaves, for example:
(A screencast has been removed to ensure Planet works properly; please find it in the original post.)

While it can be handy for debugging purposes, overall it makes the Plasma Wayland session look less polished. So, one of the goals for 6.3 was to fix this visual glitch.
This article will provide some background behind what caused the glitch and how we addressed it. Just in case, here’s the same application, which was shown in a screen cast above, but with the corresponding resizing fixes in:
(A screencast has been removed to ensure Planet works properly; please find it in the original post.)

X11 frame synchronization protocol(s)
On X11, all window changes typically take place immediately, including resizing. This can lead to some issues. For example, if a window is resized, it can take a while until the application repaints the window with the new size. What if the compositing manager decides to compose the screen in the meantime? You’re likely going to see some sort of visual glitch, e.g. the window contents getting cropped, or parts of the window that have not been repainted yet.
In order to address this issue, there exists an X11 protocol to synchronize window repaints during interactive resize. An application/client wishing to participate in this protocol needs to list _NET_WM_SYNC_REQUEST in the WM_PROTOCOLS property of the client window and also set the XID of the XSync counter in the _NET_WM_SYNC_REQUEST_COUNTER property. When the WM wants to resize the window, the following will happen:
- The window manager sends a _NET_WM_SYNC_REQUEST client message containing a serial that the client will need to put in the XSync counter after processing a ConfigureNotify event that will be generated after the window is resized. The compositing manager and the window manager will block window updates until the XSync request acknowledgement is received;
- The WM resizes the client window, for example by calling the xcb_configure_window() function;
- The client would then repaint the window with the new size and update the XSync counter with the serial that it had received in step 1;
- The window manager and the compositing manager unblock window updates after receiving the XSync request acknowledgement. For example, now the window can be repainted by the compositing manager and there shouldn’t be glitches as long as the client behaves well.
Note that the window manager and the compositing manager are often the same. For example, both KWin and Mutter are compositing managers and window managers.
The frame synchronization protocol described above is called basic frame synchronization protocol. There is also an extended frame synchronization protocol, but it is not standardized and it is implemented only by a few compositing managers.
_NET_WM_SYNC_REQUEST and Xwayland
KWin supports the basic frame synchronization protocol, so there should be no visual glitches when resizing X11 windows in the Plasma Wayland session, right? At a quick glance, yes, but we forget about the most important detail: Wayland compositors don’t use XCompositeNameWindowPixmap() or xcb_composite_name_window_pixmap() to grab the contents of X11 windows; instead they rely on Xwayland attaching graphics buffers to wl_surface objects, so there is no strict ordering between the Wayland compositor receiving an XSync request acknowledgement and Xwayland attaching graphics buffers for the new window size.
In order to help better understand the issue, let’s consider a concrete example. Assume that a window with geometry 0,0 100x100 is being resized by dragging its left edge. If the left edge is dragged 10px to the right, the following will happen:
- A _NET_WM_SYNC_REQUEST client message will be sent to the client containing the XSync counter serial that must be set after processing the ConfigureNotify event that will be generated after the Wayland compositor calls xcb_configure_window() with the new window size;
- The Wayland compositor calls xcb_configure_window() to actually resize the window;
- The client receives the sync request client message and the ConfigureNotify event, repaints the window, and acknowledges the sync request;
- The Wayland compositor receives the sync request acknowledgement and updates the window position to 10,0.
But here is the problem: when the window position is updated to 10,0, it’s not guaranteed that the wl_surface associated with the X11 window has a buffer with the new window size, i.e. 90x100. It can take a while until Xwayland commits a graphics buffer with the right size. In the meantime, the compositor could compose the next frame with the new window position, i.e. 10,0, but the old surface size, i.e. 100x100. It would look as if the right window edge sticks out of the window decoration. After Xwayland attaches a buffer with the right size, the right window edge will correct itself.
So, ideally, the Wayland compositor should update the window position after receiving the XSync request acknowledgement and Xwayland attaching a new graphics buffer to the wl_surface.
With that in mind, the frame synchronization procedure looks as follows:
- The compositor blocks wl_surface commits by setting the _XWAYLAND_ALLOW_COMMITS property to 0 for the toplevel X11 window. This is needed to ensure the consistent order between XSync request acknowledgements and wl_surface commits. As long as the _XWAYLAND_ALLOW_COMMITS property is set to 0, Xwayland will not attempt to commit the wayland surface, for example attach a new graphics buffer after the client repaints the window;
- The compositor sends a _NET_WM_SYNC_REQUEST client message as before;
- The compositor resizes the client window as before;
- The client repaints the window and acknowledges the XSync request as before;
- After receiving the XSync acknowledgement, the compositor unblocks surface commits by setting the _XWAYLAND_ALLOW_COMMITS property to 1. Note that the window updates are still blocked, i.e. the window position is not updated yet;
- After Xwayland commits the wl_surface with a new graphics buffer, the window updates are unblocked, e.g. the window position is updated.
The frame synchronization process looks more involved with Xwayland, but it is still manageable.
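To summarize the ordering constraints, here is a hedged, compositor-agnostic sketch in Python. It is not KWin code; the window object and its methods are placeholders that stand in for the real X11 and Wayland plumbing:

```python
class XwaylandResizeSync:
    """Toy model of the resize bookkeeping described above."""

    def __init__(self):
        self.waiting_for_sync_ack = False
        self.waiting_for_buffer = False
        self.pending_position = None

    def start_resize(self, window, new_size, new_position, serial):
        window.set_property("_XWAYLAND_ALLOW_COMMITS", 0)  # block surface commits
        window.send_sync_request(serial)                   # _NET_WM_SYNC_REQUEST
        window.configure(new_size)                         # e.g. xcb_configure_window()
        self.waiting_for_sync_ack = True
        self.waiting_for_buffer = True
        self.pending_position = new_position

    def on_sync_ack(self, window):
        # The client repainted at the new size; let Xwayland commit the surface again.
        window.set_property("_XWAYLAND_ALLOW_COMMITS", 1)
        self.waiting_for_sync_ack = False

    def on_surface_commit(self, window):
        # Only move the window once a buffer with the new size has actually arrived.
        if not self.waiting_for_sync_ack and self.waiting_for_buffer:
            window.move(self.pending_position)
            self.waiting_for_buffer = False
```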
_NET_WM_SYNC_REQUEST support in applications
Most applications that use GTK and Qt support _NET_WM_SYNC_REQUEST, but there are applications that don’t participate in the frame synchronization protocol. If you use one of those apps, you will observe visual glitches during interactive resize.
Closing words
Frame synchronization is a difficult problem, and requires some very intricate code on both the compositor and the client side. But with the changes that we’ve made, I’m proud to say that KWin is one of the few compositors that properly handles frame synchronization for X11 windows on Wayland!
Sven Hoexter: GKE version 1.31.1-gke.1678000+ is a baddy
Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.
Updating from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. when kustomize is involved, with a patch like this:
```yaml
- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
      name: dummy
  target:
    kind: NetworkPolicy
```

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic, sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.
The Open Source Initiative Announces the Release of the Industry’s First Open Source AI Definition
RALEIGH, N.C., Oct. 28, 2024 — ALL THINGS OPEN 2024 — After a year-long, global, community design process, the Open Source AI Definition (OSAID) v.1.0 is available for public use.
The release of version 1.0 was announced today at All Things Open 2024, an industry conference focused on common issues of interest to the worldwide Open Source community. The OSAID offers a standard by which community-led, open and public evaluations will be conducted to validate whether or not an AI system can be deemed Open Source AI. This first stable version of the OSAID is the result of multiple years of research and collaboration, an international roadshow of workshops, and a year-long co-design process led by the Open Source Initiative (OSI), globally recognized by individuals, companies and public institutions as the authority that defines Open Source.
“The co-design process that led to version 1.0 of the Open Source AI Definition was well-developed, thorough, inclusive and fair,” said Carlo Piana, OSI board chair. “It adhered to the principles laid out by the board, and the OSI leadership and staff followed our directives faithfully. The board is confident that the process has resulted in a definition that meets the standards of Open Source as defined in the Open Source Definition and the Four Essential Freedoms, and we’re energized about how this definition positions OSI to facilitate meaningful and practical Open Source guidance for the entire industry.”
“The new definition requires Open Source models to provide enough information about their training data so that a ‘skilled person can recreate a substantially equivalent system using the same or similar data,’ which goes further than what many proprietary or ostensibly Open Source models do today,” said Ayah Bdeir, who leads AI strategy at Mozilla. “This is the starting point to addressing the complexities of how AI training data should be treated, acknowledging the challenges of sharing full datasets while working to make open datasets a more commonplace part of the AI ecosystem. This view of AI training data in Open Source AI may not be a perfect place to be, but insisting on an ideologically pristine kind of gold standard that will not actually be met by any model builder could end up backfiring.”
“We welcome OSI’s stewardship of the complex process of defining Open Source AI,” said Liv Marte Nordhaug, CEO of the Digital Public Goods Alliance (DPGA) secretariat. “The Digital Public Goods Alliance secretariat will build on this foundational work as we update the DPG Standard as it relates to AI as a category of DPGs.”
“Transparency is at the core of EleutherAI’s non-profit mission. The Open Source AI Definition is a necessary step towards promoting the benefits of Open Source principles in the field of AI,” said Stella Biderman, executive director at the EleutherAI Institute. “We believe that this definition supports the needs of independent machine learning researchers and promotes greater transparency among the largest AI developers.”
“Arriving at today’s OSAID version 1.0 was a difficult journey, filled with new challenges for the OSI community,” said OSI Executive Director, Stefano Maffulli. “Despite this delicate process, filled with differing opinions and uncharted technical frontiers—and the occasional heated exchange—the results are aligned with the expectations set out at the start of this two-year process. This is a starting point for a continued effort to engage with the communities to improve the definition over time as we develop with the broader Open Source community the knowledge to read and apply OSAID v.1.0.”
The text of the OSAID v.1.0 as well as a partial list of the many global stakeholders who endorse the definition can be found here: https://opensource.org/ai
About the Open Source Initiative
Founded in 1998, the Open Source Initiative (OSI) is a non-profit corporation with global scope formed to educate about and advocate for the benefits of Open Source and to build bridges among different constituencies in the Open Source community. It is the steward of the Open Source Definition and the Open Source AI Definition, setting the foundation for the global Open Source ecosystem. Join and support the OSI mission today at: https://opensource.org/join.
Trey Hunner: Adding keyboard shortcuts to the Python REPL
I talked about the new Python 3.13 REPL a few months ago, and again after 3.13 was released. I think it’s awesome.
I’d like to share a secret feature within the Python 3.13 REPL which I’ve been finding useful recently: adding custom keyboard shortcuts.
This feature involves a PYTHONSTARTUP file, use of an unsupported Python module, and dynamically evaluating code.
In short, we may be getting ourselves into trouble. But the result is very neat!
Thanks to Łukasz Langa for inspiring this post via his excellent EuroPython keynote talk.
The goal: keyboard shortcuts in the REPL
First, I’d like to explain the end result.
Let’s say I’m in the Python REPL on my machine and I’ve typed numbers =:
```
>>> numbers =
```
I can now hit Ctrl-N to enter a list of numbers I often use while teaching (Lucas numbers):
```
numbers = [2, 1, 3, 4, 7, 11, 18, 29]
```
That saved me some typing!
Getting a prototype working
First, let’s try out an example command.
Copy-paste this into your Python 3.13 REPL:
```python
from _pyrepl.simple_interact import _get_reader
from _pyrepl.commands import Command

class Lucas(Command):

    def do(self):
        self.reader.insert("[2, 1, 3, 4, 7, 11, 18, 29]")

reader = _get_reader()
reader.commands["lucas"] = Lucas
reader.bind(r"\C-n", "lucas")
```
Now hit Ctrl-N.
If all worked as planned, you should see that list of numbers entered into the REPL.
Cool! Now let’s generalize this trick and make Python run our code whenever it starts.
But first… a disclaimer.
Here be dragons 🐉
Notice that _ prefix in the _pyrepl module that we’re importing from? That means this module is officially unsupported.
The _pyrepl module is an implementation detail and its implementation may change at any time in future Python versions.
In other words: _pyrepl is designed to be used by Python’s standard library modules and not anyone else. That means that we should assume this code will break in a future Python version.
Will that stop us from playing with this module for the fun of it?
It won’t.
Creating a PYTHONSTARTUP file
So we’ve made one custom key combination for ourselves. How can we set up this command automatically whenever the Python REPL starts?
We need a PYTHONSTARTUP file.
When Python launches, if it sees a PYTHONSTARTUP environment variable it will treat that environment variable as a Python file to run on startup.
I’ve made a /home/trey/.python_startup.py file and I’ve set this environment variable in my shell’s configuration file (~/.zshrc):
```bash
export PYTHONSTARTUP=$HOME/.python_startup.py
```
To start, we could put our single custom command in this file:
```python
try:
    from _pyrepl.simple_interact import _get_reader
    from _pyrepl.commands import Command
except ImportError:
    pass  # Not in the new pyrepl OR _pyrepl implementation changed
else:
    class Lucas(Command):
        def do(self):
            self.reader.insert("[2, 1, 3, 4, 7, 11, 18, 29]")

    reader = _get_reader()
    reader.commands["lucas"] = Lucas
    reader.bind(r"\C-n", "lucas")
```
Note that I’ve stuck our code in a try-except block. Our code only runs if those _pyrepl imports succeed.
Note that this might still raise an exception when Python starts if the reader object’s commands attribute or bind method change in a way that breaks our code.
Personally, I’d like to see those breaking changes occur print out a traceback the next time I upgrade Python. So I’m going to leave those last few lines without their own catch-all exception handler.
Generalizing the code
Here’s a PYTHONSTARTUP file with a more generalized solution:
```python
try:
    from _pyrepl.simple_interact import _get_reader
    from _pyrepl.commands import Command
except ImportError:
    pass
else:
    # Hack the new Python 3.13 REPL!
    cmds = {
        r"\C-n": "[2, 1, 3, 4, 7, 11, 18, 29]",
        r"\C-f": '["apples", "oranges", "bananas", "strawberries", "pears"]',
    }
    from textwrap import dedent
    reader = _get_reader()
    for n, (key, text) in enumerate(cmds.items(), start=1):
        name = f"CustomCommand{n}"
        exec(dedent(f"""
            class _cmds:
                class {name}(Command):
                    def do(self):
                        self.reader.insert({text!r})
                reader.commands[{name!r}] = {name}
                reader.bind({key!r}, {name!r})
        """))
    # Clean up all the new variables
    del _get_reader, Command, dedent, reader, cmds, text, key, name, _cmds, n
```
This version uses a dictionary to map keyboard shortcuts to the text they should insert.
Note that we’re repeatedly building up a string that defines a Command subclass for each shortcut, using exec to execute the code for that custom Command subclass, and then binding the keyboard shortcut to that new command class.
At the end we then delete all the variables we’ve made so our REPL will start with the clean global environment we normally expect it to have:
```
Python 3.13.0 (main, Oct 8 2024, 10:37:56) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> dir()
['__annotations__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__']
```
Is this messy?
Yes.
Is that a needless use of a dictionary that could have been a list of 2-item tuples instead?
Yes.
Does this work?
Yes.
Doing more interesting and risky stuff
Note that there are many keyboard shortcuts that may cause weird behaviors if you bind them.
For example, if you bind Ctrl-i, your binding may trigger every time you try to indent. And if you try to bind Ctrl-m, your binding may be ignored because this is equivalent to hitting the Enter key.
So be sure to test your REPL carefully after each new binding you try to invent.
If you want to do something more interesting, you could poke around in the _pyrepl package to see what existing code you can use/abuse.
For example, here’s a very hacky way of making a binding to Ctrl-x followed by Ctrl-r to make this import subprocess, type in a subprocess.run line, and move your cursor between the empty string within the run call:
```python
class _cmds:
    class Run(Command):
        def do(self):
            from _pyrepl.commands import backward_kill_word, left
            backward_kill_word(self.reader, self.event_name, self.event).do()
            self.reader.insert("import subprocess\n")
            code = 'subprocess.run("", shell=True)'
            self.reader.insert(code)
            for _ in range(len(code) - code.index('""') - 1):
                left(self.reader, self.event_name, self.event).do()

reader.commands["subprocess_run"] = _cmds.Run
reader.bind(r"\C-x\C-r", "subprocess_run")
```

What keyboard shortcuts are available?
As you play with customizing keyboard shortcuts, you’ll likely notice that many key combinations result in strange and undesirable behavior when overridden.
For example, overriding Ctrl-J will also override the Enter key… at least it does in my terminal.
I’ll list the key combinations that seem unproblematic on my setup with Gnome Terminal in Ubuntu Linux.
Here are Control key shortcuts that seem to be completely unused in the Python REPL:
- Ctrl-N
- Ctrl-O
- Ctrl-P
- Ctrl-Q
- Ctrl-S
- Ctrl-V
Note that Ctrl-H is often an alternative to the Backspace key, so you may not want to override it.
Here are Alt/Meta key shortcuts that appear unused on my machine:
- Alt-A
- Alt-E
- Alt-G
- Alt-H
- Alt-I
- Alt-J
- Alt-K
- Alt-M
- Alt-N
- Alt-O
- Alt-P
- Alt-Q
- Alt-S
- Alt-V
- Alt-W
- Alt-X
- Alt-Z
You can add an Alt shortcut by using \M (for “meta”). So r"\M-a" would capture Alt-A just as r"\C-a" would capture Ctrl-A.
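For example, assuming the reader object and the "lucas" command from the prototype snippet above are still set up, an Alt binding could look like this:

```python
reader.bind(r"\M-a", "lucas")   # Alt-A now inserts the Lucas numbers
```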
Here are keyboard shortcuts that can be customized but you might want to consider whether the current default behavior is worth losing:
- Alt-B: backward word (same as Ctrl-Left)
- Alt-C: capitalize word (does nothing on my machine…)
- Alt-D: kill word (delete to end of word)
- Alt-F: forward word (same as Ctrl-Right)
- Alt-L: downcase word (does nothing on my machine…)
- Alt-U: upcase word (does nothing on my machine…)
- Alt-Y: yank pop
- Ctrl-A: beginning of line (like the Home key)
- Ctrl-B: left (like the Left key)
- Ctrl-E: end of line (like the End key)
- Ctrl-F: right (like the Right key)
- Ctrl-G: cancel
- Ctrl-H: backspace (same as the Backspace key)
- Ctrl-K: kill line (delete to end of line)
- Ctrl-T: transpose characters
- Ctrl-U: line discard (delete to beginning of line)
- Ctrl-W: word discard (delete to beginning of word)
- Ctrl-Y: yank
- Alt-R: restore history (within history mode)
Find something fun while playing with the _pyrepl package’s inner-workings?
I’d love to hear about it! Comment below to share what you found.
Real Python: Beautiful Soup: Build a Web Scraper With Python
Web scraping is the automated process of extracting data from the internet. The Python libraries Requests and Beautiful Soup are powerful tools for the job. To effectively harvest the vast amount of data available online for your research, projects, or personal interests, you’ll need to become skilled at web scraping.
In this tutorial, you’ll learn how to:
- Inspect the HTML structure of your target site with your browser’s developer tools
- Decipher data encoded in URLs
- Use Requests and Beautiful Soup for scraping and parsing data from the internet
- Step through a web scraping pipeline from start to finish
- Build a script that fetches job offers from websites and displays relevant information in your console
If you like learning with hands-on examples and have a basic understanding of Python and HTML, then this tutorial is for you! Working through this project will give you the knowledge and tools you need to scrape any static website out there on the World Wide Web. You can download the project source code by clicking on the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about web scraping in Python.
Take the Quiz: Test your knowledge with our interactive “Beautiful Soup: Build a Web Scraper With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Beautiful Soup: Build a Web Scraper With Python
In this quiz, you'll test your understanding of web scraping using Python. By working through this quiz, you'll revisit how to inspect the HTML structure of a target site, decipher data encoded in URLs, and use Requests and Beautiful Soup for scraping and parsing data from the Web.
What Is Web Scraping?
Web scraping is the process of gathering information from the internet. Even copying and pasting the lyrics of your favorite song can be considered a form of web scraping! However, the term “web scraping” usually refers to a process that involves automation. While some websites don’t like it when automatic scrapers gather their data, which can lead to legal issues, others don’t mind it.
If you’re scraping a page respectfully for educational purposes, then you’re unlikely to have any problems. Still, it’s a good idea to do some research on your own to make sure you’re not violating any Terms of Service before you start a large-scale web scraping project.
Reasons for Automated Web Scraping
Say that you like to surf—both in the ocean and online—and you’re looking for employment. It’s clear that you’re not interested in just any job. With a surfer’s mindset, you’re waiting for the perfect opportunity to roll your way!
You know about a job site that offers precisely the kinds of jobs you want. Unfortunately, a new position only pops up once in a blue moon, and the site doesn’t provide an email notification service. You consider checking up on it every day, but that doesn’t sound like the most fun and productive way to spend your time. You’d rather be outside surfing real-life waves!
Thankfully, Python offers a way to apply your surfer’s mindset. Instead of having to check the job site every day, you can use Python to help automate the repetitive parts of your job search. With automated web scraping, you can write the code once, and it’ll get the information that you need many times and from many pages.
Note: In contrast, when you try to get information manually, you might spend a lot of time clicking, scrolling, and searching, especially if you need large amounts of data from websites that are regularly updated with new content. Manual web scraping can take a lot of time and be highly repetitive and error-prone.
There’s so much information on the internet, with new information constantly being added. You’ll probably be interested in some of that data, and much of it is out there for the taking. Whether you’re actually on the job hunt or just want to automatically download all the lyrics of your favorite artist, automated web scraping can help you accomplish your goals.
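For a quick taste of what that automation looks like in code, here is a minimal, hedged sketch using Requests and Beautiful Soup. The URL and the CSS class are placeholders for illustration, not the tutorial's actual target site:

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/jobs"          # placeholder URL, not a real job board
page = requests.get(URL, timeout=10)

soup = BeautifulSoup(page.text, "html.parser")
for card in soup.find_all("div", class_="job-card"):   # placeholder class name
    title = card.find("h2")
    if title:
        print(title.get_text(strip=True))
```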
Challenges of Web Scraping
The internet has grown organically out of many sources. It combines many different technologies, styles, and personalities, and it continues to grow every day. In other words, the internet is a hot mess! Because of this, you’ll run into some challenges when scraping the web:
- Variety: Every website is different. While you’ll encounter general structures that repeat themselves, each website is unique and will need personal treatment if you want to extract the relevant information.
- Durability: Websites constantly change. Say you’ve built a shiny new web scraper that automatically cherry-picks what you want from your resource of interest. The first time you run your script, it works flawlessly. But when you run the same script a while later, you run into a discouraging and lengthy stack of tracebacks!
Unstable scripts are a realistic scenario because many websites are in active development. If a site’s structure changes, then your scraper might not be able to navigate the sitemap correctly or find the relevant information. The good news is that changes to websites are often small and incremental, so you’ll likely be able to update your scraper with minimal adjustments.
Still, keep in mind that the internet is dynamic and keeps on changing. Therefore, the scrapers you build will probably require maintenance. You can set up continuous integration to run scraping tests periodically to ensure that your main script doesn’t break without your knowledge.
An Alternative to Web Scraping: APIs
Some website providers offer application programming interfaces (APIs) that allow you to access their data in a predefined manner. With APIs, you can avoid parsing HTML. Instead, you can access the data directly using formats like JSON and XML. HTML is primarily a way to visually present content to users.
When you use an API, the data collection process is generally more stable than it is through web scraping. That’s because developers create APIs to be consumed by programs rather than by human eyes.
The front-end presentation of a site might change often, but a change in the website’s design doesn’t affect its API structure. The structure of an API is usually more permanent, which means it’s a more reliable source of the site’s data.
However, APIs can change as well. The challenges of both variety and durability apply to APIs just as they do to websites. Additionally, it’s much harder to inspect the structure of an API by yourself if the provided documentation lacks quality.
Read the full article at https://realpython.com/beautiful-soup-web-scraper-python/ »
Drupal life hack's: Secrets of Secure Development in Drupal: Key Functions
Real Python: Quiz: Beautiful Soup: Build a Web Scraper With Python
In this quiz, you’ll test your understanding of web scraping with Python, Requests, and Beautiful Soup.
By working through this quiz, you’ll revisit how to inspect the HTML structure of your target site with your browser’s developer tools, decipher data encoded in URLs, use Requests and Beautiful Soup for scraping and parsing data from the Web, and gain an understanding of what a web scraping pipeline looks like.
Golems GABB: Using JavaScript Frameworks - React, Vue, Angular in Drupal
The integration of JavaScript frameworks, like React, Vue, and Angular, with Drupal has sparked a wave of creativity and innovation. It goes beyond building websites. This blog explores the benefits and methods of integrating these frameworks with Drupal, demonstrating how this fusion enhances front-end development and user engagement.
Traditional static websites and strict CMS constraints are becoming a thing of the past. Nowadays, developers are embracing the adaptability and engagement provided by JavaScript frameworks to design UI within the Drupal environment.
To really understand how JavaScript frameworks such as React, Vue, and Angular work with Drupal, you need to know about Drupal's front-end environment and the difficulties developers face when creating complex user interfaces in this powerful content management system.
Thomas Lange: 30.000 FAIme jobs created in 7 years
The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old.
It had reached 10.000 jobs in March 2021 and 20.000 jobs in June 2023. A nice increase in usage.
Here are some statistics for the jobs processed in 2024:
Type of jobs
- 3% cloud image
- 11% live ISO
- 86% install ISO

Distribution
- 2% bullseye
- 8% trixie
- 12% ubuntu 24.04
- 78% bookworm

Misc
- 18% used a custom postinst script
- 11% provided their ssh pub key for passwordless root login
- 50% of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
- The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18G in size.
The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The number of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. Times given are calculated from the jobs of the past two weeks.
| Job type | Avg | Max |
| --- | --- | --- |
| install no desktop | 1 min | 2 min |
| install GNOME | 2 min | 5 min |

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.
| Job type | Avg | Max |
| --- | --- | --- |
| live no desktop | 4 min | 6 min |
| live GNOME | 8 min | 11 min |

The times for cloud images are similar to live images.
A New Feature
For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.
The Next Milestone
At the end of this year the FAI project will be 25 years old. If you have a success story of your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.
Here's an overview of what happened in the past 20 years in the FAI project.
About FAIme
FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.
Python Bytes: #407 Back to the future, destination 3.14
Zato Blog: Salesforce API integrations and connected apps
This instalment in a series of articles about API integrations with Salesforce covers connected apps - how to create them and how to obtain their credentials needed to exchange REST messages with Salesforce.
In Salesforce's terminology, a connected app is, essentially, an API client. It has credentials, a set of permissions, and it works on behalf of a user in an automated manner.
In particular, the kind of a connected app that I am going to create below is one that can be used in backend, server-side integrations that operate without any direct input from end users or administrators, i.e. the app is created once, its permissions and credentials are set once, and then it is able to work uninterrupted in the background, on server side.
Server-side systems are quite unlike other kinds of apps, such as mobile ones, that assume there is a human operator involved - they have their own work characteristics, related yet different, and I am not going to cover them here.
Note that permission types and their scopes are a separate, broad subject and they will be described in a separate how-to article.
Finally, I assume that you are either an administrator in a Salesforce organization or that you are preparing information for another person with similar grants in Salesforce.
Conceptually, there is nothing particularly unusual about Salesforce connected apps, it is just its own mini-world of jargon and, at the end of the day, it simply enables you to invoke APIs that Salesforce is built on. It is just that knowing where to click, what to choose and how to navigate the user interface can be a daunting challenge that this article hopes to make easier to overcome.
The steps
For an automated, server-side connected app to make use of Salesforce APIs, the requirements are:
- Having access to username/password credentials
- Creating a connected app
- Granting permissions to the app (not covered in this article)
- Obtaining a customer key and customer secret for the app
You will note that there are four credentials in total:
- Username
- Password
- Customer key
- Customer secret
Also, depending on what chapter of the Salesforce documentation you are reading, you will note that the customer key can also be known as "client_id" whereas another name for the customer secret is "client_secret". These two pairs of names mean the same thing.
Access to username/password credentials
For starters, you need to have an account in Salesforce, a combination of username + password that you can log in with and on whose behalf the connected app will be created:
Creating a connected app
Once you are logged in, go to Setup in the top right-hand corner:
In the search box, look up "app manager":
Next, click the "New Connected App" button to the right:
Fill out the basic details such as "Connected App Name" and make sure that you select "Enable OAuth Settings". Then, given that in this document we are not dealing with the subject of permissions at all, grant full access to the connected app and finally click "Save" at the bottom of the page.
Obtaining a customer key and customer secret
We have a connected app but we still do not know what its customer key and secret are. To reveal them, go to the "App Manager" once more, either via the search box or using the menu on the left hand side.
Find your app in the list and click "View" in the list of actions. Observe that it is "View", not "Edit" or "Manage", where you can check what the credentials are:
The customer key and secret can now be revealed in the "API (Enable OAuth Settings)" section:
This concludes the process - you have a connected app and all the credentials needed now.
Testing
Seeing as this document is part of a series of how-tos in the context of Zato, if you would like to integrate with Salesforce in Python, at this point you will be able to follow the steps in another article where everything is detailed separately.
Just as a quick teaser, it would look akin to the below.
```python
...

# Salesforce REST API endpoint to invoke
path = '/sobjects/Campaign/'

# Build the request to Salesforce based on what we received
request = {
    'Name': input.name,
    'Segment__c': input.segment,
}

# Create a reference to our connection definition ..
salesforce = self.cloud.salesforce['My Salesforce Connection']

# .. obtain a client to Salesforce ..
with salesforce.conn.client() as client: # type: SalesforceClient

    # .. create the campaign now.
    response = client.post(path, request)

...
```
On a much lower level, however, if you would just like to quickly test out whether you configured the connected app correctly, you can invoke from the command line a Salesforce REST endpoint that will return an OAuth token, as below.
Note that, as I mentioned it previously, client_id is the same as customer key and client_secret is the same as customer secret.
```bash
curl https://example.my.salesforce.com/services/oauth2/token \
  -H "X-PrettyPrint: 1" \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=password' \
  --data-urlencode 'username=hello@example.com' \
  --data-urlencode 'password=my.password' \
  --data-urlencode 'client_id=my.customer.key' \
  --data-urlencode 'client_secret=my.client.secret'
```
The result will be, for instance:
```json
{
    "access_token" : "008e0000000PTzLPb!4Vzm91PeIWJo.IbPzoEZf2ygEM.6cavCt0YwAGSM",
    "instance_url" : "https://example.my.salesforce.com",
    "id" : "https://login.salesforce.com/id/008e0000000PTzLPb/0081fSUkuxPDrir000j1",
    "token_type" : "Bearer",
    "issued_at" : "1649064143961",
    "signature" : "dwb6rwNIzl76kZq8lQswsTyjW2uwvTnh="
}
```
Above, we have an OAuth bearer token on output - this can be used in subsequent, business REST calls to Salesforce but how to do it exactly in practice is left for another article.
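As a rough illustration of what such a follow-up call could look like in plain Python using the requests library; the API version and object path below are assumptions made for the example, not values prescribed by this article:

```python
import requests

instance_url = "https://example.my.salesforce.com"
access_token = "..."  # the access_token value from the response above

response = requests.get(
    f"{instance_url}/services/data/v57.0/sobjects/Campaign/",  # assumed API version
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```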
Next steps:
➤ Read about how to use Python to build and integrate enterprise APIs that your tests will cover
➤ Python API integration tutorial
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
qtatech.com blog: The End of Drupal 7: Why Are So Many Sites Choosing WordPress?
Since January 5, 2025, Drupal 7 has officially reached its end of life. This iconic Content Management System (CMS), used by thousands of sites worldwide, no longer receives security updates or official support. This deadline has prompted many site administrators to rethink their strategy and select a new platform to ensure continuity.
Droptica: Data Migration to Drupal Using Products from External Database - Guide
How can you perform a product data migration from an external database to Drupal using the tools available within the Migrate API? In this blog post, I’ll show you how to connect to the database, prepare the data structure, and use the migration tools available in Drush. This entry is aimed at people who have already had experience with migrations as well as those who are just getting started with them. I encourage you to read the article or watch the video of the “Nowoczesny Drupal” series.
PreviousNext: New resource: How to prepare open source Requests for Proposals
The Drupal Association has published client guides to RFPs that prioritise open source software solutions.
by fiona.crowson / 28 October 2024
In a recent blog post, 'How to write an RFP for Open Source Solutions: Featuring Drupal Certified Partners', the Drupal Association outlines:
- the advantages of open source software
- tips for finding the ideal service provider (and why Drupal Certified Partners like PreviousNext make for good partners)
- guidance for crafting a successful RFP
- strategies for evaluating proposals
Clients also have access to a downloadable open source Request For Proposal (aka Request For Quote) template.
The core of the guide provides a detailed overview of why choosing a Drupal Certified Partner is the key to the technical expertise, smooth collaboration and commitment to quality and innovation that help ensure the success of your projects. PreviousNext has attained the current top-ranked Drupal Certified Partner status globally by demonstrating our proven track record, commitment to the Drupal open source community and our verifiable capabilities.
Our team is highly experienced and happy to answer your questions about the advantages of Drupal, so please feel free to get in touch.
Quick links
- How to write an RFP article
- RFP template download
Kwave Update - October 2024
Kwave is an audio editor based on the KDE Frameworks. It was started in 1998 by Martin Wilz, and Thomas Eschenbacher has been the main developer since 1999. In recent years development has slowed. I wanted to do some software development and contribute to KDE, and I’m interested in audio, so towards the end of 2023 I started working on Kwave.
Kwave had not been ported to Qt 6 and KDE Frameworks 6 yet, so that’s what I started working towards. My first merge requests were to update deprecated code. (MR Convert plugin desktop files to json, MR Port away from deprecated Qt API, MR port away from deprecated I18N_NOOP macros, MR bump KF5_MIN_VERSION and update where KMessageBox API has been deprecated, MR port QRegExp to QRegularExpression)
With that preparatory work done, I worked at porting Kwave to Qt 6 and KDE Frameworks 6. Most of that work was straightforward. The biggest changes were in Qt Multimedia, which Kwave can use for playback and recording. I finally got that done and merged in August 2024, just after version 24.08 was branched, so that change will get released in version 24.12 in December 2024. (MR port to Qt6 and KF6)
Next I did some code cleanup. (MR use ECMGenerateExportHeader, MR add braces to avoid ambiguous else, MR call KCrash::initialize() after KAboutData::setApplicationData())
Laurent Montel added the FreeBSD job to the Continuous Integration configuration, but the build failed initially. I’ve never run FreeBSD, but with a few tries and pushing changes to trigger CI, I managed to get the CI to pass. I’m glad Laurent took the initiative here, because the FreeBSD job uses clang, so with the existing Linux job using gcc, CI makes sure Kwave builds with both compilers now. (MR Add freebsd)
I applied for a KDE Developer account and got approved on August 24, 2024. Now I could commit changes myself instead of having to remind others to do it.
Carl Schwan cleaned up some code and updated the zoom toolbar to use standard icons, which enabled removing the built-in zoom icons. (MR Modernize ZoomToolbar) That was the incentive I needed to remove the rest of the built-in icons and use standard icons instead, which helps Kwave fit the user’s theme better. (MR use icons from current theme) I also reordered the playback toolbar in a way that seemed more logical to me. (MR update player toolbar)
I investigated a bug: Kwave Playback settings dialog loads incorrectly until you switch playback methods and fixed it in MR: make sure a valid method gets selected when PlayBackDialog opens
I have some more work in progress, and I plan to continue working on Kwave. I will try to blog about what I’m doing, but I’m not going to commit to any regular schedule.
Get Involved
Kwave depends on the rest of KDE. It is built on the Frameworks, and the KDE sysadmin team keeps the infrastructure running. You can help KDE by getting involved, or at least donate.
If you would like to help improve Kwave, use it and try out its features! If you have questions or ideas, discuss them. If you find bugs, report them. If you want to get involved with development, download the source code and start hacking!
If you think my own work on Kwave is worth something and you can afford it, you can donate to me through Liberapay, Stripe, or PayPal.