Feeds

Konsole Layout Automation (part 1)

Planet KDE - Mon, 2024-06-10 20:00

Do you find yourself opening up the same tabs in your terminal and running the same commands every day? Have you wanted to make this easier, say, by clicking a desktop icon or running a short command?

Recently, I saw a question along these lines in the KDE support room on Matrix. Someone wanted to have Konsole automatically open some tabs and run specific commands when it's started. Since I've done this kind of thing, for personal and professional environments, I thought I'd share. You too can avoid doing the same manual terminal setup day after day, and just get to what you're working on.

Here in Part 1, I'll outline how to open tabs with different profiles and run commands in them. Next week, in Part 2, I'll share how to set up layouts (a.k.a. split panes) and run commands in those. You can combine these techniques to make your setup truly flexible. Using aliases, functions and scripts, you can bring up a complex layout with one command or button click.

Inspiration

I have a motto: If I have to do the exact same thing more than twice (on a computer), I should probably script / automate it. It's an up front time investment that saves me a lot of manual tedium later.

My inspiration for scripting terminal layouts came from iTerm2's customization abilities. After a few years of using it on a work-provided MacBook Pro, I was spoiled. It's truly nice how easy it is to create profiles and save splits in the GUI. You can then tweak their configurations using any text editor. When I searched for how to do this with Linux terminals, I found quite a few people were also looking for an "iTerm2 equivalent for Linux".

Goal: Launch a Konsole window with tabs which run commands

💡 Note For more detailed instructions on using Konsole, please see the output of konsole --help and take a look at the Konsole Handbook.

We'll start with that request from Matrix: having Konsole create tabs and run commands in them when it starts. Here are the steps I used to accomplish this. Feel free to use this as inspiration and adapt for your own needs.

Use case: Quickly set up the working environment I use daily. There will be a couple of tabs connected to different remote hosts, one that changes to a notes directory, and one running a system monitor.

Here's an overview of the steps:

  • Create a Konsole profile which will run a command and optionally use another color theme
  • Create a tab layout file, used when launching Konsole, which will open a tab with that profile
  • Create a profile with a different command for an additional tab, and then identify commands for a couple of the other tabs
  • Update the tab layout file to launch Konsole with all the desired tabs

Now let's get into the details.

Create the first Konsole profile

Making custom profiles for different hosts is where my adventures in customizing Konsole began.

Use case: I would like to automatically connect to my hosted VPS in a tab with a specific color theme.

Reasoning: Having the profile run the command for me is quicker and easier than typing it by hand (especially as commands get more complex). With a custom color scheme, I can see at a glance which tab has my server session vs. any other system. This helps me avoid typing commands meant for my server into another system and vice-versa.

💡 Note Profiles are stored as plain text files in ~/.local/share/konsole/. Profile names are cAsE senSiTive. Since profile files are plain text, you can create, edit, and delete them with scripts, though that is beyond the scope of this post.

The first step is to set up a custom profile with a custom color theme, a custom tab title format, and a command.

I'm using "eternal terminal" (et), rather than ssh, to connect to the VPS because of its ability to reconnect. I also have an entry for the host in ~/.ssh/config.

In Settings -> Manage Profiles -> New, I created a profile named VPS. The following are the options I changed; others were left as-is ("hostname" is a stand-in for the actual VPS hostname):

  • Name: VPS
  • Command: et VPS
  • Tab title format: %n hostname
  • Tab Color: I chose a color
  • Appearance -> Color scheme & font: I chose a different color scheme
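
For reference, the saved profile is just a short INI-style text file. A profile like the one above might look roughly like this; the key names follow Konsole's profile format, but treat the exact contents (and the color scheme name) as an approximation and check the real file under ~/.local/share/konsole/:

```ini
[General]
Name=VPS
Command=et VPS
LocalTabTitleFormat=%n hostname

[Appearance]
ColorScheme=Breeze
```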

Now that I have a custom profile, I can use it with a new tab in Konsole. When I go to File -> New Tab and click "VPS", the tab will open with the new profile, and the command will run to connect to the server.

You can use Konsole's command line options in the following way to have it launch with a tab using a profile.

konsole --new-tab --profile VPS

This command can be used with an alias, function, script, desktop shortcut, or whatever you prefer.
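
For instance, a desktop shortcut is just a small .desktop file. Here's a hedged sketch that writes one; the file name, display name, and icon are examples I picked, so adjust them to taste:

```shell
# Hypothetical desktop shortcut for launching Konsole with the VPS profile.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/konsole-vps.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Konsole VPS
Exec=konsole --new-tab --profile VPS
Icon=utilities-terminal
Terminal=false
EOF
```

After this, "Konsole VPS" should show up in your application launcher.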

There's another way to launch Konsole which is better for opening multiple tabs. It uses a "tabs configuration file".

Create a tabs configuration file

💡 Note I have found this to be unreliable if any tab uses ssh or et; there's an open bug report for this. Until it is resolved, you may have better luck with a shell script that opens multiple tabs. I'll add information below.

Now I'll create a tabs configuration file with the VPS profile in it.

Each line in this file describes a tab for Konsole to open. The format is detailed in the command line options documentation. I keep these in ~/konsole_layouts/; keep yours in whatever directory works for you. I'll save this file as "vps-tabs".

Each line must have at least a profile or a command field. It can have both, and there are other options as well.
Do not specify a command on a line for a tab if you also use a profile which itself has a command. This may lead to unexpected behavior.

If you only need to run a command in a tab, you can just use the command field. Combine a command with a profile (one that has no command of its own) if you want to customize the tab further.

This is what will go in the file for now for my first tab:

title: VPS ;; profile: VPS

The command below will start Konsole using this file. (Change the file path to point to the file on your system.) Opening Konsole in this way can also be done with an alias, script, etc.

konsole --hold --tabs-from-file ~/konsole_layouts/vps-tabs

Notice that I'm using --hold, which tells Konsole to keep tabs open even after their command has exited. This helps with debugging any problems that might arise (unless the window crashes entirely or fails to load). Once things are set up and running smoothly, you can drop --hold if you prefer.
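
If you end up with several layout files, a small shell function saves some typing. This is a hedged sketch assuming the ~/konsole_layouts/ directory convention from above; the function name is my own invention:

```shell
# Hypothetical helper: launch Konsole with a saved tabs file by name.
layout() {
  konsole --hold --tabs-from-file "$HOME/konsole_layouts/$1" &
}
```

Then running "layout vps-tabs" brings up the whole window.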

Create additional profiles

Use case: At the start of the day, I want to automatically open tabs which connect to both servers I need to work with. I want an additional tab open to my notes directory and another one running a system monitor.

Time to add an additional profile to Konsole:

  • Server2 - this will use ssh to connect to a second server and use a profile for a different color theme

I won't create profiles for the system monitor tab or the Notes tab. I'll run those commands directly from the tab configuration file.

Update the tabs file with additional tab definitions

At this point, I'll add lines to the tabs configuration file I created.

  • A tab for the Server2 profile.
  • A tab which changes to a notes directory, and uses the Sand profile with colors I prefer for them.
  • A tab for the system monitor which runs the btm command and uses another profile with yet another color scheme.

The file now looks like this:

title: VPS ;; profile: VPS
title: SysAdminNotes ;; workdir: ~/Nextcloud/Notes/SysAdmin ;; profile: Sand
title: Server2 ;; profile: Server2
title: NavyBlue ;; command: btm ;; profile: NavyBlue

And now, when I run konsole --hold --tabs-from-file ~/konsole_layouts/vps-tabs, Konsole launches with all the tabs I need.

Bonus: Run commands after connecting to a remote host

It's possible with both ssh and et to run commands on a remote host after connection. For instance, I could update the command in my VPS profile like so:

  • Name: VPS
  • Command: et VPS -c 'sudo apt update && apt list --upgradable; exec /bin/zsh'
  • Tab title format: %n hostname
  • Tab Color: I chose a color
  • Appearance -> Color scheme & font: I chose a different color scheme

The command not only connects to my VPS, it also displays a list of any available updates, and stays connected to the host. Note the exec /bin/zsh at the end. This will keep an interactive terminal open after other commands are run.

With ssh, to run a command on a remote host while staying connected, the command is slightly different:

ssh -t VPS 'sudo apt update && apt list --upgradable; zsh'

But what if I need to connect to a jumphost before the remote host and I still want to run a command after I connect?

This can be done with either SSH or Eternal Terminal. For SSH, the process is relatively easy, and plenty of sites online provide instructions. Eternal Terminal, on the other hand, is a bit more tricky.

It took some research and experimentation to get this working with Eternal Terminal, so here it is ("jumphost" is a placeholder for the actual hostname of the jumphost).

Command=et jumphost -c "ssh -t remotehost 'cd /some/directory && . ../some_script && cd ../another-directory; exec /bin/zsh'"

Alternative to using a tabs file: a shell script or function

If you have difficulties with using a tabs configuration file, you can use a shell script (or function) that takes advantage of the command line options Konsole has. For instance, using the -e flag we can have tabs run commands:

#!/usr/bin/env bash
konsole --new-tab -e echo This is tab 1 &
konsole --new-tab -e echo This is tab 2 &
konsole --new-tab -e echo This is tab 3 &
konsole --new-tab -e echo This is tab 4 &

You can combine commands with profiles as well:

#!/usr/bin/env bash
konsole --new-tab --profile VPS &
konsole --new-tab --profile Sand --workdir ~/Nextcloud/Notes/SysAdmin &
konsole --new-tab --profile Server2 &
konsole --new-tab --profile NavyBlue -e btm &
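
If the repetition in that script bothers you, the same tabs can be driven from a list. This is a hedged sketch that only prints the commands as a dry run; once the output looks right, remove the echo (and append & to background each launch):

```shell
#!/usr/bin/env bash
# Each entry holds the options for one tab, using the profiles from this post.
tabs=(
  "--profile VPS"
  "--profile Sand --workdir $HOME/Nextcloud/Notes/SysAdmin"
  "--profile Server2"
  "--profile NavyBlue -e btm"
)
for t in "${tabs[@]}"; do
  # Word splitting of $t is deliberate here.
  echo konsole --new-tab $t
done
```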

Explore the Konsole docs and experiment!

Known issues
  • Launching Konsole with --tabs-from-file will open an extra tab in addition to the ones from the configuration file. This has an open bug report.
  • Sometimes, with --tabs-from-file, the new Konsole window immediately disappears. This may be related to ssh/et being in a command. I filed a bug report for that.
  • If using a tabs configuration file, or a script: only use Eternal Terminal with one tab. Trying to use it with more than one tab will result in et core dumping in one of them.
Categories: FLOSS Project Planets

Switching to GNU/Linux: Mentally

Planet KDE - Mon, 2024-06-10 20:00

Stallman was right; in the wake of Microsoft’s announcement of its much-maligned Recall feature and widespread public backlash to the terms and conditions for Adobe Creative Cloud products, it’s clear that trust in big tech and the software it produces is rapidly eroding. Under the circumstances, it’s no surprise that Free/Libre and Open Source Software (FLOSS) is seeing an uptick in interest from the public at large. So as ever more average users consider “switching to Linux,” it strikes me that while there exist tomes on the technical aspects, there seems to be much less written on the shift in thinking that is part and parcel of every experienced and well-adjusted FLOSS user. So if you’re making the switch or know someone who is, here’s some advice to make the most of the transition.

He was right. CC BY-NC-ND 3.0

Welcome #

First of all: welcome to GNU/Linux! You’ve chosen the operating system that powers bullet trains, the world’s fastest supercomputers, U.S.A. air traffic control, CERN’s Large Hadron Collider, and Google, Amazon, and Microsoft’s cloud services, used by NASA, the People’s Liberation Army, the Turkish government, whitehouse.gov, the U.S.A. Department of Defense, France’s national police force, ministry of agriculture, and parliament, Iceland’s public schools, the Dutch Police Internet Research and Investigation Network, Burlington Coat Factory, Peugeot, DreamWorks Animation, the Chicago Mercantile Exchange, the London Stock Exchange, the New York Stock Exchange, and Stephen Fry.

As you’ve no doubt inferred by now, GNU/Linux users span from your everyday cat-video viewer to large institutions and organizations where operating system reliability and performance means the difference between life and death. No matter where you are on this spectrum, with a little humility, open-mindedness, and perseverance, I promise that you can find yourself every bit as happily at home with GNU/Linux as you were with whatever OS you’ve been using up to this point. This may mean giving up a long-trusted piece of software for something new and different, but for many new users the most hard-won battle is a change in mentality.

You’re not a power-user anymore #

I’ve heard it said that the most “computer literate” people often find it especially arduous to adjust to GNU/Linux. I’ve been there; it’s a frightening thing to go from being the person family, friends, and neighbors call for help with any device that has so much as an LED on it to feeling like that clueless relative with a dozen toolbars installed on their outdated version of Internet Explorer. The reality is that while you’ve gotten very good at navigating the operating system you’ve been using for the past twenty years, very little of that knowledge is useful in GNU/Linux. This is something you’re going to have to accept early on: no matter what distro you choose, it’s going to be different to Windows or MacOS in very fundamental ways.

This means that, no matter your mastery of Windows keyboard shortcuts, or how convoluted your AutoHotkey config may be, it’s going to take you some time to grasp the basics. Beyond that, the bar to become a GNU/Linux power-user is much, much higher than it is on proprietary operating systems. In case you’re feeling intimidated, know that this comes with some serious advantages. GNU/Linux systems come with a practically limitless potential for mastery, efficiency, and customization. In time, you’ll be able to customize your GUI to your exact specifications, automate system maintenance, and knock out common tasks with a speed you wouldn’t have thought possible on your old OS.

Embrace the new #

Switching to GNU/Linux is, in some ways, much more convenient than switching from, say, MacOS to Windows. Chiefly, most distros can be configured to run a wide range of software built for MacOS, Windows, or Android with minimal fuss. That said, I strongly encourage new users to explore FLOSS alternatives built on and for GNU/Linux. FLOSS projects often get a bad rap among users of proprietary operating systems because while a piece of software may run on these systems, the experience is rarely as good as it is on the system it was designed for: usually, GNU/Linux. FLOSS mainstays such as LibreOffice, Krita, Inkscape, Scribus, Kdenlive, and Ardour are at their best on GNU/Linux in terms of appearance, performance, and features. There are professionals of every stripe who do their work with an exclusively FLOSS toolset, from graphic design to video editing, audio production, data analytics, and more. If they can do it, so can you! Don’t let the one piece of proprietary software that just won’t work put you off of your new operating system when there’s a whole new ecosystem of incredible software to explore.

The latest development version of Scribus running on EndeavourOS. I guarantee you it doesn’t look this good on Windows.

New users of FLOSS projects often complain that the user interface or workflow of the tool they’re trying is “unintuitive.” Occasionally, these complaints hit on an area that genuinely could use some improvement, but more often, new users are simply expressing frustration that the workflow of a FLOSS project is different from what they are used to. These applications are not mere clones of their proprietary counterparts; they are projects in their own right, with unique goals, ideals, features, and workflows. Getting through a work project a little more slowly at first is not necessarily a flaw in the tool, it likely just means that you need a bit more practice. In time, you’ll come to learn and appreciate killer features that go above and beyond the capabilities of software produced by even the largest tech companies.

As a GNU/Linux user, you’re part of a community #

When you switch to GNU/Linux, you’re not a customer any more. FLOSS projects are largely built by communities of volunteers who work on what they find interesting or important for their own reasons. There’s no support line to call, no one to complain to if something breaks, and no one is losing anything by you choosing not to use their software. If you need help, or if you want to help make a FLOSS project better, you’re going to have to engage with the wider community. Every project has a forum, a Matrix or IRC channel, or some other means of connecting users and developers. If you have a problem you can’t solve on your own, these are the places to go to get help. Sign up and make a good faith effort to learn the rules and etiquette of the community, and chances are someone will be more than willing to help you find a solution out of sheer civic-mindedness.

There is likewise a great deal of pleasure and satisfaction to be gained by returning that kindness: by being an active participant in the communities you join, you’ll help others overcome the stumbling blocks you once faced and foster connections with others who share your interests. Beyond the community alone, there is something wonderful about using software that you’ve helped shape; contributing well-written bug reports, donating money, writing documentation, or testing new releases makes a direct positive impact on the tools you rely on each day. It’s one thing to use FLOSS projects for reasons of ethics, privacy, or mere utility, but seeing a page of documentation you’ve written go live for anyone in the world to learn from, seeing a bug you reported vanish after an update, a theme you created get added to a game, or experiencing your feature request given form in a release really draws you in. You’re no longer at the mercy of some large tech company that cares only about profit; you’re part of a community that cares about people, ideas, and making its software better, more efficient, more usable, and more useful for everyone.

The FLOSS mindset #

To distill what I’ve said above: Things are going to be different, and you may feel disempowered and frustrated for a while until you catch up again. The solution to this, beyond simple patience, is to embrace the fact that by using FLOSS projects, you become a part of the process of making them. Join the community with respect and humility, allow yourself to receive help and kindness from others, and you’ll begin to once again remember how it feels to earn your skills. In time, you’ll be the one offering help, you’ll dance circles around any Windows power-user, and you’ll be using tools that you’ve helped make better. Again I say: welcome. With these small shifts in your thinking, you’re going to be in for a good time.

Categories: FLOSS Project Planets

Help wanted! Port KDE Frameworks oss-fuzz builds to Qt6/KF6

Planet KDE - Mon, 2024-06-10 19:01

If you're looking for an isolated and straightforward way to start contributing to KDE, you're in the right place. At KDE, we use fuzzing via oss-fuzz to try to ensure our libraries are robust against broken inputs. Here's how you can help us in this essential task.

What is Fuzzing?

Fuzzing involves feeding "random" [1] data into our code to check its robustness against invalid or unexpected inputs. This is crucial for ensuring the security and stability of applications that process data without direct user control.

Why is Fuzzing Important?

Imagine receiving an image via email, saving it to your disk, and opening it in Dolphin. This will make Dolphin create a thumbnail of the image. If the image is corrupted and our image plugin code isn't robust, the best-case scenario is that Dolphin crashes. In the worst case, it could lead to a security breach. Hence, fuzzing helps prevent such vulnerabilities.

How You Can Help:

We need to update the build of KDE libraries in oss-fuzz to use Qt6. This task could be challenging because it involves static compilation and ensuring the correct flags are passed for all compilation units.

Steps to Contribute:

  1. Start with karchive Project

    • Download oss-fuzz and go into the karchive subfolder.
    • Update the Dockerfile to download Qt from the dev branch and KDE Frameworks from the master branch.
  2. Update build.sh Script:

    • Modify the build.sh script to compile Qt6 (this will be harder since it involves moving from qmake to cmake) and KDE Frameworks 6.
  3. Check karchive_fuzzer.cc:

    • This file might need updates, but they should be relatively easy.
    • At the top of karchive_fuzzer.cc, you'll find a comment with the three commands that oss-fuzz runs. Use these to test the image building, fuzzer building, and running processes.
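
To give a feel for step 2, here is a heavily hedged sketch of the kind of commands the KDE Frameworks half of build.sh might use. The repository URL and CMake flags below are standard, but the surrounding structure (environment variables like $QT_PREFIX, install paths, fuzzer linking) is an assumption; consult the existing oss-fuzz scripts before relying on any of it:

```shell
# Hypothetical fragment, NOT the actual oss-fuzz build.sh.
# Assumes a static Qt6 has already been built and installed to $QT_PREFIX.
git clone --depth 1 --branch master https://invent.kde.org/frameworks/karchive.git
cmake -S karchive -B karchive-build \
      -DBUILD_SHARED_LIBS=OFF \
      -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_PREFIX_PATH="$QT_PREFIX"
cmake --build karchive-build --parallel
```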

Need Help?

If you have questions or need assistance, please contact me at aacid@kde.org or ping me on Matrix at @tsdgeos:kde.org

Note:

[1] Smart fuzzing engines don't generate purely random data. They use semi-random and semi-smart techniques to efficiently find issues in the code.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #454 - Drupal API Client

Planet Drupal - Mon, 2024-06-10 14:00

Today we are talking about Drupal’s API Client, What it does, and why you might need it with guest Brian Perry. We’ll also cover Iconify Icons as our module of the week.

For show notes visit: www.talkingDrupal.com/454

Topics
  • Brian what is new with you!
  • Elevator pitch for Drupal API Client
  • What was Pitchburg like
  • Is this a normalizer for JSON API
  • Why is this JS framework agnostic
  • What is typescript and how does Drupal API Client use it
  • Looking at the quick start guide the second step is to create an instance, where do you do that
  • Who is this module for
  • Will Drupal API Client be added to core
  • What is on the roadmap
  • How does this relate to Chapter Three and Next.js
  • What is the spin up time
  • How will Starshot impact this
Resources Guests

Brian Perry - brianperry.dev brianperry

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Randy Fay - rfay

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to empower your content creators to place icons from a massive, open source library into your Drupal site? There’s a module for that.
  • Module name/project name:
  • Brief history
    • How old: created on May 22 of this year, so less than two weeks ago, by David Galeano (gxleano) of Factorial
    • Versions available: 1.0.0 which supports Drupal 9.3 or newer, right up to Drupal 11
  • Maintainership
    • Actively maintained
    • Security coverage
    • Test coverage
    • Documentation
    • Number of open issues: 2 open issues, neither of which are bugs
  • Usage stats:
    • 1 site
  • Module features and usage
    • Out of the box the module provides both a CKEditor button for placing icons, and a new field type. It even provides a new form element that can be used in custom forms, a render element you can use to programmatically put an icon into something like a custom block, and a Twig extension that can be used to place icons in templates.
    • According to the project page, the Iconify icon library includes more than 200,000 icons, though in my limited experimentation it seems like there are some duplicates between icon sets. Speaking of which, Iconify provides over 150 different icon sets, and in this module’s configuration you can specify which ones you want to be available on your site.
    • Placing an icon is as simple as using an autocomplete to search the names of the icons available, and a preview is shown for each of the matches found.
    • The field widget and the CKEditor button both give content creators options for what size and color to use for the icons. For myself I’d prefer to lock some of those options down (for example, make that part of the field’s display configuration instead), but I’m sure that could be added as part of a different widget.
    • I can think of a few Drupal sites I’ve built where this would have been really handy, so I’m interested to play around with this module some more, and see how it evolves.
Categories: FLOSS Project Planets

The Drop Times: Drupal's New Horizons: Innovations, Community, and the Starshot Initiative

Planet Drupal - Mon, 2024-06-10 12:52

Drupal's popularity is soaring because it's easy to use and flexible enough to fit any project. Businesses love it for its ability to grow with them, whether they're just starting out or already established. With each update, Drupal improves, offering more features and making it easier for everyone to build amazing websites.

One big thing to watch out for is the Starshot Initiative. It's all about making Drupal easier to use by adding some cool features from the start. Think of it as getting a new phone with all the apps you need installed. Starshot is like that for Drupal, making it simpler for everyone to get started and build something great.

The best part about Drupal is its community. People worldwide come together to share ideas, help each other, and improve Drupal. Whether you're a beginner or an expert, there's always something new to learn and someone to help you.

As Drupal grows, so does its potential. More and more businesses are choosing Drupal because it's reliable, adaptable, and just plain works. With initiatives like Starshot on the horizon, Drupal's future looks brighter.

With that said, let's now explore what updates and news we have covered last week:

In the interview with Alka Elizabeth, sub-editor at The Drop Times, Esmeralda Braad-Tijhoff, discusses her background, rise in the Drupal community, and ongoing efforts to promote inclusivity and collaboration within the tech industry. She shares insights into her roles with the Stichting Drupal Nederland and the Network of European Drupal Associations (NEDA), reflecting on her achievements and vision for the future of open-source solutions in public sector projects.

I had the opportunity to connect with some of the speakers of the upcoming DrupalJam to gain their insights on their sessions. Frederik Wouters sheds light on integrating RAG and OpenAI technologies within organizational infrastructures, emphasizing the significance of data quality and privacy considerations. Mikko Hämäläinen delves into the complexities of Digital Experience Platforms (DXP), elucidating their role in addressing customer expectations and internal silos in customer management. Finally, Mathias Bolt Lesniak advocates for the importance of open-source software in governmental policies, urging society to rediscover its benefits and advocate for its adoption.

Additionally, PHPCamp took place on 8th June 2024. I spoke with the organizers to provide insights into what attendees could expect. Amit Kumar Singh, a key organizer, shared his thoughts on why PHPCamp holds significance in the tech community.

Recently, Jay Callicott, an experienced Drupal Architect, introduced a new tool: DrupalX. DrupalX aims to simplify the process of building enterprise-level websites using Drupal. The Drop Times approached him for a comment, where he highlighted DrupalX's focus on improving the Drupal development experience, particularly for enterprise projects. Jay anticipates DrupalX will be beneficial in addressing common challenges developers face. Click here to read more about DrupalX.

The Drupal Starshot Initiative is picking up speed, and if you're interested in getting involved, now's the perfect time! The complete session calendar is now accessible, featuring a range of interactive Zoom calls. These sessions are tailored to offer updates and guidance on how you can contribute to this thrilling initiative.

After its unveiling at DrupalCon Portland 2024, the initial Drupal Starshot session, spearheaded by Dries Buytaert on May 31, 2024, attracted over 200 participants. The session discussed key topics such as community involvement, funding strategies, and governance plans. These discussions underscored the project's mission to streamline Drupal site construction through improved features and accessibility.

The Drupal Starshot project has introduced its new leadership team headed by Dries Buytaert. This team comprises key figures such as Technical Lead Tim Plunkett, User Experience Lead Cristina Chumillas, Product Owner Pamela Barone, and Contribution Coordinator Gábor Hojtsy. Together, they are committed to elevating the Drupal platform by prioritizing product vision, technical excellence, and user experience.

The Drupal Association has invited leaders of Local Associations from Asia to the inaugural Local Association Leaders meet-up on June 11, 2024. This initiative aims to strengthen the success of Drupal Local Associations by directly involving community leaders committed to advancing the Drupal project within their respective regions.

To celebrate Pride Month 2024, the Drupal Association is highlighting international LGBTQ+ organizations and donating proceeds from Pride-themed apparel sales. Community members can vote for the recipient organization, and they're encouraged to share their Pride stories for potential social media features.

The Drop Times has officially become a media partner for Drupaljam 2024, set to take place on June 12, 2024, at Fabrique Utrecht. We'll offer extensive coverage of all the latest updates throughout the event. But that's not all! The Drop Times has also been named the official Media Partner for DrupalCamp Spain 2024. This event is a cornerstone gathering for the Drupal community and is scheduled to be held from October 24 to 26 at the Hotel DeLoix Aqua Center. Stay tuned for comprehensive coverage of both Drupaljam 2024 and DrupalCamp Spain 2024!

Twin Cities Drupal Camp has opened the Call for Submissions for its upcoming event on September 12-13, 2024. With rolling session acceptance, early submissions have a higher chance of selection. The event will feature general sessions lasting 45 minutes, allowing for interactive audience Q&A sessions.

Registration for DrupalCamp Asheville 2024 is now open. The camp, which will take place from July 12 to 14, 2024, offers an exciting lineup of events. The camp includes three days of training, an unconference, various sessions, and community activities, all set against the scenic backdrop of Asheville. 

In other news, MidCamp will return with new dates in 2025, from May 20 to 22, at the DePaul University Student Center in Chicago. This change aligns with an earlier DrupalCon, providing the community a great opportunity to gather in the spring. Key events leading up to MidCamp include opening the call for speakers in the fall of 2024, with final selections and the schedule announcement expected in late January or early February 2025.

It summarises the week's major updates, though we couldn't include every story in this selection. Until next week... 

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Kazima Abbas
Sub-editor, The Drop Times

Categories: FLOSS Project Planets

Real Python: Python News: What's New From May 2024

Planet Python - Mon, 2024-06-10 10:00

May was packed with exciting updates and events in the Python community. This month saw the release of the first beta version of Python 3.13, the conclusion of PyCon US 2024, and the announcement of the keynote speakers for EuroPython 2024. Additionally, PEP 649 has been delayed until the Python 3.14 release, and the Python Software Foundation published its 2023 Annual Impact Report.

Get ready to explore the recent highlights!


The First Beta Version of Python 3.13 Released

After nearly a year of continuous development, the first beta release of Python 3.13 was made available to the general public. It marks a significant milestone in Python’s annual release cycle, officially kicking off the beta testing phase and introducing a freeze on new features. Beyond this point, Python’s core developers will shift their focus to only identifying and fixing bugs, enhancing security, and improving the interpreter’s performance.

While it’s still months before the final release planned for October 2024, as indicated by the Python 3.13 release schedule, third-party library maintainers are strongly encouraged to test their packages with this new Python version. The goal of early beta testing is to ensure compatibility and address any issues that may arise so that users can expect a smoother transition when Python 3.13 gets officially released later this year.

Although you shouldn’t use a beta version in any of your projects, especially in production environments, you can go ahead and try out the new version today. To check out Python’s latest features, you must install Python 3.13.0b1 using one of several approaches.

Note: If you’d like to share your feedback or file a bug against a pre-release development version, then open an issue on Python’s GitHub repository.

The quickest and arguably the most straightforward way to manage multiple Python versions alongside your system-wide interpreter is to use a tool like pyenv in the terminal:

$ pyenv install 3.13.0b1
$ pyenv shell 3.13.0b1
$ python
Python 3.13.0b1 (main, May 15 2024, 10:41:55) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

The first command downloads the first beta release of Python 3.13 onto your computer, while the second temporarily sets the path to the python executable in your current shell session. As a result, the python command points to the specified Python interpreter.
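With the beta activated, you can double-check from within Python itself which interpreter the python command now resolves to:

```python
import sys

# Show the full version tuple; on the first 3.13 beta this is
# (3, 13, 0, 'beta', 1).
print(sys.version_info)

# A simple guard against accidentally running a pre-release build
# somewhere it shouldn't be used.
if sys.version_info.releaselevel != "final":
    print("Warning: this is a pre-release interpreter")
```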

Alternatively, you can use a Python installer, which you’ll find at the bottom of the downloads page, or run Python in an isolated Docker container to keep it completely separate from your operating system. However, for ultimate control, you can try building the interpreter from source code based on the instructions in the README file. This method will let you experiment with more advanced features, like turning off the GIL.

Unlike previous Python releases, which introduced a host of tangible syntactical features that you could get your hands on, this one mainly emphasizes internal optimizations and cleanup. That said, according to the official release summary document, there are a few notable new features that will be immediately visible to most Python programmers:

Some of these features don’t work on Windows at the moment because they rely on Unix-specific libraries, so you won’t see any difference unless you’re a macOS or Linux user. The good news is that Windows support is coming in the second beta release, which will arrive soon, thanks to Real Python team member Anthony Shaw.

Read the full article at https://realpython.com/python-news-may-2024/ »


Categories: FLOSS Project Planets

LN Webworks: Drupal Translation Module: How to Create Multilingual Drupal Websites

Planet Drupal - Mon, 2024-06-10 08:27

It's a familiar experience: you open a website and quickly lose interest because it offers no language preferences. Conversely, you are far more likely to keep browsing a site whose interface you can navigate easily in your preferred language, regardless of where you live.

According to some reports, multilingual websites can reach 75% more internet users and improve user experience, as 60% of global consumers prefer browsing in their native language. Businesses with such websites see a 70% average increase in conversion rates. That's why creating a multilingual website is a sound strategy for boosting traffic and supporting your business goals.

How?

Categories: FLOSS Project Planets

Robin Wilson: Introducing offline_folium

Planet Python - Mon, 2024-06-10 05:35

Another new-ish package that I’ve never got around to writing about on my blog is offline_folium. It has a somewhat niche use-case, but it seems like a few people have found it useful.

In brief, it allows you to use the folium package for creating interactive maps from Python, but without an internet connection. Folium is built on top of the Leaflet web mapping library – and it loads all the relevant JS and CSS files directly from CDNs. This means that if you try and use folium without an internet connection it just doesn’t work. Offline_folium works around this by downloading the files when you do have an internet connection, and then re-writing the links to the files to point to the offline versions.
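The underlying technique can be sketched with nothing but the standard library: fetch the CDN assets once while online, then rewrite the links in generated HTML to point at the local copies. (The file names and URLs below are illustrative; they are not the exact assets offline_folium manages.)

```python
import os
import urllib.request

# Illustrative CDN assets - not the exact list offline_folium downloads.
ASSETS = {
    "leaflet.js": "https://cdn.jsdelivr.net/npm/leaflet@1.9.3/dist/leaflet.js",
    "leaflet.css": "https://cdn.jsdelivr.net/npm/leaflet@1.9.3/dist/leaflet.css",
}

def download_assets(dest_dir: str) -> None:
    """Fetch each CDN file once, while an internet connection is available."""
    os.makedirs(dest_dir, exist_ok=True)
    for name, url in ASSETS.items():
        urllib.request.urlretrieve(url, os.path.join(dest_dir, name))

def rewrite_links(html: str, dest_dir: str) -> str:
    """Point CDN URLs in generated HTML at the local copies instead."""
    for name, url in ASSETS.items():
        html = html.replace(url, os.path.join(dest_dir, name))
    return html
```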

I originally created this package when doing freelance work on a project for the UK Navy – they wanted to have interactive maps on an air-gapped computer, so I built a system that would allow this (in this case, the JS/CSS file download step was run as part of building the ‘app’ we produced). I should note at this point that without an internet connection the default OpenStreetMap background mapping will not work. In some situations that would be a significant problem – but we were using other background mapping, coastline vector files and so on, so it wasn’t a problem for us.

So, how do you use this package?

First, install it using pip install offline_folium. Then make sure you have an internet connection and run python -m offline_folium – this will download the JS/CSS files and store them in a sensible place (chosen automatically). Now you’re all set up.

Once you have no internet connection and want to use folium, you must import it in a slightly different way – first import offline from offline_folium, and then import folium, like this:

from offline_folium import offline
import folium

Now you can use folium as usual, for example by creating a simple map object:

m = folium.Map()

So, that’s pretty-much it. People are actively submitting pull requests at the moment, so hopefully the functionality will expand to work with various folium plugins too.

For more information (or to submit a PR yourself), see the Github repo.

Categories: FLOSS Project Planets

Python Software Foundation: It’s time to make nominations for the PSF Board Election!

Planet Python - Mon, 2024-06-10 05:00

This year’s Board Election Nomination period opens tomorrow and closes on June 25th. Who runs for the board? People who care about the Python community, who want to see it flourish and grow, and also have a few hours a month to attend regular meetings, serve on committees, participate in conversations, and promote the Python community. Check out our Life as Python Software Foundation Director video to learn more about what being a part of the PSF Board entails. We also invite you to review our Annual Impact Report for 2023 to learn more about the PSF mission and what we do.

Current Board members want to share what being on the Board is like and are making themselves available to answer all your questions about responsibilities, activities, and time commitments via online chat. Please join us on the PSF Discord for the Board Election Office Hours (June 11th, 4-5 PM UTC and June 18th, 12-1 PM UTC)  to talk with us about running for the PSF Board. 

Board Election Timeline

Nominations open: Tuesday, June 11th, 2:00 pm UTC
Nominations close: Tuesday, June 25th, 2:00 pm UTC
Voter application cut-off date: Tuesday, June 25th, 2:00 pm UTC
Announce candidates: Thursday, June 27th
Voting start date: Tuesday, July 2nd, 2:00 pm UTC
Voting end date: Tuesday, July 16th, 2:00 pm UTC 

Not sure what UTC is for you locally? Check here!

Nomination details

You can nominate yourself or someone else. We encourage you to reach out to people before you nominate them to ensure they are enthusiastic about the potential of joining the Board. Nominations open on Tuesday, June 11th, 2:00 PM UTC, and end on June 25th, 2:00 PM UTC.

Please send nominations and questions regarding nominations to psf-board-nominations@pyfound.org. Include the name, email address, and an endorsement of the nominee's candidacy for the PSF Board. To get an idea of what a nomination looks like, check out the Nominees for 2023 PSF Board page. After the nomination period closes, we will request a statement and other relevant information from the nominees to publish for voter review.

Voting Reminder!

Every PSF Voting Member (Supporting, Managing, Contributing, and Fellow) needs to affirm their membership to vote in this year’s election. You should have received an email from "psf@psfmember.org <Python Software Foundation>" with subject "[Action Required] Affirm your PSF Membership voting status" that contains information on how to affirm your voting status.

You can see your membership record and status on your PSF Member User Information page. If you are a voting-eligible member and do not already have a login, please create an account on psfmembership.org first and then email psf-donations@python.org so we can link your membership to your account.

Categories: FLOSS Project Planets

Tryton News: Security Release for issue #92

Planet Python - Mon, 2024-06-10 04:00

Ashish Kunwar has found that python-sql accepts any string in the offset or limit parameters when Python is run with -O, which makes any system exposing those parameters vulnerable to an SQL injection attack.

Impact

CVSS v3.0 Base Score: 9.1

  • Attack Vector: Network
  • Attack Complexity: Low
  • Privileges Required: Low
  • User Interaction: None
  • Scope: Changed
  • Confidentiality: High
  • Integrity: Low
  • Availability: Low
Workaround

Do not use the -O switch or PYTHONOPTIMIZE environment variable when executing python.
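The underlying pitfall is easy to reproduce: assert statements are stripped when Python runs with -O or PYTHONOPTIMIZE, so validation that relies on them silently disappears. The sketch below illustrates the pattern; it is not python-sql's actual code:

```python
def limit_clause(limit):
    # This check vanishes under "python -O", so a malicious string
    # such as "1; DROP TABLE users" would flow straight into the SQL.
    assert isinstance(limit, int), "limit must be an integer"
    return f" LIMIT {limit}"

def safe_limit_clause(limit):
    # Explicit validation survives -O and PYTHONOPTIMIZE.
    if not isinstance(limit, int):
        raise ValueError("limit must be an integer")
    return f" LIMIT {limit}"
```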

Resolution

All affected users should upgrade python-sql to the latest version.

Affected versions: <= 1.5.0
Non affected versions: >= 1.5.1

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/python-sql with the confidential checkbox checked.

1 post - 1 participant

Read full topic

Categories: FLOSS Project Planets

Zato Blog: HL7 FHIR Integrations in Python

Planet Python - Mon, 2024-06-10 04:00
HL7 FHIR Integrations in Python 2024-06-10, by Dariusz Suchojad

HL7 FHIR, pronounced "fire", is a data model and message transfer protocol designed to facilitate the exchange of information among systems used in health care settings.

In such environments, a FHIR server will assume the role of a central repository of health records with other systems integrating with it, potentially in a hub-and-spoke fashion, thus letting the FHIR server become a unified and consistent source of data that would otherwise stay confined to a silo of each individual health information system.

While FHIR is the way forward, the current reality of health care systems is that much of the useful and actionable information is distributed and scattered among many individual data sources - paper-based directories, applications or data bases belonging to the same or different enterprises - and that directly hampers the progress towards delivering good health care. Anyone witnessing health providers copy-and-pasting the same information from one application to another, not having access to the already existing data, not to mention people not having an easy way to access their own data about themselves either, can understand what the lack of interoperability looks like externally.

The challenges that integrators face are two-fold. On the one hand, the already existing systems, including software as well as medical appliances, were often not, or are still not being, designed for the contemporary inter-connected world. On the other hand, FHIR in itself is a relatively new technology which means that it is not straightforward to re-use the existing skills and competencies.

Zato is an open-source platform that makes it possible to integrate systems with FHIR using Python. Specifically, its support for FHIR enables quick on-boarding of integrators who may be new to health care interoperability, who are coming to FHIR with previous experience or interest in web development technologies, and who need an easy way to get started with and to navigate the complex landscape of health care integrations.

Connecting to FHIR servers

Outgoing FHIR connections are what allows Python-based services to communicate with FHIR servers. Throughout the rest of the chapter, the following definition will be used. It connects to a live, publicly available FHIR server.

Filling out the form below will suffice; there is no need for any server restarts. This principle (that restarts are not needed) applies throughout the platform: whenever you change any piece of configuration, it will be propagated automatically as necessary.

  • Name: FHIR.Sample
  • Address: https://simplifier.net/zato
  • Security: No security definition (we will talk about security later)
  • TLS CA Certs: Default bundle

Retrieving data from FHIR servers

In Python code, you obtain client connections to FHIR servers through self.out.hl7.fhir objects, as in the example below which first refers to the server by its name and then looks up all the patients in the server.

The structure of the Patient resource that we expect to receive can be found here.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class FHIService1(Service):
    name = 'demo.fhir.1'

    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'FHIR.Sample'

        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # This is how we can refer to patients
            patients = client.resources('Patient')

            # Get all active patients, sorted by their birth date
            result = patients.sort('active', '-birthdate')

            # Log the result that we received
            for elem in result:
                self.logger.info('Received -> %s', elem['name'])

Invoking the service will log the expected data:

INFO - Received -> [{'use': 'official', 'family': 'Chalmers', 'given': ['Peter', 'James']}]

For comparison, this is what the FHIR server displays in its frontend: the information is the same.

Storing data in FHIR servers

To save information in a FHIR server, create the required resources and call .save to permanently store the data in the server. Resources can be saved either individually (as in the example below) or as a bundle.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class CommandsService(Service):
    name = 'demo.fhir.2'

    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'FHIR.Sample'

        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # First, create a new patient
            patient = client.resource('Patient')

            # Save the patient in the FHIR server
            patient.save()

            # Create a new appointment object
            appointment = client.resource('Appointment')

            # Who will attend it
            participant = {
                'actor': patient,
                'status': 'accepted',
            }

            # Fill out the information about the appointment
            appointment.status = 'booked'
            appointment.participant = [participant]
            appointment.start = '2022-11-11T11:11:11.111+00:00'
            appointment.end = '2022-12-22T22:22:22.222+00:00'

            # Save the appointment in the FHIR server
            appointment.save()

Learning what FHIR resources to use

The "R" in FHIR stands for "Resources" and the sample code above uses resources such as Patient or Appointment, but how does one learn what other resources exist and what they look like? In other words, how does one learn the underlying data model?

First, you need to get familiar with the spec itself which, in addition to textual information, offers visualizations of the data model. For instance, here is the description of the Observation object, including details such as all the attributes an Observation is composed of as well as their multiplicities.

Secondly, do spend time with FHIR servers such as Simplifier. Use Zato services to create test resources, look them up and compare the results with what the spec says. There is no substitute for experimentation when learning a new data model.

FHIR security

Outgoing FHIR connections can be secured in several ways, depending on what a given FHIR server requires:

  • With Basic Auth definitions
  • With OAuth definitions
  • With SSL/TLS. If the server is not a public one (e.g. it is in a private network with a private IP address), you may need to upload the server's certificate to Zato first if you plan to use SSL/TLS because, without it, the server's certificate may be rejected.
MLLP, HL7 v2 and v3

While FHIR is what new deployments use, it is worth adding that other HL7 versions are still frequently seen in integrations:

  • Version 2, using its own MLLP protocol
  • Version 3, using XML

Both of them can be used in Zato services, in both directions. For instance, it is possible to both receive HL7 v2 messages as well as to send them to external applications. It is also possible to send v2 messages using REST in addition to MLLP.
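For context, MLLP is a very thin framing layer around an HL7 v2 payload: a start byte (0x0B), the message, then 0x1C 0x0D. A minimal sketch of the framing, independent of Zato's own implementation:

```python
START = b"\x0b"    # vertical tab marks the start of a frame
END = b"\x1c\x0d"  # file separator + carriage return mark the end

def mllp_wrap(payload: bytes) -> bytes:
    """Frame an HL7 v2 message for transmission over MLLP."""
    return START + payload + END

def mllp_unwrap(frame: bytes) -> bytes:
    """Extract the HL7 v2 payload from an MLLP frame."""
    if not (frame.startswith(START) and frame.endswith(END)):
        raise ValueError("not a valid MLLP frame")
    return frame[len(START):-len(END)]
```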

More blog posts
Categories: FLOSS Project Planets

The Drop Times: DrupalCollab: Drupal Community in the Largest 500 Cities in the World

Planet Drupal - Mon, 2024-06-10 02:12
The Drop Times conducted an in-depth analysis of Drupal users across the world's largest cities, focusing on locations with populations over one million. Using LinkedIn data, this study identifies key urban centers with significant Drupal communities, highlighting 81 cities with more than 500 "Drupal People." This analysis aims to support local community growth and enhance global Drupal engagement. Discover the cities where Drupal is thriving and learn how to get involved in organizing and promoting local events.
Categories: FLOSS Project Planets

The Drop Times: On Using LinkedIn to Analyze the Size of the Drupal Community

Planet Drupal - Mon, 2024-06-10 01:49
The Drop Times aims to enhance and track the global growth of the Drupal community. By utilizing LinkedIn's public search capabilities, we can estimate the geographical spread of Drupal users. Despite certain limitations, this method provides a practical approach to gauge community growth and plan targeted events. Learn how we navigate data challenges and leverage LinkedIn for meaningful insights.
Categories: FLOSS Project Planets

Week 2 recap - Aseprite's pixel perfect

Planet KDE - Sun, 2024-06-09 23:59
Hi, it's week 2 and my last scheduled week for research. I spent this week looking over more of Krita's code base and pixel-perfect algorithm. There was a large focus on looking at examples of how pixel-perfect is achieved (specifically in Aseprite)....
Categories: FLOSS Project Planets

New Human Interface Guidelines

Planet KDE - Sun, 2024-06-09 18:38

Today I’d like to share a new set of Human Interface Guidelines (HIG) for KDE’s software that I’ve written, replacing the old one. This work was done over the past several months in consultation with many KDE designers and developers; the merge request says 42 people were CCd, and almost 500 comments were posted during the 2+ month review process.

You can read the document at https://develop.kde.org/hig.

Wait, why does anyone need this at all?

Strictly speaking, we don’t need a HIG. Developers with an excellent eye for design can usually produce good results even without one. But for most of us, it’s useful to have guidelines to follow so you don’t have to think about the design side too much. Even for design-oriented developers, it’s useful to be able to have a quick reference for common patterns and rules.

And having a HIG is a good idea for any organization that wants its software to share a similar look-and-feel and mode of operation, or any individual developer who wants their software to fit in well with a specific target platform (ours, in this case). When software fits into the visual and functional conventions of the platform with which it wants to integrate most closely, people already familiar with that platform learn to use it faster and like using it more. It feels at home to them. Comfortable, familiar, appealing.

Having and following a HIG makes all of this possible. And besides, we already had one anyway! But it needed major work.

Ok, so what was wrong with the old one?

To be honest, it wasn’t very good. I say this as a contributor to the old one! But it suffered from multiple problems that demanded a total rewrite:

  • It was hard to find anything because the basic structure was confused, with excessive segmentation.
  • Content was severely outdated and reflected design paradigms that KDE developers haven’t used in years, and that the industry in general has moved away from. It was also missing a lot of information relevant to how we write apps today.
  • Far too wordy; hard to find the actionable recommendations amid all the filler text!
  • Full of old mockups and screenshots that weren’t even needed to illustrate the point, and which would become out of date again quickly if we took the time to update them.
  • Many basic style errors: misspellings, incorrect English grammar, awkwardly phrased sentences, etc.

For these reasons, very few KDE developers or designers still paid any attention to the old HIG, apart from a small number of pages that still had reasonably high information density. And for the same reason, the HIG got very few contributions. I think when something is in a sufficiently poor state, it can even discourage people from contributing, because any small improvement seems like a raindrop in the ocean of urgent need. It's demoralizing.

How is this new one any better?

The new one was informed by research into how KDE apps look and work today. Some inspiration for structure and organization was taken from the ElementaryOS, GNOME, Apple, and Google HIGs — but not content! The content is all KDE.

The new HIG had the following design goals:

  • Make 100% of the content actionable. No filler text, no rambling philosophy, no redundancy — just actionable recommendations for how to design your app. Short and sweet, and to the point!
  • Reflect how KDE designs software today. For example we’ve seen a huge growth in new Kirigami-based apps using QtQuick as their UI technology (including some older KDE apps getting their UIs ported to Kirigami), and Kirigami is the UI platform that’s improving most quickly in KDE. So I made the decision to focus the recommendations on Kirigami as a platform. There’s still some content about QtWidgets, but it’s a Kirigami-focused document.
  • Have flat navigation, so it’s easy to find everything. No deeply nested sub-pages and awkward categorization that makes you wonder why a page is in this category and not that one.
  • Get the new content into good enough shape that people feel comfortable contributing. Hopefully people will start to submit tweaks, improvements, bug-fixes, and maintenance!
Is it 100% finished?

No, absolutely not! The HIG is intended to be a living document that evolves over time to better describe KDE’s design goals and software development paradigms.

Like any rewrite, there are bound to be rough edges and omissions compared to the old version. Maybe I missed a piece of useful information in the old HIG that had been buried somewhere but retained some value. Maybe there's low-hanging fruit for improvement. Help out by contributing! It's just some text with markdown styling; contributing to the HIG is waaaaaaay easier than contributing code.

In particular, there are some known omissions and limitations that you can help with:

  • It needs more images. Multiple pages have embedded TODO comments where it would be nice to add a relevant image to illustrate a point.
  • There’s nothing at all about visual style in general, simply delegating those decisions to the system’s active theme. To a certain extent this is intentional because we support visual theming, but it would also be good to add content about KDE’s default Breeze theme. I deliberately omitted this for now because a bunch of KDE’s designers are working on a fancy new style, and developers are working on a whole new theming engine to apply it. So the world around the HIG is in a state of flux. But if anyone wanted to add more about the current Breeze style anyway, that would be nice. Just know that it may be replaced once the new style is released.
  • …With one exception: the entire section on Breeze icon design. This was kept from the old HIG content because it remains useful. Still, the presentation could use an overhaul to improve information density and collapse needless sub-pages into one parent page.
  • The recommendation to use Kirigami is awkward for powerful apps that use features not currently available in Kirigami-based apps, such as dockable sidebars/panes, customizable toolbars, customizable keyboard shortcuts, and more. If you’re a developer, help add these features!
Ok cool, how do I contribute?

That’s great, it’s super duper appreciated! See these two links to learn how to contribute changes:

If that’s too scary, we can help you get set up! Contact folks and we’ll see what we can do.

Still too scary? Then you can donate to KDE. Our budget is tiny, so your money genuinely does have an impact!

Categories: FLOSS Project Planets

Release of KDE Stopmotion 0.8.7

Planet KDE - Sun, 2024-06-09 17:35

Today marks the release of  KDE Stopmotion 0.8.7!

About Stopmotion

Stopmotion is a Free Open Source application to create stop-motion animations. It helps you capture and edit the frames of your animation and export them as a single file.

It captures directly from webcams, MiniDV cameras, and DSLR cameras, and offers onion-skinning, image import from disk, and time-lapse photography. Stopmotion supports multiple scenes, frame editing, a basic sound track, animation playback at different frame rates, and GIMP integration for image editing. Movies can be exported to a file and to Cinelerra frame lists.

Technically, it is a C++ / Qt application with optional dependencies to camera capture libraries.

Changes in release 0.8.7

This release comes with no new features, but improvements to the project itself.

Changes

  • The project has been officially renamed to KDE Stopmotion. The former name Linux Stopmotion is no longer used.
  • Support for qmake has been removed. Use CMake instead.

Features

  • Port serialization to libarchive. libtar is abandoned. (thanks to Bastian Germann)

Bugfixes

  • The .sto files lack the tar trailer. (#16, thanks to Bastian Germann for providing a fix)

Improvements

  • Use pkg-config to find dependencies vorbisfile and xml2 (thanks to Barak Pearlmutter)
  • Remove code that relies on deprecations in Qt 5; this is a preparation to move to Qt 6.
Future plans
  • Transition from Qt 5 to Qt 6. I am stuck with my port because QAudioDeviceInfo was dropped in Qt 6. I need some help porting Stopmotion to the new way of handling audio with Qt 6 / Qt Multimedia.
  • We should integrate better with KDE's tech stack: internationalization, using KDE libraries, and updating and reformatting the documentation.
Get involved!

If you are interested, give Stopmotion a try. Reach out to our mailing list stopmotion@kde.org or have a look into our project. Share your ideas or get involved!

Categories: FLOSS Project Planets

#! code: Drupal 10: Testing Migration Process Plugins

Planet Drupal - Sun, 2024-06-09 13:47

Drupal's migration system allows the use of a number of different plugins to perform source, processing, and destination functions. 

Process plugins are responsible for copying and sometimes manipulating data into the destination. There are a number of different process plugins that allow you to get data in different ways and and apply it to your destination fields.

Both the core Migrate module and the excellent Migrate Plus module contain a number of different process plugins that you can use to process your data in different ways.

Out of the box, the default process plugin is the get plugin, which can be used like this in your migration scripts.

destination_field:
  plugin: get
  source: source_field

This is often shortened to the following, which has exactly the same functionality.

destination_field: source_field

Most of the time you will want to avoid creating custom plugins, but sometimes your migration requirements will necessitate their use. You might find that your source data is very messy and needs to be cleaned up before importing it into the site. Process plugins are a really good way of doing this, but it is essential that you write tests to cover every situation that you might encounter. 
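Whatever the language, the value of a process plugin comes from being a small, pure transform that is easy to test exhaustively. As a language-neutral illustration of that shape (a Python sketch, not Drupal's actual PHP plugin API):

```python
def clean_phone(value: str) -> str:
    """A tiny 'process plugin': normalize messy source phone numbers."""
    digits = "".join(ch for ch in value if ch.isdigit())
    if len(digits) != 10:
        # Surface bad source rows instead of importing garbage.
        raise ValueError(f"cannot normalize: {value!r}")
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
```

Because each input either maps to a clean value or raises, every messy case found in the source data can be pinned down by one assertion in a data-provider-style test.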

In this article we will look at two custom migrate process plugins that are built in different ways and how to test them. This will dive into some concepts around Drupal plugin management, dependency injection, as well as unit testing and data providers with PHPUnit.

First, let's look at the migration script that we will be using in this article. All of the source code for this migration example is available on GitHub.

Read more

Categories: FLOSS Project Planets

Firefox profiles and Plasma launchers (X11)

Planet KDE - Sun, 2024-06-09 13:00

I’m a heavy user of Firefox profiles. Apart from using different profiles for different activities, I also have a few extra profiles that all run in the Default activity.

This means that I need to have different icons shown in Plasma’s panel in order to be able to easily differentiate which profile a window belongs to.

Sure, I use the tasks applet which shows the window title instead of the icon-only one (I prefer usability to minimalism), but still, it isn’t enough as sometimes the active tab in a Firefox window might not have the most informative title.

Plasma seems to rely on the application name and the window class when choosing the icon it will show in the panel. Which means that, by default, all Firefox instances end up having the same icon.

Librewolf with a custom profile icon

Fortunately, Firefox allows you to specify the window class it should use through command line arguments.

firefox -P ProfileName --class WindowClassName

And, to connect a launcher to a specific window class, you just need to add the following line to the .desktop file:

StartupWMClass=WindowClassName

So, in order to have a nicely supported Firefox profile, you can create a launcher with a desktop file similar to the following:

[Desktop Entry]
Exec=firefox -P SocialSites --class FirefoxSocialSites
Icon=user-available-symbolic
StartupWMClass=FirefoxSocialSites

It also works with Firefox derivatives such as Librewolf (which can be seen in the screenshot above) and others.
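With several profiles this gets repetitive, so the launchers can be generated with a small script. The sketch below makes some assumptions: the profile names, icon names, and output directory are examples to adapt.

```python
from pathlib import Path

TEMPLATE = """[Desktop Entry]
Type=Application
Name=Firefox ({profile})
Exec=firefox -P {profile} --class Firefox{profile}
Icon={icon}
StartupWMClass=Firefox{profile}
"""

# Example profiles and icons - replace with your own.
PROFILES = {
    "SocialSites": "user-available-symbolic",
    "Work": "folder-activities-symbolic",
}

def write_launchers(dest: Path) -> None:
    """Write one .desktop launcher per profile into dest."""
    dest.mkdir(parents=True, exist_ok=True)
    for profile, icon in PROFILES.items():
        path = dest / f"firefox-{profile.lower()}.desktop"
        path.write_text(TEMPLATE.format(profile=profile, icon=icon))

# Point this at ~/.local/share/applications to install for your user.
write_launchers(Path("launchers"))
```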

You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.
Categories: FLOSS Project Planets

Ed Crewe: Software Development with Generative AI - 2024 Update

Planet Python - Sun, 2024-06-09 12:37

Why write an update?
I wrote a blog post on Software Development with Generative AI last year, which questioned the approach of the current AI software authoring assistants. I believe the bigger picture still holds true: fully utilizing AI to write software will require an entirely different approach, one that changes the job of a software developer far more radically and perhaps makes many of today's programming languages redundant.

However I also raised the issue that I found the current generative AI helpers utility questionable for seasoned developers:
"The generative AI can help students and others who are learning to code in a computer language, but can it actually improve productivity for real, full time, developers who are fluent in that language?
I think that question is currently debatable... (but it is improving rapidly) ... We may reach that point within a year or two"

Well, it hasn't been a year or two, just six months. But I believe the addition of the Chat window to CoPilot and an improvement in the accuracy of its models have already made a significant difference.

On balance I would now say that even a fluent programmer may get some benefit from its use. Given the speed of improvement, it is likely that all commercial programming will use an AI assistant within a few years.


To delay the inevitable and not embed it into your work process is like King Canute commanding the sea to retreat. There are increasing numbers of alternatives available too. However, since CoPilot is the market leader, I believe it is worth going into slightly more depth on its current state of play.

Copilot Features

The new Chat window within your IDE gives you a context-sensitive version of Copilot ChatGPT that can act as a pair programmer and code reviewer for your work.

If you have enabled auto-complete, you trigger it by writing functional comments, i.e. prompts, then tabbing out to accept the suggestions it responds with.

To get real code completion options instead of these prompts, you can type a dot (as long as your IDE is configured correctly). Since code completion has your whole codebase as context, it complements CoPilot reasonably well. But whilst code completion is always correct, CoPilot is less so: probably more like 75% accurate now, compared to its initial release level of 50%.

It takes some time to improve the quality of your prompting. An effort must be made to eradicate any nuance, assumption, implication or subtlety from your English: precise, mechanical instructions are what is required. However, its language model will have learnt common usage, so if you ask it to sort out your variables, it will understand that you mean replace all hardcoded values in the body of your code with a set of constants defined at the top, explain that this is what it thinks you mean, and give you the code that does that.
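As an illustration of that kind of request, a "sort out your variables" refactor amounts to something like the following (hand-written Python, not actual CoPilot output):

```python
# Before: hardcoded values scattered through the body
def price_with_tax_before(net: float) -> float:
    return round(net * 1.2 + 0.5, 2)

# After: the same values hoisted into named constants at the top
VAT_RATE = 1.2       # 20% VAT
HANDLING_FEE = 0.5   # flat fee added to every order
PRECISION = 2        # round to whole pennies

def price_with_tax(net: float) -> float:
    return round(net * VAT_RATE + HANDLING_FEE, PRECISION)
```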

You can ask it anything about the usage of the language you are working in, how something should be coded, alternatives to that, etc. So taking a pair programming approach, explaining to CoPilot chat what you are about to code and why as you go, can be very useful. Given that rubber duck programming is useful, having an intelligent duck that can answer back is clearly more so.

It excels as a learning tool, largely replacing Googling and Stack Overflow with an IDE-embedded search when learning new languages. But even for a language you know well, there can be details and nuances of usage you have overlooked, or changes in syntactic standards with new releases you have missed.

You can also ask it to give your file a code review, where it will list a series of suggested refactors that it judges would improve it.

Copilot Limitations

Currently, however, there are many limitations; understanding them helps you know how to use CoPilot, rather than turning it off in frustration at its failings!

The most important one is that CoPilot's context is extremely limited. There is no RAG enhancement yet, no learning from your usage. It may seem to improve with usage, but that is just you getting better at using it. It does not learn about you and your coding style as you might expect, given that even a dumb shopping site does that as standard.

It does not create a user context for you and populate it with your codebase. For a Chat response, it simply grabs the content of the currently edited file, the Chat prompt text and the language version for the session as one big query. An auto-suggestion works the same way, except the prompt text comes from the comments or doc strings on the preceding lines. It posts the lot to a fixed CoPilot LLM that is some months out of date (although apparently it has weekly updates from continuous retraining).

This total lack of context can mean the only way to get CoPilot to suggest what you actually want is to write very detailed prompts. It is often simpler to just cut and paste example code as comments into the file: "please rewrite blah like this ..." followed by the pasted example. Only if something is in the file, or in the latest Chat question, will it get posted to inform the response.

At the time of writing, CoPilot is due to at least retain and learn from the Chat window history, which will extend its context a little. But currently it only knows about the currently open file and the latest Chat message. Other providers have tools that do load the whole code base, for example Cody, plus there are open source tools to post more of your code base to ChatGPT or to an open source LLM.

As this blog post update indicates, the whole area is evolving at an extremely rapid pace.

The model it has for a language is fixed and dated. This matters less for the core language, but suppose you use a newer version of the leading 3rd party Postgres library, one that came out two years ago, while the majority of users are still on the previous release, since it is still maintained. Their syntax differs. CoPilot may only know the syntax for the old library, because that is what it was trained on, even though the later version is imported in the file and therefore in CoPilot's limited context. So any chat window or code prompts it suggests will be wrong.

When using the code review feature, I have yet to find it bring up anything useful that I didn't already know about the code, plus the suggestions can include things that are inapplicable or already applied. But I am sure it would be more useful when learning a new language.

AI prompting and commenting issue

Good practice for software teams around code commenting is that you should NOT stick in functional comments that just explain what the next few lines do. The team are developers, and they can read the code just as quickly for its basic functionality; adding lots of functional comments makes things unclear through excessive verbosity.
It is something that is only done when teaching people how to code, in example snippets. It has no place in production code.

Comments should be added to give wider context, caveats, assumptions etc. So commenting is all about explaining the Why, not the How.

Doc strings at the head of methods and packages can contain a summary of what the function does in terms of the codebase. They are more functional in orientation, but as a large-scale summary, so again they are a What, not a How.

It looks like current AI assistants may mess that up, since they need comments that are as close to pseudo code as possible. Adding information about real world issues, roadmap, the wider codebase, integration with other services... i.e. all the Why, is likely to confuse them and degrade the auto-complete.

Unfortunately, code comments are not AI prompts for generating code, and vice versa.
This suggests that you may want to write a temporary prompt as a comment to generate the code, then replace it with a proper comment once it has served its purpose.

Or otherwise introduce a separate form of hideable, prompt-marked comment that makes it clear what is for the AI and what is for the Human!

Alternatively use the chat window for code generation then paste it in.
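To make the distinction concrete, here is a hypothetical Python example of the two comment styles: the throwaway generation prompt versus the "Why" comment that should replace it.

```python
# --- During generation: a functional, pseudo-code-like prompt for the AI ---
# Create a function that takes a list of order totals, removes any negative
# values, and returns the sum rounded to 2 decimal places.
def sum_valid_orders(totals: list[float]) -> float:
    return round(sum(t for t in totals if t >= 0), 2)

# --- After generation: the prompt is replaced by a "Why" comment for humans ---
# Negative totals are refunds handled by a separate reconciliation job,
# so they must be excluded here to avoid double-counting.
def sum_valid_orders_final(totals: list[float]) -> float:
    return round(sum(t for t in totals if t >= 0), 2)
```

The code is identical in both versions; only the comment changes audience, from a How for the machine to a Why for the team.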

Copilot Translation

Translation is an area where Copilot can be very beneficial. As a non-native English speaker, you can interact with it in your own language for prompting and comments, and it will handle that, translating any comments in the file to English if asked.

Code translation is more problematic, since the whole structure of a program and its common libraries can differ between languages. But if the code is doing some very encapsulated, common process, for example just maths operations or file operations, it can extract the comments and prompts and regenerate the code in another language for you.

One can imagine that one day the only language anyone will need will be a very high level, succinct, English-like language, e.g. Python.
When you want to write in a verbose or low-level language, you just write the simpler prompts in a spoken language, using Python where it is faster to communicate explicitly than speech, since spoken languages are so unsuited to creating machine instructions.
Press a button and Copilot turns the lot into verbose C or Java code with English comments.

Categories: FLOSS Project Planets

Debian Brasil: Debian Day Brasil 2024 - call for organizers

Planet Debian - Sun, 2024-06-09 11:00

August 16th marks the anniversary of the Debian Project, and every year communities around the world organize meetups to celebrate this date.

Known as Debian Day, the event always features a significant number of Brazilian communities organizing activities in their cities on the 16th (or on the nearest Saturday).

In 2024, Debian Day will celebrate the 31st anniversary of the Debian Project, and August 16th falls on a Friday, so most communities will probably organize their activities on Saturday the 17th.

We are issuing a call for organizers for Debian Day 2024. The idea is to gather, in a Telegram group, the people interested in coordinating their local communities' activities, to exchange experiences, help newcomers, and discuss the possibility of the Debian Project helping the communities financially.

Debian Day in your city can be anything from a get-together at a pizzeria/bar/restaurant to bring people together, to a larger event with talks and workshops. There is no requirement for what the meetup must look like; it all depends on what you and your community want and are able to do.

It is possible to ask the Debian Project Leader to reimburse some expenses, for example to produce stickers, pay for the pizzas, order a cake, etc.

Come join the Debian Day BR group on Telegram and discuss these ideas: https://t.me/debian_day_br

If you are up for the challenge and will organize a Debian Day in your city, be sure to add your city with the necessary information here.

Categories: FLOSS Project Planets
