FLOSS Project Planets

Hotspot v1.5.0 released

Planet KDE - Fri, 2024-04-26 06:30

Hotspot is a standalone GUI designed to provide a user-friendly interface for analyzing performance data. It takes a perf.data file, parses and evaluates its contents, and presents the results in a visually appealing and easily understandable manner. Our goal with Hotspot is to offer a modern alternative to perf report, making performance analysis on Linux systems more intuitive and efficient.

ChangeLog for Hotspot v1.5.0

It comes packed with a wealth of code cleanups, bug fixes and new functionality. Most notably, the disassembly view has been further improved with better searching, highlighting and faster performance.

Furthermore, we reworked the authentication mechanism to allow perf record to be run directly, with elevated privileges, via pkexec, obsoleting the error-prone old mechanism (see also https://nvd.nist.gov/vuln/detail/CVE-2023-28144).

We now also fully support Qt6 and KF6, while keeping compatibility with Qt5 and KF5. The AppImage below is still built with Qt5 but it might be the last time that we do this. The next version might become Qt6 only.

Many thanks to the various contributors who help build this software, both by writing code and by reporting bugs.

To get a more detailed overview of all the changes in this new release, check out the full changelog on GitHub. More information about Hotspot can be found on its GitHub page or by watching this video.

Happy profiling everyone 🚀

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Hotspot v1.5.0 released appeared first on KDAB.

Categories: FLOSS Project Planets

gnulib @ Savannah: GNU gnulib: gnulib-tool has become much faster

GNU Planet! - Fri, 2024-04-26 06:12

If you are a developer on a package that uses GNU gnulib as part of its build system:

gnulib-tool has been known for being slow for many years. We have listened to your complaints. We have rewritten gnulib-tool in another programming language (Python). It is between 8 times and 100 times faster than the previous implementation.

Both implementations behave identically, that is, they produce the same generated files and the same output. Nothing changes in the way you use Gnulib; it's only faster.

To benefit from the new speed:

1. Make sure you have Python (version 3.7 or newer) installed on your machine.

2. Update your gnulib checkout. (For some packages, it comes as a git submodule named 'gnulib'.) Like this:

  $ git checkout master
  $ git pull

  Set the environment variable GNULIB_SRCDIR, pointing to this checkout.

  If the package is using a git submodule named 'gnulib', it is also advisable to do

  $ git commit -m 'build: Update gnulib submodule to latest.' gnulib

  (as a preparation for step 4, because the --no-git option does not work as expected in all variants of 'bootstrap').

3. Clean the built files of your package:

  $ make -k distclean


4. Regenerate the fetched and generated files of your package. Depending on the package, this may be a command such as

  $ ./bootstrap --no-git --gnulib-srcdir=$GNULIB_SRCDIR

  or

  $ export GNULIB_SRCDIR; ./autopull.sh; ./autogen.sh

  or, if no such script is available:

  $ $GNULIB_SRCDIR/gnulib-tool --update


5. Continue with

  $ ./configure
  $ make

  as usual.

Enjoy! The rewritten gnulib-tool was implemented by Dmitry Selyutin, Collin Funk, and me.

Categories: FLOSS Project Planets

Goals Sprint 2024

Planet KDE - Fri, 2024-04-26 05:15

From last Friday to Wednesday I was in Berlin to attend the combined 2024 KDE Goals sprint, which was graciously hosted by MBition. Unlike previous years, where each Goal had its own separate sprint, this year all three happened at once in the same place. This allowed attendees to switch freely between the different topics and enabled more collaboration opportunities. Let's see how that worked out for me.

Most of my time I actually spent in the context of the accessibility goal. I became part of a discussion about how QML comboboxes in general, and the Kirigami Add-ons date picker in particular, are lacking in the accessibility department. As the discussion turned to how the default representation of a standard combobox could be improved, the question was raised whether it would still be possible to do something special for customized comboboxes. This led to prototyping on the date picker, with the first approach being to forego the built-in support in QtQuick and implement the relevant interfaces manually, as one would do for QtWidgets. This was a lot of boring boilerplate code, but it proved that this option is available for very specialized use cases. The solution we came up with in the end for our use cases was to provide the required properties and roles in a proxy Item that exposes the actual controls to the accessibility tools.

On the automation front I was involved in creating two new CI checks. The first one is a REUSE lint check that only checks new files for compliance, which enables older projects to enforce coverage at least for new files. The second was an idea that came up during the sprint: we can detect untranslated strings in QML files, as these are usually assigned to text properties. While it will never catch all cases, during testing we already found some problematic cases in Plasma repositories. We also discussed some other points from the idea list, such as a cherry-pick bot like the Qt Project uses and automatically updating the fix-version field on Bugzilla, but these innocuous-looking problems have some corner cases which require more thought.

To the Sustainable Software goal I contributed the least. But together with Aleix and Joseph, I debugged why the VNC setup of the KDE Eco Lab machine did not work anymore, and we fixed it. So in the end I interacted with all three goals.

The combined sprint was a nice experience and facilitated many discussions about the Goals, but, as always, also about other KDE topics, as is unavoidable when KDE community members are put together in a room. However, while the setup enabled people to jump around the different goals, I am wary that it removes a bit of focus from each goal compared to dedicated sprints.

Thanks to MBition for hosting us, and as a reminder: your donations to KDE e.V. make sprints such as these possible.

Categories: FLOSS Project Planets

Russell Coker: Humane AI Pin

Planet Debian - Fri, 2024-04-26 04:30

I wrote a blog post The Shape of Computers [1] exploring ideas of how computers might evolve and how we can use them. One of the devices I mentioned was the Humane AI Pin, which has just been the recipient of one of the biggest roast reviews I’ve ever seen [2], good work Marques Brownlee! As an aside I was once given a product to review which didn’t work nearly as well as I think it should have worked so I sent an email to the developers saying “sorry this product failed to work well so I can’t say anything good about it” and didn’t publish a review.

One of the first things that caught my attention in the review is the note that the AI Pin doesn’t connect to your phone. I think that everything should connect to everything else as a usability feature. For security we don’t want so much connecting, and it’s quite reasonable to turn off various connections at appropriate times; the Librem5 is an example of how this can be done with hardware switches to disable Wifi etc. But to just not have connectivity is bad.

The next noteworthy thing is the external battery which also acts as a magnetic attachment from inside your shirt. So I guess it’s using wireless charging through your shirt. A magnetically attached external battery would be a great feature for a phone, you could quickly swap a discharged battery for a fresh one and keep using it. When I tried to make the PinePhonePro my daily driver [3] I gave up and charging was one of the main reasons. One thing I learned from my experiment with the PinePhonePro is that the ratio of charge time to discharge time is sometimes more important than battery life and being able to quickly swap batteries without rebooting is a way of solving that. The reviewer of the AI Pin complains later in the video about battery life which seems to be partly due to wireless charging from the detachable battery and partly due to being physically small. It seems the “phablet” form factor is the smallest viable personal computer at this time.

The review glosses over what could be regarded as the 2 worst issues of the device. It does everything via the cloud (where “the cloud” means “a computer owned by someone I probably shouldn’t trust”) and it records everything. Strange that it’s not getting the hate the Google Glass got.

The user interface based on laser projection of menus on the palm of your hand is an interesting concept. I’d rather have a Bluetooth attached tablet or something for operations that can’t be conveniently done with voice. The reviewer harshly criticises the laser projection interface later in the video, maybe technology isn’t yet adequate to implement this properly.

The first criticism of the device in the “review” part of the video is of the time taken to answer questions, especially when Internet connectivity is poor. In his demonstration, his question “who designed the Washington Monument” took 8 seconds to start being answered. I asked the Alpaca LLM the same question running on 4 cores of an E5-2696 and it took 10 seconds to start answering and then printed the words at about speaking speed. So if we had a free software based AI device for this purpose it shouldn’t be difficult to get local LLM computation with less delay than the Humane device by simply providing more compute power than 4 cores of an E5-2696v3. How does a 32 core 1.05GHz Mali G72 from 2017 (as used in the Galaxy Note 9) compare to 4 cores of a 2.3GHz Intel CPU from 2015? Passmark says that the Intel CPU can do 48GFlop with all 18 cores so 4 cores can presumably do about 10GFlop which seems less than the claimed 20-32GFlop of the Mali G72. It seems that with the right software even older Android phones could give adequate performance for a local LLM. The Alpaca model I’m testing with takes 4.2G of RAM to run which is usable in a Note 9 with 8G of RAM or a Pixel 8 Pro with 12G. A Pixel 8 Pro could have 4.2G of RAM reserved for an LLM and still have as much RAM for other purposes as my main laptop as of a few months ago. I consider the speed of Alpaca on my workstation to be acceptable but not great. If we can get FOSS phones running an LLM at that speed then I think it would be great for a first version – we can always rely on newer and faster hardware becoming available.
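
For what it is worth, here is the back-of-the-envelope scaling from the previous paragraph written out (a rough estimate only; the 48GFlop and 20-32GFlop figures are simply the ones quoted above, and the assumption that performance scales linearly with core count is mine):

# Rough linear scaling of the quoted Passmark figure; numbers from the
# paragraph above, linear-scaling assumption is mine.
xeon_total_gflops = 48      # all 18 cores of the E5-2696v3, per Passmark
xeon_cores = 18
cores_used = 4

xeon_estimate = xeon_total_gflops / xeon_cores * cores_used
print(f"4 Xeon cores: ~{xeon_estimate:.1f} GFlop/s")        # ~10.7

mali_low, mali_high = 20, 32                                # claimed Mali G72 range
print(f"Mali G72 claim: {mali_low}-{mali_high} GFlop/s")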

Marques notes that some of the problems are likely due to a desire to make it a separate powerful product in the future and that if they gave it phone connectivity at the start they would have to remove that later on. I think that the real problem is that the profit motive is incompatible with good design. They want to have a product that’s stand-alone and justifies the purchase price plus subscription, and that means not making it a “phone accessory”, while I think that the best thing for the user is to allow it to talk to a phone, a PC, a car, and anything else the user wants. He compares it to the Apple Vision Pro which has the same issue of trying to be a stand-alone computer but not being properly capable of it.

One of the benefits that Marques cites for the AI Pin is the ability to capture voice notes. Dictaphones have been around for over 100 years and very few people have bought them, not even in the 80s when they became cheap. While almost everyone can occasionally benefit from being able to make a note of an idea when it’s not convenient to write it down there are few people who need it enough to carry a separate device, not even if that device is tiny. But a phone as a general purpose computing device with microphone can easily be adapted to such things. One possibility would be to program a phone to start a voice note when the volume up and down buttons are pressed at the same time or when some other condition is met. Another possibility is to have a phone have a hotkey function that varies by what you are doing, EG if bushwalking have the hotkey be to take a photo or if on a flight have it be taking a voice note. On the Mobile Apps page on the Debian wiki I created a section for categories of apps that I think we need [4]. In that section I added the following list:

  1. Voice input for dictation
  2. Voice assistant like Google/Apple
  3. Voice output
  4. Full operation for visually impaired people

One thing I really like about the AI Pin is that it has the potential to become a really good computing and personal assistant device for visually impaired people funded by people with full vision who want to legally control a computer while driving etc. I have some concerns about the potential uses of the AI Pin while driving (as Marques stated an aim to do), but if it replaces the use of regular phones while driving it will make things less bad.

Marques concludes his video by warning against buying a product based on the promise of what it can be in future. I bought the Librem5 on exactly that promise, the difference is that I have the source and the ability to help make the promise come true. My aim is to spend thousands of dollars on test hardware and thousands of hours of development time to help make FOSS phones a product that most people can use at low price with little effort.

Another interesting review of the pin is by Mrwhostheboss [5]. One of his examples is asking the pin for advice about a chair while, without him knowing, the pin had selected a different chair in the room. He compares this to using Google’s apps on a phone and seeing which item the app has selected. He also said that he doesn’t want to make an order based on speech; he wants to review a page of information about it. I suspect that the design of the pin had too much input from people accustomed to asking a corporate travel office to find them a flight and not enough from people who look through the details of the results of flight booking services trying to save an extra $20. Some people might say “if you need to save $20 on a flight then a $24/month subscription computing service isn’t for you”, but I reject that argument. I can afford lots of computing services because I try to get the best deal on every moderately expensive thing I pay for. Another point that Mrwhostheboss makes is regarding secret SMS: you probably wouldn’t want to speak an SMS you are sending to your SO while waiting for a train. He makes it clear that changing between phone and pin while sharing resources (IE not having a separate phone number and separate data store) is a desired feature.

The most insightful point Mrwhostheboss made was when he suggested that if the pin had come out before the smartphone then things might have all gone differently, but now anything that’s developed has to be based around the expectations of phone use. This is something we need to keep in mind when developing FOSS software, there’s lots of different ways that things could be done but we need to meet the expectations of users if we want our software to be used by many people.

I previously wrote a blog post titled Considering Convergence [6] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues. I’ve written a blog post about Convergence vs Transferrence [7].

Related posts:

  1. PinePhonePro First Impression Hardware I received my PinePhone Pro [1] on Thursday, it...
  2. I Just Ordered a Nexus 6P Last year I wrote a long-term review of Android phones...
  3. Smart Phones Should Measure Charge Speed My first mobile phone lasted for days between charges. I...
Categories: FLOSS Project Planets

LN Webworks: How To Create Custom Token In Drupal: Step By Step Guide

Planet Drupal - Fri, 2024-04-26 03:53

In Drupal 10, you can create custom tokens using your custom module. Before creating custom tokens, you need to have the Drupal tokens module installed on your Drupal site. This contributed module already comes with some predefined tokens. These defined tokens can be used globally.

Steps to Create the Drupal Custom Tokens

1. Begin by creating a yourmodule.module file in your custom module directory.

2. Establish your custom token type.

 

Categories: FLOSS Project Planets

How To Use Modern QML Tooling in Practice

Planet KDE - Fri, 2024-04-26 03:37

Qt 5.15 introduced “Automatic Type Registration”. With it, a C++ class can be marked as “QML_ELEMENT” to be automatically registered to the QML engine. Qt 6 takes this to the next level and builds all of its tooling around the so-called QML Modules. Let’s talk about what this new infrastructure means to your application in practice and how to benefit from it in an existing project.

Continue reading How To Use Modern QML Tooling in Practice at basysKom GmbH.

Categories: FLOSS Project Planets

Russell Coker: Convergence vs Transference

Planet Debian - Fri, 2024-04-26 03:30

I previously wrote a blog post titled Considering Convergence [1] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues.

Currently the expected use is that if you have web pages open on Chrome on Android it’s possible to instruct Chrome on the desktop to open the same page if both instances of Chrome are signed in to the same GMail account. It’s also possible to view the Chrome history with CTRL-H, select “tabs from other devices” and load things that were loaded on other devices some time ago. This is very minimal support for moving work between devices and I think we can do better.

Firstly for web browsing the Chrome functionality is barely adequate. It requires having a heavyweight login process on all browsers that includes sharing stored passwords etc which isn’t desirable. There are many cases where moving work is desired without sharing such things, one example is using a personal device to research something for work. Also the Chrome method of sending web pages is slow and unreliable and the viewing history method gets all closed tabs when the common case is “get the currently open tabs from one browser window” without wanting the dozens of web pages that turned out not to be interesting and were closed. This could be done with browser plugins to allow functionality similar to KDE Connect for sending tabs and also the option of emailing a list of URLs or a JSON file that could be processed by a browser plugin on the receiving end. I can send email between my home and work addresses faster than the Chrome share to another device function can send a URL.

For documents we need a way of transferring files. One possibility is to go the Chromebook route and have it all stored on the web. This means that you rely on a web based document editing system and the FOSS versions are difficult to manage. Using Google Docs or Sharepoint for everything is not something I consider an acceptable option. Also for laptop use being able to run without Internet access is a good thing.

There are a range of distributed filesystems that have been used for various purposes. I don’t think any of them cater to the use case of having a phone/laptop and a desktop PC (or maybe multiple PCs) using the same files.

For a technical user it would be an option to have a script that connects to a peer system (IE another computer with the same accounts and access control decisions) and rsync a directory of working files and the shell history, and then opens a shell with the HISTFILE variable, current directory, and optionally some user environment variables set to match. But this wouldn’t be the most convenient thing even for technical users.
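
As a rough sketch of that idea (hypothetical only: the peer hostname, directory names and variable choices below are invented for illustration, and a real version would need matching accounts and access control on both machines, as noted above):

#!/usr/bin/env python3
# Hypothetical sketch: pull a working directory and shell history from a
# peer machine, then open a shell pointed at them.
import os
import subprocess

PEER = "desktop.example.org"                          # assumed peer hostname
WORKDIR = os.path.expanduser("~/work-in-progress")    # assumed working directory
HISTFILE = os.path.expanduser("~/.peer_bash_history")

# Copy the working files and the peer's shell history to this machine.
subprocess.run(["rsync", "-a", f"{PEER}:work-in-progress/", WORKDIR + "/"], check=True)
subprocess.run(["rsync", "-a", f"{PEER}:.bash_history", HISTFILE], check=True)

# Open an interactive shell in the transferred directory, with the
# transferred history and any environment variables we want to carry over.
env = dict(os.environ, HISTFILE=HISTFILE)
os.chdir(WORKDIR)
os.execvpe("bash", ["bash", "-i"], env)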

For programs that are integrated into the desktop environment it’s possible for them to be restarted on login if they were active when the user logged out. The session tracking for that has about 1/4 the functionality needed for requesting a list of open files from the application, closing the application, transferring the files, and opening it somewhere else. I think that this would be a good feature to add to the XDG setup.

The model of having programs and data attached to one computer or one network server that terminals of some sort connect to worked well when computers were big and expensive. But computers continue to get smaller and cheaper so we need to think of a document based use of computers to allow things to be easily transferred as convenient. With convenience being important so the hacks of rsync scripts that can work for technical users won’t work for most people.

Related posts:

  1. Considering Convergence What is Convergence In 2013 Kyle Rankin (at the time...
  2. Google Chrome – the Security Implications Google have announced a new web browser – Chrome [1]....
  3. Bugs in Google Chrome I’m currently running google-chrome-beta version 5.0.375.55-r47796 on Debian/Unstable. It’s the...
Categories: FLOSS Project Planets

The Drop Times: Streamlining Local Development with DDEV, Docker, and NGROK

Planet Drupal - Fri, 2024-04-26 01:33
Discover how DDEV, Docker, and NGROK can revolutionize your local development process. Our latest guide dives into the seamless integration of these powerful tools, offering you the most efficient way to set up, develop, and test your Drupal projects right from your local machine. Streamline your workflow and enhance productivity with our comprehensive insights!
Categories: FLOSS Project Planets

Debug Academy: How to create a partial date field in Drupal (i.e. Year & Month without Day)

Planet Drupal - Thu, 2024-04-25 23:54

One of Drupal's main strengths is its data modeling.

But sometimes choosing the appropriate field type comes with a form widget that isn't what we're looking for. For example, using a Date field results in the form displaying a date "widget" (form input) which includes a full date consisting of a day, month, and year, and optionally a time.

How to remove the time from a date field in Drupal

Because removing the time from date fields is such a common request, Drupal allows its removal without writing any custom code.

How to hide the time in Drupal's frontend

Fortunately, the date field has a highly configurable display on the frontend. By visiting the "Manage Display" page (or configuring the field's block, if using layout builder), you will have the option of selecting (or creating) a date format.

Follow these steps to change the date's output for your frontend:

ashrafabed Fri, 04/26/2024
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RQuantLib 0.4.22 on CRAN: Maintenance

Planet Debian - Thu, 2024-04-25 17:25

A new minor release 0.4.22 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib updates to QuantLib version 1.34 which was just released yesterday, and deprecates use of an access point / type for price/yield conversion for bonds. We also made two minor earlier changes.

Changes in RQuantLib version 0.4.22 (2024-04-25)
  • Small code cleanup removing duplicate R code

  • Small improvements to C++ compilation flags

  • Robustify internal version comparison to accommodate RC releases

  • Adjustments to two C++ files for QuantLib 1.34

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Petter Reinholdtsen: 45 orphaned Debian packages moved to git, 391 to go

Planet Debian - Thu, 2024-04-25 16:00

Nine days ago, I started migrating orphaned Debian packages with no version control system listed in debian/control of the source to git. At the time there were 438 such packages. Now there are 391, according to the UDD. In reality it is slightly less, as there is a delay between uploads and UDD updates. In the nine days since, I have thus been able to work my way through ten percent of the packages. I am starting to run out of steam, and hope someone else will also help brushing some dust off these packages. Here is a recipe for how to do it. I start by picking a random package by querying the UDD for a list of 10 random packages from the set of remaining packages:

PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
  --username=udd-mirror udd -c "select source from sources \
    where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
    OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
    OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
    order by random() limit 10;"

Next, I visit http://salsa.debian.org/debian and search for the package name, to ensure no git repository already exist. If it does, I clone it and try to get it to an uploadable state, and add the Vcs-* entries in d/control to make the repository more widely known. These packages are a minority, so I will not cover that use case here.

For packages without an existing git repository, I run the following script debian-snap-to-salsa to prepare a git repository with the existing packaging.

#!/bin/sh
#
# See also https://bugs.debian.org/804722#31

set -e

# Move to this Standards-Version.
SV_LATEST=4.7.0

PKG="$1"

if [ -z "$PKG" ]; then
    echo "usage: $0 "
    exit 1
fi

if [ -e "${PKG}-salsa" ]; then
    echo "error: ${PKG}-salsa already exist, aborting."
    exit 1
fi

if [ -z "$ALLOWFAILURE" ] ; then
    ALLOWFAILURE=false
fi

# Fetch every snapshotted source package. Manually loop until all
# transfers succeed, as 'gbp import-dscs --debsnap' do not fail on
# download failures.
until debsnap --force -v $PKG || $ALLOWFAILURE ; do sleep 1; done

mkdir ${PKG}-salsa; cd ${PKG}-salsa

git init

# Specify branches to override any debian/gbp.conf file present in the
# source package.
gbp import-dscs --debian-branch=master --upstream-branch=upstream \
    --pristine-tar ../source-$PKG/*.dsc

# Add Vcs pointing to Salsa Debian project (must be manually created
# and pushed to).
if ! grep -q ^Vcs- debian/control ; then
    awk "BEGIN { s=1 } /^\$/ { if (s==1) { print \"Vcs-Browser: https://salsa.debian.org/debian/$PKG\"; print \"Vcs-Git: https://salsa.debian.org/debian/$PKG.git\" }; s=0 } { print }" < debian/control > debian/control.new && mv debian/control.new debian/control
    git commit -m "Updated vcs in d/control to Salsa." debian/control
fi

# Tell gbp to enforce the use of pristine-tar.
inifile +inifile debian/gbp.conf +create +section DEFAULT +key pristine-tar +value True
git add debian/gbp.conf
git commit -m "Added d/gbp.conf to enforce the use of pristine-tar." debian/gbp.conf

# Update to latest Standards-Version.
SV="$(grep ^Standards-Version: debian/control|awk '{print $2}')"
if [ $SV_LATEST != $SV ]; then
    sed -i "s/\(Standards-Version: \)\(.*\)/\1$SV_LATEST/" debian/control
    git commit -m "Updated Standards-Version from $SV to $SV_LATEST." debian/control
fi

if grep -q pkg-config debian/control; then
    sed -i s/pkg-config/pkgconf/ debian/control
    git commit -m "Replaced obsolete pkg-config build dependency with pkgconf." debian/control
fi

if grep -q libncurses5-dev debian/control; then
    sed -i s/libncurses5-dev/libncurses-dev/ debian/control
    git commit -m "Replaced obsolete libncurses5-dev build dependency with libncurses-dev." debian/control
fi

Sometimes the debsnap script fails to download some of the versions. In those cases I investigate, and if I decide the failing versions will not be missed, I call it using ALLOWFAILURE=true to ignore the problem and create the git repository anyway.

With the git repository in place, I do a test build (gbp buildpackage) to ensure the build is actually working. If it does not, I pick a different package, or, if the build failure is trivial to fix, I fix it before continuing. At this stage I revisit http://salsa.debian.org/debian and create the project under this group for the package. I then follow the instructions to publish the local git repository. Here is a recent example:

git remote add origin git@salsa.debian.org:debian/perl-byacc.git
git push --set-upstream origin master upstream pristine-tar
git push --tags

With a working build, I have a look at the build rules if I want to remove some more dust. I normally try to move to debhelper compat level 13, which involves removing debian/compat and modifying debian/control to build depend on debhelper-compat (=13). I also test with 'Rules-Requires-Root: no' in debian/control and verify in debian/rules that hardening is enabled, and include all of these if the package still builds. If it fails to build with level 13, I try with 12, 11, 10 and so on until I find a level where it builds, as I do not want to spend a lot of time fixing build issues.

Sometimes, when I feel inspired, I make sure debian/copyright is converted to the machine-readable format, often by starting with 'debhelper -cc' and then cleaning up the autogenerated content until it matches reality. If I feel like it, I might also clean up non-dh-based debian/rules files to use the short-style dh build rules.

Once I have removed all the dust I care to process for the package, I run 'gbp dch' to generate a debian/changelog entry based on the commits done so far, run 'dch -r' to switch from 'UNRELEASED' to 'unstable' and get an editor to make sure the 'QA upload' marker is in place and that all long commit descriptions are wrapped into sensible lengths, run 'debcommit --release -a' to commit and tag the new debian/changelog entry, run 'debuild -S' to build a source-only package, and 'dput ../perl-byacc_2.0-10_source.changes' to do the upload. During the entire process, and many times per step, I run 'debuild' to verify the changes done still work. I also sometimes verify the set of built files using 'find debian' to see if I can spot any problems (like no file in usr/bin any more, or an empty package). I also try to fix all lintian issues reported at the end of each 'debuild' run.

If I find Debian-specific patches, I try to ensure their metadata is fairly up to date, and sometimes I even try to reach out to upstream, to make the upstream project aware of the patches. Most of my emails bounce, so the success rate is low. For projects with no Homepage entry in debian/control I try to track one down, and for packages with no debian/watch file I try to create one. But at least for some of the packages I have been unable to find a functioning upstream, and must skip both of these.

If I could handle ten percent in nine days, twenty people could complete the rest in less than five days. I use approximately twenty minutes per package, when I have twenty minutes spare time to spend. Perhaps you have twenty minutes to spare too?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Drupal Association blog: Making the Most of Your Time at DrupalCon Portland

Planet Drupal - Thu, 2024-04-25 14:00

It’s less than two weeks to DrupalCon Portland 2024, and the excitement is building! If you’re gearing up for the biggest Drupal event of the year, we’re here to help you maximize your travel experience to Portland. Let’s dive right in!

Hotel Bookings at Great Prices

You still have a chance to book your DrupalCon Portland hotel within the official hotel block. By staying within the hotel block, you'll get the best proximity to the conference center as well as the chance to run into other Drupalists on your floor! Book now:

When and where is DrupalCon’24 happening in Portland?

DrupalCon North America 2024 will be held from 6th to 9th May 2024 at the Oregon Convention Center (yes, in-person!). Located right in the heart of the city, it is a perfect hub for exploration. You'll find hotels, restaurants, and shops just around the corner. It's also super easy to get to fun stuff like entertainment and hiking. With endless possibilities, you're sure to find something that suits your fancy.

Things you should NOT miss out on in Portland

May is a delightful time to be in Portland, with spring in full bloom. Enjoy the sunny weather and mild temperatures, making it the perfect season to explore the city's vibrant outdoor scene. There are several must-visit places that capture the city's unique charm.

1. Governor Tom McCall Waterfront Park

This is the perfect place to enjoy Portland's beauty while watching the river flow by. Visitors to the park can enjoy a variety of recreational activities, from leisurely strolls and picnics to jogging and biking along the paved pathways. The park also hosts numerous events throughout the year, including festivals, concerts, and outdoor markets, adding to its vibrant atmosphere.

One of the park's highlights is the Salmon Street Springs Fountain, where children and adults alike can cool off in the refreshing water jets during the warmer months. The park also features several monuments and public art installations, adding cultural and historical significance to its landscape.


Image Source: https://www.travelportland.com/attractions/governor-tom-mccall-waterfront-park/

2. Powell's City of Books

Powell's City of Books is a literary wonderland located in downtown Portland, Oregon. As the world's largest independent bookstore, Powell's spans an entire city block and boasts multiple floors filled with books of every genre imaginable. One of Powell's most unique features is its rare book room, home to a collection of rare and out-of-print titles, first editions, and signed copies that will delight bibliophiles and collectors alike.

In addition to its vast selection of books, Powell's hosts author readings, book signings, and other literary events, fostering a sense of community among book lovers from near and far.


Image Source: https://www.travelportland.com/attractions/powells/

3. Portland Art Museum

Founded in 1892, the Portland Art Museum is the oldest art museum on the West Coast and holds a rich and diverse collection of artworks spanning various time periods, cultures, and mediums. It is located in the heart of downtown Portland. One of the museum's highlights is its extensive collection of Native American art, which celebrates the rich artistic traditions of indigenous peoples from the Pacific Northwest and beyond. 

In addition to its permanent collection, the Portland Art Museum hosts rotating exhibitions that showcase both established and emerging artists, offering visitors the opportunity to engage with cutting-edge contemporary art and explore new perspectives.


Image Source: https://www.travelportland.com/attractions/portland-art-museum/

4. Voodoo Doughnut

Voodoo Doughnut is more than just a bakery; it's a Portland icon, a symbol of creativity, and a culinary experience like no other. It was founded in 2003 by friends Kenneth Pogson and Richard Shannon and has gained international fame for its wacky doughnut creations.

Located in the heart of downtown Portland, Voodoo Doughnut draws long lines of locals and tourists eager to sample its unique offerings. Some of the must-try treats: the Voodoo Doll doughnut with its pretzel stake and raspberry filling, and the Bacon Maple Bar topped with crispy bacon strips. If this has got you drooling (like me), make sure you head to this place while you're in Portland.


Image Source: https://www.travelportland.com/attractions/voodoo-doughnut/

5. Oregon Museum of Science and Industry 

The Oregon Museum of Science and Industry (OMSI) is a beloved institution in Portland, Oregon, dedicated to inspiring curiosity and fostering a love of science through engaging exhibits, interactive displays, and educational programs. Located on the east bank of the Willamette River, OMSI's sprawling campus encompasses a variety of attractions that cater to visitors of all ages. 

OMSI's planetarium is a highlight, where visitors can explore the wonders of the night sky, learn about astronomy and astrophysics, and take virtual journeys through space. The museum also features a state-of-the-art IMAX theater, where visitors can experience immersive films on topics ranging from nature and wildlife to history and technology.


Image Source: https://www.travelportland.com/attractions/omsi/

Find more information to plan your trip here.

Categories: FLOSS Project Planets

Jonathan McDowell: Sorting out backup internet #3: failover

Planet Debian - Thu, 2024-04-25 13:38

With local recursive DNS and a 5G modem in place, the next thing was to work on some sort of automatic failover when the primary FTTP connection failed. My wife works from home too and I sometimes travel, so I wanted to make sure things didn’t require me to be around to kick them into switching the link in use.

First, let’s talk about what I didn’t do. One choice to try and ensure as seamless a failover as possible would be to get a VM somewhere out there. I’d then run Wireguard tunnels over both the FTTP + 5G links to the VM, and run some sort of routing protocol (RIP, OSPF?) over the links. Set preferences such that the FTTP is preferred, NAT v4 to the VM IP, and choose somewhere that gave me a v6 range I could just use directly.

This has the advantage that I’m actively checking link quality to the outside world, rather than just to the next hop. It also means, if the failover detection is fast enough, that existing sessions stay up rather than needing to be re-established.

The downsides are increased complexity, adding another point of potential failure (the VM + provider), the impact on connection quality (even with a decent endpoint it’s an extra hop and latency), and finally the increased cost involved.

I can cope with having to reconnect my SSH sessions in the event of a failure, and I’d rather be sure I can make full use of the FTTP connection, so I didn’t go this route. I chose to rely on local link failure detection to provide the signal for failover, and a set of policy routing on top of that to make things a bit more seamless.

Local link failure turns out to be fairly easy. My FTTP is a PPPoE configuration, so in /etc/ppp/peers/aquiss I have:

lcp-echo-interval 1
lcp-echo-failure 5
lcp-echo-adaptive

Which gives me a failover of ~ 5s if the link goes down.

I’m operating the 5G modem in “bridge” rather than “router” mode, which means I get the actual IP from the 5G network via DHCP. The DHCP lease the modem hands out is under a minute, and in the event of a network failure it only hands out a 192.168.254.x IP to talk to its web interface. As the 5G modem is the last resort path I choose not to do anything special with this, but the information is at least there if I need it.

To allow both interfaces to be up and the FTTP to be preferred I’m simply using route metrics. For the PPP configuration that’s:

defaultroute-metric 100

and for the 5G modem I have:

iface sfp.31 inet dhcp
    metric 1000
    vlan-raw-device sfp

There’s a wrinkle in that pppd will not replace an existing default route, so I’ve created /etc/ppp/ip-up.d/default-route to ensure it’s added:

#!/bin/bash

[ "$PPP_IFACE" = "pppoe-wan" ] || exit 0

# Ensure we add a default route; pppd will not do so if we have
# a lower pref route out the 5G modem
ip route add default dev pppoe-wan metric 100 || true

Additionally, in /etc/dhcp/dhclient.conf I’ve disabled asking for any server details (DNS, NTP, etc) - I have internal setups for the servers I want, and don’t want to be trying to select things over the 5G link by default.

However, what I do want is to be able to access the 5G modem web interface and explicitly route some traffic out that link (e.g. so I can add it to my smokeping tests). For that I need some source based routing.

First step, add a 5g table to /etc/iproute2/rt_tables:

16 5g

Then I ended up with the following in /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route, which is more complex than I’d like but seems to do what I want:

#!/bin/sh

case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        # Check if we've actually changed IP address
        if [ -z "$old_ip_address" ] ||
           [ "$old_ip_address" != "$new_ip_address" ] ||
           [ "$reason" = "BOUND" ] || [ "$reason" = "REBOOT" ]; then
            if [ ! -z "$old_ip_address" ]; then
                ip rule del from $old_ip_address lookup 5g
            fi
            ip rule add from $new_ip_address lookup 5g
            ip route add default dev sfp.31 table 5g || true
            ip route add 192.168.254.1 dev sfp.31 2>/dev/null || true
        fi
        ;;
    EXPIRE)
        if [ ! -z "$old_ip_address" ]; then
            ip rule del from $old_ip_address lookup 5g
        fi
        ;;
    *)
        ;;
esac

What does all that aim to do? We want to ensure traffic directed to the 5G WAN address goes out the 5G modem, so I can SSH into it even when the main link is up. So we add a rule directing traffic from that IP to hit the 5g routing table, and a default route in that table which uses the 5G link. There’s no configuration for the FTTP connection in that table, so if the 5G link is down the traffic gets dropped, which is what we want. We also configure 192.168.254.1 to go out the link to the modem, as that’s where the web interface lives.

I also have a curl callout (curl --interface sfp.31 … to ensure it goes out the 5G link) after the routes are configured to set dynamic DNS with Mythic Beasts, which helps with knowing where to connect back to. I seem to see IP address changes on the 5G link every couple of days at least.

Additionally, I have an entry in the interfaces configuration carving out the top set of the netblock my smokeping server is in:

up ip rule add from 192.0.2.224/27 lookup 5g

My smokeping /etc/smokeping/config.d/Probes file then looks like:

*** Probes ***

+ FPing
binary = /usr/bin/fping

++ FPingNormal

++ FPing5G
sourceaddress = 192.0.2.225

+ FPing6
binary = /usr/bin/fping

which allows me to use probe = FPing5G for targets to test them over the 5G link.

That mostly covers the functionality I want for a backup link. There’s one piece that isn’t quite solved, however: IPv6. That can wait for another post.

Categories: FLOSS Project Planets

Drupalize.Me: Learning Drupal with the Help of an AI Tutor

Planet Drupal - Thu, 2024-04-25 12:29

TL; DR: Use this prompt and the text from a Drupalize.Me tutorial to experiment with using generative AI as a tutor for learning Drupal.

A while ago, I wrote an article and gave a presentation about why learning Drupal is so hard. One of the key challenges I identified is the “pit of despair”. It's that point in the learning journey where you can no longer rely on the hand holding of step-by-step tutorials. You need to step out into the chasm and come up with your own unique solutions to your specific problems. That point where you know just enough to realize the breadth of what you don’t yet know. And I had said, based on input from many peers, that the quickest way through the dip is real-world experience and drawing on the expertise of others. The advice could be summed up as: if you want to learn fast, get a tutor.

It can be hard to find a mentor. As much as we would love to be able to do so, our small team at Drupalize.Me can't scale personalized individual tutoring. So I've been thinking about how you might be able to use AI to help get at least some of the benefits of tutoring.

joe Thu, 04/25/2024 - 11:29
Categories: FLOSS Project Planets

Kubuntu 24.04 LTS Noble Numbat Released

Planet KDE - Thu, 2024-04-25 12:16

The Kubuntu Team is happy to announce that Kubuntu 24.04 has been released, featuring the ‘beautiful’ KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed “Noble Numbat”, Kubuntu 24.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.8-based kernel, KDE Frameworks 5.115, KDE Plasma 5.27 and KDE Gear 23.08.

Kubuntu 24.04 with Plasma 5.27.11

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, KDevelop, Yakuake, and many, many more applications have been updated.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates and known bugs, be sure to read our release notes.

Download Kubuntu 24.04, or learn how to upgrade from 23.10 or 22.04 LTS.

Note: For upgrades from 23.10, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.

Categories: FLOSS Project Planets

Jonathan Dowland: Biosphere

Planet Debian - Thu, 2024-04-25 11:15

I've been enjoying Biosphere as the soundtrack to my recent "concentrated work" spells.

Knives by Biosphere

I remember seeing their name on playlists of yester-year: axioms, bluemars1, and (still a going concern) soma.fm's drone zone.

  1. Bluemars lives on, at echoes of bluemars
Categories: FLOSS Project Planets

Data School: How to prevent data leakage in pandas & scikit-learn ☔

Planet Python - Thu, 2024-04-25 10:51

Let's pretend you're working on a supervised Machine Learning problem using Python's scikit-learn library. Your training data is in a pandas DataFrame, and you discover missing values in a column that you were planning to use as a feature.

After considering your options, you decide to impute the missing values, which means that you're going to fill in the missing values with reasonable values.

How should you perform the imputation?

  • Option 1 is to fill in the missing values in pandas, and then pass the transformed data to scikit-learn.
  • Option 2 is to pass the original data to scikit-learn, and then perform all data transformations (including missing value imputation) within scikit-learn.

Option 1 will cause data leakage, whereas option 2 will prevent data leakage.

Here are questions you might be asking:

  • What is data leakage?
  • Why is data leakage problematic?
  • Why would data leakage result from missing value imputation in pandas?
  • How can I prevent data leakage when using pandas and scikit-learn?

Answers below! 👇

What is data leakage?

Data leakage occurs when you inadvertently include knowledge from testing data when training a Machine Learning model.

Why is data leakage problematic?

Data leakage is problematic because it will cause your model evaluation scores to be less reliable. This may lead you to make bad decisions when tuning hyperparameters, and it will lead you to overestimate how well your model will perform on new data.

It's hard to know whether data leakage will skew your evaluation scores by a negligible amount or a huge amount, so it's best to just avoid data leakage entirely.

Why would data leakage result from missing value imputation in pandas?

Your model evaluation procedure (such as cross-validation) is supposed to simulate the future, so that you can accurately estimate right now how well your model will perform on new data.

But if you impute missing values on your whole dataset in pandas and then pass your dataset to scikit-learn, your model evaluation procedure will no longer be an accurate simulation of reality. That's because the imputation values will be based on your entire dataset (meaning both the training portion and the testing portion), whereas the imputation values should just be based on the training portion.

In other words, imputation based on the entire dataset is like peeking into the future and then using what you learned from the future during model training, which is definitely not allowed.

How can we avoid this in pandas?

You might think that one way around this problem would be to split your dataset into training and testing sets and then impute missing values using pandas. (Specifically, you would need to learn the imputation value from the training set and then use it to fill in both the training and testing sets.)

That would work if you're only ever planning to use train/test split for model evaluation, but it would not work if you're planning to use cross-validation. That's because during 5-fold cross-validation (for example), the rows contained in the training set will change 5 times, and thus it's quite impractical to avoid data leakage if you use pandas for imputation while using cross-validation!
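
To make the train/test split case concrete, here is a minimal sketch (the single "age" feature and its values are made up for illustration): the leaky version computes the fill value from every row, while the split version learns it from the training rows only and reuses it for the testing rows.

import pandas as pd
from sklearn.model_selection import train_test_split

# Made-up data: one feature with missing values, plus a label column.
df = pd.DataFrame({"age": [22.0, 35.0, None, 58.0, 41.0, None, 29.0, 50.0],
                   "label": [0, 1, 0, 1, 1, 0, 0, 1]})
X, y = df[["age"]], df["label"]

# Leaky: the fill value is based on all rows, including the future testing rows.
X_leaky = X.fillna(X["age"].mean())

# Leakage-free (train/test split only): learn the fill value from the training
# rows, then reuse that same value for both portions.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
fill_value = X_train["age"].mean()
X_train_filled = X_train.fillna(fill_value)
X_test_filled = X_test.fillna(fill_value)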

How else can data leakage arise?

So far, I've only mentioned data leakage in the context of missing value imputation. But there are other transformations that if done in pandas on the full dataset will also cause data leakage.

For example, feature scaling in pandas would lead to data leakage, and even one-hot encoding (or "dummy encoding") in pandas would lead to data leakage unless there's a known, fixed set of categories.

More generally, any transformation which incorporates information about other rows when transforming a row will lead to data leakage if done in pandas.

How does scikit-learn prevent data leakage?

Now that you've learned how data transformations in pandas can cause data leakage, I'll briefly mention three ways in which scikit-learn prevents data leakage (a short sketch putting them together follows the list):

  • First, scikit-learn transformers have separate fit and transform steps, which allow you to base your data transformations on the training set only, and then apply those transformations to both the training set and the testing set.
  • Second, the fit and predict methods of a Pipeline encapsulate all calls to fit_transform and transform so that they're called at the appropriate times.
  • Third, cross_val_score splits the data prior to performing data transformations, which ensures that the transformers only learn from the temporary training sets that are created during cross-validation.
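
Here is a minimal sketch putting those three pieces together (the tiny made-up dataset, the mean-imputation strategy, and the choice of LogisticRegression are assumptions for illustration, not a prescription):

import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Pass the original data (missing values included) straight to scikit-learn.
df = pd.DataFrame({"age": [22.0, 35.0, None, 58.0, 41.0, None, 29.0, 50.0,
                           33.0, 47.0, None, 61.0],
                   "label": [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]})
X, y = df[["age"]], df["label"]

# The Pipeline fits the imputer on the training folds only, and merely
# transforms the validation fold with the already-learned fill value.
pipe = make_pipeline(SimpleImputer(strategy="mean"), LogisticRegression())

# cross_val_score splits the data before any transformation happens,
# so nothing learned from a validation fold can leak into training.
scores = cross_val_score(pipe, X, y, cv=3, scoring="accuracy")
print(scores.mean())

Whether the scores themselves are good is beside the point here; the point is that the imputation value is re-learned inside each training fold, so the evaluation stays an honest simulation of the future.
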
Conclusion

When working on a Machine Learning problem in Python, I recommend performing all of your data transformations in scikit-learn, rather than performing some of them in pandas and then passing the transformed data to scikit-learn.

Besides helping you to prevent data leakage, this enables you to tune the transformer and model hyperparameters simultaneously, which can lead to a better performing model!

One final note...

This post is an excerpt from my upcoming video course, Master Machine Learning with scikit-learn.

Join the waitlist below to get free lessons from the course and a special launch discount 👇

Categories: FLOSS Project Planets

The Drop Times: DrupalCollab: How big is the Drupal Community?

Planet Drupal - Thu, 2024-04-25 09:32
The Drop Times delves into the dynamics of the Drupal community with a detailed analysis of LinkedIn data, revealing the distribution and growth trends of Drupal professionals worldwide. This comprehensive study sheds light on regional concentrations and potential areas for community engagement.
Categories: FLOSS Project Planets

Lukas Märdian: Creating a Netplan enabled system through Debian-Installer

Planet Debian - Thu, 2024-04-25 06:19

With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.

In this write-up I’d like to run you through a list of commands for experiencing the Netplan-enabled installation process first-hand. For now, we’ll be using a custom ISO image, while waiting for the above-mentioned merge-proposal to be landed. Furthermore, as the Debian archive is going through major transitions, builds of the “unstable” branch of d-i don’t currently work. So I implemented a small backport, producing updated netcfg and netcfg-static for Bookworm, which can be used as localudebs/ during the d-i build.

Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:

$ mkdir d-i_bookworm && cd d-i_bookworm
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the custom mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes, as mentioned above.

TODO: localudebs/

$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/mini.iso
$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/linux
$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/initrd.gz

Next we’ll prepare a VM, by copying the EFI firmware files, preparing some persistent EFIVARs file, to boot from FS0:\EFI\debian\grubx64.efi, and create a virtual disk for our machine:

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 5G

Finally, let’s launch the installer using a custom preseed.cfg file, that will automatically install Netplan for us in the target system. A minimal preseed file could look like this:

# Install minimal Netplan generator binary
d-i preseed/late_command string in-target apt-get -y install netplan-generator

For this demo, we’re installing the full netplan.io package (incl. Python CLI), as the netplan-generator package was not yet split out as an independent binary in the Bookworm cycle. You can choose the preseed file from a set of different variants to test the different configurations:

We’re using the custom linux kernel and initrd.gz here to be able to pass the PRESEED_URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the normal debian-installer in its netboot/gtk form:

$ export U=https://people.ubuntu.com/~slyon/d-i/bookworm/netplan-preseed+networkd.cfg
$ qemu-system-x86_64 \
    -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
    -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
    -drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
    -device qemu-xhci -device usb-kbd -device usb-mouse \
    -vga none -device virtio-gpu-pci \
    -net nic,model=virtio -net user \
    -kernel ./linux -initrd ./initrd.gz -append "url=$U" \
    -hda ./data.qcow2 -cdrom ./mini.iso;

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.

After you confirmed your partitioning changes, the base system gets installed. I suggest not to select any additional components, like desktop environments, to speed up the process.

During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.

Done! After the installation finished you can reboot into your virgin Debian Bookworm system.

To do that, quit the current Qemu process, by pressing Ctrl+C and make sure to copy over the EFIVARS.fd file that was written by grub during the installation, so Qemu can find the new system. Then reboot into the new system, not using the mini.iso image any more:

$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
    -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
    -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
    -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
    -device qemu-xhci -device usb-kbd -device usb-mouse \
    -vga none -device virtio-gpu-pci \
    -net nic,model=virtio -net user \
    -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
    -device virtio-blk-pci,drive=disk0,bootindex=1 -serial mon:stdio

Finally, you can play around with your Netplan enabled Debian system! As you will find, /etc/network/interfaces exists but is empty, it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.

In our case we also installed the Netplan CLI, so we can play around with some of its features, like netplan status:

Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more join the discussion at Salsa:installer-team/netcfg and find us at GitHub:netplan.

Categories: FLOSS Project Planets

Mixing C++ and Rust for Fun and Profit: Part 3

Planet KDE - Thu, 2024-04-25 03:00

In the two previous posts (Part 1 and Part 2), we looked at how to build bindings between C++ and Rust from scratch. However, while building a binding generator from scratch is fun, it’s not necessarily an efficient way to integrate Rust into your C++ project. Let’s look at some existing technologies for mixing C++ and Rust that you can easily deploy today.

bindgen

bindgen is an official tool of the Rust project that can create bindings around C headers. It can also wrap C++ headers, but there are limitations to its C++ support. For example, while you can wrap classes, they won’t have their constructors or destructors automatically called. You can read more about these limitations on the bindgen C++ support page. Another quirk of bindgen is that it only allows you to call C++ from Rust. If you want to go the other way around, you have to add cbindgen to generate C headers for your Rust code.

CXX

CXX is a more powerful framework for integrating C++ and Rust. It’s used in some well-known projects, such as Chromium. It does an excellent job at integrating C++ and Rust, but it is not an actual binding generator. Instead, all of your bindings have to be manually created. You can read the tutorial to learn more about how CXX works.

autocxx

Since CXX doesn’t generate bindings itself, if you want to use it in your project, you’ll need to find a generator that wraps C++ headers with CXX bindings. autocxx is a Google project that does just that, using bindgen to generate Rust bindings around C++ headers. However, it gets better—autocxx can also create C++ bindings for Rust functions.

CXX-Qt

While CXX is one of the best C++/Rust binding generators available, it fails to address Qt users. Since Qt depends so heavily on the moc to enable features like signals and slots, it’s almost impossible to use it with a general-purpose binding generator. That’s where CXX-Qt comes in. KDAB has created the CXX-Qt crate to allow you to integrate Rust into your C++/Qt application. It works by leveraging CXX to generate most of the bindings but then adds a Qt support layer. This allows you to easily use Rust on the backend of your Qt app, whether you’re using Qt Widgets or QML. CXX-Qt is available on Github and crates.io.

If you’re interested in integrating CXX-Qt into your C++ application, let us know. To learn more about CXX-Qt, you can check out this blog.

Other options

There are some other binding generators out there that aren’t necessarily going to work well for migrating your codebase, but you may want to read about them and keep an eye on them:

In addition, there are continuing efforts to improve C++/Rust interoperability. For example, Google recently announced that they are giving $1 million to the Rust Foundation to improve interoperability.

Conclusion

In the world of programming tools and frameworks, there is never a single solution that will work for everybody. However, CXX, CXX-Qt, and autocxx seem to be the best options for anyone who wants to port their C++ codebase to Rust. Even if you aren’t looking to completely remove C++ from your codebase, these binding generators may be a good option for you to promote memory safety in critical areas of your application.

Have you successfully integrated Rust in your C++ codebase with one of these tools? Have you used a different tool or perhaps a different programming language entirely? Leave a comment and let us know. Memory-safe programming languages like Rust are here to stay, and it’s always good to see programmers change with the times.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Mixing C++ and Rust for Fun and Profit: Part 3 appeared first on KDAB.

Categories: FLOSS Project Planets
