FLOSS Project Planets
Lemberg Solutions: How to Integrate Apple Pay into Your Drupal Commerce shop?
LN Webworks: How Zapier is Helping Business Owners in Automating Their Work Process
If you often feel that you have a lot to accomplish but fall short of time in your business life, you are not alone. A majority of entrepreneurs feel this way. But, thanks to Zapier, you no longer have to worry about the scarcity of time. With Zapier, you can streamline automation for your website development tasks and focus on growing your business.
Even the most complex Learning Management System can be augmented with Zapier's robust automation platform. While an LMS can't accommodate every single use case, Zapier can often provide the best workaround by connecting the apps and services you use and creating automated workflows, known as Zaps. This way, all the repetitive tasks are taken care of, allowing you to focus your time and energy on the work that demands your undivided attention.
The Drop Times: Doing Nothing Sometimes Could be Everything
You lie down for an afternoon nap, and right above you a ceiling fan with a shiny, reflective motor catches your attention. You spend a few minutes tracing the faint reflection of the room on that shiny motor. Or, amidst busy conversations in a happening room, you get lost, drowning out the noise but staying aware enough to observe how people engage in conversation and small talk; you were not eavesdropping, simply paying attention. Or while you sit alone in a public space, say a cafe, working on your laptop, the intricately designed borders of a wooden cabinet fascinate you; you stay in thought for a good moment and eventually come back to finish the work you had planned for the day.
You can't grow if you constantly crave comfort, and you certainly cannot move ahead with quality if you are too much of a busybody. You have most certainly come across the Italian idiom "Il Dolce Far Niente", or the sweetness of doing nothing, from the movie Eat Pray Love (you may also have heard it on social media). Drifting away amidst heavy work and snapping right back can be considered balance, and balance is beautiful. A break is necessary.
No matter how nescient this all may sound, or however dull your break may be, it is as important as crossing out your to-do list. If you have been sitting in front of a laptop, maybe for 2 hours, or for some even an unbothered 4 hours, move around. Remember the 20/20/20 rule: for every 20 minutes of screen time, look at something 20 feet away for 20 seconds.
So if you have pulled a marathon of screen time, take this as a reminder to stand up from your chair or stare at the ceiling fan (or at the wall, for those who don't use ceiling fans), but only after you quickly skim through what we covered this past week.
The 2023 edition of DebConf is scheduled for September, and Four Kitchens, a digital agency, has undergone a rebranding. Acquia Engage is back in Europe, and the deadlines for Drupal Developer Days 2023 are fast approaching.
amazee.io, a ZeroOps application delivery hub, is hosting a webinar and is also hosting its first-ever amazee.io LagoonCon in Pittsburgh.
Acquia releases a DrupalCon guide and has also announced the appointment of Jennifer Griffin Smith, a technology industry leader, as its new Chief Marketing Officer (CMO).
Material announces a strategic alliance with Acquia, and The Linux Foundation introduces an Advisory Board.
That's all for this week.
Yours Sincerely,
Alethia Rose Braganza
Sub Editor, TheDropTimes
Interview - KDE Plasma Sprint 2023
The KDE Plasma Sprint 2023 took place this time in the offices of TUXEDO. The developers made progress on the way to #Plasma6.
We asked a few of the developers for a personal interview - enjoy!
#linux #opensource #opensourcesoftware #opensourcehardware #kdeplasma #plasma6 #kdeplasmasprint #kde #developer #interview #tuxedo #tuxedocomputers #office #2023 #augsburg #germany
Talk Python to Me: #415: Future of Pydantic and FastAPI
LN Webworks: How to Choose the Right Company for LMS Development?
If the decision of choosing the right Drupal Development Services provider for your LMS development has left you feeling perplexed, you are not alone. It’s natural to get surrounded by the mist of confusion when there are plenty of options available to choose from. As this critical decision will have long-term consequences for your company, it actually calls for a conscientious choice with measured steps. So, what’s the way out?
The answer is knowing against which parameters you should weigh a Learning Management System (LMS) development company and the questions you should ask them. This will help you get valuable insights about the company and make an informed decision.
7 Actionable Steps to Select the Right LMS Development Company
Sven Hoexter: GCP: Private Service Connect Forwarding Rules can not be Updated
PSA for those foolish enough to use Google Cloud and try to use private service connect: If you want to change the serviceAttachment your private service connect forwarding rule points at, you must delete the forwarding rule and create a new one. Updates are not supported. I've done that in the past via terraform, but lately encountered strange errors like this:
    Error updating ForwardingRule: googleapi: Error 400: Invalid value for field 'target.target': '<https://www.googleapis.com/compute/v1/projects/mydumbproject/regions/europe-west1/serviceAttachments/ k8s1-sa-xyz-abc>'. Unexpected resource collection 'serviceAttachments'., invalid

Worked around that with the help of terraform_data and lifecycle:

    resource "terraform_data" "replacement" {
      input = var.gcp_psc_data["target"]
    }

    resource "google_compute_forwarding_rule" "this" {
      count                 = length(var.gcp_psc_data["target"]) > 0 ? 1 : 0
      name                  = "${var.gcp_psc_name}-psc"
      region                = var.gcp_region
      project               = var.gcp_project
      target                = var.gcp_psc_data["target"]
      load_balancing_scheme = "" # need to override EXTERNAL default when target is a service attachment
      network               = var.gcp_network
      ip_address            = google_compute_address.this.id

      lifecycle {
        replace_triggered_by = [
          terraform_data.replacement
        ]
      }
    }

See also the terraform_data documentation for replace_triggered_by.
Dirk Eddelbuettel: RcppSimdJson 0.1.10 on CRAN: New Upstream
We are happy to share that the RcppSimdJson package has been updated to release 0.1.10.
RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is 'faster than CPU speed' as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.
This release updates the underlying simdjson library to version 3.1.8 (also made today). Otherwise we only made a minor edit to the README and adjusted one tweak for code coverage.
The (very short) NEWS entry for this release follows.
Changes in version 0.1.10 (2023-05-14)

- simdjson was upgraded to version 3.1.8 (Dirk in #85)
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Steinar H. Gunderson: Joining files with FFmpeg
Joining video files (back-to-back) losslessly with FFmpeg is a surprisingly cumbersome operation. You can't just, like, write all the inputs on the command line or something; you need to use a special demuxer and then write all the names in a text file and override the security for that file, which is pretty crazy.
But there's one issue I crashed into, and which random searching around didn't help with, namely this happening sometimes on switching files (and the resulting files just having no video in that area):
    [mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290238, current: 86263699; changing to 162290239. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290239, current: 86264723; changing to 162290240. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290240, current: 86265747; changing to 162290241. This may result in incorrect timestamps in the output file.

There are lots of hits about this online, most of them around different codecs and such, but the problem was surprisingly mundane: some of the segments had video in stream 0 and audio in stream 1, and some the other way round, and the concat demuxer doesn't account for this.
Simplest workaround: just remux the files first. FFmpeg will put the streams in a consistent order. (Inspired by a Stack Overflow answer that suggested remuxing to MPEG-TS in order to use the concat protocol instead of the concat demuxer.)
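For reference, the whole flow described above (remux, list file, concat demuxer with the security override) looks roughly like the following sketch; the file names are placeholders:

    # remux each segment so the stream order becomes consistent (stream copy, no re-encode)
    ffmpeg -i part1.mp4 -c copy -map 0 part1-remux.mp4
    ffmpeg -i part2.mp4 -c copy -map 0 part2-remux.mp4

    # the concat demuxer wants its inputs listed in a text file
    cat > list.txt <<EOF
    file 'part1-remux.mp4'
    file 'part2-remux.mp4'
    EOF

    # -safe 0 overrides the file name security check mentioned above
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4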
First Blog Post for GsoC 2023!
I’m Srirupa Datta, about to finish my undergraduate Electrical Engineering degree at Jadavpur University, India, in June. This year, I got selected for Google Summer of Code and will be working on improving the Bundle Creator in Krita.
My Introduction to Krita

It's been more than a year since my last blog post, where I posted monthly updates on my progress adding the Perspective Ellipse assistant tool in Krita during SoK'22. Being a painter with an interest in software development, I've been drawn to Krita ever since I started using it.
What it's all about

The primary format for sharing resources in Krita is a Resource Bundle, a compressed file containing all the resources together. It also contains some other information like metadata and a manifest so Krita can check that there are no errors in the file.
Krita's Bundle Creator allows one to create their own bundle from the resources of their choice. The project I will be working on aims to improve the user interface of the current Bundle Creator and to add the ability to edit bundles (which is currently not supported in Krita).
The new Bundle Creator

The new Bundle Creator would look like an installation wizard with four pages, which can be navigated using the Next and Back buttons as well as buttons on the left side panel.
I think the primary objective behind designing the new Bundle Creator was to organize its workflow, that is, segregate sections devoted to a particular function or job. This is what led to the idea of using a wizard, instead of simple dialogs. Hence it would have four wizard pages:
- Choose Resources
- Choose Tags
- Enter Bundle Details
- Choose Save Location
Some of the cool features you can expect in the new Bundle Creator are a grid view like the Resource Manager's to view all the resources, the ability to filter resources by name or tag before selecting, and an option to switch back from the grid view to the default list view if one wishes to stick to the previous layout.
Adding custom tags to selected resources is a feature that we wish to integrate, but it would require a redesign of the Choose Tags wizard page that has been shown below. Just to clarify, these are all mockups!
Yet another important feature would be reloading the last bundle data on startup - this is particularly useful when making a bundle for other people.
Apart from these, the new Bundle Creator would be resizable (Yaay!), and a separate menu entry called Bundle Creator would be created. We plan to move Manage Resource Libraries, Manage Resources and Bundle Creator from Menu > Settings to Menu > Resources.
And lastly, I would be working on adding the feature of editing bundles - this, however, needs further discussion and will be dealt with after my mid-term evaluations.
And of course, if you want to suggest some ideas or improvements, feel free to drop a comment on this post I created on Krita Artists Forum!
Petter Reinholdtsen: The 2023 LinuxCNC Norwegian developer gathering
The LinuxCNC project is making headway these days. A lot of patches and issues have seen activity on the project's GitHub pages recently. A few weeks ago there was a developer gathering over at the Tormach headquarters in Wisconsin, and now we are planning a new gathering in Norway. If you wonder what LinuxCNC is, let's quote Wikipedia:
"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."The Norwegian developer gathering take place the weekend June 16th to 18th this year, and is open for everyone interested in contributing to LinuxCNC. Up to date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, Redpill-Linpro and NUUG Foundation, we have enough sponsor funds to pay for food, and shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Holger Levsen: 20230514-fwupd
As one cannot use fwupd on Qubes OS to update firmware, this is a quick how-to for using fwupd on Grml, for future me. (The individual steps are consolidated into a single transcript after the list.)
- boot into Grml.
- mount /boot/efi to /efi or set OverrideESPMountPoint=/boot/efi/EFI if you mount to the usual path.
- apt install fwupd-amd64-signed udisks2 policykit-1
- fwupdmgr get-devices
- fwupdmgr refresh
- fwupdmgr get-updates
- fwupdmgr update
- reboot into Qubes OS.
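Put together as one session, and assuming the Grml live system has network access, the list above boils down to the following transcript (the commands are exactly the ones listed):

    # on the Grml live system, after mounting the ESP as described above
    apt install fwupd-amd64-signed udisks2 policykit-1
    fwupdmgr get-devices    # list the devices fwupd can see
    fwupdmgr refresh        # fetch current firmware metadata
    fwupdmgr get-updates    # show available firmware updates
    fwupdmgr update         # apply them, then reboot back into Qubes OS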
Go Deh: Python's chained conditional expressions (and Fizz Buzz)
I was lurking on the LinkedIn Python forum and someone brought up FizzBuzz again, and I thought: "Why not try a solution that closely follows the description?"
The description given was:
Print a range of numbers from 1 to 100 but for multiples of 3, print "Fizz", for multiples of 5 print "Buzz" and for multiples of both 3 and 5 print "FizzBuzz".

Common factors
You know that because three and five are both factors of fifteen, but three and five share no common factors, you can test first for fifteen and then for either three or five. Since three is mentioned first, I chose to test in the order fifteen, three, then five.
We are already moving away from the textual description out of necessity.
Hiding in plain sight in the docs you have:
    conditional_expression ::= or_test ["if" or_test "else" expression]
    expression             ::= conditional_expression | lambda_expr

The else part of a conditional expression can itself be a conditional expression!
For FizzBuzz there are multiple conditions. Let's try stringing them along, and I will use F and B instead of Fizz and Buzz to better show the chained conditional expression:
    for i in range(1, 101): print('FB' if not i % 15 else 'F' if not i % 3 else 'B' if not i % 5 else i)

The corresponding English description has changed, but it should be straightforward in its logic.
Here's a picture of this type of chained conditional expressions in action:
END.
C.J. Collier: Early Access: Inserting JSON data to BigQuery from Spark on Dataproc
Hello folks!
We recently received a case letting us know that Dataproc 2.1.1 was unable to write to a BigQuery table with a column of type JSON. Although the BigQuery connector for Spark has had support for JSON columns since 0.28.0, the Dataproc images on the 2.1 line still cannot create tables with JSON columns or write to existing tables with JSON columns.
The customer has graciously granted permission to share the code we developed to allow this operation. So if you are interested in working with JSON column tables on Dataproc 2.1 please continue reading!
Use the following gcloud command to create your single-node dataproc cluster:
    IMAGE_VERSION=2.1.1-debian11
    REGION=us-west1
    ZONE=${REGION}-a
    CLUSTER_NAME=pick-a-cluster-name

    gcloud dataproc clusters create ${CLUSTER_NAME} \
        --region ${REGION} \
        --zone ${ZONE} \
        --single-node \
        --master-machine-type n1-standard-4 \
        --master-boot-disk-type pd-ssd \
        --master-boot-disk-size 50 \
        --image-version ${IMAGE_VERSION} \
        --max-idle=90m \
        --enable-component-gateway \
        --scopes 'https://www.googleapis.com/auth/cloud-platform'

The following file is the Scala code used to write JSON structured data to a BigQuery table using Spark. The script after it can be executed from your single-node Dataproc cluster.
    import org.apache.spark.sql.functions.col
    import org.apache.spark.sql.types.{Metadata, StringType, StructField, StructType}
    import org.apache.spark.sql.{Row, SaveMode, SparkSession}
    import org.apache.spark.sql.avro
    import org.apache.avro.specific

    val env = "x"
    val my_bucket = "cjac-docker-on-yarn"
    val my_table = "dataset.testavro2"

    val spark = env match {
      case "local" =>
        SparkSession
          .builder()
          .config("temporaryGcsBucket", my_bucket)
          .master("local")
          .appName("isssue_115574")
          .getOrCreate()
      case _ =>
        SparkSession
          .builder()
          .config("temporaryGcsBucket", my_bucket)
          .appName("isssue_115574")
          .getOrCreate()
    }

    // create DF with some data
    val someData = Seq(
      Row("""{"name":"name1", "age": 10 }""", "id1"),
      Row("""{"name":"name2", "age": 20 }""", "id2")
    )
    val schema = StructType(
      Seq(
        StructField("user_age", StringType, true),
        StructField("id", StringType, true)
      )
    )

    val avroFileName = s"gs://${my_bucket}/issue_115574/someData.avro"

    val someDF = spark.createDataFrame(spark.sparkContext.parallelize(someData), schema)
    someDF.write.format("avro").mode("overwrite").save(avroFileName)
    val avroDF = spark.read.format("avro").load(avroFileName)

    // set metadata
    val dfJSON = avroDF
      .withColumn("user_age_no_metadata", col("user_age"))
      .withMetadata("user_age", Metadata.fromJson("""{"sqlType":"JSON"}"""))

    dfJSON.show()
    dfJSON.printSchema

    // write to BigQuery
    dfJSON.write.format("bigquery")
      .mode(SaveMode.Overwrite)
      .option("writeMethod", "indirect")
      .option("intermediateFormat", "avro")
      .option("useAvroLogicalTypes", "true")
      .option("table", my_table)
      .save()

    #!/bin/bash

    PROJECT_ID=set-yours-here
    DATASET_NAME=dataset
    TABLE_NAME=testavro2

    # We have to remove all of the existing spark bigquery jars from the local
    # filesystem, as we will be using the symbols from the
    # spark-3.3-bigquery-0.30.0.jar below. Having existing jar files on the
    # local filesystem will result in those symbols having higher precedence
    # than the one loaded with the spark-shell.
    sudo find /usr -name 'spark*bigquery*jar' -delete

    # Remove the table from the bigquery dataset if it exists
    bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

    # Create the table with a JSON type column
    bq mk --table $PROJECT_ID:$DATASET_NAME.$TABLE_NAME \
        user_age:JSON,id:STRING,user_age_no_metadata:STRING

    # Load the example Main.scala
    spark-shell -i Main.scala \
        --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

    # Show the table schema when we use `bq mk --table` and then load the avro
    bq query --use_legacy_sql=false \
        "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

    # Remove the table so that we can see that the table is created should it not exist
    bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

    # Dynamically generate a DataFrame, store it to avro, load that avro,
    # and write the avro to BigQuery, creating the table if it does not already exist
    spark-shell -i Main.scala \
        --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

    # Show that the table schema does not differ from one created with a bq mk --table
    bq query --use_legacy_sql=false \
        "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

Google BigQuery has supported JSON data since October of 2022, but until now, it has not been possible, on generally available Dataproc clusters, to interact with these columns using the Spark BigQuery Connector.
JSON column type support was introduced in spark-bigquery-connector release 0.28.0.
Woodpecker CI with automatic runner creation
I’ve been happily using Woodpecker CI to get CI for my repositories on Codeberg. Codeberg is a non-profit community-driven git repository hosting platform, so they can’t provide free CI to everyone.
Since I run lots of stuff on small arm boards (for example this website), I need my CI jobs to create arm executables. The easiest way to get that done is to just compile on arm devices, so I was happy to see that Hetzner is now offering arm nodes in their cloud offering.
To make that as cheap as possible, the CI should ideally create a VM before running its job, and remove it again afterwards. Unfortunately Woodpecker does not seem to support that out of the box at this point.
My solution to that was to build a docker proxy, that creates VMs using docker-machine, and then proxies the incoming requests to the remote VM. That works really well now, so maybe you will find it useful.
Setting that up is reasonably simple:
- Install docker-machine. I recommend using the fork by GitLab
- Install the backend for your cloud provider. For Hetzner I use this one
- Grab a binary release of docker-proxy (if you need arm executables), or compile it yourself.
- Create a systemd unit to start the service on boot in /etc/systemd/system/docker.proxy.service. This particular one just runs it as the woodpecker-agent user that you may already have if you use Woodpecker CI (a sketch of such a unit follows this list).
- Fill in /etc/docker-proxy/config.toml. This example works for Hetzner, but anything that has a docker-machine provider should work. You just need to supply the arguments for the correct backend.
- Finally, make woodpecker-agent use the new docker proxy, by setting DOCKER_HOST=http://localhost:8000 in its environment.
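Since the unit file itself isn't reproduced here, a minimal sketch of what such a unit might look like follows; the binary path is an assumption, and the exact invocation depends on how you installed docker-proxy:

    # hypothetical sketch of /etc/systemd/system/docker.proxy.service
    cat > /etc/systemd/system/docker.proxy.service <<'EOF'
    [Unit]
    Description=docker-proxy for Woodpecker CI
    After=network-online.target

    [Service]
    User=woodpecker-agent
    # adjust the path to wherever the docker-proxy binary was installed;
    # its configuration lives in /etc/docker-proxy/config.toml as described above
    ExecStart=/usr/local/bin/docker-proxy
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now docker.proxy.service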
I hope this may be useful for you as well :)
"Mathspp Pydon'ts": Properties | Pydon't 🐍
Learn how to use properties to add dynamic behaviour to your attributes.
(If you are new here and have no idea what a Pydon't is, you may want to read the Pydon't Manifesto.)
Introduction

Properties, defined via the property built-in, are a Python feature that lets you add dynamic behaviour behind what is typically a static interface: an attribute. Properties also have other benefits and use cases, and we will cover them here.
In this Pydon't, you will
- understand what a property is;
- learn how to implement a property with the built-in property;
- learn how to use property to implement read-only attributes;
- see that property doesn't have to be used as a decorator;
- add setters to your properties;
- use setters to do data validation and normalisation;
- read about deleters in properties; and
- see usages of property in the standard library.
You can now get your free copy of the ebook “Pydon'ts – Write beautiful Python code” on Gumroad to help support the series of “Pydon't” articles 💪.
What is a property?

A property is an attribute that is computed dynamically. That's it. And you will understand what this means in a jiffy!
The problem

This is a class Person with three vanilla attributes:
    class Person:
        def __init__(self, first, last):
            self.first = first
            self.last = last
            self.name = f"{self.first} {self.last}"

    john = Person("John", "Doe")

However, there is an issue with this implementation. Can you see what it is?
I'll give you a hint:
    >>> john = Person("John", "Doe")
    >>> john.name
    'John Doe'
    >>> john.last = "Smith"
    >>> john.name  # ?

When you implement name as a regular attribute, it can go out of sync when you change the attributes on which name depends.
How do we fix this? Well, we could provide methods that the user can use to set the first and last names of the Person instance, and those methods could keep name in sync:
    class Person:
        def __init__(self, first, last):
            self.first = first
            self.last = last
            self.name = f"{self.first} {self.last}"

        def set_first(self, first):
            self.first = first
            self.name = f"{self.first} {self.last}"

        def set_last(self, last):
            self.last = last
            self.name = f"{self.first} {self.last}"

    john = Person("John", "Doe")

This works:
    >>> john = Person("John", "Doe")
    >>> john.name
    'John Doe'
    >>> john.set_first("Charles")
    >>> john.name
    'Charles Doe'

However, we had to add two methods that look pretty much the same... And this would get worse if we introduced an attribute for middle names, for example... Essentially, this isn't a very Pythonic solution – it isn't very elegant. (Or, at least, we can do better!)
There is another alternative... Instead of updating name when the other attributes are changed, we could add a method that computes the name of the user on demand:
    class Person:
        def __init__(self, first, last):
            self.first = first
            self.last = last

        def get_name(self):
            return f"{self.first} {self.last}"

This also works:
    >>> john = Person("John", "Doe")
    >>> john.get_name()
    'John Doe'
    >>> john.first = "Charles"
    >>> john.get_name()
    'Charles Doe'

But this isn't very elegant, either. However, it...
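For completeness, the direction the article is building towards – computing name with the property built-in – looks roughly like this minimal sketch (not taken from the original post):

    class Person:
        def __init__(self, first, last):
            self.first = first
            self.last = last

        @property
        def name(self):
            # computed on every access, so it can never go out of sync
            return f"{self.first} {self.last}"

With this, john.name reads like a plain attribute but always reflects the current values of first and last.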
Sergio Durigan Junior: Ubuntu debuginfod and source code indexing
You might remember that in my last post about the Ubuntu debuginfod service I talked about wanting to extend it and make it index and serve source code from packages. I’m excited to announce that this is now a reality since the Ubuntu Lunar (23.04) release.
The feature should work for a lot of packages from the archive, but not all of them. Keep reading to better understand why.
The problem

While debugging a package in Ubuntu, one of the first steps you need to take is to install its source code. There are some problems with this:
- apt-get source requires dpkg-dev to be installed, which ends up pulling in a lot of other dependencies.
- GDB needs to be taught how to find the source code for the package being debugged. This can usually be done by using the dir command, but finding the proper path to use is usually not trivial, and you find yourself having to use more "complex" commands like set substitute-path, for example.
- You have to make sure that the version of the source package is the same as the version of the binary package(s) you want to debug.
- If you want to debug the libraries that the package links against, you will face the same problems described above for each library.
So yeah, not a trivial/pleasant task after all.
The solution…

Debuginfod can index source code as well as debug symbols. It is smart enough to keep a relationship between the source package and the corresponding binary's Build-ID, which is what GDB will use when making a request for a specific source file. This means that, just like what happens for debug symbol files, the user does not need to keep track of the source package version.
While indexing source code, debuginfod will also maintain a record of the relative pathname of each source file. No more fiddling with paths inside the debugger to get things working properly.
Last, but not least, if there’s a need for a library source file and if it’s indexed by debuginfod, then it will get downloaded automatically as well.
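If you haven't pointed GDB at the service yet, enabling it is typically just a matter of exporting an environment variable before starting the debugger; the URL below is Ubuntu's public debuginfod instance, and on recent releases GDB may already offer to enable it for you:

    # make GDB (and other debuginfod-aware tools) fetch symbols and sources on demand
    export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
    gdb /usr/bin/some-program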
… but not a perfect one

In order to make debuginfod happy when indexing source files, I had to patch dpkg and make it always use -fdebug-prefix-map when compiling stuff. This GCC option is used to remap pathnames inside the DWARF, which is needed because in Debian/Ubuntu we build our packages inside chroots and the build directories end up containing a bunch of random cruft (like /build/ayusd-ASDSEA/something/here). So we need to make sure the path prefix (the /build/ayusd-ASDSEA part) is uniform across all packages, and that's where -fdebug-prefix-map helps.
This means that the package must honour dpkg-buildflags during its build process, otherwise the magic flag won’t be passed and your DWARF will end up with bogus paths. This should not be a big problem, because most of our packages do honour dpkg-buildflags, and those who don’t should be fixed anyway.
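Packages built with dh(1) normally pick these flags up automatically; for hand-rolled debian/rules files, the documented way to honour dpkg-buildflags looks roughly like this generic fragment (not tied to any particular package):

    # debian/rules fragment: export the compiler/linker flags provided by
    # dpkg-buildflags, including the path-remapping options mentioned above
    DPKG_EXPORT_BUILDFLAGS = 1
    include /usr/share/dpkg/buildflags.mk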
… especially if you're using LTO

Ubuntu enables LTO by default, and unfortunately we are affected by an annoying (and complex) bug that results in those bogus pathnames not being properly remapped. The bug doesn't affect all packages, but if you see GDB having trouble finding a source file whose full path does not start with /usr/src/..., that is a good indication that you're being affected by this bug. Hopefully we should see some progress in the following weeks.
Your feedback is important to us

If you have any comments, or if you found something strange that looks like a bug in the service, please reach out. You can either send an email to my public inbox (see below) or file a bug against the ubuntu-debuginfod project on Launchpad.
Kdenlive 23.04.1 released
Kdenlive 23.04.1 has just been released, and all users of the 23.04.0 version are strongly encouraged to upgrade.
The 23.04.0 release of Kdenlive introduced major changes with support for nested timelines. However, several issues leading to crashes and project corruption went unnoticed and affected that release.
These issues should now be fixed in Kdenlive 23.04.1. While we have some automated testing, and continue improving it, it is difficult to test all configurations and cases on such a large codebase with our small team. We are, however, planning to improve in this area!
It is also important to note that Kdenlive has several automatic backup mechanisms, so even in such cases, data loss should be minimal (see our documentation for more details).
If you want to help us, don’t hesitate to get in touch, report bugs, test the development version, contribute to the documentation or donate if you feel like it!
Version 23.04.1 also fixes many other bugs, see the full log below:
- Don’t store duplicates for modified timeline uuid. Commit.
- Fix recent regression (sequence clip duration not updated). Commit.
- Clear undo history on sequence close as undoing a sequence close leads to crashes. Commit.
- Correctly remember sequence properties (like guides) after closing sequence. Commit.
- Fix various sequence issues (incorrect length limit on load, possible corruption on close/reopen). Commit.
- Do our best to recover 23.04.0 corrupted project files. Commit. See bug #469217
- Try to fix AppImage launching external app. Commit. See bug #468935
- Get rid of the space eating info message in Motion Tracker. Commit.
- Fix Defish range for recently introduced parameters. Commit. Fixes bug #469390
- Fix rotation on proxy formats that don’t support the rotate flag. Commit.
- Fix animated color parameter alpha broken. Commit. Fixes bug #469155
- Fix 23.04.0 corrupted files on opening. Commit. See bug #469217
- Fix another major source of project corruption. Commit.
- Don’t attempt to move external proxy clips. Commit. Fixes bug #468998
- Fix crash on unconfigured speech engine. Commit. Fixes bug #469201
- Fix VOSK model hidden from auto subtitle dialog. Commit. Fixes bug #469230
- Fix vaapi timeline preview profile. Commit. See bug #469251
- Fix effects with filter task (motion tracker, normalize), re-add a non animated color property. Commit.
- Switch test videos to mpg for CI. Commit.
- Fix project corruption on opening, add test to prevent from happening again. Commit. See bug #468962
- Fix concurrency crash in thumbnails. Commit.
- Color wheel: highlight active slider, fix mouse wheel conflicts. Commit.
- More fixes for luma lift gain color wheel (fix dragging outside wheel) and improved cursor feedback. Commit.
- Various fixes for luma lift gain color wheel and slider. Commit.
- Ensure Shape alpha resource are included in archived project. Commit.
- Check missing filter assets on document open (LUT and Shape). Commit.
- Fix various bugs and crashes on sequence close and undo create sequence from selection. Commit.
- Fix temporary data check showing confusing widget. Commit.
- Fix render profiles with no arguments (like GIF Hq). Commit.
- Fix incorrect path on extract for images embedded in titles. Commit.
- Wait until all data is copied before re-opening project when using project folder for cache data. Commit.
- Minor ui improvement for clip monitor jobs overlay. Commit.
- Try to fix tests. Commit.
- Don’t show unnecessary warning. Commit.
- Ensure the mute_on_pause property is removed from older project files. Commit.
- Fix clip properties default rotation and aspect ratio detection, display the tracks count for sequence clips. Commit.
The post Kdenlive 23.04.1 released appeared first on Kdenlive.
Petter Reinholdtsen: OpenSnitch in Debian ready for prime time
A bit delayed, the interactive application firewall OpenSnitch package in Debian now got the latest fixes ready for Debian Bookworm. Because it depends on a package missing on some architectures, the autopkgtest check of the testing migration script did not understand that the tests were actually working, so the migration was delayed. A bug in the package dependencies is also fixed, so those installing the firewall package (opensnitch) now also get the GUI admin tool (python3-opensnitch-ui) installed by default. I am very grateful to Gustavo Iñiguez Goya for his work on getting the package ready for Debian Bookworm.
Armed with this package I have discovered some surprising connections from programs I believed were able to work completely offline, and it has already proven its worth, at least to me. If you too want to get more familiar with the kind of programs using Internet connections on your machine, I recommend testing apt install opensnitch in Bookworm and seeing what you think.
The package is still not able to build its eBPF module within Debian. Not sure how much work it would be to get it working, but suspect some kernel related packages need to be extended with more header files to get it working.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Plasma 6 Sprint 2023
Last weekend I attended the first few days of the Plasma 6 Sprint in Augsburg, Germany. While I’m not deeply involved with the Plasma development as such, there’s plenty of overlap with KDE Frameworks topics there as well.
Wrapping up 2020

The Plasma sprint was originally planned for April 2020, with accommodation and some trips already having been booked, but then COVID interfered. For me this was the last 2020 event to catch up with, after the recent PIM Sprint in Toulouse, so that part of 2020 is now finally concluded.
Having missed Akademy last year this was also the first opportunity to finally meet some people again who I hadn’t seen in over three years, and to meet some people for the first time who have joined in recent years (as well as meeting some people I have met in three different countries this year alone already). That’s always the best part of sprints :)
Kai, Marco, Méven, Nate and Noah have already written about the sprint, so I’ll focus on some of the details I worked on below.
KDE Frameworks 6

The switch to KF6 allows us to move things around and thus fix one of the bigger "dependency propagators" we have, the color scheme classes. Those are in widespread use and on their own fairly simple. Due to their placement in KF::ConfigWidgets however they depend on Qt::Widgets. That's not a problem for QWidget-based applications, but for several of our QML-based (mobile) apps it's often the only reason for that dependency.
For an Android APK that impact is particularly easy to measure: there we are talking about 4-5MB of additional package size, which can be up to 20-25% of the entire size. While that might not seem much for the individual user, this quickly multiplies with every APK downloaded from our infrastructure.
This is entirely avoidable with a bit of restructuring, which we got done during the sprint (and by we I mean mostly David Redondo). The color scheme code is now separated into the new widget-less KF::ColorScheme framework, and a small part remaining in KF::ConfigWidgets for generating a color scheme selection menu. For most consumers this is all largely transparent, as the relevant uses happen in other component such as the Breeze QQC2 style.
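For application developers the practical effect should mostly be a buildsystem-level change; a rough sketch of what linking against the split-out library might look like (the find_package component name follows the naming used in this post and is an assumption about the final KF6 packaging):

    # CMakeLists.txt fragment (sketch): pull in the widget-less color scheme library
    find_package(KF6 REQUIRED COMPONENTS ColorScheme)
    target_link_libraries(myapp PRIVATE KF6::ColorScheme)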
That’s not the only KF6 topic we discussed and worked on though:
- Reviewing the XDG Portal Notification v2 proposal for compatibility with KF::Notification. The proposed API is actually a big improvement over the existing one, and much better aligns with what we have in KF::Notification and what other platforms offer (which helps us with providing a cross-platform API).
- Finish and integrate two widely used pieces of code from applications, an email address input validator and a return key capture event filter for line edits. This reduces code duplication and helps to cut down dependencies in e.g. KDE PIM.
- Identify and fix test failures caused by the CLDR 42 update we got as part of the switch to Qt 6.5. This now uses a narrow non-breakable Unicode space instead of a regular one for certain time formats, which is actually a good idea but wasn’t expected by some of our unit tests.
- Continued the refactoring of the KF::Prison barcode generator API to no longer be based around polymorphic types and factory methods but instead provide a simple value-type like interface.
I also got to learn more about XDG Portals from others working with those, which will come in handy for adding push notification support there.
Contributing

Getting everyone together for a few days is not only great fun, it's also immensely productive.
This has been made possible thanks to TUXEDO Computers hosting us in their office in Augsburg. If you have access to a suitable venue that is a much appreciated way in which you can support us. Similarly we rely on KDE e.V. to ensure everyone can afford to attend, which you can support with your donations.
The next big in-person meeting isn’t far away fortunately, Akademy is in just nine weeks already :)