Feeds

Steinar H. Gunderson: Continued life with bcachefs

Planet Debian - Fri, 2024-04-26 16:05

This post was supposed to be called “death with bcachefs”, but it sounded a bit too dramatic. :-) Evidently bcachefs-tools in Debian is finally getting an update (although in experimental), so that's good. Meanwhile, one of my multi-device filesystems died a horrible death, and since I had backups, I didn't ask for its fix to be prioritized—fsck still is unable to repair it and I don't use bcachefs on that machine anymore. But the other one still lives fairly happily.

Hanging around #bcachefs on IRC tells me that indeed, this thing is still quite experimental. Some of the killer features (like proper compression) don't perform as well as they should yet. Large rewrites are still happening. People are still reporting quite weird bugs that are being triaged and mostly fixed (although if you can't reproduce them, you're pretty much hosed). But it's a fun ride. Again: Have backups. They saved me. :-)

Categories: FLOSS Project Planets

Drupal Core News: Drupal 11.0.0-alpha1 will be released on the week of April 29, 2024

Planet Drupal - Fri, 2024-04-26 14:08

Last month, we announced that, depending on whether the codebase met the 11.0.0 beta requirements by today, April 26, 2024, Drupal 11 would be released either in the week of July 29, 2024 or the week of December 9, 2024.

The Drupal 11 codebase has progressed a lot since then: it is based on Symfony 7 and jQuery 4, and the deprecated APIs have been removed. However, while we are making rapid progress on PHPUnit 10 support, we need to fully complete that update before a beta release, and it will not quite be ready for next week.

To help the community prepare for Drupal 11, we decided to make Drupal 11.0.0-alpha1 available next week (on the week of April 29, 2024), alongside Drupal 10.3.0-beta1. This also means that those attending DrupalCon Portland 2024 the week after can already try out the first tagged version of Drupal 11, and modules can add Drupal 11 compatibility confident that all runtime APIs are stable.

We are giving ourselves an additional couple of weeks to run down the last PHPUnit 10 issues and any other remaining beta blockers, so that we are ready for a stable Drupal 11.0.0 release in the week of July 29, 2024. Assuming all goes well, we'll make a final decision by May 10th and release a beta shortly afterwards.

Categories: FLOSS Project Planets

Openly Shared

Open Source Initiative - Fri, 2024-04-26 08:02

The definition of “open source” in the most recent version (article 2(48)) of the Cyber Resilience Act (CRA) goes beyond the Open Source Definition (OSD) managed by OSI. It says:

“Free and open-source software is understood as software the source code of which is openly shared and the license of which provides for all rights to make it freely accessible, usable, modifiable and redistributable.”

The phrase “openly shared” was a considered and intentional addition by the co-legislators – they even checked with community members that it did not cause unintended effects before adding it. While open source communities all “openly share” the source code of their projects, the same is not true of some companies, especially those with “open core” business models.

For historical reasons, open sharing is not a requirement of either the OSD or the FSF’s Free Software Definition (FSD), and the most popular open source licenses do not require it. Notably, the GPL does not insist that source code be made public – only that those receiving the binaries must be able to request the corresponding source code and enjoy it however they wish (including making it public).

For most open source projects and their uses, the CRA’s extra requirement will make no difference. But it complicates matters for companies that restrict source availability to paying customers (such as Red Hat), make little distinction between available and non-available source (such as ForgeRock), or withhold source for certain premium elements.

A similar construct{1} is used in the AI Act (recital 102) and I anticipate this trend will continue through other future legislation. Personally I welcome this additional impetus to openness.

This post may be discussed on our forum

{1} The mention in the AI Act has a different character to that in the CRA. In the AI Act it is more narrative, restricted to a recital, and is a subset of attributes of the license. In this form it actually refers to virtually no OSI-approved licenses. In the CRA the wording is part of the formal definition in an Article, so it is much more impactful, and it adds an additional requirement over the basic requirements of licensing.

Categories: FLOSS Research

Real Python: Quiz: What Is the __pycache__ Folder in Python?

Planet Python - Fri, 2024-04-26 08:00

As your Python project grows, you typically organize your code in modules and packages for easier maintenance and reusability. When you do that, you’ll likely notice the sudden emergence of a __pycache__ folder alongside your original files, popping up in various locations unexpectedly.
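For instance, importing a module is enough to make the folder appear (a minimal sketch; the exact .pyc file name depends on your interpreter version):

  mkdir demo && echo 'x = 1' > demo/mod.py
  python3 -c "import demo.mod"
  ls demo/__pycache__   # e.g. mod.cpython-312.pyc on CPython 3.12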


Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #202: Pydantic Data Validation & Python Web Security Practices

Planet Python - Fri, 2024-04-26 08:00

How do you verify and validate the data coming into your Python web application? What tools and security best practices should you consider as a developer? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.


Categories: FLOSS Project Planets

Robert McQueen: Update from the GNOME board

Planet Debian - Fri, 2024-04-26 06:39

It’s been around 6 months since the GNOME Foundation was joined by our new Executive Director, Holly Million, and the board and I wanted to update members on the Foundation’s current status and some exciting upcoming changes.

Finances

As you may be aware, the GNOME Foundation has operated at a deficit (nonprofit speak for a loss – ie spending more than we’ve been raising each year) for over three years, essentially running the Foundation on reserves from some substantial donations received 4-5 years ago. The Foundation has a reserves policy which specifies a minimum amount of money we have to keep in our accounts. This is so that if there is a significant interruption to our usual income, we can preserve our core operations while we work on new funding sources. We’ve now “hit the buffers” of this reserves policy, meaning the Board can’t approve any more deficit budgets – to keep spending at the same level we must increase our income.

One of the board’s top priorities in hiring Holly was therefore her experience in communications and fundraising, and building broader and more diverse support for our mission and work. Her goals since joining – as well as building her familiarity with the community and project – have been to set up better financial controls and reporting, develop a strategic plan, and start fundraising. You may have noticed the Foundation being more cautious with spending this year, because Holly prepared a break-even budget for the Board to approve in October, so that we can steady the ship while we prepare and launch our new fundraising initiatives.

Strategy & Fundraising

The biggest prerequisite for fundraising is a clear strategy – we need to explain what we’re doing and why it’s important, and use that to convince people to support our plans. I’m very pleased to report that Holly has been working hard on this and meeting with many stakeholders across the community, and has prepared a detailed and insightful five year strategic plan. The plan defines the areas where the Foundation will prioritise, develop and fund initiatives to support and grow the GNOME project and community. The board has approved a draft version of this plan, and over the coming weeks Holly and the Foundation team will be sharing this plan and running a consultation process to gather feedback from GNOME Foundation and community members.

In parallel, Holly has been working on a fundraising plan to stabilise the Foundation, growing our revenue and ability to deliver on these plans. We will be launching a variety of fundraising activities over the coming months, including a development fund for people to directly support GNOME development, working with professional grant writers and managers to apply for government and private foundation funding opportunities, and building better communications to explain the importance of our work to corporate and individual donors.

Board Development

Another observation that Holly had since joining was that we had, by general nonprofit standards, a very small board of just 7 directors. While we do have some committees which have (very much appreciated!) volunteers from outside the board, our officers are usually appointed from within the board, and many board members end up serving on multiple committees and wearing several hats. It also means the number of perspectives on the board is limited and less representative of the diverse contributors and users that make up the GNOME community.

Holly has been working with the board and the governance committee to reduce how much we ask from individual board members, and improve representation from the community within the Foundation’s governance. Firstly, the board has decided to increase its size from 7 to 9 members, effective from the upcoming elections this May & June, allowing more voices to be heard within the board discussions. After that, we’re going to be working on opening up the board to more participants, creating non-voting officer seats that represent certain regions or interests from across the community and take part in committees and board meetings. These new non-voting roles are likely to be appointed with some kind of application process, and we’ll share details about these roles and how to be considered for them as we refine our plans over the coming year.

Elections

We’re really excited to develop and share these plans and increase the ways that people can get involved in shaping the Foundation’s strategy and how we raise and spend money to support and grow the GNOME community. This brings me to my final point: we’re approaching the annual board elections, which take place in the run up to GUADEC. Because of the expansion of the board, and four directors coming to the end of their terms, we’ll be electing 6 seats this election. It’s really important to Holly and the board that we use this opportunity to bring some new voices to the table, leading by example in growing and better representing our community.

Allan wrote in the past about what the board does and what’s expected from directors. As you can see, we’re working hard on reducing what we ask from each individual board member by increasing the number of directors and bringing additional members into committees and non-voting roles. If you’re interested in seeing more diverse backgrounds and perspectives represented on the board, I would strongly encourage you to consider standing for election and to reach out to a board member to discuss their experience.

Thanks for reading! Until next time.

Best Wishes,
Rob
President, GNOME Foundation

(also posted to GNOME Discourse, please head there if you have any questions or comments)

Categories: FLOSS Project Planets

Web Review, Week 2024-17

Planet KDE - Fri, 2024-04-26 06:37

Let’s go for my web review for the week 2024-17.

AI isn’t useless. But is it worth it?

Tags: tech, ai, gpt, copilot, criticism, work

An interesting essay. It leans towards “assistants are useful for simple coding tasks” and is a bit more critical when it comes to writing. I find the stance original: yes, it can help with some writing tasks, but if you look at the writing tasks you can expedite this way… if you wish to expedite them, isn’t that a sign they were providing little value in the first place? Is the solution the assistant, or changing the way you work? The assistant might otherwise just be hiding busy work.

https://www.citationneeded.news/ai-isnt-useless/


Magic Numbers | blarg

Tags: tech, networking, history

Interesting facts about how the ethernet frame MTU came to be 1500 bytes.

https://exple.tive.org/blarg/2024/04/24/magic-numbers/


Guiding users away from cd and ls :: Terminal Click — Bringing Dead Text to Life

Tags: tech, command-line, terminal

Interesting ideas for terminal emulators and shells. Maybe will make their way in other software.

https://terminal.click/posts/2024/04/guiding-users-away-from-cd-and-ls/


Tips on Adding JSON Output to Your CLI App - Brazil’s Blog

Tags: tech, json, command-line

A nice set of advice. There are interesting things to do on the command line with more JSON output; it needs to be easy to work with, though.

https://blog.kellybrazil.com/2021/12/03/tips-on-adding-json-output-to-your-cli-app/


Modern Linux mounts a lot of different types of virtual filesystems

Tags: tech, linux, filesystem

There is indeed a jungle of virtual filesystems nowadays. That doesn’t make it easy to filter only for the “real” ones.

https://utcc.utoronto.ca/~cks/space/blog/linux/LinuxManyVirtualFilesystems


Shared libs, rpath and the runtime linker

Tags: tech, system, linux, linking, debugging

This can sometimes be confusing. Here are a couple of tips about debugging rpath and linker errors.

https://carlosrdrz.dev/shared-libs-rpath-and-the-runtime-linker


Coverage Guided Fuzzing - Extending Instrumentation to Hunt Down Bugs Faster! - Include Security Research Blog

Tags: tech, tests, security, fuzzing

Maybe a bit dry, but gives a good idea of how a fuzz testing harness works. And also how it can be tweaked.

https://blog.includesecurity.com/2024/04/coverage-guided-fuzzing-extending-instrumentation/


HTML attributes vs DOM properties - JakeArchibald.com

Tags: tech, html, web, frontend

There are differences between attributes on the HTML side and properties on the DOM side. This can quickly get confusing, here is a good reference for it.

https://jakearchibald.com/2024/attributes-vs-properties/


3 important things I overlooked during code reviews | Piglei

Tags: tech, codereview, programming

Indeed, naming, comments and communication styles are three aspects often overlooked during reviews. They are very important though and shouldn’t be neglected.

https://www.piglei.com/articles/3-important-things-I-overlooked-during-cr/


First Come First Served: The Impact of File Position on Code Review

Tags: tech, codereview, cognition

I guess we kind of suspected it, and this study tends to prove it: defects are more easily found in the first files of a code review than in the last ones.

https://arxiv.org/abs/2208.04259


Random musings on the Agile Manifesto – NeoPragma LLC

Tags: tech, agile, criticism

Interesting musings indeed. These are less commonly heard opinions about the manifesto and its origins. Good food for thought.

https://neopragma.com/2024/04/random-musings-on-the-agile-manifesto/


Calculus Made Easy

Tags: mathematics, learning

I didn’t know this book. It is written in a surprising style, but it’s very much down to earth and to the point. For sure a good way to learn calculus.

https://calculusmadeeasy.org/


You Are What You Read, Even If You Don’t Always Remember It - Jim Nielsen’s Blog

Tags: reading, knowledge

Very good point. You might not remember the content, but if it impacted the way you think it did its job.

https://blog.jim-nielsen.com/2024/you-are-what-you-read/


Bye for now!

Categories: FLOSS Project Planets

Hotspot v1.5.0 released

Planet KDE - Fri, 2024-04-26 06:30

Hotspot is a standalone GUI designed to provide a user-friendly interface for analyzing performance data. It takes a perf.data file, parses and evaluates its contents, and presents the results in a visually appealing and easily understandable manner. Our goal with Hotspot is to offer a modern alternative to perf report, making performance analysis on Linux systems more intuitive and efficient.
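If you have not used it before, a typical session looks roughly like this (the program name and output path are placeholders, and your perf options may differ):

  # Record a profile with DWARF call graphs, then open the result in Hotspot.
  perf record --call-graph dwarf -o perf.data ./my_app
  hotspot perf.data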

ChangeLog for Hotspot v1.5.0

It comes packed with a wealth of code cleanups, bug fixes and new functionality. Most notably, the disassembly view has been further improved with better searching, highlighting and faster performance.

Furthermore, we reworked the authentication mechanism to allow perf record to be run directly, with elevated privileges, via pkexec, obsoleting the error-prone old mechanism (see also https://nvd.nist.gov/vuln/detail/CVE-2023-28144).

We now also fully support Qt6 and KF6, while keeping compatibility with Qt5 and KF5. The AppImage below is still built with Qt5 but it might be the last time that we do this. The next version might become Qt6 only.

Many thanks to the various contributors that help build this software, both by writing code as well as reporting bugs.

To get a more detailed scope over all the changes in this new release, check out the full changelog on GitHub. More information about Hotspot can be obtained on its GitHub page or by watching this video.

Happy profiling everyone 🚀

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Hotspot v1.5.0 released appeared first on KDAB.

Categories: FLOSS Project Planets

gnulib @ Savannah: GNU gnulib: gnulib-tool has become much faster

GNU Planet! - Fri, 2024-04-26 06:12

If you are a developer on a package that uses GNU gnulib as part of its build system:

gnulib-tool has been known for being slow for many years. We have listened to your complaints. We have rewritten gnulib-tool in another programming language (Python). It is between 8 times and 100 times faster than the previous implementation.

Both implementations behave identically, that is, they produce the same generated files and the same output. Nothing changes in the way you use Gnulib; it's only faster.

In order to reap the new speed:

1. Make sure you have Python (version 3.7 or newer) installed on your machine.

2. Update your gnulib checkout. (For some packages, it comes as a git submodule named 'gnulib'.) Like this:

  $ git checkout master
  $ git pull

  Set the environment variable GNULIB_SRCDIR, pointing to this checkout.

  If the package is using a git submodule named 'gnulib', it is also advisable to do

  $ git commit -m 'build: Update gnulib submodule to latest.' gnulib

  (as a preparation for step 4, because the --no-git option does not work as expected in all variants of 'bootstrap').

3. Clean the built files of your package:

  $ make -k distclean


4. Regenerate the fetched and generated files of your package. Depending on the package, this may be a command such as

  $ ./bootstrap --no-git --gnulib-srcdir=$GNULIB_SRCDIR

  or

  $ export GNULIB_SRCDIR; ./autopull.sh; ./autogen.sh

  or, if no such script is available:

  $ $GNULIB_SRCDIR/gnulib-tool --update


5. Continue with

  $ ./configure
  $ make

  as usual.

Enjoy! The rewritten gnulib-tool was implemented by Dmitry Selyutin, Collin Funk, and me.

Categories: FLOSS Project Planets

Goals Sprint 2024

Planet KDE - Fri, 2024-04-26 05:15

From last Friday to Wednesday I was in Berlin to attend the combined 2024 KDE Goals sprint that was graciously hosted by MBition. Compared to previous Goals sprints, where each Goal had its own separate sprint, this year was different as all three happened at once in the same area. This allowed attendees to freely switch between the different topics and enabled more collaboration opportunities. Let's see how that worked out for me.

Most of my time I actually spent in the context of the accessibility goal. I became part of a discussion of how QML comboboxes in general, and the Kirigami Add-ons date picker in particular, are lacking in the accessibility department. As the discussion turned to how the default representation of a standard combobox could be improved, the question was raised whether it would still be possible to do something special for customized comboboxes. This led to prototyping on the date picker, with the first approach being to forego the built-in support in QtQuick and implement the relevant interfaces manually, like one would do for QtWidgets. This was a lot of boring boilerplate code, but it proved that this option is available for very specialized use cases. The solution we came up with in the end for our use cases was to provide the required properties and roles in a proxy Item that exposes the actual controls to the accessibility tools.

On the automation front I was involved in creating two new CI checks. The first one is a reuse lint check that only checks new files for compliance, which enables older projects to enforce coverage at least for new files. The second was an idea that came up during the sprint: we could detect untranslated strings in QML files, as these are usually assigned to text properties. While it will never catch all cases, during testing we already found some problematic cases in Plasma repositories. We also discussed some other points from the idea list, such as a cherry-pick bot like the Qt Project uses and automatically updating the fix version field on Bugzilla, but these innocuous-looking problems have some corner cases which require more thought.
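For illustration, the gist of the untranslated-string idea can be approximated with a plain grep (a crude sketch of the concept only, not the actual CI job):

  # Flag QML text properties assigned a bare string literal, i.e. not
  # wrapped in an i18n()/i18nc() call. Crude, but catches the common case.
  grep -rn --include='*.qml' -E '\btext:[[:space:]]*"' .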

To the Sustainable Software goal I contributed the least. But together with Aleix and Joseph we debugged why the VNC setup of the KDE Eco Lab machine did not work anymore and fixed it. So in the end I interacted with all three goals.

The combined sprint was a nice experience and facilitated many discussions about the Goals, but as always also about other KDE topics, as is unavoidable when KDE community members are put together in a room. However, while it enabled people to jump between the different goals, I am wary that this setup removes a bit of focus from each goal compared to dedicated sprints.

Thanks to MBition for hosting us, and, as a reminder, your donations to KDE e.V. make sprints such as these possible.

Categories: FLOSS Project Planets

Russell Coker: Humane AI Pin

Planet Debian - Fri, 2024-04-26 04:30

I wrote a blog post The Shape of Computers [1] exploring ideas of how computers might evolve and how we can use them. One of the devices I mentioned was the Humane AI Pin, which has just been the recipient of one of the biggest roast reviews I’ve ever seen [2], good work Marques Brownlee! As an aside I was once given a product to review which didn’t work nearly as well as I think it should have worked so I sent an email to the developers saying “sorry this product failed to work well so I can’t say anything good about it” and didn’t publish a review.

One of the first things that caught my attention in the review is the note that the AI Pin doesn’t connect to your phone. I think that everything should connect to everything else as a usability feature. For security we don’t want so much connecting and it’s quite reasonable to turn off various connections at appropriate times for security, the Librem5 is an example of how this can be done with hardware switches to disable Wifi etc. But to just not have connectivity is bad.

The next noteworthy thing is the external battery which also acts as a magnetic attachment from inside your shirt. So I guess it’s using wireless charging through your shirt. A magnetically attached external battery would be a great feature for a phone, you could quickly swap a discharged battery for a fresh one and keep using it. When I tried to make the PinePhonePro my daily driver [3] I gave up and charging was one of the main reasons. One thing I learned from my experiment with the PinePhonePro is that the ratio of charge time to discharge time is sometimes more important than battery life and being able to quickly swap batteries without rebooting is a way of solving that. The reviewer of the AI Pin complains later in the video about battery life which seems to be partly due to wireless charging from the detachable battery and partly due to being physically small. It seems the “phablet” form factor is the smallest viable personal computer at this time.

The review glosses over what could be regarded as the two worst issues of the device. It does everything via the cloud (where “the cloud” means “a computer owned by someone I probably shouldn’t trust”) and it records everything. Strange that it’s not getting the hate the Google Glass got.

The user interface based on laser projection of menus on the palm of your hand is an interesting concept. I’d rather have a Bluetooth attached tablet or something for operations that can’t be conveniently done with voice. The reviewer harshly criticises the laser projection interface later in the video, maybe technology isn’t yet adequate to implement this properly.

The first criticism of the device in the “review” part of the video is of the time taken to answer questions, especially when Internet connectivity is poor. His question “who designed the Washington Monument” took 8 seconds to start answering it in his demonstration. I asked the Alpaca LLM the same question running on 4 cores of a E5-2696 and it took 10 seconds to start answering and then printed the words at about speaking speed. So if we had a free software based AI device for this purpose it shouldn’t be difficult to get local LLM computation with less delay than the Humane device by simply providing more compute power than 4 cores of a E5-2696v3. How does a 32 core 1.05GHz Mali G72 from 2017 (as used in the Galaxy Note 9) compare to 4 cores of a 2.3GHz Intel CPU from 2015? Passmark says that Intel CPU can do 48GFlop with all 18 cores so 4 cores can presumably do about 10GFlop which seems less than the claimed 20-32GFlop of the Mali G72. It seems that with the right software even older Android phones could give adequate performance for a local LLM. The Alpaca model I’m testing with takes 4.2G of RAM to run which is usable in a Note 9 with 8G of RAM or a Pixel 8 Pro with 12G. A Pixel 8 Pro could have 4.2G of RAM reserved for a LLM and still have as much RAM for other purposes as my main laptop as of a few months ago. I consider the speed of Alpaca on my workstation to be acceptable but not great. If we can get FOSS phones running a LLM at that speed then I think it would be great for a first version – we can always rely on newer and faster hardware becoming available.
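Purely as an illustration of the kind of local latency test described above, something along these lines could be timed (the binary name, model file, and flags are placeholders for whatever llama.cpp-style runner and model you have locally):

  # Hypothetical local-latency check; adjust the binary, model path and
  # flags to your own setup.
  time ./main -m ./alpaca-7b-q4.bin -p "Who designed the Washington Monument?" -n 64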

Marques notes that the cause of some of the problems is likely a desire to make it a separate powerful product in the future, and that if they gave it phone connectivity at the start they would have to remove that later on. I think the real problem is that the profit motive is incompatible with good design. They want a product that’s stand-alone and justifies the purchase price plus subscription, and that means not making it a “phone accessory” – while I think the best thing for the user is to allow it to talk to a phone, a PC, a car, and anything else the user wants. He compares it to the Apple Vision Pro, which has the same issue of trying to be a stand-alone computer but not being properly capable of it.

One of the benefits that Marques cites for the AI Pin is the ability to capture voice notes. Dictaphones have been around for over 100 years and very few people have bought them, not even in the 80s when they became cheap. While almost everyone can occasionally benefit from being able to make a note of an idea when it’s not convenient to write it down there are few people who need it enough to carry a separate device, not even if that device is tiny. But a phone as a general purpose computing device with microphone can easily be adapted to such things. One possibility would be to program a phone to start a voice note when the volume up and down buttons are pressed at the same time or when some other condition is met. Another possibility is to have a phone have a hotkey function that varies by what you are doing, EG if bushwalking have the hotkey be to take a photo or if on a flight have it be taking a voice note. On the Mobile Apps page on the Debian wiki I created a section for categories of apps that I think we need [4]. In that section I added the following list:

  1. Voice input for dictation
  2. Voice assistant like Google/Apple
  3. Voice output
  4. Full operation for visually impaired people

One thing I really like about the AI Pin is that it has the potential to become a really good computing and personal assistant device for visually impaired people funded by people with full vision who want to legally control a computer while driving etc. I have some concerns about the potential uses of the AI Pin while driving (as Marques stated an aim to do), but if it replaces the use of regular phones while driving it will make things less bad.

Marques concludes his video by warning against buying a product based on the promise of what it can be in future. I bought the Librem5 on exactly that promise, the difference is that I have the source and the ability to help make the promise come true. My aim is to spend thousands of dollars on test hardware and thousands of hours of development time to help make FOSS phones a product that most people can use at low price with little effort.

Another interesting review of the pin is by Mrwhostheboss [5], one of his examples is of asking the pin for advice about a chair but without him knowing the pin selected a different chair in the room. He compares this to using Google’s apps on a phone and seeing which item the app has selected. He also said that he doesn’t want to make an order based on speech he wants to review a page of information about it. I suspect that the design of the pin had too much input from people accustomed to asking a corporate travel office to find them a flight and not enough from people who look through the details of the results of flight booking services trying to save an extra $20. Some people might say “if you need to save $20 on a flight then a $24/month subscription computing service isn’t for you”, I reject that argument. I can afford lots of computing services because I try to get the best deal on every moderately expensive thing I pay for. Another point that Mrwhostheboss makes is regarding secret SMS, you probably wouldn’t want to speak a SMS you are sending to your SO while waiting for a train. He makes it clear that changing between phone and pin while sharing resources (IE not having a separate phone number and separate data store) is a desired feature.

The most insightful point Mrwhostheboss made was when he suggested that if the pin had come out before the smartphone then things might have all gone differently, but now anything that’s developed has to be based around the expectations of phone use. This is something we need to keep in mind when developing FOSS software, there’s lots of different ways that things could be done but we need to meet the expectations of users if we want our software to be used by many people.

I previously wrote a blog post titled Considering Convergence [6] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues. I’ve written a blog post about Convergence vs Transferrence [7].

Related posts:

  1. PinePhonePro First Impression Hardware I received my PinePhone Pro [1] on Thursday, it...
  2. I Just Ordered a Nexus 6P Last year I wrote a long-term review of Android phones...
  3. Smart Phones Should Measure Charge Speed My first mobile phone lasted for days between charges. I...
Categories: FLOSS Project Planets

LN Webworks: How To Create Custom Token In Drupal: Step By Step Guide

Planet Drupal - Fri, 2024-04-26 03:53

In Drupal 10, you can create custom tokens using your custom module. Before creating custom tokens, you need to have the Drupal tokens module installed on your Drupal site. This contributed module already comes with some predefined tokens. These defined tokens can be used globally.

Steps to Create the Drupal Custom Tokens

1. Begin by creating a yourmodule.module file in your custom module directory.

2. Establish your custom token type.

 

Categories: FLOSS Project Planets

How To Use Modern QML Tooling in Practice

Planet KDE - Fri, 2024-04-26 03:37

Qt 5.15 introduced “Automatic Type Registration”. With it, a C++ class can be marked as “QML_ELEMENT” to be automatically registered to the QML engine. Qt 6 takes this to the next level and builds all of its tooling around the so-called QML Modules. Let’s talk about what this new infrastructure means to your application in practice and how to benefit from it in an existing project.

Continue reading How To Use Modern QML Tooling in Practice at basysKom GmbH.

Categories: FLOSS Project Planets

Russell Coker: Convergence vs Transference

Planet Debian - Fri, 2024-04-26 03:30

I previously wrote a blog post titled Considering Convergence [1] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues.

Currently the expected use is that if you have web pages open on Chrome on Android it’s possible to instruct Chrome on the desktop to open the same page if both instances of Chrome are signed in to the same GMail account. It’s also possible to view the Chrome history with CTRL-H, select “tabs from other devices” and load things that were loaded on other devices some time ago. This is very minimal support for moving work between devices and I think we can do better.

Firstly for web browsing the Chrome functionality is barely adequate. It requires having a heavyweight login process on all browsers that includes sharing stored passwords etc which isn’t desirable. There are many cases where moving work is desired without sharing such things, one example is using a personal device to research something for work. Also the Chrome method of sending web pages is slow and unreliable and the viewing history method gets all closed tabs when the common case is “get the currently open tabs from one browser window” without wanting the dozens of web pages that turned out not to be interesting and were closed. This could be done with browser plugins to allow functionality similar to KDE Connect for sending tabs and also the option of emailing a list of URLs or a JSON file that could be processed by a browser plugin on the receiving end. I can send email between my home and work addresses faster than the Chrome share to another device function can send a URL.
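To illustrate the “list of URLs as JSON” hand-off, something like this would work with standard tools (the file names are placeholders, and this is a sketch of the data exchange rather than an existing plugin):

  # Machine A: turn a plain text list of open-tab URLs into a JSON array.
  jq -R -s 'split("\n") | map(select(length > 0))' tabs.txt > tabs.json
  # Machine B: open every URL from the JSON file in the default browser.
  jq -r '.[]' tabs.json | while read -r url; do xdg-open "$url"; done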

For documents we need a way of transferring files. One possibility is to go the Chromebook route and have it all stored on the web. This means that you rely on a web based document editing system and the FOSS versions are difficult to manage. Using Google Docs or Sharepoint for everything is not something I consider an acceptable option. Also for laptop use being able to run without Internet access is a good thing.

There are a range of distributed filesystems that have been used for various purposes. I don’t think any of them cater to the use case of having a phone/laptop and a desktop PC (or maybe multiple PCs) using the same files.

For a technical user it would be an option to have a script that connects to a peer system (IE another computer with the same accounts and access control decisions) and rsync a directory of working files and the shell history, and then opens a shell with the HISTFILE variable, current directory, and optionally some user environment variables set to match. But this wouldn’t be the most convenient thing even for technical users.
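A minimal sketch of such a script, with the peer hostname, directory layout and history file chosen purely for illustration:

  #!/bin/sh
  # Pull work in progress from a peer machine and resume it locally.
  # 'peer.example.org', ~/work and ~/.work_history are placeholders.
  set -e
  PEER=peer.example.org
  rsync -a "$PEER:work/" "$HOME/work/"
  rsync -a "$PEER:.work_history" "$HOME/.work_history"
  cd "$HOME/work"
  # Note: a .bashrc that sets HISTFILE itself would override this.
  HISTFILE="$HOME/.work_history" exec bash -i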

For programs that are integrated into the desktop environment it’s possible for them to be restarted on login if they were active when the user logged out. The session tracking for that has about 1/4 the functionality needed for requesting a list of open files from the application, closing the application, transferring the files, and opening it somewhere else. I think that this would be a good feature to add to the XDG setup.

The model of having programs and data attached to one computer or one network server that terminals of some sort connect to worked well when computers were big and expensive. But computers continue to get smaller and cheaper so we need to think of a document based use of computers to allow things to be easily transferred as convenient. With convenience being important so the hacks of rsync scripts that can work for technical users won’t work for most people.

Related posts:

  1. Considering Convergence What is Convergence In 2013 Kyle Rankin (at the time...
  2. Google Chrome – the Security Implications Google have announced a new web browser – Chrome [1]....
  3. Bugs in Google Chrome I’m currently running google-chrome-beta version 5.0.375.55-r47796 on Debian/Unstable. It’s the...
Categories: FLOSS Project Planets

The Drop Times: Streamlining Local Development with DDEV, Docker, and NGROK

Planet Drupal - Fri, 2024-04-26 01:33
Discover how DDEV, Docker, and NGROK can revolutionize your local development process. Our latest guide dives into the seamless integration of these powerful tools, offering you the most efficient way to set up, develop, and test your Drupal projects right from your local machine. Streamline your workflow and enhance productivity with our comprehensive insights!
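As a rough sketch of that workflow (the project type, docroot and use of ddev share are assumptions based on DDEV's documented commands; adapt them to your project):

  # Configure and start a Drupal project locally with DDEV (Docker under the hood),
  # then expose it through an ngrok tunnel for external testing.
  ddev config --project-type=drupal10 --docroot=web
  ddev start
  ddev share    # publishes the local site via a temporary ngrok URL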
Categories: FLOSS Project Planets

Debug Academy: How to create a partial date field in Drupal (i.e. Year & Month without Day)

Planet Drupal - Thu, 2024-04-25 23:54

One of Drupal's main strengths is its data modeling.

But sometimes the appropriate field type comes with a form widget that isn't what we're looking for. For example, using a Date field results in the form displaying a date "widget" (form input) that includes a full date consisting of a day, month, and year, and optionally a time.

How to remove the time from a date field in Drupal

Because removing the time from date fields is such a common request, Drupal allows its removal without writing any custom code.

How to hide the time in Drupal's frontend

Fortunately, the date field has a highly configurable display on the frontend. By visiting the "Manage Display" page (or configuring the field's block, if using layout builder), you will have the option of selecting (or creating) a date format.

Follow these steps to change the date's output for your frontend:

ashrafabed Fri, 04/26/2024
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RQuantLib 0.4.22 on CRAN: Maintenance

Planet Debian - Thu, 2024-04-25 17:25

A new minor release 0.4.22 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib updates to QuantLib version 1.34, which was just released yesterday, and deprecates the use of an access point / type for price/yield conversion for bonds. We also made two minor changes earlier.

Changes in RQuantLib version 0.4.22 (2024-04-25)
  • Small code cleanup removing duplicate R code

  • Small improvements to C++ compilation flags

  • Robustify internal version comparison to accommodate RC releases

  • Adjustments to two C++ files for QuantLib 1.34

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Petter Reinholdtsen: 45 orphaned Debian packages moved to git, 391 to go

Planet Debian - Thu, 2024-04-25 16:00

Nine days ago, I started migrating to git those orphaned Debian packages that had no version control system listed in the debian/control file of their source. At the time there were 438 such packages. Now there are 391, according to the UDD. In reality it is slightly less, as there is a delay between uploads and UDD updates. In the nine days since, I have thus been able to work my way through ten percent of the packages. I am starting to run out of steam, and hope someone else will also help brushing some dust off these packages. Here is a recipe for how to do it. I start by picking a random package by querying the UDD for a list of 10 random packages from the set of remaining packages:

PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
  --username=udd-mirror udd -c "select source from sources \
  where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
  OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
  OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
  order by random() limit 10;"

Next, I visit http://salsa.debian.org/debian and search for the package name, to ensure no git repository already exists. If it does, I clone it and try to get it to an uploadable state, and add the Vcs-* entries in d/control to make the repository more widely known. These packages are a minority, so I will not cover that use case here.

For packages without an existing git repository, I run the following script debian-snap-to-salsa to prepare a git repository with the existing packaging.

#!/bin/sh
#
# See also https://bugs.debian.org/804722#31

set -e

# Move to this Standards-Version.
SV_LATEST=4.7.0

PKG="$1"

if [ -z "$PKG" ]; then
    echo "usage: $0 <source-package>"
    exit 1
fi

if [ -e "${PKG}-salsa" ]; then
    echo "error: ${PKG}-salsa already exist, aborting."
    exit 1
fi

if [ -z "$ALLOWFAILURE" ] ; then
    ALLOWFAILURE=false
fi

# Fetch every snapshotted source package.  Manually loop until all
# transfers succeed, as 'gbp import-dscs --debsnap' do not fail on
# download failures.
until debsnap --force -v $PKG || $ALLOWFAILURE ; do sleep 1; done

mkdir ${PKG}-salsa; cd ${PKG}-salsa
git init

# Specify branches to override any debian/gbp.conf file present in the
# source package.
gbp import-dscs --debian-branch=master --upstream-branch=upstream \
    --pristine-tar ../source-$PKG/*.dsc

# Add Vcs pointing to Salsa Debian project (must be manually created
# and pushed to).
if ! grep -q ^Vcs- debian/control ; then
    awk "BEGIN { s=1 } /^\$/ { if (s==1) { print \"Vcs-Browser: https://salsa.debian.org/debian/$PKG\"; print \"Vcs-Git: https://salsa.debian.org/debian/$PKG.git\" }; s=0 } { print }" < debian/control > debian/control.new && mv debian/control.new debian/control
    git commit -m "Updated vcs in d/control to Salsa." debian/control
fi

# Tell gbp to enforce the use of pristine-tar.
inifile +inifile debian/gbp.conf +create +section DEFAULT +key pristine-tar +value True
git add debian/gbp.conf
git commit -m "Added d/gbp.conf to enforce the use of pristine-tar." debian/gbp.conf

# Update to latest Standards-Version.
SV="$(grep ^Standards-Version: debian/control|awk '{print $2}')"
if [ $SV_LATEST != $SV ]; then
    sed -i "s/\(Standards-Version: \)\(.*\)/\1$SV_LATEST/" debian/control
    git commit -m "Updated Standards-Version from $SV to $SV_LATEST." debian/control
fi

if grep -q pkg-config debian/control; then
    sed -i s/pkg-config/pkgconf/ debian/control
    git commit -m "Replaced obsolete pkg-config build dependency with pkgconf." debian/control
fi

if grep -q libncurses5-dev debian/control; then
    sed -i s/libncurses5-dev/libncurses-dev/ debian/control
    git commit -m "Replaced obsolete libncurses5-dev build dependency with libncurses-dev." debian/control
fi

Sometimes the debsnap script fails to download some of the versions. In those cases I investigate, and if I decide the failing versions will not be missed, I call it using ALLOWFAILURE=true to ignore the problem and create the git repository anyway.

With the git repository in place, I do a test build (gbp buildpackage) to ensure the build is actually working. If it does not, I pick a different package, or if the build failure is trivial to fix, I fix it before continuing. At this stage I revisit http://salsa.debian.org/debian and create the project under this group for the package. I then follow the instructions to publish the local git repository. Here is an example from a recent package:

git remote add origin git@salsa.debian.org:debian/perl-byacc.git
git push --set-upstream origin master upstream pristine-tar
git push --tags

With a working build, I have a look at the build rules if I want to remove some more dust. I normally try to move to debhelper compat level 13, which involves removing debian/compat and modifying debian/control to build depend on debhelper-compat (= 13). I also test with 'Rules-Requires-Root: no' in debian/control and verify in debian/rules that hardening is enabled, and include all of these if the package still builds. If it fails to build with level 13, I try with 12, 11, 10 and so on until I find a level where it builds, as I do not want to spend a lot of time fixing build issues.

Sometimes, when I feel inspired, I make sure debian/copyright is converted to the machine-readable format, often by starting with 'debhelper -cc' and then cleaning up the autogenerated content until it matches reality. If I feel like it, I might also clean up non-dh-based debian/rules files to use the short style dh build rules.

Once I have removed all the dust I care to process for the package, I run 'gbp dch' to generate a debian/changelog entry based on the commits done so far, run 'dch -r' to switch from 'UNRELEASED' to 'unstable' and get an editor to make sure the 'QA upload' marker is in place and that all long commit descriptions are wrapped to sensible lengths, run 'debcommit --release -a' to commit and tag the new debian/changelog entry, run 'debuild -S' to build a source-only package, and 'dput ../perl-byacc_2.0-10_source.changes' to do the upload. During the entire process, and many times per step, I run 'debuild' to verify that the changes done still work. I also sometimes verify the set of built files using 'find debian' to see if I can spot any problems (like no file in usr/bin any more, or an empty package). I also try to fix all lintian issues reported at the end of each 'debuild' run.
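Collected in one place, those release steps look like this (reusing the perl-byacc example from above):

  gbp dch                  # changelog entry from the commits so far
  dch -r                   # switch UNRELEASED to unstable, check the 'QA upload' marker
  debcommit --release -a   # commit and tag the debian/changelog entry
  debuild -S               # build a source-only package
  dput ../perl-byacc_2.0-10_source.changes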

If I find Debian-specific patches, I try to ensure their metadata is fairly up to date, and sometimes I even try to reach out to upstream to make the upstream project aware of the patches. Most of my emails bounce, so the success rate is low. For projects with no Homepage entry in debian/control I try to track down one, and for packages with no debian/watch file I try to create one. But at least for some of the packages I have been unable to find a functioning upstream, and must skip both of these.

If I could handle ten percent in nine days, twenty people could complete the rest in less than five days. I use approximately twenty minutes per package, when I have twenty minutes of spare time to spend. Perhaps you have twenty minutes to spare too?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Drupal Association blog: Making the Most of Your Time at DrupalCon Portland

Planet Drupal - Thu, 2024-04-25 14:00

It’s less than two weeks to DrupalCon Portland 2024, and the excitement is building! If you’re gearing up for the biggest Drupal event of the year, we’re here to help you maximize your travel experience to Portland. Let’s dive right in!

Hotel Bookings at Great Prices

You still have a chance to book your DrupalCon Portland hotel within the official hotel block. By staying within the hotel block, you'll get the best proximity to the conference center as well as the chance to run into other Drupalists on your floor! Book now:

When and where is DrupalCon’24 happening in Portland?

DrupalCon North America 2024 will be held from 6th to 9th May 2024 at the Oregon Convention Center (yes, in-person!). Located right in the heart of the city, it is a perfect hub for exploration. You'll find hotels, restaurants, and shops just around the corner. It's also super easy to get to fun stuff like entertainment and hiking. With endless possibilities, you're sure to find something that suits your fancy.

Things you should NOT miss out on in Portland

May is a delightful time to be in Portland, with spring in full bloom. Enjoy the sunny weather and mild temperatures, making it the perfect season to explore the city's vibrant outdoor scene. There are several must-visit places that capture the city's unique charm.

1. Governor Tom McCall Waterfront Park

This is the perfect place to enjoy Portland's beauty while watching the river flow by. Visitors to the park can enjoy a variety of recreational activities, from leisurely strolls and picnics to jogging and biking along the paved pathways. The park also hosts numerous events throughout the year, including festivals, concerts, and outdoor markets, adding to its vibrant atmosphere.

One of the park's highlights is the Salmon Street Springs Fountain, where children and adults alike can cool off in the refreshing water jets during the warmer months. The park also features several monuments and public art installations, adding cultural and historical significance to its landscape.


Image Source: https://www.travelportland.com/attractions/governor-tom-mccall-waterfront-park/

2. Powell's City of Books

Powell's City of Books is a literary wonderland located in downtown Portland, Oregon. As the world's largest independent bookstore, Powell's spans an entire city block and boasts multiple floors filled with books of every genre imaginable. One of Powell's most unique features is its rare book room, home to a collection of rare and out-of-print titles, first editions, and signed copies that will delight bibliophiles and collectors alike.

In addition to its vast selection of books, Powell's hosts author readings, book signings, and other literary events, fostering a sense of community among book lovers from near and far.


Image Source: https://www.travelportland.com/attractions/powells/

3. Portland Art Museum

Founded in 1892, the Portland Art Museum is the oldest art museum on the West Coast and holds a rich and diverse collection of artworks spanning various time periods, cultures, and mediums. It is located in the heart of downtown Portland. One of the museum's highlights is its extensive collection of Native American art, which celebrates the rich artistic traditions of indigenous peoples from the Pacific Northwest and beyond. 

In addition to its permanent collection, the Portland Art Museum hosts rotating exhibitions that showcase both established and emerging artists, offering visitors the opportunity to engage with cutting-edge contemporary art and explore new perspectives.


Image Source: https://www.travelportland.com/attractions/portland-art-museum/

4. Voodoo Doughnut

Voodoo Doughnut is more than just a bakery; it's a Portland icon, a symbol of creativity, and a culinary experience like no other. It was founded in 2003 by friends Kenneth Pogson and Richard Shannon and has gained international fame for its wacky doughnut creations.

Located in the heart of downtown Portland, Voodoo Doughnut draws long lines of locals and tourists eager to sample its unique offerings. Among the must-try treats are the Voodoo Doll doughnut, with its pretzel stake and raspberry filling, and the Bacon Maple Bar topped with crispy bacon strips. If this has got you drooling (like me), make sure you head to this place while you’re in Portland.


Image Source: https://www.travelportland.com/attractions/voodoo-doughnut/

5. Oregon Museum of Science and Industry 

The Oregon Museum of Science and Industry (OMSI) is a beloved institution in Portland, Oregon, dedicated to inspiring curiosity and fostering a love of science through engaging exhibits, interactive displays, and educational programs. Located on the east bank of the Willamette River, OMSI's sprawling campus encompasses a variety of attractions that cater to visitors of all ages. 

OMSI's planetarium is a highlight, where visitors can explore the wonders of the night sky, learn about astronomy and astrophysics, and take virtual journeys through space. The museum also features a state-of-the-art IMAX theater, where visitors can experience immersive films on topics ranging from nature and wildlife to history and technology.


Image Source: https://www.travelportland.com/attractions/omsi/

Find more information to plan your trip here.

Categories: FLOSS Project Planets

Jonathan McDowell: Sorting out backup internet #3: failover

Planet Debian - Thu, 2024-04-25 13:38

With local recursive DNS and a 5G modem in place, the next thing was to work on some sort of automatic failover when the primary FTTP connection failed. My wife works from home too and I sometimes travel, so I wanted to make sure things didn’t require me to be around to kick them into switching the link in use.

First, let’s talk about what I didn’t do. One choice to try and ensure as seamless a failover as possible would be to get a VM somewhere out there. I’d then run Wireguard tunnels over both the FTTP + 5G links to the VM, and run some sort of routing protocol (RIP, OSPF?) over the links. Set preferences such that the FTTP is preferred, NAT v4 to the VM IP, and choose somewhere that gave me a v6 range I could just use directly.

This has the advantage that I’m actively checking link quality to the outside world, rather than just to the next hop. It also means, if the failover detection is fast enough, that existing sessions stay up rather than needing to be re-established.

The downsides are increased complexity, adding another point of potential failure (the VM + provider), the impact on connection quality (even with a decent endpoint it’s an extra hop and latency), and finally the increased cost involved.

I can cope with having to reconnect my SSH sessions in the event of a failure, and I’d rather be sure I can make full use of the FTTP connection, so I didn’t go this route. I chose to rely on local link failure detection to provide the signal for failover, and a set of policy routing on top of that to make things a bit more seamless.

Local link failure turns out to be fairly easy. My FTTP is a PPPoE configuration, so in /etc/ppp/peers/aquiss I have:

lcp-echo-interval 1
lcp-echo-failure 5
lcp-echo-adaptive

Which gives me a failover of ~ 5s if the link goes down.

I’m operating the 5G modem in “bridge” rather than “router” mode, which means I get the actual IP from the 5G network via DHCP. The DHCP lease the modem hands out is under a minute, and in the event of a network failure it only hands out a 192.168.254.x IP to talk to its web interface. As the 5G modem is the last resort path I choose not to do anything special with this, but the information is at least there if I need it.

To allow both interfaces to be up and the FTTP to be preferred I’m simply using route metrics. For the PPP configuration that’s:

defaultroute-metric 100

and for the 5G modem I have:

iface sfp.31 inet dhcp
    metric 1000
    vlan-raw-device sfp

There’s a wrinkle in that pppd will not replace an existing default route, so I’ve created /etc/ppp/ip-up.d/default-route to ensure it’s added:

#!/bin/bash

[ "$PPP_IFACE" = "pppoe-wan" ] || exit 0

# Ensure we add a default route; pppd will not do so if we have
# a lower pref route out the 5G modem
ip route add default dev pppoe-wan metric 100 || true

Additionally, in /etc/dhcp/dhclient.conf I’ve disabled asking for any server details (DNS, NTP, etc) - I have internal setups for the servers I want, and don’t want to be trying to select things over the 5G link by default.

However, what I do want is to be able to access the 5G modem web interface and explicitly route some traffic out that link (e.g. so I can add it to my smokeping tests). For that I need some source based routing.

First step, add a 5g table to /etc/iproute2/rt_tables:

16 5g

Then I ended up with the following in /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route, which is more complex than I’d like but seems to do what I want:

#!/bin/sh

case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        # Check if we've actually changed IP address
        if [ -z "$old_ip_address" ] ||
           [ "$old_ip_address" != "$new_ip_address" ] ||
           [ "$reason" = "BOUND" ] || [ "$reason" = "REBOOT" ]; then
            if [ ! -z "$old_ip_address" ]; then
                ip rule del from $old_ip_address lookup 5g
            fi
            ip rule add from $new_ip_address lookup 5g
            ip route add default dev sfp.31 table 5g || true
            ip route add 192.168.254.1 dev sfp.31 2>/dev/null || true
        fi
        ;;
    EXPIRE)
        if [ ! -z "$old_ip_address" ]; then
            ip rule del from $old_ip_address lookup 5g
        fi
        ;;
    *)
        ;;
esac

What does all that aim to do? We want to ensure traffic directed to the 5G WAN address goes out the 5G modem, so I can SSH into it even when the main link is up. So we add a rule directing traffic from that IP to hit the 5g routing table, and a default route in that table which uses the 5G link. There’s no configuration for the FTTP connection in that table, so if the 5G link is down the traffic gets dropped, which is what we want. We also configure 192.168.254.1 to go out the link to the modem, as that’s where the web interface lives.
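To check the result, the rule and the extra routing table can be inspected directly (the output will vary with your addresses):

  ip rule show            # should include: from <5G WAN address> lookup 5g
  ip route show table 5g  # the default route out sfp.31, plus the 192.168.254.1 modem route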

I also have a curl callout (curl --interface sfp.31 … to ensure it goes out the 5G link) after the routes are configured to set dynamic DNS with Mythic Beasts, which helps with knowing where to connect back to. I seem to see IP address changes on the 5G link every couple of days at least.

Additionally, I have an entry in the interfaces configuration carving out the top set of the netblock my smokeping server is in:

up ip rule add from 192.0.2.224/27 lookup 5g

My smokeping /etc/smokeping/config.d/Probes file then looks like:

*** Probes ***

+ FPing
binary = /usr/bin/fping

++ FPingNormal

++ FPing5G
sourceaddress = 192.0.2.225

+ FPing6
binary = /usr/bin/fping

which allows me to use probe = FPing5G for targets to test them over the 5G link.

That mostly covers the functionality I want for a backup link. There’s one piece that isn’t quite solved, however: IPv6. That can wait for another post.

Categories: FLOSS Project Planets

Pages