Feeds

Dries Buytaert: Drupal adventures in Japan and Australia

Planet Drupal - Thu, 2024-07-11 15:09

Next week, I'm traveling to Japan and Australia. I've been to both countries before and can't wait to return – they're among my favorite places in the world.

My goal is to connect with the local Drupal community in each country, discussing the future of Drupal, learning from each other, and collaborating.

I'll also be connecting with Acquia's customers and partners in both countries, sharing our vision, strategy and product roadmap. As part of that, I look forward to spending some time with the Acquia teams as well – about 20 employees in Japan and 35 in Australia.

I'll present at a Drupal event in Tokyo the evening of March 14th at Yahoo! Japan.

While in Australia, I'll be attending Drupal South, held at the Sydney Masonic Centre from March 20-22. I'm excited to deliver the opening keynote on the morning of March 20th, where I'll delve into Drupal's past, present, and future.

I look forward to being back in Australia and Japan, reconnecting with old friends and the local communities.

Categories: FLOSS Project Planets

Dries Buytaert: Two years later: is my Web3 website still standing?

Planet Drupal - Thu, 2024-07-11 15:09

Two years ago, I launched a simple Web3 website using IPFS (InterPlanetary File System) and ENS (Ethereum Name Service). Back then, Web3 tools were getting a lot of media attention and I wanted to try it out.

Since I set up my Web3 website two years ago, I have basically forgotten about it: I didn't update it or pay any attention to it. But now that we've hit the two-year mark, I'm curious: is my Web3 website still online?

At that time, I also stated that Web3 was not fit for hosting modern web applications, except for a small niche: static sites requiring high resilience and infrequent content updates.

I was also curious to explore the evolution of Web3 technologies to see if they became more applicable for website hosting.

My original Web3 experiment

In my original blog post, I documented the process of setting up what could be called the "Hello World" of Web3 hosting. I stored an HTML file on IPFS, ensured its availability using "pinning services", and made it accessible using an ENS domain.

For those with a basic understanding of Web3, here is a summary of the steps I took to launch my first Web3 website two years ago:

  1. Purchased an ENS domain name: I used a crypto wallet with Ethereum to acquire dries.eth through the Ethereum Name Service, a decentralized alternative to the traditional DNS (Domain Name System).
  2. Uploaded an HTML File to IPFS: I uploaded a static HTML page to the InterPlanetary File System (IPFS), which involved running my own IPFS node and utilizing various pinning services like Infura, Fleek, and Pinata. These pinning services ensure that the content remains available online even when my own IPFS node is offline.
  3. Accessed the website: I confirmed that my website was accessible through IPFS-compatible browsers.
  4. Mapped my webpage to my domain name: As the last step, I linked my IPFS-hosted site to my ENS domain dries.eth, making the web page accessible under an easy domain name.

If the four steps above are confusing to you, I recommend reading my original post. It is over 2,000 words, complete with screenshots and detailed explanations of the steps above.

Checking the pulse of various Web3 services

As the first step in my check-up, I wanted to verify if the various services I referenced in my original blog post are still operational.

The results, displayed in the table below, are really encouraging: Ethereum, ENS, IPFS, Filecoin, Infura, Fleek, Pinata, and web3.storage are all operational.

The two main technologies – ENS and IPFS – are both actively maintained and developed. This indicates that Web3 technology has built a robust foundation.

Service      | Description                                                                                            | Still around in February 2024?
ENS          | A blockchain-based naming protocol offering DNS for Web3, mapping domain names to Ethereum addresses. | Yes
IPFS         | A peer-to-peer protocol for storing and sharing data in a distributed file system.                    | Yes
Filecoin     | A blockchain-based storage network and cryptocurrency that incentivizes data storage and replication. | Yes
Infura       | Provides tools and infrastructure to manage content on IPFS, plus tools for developers to connect their applications to blockchain networks and deploy smart contracts. | Yes
Fleek        | A platform for building websites using IPFS and ENS.                                                  | Yes
Pinata       | Provides tools and infrastructure to manage content on IPFS, and more recently Farcaster applications. | Yes
web3.storage | Provides tools and infrastructure to manage content on IPFS, with support for Filecoin.               | Yes

Is my Web3 website still up?

Seeing all these Web3 services operational is encouraging, but the ultimate test is to check if my Web3 webpage, dries.eth, remained live. It's one thing for these services to work, but another for my site to function properly. Here is what I found in a detailed examination:

  1. Domain ownership verification: A quick check on etherscan.io confirmed that dries.eth is still registered to me. Relief!
  2. ENS registrar access: Using my crypto wallet, I could easily log into the ENS registrar and manage my domains. I even successfully renewed dries.eth as a test.
  3. IPFS content availability: My webpage is still available on IPFS, thanks to having pinned it two years ago. Logging into Fleek and Pinata, I found my content on their admin dashboards.
  4. Web3 and ENS gateway access: I can visit dries.eth using a Web3 browser, and also via an IPFS-compatible ENS gateway like https://dries.eth.limo/ – a privacy-centric service, new since my initial blog post.

The verdict? Not only are these Web3 services still operational, but my webpage also continues to work!

This is particularly noteworthy given that I hadn't logged in to these services, performed any maintenance, or paid any hosting fees for two years (the pinning services I'm using have a free tier).

Visit my Web3 page yourself

For anyone interested in visiting my Web3 page (perhaps your first Web3 visit?), there are several methods to choose from, each with a different level of Web3-ness.

  • Use a Web3-enabled browser: Browsers such as Brave and Opera offer built-in ENS and IPFS support. They can resolve ENS addresses and interpret IPFS addresses, making it as easy to navigate IPFS content as it is to browse traditional web content over HTTP or HTTPS.
  • Install a Web3 browser extension: If your favorite browser does not support Web3 out of the box, adding a browser extension like MetaMask can help you access Web3 applications. MetaMask works with Chrome, Firefox, and Edge. It enables you to use .eth domains for doing Ethereum transactions or for accessing content on IPFS.
  • Access through an ENS gateway: For those looking for the simplest way to access Web3 content without installing anything new, using an ENS gateway, such as eth.limo, is the easiest method. This gateway maps ENS domains to DNS, offering direct navigation to Web3 sites like mine at https://dries.eth.limo/. It serves as a simple bridge between Web2 (the conventional web) and Web3.

Streamlining content updates with IPNS

In my original post, I highlighted various challenges, such as the limitations for hosting dynamic applications, the cost of updates, and the slow speed of these updates. Although these issues still exist, my initial analysis was conducted with an incomplete understanding of the available technology. I want to delve deeper into these limitations, and refine my previous statements.

Some of these challenges stem from the fact that IPFS operates as a "content-addressed network". Unlike traditional systems that use URLs or file paths to locate content, IPFS uses a unique hash of the content itself. This hash is used to locate and verify the content, but also to facilitate decentralized storage.

While the principle of addressing content by a hash is super interesting, it also introduces some complications: whenever content is updated, its hash changes, making it tricky to link to the updated content. Specifically, every time I updated my Web3 site's content, I had to update my ENS record and pay a transaction fee on the Ethereum network.
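
The effect is easy to see with the IPFS command line tools: adding a file returns a CID (content identifier) derived from its bytes, and adding a modified copy returns a completely different one. A quick sketch (the file name is a placeholder, and the printed CIDs will differ):

$ ipfs add index.html            # returns a CID derived from the content
$ echo '<!-- edited -->' >> index.html
$ ipfs add index.html            # even a one-character change yields a new CID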

At the time, I wasn't familiar with the InterPlanetary Name System (IPNS). IPNS, not to be confused with IPFS, addresses this challenge by assigning a mutable name to content on IPFS. You can think of IPNS as providing an "alias" or "redirect" for IPFS addresses: the IPNS address always stays the same and points to the latest IPFS address. It effectively eliminates the necessity of updating ENS records with each content change, cutting down on expenses and making the update process more automated and efficient.

To leverage IPNS, you have to take the following steps:

  1. Upload your HTML file to IPFS and receive an IPFS hash.
  2. Publish this hash to IPNS, creating an IPNS hash that directs to the latest IPFS hash.
  3. Link your ENS domain to this IPNS hash. Since the IPNS hash remains constant, you only need to update your ENS record once.
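
With the IPFS command line tools, the first two steps boil down to something like this (the CID value is a placeholder; step 3 happens in the ENS manager rather than on the command line):

$ ipfs add index.html              # step 1: upload, returns an IPFS CID
$ ipfs name publish /ipfs/<cid>    # step 2: publish the CID under your node's IPNS name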

Without IPNS, updating content involved:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the ENS record with the new IPFS hash, which costs some Ether and can take a few minutes.

With IPNS, updating content involves:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the IPNS record to reference this new hash, which is free and almost instant.

Although IPNS is a faster and more cost-effective approach compared to the original method, it still carries a level of complexity. There is also a minor runtime delay due to the extra redirection step. However, I believe this tradeoff is worth it.

Updating my Web3 site to use IPNS

With this newfound knowledge, I decided to use IPNS for my own site. I generated an IPNS hash using both the IPFS desktop application (see screenshot) and IPFS' command line tools:

$ ipfs name publish /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q
> Published to k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a: /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q

The IPFS Desktop application showing my index.html file with an option to 'Publish to IPNS'.

After generating the IPNS hash, I was able to visit my site in Brave using the IPFS protocol at ipfs://bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q, or via the IPNS protocol at ipns://k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a.

My Web3 site in Brave using IPNS.

Next, I updated the ENS record for dries.eth to link to my IPNS hash. This change cost me 0.0011 ETH (currently $4.08 USD), as shown in the Etherscan transaction. Once the transaction was processed, dries.eth began directing to the new IPNS address.

A transaction confirmation on the ENS website, showing a successful update for dries.eth.

Rolling back my IPNS record in ENS

Unfortunately, my excitement was short-lived. A day later, dries.eth stopped working. IPNS records, it turns out, need to be kept alive – a lesson learned the hard way.

While IPFS content can be persisted through "pinning", IPNS records require periodic "republishing" to remain active. Essentially, the network's Distributed Hash Table (DHT) may drop IPNS records after a certain amount of time, typically 24 hours. To prevent an IPNS record from being dropped, the owner must "republish" it before the DHT forgets it.
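
In principle, keeping a record alive just means re-running the publish command before it expires; in Kubo, the main IPFS implementation, the record lifetime is even tunable (the CID is a placeholder, and the exact default lifetime is an assumption worth checking against your IPFS version):

$ ipfs name publish --lifetime 24h /ipfs/<cid>    # refresh the IPNS record before the DHT forgets it

The catch is that something has to run this on a schedule, from a node that is actually online.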

I found out that the pinning services I use – Dolphin, Fleek and Pinata – don't support IPNS republishing. Looking into it further, it turns out few IPFS providers do.

During my research, I discovered Filebase, a small Boston-based company with fewer than five employees that I hadn't come across before. Interestingly, they provide both IPFS pinning and IPNS republishing. However, to pin my existing HTML file and republish its IPNS hash, I had to subscribe to their service at a cost of $20 per month.

Faced with the challenge of keeping my IPNS hash active, I found myself at a crossroads: either fork out $20 a month for a service like Filebase that handles IPNS republishing for me, or take on the responsibility of running my own IPFS node.

Of course, the whole point of decentralized storage is that people run their own nodes. However, considering the scope of my project – a single HTML file – the effort of running a dedicated node seemed disproportionate. I'm also running my IPFS node on my personal laptop, which is not always online. Maybe one day I'll try setting up a dedicated IPFS node on a Raspberry Pi or similar setup.

Ultimately, I decided to switch my ENS record back to the original IPFS link. This change, documented in the Etherscan transaction, cost me 0.002 ETH (currently $6.88 USD).

Although IPNS works, or can work, it just didn't work for me. Despite the setback, the whole experience was a great learning journey.

(Update: A couple of days after publishing this blog post, someone kindly recommended https://dwebservices.xyz/, claiming their free tier includes IPNS republishing. Although I haven't personally tested it yet, a quick look at their about page suggests they might be a promising solution.)

Web3 remains too complex for most people

Over the past two years, Web3 hosting hasn't disrupted the mainstream website hosting market. Despite the allure of Web3, mainstream website hosting is simple, reliable, and meets the needs of nearly all users.

Even though a significant upgrade of the Ethereum network reduced its energy consumption by over 99% through the transition to a Proof of Stake (PoS) consensus mechanism, environmental considerations, especially the carbon footprint associated with blockchain technologies, continue to pose challenges for the widespread adoption of Web3 technologies. (Note: ENS operates on the blockchain, but IPFS does not.)

As I went through the check-up, I discovered islands of innovation and progress. Wallets and ENS domains got easier to use. However, the overall process of creating a basic website with IPFS and ENS remains relatively complex compared to the simplicity of Web2 hosting.

The need for a SQL-compatible Web3 database

Modern web applications like those built with Drupal and WordPress rely on a technology stack that includes a file system, a domain name system (e.g. DNS), a database (e.g. MariaDB or MySQL), and a server-side runtime environment (e.g. PHP).

While IPFS and ENS offer decentralized alternatives for the first two, the equivalents for databases and runtime environments are less mature. This limits the types of applications that can easily move from Web2 to Web3.

A major breakthrough would be the development of a decentralized database that is compatible with SQL, but currently this does not seem to exist. Ensuring data integrity and confidentiality across multiple nodes without a central authority, while also meeting the throughput demands of modern web applications, may simply be too hard a problem to solve.

After all, blockchains, as decentralized databases, have been in development for over a decade, yet lack support for the SQL language and fall short in speed and efficiency required for dynamic websites.

The need for a distributed runtime

Another critical component for modern websites is the runtime environment, which executes the server-side logic of web applications. Traditionally, this has been the domain of PHP, Python, Node.js, Java, etc.

WebAssembly (WASM) could emerge as a potential solution. It could make for an interesting decentralized solution as WASM binaries can be hosted on IPFS.

However, when WASM runs on the client-side – i.e. in the browser – it can't deliver the full capabilities of a server-side environment. This limitation makes it challenging to fully replicate traditional web applications.

So for now, Web3's applications are quite limited. While it's possible to host static websites on IPFS, dynamic applications requiring database interactions and server-side processing are difficult to transition to Web3.

Bridging the gap between Web2 and Web3

In the short term, the most likely path forward is blending decentralized and traditional technologies. For example, a website could store its static files on IPFS while relying on traditional Web2 solutions for its dynamic features.

Looking to the future, initiatives like OrbitDB's peer-to-peer database, which integrates with IPFS, show promise. However, OrbitDB lacks compatibility with SQL, meaning applications would need to be redesigned rather than simply transferred.

Web3 site hosting remains niche

Even the task of hosting static websites, which don't need a database or server-side processing, is relatively niche within the Web3 ecosystem.

As I wrote in my original post: "In its current state, IPFS and ENS offer limited value to most website owners, but tremendous value to a very narrow subset of all website owners." This observation remains accurate today.

IPFS and ENS stand out for their strengths in censorship resistance and reliability. However, for the majority of users, the convenience and adequacy of Web2 for hosting static sites often outweigh these benefits.

The key to broader acceptance of new technologies, like Web3, hinges on either discovering new mass-market use cases or significantly enhancing the user experience for existing ones. Web3 has not found a universal application or surpassed Web2 in user experience.

The popularity of SaaS platforms underscores this point. They dominate not because they're the most resilient or robust options, but because they're the most convenient. Despite the benefits of resilience and autonomy offered by Web3, most individuals opt for less resilient but more convenient SaaS solutions.

Conclusion

Despite the billions invested in Web3 and notable progress, its use for website hosting still has significant limitations.

The main challenge for the Web3 community is to either develop new, broadly appealing applications or significantly improve the usability of existing technologies.

Website hosting falls into the category of existing use cases.

Unfortunately, Web3 remains mostly limited to static websites, as it does not yet offer robust alternatives to SQL databases and server-side runtime.

Even within the limited scope of static websites, improvements to the user experience have been marginal, focused on individual parts of the technology stack. The overall end-to-end experience remains complex.

Nonetheless, the fact that my Web3 page is still up and running after two years is encouraging, showing the robustness of the underlying technology, even if its current use remains limited. I've grown quite fond of IPFS, and I hope to do more useful experiments with it in the future.

All things considered, I don't see Web3 taking the website hosting world by storm any time soon. That said, over time, Web3 could become significantly more attractive and functional. All in all, keeping an eye on this space is definitely fun and worthwhile.

Categories: FLOSS Project Planets

Dries Buytaert: Acquia a Leader in the 2024 Gartner Magic Quadrant for Digital Experience Platforms

Planet Drupal - Thu, 2024-07-11 15:09

For the fifth year in a row, Acquia has been named a Leader in the Gartner Magic Quadrant for Digital Experience Platforms (DXP).

Acquia received this recognition from Gartner based on both the completeness of product vision and ability to execute.

Central to our vision and execution is a deep commitment to openness. Leveraging Drupal, Mautic and open APIs, we've built the most open DXP, empowering customers and partners to tailor our platform to their needs.

Our emphasis on openness extends to ensuring our solutions are accessible and inclusive, making them available to everyone. We also prioritize building trust through data security and compliance, integral to our philosophy of openness.

We're proud to be included in this report and thank our customers and partners for their support and collaboration.

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Jim Murphy, Mike Lowndes, John Field - February 21, 2024.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Categories: FLOSS Project Planets

Dries Buytaert: Satoshi Nakamoto's Drupal adventure

Planet Drupal - Thu, 2024-07-11 15:09

Martti Malmi, an early contributor to the Bitcoin project, recently shared a fascinating piece of internet history: an archive of private emails between himself and Satoshi Nakamoto, Bitcoin's mysterious founder.

The identity of Satoshi Nakamoto remains one of the biggest mysteries in the technology world. Despite extensive investigations, speculative reports, and numerous claims over the years, the true identity of Bitcoin's creator(s) is still unknown.

Martti Malmi released these private conversations in reaction to a court case focused on the true identity of Satoshi Nakamoto and the legal entitlements to the Bitcoin brand and technology.

The emails provide some interesting details into Bitcoin's early days, and might also provide some new clues about Satoshi's identity.

Satoshi and Martti worked together on a variety of different things, including the relaunch of the Bitcoin website. Their goal was to broaden public understanding and awareness of Bitcoin.

And to my surprise, the emails reveal they chose Drupal as their preferred CMS! (Thanks to Jeremy Andrews for making me aware.)

The emails detail Satoshi's hands-on involvement, from installing Drupal themes, to configuring Drupal's .htaccess file, to exploring Drupal's multilingual capabilities.

At some point in the conversation, Satoshi expressed reservations about Drupal's forum module.

For what it is worth, this proves that I'm not Satoshi Nakamoto. Had I been, I'd have picked Drupal right away, and I would never have questioned Drupal's forum module.

Jokes aside, as Drupal's Founder and Project Lead, learning about Satoshi's use of Drupal is a nice addition to Drupal's rich history. Almost every day, I'm inspired by the unexpected impact Drupal has.

Categories: FLOSS Project Planets

OPC UA: Programming against Type Descriptions

Planet KDE - Thu, 2024-07-11 14:07

OPC UA client code that relies on hardcoded NodeIds is brittle and often only works with a specific OPC UA server instance. This article shows the proper way to write robust and portable OPC UA client code.

Continue reading OPC UA: Programming against Type Descriptions at basysKom GmbH.

Categories: FLOSS Project Planets

Drupal.org blog: Ending Packages.Drupal.org support for Composer 1

Planet Drupal - Thu, 2024-07-11 12:48

To prepare Drupal.org infrastructure for providing automatic updates for Drupal and upgrading Drupal.org itself, we are removing support for Composer 1 on Packages.Drupal.org.

  • New Drupal.org packages & releases will not be available for Composer 1 after August 12, 2024.
  • Composer 1 support will be dropped after October 1, 2024.

Preparing your site for Composer 2 is documentation for updating Drupal site codebases with Composer 2.

Deprecating Packagist.org support for Composer 1.x is Packagist.org’s announcement.

Less than 1% of our Composer traffic comes from Composer 1. Drupal’s automatic updates require Composer 2. Packagist.org has already reduced support for Composer 1. So now is a good time to upgrade to Composer 2, if you have not already.
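
If you installed Composer as a standalone phar, upgrading in place is usually a one-liner (for OS-packaged installs, use your system package manager instead):

composer self-update --2
composer --version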

Follow #3201223: Deprecate composer 1 for detailed status updates.

Categories: FLOSS Project Planets

mark.ie: My LocalGov Drupal contributions for week-ending July 12th, 2024

Planet Drupal - Thu, 2024-07-11 12:00

Here's what I've been working on for my LocalGov Drupal contributions this week. Thanks to Big Blue Door for sponsoring the time to work on these.

Categories: FLOSS Project Planets

Python Software Foundation: Announcing Our New Infrastructure Engineer

Planet Python - Thu, 2024-07-11 10:34

We are excited to announce that Jacob Coffee has joined the Python Software Foundation staff as an Infrastructure Engineer, bringing his experience as an Open Source maintainer, dedicated homelab maintainer, and professional systems administrator to the team. Jacob will be the second member of our Infrastructure staff, reporting to Director of Infrastructure, Ee Durbin.

Joining our team, Jacob will share the responsibility of maintaining the PSF systems and services that serve the Python community, CPython development, and our internal operations. This will add crucially needed redundancy to the team as well as capacity to undertake new initiatives with our infrastructure.


Jacob shares, “I’m living the dream by supporting the PSF mission AND working in open source! I’m thrilled to be a part of the PSF team and deepen my contributions to the Python community.”


In just the first few days, Jacob has already shown initiative on multiple projects and issues throughout the infrastructure and we’re excited to see the impact he’ll have on the PSF and broader Python community. We hope that you’ll wish him a warm welcome as you see him across the repos, issue trackers, mailing lists, and discussion forums!


Categories: FLOSS Project Planets

Nicola Iarocci: Microsoft MVP

Planet Python - Thu, 2024-07-11 09:11

Last night, I was at an outdoor theatre with Serena, watching Anatomy of a Fall (an excellent film). Outdoor theatres are becoming rare, which is a pity, and Arena del Sole is lovely with its strong vintage, 80s vibe. There’s little as pleasant as watching a film under the stars with your loved one on a quiet summer evening.

Anyway, during the intermission, I glanced at my e-mails and discovered I had once again been granted the Microsoft MVP Award. It is the ninth consecutive year, and I’m grateful and happy the journey continues. At this point, I should put in some extra effort to reach the 10-year milestone next year.

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending July 12th, 2024

Planet Drupal - Thu, 2024-07-11 08:59

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

Real Python: Quiz: Build a Blog Using Django, GraphQL, and Vue

Planet Python - Thu, 2024-07-11 08:00

In this quiz, you’ll test your understanding of building a Django blog back end and a Vue front end, using GraphQL to communicate between them.

You’ll revisit how to run the Django server and a Vue application on your computer at the same time.

Categories: FLOSS Project Planets

Qt Creator 14 RC released

Planet KDE - Thu, 2024-07-11 06:59

We are happy to announce the release of Qt Creator 14 RC!

Categories: FLOSS Project Planets

Petter Reinholdtsen: More than 200 orphaned Debian packages moved to git, 216 to go

Planet Debian - Thu, 2024-07-11 06:30

In April, I started migrating orphaned Debian packages without any version control system listed in debian/control to git. This morning, my Debian QA page finally reached 200 QA packages migrated. In reality there are a few more, as the packages uploaded by someone else after my initial upload have disappeared from my QA uploads list. As I am running out of steam and will most likely focus on other parts of Debian moving forward, I hope someone else will find time to continue the migration and bring the number of orphaned packages without any version control system down to zero. Here is the updated recipe if someone wants to help out.

To locate packages to work on, the following one-liner can be used:

PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
  --username=udd-mirror udd -c "select source from sources \
    where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
    OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
    OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
    order by random() limit 10;"

Pick a random package from the list and run the latest edition of the script debian-snap-to-salsa with the package name as the argument to prepare a git repository with the existing packaging. This will download old Debian packages from snapshot.debian.org. Note that very recent uploads will not be included, so check out the package on tracker.debian.org. Next, run gbp buildpackage --git-ignore-new to verify that the package builds as it should, then visit https://salsa.debian.org/debian/ and make sure there is not already a git repository for the package there. I also run git log -p debian/control and look for Vcs entries to check if the package used to have a git repository on Alioth, and see if that can be a useful starting point moving forward. If all this checks out, I create a new gitlab project below the Debian group on salsa, push the package source there and upload a new version. I tend to also ensure build hardening is enabled, if it proves to be easy, and check if I can easily fix any lintian issues or bug reports. If the process takes more than 20 minutes, I drop the package and move on to another one.
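
Condensed into commands, one iteration of the recipe looks roughly like this (the package name is a placeholder, and the checkout path depends on where debian-snap-to-salsa puts it):

debian-snap-to-salsa some-package        # prepare a git repository from snapshot.debian.org
cd some-package
gbp buildpackage --git-ignore-new        # verify the package still builds
git log -p debian/control                # look for old Vcs entries pointing at Alioth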

If I find patches in debian/patches/ that have not yet been passed upstream, I send an email to make sure upstream knows about them. This has proved to be a valuable step, and has caused several new releases of software that initially appeared abandoned. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Robin Wilson: Searching an aerial photo with text queries – a demo and how it works

Planet Python - Thu, 2024-07-11 05:35

Summary: I’ve created a demo web app where you can search an aerial photo of Southampton, UK using text queries such as "roundabout", "tennis court" or "ship". It uses vector embeddings to do this – which I explain in this blog post.

In this post I’m going to try and explain a bit more about how this works.

Firstly, I should explain that the only data used for the searching is the aerial image data itself – even though a number of these things will be shown on the OpenStreetMap map, none of that data is used, so you can also search for things that wouldn’t be shown on a map (like a blue bus).

The main technique that lets us do this is vector embeddings. I strongly suggest you read Simon Willison’s great article/talk on embeddings, but I’ll try and explain here too. An embedding model lets you turn a piece of data (for example, some text, or an image) into a constant-length vector – basically just a sequence of numbers. This vector would look something like [0.283, -0.825, -0.481, 0.153, ...] and would be the same length (often hundreds or even thousands of elements long) regardless of how long the data you fed into it was.

In this case, I’m using the SkyCLIP model which produces vectors that are 768 elements long. One of the key features of these vectors are that the model is trained to produce similar vectors for things that are similar in some way. For example, a text embedding model may produce a similar vector for the words "King" and "Queen", or "iPad" and "tablet". The ‘closer’ a vector is to another vector, the more similar the data that produced it.

The SkyCLIP model was trained on image-text pairs – so a load of images that had associated text describing what was in the image. SkyCLIP’s training data "contains 5.2 million remote sensing image-text pairs in total, covering more than 29K distinct semantic tags" – and these semantic tags and the text descriptions of them were generated from OpenStreetMap data.

Once we’ve got the vectors, how do we work out how close vectors are? Well, we can treat the vectors as encoding a point in 768-dimensional space. That’s a bit difficult to visualise – so imagine a point in 2- or 3-dimensional space as that’s easier, plotted on a graph. Vectors for similar things will be located physically closer on the graph – and one way of calculating similarity between two vectors is just to measure the multi-dimensional distance on a graph. In this situation we’re actually using cosine similarity, which gives a number between -1 and +1 representing the similarity of two vectors.
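
As a concrete illustration (a minimal sketch, not code from the app itself), cosine similarity between two embedding vectors can be computed like this:

import torch

def cosine_similarity(a: torch.Tensor, b: torch.Tensor) -> float:
    # Dot product of the unit-normalized vectors = cosine of the angle between them.
    return float((a / a.norm()) @ (b / b.norm()))

# Similar vectors score close to +1, unrelated ones near 0, opposite ones near -1.
print(cosine_similarity(torch.tensor([1.0, 0.0]), torch.tensor([0.9, 0.1])))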

So, we now have a way to calculate an embedding vector for any piece of data. The next step we take is to split the aerial image into lots of little chunks – we call them ‘image chips’ – and calculate the embedding of each of those chunks, and then compare them to the embedding calculated from the text query.

I used the RasterVision library for this, and I’ll show you a bit of the code. First, we generate a sliding window dataset, which will allow us to then iterate over image chips. We define the size of the image chip to be 200×200 pixels, with a ‘stride’ of 100 pixels, which means each image chip will overlap the ones on each side by 100 pixels. We then configure it to resize the output to 224×224 pixels, which is the size that the SkyCLIP model expects as input.

ds = SemanticSegmentationSlidingWindowGeoDataset.from_uris(
    image_uri=uri,
    image_raster_source_kw=dict(channel_order=[0, 1, 2]),
    size=200,
    stride=100,
    out_size=224,
)

We then iterate over all of the image chips, run the model to calculate the embedding and stick it into a big array:

dl = DataLoader(ds, batch_size=24)
EMBEDDING_DIM_SIZE = 768
embs = torch.zeros(len(ds), EMBEDDING_DIM_SIZE)
with torch.inference_mode(), tqdm(dl, desc='Creating chip embeddings') as bar:
    i = 0
    for x, _ in bar:
        x = x.to(DEVICE)
        emb = model.encode_image(x)
        embs[i:i + len(x)] = emb.cpu()
        i += len(x)

# normalize the embeddings
embs /= embs.norm(dim=-1, keepdim=True)
embs.shape

We also do a fair amount of fiddling around to get the locations of each chip and store those too.

Once we’ve stored all of those (I’ll get on to storage in a moment), we need to calculate the embedding of the text query too – which can be done with code like this:

text = tokenizer(text_queries)
with torch.inference_mode():
    text_features = model.encode_text(text.to(DEVICE))
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_features = text_features.cpu()

It’s then ‘just’ a matter of comparing the text query embedding to the embeddings of all of the image chips, and finding the ones that are closest to each other.

To do this, we can use a vector database. There are loads of different vector databases to choose from, but I’d recently been to a tutorial at PyData Southampton (I’m one of the co-organisers, and I strongly recommend attending if you’re in the area) which used the Pinecone serverless vector database, and they have a fairly generous free tier, so I thought I’d try that.

Pinecone, like all other vector databases, allows you to insert a load of vectors and their metadata (in this case, their location in the image) into the database, and then search the database to find the vectors closest to a ‘search vector’ you provide.

I won’t bother showing you all the code for this side of things: it’s fairly standard code for calling Pinecone APIs, mostly copied from their tutorials.
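
For a flavour of what those calls look like, here is a minimal sketch assuming the pinecone Python client; the index name, chip IDs and metadata keys are placeholders rather than values from the real app:

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("aerial-chips")

# Insert chip embeddings together with their map locations as metadata.
index.upsert(vectors=[
    {"id": "chip-0", "values": embs[0].tolist(), "metadata": {"x": 0.0, "y": 0.0}},
])

# Find the chips whose embeddings are closest to the text query embedding.
matches = index.query(vector=text_features[0].tolist(), top_k=20, include_metadata=True)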

I then wrapped this all up in a FastAPI API, and put a simple Javascript front-end on it to display the results on a Leaflet web map. I also added some basic caching to stop us hitting the Pinecone API too frequently (as there is a limit to the number of API calls you can make on the free plan). And that’s pretty much it.

I hope the explanation made sense: have a play with the app here and post a comment with any questions.

Categories: FLOSS Project Planets

Qt for Android Supported Versions Guidelines

Planet KDE - Thu, 2024-07-11 05:14

Qt for Android usually supports a wide range of Android versions, some very old. To keep the supported versions to a level that’s maintainable by Qt, especially for LTS releases which are expected to live for three years, Qt for Android is adopting new guidelines for selecting the supported versions for a given Qt release, in the hope that this effort will make the selection clear and transparent and help shape proper expectations of support for each Qt for Android release.

Categories: FLOSS Project Planets

Using Nix as a Yocto Alternative

Planet KDE - Thu, 2024-07-11 03:00

Building system images for embedded devices from the ground up is a very complex process, that involves many different kinds of requirements for the build tooling around it. Traditionally, the most popular build systems used in this context are the Yocto project and buildroot.

These build systems make it easy to set up toolchains for cross-compilation to the embedded target architecture, so that the lengthy compilation process can be offloaded to a more beefy machine, but they also help with all the little details that come up when building an image for what effectively amounts to building an entire custom Linux distribution.

More often than not, an embedded Linux kernel image requires changing the kernel config, as well as setting compiler flags and patching files in the source tree directly. A good build system can take a lot of pain away for these steps, by simplifying this with a declarative interface.

Finally, there is a lot of value in deterministic and reproducible builds, as this allows one to get the very same output image regardless of the context and circumstances where and when the compilation is performed.

Introducing Nix

Today we take a look at Nix as an alternative to Yocto and buildroot. Nix is a purely functional language that fits all of the above criteria perfectly. The project started as a PhD thesis for purely functional software deployment and has been around for over 20 years already.  In the last few years, it has gained a lot of popularity in the Server and Desktop Linux scene, due to its ability to configure an entire system and solve complex packaging-related use cases in a declarative fashion.

In the embedded scene, Nix is not yet as popular, but there have already been success stories of Nix being used as an alternative to Yocto. And with the vast collection of over 80,000 officially maintained packages in the nixpkgs repo (this is more than all official packages of Debian and Arch Linux combined), Nix certainly has an edge over the competition, as most software stacks are already packaged. For most common hardware you will also find an overlay of hardware-specific quirks in the nixos-hardware repository. However, since the user demographic of Nix is slightly different at the moment, for more obscure embedded platforms you are still much more likely to find an OpenEmbedded layer.

For the Nix syntax it is best to refer to the official documentation, but being a declarative language, the following parts should be easy to comprehend even without further introduction. The only uncommon caveat is the syntax for lambdas, which essentially boils down to { x, y }: x + y being a lambda that takes a set with two attributes x and y and returns their sum.

Cross-Compilation

On Nix there are two possible approaches to cross-compilation. The first would be to pull in the packages for the target architecture (in this case aarch64) and compile them on an x86_64 system by configuring qemu-user as a binfmt_misc handler for user-space emulation. While this effectively cheats around actual cross-compilation by emulating the target instruction set, it has some advantages, such as simplifying the build process for all packages; most importantly, it allows reusing the official package cache, which already contains built binaries for most aarch64 packages. While most of the build process can be shortcut this way, any package that does need to be built will build extremely slowly due to the emulation.

For that reason we use the second approach instead, which is actual cross-compilation: instead of pulling in packages normally, we can use the special pkgsCross.${targetArch} attribute to cross-compile our packages to whatever we pass as ${targetArch}. The majority of packages will just work, but occasionally a package needs extra configuration to cross-compile. For example, for Qt we need to set QT_HOST_PATH to point to a Qt installation on the build host, as it needs to run some tools, such as moc, on the build host during the actual build. The disadvantage of this approach is that the official Nix cache does not provide binaries for most packages, so we have to build everything ourselves.
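
A quick way to see this mechanism in action is to cross-build a small package straight from the nixpkgs flake (GNU hello is just a convenient test subject):

nix build nixpkgs#pkgsCross.aarch64-multiplatform.hello
file result/bin/hello   # should report an aarch64 binary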

Of course builds are already cached locally by default (because they end up as a derivation in /nix/store), but it is also possible to set up a custom self-hosted Nix cache server, so that binaries have to be built only once even across multiple machines.
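
On NixOS, pointing machines at such a cache is mostly a matter of adding a substituter. A minimal sketch, where the URL and signing key are placeholders for a self-hosted cache (for example one served by nix-serve):

nix.settings = {
  substituters = [ "https://cache.example.org" ];
  trusted-public-keys = [ "cache.example.org:BASE64PUBLICKEY=" ];
};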

Building an image with Nix

As an example we will be looking into building an entire system image for the Raspberry Pi 3 Model B from the ground up. This includes compiling a custom Linux kernel from source, building the entire graphics stack including Mesa and the whole rest of the software stack. This also means we will build Qt 6 from source.

As example application we will deploy GammaRay, which is already packaged in the official Nix repository. This is to illustrate the advantage of having the large collection of nixpkgs at our disposal. Building a custom Qt application would not be much more involved, for reference take a look at how GammaRay itself is packaged.

Then at the end of the build pipeline, we will have an actual image that can be flashed onto the Raspberry Pi to boot a custom NixOS image with all the settings we have configured.

To build a system image, nixpkgs provides a lot of utility helper functions. For example, to build a normal bootable ISO that can install NixOS like the official installer, the isoImage module can be used. Even more helper functions are available in the nixos-generators repository. However, we do not want to create an “installer” image; instead, we are interested in creating an image that can be booted straight away and already has all of the correct software installed. And because the Raspberry Pi uses an SD card, we can make use of the sd-card/sd-image.nix module for that. This module already does a lot of the extra work for us: it creates an MBR-partitioned SD card image that contains a FAT boot partition and an ext4 root partition. Of course it is possible to customize all these settings; for example, to add other partitions, we could simply append elements to the config.fileSystems attribute set, as sketched below.
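
As a sketch of that kind of customization, a hypothetical extra data partition (the device label and mount point are made-up values) could be declared like this:

fileSystems."/data" = {
  device = "/dev/disk/by-label/data";
  fsType = "ext4";
};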

Leaving out some slight Nix flakes boilerplate (we will get to this at the end), with the following two snippets we would already create a bootable image:

nixosConfigurations.pi = system.nixos {
  imports = [
    "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
    nixosModules.pi
  ];
};
images.pi = nixosConfigurations.pi.config.system.build.sdImage;

This first snippet imports the sd-image module mentioned above and links to a further nixosModules.pi configuration, which we define in the following snippet and which we can use to configure the entire system to our liking. This includes installed packages, setup of users, boot flags and more.

nixosModules.pi = ({ lib, config, pkgs, ... }: {
  environment.systemPackages = with pkgs; [ gammaray ];
  users.groups = {
    pi = {
      gid = 1000;
      name = "pi";
    };
  };
  users.users = {
    pi = {
      uid = 1000;
      password = "pi";
      isSystemUser = true;
      group = "pi";
      extraGroups = ["wheel" "video"];
      shell = pkgs.bash;
    };
  };
  services = {
    getty.autologinUser = "pi";
  };
  time.timeZone = "Europe/Berlin";
  boot = {
    kernelPackages = lib.mkForce pkgs.linuxKernel.packages.linux_rpi3;
    kernelParams = ["earlyprintk" "loglevel=8" "console=ttyAMA0,115200" "cma=256M"];
  };
  networking.hostName = "pi";
});

Thus, with this configuration we install GammaRay, set up a user that is logged in by default, and use a custom Raspberry Pi Linux kernel as the default kernel to boot. Most of these configuration options should be self-explanatory, and they show only a glimpse of what is possible to configure. In total, there are over 10,000 official NixOS configuration options, which can be searched with the official web interface.

The line where we add GammaRay to the system packages also automatically adds Qt 6 as a dependency. However, as mentioned previously, this does not work quite so well with cross-compilation. In order for it to build, we need to patch a dependency of GammaRay to add the QT_HOST_PATH variable to the cmake flags. What would involve bizarre gymnastics with most other build systems becomes incredibly simple with Nix: we just add an overlay that overrides the Qt6 package; there is no need to touch the definition of GammaRay at all:

nixpkgs.overlays = [
  (final: super: {
    qt6 = super.qt6.overrideScope' (qf: qp: {
      qtbase = qp.qtbase.overrideAttrs (p: {
        cmakeFlags = ["-DQT_HOST_PATH=${nixpkgs.legacyPackages.${hostSystem}.qt6.qtbase}"];
      });
    });
  })
];

And note how we pass the path to Qt built on the host system to QT_HOST_PATH. Due to lazy evaluation, this will build Qt (or rather download it from the Nix binary cache) for the host architecture and pass the resulting derivation as a string at evaluation time.

In order to quickly test an image, we can write a support script to test the output directly in qemu instead of having to flash it on real hardware:

qemuTest = pkgs.writeScript "qemuTest" ''
  zstd --decompress ${images.pi.outPath}/sd-image/*.img.zst -o qemu.img
  chmod +w qemu.img
  qemu-img resize -f raw qemu.img 4G
  qemu-system-aarch64 -machine raspi3b -kernel "${uboot}/u-boot.bin" -cpu cortex-a53 -m 1G -smp 4 -drive file=qemu.img,format=raw -device usb-net,netdev=net0 -netdev type=user,id=net0 -usb -device usb-mouse -device usb-kbd -serial stdio
'';

Here we decompress the output image, resize it so that qemu can start it and then use the uboot boot loader to finally boot it.

Taking a final look at our config, we now have the following flake.nix file:

{
  description = "Cross compile for Raspberry";

  inputs = {
    nixpkgs.url = "nixpkgs/nixos-23.11";
  };

  outputs = { self, nixpkgs, nixos-hardware }:
    let
      hostSystem = "x86_64-linux";
      system = nixpkgs.legacyPackages.${hostSystem}.pkgsCross.aarch64-multiplatform;
      pkgs = system.pkgs;
      uboot = pkgs.ubootRaspberryPi3_64bit;
    in rec {
      nixosConfigurations.pi = system.nixos {
        imports = [
          "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
          nixosModules.pi
        ];
      };

      images.pi = nixosConfigurations.pi.config.system.build.sdImage;

      qemuTest = pkgs.writeScript "qemuTest" ''
        zstd --decompress ${images.pi.outPath}/sd-image/*.img.zst -o qemu.img
        chmod +w qemu.img
        qemu-img resize -f raw qemu.img 4G
        qemu-system-aarch64 -machine raspi3b -kernel "${uboot}/u-boot.bin" -cpu cortex-a53 -m 1G -smp 4 -drive file=qemu.img,format=raw -device usb-net,netdev=net0 -netdev type=user,id=net0 -usb -device usb-mouse -device usb-kbd -serial stdio
      '';

      nixosModules.pi = ({ lib, config, pkgs, ... }: {
        environment.systemPackages = with pkgs; [ uboot gammaray ];
        services = {
          getty.autologinUser = "pi";
        };
        users.groups = {
          pi = {
            gid = 1000;
            name = "pi";
          };
        };
        users.users = {
          pi = {
            uid = 1000;
            password = "pi";
            isSystemUser = true;
            group = "pi";
            extraGroups = ["wheel" "video"];
            shell = pkgs.bash;
          };
        };
        time.timeZone = "Europe/Berlin";
        boot = {
          kernelParams = ["earlyprintk" "loglevel=8" "console=ttyAMA0,115200" "cma=256M"];
          kernelPackages = lib.mkForce pkgs.linuxKernel.packages.linux_rpi3;
        };
        networking.hostName = "pi";
        nixpkgs.overlays = [
          (final: super: {
            # workaround for https://github.com/NixOS/nixpkgs/issues/154163
            makeModulesClosure = x: super.makeModulesClosure (x // { allowMissing = true; });
            qt6 = super.qt6.overrideScope' (qf: qp: {
              qtbase = qp.qtbase.overrideAttrs (p: {
                cmakeFlags = ["-DQT_HOST_PATH=${nixpkgs.legacyPackages.${hostSystem}.qt6.qtbase}"];
              });
            });
            # workaround for odbc not building
            unixODBCDrivers = super.unixODBCDrivers // {
              psql = super.unixODBCDrivers.psql.overrideAttrs (p: {
                nativeBuildInputs = with nixpkgs.legacyPackages.${hostSystem}.pkgsCross.aarch64-multiplatform.pkgs; [unixODBC postgresql];
              });
            };
          })
        ];
        system.stateVersion = "23.11";
      });
    };
}

And that’s it, now we can build and test the entire image with:

nix build .#images.pi --print-build-logs
nix build .#qemuTest
./result

Note that this will take quite a while to build, as everything is compiled from source. This will also create a flake.lock file pinning all the inputs to a specific version, so that subsequent runs will be reproducible.

Conclusion

Nix has been growing a lot in recent years, and not without reason. The Nix language makes it possible to solve some otherwise very complicated packaging tasks in a very concise way. The fully hermetic and reproducible builds are a perfect fit for building embedded Linux images, and the vast collection of packages and configuration options allows most tasks to be performed without ever having to leave the declarative world.

However, there are also some downsides when compared to the Yocto project. Due to the less frequent use of Nix on embedded, it is harder to find answers and support for embedded-related questions and you are quickly on your own, especially when using more obscure embedded platforms.

And while the Nix syntax in and of itself is very simple, it should not go unmentioned that there is a lot of complexity around the language constructs such as derivations and how everything interacts with each other. Thus, there is definitely a steep learning curve involved, though most of this comes with the territory and is also true for the Yocto project.

Hence overall, Nix is a suitable alternative for building embedded system images (keeping in mind that some extra work is involved for more obscure embedded platforms), and its purely functional language makes it possible to solve most tasks in a very elegant way.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Using Nix as a Yocto Alternative appeared first on KDAB.

Categories: FLOSS Project Planets

PreviousNext: Co-contribution with clients: A revision UI API for all entity types

Planet Drupal - Thu, 2024-07-11 01:11

The tale of an eight-year, collaborative effort to build a generic revision UI into Drupal 10.1.0, bringing a major piece of functionality to core.

by lee.rowlands / 11 July 2024

As we discussed in our previous post, Improving Drupal with the help of your clients, we’re fortunate to work with a client like ServiceNSW that is committed to open-source contribution. So when their challenges require solutions that will also benefit the whole Drupal community, they're on board!

In the beginning, there were nodes

Since Drupal 4.7 was released in 2006, nodes have had a revision user interface (UI). The UI allows editors to view revision history and specific revisions, as well as revert and delete revisions.

A lot has changed since Drupal 4.7. We received revision support for many more entities, but Node remained the only one with a revision UI in core.

Supporting client needs through contrib 

Our client, Service NSW, makes heavy use of block content entities for Notices displayed throughout the site. These are regularly updated. Editors need to be able to see what has changed and when, revert to previous versions, and view revision logs when needed. 

Since Drupal 8, much of the special treatment of Node entities has been replaced with generic Entity API functionality. Nodes were no longer the only tool in the content-modelling toolbox, with this one exception: revision UI.

The code for node's revision UI lives in the node module. It’s dependent on hard-coded permission checking and uses routing and forms outside the entity API.

This meant that for every additional entity type for which Service NSW needed a revision UI, those parts needed to be recreated repeatedly.

As you can imagine, this approach quickly becomes hard to maintain due to the amount of duplication. 

The journey to core

Having identified that Drupal core needed a generic entity revision UI API (it already had generic APIs for entity routing, editing, viewing and access), we set to work on this missing piece of the puzzle.

We found an existing core issue for it, and in 2015, posted our first patch for it. 

This began an 8-year journey to bring a major piece of functionality to core.

Over the course of many re-rolls, we released contributed modules built on top of the patch.

Finally, with the release of Drupal 10.1.0 in 2023, any entity type could opt into a revision UI. The Drupal 10.1.0 release enabled this for Block Content entities, making that contributed module obsolete. Then later in 2023, the release of Drupal 10.2.0 saw Media entities use this new API. In early 2024, support for Taxonomy terms was added and released in 10.3.0.
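
For illustration, the opt-in happens in an entity type's definition. The following fragment is a sketch from memory of the core change record, so treat the handler keys and link paths as assumptions to verify against core:

// Sketch of a revision UI opt-in (Drupal 10.1+); keys and paths are
// illustrative, modelled loosely on the Block Content entity type.
handlers = {
  "route_provider" = {
    "revision" = "Drupal\Core\Entity\Routing\RevisionHtmlRouteProvider",
  },
},
links = {
  "version-history" = "/admin/content/block/{block_content}/revisions",
  "revision" = "/admin/content/block/{block_content}/revisions/{block_content_revision}/view",
},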

Challenges along the way

The biggest challenges encountered were keeping the patch up to date with core as it changed and navigating the contribution process. Over the years, there have been over 120 patch files and 300+ comments on the issue!

Another challenge was the lack of an access API for checking access to revisions. 

The entity API supported a set of entity access operations — view, update, delete — but no revision operations were considered. The node module had hard-coded permissions e.g. 'view all revisions' and 'revert all revisions'. 

To have a generic entity revision UI API, we needed a generic way to check access to the operations the UI would make available.

Initially, we tried to include this with the revision UI changes. However, it became increasingly difficult to get both major pieces of functionality simultaneously. So, in 2019, this was split into a separate issue, and the original issue was postponed.

With efforts from our team, Service NSW and many other individuals and companies in the Drupal community, this made it into Drupal core in 2021. It was first available in Drupal 9.3.0. Adding a whole new major access API is not without its challenges, though. Unfortunately, this change resulted in a security release shortly after 9.3.0 came out. Luckily it was caught and fixed before many sites had updated to 9.3.0.

Collaborative contribution

Adding a new feature to Drupal core is a large undertaking. Doing it in a client-agency collaboration provides an ideal model for how open source should work. 

Developers from PreviousNext and Service NSW worked with the broader Drupal community to bring this feature to fruition.

Our developers have experience contributing to core and were able to guide Service NSW developers through the process. Being credited on large features like this is a major feather in the cap for both individual developers and their organisations.

Wrapping up

Together, we helped integrate a generic revision UI into Drupal 10.1.0. All of the developers involved received issue credits for their work. 

This was a significant effort over eight years, requiring collaboration with individuals and organisations in the wider Drupal community to build consensus. This level of shared commitment helps drive the Drupal open source project forward, recognising that what benefits one can benefit all.

So, what are the next big features you and your clients could work on? Or is there something you want to bring to core, as an individual, group or organisation? Either way, we’d love to chat and collaborate!

Contributors
  • dpi
  • acbramley
  • jibran
  • manuel garcia
  • chr.fritsch
  • AaronMcHale
  • Nono95230
  • capysara
  • darvanen
  • ravi.shankar
  • Spokje
  • thhafner
  • larowlan
  • smustgrave
  • mstrelan
  • mikestar5
  • andregp
  • joachim
  • nterbogt
  • shubhangi1995
  • catch
  • mkalkbrenner
  • Berdir
  • Sam152
  • Xano
Categories: FLOSS Project Planets

Russ Allbery: podlators v6.0.0

Planet Debian - Wed, 2024-07-10 22:57

podlators is the collection of Perl modules and front-end scripts that convert POD documentation to *roff manual pages or text, possibly with formatting intended for pagers.

This release continues the simplifications that I've been doing in the last few releases and now uniformly escapes - characters and single quotes, disabling all special interpretation by *roff formatters and dropping the heuristics that were used in previous versions to try to choose between possible interpretations of those characters. I've come around to the position that POD simply is not semantically rich enough to provide sufficient information to successfully make a more nuanced conversion, and these days Unicode characters are available for authors who want to be more precise.

This version also drops support for Perl versions prior to 5.12 and switches to semantic versioning for all modules. I've added a v prefix to the version number, since that is the convention for Perl module versions that use semantic versioning.

This release also works around some changes to the man macros in groff 1.23.0 to force persistent ragged-right justification when formatted with nroff and fixes a variety of other bugs.

You can get the latest release from CPAN or from the podlators distribution page.

Categories: FLOSS Project Planets

GNU Taler news: KYCID, an operational OAuth2 integration of eKYC

GNU Planet! - Wed, 2024-07-10 18:00
In this bachelor thesis, Yann Doy presents his implementation of a concept of eKYC (electronic Know Your Customer procedure).
Categories: FLOSS Project Planets
