FLOSS Project Planets

health @ Savannah: MyGNUHealth 2.2 series released!

GNU Planet! - Fri, 2024-06-21 05:44

Dear all

I am happy to announce the release of MyGNUHealth 2.2.0!

The new series of the GNU Health Personal Health record comes with many improvements and bug fixes. Some highlights of this new version:

  • Support for Kivy 2.3.0
  • Localization. MyGNUHealth now has support for different languages. English, Spanish, and Chinese are available to use, and French, German, and Italian are ready to be translated. There will be a translation component for MyGNUHealth at Codeberg's Weblate instance.
  • Bluetooth functionality: starting with the MyGH 2.2 series we provide Bluetooth integration for compatible open devices and health trackers. We include a link with the PineTime smartwatch (experimental) and the possibility to link to any open hardware device (glucometers, scales, blood pressure monitors, ...). We need to build a list of available medical devices that respect our privacy and freedom, so let us know of any!
  • Charts now allow selecting date ranges with calendar widgets.
  • The Book of Life has a revised format for its pages.
  • The charts have an improved format and now include x-axis labels.


Thanks to Kivy, the MyGNUHealth codebase can be ported to other architectures and operating systems, such as Android AOSP (Pierre Michel is working on this) and GNU/Linux phones.

In addition to Savannah, we have incorporated Codeberg into the GNU Health development environment. Mailing lists, news, and file downloads are at GNU, while the development repositories are at Codeberg (https://codeberg.org/gnuhealth).

You can download the latest MyGNUHealth source code from the GNU FTP site, from PyPI (using pip), or from your operating system's packages (as on openSUSE).

Upgrading should be straightforward, and all the health history will remain in the MyGH database. In any case, please make sure you make a backup before upgrading (and daily ;) ).

Thank you to all the contributors who have made this milestone possible!

Happy hacking
Luis

Categories: FLOSS Project Planets

death and gravity: reader 3.13 released – scheduled updates

Planet Python - Fri, 2024-06-21 03:43

Hi there!

I'm happy to announce version 3.13 of reader, a Python feed reader library.

What's new? #

Here are the highlights since reader 3.12.

Scheduled updates #

reader now allows updating feeds at different rates via scheduled updates.

The way it works is quite simple: each feed has an update interval that determines when the feed should be updated next; calling update_feeds(scheduled=True) updates only feeds that should be updated at or before the current time.

The interval can be configured by the user globally or per-feed through the .reader.update tag. In addition, you can specify a jitter; for an interval of 24 hours, a jitter of 0.25 means the update will occur any time in the first 6 hours of the interval.
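Here is a minimal sketch of what that could look like; the feed URL is a placeholder, and the exact shape of the .reader.update tag value is my assumption, so check the reader documentation for the authoritative format:

from reader import make_reader

reader = make_reader("db.sqlite")
reader.add_feed("https://example.com/feed.xml")  # hypothetical feed URL

# Assumed value format: interval in minutes, jitter as a fraction of the interval.
reader.set_tag(
    "https://example.com/feed.xml",
    ".reader.update",
    {"interval": 24 * 60, "jitter": 0.25},
)

# Update only the feeds that are due at or before the current time.
reader.update_feeds(scheduled=True)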

In the future, the same mechanism will be used to handle 429 Too Many Requests.

Improved documentation #

As part of rewriting the Updating feeds user guide section to talk about scheduled updates, I've added a new section about being polite to servers.

Also, we have a new recipe for adding custom headers when retrieving feeds.

mark_as_read reruns #

You can now re-run the mark_as_read plugin for existing entries by adding the .reader.mark-as-read.once tag to a feed. Thanks to Michael Han for the pull request!

That's it for now. For more details, see the full changelog.

Want to contribute? Check out the docs and the roadmap.

Learned something new today? Share this with others, it really helps!

What is reader #

reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.

reader allows you to:

  • retrieve, store, and manage Atom, RSS, and JSON feeds
  • mark articles as read or important
  • add arbitrary tags/metadata to feeds and articles
  • filter feeds and articles
  • full-text search articles
  • get statistics on feed and user activity
  • write plugins to extend its functionality

...all these with:

  • a stable, clearly documented API
  • excellent test coverage
  • fully typed Python

To find out more, check out the GitHub repo and the docs, or give the tutorial a try.
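To give a flavor of the core loop, here is a minimal sketch; the feed URL is a hypothetical placeholder:

from reader import make_reader

# open (or create) a local SQLite database that stores feeds and entries
reader = make_reader("db.sqlite")

reader.add_feed("https://example.com/feed.xml")  # hypothetical feed URL
reader.update_feeds()

# iterate over the stored entries, newest first
for entry in reader.get_entries():
    print(entry.title, entry.link)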

Why use a feed reader library? #

Have you been unhappy with existing feed readers and wanted to make your own, but:

  • never knew where to start?
  • it seemed like too much work?
  • you don't like writing backend code?

Are you already working with feedparser, but:

  • want an easier way to store, filter, sort and search feeds and entries?
  • want to get back type-annotated objects instead of dicts?
  • want to restrict or deny file-system access?
  • want to change the way feeds are retrieved by using Requests?
  • want to also support JSON Feed?
  • want to support custom information sources?

... while still supporting all the feed types feedparser does?

If you answered yes to any of the above, reader can help.

The reader philosophy #
  • reader is a library
  • reader is for the long term
  • reader is extensible
  • reader is stable (within reason)
  • reader is simple to use; API matters
  • reader features work well together
  • reader is tested
  • reader is documented
  • reader has minimal dependencies
Why make your own feed reader? #

So you can:

  • have full control over your data
  • control what features it has or doesn't have
  • decide how much you pay for it
  • make sure it doesn't get closed while you're still using it
  • really, it's easier than you think

Obviously, this may not be your cup of tea, but if it is, reader can help.

Categories: FLOSS Project Planets

Gunnar Wolf: A new RISC-V toy... requiring almost no tinkering

Planet Debian - Fri, 2024-06-21 01:59

Shortly before coming back from Argentina, I got news of a very interesting set of little machines, the Milk-V Duo. The specs looked really interesting and fun to play with, particularly those of the “bigger” model, the Milk-V Duo S. Some of the highlights:

  • The SG2000 SoC is a Dual-architecture beast. A hardware switch controls whether the CPU is an ARM or a RISC-V.
    • Not only that: It has a second (albeit lesser) RISC-V core that can run independently. They mention this computer can run Linux and FreeRTOS simultaneously!
  • 512MB RAM
  • Sweet form factor (4.2×4.2cm)
  • Peeking around their Web site, it is one of the most open and well documented computers in their hardware range.

Naturally, for close to only US$12 (plus shipping) for the configuration I wanted… I bought one, and got it delivered in early May. The little box sat on my desk for close to six weeks until I had time to start tinkering with it…

I must say I am surprised. Not only does the little bugger deliver what it promises, it is way more mature than what I expected: it can be used right away without much tinkering! I mean, I have played with it for less than an hour by now, and I’ve even managed to get (almost) regular Debian working.

Milk-V distributes a simple, 58.9MB compressed Linux image based on Buildroot, a Linux image generator mostly used for embedded applications, as well as its source tree. I thought that would be a good starting point to work on setting up a minimal Debian filesystem, as I did with the CuBox-i4Pro ten years ago, and maybe even to grow towards a more official solution, akin to what we currently have for the Raspberry Pi family.

…Until I discovered what looks like a friendly and very active online community of Milk-V users! I haven’t yet engaged in it, but I stumbled across a thread announcing the availability of Debian images for the Milk-V family.

And yes, it feels like a very normal Debian system. /etc/apt/sources.list does point to a third-party repository, but it’s for only four packages, all related to pinmux control for CVITEK chips. It does feel like a completely normal Debian system! It is not as snappy and fast to load as Buildroot, but given Debian’s generality, that’s completely as expected. Even the wireless network, one of the usual pain points, works just out of the box! The Debian images can be built or downloaded from this Git repository.

In case you wonder how this system boots or what hardware it detects, I captured two boot logs:

Categories: FLOSS Project Planets

Krita Monthly Update – Edition 16

Planet KDE - Thu, 2024-06-20 20:00

Welcome to the latest development and community news curated for you by the Krita-promo team.

Development report

Krita is 25 years old!

Artwork by David Revoy (CC BY-SA)

May 31, 2024 marks Krita’s 25th birthday. As one would expect, there have been many changes over the years – even the name changed several times. You can get a look inside Krita’s history in this blog post written by @Halla, Krita’s Maintainer for more than 20 years.

In honour of this milestone, @RamonM prepared a special treat for all Krita users: a video interview with @Halla.

Your feedback is requested
  • 5.2.3-Beta1 was released June 5th. This release represents a complete rework of the build system and numerous fixes by the core Krita developer team as well as freyalupen, Grum999, NabilMaghfurUsman, Deif_Lou, Alvin Wong, Rasyuqa A. H. and Mathias Wein. There are a number of first-time contributors whose names appear next to their contribution in the release notes.

  • Testing packages for every platform are provided on the release notes page. Please report your findings and feedback in the Testers Wanted thread.

  • Text property editing: Merge request 2092 is almost finished, awaiting review. Testing builds are now available on the CI. @Wolthera is requesting user feedback on the UX. Please read this post and share your comments there.

  • Google Summer of Code (GSOC): @Ken_Lo is seeking input on the Pixel Perfect project: https://krita-artists.org/t/pixel-perfect-line-setting-for-pixel-art-brushes/42629/16

Other Development Highlights
  • Grum999 is improving the Python API so that it is more robust for Python developers and so they can access more of Krita’s internal features through Python. There is a work-in-progress MR to add new scripting functions for accessing Grids, Guides, and Mirror Axes from the document, and signals for changes in the document and view. Check the MR

  • @Ralek has added lossless transformation conditions - Rotations in increments of 90 degrees, and perfect x and y mirrors should now be lossless. This should greatly help out pixel artists, who I believe previously could not use these functions at all. Check the MR

Community Report

May 2024 Monthly Art Challenge

And the winner is… Cat Reflection by Elixiah.

For the June Art Challenge, Elixiah has chosen Magnificent Dragon with an interesting optional challenge for any who care to give themselves an added stretch.

Featured artwork

Ten images were submitted to the Best of Krita-Artists Nominations thread which was open from April 14th to May 11th. When voting closed on May 14th, these five had the most votes and were added to the Krita-Artists featured artwork banner.

Quiet Morning by @Gurkirat_Singh.

Pollinatrix Terrae by @jimplex.

The Lone Rider-2 by @rohithela.

005 (Spider in the web) by @HappyBuket.

Challenge Horn by @MangooSalade.

In addition to their place of honour on the banner, all five will be entered into the Best of Krita-Artists 2024 competition next January. The Best of Krita-Artists May/June Nominations thread will be open for submissions until June 11, 2024. You are invited to join in by nominating your favourite piece of Krita artwork!

Noteworthy Plugin

Create a New View as Window and Topped by Cliscylla saves steps by opening a new view and setting it to always stay on top.

Tutorial of the month

How to record video directly from Krita and post to social media by Deevad is a comprehensive tutorial for beginner and intermediate Krita users. It takes the viewer through the initial screen set up and recommended canvas dimensions right through to the export process.

Ways to help Krita

Krita is a Free and Open Source application, mostly developed by an international team of enthusiastic volunteers. Donations from Krita users to support maintenance and development are appreciated.

Visit Krita’s funding page to see how donations are used and explore a one-time or monthly contribution.

Notable Changes in the code

This section has been compiled by freyalupen. (May 6 - June 6, 2024)

Stable branch (5.2.3-beta1):

Bugfixes:

  • General Don't waste memory generating empty animation frames on images with no animation. This was a regression in 5.2.x. (commit, Dmitry Kazakov)
  • Storyboard Docker Fix reordering storyboard scenes causing all frame data to be deleted while still appearing to be present. (BUG:476440) (merge request, Freya Lupen)
  • Android: Animation Fix crash when attempting to load audio on Android, a regression present in 5.2.2.1. (merge request, Dmitry Kazakov)

Stable branch (5.2.3-beta1+):

Bugfixes:

Features:
  • Scripting Generate a Python type stub file for Krita's API, which can be used to setup type auto-completion in IDEs, located inside the Krita package at /lib/krita-python-libs/PyKrita/krita.pyi. (merge request, Kate Corcoran)

Stable branch (5.2.3-beta1+) backports from Unstable:

Bugfixes:

  • Recorder Docker Reworked default recorder docker FFmpeg profiles. If canvas size changes during recording, the export profiles now keep aspect instead of stretching (BUG:429326). Issues with resize, result preview, and extend result are avoided (BUG:455006, BUG:450790, BUG:485515, BUG:485514). For MP4, detect whether openh264 or libx264 is present instead of using separate profiles. Also, prevent an error when using FFmpeg 7. (merge request, Ralek Kolemios)
  • Selection Tools Fix issue making selections on color-labeled reference selections. (BUG:486419) (commit, Deif Lou)
Unstable branch (5.3.0-prealpha):

Features:

Bugfixes:

These changes are made available for testing in the following Nightly builds:

Like what we are doing? Help support us

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Donate Buy something
Categories: FLOSS Project Planets

Seth Michael Larson: CPython vulnerability data infrastructure (CVE and OSV)

Planet Python - Thu, 2024-06-20 20:00

Published 2024-06-21 by Seth Larson

This critical role would not be possible without funding from the Alpha-Omega project. Massive thank-you to Alpha-Omega for investing in the security of the Python ecosystem!

Let's talk about some vulnerability data infrastructure for the Python Software Foundation (PSF). In the recent past, most of the vulnerability data processes were manual. This worked okay because the PSF processes a low double-digit number of vulnerabilities each year (2023 saw 12 published vulnerabilities).

However, almost all of this manual processing was being done by me as full-time staff. Imagining this work being done by either someone else on staff or a volunteer isn't great, because it's a non-zero amount of extra work. Automation to the rescue!

How the vulnerability data flows

The PSF uses the CVE database as its “source of truth” for vulnerability data which then gets imported into our Open Source Vulnerability (OSV) database by translating CVE records into OSV records.

We manually update CVE information in a program called Vulnogram which provides a helpful UI for CVE services.

So what is the minimum amount of information we need to manually create to automatically generate the rest? This is the current list of data the PSF CVE Numbering Authority team creates manually:

  • Advisory text and description
  • CVE reference to the advisory
  • GitHub issue (as a CVE reference)

[Diagram: data flow from the GitHub repository (GitHub issue, merged PR, git commit, git tag such as v3.12.4) and CVE Services into the CVE record (affected versions, references) and the PSF OSV database record (affected commits, affected tags, references).]
Blue items are manually created, green items are automatically generated.

Advisories are sent to the security-announce@python.org mailing list and then the description is reused as the CVE record's description. The linkage between a GitHub pull request and a GitHub issue is maintained by Bedevere, which automatically updates metadata as new pull requests are opened for an issue.

From this information we can use scripts to generate the rest in two stages:

  • CVE affected versions and references are populated by finding git tags that contain git commits (a sketch of this lookup follows the list).
  • All OSV record information is generated from CVE records. New OSV records are automatically assigned their IDs by OSV tooling. The central OSV database calculates affected git tags on our behalf.
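As a rough illustration of that tag lookup, and not the PSF's actual tooling (the version prefix and the subprocess approach are my assumptions), the first stage boils down to asking git which release tags already contain a fix commit:

import subprocess

def release_tags_containing(commit: str, repo_path: str = ".") -> list[str]:
    # "git tag --contains" lists every tag whose history includes the commit,
    # i.e. every release that already ships the fix.
    result = subprocess.run(
        ["git", "-C", repo_path, "tag", "--contains", commit],
        capture_output=True, text=True, check=True,
    )
    return [tag for tag in result.stdout.split() if tag.startswith("v3.")]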

The PSRT publishes advisories and patches once they're available in the latest CPython versions. For low-severity vulnerabilities there typically isn't an immediate release of all bugfix and security branches (i.e. 3.12, 3.11, 3.10, 3.9, etc.). This means that many vulnerability records will need to be updated over time as fixes are merged and released into other CPython versions, so these scripts run periodically to avoid tracking these updates manually.

Other items
  • This week marks my one-year anniversary in the role of Security Developer-in-Residence. Woohoo! 🥳
  • Advisories and records were published for CVE-2024-0397 and CVE-2024-4032.
  • Wrote the draft for "Trusted Publishers for All Package Repositories" for the OpenSSF Securing Software Repositories WG. This document would be used by other package repositories looking to implement Trusted Publishers like PyPI has.
  • Working on documenting more of PSRT processes like membership.
  • Triaging reports to the PSRT.
  • Reviewing PEP 740 and other work for generating publish provenance from William Woodruff.
  • Google Summer of Code contributor, Nate Ohlson has been making excellent progress on the effort for CPython to adopt the "Hardened Compiler Options Guide" from the OpenSSF Best Practices WG. You can follow along with his progress on Mastodon and on GitHub.

That's all for this post! 👋 If you're interested in more you can read the last report.

Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.

This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

The Python Show: 45 - Computer Vision and Data Science with Python

Planet Python - Thu, 2024-06-20 18:11

In this episode, we welcome Kyle Stratis from Real Python to the show to chat with us about computer vision and Python.

We chatted about several different topics including:

  • Writing about Python on Real Python

  • Data science

  • Artificial intelligence

  • Python packaging

  • and much more!

Links
Categories: FLOSS Project Planets

Paolo Melchiorre: Django 5 by Example preface

Planet Python - Thu, 2024-06-20 18:00

The story of my experience in writing the preface of the book “Django By Example” by Antonio Melé.

Categories: FLOSS Project Planets

The Drop is Always Moving: Drupal 10.3.0 is out! The third and final feature release of Drupal 10 ships with a new experimental Navigation UI, stable Workspaces and Single-Directory Components, simplified menu editing, taxonomy moderation, new recipe...

Planet Drupal - Thu, 2024-06-20 16:29

Drupal 10.3.0 is out! The third and final feature release of Drupal 10 ships with a new experimental Navigation UI, stable Workspaces and Single-Directory Components, simplified menu editing, taxonomy moderation, new recipe and access policy APIs and more. https://www.drupal.org/blog/drupal-10-3-0

Categories: FLOSS Project Planets

Drupal blog: Drupal 10.3 is now available

Planet Drupal - Thu, 2024-06-20 16:00
New in Drupal 10.3

The third and final feature release of Drupal 10 ships with a new experimental Navigation user interface, stable Workspaces functionality, stable Single-Directory Components support, simplified menu editing, taxonomy moderation support, new recipe and access policy APIs, performance improvements and more.

New experimental Navigation module

The new Navigation module provides a redesigned collapsible, vertical navigation sidebar for the administrative user interface. Sub-menus open on a full height drawer that can accommodate deeper navigation levels. On smaller viewports, the toolbar is placed on top of the content, and opens with an overlay.

The Navigation module allows multiple types of customization, like adding new custom menus or changing the default Drupal logo provided. It also uses the Layout Builder module, so that site builders can easily add or reorder these menu blocks.

The Navigation module includes a new content creation and management menu, which allows quick access to content-related tasks to increase usability for content users.

Stable Workspaces module

The Workspaces module allows Drupal sites to have multiple work environments, enabling site owners to stage multiple content changes to be deployed to the live site all at once. It has long been available in Drupal core as an experimental module. Following the module's use in live projects, the remaining stable blocking issues have been resolved, so now it is available to all!

Workspaces are sets of content changes that are prepared and reviewed separately from the live site. This is a differentiating feature for Drupal that is important for many large organizations' websites. An organization might use Workspaces to ensure all relevant content goes live simultaneously for a new product launch, or with the outcomes of sporting or election events.

Stable Single-Directory Components

Single-Directory Components (SDCs) are Drupal core’s implementation of a user interface components system. Within SDC, all files necessary to render the user interface component are grouped together in a single directory. This includes Twig, YAML, and optional CSS and JavaScript. SDC support was added to Drupal core in 10.1 as an experimental module. The solution has been very well-received and is now part of the base system. No need to enable a module to use this feature.

Simplified content organization

Menu item editing is now simplified. Advanced options are displayed in a sidebar to help content editors focus on what is most important for the menu item. Taxonomy terms also now have both a dedicated user interface to edit earlier revisions and content moderation support.

New Recipes and Default Content APIs

Drupal recipes allow the automation of Drupal module installation and configuration. Drupal recipes are easy to share, and can be composed from other Drupal recipes. For example, Drupal 10.3 includes a Standard recipe providing the same functionality as the Standard install profile. It is a combination of 16 component recipes that can be reused in other recipes.

Recipes provide similar functionality to install profiles but are more flexible. With install profiles only one can be installed on a site. With recipes, multiple recipes can be applied after each other.

Install profiles/distributions compared to recipes:

  • Lock-in: install profiles and distributions could not be uninstalled (until Drupal 10.3); recipes have no lock-in.
  • Inheritance: install profiles cannot extend other profiles or distributions; recipes can be based on other recipes.
  • Composability: multiple profiles or distributions cannot be installed on one site; multiple recipes can be applied on the site and be the basis of another recipe.

The recently announced Starshot Initiative will rely heavily on recipes to provide composable features.

The added APIs include Configuration Actions, Configuration Checkpoints and Default Content.

Additionally, it is now possible to install Drupal without an install profile, or to uninstall an install profile after Drupal is already set up.

More flexible access management with the new Access Policy API

The new Access Policy API supports the implementation of access management solutions that go beyond permissions and user roles. Other conditions and contexts may be taken into account, like whether the user used two-factor authentication, or whether they reached a rate limit of an activity. Drupal's existing permission- and role-based access control has been converted to the new API, and custom or contributed projects can add more access policies.

The future of Drupal 10

Drupal 10.3 is the final feature release of Drupal 10. Drupal 11 is scheduled to be released the week of July 29th. With that, Drupal 10 goes into long-term support. While more minor releases will be made available of Drupal 10, they will not contain new features, only functionality backported to support security and a smoother upgrade to Drupal 11. Drupal 10's future minor releases will be supported until mid- to late 2026, when Drupal 12 is released and Drupal 11 enters long-term support.

Core maintainer team updates

Cristina Chumillas (at Lullabot), Sally Young (also at Lullabot) and Théodore Biadala (at Très Bien Tech) were all promoted from provisional to full Drupal Core Frontend Framework Managers.

Alex Pott (at Acro Commerce and Thunder), Adam Globus-Hoenich (at Acquia) and Jim Birch (at Kanopi Studios) are the maintainers of the new Default Content and Recipes subsystems.

Andrei Mateescu (at Tag1 Consulting) is the maintainer of the newly stable Workspaces module.

Ivan Berdinsky (at Skilld) became a co-maintainer of the Umami demo.

Daniel Veza (at PreviousNext) is a new co-maintainer of Layout Builder.

Mateu Aguiló Bosch (at Lullabot) and Pierre Dureau are new co-maintainers of the Theme API, focusing on Single Directory Components.

Want to get involved?

If you are looking to make the leap from Drupal user to Drupal contributor, or you want to share resources with your team as part of their professional development, there are many opportunities to deepen your Drupal skill set and give back to the community. Check out the Drupal contributor guide, or join us at DrupalCon Barcelona and attend sessions, network, and enjoy mentorship for your first contributions.

Categories: FLOSS Project Planets

mark.ie: My LocalGov Drupal contributions for week-ending June 21st, 2024

Planet Drupal - Thu, 2024-06-20 13:00

Here's what I've been working on for my LocalGov Drupal contributions this week. Thanks to Big Blue Door for sponsoring the time to work on these.

Categories: FLOSS Project Planets

PyBites: Introducing eXact-RAG: the ultimate local Multimodal Rag

Planet Python - Thu, 2024-06-20 12:57

Exact-RAG is a powerful multimodal model designed for Retrieval-Augmented Generation (RAG). It seamlessly integrates text, visual and audio information, allowing for enhanced content understanding and generation.

In the rapidly evolving landscape of the Large language model (LLM), the quest for more efficient and versatile models continues unabated.

One of the latest advancements in this realm is the emergence of eXact-RAG, a multimodal RAG (Retrieval Augmented Generation) system that leverages state-of-the-art technologies to deliver powerful results.

eXact-RAG stands out for its integration of LangChain and Ollama for backend and model serving, FastAPI for REST API service, and its adaptability through the utilization of ChromaDB or Elasticsearch.

Coupled with an intuitive user interface built on Streamlit, eXact-RAG represents a significant leap forward in LLMs capabilities.

A Step back: what is a RAG?

Retrieval Augmented Generation, or RAG, is an architectural approach that can improve the efficacy of Large Language Model (LLM) applications by leveraging custom data. This is done by retrieving data/documents relevant to a question or task and providing them as context for the LLM. RAG has shown success in support chatbots and Q&A systems that need to maintain up-to-date information or access domain-specific knowledge.

RAG process schema. Source: neo4j.com

As the name suggests, RAG has two phases: retrieval and content generation. In the retrieval phase, algorithms search for and retrieve snippets of information relevant to the user’s prompt or question. In an open-domain, consumer setting, those facts can come from indexed documents on the internet; in a closed-domain, enterprise setting, a narrower set of sources are typically used for added security and reliability.

Understanding eXact-RAG

At its core, eXact-RAG combines the principles of RAG, which focuses on retrieval-based conversational agents, with multimodal capabilities, enabling it to process and generate responses from various modalities such as text, images, and audio.

This versatility makes eXact-RAG well-suited for a wide range of applications, from chatbots to content recommendation systems and beyond.

Technologies Powering eXact-RAG
  1. LangChain and Ollama: LangChain and Ollama serve as the backbone of eXact-RAG, providing robust infrastructure for model development, training, and serving.

    LangChain offers a comprehensive suite of tools for natural language understanding and processing, while Ollama specializes in multimodal learning, enabling eXact-RAG to seamlessly integrate and process diverse data types.
  2. FastAPI for REST API Service: FastAPI, known for its high performance and simplicity, serves as the interface for eXact-RAG, facilitating seamless communication between the backend system and external applications.

    Its asynchronous capabilities ensure rapid response times, crucial for real-time interactions.
  3. ChromaDB or Elasticsearch: eXact-RAG offers flexibility in data storage and retrieval by supporting both ChromaDB and Elasticsearch.

    ChromaDB provides a lightweight solution suitable for simpler tasks, while Elasticsearch caters to more complex operations involving vast amounts of data. This versatility enables users to tailor eXact-RAG to their specific needs, balancing performance and scalability accordingly.
User-Friendly Interface with Streamlit for demo purposes

The user interface of eXact-RAG is built on Streamlit, a popular framework for creating interactive web applications with Python. Streamlit’s intuitive design and seamless integration with Python libraries allow users to interact with eXact-RAG effortlessly.

Through the interface, users can input queries, explore results, and interact with generated content across various modalities, enhancing the overall user experience.

Related article: From concepts to MVPs: Validate Your Idea in few Lines of Code with Streamlit

Applications of eXact-RAG

The versatility of eXact-RAG opens up a myriad of applications across different domains:

  • Conversational Agents: eXact-RAG can power chatbots and virtual assistants capable of engaging users in natural and meaningful conversations, leveraging both text and multimedia inputs.
  • Content Recommendation: By analyzing user preferences and behavior, eXact-RAG can recommend personalized content, including articles, videos, and images, tailored to individual tastes and interests.
  • Information Retrieval: eXact-RAG excels at retrieving relevant information from large datasets, making it invaluable for tasks such as question answering, document summarizing, and knowledge base retrieval.
eXact-RAG flow schema

Let’s play with eXact-RAG

The first step to use eXact-RAG is to run your preferred LLM model using Ollama or to get an OpenAI token. Both are supported by the RAG; to configure them correctly, fill in settings.toml with the preferred options. Here is an example:

[embedding]
type = "openai"
api_key = ""
chat.model_name = "gpt-3.5-turbo"
...

[database]
type = "chroma"
persist_directory = "persist"
...

In this example it’s possible to configure the LLM model and the vector database in which to store the embeddings of your local data.

eXact-RAG is a multimodal RAG; for this reason it can ingest different kinds of data, like audio files or images. It is possible to choose, at installation time, which “backends” to install:

poetry install # -E audio -E image
  • audio extra will install openai-whisper for speech-to-text
  • image extra will install transformers and pillow for image captioning*

* this feature makes it possible to process images even if the user does not have the hardware to run a vision model like llava locally but still wants to pass images as data.

Now the job is done! The following command

poetry run python exact_rag/main.py

starts the server; the OpenAPI document (Swagger) listing all the available endpoints is then shown at http://localhost:8080/docs.

Demo

eXact-RAG was built as a server for multimodal RAGs, but we also provide a user interface, just for demo purposes, to test all the features.
To run the UI just use the command:

poetry run streamlit run frontend/ui.py

Now, the page at http://localhost:8501 will show a chat interface like in the following example:

Conclusion

eXact-RAG represents a significant advancement in the field of multimodal RAG systems, offering unparalleled versatility and performance through its integration of cutting-edge technologies.

With its robust backend powered by LangChain and Ollama, flexible data storage options, and user-friendly interface built on Streamlit, eXact-RAG is poised to revolutionize various applications of natural language processing and multimodal learning.

As the demand for sophisticated LLM solutions continues to grow, eXact-RAG stands ready to meet the challenges of tomorrow’s digital landscape.

Categories: FLOSS Project Planets

C.J. Collier: Signed NVIDIA drivers on Google Cloud Dataproc 2.2

Planet Debian - Thu, 2024-06-20 11:38

Hello folks,

I’ve been working this year on better integrating NVIDIA hardware with the Google Cloud Dataproc product (Hadoop on Google Cloud) running the default cluster node image. We have an open bug[1] in the initialization-actions repo regarding creation failures upon enabling secure boot. This is because with secure boot, kernel driver code has its signature verified before insmod places the symbols into kernel memory. The verification process involves reading trust root certificates from EFI variables, and validating that the signatures on the kernel driver either a) were made directly by one of the certificates in the boot sector or b) were made by certificates which chain up to one of them.

This means that Dataproc disk images must have a certificate installed into them. My work on the internals will likely start producing images which have certificates from Google in them. In the meantime, however, our users are left without a mechanism to have both secure boot enabled and install out-of-tree kernel modules such as the NVIDIA GPU drivers. To that end, I’ve got PR #83[2] open with the GoogleCloudDataproc/custom-images GitHub repository. This PR introduces a new argument to the custom image creation script, `--trusted-cert`, the argument of which is the path to a DER-encoded certificate to be included in the certificate database in the EFI variables of the disk’s boot sector.

I’ve written up the instructions on creating a custom image with a trusted certificate here:

https://github.com/cjac/custom-images/blob/secure-boot-custom-image/examples/secure-boot/README.md

Here is a set of commands that can be used to create a general purpose GCE disk image with the certificate inserted into EFI.

wget 'https://github.com/cjac/custom-images/raw/secure-boot-custom-image/examples/secure-boot/create-key-pair.sh'
bash create-key-pair.sh
cacert_der=tls/db.der

# The Microsoft Corporation UEFI CA 2011
ms_uefi_ca="tls/MicCorUEFCA2011_2011-06-27.crt"
test -f "${ms_uefi_ca}" || \
  curl -L -o ${ms_uefi_ca} 'https://go.microsoft.com/fwlink/p/?linkid=321194'

JSON="$(gcloud compute images --project debian-cloud list --format json)"
SRC_IMAGE_NAME="$(echo ${JSON} | jq -r '.[] | .name' | grep -i ^debian-12-bookworm-v)"
NEW_IMAGE_NAME="debian12-with-db-key-list-${USER}-$(date +%F)"
SRC_DISK_ZONE="${ZONE}"

gcloud -q compute images create "${NEW_IMAGE_NAME}" \
  --source-disk "${SRC_IMAGE_NAME}" \
  --source-disk-zone "${SRC_DISK_ZONE}" \
  --signature-database-file="${cacert_der},${ms_uefi_ca}"

I’d love to hear your feedback!

[1] https://github.com/GoogleCloudDataproc/initialization-actions/issues/1058
[2] https://github.com/GoogleCloudDataproc/custom-images/pull/83

Categories: FLOSS Project Planets

Daniel Lange: Fixing esptool read_flash above 2MB on some cheap ESP32 boards

Planet Debian - Thu, 2024-06-20 10:00

esptool, the Espressif SoC serial bootloader utility, tends to dislike cheap Flash chips attached to the various incarnations of the ESP32 chip family. And it seems to dislike them even more when running esptool on Linux than on other OSs.

The common error mode is seeing it break at the 2MB barrier when trying to dump (esptool read_flash) a 4MB flash configuration.

esptool -p /dev/ttyUSB0 -b 921600 read_flash 0 0x400000 flash_dump.bin

will fail with

esptool.py v4.7.0
Serial port /dev/ttyUSB0
Connecting....
Detecting chip type... ESP32
Chip is ESP32-D0WD-V3 (revision v3.1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
[..]
Detected flash size: 4MB
[..]
2097152 (50 %)
A fatal error occurred: Failed to read flash block (result was 01090000: CRC or checksum was invalid)

typically at the 2MB barrier.

I found the solution in a rather unrelated esptool Github issue:

Create an esptool.cfg file in the project directory (from where you will run esptool):

[esptool]
timeout = 30
max_timeout = 240
erase_write_timeout_per_mb = 40
mem_end_rom_timeout = 0.2
serial_write_timeout = 10

The timeout = 30 is the setting that fixed reading flash memory via esptool read_flash for me.

When your esptool.cfg is read, esptool will tell you so in its second line of output:

$ esptool flash_id
esptool.py v4.7.0
Loaded custom configuration from /home/dl/[..]/Embedded_dev/ESP-32_Wemos/esptool.cfg
Found 1 serial ports
Serial port /dev/ttyUSB0
Connecting......
[..]

Thank you Radim Karnis and wibbit from the Github issue linked above.

Categories: FLOSS Project Planets

Kdenlive 24.05.1 released

Planet KDE - Thu, 2024-06-20 08:50

The first maintenance release of the 24.05 series is out, fixing issues in the spacer tool, effects and compositions, subtitle management, and project settings, to name a few. We addressed recently introduced crashes and freezes, including fixing the undo/redo track insertion and multiple track insertion issues. This version also improves AppImage packaging and enables notarization for macOS.

Full changelog

  • Don’t try renaming sequence on double click in empty area of timeline tab bar. Commit.
  • Fix deletion of wrong effect with multiple instances of an effect and group effects enabled. Commit.
  • Fix single selected clip disappearing from timeline when dragging a new clip in timeline. Commit.
  • [cmd rendering] Ensure proper kdenlive_render path for AppImage. Commit.
  • Fix freeze/crash on undo/redo track insertion. Commit.
  • Fix crash on undo/redo multiple track insertion. Commit.
  • Project settings: don’t list embedded title clips as empty files in the project files tab. Commit.
  • Fix undo move effect up/down. On effect move, also move the active index, increase margins between effects. Commit.
  • Fix removing a composition from favorites. Commit.
  • Properly activate effect when added to a timeline clip. Commit.
  • Fix spacer tool can move backwards and overlap existing clips. Commit.
  • Fix crash deleting subtitle when the file url was selected. Commit. Fixes bug #487872.
  • Fix build when using openGLES. Commit. Fixes bug #483425.
  • Fix possible crash on project opening. Commit.
  • Fix extra dash added to custom clip job output. Commit. See bug #487115.
  • Fix usage of QUrl for LUT lists. Commit. See bug #487375.
  • Fix default keyframe type referencing the old deprecated smooth type. Commit.
  • Be more clever splitting custom ffmpeg commands around quotes. Commit. See bug #487115.
  • Fix effect name focus in save effect. Commit. See bug #486310.
  • Fix tests. Commit.
  • Fix selection when cutting an unselected clip under mouse. Commit.
  • Fix loading timeline clip with disabled stack should be disabled. Commit.
  • Fix crash trying to save effect with slash in name. Commit. Fixes bug #487224.
  • Remove quotes in custom clip jobe, fix progress display. Commit. See bug #487115.
  • Fix setting sequence thumbnail from clip monitor. Commit.
  • Fix locked track items don’t have red background on project open. Commit.
  • Fix spacer tool doing fake moves with clips in locked tracks. Commit.
  • Hide timeline clip status tooltip when mouse leaves. Commit.

The post Kdenlive 24.05.1 released appeared first on Kdenlive.

Categories: FLOSS Project Planets

PyBites: What is the Repository Pattern and How to Use it in Python?

Planet Python - Thu, 2024-06-20 08:17

The repository pattern is a design pattern that helps you separate business logic from data access code.

It does so by providing a unified interface for interacting with different data sources, bringing the following advantages to your system:

  • Flexibility: You can swap out your data source without changing business logic.
  • Maintainability: It’s easier to manage and test different storage backends.
  • Decoupling: You reduce tight coupling between your application and data access layer.
A practical example

Let’s use Python and sqlmodel (PyPI) to demonstrate this pattern (code here):

from abc import ABC, abstractmethod

from sqlmodel import SQLModel, create_engine, Session, Field, select


# Define the model
class Item(SQLModel, table=True):
    id: int = Field(default=None, primary_key=True)
    name: str


# Repository Interface
class IRepository(ABC):
    @abstractmethod
    def add(self, item: Item):
        pass

    @abstractmethod
    def get(self, name: str) -> Item | None:
        pass


# SQLModel implementation
class SQLModelRepository(IRepository):
    def __init__(self, db_string="sqlite:///todo.db"):
        self.engine = create_engine(db_string)
        SQLModel.metadata.create_all(self.engine)
        self.session = Session(self.engine)

    def add(self, item: Item):
        self.session.add(item)
        self.session.commit()

    def get(self, name: str) -> Item | None:
        statement = select(Item).where(Item.name == name)
        return self.session.exec(statement).first()


# CSV implementation
class CsvRepository(IRepository):
    def __init__(self, file_path="todo.csv"):
        self._file_path = file_path

    def add(self, item: Item):
        with open(self._file_path, "a") as f:
            f.write(f"{item.id},{item.name}\n")

    def get(self, name: str) -> Item | None:
        with open(self._file_path, "r") as f:
            return next(
                (
                    Item(id=int(id_str), name=item_name)
                    for line in f
                    if (id_str := line.strip().split(",", 1)[0])
                    and (item_name := line.strip().split(",", 1)[1]) == name
                ),
                None,
            )


if __name__ == "__main__":
    repo = SQLModelRepository()
    repo.add(Item(name="Buy Milk"))
    sql_item = repo.get("Buy Milk")

    # Swap out the repository implementation
    csv_repo = CsvRepository()
    csv_repo.add(Item(id=1, name="Buy Milk"))
    csv_item = csv_repo.get("Buy Milk")

    print(f"{sql_item=}, {csv_item=}, {sql_item == csv_item=}")
    # outputs:
    # sql_item=Item(name='Buy Milk', id=1), csv_item=Item(id=1, name='Buy Milk'), sql_item == csv_item=True
  • First we define the Item model using SQLModel. You can also use SQLAlchemy or any other ORM, but I find SQLModel a bit easier to use and I like its integration with Pydantic.
  • Next we define the IRepository interface with the add and get methods, making them required for any class that inherits from it. This is done using Python’s Abstract Base Classes (ABCs) and applying the @abstractmethod decorator. This is a way to “enforce a contract” and a crucial part of the repository pattern (see also the tests). If you don’t implement these methods in a subclass, Python will raise an error (a short demonstration follows this list). This is a way to ensure that all repository classes have the same interface, even if they use different storage. To learn more about ABCs, check out our article.
  • Then we implement the SQLModelRepository and CsvRepository subclasses, which inherit from IRepository. These classes implement the add and get methods, which are required by the IRepository interface. This is where we define the data access logic for each storage backend. The SQLModelRepository uses SQLModel to interact with a SQLite database, while the CsvRepository interacts with a CSV file. Same interface, different storage backends.
  • Finally, we demonstrate how to use the repository pattern by adding an item to both the SQL and CSV repositories and then retrieving it.
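For example, a subclass that forgets one of the abstract methods cannot even be instantiated. Here is a quick illustration using a deliberately incomplete, hypothetical class alongside the code above:

class BrokenRepository(IRepository):
    def add(self, item: Item):  # note: get() is deliberately missing
        print(f"pretending to store {item.name}")

try:
    BrokenRepository()
except TypeError as exc:
    # Python refuses to instantiate a class with unimplemented abstract methods
    print(exc)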

Note: In this implementation, we did not add error handling to keep the example relatively small. You should ensure that if something goes wrong (e.g., database connection issues or file access errors), the program can handle the error gracefully and provide useful feedback.

The example might be a bit contrived, but it shows how we can leverage the repository pattern in Python.

Again, the advantage is that we have a flexible design that allows us to swap out the data access implementation without changing the business logic. This makes our code easier to test and maintain.

Have you used this pattern yourself and if so, how? Hit us up on social media and let us know: @pybites …

Categories: FLOSS Project Planets

DrupalEasy: Visual Debugger module: a modern take on an old idea

Planet Drupal - Thu, 2024-06-20 08:07

If you've been a Drupal developer since before the time of modern Drupal (pre-Drupal 8), then you probably remember the Theme Developer module. This module is/was used to figure out theme template usage and possible overrides in Drupal 7.

With modern Drupal, this same information is available directly in the HTML source code of a page when Twig development mode is enabled.

The Visual Debugger module by Marcos Hollanda combines the best of the present with the best of the past to surface template information directly in the user interface without having to dig into the page's HTML. 

Pre-impressions

I will admit that before I installed the module, I was doubtful that I'd be interested in using it. Marcos had pinged me on Slack asking me to take a look at it, and I figured it would be a nice exercise for DrupalEasy Office Hours, which it was.

Much to my surprise though, I really like using it. 

Basic usage

There's not much to say about how to use this module - it works just like you'd expect. Use Composer to get the code, enable the module, then enable Twig development mode and you're done. By default it will appear on each non-admin page of your site, with a handy activate/deactivate icon. There's really not too much to say about its usage 😀.

Current status

The module is in alpha status with several planned features not yet implemented. What is implemented so far appears to be solid and useful. The basic functionality that I expected is there. Click on any section of the page, and the module will display the template file that is used to render that section as well as a list of possible template file override names. This is the same information that Twig development mode displays in HTML comments.

Marcos let me know that other planned features include theme hook suggestions and displaying caching information (similar to the Cache review module, perhaps?).

There are other challenges to overcome as well, including how to handle selecting overlapping elements. I did make one very minor suggestion to relocate the module's configuration page which Marcos quickly implemented 🤘🏼.

Who is this module for?

I would definitely put this module in the category of a developer tool for folks who are either new to Drupal theme development or people who prefer a UI instead of scrolling through raw HTML.

I plan on following the progress of this module closely, in hopes that it will be suitable to use during the next semester of Drupal Career Online, where we have a theming lesson that could definitely benefit from this module. 

Categories: FLOSS Project Planets

1xINTERNET blog: 10 React use cases at 1xINTERNET

Planet Drupal - Thu, 2024-06-20 08:00

React is one of the primary technologies in our technology stack. This article explores our React use cases, outlining their purpose, technical aspects and implementation examples.

Categories: FLOSS Project Planets

The Drop Times: Driving Drupal Forward: Suzanne Dergacheva on the Strategic Rebranding of Drupal

Planet Drupal - Thu, 2024-06-20 07:28
In an in-depth interview with The DropTimes, Suzanne Dergacheva, co-founder of Evolving Web and leader of the Promote Drupal Initiative, discusses the comprehensive efforts behind the Drupal rebranding. Suzanne shares valuable insights into the factors that prompted this evolution, the collaborative work with the Drupal community, and the strategic goals aimed at enhancing Drupal's market presence. She also touches on the new visual identity, the importance of community feedback, and future plans to engage diverse global audiences. This conversation complements our earlier interview with Shawn Perritt, head of the Drupal Rebranding initiative.
Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Generating migrations with Migrate Upgrade

Planet Drupal - Thu, 2024-06-20 07:14

Series Overview & ToC | Previous Article | Next Article - coming June 27th

In the previous article we performed an automated migration via the administration interface with tools provided by Drupal core out of the box. This gave us the opportunity to familiarize ourselves with the different modules that assist with the task, settings that affect the migration, and the results when there are no customizations in place. Today we will leverage the Migrate Plus and Migrate Upgrade modules to generate migrations from the command line, which we can then pick from and customize.

Automated migration using the Migrate Upgrade module

Let’s start by listing the modules that we will use to run the automated migration from the command line:

  • Migrate: consists of the core API that can be used to import data from any source.
  • Migrate Drupal: allows connecting to a Drupal 6 or 7 site to import configuration and content.
  • Migrate Plus: provides extra source, process, and destination plugins; handles migrations as configuration entities; allows reusing of configuration via migration groups; and more. It is a dependency of Migrate Upgrade and a very useful module on its own.
  • Migrate Upgrade: provides a Drush...

Categories: FLOSS Project Planets

Talk Python to Me: #467: Data Science Panel at PyCon 2024

Planet Python - Thu, 2024-06-20 04:00
I have a special episode for you this time around. We're coming to you live from PyCon 2024. I had the chance to sit down with some amazing people from the data science side of things: Jodie Burchell, Maria Jose Molina-Contreras, and Jessica Greene. We cover a whole set of recent topics from a data science perspective. Though we did have to cut the conversation a bit short as they were coming from and going to talks they were all giving, it was still a pretty deep conversation.

Episode sponsors

  • Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
  • Code Comments: https://talkpython.fm/code-comments
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • Jodie Burchell: @t_redactyl (https://twitter.com/t_redactyl)
  • Jessica Greene: https://www.linkedin.com/in/jessica0greene
  • Maria Jose Molina-Contreras: https://www.linkedin.com/in/mjmolinacontreras/
  • Talk Python's free Shiny course: https://talkpython.fm/shiny
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=QYiRrHnEomw
  • Episode transcripts: https://talkpython.fm/episodes/transcript/467/data-science-panel-at-pycon-2024

Stay in touch with us

  • Subscribe to us on YouTube: https://talkpython.fm/youtube
  • Follow Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
  • Follow Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Categories: FLOSS Project Planets
