Feeds

Real Python: Quiz: Build a Guitar Synthesizer

Planet Python - Tue, 2024-06-18 08:00

In this quiz, you’ll test your understanding of what it takes to build a guitar synthesizer in Python. By working through this quiz, you’ll revisit a few key concepts from music theory and sound synthesis.


Categories: FLOSS Project Planets

CodersLegacy: Adding Data Files in Nuitka (Python Files, Images, etc)

Planet Python - Tue, 2024-06-18 06:09

Nuitka is a Python-to-C compiler that converts Python code into executable binaries. While Nuitka efficiently compiles Python scripts, incorporating data files such as images, audio, video, and additional Python files can be a little tricky. This guide outlines the steps to include various data files in a Nuitka standalone executable.


Basic Setup for Nuitka

Before delving into the inclusion of data files, ensure you have Nuitka installed. You can install Nuitka via pip:

pip install nuitka

For a more detailed starter guide on Nuitka and its various options, you can follow this link to our main Nuitka Tutorial. If you're already familiar with Nuitka, then proceed ahead in this tutorial to learn how to add data files to your Nuitka EXE.


Compiling a Python Script with Nuitka

Assuming you have a Python script named main.py, the basic command to compile it into a standalone executable using Nuitka is:

python -m nuitka --standalone main.py

This command creates a standalone directory containing the executable and all necessary dependencies. However, if your program relies on external data files (images, audio, video, other Python files), you need to explicitly include these files.


Adding Data Files in Nuitka

1. Adding Individual Files

To include individual files such as images, use the --include-data-files option. The syntax is:

--include-data-files=<source-location>=<target-location>

For example, to include an image file favicon.ico, the command is:

--include-data-files=./favicon.ico=favicon.ico

This command tells Nuitka to include the favicon.ico file from the current directory and place it in the same location within the output directory.
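At runtime, the included file ends up next to the compiled program, so it is safest to resolve its path relative to the module rather than the current working directory. Here is a minimal sketch of what that lookup could look like inside main.py (the file name matches the example above; everything else is illustrative):

import os

# In a standalone build, included data files sit alongside the compiled program,
# so build the path from this module's location instead of the working directory.
ICON_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "favicon.ico")
print(ICON_PATH, os.path.exists(ICON_PATH))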


2. Adding Directories

To include entire directories, use the --include-data-dir option. The syntax is the same as earlier:

--include-data-dir=<source-location>=<target-location>

For instance, to include an entire directory named Images, the command is:

--include-data-dir=./Images=Images

This will copy the Images directory from the current location to the output directory, preserving its structure.


Example: Adding Data Files of Multiple Types

Suppose you have a project with the following files and directories:

  • main.py: The main Python script.
  • favicon.ico: An icon file.
  • Images/: A directory containing images.
  • audio.mp3: An audio file.
  • videos/: A directory containing video files.
  • utils.py: An additional Python file used in the project.

To compile this project with all necessary data files included, the command would be:

python -m nuitka --standalone --include-data-files=./favicon.ico=favicon.ico --include-data-files=./audio.mp3=audio.mp3 --include-data-dir=./Images=Images --include-data-dir=./videos=videos --include-data-files=./utils.py=utils.py --disable-console main.py

Pretty confusing, isn’t it?

Having to write out this command over and over again each time you want to re-build your executable is a massive pain. Instead, I recommend you use Nuitka Configuration Files, which allow you to write out all your dependencies/data files/imports in a single file, which can then be executed as many times as you like.
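As a related option (this is a separate mechanism from the configuration files linked above, and the flags below simply mirror the earlier example), Nuitka also reads "project options": specially formatted comments at the top of the compiled script that it treats as command-line flags, so the build command itself stays short. A sketch of what the top of main.py could look like under that assumption:

# nuitka-project: --standalone
# nuitka-project: --include-data-files=./favicon.ico=favicon.ico
# nuitka-project: --include-data-dir=./Images=Images

With these comments in place, a plain python -m nuitka main.py should pick the options up automatically.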


Final Word

By using the --include-data-files and --include-data-dir options, you can include all necessary data files in your Nuitka-compiled executable. This ensures that your standalone application has all the resources it needs to run successfully, whether they are images, audio, video files, or additional Python scripts. Combining these options with plugins and other configurations allows for the creation of robust and self-sufficient executables.

Happy coding!

This marks the end of the Adding Data Files in Nuitka Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome, as are questions regarding the tutorial content.

The post Adding Data Files in Nuitka (Python Files, Images, etc) appeared first on CodersLegacy.

Categories: FLOSS Project Planets

Python Bytes: #388 Don't delete all the repos

Planet Python - Tue, 2024-06-18 04:00
Topics covered in this episode:
  • PSF Elections coming up
  • Cloud engineer gets 2 years for wiping ex-employer’s code repos
  • Python: Import by string with pkgutil.resolve_name()
  • DuckDB goes 1.0
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=yweZO_BiYfw

About the show

Sponsored by ScoutAPM: pythonbytes.fm/scout

Connect with the hosts
  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: PSF Elections coming up
  • This is elections for the PSF Board and for 3 bylaw changes.
  • To vote in the PSF election, you need to be a Supporting, Managing, Contributing, or Fellow member of the PSF, and affirm your voting status by June 25.
  • See "Affirm your PSF Membership Voting Status" for more details.
  • Timeline
    • Board Nominations open: Tuesday, June 11th, 2:00 pm UTC
    • Board Nominations close: Tuesday, June 25th, 2:00 pm UTC
    • Voter application cut-off date: Tuesday, June 25th, 2:00 pm UTC (the same date applies to voter affirmation)
    • Announce candidates: Thursday, June 27th
    • Voting start date: Tuesday, July 2nd, 2:00 pm UTC
    • Voting end date: Tuesday, July 16th, 2:00 pm UTC
  • See also "Thinking about running for the Python Software Foundation Board of Directors? Let’s talk!" — there’s still one upcoming office hours session on June 18th, 12 PM UTC.
  • And "For your consideration: Proposed bylaws changes to improve our membership experience" — 3 proposed bylaws changes.

Michael #2: Cloud engineer gets 2 years for wiping ex-employer’s code repos (bleepingcomputer.com)
  • Miklos Daniel Brody, a cloud engineer, was sentenced to two years in prison and a restitution of $529,000 for wiping the code repositories of his former employer in retaliation for being fired.
  • The court documents state that Brody's employment was terminated after he violated company policies by connecting a USB drive.

Brian #3: Python: Import by string with pkgutil.resolve_name() (adamj.eu)
  • Adam Johnson
  • You can use pkgutil.resolve_name("<module>:<object>") to import classes, functions or modules using strings.
  • You can also use importlib.import_module("<module>")
  • Both of these techniques are so that you have an object imported, but the end thing isn’t imported into the local namespace.

Michael #4: DuckDB goes 1.0
  • via Alex Monahan
  • The cloud-hosted product @MotherDuck also opened up General Availability
  • Codenamed "Snow Duck"
  • The core theme of the 1.0.0 release is stability.

Extras

Brian:
  • Sending us topics. Please send before Tuesday. But any time is welcome.
  • NumPy 2.0
  • htmx 2.0.0

Michael:
  • Get 6 months of PyCharm Pro for free. Just take a course (even a free one) at Talk Python Training. Then visit your account page > details tab and have fun.
  • Coming soon at Talk Python: Shiny for Python

Joke: .gitignore thoughts won't let me sleep
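If you want to see what import-by-string looks like in practice, here is a small illustrative example using only standard-library names (the specific module and attribute are arbitrary choices, not from the episode):

import importlib
import pkgutil

# Resolve an object from a "module:qualified.name" string.
logger_cls = pkgutil.resolve_name("logging:Logger")

# Import a whole module from a dotted-path string.
json_mod = importlib.import_module("json")

print(logger_cls, json_mod.dumps({"resolved": True}))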
Categories: FLOSS Project Planets

Specbee: Getting started with integrating Drupal and Tailwind CSS

Planet Drupal - Tue, 2024-06-18 02:57
If you’re tired of spending hours tweaking CSS stylesheets to get your website looking just right, then you should consider a new way of CSS’ing (yeah, we just made it up). We’re talking about Tailwind CSS - a different way of doing CSS. It uses a utility-first approach, which means instead of creating a class for a component, let’s say a button, you would use a pre-defined class that’s built to style the button, like a “font-bold” or a “text-white”. All of this right from the comfort of your HTML! Couple Tailwind CSS, a sleek, utility-based open-source CSS framework, with a modular, highly customizable, and modern CMS like Drupal 10, and you have a powerhouse combination! In this article, we’ll be talking about Drupal and Tailwind CSS AND how you can integrate them seamlessly to build modern, responsive, and visually stunning websites.

Why Drupal is a good choice for web development

Drupal is an open-source content management system (CMS) that has been gaining popularity among web developers for its versatility and powerful features. The latest version, Drupal 10, brings a host of new features and improvements meant to simplify development and enhance user experience.

1. Highly customizable design: One of the biggest advantages of using Drupal is its highly customizable design. With a wide range of themes and templates available, developers have the freedom to create unique and visually appealing websites that cater to their client's specific needs.

2. Flexibility and scalability: Drupal's modular architecture makes it incredibly flexible and scalable, making it suitable for building websites of any size or complexity. It offers a vast library of modules that can be easily integrated into your website, providing additional functionality such as e-commerce capabilities, social media integration, multilingual support, and more.

3. Robust security measures: Security is a major concern in today's times; hence it is crucial to choose a CMS that prioritizes it. Drupal has always been recognized as one of the most secure CMS platforms due to its stringent security standards and frequent updates that address any potential vulnerabilities promptly.

4. SEO-friendly: Search Engine Optimization (SEO) is vital for every website's success as it determines its visibility on search engines like Google. Drupal's built-in SEO tools assist developers in optimizing their websites by generating search engine-friendly URLs, meta tags, and sitemaps, and providing customizable page titles and descriptions.

5. Active community support: Drupal benefits from a substantial and engaged group of developers who contribute to its ongoing evolution and offer support to fellow users. This community-driven approach ensures that any issues or bugs are addressed promptly, and new updates and features are released regularly.

Why Tailwind CSS is great for web developers

Tailwind CSS has gained immense popularity in the web development community due to its numerous benefits and advantages.

1. Highly customizable: One of the biggest advantages of Tailwind CSS is its high level of customization. Unlike other popular front-end frameworks like Bootstrap or Foundation, Tailwind does not come with predefined styles and components. Instead, it provides a comprehensive set of utility classes that can be used to create custom designs according to your specific project needs. This allows developers to have complete control over their designs and enables them to create unique and personalized user experiences.

2. Lightweight: Another major benefit of using Tailwind CSS is its lightweight nature. Since it only generates the required classes based on your design choices, it helps keep the file size small and improves website performance. This is especially beneficial for Drupal websites which often have heavy content and functionality that can slow down page loading speed.

3. Mobile-responsive: With more people accessing websites through mobile devices, having a mobile-responsive design has become crucial for any website's success. The use of Tailwind CSS makes it easier to create responsive designs without adding extra code or media queries. Its mobile-first approach ensures that your website looks great on all screen sizes without compromising on performance.

4. Faster development process: By eliminating the need for manually writing complex CSS rules, Tailwind speeds up the development process significantly. It also reduces the chances of errors as developers can easily reuse existing utility classes instead of creating new ones from scratch every time they need a similar style element.

5. Scalable designs: With Tailwind CSS, you can create scalable designs that are easy to maintain and modify in the long run. As your website grows and evolves, making changes to its design becomes more manageable with Tailwind's utility classes, as opposed to traditional frameworks where modifying styles can be a tedious process.

Why Integrate Drupal with Tailwind CSS?

One of the main reasons to integrate Drupal with Tailwind CSS is its ability to simplify web development. With Tailwind CSS, developers can avoid writing repetitive and lengthy code for styling. Instead, they can use pre-built classes that provide consistent and reusable design patterns. This not only saves time but also results in cleaner code that is easier to maintain.

Another benefit of integrating Drupal with Tailwind CSS is its responsive design capabilities. With the increasing use of mobile devices, it has become essential for websites to be optimized for all screen sizes. Thanks to Tailwind's built-in responsive utilities, designers can easily build responsive layouts without having to write separate media queries for different devices.

Step-by-Step Guide to Integrating Drupal with Tailwind CSS

Step 1: Get your prerequisites ready

  • Local Drupal setup
  • NPM and Yarn package managers
  • Drush
  • Lando

Step 2: Create a custom theme

We will begin by creating a Drupal custom theme and then will integrate Tailwind CSS into the same. In this tutorial, I am using the starter kit theme command to generate my custom theme. Go to the web directory and type in the following command in the terminal:

php core/scripts/drupal generate-theme drupal_tailwind_theme

After pressing the enter key, our custom theme will be generated as shown below.

Next, we will generate the package.json file, which we require to initialize a new Node.js project. Go to the terminal and type in the following command:

npm init

It prompts us with a series of questions regarding our project; for this tutorial I am keeping everything default by pressing the enter key until our package.json file is added as shown in the image.

Step 3: Install and set up Tailwind CSS

Now let’s install Tailwind and its dependencies and also configure it. In the terminal, run the following command:

npm install -D tailwindcss postcss autoprefixer

Next, let’s generate a Tailwind CSS config file using the following command:

npx tailwindcss init -p

Go to the tailwind.config.js file and set the path for Tailwind to scan. Since our Twig templates are in the templates directory, we will use that path as shown in the image.

Now we need to create a CSS file to put the Tailwind directives in. Go to the src folder, create a file named index.css, and paste the following directives:

@tailwind base;
@tailwind components;
@tailwind utilities;

Create a folder where the Tailwind CSS will be compiled. On the same level as the src folder, create a folder named dist (or any name you prefer). We now run the following command:

npx tailwindcss --input src/index.css --output dist/index.css

After executing this command, we will be able to see the compiled version of the Tailwind CSS in the dist folder. We can verify it by going to the dist folder as shown in the image.

In the package.json file, we will be adding the scripts dev and watch, which can be executed using npm run dev & npm run watch or using yarn dev & yarn watch. The dev script compiles the code once per run, while the watch script keeps a watch on any change.

Step 4: Adding Tailwind CSS to our Drupal theme

Now we will be adding Tailwind to our Drupal site. For that, we can add a library named tailwind and give it the CSS path dist/index.css, which is where we have our compiled Tailwind. Next, we can add that library either globally or to a specific template. For the purpose of this tutorial, I will be adding it globally.

Using Tailwind CSS in our Drupal theme

For this tutorial, I am using the page.html.twig file and adding a red background color by including the class bg-red-500 in the markup. Clear the cache and check the output. Our Tailwind CSS class has applied the desired background color.

Final Thoughts

Tailwind CSS is a great choice when you want to build websites quickly and easily. It integrates well with Drupal, and the combination of these two tools creates a powerful synergy for web development. Reach out to us if you would like to learn more about how Specbee’s Drupal experts can help you create stunning web experiences with Drupal and Tailwind CSS.
Categories: FLOSS Project Planets

Week 3 recap - points on a canvas

Planet KDE - Tue, 2024-06-18 00:56
Hey welcome to the week 3 recap blog. So after some testing, using the pixel art brush preset and clicking on individual pixels on the canvas will trigger functions like kis_paintop:paintAt and kis_brushop:paintAt. KisBrushOp represents a specific pa...
Categories: FLOSS Project Planets

Plasma 6.1

Planet KDE - Mon, 2024-06-17 20:00

Plasma 6 hits its stride with version 6.1. While Plasma 6.0 was all about getting the migration to the underlying Qt 6 frameworks correct (and what a massive job that was), 6.1 is where developers start implementing the features that will take your desktop to a new level.

In this release, you will find features that go far beyond subtle changes to themes and tweaks to animations (although there are plenty of those too), as you delve into interacting with desktops on remote machines, become more productive with usability and accessibility enhancements galore, and discover customizations that will even affect the hardware of your computer.

These features and more are being built directly into Plasma's Wayland version natively, avoiding the need for third party software and hacky extensions required by similar solutions implemented in X.

Things will only get more interesting from here. But meanwhile enjoy what will land on your desktop with your next update.

Note: Due to unforeseen circumstances, we have been unable to ship the new wallpaper, "Reef", with this version of Plasma. However, there will be a new wallpaper coming soon in the next 6.2 version.

If you can't wait, you can download "Reef" here.

We apologise for this delay and the inconvenience this may cause.

What's New Access Remote Plasma Desktops

One of the more spectacular (and useful) features added in Plasma 6.1 is that you can now start up a remote desktop directly from the System Settings app. This means that if you are sysadmin who needs to troubleshoot users' machines, or simply need to work on a Plasma-enabled computer that is out of reach, setting up a connection is just a few clicks away.

Once enabled, you can connect to the remote desktop using a client such as KRDC. You will see the remote machine's Plasma desktop in a window and be able to interact with it from your own computer.

Customization made (more) Visual

We all love customizing our Plasma desktops, don't we? One of the quickest ways to do this is by entering Plasma's Edit Mode (right-click anywhere on the desktop background and select Enter Edit Mode from the menu).

In version 6.1, the visual aspect of Edit Mode has been overhauled and you will now see a slick animation when you activate it. The entire desktop zooms out smoothly, giving you a better overview of what is going on and allowing you to make your changes with ease.

Persistent Apps

Plasma 6.1 on Wayland now has a feature that "remembers" what you were doing in your last session like it did under X11. Although this is still a work in progress, if you log off and shut down your computer with a dozen open windows, Plasma will now open them for you the next time you power up your desktop, making it faster and easier to get back to what you were doing.

Sync your Keyboard's Colored LEDs

It wouldn't be a new Plasma release without at least one fancy aesthetic customization feature. This time, however, we give you the power to reach beyond the screen, all the way onto your keyboard, as you can now synchronize the LED colours of your keys to match the accent colour on your desktop.

The ultimate enhancement for the fashion-conscious user.

Please note that this feature will not work on unsupported keyboards — support for additional keyboards is on the way!

And all this too...
  • We have simplified the choices you see when you try to exit Plasma by reducing the number of confusing options. For example, when you press Shutdown, Plasma will only list Shutdown and Cancel, not every single power option.

  • Screen Locking gives you the option of configuring it to behave like a traditional screensaver, as you can choose not to have it ask for a password to unlock it.

  • Two visual accessibility changes make it easier to use the cursor in Plasma 6.1:

    • Shake Cursor makes the cursor grow when you "shake" it. This helps you locate that tiny little arrow on your large, cluttered screens when you lose it among all those windows.

    • Edge Barrier is useful if you have a multi-monitor setup and want to access things on the very edge of your monitor. The "barrier" is a sticky area for your cursor near the edge between screens, and it makes it easier to click on things (if that is what you want to do), rather than having the cursor scooting across to the next display.

  • Two major Wayland breakthroughs will greatly improve your Plasma experience:

    • Explicit Sync eliminates flickering and glitches traditionally experienced by NVidia users.

    • Triple Buffering support in Wayland makes animations and screen rendering smoother.


and there's much more going on. To see the full list of changes, check out the changelog for Plasma 6.1.
Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #455 - Top 5 uses of AI for Drupal

Planet Drupal - Mon, 2024-06-17 16:08

Today we are talking about AI Tips for Drupal Devs, AI Best Practices, and Drupal Droid with guest Mike Miles. We’ll also cover AI interpolator as our module of the week.

For show notes visit: www.talkingDrupal.com/455

Topics
  • Top 5 tips
    • Idea Generation (Ideation)
    • Code Generation
    • Debugging
    • Content Generation
    • Technical Explanations
  • How do you suggest people use AI for Ideation
  • Is MIT Sloan using AI to help with Drupal Development
  • Does that code get directly inserted into your sites
  • What are some common pitfalls
  • Is your team using AI for debugging
  • Any best practices you have found helping when working with AI
  • Is MIT Sloan using AI for content generation
  • What is an example of how you use AI for technical explanations
  • What is your view on the future of AI in Drupal, do you think AI will replace Drupal developers
Resources Guests

Michael Miles - mike-miles.com mikemiles86

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Randy Fay - rfay

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to use AI to help populate entity fields that were left blank by Drupal content authors? There’s a module for that.
  • Module name/project name: AI Interpolator
  • Brief history
    • How old: created in Sep 2023 by Marcus Johansson of FreelyGive
    • Versions available: 1.0.0-rc4
  • Maintainership
    • Actively maintained, recent release in the past month
    • Security coverage - opted in, needs stable release
    • Test coverage
    • Documentation - User guide
    • Number of open issues: 18 open issues, none of which are bugs
  • Usage stats:
    • 94 sites
  • Module features and usage
    • In scientific fields, interpolation is the process of using known data to extrapolate or estimate unknown data points. In a similar way this module helps your Drupal site provide values for fields that didn’t receive input, based on the information that was provided.
    • Fundamentally Interpolator AI provides a framework and an API, and then relies on companion modules for processing, either by leveraging third-party services like AI LLMs, or PHP-based scripting.
    • There are existing integrations with a variety of AI services, including OpenAI, Dreamstudio, Hugging Face, and more.
    • You can add retrievers to help extract and normalize the content you’re processing, for example photos from an external site, and other tools to help normalize and optimize content and media, and optimize any prompts you will be using with AI services.
    • You can also extend the workflow capabilities of AI Interpolator, for example using the popular and powerful ECA module that we’ve talked about before on this show.
Categories: FLOSS Project Planets

PyBites: Deploying a FastAPI App as an Azure Function: A Step-by-Step Guide

Planet Python - Mon, 2024-06-17 14:07

In this article I will show you how to deploy a FastAPI app as a function in Azure. Prerequisites are that you have an Azure account and have the Azure CLI installed (see here).

Setup Azure

First you need to login to your Azure account:

az login

It should show your subscriptions and you select the one you want to use.

Create a resource group

Like this:

az group create --name myResourceGroup --location eastus

A resource group is a container that holds related resources for an Azure solution (of course use your own names throughout this guide).

It comes back with the details of the resource group you just created.

Create a storage account

Using this command:

az storage account create --name storage4pybites --location eastus --resource-group myResourceGroup --sku Standard_LRS

Note that for me, to get this working, I had to first make an MS Storage account, which I did in the Azure portal.

The storage account name can only contain lowercase letters and numbers. Also make sure it’s unique enough (e.g. mystorageaccount was already taken).

Upon success it comes back with the details of the storage account you just created.

Create a function app

Now, create a function app.

az functionapp create --resource-group myResourceGroup --consumption-plan-location eastus --runtime python --runtime-version 3.11 --functions-version 4 --name pybitesWebhookSample --storage-account storage4pybites --os-type Linux

Your Linux function app 'pybitesWebhookSample', that uses a consumption plan has been successfully created but is not active until content is published using Azure Portal or the Functions Core Tools.
Application Insights "pybitesWebhookSample" was created for this Function App. You can visit <URL> to view your Application Insights component
App settings have been redacted. Use `az webapp/logicapp/functionapp config appsettings list` to view.
{ ...JSON response... }

A bit more what these switches mean:

  • --consumption-plan-location is the location of the consumption plan.
  • --runtime is the runtime stack of the function app, here Python.
  • --runtime-version is the version of the runtime stack, here 3.11 (3.12 was not available at the time of writing).
  • --functions-version is the version of the Azure Functions runtime, here 4.
  • --name is the name of the function app (same here: make it something unique / descriptive).
  • --storage-account is the name of the storage account we created.
  • --os-type is the operating system of the function app, here Linux (Windows is the default but did not have the Python runtime, at least for me).
Create your FastAPI app

Go to your FastAPI app directory and create a requirements.txt file with your dependencies if not done already:

To demo this from scratch, let’s create a simple FastAPI webhook app (the associated repo is here).

mkdir fastapi_webhook
cd fastapi_webhook
mkdir webhook

First, create a requirements.txt file:

fastapi
azure-functions

The azure-functions package is required to run FastAPI on Azure Functions.

Then create an __init__.py file inside the webhook folder with the FastAPI app code:

import azure.functions as func
from fastapi import FastAPI

app = FastAPI()


@app.post("/webhook")
async def webhook(payload: dict) -> dict:
    """
    Simple webhook that just returns the payload.

    Normally you would do something with the payload here.
    And add some security like secret key validation (HMAC for example).
    """
    return {"status": "ok", "payload": payload}


async def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    return await func.AsgiMiddleware(app).handle_async(req, context)

This took a bit of trial and working to get it working, because it was not evident that I had to use AsgiMiddleware to run FastAPI on Azure Functions. This article helped with that.
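As a side note on the secret key validation mentioned in the webhook's docstring, a minimal HMAC sketch could look like the following; the secret value and the idea of reading the signature from a request header are assumptions, not part of the original app:

import hashlib
import hmac

# Hypothetical shared secret -- in practice this would come from app settings.
WEBHOOK_SECRET = b"change-me"


def signature_is_valid(body: bytes, received_signature: str) -> bool:
    # Compare an HMAC-SHA256 hex digest of the raw body against the received value.
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)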

Next, create a host.json file in the root of the project for the Azure Functions host:

{ "version": "2.0", "extensions": { "http": { "routePrefix": "" } } }

A local.settings.json file for local development:

{ "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true", "FUNCTIONS_WORKER_RUNTIME": "python" } }

And you’ll need a function.json file in the webhook directory that defines the configuration for the Azure Function:

{ "scriptFile": "__init__.py", "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["post"] }, { "type": "http", "direction": "out", "name": "$return" } ] }

I also included a quick test script in the root of the project:

import sys

import requests

url = "http://localhost:7071/webhook" if len(sys.argv) < 2 else sys.argv[1]
data = {"key": "value"}

resp = requests.get(url)
print(resp)

resp = requests.post(url, json=data)
print(resp)
print(resp.text)

If you run this script, you should see a 404 for the GET request (not implemented) and a 200 for the POST request. For remote testing I can run the script with the live webhook URL (see below).

Run the function locally

To verify that the function works locally run it with:

func start

Then run the test script:

$ python test.py
<Response [404]>
<Response [200]>
{"status":"ok","payload":{"key":"value"}}

Awesome!

Deploy it to Azure

Time to deploy the function to Azure:

(venv) √ fastapi_webhook (main) $ func azure functionapp publish pybitesWebhookSample
Getting site publishing info...
[2024-06-17T16:06:08.032Z] Starting the function app deployment...
Creating archive for current directory...
...
...
Remote build succeeded!
[2024-06-17T16:07:35.491Z] Syncing triggers...
Functions in pybitesWebhookSample:
    webhook - [httpTrigger]
        Invoke url: https://pybiteswebhooksample.azurewebsites.net/webhook

Let’s see if it also works on Azure:

$ python test.py https://pybiteswebhooksample.azurewebsites.net/webhook
<Response [404]>
<Response [200]>
{"status":"ok","payload":{"key":"value"}}

Cool, it works!

If something goes wrong remotely, you can check the logs:

az webapp log tail --name pybitesWebhookSample --resource-group myResourceGroup

And when you make changes, you can redeploy with:

func azure functionapp publish pybitesWebhookSample

That’s it, a FastAPI app running as an Azure Function! I hope this article helps you if you want to do the same. If you have any questions, feel free to reach out to me on X | Fosstodon | LinkedIn -> @bbelderbos.

Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update June 17

Open Source Initiative - Mon, 2024-06-17 12:52
Explaining the concept of Data information
  • After much debate regarding training data, @stefano published a summary of the positions expressed and some clarifications about the terminology included in draft v.0.0.8. You can read the rationale about it and share your thoughts on the forum. 
  • Initial thoughts:
    • @Senficon (Felix Reda) adds that while the discussion has highlighted the case for data information, it’s crucial to understand the implications of copyright law on AI, particularly concerning access to training data. Open Source software relies on a legal element (copyright licenses) and an access element (availability of source code). However, this framework does not seamlessly apply to AI, as different copyright regimes allow text and data mining (TDM) for AI training but not the redistribution of datasets. This discrepancy means that requiring the publication of training datasets would make Open Source AI models illegal, despite TDM exceptions that facilitate AI development. Also, public domain status is not consistent internationally, complicating the creation of legally publishable datasets. Consequently, a definition of Open Source AI that imposes releasing datasets would impede collaborative improvements and limit practical significance. Emphasizing data innovation can help maintain Open Source principles without legal pitfalls.
Concerns and feedback on anchoring on the Model Openness Framework
  • @amcasari expresses concern about the usability and neutrality of the “Model Openness Framework” (MOF) for identifying AI systems, suggesting it doesn’t align well with current industry practices and isn’t ready for practical application without further feedback and iteration.
  • @shujisado points out that the MOF’s classification of components doesn’t depend on the specific IP laws applied, but rather on a general legal framework, and highlights that Japan’s IP law system differs from the US and EU, yet finds discussions based on the OSD consistent.
  • @stefano emphasizes the importance of having well-thought-out, timeless principles in the Open Source AI Definition document, while viewing the Checklist as a more frequently updated working document. He also supports the call to see practical examples of the framework in use and proposes separating the Checklist from the main document to reduce confusion.
Initial Report on Definition Validation
  • Reviews of eleven different AI systems have been published. We do these reviews to check existing systems’ compatibility with our current definition. These are the systems in question: Arctic, BLOOM, Falcon, Grok, Llama 2, Mistral, OLMo, OpenCV, Phi-2, Pythia, and T5.
    • @mer has set up a review sheet for the Viking model upon request from @merlijn-sebrechts.
    • @anatta8538 asks if MLOps is considered within the topic of the Model Openness Framework and whether CLIP, an LMM, would be consistent with the OSAID.
    • @nick clarifies that the evaluation focuses on components as described in the Model Openness Framework, which includes development and deployment aspects but does not cover MLOps as a whole.
Why and how to certify Open Source AI
  • @Alek_Tarkowski agrees that certification of open-source AI will be crucial under the AI Act and highlights the importance of defining what constitutes an Open Source license. He points out the confusion surrounding terms like “free and open source license” and suggests that the issue of responsible AI licensing as a form of Open Source licensing needs resolution. Notes that some restrictive licenses are gaining traction and may need consideration for exemption from regulation, thus urging for a consensus.
Open Source AI Definition Town Hall – June 14, 2024

Slides and the recording of our previous townhall meeting can be found here.

Categories: FLOSS Research

ADCI Solutions: How to add a live chat to a website using Mercure

Planet Drupal - Mon, 2024-06-17 11:58
We are currently working on a platform for contacting influencers and media to promote products, services, and personal brands. As the platform was developing, the client felt the need to add live chat functionality so that platform clients could communicate with its administrators and copywriters. But it had to store its history on the client server.
Categories: FLOSS Project Planets

Real Python: Ruff: A Modern Python Linter for Error-Free and Maintainable Code

Planet Python - Mon, 2024-06-17 10:00

Linting is essential to writing clean and readable code that you can share with others. A linter, like Ruff, is a tool that analyzes your code and looks for errors, stylistic issues, and suspicious constructs. Linting allows you to address issues and improve your code quality before you commit your code and share it with others.

Ruff is a modern linter that’s extremely fast and has a simple interface, making it straightforward to use. It also aims to be a drop-in replacement for many other linting and formatting tools, such as Flake8, isort, and Black. It’s quickly becoming one of the most popular Python linters.

In this tutorial, you’ll learn how to:

  • Install Ruff
  • Check your Python code for errors
  • Automatically fix your linting errors
  • Use Ruff to format your code
  • Add optional configurations to supercharge your linting

To get the most from this tutorial, you should be familiar with virtual environments, installing third-party modules, and be comfortable with using the terminal.

Ruff cheat sheet: Click here to get access to a free Ruff cheat sheet that summarizes the main Ruff commands you’ll use in this tutorial.

Take the Quiz: Test your knowledge with our interactive “Ruff: A Modern Python Linter” quiz. You’ll receive a score upon completion to help you track your learning progress.

Installing Ruff

Now that you know why linting your code is important and how Ruff is a powerful tool for the job, it’s time to install it. Thankfully, Ruff works out of the box, so no complicated installation instructions or configurations are needed to start using it.

Assuming your project is already set up with a virtual environment, you can install Ruff in the following ways:

$ python -m pip install ruff

In addition to pip, you can also install Ruff with Homebrew if you’re on macOS or Linux:

$ brew install ruff

Conda users can install Ruff using conda-forge:

$ conda install -c conda-forge ruff

If you use Arch, Alpine, or openSUSE Linux, you can also use the official distribution repositories. You’ll find specific instructions on the Ruff installation page of the official documentation.

Additionally, if you’d like Ruff to be available for all your projects, you might want to install Ruff with pipx.

You can check that Ruff installed correctly by using the ruff version command:

$ ruff version
ruff 0.4.7

For the ruff command to appear in your PATH, you may need to close and reopen your terminal application or start a new terminal session.

Linting Your Python Code

While linting helps keep your code consistent and error-free, it doesn’t guarantee that your code will be bug-free. Finding the bugs in your code is best handled with a debugger and adequate testing, which won’t be covered in this tutorial. Coming up in the next sections, you’ll learn how to use Ruff to check for errors and speed up your workflow.

Checking for Errors

The code below is a simple script called one_ring.py. When you run it, it gets a random Lord of the Rings character name from a tuple and lets you know if that character bore the burden of the One Ring. This code has no real practical use and is just a bit of fun. Regardless of the size of your code base, the steps are going to be the same:
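To make that concrete before diving in, here is an assumed stand-in (not the article's actual one_ring.py) showing a typical issue a linter catches, an unused import:

import os  # never used below -- Ruff flags this as F401
import random

characters = ("Frodo", "Sam", "Gandalf", "Sauron")
print(f"{random.choice(characters)} may or may not have borne the One Ring.")

Running ruff check against a file like this should report the unused os import (rule F401), and ruff check --fix can remove it automatically.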

Read the full article at https://realpython.com/ruff-python/ »


Categories: FLOSS Project Planets

Real Python: Quiz: Ruff: A Modern Python Linter

Planet Python - Mon, 2024-06-17 08:00

In this quiz, you’ll test your understanding of Ruff, a modern linter for Python.

By working through this quiz, you’ll revisit why you’d want to use Ruff to check your Python code and how it automatically fixes errors, formats your code, and provides optional configurations to enhance your linting.


Categories: FLOSS Project Planets

Robin Wilson: Accessing Planetary Computer STAC files in DuckDB

Planet Python - Mon, 2024-06-17 05:36

Microsoft Planetary Computer is a wonderful archive of geospatial datasets (primarily raster images of various types), provided with a STAC catalog to enable them to be easily searched through an API. That’s fine for normal usage where you want to find a selection of images and access the images themselves, but less useful when you want to do some analysis of the metadata itself.

For that use case, Planetary Computer provide the STAC metadata in bulk, stored as GeoParquet files. Their documentation page explains how to use this with geopandas and do some large-ish scale processing with Dask. I wanted to try this with DuckDB – a newer tool that is excellent at accessing and processing Parquet files, including efficiently accessing files available via HTTP, and only downloading the relevant parts of the file. So, this post explains how I managed to do this – showing various different approaches I tried and how (or if!) each of them worked.

Getting URLs

Planetary Computer URLs are basically just URLs to files on Azure Blob Storage. However, the URLs require signing with a Shared Access Signature (SAS) key. Luckily, there is a Planetary Computer Python module that will use an API to generate the correct keys for us.

In the docs for using the STAC GeoParquet files, they give this code:

import planetary_computer
import pystac_client

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1/",
    modifier=planetary_computer.sign_inplace,
)
asset = catalog.get_collection("io-lulc-9-class").assets["geoparquet-items"]

This gets a collection, and the geoparquet-items collection-level asset, having used a ‘modifier’ to sign the URLs as they are acquired from the STAC API. The URL is then stored in asset.href.

Using the DuckDB CLI – with HTTP URLs

When using GeoPandas, the docs show that you can pass a storage_options parameter to the read_parquet method and it will ‘just work’:

df = geopandas.read_parquet(
    asset.href,
    storage_options=asset.extra_fields["table:storage_options"],
)

However, to use the file in DuckDB we need a full URL. The SQL code we’re going to use in DuckDB for a quick test looks like this:

SELECT * FROM <URL> LIMIT 1

This just gets the first row of whatever URL is given.

Unfortunately, though, the URL provided by the STAC catalog looks like this:

abfs://items/io-lulc-9-class.parquet

If we try that with DuckDB, we find it doesn’t know how to deal with it. So, we need to convert this to a standard HTTP URL that can be downloaded as if it were just typed into a web browser (or used with cURL or wget). After a bit of playing around, I found I could create a full URL with this Python code:

url = (
    f'https://{asset.extra_fields["table:storage_options"]["account_name"]}'
    f'.dfs.core.windows.net/{asset.href[7:]}'
    f'?{asset.extra_fields["table:storage_options"]["credential"]}'
)

That’s taking the account_name and sticking it in as part of the host part of the URL, then extracting the path from the original URL and adding that, and then finishing with the SAS token (stored as credential in the storage options) as a URL query parameter.

That results in a URL that looks like this (split over multiple lines for readability):

https://pcstacitems.dfs.core.windows.net/items/io-lulc-9-class.parquet ?st=2024-06-14T19%3A04%3A20Z&se=2024-06-15T19%3A49%3A20Z&sp=rl&sv=2024-05-04& sr=c&skoid=9c8ff44a-6a2c-4dfb-b298-1c9212f64d9a &sktid=72f988bf-86f1-41af-91ab-2d7cd011db47&skt=2024-06-15T09%3A00%3A14Z &ske=2024-06-22T09%3A00%3A14Z&sks=b &skv=2024-05-04 &sig=A0e%2BihbAagHmIZ%2Bg8gzH71TavRYQMZiHWJ/Uk9j0Who%3D

(note: that specific URL won’t work for you, as the SAS tokens are time-limited).

If we insert that into our DuckDB SQL statement then we find it works (click to enlarge):

Great! Now let’s try with a different collection on Planetary Computer – in this case the Sentinel 2 collection. We can make the URL in the same way as before, by changing how we define asset:

asset = catalog.get_collection("sentinel-2-l2a").assets["geoparquet-items"]

and then using the same query in DuckDB with the new URL. Unfortunately, this time we get an error:

Invalid Input Error: File '<URL>' too small to be a Parquet file

The problem here is that for larger collections, the STAC metadata is spread out over a series of GeoParquet files inside a folder, and the URL we’ve been given is just to the folder (even though it ends in .parquet). As we’re just using HTTP, there is no way to do things like list the files in a folder, so DuckDB has no way to find out what files are within the folder and start reading them. We need to find another way.

Using the DuckDB CLI – with the Azure extension

Conveniently, there is an Azure extension for DuckDB, and it lets you use URLs like 'abfss://⟨my_filesystem⟩/⟨path⟩/⟨my_file⟩.⟨parquet_or_csv⟩'. That’s a slightly different URL scheme to the one we’ve been given (abfss as opposed to abfs), but we can easily sort that.

Looking at the authentication docs though, it seems to require you to specify either a connection string, a service principal or using the ‘Azure Credential Chain’ (which, I think, works with the az command line and various environment variables that you may have set up). We don’t have any of those – they’re all a far broader scope than what we’ve got, which is just a SAS token for a specific file or folder. It looks like the Azure extension doesn’t support this, so we’ll have to look for another way.

Using the DuckDB Python library – with an Azure connector

As well as the command-line interface, DuckDB can also be used from Python. To set this up, just run pip install duckdb. While you’re at it, you might want to install pandas and the adlfs library for connecting to Azure Storage.

Using this is actually quite simple, first import the libraries:

import duckdb
from adlfs.spec import AzureBlobFileSystem

then set up the Azure connection:

a = AzureBlobFileSystem(account_name='pcstacitems', sas_token=asset.extra_fields["table:storage_options"]['credential'])

Note how here we’re using the sas_token parameter to provide a SAS token. We could use a different parameter here to provide a connection string or some other credential type. I couldn’t find much real-world use of this sas_token parameter when looking online – so this is probably the key ‘less well documented’ bit to take away from this article.

Continuing, we then connect to DuckDB and ‘register’ this Azure connection:

connection = duckdb.connect()
connection.register_filesystem(a)

From there, we can run a SQL query like this:

query = connection.sql(f"""
    SELECT * FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet' LIMIT 1;
""")
query.to_df()

The call to to_df at the end converts the result into a Pandas DataFrame for easy viewing and manipulation. The results are shown below (click to enlarge):

Note that we’re passing a URL of abfs://items/sentinel-2-l2a.parquet/*.parquet – this is the URL from the STAC catalog with /*.parquet added to the end to ensure DuckDB picks up the large number of Parquet files stored there. I’d have thought this would have worked without that, but if I miss that out I get an error saying:

TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'

I suspect this is something to do with how things are passed to the Azure connection that we registered with DuckDB, but I’m not entirely sure. If you do know, then please leave a comment!

A few fun queries

So, now we’ve got this working, what can we do? Well, we can – fairly efficiently – run queries across all the STAC metadata for Sentinel 2 images. I’ll just give a few examples of queries I’ve run below.

We can find out how many images there are in the collection in total:

SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet'

We can do the same for just the images acquired in 2020 – by using a fact we know about how the individual parquet files are named (from the Planetary Computer docs):

SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'

And we can start looking at how many scenes were captured with different amounts of cloud cover:

SELECT
    COUNT(*) / (SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'),
    CASE
        WHEN "eo:cloud_cover" between 0 and 5 then '0-5%'
        WHEN "eo:cloud_cover" between 5 and 10 then '5-10%'
        WHEN "eo:cloud_cover" between 10 and 15 then '10-15%'
    END AS category
FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'
GROUP BY category

This tells us that 20% of scenes had between 0 and 5% cloud cover (quite a high number, I thought – but then again, I am used to living in the UK!), and around 4-5% in each of the 5-10% and 10-15% categories.

There are plenty of other analyses that you could do with these Parquet files, of course. At some point I might even get around to the task which initially made me look into this: that is, trying to find Landsat and Sentinel 2 scenes that were acquired over the same location at very similar times. I think I’ll leave that for another day though…

Categories: FLOSS Project Planets

Greg Casamento: Keysight laid me off in January!

GNU Planet! - Mon, 2024-06-17 01:24
A little history first. Keysight is a large company that, primarily, makes testing equipment such as oscilloscopes and other electronics. They bought a company a few years back named TestPlant. Prior to that, TestPlant bought a company by the name of Redstone that produced a product known as Eggplant. Recently, I was laid off for economic reasons (at least that's what they said). It occurs to me that nothing in this world lasts forever. I was so depressed when I was let go because Keysight was the perfect home for me... they used GNUstep deeply. So, as you can imagine, I was deeply upset when things ended... but all things do. 
 I think it happened for several reasons: 
  • Economic - This is what was explained to me, but I am not sure I believe it 
  • Politics - I think this part is because I expressed my opinions HONESTLY about the direction of the company given that they wanted to make the application into a VSCode plugin.
  • Perception - I am 54 years old... so I think that they believed that Objective-C was my one and only talent, it's not... I know many other languages and have many other skills. 
Unfortunately, in the US, any employer can let go of any employee or contractor for ANY reason. This is known as at-will employment, making it very hard to take any action against any employer (not that this is something I considered).
Keysight is and will remain a major contributor to GNUstep.
That being said, I recently ran into something rather disturbing at another company.   I have been working with a company based out of New Mexico that is interested in space applications.  They have been using GNUstep and have been awaiting funding.
The lead of this effort expressed something during a meeting saying "We will work on the GNUstep side of this because there is no reason we should have to pay for any of this."   This hit a sour note with me to say the very least.   As it turns out he was under the mistaken impression that, because the work was on GNUstep, it was for free... which is WRONG.
I wonder if the same impression was present at Keysight or if other companies believe this.  The saying, according to RMS, is "Free as in freedom, not as in beer."   If you are a manager at a company who is under the mistaken impression that work on any Free Software or Open Source project is free when your product depends on it, please correct your thinking.   Just because it is someone's passion project does NOT mean that they are going to do that work for free and prioritize the things that need to be done for your organization.
All of that being said the positive sides are this:
  1. More time to code on GNUstep without interruption
  2. More time to work on my own projects
  3. Time to rest and relax
So, as much as I hate being unemployed there ARE some upsides to it.  Here's to hoping something works out soon.   I literally loved my job at Keysight and, honestly, hope to return.   I have my eye on their changes as well as those of others just like any other member of the community.  Yours, GC
Categories: FLOSS Project Planets

The Drop Times: Tech Talks, Brand Strategies, and the Future of Drupal: Highlights from The DropTimes

Planet Drupal - Mon, 2024-06-17 01:16

An enterprise web solution is inherently complex. Its endless integrations with various applications that are added or loaded off as per the changing requirements, multiple front-ends that provide varied and personalised digital experiences, the cultural nuances catered to in its l10n editions, the time-zone differences that are to be taken care of, etc., etc., are the most basic parts that add to the complexity.

Continuous investment, development and deployment are part and parcel of such a solution. Even so, it becomes amiable only if the complexity is masked out in the simplicity of the consumer touch points.

Two separate sets of consumers should be considered here. One is the enterprise consumers, the content creators and marketers, who are the immediate beneficiaries of the solution. The second, a more prominent set, is the customers to whom the enterprises cater.

Drupal, the platform for building a digital experience, invested heavily in satisfying the end customers. That was what Drupal agencies were doing all these years. They got an exoskeleton on which they built the end-user experiences. But what about the enterprise consumers themselves? Weren’t they the ones who ultimately decided to ditch or continue using the services? How good were their experiences sans the involvement of a hired developer?

The decision to simplify the touch points of enterprise consumers is a development and design decision. These content creators and marketers will benefit the most from the Starshot Initiative. It supports the marketing team by reducing the go-to-market time and eases the work of content creators by providing the most valuable tools out of the box. A complex solution is now draped in a simple cassock.

The first story I’m sharing today concerns Drupal’s new brand strategy and future directions. The Starshot Initiative manifests this strategy and the direction the project is taking. An interview with Shawn Perritt, Senior Director of Brand & Creative at Acquia, shines a light on this brand refresh. Read Shawn’s conversation with Alka Elizabeth, our sub-editor.

Montreal became the hub for tech talks in the past week. We conducted one-question interviews with three featured speakers at Evolve Drupal Montreal. Kazima Abbas, our sub-editor, spoke with Josh Koenig, co-founder of Pantheon, Joe Kwan, Manager of Web Development at the University of Waterloo, and the formidable Mike Herchel, Senior Front-end Developer at Agileana. Their answers form part of the feature published here.

Kazima also conducted an exclusive interview with BaddĂœ Sonja Breidert, CEO and Co-Founder of 1xINTERNET. The interview focused on the agency’s support for the Starshot Initiative based on its experience with ‘Try Drupal’, which implements a default Drupal deployment for enterprises. It explains why they did not wait or hesitate to offer all-out support for the new initiative.

Alka Elizabeth and Ben Peter Mathew, our community manager, jointly covered the Starshot Session on ‘Strategic Milestones in Product Definition’, led by Dries Buytaert and Cristina Chumillas on June 7. The report can be read here.

The next two stories are our signature stories, written by our founder and lead, Anoop John, CTO of Zyxware Technologies. From the start, Anoop was vocal about ‘DrupalCollab’, an idea he had been nurturing for years. As part of the initiative, he has published a list of the largest 500+ cities in the world with a sizable population of people tagged ‘Drupal developers’ on LinkedIn. You can read the analysis here. The second story published with this list, ‘On Using LinkedIn to Analyze the Size of the Drupal Community’, is an article on the methodology for this analysis using LinkedIn, associated errors, and justifications. Based on these stories, Anoop is trying to build momentum to restart local Drupal meetups in as many cities as possible. Also, read his LinkedIn article on the initiative here.

Mautic 5.1 Andromeda Edition was released last week with major updates. Read our report here. We published a quick review of the Drupal Quick Exit Module developed by Oomph Inc. Ben Peter Mathew spoke with Alyssa Varsanyi of Oomph to produce this report.

To ensure brevity, I shall list the rest of the important stories below without much ado.

That is all for the week, dear readers. I wish you all a very sacred Eid Ul Adha. Let the festival of sacrifice be a time to ponder the things we are ready to give up for the greater common good.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thanks and regards.

Sincerely,
Sebin A. Jacob,
Editor-in-Chief, The DropTimes

Categories: FLOSS Project Planets

GSoC'24 Okular | Week 2-3 Recap

Planet KDE - Mon, 2024-06-17 01:00
People of the Internet,

While working on keystroke events, I realized my improvements to the event.change property were still inconsistent for certain Unicode characters. This led me to delve into code units, code points, graphemes, and other cool Unicode concepts. I found this blog post to be very enlightening.
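To make the mismatch concrete, here’s a minimal Qt sketch (my own illustration, not Okular’s actual code): a single emoji occupies two UTF-16 code units in a QString but only one code point, which is exactly the difference that selStart and selEnd calculations have to account for.

#include <QDebug>
#include <QString>

int main()
{
    // "a" followed by U+1F600 (grinning face), built explicitly from code points.
    const char32_t codePoints[] = { U'a', U'\U0001F600' };
    const QString text = QString::fromUcs4(codePoints, 2);

    // QString::size() counts UTF-16 code units: the emoji needs a surrogate
    // pair, so it counts twice.
    qDebug() << "UTF-16 code units:" << text.size();    // 3

    // toUcs4() has one entry per code point, which is closer to what a user
    // perceives when a selection spans the emoji.
    qDebug() << "code points:" << text.toUcs4().size(); // 2
    return 0;
}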

Here’s an update on my progress over the past two weeks:

MRs merged:
  • event.change: The change property of the event object now correctly handles Unicode, with adjustments to selStart and selEnd calculations. !MR998
  • Cursor position and undo/redo fix: Fixed cursor position calculations to account for rejected input text and resolved merging issues with undo/redo commands in text fields. !MR1011
  • DocOpen Event implementation: Enabled document-level scripts to access the event object by implementing the DocOpen event. !MR1003
  • Executing validation events correctly: Fixed a bug where validation scripts wouldn’t run after KeystrokeCommit scripts in Okular. !MR999
  • Widget refresh functions for RadioButton, ListEdit and ComboEdit: Added refresh functions as slots for RadioButton, ListEdit, and ComboEdit widgets, aiding in reset functionality and script updates. !MR1012
  • Additional document actions in Poppler: Implemented reading additional document actions (CloseDocument, SaveDocumentStart, SaveDocumentFinish, PrintDocumentStart, PrintDocumentFinish) in the qt5 and qt6 frontends for Poppler. !MR1561 (in Poppler)
MRs currently under review:
  • Reset form implementation in qt6 frontend for Okular: Working on the reset form functionality in Okular, currently focusing on qt6 frontend details. !MR1564 (in Poppler)
  • Reset form in Okular: Using the Poppler API to reset forms within Okular. !MR1007
  • Fixing order of execution of events for text form fields: Addressing the incorrect execution order of certain events (e.g., calculation scripts) and ensuring keystroke commit, validation, and format scripts are evaluated correctly when setting fields via JavaScript. !MR1002

For the coming weeks, my focus will be on implementing reset forms, enhancing keystroking and formatting scripts, and possibly starting on submit forms. Let’s see how it goes.

See you next time. Cheers!

Categories: FLOSS Project Planets

Quansight Labs Blog: How Narwhals and scikit-lego came together to achieve dataframe-agnosticism

Planet Python - Sun, 2024-06-16 20:00
And how your Python library can become dataframe-agnostic too
Categories: FLOSS Project Planets

Report From KDEPIM Spring Sprint 2024

Planet KDE - Sun, 2024-06-16 10:04

Like last year, the KDEPIM team convened in Toulouse to hold its traditional spring sprint. Unlike last year, this time it was late spring, almost making it into summer. This time we were hosted by Étincelle Coworking in one of their spaces. It was fairly nice: lots of space, comfortable, well situated in the center. I definitely recommend it.

The Warm-up

Some of the team (namely Carl and Volker) arrived early before the official start. I’m pretty sure Carl made it early to have lunch in the infamous Cake Placeℱ (at last). Of course a stupid amount of cake slices was involved. And after a short stroll we got to the meeting space we would use the rest of the time.

We started preparing the work board and finished collecting ideas as Dan arrived.

Day 1

Once again, we had two types of tasks: discussions and actual hacking. During that first day we mostly focused on the conversations but did some hacking on the side. Since attendance was a bit low this year, we didn’t go for breakout groups and held all the discussions as a single group.

Said discussions covered mostly the following topics:

  • Retiring some old software or now unused resources
  • Trying to devise plans to remove some duplication of efforts (did you know we have several copies of the time tree parser?)
  • Some minor stability issues were discussed to have a plan to solve them
  • How to improve the usage of Akonadi on mobile, which revolved mostly around better partial syncing and some tight coupling with widgets in places
  • Devising a strategy to have some end to end testing of the supported resources
  • How to have better tag support and indexing in Akonadi (which opens the door to really cool features so stay tuned)
  • And finally, we spent quite some time discussing how to improve our little PR efforts and how to recruit new contributors

Of course, like last year, we’re constrained by the size of the team and so need to use everyone’s time smartly. Again, no grand plans, but some of the items we plan to deliver during the year could have very nice consequences. We’ll focus on a better quality of life for both users and contributors.

Day 2

We had already started some of the technical tasks the previous day, but Sunday was mostly devoted to them. We had a break for a nice brunch, of course, but people still worked on their branches around it.

As far as technical tasks, we got the following things done already:

  • Some obscure and broken feature around notes in KMail has been removed
  • KJots notes can now be imported into MarkNotes giving an upgrade path
  • Akonadi tag discovery in caldav resources is a thing
  • The Akonadi brand should finally not appear anywhere in the applications anymore
  • A way to clean up the data coming from retired resources is now in place

But there’s more in the works, namely:

  • the completion of the infrastructure for QtQuick based config dialogs
  • the switch to KConfigDialogManager where it makes sense (instead of using an old PIM-specific legacy system)
  • the rewrite of the indexing system (which should improve performance and have instant indexing of payloads)
  • the migration agent to apply tag discovery also to old and already existing items
  • the move of KMime to KDE Frameworks is getting prepared
  • the IMAP resource configuration system is in the works to be able to jump on the QtQuick infrastructure (it’s one of the more complex ones with lots of QtWidgets dependencies)
  • the KDAV test suite is being completed for better coverage

The mentioned tasks and all the ones created during the conversations have been added to our Technical Roadmap on Gitlab. The work is still ongoing for some of them.

What now?

The most noticeable change is probably the one thing we acted on immediately after our discussions: we added milestones in the PIM group. Indeed, it’s probably not always easy for people wanting to join to know what the bigger ongoing goals in the KDEPIM team are and what to work on to help.

This is now encoded through those milestones. It’s early days for them, so they need to be improved and more tasks added, but at least this gives an idea of what has higher priority for now.

We plan to have a BoF at Akademy 2024, to be announced later when the schedule for the BoFs actually opens. There we plan to focus on discussing the experience of contributors and see how it can be improved. We’ll also present our milestones. We might sign you up for tasks there, so don’t hesitate to show up; we want to hear from you.

Last but not least

If you want to know more or engage with us, please join the KDEPIM matrix channel! Let’s chat further.

Also, I’d like to thank Étincelle Coworking and KDE e.V. again for making this event possible. This wouldn’t be possible without a venue and without at least partial support with the travel expenses.

Finally, if you like such meetings to happen in the future so that we can push forward your favorite software, please consider making a tax-deductible donation to the KDE e.V. foundation.

Categories: FLOSS Project Planets

KDE PIM Sprint 2024 edition

Planet KDE - Sun, 2024-06-16 06:10

This year I again participated in the KDE PIM Sprint in Toulouse. As always, it was really great to meet other KDE contributors and to work together for one weekend. And as you might have seen on my Mastodon account, a lot of food was also involved.

Day 1 (Friday Afternoon)

We started our sprint on Thursday with a lunch at the legendary cake place, which I missed last year due to my late arrival.

Picture of some delicious cakes: a piece of raspberry and basil cheesecake, a piece of lemon tart with meringue, and a piece of carrot cake

We then went to the coworking space where we would spend the rest of this sprint and started defining tasks to work on and putting them on a real kanban board.

A kanban board with tasks to discuss and to implement

To get a good summary of the specific topics we discussed, I invite you to consult Kevin’s blog.

That day, aside from the high-level discussions, I ported the IMAP authentication mechanism for Outlook accounts away from the KWallet API to the more generic QtKeychain API. I also removed a large dependency from libkleo (the KDE library to interact with GPG).
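For anyone curious what the QtKeychain side looks like, here is a minimal, self-contained sketch of storing a secret with a WritePasswordJob. It is only an illustration: the service name, key and header path are assumptions, not the actual resource code.

#include <qtkeychain/keychain.h> // header location can vary between distributions
#include <QCoreApplication>
#include <QDebug>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // Store a secret under a service/key pair; both names are purely illustrative.
    auto *job = new QKeychain::WritePasswordJob(QStringLiteral("example-imap-resource"));
    job->setAutoDelete(true);
    job->setKey(QStringLiteral("user@example.com"));
    job->setTextData(QStringLiteral("secret-token"));

    QObject::connect(job, &QKeychain::Job::finished, &app, [&app](QKeychain::Job *j) {
        if (j->error()) {
            qWarning() << "Storing the secret failed:" << j->errorString();
        }
        app.quit();
    });
    job->start();

    return app.exec();
}

Reading a secret follows the same pattern with ReadPasswordJob, and the same code then runs on top of KWallet, libsecret or the platform keychains, which is the point of the port.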

Day 2 (Saturday)

On the second day, we were greeted by a wonderful breakfast (thanks Kevin).

Picture of croissant, brioche and chocolatine

I worked on moving EventViews (the library that renders the calendar in KOrganizer) and IncidenceEditor (the library that provides the event/todo editor in KOrganizer) into KOrganizer itself. This will reduce the number of libraries in PIM.

For lunch, we ended up eating at the excellent Mexican restaurant next to the location of the previous sprint.

Mexican food

I also worked on removing the “Add note” functionality in KMail. This feature allowed storing notes attached to emails following RFC 5257. Unfortunately, this RFC never left the EXPERIMENTAL state, so these notes were only stored in Akonadi and not synchronized with any services.

This allowed removing the relevant widget from the pimcommon library as well as the Akonadi attribute.

I also started removing another type of notes: the KNotes app, which provided sticky notes. This application was no longer maintained and didn’t work so well with Wayland. If you were using KNotes, to make sure you don’t lose your notes, I added support in Marknote for importing notes from KNotes.

Marknote with the context menu to import notes

Finally, I worked on removing visible Akonadi branding from some KDE PIM applications. The branding was usually only visible when an issue occurred, which didn’t help Akonadi’s reputation.

We ended up working quite late and ordering pizzas. I personally got one with a lot of cheese (but no photo this time).

Day 3 (Sunday)

On the final day, we didn’t have any breakfast :( but instead had a wonderful brunch.

Picture of the brunch

Aside from eating, I started writing a plugin system for the MimeTreeParser, which powers the email viewer in Merkuro and in Kleopatra. In the short term, I want to add Itinerary integration in Merkuro, but in the longer term the goal is to bring this email viewer to feature parity with the one from KMail and then replace the KMail email viewer with the one from Merkuro. Aside from removing duplicate code, this will improve security, since the individual email parts are isolated from each other, and it will make it easier for the email viewer to follow KDE styling, as it is just normal QML instead of fake HTML components.

I also merged and rebased some WIP merge requests in Marknote in preparation for an upcoming release and reviewed merge requests from the others.

Last but not least

If you want to know more or engage with us, please join the KDE PIM and the Merkuro matrix channels! Let’s chat further.

Also, I’d like to thank Étincelle Coworking and KDE e.V. again for making this event possible. This wouldn’t be possible without a venue and without at least partial support with the travel expenses.

Finally, if you like such meetings to happen in the future so that we can push forward your favorite software, please consider making a tax-deductible donation to the KDE e.V. foundation.

Categories: FLOSS Project Planets

Ned Batchelder: Math factoid of the day: 62

Planet Python - Sun, 2024-06-16 06:00

There are two Archimedean solids with 62 faces:

rhombicosidodecahedron and truncated icosidodecahedron

They both have 62 faces because of their roots in the dodecahedron and icosahedron. They have a face for each of the faces, vertices, and edges of either of those polyhedra: 12 + 20 + 30 = 62.
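A quick way to see why the same total appears either way is Euler’s formula:

    V − E + F = 2
    dodecahedron: 20 − 30 + 12 = 2
    icosahedron:  12 − 30 + 20 = 2

so whichever parent you start from, the derived solid’s face count is F + V + E = 12 + 20 + 30 = 62.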

The rhombicosidodecahedron shows up in more places than you might expect for a complex polyhedron. The Zometool building kit uses it for hubs, though the squares are adjusted to rectangles:

Recently one also showed up on Kickstarter: Glint, a rhombicosidodecahedron machined in solid brass. I have one. It’s satisfyingly heavy and precise. Here it is with its little Zome sister:

The rhombicosidodecahedron is very pleasing: nearly round, but with simple polygonal faces roughly equal in size. Who’d have thought a number as seemingly uninteresting as 62 could appear in such serendipitous places?

Categories: FLOSS Project Planets
