Feeds

Real Python: Rounding Numbers in Python

Planet Python - Tue, 2024-06-18 10:00

With many businesses turning to Python’s powerful data science ecosystem to analyze their data, understanding how to avoid introducing bias into datasets is absolutely vital. If you’ve studied some statistics, then you’re probably familiar with terms like reporting bias, selection bias, and sampling bias. There’s another type of bias that plays an important role when you’re dealing with numeric data: rounding bias.

Understanding how rounding works in Python can help you avoid biasing your dataset. This is an important skill. After all, drawing conclusions from biased data can lead to costly mistakes.
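As a quick illustration of why the strategy matters (a small sketch, not taken from the course itself), Python's built-in round() uses the "round half to even" strategy, which is designed to reduce exactly this kind of bias compared with always rounding halves up:

import statistics

# Python's built-in round() sends ties to the nearest even integer
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2

# Always rounding ties up would systematically inflate an average;
# rounding ties to even spreads the error in both directions.
values = [0.5, 1.5, 2.5, 3.5]
print(statistics.mean(round(v) for v in values))  # 2.0, same as the unrounded mean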

In this video course, you’ll learn:

  • Why the way you round numbers is important
  • How to round a number according to various rounding strategies
  • How to implement each strategy in pure Python
  • How rounding affects data and which rounding strategy minimizes this effect
  • How to round numbers in NumPy arrays and pandas DataFrames
  • When to apply different rounding strategies


Categories: FLOSS Project Planets

Mike Driscoll: How to Publish a Python Package to PyPI

Planet Python - Tue, 2024-06-18 09:01

Do you have a Python package that you’d like to share with the world? You should publish it on the Python Package Index (PyPI). The vast majority of Python packages are published there. The PyPI team has also created extensive documentation to help you on your packaging journey. This article does not aim to replace that documentation. Instead, it is just a shorter version of it using ObjectListView as an example.

The ObjectListView project for Python is based on the C# ObjectListView, a wrapper around the .NET ListView, but written for wxPython. You use ObjectListView as a replacement for wx.ListCtrl because its methods and attributes are simpler. Unfortunately, the original implementation died out in 2015, while a fork, ObjectListView2, died in 2019. In this article, you will learn how I forked it again, created ObjectListView3, and packaged it up for PyPI.

Creating a Package Structure

When you create a Python package, you must follow a certain type of directory structure to build everything correctly. For example, your package files should go inside a folder named src. Within that folder, you will have your package sub-folder that contains something like an __init__.py and your other Python files.

The src folder’s contents will look something like this:

package_tutorial/
|-- src/
    |-- your_amazing_package/
        |-- __init__.py
        |-- example.py

The __init__.py file can be empty. You use that file to tell Python that the folder is a package and is importable. But wait! There’s more to creating a package than just your Python files!

You also should include the following:

  • A license file
  • The pyproject.toml file which is used for configuring your package when you build it, among other things
  • A README.md file to describe the package, how to install it, example usage, etc
  • A tests folder, if you have any (and you should!)

Go ahead and create these files and the tests folder. It’s okay if they are all blank right now. At this point, your folder structure will look like this:

package_tutorial/
|-- LICENSE
|-- pyproject.toml
|-- README.md
|-- src/
    |-- your_amazing_package/
        |-- __init__.py
        |-- example.py
|-- tests/

Picking a Build Backend

The packaging tutorial mentions that you can choose various build backends for when you create your package. There are examples for the following:

  • Hatchling
  • setuptools
  • Flit
  • PDM

You add this build information in your pyproject.toml file. Here is an example you might use if you picked setuptools:

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

This section in your config is used by pip or build. These tools don’t actually do the heavy lifting of converting your source code into a wheel or other distribution package. That is handled by the build backend. You don’t have to add this section to your pyproject.toml file though. You will find that pip will default to setuptools if there isn’t anything listed.

Configuring Metadata

All the metadata for your package should go into a pyproject.toml file. In the case of ObjectListView, it didn’t have one of these files at all.

Here’s the generic example that the Packaging documentation gives:

[project] name = "example_package_YOUR_USERNAME_HERE" version = "0.0.1" authors = [ { name="Example Author", email="author@example.com" }, ] description = "A small example package" readme = "README.md" requires-python = ">=3.8" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ] [project.urls] Homepage = "https://github.com/pypa/sampleproject" Issues = "https://github.com/pypa/sampleproject/issues"

Using this as a template, I created the following for the ObjectListView package:

[project] name = "ObjectListView3" version = "1.3.4" authors = [ { name="Mike Driscoll", email="mike@somewhere.org" }, ] description = "An ObjectListView is a wrapper around the wx.ListCtrl that makes the list control easier to use." readme = "README.md" requires-python = ">=3.9" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ] [project.urls] Homepage = "https://github.com/driscollis/ObjectListView3" Issues = "https://github.com/driscollis/ObjectListView3/issues"

Now let’s go over what the various parts are in the above metadata:

  • name – The distribution name of your package. Make sure your name is unique!
  • version – The package version
  • authors – One or more authors including emails for each
  • description – A short, one-sentence summary of your package
  • readme – A path to the file that contains a detailed description of the package. You may use a Markdown file here.
  • requires-python – Tells which Python versions are supported by this package
  • classifiers – Gives the package index and pip some additional metadata about your package
  • URLs – Extra links that you want to show on PyPI. In general, you would want to link to the source, documentation, issue trackers, etc.

You can specify other information in your TOML file if you’d like. For full details, see the pyproject.toml guide.
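For example, runtime dependencies are declared in the same [project] table. The snippet below is only a sketch: the dependency name and version pin are placeholders for illustration, not the actual requirements of ObjectListView3:

[project]
# ...name, version, authors, and the other metadata shown above...
dependencies = [
    "wxPython>=4.0",  # placeholder; list whatever your package actually imports
]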

READMEs and Licenses

The README file is almost always a Markdown file now. Take a look at other popular Python packages to see what you should include. Here are some recommended items:

  • How to install the package
  • Basic usage examples
  • Link to a guide or tutorial
  • An FAQ if you have one

There are many licenses out there. Don't take advice from just anyone; consult a lawyer on this unless you know a lot about the topic. However, you can look at the licenses for other popular packages and use their license or one similar.

Generating a Package

Now that you have the files you need, you are ready to generate your package. You will need to make sure you have PyPA’s build installed.

Here’s the command you’ll need for that:

python3 -m pip install --upgrade build

Next, you’ll want to run the following command from the same directory that your pyproject.toml file is in:

python3 -m build

You’ll see a bunch of output from that command. When it’s done, you will have a dist folder containing a wheel (*.whl) file and a gzipped tarball. The tarball is a source distribution, while the wheel file is called a built distribution. When you use pip, it will try to find the built distribution first, but pip will fall back to the source distribution if necessary.

Uploading / Publishing to PyPI

You now have the files you need to share your package with the world on the Python Package Index (PyPI). However, you need to register an account on TestPyPI first. TestPyPI is a separate package index intended for testing and experimentation, which is perfect when you have never published a package before. To register an account, go to https://test.pypi.org/account/register/ and complete the steps on that page. It will require you to verify your email address, but other than that, it’s a very straightforward signup.

To securely upload your project, you’ll need a PyPI API token. Create one from your account and make sure to set the “Scope” to the “Entire account”. Don’t close the page until you have copied and saved your token or you’ll need to generate another one!

The last step before publishing is to install twine, a tool for uploading packages to PyPI. Here’s the command you’ll need to run in your terminal to get twine:

python3 -m pip install --upgrade twine

Once that has finished installing, you can run twine to upload your files. Make sure you run this command in your package folder where the new dist folder is:

python3 -m twine upload --repository testpypi dist/*

You will see a prompt for your TestPyPI username and/or a password. The password is the API token you saved earlier. The directions in the documentation state that you should use __token__ as your username, but I don’t think it even asked for a username when I ran this command. I believe it only needed the API token itself.

After the command is complete, you will see some text stating which files were uploaded. You can view your package at https://test.pypi.org/project/example_package_name

To verify you can install your new package, run this command:

python3 -m pip install --index-url https://test.pypi.org/simple/ --no-deps example-package-name

If you specify the correct name, the package should be downloaded and installed. TestPyPI is not meant for permanent storage, though, so it will likely delete your package eventually to conserve space.

When you have thoroughly tested your package, you’ll want to upload it to the real PyPI site. Here are the steps you’ll need to follow:

  • Pick a memorable and unique package name
  • Register an account at https://pypi.org. You’ll go through the same steps as you did on TestPyPI
    • Be sure to verify your email
    • Create an API key and save it on your machine or in a password vault
  • Use twine to upload the dist folder like this:  python -m twine upload dist/*
  • Install the package from the real PyPI as you normally would

That’s it! You’ve just published your first package!

Wrapping Up

Creating a Python package takes time and thought. You want your package to be easy to install and easy to understand. Be sure to spend enough time on your README and other documentation to make using your package easy and fun. It’s not good to look up a package and not know which versions of Python it supports or how to install it. Make the process easy by uploading your package to the Python Package Index.

The post How to Publish a Python Package to PyPI appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

The Drop Times: Christina Lockhart on Elevating Drupal: The Essential Role of Women in Tech

Planet Drupal - Tue, 2024-06-18 08:36
Discover how women are transforming the Drupal community and driving innovation in technology. In this interview, Christina Lockhart, Digital Marketing Manager at the Drupal Association, shares her journey, her key responsibilities, and the vital role women play in advancing Drupal. Learn about her strategies for promoting DrupalCon, her insights on overcoming barriers for women in tech, and the importance of empowering female leaders in the industry.
Categories: FLOSS Project Planets

DrupalEasy: Two very different European Drupal events in one week

Planet Drupal - Tue, 2024-06-18 08:18

I was fortunate enough to attend two very different European Drupal events recently, and wanted to take a few minutes to share my experience as well as thank the organizers for their dedication to the Drupal community.

The events couldn't have been more different - one being a first-time event in a town I've never heard of and the other celebrating their 20th anniversary in a city that can be considered the crossroads of the Netherlands.

DrupalCamp Cemaes

First off - it is pronounced KEM-ice (I didn't learn this until I actually arrived). Cemaes is a small village at the very northern tip of Wales with a population of 1,357. Luckily, one of those 1,357 people is Jennifer Tehan, an independent Drupal developer who decided to organize a Drupal event in a town that (I can only assume) has more sheep than people. 

When this event was first announced a few months back, I immediately knew that this was something I wanted to attend - mostly because I've always wanted to visit Wales to do some hiking ("walking," if you're not from the United States). I used the camp as an excuse for my wife and me to take a vacation and explore northern Wales. I could write an entire other blog post on how amazeballs the walking was.


DrupalCamp Cemaes was never going to be a large event, and it turns out there wound up being only nine of us (ten if you count Frankie, pictured below). The advantage of such a small event was easy to see - we all got to know each other quite well. There's something really nice - less intimidating - about small Drupal events that I realized I've really missed since the decline of Drupal meetups in many localities.

The event itself was an unconference, with different people presenting on Drupal contributions (James Shields), GitHub Codespaces (Rachel Lawson), remote work (Jennifer Tehan), the LocalGov distribution (Andy Broomfield), and Starshot (myself.)

The setting of the event couldn't have been more lovely - an idyllic seaside community where everything was within walking distance, including the village hall where the event took place. If Jennifer decides to organize DrupalCamp Cemaes 2025, I'm pretty sure Gwendolyn and I will be there.

Read a brief wrap-up of DrupalCamp Cemaes from Jennifer Tehan. 

This camp was proof that anyone, anywhere can organize a successful Drupal event with a minimum of fuss. 

Drupaljam

Four days later, I was in Utrecht, Netherlands, at the 20th anniversary of Drupaljam, the annual main Dutch Drupal event. 

I had previously attended 2011 Drupaljam and recall two things about that event: the building it took place in seemed like it was from the future, and many of the sessions were in Dutch.

This was a very different event from the one I had been at a few days earlier, in a city of almost 400,000 people with over 300 attendees (I was told this number anecdotally) in one of the coolest event venues I've ever been to. Defabrique is a renovated linseed oil and compound feed factory that is a bit over 100 years old and is (IMHO) absolutely perfect for a Drupal event. Each and every public space has character, yet is also modern enough to support open-source enthusiasts. On top of all that, the food was amazing.

Attending Drupal Jam was a bit of a last-minute decision for me, but I'm really glad that I made the time for it. I was able to reconnect with members of the European Drupal community that I don't often see, and make new connections with more folks than I can count.

I spent the day the way I spend most of my time at Drupal events - alternating between networking and attending sessions that attracted me. There were a number of really interesting AI-related sessions that I took in; it's pretty obvious to me that the Drupal community is approaching AI integrations in an unsurprisingly thoughtful manner.

The last week reinforced to me how fortunate I am to be able to attend so many in-person Drupal events. The two events I was able to participate in couldn't have been more different in scale and scope, but don't ask me to choose my favorite, because I'm honestly not sure I could!
 

Categories: FLOSS Project Planets

Real Python: Quiz: Build a Guitar Synthesizer

Planet Python - Tue, 2024-06-18 08:00

In this quiz, you’ll test your understanding of what it takes to build a guitar synthesizer in Python. By working through this quiz, you’ll revisit a few key concepts from music theory and sound synthesis.


Categories: FLOSS Project Planets

CodersLegacy: Adding Data Files in Nuitka (Python Files, Images, etc)

Planet Python - Tue, 2024-06-18 06:09

Nuitka is a Python-to-C compiler that converts Python code into executable binaries. While Nuitka efficiently compiles Python scripts, incorporating data files such as images, audio, video, and additional Python files can be a little tricky. This guide outlines the steps to include various data files in a Nuitka standalone executable.


Basic Setup for Nuitka

Before delving into the inclusion of data files, ensure you have Nuitka installed. You can install Nuitka via pip:

pip install nuitka

For a more detailed starter guide on Nuitka and its various options, you can follow this link to our main Nuitka Tutorial. If you are already familiar with Nuitka, then proceed ahead with this tutorial to learn how to add data files to our Nuitka EXE.


Compiling a Python Script with Nuitka

Assuming you have a Python script named main.py, the basic command to compile it into a standalone executable using Nuitka is:

python -m nuitka --standalone main.py

This command creates a standalone directory containing the executable and all necessary dependencies. However, if your program relies on external data files (images, audio, video, other Python files), you need to explicitly include these files.


Adding Data Files in Nuitka

1. Adding Individual Files

To include individual files such as images, use the --include-data-files option. The syntax is:

--include-data-files=<source-location>=<target-location>

For example, to include an image file favicon.ico, the command is:

--include-data-files=./favicon.ico=favicon.ico

This command tells Nuitka to include the favicon.ico file from the current directory and place it in the same location within the output directory.


2. Adding Directories

To include entire directories, use the --include-data-dir option. The syntax is the same as earlier:

--include-data-dir=<source-location>=<target-location>

For instance, to include an entire directory named Images, the command is:

--include-data-dir=./Images=Images

This will copy the Images directory from the current location to the output directory, preserving its structure.
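Once the files are bundled, your program still has to find them at runtime. A common pattern (shown here as a minimal sketch, assuming the data files end up next to your compiled main module as in the examples above) is to build paths relative to the module's own location rather than the current working directory:

import os

# Folder the (compiled) main module lives in; with --standalone this is the
# distribution folder that the included data files were copied into.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

icon_path = os.path.join(BASE_DIR, "favicon.ico")
images_dir = os.path.join(BASE_DIR, "Images")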


Example: Adding Data Files of Multiple Types

Suppose you have a project with the following files and directories:

  • main.py: The main Python script.
  • favicon.ico: An icon file.
  • Images/: A directory containing images.
  • audio.mp3: An audio file.
  • videos/: A directory containing video files.
  • utils.py: An additional Python file used in the project.

To compile this project with all necessary data files included, the command would be:

python -m nuitka --standalone --include-data-files=./favicon.ico=favicon.ico --include-data-files=./audio.mp3=audio.mp3 --include-data-dir=./Images=Images --include-data-dir=./videos=videos --include-data-files=./utils.py=utils.py --disable-console main.py

Pretty confusing, isn’t it?

Having to write out this command over and over again each time you want to re-build your executable is a massive pain. Instead, I recommend you use Nuitka Configuration Files, which allow you to write out all your dependencies, data files, and imports in a single file, which can then be executed as many times as you like.


Final Word

By using the --include-data-files and --include-data-dir options, you can include all necessary data files in your Nuitka-compiled executable. This ensures that your standalone application has all the resources it needs to run successfully, whether they are images, audio, video files, or additional Python scripts. Combining these options with plugins and other configurations allows for the creation of robust and self-sufficient executables.

Happy coding!

This marks the end of the Adding Data Files in Nuitka Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome, as are questions regarding the tutorial content.

The post Adding Data Files in Nuitka (Python Files, Images, etc) appeared first on CodersLegacy.

Categories: FLOSS Project Planets

Python Bytes: #388 Don't delete all the repos

Planet Python - Tue, 2024-06-18 04:00
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong>PSF Elections coming up</strong></li> <li><a href="https://www.bleepingcomputer.com/news/security/cloud-engineer-gets-2-years-for-wiping-ex-employers-code-repos/">Cloud engineer gets 2 years for wiping ex-employer’s code repos</a></li> <li><a href="https://adamj.eu/tech/2024/06/17/python-import-by-string/"><strong>Python: Import by string with pkgutil.resolve_name()</strong></a></li> <li><a href="https://x.com/__AlexMonahan__/status/1801435781380325448"><strong>DuckDB goes 1.0</strong></a></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=yweZO_BiYfw' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="388">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by ScoutAPM: <a href="https://pythonbytes.fm/scout"><strong>pythonbytes.fm/scout</strong></a></p> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy"><strong>@mkennedy@fosstodon.org</strong></a></li> <li>Brian: <a href="https://fosstodon.org/@brianokken"><strong>@brianokken@fosstodon.org</strong></a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes"><strong>@pythonbytes@fosstodon.org</strong></a></li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually Tuesdays at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it. </p> <p><strong>Brian #1:</strong> <strong>PSF Elections coming up</strong></p> <ul> <li>This is elections for the PSF Board and for 3 bylaw changes.</li> <li>To vote in the PSF election, you need to be a Supporting, Managing, Contributing, or Fellow member of the PSF, 
</li> <li>And affirm your voting status by June 25.</li> <li>See <a href="https://pyfound.blogspot.com/2024/06/affirm-your-psf-membership-voting-status.html?utm_source=pocket_shared">Affirm your PSF Membership Voting Status</a> for more details.</li> <li>Timeline <ul> <li>Board Nominations open: Tuesday, June 11th, 2:00 pm UTC</li> <li>Board Nominations close: Tuesday, June 25th, 2:00 pm UTC</li> <li>Voter application cut-off date: Tuesday, June 25th, 2:00 pm UTC <ul> <li>same date is also for <a href="https://psfmember.org/user-information">voter affirmation</a>.</li> </ul></li> <li>Announce candidates: Thursday, June 27th</li> <li>Voting start date: Tuesday, July 2nd, 2:00 pm UTC</li> <li>Voting end date: Tuesday, July 16th, 2:00 pm UTC </li> </ul></li> <li>See also <a href="https://pyfound.blogspot.com/2024/05/blog-post.html">Thinking about running for the Python Software Foundation Board of Directors? Let’s talk!</a> <ul> <li>There’s still one upcoming office hours session on June 18th, 12 PM UTC</li> </ul></li> <li>And <a href="https://pyfound.blogspot.com/2024/06/for-your-consideration-proposed-bylaws.html?utm_source=pocket_shared">For your consideration: Proposed bylaws changes to improve our membership experience</a> <ul> <li>3 proposed bylaws changes</li> </ul></li> </ul> <p><strong>Michael #2:</strong> <a href="https://www.bleepingcomputer.com/news/security/cloud-engineer-gets-2-years-for-wiping-ex-employers-code-repos/">Cloud engineer gets 2 years for wiping ex-employer’s code repos</a></p> <ul> <li>Miklos Daniel Brody, a cloud engineer, was sentenced to two years in prison and a restitution of $529,000 for wiping the code repositories of his former employer in retaliation for being fired.</li> <li>The <a href="https://www.documentcloud.org/documents/24215622-united-states-v-brody?responsive=1&title=1">court documents</a> state that Brody's employment was terminated after he violated company policies by connecting a USB drive.</li> </ul> <p><strong>Brian #3:</strong> <a href="https://adamj.eu/tech/2024/06/17/python-import-by-string/"><strong>Python: Import by string with pkgutil.resolve_name()</strong></a></p> <ul> <li>Adam Johnson</li> <li>You can use pkgutil.resolve_name("[HTML_REMOVED]:[HTML_REMOVED]")to import classes, functions or modules using strings. </li> <li>You can also use importlib.import_module("[HTML_REMOVED]") </li> <li>Both of these techniques are so that you have an object imported, but the end thing isn’t imported into the local namespace. </li> </ul> <p><strong>Michael #4:</strong> <a href="https://x.com/__AlexMonahan__/status/1801435781380325448"><strong>DuckDB goes 1.0</strong></a></p> <ul> <li>via Alex Monahan</li> <li>The cloud hosted product <a href="https://x.com/motherduck">@MotherDuck</a> also opened up General Availability</li> <li>Codenamed "Snow Duck"</li> <li>The core theme of the 1.0.0 release is stability. </li> </ul> <p><strong>Extras</strong> </p> <p>Brian:</p> <ul> <li>Sending us topics. Please send before Tuesday. But any time is welcome.</li> <li><a href="https://blog.scientific-python.org/numpy/numpy2/">NumPy 2.0 </a></li> <li><a href="https://htmx.org/posts/2024-06-17-htmx-2-0-0-is-released/">htmx 2.0.0</a></li> </ul> <p>Michael:</p> <ul> <li>Get 6 months of PyCharm Pro for free. Just take a course (even a free one) at <a href="https://training.talkpython.fm">Talk Python Training</a>. 
Then visit your account page &gt; details tab and have fun.</li> <li><a href="https://x.com/TalkPython/status/1803098742515732679">Coming soon at Talk Python</a>: Shiny for Python</li> </ul> <p><strong>Joke:</strong> <a href="https://devhumor.com/media/gitignore-thoughts-won-t-let-me-sleep">.gitignore thoughts won't let me sleep</a></p>
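As a quick illustration of the pkgutil.resolve_name() item above (a minimal sketch using a standard-library target, not part of the original show notes):

import pkgutil

# Resolve a "module:object" string into the actual object without binding
# the module name in the local namespace.
sqrt = pkgutil.resolve_name("math:sqrt")
print(sqrt(16))  # 4.0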
Categories: FLOSS Project Planets

Specbee: Getting started with integrating Drupal and Tailwind CSS

Planet Drupal - Tue, 2024-06-18 02:57
If you’re tired of spending hours tweaking CSS stylesheets to get your website looking just right, then you should consider a new way of CSS’ing (yeah, we just made it up). We’re talking about Tailwind CSS - a different way of doing CSS. It uses a utility-first approach, which means instead of creating a class for a component, let’s say a button, you would use a pre-defined class that’s built to style the button, like a “font-bold” or a “text-white”. All of this right from the comfort of your HTML! Couple Tailwind CSS, a sleek, utility-based open-source CSS framework, with a modular, highly customizable, and modern CMS like Drupal 10, and you have a powerhouse combination! In this article, we’ll be talking about Drupal and Tailwind CSS and how you can integrate them seamlessly to build modern, responsive, and visually stunning websites.

Why Drupal is a good choice for web development

Drupal is an open-source content management system (CMS) that has been gaining popularity among web developers for its versatility and powerful features. The latest version, Drupal 10, brings a host of new features and improvements meant to simplify development and enhance user experience.

1. Highly customizable design: One of the biggest advantages of using Drupal is its highly customizable design. With a wide range of themes and templates available, developers have the freedom to create unique and visually appealing websites that cater to their client's specific needs.

2. Flexibility and scalability: Drupal's modular architecture makes it incredibly flexible and scalable, making it suitable for building websites of any size or complexity. It offers a vast library of modules that can be easily integrated into your website, providing additional functionality such as e-commerce capabilities, social media integration, multilingual support, and more.

3. Robust security measures: Security is a major concern in today's times; hence it is crucial to choose a CMS that prioritizes it. Drupal has always been recognized as one of the most secure CMS platforms due to its stringent security standards and frequent updates that address any potential vulnerabilities promptly.

4. SEO-friendly: Search Engine Optimization (SEO) is vital for every website's success as it determines its visibility on search engines like Google. Drupal's built-in SEO tools assist developers in optimizing their websites by generating search engine-friendly URLs, meta tags, and sitemaps, and providing customizable page titles and descriptions.

5. Active community support: Drupal benefits from a substantial and engaged group of developers who contribute to its ongoing evolution and offer support to fellow users. This community-driven approach ensures that any issues or bugs are addressed promptly, and new updates and features are released regularly.

Why Tailwind CSS is great for web developers

Tailwind CSS has gained immense popularity in the web development community due to its numerous benefits and advantages.

1. Highly customizable: One of the biggest advantages of Tailwind CSS is its high level of customization. Unlike other popular front-end frameworks like Bootstrap or Foundation, Tailwind does not come with predefined styles and components. Instead, it provides a comprehensive set of utility classes that can be used to create custom designs according to your specific project needs. This allows developers to have complete control over their designs and enables them to create unique and personalized user experiences.

2. Lightweight: Another major benefit of using Tailwind CSS is its lightweight nature. Since it only generates the required classes based on your design choices, it helps keep the file size small and improves website performance. This is especially beneficial for Drupal websites, which often have heavy content and functionality that can slow down page loading speed.

3. Mobile-responsive: With more people accessing websites through mobile devices, having a mobile-responsive design has become crucial for any website's success. The use of Tailwind CSS makes it easier to create responsive designs without adding extra code or media queries. Its mobile-first approach ensures that your website looks great on all screen sizes without compromising on performance.

4. Faster development process: By eliminating the need for manually writing complex CSS rules, Tailwind speeds up the development process significantly. It also reduces the chances of errors as developers can easily reuse existing utility classes instead of creating new ones from scratch every time they need a similar style element.

5. Scalable designs: With Tailwind CSS, you can create scalable designs that are easy to maintain and modify in the long run. As your website grows and evolves, making changes to its design becomes more manageable with Tailwind's utility classes, as opposed to traditional frameworks where modifying styles can be a tedious process.

Why Integrate Drupal with Tailwind CSS?

One of the main reasons to integrate Drupal with Tailwind CSS is its ability to simplify web development. With Tailwind CSS, developers can avoid writing repetitive and lengthy code for styling. Instead, they can use pre-built classes that provide consistent and reusable design patterns. This not only saves time but also results in cleaner code that is easier to maintain.

Another benefit of integrating Drupal with Tailwind CSS is its responsive design capabilities. With the increasing use of mobile devices, it has become essential for websites to be optimized for all screen sizes. Thanks to Tailwind's built-in responsive utilities, designers can easily build responsive layouts without having to write separate media queries for different devices.

Step-by-Step Guide to Integrating Drupal with Tailwind CSS

Step 1: Get your prerequisites ready

  • Local Drupal setup
  • NPM and Yarn package managers
  • Drush
  • Lando

Step 2: Create a custom theme

We will begin by creating a Drupal custom theme and then will integrate Tailwind CSS into the same. In this tutorial, I am using the starter kit theme command to generate my custom theme. Go to the web directory and type in the following command in the terminal:

php core/scripts/drupal generate-theme drupal_tailwind_theme

After pressing the enter key, our custom theme will be generated as shown below.

Next, we will generate the package.json file, which we require to initialize a new Node.js project. Go to the terminal and type in the following command:

npm init

It prompts us with a series of questions regarding our project; since for this tutorial I am keeping everything default, I press the enter key until our package.json file is added as shown in the image.

Step 3: Install and set up Tailwind CSS

Now let's install Tailwind and its dependencies and also configure it. In the terminal, run the following command:

npm install -D tailwindcss postcss autoprefixer

Next, let's generate a Tailwind CSS config file using the following command:

npx tailwindcss init -p

Go to the tailwind.config.js file and set the path for Tailwind to scan. Since our Twig templates are in the templates directory, we will use that path as shown in the image.

Now we need to create a CSS file to put the Tailwind directives in. Go to the src folder, create a file named index.css, and paste the following directives:

@tailwind base;
@tailwind components;
@tailwind utilities;

Create a folder where the Tailwind CSS will be compiled. On the same level as the src folder, create a folder named dist (or any name you prefer). We now run the following command:

npx tailwindcss --input src/index.css --output dist/index.css

After executing this command, we will be able to see the compiled version of the Tailwind CSS in the dist folder. We can verify it by going to the dist folder as shown in the image.

In the package.json file, we will be adding the scripts dev and watch, which can be executed using npm run dev & npm run watch or using yarn dev & yarn watch. The dev script compiles the code once per invocation, while the watch script keeps watching for any change.

Step 4: Adding Tailwind CSS to our Drupal theme

Now we will be adding Tailwind to our Drupal site. For that, we can add a library named tailwind and give it the CSS path dist/index.css, that is, where we have our compiled Tailwind. Next, we can add that library either globally or to a specific template. For the purpose of this tutorial, I will be adding it globally.

Using Tailwind CSS in our Drupal theme

For this tutorial, I am using the page.html.twig file and adding a red background color by including the class bg-red-500 in the markup. Clear the cache and check the output. Our Tailwind CSS class has applied the desired background color.

Final Thoughts

Tailwind CSS is a great choice when you want to build websites quickly and easily. It integrates well with Drupal, and the combination of these two tools creates a powerful synergy for web development. Reach out to us if you would like to learn more about how Specbee's Drupal experts can help you create stunning web experiences with Drupal and Tailwind CSS.
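To make the library step above concrete, here is a rough sketch of what the definition might look like in the theme's *.libraries.yml and *.info.yml files. This is an assumption based on standard Drupal theming conventions rather than code from the original article, using the drupal_tailwind_theme name generated in Step 2:

# drupal_tailwind_theme.libraries.yml
tailwind:
  css:
    theme:
      dist/index.css: {}

# drupal_tailwind_theme.info.yml (excerpt) -- attaches the library globally
libraries:
  - drupal_tailwind_theme/tailwind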
Categories: FLOSS Project Planets

Week 3 recap - points on a canvas

Planet KDE - Tue, 2024-06-18 00:56
Hey welcome to the week 3 recap blog. So after some testing, using the pixel art brush preset and clicking on individual pixels on the canvas will trigger functions like kis_paintop:paintAt and kis_brushop:paintAt. KisBrushOp represents a specific pa...
Categories: FLOSS Project Planets

Plasma 6.1

Planet KDE - Mon, 2024-06-17 20:00

Plasma 6 hits its stride with version 6.1. While Plasma 6.0 was all about getting the migration to the underlying Qt 6 frameworks correct (and what a massive job that was), 6.1 is where developers start implementing the features that will take your desktop to a new level.

In this release, you will find features that go far beyond subtle changes to themes and tweaks to animations (although there are plenty of those too), as you delve into interacting with desktops on remote machines, become more productive with usability and accessibility enhancements galore, and discover customizations that will even affect the hardware of your computer.

These features and more are being built directly into Plasma's Wayland version natively, avoiding the need for third party software and hacky extensions required by similar solutions implemented in X.

Things will only get more interesting from here. But meanwhile enjoy what will land on your desktop with your next update.

Note: Due to unforeseen circumstances, we have been unable to ship the new wallpaper, "Reef", with this version of Plasma. However, there will be a new wallpaper coming soon in the next 6.2 version.

If you can't wait, you can download "Reef" here.

We apologise for this delay and the inconvenience this may cause.

What's New Access Remote Plasma Desktops

One of the more spectacular (and useful) features added in Plasma 6.1 is that you can now start up a remote desktop directly from the System Settings app. This means that if you are a sysadmin who needs to troubleshoot users' machines, or simply need to work on a Plasma-enabled computer that is out of reach, setting up a connection is just a few clicks away.

Once enabled, you can connect to the remote desktop using a client such as KRDC. You will see the remote machine's Plasma desktop in a window and be able to interact with it from your own computer.

Customization made (more) Visual

We all love customizing our Plasma desktops, don't we? One of the quickest ways to do this is by entering Plasma's Edit Mode (right-click anywhere on the desktop background and select Enter Edit Mode from the menu).

In version 6.1, the visual aspect of Edit Mode has been overhauled and you will now see a slick animation when you activate it. The entire desktop zooms out smoothly, giving you a better overview of what is going on and allowing you to make your changes with ease.

Persistent Apps

Plasma 6.1 on Wayland now has a feature that "remembers" what you were doing in your last session like it did under X11. Although this is still a work in progress, if you log off and shut down your computer with a dozen open windows, Plasma will now open them for you the next time you power up your desktop, making it faster and easier to get back to what you were doing.

Sync your Keyboard's Colored LEDs

It wouldn't be a new Plasma release without at least one fancy aesthetic customization feature. This time, however, we give you the power to reach beyond the screen, all the way onto your keyboard, as you can now synchronize the LED colours of your keys to match the accent colour on your desktop.

The ultimate enhancement for the fashion-conscious user.

Please note that this feature will not work on unsupported keyboards — support for additional keyboards is on the way!

And all this too...
  • We have simplified the choices you see when you try to exit Plasma by reducing the number of confusing options. For example, when you press Shutdown, Plasma will only list Shutdown and Cancel, not every single power option.

  • Screen Locking gives you the option of configuring it to behave like a traditional screensaver, as you can choose for it not to ask you for a password to unlock it.

  • Two visual accessibility changes make it easier to use the cursor in Plasma 6.1:

    • Shake Cursor makes the cursor grow when you "shake" it. This helps you locate that tiny little arrow on your large, cluttered screens when you lose it among all those windows.

    • Edge Barrier is useful if you have a multi-monitor setup and want to access things on the very edge of your monitor. The "barrier" is a sticky area for your cursor near the edge between screens, and it makes it easier to click on things (if that is what you want to do), rather than having the cursor scooting across to the next display.

  • Two major Wayland breakthroughs will greatly improve your Plasma experience:

    • Explicit Sync eliminates flickering and glitches traditionally experienced by NVidia users.

    • Triple Buffering support in Wayland makes animations and screen rendering smoother.


and there's much more going on. To see the full list of changes, check out the changelog for Plasma 6.1.
Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #455 - Top 5 uses of AI for Drupal

Planet Drupal - Mon, 2024-06-17 16:08

Today we are talking about AI Tips for Drupal Devs, AI Best Practices, and Drupal Droid with guest Mike Miles. We’ll also cover AI interpolator as our module of the week.

For show notes visit: www.talkingDrupal.com/455

Topics
  • Top 5 tips
    • Idea Generation (Ideation)
    • Code Generation
    • Debugging
    • Content Generation
    • Technical Explanations
  • How do you suggest people use AI for Ideation
  • Is MIT Sloan using AI to help with Drupal Development
  • Does that code get directly inserted into your sites
  • What are some common pitfalls
  • Is your team using AI for debugging
  • Any best practices you have found helping when working with AI
  • Is MIT Sloan using AI for content generation
  • What is an example of how you use AI for technical explanations
  • What is your view on the future of AI in Drupal? Do you think AI will replace Drupal developers?
Resources

Guests

Michael Miles - mike-miles.com mikemiles86

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Randy Fay - rfay

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to use AI to help populate entity fields that were left blank by Drupal content authors? There’s a module for that.
  • Module name/project name: AI Interpolator
  • Brief history
    • How old: created in Sep 2023 by Marcus Johansson of FreelyGive
    • Versions available: 1.0.0-rc4
  • Maintainership
    • Actively maintained, recent release in the past month
    • Security coverage - opted in, needs stable release
    • Test coverage
    • Documentation - User guide
    • Number of open issues: 18 open issues, none of which are bugs
  • Usage stats:
    • 94 sites
  • Module features and usage
    • In scientific fields, interpolation is the process of using known data to extrapolate or estimate unknown data points. In a similar way this module helps your Drupal site provide values for fields that didn’t receive input, based on the information that was provided.
    • Fundamentally Interpolator AI provides a framework and an API, and then relies on companion modules for processing, either by leveraging third-party services like AI LLMs, or PHP-based scripting.
    • There are existing integrations with a variety of AI services, including OpenAI, Dreamstudio, Hugging Face, and more.
    • You can add retrievers to help extract and normalize the content you’re processing, for example photos from an external site, and other tools to help normalize and optimize content and media, and optimize any prompts you will be using with AI services.
    • You can also extend the workflow capabilities of AI Interpolator, for example using the popular and powerful ECA module that we’ve talked about before on this show.
Categories: FLOSS Project Planets

PyBites: Deploying a FastAPI App as an Azure Function: A Step-by-Step Guide

Planet Python - Mon, 2024-06-17 14:07

In this article I will show you how to deploy a FastAPI app as a function in Azure. Prerequisites are that you have an Azure account and have the Azure CLI installed (see here).

Setup Azure

First you need to login to your Azure account:

az login

It should show your subscriptions and you select the one you want to use.

Create a resource group

Like this:

az group create --name myResourceGroup --location eastus

A resource group is a container that holds related resources for an Azure solution (of course use your own names throughout this guide).

It comes back with the details of the resource group you just created.

Create a storage account

Using this command:

az storage account create --name storage4pybites --location eastus --resource-group myResourceGroup --sku Standard_LRS

Note that for me, to get this working, I had to first make an MS Storage account, which I did in the Azure portal.

The storage account name can only contain lowercase letters and numbers. Also make sure it’s unique enough (e.g. mystorageaccount was already taken).

Upon success it comes back with the details of the storage account you just created.

Create a function app

Now, create a function app.

az functionapp create --resource-group myResourceGroup --consumption-plan-location eastus --runtime python --runtime-version 3.11 --functions-version 4 --name pybitesWebhookSample --storage-account storage4pybites --os-type Linux

Your Linux function app 'pybitesWebhookSample', that uses a consumption plan has been successfully created but is not active until content is published using Azure Portal or the Functions Core Tools.
Application Insights "pybitesWebhookSample" was created for this Function App. You can visit <URL> to view your Application Insights component
App settings have been redacted. Use `az webapp/logicapp/functionapp config appsettings list` to view.
{ ...JSON response... }

A bit more what these switches mean:

  • --consumption-plan-location is the location of the consumption plan.
  • --runtime is the runtime stack of the function app, here Python.
  • --runtime-version is the version of the runtime stack, here 3.11 (3.12 was not available at the time of writing).
  • --functions-version is the version of the Azure Functions runtime, here 4.
  • --name is the name of the function app (same here: make it something unique / descriptive).
  • --storage-account is the name of the storage account we created.
  • --os-type is the operating system of the function app, here Linux (Windows is the default but did not have the Python runtime, at least for me).
Create your FastAPI app

Go to your FastAPI app directory and create a requirements.txt file with your dependencies if not done already.

To demo this from scratch, let’s create a simple FastAPI webhook app (the associated repo is here).

mkdir fastapi_webhook
cd fastapi_webhook
mkdir webhook

First, create a requirements.txt file:

fastapi
azure-functions

The azure-functions package is required to run FastAPI on Azure Functions.

Then create an __init__.py file inside the webhook folder with the FastAPI app code:

import azure.functions as func
from fastapi import FastAPI

app = FastAPI()


@app.post("/webhook")
async def webhook(payload: dict) -> dict:
    """
    Simple webhook that just returns the payload.

    Normally you would do something with the payload here.
    And add some security like secret key validation (HMAC for example).
    """
    return {"status": "ok", "payload": payload}


async def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    return await func.AsgiMiddleware(app).handle_async(req, context)

This took a bit of trial and error to get working, because it was not evident that I had to use AsgiMiddleware to run FastAPI on Azure Functions. This article helped with that.

Next, create a host.json file in the root of the project for the Azure Functions host:

{ "version": "2.0", "extensions": { "http": { "routePrefix": "" } } }

A local.settings.json file for local development:

{ "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true", "FUNCTIONS_WORKER_RUNTIME": "python" } }

And you’ll need a function.json file in the webhook directory that defines the configuration for the Azure Function:

{ "scriptFile": "__init__.py", "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["post"] }, { "type": "http", "direction": "out", "name": "$return" } ] }

I also included a quick test script in the root of the project:

import sys

import requests

url = "http://localhost:7071/webhook" if len(sys.argv) < 2 else sys.argv[1]
data = {"key": "value"}

resp = requests.get(url)
print(resp)

resp = requests.post(url, json=data)
print(resp)
print(resp.text)

If you run this script, you should see a 404 for the GET request (not implemented) and a 200 for the POST request. For remote testing I can run the script with the live webhook URL (see below).

Run the function locally

To verify that the function works locally run it with:

func start

Then run the test script:

$ python test.py
<Response [404]>
<Response [200]>
{"status":"ok","payload":{"key":"value"}}

Awesome!

Deploy it to Azure

Time to deploy the function to Azure:

(venv) √ fastapi_webhook (main) $ func azure functionapp publish pybitesWebhookSample
Getting site publishing info...
[2024-06-17T16:06:08.032Z] Starting the function app deployment...
Creating archive for current directory...
...
...
Remote build succeeded!
[2024-06-17T16:07:35.491Z] Syncing triggers...
Functions in pybitesWebhookSample:
    webhook - [httpTrigger]
        Invoke url: https://pybiteswebhooksample.azurewebsites.net/webhook

Let’s see if it also works on Azure:

$ python test.py https://pybiteswebhooksample.azurewebsites.net/webhook
<Response [404]>
<Response [200]>
{"status":"ok","payload":{"key":"value"}}

Cool, it works!

If something goes wrong remotely, you can check the logs:

az webapp log tail --name pybitesWebhookSample --resource-group myResourceGroup

And when you make changes, you can redeploy with:

func azure functionapp publish pybitesWebhookSample

That’s it, a FastAPI app running as an Azure Function! I hope this article helps you if you want to do the same. If you have any questions, feel free to reach out to me on X | Fosstodon | LinkedIn -> @bbelderbos.

Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update June 17

Open Source Initiative - Mon, 2024-06-17 12:52
Explaining the concept of Data information
  • After much debate regarding training data, @stefano published a summary of the positions expressed and some clarifications about the terminology included in draft v.0.0.8. You can read the rationale about it and share your thoughts on the forum. 
  • Initial thoughts:
    • @Senficon (Felix Reda) adds that while the discussion has highlighted the case for data information, it’s crucial to understand the implications of copyright law on AI, particularly concerning access to training data. Open Source software relies on a legal element (copyright licenses) and an access element (availability of source code). However, this framework does not seamlessly apply to AI, as different copyright regimes allow text and data mining (TDM) for AI training but not the redistribution of datasets. This discrepancy means that requiring the publication of training datasets would make Open Source AI models illegal, despite TDM exceptions that facilitate AI development. Also, public domain status is not consistent internationally, complicating the creation of legally publishable datasets. Consequently, a definition of Open Source AI that imposes releasing datasets would impede collaborative improvements and limit practical significance. Emphasizing data innovation can help maintain Open Source principles without legal pitfalls.
Concerns and feedback on anchoring on the Model Openness Framework
  • @amcasari expresses concern about the usability and neutrality of the “Model Openness Framework” (MOF) for identifying AI systems, suggesting it doesn’t align well with current industry practices and isn’t ready for practical application without further feedback and iteration.
  • @shujisado points out that the MOF’s classification of components doesn’t depend on the specific IP laws applied, but rather on a general legal framework, and highlights that Japan’s IP law system differs from the US and EU, yet finds discussions based on the OSD consistent.
  • @stefano emphasizes the importance of having well-thought-out, timeless principles in the Open Source AI Definition document, while viewing the Checklist as a more frequently updated working document. He also supports the call to see practical examples of the framework in use and proposes separating the Checklist from the main document to reduce confusion.
Initial Report on Definition Validation
  • Reviews of eleven different AI systems have been published. We do these reviews to check existing systems' compatibility with our current definition. These are the systems in question: Arctic, BLOOM, Falcon, Grok, Llama 2, Mistral, OLMo, OpenCV, Phy-2, Pythia, and T5.
    • @mer has set up a review sheet for the Viking model upon request from @merlijn-sebrechts.
    • @anatta8538 asks if MLOps is considered within the topic of the Model Openness Framework and whether CLIP, an LMM, would be consistent with the OSAID.
    • @nick clarifies that the evaluation focuses on components as described in the Model Openness Framework, which includes development and deployment aspects but does not cover MLOps as a whole.
Why and how to certify Open Source AI
  • @Alek_Tarkowski agrees that certification of open-source AI will be crucial under the AI Act and highlights the importance of defining what constitutes an Open Source license. He points out the confusion surrounding terms like "free and open source license" and suggests that the issue of responsible AI licensing as a form of Open Source licensing needs resolution. He notes that some restrictive licenses are gaining traction and may need consideration for exemption from regulation, and thus urges a consensus.
Open Source AI Definition Town Hall – June 14, 2024

Slides and the recording of our previous townhall meeting can be found here.

Categories: FLOSS Research

ADCI Solutions: How to add a live chat to a website using Mercure

Planet Drupal - Mon, 2024-06-17 11:58
We are currently working on a platform for contacting influencers and media to promote products, services, and personal brands. As the platform was developing, the client felt the need to add live chat functionality so that platform clients could communicate with its administrators and copywriters. But it had to store its history on the client server.
Categories: FLOSS Project Planets

Real Python: Ruff: A Modern Python Linter for Error-Free and Maintainable Code

Planet Python - Mon, 2024-06-17 10:00

Linting is essential to writing clean and readable code that you can share with others. A linter, like Ruff, is a tool that analyzes your code and looks for errors, stylistic issues, and suspicious constructs. Linting allows you to address issues and improve your code quality before you commit your code and share it with others.

Ruff is a modern linter that’s extremely fast and has a simple interface, making it straightforward to use. It also aims to be a drop-in replacement for many other linting and formatting tools, such as Flake8, isort, and Black. It’s quickly becoming one of the most popular Python linters.

In this tutorial, you’ll learn how to:

  • Install Ruff
  • Check your Python code for errors
  • Automatically fix your linting errors
  • Use Ruff to format your code
  • Add optional configurations to supercharge your linting

To get the most from this tutorial, you should be familiar with virtual environments, installing third-party modules, and be comfortable with using the terminal.

Ruff cheat sheet: Click here to get access to a free Ruff cheat sheet that summarizes the main Ruff commands you’ll use in this tutorial.

Take the Quiz: Test your knowledge with our interactive “Ruff: A Modern Python Linter” quiz. You’ll receive a score upon completion to help you track your learning progress.

Installing Ruff

Now that you know why linting your code is important and how Ruff is a powerful tool for the job, it’s time to install it. Thankfully, Ruff works out of the box, so no complicated installation instructions or configurations are needed to start using it.

Assuming your project is already set up with a virtual environment, you can install Ruff in the following ways:

$ python -m pip install ruff

In addition to pip, you can also install Ruff with Homebrew if you’re on macOS or Linux:

$ brew install ruff

Conda users can install Ruff using conda-forge:

$ conda install -c conda-forge ruff

If you use Arch, Alpine, or openSUSE Linux, you can also use the official distribution repositories. You’ll find specific instructions on the Ruff installation page of the official documentation.

Additionally, if you’d like Ruff to be available for all your projects, you might want to install Ruff with pipx.

You can check that Ruff installed correctly by using the ruff version command:

$ ruff version
ruff 0.4.7

For the ruff command to appear in your PATH, you may need to close and reopen your terminal application or start a new terminal session.

Linting Your Python Code

While linting helps keep your code consistent and error-free, it doesn’t guarantee that your code will be bug-free. Finding the bugs in your code is best handled with a debugger and adequate testing, which won’t be covered in this tutorial. Coming up in the next sections, you’ll learn how to use Ruff to check for errors and speed up your workflow.

Checking for Errors

The code below is a simple script called one_ring.py. When you run it, it gets a random Lord of the Rings character name from a tuple and lets you know if that character bore the burden of the One Ring. This code has no real practical use and is just a bit of fun. Regardless of the size of your code base, the steps are going to be the same:
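
The actual one_ring.py listing is in the full article linked below; as a stand-in, here is a minimal, hypothetical script showing the kind of issues Ruff flags with its default rules – an unused import (F401) and an unused local variable (F841) – neither of which stops the script from running:

# one_ring.py -- hypothetical stand-in for the script in the full article
import os  # Ruff's default rules flag this as F401: `os` imported but unused

from random import choice

CHARACTERS = ("Frodo", "Samwise", "Gandalf", "Aragorn", "Boromir")

def main() -> None:
    total = len(CHARACTERS)  # flagged as F841: local variable assigned but never used
    name = choice(CHARACTERS)
    if name == "Frodo":
        print(f"{name} bore the burden of the One Ring.")
    else:
        print(f"{name} never carried the Ring.")

if __name__ == "__main__":
    main()

Running ruff check one_ring.py on a file like this reports both issues, and ruff check --fix can remove the unused import automatically.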

Read the full article at https://realpython.com/ruff-python/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Real Python: Quiz: Ruff: A Modern Python Linter

Planet Python - Mon, 2024-06-17 08:00

In this quiz, you’ll test your understanding of Ruff, a modern linter for Python.

By working through this quiz, you’ll revisit why you’d want to use Ruff to check your Python code and how it automatically fixes errors, formats your code, and provides optional configurations to enhance your linting.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Robin Wilson: Accessing Planetary Computer STAC files in DuckDB

Planet Python - Mon, 2024-06-17 05:36

Microsoft Planetary Computer is a wonderful archive of geospatial datasets (primarily raster images of various types), provided with a STAC catalog to enable them to be easily searched through an API. That’s fine for normal usage where you want to find a selection of images and access the images themselves, but less useful when you want to do some analysis of the metadata itself.

For that use case, Planetary Computer provide the STAC metadata in bulk, stored as GeoParquet files. Their documentation page explains how to use this with geopandas and do some large-ish scale processing with Dask. I wanted to try this with DuckDB – a newer tool that is excellent at accessing and processing Parquet files, including efficiently accessing files available via HTTP, and only downloading the relevant parts of the file. So, this post explains how I managed to do this – showing various different approaches I tried and how (or if!) each of them worked.

Getting URLs

Planetary Computer URLs are basically just URLs to files on Azure Blob Storage. However, the URLs require signing with a Shared Access Signature (SAS) key. Luckily, there is a Planetary Computer Python module that will use an API to generate the correct keys for us.

In the docs for using the STAC GeoParquet files, they give this code:

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1/",
    modifier=planetary_computer.sign_inplace,
)
asset = catalog.get_collection("io-lulc-9-class").assets["geoparquet-items"]

This gets a collection, and the geoparquet-items collection-level asset, having used a ‘modifier’ to sign the URLs as they are acquired from the STAC API. The URL is then stored in asset.href.
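
Before going further, here is a small sketch (using only the objects created above) of what the signed asset actually contains – the href and the storage options that the rest of this post keeps coming back to:

# Sketch: inspect the signed asset returned by the STAC API
print(asset.href)
# -> abfs://items/io-lulc-9-class.parquet (the URL scheme discussed below)
print(asset.extra_fields["table:storage_options"]["account_name"])
# -> the storage account name (pcstacitems for these collections)
# The "credential" entry in the same dictionary holds the time-limited SAS token.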

Using the DuckDB CLI – with HTTP URLs

When using GeoPandas, the docs show that you can pass a storage_options parameter to the read_parquet method and it will ‘just work’:

df = geopandas.read_parquet(
    asset.href,
    storage_options=asset.extra_fields["table:storage_options"]
)

However, to use the file in DuckDB we need a full URL. The SQL code we’re going to use in DuckDB for a quick test looks like this:

SELECT * FROM <URL> LIMIT 1

This just gets the first row of whatever URL is given.

Unfortunately, though, the URL provided by the STAC catalog looks like this:

abfs://items/io-lulc-9-class.parquet

If we try that with DuckDB, we find it doesn’t know how to deal with it. So, we need to convert this to a standard HTTP URL that can be downloaded as if it were just typed into a web browser (or used with cURL or wget). After a bit of playing around, I found I could create a full URL with this Python code:

url = (
    f'https://{asset.extra_fields["table:storage_options"]["account_name"]}'
    f'.dfs.core.windows.net/{asset.href[7:]}'
    f'?{asset.extra_fields["table:storage_options"]["credential"]}'
)

That’s taking the account_name and sticking it in as part of the host part of the URL, then extracting the path from the original URL and adding that, and then finishing with the SAS token (stored as credential in the storage options) as a URL query parameter.

That results in a URL that looks like this (split over multiple lines for readability):

https://pcstacitems.dfs.core.windows.net/items/io-lulc-9-class.parquet ?st=2024-06-14T19%3A04%3A20Z&se=2024-06-15T19%3A49%3A20Z&sp=rl&sv=2024-05-04& sr=c&skoid=9c8ff44a-6a2c-4dfb-b298-1c9212f64d9a &sktid=72f988bf-86f1-41af-91ab-2d7cd011db47&skt=2024-06-15T09%3A00%3A14Z &ske=2024-06-22T09%3A00%3A14Z&sks=b &skv=2024-05-04 &sig=A0e%2BihbAagHmIZ%2Bg8gzH71TavRYQMZiHWJ/Uk9j0Who%3D

(note: that specific URL won’t work for you, as the SAS tokens are time-limited).

If we insert that into our DuckDB SQL statement then we find it works (click to enlarge):

Great! Now let’s try with a different collection on Planetary Computer – in this case the Sentinel 2 collection. We can make the URL in the same way as before, by changing how we define asset:

asset = catalog.get_collection("sentinel-2-l2a").assets["geoparquet-items"]

and then using the same query in DuckDB with the new URL. Unfortunately, this time we get an error:

Invalid Input Error: File '<URL>' too small to be a Parquet file

The problem here is that for larger collections, the STAC metadata is spread out over a series of GeoParquet files inside a folder, and the URL we’ve been given is just to the folder (even though it ends in .parquet). As we’re just using HTTP, there is no way to do things like list the files in a folder, so DuckDB has no way to find out what files are within the folder and start reading them. We need to find another way.

Using the DuckDB CLI – with the Azure extension

Conveniently, there is an Azure extension for DuckDB, and it lets you use URLs like 'abfss://<my_filesystem>/<path>/<my_file>.<parquet_or_csv>'. That’s a slightly different URL scheme to the one we’ve been given (abfss as opposed to abfs), but we can easily sort that.

Looking at the authentication docs though, it seems to require you to specify either a connection string, a service principal or using the ‘Azure Credential Chain’ (which, I think, works with the az command line and various environment variables that you may have set up). We don’t have any of those – they’re all a far broader scope than what we’ve got, which is just a SAS token for a specific file or folder. It looks like the Azure extension doesn’t support this, so we’ll have to look for another way.
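
For reference, here is roughly the shape of session the Azure extension is designed for – a hedged sketch that assumes you have a full connection string for the storage account, which is exactly what Planetary Computer doesn’t give us:

INSTALL azure;
LOAD azure;
-- Assumes credentials we don't have: a full connection string for the account
SET azure_storage_connection_string = '<connection string for the storage account>';
SELECT * FROM 'abfss://items/io-lulc-9-class.parquet' LIMIT 1;

With only a SAS token scoped to a specific file or folder, none of those credential types apply, which is why the next approach moves to Python.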

Using the DuckDB Python library – with an Azure connector

As well as the command-line interface, DuckDB can also be used from Python. To set this up, just run pip install duckdb. While you’re at it, you might want to install pandas and the adlfs library for connecting to Azure Storage.

Using this is actually quite simple, first import the libraries:

import duckdb
from adlfs.spec import AzureBlobFileSystem

then set up the Azure connection:

a = AzureBlobFileSystem(
    account_name='pcstacitems',
    sas_token=asset.extra_fields["table:storage_options"]['credential']
)

Note how here we’re using the sas_token parameter to provide a SAS token. We could use a different parameter here to provide a connection string or some other credential type. I couldn’t find much real-world use of this sas_token parameter when looking online – so this is probably the key ‘less well documented’ bit to take away from this article.

Continuing, we then connect to DuckDB and ‘register’ this Azure connection:

connection = duckdb.connect()
connection.register_filesystem(a)

From there, we can run a SQL query like this:

query = connection.sql(f""" SELECT * FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet' LIMIT 1; """) query.to_df()

The call to to_df at the end converts the result into a Pandas DataFrame for easy viewing and manipulation. The results are shown below (click to enlarge):

Note that we’re passing a URL of abfs://items/sentinel-2-l2a.parquet/*.parquet – this is the URL from the STAC catalog with /*.parquet added to the end to ensure DuckDB picks up the large number of Parquet files stored there. I’d have thought this would have worked without that, but if I leave it out I get an error saying:

TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'

I suspect this is something to do with how things are passed to the Azure connection that we registered with DuckDB, but I’m not entirely sure. If you do know, then please leave a comment!
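
Putting the pieces together, here is a compact end-to-end sketch of the approach that finally worked. It only uses the calls already shown in this post (pystac-client, planetary-computer, adlfs and duckdb), so treat it as a starting point rather than a polished implementation:

import duckdb
import planetary_computer
import pystac_client
from adlfs.spec import AzureBlobFileSystem

# Get the signed collection-level GeoParquet asset from the STAC API
catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1/",
    modifier=planetary_computer.sign_inplace,
)
asset = catalog.get_collection("sentinel-2-l2a").assets["geoparquet-items"]

# Register an Azure filesystem with DuckDB using just the SAS token
fs = AzureBlobFileSystem(
    account_name=asset.extra_fields["table:storage_options"]["account_name"],
    sas_token=asset.extra_fields["table:storage_options"]["credential"],
)
connection = duckdb.connect()
connection.register_filesystem(fs)

# Query across all the partitioned Parquet files (note the trailing /*.parquet)
df = connection.sql(
    "SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet'"
).to_df()
print(df)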

A few fun queries

So, now we’ve got this working, what can we do? Well, we can – fairly efficiently – run queries across all the STAC metadata for Sentinel 2 images. I’ll just give a few examples of queries I’ve run below.

We can find out how many images there are in the collection in total:

SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet'

We can do the same for just the images acquired in 2020 – by using a fact we know about how the individual parquet files are named (from the Planetary Computer docs):

SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'

And we can start looking at how many scenes were captured with different amounts of cloud cover:

SELECT
    COUNT(*) / (SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'),
    CASE
        WHEN "eo:cloud_cover" BETWEEN 0 AND 5 THEN '0-5%'
        WHEN "eo:cloud_cover" BETWEEN 5 AND 10 THEN '5-10%'
        WHEN "eo:cloud_cover" BETWEEN 10 AND 15 THEN '10-15%'
    END AS category
FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'
GROUP BY category

This tells us that 20% of scenes had between 0 and 5% cloud cover (quite a high number, I thought – but then again, I am used to living in the UK!), and around 4-5% in each of the 5-10% and 10-15% categories.

There are plenty of other analyses that you could do with these Parquet files, of course. At some point I might even get around to the task which initially made me look into this: that is, trying to find Landsat and Sentinel 2 scenes that were acquired over the same location at very similar times. I think I’ll leave that for another day though…

Categories: FLOSS Project Planets

Greg Casamento: Keysight laid me off in January!

GNU Planet! - Mon, 2024-06-17 01:24
A little history first. Keysight is a large company that, primarily, makes testing equipment such as oscilloscopes and other electronics. They bought a company a few years back named TestPlant. Prior to that, TestPlant bought a company by the name of Redstone that produced a product known as Eggplant. Recently, I was laid off for economic reasons (at least that's what they said). It occurs to me that nothing in this world lasts forever. I was so depressed when I was let go because Keysight was the perfect home for me... they used GNUstep deeply. So, as you can imagine, I was deeply upset when things ended... but all things do. 
 I think it happened for several reasons: 
  • Economic - This is what was explained to me, but I am not sure I believe it 
  • Politics - I think this part is because I expressed my opinions HONESTLY about the direction of the company given that they wanted to make the application into a VSCode plugin.
  • Perception - I am 54 years old... so I think that they believed that Objective-C was my one and only talent, it's not... I know many other languages and have many other skills. 
Unfortunately, in the US, any employer can let go of any employee or contractor for ANY reason. This is known as at-will employment, making it very hard to take any action against any employer (not that this is something I considered).
Keysight is and will remain a major contributor to GNUstep.
That being said, I recently ran into something rather disturbing at another company.   I have been working with a company based out of New Mexico that is interested in space applications.  They have been using GNUstep and have been awaiting funding.
The lead of this effort expressed something during a meeting saying "We will work on the GNUstep side of this because there is no reason we should have to pay for any of this."   This hit a sour note with me to say the very least.   As it turns out he was under the mistaken impression that, because the work was on GNUstep, it was for free... which is WRONG.
I wonder if the same impression was present at Keysight or if other companies believe this.  The saying, according to RMS, is "Free as in freedom, not as in beer."   If you are a manager at a company who is under the mistaken impression that work on any Free Software or Open Source project is free when your product depends on it, please correct your thinking.   Just because it is someone's passion project does NOT mean that they are going to do that work for free and prioritize the things that need to be done for your organization.
All of that being said the positive sides are this:
  1. More time to code on GNUstep without interruption
  2. More time to work on my own projects
  3. Time to rest and relax
So, as much as I hate being unemployed there ARE some upsides to it.  Here's to hoping something works out soon.   I literally loved my job at Keysight and, honestly, hope to return.   I have my eye on their changes as well as those of others just like any other member of the community.  Yours, GC
Categories: FLOSS Project Planets

The Drop Times: Tech Talks, Brand Strategies, and the Future of Drupal: Highlights from The DropTimes

Planet Drupal - Mon, 2024-06-17 01:16

An enterprise web solution is inherently complex. Endless integrations with applications that are added or dropped as requirements change, multiple front-ends that deliver varied and personalised digital experiences, the cultural nuances catered to in its l10n editions, the time-zone differences that must be accommodated: these are just the most basic parts that add to the complexity.

Continuous investment, development and deployment are part and parcel of such a solution. Even so, it remains agreeable only if the complexity is masked behind the simplicity of the consumer touch points.

Two separate sets of consumers should be considered here. One is the enterprise consumers, the content creators and marketers, who are the immediate beneficiaries of the solution. The second, a more prominent set, is the customers to whom the enterprises cater.

Drupal, the platform for building a digital experience, invested heavily in satisfying the end customers. That was what Drupal agencies were doing all these years. They got an exoskeleton on which they built the end-user experiences. But what about the enterprise consumers themselves? Weren’t they the ones who ultimately decided to ditch or continue using the services? How good were their experiences sans the involvement of a hired developer?

Simplifying the touch points for enterprise consumers is a development and design decision. These content creators and marketers will benefit the most from the Starshot Initiative. It supports the marketing team by reducing go-to-market time and eases the work of content creators by providing the most valuable tools out of the box. A complex solution is now draped in a simple cassock.

The first story I’m sharing today concerns Drupal’s new brand strategy and future directions. The Starshot Initiative manifests this strategy and the direction the project is taking. An interview with Shawn Perritt, Senior Director of Brand & Creative at Acquia, shines a light on this brand refresh. Read Shawn’s conversation with Alka Elizabeth, our sub-editor.

Montreal became the hub for tech talks in the past week. We conducted one-question interviews with three featured speakers at Evolve Drupal Montreal. Kazima Abbas, our sub-editor, spoke with Josh Koenig, co-founder of Pantheon, Joe Kwan, Manager of Web Development at the University of Waterloo, and the formidable Mike Herchel, Senior Front-end Developer at Agileana. Their answers form part of the feature published here.

Kazima also conducted an exclusive interview with BaddĂœ Sonja Breidert, CEO and Co-Founder of 1xINTERNET. The interview focused on the agency’s support for the Starshot Initiative based on its experience with ‘Try Drupal’, which implements a default Drupal deployment for enterprises. It explains why they did not wait or hesitate to offer all-out support for the new initiative.

Alka Elizabeth and Ben Peter Mathew, our community manager, had jointly covered the Starshot Session on ‘Strategic Milestones in Product Definition’ led by Dries Buytaert and Cristina Chumillas on June 7. The report can be read here.

The next two stories are our signature stories, written by our founder and lead, Anoop John, CTO of Zyxware Technologies. From the start, Anoop was vocal about ‘DrupalCollab’, an idea he had been nurturing for years. As part of the initiative, he has published a list of the largest 500+ cities in the world with a sizable population of people tagged ‘Drupal developers’ on LinkedIn. You can read the analysis here. The second story published with this list is, On Using LinkedIn to Analyze the Size of the Drupal Community, an article on the methodology for this analysis using LinkedIn, associated errors and justifications. Based on these stories, Anoop is trying to build momentum to restart local Drupal meetups in as many cities as possible. Also, read his LinkedIn article on the initiative here.

Mautic 5.1 Andromeda Edition was released last week with major updates. Read our report here. We published a quick review of the Drupal Quick Exit Module developed by Oomph Inc. Ben Peter Mathew spoke with Alyssa Varsanyi of Oomph to produce this report.

To ensure brevity, I shall list the rest of the important stories below without much ado.

That is for the week, dear readers. I wish you all a very sacred Eid Ul Adha. Let the festival of sacrifice be a time to ponder the things we are ready to give up for the better common good.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thanks and regards.

Sincerely,
Sebin A. Jacob,
Editor-in-Chief, The DropTimes

Categories: FLOSS Project Planets

GSoC'24 Okular | Week 2-3 Recap

Planet KDE - Mon, 2024-06-17 01:00
People of the Internet,

While working on keystroke events, I realized my improvements to the event.change property were still inconsistent for certain Unicode characters. This led me to delve into code units, code points, graphemes, and other cool Unicode concepts. I found this blog post to be very enlightening.

Here’s an update on my progress over the past two weeks:

MRs merged:
  • event.change : The change property of the event object now correctly handles Unicode, with adjustments to selStart and selEnd calculations. !MR998
  • cursor position and undo/redo fix : Fixed cursor position calculations to account for rejected input text and resolved merging issues with undo/redo commands in text fields. !MR1011
  • DocOpen Event implementation : Enabled document-level scripts to access the event object by implementing the DocOpen event. !MR1003
  • Executing validation events correctly : Fixed a bug where validation scripts wouldn’t run after KeystrokeCommit scripts in Okular. !MR999
  • Widget refresh functions for RadioButton, ListEdit and ComboEdit : Added refresh functions as slots for RadioButton, ListEdit, and ComboEdit widgets, aiding in reset functionality and script updates. !MR1012
  • Additional document actions in Poppler : Implemented reading additional document actions (CloseDocument, SaveDocumentStart, SaveDocumentFinish, PrintDocumentStart, PrintDocumentFinish) in the qt5 and qt6 frontends for Poppler. !MR1561 (in Poppler)
MRs currently under review:
  • Reset form implementation in qt6 frontend for Okular : Working on the reset form functionality in Okular, currently focusing on qt6 frontend details. !MR1564 (in Poppler)
  • Reset form in Okular : Using the Poppler API to reset forms within Okular. !MR1007
  • Fixing order of execution of events for text form fields : Addressing the incorrect execution order of certain events (e.g., calculation scripts) and ensuring keystroke commit, validation, and format scripts are evaluated correctly when setting fields via JavaScript. !MR1002

For the coming weeks, my focus will be on implementing reset forms, enhancing keystroking and formatting scripts, and possibly starting on submit forms. Let’s see how it goes.

See you next time. Cheers!

Categories: FLOSS Project Planets
