Feeds

Morpht: Unleashing the power of Metatag custom tags

Planet Drupal - Fri, 2024-12-06 03:09
The Metatag 2.1.0 release introduces the powerful Metatag Custom Tags submodule, derived from the popular Custom Meta module. This new feature simplifies custom metadata management and enhances SEO.
Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 284 released

Planet Debian - Thu, 2024-12-05 19:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 284. This version includes the following changes:

[ Chris Lamb ]

  • Simplify tests_quines.py::test_{differences,differences_deb} to use assert_diff and not mangle the expected test output.
  • Update some tests to support file(1) version 5.46. (Closes: reproducible-builds/diffoscope#395)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

icecream!

Planet KDE - Thu, 2024-12-05 16:28

Lots of KDE hacking these days, and that comes with compiling large amounts of code. Right now, I am installing (well, building from source) Plasma Mobile on an “old” laptop so I can test some patches natively on a touchscreen device. The machine has just two cores (hyperthreaded), so builds take rather long, especially if you build Qt and all the 80+ packages that are needed for a fully working Plasma system.
One of the tools that does an incredible job while being super flexible to use is icecream. Icecream (or “icecc”) allows you to distribute your build over multiple machines: it basically ships compile jobs, with everything needed to run them, to other machines on a local network, meaning you can parallelize your builds.

Icecream has a nice visualization tool, called icecream-monitor, which you can stare at while your builds are running (in case you don’t have anyone handy for a sword-fight). In the screenshot you can see manta, the underpowered laptop, doing a 32-job parallel build over the network. miro is my heavy workstation with 8 cores and 128GB of RAM; it duly gets the bulk of the work assigned. frame, my (Framework) laptop, which is also quite beefy, gets something to do too, but is not taxed as heavily as that build monster in my basement office.
Icecream can be used in most environments where you run your compiler locally. Different distros are no problem! Just a matching CPU architecture is needed. Icecream does its job by providing its own g++ and gcc binaries, which relay the build jobs transparently to either your local machine or across the network. So you basically install it, adjust your PATH variable to make sure icecc’s g++ is found before your system’s compiler, and start your build. Other machines you want to join in on the fun just need to run the icecream daemon (with icecc-scheduler running on one machine on the network), and they will be automatically discovered as build slaves on your network. If you want to further speed up builds, it works with ccache as well.

Please note that you only want to do this in a trusted environment; we’re shipping executables around the network without authorization!

Categories: FLOSS Project Planets

Christian Ledermann: Trusted publishing ‐ It has never been easier to publish your python packages

Planet Python - Thu, 2024-12-05 13:52

Publishing Python packages used to be a daunting task, but not any more. Even better, it has become significantly more secure. Gone are the days of juggling usernames, passwords, or API tokens while relying on CLI tools. With trusted publishing, you simply provide PyPI with the details of your GitHub repository, and GitHub Actions takes care of the heavy lifting.

How to Publish Your Python Package with Trusted Publishing

I will introduce a workflow that publishes your package to TestPyPI when a tag is created on the development branch, or to PyPI when you merge into the main branch.

Prepare Your Package for Publishing

Ensure your Python package follows PyPI’s packaging guidelines. At a minimum, you’ll need:

  • A setup.py or pyproject.toml file defining your package metadata.
  • Properly structured code with a clear directory layout.
  • A README file to showcase your project on PyPI.

For a detailed checklist, refer to the Python Packaging User Guide.

Configure GitHub Actions in Your Repository

Let's start by creating a new GitHub action .github/workflows/test-build-publish.yml.

name: test-build-publish

on: [push, pull_request]

permissions:
  contents: read

jobs:
  build-and-check-package:
    name: Build & inspect our package.
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - uses: hynek/build-and-inspect-python-package@v2

This action builds your package and uploads the built wheel and the source distribution (SDist) as GitHub Actions artefacts.

Next, we add a step to publish to TestPyPI. This step will run whenever a tag is created, ensuring that the build from the previous step has completed successfully. Replace PROJECT_OWNER and PROJECT_NAME with the appropriate values for your repository.

  test-publish:
    if: >-
      github.event_name == 'push' &&
      github.repository == 'PROJECT_OWNER/PROJECT_NAME' &&
      startsWith(github.ref, 'refs/tags')
    needs: build-and-check-package
    name: Test publish on TestPyPI
    runs-on: ubuntu-latest
    environment: test-release
    permissions:
      id-token: write
    steps:
      - name: Download packages built by build-and-check-package
        uses: actions/download-artifact@v4
        with:
          name: Packages
          path: dist
      - name: Upload package to Test PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/

This step downloads the artefacts created during the build process and uploads them to TestPyPI for testing.

In the last step, we will upload the package to PyPI when a pull request is merged into the main branch.

  publish:
    if: >-
      github.event_name == 'push' &&
      github.repository == 'PROJECT_OWNER/PROJECT_NAME' &&
      github.ref == 'refs/heads/main'
    needs: build-and-check-package
    name: Publish to PyPI
    runs-on: ubuntu-latest
    environment: release
    permissions:
      id-token: write
    steps:
      - name: Download packages built by build-and-check-package
        uses: actions/download-artifact@v4
        with:
          name: Packages
          path: dist
      - name: Publish distribution 📦 to PyPI for push to main
        uses: pypa/gh-action-pypi-publish@release/v1

Configure GitHub Environments

To ensure that only specific tags trigger the publishing workflow and to maintain control over your release process, create a new environment called test-release by navigating to Settings -> Environments in your GitHub repository.

Set up the environment and add a deployment tag rule.

Limit which branches and tags can deploy to this environment based on rules or naming patterns.

Configure the target tags.

The pattern [0-9]*.[0-9]*.[0-9]* matches semantic versioning tags such as 1.2.3, 0.1.0, or 2.5.1b3, but it excludes arbitrary tags like bugfix-567 or feature-update.
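If you want to sanity-check a pattern before saving the rule, GitHub’s deployment tag rules use glob-like wildcards that behave much like Python’s fnmatch. The snippet below is only a rough local approximation (fnmatch is not GitHub’s actual matching engine), illustrating which of the example tags would be allowed to deploy:

from fnmatch import fnmatch

# Rough approximation of the deployment tag rule; GitHub's own matcher may differ in edge cases.
pattern = "[0-9]*.[0-9]*.[0-9]*"
for tag in ["1.2.3", "0.1.0", "2.5.1b3", "bugfix-567", "feature-update"]:
    status = "allowed" if fnmatch(tag, pattern) else "rejected"
    print(f"{tag}: {status}")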

Repeat this for the release environment to protect the main branch in the same way, but this time target the main branch instead of tags.

Set Up a PyPI Project and Link Your GitHub Repository

Create an account on TestPyPI if you don’t have one.
Navigate to your account’s Publishing settings and add a new pending publisher.
Link your GitHub repository to the PyPI project by providing the project name, your GitHub username, the repository name, the workflow name (test-build-publish.yml) and the environment name (test-release).

Repeat the above on PyPI with the environment name set to release.

Test the Workflow

Now, whenever you create a tag on your development branch, a release is uploaded to TestPyPI, and merging the development branch into main uploads a release to PyPI.

What Wasn't Covered

While this guide provides an introduction to trusted publishing workflows, there are additional steps and best practices you might consider implementing. For example, setting up branch protection rules can ensure only authorized collaborators can push tags or merge to protected branches, like main or develop. You can also enforce status checks or require pull request reviews before merging, adding another layer of quality assurance.

Have a look at my python-repository-template, which covers additional enhancements to this workflow, such as requiring unit tests and static checks to pass, checking the package with pyroma, and ensuring that your tag matches the version of your package with vercheck.

Summary

If you've been holding back on sharing your work, now is the perfect time to try trusted publishing.

Categories: FLOSS Project Planets

Luke Plant: Check if a point is in a cylinder - geometry and code

Planet Python - Thu, 2024-12-05 11:40

In my current project I’m doing a fair amount of geometry, and one small problem I needed to solve a while back was finding whether a point is inside a cylinder.

The accepted answer for this on math.stackexchange.com wasn’t ideal — part of it was very over-complicated, and also didn’t work under some circumstances. So I contributed my own answer. In this post, in addition to the maths, I’ll give an implementation in Python.

Method

We can solve this problem by constructing the cylinder negatively:

  1. Start with an infinite space

  2. Throw out everything that isn't within the cylinder.

This is a classic mathematician’s approach, but it works great here, and it also works pretty well for any simply-connected solid object with only straight or concave surfaces, depending on how complex those surfaces are.

First some definitions:

  • Our cylinder is defined by two points, A and B, and a radius R

  • The point we want to test is P

  • The vectors from the origin to points A, B and P are \(\boldsymbol{r}_A\), \(\boldsymbol{r}_B\) and \(\boldsymbol{r}_P\) respectively.

We start with the infinite space, that is we assume all points are within the cylinder until we show they aren’t.

Then we construct 3 cuts to exclude certain values of \(\boldsymbol{r}_P\).

First, a cylindrical cut of radius R about an infinite line that goes through A and B. (This was taken from John Alexiou's answer in the link above):

  • The vector from point A to point B is:

    \begin{equation*} \boldsymbol{e} = \boldsymbol{r}_B-\boldsymbol{r}_A \end{equation*}

    This defines the direction of the line through A and B.

  • The distance from any point P at vector \(\boldsymbol{r}_P\) to the line is:

    \begin{equation*} d = \frac{\| \boldsymbol{e}\times\left(\boldsymbol{r}_{P}-\boldsymbol{r}_{A}\right) \|}{\|\boldsymbol{e}\|} \end{equation*}

    This is based on finding the distance of a point to a line, and using point A as an arbitrary point on the line. We could equally have used B.

  • We then simply exclude all points with \(d > R\).

    This can be optimised slightly by squaring both sides of the comparison to avoid two square root operations.

Second, a planar cut that throws away the space above the "top" of the cylinder, which I'm calling A.

  • The plane is defined by any point on it (A will do), and any normal pointing out of the cylinder, \(-\boldsymbol{e}\) will do (i.e. in the opposite direction to \(\boldsymbol{e}\) as defined above).

  • We can see which side of this plane a point is on, as per Relation between a point and a plane:

    The point is "above" the plane (in the direction of the normal) if:

    \begin{equation*} (\boldsymbol{r}_P - \boldsymbol{r}_A) \cdot -\boldsymbol{e} > 0 \end{equation*}
  • We exclude points which match the above.

Third, a planar cut which throws away the space below the bottom of the cylinder B.

  • This is the same as the previous step, but with the other end of the cylinder and the normal vector in the other direction, so the condition is:

    \begin{equation*} (\boldsymbol{r}_P - \boldsymbol{r}_B) \cdot \boldsymbol{e} > 0 \end{equation*}
Python implementation

Below is a minimal implementation with zero dependencies outside the standard lib, in which there are just enough classes, with just enough methods, to express the algorithm neatly, following the above steps exactly.

In a real implementation:

  • You might separate out a Point class as being semantically different from Vec

  • You could move some functions to be methods (in my real implementation, I have a Cylinder.contains_point() method, for example)

  • You should probably use @dataclass(frozen=True) – immutable objects are a good default. I didn’t use them here because they aren’t needed and I’m focusing on clarity of the code.

  • Conversely, if performance is more of a consideration, and you don’t have a more general need for classes like Vec and Cylinder:

    • you might use a more efficient representation, such as a tuple or list, or a numpy array, especially if you wanted bulk operations.

    • you could use more generic dot product functions etc. from numpy.

    • you might inline more of the algorithm into a single function.

For clarity, I also have not implemented the optimisation mentioned above, in which you can avoid doing some square root operations (a sketch of this follows after the listing).

from __future__ import annotations

import math
from dataclasses import dataclass

# Implementation of https://math.stackexchange.com/questions/3518495/check-if-a-general-point-is-inside-a-given-cylinder
# See the accompanying blog post http://lukeplant.me.uk/blog/posts/check-if-a-point-is-in-a-cylinder-geometry-and-code/


# -- Main algorithm --

def cylinder_contains_point(cylinder: Cylinder, point: Vec) -> bool:
    # First condition: distance from axis
    cylinder_direction: Vec = cylinder.end - cylinder.start
    point_distance_from_axis: float = abs(
        cross_product(
            cylinder_direction,
            (point - cylinder.start),
        )
    ) / abs(cylinder_direction)
    if point_distance_from_axis > cylinder.radius:
        return False

    # Second condition: point must lie below the top plane.
    # Third condition: point must lie above the bottom plane
    # We construct planes with normals pointing out of the cylinder at both
    # ends, and exclude points that are outside ("above") either plane.
    start_plane = Plane(cylinder.start, -cylinder_direction)
    if point_is_above_plane(point, start_plane):
        return False

    end_plane = Plane(cylinder.end, cylinder_direction)
    if point_is_above_plane(point, end_plane):
        return False

    return True


# -- Supporting classes and functions --

@dataclass
class Vec:
    """
    A Vector in 3 dimensions, also used to represent points in space
    """

    x: float
    y: float
    z: float

    def __add__(self, other: Vec) -> Vec:
        return Vec(self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other: Vec) -> Vec:
        return self + (-other)

    def __neg__(self) -> Vec:
        return -1 * self

    def __mul__(self, scalar: float) -> Vec:
        return Vec(self.x * scalar, self.y * scalar, self.z * scalar)

    def __rmul__(self, scalar: float) -> Vec:
        return self * scalar

    def __abs__(self) -> float:
        return math.sqrt(self.x**2 + self.y**2 + self.z**2)


@dataclass
class Plane:
    """
    A plane defined by a point on the plane, `origin`, and a `normal` vector
    to the plane.
    """

    origin: Vec
    normal: Vec


@dataclass
class Cylinder:
    """
    A closed cylinder defined by start and end points along the center line
    and a radius
    """

    start: Vec
    end: Vec
    radius: float


def cross_product(a: Vec, b: Vec) -> Vec:
    return Vec(
        a.y * b.z - a.z * b.y,
        a.z * b.x - a.x * b.z,
        a.x * b.y - a.y * b.x,
    )


def dot_product(a: Vec, b: Vec) -> float:
    return a.x * b.x + a.y * b.y + a.z * b.z


def point_is_above_plane(point: Vec, plane: Plane) -> bool:
    """
    Returns True if `point` is above the plane — that is on the side of the
    plane which is in the direction of the plane `normal`.
    """
    # See https://math.stackexchange.com/a/2998886/78071
    return dot_product((point - plane.origin), plane.normal) > 0


# -- Tests --

def test_cylinder_contains_point():
    # Test cases constructed with help of Geogebra - https://www.geogebra.org/calculator/tnc3arfm
    cylinder = Cylinder(start=Vec(1, 0, 0), end=Vec(6.196, 3, 0), radius=0.5)

    # In the Z plane:
    assert cylinder_contains_point(cylinder, Vec(1.02, 0, 0))
    assert not cylinder_contains_point(cylinder, Vec(0.98, 0, 0))  # outside bottom plane
    assert cylinder_contains_point(cylinder, Vec(0.8, 0.4, 0))
    assert not cylinder_contains_point(cylinder, Vec(0.8, 0.5, 0))  # too far from center
    assert not cylinder_contains_point(cylinder, Vec(0.8, 0.3, 0))  # outside bottom plane
    assert cylinder_contains_point(cylinder, Vec(1.4, -0.3, 0))
    assert not cylinder_contains_point(cylinder, Vec(1.4, -0.4, 0))  # too far from center
    assert cylinder_contains_point(cylinder, Vec(6.2, 2.8, 0))
    assert not cylinder_contains_point(cylinder, Vec(6.2, 2.2, 0))  # too far from center
    assert not cylinder_contains_point(cylinder, Vec(6.2, 3.2, 0))  # outside top plane

    # Away from Z plane
    assert cylinder_contains_point(cylinder, Vec(1.02, 0, 0.2))
    assert not cylinder_contains_point(cylinder, Vec(1.02, 0, 1))  # too far from center
    assert not cylinder_contains_point(cylinder, Vec(0.8, 0.3, 2))  # too far from center, and outside bottom plane
Categories: FLOSS Project Planets

ImageX: Women in Drupal 2024: Exclusive Interview with Award Winner Alla Petrovska from Our Team

Planet Drupal - Thu, 2024-12-05 10:54

It’s a great blessing to work alongside outstanding women and an enormous joy to see their efforts and contributions recognized.

Categories: FLOSS Project Planets

EuroPython Society: EPS Board 2024-2025

Planet Python - Thu, 2024-12-05 10:53

We’re happy to announce our new board for the 2024-2025 term:

  • Anders Hammarquist
  • Aris Nivorils
  • Artur Czepiel (Chair)
  • Cyril Bitterich
  • Ege Akman
  • Mia Bajić (Vice Chair)
  • Shekhar Koirala

You can read more about them in their nomination post at https://www.europython-society.org/list-of-eps-board-candidates-for-2024-2025/. The minutes and the video recording of the General Assembly 2024 will be published soon.

Together, we will continue to serve the community and head off to the preparations for EuroPython 2025!

Categories: FLOSS Project Planets

Freelock Blog: Automatically tag articles

Planet Drupal - Thu, 2024-12-05 10:00

Today, another automation using the Drupal #AI module -- automatically tag your articles.

With the AI module, its AI Automators submodule, and a provider configured, you can add an automation to any field. With a Tag field on your content, you can edit the field definition and "Enable AI Automator". Give it a reasonable prompt, and it will tag your content for you.

Like most of the AI integrations, the great thing is you can easily edit the tags later if you want to highlight something specific, or if it comes up with something inappropriate.

Categories: FLOSS Project Planets

Drupal Association blog: One month until Drupal 7 End of Life on 5 January 2025!

Planet Drupal - Thu, 2024-12-05 10:00

As you’ve most likely heard already, Drupal 7's End of Life is fast approaching. Drupal 7 security support is ending one month from today on 5 January 2025 – fourteen years to the day that Drupal 7 was originally released! If you are still running Drupal 7 beyond this date, your website will be vulnerable to security risks and may face compatibility issues. 

With only one month left until Drupal 7 security support ends, now is the time to wrap up any final preparations to get your site ready in time for the end of life. While migrating from Drupal 7 may seem daunting, the Drupal Association is here to assure you that the process can be smooth from start to finish with sources like our Drupal 7 migration resource page. 

Hopefully, you have started your plan for life after Drupal 7, but if you are looking for direction, here are some paths you can take: 

Extended Security Support for Drupal 7

If you plan on keeping your site running on Drupal 7, check out our Drupal 7 Extended Security Support Program, if you haven’t already. 

Our D7 extended security support partners are ready to provide you with the support that you need!

Migration partners

On top of engaging an extended support partner, the Drupal Association has also created a list of certified migration partners for sites of all sizes. These partners can help you with your content strategy, audit your existing site, and help you through every step of the migration process to upgrade your site to modern Drupal.

Check out those who have already migrated!

Many folks have already migrated from Drupal 7, and you can find their stories on our case studies page.

Not migrating? Be sure to communicate with your users!

If migrating to a newer version or using extended support isn't feasible right now, it’s essential to notify your users of your security strategy. Make sure to inform your customers, managers, CISO, or other stakeholders about your plans for handling support and managing potential vulnerabilities. This transparency is important for maintaining trust and compliance!

Categories: FLOSS Project Planets

ComputerMinds.co.uk: DDEV, solr, and platform.sh

Planet Drupal - Thu, 2024-12-05 09:08

We use platform.sh to host many of our client Drupal sites, and many of those sites use Solr.

Platform.sh has great documentation for setting up Solr to work with Drupal, and it essentially boils down to something like this in a services.yaml file:

solr94:
  type: solr:9.4
  disk: 1024
  configuration:
    cores:
      main:
        conf_dir: !archive "solr_9.x_config/"
    endpoints:
      main:
        core: main

And then a directory of Solr config.

Platform.sh then gives you a nice PHP library that can wire up your configuration into Drupal's settings.php like this:

$platformsh->registerFormatter('drupal-solr', function($solr) {
  // Default the solr core name to `collection1` for pre-Solr-6.x instances.
  return [
    'core' => substr($solr['path'], 5) ? : 'collection1',
    'path' => '',
    'host' => $solr['host'],
    'port' => $solr['port'],
  ];
});

// The Solr relationship name (from .platform.app.yaml)
$relationship_name = 'solrsearch';
// The machine name of the server in your Drupal configuration.
$solr_server_name = 'solr_server';
if ($platformsh->hasRelationship($relationship_name)) {
  // Set the connector configuration to the appropriate value, as defined by the formatter above.
  $config['search_api.server.' . $solr_server_name]['backend_config']['connector_config'] = $platformsh->formattedCredentials($relationship_name, 'drupal-solr');
}

All nice and simple, and works really well in platform.sh etc.

DDEV

We wanted our DDEV based development environments to match the platform.sh approach as closely as possible, specifically we really didn't want to have two versions of the Solr config, and we didn't want to have one process for DDEV and another for platform.sh.

We are targeting Solr 9, so we are able to use the fantastic ddev/ddev-solr add-on that does the hard work of getting a Solr container running etc.

The add-on has the option of sending the Solr config (which Drupal can generate) to the Solr server, but then that config isn't the same as the config used by platform.sh. So we wanted to use the option where we create a core from an existing set of config on disk. To do this, we need a couple of things:

First, we make a 'placeholder' configset by creating a file at .ddev/solr/configsets/main/README.md with the following contents:

This directory will get mounted over the top of by the config from the .platform/solr_9.x_config directory at the root of this repository. See: docker-compose.cm-ddev-match-platform.yaml

And then we need a file .ddev/docker-compose.cm-ddev-match-platform.yaml with the following contents:

services:
  solr:
    volumes:
      # We sneakily mount our platform.sh config into the place that ddev-solr wants it.
      - ../.platform/solr_9.x_config:/mnt/ddev_config/solr/configsets/main

As the comment in the file says, this means that the same config is both used by platform.sh and the DDEV solr server for the main core.

Finally, we add a bit of settings.php config for developers running DDEV:

// Hardcode the settings for the Solr config to match our ddev config.
// The machine name of the server in your Drupal configuration: 'solr_server'.
$config['search_api.server.solr_server']['backend_config']['connector'] = 'basic_auth';
$config['search_api.server.solr_server']['backend_config']['connector_config']['host'] = 'solr';
$config['search_api.server.solr_server']['backend_config']['connector_config']['username'] = 'solr';
$config['search_api.server.solr_server']['backend_config']['connector_config']['password'] = 'SolrRocks';
$config['search_api.server.solr_server']['backend_config']['connector_config']['core'] = 'main';

And that's it! So simple, thanks DDEV!

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in November 2024

Planet Debian - Thu, 2024-12-05 07:47

Welcome to the November 2024 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security where relevant. As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Reproducible Builds mourns the passing of Lunar
  2. Introducing reproduce.debian.net
  3. New landing page design
  4. SBOMs for Python packages
  5. Debian updates
  6. Reproducible builds by default in Maven 4
  7. PyPI now supports digital attestations
  8. “Dependency Challenges in OSS Package Registries”
  9. Zig programming language demonstrated reproducible
  10. Website updates
  11. Upstream patches
  12. Misc development news
  13. Reproducibility testing framework
Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announced it has lost its founding member, Lunar. Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. He was the author of our earliest status reports and many of our key tools in use today are based on his design. Lunar’s creativity, insight and kindness were often noted.

You can view our full tribute elsewhere on our website. He will be greatly missed.


Introducing reproduce.debian.net

In happier news, this month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.

rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

In November, reproduce.debian.net began rebuilding Debian unstable on the amd64 architecture, but throughout the MiniDebConf, it had attempted to rebuild 66% of the official archive. From this, it could be determined that it is currently possible to bit-for-bit reproduce and corroborate approximately 78% of the actual binaries distributed by Debian — that is, using the .buildinfo files hosted by Debian itself.

reproduce.debian.net also contains instructions on how to set up one’s own rebuilderd instance, and we very much invite everyone with a machine to spare to set up their own version and share the results. Whilst rebuilderd is still in development, it has been used to reproduce Arch Linux since 2019. We are especially looking for installations targeting Debian architectures other than i386 and amd64.


New landing page design

As part of a very productive partnership with the Sovereign Tech Fund and Neighbourhoodie, we are pleased to unveil our new homepage/landing page.

We are very happy with our collaboration with both STF and Neighbourhoodie (including many changes not directly related to the website), and look forward to working with them in the future.

SBOMs for Python packages

The Python Software Foundation has announced a new “cross-functional project for SBOMs and Python packages”. Seth Michael Larson writes that the project is “specifically looking to solve these issues”:

  • Enable Python users that require SBOM documents (likely due to regulations like CRA or SSDF) to self-serve using existing SBOM generation tools.
  • Solve the “phantom dependency” problem, where non-Python software is bundled in Python packages but not recorded in any metadata. This makes the job of software composition analysis (SCA) tools difficult or impossible.
  • Make the adoption work by relevant projects such as build backends, auditwheel-esque tools, as minimal as possible. Empower users who are interested in having better SBOM data for the Python projects they are using to be able to contribute engineering time towards that goal.

A GitHub repository for the initiative is available, and there are a number of queries, comments and remarks on Seth’s Discourse forum post.


Debian updates

There was significant development within Debian this month. Firstly, at the recent MiniDebConf in Toulouse, France, Holger Levsen gave a Debian-specific talk on rebuilding packages distributed from ftp.debian.org — that is to say, how to reproduce the results from the official Debian build servers:

Holger described the talk as follows:

For more than ten years, the Reproducible Builds project has worked towards reproducible builds of many projects, and for ten years now we have build Debian packages twice—with maximal variations applied—to see if they can be build reproducible still.

Since about a month, we’ve also been rebuilding trying to exactly match the builds being distributed via ftp.debian.org. This talk will describe the setup and the lessons learned so far, and why the results currently are what they are (spoiler: they are less than 30% reproducible), and what we can do to fix that.

The Debian Project Leader, Andreas Tille, was present at the talk and remarked later in his Bits from the DPL update that:

It might be unfair to single out a specific talk from Toulouse, but I’d like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar’s contributions and legacy. Personally, I’ve encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work.

Holger’s slides and video in .webm format are available.


Next, rebuilderd is the server that monitors package repositories of Linux distributions and attempts to reproduce the observed results. This month, version 0.21.0 was released, most notably with improved support for binNMUs by Jochen Sprickerhof and an update of the rebuilderd-debian.sh integration to the latest debrebuild version by Holger Levsen. There has also been significant work to get the rebuilderd package into the Debian archive; in particular, both rust-rebuilderd-common version 0.20.0-1 and rust-rust-lzma version 0.6.0-1 were packaged by kpcyrd and uploaded by Holger Levsen.

Related to this, Holger Levsen submitted three additional issues against rebuilderd as well:

  • rebuildctl should be more verbose when encountering issues. []
  • Please add an option to used randomised queues. []
  • Scheduling and re-scheduling multiple packages at once. []

… and lastly, Jochen Sprickerhof submitted an issue requesting that rebuilderd download the source package in addition to the .buildinfo file [], and kpcyrd submitted and fixed an issue surrounding dependencies and clarifying the license [].


Separate to this, back in 2018, Chris Lamb filed a bug report against the sphinx-gallery package as it generates unreproducible content in various ways. This month, however, Dmitry Shachnev finally closed the bug, listing the multiple sub-issues that were part of the problem and how they were resolved.


Elsewhere, Roland Clobus posted to our mailing list this month, asking for input on a bug in Debian’s ca-certificates-java package. The issue is that the Java key management tools embed timestamps in its output, and this output ends up in the /etc/ssl/certs/java/cacerts file on the generated ISO images. A discussion resulted from Roland’s post suggesting some short- and medium-term solutions to the problem.


Holger Levsen uploaded some packages with reproducibility-related changes:


Lastly, 12 reviews of Debian packages were added, 5 were updated and 21 were removed this month adding to our knowledge about identified issues in Debian.


Reproducible builds by default in Maven 4

On our mailing list this month, Hervé Boutemy reported the latest release of Maven (4.0.0-beta-5) has reproducible builds enabled by default. In his mailing list post, Hervé mentions that “this story started during our Reproducible Builds summit in Hamburg”, where he created the upstream issue that builds on a “multi-year” effort to have Maven builds configured for reproducibility.


PyPI now supports digital attestations

Elsewhere in the Python ecosystem and as reported on LWN and elsewhere, the Python Package Index (PyPI) has announced that it has finalised support for PEP 740 (“Index support for digital attestations”).

Trail of Bits, who performed much of the development work, has an in-depth blog post about the work and its adoption, as well as what is left undone:

One thing is notably missing from all of this work: downstream verification. […]

This isn’t an acceptable end state (cryptographic attestations have defensive properties only insofar as they’re actually verified), so we’re looking into ways to bring verification to individual installing clients. In particular, we’re currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.

There was an in-depth discussion on LWN’s announcement page, as well as on Hacker News.


Dependency Challenges in OSS Package Registries

At BENEVOL, the Belgium-Netherlands Software Evolution workshop in Namur, Belgium, Tom Mens and Alexandre Decan presented their paper, “An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries”.

The abstract of their paper is as follows:

While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. []

A PDF of the paper is available online.


Zig programming language demonstrated reproducible

Motiejus Jakšty posted an interesting and practical blog post on his successful attempt to reproduce the Zig programming language without using the pre-compiled binaries checked into the repository, and despite the circular dependency inherent in its bootstrapping process.

As a summary, Motiejus concludes that:

I can now confidently say (and you can also check, you don’t need to trust me) that there is nothing hiding in zig1.wasm [the checked-in binary] that hasn’t been checked-in as a source file.

The full post is full of practical details, and includes a few open questions.


Website updates

Notwithstanding the significant change to the landing page (screenshot above), there were an enormous number of changes made to our website this month. This included:

  • Alex Feyerke and Mariano Giménez:

  • Bernhard M. Wiedemann:

    • Update the “System images” page to document the e2fsprogs approach. []
  • Chris Lamb:

  • FC (Fay) Stegerman:

    • Replace more inline markdown with HTML on the “Success stories” page. []
    • Add some links, fix some other links and correct some spelling errors on the “Tools” page. []
  • Holger Levsen:

    • Add a historical presentation (“Reproducible builds everywhere eg. in Debian, OpenWrt and LEDE”) from October 2016. []
    • Add jochensp and Oejet to the list of known contributors. [][]
  • Julia Krüger:

  • Ninette Adhikari & hulkoba:

    • Add/rework the list of success stories into a new page that clearly shows milestones in Reproducible Builds. [][][][][][]
  • Philip Rinn:

    • Import 47 historical weekly reports. []
  • hulkoba:

    • Add alt text to almost all images (!). [][]
    • Fix a number of links on the “Talks” page. [][]
    • Avoid so-called ‘ghost’ buttons by not using <button> elements as links, as the affordance of a <button> implies an action with (potentially) a side effect. [][]
    • Center the sponsor logos on the homepage. []
    • Move publications and generate them instead from a data.yml file with an improved layout. [][]

    • Make a large number of small but impactful stylistic changes. [][][][]

    • Expand the “Tools” page to include a number of missing tools, fix some styling issues and fix a number of stale/broken links. [][][][][][]


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Misc development news


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related changes:

    • Create and introduce a new reproduce.debian.net service and subdomain []
    • Make a large number of documentation changes relevant to rebuilderd. [][][][][]
    • Explain a temporary workaround for a specific issue in rebuilderd. []
    • Setup another rebuilderd instance on the o4 node and update installation documentation to match. [][]
    • Make a number of helpful/cosmetic changes to the interface, such as clarifying terms and adding links. [][][][][]
    • Deploy configuration to the /opt and /var directories. [][]
    • Add an infancy (or ‘alpha’) disclaimer. [][]
    • Add more notes to the temporary rebuilderd documentation. []
    • Commit an nginx configuration file for reproduce.debian.net’s “Stats” page. []
    • Commit a rebuilder-worker.conf configuration for the o5 node. []
  • Debian-related changes:

    • Grant jspricke and jochensp access to the o5 node. [][]
    • Build the qemu package with the nocheck build flag. []
  • Misc changes:

    • Adapt the update_jdn.sh script for new Debian trixie systems. []
    • Stop installing the PostgreSQL database engine on the o4 and o5 nodes. []
    • Prevent accidental reboots of the o4 node because of a long-running job owned by josch. [][]

In addition, Mattia Rizzolo addressed a number of issues with reproduce.debian.net [][][][]. And lastly, both Holger Levsen [][][][] and Vagrant Cascadian [][][][] performed node maintenance.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Categories: FLOSS Project Planets

Hardware Shenanigans

Planet KDE - Thu, 2024-12-05 07:00

(originally titled “On Dead Trees”)

There are features that you know are really important to some of our users, but that you frankly don’t care for much yourself. Printing is one such example. Recently, I actually had to print lots of paperwork, so I had a reason to fix some of my more pressing issues with our Print Manager.

Print jobs right at your finger tip

The biggest regression from the Plasma 4 days, when we moved from individual System Tray popups to a unified square view, was that Print Manager had to give up its two pane layout that showed the print queue directly in the popup. In order to view and cancel print jobs, you now had to select the printer and open its print queue window, and close it again after you’re done.

Unfortunately, with printer management, there are really two opposing use cases: a home computer with maybe a couple of printers, and the office use case of hundreds of remote printers across several buildings. Picking one side usually leaves the other one worse off. However, I did not want to spend too much time on this, so in order to fix my workflow, I simply added the list of print jobs in the expanded view. I then added a busy indicator to a printer when it’s printing to make it easier to find in the list.

CUPS error messages have never been very nice and with all that “driver-less” stuff the user experience seems to have become worse, spitting technical gibberish like “cfFilterChain: Ghost script (PID 123456)” at the user. While printing probably works better now, the overall feature set has definitely regressed for me. In order to accommodate status messages better, Print Manager now shows up to three lines of text, which is particularly important in case of a printer or network error.

Just pretend the pile of paper at the top of the laser printer is actually the printed pages

Another nice little touch from Plasma 4 was a dedicated laser printer icon. At home I have a black and white laser printer for printing documents and a color inkjet for printing pictures. It’s really nice being able to tell them apart at a glance. Therefore, I added a laser printer icon to Breeze as well. However, when I investigated how it worked, I found it just assumes every black and white printer to be a laser printer. Fair enough. You can ask CUPS for the “marker type”, e.g. toner or ink, and I hoped that I could use it to determine the printer type more accurately. Alas, since updating to the Ubuntu 24.04 base, none of my printers show ink levels anymore, not even after installing the official vendor drivers. Either way, ink status has been hit or miss for me under Linux for as long as I can remember, sometimes randomly working when talking to the printer over the network but not on the computer it was plugged into via USB and so on.

Next, while tinkering with printer settings, I noticed the nice little search box we have in our settings dialogs nowadays. Trying to find a certain option, I was surprised it didn’t highlight it, even though I clearly typed the exact name on the label. You see, controls in Qt can have “mnemonics” or “accelerators”: this is the underlined letter you typically see on a button that tells you what Alt+key combination to use to trigger it. The letter is prefixed with an ampersand (&) in the string, so “Pap&er size” shows as “Paper size” and will trigger on pressing Alt+E. KDE applications automatically assign a free accelerator to most widgets unless one is explicitly provided through the ampersand notation. The settings search did not account for this and subsequently failed to find the option.
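To illustrate the problem, here is a toy Python sketch (not the actual Qt or KDE code) of why comparing against the raw label text never matches what the user sees, and how stripping the accelerator markers first fixes the comparison:

def strip_accelerators(label: str) -> str:
    # "&&" is a literal ampersand; a single "&" only marks the accelerator key.
    return label.replace("&&", "\x00").replace("&", "").replace("\x00", "&")


assert strip_accelerators("Pap&er size") == "Paper size"
assert strip_accelerators("Copy && Paste") == "Copy & Paste"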

Leaving the subject of printers for now, I made a few minor improvements regarding batteries. One of my earliest contributions to Plasma’s power management system was actually a notification when a peripheral device, such as mouse or keyboard, runs low on battery. While the notification showed a dedicated icon for headsets (i.e. headphones with a microphone) it did not provide one for regular headphones, and neither did battery monitor, but it was an easy fix.

Unlike a mouse or keyboard, it’s probably not as bad that it may turn off at any time

Additionally, when switching output devices, a brief on screen display is shown. In case of Bluetooth devices, battery status is included alongside the device name, to quickly see when the headphones you just connected are almost out of juice. When I switched to PipeWire this stopped working, no battery percentage was shown. I didn’t fully understand how it works but with PulseAudio it probably has exclusive access to the Bluetooth device and is the one that has to read the battery information and provide it to others as audio device property. With PipeWire, I guess things are different, and I just get to read battery information over the regular BlueZ battery interface, so that’s what Plasma will consult before showing the device popup.

Finally, the Energy Information page now displays the number of charge cycles your laptop battery has experienced so far, if available, in addition to the capacity estimation (“battery health”) it already showed. The ability to query this information was added to Solid, KDE’s Hardware Abstraction Framework, and is supported by all of its backends. My trusty ThinkPad has over 700 charge cycles now and still reports 77% capacity left. I was still quite happy with its battery life during this year’s Akademy – admittedly I didn’t compile much during talks and had the screen brightness very low.

If you like what you saw and want to support the KDE Community and enable the good people behind it to create the best software possible, please consider donating to our Year End Fundraiser! KDE is funded mainly by you: our friends, users, and supporters. Thanks to your donations, we can deliver the best free and open software that respects your privacy and gives you control over your devices and digital life.

Categories: FLOSS Project Planets

PyCharm: How to Do Sentiment Analysis With Large Language Models

Planet Python - Thu, 2024-12-05 05:49

Sentiment analysis is a powerful tool for understanding emotions in text. While there are many ways to approach sentiment analysis, including more traditional lexicon-based and machine learning approaches, today we’ll be focusing on one of the most cutting-edge ways of working with text – large language models (LLMs). We’ll explain how you can use these powerful models to predict the sentiment expressed in a text.

As a practical tutorial, this post will introduce you to the types of LLMs most suited for sentiment analysis tasks and then show you how to choose the right model for your specific task.

We’ll cover using models that other people have fine-tuned for sentiment analysis and how to fine-tune one yourself. We’ll also look at some of the powerful tools and resources available that can help you work with these models easily, while demystifying what can feel like an overly complex and overwhelming topic.

To get the most out of this blog post, we’d recommend you have some experience training machine learning or deep learning models and be confident using Python. That said, you don’t necessarily need to have a background in large language models to enjoy it.

Let’s get started!

What are large language models?

Large language models are some of the latest and most powerful tools for solving natural language problems. In brief, they are generalist language models that can complete a range of natural language tasks, from named entity recognition to question answering. LLMs are based on the transformer architecture, a type of neural network that uses a mechanism called attention to represent complex and nuanced relationships between words in a piece of text. This design allows LLMs to accurately represent the information being conveyed in a piece of text.

The full transformer model architecture consists of two blocks. Encoder blocks are designed to receive text inputs and build a representation of them, creating a feature set based on the text corpus over which the model is trained. Decoder blocks take the features generated by the encoder and other inputs and attempt to generate a sequence based on these.

Transformer models can be divided up based on whether they contain encoder blocks, decoder blocks, or both.

  • Encoder-only models tend to be good at tasks requiring a detailed understanding of the input to do downstream tasks, like text classification and named entity recognition.
  • Decoder-only models are best for tasks such as text generation.
  • Encoder-decoder, or sequence-to-sequence models are mainly used for tasks that require the model to evaluate an input and generate a different output, such as translation. In fact, translation was the original task that transformer models were designed for!

This Hugging Face table (also featured below), which I took from their course on natural language processing, gives an overview of what each model tends to be strongest at.

After finishing this blog post and discovering what other natural language tasks you can perform with the Transformers library, I recommend the course if you’d like to learn more about LLMs. It strikes an excellent balance between accessibility and technical depth.

Model type       Examples                                     Tasks
Encoder-only     ALBERT, BERT, DistilBERT, ELECTRA, RoBERTa   Sentence classification, named entity recognition, extractive question answering
Decoder-only     CTRL, GPT, GPT-2, Transformer XL             Text generation
Encoder-decoder  BART, T5, Marian, mBART                      Summarization, translation, generative question answering

Sentiment analysis is usually treated as a text or sentence classification problem with LLMs, meaning that encoder-only models such as RoBERTa, BERT, and ELECTRA are most often used for this task. However, there are some exceptions. For example, the top scoring model for aspect-based sentiment analysis, InstructABSA, is based on a fine-tuned version of T5, an encoder-decoder model.

Using large language models for sentiment analysis

With all of the background out of the way, we can now get started with using LLMs to do sentiment analysis.

Install PyCharm to get started with sentiment analysis

We’ll use PyCharm Professional for this demo, but you can follow along with any other IDE that supports Python development.

PyCharm Professional is a powerful Python IDE for data science. It supports advanced Python code completion, inspections and debugging, rich databases, Jupyter, Git, Conda, and more right out of the box. You can try out great features such as our DataFrame Column Statistics and Chart View, as well as Hugging Face integrations, which make working with LLMs much simpler and faster.

If you’d like to follow along with this tutorial, you can activate your free three-month subscription to PyCharm using this special promo code: PCSA. Click on the link below, and enter the code. You’ll then receive an activation code through your email.

Activate your free three-month subscription

Import the required libraries

There are two parts to this tutorial: using an LLM that someone else has fine-tuned for sentiment analysis, and fine-tuning a model ourselves.

In order to run both parts of this tutorial, we need to import the following packages:

  • Transformers: As described, this will allow us to use fine-tuned LLMs for sentiment analysis and fine-tune our own models.
  • PyTorch, Tensorflow, or Flax: Transformers acts as a high-level interface for deep learning frameworks, reusing their functionality for building, training, and running neural networks. In order to actually work with LLMs using the Transformers package, you will need to install your choice of PyTorch, Tensorflow, or Flax. PyTorch supports the largest number of models of the three frameworks, so that’s the one we’ll use in this tutorial.
  • Datasets: This is another package from Hugging Face that allows you to easily work with the datasets hosted on Hugging Face Hub. We’ll need this package to get a dataset to fine-tune an LLM for sentiment analysis.

In order to fine-tune our own model, we also need to import these additional packages:

  • NumPy: NumPy allows us to work with arrays. We’ll need this to do some post-processing on the predictions generated by our LLM.
  • scikit-learn: This package contains a huge range of functionality for machine learning. We’ll use it to evaluate the performance of our model.
  • Evaluate: This is another package from Hugging Face. Evaluate adds a convenient interface for measuring the performance of models. It will give us an alternative way of measuring our model’s performance.
  • Accelerate: This final package from Hugging Face, Accelerate, takes care of distributed model training.

We can easily find and install these in PyCharm. Make sure you’re using a Python 3.7 or higher interpreter. For this demo, we’ll be using Python 3.11.7.
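For reference, this is roughly what the import section of the notebook looks like once everything is installed (assuming the PyTorch backend; the exact set of names imported from transformers depends on which parts of the tutorial you follow):

import numpy as np
import torch
import evaluate
from datasets import load_dataset
from sklearn.metrics import accuracy_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    pipeline,
)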

Pick the right model

The next step is picking the right model. Before we get into that, we need to cover some terminology.

LLMs are made up of two components: an architecture and a checkpoint. The architecture is like the blueprint of the model, and describes what will be contained in each layer and each operation that takes place within the model.

The checkpoint refers to the weights that will be used within each layer. Each of the pretrained models will use an architecture like T5 or GPT, and obtain the specific weights (the model checkpoint) by training the model over a huge corpus of text data.

Fine-tuning will adjust the weights in the checkpoint by retraining the last layer(s) on a dataset specialized in a certain task or domain. To make predictions (called inference), an architecture will load in the checkpoint and use this to process text inputs, and together this is called a model.
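To make this concrete, here is a small sketch of how an architecture class loads a checkpoint to form a usable model, using the Auto* classes from Transformers (the checkpoint shown is the one we use later in this tutorial):

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The checkpoint name identifies both the architecture (RoBERTa) and the trained weights.
checkpoint = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)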

If you’ve ever looked at the models available on Hugging Face, you might have been overwhelmed by the sheer number of them (even when we narrow them down to encoder-only models).

So, how do you know which one to use for sentiment analysis?

One useful place to start is the sentiment analysis page on Papers With Code. This page includes a very helpful overview of this task and a Benchmarks table that includes the top-performing models for each sentiment analysis benchmarking dataset. From this page, we can see that some of the commonly appearing models are those based on BERT and RoBERTa architectures.

While we may not be able to access these exact model checkpoints on Hugging Face (as not all of them will be uploaded there), it can give us a guide for what sorts of models might perform well at this task. Papers With Code also has similar pages for a range of other natural language tasks: If you search for the task in the upper left-hand corner of the site, you can navigate to these.

Now that we know what kinds of architectures are likely to do well for this problem, we can start searching for a specific model.

PyCharm has a built-in integration with Hugging Face that allows us to search for models directly. Simply right-click anywhere in your Jupyter notebook or Python script, and select Insert HF model. You’ll be presented with the following window:

You can see that we can find Hugging Face models either by the task type (which we can select from the menu on the left-hand side), by keyword search in the search box at the top of the window, or by a combination of both. Models are ranked by the number of likes by default, but we can also select models based on downloads or when the model was created or last modified.

When you use a model for a task, the checkpoint is downloaded and cached, making it faster the next time you need to use that model. You can see all of the models you’ve downloaded in the Hugging Face tool window.

Once we’ve downloaded the model, we can also look at its model card again by hovering over the model name in our Jupyter notebook or Python script. We can do the same thing with dataset cards.

Use a fine-tuned LLM for sentiment analysis

Let’s move on to how we can use a model that someone else has already fine-tuned for sentiment analysis.

As mentioned, sentiment analysis is usually treated as a text classification problem for LLMs.  This means that in our Hugging Face model selection window, we’ll select Text Classification, which can be found under Natural Language Processing on the left-hand side. To narrow the results down to sentiment analysis models, we’ll type “sentiment” in the search box in the upper left-hand corner.

We can see various fine-tuned models, and as expected from what we saw on the Papers With Code Benchmarks table, most of them use RoBERTa or BERT architectures. Let’s try out the top ranked model, Twitter-roBERTa-base for Sentiment Analysis.

You can see that after we select Use Model in the Hugging Face model selection window, code is automatically generated at the caret in our Jupyter notebook or Python script to allow us to start working with this model.

from transformers import pipeline

pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-sentiment-latest")

Before we can do inference with this model, we’ll need to modify this code.

The first thing we can check is whether we have a GPU available, which will make the model run faster. We’ll check for two types: NVIDIA GPUs, which support CUDA, and Apple GPUs, which support MPS.

import torch

print(f"CUDA available: {torch.cuda.is_available()}")
print(f"MPS available: {torch.backends.mps.is_available()}")

My computer supports MPS, so we can add a device argument to the pipeline and set it to "mps". If your computer supports CUDA, you can instead pass device=0.

from transformers import pipeline

pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-sentiment-latest", device="mps")
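
If you want the code to pick whichever accelerator is available without editing it by hand, here is a minimal sketch (not part of the generated code, and assuming a recent transformers version that accepts string device names):

import torch
from transformers import pipeline

# Pick the best available device: CUDA GPU, Apple MPS, or CPU as a fallback.
if torch.cuda.is_available():
    device = 0        # first CUDA GPU
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

pipe = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
    device=device,
)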

Finally, we can get the fine-tuned LLM to run inference over our example text.

result = pipe("I love PyCharm! It's my favorite Python IDE.")
result

[{'label': 'positive', 'score': 0.9914802312850952}]

You can see that this model predicts that the sentiment of the text is positive, with 99% probability.

Fine-tune your own LLM for sentiment analysis

The other way we can use LLMs for sentiment analysis is to fine-tune our own model.

You might wonder why you’d bother doing this, given the huge number of fine-tuned models that already exist on Hugging Face Hub. The main reason you might want to fine-tune a model is so that you can tailor it to your specific use case.

Most models are fine-tuned on public datasets, especially social media posts and movie reviews, and you might need your model to be more sensitive to your specific domain or use case.

Model fine-tuning can be quite a complex topic, so in this demonstration, I’ll explain how to do it at a more general level. However, if you want to understand this in more detail, you can read more about it in Hugging Face’s excellent NLP course, which I recommended earlier. In their tutorial, they explain in detail how to process data for fine-tuning models and two different approaches to fine-tuning: with the trainer API and without it.

To demonstrate how to fine-tune a model, we’ll use the SST-2 dataset, which is composed of single lines pulled from movie reviews that have been annotated as either negative or positive.

As mentioned earlier, BERT models consistently show up as top performers on the Papers With Code benchmarks, so we’ll fine-tune a BERT checkpoint.

We can again search for these models in PyCharm’s Hugging Face model selection window.

We can see that the most popular BERT model is bert-base-uncased. This is perfect for our use case, as this was also trained on lowercase text, so it will match the casing of our dataset.

We could have used the popular bert-large-uncased, but the base model has only 110 million parameters compared to BERT large, which has 340 million, so the base model is a bit friendlier for fine-tuning on a local machine.

If you still want to use a smaller model, you could also try this with a DistilBERT model, which has far fewer parameters but still preserves most of the performance of the original BERT models.
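
If you do opt for the smaller model, only the checkpoint name needs to change; the rest of the fine-tuning code in this post stays the same. Here is a minimal sketch, assuming the distilbert-base-uncased checkpoint on the Hugging Face Hub:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Swap in the smaller DistilBERT checkpoint; everything downstream is unchanged.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)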

Let’s start by reading in our dataset. We can do so using the load_dataset() function from the Datasets package. SST-2 is part of the GLUE dataset, which is designed to see how well a model can complete a range of natural language tasks.

from datasets import load_dataset

sst_2_raw = load_dataset("glue", "sst2")
sst_2_raw

DatasetDict({
    train: Dataset({
        features: ['sentence', 'label', 'idx'],
        num_rows: 67349
    })
    validation: Dataset({
        features: ['sentence', 'label', 'idx'],
        num_rows: 872
    })
    test: Dataset({
        features: ['sentence', 'label', 'idx'],
        num_rows: 1821
    })
})

This dataset has already been split into the train, validation, and test sets. We have 67,349 training examples – quite a modest number for fine-tuning such a large model.

Here’s an example from this dataset.

sst_2_raw["train"][1] {'sentence': 'contains no wit , only labored gags ', 'label': 0, 'idx': 1}

We can see what the labels mean by calling the features attribute on the training set.

sst_2_raw["train"].features {'sentence': Value(dtype='string', id=None), 'label': ClassLabel(names=['negative', 'positive'], id=None), 'idx': Value(dtype='int32', id=None)}

0 indicates a negative sentiment, and 1 indicates a positive one.

Let’s look at the number in each class:

print(f'Number of negative examples: {sst_2_raw["train"]["label"].count(0)}')
print(f'Number of positive examples: {sst_2_raw["train"]["label"].count(1)}')

Number of negative examples: 29780
Number of positive examples: 37569

The classes in our training data are somewhat imbalanced, but they aren’t excessively skewed.

We now need to tokenize our data, transforming the raw text into a form that our model can use. To do this, we need to use the same tokenizer that was used to train the bert-base-uncased model in the first place. The AutoTokenizer class will take care of all of the under-the-hood details for us.

from transformers import AutoTokenizer

checkpoint = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

Once we’ve loaded in the correct tokenizer, we can apply this to the training data.

tokenised_sentences = tokenizer(sst_2_raw["train"]["sentence"])

Finally, we need to add a function to pad our tokenized sentences. This will make sure all of the inputs in a training batch are the same length – text inputs are rarely the same length and models require a consistent number of features for each input.

from transformers import DataCollatorWithPadding

def tokenize_function(example):
    return tokenizer(example["sentence"])

tokenized_datasets = sst_2_raw.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

Now that we’ve prepared our dataset, we need to determine how well the model is fitting to the data as it trains. To do this, we need to decide which metrics to use to evaluate the model’s prediction performance.

As we’re dealing with a binary classification problem, we have a few choices of metrics, the most popular of which are accuracy, precision, recall, and the F1 score. In the “Evaluate the model” section, we’ll discuss the pros and cons of using each of these measures.

We have two ways of creating an evaluation function for our model. The first is to use the Evaluate package, which lets us load the evaluator defined for the SST-2 dataset, so the fine-tuning is scored with the metric that is standard for this task. In the case of SST-2, that metric is accuracy.

import evaluate
import numpy as np

def compute_metrics(eval_preds):
    metric = evaluate.load("glue", "sst2")
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

However, if we want to customize the metrics used, we can also create our own evaluation function. 

In this case, I’ve imported the accuracy, precision, recall, and F1 score metrics from scikit-learn. I’ve then created a function which takes in the predicted labels versus actual labels for each sentence and calculates the four required metrics. We’ll use this function, as it gives us a wider variety of metrics we can check our model performance against.

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
import numpy as np

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    return {
        'accuracy': accuracy_score(labels, predictions),
        'f1': f1_score(labels, predictions, average='macro'),
        'precision': precision_score(labels, predictions, average='macro'),
        'recall': recall_score(labels, predictions, average='macro')
    }

Now that we’ve done all of the setup, we’re ready to train the model. The first thing we need to do is define some parameters that will control the training process using the TrainingArguments class. We’ve only specified a few parameters here, but this class has an enormous number of possible arguments allowing you to calibrate your model training to a high degree of specificity.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sst2-bert-fine-tuning",
    eval_strategy="epoch",
    num_train_epochs=3
)

In our case, we’ve used the following arguments:

  • output_dir: The output directory where we want our model predictions and checkpoints saved.
  • eval_strategy="epoch": This ensures that the evaluation is performed at the end of each training epoch. Other possible values are “steps” (meaning that evaluation is done at regular step intervals) and “no” (meaning that evaluation is not done during training).
  • num_train_epochs=3: This sets the number of training epochs (or the number of times the training loop will repeat over all of the data). In this case, it’s set to train on the data three times.

The next step is to load in our pre-trained BERT model.

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

Some weights of BertForSequenceClassification were not initialized from the model checkpoint at google-bert/bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Let’s break this down step-by-step:

  • The AutoModelForSequenceClassification class does two things. First, it automatically identifies the appropriate model architecture from the Hugging Face model hub given the provided checkpoint string. In our case, this would be the BERT architecture. Second, it converts this model into one we can use for classification. It does this by discarding the weights in the model’s final layer(s) so that we can retrain these using our sentiment analysis dataset.
  • The from_pretrained() method loads in our selected checkpoint, which in this case is bert-base-uncased.
  • The argument num_labels=2 indicates that we have two classes to predict in our model: positive and negative.

We get a message telling us that some model weights were not initialized when we ran this code. This message is exactly the one we want – it tells us that the AutoModelForSequenceClassification class reset the final model weights in preparation for our fine-tuning.

The last step is to set up our Trainer object. This stage takes in the model, the training arguments, the train and validation datasets, our tokenizer and padding function, and our evaluation function. It uses all of these to train the weights for the head (or final layers) of the BERT model, evaluating the performance of the model after each epoch on the validation set.

from transformers import Trainer

trainer = Trainer(
    model,
    training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

We can now kick off the training. The Trainer class gives us a nice timer that tells us both the elapsed time and how much longer the training is estimated to take. We can also see the metrics after each epoch, as we requested when creating the TrainingArguments.

trainer.train()

Evaluate the model

Classification metrics

Before we have a look at how our model performed, let’s first discuss the evaluation metrics we used in more detail (a small worked example follows this list):

  • Accuracy: As mentioned, this is the default evaluation metric for the SST-2 dataset. Accuracy is the simplest metric for evaluating classification models, being the ratio of correct predictions to all predictions. Accuracy is a good choice when the target classes are well balanced, meaning each class has an approximately equal number of instances.
  • Precision: Precision is the ratio of correctly predicted positive observations to all predicted positives. It matters when the cost of a false positive is high. For example, in spam detection, you would rather miss a spam email (false negative) than have non-spam emails land in your spam folder (false positive).
  • Recall (also known as sensitivity): Recall is the ratio of correctly predicted positive observations to all actual positives. It matters when the cost of a false negative is high, that is, when incorrectly classifying a positive instance as negative is costly. For example, in disease diagnosis, you would rather have false alarms (false positives) than miss someone who is actually ill (false negatives).
  • F1-score: The F1-score is the harmonic mean of precision and recall, balancing the two measures. It is a more reliable metric than accuracy when dealing with imbalanced classes.
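
To make these definitions concrete, here is a small, self-contained example (not part of the original tutorial) that computes all four metrics on a toy set of predictions with scikit-learn:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy labels: 1 = positive, 0 = negative.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# Accuracy: 6 of the 8 predictions are correct -> 0.75.
print(accuracy_score(y_true, y_pred))
# Precision: of the 5 predicted positives, 4 are truly positive -> 0.8.
print(precision_score(y_true, y_pred))
# Recall: of the 5 actual positives, 4 were found -> 0.8.
print(recall_score(y_true, y_pred))
# F1 score: harmonic mean of precision and recall -> 0.8.
print(f1_score(y_true, y_pred))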

In our case, we had slightly imbalanced classes, so it’s a good idea to check both accuracy and the F1 score. If they differ, the F1 score is likely to be more trustworthy. However, if they are roughly the same, it is nice to be able to use accuracy, as it is easily interpretable.

Knowing whether your model is better at predicting one class versus the other is also useful. Depending on your application, capturing all customers who are unhappy with your service may be more important, even if that means occasionally flagging a happy customer by mistake (false positives). In this case, a model with high recall would be a priority over high precision.

Model predictions

Now that we’ve trained our model, we need to evaluate it. Normally, we would use the test set to get a final, unbiased evaluation, but the SST-2 test set does not have labels, so we cannot use it for evaluation. In this case, we’ll use the validation set metrics for our final evaluation. We can do this using the following code:

trainer.evaluate(eval_dataset=tokenized_datasets["validation"])

{'eval_loss': 0.4223457872867584,
 'eval_accuracy': 0.9071100917431193,
 'eval_f1': 0.9070209502998072,
 'eval_precision': 0.9074841225920363,
 'eval_recall': 0.9068472678285763,
 'eval_runtime': 3.9341,
 'eval_samples_per_second': 221.649,
 'eval_steps_per_second': 27.706,
 'epoch': 3.0}

We see that the model achieves roughly 90% accuracy on the validation set, comparable to other BERT models trained on SST-2. If we wanted to improve our model performance, we could investigate a few things:

  • Check whether the model is overfitting: While small by LLM standards, the BERT model we used for fine-tuning is still very large, and our training set was quite modest. In such cases, overfitting is quite common. To check this, we should compare our validation set metrics with our training set metrics (see the sketch after this list). If the training set metrics are much higher than the validation set metrics, then we have overfit the model. You can adjust a range of parameters during model training to help mitigate this.
  • Train on more epochs: In this example, we only trained the model for three epochs. If the model is not overfitting, continuing to train it for longer may improve its performance.
  • Check where the model has misclassified: We could dig into where the model is classifying correctly and incorrectly to see if we could spot a pattern. This may allow us to spot any issues with ambiguous cases or mislabelled data. Perhaps the fact this is a binary classification problem with no label for “neutral” sentiment means there is a subset of sentences that the model cannot properly classify.
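
As a rough sketch of the first check above (comparing training and validation metrics), we can run the same evaluation over a slice of the training split; evaluating the full training set works too, it just takes longer:

# Hedged sketch: compare training-set metrics with the validation metrics above.
train_subset = tokenized_datasets["train"].shuffle(seed=42).select(range(2000))
train_metrics = trainer.evaluate(eval_dataset=train_subset)
print(train_metrics)
# If these scores are much higher than the validation scores, the model is overfitting.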

To finish our section on evaluating this model, let’s see how it goes with our test sentence. We’ll pass our fine-tuned model and tokenizer to a TextClassificationPipeline, then pass our sentence to this pipeline:

from transformers import TextClassificationPipeline

pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
predictions = pipeline("I love PyCharm! It's my favourite Python IDE.")
print(predictions)

[[{'label': 'LABEL_0', 'score': 0.0006891043740324676}, {'label': 'LABEL_1', 'score': 0.9993108510971069}]]

Our model assigns LABEL_0 (negative) a probability of 0.0007 and LABEL_1 (positive) a probability of 0.999, indicating that it predicts the sentence has a positive sentiment with over 99% certainty. This result is similar to the one we got from the fine-tuned RoBERTa model we used earlier in the post.
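
If you would rather see "negative" and "positive" instead of LABEL_0 and LABEL_1, one optional tweak (not part of the original code) is to set the label mappings on the model config before building the pipeline:

# Optional: give the classes human-readable names before building the pipeline.
model.config.id2label = {0: "negative", 1: "positive"}
model.config.label2id = {"negative": 0, "positive": 1}

pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
print(pipeline("I love PyCharm! It's my favourite Python IDE."))
# [[{'label': 'negative', 'score': ...}, {'label': 'positive', 'score': ...}]]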

Sentiment analysis benchmarks

Instead of evaluating the model on only the dataset it was trained on, we could also assess it on other datasets.

As you can see from the Papers With Code benchmarking table, you can use a wide variety of labeled datasets to assess the performance of your sentiment classifiers. These datasets include the SST-5 fine-grained classification, IMDB dataset, Yelp binary and fine-grained classification, Amazon review polarity, TweetEval, and the SemEval Aspect-based sentiment analysis dataset.

When evaluating your model, the main thing is to ensure that the datasets represent your problem domain.

Most of the benchmarking datasets contain either reviews or social media texts, so if your problem is in one of these domains, you may find an existing benchmark that mirrors your business domain closely enough. However, if you are applying sentiment analysis to a more specialized problem, it may be necessary to create your own benchmarks to ensure your model can generalize to your problem domain properly.

Since there are multiple ways of measuring sentiment, it’s also necessary to make sure that any benchmarks you use to assess your model have the same target as the dataset you trained your model on.

For example, it wouldn’t be a fair measure of a model’s performance to fine-tune it on the SST-2 with a binary target, and then test it on the SST-5. As the model has never seen the very positive, very negative, and neutral categories, it will not be able to accurately predict texts with these labels and hence will perform poorly.
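
As a concrete sketch of a fair cross-benchmark check, the IMDB dataset uses the same binary positive/negative target as SST-2, so our fine-tuned pipeline can be spot-checked on a few of its reviews. This assumes the "imdb" dataset on the Hugging Face Hub; IMDB reviews are much longer than SST-2 sentences, so we truncate them to the model’s maximum input length:

from datasets import load_dataset

# Hedged sketch: spot-check the fine-tuned model on a handful of IMDB reviews.
imdb_sample = load_dataset("imdb", split="test").shuffle(seed=42).select(range(5))
id2label = {0: "negative", 1: "positive"}

for example in imdb_sample:
    # Truncate long reviews to the model's maximum input length.
    prediction = pipeline(example["text"], truncation=True, max_length=512)
    print(f"true: {id2label[example['label']]}, predicted: {prediction}")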

Wrapping up

In this blog post, we saw how LLMs can be a powerful way of classifying the sentiment expressed in a piece of text and took a hands-on approach to fine-tuning an LLM for this purpose.

We saw how understanding which types of models are best suited to sentiment analysis, and being able to see the top-performing models on different benchmarks through resources like Papers With Code, can help you narrow down your options for which models to use.

We also learned how Hugging Face’s powerful tooling for these models, and its integration into PyCharm, makes using LLMs for sentiment analysis approachable for anyone with a background in machine learning.

If you’d like to continue learning about large language models, check out our guest blog post by Dido Grigorov, who explains how to build a chatbot using the LangChain package.

Get started with sentiment analysis with PyCharm today

If you’re ready to get started on your own sentiment analysis project, you can activate a free three-month subscription to PyCharm. Click on the link below and enter the promo code PCSA. You’ll then receive an activation code by email.

Activate your free three-month subscription
Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar day 5 - Blog Recipe

Planet Drupal - Thu, 2024-12-05 04:00
Drupal Advent Calendar day 5 - Blog Recipe james Thu, 12/05/2024 - 09:00

In the early days of Drupal, it was a popular blogging platform. Nowadays, while it is rare to use Drupal for a pure blog site, it is still quite common for Drupal sites to include a blog. There even used to be a dedicated blog module in Drupal, but it was largely superseded by Drupal’s core functionality.

The ‘Blog’ recipe for Drupal Starshot is designed to facilitate the creation and management of blog posts on a website. It will create the ‘Blog’ content type, equipped with the necessary fields and features that enable content creators to produce rich, informative, and engaging blog entries…

Categories: FLOSS Project Planets

KStars v3.7.4 is Released

Planet KDE - Thu, 2024-12-05 01:04

KStars v3.7.4 is released on 2024.12.05 for Windows, MacOS & Linux. It's a bi-monthly bug-fix release with a couple of exciting features.

Imaging Planner

Hy Murveit added a brand new Imaging Planner in KStars to facilitate imaging.

The Imaging Planner tool helps users choose which objects to image. Users can download catalogs of recommended objects, or possibly create and share their own catalogs. The tool computes when the objects in a read-in catalog may be imaged on the selected night given constraints such as minimum altitude, terrain and moon separation.

It can sort the objects along several different dimensions, including the number of hours an object may be imaged tonight (given the user’s geography, constraints, and possibly artificial horizon), its peak altitude, distance from the moon, constellation, name, and type. Objects can also be filtered out for several reasons (e.g. type of object, whether it was previously imaged, keywords the user has added, whether the object has been selected, user not interested, etc.).

This tool helps users research the objects by showing small images of the objects, showing the objects' sky locations on the skymap, and by providing links to follow to internet sites with more information and images. It allows users to attach notes and links to objects, and select certain of them for further consideration. This tool can be used in conjunction with the Ekos imager or any other imaging tool. It does not currently directly interact with the actual imager; it only helps the user decide what to image.

Simbad Integration with FITSViewer

John Evans added a new, experimental feature to the FITSViewer that allows the user to dynamically query the SIMBAD astronomical database and highlight the results on the image in the FITSViewer. The user draws a circle on the image, and the objects within that circle are then displayed in a table and on the image.
It is possible to filter by object type and click through to the Simbad / CDS or NED websites for more information about the objects.

This is an interesting tool to see what is in your image, be it a subframe whilst you are imaging or a completed image that you have reloaded into the FITSViewer.

In order to use the feature you will need an internet connection to access the online Simbad database, and the image must have WCS enabled within the FITSViewer. For the most accurate results, plate solve the image with the built-in FITSViewer plate solver. The feature is controlled by a toggle in the FITSViewer options.
New Focus Measures

John Evans introduced a new contrast-based focusing algorithm suited for solar and planetary imaging.

Four new focus measures have been added to the Focus Module to complement the existing measures of HFR, FWHM, etc.

  • StdDev: Conceptually similar to the Fourier algorithm but simpler, this uses the standard deviation of the pixels in the image as the measure of focus. It can be used on star fields.
  • Contrast-based measures: These use algorithms found in other areas of image processing and take the contrast of texture in the image, in various ways, as the measure of focus. The following measures are available:
    • Sobel
    • Laplacian
    • Canny

These measures require some form of extended object in the image, so they will not work on star fields. They are intended for solar, lunar, and planetary focusing.

These algorithms can be used on the whole image, with the existing mask features, or with a user-defined region of interest that is used in single-star mode for star-based focus measures.

This new feature requires the OpenCV library to be installed (a standard installation is fine). The library is not installed by default with KStars, so anyone wishing to use these measures will need to install OpenCV first and then rebuild KStars on their system. It will not be available with pre-built executables.
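
The underlying idea of a contrast-based measure is easy to illustrate outside of KStars: a sharper image has stronger local gradients, so a simple proxy for focus is the variance of a Laplacian-filtered image. The following Python/OpenCV snippet is a conceptual sketch of that idea only, not the KStars implementation:

import cv2

def laplacian_focus_measure(path: str) -> float:
    """Return a focus score; higher values indicate a sharper image."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # The variance of the Laplacian grows as edges and texture get crisper.
    return cv2.Laplacian(image, cv2.CV_64F).var()

# Example: compare two exposures taken at different focuser positions
# (the file names here are placeholders).
# print(laplacian_focus_measure("frame_at_position_1.png"))
# print(laplacian_focus_measure("frame_at_position_2.png"))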

Categories: FLOSS Project Planets

Russ Allbery: Review: Paladin's Hope

Planet Debian - Wed, 2024-12-04 22:56

Review: Paladin's Hope, by T. Kingfisher

Series: The Saint of Steel #3
Publisher: Red Wombat Studio
Copyright: 2021
ISBN: 1-61450-613-2
Format: Kindle
Pages: 303

Paladin's Hope is a fantasy romance novel and the third book of The Saint of Steel series. Each book of that series features different protagonists in closer to the romance series style than the fantasy series style and stands alone reasonably well. There are a few spoilers for the previous books here, so you probably want to read the series in order.

Galen is one of the former paladins of the Saint of Steel, left bereft and then adopted by the Temple of the Rat after their god dies. Even more than the paladin protagonists of the previous two books, he reacted very badly to that death and has ongoing problems with nightmares and going into berserker rages when awakened. As the book opens, he's the escort for a lich-doctor named Piper who is examining a corpse found in the river.

The last of the five was the only one who did not share a certain martial quality. He was slim and well-groomed and would be considered handsome, but he was also extraordinarily pale, as if he lived his life underground.

It was this fifth man who nudged the corpse with the toe of his boot and said, "Well, if you want my professional opinion, this great goddamn hole in his chest is probably what killed him."

As it turns out, slim and well-groomed and exceedingly pale is Galen's type.

This is another paladin romance, this time between two men. It's almost all romance; the plot is barely worth mentioning. About half of the book is an exploration of a puzzle dungeon of the sort that might be fun in a video game or tabletop RPG, but that I found rather boring and monotonous in a novel. This creates a lot more room for the yearning and angst.

Kingfisher tends towards slow-burn romances. This romance is a somewhat faster burn than some of her other books, but instead implodes into one of the most egregiously stupid third-act break-ups that I've read in a romance plot. Of all the Kingfisher paladin books, I think this one was hurt the most by my basic difference in taste from the author. Kingfisher finds constant worrying and despair over being good enough for the romantic partner to be an enjoyable element, and I find it incredibly annoying. I think your enjoyment of this book will heavily depend on where you fall on that taste divide.

The saving grace of this book are the gnoles, who are by far the best part of this world. Earstripe, a gnole constable, is the one who found the body that the book opens with and he drives most of the plot, such that it is. He's also the source of the best banter in the book, which is full of pointed and amused gnole observations about humans and their various stupidities. Given that I was also grumbling about human stupidities for most of the book, the gnole viewpoint and I got along rather well.

"God's stripes." Earstripe shook his head in disbelief. "Bone-doctor would save some gnole, yes? If some gnole was hurt."

"Of course," said Piper. "If I could."

"And tomato-man would save some gnole?" He swung his muzzle toward Galen. "If some gnome needed big human with sword?"

"Yes, of course."

Earstripe spread his hands, claws gleaming. "A gnole saves some human. Same thing." He took a deep breath, clearly choosing his words carefully. "A gnole's compassion does not require fur."

We learn a great deal more about gnole culture, all of which I found fascinating, and we get a rather satisfying amount of gnole acerbic commentary. Kingfisher is very good at banter, and dialogue in general, which also smoothes over the paucity of detailed plot. There was no salvaging the romance, at least for me, but I did at least like Piper, and Galen wasn't too bad when he wasn't being annoyingly self-destructive.

I had been wondering a little if gay romance would, like sapphic romance, avoid my dislike of heterosexual gender roles. I think the jury is still out, but it did not work in this book because Galen is so committed to being the self-sacrificing protector who is unable to talk about his feelings that he single-handedly introduced a bunch of annoying pieces of the male gender role anyway. I will have to try that experiment with a book that doesn't involve hard-headed paladins.

I have yet to read a bad T. Kingfisher novel, but I thought this one was on the weaker side. The gnoles are great and kept me reading, but I wish there had been a more robust plot, a lot less of the romance, and no third-act break-up. As is, I recommend the other Saint of Steel books over this one. Ah well.

Followed by Paladin's Faith.

Rating: 6 out of 10

Categories: FLOSS Project Planets

Dirk Eddelbuettel: corels 0.0.5 on CRAN: Maintenance

Planet Debian - Wed, 2024-12-04 18:13

An updated version of the corels package is now on CRAN! The ‘Certifiably Optimal RulE ListS (Corels)’ learner provides interpretable decision rules with an optimality guarantee—a nice feature which sets it apart in machine learning. You can learn more about corels at its UBC site.

The changes mostly concern maintenance, both at the repository level (such as the continuous integration setup, badges, documentation links, …) and at the package level (such as removing the no-longer-required C++ compilation standard setter, which now triggers a NOTE at CRAN).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Hacked and tis the season for surgeries

Planet KDE - Wed, 2024-12-04 12:30

I am still here. Sadly while I battle this insane infection from my broken arm I got back in July, the hackers got my blog. I am slowly building it back up. Further bad news is I have more surgeries, first one tomorrow. Furthering my current struggles I cannot start my job search due to hospitalization and recovery. Please consider a donation. https://gofund.me/6e99345d

On the open source work front, I am still working on stuff, mostly snaps ( Apps 24.08.3 released )

Thank you everyone that voted me into the Ubuntu Community Council!

I am trying to stay positive, but it seems I can’t catch a break. I will have my computer in the hospital and will work on what I can. Have a blessed day and see you soon.

Scarlett

Categories: FLOSS Project Planets

Scarlett Gately Moore: Hacked and tis the season for surgeries

Planet Debian - Wed, 2024-12-04 12:30

I am still here. Sadly while I battle this insane infection from my broken arm I got back in July, the hackers got my blog. I am slowly building it back up. Further bad news is I have more surgeries, first one tomorrow. Furthering my current struggles I cannot start my job search due to hospitalization and recovery. Please consider a donation. https://gofund.me/6e99345d

On the open source work front, I am still working on stuff, mostly snaps ( Apps 24.08.3 released )

Thank you everyone that voted me into the Ubuntu Community Council!

I am trying to stay positive, but it seems I can’t catch a break. I will have my computer in the hospital and will work on what I can. Have a blessed day and see you soon.

Scarlett

Categories: FLOSS Project Planets

Sven Hoexter: Looking at x509 Certificate Chains

Planet Debian - Wed, 2024-12-04 12:23

Sometimes you have to look at the content of x509 certificate chains. Usually one finds them PEM-encoded and concatenated in a text file.

Since the openssl x509 subcommand only decodes the first certificate it will find in a file, I did something like this:

csplit -z -f 'cert' fullchain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
for x in cert*; do openssl x509 -in $x -noout -text; done

Apparently that's the "wrong" way, and the more appropriate way is to use the openssl crl2pkcs7 subcommand, even though we are not trying to parse a revocation list here.

openssl crl2pkcs7 -nocrl -certfile fullchain.pem | \
    openssl pkcs7 -print_certs -noout

I learned that one in a webinar presented by Victor Dukhovni. If you're new to the topic, it's worth watching.

Categories: FLOSS Project Planets
