Feeds

PyBites: Adventures in Import-land, Part II

Planet Python - Mon, 2024-04-08 14:15

“KeyError: 'GOOGLE_APPLICATION_CREDENTIALS'”

It was way too early in the morning for this error. See if you can spot the problem. I hadn’t had my coffee before trying to debug the code I’d written the night before, so it will probably take you less time than it did me.

app.py:

from dotenv import load_dotenv
from file_handling import initialize_constants

load_dotenv()

#...

file_handling.py:

import os

from google.cloud import storage

UPLOAD_FOLDER = None
DOWNLOAD_FOLDER = None


def initialize_cloud_storage():
    """
    Initializes the Google Cloud Storage client.
    """
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
    storage_client = storage.Client()
    bucket_name = #redacted
    return storage_client.bucket(bucket_name)


def set_upload_folder():
    """
    Determines the environment and sets the path to the upload folder accordingly.
    """
    if os.environ.get("FLASK_ENV") in ["production", "staging"]:
        UPLOAD_FOLDER = os.path.join("/tmp", "upload")
        os.makedirs(UPLOAD_FOLDER, exist_ok=True)
    else:
        UPLOAD_FOLDER = os.path.join("src", "upload_folder")
    return UPLOAD_FOLDER


def initialize_constants():
    """
    Initializes the global constants for the application.
    """
    UPLOAD_FOLDER = initialize_upload_folder()
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    return UPLOAD_FOLDER, DOWNLOAD_FOLDER


DOWNLOAD_FOLDER = initialize_cloud_storage()


def write_to_gcs(content: str, file: str):
    "Writes a text file to a Google Cloud Storage file."
    blob = DOWNLOAD_FOLDER.blob(file)
    blob.upload_from_string(content, content_type="text/plain")


def upload_file_to_gcs(file_path: str, gcs_file: str):
    "Uploads a file to a Google Cloud Storage bucket"
    blob = DOWNLOAD_FOLDER.blob(gcs_file)
    with open(file_path, "rb") as f:
        blob.upload_from_file(f, content_type="application/octet-stream")

See the problem?

This was just the discussion of a recent Pybites article.

When app.py imported initialize_constants from file_handling, the Python interpreter ran

DOWNLOAD_FOLDER = initialize_cloud_storage()

and looked for GOOGLE_APPLICATION_CREDENTIALS in the environment, but load_dotenv hadn’t yet loaded it from the .env file into the environment.

Typically, configuration variables, secret keys, and passwords are stored in a file called .env and then read as environment variables, rather than as plain text, using a package such as python-dotenv, which is what is being used here.
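As a quick illustration of that pattern (a minimal sketch, assuming a .env file in the working directory that contains a GOOGLE_APPLICATION_CREDENTIALS entry):

# Minimal sketch of the python-dotenv pattern described above.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env and copies its entries into os.environ

# Raises KeyError if load_dotenv ran too late, or the key is missing from .env
credentials_path = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]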

So, I had a few options.

I could call load_dotenv before importing from file_handling:

from dotenv import load_dotenv

load_dotenv()

from file_handling import initialize_constants

But that’s not very Pythonic.

I could call initialize_cloud_storage inside both upload_file_to_gcs and write_to_gcs:

def write_to_gcs(content: str, file: str):
    "Writes a text file to a Google Cloud Storage file."
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    blob = DOWNLOAD_FOLDER.blob(file)
    blob.upload_from_string(content, content_type="text/plain")


def upload_file_to_gcs(file_path: str, gcs_file: str):
    "Uploads a file to a Google Cloud Storage bucket"
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    blob = DOWNLOAD_FOLDER.blob(gcs_file)
    with open(file_path, "rb") as f:
        blob.upload_from_file(f, content_type="application/octet-stream")

But this violates the DRY principle. Plus we really shouldn’t be initializing the storage client multiple times. In fact, we already are initializing it twice in the way the code was originally written.

Going Global

So what about this?

DOWNLOAD_FOLDER = None


def initialize_constants():
    """
    Initializes the global constants for the application.
    """
    global DOWNLOAD_FOLDER
    UPLOAD_FOLDER = initialize_upload_folder()
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    return UPLOAD_FOLDER, DOWNLOAD_FOLDER

Here, we are defining DOWNLOAD_FOLDER as having global scope.

This will work here, because upload_file_to_gcs and write_to_gcs are in the same module. But if they were in a different module, it would break.

Why does it matter?

Well, let’s go back to how Python handles imports. Remember that Python runs any code outside of a function or class at import time. That applies to variable (or constant) assignment as well. So if upload_file_to_gcs and write_to_gcs were in another module and imported DOWNLOAD_FOLDER from file_handling, they would import it while it was still assigned the value None. It wouldn’t matter that it was reassigned to a real bucket later; inside that other module, the imported name would still refer to None.
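Here is a tiny, self-contained illustration of that import-time binding, using two hypothetical modules:

# settings.py (hypothetical)
VALUE = None

def initialize():
    global VALUE
    VALUE = "ready"

# consumer.py (hypothetical)
from settings import VALUE, initialize

initialize()
print(VALUE)  # still prints None: VALUE was bound to the object it pointed at when the import ran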

What would be necessary in this situation would be another function called get_download_folder.

def get_download_folder():
    """
    Returns the current value of the Google Cloud Storage bucket
    """
    return DOWNLOAD_FOLDER

Then, in this other module containing the upload_file_to_gcs and write_to_gcs functions, I would import get_download_folder instead of DOWNLOAD_FOLDER. By importing get_download_folder, you get the value of DOWNLOAD_FOLDER after it has been assigned an actual value, because get_download_folder won’t run until you explicitly call it, which presumably won’t be until after you’ve let initialize_cloud_storage do its thing.
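In that other module, the accessor approach could look roughly like this (a sketch; the module name is hypothetical):

# another_module.py (hypothetical), using the accessor instead of importing the constant
from file_handling import get_download_folder


def write_to_gcs(content: str, file: str):
    "Writes a text file to a Google Cloud Storage file."
    bucket = get_download_folder()  # looked up at call time, after initialize_constants has run
    blob = bucket.blob(file)
    blob.upload_from_string(content, content_type="text/plain")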

I have another part of my codebase where I have done this. On my site, I have a tool that helps authors create finetunes of GPT 3.5 from their books. This Finetuner is BYOK, or ‘bring your own key’, meaning that users supply their own OpenAI API key to use the tool. I chose this route because charging authors to fine-tune a model and then charging them to use it, forever, is just not something that benefits either of us. This way, they can take their finetuned model and use it in any of the multiple other BYOK AI writing tools that are out there, and I don’t have to maintain writing software on top of everything else. So the webapp’s form accepts the user’s API key, and after a valid form submit, starts a thread of my Finetuner application.

This application starts in the training_management.py module, which imports set_client and get_client from openai_client.py and passes the user’s API key to set_client right away. I can’t import client directly, because client is None until set_client has been passed the API key, which happens after import.

from openai import OpenAI

client = None


def set_client(api_key: str):
    """
    Initializes OpenAI API client with user API key
    """
    global client
    client = OpenAI(api_key=api_key)


def get_client():
    """
    Returns the initialized OpenAI client
    """
    return client

When the function that starts a fine-tuning job runs, it calls get_client to retrieve the initialized client. And by moving the API client initialization into another module, it becomes available to be used for an AI-powered chunking algorithm I’m working on. Nothing amazing. Basically, just generating scene beats from each chapter to use as the prompt, with the actual chapter as the response. It needs work still, but it’s available for authors who want to try it.
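For illustration, that retrieval step might look something like this (a sketch only, not the actual Finetuner code; the function name, file ID handling and model name are assumptions):

# Sketch only -- illustrative of the pattern, not the actual Finetuner code.
from openai_client import get_client, set_client


def start_finetune(api_key: str, training_file_id: str):
    set_client(api_key)    # happens after import, once the user's key is known
    client = get_client()  # returns the client initialized above
    return client.fine_tuning.jobs.create(
        training_file=training_file_id,
        model="gpt-3.5-turbo",
    )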

A Class Act

Now, we could go one step further from here. The code we’ve settled on so far relies on global names. Perhaps we can get away with this. DOWNLOAD_FOLDER is a constant. Well, sort of. Remember, it’s defined by initializing a connection to a cloud storage container. It’s actually a class instance. By rights, we should be encapsulating all of this logic inside another class.

So what could that look like? Well, it should initialize the upload and download folders, and expose them as properties, and then use the functions write_to_gcs and upload_file_to_gcs as methods like this:

class FileStorageHandler:
    def __init__(self):
        self._upload_folder = self._set_upload_folder()
        self._download_folder = self._initialize_cloud_storage()

    @property
    def upload_folder(self):
        return self._upload_folder

    @property
    def download_folder(self):
        return self._download_folder

    def _initialize_cloud_storage(self):
        """
        Initializes the Google Cloud Storage client.
        """
        os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
        storage_client = storage.Client()
        bucket_name = #redacted
        return storage_client.bucket(bucket_name)

    def _set_upload_folder(self):
        """
        Determines the environment and sets the path to the upload folder accordingly.
        """
        if os.environ.get("FLASK_ENV") in ["production", "staging"]:
            upload_folder = os.path.join("/tmp", "upload")
            os.makedirs(upload_folder, exist_ok=True)
        else:
            upload_folder = os.path.join("src", "upload_folder")
        return upload_folder

    def write_to_gcs(self, content: str, file_name: str):
        """
        Writes a text file to a Google Cloud Storage file.
        """
        blob = self._download_folder.blob(file_name)
        blob.upload_from_string(content, content_type="text/plain")

    def upload_file_to_gcs(self, file_path: str, gcs_file_name: str):
        """
        Uploads a file to a Google Cloud Storage bucket.
        """
        blob = self._download_folder.blob(gcs_file_name)
        with open(file_path, "rb") as file_obj:
            blob.upload_from_file(file_obj)

Now, we can initialize an instance of FileStorageHandler in app.py and assign UPLOAD_FOLDER and DOWNLOAD_FOLDER to the properties of the class.

from dotenv import load_dotenv
from file_handling import FileStorageHandler

load_dotenv()

folders = FileStorageHandler()
UPLOAD_FOLDER = folders.upload_folder
DOWNLOAD_FOLDER = folders.download_folder

Key takeaway

In the example, the error arose because initialize_cloud_storage was called at the top level in file_handling.py. This resulted in Python attempting to access environment variables before load_dotenv had a chance to set them.

I had been thinking of module-level imports as “everything at the top runs at import.” But that’s not true. Or rather, it is true, but not accurate. Python executes code based on indentation, and functions are indented within the module. So, it’s fair to say that every line that isn’t indented is at the top of the module. In fact, it’s even called that: top-level code, which is defined as basically anything that is not part of a function, class, or other code block.

And top-level code runs when the module is imported. It’s not enough to bury an expression below some functions; it will still run immediately when the module is imported, whether you are ready for it to run or not. Which is really what the argument against global variables and state is all about: managing when and how your code runs.
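A tiny demonstration, with a hypothetical module name:

# demo_module.py (hypothetical)
print("runs immediately at import time")   # top-level code: executes on `import demo_module`


def do_work():
    print("runs only when called")         # indented under a def: deferred until do_work() is called

Importing demo_module prints the first message straight away, even though do_work was never called.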

Understanding top-level code execution at import helped solve the initial error and design a more robust pattern.

Next steps

The downside of using a class is that every time it gets instantiated, a new instance is created, with a new connection to the cloud storage. To get around this, something to look into would be the Singleton pattern, which is outside the scope of this article.
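One minimal way to do that, sketched here under the assumption that FileStorageHandler lives in file_handling.py, is a module-level cache:

# file_handling.py -- a module-level cache, assuming FileStorageHandler is defined above
_handler = None


def get_file_storage_handler():
    """Returns a single shared FileStorageHandler, creating it on first use."""
    global _handler
    if _handler is None:
        _handler = FileStorageHandler()
    return _handler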

Also, the code currently doesn’t handle exceptions that might arise during initialization (e.g., issues with credentials or network connectivity). Adding robust error handling mechanisms will make the code more resilient.

Speaking of robustness, I would be remiss if I didn’t point out that a properly abstracted initialization method should retrieve the bucket name from a configuration or .env file instead of leaving it hardcoded in the method itself.
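Putting those last two points together, _initialize_cloud_storage could be reworked roughly as follows (a sketch; BUCKET_NAME is a hypothetical environment variable that would live in the .env file):

    def _initialize_cloud_storage(self):
        """
        Initializes the Google Cloud Storage client, reading the bucket name from the environment.
        """
        # BUCKET_NAME is a hypothetical key that would live in the .env file
        bucket_name = os.environ.get("BUCKET_NAME")
        if not bucket_name:
            raise RuntimeError("BUCKET_NAME is not configured")
        try:
            storage_client = storage.Client()
            return storage_client.bucket(bucket_name)
        except Exception as exc:  # e.g. credential or network problems
            raise RuntimeError("Could not initialize Google Cloud Storage") from exc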

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #445 - Drupal Bounty Program

Planet Drupal - Mon, 2024-04-08 14:00

Today we are talking about The Drupal Bounty Program, How it supports innovation, and how you can get involved with guest Alex Moreno. We’ll also cover WebProfiler as our module of the week.

For show notes visit: www.talkingDrupal.com/445

Topics
  • What is the Drupal Bounty program
  • How and when did it start
  • What issues and tasks are included
  • Has the bounty program been successful
  • Why was this program extended
  • Do you see any drawbacks
  • Can anyone participate
  • How are issues for the second round being selected
  • What do you see the future of the bounty program looking like
  • Could this become like other bounty programs with cash
  • Do you think the bounty program will help maintainers get sponsorship
Resources Guests

Alejandro Moreno - alexmoreno.net alexmoreno

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Matt Glaman - mglaman.dev mglaman

MOTW Correspondent

Martin Anderson-Clutz - mandclu

  • Brief description:
    • Have you ever wanted to get detailed performance data for the pages on your Drupal sites? There’s a module for that.
  • Module name/project name:
  • Brief history
    • How old: created in Jan 2014 by Luca Lusso of Italy who was a guest on the show in episode #425
    • Versions available: 10.1.5 which works with Drupal >=10.1.2
  • Maintainership
    • Actively maintained, latest release on Feb 1
    • Security coverage
    • Test coverage
    • Not much in the way of documentation, but the module is largely a wrapper for the Symfony WebProfiler bundle, which has its own section in the Symfony documentation
    • Number of open issues: 36 open issues, 13 of which are bugs
  • Usage stats:
    • 477 sites
  • Module features and usage
    • Once installed the module adds a toolbar to the bottom of your site, within which it will show a variety of data for every page:
    • Route and Controller
    • Memory usage
    • Time to load (with some additional setup)
    • Number of AJAX requests
    • Number of queries run and the total query time
    • Number of blocks visible
    • How many forms are on the profile
    • Lots of other detailed information available through links
    • Reports are saved into the database, so you can dig through additional details such as:
    • Request information like access metadata, cookies, session info, and server parameters, in addition to the request and response headers
    • All of the queries that ran, how long each took, and even a quick way to create an EXPLAIN statement to get deeper insight from your database engine
    • You can also view all the services available, and with a single click open the class file in the IDE of your choice
    • A handy alternative to other performance monitoring tools like XHProf (either as Drupal module, or installed directly into your development environment), or commercial tools like Blackfire or New Relic
    • Discussion
    • Luca’s book Modernizing Drupal 10 Theme Development actually provides a great deep dive into this module
Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update April 8

Open Source Initiative - Mon, 2024-04-08 13:15
Seeking document reviewers for OpenCV
  • This is your final opportunity to register for the review of licenses provided by OpenCV. Join us for our upcoming phase, where we meticulously compare various systems’ documentation against our latest definition to test compatibility.
    • For more information, check the forum
Action on the 0.0.6 draft 
  • Under “The following components are not required, but their inclusion in public releases is appreciated”, a user highlighted that model cards should be a required open component, as its purpose is to promote transparency and accountability
  • Under “What is Open Source AI?”, a user raises a concern regarding “made available to the public”, stating that software carries an Open Source license, even if a copy was only made available to a single person.
    • This will be considered for the next draft
Open Source AI Definition Town Hall – April 5, 2024

Access the slides and the recording of the previous town hall meeting here.

Categories: FLOSS Research

Bastian Blank: Python dataclasses for Deb822 format

Planet Debian - Mon, 2024-04-08 13:00

Python includes some helpful support for classes that are designed to just hold some data and not much more: Data Classes. It uses plain Python type definitions to specify what you can have and some further information for every field. This will then generate some useful methods for you, like __init__ and __repr__, and on request more. But given that those type definitions are available to other code, a lot more can be done.

There exist several separate packages that work with data classes. For example, you can have data validation from JSON with dacite.

But Debian likes a pretty strange format usually called Deb822, which is in fact derived from the RFC 822 format of e-mail messages. Those files include individual messages in a well-known format.

So I'd like to introduce some Deb822 format support for Python Data Classes. For now the code resides in the Debian Cloud tool.

Usage

Setup

It uses the standard data classes support and several helper functions. Also you need to enable support for postponed evaluation of annotations.

from __future__ import annotations
from dataclasses import dataclass
from dataclasses_deb822 import read_deb822, field_deb822

Class definition start

Data classes are just normal classes, just with a decorator.

@dataclass
class Package:

Field definitions

You need to specify the exact key to be used for this field.

    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')

Default values are also supported.

    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )

Reading files

for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Full example

from __future__ import annotations
from dataclasses import dataclass
from debian_cloud_images.utils.dataclasses_deb822 import read_deb822, field_deb822
from typing import Optional
import sys


@dataclass
class Package:
    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')
    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )


for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Known limitations
Categories: FLOSS Project Planets

Anwesha Das: Test container image with eercheck

Planet Python - Mon, 2024-04-08 10:25

Execution Environments serve us the benefits of containerization by solving issues such as software dependencies and portability. Ansible Execution Environments are Ansible control nodes packaged as container images. There are two kinds of Ansible execution environments:

  • Base, includes the following

    • fedora base image
    • ansible core
    • ansible collections : The following set of collections
      ansible.posix
      ansible.utils
      ansible.windows
  • Minimal, includes the following

    • fedora base image
    • ansible core

I have been the release manager for Ansible Execution Environments. After building the images, I perform a series of tests to check whether the versions of the different components of the newly built images are correct. So I wrote eercheck to ease those testing steps.

What is eercheck?

eercheck is a command line tool to test Ansible Community Execution Environments before release. It uses podman-py to connect and work with the podman container image, and Python unittest for testing the containers. The project is licensed under GPL-3.0-or-later.
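For illustration, a test combining podman-py and unittest might look roughly like this (a sketch only, not eercheck's actual code; the socket path assumes a rootless podman socket for UID 1000, and the image ID is a placeholder):

import unittest

from podman import PodmanClient

IMAGE_ID = "put-image-id-here"  # hypothetical placeholder for the image ID under test


class ImagePresenceTest(unittest.TestCase):
    def test_image_is_present(self):
        # Connects to the rootless podman socket (path assumes UID 1000; adjust as needed)
        with PodmanClient(base_url="unix:///run/user/1000/podman/podman.sock") as client:
            image = client.images.get(IMAGE_ID)  # raises an error if the image is absent
            self.assertIsNotNone(image.id)


if __name__ == "__main__":
    unittest.main()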

How to use eercheck?

Activate the virtual environment in the working directory.

python3 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt

Activate the podman socket.

systemctl start podman.socket --user

Update vars.json with the correct version numbers. Pick the correct versions of the Ansible Collections from the .deps file of the corresponding Ansible community package release. For example, for 9.4.0 the Collection versions can be found here. You can find the appropriate version of the Ansible Community Package here. The check needs to be carried out each time before the release of the Ansible Community Execution Environment.

Execute the program by giving the correct container image id.

./containertest.py image_id

Happy automating.

Categories: FLOSS Project Planets

Real Python: Python News: What's New From March 2024

Planet Python - Mon, 2024-04-08 10:00

While many people went hunting for Easter eggs, the Python community stayed active through March 2024. The free-threaded Python project reached a new milestone, and you can now experiment with disabling the GIL in your interpreter.

The Python Software Foundation does a great job supporting the language with limited resources. They’ve now announced a new position that will support users of PyPI. NumPy is an old workhorse in the data science space. The library is getting a big facelift, and the first release candidate of NumPy 2 is now available.

Dive in to learn more about last month’s most important Python news.

Free-Threaded Python Reaches an Important Milestone

Python’s global interpreter lock (GIL) has been part of the CPython implementation since the early days. The lock simplifies a lot of the code under the hood of the language, but also causes some issues with parallel processing.

Over the years, there have been many attempts to remove the GIL. However, until PEP 703 was accepted by the steering council last year, none had been successful.

The PEP describes how the GIL can be removed based on experimental work done by Sam Gross. It suggests that what’s now called free-threaded Python is activated through a build option. In time, this free-threaded Python is expected to become the default version of CPython, but for now, it’s only meant for testing and experiments.

When free-threaded Python is ready for bigger audiences, the GIL will still be enabled by default. You can then set an environment variable or add a command-line option to try out free-threaded Python:
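For reference, PEP 703 specifies an environment variable and a command-line option for this, roughly as follows (this is an assumption based on the PEP rather than on the article itself, and it only applies to an interpreter built with --disable-gil):

PYTHON_GIL=0 python script.py
python -X gil=0 script.py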

Read the full article at https://realpython.com/python-news-march-2024/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Zato Blog: Integrating with Jira APIs

Planet Python - Mon, 2024-04-08 09:44
Integrating with Jira APIs
2024-04-08, by Dariusz Suchojad

Overview

Continuing in the series of articles about newest cloud connections in Zato 3.2, this episode covers Atlassian Jira from the perspective of invoking its APIs to build integrations between Jira and other systems.

There are essentially two use modes of integrations with Jira:

  1. Jira reacts to events taking place in your projects and invokes your endpoints accordingly via WebHooks. In this case, it is Jira that explicitly establishes connections with and sends requests to your APIs.
  2. Jira projects are queried periodically or as a consequence of events triggered by Jira using means other than WebHooks.

The first case is usually more straightforward to conceptualize - you create a WebHook in Jira, point it to your endpoint and Jira invokes it when a situation of interest arises, e.g. a new ticket is opened or updated. I will talk about this variant of integrations with Jira in a future instalment as the current one is about the other situation, when it is your systems that establish connections with Jira.

The reason why it is more practical to first speak about the second form is that, even if WebHooks are somewhat easier to reason about, they do come with their own ramifications.

To start off, assuming that you use the cloud-based version of Jira (e.g. https://example.atlassian.net), you need to have a publicly available endpoint for Jira to invoke through WebHooks. Very often, this is undesirable because the systems that you need to integrate with may be internal ones, never meant to be exposed to public networks.

Secondly, your endpoints need to have a TLS certificate signed by a public Certificate Authority and they need to be accessible on port 443. Again, both of these are something that most enterprise systems will not allow at all or it may take months or years to process such a change internally across the various corporate departments involved.

Lastly, even if a WebHook can be used, it is not always a given that the initial information that you receive in the request from a WebHook will already contain everything that you need in your particular integration service. Thus, you will still need a way to issue requests to Jira to look up details of a particular object, such as tickets, in this way reducing WebHooks to the role of initial triggers of an interaction with Jira, e.g. a WebHook invokes your endpoint, you have a ticket ID on input and then you invoke Jira back anyway to obtain all the details that you actually need in your business integration.

The end situation is that, although WebHooks are a useful concept that I will write about in a future article, they may very well not be sufficient for many integration use cases. That is why I start with integration methods that are alternative to WebHooks.

Alternatives to WebHooks

If, in our case, we cannot use WebHooks then what next? Two good approaches are:

  1. Scheduled jobs
  2. Reacting to emails (via IMAP)

Scheduled jobs will let you periodically inquire with Jira about the changes that you have not processed yet. For instance, with a job definition as below:

Now, the service configured for this job will be invoked once per minute to carry out any integration work required. For instance, it can get a list of tickets since the last time it ran, process each of them as required in your business context and update a database with information about what has been just done - the database can be based on Redis, MongoDB, SQL or anything else.

Integrations built around scheduled jobs make the most sense when you need to make periodic sweeps across large swaths of business data; these are the "Give me everything that changed in the last period" kind of interactions, where you do not know precisely how much data you are going to receive.

In the specific case of Jira tickets, though, an interesting alternative may be to combine scheduled jobs with IMAP connections:

The idea here is that when new tickets are opened, or when updates are made to existing ones, Jira will send out notifications to specific email addresses and we can take advantage of it.

For instance, you can tell Jira to CC or BCC an address such as zato@example.com. Now, Zato will still run a scheduled job but instead of connecting with Jira directly, that job will look up unread emails in its inbox ("UNSEEN" per the relevant RFC).

Anything that is unread must be new since the last iteration, which means that we can process each such email from the inbox, in this way guaranteeing that we process only the latest updates, dispensing with the need for our own database of tickets already processed. We can extract the ticket ID or other details from the email, look up its details in Jira and then continue as needed.

All the details of how to work with IMAP emails are provided in the documentation but it would boil down to this:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):

    def handle(self):

        conn = self.email.imap.get('My Jira Inbox').conn

        for msg_id, msg in conn.get():

            # Process the message here ..
            process_message(msg.data)

            # .. and mark it as seen in IMAP.
            msg.mark_seen()

The natural question is - how would the "process_message" function extract details of a ticket from an email?

There are several ways:

  1. Each email has a subject of a fixed form - "[JIRA] (ABC-123) Here goes description". In this case, ABC-123 is the ticket ID.
  2. Each email will contain a summary, such as the one below, which can also be parsed:
Summary: Here goes description
Key: ABC-123
URL: https://example.atlassian.net/browse/ABC-123
Project: My Project
Issue Type: Improvement
Affects Versions: 1.3.17
Environment: Production
Reporter: Reporter Name
Assignee: Assignee Name
  3. Finally, each email will have an "X-Atl-Mail-Meta" header with interesting metadata that can also be parsed and extracted:
X-Atl-Mail-Meta: user_id="123456:12d80508-dcd0-42a2-a2cd-c07f230030e5", event_type="Issue Created", tenant="https://example.atlassian.net"

The first option is the most straightforward and likely the most convenient one - simply parse out the ticket ID and call Jira with that ID on input for all the other information about the ticket. How to do it exactly is presented in the next chapter.
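For instance, a small helper along these lines could do that parsing (a sketch; the exact regular expression is an assumption based on the subject format shown above):

# stdlib
import re

def extract_ticket_key(subject: str):
    """ Returns e.g. 'ABC-123' from '[JIRA] (ABC-123) Here goes description', or None if not found. """
    match = re.search(r'\[JIRA\] \(([A-Z][A-Z0-9]*-\d+)\)', subject)
    return match.group(1) if match else None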

Regardless of how we parse the emails, the important part is that we know that we invoke Jira only when there are new or updated tickets - otherwise there would not have been any new emails to process. Moreover, because it is our side that invokes Jira, we do not expose our internal system to the public network directly.

However, from the perspective of the overall security architecture, email is still part of the attack surface so we need to make sure that we read and parse emails with that in view. In other words, regardless of whether it is Jira invoking us or our reading emails from Jira, all the usual security precautions regarding API integrations and accepting input from external resources, all that still holds and needs to be part of the design of the integration workflow.

Creating Jira connections

The above presented the ways in which we can arrive at the step of when we invoke Jira and now we are ready to actually do it.

As with other types of connections, Jira connections are created in Zato Dashboard, as below. Note that you use the email address of a user on whose behalf you connect to Jira but the only other credential is that user's API token previously generated in Jira, not the user's password.

Invoking Jira

With a Jira connection in place, we can now create a Python API service. In this case, we accept a ticket ID on input (called "a key" in Jira) and we return a few details about the ticket to our caller.

This is the kind of a service that could be invoked from a service that is triggered by a scheduled job. That is, we would separate the tasks, one service would be responsible for opening IMAP inboxes and parsing emails and the one below would be responsible for communication with Jira.

Thanks to this loose coupling, we make everything much more reusable - that the services can be changed independently is but one part and the more important side is that, with such separation, both of them can be reused by future services as well, without tying them rigidly to this one integration alone.

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.common.typing_ import cast_, dictnone
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.jira_ import JiraClient

# ###########################################################################

@dataclass(init=False)
class GetTicketDetailsRequest(Model):
    key: str

@dataclass(init=False)
class GetTicketDetailsResponse(Model):
    assigned_to: str = ''
    progress_info: dictnone = None

# ###########################################################################

class GetTicketDetails(Service):

    class SimpleIO:
        input = GetTicketDetailsRequest
        output = GetTicketDetailsResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: GetTicketDetailsRequest

        # .. create a reference to our connection definition ..
        jira = self.cloud.jira['My Jira Connection']

        # .. obtain a client to Jira ..
        with jira.conn.client() as client:

            # Cast to enable code completion
            client = cast_('JiraClient', client)

            # Get details of a ticket (issue) from Jira
            ticket = client.get_issue(input.key)

            # Observe that ticket may be None (e.g. invalid key), hence this 'if' guard ..
            if ticket:

                # .. build a shortcut reference to all the fields in the ticket ..
                fields = ticket['fields']

                # .. build our response object ..
                response = GetTicketDetailsResponse()
                response.assigned_to = fields['assignee']['emailAddress']
                response.progress_info = fields['progress']

                # .. and return the response to our caller.
                self.response.payload = response

# ###########################################################################

Creating a REST channel and testing it

The last remaining part is a REST channel to invoke our service through. We will provide the ticket ID (key) on input and the service will reply with what was found in Jira for that ticket.

We are now ready for the final step - we invoke the channel, which invokes the service which communicates with Jira, transforming the response from Jira to the output that we need:

$ curl localhost:17010/jira1 -d '{"key":"ABC-123"}'
{
    "assigned_to": "zato@example.com",
    "progress_info": {
        "progress": 10,
        "total": 30
    }
}
$

And this is everything for today - just remember that this is just one way of integrating with Jira. The other one, using WebHooks, is something that I will go into in one of the future articles.

More blog posts
Categories: FLOSS Project Planets

The Drop Times: Small Strides to Dramatic Leaps

Planet Drupal - Mon, 2024-04-08 09:29

Dear Readers,

The DropTimes [TDT], as you know, is a news website with the vision of contributing to the growth of a vibrant community of users and contributors around Drupal through the process of covering and promoting everything happening around Drupal. To borrow our founder, Anoop John's words, 

"What we are doing as TDT is not just running a news website but we are trying to mobilize a whole community of users toward revitalizing the community."

 We are working towards improving the technology, driving the contributions back to the Drupal community, and ultimately contributing back to society at large. We are driving towards something bigger than all of us. 

The growth of such a venture certainly will be slow, like a writer adding a few words to their novel each day, a runner slightly extending their distance each week, or a business steadily enhancing its customer service. These small steps may seem insignificant in isolation, but they compound into significant advancements over months and years. These seemingly minor improvements compound over time, accumulating smaller strides and preparing for the dramatic leaps. The DropTimes has, day by day, accumulated the strength to make bigger leaps.

In digital innovation, embracing new directions and challenging conventional norms often leads to remarkable discoveries. Just as the Drupal community continuously strives to push boundaries and advocate for the principles of openness and community-driven development, we are likewise urged to delve into diverse domains with immense potential for creativity and impact. The DropTimes model is one based on resilience, fosters a culture of continuous learning, and ultimately transforms modest efforts into significant accomplishments. This method underscores the importance of the journey, teaching patience and discipline and proving that steady progress can lead to remarkable success.

While grateful to all the readers and loyal supporters of TDT, we seek your continued support in building something impactful and helping contribute to Drupal and open-source. With that, let's revisit the highlights from last week's coverage.

The DropTimes Community Strategist, Tarun Udayaraj, had an opportunity to converse with Tim Doyle, the first-ever Chief Executive Officer and the appointed leader of the Drupal Association. In this exclusive interview, the CEO of the Drupal Association shares his perspectives on the future of Drupal and the open-source community at large. Read the full interview here.

Preston So is a dynamic figure in software development. He showcases a rich career spanning diverse roles within the tech industry and emphasizes a leadership philosophy rooted in empathy and adaptability. Beyond his professional endeavors, Preston's commitment to the Drupal community is notable, having been a significant part of it since 2007. Learn more about this multifaceted individual and his contributions to open-source with an interview by Elma John, a sub-editor at TDT, "Navigating the Currents of Change: The Multidimensional Journey of Preston So."

In an interview with Kazima Abbas, sub-editor of The DropTimes, Adrian Ababei, the Senior Drupal Architect and CEO at OPTASY, shares his extensive experience in web development and Drupal architecture. He discusses overseeing full-cycle project management, conducting technology research, and leading a team of developers at OPTASY.

The third part of the hit Page Builder series by  André Angelantoni, Senior Drupal Architect at HeroDevs, came out last week. "Drupal Page Builders—Part 3: Other Alternative Solutions" discusses alternatives to Paragraphs and Layout Builder. This segment navigates through a variety of server-side rendered page generation solutions, offering a closer look at innovative modules that provide a broader range of page-building capabilities beyond Drupal's native tools.

March has ended, and TDT has successfully concluded its "Women in Drupal" campaign. As the series ends with the third part of Inspiring Inclusion: Celebrating Women in Drupal, The DropTimes reflects on the powerful narratives and insightful messages shared by women Drupalers from around the globe. 

In exciting news, TDT has been announced as the Media Partner for DrupalCon Barcelona 2024 and Drupal Iberia 2024 as a testament to the platform's growth and resilience. We are also seeking volunteers from the members of the Drupal Community to help us cover the upcoming DrupalCon Portland 2024.

The Regular Registration window for DrupalCon Portland is now open. Registration for DrupalCon Portland will unlock an additional $100 discount on your ticket for DrupalCon North America 2025, in addition to the Early Bird discount during the Early Bird registration window. 

Every week, we will have some events somewhere around the world. A complete list of events for the week is available here.

In other news, Drupal 10.2.5 is now available, featuring a collection of bug fixes. This patch release addresses various issues to ensure stability for production sites. Janez Urevc has reported a 10% improvement in Drupal core test suite runtime, attributed to Gander, a performance testing framework part of Drupal since version 10.2. The latest WebAIM Million report reveals insights into web accessibility, with Drupal holding strong in the CMS rankings. Discover the subtle shifts in WCAG 2 compliance and the strategic decision to exclude subdomains for improved analysis. 

In another interesting update, Mufeed VH, a 21-year-old from Kerala, India, and founder of Lyminal and Stition.AI, has created Devika, an open-source AI software similar to Devin. Devika, conceived initially as a joke, can understand instructions, conduct research, and write code autonomously.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

EuroPython: EuroPython April 2024 Newsletter

Planet Python - Mon, 2024-04-08 06:42

Hello, Python enthusiasts! 👋

Guess what? We're on the home stretch now, with less than 100 days left until we all rendezvous in the enchanting city of Prague for EuroPython 2024!

Only 91 days left until EuroPython 2024!

Can you feel the excitement tingling in your Pythonic veins?

Let’s look up what's been cooking in the EuroPython pot lately. 🪄🍜

📣 Programme

The curtains have officially closed on the EuroPython 2024 Call for Proposals! 🎬

We've hit records with an incredible 627 submissions this year!! 🎉

Thank you to each and every one of you brave souls who tossed your hats into the ring! 🎩 Your willingness to share your ideas has truly made this a memorable journey.

🗃️ Community Voting

EuroPython 2024 Community Voting was a blast!

The Community Voting is composed of all past EuroPython attendees and prospective speakers between 2015 and 2024.

We had 297 people contributing, making EuroPython more responsive to the community’s choices. 😎 We can’t thank you enough for helping us hear the voice of the Community.

Now, our wonderful programme crew along with the team of reviewers and community voters have been working hard to create the schedule for the conference! 📋✨

💰 Sponsor EuroPython 2024

EuroPython is a volunteer-run, non-profit conference. All sponsor support goes to cover the cost of running the Europython Conference and supporting the community with Grants and Financial Aid.

If you want to support EuroPython and its efforts to make the event accessible to everyone, please consider sponsoring (or asking your employer to sponsor).

Sponsoring EuroPython guarantees you highly targeted visibility and the opportunity to present your company to one of the largest and most diverse Python communities in Europe and beyond!

There are various sponsor tiers and some have limited slots available. This year, besides our main packages, we offer add-ons as optional extras. For more information, check out our Sponsorship brochure.

🐦 We have an Early Bird 10% discount for companies that sign up by April 15th.🐦

More information at: https://ep2024.europython.eu/sponsor
🫂 Contact us at sponsoring@europython.eu

🎟️ Ticket Sales

The tickets are now open to purchase, and there is a variety of options:

  • Conference Tickets: access to Conference and Sprint Weekends.
  • Tutorial Tickets: access to the Workshop/Tutorial Days and Sprint Weekend (no access to the main conference).
  • Combined Tickets: access to everything during the whole seven-day, i.e. workshops, conference talks and sprint weekend!

We also offer different payment tiers designed to answer each attendee's needs. They are:

Business Tickets: for companies and employees funded by their companies
  • Tutorial Only Business (Net price €400.00 + 21% VAT)
  • Conference Only Business (Net price €500.00 + 21% VAT)
  • Late Bird (Net price €750.00 + 21% VAT)
  • Combined Business (Net price €800.00 + 21% VAT)
  • Late Bird (Net price €1200.00 + 21% VAT)
Personal Tickets: for individuals
  • Tutorial Only Personal (€200.00 incl. 21%VAT)
  • Conference Only Personal (€300.00 incl. 21% VAT)
  • Late Bird (€450.00 incl. 21% VAT)
  • Combined Personal (€450.00 incl. 21% VAT)
  • Late Bird (€675.00 incl. 21% VAT)
Education Tickets: for students and active teachers (Educational ID is required at registration)
  • Conference Only Education (€135.00 incl. 21% VAT)
  • Tutorial Only Education (€100.00 incl. 21% VAT)
  • Combined Education (€210.00 incl. 21% VAT)
Fun fact: Czechia has been ranked among the world's top 20 happiest countries recently.

Seize the chance to grab an EP24 ticket and connect with the delightful community of Pythonistas and happy locals this summer! ☀️

Need more information regarding tickets? Please visit https://ep2024.europython.eu/tickets or contact us at helpdesk@europython.eu.

⚖️ Visa Application

If you require a visa to attend EuroPython 2024 in Prague, now is the time to start preparing.

The first step is to verify if you require a visa to travel to the Czech Republic.

The Czech Republic is a part of the EU and the Schengen Area. If you already have a valid Schengen visa, you may NOT need to apply for a Czech visa. If you are uncertain, please check this website and consult your local consular office or embassy. 🏫

If you need a visa to attend EuroPython, you can lodge a visa application for Short Stay (C), up to 90 days, for the purpose of “Business /Conference”. We recommend you do this as soon as possible.

Please, make sure you read all the visa pages carefully and prepare all the required documents before making your application. The EuroPython organisers are not able nor qualified to give visa advice.

However, we are more than happy to help with the visa support letter issued by the EuroPython Society. Every registered attendee can request one; we only issue visa support letters to confirmed attendees. We kindly ask you to purchase your ticket before filling in the request form.

For more information, please check https://ep2024.europython.eu/visa or contact us at visa@europython.eu. ✈️

💶 Financial Aid

We are also pleased to announce our financial aid, sponsored by the EuroPython Society. The goal is to make the conference open to everyone, including those in need of financial assistance.

Submissions for the first round of our financial aid programme are open until April 21st 2024.

There are three types of grants including:

  • Free Ticket Voucher Grant
  • Travel/Accommodation Grant (reimbursement of travel costs up to €400.)
  • Visa Application Fee Grant (up to €80)
⏰ FinAid timeline

If you apply for the first round and do not get selected, you will automatically be considered for the second round. No need to reapply.

8 March 2024: Applications open
21 April 2024: Deadline for submitting first-round applications
8 May 2024: First round of grant notifications
12 May 2024: Deadline to accept a first-round grant
19 May 2024: Deadline for submitting second-round applications
15 June 2024: Second round of grant notifications
12 June 2024: Deadline to accept a second-round grant
21 July 2024: Deadline for submitting receipts/invoices

Visit https://europython.eu/finaid for information on eligibility and application procedures for Financial Aid grants.

🎤 Public Speaking Workshop for Mentees

We are excited to announce that this year’s Speaker Mentorship Programme comes with an extra package!

We have selected a limited number of mentees for a 5-week interactive course covering the basics of a presentation from start to finish.

The main facilitator is the seasoned speaker Cheuk Ting Ho and the participants will end the course by delivering a talk covering all they have learned.

We look forward to the amazing talks the workshop participants will give us. 🙌

🐍 Upcoming Events in Europe

Here are some upcoming events happening in Europe soon.

Czech Open Source Policy Forum: Apr 24, 2024 (In-Person)

Interested in open source and happen to be near Brno/Czech Republic in April? Join the Czech Open Source Policy Forum and have the chance to celebrate the launch of the Czech Republic's first Open Source Policy Office (OSPO). More info at: https://pretix.eu/om/czospf2024/

OSSCi Prague Meetup: May 16, 2024 (In-Person)

Join the forefront of innovation at OSSci Prague Meetup, where open source meets science. Call for Speakers is open!  https://pydata.cz/ossci-cfs.html

PyCon DE & PyData Berlin: April 22-24 2024

Dive into three days of Python and PyData excellence at Pycon DE! Visit https://2024.pycon.de/ for details.

PyCon Italy: May 22-25 2024

PyCon Italia 2024 will happen in Florence. The schedule is online and you can check it out at their nice website: https://2024.pycon.it/

GeoPython 2024: May 27-29, 2024

GeoPython 2024 will happen in Basel, Switzerland. For more information visit their website: https://2024.geopython.net/

🤭 Py.Jokes

Can you imagine our newsletter without joy and laughter? We can’t. 😾🙅‍♀️❌ Here’s this month's PyJoke:

pip install pyjokes

import pyjokes
print(pyjokes.get_joke())

How many programmers does it take to change a lightbulb? None, they just make darkness a standard!

🐣 See You All Next Month

Before saying goodbye, thank you so much for reading this far.

We can’t wait to reunite with all you amazing people in beautiful Prague again.

Let me remind you how pretty Prague is during summer. 🌺🌼🌺

Rozkvetlá jarní Praha, březen 2024 by Radoslav Vnenčák

Remember to take good care of yourselves, stay hydrated and mind your posture!

Oh, and don’t forget to force encourage your friends to join us at EuroPython 2024! 😌

It’s time again to make new Python memories together!

Looking forward to meeting you all here next month!

With much joy and excitement,

EuroPython 2024 Team 🤗

Categories: FLOSS Project Planets

LN Webworks: How To Create Drupal Custom Entity: Step By Step Guide

Planet Drupal - Mon, 2024-04-08 06:15

Custom Entities are a powerful tool for building complex web applications and content management systems. Entities in Drupal provide a standardized way to store and manipulate data. Custom entity types in Drupal empower developers to define custom functionality, enhance performance, and maintain full control over data structures, supplementing the numerous built-in entity types.

Here are the steps for creating a custom entity.

Drupal 10 Custom Entity Development in easy steps:

Step 1: Create a custom folder for your module.


Choose a short name or machine name for your module.

Categories: FLOSS Project Planets

Golems GABB: Innovative Methods of Integrating Drupal with Other Systems to Expand Your Website's Capabilities

Planet Drupal - Mon, 2024-04-08 05:42

Given the variety of tools and techniques Drupal offers, it is a must to estimate your business needs first. AI, VR, AR, blockchain, and other technologies will keep reshaping industrial processes, so your task is to ensure the overall Drupal site’s scalability and versatility. 
Businesses of any calibre won’t achieve excellent results if they don’t align the server-side and client-side aspects of website development. Expanding website capabilities with Drupal integration will help you keep the momentum and improve online experiences for your audiences. 
The demand for mobile-first designs, as well as emerging technologies and e-commerce growth, make surviving in the niche without implementing innovative methods of performance and communication impossible. Stay tuned to explore the palette of tools and techniques to level up the standards of website architecture and efficiency for your business.

Categories: FLOSS Project Planets

Python Insider: Python 3.11.9 is now available

Planet Python - Mon, 2024-04-08 00:50

  


 This is the last bug fix release of Python 3.11 

This is the ninth maintenance release of Python 3.11

Python 3.11.9 is the newest major release of the Python programming language, and it contains many new features and optimizations. Get it here:

https://www.python.org/downloads/release/python-3119/

Major new features of the 3.11 series, compared to 3.10

Among the new major new features and changes so far:

  • PEP 657 – Include Fine-Grained Error Locations in Tracebacks
  • PEP 654 – Exception Groups and except*
  • PEP 673 – Self Type
  • PEP 646 – Variadic Generics
  • PEP 680 – tomllib: Support for Parsing TOML in the Standard Library
  • PEP 675 – Arbitrary Literal String Type
  • PEP 655 – Marking individual TypedDict items as required or potentially-missing
  • bpo-46752 – Introduce task groups to asyncio
  • PEP 681 – Data Class Transforms
  • bpo-433030 – Atomic grouping ((?>…)) and possessive quantifiers (*+, ++, ?+, {m,n}+) are now supported in regular expressions.
  • The Faster CPython Project is already yielding some exciting results. Python 3.11 is up to 10-60% faster than Python 3.10. On average, we measured a 1.22x speedup on the standard benchmark suite. See Faster CPython for details.

More resources

And now for something completely different

A kugelblitz is a theoretical astrophysical object predicted by general relativity. It is a concentration of heat, light or radiation so intense that its energy forms an event horizon and becomes self-trapped. In other words, if enough radiation is aimed into a region of space, the concentration of energy can warp spacetime so much that it creates a black hole. This would be a black hole whose original mass–energy was in the form of radiant energy rather than matter, however as soon as it forms, it is indistinguishable from an ordinary black hole.

We hope you enjoy the new releases! Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.

https://www.python.org/psf/

Your friendly release team,
Ned Deily @nad 
Steve Dower @steve.dower 
Pablo Galindo Salgado @pablogsal

Categories: FLOSS Project Planets

Salsa Digital: Drupal stats in Australian government jurisdictions

Planet Drupal - Mon, 2024-04-08 00:14
Drupal use in the Australian public sector

At the recent DrupalSouth 2024, Sean Hamlin from amazee.io took us through some enlightening stats on CMS use for Australian government websites. This was as a follow-on to stats he provided at DrupalSouth 2022. Once again, he analysed Drupal use for each state/territory and also for Federal Government sites. After briefly highlighting his methodology, Sean dived into the stats.

Victoria

The breakdown of CMSs used by government websites in Victoria is as follows:

  • Drupal (.vic.gov.au) — 32.2% (based on aggregation by score)
  • Unknown — 20%
  • SquizMatrix — 15.5%
  • Opencities — 9.4%

New South Wales

For New South Wales, the most used CMSs by the government are:

  • Drupal (nsw.gov.au) — 23.7%
  • Unknown — 19.5%
  • SquizMatrix — 15.
Categories: FLOSS Project Planets

stow @ Savannah: GNU Stow 2.4.0 released

GNU Planet! - Sun, 2024-04-07 19:22

Stow 2.4.0 has been released. This release contains some much-wanted bug-fixes — specifically, fixing the --dotfiles option to work with dot-foo directories, and avoiding a spurious warning when unstowing. There were also very many clean-ups and improvements, mostly internal and not visible to users. See http://git.savannah.gnu.org/cgit/stow.git/tree/NEWS for more details.

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in March 2024

Planet Debian - Sun, 2024-04-07 07:56
FTP master

This month I accepted 147 and rejected 12 packages. The overall number of packages that got accepted was 151.

If you file an RM bug, please do check whether there are reverse dependencies as well and file RM bugs for them. It is annoying and time-consuming when I have to do the moreinfo dance.

Debian LTS

This was my hundred-seventeenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3770-1] libnet-cidr-lite-perl security update for one CVE to fix IP parsing and ACLs based on the result
  • [#1067544] Bullseye PU bug for libmicrohttpd
  • Unfortunately the XZ incident happened at the end of the month and I intentionally delayed other uploads: they will appear as DLA-3781-1 and DLA-3784-1 in April

I also continued to work on qtbase-opensource-src and last but not least did a week of FD.

Debian ELTS

This month was the sixty-eighth ELTS month. During my allocated time I uploaded:

  • [ELA-1062-1]libnet-cidr-lite-perl security update for one CVE to improve parsing of IP addresses in Jessie and Stretch
  • Due to XZ I also delayed the uploads here. They will appear as ELA-1069-1 and DLA-1070-1 in April

I also continued on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well) and did a week of FD.

Debian Printing

This month I uploaded new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded a new upstream or bugfix version of:

misc

This month I uploaded new upstream or bugfix versions of:

Categories: FLOSS Project Planets

The Drop Times: The Drop Times Seeks Volunteers for DrupalCon Portland 2024 Coverage

Planet Drupal - Sat, 2024-04-06 13:54
Join The Drop Times at DrupalCon Portland 2024! We're seeking passionate volunteers to capture the excitement, insights, and moments that make this event unforgettable. From live updates to interviews and photo documentation, be part of our team and showcase the best of DrupalCon North America 2024! Sign up now to get involved.
Categories: FLOSS Project Planets

John Goerzen: Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas

Planet Debian - Sat, 2024-04-06 10:00

It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta.

The Kansas Reflector published an article about Meta censoring environmental articles about climate change — deeming them “too controversial”.

Facebook then censored the article about Facebook censorship, and then after an independent site published a copy of the climate change article, Facebook censored it too.

The CNN story says Facebook apologized and said it was a mistake and was fixing it.

Color me skeptical, because today I saw this:

Yes, that’s right: today, April 6, I get a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today.

I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm.

Riiiiiight. Cybersecurity.

This isn’t even the first time they’ve done this to me.

On September 11, 2021, they removed my post about the social network Mastodon (click that link for screenshot). A post that, incidentally, had been made 10 months prior to being removed.

While they ultimately reversed themselves, I subsequently wrote Facebook’s Blocking Decisions Are Deliberate — Including Their Censorship of Mastodon.

That this same pattern has played out a second time, again with something that poses only a very slight challenge to Facebook, seems to validate my conclusion. Facebook lets all sorts of hateful garbage infest their site, but anything about climate change, or their own censorship, gets removed, and this pattern persists for years.

There’s a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social.

So. I’ve written this blog post. And then I’m going to post it to Facebook. Let’s see if they try to censor me for a third time. Bring it, Facebook.

Categories: FLOSS Project Planets

Drupixels: Slow Drupal Permissions Page? Use Better Permissions Page Module!

Planet Drupal - Sat, 2024-04-06 04:38
Discover how the Better Permissions Page module simplifies permission management in Drupal by replacing the default permissions page, improving performance, and efficiently updating permissions and roles.
Categories: FLOSS Project Planets

February/March in KDE Itinerary

Planet KDE - Sat, 2024-04-06 04:00

It has been another two exciting months since the last update on KDE Itinerary, with new vehicle and train coach amenity information, DST changes in the timeline, progress on indoor routing and, most notably, the founding of the Transitous project.

New Features

Train and coach amenity information

The library we use for public transport data now has a much more elaborate data model for vehicle features. That covers general comfort features like air conditioning or Wi-Fi, but also things specifically relevant when traveling with small children, a bike or a wheelchair. These can also be qualified by availability (e.g. when they need a special reservation) and can be marked as disrupted.
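To make the idea more concrete, here is a minimal, hypothetical Python sketch of such a feature model. All names and values are invented for illustration; the actual data model lives in the C++ KPublicTransport library and differs in detail.

from dataclasses import dataclass
from enum import Enum

class Availability(Enum):
    # How certain we are that a feature is actually usable.
    AVAILABLE = "available"
    RESERVATION_REQUIRED = "reservation required"
    DISRUPTED = "disrupted"
    UNKNOWN = "unknown"

@dataclass
class VehicleFeature:
    # One amenity of a coach or vehicle, e.g. Wi-Fi or a wheelchair space.
    name: str                    # e.g. "air conditioning", "bicycle space"
    availability: Availability
    note: str = ""               # free-text detail, e.g. "out of order today"

# Example: features of a single train coach (made-up data).
coach_features = [
    VehicleFeature("air conditioning", Availability.AVAILABLE),
    VehicleFeature("Wi-Fi", Availability.DISRUPTED, "currently not working"),
    VehicleFeature("bicycle space", Availability.RESERVATION_REQUIRED),
]

for f in coach_features:
    detail = f" ({f.note})" if f.note else ""
    print(f"{f.name}: {f.availability.value}{detail}")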

Itinerary makes use of this in the train coach layout view, where it’s now possible to tap on a coach for a more detailed description.

Coach feature details in Itinerary's train coach layout view.

Another place where this is used is anything showing results from journey or departure searches, such as when planning a new train trip. While all routing services provide some of this information, the level of detail can vary greatly.

Vehicle features in journey search results.

Since this is now available in a machine-readable form, it also becomes conceivable to allow configuring more detailed traveler profiles so Itinerary can show the information most relevant to you more prominently, or take this into consideration e.g. when automatically selecting transfer suggestions.

Daylight saving time information

Switching to and from daylight saving time happens at different times in different locations (if at all), therefore Itinerary now also displays information about upcoming daylight saving time changes in the timeline, similar to what it already does for e.g. timezone changes.

Daylight saving time information in Itinerary's timeline.
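The underlying idea, spotting the next change of UTC offset for a given location, can be illustrated with a short Python sketch using the standard-library zoneinfo module. This is purely illustrative; Itinerary itself is a C++/Qt application and does not use this code.

from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def next_utc_offset_change(tz_name, start, horizon_days=366):
    # Probe hour by hour for the next change of UTC offset (a DST switch) in a zone.
    tz = ZoneInfo(tz_name)
    offset = start.astimezone(tz).utcoffset()
    for hours in range(1, horizon_days * 24):
        probe = (start + timedelta(hours=hours)).astimezone(tz)
        if probe.utcoffset() != offset:
            return probe  # first instant observed with the new offset
    return None  # no DST change in this zone within the horizon

now = datetime.now(timezone.utc)
print("Next offset change in Europe/Berlin around:", next_utc_offset_change("Europe/Berlin", now))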

Infrastructure Work

Transitous

Probably the most significant development on the infrastructure side is the appearance of Transitous. That’s a project that started at FOSDEM 2024 barely two months ago with the aim of setting up a community-run, free and open public transport routing service. It has been growing rapidly and is by now a collaboration of people from many different FOSS and Open Data projects and communities.

Current Transitous coverage for long-distance travel in Europe.

While not all of the basic features are complete yet, it nevertheless already provides value by covering a few countries where we didn’t have any public transport data at all before. Starting with 24.05, KPublicTransport will have support for Transitous enabled by default, and thus it will also become available in Itinerary and KTrip.

Unlike with vendor-operated or otherwise proprietary services, it’s now possible to expand public transport data coverage ourselves, at least where publicly available GTFS feeds exist.
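A GTFS feed is essentially a zip archive of CSV files, which is what makes community contributions feasible. As a rough illustration (the file name is a made-up example), listing the stops of a feed in Python could look like this:

import csv
import io
import zipfile

FEED_PATH = "my-region-gtfs.zip"  # hypothetical path to a downloaded GTFS feed

with zipfile.ZipFile(FEED_PATH) as feed:
    # stops.txt is a mandatory GTFS file listing every stop with its coordinates.
    with feed.open("stops.txt") as raw:
        reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8-sig"))
        for row in reader:
            print(row["stop_id"], row["stop_name"], row["stop_lat"], row["stop_lon"])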

Indoor routing

Work on indoor routing for our train station, airport and event venue maps also continued, with the focus on turning the previously shown demo, which could find a path from A to B, into something that does so reliably and in a way that matches human expectations; that is the bulk of the work here.

Examples of this include not taking “shortcuts” through paths you shouldn’t usually take (e.g. emergency exits, or walking through conference rooms/lecture halls as pictured below), but also ensuring robustness against imperfect or incomplete map data.

Shortest path (left), preferring corridors (right).
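Avoiding such shortcuts essentially boils down to weighted shortest-path search: undesirable edges are not removed but heavily penalised, so a route through them still exists when there is no alternative. Below is a small self-contained Python sketch of the idea; the graph, edge kinds and penalty factors are invented for illustration, and the real implementation is part of KDE's C++ indoor map code.

import heapq

# Each edge: (neighbour, length in metres, kind of way)
graph = {
    "entrance": [("corridor", 30, "corridor"), ("lecture_hall", 12, "room")],
    "corridor": [("entrance", 30, "corridor"), ("platform", 25, "corridor")],
    "lecture_hall": [("entrance", 12, "room"), ("platform", 10, "room")],
    "platform": [("corridor", 25, "corridor"), ("lecture_hall", 10, "room")],
}

# Cost multipliers: rooms and emergency exits are strongly discouraged, not forbidden.
PENALTY = {"corridor": 1.0, "room": 10.0, "emergency_exit": 100.0}

def route(start, goal):
    # Dijkstra over the penalised edge costs.
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, length, kind in graph[node]:
            if neighbour not in seen:
                heapq.heappush(queue, (cost + length * PENALTY[kind], neighbour, path + [neighbour]))
    return None

print(route("entrance", "platform"))  # picks the longer corridor route over the room shortcut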

Some of this also involves clarifying or extending the OSM data model, and onsite visits to inspect challenging locations.

Fixes & Improvements

Travel document extractor
  • New or improved travel document extractors for AMSBus, ANA, Deutsche Bahn, Eckerö Line, Elron, European Sleeper, Eurostar, Eventim, Finnair, Flibco, Leo Express, LTG Link, Moongate, National Express, Pasažieru vilciens, Salzbergwerk, SNCF, Thalys, ti.to, Trenitalia and UK national railways.
  • Added support for yet another variant of PDF raster images for barcode detection.
  • Improved generic extractors for flight boarding passes as well as ERA FCB and VDV train tickets.
  • Fixed the schema.org semantic annotations in the OSM event calendar.
  • Consider GIF files as well when searching for barcodes.

All of this has been made possible thanks to your travel document donations!

Public transport data
  • Fixed Deutsche Bahn Hafas searches sometimes not including replacement trains.
  • Fixed misdetected train coach types from UIC coach numbers.
  • Updated coverage metadata from the Transport API Repository.
  • Fixed caching of location queries and negative journey query results.
  • Improved support for arrival query result paging.
  • Updated support for ÖBB coach layout data.
  • Fixed train coach layout queries using times in the wrong timezone.
Itinerary app
  • Fixed barcode scanning on Android, caused by a regression in Qt 6.6.2 (affects all KDE apps, not just Itinerary).
  • Prevent overly large Apple Wallet pass footer images from messing up the layout.
  • Fix editing of times in AM/PM format.
  • Remember the last used folder in trip group export file dialogs.
  • Suggest meaningful file names for exporting trip groups.
  • Allow to copy the program membership number on reservation pages as well.
  • Added enough space at the end of the journey details view so floating buttons don’t overlap relevant content.
  • Added floating button to timeline page to navigate to the current element and for manually adding entries.
  • Fixed current ticket selection for elements without known arrival times.
  • Fixed retaining journey notes/vehicle layouts when getting partial trip updates.
  • Fixed displaying of departure notes for train trips.
  • Fixed displaying of public transport departure disruptions.
How you can help

Feedback and travel document samples are very much welcome, as are all other forms of contributions. Feel free to join us in the KDE Itinerary Matrix channel.

Categories: FLOSS Project Planets
