Feeds

Melissa Wen: Display/KMS Meeting at XDC 2024: Detailed Report

Planet Debian - Tue, 2024-11-19 08:00

XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.

Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.

Short on Time?

  1. Catch the lightning talk summarizing the meeting here (you can even watch it at 2x speed).
  2. For a quick written summary, scroll down to the TL;DR section.
TL;DR

This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (the Linux kernel's Direct Rendering Manager / Kernel Mode Setting):

  • Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
  • Real-Time Scheduling: Addressing issues with non-blocking page flips encountering SIGKILLs under real-time scheduling.
  • HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes.
  • Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
  • Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.
Bringing together Linux display developers at XDC 2024

While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.

Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)

Link: https://indico.freedesktop.org/event/6/contributions/383/

Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed follow-ups to talks with new ideas and ongoing community efforts.

The final agenda covered five topics in the scheduled order:

  1. How to share drivers between V4L2 and DRM for bridge-like components (new topic);
  2. Real-time Scheduling (problems encountered after the Display Next hackfest);
  3. HDR/Color Management (ofc);
  4. Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
  5. (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).
Unpacking the Topics

Similar to the hackfest, the meeting agenda evolved over the conference. During the three-hour meeting, I coordinated the room and the discussion rounds, and Rodrigo Siqueira took notes and contacted key developers so we could provide a detailed report of the many topics discussed.

From his notes, let’s dive into the key discussions!

How to share drivers between V4L2 and KMS for bridge-like components

Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.

  • Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
  • Potential Solutions:
    1. Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
    2. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
    3. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.
Real-Time Scheduling Challenges

We discussed real-time scheduling during this year's Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while making progress on this front.

  • Context: Non-blocking page flips can, on rare occasions, take a long time and, for that reason, receive a SIGKILL if the thread performing the atomic commit runs under a real-time scheduling policy.
  • Action items:
    • Explore alternative ways to collect backtraces during the busy wait (e.g., ftrace).
    • Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).
HDR/Color Management

This is a well-known topic with ongoing effort on all layers of the Linux display stack; it has been discussed online and in person at conferences and meetings over the past few years.

Here’s a breakdown of the key points raised at this meeting:

  • Talk: Color operations for Linux color pipeline on AMD devices: On the previous day, Alex Hung (AMD) had presented the implementation of this API in the AMD display driver.
  • NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs.
  • Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now.
  • Upstream Patches: The relevant upstream patches can be found here. [As humorously noted, this series is eagerly awaiting your “Acked-by” (approval).]
  • Compositor Side: The compositor developers have also made significant progress.
    • KDE has already implemented and validated the API through an experimental implementation in Kwin.
    • Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. Action point: port Gamescope to the generic KMS API.
    • Weston has also begun exploring implementation, and we might see something from them by the end of the year.
  • Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s efforts, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place.

Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!

Display Mux

During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.

  • Key Considerations:
    • HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here.
    • Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured?
    • Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
  • Action points:
    • Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
    • Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.

Towards Better Commit Failure Feedback

In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.

  • Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.

To address this issue, we discussed several potential improvements:

  • Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
  • Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
  • Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution.

By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.

A Big Thank You!

Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.

This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.

Stay tuned for future updates!

Categories: FLOSS Project Planets

Ned Batchelder: Loop targets

Planet Python - Tue, 2024-11-19 05:40

I posted a Python tidbit about how for loops can assign to things other than simple variables, and many people were surprised or even concerned:

params = {
    "query": QUERY,
    "page_size": 100,
}

# Get page=0, page=1, page=2, ...
for params["page"] in itertools.count():
    data = requests.get(SEARCH_URL, params).json()
    if not data["results"]:
        break
    ...

This code makes successive GET requests to a URL, with a params dict as the data payload. Each request uses the same data, except the “page” item is 0, then 1, 2, and so on. It has the same effect as if we had written it:

for page_num in itertools.count():
    params["page"] = page_num
    data = requests.get(SEARCH_URL, params).json()

One reply asked if there was a new params dict in each iteration. No, loops in Python do not create a scope, and never make new variables. The loop target is assigned to exactly as if it were an assignment statement.
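You can see this for yourself: the loop target is an ordinary variable, so it keeps its last value after the loop ends:

for i in range(3):
    pass

print(i)  # 2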

As a Python Discord helper once described it,

While loops are “if” on repeat. For loops are assignment on repeat.

A loop like for <ANYTHING> in <ITER>: will take successive values from <ITER> and do an assignment exactly as this statement would: <ANYTHING> = <VAL>. If the assignment statement is ok, then the for loop is ok.

We’re used to seeing for loops that do more than a simple assignment:

for i, thing in enumerate(things):
    ...

for x, y, z in zip(xs, ys, zs):
    ...

These work because Python can assign to a number of variables at once:

i, thing = 0, "hello"
x, y, z = 1, 2, 3

Assigning to a dict key (or an attribute, or a property setter, and so on) in a for loop is an example of Python having a few independent mechanisms that combine in uniform ways. We aren’t used to seeing exotic combinations, but you can reason through how they would behave, and you would be right.
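For example, here is a small sketch that assigns to an attribute on each iteration (Cursor is a made-up class; any object that accepts attribute assignment works the same way):

class Cursor:
    pass

cursor = Cursor()

# Each iteration does the equivalent of: cursor.x = value
for cursor.x in range(3):
    print(cursor.x)  # 0, then 1, then 2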

You can assign to a dict key in an assignment statement, so you can assign to it in a for loop. You might decide it’s too unusual to use, but it is possible and it works.

Categories: FLOSS Project Planets

Zato Blog: IMAP and OAuth2 Integrations with Microsoft 365

Planet Python - Tue, 2024-11-19 03:00
IMAP and OAuth2 Integrations with Microsoft 365 2024-11-19, by Dariusz Suchojad

Overview

This is the first in a series of articles about automation of and integrations with Microsoft 365 cloud products using Python and Zato.

We start off with IMAP automation by showing how to create a scheduled Python service that periodically pulls latest emails from Outlook using OAuth2-based connections.

IMAP and OAuth2

Microsoft 365 requires all IMAP connections to use OAuth2. This can be challenging to configure in server-side automation and orchestration processes, so Zato offers an easy way that lets you read and send emails without needing to get into low-level OAuth2 details.

Consider a common orchestration scenario: a business partner sends automated emails with attachments that need to be parsed, so that some information can be extracted and processed accordingly.

Before OAuth2, an automation process would receive from Azure administrators a dedicated IMAP account with a username and password.

Now, however, in addition to creating an IMAP account, administrators will need to create and configure a few more resources that the orchestration service will use. Note that the password to the IMAP account will never be used.

Administrators need to:

  • Register an Azure client app representing your service that uses IMAP
  • Grant this app a couple of Microsoft Graph application permissions:
    • Mail.ReadWrite
    • Mail.Send

Next, administrators need to give you a few pieces of information about the app:

  • Application (client) ID
  • Tenant (directory) ID
  • Client secret

Additionally, you still need to receive the IMAP username (an e-mail address). It is just that you do not need its corresponding password.

In Dashboard

The first step is to create a new connection in your Zato Dashboard - this will establish an OAuth2-based connection that Zato will manage. Your Python code will not have to do anything else: all the underlying OAuth2 tokens will keep refreshing as needed, and the platform will take care of everything.

Having received the configuration details from Azure administrators, you can open your Zato Dashboard and navigate to IMAP connections:

Fill out the form as below, choosing "Microsoft 365" as the server type. The other type, "Generic IMAP", is used for the classical case of IMAP with a username and password:

Change the secret and click Ping to confirm that the connection is configured correctly:

In Python

Use the code below to receive emails. Note that it merely needs to refer to a connection definition by its name and there is no need for any usage of OAuth2 here:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):
    def handle(self):

        # Connect to a Microsoft 365 IMAP connection by its name ..
        conn = self.email.imap.get('My Automation').conn

        # .. get all messages matching filter criteria ("unread" by default) ..
        for msg_id, msg in conn.get():

            # .. and access each of them.
            self.logger.info(msg.data)

This is everything that is needed for integrations with IMAP using Microsoft 365 although we can still go further. For instance, to create a scheduled job to periodically invoke the service, go to the Scheduler job in Dashboard:

In this case, we decide to have a job that runs once per hour:

As expected, clicking OK will suffice for the job to start in background. It is as simple as that.

More resources

➤ Python API integration tutorial
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?

More blog posts
Categories: FLOSS Project Planets

Specbee: Your essential guide to Multilingual SEO and Hreflang (and how Drupal makes it easier)

Planet Drupal - Tue, 2024-11-19 01:26
Multilingual websites can attract a wider audience! Read this blog to strengthen your technical knowledge about multilingual SEO and the impact of hreflang tags.
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppArmadillo 14.2.0-1 on CRAN: New Upstream Minor

Planet Debian - Mon, 2024-11-18 17:31

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1191 other packages on CRAN, downloaded 37.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 603 times according to Google Scholar.

Conrad released a minor version 14.2.0 a few days ago after we spent about two weeks with several runs of reverse-dependency checks covering corner cases. After a short delay at CRAN (due to a false positive on a test, a package failing tests that had also failed under the previous version, and some concern over new deprecation warnings when using the headers directly, as e.g. the mlpack R package does), we are now on CRAN. I noticed a missing feature under large ‘64bit word’ (for large floating-point matrices) and added an exporter for icube going to double to support the 64-bit integer range (as we already did, of course, for vectors and matrices). Changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.0-1 (2024-11-16)
  • Upgraded to Armadillo release 14.2.0 (Smooth Caffeine)

    • Faster handling of symmetric matrices by inv() and rcond()

    • Faster handling of hermitian matrices by inv(), rcond(), cond(), pinv(), rank()

    • Added solve_opts::force_sym option to solve() to force the use of the symmetric solver

    • More efficient handling of compound expressions by solve()

  • Added exporter specialisation for icube for the ARMA_64BIT_WORD case

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Nonprofit Drupal posts: November Drupal for Nonprofits Chat

Planet Drupal - Mon, 2024-11-18 15:29

Join us THURSDAY, November 21 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)

We don't have anything specific on the agenda this month, so we'll have plenty of time to discuss anything that's on our minds at the intersection of Drupal and nonprofits.  Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

This free call is sponsored by NTEN.org and open to everyone. 

  • Join the call: https://us02web.zoom.us/j/81817469653

    • Meeting ID: 818 1746 9653
      Passcode: 551681

    • One tap mobile:
      +16699006833,,81817469653# US (San Jose)
      +13462487799,,81817469653# US (Houston)

    • Dial by your location:
      +1 669 900 6833 US (San Jose)
      +1 346 248 7799 US (Houston)
      +1 253 215 8782 US (Tacoma)
      +1 929 205 6099 US (New York)
      +1 301 715 8592 US (Washington DC)
      +1 312 626 6799 US (Chicago)

    • Find your local number: https://us02web.zoom.us/u/kpV1o65N

  • Follow along on Google Docs: https://nten.org/drupal/notes

View notes of previous months' calls.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #476 - Off The Cuff #10

Planet Drupal - Mon, 2024-11-18 15:00

Today we are talking about some things that are on our minds, including the DOJ Accessibility ruling, Drupal CMS Event Recipes, and tooling for core development, with our hosts. We’ll also cover @font-your-face as our module of the week.

For show notes visit: https://www.talkingDrupal.com/476

Topics
  • DOJ Accessibility Ruling
  • Drupal CMS
  • Tooling for core development
  • Open University
Resources

Guests

Martin Anderson-Clutz - mandclu.com mandclu

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Martin Anderson-Clutz - mandclu.com mandclu
Joshua "Josh" Mitchell - joshuami.com joshuami

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to add and manage web fonts for your Drupal site, directly within the admin interface? There’s a module for that.
  • Module name/project name:
  • Brief history
    • How old: created in May 2010 by Scott Reynen, but the most recent release was by Henrique Mendes (hmendes) of CI&T
    • Versions available: 7.x-2.8 and 4.0.0, the latter of which supports Drupal 9.4 and 10.
  • Maintainership
    • Actively maintained
    • Security coverage
    • Test coverage
    • Documentation, but looks like it might be ready for a refresh
    • Number of open issues: 48 open issues, 8 of which are bugs against the current branch
  • Usage stats:
    • 32,213 sites
  • Module features and usage
    • The module provides an interface to browse fonts from Google, Adobe, Typekit, and more
    • License restrictions for fonts are clearly indicated
    • When you find a font you want to use, you just click “enable”. You don’t need to write any CSS or define a library, and it’s easy to mix-and-match fonts from different providers. It can even make it easier to include your own local fonts
    • The module includes submodules for the different font providers, so you enable the submodules based on where you want to use fonts from
    • Then you can import the fonts for those providers, though you do need an API key to import fonts from Google
    • The module does also have an API, so you can write your own modules to integrate with other font providers, or access the information about available fonts
Categories: FLOSS Project Planets

C.J. Collier: Managing HPE SAS Controllers

Planet Debian - Mon, 2024-11-18 14:21

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.

hpacucli is the wrong command. Use ssacli instead.

$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
    882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
    57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
    476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
      curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}"
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)
pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1
pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
    | sudo dd of=/etc/apt/sources.list.d status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60 min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)

   Internal Drive Cage at Port 1I, Box 2, OK
   Internal Drive Cage at Port 2I, Box 2, OK
   Port Name: 1I (Mixed)
   Port Name: 2I (Mixed)
   Array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (1.64 TB, RAID 6, OK)
      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)
   SEP (Vendor ID HPE, Model Smart Adapter) 379 (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Array A
      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238
Categories: FLOSS Project Planets

Python Morsels: Python's pathlib module

Planet Python - Mon, 2024-11-18 12:00

Python's pathlib module is the tool to use for working with file paths. See pathlib quick reference tables and examples.

Table of contents

  1. A pathlib cheat sheet
  2. The open function accepts Path objects
  3. Why use a pathlib.Path instead of a string?
  4. The basics: constructing paths with pathlib
  5. Joining paths
  6. Current working directory
  7. Absolute paths
  8. Splitting up paths with pathlib
  9. Listing files in a directory
  10. Reading and writing a whole file
  11. Many common operations are even easier
  12. No need to worry about normalizing paths
  13. Built-in cross-platform compatibility
  14. A pathlib conversion cheat sheet
  15. What about things pathlib can't do?
  16. Should strings ever represent file paths?
  17. Use pathlib for readable cross-platform code

A pathlib cheat sheet

Below is a cheat sheet table of common pathlib.Path operations.

The variables used in the table are defined here:

>>> from pathlib import Path
>>> path = Path("/home/trey/proj/readme.md")
>>> relative = Path("readme.md")
>>> base = Path("/home/trey/proj")
>>> new = Path("/home/trey/proj/sub")
>>> target = path.with_suffix(".txt")  # .md -> .txt
>>> pattern = "*.md"
>>> name = "sub/f.txt"

| Path-related task        | pathlib approach        | Example                                |
|--------------------------|-------------------------|----------------------------------------|
| Read all file contents   | path.read_text()        | 'Line 1\nLine 2\n'                     |
| Write file contents      | path.write_text('new')  | Writes new to file                     |
| Get absolute file path   | relative.resolve()      | Path('/home/trey/proj/readme.md')      |
| Get the filename         | path.name               | 'readme.md'                            |
| Get parent directory     | path.parent             | Path('/home/trey/proj')                |
| Get file extension       | path.suffix             | '.md'                                  |
| Get suffix-free name     | path.stem               | 'readme'                               |
| Ancestor-relative path   | path.relative_to(base)  | Path('readme.md')                      |
| Verify path is a file    | path.is_file()          | True                                   |
| Make new directory       | new.mkdir()             | Makes new directory                    |
| Get current directory    | Path.cwd()              | Path('/home/trey/proj')                |
| Get home directory       | Path.home()             | Path('/home/trey')                     |
| Get all ancestor paths   | path.parents            | [Path('/home/trey/proj'), ...]         |
| List files/directories   | base.iterdir()          | [Path('/home/trey/proj/readme.md'), ...] |
| Find files by pattern    | base.glob(pattern)      | [Path('/home/trey/proj/readme.md')]    |
| Find files recursively   | base.rglob(pattern)     | [Path('/home/trey/proj/readme.md')]    |
| Join path parts          | base / name             | Path('/home/trey/proj/sub/f.txt')      |
| Get file size (bytes)    | path.stat().st_size     | 14                                     |
| Walk the file tree       | base.walk()             | Iterable of (path, subdirs, files)     |
| Rename file to new path  | path.rename(target)     | Path object for new path               |
| Remove file              | path.unlink()           | Removes the file                       |

Note that iterdir, glob, rglob, and walk all return iterators. The examples above show lists for convenience.
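To tie a few of these operations together, here's a small sketch that finds Markdown files and prints their sizes; it assumes a proj directory exists and contains some *.md files:

from pathlib import Path

base = Path("proj")  # assumed: an existing directory with Markdown files

# Find Markdown files and print each one's name and size in bytes.
for md_path in sorted(base.glob("*.md")):
    print(md_path.name, md_path.stat().st_size)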

The open function accepts Path objects

What does Python's open function …

Read the full article: https://www.pythonmorsels.com/pathlib-module/
Categories: FLOSS Project Planets

Philipp Kern: debian.org now supports Security Key-backed SSH keys

Planet Debian - Mon, 2024-11-18 11:43

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.

This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.

As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine or to a lot of machines at once that should be a nice compromise of usability vs. security.

The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.

Someone™ should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work an ecdsa key - and make a backup copy of the resulting on-disk key file. And copy that around to other devices (or OSes) that require access to the key.

Categories: FLOSS Project Planets

Real Python: Interacting With Python

Planet Python - Mon, 2024-11-18 09:00

There are multiple ways of interacting with Python, and each can be useful for different scenarios. You can quickly explore functionality in Python’s interactive mode using the built-in Read-Eval-Print Loop (REPL), or you can write larger applications to a script file using an editor or Integrated Development Environment (IDE).

In this tutorial, you’ll learn how to:

  • Use Python interactively by typing code directly into the interpreter
  • Execute code contained in a script file from the command line
  • Work within a Python Integrated Development Environment (IDE)
  • Assess additional options, such as the Jupyter Notebook and online interpreters

Before working through this tutorial, make sure that you have a functioning Python installation at hand. Once you’re set up with that, it’s time to write some Python code!

Get Your Code: Click here to get the free sample code that you’ll use to learn about interacting with Python.

Take the Quiz: Test your knowledge with our interactive “Interacting With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

Interacting With Python

In this quiz, you'll test your understanding of the different ways of interacting with Python. By working through this quiz, you'll revisit key concepts related to Python interaction in interactive mode using the REPL, through Python script files, and within IDEs and code editors.

Hello, World!

There’s a long-standing custom in computer programming that the first code written in a newly installed language is a short program that displays the text Hello, World! to the console.

In Python, running a “Hello, World!” program only takes a single line of code:

Python print("Hello, World!") Copied!

Here, print() will display the text Hello, World! in quotes to your screen. In this tutorial, you’ll explore several ways to execute this code.

Running Python in Interactive Mode

The quickest way to start interacting with Python is in a Read-Eval-Print Loop (REPL) environment. This means starting up the interpreter and typing commands to it directly.

When you interact with Python in this way, the interpreter will:

  • Read the command you enter
  • Evaluate and execute the command
  • Print the output (if any) to the console
  • Loop back and repeat the process

The interactive session continues like this until you instruct the interpreter to stop. Using Python in this interactive mode is a great way to test short snippets of Python code and get more familiar with the language.
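For example, entering an expression at the prompt makes the interpreter evaluate it and print the result right away:

>>> 2 + 3
5
>>> "Hello, " + "World!"
'Hello, World!'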

When you install Python using an installer, the Start menu shows a program group labeled Python 3.x. The label may vary depending on the particular installation you chose. Click on that item to start the Python interpreter.

Alternatively, you can open your Command Prompt or PowerShell application and type the py command to launch it:

Windows PowerShell PS> py Copied!

On macOS or Linux, open your Terminal application and type python3 to launch the interpreter from the command line:

$ python3

If you’re unfamiliar with this application, then you can use your operating system’s search function to find it.

After pressing Enter, you should see a response from the Python interpreter similar to the one below:

Python 3.13.0 (main, Oct 14 2024, 10:34:31) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

If you’re not seeing the >>> prompt, then you’re not talking to the Python interpreter. This could be because Python is either not installed or not in the path of your terminal window session.

Note: If you need additional help to get to this point, then you can check out the How to Install Python on Your System: A Guide tutorial.

If you’re seeing the prompt, then you’re off and running! With these next steps, you’ll execute the statement that displays "Hello, World!" to the console:

  1. Ensure that Python displays the >>> prompt, and that you position your cursor after it.
  2. Type the command print("Hello, World!") exactly as shown.
  3. Press the Enter key.
Read the full article at https://realpython.com/interacting-with-python/ »


Categories: FLOSS Project Planets

1xINTERNET blog: Open-source innovation: Drupal Recipes and the upcoming Drupal CMS

Planet Drupal - Mon, 2024-11-18 07:00

Explore how the integration of Drupal Recipes is transforming Drupal development by simplifying configurations and enhancing reusability. Learn how these innovations are setting the stage for the highly anticipated release of the official Drupal CMS in January 2025.

Categories: FLOSS Project Planets

1xINTERNET blog: Open-source innovation: embracing Symfony Recipes and the upcoming Drupal CMS

Planet Drupal - Mon, 2024-11-18 06:24

Explore how the integration of Symfony Recipes is transforming Drupal development by simplifying configurations and enhancing reusability. Learn how these innovations are setting the stage for the highly anticipated release of the official Drupal CMS in January 2025.

Categories: FLOSS Project Planets

Python Software Foundation: Help power Python and join in the PSF year-end fundraiser & membership drive!

Planet Python - Mon, 2024-11-18 04:56



The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Packaging Index (PyPI), awarding grants to Python initiatives worldwide, maintaining critical community infrastructure, and more.

To build the future of Python and sustain the thriving community that its users deserve, we need your help. By backing the PSF, you’re investing in Python’s growth and health, and your contributions directly impact the language's future. Is your community, work, or hobby powered by Python? Join this year’s drive and power Python’s future with us by donating or becoming a Supporting Member today.

There are three ways to join in:

  1. Save on PyCharm! JetBrains is once again supporting the PSF by providing a 30% discount on PyCharm and ALL proceeds will go to the PSF! You can take advantage of this discount by clicking the button on the PyCharm promotion page and the discount will be automatically applied when you check out. The promotion will only be available through November 30th, 2024, so make sure to grab the deal today!

  2. Donate to the PSF! Your donation is a direct way to support and power the future of the Python programming language and community you love. Every dollar makes a difference.

  3. Become a Supporting member! When you sign up as a Supporting Member of the PSF, you become a part of the PSF, are eligible to vote in PSF elections and help us sustain what we do with your annual support. You can sign up as a Supporting Member at the usual annual rate ($99 USD), or you can take advantage of our sliding scale option (starting at $25 USD)! We don't want cost to be a barrier to you being a part of the PSF or to your voice helping direct our future. Every PSF member makes the Python community stronger!

  Your donations:

      • Keep Python thriving 
      • Support CPython and PyPI progress 
      • Increase security across the Python ecosystem 
      • Bring the global Python community together 
      • Make our community more diverse and robust every year

      Highlights from 2024:

      • A record-making PyCon US - We produced the 21st PyCon US, in Pittsburgh, US, and online, and it was a huge success! For the first time post-2020, PyCon US 2024 sold out with over 2,500 in-person attendees.
      • Advances in our Grants Program - 2024 has been a year of change and reflection for the Grants Program, starting with the addition of Marie Nordin to the grants administration team who has supported the PSF in launching several new grants initiatives. We set up Grants Program Office Hours, published a Grants Program Transparency Report for 2022 and 2024, invested in a third-party retrospective, launched a major refresh of all areas of our Grants program and updated our Grants Workgroup Charter. With more changes to come, we are thrilled to share that we awarded a record-breaking amount of grant funds in 2024!
      • Empowering the Python community through Fiscal Sponsorship - We are proud to continue supporting our 20 fiscal sponsoree organizations with their initiatives and events all year round. The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees such as PyLadies and Pallets, and provides back office support so they can focus on their missions. Consider donating to your favorite PSF Fiscal Sponsoree and check out our Fiscal Sponsorees page to learn more about what each of these awesome organizations is all about!
      • Connecting directly through Office Hours - The current PSF Board has decided to invest more in connecting and serving the global Python community by establishing a forum to have regular conversations. The board members of the PSF with the support of PSF staff are now holding monthly PSF Board Office Hours on the PSF Discord. The Office Hours are sessions where folks from the community can share with us how we can help your regional community, express perspectives, and provide feedback for the PSF.
      • Paying more engineers to work directly on Python, PyPI, and security - We welcomed Petr Viktorin, Deputy Developer in Residence (DiR), and Serhiy Storchaka, Supporting DiR. It’s been exciting to begin to realize the full vision of the DiR program, with special thanks to Bloomberg for making it possible for us to bring Petr on board. The DiR team is taking an active role in shaping the development of the language, and with three people on the team each DiR can now also spend a percentage of their time on feature work aligned with their interests.
      • Continuing to enhance Python’s security through Developers-in-Residence - Seth Larson, PSF Security Developer in Residence (DiR) had a busy year thanks to continued support from Alpha-Omega. Seth worked on a variety of projects including the creation of SBOMs for Source and Windows CPython artifacts, implementing build reproducibility for CPython source artifacts, and auditing and migrating Sigstore, to name just a few. Check out Seth's blog to keep up to date with his work. Mike Fiedler, PyPI Safety & Security Engineer, also worked on a variety of projects such as two-factor authentication for all users on PyPI, an audit of PyPI, made significant progress on malware response and reporting, collaborated on the PSF’s submission for the Cybersecurity and Infrastructure Security Agency (CISA)’s Request for Information (RFI), and more! Thanks to AWS and Georgetown for making Mike’s PyPI security accomplishments possible. Stay up to date with Mike's work on the PyPI blog.
      • New PSF Staff dedicated to critical infrastructure - We established the PyPI Support Specialist role, filled by Maria Ashna. Over the past 23 years, PyPI has seen essentially exponential growth in traffic and users, relying for the most part on volunteers to support it. The load far outstretched volunteers and prior staff capacity, so we are very excited to have Maria on board. We also filled our Infrastructure Engineer role, welcoming Jacob Coffee to the team, to ensure PSF-maintained systems and services are running smoothly.
       Our thanks!

      Every dollar you contribute to the PSF helps power Python, makes an impact, and tells us you value Python and the work we do. Python and the PSF are built on the amazing generosity and energy of all our amazing community members out there who step up and give back.

      We appreciate you and we’re so excited to see where we can go together in the year to come!

      Categories: FLOSS Project Planets

      Python Bytes: #410 Entering the Django core

      Planet Python - Mon, 2024-11-18 03:00
      <strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://buttondown.com/carlton/archive/thoughts-on-djangos-core/?featured_on=pythonbytes">Thoughts on Django’s Core</a></strong></li> <li><strong><a href="https://pypi.org/project/futurepool/?featured_on=pythonbytes">futurepool</a></strong></li> <li><strong><a href="https://snarky.ca/dont-use-named-tuples-in-new-apis/?featured_on=pythonbytes">Don't return named tuples in new APIs</a></strong></li> <li><strong><a href="https://ziglang.org/news/migrate-to-self-hosting/?featured_on=pythonbytes">Ziglang: Migrating from AWS to Self-Hosting</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=j-q31u9G3Ds' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="410">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy"><strong>@mkennedy@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes"><strong>@mkennedy.codes</strong></a></li> <li>Brian: <a href="https://fosstodon.org/@brianokken"><strong>@brianokken@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes"><strong>@brianokken.bsky.social</strong></a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes"><strong>@pythonbytes@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/pythonbytes.bsky.social"><strong>@pythonbytes.bsky.social</strong></a></li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it. 
</p> <p><strong>Brian #1:</strong> <a href="https://buttondown.com/carlton/archive/thoughts-on-djangos-core/?featured_on=pythonbytes">Thoughts on Django’s Core</a></p> <ul> <li>Carlton Gibson</li> <li>Great discussion on <ul> <li>Django and Core vs Plugins</li> <li>Sustainability with limited people</li> <li>Keeping core small</li> <li>The release cycle</li> <li>eembrace plugins vs endorsing plugins.</li> </ul></li> </ul> <p><strong>Michael #2:</strong> <a href="https://pypi.org/project/futurepool/?featured_on=pythonbytes">futurepool</a></p> <ul> <li>via Pat Decker</li> <li>Takes the concept of multiprocessing Pool to the async/await world.</li> <li><p>Create a pool then delegate the work:</p> <pre><code>async with FuturePool(2) as fp: result = await fp.map(async_pool_fn, range(10)) </code></pre></li> <li><p>I would LOVE to see something like this in a broader background asyncio worker pool concept.</p></li> <li>But that concept doesn’t exist in asyncio in Python and that’s a failing of the framework IMO.</li> </ul> <p><strong>Brian #3:</strong> <a href="https://snarky.ca/dont-use-named-tuples-in-new-apis/?featured_on=pythonbytes">Don't return named tuples in new APIs</a></p> <ul> <li>Brett Cannon</li> <li>First off, I’m grateful for any post that talks about APIs and the API is a module, class, or package API and not a Web/REST API. The term API existed long before the internet.</li> <li>“e.g., get_mouse_position() very likely has a two-item tuple of X and Y coordinates of the screen”</li> <li>“it actually makes your API more complex for both you and your users <em>to use</em>. For you, it doubles the data access API surface for your return type as you have to now support index-based and attribute-based data access forever (or until you choose to break your users and change your return type so it doesn't support both approaches)”</li> <li>“… you probably don't want people doing with your return type, like slicing, iterating over all the items …”</li> <li>Alternatives <ul> <li>class</li> <li>dataclass</li> <li>dictionary</li> <li>TypedDict</li> <li>SimpleNamespace</li> </ul></li> <li>“My key point in all of this is to prefer readability and ergonomics over brevity in your code. That means avoiding named tuples except where you are expanding to tweaking an existing API where the named tuple improves over the plain tuple that's already being used.”</li> </ul> <p><strong>Michael #4:</strong> <a href="https://ziglang.org/news/migrate-to-self-hosting/?featured_on=pythonbytes">Ziglang: Migrating from AWS to Self-Hosting</a></p> <ul> <li>The Rust Foundation for example, reports that they spent $404,400 on infrastructure costs in 2023.</li> <li>Zig lang has decided to use a single big cloud machine + mirrors</li> </ul> <p><strong>Extras</strong> </p> <p>Brian:</p> <ul> <li>Changing the Python Test community <ul> <li>Was started to answer questions for Test &amp; Code listeners years ago. </li> <li>Primarily pytest questions</li> <li>Used to be Slack. Then moved to Podia forum. 
</li> <li>Now I’m trying to work out a Discord solution that is both sustainable and usable.</li> </ul></li> </ul> <p>Michael:</p> <ul> <li><a href="https://bsky.app/profile/wang.social/post/3lb346uyzdc2r?featured_on=pythonbytes">PWang Bsky essay</a></li> <li><a href="https://theworkitem.com/blog/building-a-business-from-python-expertise-michael-kennedy/?featured_on=pythonbytes">Building A Business From Python Expertise - Michael Kennedy on Work Item Podcast</a></li> <li>Subscribe to package releases, just put .atom on the end of their releases URL, for example: <ul> <li><a href="https://github.com/mikeckennedy/jinja_partials/releases?featured_on=pythonbytes">github.com/mikeckennedy/jinja_partials/releases</a> ← add .atom for RSS</li> </ul></li> <li><a href="https://pypi.org/project/pytest-bdd/8.0.0/#data">pytest-bdd 8.0.0</a> was just released via Jamie Thomson <ul> <li>The big feature (in Jamie’s opinion) is the addition of data tables https://github.com/pytest-dev/pytest-bdd/blob/master/CHANGES.rst#800---2024-11-14</li> </ul></li> </ul> <p><strong>Joke:</strong> <a href="https://devhumor.com/media/breaking-javascript-developer-commits-to-framework-for-record-breaking-3-weeks?featured_on=pythonbytes">Breaking: JavaScript Developer Commits to Framework for Record-Breaking 3 Weeks</a></p>
      Categories: FLOSS Project Planets

      Open Data and Open Source AI: Charting a course to get more of both

      Open Source Initiative - Mon, 2024-11-18 03:00

      While working to define Open Source AI, we realized that data governance is an unresolved issue. The Open Source Initiative organized a workshop to discuss data sharing and governance for AI training. The critical question posed to attendees was “How can we best govern and share data to power Open Source AI?” The main objective of this workshop was to establish specific approaches and strategies for both Open Source AI developers and other stakeholders.

      The Workshop: Building bridges across “Open” streams  

      Held on October 10-11, 2024, and hosted by Linagora’s Villa Good Tech, the OSI workshop brought together 20 experts from diverse fields and regions. Funded by the Alfred P. Sloan Foundation, the event focused on actionable steps to align open data practices with the goals of Open Source AI.  

      Participants, listed below, comprised academics, civil society leaders, technologists, and representatives from organizations like Mozilla Foundation, Creative Commons, EleutherAI Institute and others. 

      Over two days, the group worked to frame a cohesive approach to data governance. Alek Tarkowski and Paul Keller of the Open Future Foundation are working with OSI to complete the white paper summarizing the group’s work. In the meantime, here is a quick “tease”—just a few of the many topics that the group discussed:  

      The streams of “open” merge, creating waves

      AI is where Open Source software, open data, open knowledge, and open science meet in a new way. Since OpenAI released ChatGPT, what once were largely parallel tracks with occasional junctures are now a turbulent merger of streams, creating ripples in all of these disciplines and forcing us to reassess our principles: How do we merge these streams without eroding the principles of transparency and access that define openness?

We discovered in the process of defining Open Source AI that the basic freedoms we’ve put in the Open Source Definition and its foundation, the Free Software Definition, are still good and relevant. Open Source software has had decades to mature into a structured ecosystem with clear rules, tools, and legal frameworks. Same with Open Knowledge and Open Science: while rooted in age-old traditions, open knowledge and science have seen modern rejuvenation through platforms like Wikipedia and the Open Knowledge Foundation. Open data, however, feels less solid: often serving as a one-way pipeline from public institutions to private profiteers, it is now being dragged into whole new territory.

      How are these principles of “open” interacting with each other, how are we going to merge Open Data with Open Source with Open Science and Open Knowledge in Open Source AI?

      The broken social contract of data 

      Data fuels AI. The sheer scale of data required to train models like ChatGPT reveals not just a technological challenge but also a societal dilemma. Much of this data comes from us—the blogs we write, the code we share, the information we give freely to platforms. 

      OpenAI, for example, “slurps” all the data it can find, and much of it is what we willingly give: the blogs we write; the code we share; the pictures, emails and address books we keep in “the cloud”; and all the other information we give freely to platforms. 

      We, the people, make the “data,” but what are we getting in exchange? OpenAI owns and controls the machine built with our data, and it grants us access via API, until it changes its mind. We are essentially being stripmined for a proprietary system that grants access at a price—until the owner decides otherwise.

      We need a different future, one where data empowers communities, not just corporations. That starts with revisiting the principles of openness that underpin the open source, open science, and open knowledge movements. The question is: How do we take back control?  

      Charting a path forward  

      We want the machine for ourselves. We want machines that the people can own and control. We need to find a way to swing the pendulum back to our meaning of Open. And it’s all about the “data.”

      The OSI’s work on the Open Source AI Definition provides a starting point. An Open Source AI machine is one that the people can meaningfully fork without having to ask for permission. For AI to truly be open, developers need access to the same tools and data as the original creators. That means transparent training processes, open filtering code, and, critically, open datasets. 

Group photo of the participants in the workshop on data governance, Paris, October 2024.

Next steps

      The white paper, expected in December, will synthesize the workshop’s discussions and propose concrete strategies for data governance in Open Source AI. Its goal is to lay the groundwork for an ecosystem where innovation thrives without sacrificing openness or equity.  

      As the lines between “open” streams continue to blur, the choices we make now will define the future of AI. Will it be a tool controlled by a few, or a shared resource for all?  

      The answer lies in how we navigate the waves of data and openness. Let’s get it right. 

      Categories: FLOSS Research

      This Week in KDE Apps: Python bindings

      Planet KDE - Mon, 2024-11-18 02:20

      Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps.

This week, we release the first beta of what will become KDE Gear 24.12.0. If your distro provides testing packages, please help with testing. Meanwhile, as part of the 2024 end-of-year fundraiser, you can "Adopt an App" in a symbolic effort to support your favorite KDE app.

This week, we are particularly grateful to George Fakidis, tmpod, and Paxriel for showing their support for Okular; Ian Lohmann, Anthony Perrett, Linus Seelinger, and Nils Martens for Dolphin; Erik Bernoth for Arianna; and Daniel Lloyd-Miller and mdPlusPlus for KDE Connect.

      Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world. So consider donating today!

      Getting back to all that's new in the KDE App scene, let's dig in!

      Global Changes

      KWidgetsAddons, a collection of add-on widgets for QtWidgets, and KUnitConversion now have Python bindings. (Manuel Alcaraz Zambrano, KDE Frameworks 6.9.0. Link 1 and link 2)

      Call to Action

KDE has over 70 frameworks, and we need your help to add Python bindings to the relevant remaining frameworks. See metatask.

      The "About" page of Kirigami applications now provides helpful "Copy" button that lets you copy system information, which can be useful when filling a bug report. The same feature was also implemented for QtWidgets-based applications. (Carl Schwan, Kirigami Addons 1.6.0 and KDE Frameworks 6.9. Link for Kirigami apps and link for QtWidget apps)

Additionally, Joshua added icons to the "Getting involved", "Donate", and other actions for the Kirigami version. (Joshua Goins, Kirigami Addons 1.6.0. Link)

      The "share" context menu of many applications can now copy the data to clipboard. (Aleix Pol Gonzalez, Frameworks 6.9.0. Link)

      Alligator RSS feed reader

      Alligator now lets you bookmark your favorite posts. (Soumyadeep Ghosh, 25.04.0. Link)

      The selected feed will also now be highlighted correctly and the text of an article can now be selected and copied. (Soumyadeep Ghosh, 24.12.0. Link and link 2)

      AudioTube YouTube Music app

      Fix parsing certain playlists. (Eren Karakas, 24.12.0. Link)

      Clock Keep time and set alarms

      Fix a crash of the Clock Daemon when waking up. (Devin Lin, 24.12.0. Link)

      digiKam Photo Management Program

digiKam 8.5.0 is out! This release improves the Face Management system, adds colored labels to identify important items, increases the number of supported languages to 61, and fixes over 160 bugs.

      Read the full announcement

      Dolphin Manage your files

When Dolphin is started on a location which does not report a storage size (for example "remote" or "bluetooth"), the status bar will no longer pointlessly show an empty storage size indicator for a split second before hiding it again. (Felix Ernst, 25.04.0. Link)

      Gwenview Image Viewer

      We fixed a bug where indexed-color or monochrome-palette images (e.g. from pngquant) would render with garbled colors or black and white noise when zoomed. (Tabby Kitten, 24.12.0. Link)

      KDE Itinerary Digital travel assistant

      Itinerary's Matrix integration now uses encrypted Matrix rooms by default and Itinerary can now do session verification, which is going to be mandatory in the future. (Volker Krause, 25.04.0. Link 1 and link 2). Volker also fixed various small issues with the Matrix integration (too many to list them all) and backported these fixes for the 24.12.0 release.

      Kate Advanced Text Editor

It is now possible to disable the autocompletion popup that appears as you type. (Waqar Ahmed, 25.04.0. BUG 476620)

      KCalc Scientific calculator

      We redesigned the bit edit feature of KCalc. (Tomasz Bojczuk, 25.04.0. Link)

      Kdenlive Video editor

Several Kdenlive effects gained the ability to animate their parameters with keyframes. (Bernd Jordan, Julius Künzel, and Massimo Stella, 25.04.0. Link 1, link 2, link 3 and link 4)

      Keysmith Two-factor code generator for Plasma Mobile and Desktop

      Keysmith can now import OTPs from andOTP's backup files. (Martin Reboredo, 25.04.0, Link)

      Akonadi Background service for KDE PIM apps

      Fixed a crash when migrating old iCal entries in Akonadi to be properly tagged. (Daniel Vrátil, 24.12.0. Link)

      Fix style of configuration dialogs for Akonadi agents on platforms other than Plasma. (Laurent Montel, 24.12.0. Link)

Ported the IMAP resource away from KWallet to use QtKeychain instead. This ensures your email credentials are correctly stored and retrieved on other platforms like Windows. (Carl Schwan, 24.12.0. Link)

      KMail A feature-rich email application

      Reduce temporary memory allocation by 25% when starting KMail. If you are curious how, the merge requests are super interesting. (Volker Krause, 24.12.0. Link 1, link 2, and link 3)

      Kodaskanna A multi-format 1D/2D code scanner

      Kodaskanna was ported to Qt6/KF6. (Friedrich W. H. Kossebau. Link)

      KRDC Connect with RDP or VNC to another computer

We added various options related to the security of the RDP connection and the redirection of smartcards to the remote host. (Roman Katichev, 25.04.0. Link)

      Kup Backup scheduler for KDE's Plasma desktop

      We rephrased all yes/no buttons in Kup's notifications to use more descriptive names. (Nate Graham, 25.04.0. Link)

      NeoChat Chat on Matrix

      When receiving stickers with NeoChat, they will be displayed with a more appropriate size (256x256px). Same with custom emoticons, which are now displayed with the same height as the rest of the message. (Joshua Goins, 24.12.0. Link)

      We don't show the filename underneath images anymore, and also make the download file dialog fill out the filename by default. (Joshua Goins, 24.12.0. Link 1 and link 2)

      We redesigned the list of accounts in the welcome page. Now we show the display name and avatar of your accounts there, which makes it easier to recognize. (Joshua Goins, 24.12.0. Link)

      We rearranged the room, file and message context menus. (Joshua Goins, 24.12.0. Link 1 and link 2)

      Tokodon Browse the Fediverse

      Add "Open in Browser" action to profile pages. (Sean Baggaley, 24.12.0. Link)

      Fix various issues on Android, most prominently ensure all icons required by Tokodon are packaged correctly. (Joshua Goins, 24.12.0)

      ... And Everything Else

      This blog only covers the tip of the iceberg! If you’re hungry for more, check out Nate's blog about Plasma and be sure not to miss his This Week in Plasma series, where every Saturday he covers all the work being put into KDE's Plasma desktop environment.

      For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.

      Get Involved

      The KDE organization has become important in the world, and your time and contributions have helped us get there. As we grow, we're going to need your support for KDE to become sustainable.

      You can help KDE by becoming an active community member and getting involved. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer either. There are many things you can do: you can help hunt and confirm bugs, even maybe solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.

      You can also help us by donating. Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world.

      To get your application mentioned here, please ping us in invent or in Matrix.

      Thanks to Tobias Fella and Michael Mischurow for the proofreading.

      Categories: FLOSS Project Planets

      Droptica: Headless CMS. How to Expose Data Using REST API and JSON API Modules?

      Planet Drupal - Mon, 2024-11-18 02:11

      Headless CMS allows flexible data exposure for different applications. In systems like Drupal, this concept becomes especially important for teams that want to build modern applications with a separate frontend layer. In this blog post, you'll see the process of exposing Drupal data using the REST API and JSON API. You'll also discover how to customize views, generate content, and manage settings to ensure seamless collaboration with frontend frameworks.
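
As a taste of the consuming side, here is a minimal sketch of fetching content from a Drupal site with the JSON API module enabled. The site URL and the "article" bundle are assumptions for illustration; any entity exposed by the module follows the same /jsonapi/{entity_type}/{bundle} URL pattern:

    import requests

    BASE = "https://example.com"  # hypothetical Drupal site

    # Core's JSON API module exposes collections at /jsonapi/{entity_type}/{bundle}.
    response = requests.get(
        f"{BASE}/jsonapi/node/article",
        headers={"Accept": "application/vnd.api+json"},
        params={"page[limit]": 5, "sort": "-created"},  # standard JSON:API query options
        timeout=10,
    )
    response.raise_for_status()

    # Each resource's fields live under its "attributes" key, per the JSON:API spec.
    for item in response.json()["data"]:
        print(item["attributes"]["title"])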

      Categories: FLOSS Project Planets

      Russ Allbery: Review: Delilah Green Doesn't Care

      Planet Debian - Sun, 2024-11-17 23:20

      Review: Delilah Green Doesn't Care, by Ashley Herring Blake

Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374

      Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect.

      Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes.

      Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like.

      Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her.

      I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down.

      It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into.

      This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules.

This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate.

The encounter between Delilah and Claire in the bar near the start of the book is the cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other.

      This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience.

      I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth.

      Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other.

      Followed by Astrid Parker Doesn't Fail.

      Rating: 9 out of 10

      Categories: FLOSS Project Planets

      James Bennett: Introducing DjangoVer

      Planet Python - Sun, 2024-11-17 21:04

      Version numbering is hard, and there are lots of popular schemes out there for how to do it. Today I want to talk about a system I’ve settled on for my own Django-related packages, and which I’m calling “DjangoVer”, because it ties the version number of a Django-related package to the latest Django version that package supports.

      But one quick note to start with: this is not really “introducing” the idea of DjangoVer, because I know I’ve used the name a few times already in other places. I’m also not the person who invented this, and I don’t know for certain who did — I’ve seen several packages which appear to follow some form of DjangoVer and took inspiration from them in defining my own take on it.

      Django’s version scheme: an overview

      The basic idea of DjangoVer is that the version number of a Django-related package should tell you which version of Django you can use it with. Which probably doesn’t help much if you don’t know how Django releases are numbered, so let’s start there. In brief:

      • Django issues a “feature release” — one which introduces new features — roughly once every eight months. The current feature release series of Django is 5.1.
      • Django issues “bugfix releases” — which fix bugs in one or more feature releases — roughly once each month. As I write this, the latest bugfix release for the 5.1 feature release series is 5.1.3 (along with Django 5.0.9 for the 5.0 feature release series, and Django 4.2.16 for the 4.2 feature release series).
      • The version number scheme is MAJOR.FEATURE.BUGFIX, where MAJOR, FEATURE, and BUGFIX are integers.
      • The FEATURE component starts at 0, then increments to 1, then to 2, then MAJOR is incremented and FEATURE goes back to 0. BUGFIX starts at 0 with each new feature release, and increments for the bugfix releases for that feature release.
      • Every feature release whose FEATURE component is 2 is a long-term support (“LTS”) release.

      This has been in effect since Django 2.0 was released, and the feature releases have been: 2.0, 2.1, 2.2 (LTS); 3.0, 3.1, 3.2 (LTS); 4.0, 4.1, 4.2 (LTS); 5.0, 5.1. Django 5.2 (LTS) is expected in April 2025, and then eight months later (if nothing is changed) will come Django 6.0.

      I’ll talk more about SemVer in a bit, but it’s worth being crystal clear that Django does not follow Semantic Versioning, and the MAJOR number is not a signal about API compatibility. Instead, API compatibility runs LTS-to-LTS, with a simple principle: if your code runs on a Django LTS release and raises no deprecation warnings, it will run unmodified on the next LTS release. So, for example, if you have an application that runs without deprecation warnings on Django 4.2 LTS, it will run unmodified on Django 5.2 LTS (though at that point it might begin raising new deprecation warnings, and you’d need to clear them before it would be safe to upgrade any further).
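
To make the cycle concrete, here is a minimal Python sketch of the numbering rules above (the function names are mine, purely for illustration):

    def is_lts(version: str) -> bool:
        # Per the scheme above, a feature release is LTS exactly when FEATURE == 2.
        major, feature, *_ = (int(part) for part in version.split("."))
        return feature == 2

    def next_feature_release(version: str) -> str:
        # FEATURE cycles 0 -> 1 -> 2; after 2, MAJOR increments and FEATURE resets.
        major, feature, *_ = (int(part) for part in version.split("."))
        return f"{major}.{feature + 1}" if feature < 2 else f"{major + 1}.0"

    assert is_lts("4.2.16") and not is_lts("5.1.3")
    assert next_feature_release("5.1.3") == "5.2"
    assert next_feature_release("5.2") == "6.0"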

      DjangoVer, defined

      In DjangoVer, a Django-related package has a version number of the form DJANGO_MAJOR.DJANGO_FEATURE.PACKAGE_VERSION, where DJANGO_MAJOR and DJANGO_FEATURE indicate the most recent feature release series of Django supported by the package, and PACKAGE_VERSION begins at zero and increments by one with each release of the package supporting that feature release of Django.
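
Put in code, reading the Django target off a DjangoVer version number is trivial (a sketch; the function name is mine):

    def django_target(version: str) -> str:
        # DJANGO_MAJOR.DJANGO_FEATURE.PACKAGE_VERSION: the first two components
        # name the newest Django feature release the package supports.
        django_major, django_feature, _package = (int(p) for p in version.split("."))
        return f"{django_major}.{django_feature}"

    assert django_target("5.1.2") == "5.1"  # newest supported Django is 5.1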

      Since the version number only indicates the newest Django feature release supported, a package using DjangoVer should also use Python package classifiers to indicate the full range of its Django support (such as Framework :: Django :: 5.1 to indicate support for Django 5.1 — see examples on PyPI).
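
For instance, a package following DjangoVer and supporting Django 4.2 through 5.1 might declare its metadata like this (a sketch using a setuptools setup() call; the package name is made up):

    from setuptools import setup

    setup(
        name="django-example-package",  # hypothetical name, for illustration only
        version="5.1.0",  # DjangoVer: newest supported feature release is Django 5.1
        classifiers=[
            "Framework :: Django",
            "Framework :: Django :: 4.2",
            "Framework :: Django :: 5.0",
            "Framework :: Django :: 5.1",
        ],
    )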

      But while Django takes care to maintain compatibility from one LTS to the next, I do not think DjangoVer packages need to do that; they can use the simpler approach of issuing deprecation warnings for two releases, and then making the breaking change. One of the stated reasons for Django’s LTS-to-LTS compatibility policy is to help third-party packages have an easier time supporting Django releases that people are actually likely to use; otherwise, Django itself generally just follows the “deprecate for two releases, then remove it” pattern. No matter what compatibility policy is chosen, however, it should be documented clearly, since DjangoVer explicitly does not attempt to provide any information about API stability/compatibility in the version number.
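
As a concrete illustration of that simpler policy, a deprecation shim might look like this (a sketch; old_helper and new_helper are hypothetical names, not from any real package):

    import warnings

    def new_helper():
        """The replacement API."""

    def old_helper():
        # Warn for two releases, then remove both the warning and the
        # function itself in the third.
        warnings.warn(
            "old_helper() is deprecated and will be removed in a future "
            "release; use new_helper() instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return new_helper()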

The definition above is a bit wordy, so let’s try an example:

      • If you started a new Django-related package today, you’d (hopefully) support the most recent Django feature release, which is 5.1. So the DjangoVer version of your package should be 5.1.0.
      • As long as Django 5.1 is the newest Django feature release you support, you’d increment the third digit of the version number. As you add features or fix bugs you’d release 5.1.1, 5.1.2, etc.
      • When Django 5.2 comes out next year, you’d (hopefully) add support for it. When you do, you’d set your package’s version number to 5.2.0. This would be followed by 5.2.1, 5.2.2, etc., and then eight months later by 6.0.0 to support Django 6.0.
      • If version 5.1.0 of your package supports Django 5.1, 5.0, and 4.2 (the feature releases receiving upstream support from Django at the time of the 5.1 release), it should indicate that by including the Framework :: Django, Framework :: Django :: 4.2, Framework :: Django :: 5.0, and Framework :: Django :: 5.1 classifiers in its package metadata.

Why another version system?

      Some of you probably didn’t even read this far before rushing to instantly post the XKCD “Standards” comic as a reply. Thank you in advance for letting the rest of us know we don’t need to bother listening to or engaging with you. For everyone else: here’s why I think in this case adding yet another “standard” is actually a good idea.

      The elephant in the room here is Semantic Versioning (“SemVer”). Others have written about some of the problems with SemVer, but I’ll add my own two cents here: “compatibility” is far too complex and nebulous a concept to be usefully encoded in a simple value like a version number. And if you want my really cynical take, the actual point of SemVer in practice is to protect developers of software from users, by providing endless loopholes and ways to say “sure, this change broke your code, but that doesn’t count as a breaking change”. It’ll turn out that the developer had a different interpretation of the documentation than you did, or that the API contract was “underspecified” and now has been “clarified”, or they’ll just throw their hands up, yell “Hyrum’s Law” and say they can’t possibly be expected to preserve that behavior.

      A lot of this is rooted in the belief that changes, and especially breaking changes, are inherently bad and shameful, and that if you introduce them you’re a bad developer who should be ashamed. Which is, frankly, bullshit. Useful software almost always evolves and changes over time, and it’s unrealistic to expect it not to. I wrote about this a few years back in the context of the Python 2/3 transition:

      Though there is one thing I think gets overlooked a lot: usually, the anti-Python-3 argument is presented as the desire of a particular company, or project, or person, to stand still and buck the trend of the world to be ever-changing.

      But really they’re asking for the inverse of that. Rather than being a fixed point in a constantly-changing world, what they really seem to want is to be the only ones still moving in a world that has become static around them. If only the Python team would stop fiddling with the language! If only the maintainers of popular frameworks would stop evolving their APIs! Then we could finally stop worrying about our dependencies and get on with our real work! Of course, it’s logically impossible for each one of those entities to be the sole mover in a static world, but pointing that out doesn’t always go well.

      But that’s a rant for another day and another full post all its own. For now it’s enough to just say I don’t believe SemVer can ever deliver on what it promises. So where does that leave us?

      Well, if the version number can’t tell you whether it’s safe to upgrade from one version to another, perhaps it can still tell you something useful. And for me, when I’m evaluating a piece of third-party software for possible use, one of the most important things I want to know is: is someone actually maintaining this? There are lots of potential signals to look for, but some version schemes — like CalVer — can encode this into the version number. Want to know if the software’s maintained? With CalVer you can guess a package’s maintenance status, with pretty good accuracy, from a glance at the version number.

      Over the course of this year I’ve been transitioning all my personal non-Django packages to CalVer for precisely this reason. Compatibility, again, is something I think can’t possibly be encoded into a version number, but “someone’s keeping an eye on this” can be. Even if I’m not adding features to something, Python itself does a new version every year and I’ll push a new release to explicitly mark compatibility (as I did recently for the release of Python 3.13). That’ll bump the version number and let anyone who takes a quick glance at it know I’m still there and paying attention to the package.

      For packages meant to be used with Django, though, the version number can usefully encode another piece of information: not just “is someone maintaining this”, but “can I use this with my Django installation”. And that is what DjangoVer is about: telling you at a glance the maintenance and Django compatibility status of a package.

      DjangoVer in practice

      All of my own personal Django-related packages are now using DjangoVer, and say so in their documentation. If I start any new Django-related projects they’ll do the same thing.

      A quick scroll through PyPI turns up other packages doing something that looks similar; django-cockroachdb and django-snowflake, for example, versioned their Django 5.1 packages as “5.1”, and explicitly say in their READMEs to install a package version corresponding to the Django version you use (they also have a maintainer in common, who I suspect of having been an early inventor of what I’m now calling “DjangoVer”).

      If you maintain a Django-related package, I’d encourage you to at least think about adopting some form of DjangoVer, too. I won’t say it’s the best, period, because something better could always come along, but in terms of information that can be usefully encoded into the version number, I think DjangoVer is the best option I’ve seen for Django-related packages.

      Categories: FLOSS Project Planets
