FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Should we make pdb better?

Planet Python - Fri, 2024-06-14 05:13

Tian Gao came to the Language Summit 2024 to talk about improving pdb, short for "Python debugger", a module and command line tool for debugging Python.

Tian Gao presenting on how to improve pdb

There are not many command-line debugger alternatives to pdb for Python. Tian mentioned a few, including PuDB, pdb++, and ipdb, but those alternatives are all themselves based on either pdb or another standard library module 'bdb'.

pdb is the only "standalone" command-line-based Python debugger

Tian presented a laundry list of desirable new features that could be added to pdb, including:

  • Showing more lines of code around the current breakpoint.
  • Colors in the terminal, syntax highlighting.
  • Customization, with defaults being safe.
  • Handling of more scenarios (threads, asyncio, bytecode, remote debugging).

Performance and backwards compatibility

The biggest issue according to Tian, which he noted had been discussed in the past, was the performance of pdb. "pdb is slow because [sys.settrace] is slow, which is something we cannot change", and the only way forward on making pdb faster is to switch to sys.monitoring to avoid triggering unnecessary events.

Switching to sys.monitoring would give a big boost to performance. According to Tian, "setting a breakpoint in your code in the worst case you get a 100x slowdown compared to almost zero overhead with sys.monitoring". Unfortunately, switching isn't so easy; Tian noted there are serious backwards compatibility concerns for the standard library module bdb if pdb were to start using sys.monitoring.
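
Part of why sys.monitoring is so much cheaper is that a tool can return sys.monitoring.DISABLE from a callback to switch events off at every location except the breakpoint itself. A minimal sketch of the idea (not pdb's actual implementation; the file name and line number are made up):

import sys

TOOL = sys.monitoring.DEBUGGER_ID
sys.monitoring.use_tool_id(TOOL, "toy-debugger")

def on_line(code, line_number):
    # Keep firing only on the one "breakpoint" line; everywhere else,
    # returning DISABLE stops LINE events for that location entirely.
    if code.co_filename.endswith("app.py") and line_number == 42:
        print(f"hit {code.co_filename}:{line_number}")
        return None
    return sys.monitoring.DISABLE

sys.monitoring.register_callback(TOOL, sys.monitoring.events.LINE, on_line)
sys.monitoring.set_events(TOOL, sys.monitoring.events.LINE)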

"If we're not ready to [switch to sys.monitoring] yet, would we ever do this in the future?", Tian asked the group, noting that an alternative is to create a third-party library and encourage folks to use that library instead.

Thomas Wouters started off saying that "bdb is a standard library module and it cannot break user code" and cautioned that core developers don't know who is depending on modules. bdb's interface can't have backwards incompatible changes without long deprecation periods. In Thomas' mind, "the answer is obvious, leave pdb as it is and build something else".

Thomas also noted "in the long-term, a debugger in the standard library is important" but that development doesn't need to happen in the standard library. Thomas listed the benefits for developing a new debugger outside the standard library like being able to publish outside the Python release schedule and to use the debugger with older Python versions. Once a debugger reaches a certain level of stability it can be added to the standard library and potentially replace pdb.

Tian agreed with Thomas' proposal in theory, but was concerned that a third-party debugger on PyPI wouldn't see the same level of adoption as one in the standard library, and thus would struggle to meet a threshold of "stability" without a critical mass of users. Or worse yet, maintainers wouldn't be motivated to continue due to a lack of use, resulting in a "dead project". (Some foreshadowing: Steering Council member Emily Morehouse gave a lightning talk on this topic later in the Language Summit.)

Łukasz Langa noted that Python now has support for "breakpoint()" and that "what breakpoint() actually does, we can change. We can run another debugger if we decide to", meaning that if a better debugger were added to CPython in the future, it could be made the new default for breakpoints.
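
This hook already exists today: breakpoint() calls sys.breakpointhook(), which defaults to pdb.set_trace() and can be redirected either in code or via the PYTHONBREAKPOINT environment variable. A minimal sketch:

import sys

def my_hook(*args, **kwargs):
    print("custom debugger goes here instead of pdb")

sys.breakpointhook = my_hook
breakpoint()  # now runs my_hook instead of pdb.set_trace()

# The same swap works with no code changes at all:
#   PYTHONBREAKPOINT=ipdb.set_trace python app.py
#   PYTHONBREAKPOINT=0 python app.py   # breakpoint() becomes a no-op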

Russell Keith-Magee from BeeWare was interested in what Tian had said about remote debugging, noting that "remote debugging is the only way you can debug [on mobile platforms]". Russell would be interested in pdb or a new debugger supporting this use-case. Tian noted that unfortunately remote debugging would be one of the more difficult features to implement.

Pablo Galindo Salgado, commenting on existing Python "attach-to-process" debuggers, said that the hacks in use today are "extremely unsafe". Pablo said that "we'd need something inside CPython [to be safe], but then you have another problem, you have to implement that feature on [all platforms]". Pablo also mentioned that attach-to-process debugging is usually a bad model because it can't be enabled by default for security reasons but "you won't know when you'll need to debug".

Anthony Shaw asked about the scope of the project and was interested in whether there could be a framework for debugging in CPython that pdb and others could build on. Anthony pointed out that many other debuggers "needed to do a bunch of hooks and tricks" to do debugging because it's "not provided out of the box by CPython".

Tian responded that "bdb is supposed to do that, but it was written 30 years ago so is too old to support new things that a debugger wants". Others mentioned that sys.monitoring (new in Python 3.12) was meant to be a framework for debuggers to build on.

Gregory Smith, Steering Council member, said he "wants all of these things" and agreed with Thomas to "develop this as much as you can... outside of the standard library", telling Tian that "you're going to end up in a better state that way". Greg's primary concern was whether CPython needed to do anything to enable Tian's proposal. He continued, "it sounds like we (CPython) have most of what we need, but if we don't let's get that planned so we can enable a successful separate project before we ship it with Python in the future".

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Python on Mobile

Planet Python - Fri, 2024-06-14 05:13

Malcolm Smith from BeeWare presented on the status and direction of Python on mobile platforms like iOS and Android. BeeWare has been working on bringing Python to mobile for a few years now. Previously Russell Keith-Magee gave a talk at the Language Summit in 2023 on BeeWare to announce plans for Tier 3 support for Python on Android and iOS in Python 3.13 along with Anaconda's funded support for the project.

Now we've arrived at Python 3.13 pre-releases, and things are going well! Malcolm reported that "the implementations are nearly complete" along with thank-yous to the core developers who helped with the project.

Overview of current Python mobile platform support

The other platforms listed in the table, iOS x86_64 and Android ARM32/x86, have no plans to be implemented. There aren't any actual physical devices for iOS on x86_64, as the architecture is only used for development simulators.

For Android, the ARM32 and x86 platforms are being phased out due to being 32-bit architectures, and today they represent less than 10% of devices. For these reasons, Malcolm and team have decided not to implement support for these architectures.

Malcolm also reported that there is a buildbot for iOS and in the coming weeks there will be buildbots added for Android ARM64 and x86_64 platforms.

Let's talk packages!

Python is well-known for its rich package ecosystem, and the BeeWare team is working on bringing Python packages to mobile Python, too. "It's not enough just to have support for CPython", Malcolm said on this topic, "we also need to support the packaging ecosystem". As with many new platforms for Python, pure Python packages work without much issue and "the difficulty comes in with anything which contains native compiled components".

The current and future approach for mobile-friendly Python packages

The BeeWare team's approach so far has been to bootstrap packages with native components on their own by creating tools and "building wheels for popular packages like numpy, cryptography, and Pillow". Malcolm reported that the current approach of rebuilding individual packages isn't scalable and the team would need to help upstream maintainers build their own mobile wheels. Malcolm said the team plans to focus this year on "making it as easy as possible to produce and release [mobile] wheels within existing workflows" and contributing to tools like cibuildwheel, setuptools, and PyO3.

Malcolm also hopes that "by the end of this year some of the major packages will be in position to start releasing mobile wheels to the Python Package Index". The team has already specified a format for the wheel tags for iOS (PEP 730) and Android (PEP 738). "The binary compatibility situation is pretty good", Malcolm noted that iOS and Android both come from a single source in Apple and Google respectively meaning "there's a fairly well-defined set of libraries available on each version".

Python today provides an embeddable package for the Windows platform. Malcolm requested from the group that more official Python embeddable packages be created for each of the mobile platforms with headers and libraries to ease building Python packages for those platforms. Having these artifacts available would provide a reference for binary compatibility on those platforms.

Ned Deily, the macOS release expert for CPython, agreed that having more binary releases for macOS and iOS is something we "should definitely do in the 3.14 timeframe".

Challenges with keeping mobile buildbots green

Malcolm provided the core developer team some tips on writing Python code with these new and constrained platforms in mind. He warned that there is little to no support for spawning subprocesses, but "multi-threading on the other hand is perfectly fine on both of these platforms".

Mobile platforms also tend to be constrained in terms of security. iOS only allows loading libraries from specific folders and Android has restrictions like not being able to read the root directory or create hard links.

Given these differences, "it's reasonable to expect that mobile platforms will have more frequent failures as development proceeds, so how do we go about testing them?" The full CPython test suite is running on both mobile platforms with buildbots, but today there's no testing done before a pull request is merged. This situation leads to mobile buildbots starting to fail without the contributing developer necessarily noticing.

This problem is exacerbated by limited continuous integration (CI) resources in GitHub Actions, especially for macOS which limits virtualization on ARM64 processors. Malcolm suggested evaluating GitHub's Merge Queue feature as a potential way to solve this issue by requiring a small amount of testing on mobile platforms without blocking development of features.

Malcolm's proposal for better visibility of test failures for mobile

Łukasz Langa agreed that CI was an issue, one that he's actively looking at improving, but wasn't convinced that using a merge queue would decrease the number of jobs required to run. Malcolm clarified that he is proposing only running a smaller subset of jobs per-commit in pull requests and the complete set, including some buildbots, as part of pre-merge testing.

Many folks expressed concern about adding buildbots as part of pre-merge or per-commit checks: buildbots have no high-availability SLA and suffer occasional outages, some buildbots are not reliable and would therefore block merging of commits, and running unreviewed changes on buildbots raises security concerns.

Thomas Wouters, Python 3.13 release manager, was "unconvinced" on adding pre-merge testing for Tier 3 platforms, something that is usually reserved for Tier 1 platforms.

Ned Deily recommended doing iOS builds as a part of existing macOS builds in GitHub Actions. This would catch build errors for the platform and would likely find some issues early without much additional investment.

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Free-threading ecosystems

Planet Python - Fri, 2024-06-14 05:13

Following years of excitement around the removal of the Global Interpreter Lock (GIL), Python without the GIL is coming soon. Python 3.13 pre-releases already have support for being built without the GIL using a new --disable-gil compile-time option:

# Download
wget https://www.python.org/ftp/python/3.13.0/Python-3.13.0b2.tgz
echo "c87c42aa8137230a15a02ed90a6600610ba680cb5b54c0fbc57581a0d032e0c4  ./Python-3.13.0b2.tgz" | sha256sum --check
tar -xzvf ./Python-3.13.0b2.tgz

# Build
cd Python-3.13.0b2/
./configure --disable-gil
make

# Run with no GIL!
./python -X gil=0 -c "import sys; print(sys._is_gil_enabled())"
False

But simply having GIL-less Python is not enough: code needs to be written that is safe and performant without the GIL using both the C and Python APIs.

This year at the Language Summit, Daniele Parmeggiani gave a talk about ways Python can enable safe and performant concurrent code without locking CPython into a specific implementation or memory model.

Don't leak the details

Daniele started his talk, like many Python users, with cautious enthusiasm about the prospect of free-threading in Python:

"Given the acceptance notes to PEP 703, one should argue for caution in discussing the prospects of new multi-threading ecosystems after the release of Python 3.13 — with a hopeful spirit I will disregard this caution here."

-- Daniele Parmeggiani

Daniele detailed a feature request he had opened to create a public version of the private C API function "_Py_TRY_INCREF()". Daniele wanted to use this function to increment an object's reference count safely in a truly multi-threaded Python, where a reference count might be decremented concurrently with an increment.

Daniele continued, "[Sam Gross] responded as thoughtfully and thoroughly as he usually does that the function shouldn't be public, and I agree with him".

The semantics of _Py_TRY_INCREF() today are tied to the specific implementation of free-threading, and without a guarantee that the underlying implementation won't change, Daniele does not think the function "should ever be made public".

But without this functionality Daniele's problem still stands, where do we go from here?

Higher-level APIs to the rescue

"At a higher-level it's possible to write further guarantees without constraining what's under the hood". Daniele started a single step up in abstraction, detailing an atomic reference API:

PyObject *AtomicRef_Get(AtomicRef *self)
{
    PyObject *reference;
    reference = self->reference;
    /* _Py_TRY_INCREF() fails if the object is concurrently being
       freed; re-read the reference (it may have been replaced by
       another thread) and retry until the increment succeeds. */
    while (!_Py_TRY_INCREF(reference)) {
        reference = self->reference;
    }
    return reference;
}

This would be "trivial to implement" with the new garbage collection scheme in Python 3.13 ("quiescent state-based reclamation" or QSBR), "but what if [Python 3.14] were to change this scheme radically? Or what if 3.15 decides to do away with it entirely?"

Daniele eschewed making guarantees about low-level APIs at this stage of development, but concluded that "an API for atomically updating a reference to a PyObject seems like a high-level use-case worth guaranteeing, regardless of any implementation of reference counting".

Atomic data structures

Daniele continued exploring higher-level concepts that Python could provide at this stage of free-threading by looking to what other languages are doing.

Java provides a java.util.concurrent package containing some familiar faces for Python concurrency users like Semaphores, Locks, and Barriers, but also some other atomic primitives that map to Python classes like dicts, lists, booleans, and integers. Daniele asked whether Python should provide atomic variations for primitives like numbers and dictionaries.

Daniele explained that many atomic data structures use the "compare-and-set" model to synchronize read and write access to the same space in memory. Compare-and-set requires the caller to specify an expected value: if the value in memory matches the expected value, then it is updated to the new value, and the call returns whether the operation was successful or not.
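
To make those semantics concrete, here is a toy compare-and-set in Python, using a lock as a stand-in for the hardware atomic instruction a real implementation would use (an illustration, not Daniele's code):

import threading

class AtomicInt:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            return self._value

    def compare_and_set(self, expected, new):
        # Succeeds only if nobody changed the value since we read it;
        # returns whether the update happened.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

counter = AtomicInt()

def increment():
    while True:  # the classic compare-and-set retry loop
        seen = counter.get()
        if counter.compare_and_set(seen, seen + 1):
            break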

Daniele explained that compare-and-set establishes a "happens-before" ordering between concurrent writes to the same memory location, joking that the phrase "happens-before" may spark thoughts of memory models which he wished to avoid.

Today Python doesn't do any reordering of memory accesses that would require thinking about a memory model, though Daniele noted that may come one day with the new just-in-time compiler (JIT).

Daniele was already developing an atomic dictionary class and had seen performance gains over the existing standard library dictionary with the GIL disabled (with lower single-threaded performance):

Performance comparison of dict with and without the GIL and Daniele's AtomicDict

Daniele observed that the free-threading changes actually decreased the performance for write-heavy workloads on builtin types like dictionaries because "Python programs will now actually be subject to memory contention". When multiple threads attempt to mutate a list or dictionary, "it will be as if the GIL is still there, [the threads] will all be contending for one lock", offering that "new concurrent data structures would alleviate this performance issue".

Daniele wanted to know what primitives Python should offer for C extension developers targeting free-threaded builds, or asked if it's still too early to make guarantees:

"As the writer of a C extension looking to implement concurrent lock-free data structures for Python", Daniele asked of the room, "does CPython eventually wish to incorporate... either high-level atomics or low-level routines?"

Daniele continued, "if not the atomics, then new low-level APIs like _Py_TRY_INCREF() will be necessary in order not to force the abuse of locks in external efforts towards new free-threading ecosystems".

Discussion

Thomas Wouters, channeling the Steering Council's intent when accepting PEP 703 last October, said "we don't know yet what users will actually need", and that the Steering Council didn't want to "prematurely optimize" and mandate features be implemented without that knowledge.

Thomas recommended building solutions to "production use-cases" as PyPI packages or separate projects before deciding to pull those solutions into Python, summarizing the sentiment with, "we need to take our time and make sure we're doing the right thing".

Steering Council member Barry Warsaw agreed with Thomas on strategy, also adding that "[atomic references] might be something [Python] needs to make sure the interpreter doesn't crash with some of our own C code". Barry was interested in how to "ensure that the interpreter stays safe in the face of free-threading without necessarily thinking about the right APIs for the higher-level data structures".

Sam Gross, author and main implementer of PEP 703 to make the GIL optional in CPython, commented on making additional guarantees to standard library collections, saying "we're going to find situations that are ambiguous where no one's promised thread-safety or [the lack of thread-safety]".

Sam would also like to see "scalable collections" on PyPI (and "would love to see in Python eventually too") that are "designed not just to be thread-safe, but to scale well with certain workloads". Sam noted that builtin data classes like dict and list "can only make so many trade-offs" and tend to "focus on single-threaded performance" or "multi-threaded read-only access".

Eric Snow wanted to see immutable data structures be considered, too, noting the benefits to performance and shareability that Yury Selivanov was seeing when using them with sub-interpreters.

Gregory Smith sympathized with Daniele on wanting to avoid thinking about memory models, but "had a sneaking suspicion we kinda have to anyway". Greg was concerned about other stacks like data science and machine learning "re-interpreting Python code and transforming it into other things that run on other hardware". Without a clear definition, people "make their own assumptions" and get confused when code runs differently in different places.

Replying to Greg, Daniele offered that there's already a mechanism for determining whether an object is shared between threads "which might be a first-step", but that this "was a detail of the implementation, and not a part of the language".

Guido van Rossum began by being "wary of looking to Java for examples", stating that many APIs that Python borrowed from Java were eventually deprecated and removed.

Guido commented that "there will be other people with much higher-level ideas on concurrency" and recommended "to wait as long as we can before we build anything into the language explicitly or implicitly". Guido also felt it was "important that we have sub-interpreters as well as free-threading, so people can play with different models before we commit to anything".

Overall, the group seemed interested in Daniele's work on atomics but didn't seem willing to commit to exact answers for Python yet. It's clear that more experimentation will be needed in this area.

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Native Interface and Limited C API

Planet Python - Fri, 2024-06-14 05:13

Back in October 2023, PEP 731 proposed a new C API working group charged with overseeing and coordinating the development and maintenance of the Python C API. This working group spawned from a series of discussions on the C API from the Language Summit in 2023 and creation of an inventory of problems with the C API at the 2023 core developer sprint.

Two inaugural C API working group members, Petr Viktorin and Victor Stinner, presented back-to-back talks on the C API and gave context on what's been happening in the past year.

What does the C API working group do?

The first of the two C API talks was given by Petr Viktorin on the "Native Interface" and some of the first steps towards an idealized C API.

Petr started off by explaining that the C API working group makes two types of decisions: what functionality to expose via the C API and how to expose it. Petr also explained that the C API working group keeps two separate issue trackers, one for incremental "evolution" of the C API and another for "revolution", a place where more "radical" ideas are discussed.

The existing C API wasn't designed with the knowledge, context, and needs of today (like free-threading), but there are many good parts of the C API. Petr explained that one of the more impactful things the working group has done is to formalize "guidelines to get consistency with the good parts of the existing API".

Petr gave an example of what can go wrong with the PyLong_GetSign() function. This API has a baked-in type check that can't be avoided due to its function signature and thus incurs a performance penalty even when the caller has already checked whether the object is the correct type.

This extra performance penalty means that CPython itself uses its own private API which avoids the type check, but a private API available only to CPython isn't a great experience. Other languages and projects want access to the more performant API, too.

Petr went on to reference Mark Shannon's proposal for a New C API which Petr called "close to perfect" with caveats around not dropping existing APIs and the name, instead suggesting "Native Interface" for the name of the new C API.

"Unfortunately we need to keep the old API around. We can't just remove a chunk of the existing API just because it's old", Petr lamented. Petr also noted that not being able to remove parts of the existing API might mean that the Faster CPython project loses some incentive to work on the new C API.

C API decisions are made on three axes: performance, safety, and convenience. Petr argued that of the three, "performance should be prioritized", because a convenient and safe layer can be built on top of a performant API with the right amount of context.

Annotating the existing C API

Petr noted that we have experience within Python for adding a safety layer on top of APIs: type hints! Type hints in Python provide context about an API's inputs and outputs that can be checked using external tooling without incurring a performance penalty at runtime.

Petr proposed adding annotations to C function signatures for function behaviors like "returns a null pointer on error" or "never returns a null pointer" which can then be used in other contexts like documentation or borrow checking. Among the proposed annotations were some about whether references were borrowed, stolen, or a new reference, which can be used to check consistency of references.

List of possible annotations for C API functions

Petr also noted that many of these annotations apply not only to new APIs but to existing APIs as well. Implementing these annotations as empty C macros means that behavior and performance isn't impacted but can be parsed from header files.

Petr's slides showing the annotations in use as C macros

To go along with these new annotations, Petr proposed writing a tool similar to Argument Clinic. Argument Clinic is a tool maintained by the CPython team which automatically generates boilerplate code like function signatures and argument unpacking based on input instructions.

Mark Shannon asked to clarify whether the priority was to improve the C API or document existing behavior. Petr's plan was to add annotation information to the existing API and to wait on implementing the new Native Interface until later. This plan wouldn't change the behavior of any existing API, but APIs which aren't conforming would receive a new variant that conforms to the new C API standards.

Victor Stinner asked whether the annotation information would be stored in a separate file. Petr noted that a separate file is the plan to make it easier to wrap the API and to avoid needing to parse header files directly.

PyO3 maintainer David Hewitt asked whether the plan was to include variations that avoid type checks for all C API functions to dodge the performance penalty for C API wrappers. David noted that PyO3 implemented many C API function calls as methods on wrapped objects. This means that the type check was implicit and thus could avoid having types checked again by the C API function. David also clarified that these extra checks "aren't a major performance drag" but would be great to remove the inefficiencies if possible.

Petr answered that wrappers will need to wait for the Native Interface to be implemented to expose the underlying C API functions which don't include type checks.

There was enthusiastic agreement from the room about using annotation information for documentation and automatically generating boilerplate code and checks along with being able to do borrow checking using annotation information.

Limited C API

The second C API talk was given by Victor Stinner on the status of the Limited C API. The Limited C API is a subset of the Python C API that's consistent across different versions of Python. The Limited C API can be opted into using #define Py_LIMITED_API; when it is defined, only public functions of the Limited C API can be used.

Victor started off by listing his long-term goals for the Python C API, which mostly focused on reducing friction both for maintainers of the Python C API and for third parties using the API or updating to support new Python versions. One possibility to achieve this would be to "move to using the Limited C API by default and use the Stable ABI for everybody" but Victor noted this is a "very long term goal".

Getting to this goal is challenging because it's difficult to know how a given change will affect the ecosystem of Python projects, both for finding affected projects and how widespread breakage would be for users. Victor explained that each change typically only requires "1-10 lines of code changed per impacted project" to fix issues.

Trying to move all functions from private to either public or internal

Victor's biggest project currently is to remove private functions from the C API, specifically functions which begin with an underscore "_" by convention. Victor explained that he removed all 300 private functions starting with "_Py" for 3.13.0-alpha1 to discover how and where private APIs are used by downstream projects. Victor and team anticipated that this mass-removal would cause breakages, so after the initial round of discovery the removed functions causing the most issues have been re-added in 3.13.0-alpha2.

As of 3.13.0-beta1, 264 of the over 300 functions are still removed. The functions which have been added back are not simply left as-is either: once a use of a private function is discovered, the C API working group gets a chance to design a new public C API function for projects to use instead.

"The goal isn't to annoy people, the goal is to provide better functions for everybody" -- Victor Stinner

These new public C API functions would have documentation, tests, backwards compatibility guarantees, and can benefit from the new C API working group guidelines around API design. Victor gave an example of the PyDict_Pop() API, which previously required checking for an error condition using PyErr_Occurred() to disambiguate between a key not being in the dictionary and any other error.

The new PyDict_Pop() function returns -1, 0, and 1 for the "error", "not found", and "found" cases respectively, in accordance with new C API guidelines, meaning a call to PyErr_Occurred() is avoided.

New PyDict_Pop() public function with improvements

The pythoncapi-compat project, which Victor is a maintainer of, provides backports of these new 3.13 APIs for Python 3.12 and older. This means that projects can immediately start taking advantage of new APIs which are better designed and return strong references. Victor highlighted in particular PyDict_GetItemRef() and others which are new in 3.13 and are important for free-threading due to PyDict_GetItem() returning a borrowed reference instead of a new strong reference.

Slide from Victor's presentation on current Limited C API adoption

The biggest users of the Python C API like Cython, PyO3, pybind11, and more are at various stages of supporting the Limited C API, most of which require an opt-in for builds.

Victor's top project in the coming months and years will be to move the C API away from using structures ("C structs") like PyFrameObject, PyThreadState, and PyTypeObject. Victor noted that projects like Cython, greenlet, gevent, and more have to reach directly into structure members, which can cause breakages when upgrading to new Python versions. Victor explained that there is no way to handle this with the Limited C API today. "We already provide many helper functions like getters and setters, but we need to provide even more" said Victor as a way forwards on this issue.

Petr questioned the approach of "breaking current projects so that future Python versions don't break them", saying that it'd be better to warn projects about using private API functions that aren't supported and wait to introduce breaking changes when it's necessary to progress the C API.

Victor replied that he'd already started work on a PEP to opt-in for build errors when a project is using deprecated functions, "like a strict mode for the C API". Victor agreed that the current plan isn't great in this way: "we ask people to update their code and the timeline is very short, we expect people to update in one year's time", noting circumstances where this can be difficult such as unmaintained projects or solo maintainers.

Petr also added here that the opt-in would need to be versioned per Python version, so users can have control over when they want to do the work to move to new C API functions.

Eric Snow and Mark Shannon remarked on a more incremental strategy. This strategy would see deprecated functions moved structurally into a separate file ("legacy.c" and "legacy.h") but with the behavior preserved to have a clearer idea of what functions Python developers want to remove. After being moved the functions would be implemented using newly designed APIs where possible. Others noted that this would only be a convenience for core developers and projects that are interested in internals like PyO3 and Cython.

David Hewitt commented on the long feedback cycles, as downstream projects of the Stable ABI are still using Python 3.7 as a target, so any changes to the Stable ABI may not receive feedback until many years later. Victor responded that he's working on a new project that implements new functions of Python for old Python versions.

Overall, the work and proposals presented by both Petr and Victor were well-received by the room. It's clear that the Python C API is in good hands with the C API working group and is moving in the right direction to solve tomorrow's problems.

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Should Python adopt Calendar Versioning?

Planet Python - Fri, 2024-06-14 05:12

Hugo van Kemenade, the newly announced Release Manager for Python 3.14 and 3.15, started the Language Summit with a proposal to change Python's versioning scheme.

Hugo's view of kicking off the language summit!
(Photo credit: Hugo van Kemenade)

The goal of Hugo's proposal was to make expectations around versioning, backwards compatibility, and support timelines clearer for Python users.

On the surface, Python's versioning might appear to be Semantic Versioning (SemVer) due to its three-part version and infamous set of backwards incompatible changes known as Python 3. Hugo noted that the publication of Python 1.0.0 (1994) and what would become the Python versioning scheme predates the publication of SemVer (2009) by around 15 years.

The perception of Python using semantic versioning is a source of confusion for users who don't expect backwards incompatible changes when upgrading to new versions of Python. In reality almost all new feature releases of Python include backwards incompatible changes such as the removal of "dead batteries" where PEP 594 marked 19 modules for removal in Python 3.13.

Calendar Versioning (CalVer) encompasses a wide array of different versioning schemes that have one property in common: using the release date as part of a release's version. Calendar-based versions vary quite widely, but typically include a two or four digit year (YY or YYYY) and sometimes a month or day (MM and DD).

Using years in versions is quite common amongst other programming languages, operating systems like Ubuntu, and tools like Black, pip, and PyCharm.

Slide from Hugo's presentation showing programming languages using calendar-based versioning like Ada, Algol, C, C++, Fortran, and JavaScript

Since 2019, Python has made releases according to the new yearly cadence from PEP 602. Moving to annual releases made it possible for downstream distributors to rely on when a new Python version appears, which brings newer Python versions to users faster.

Each minor release receives 5 years of security fixes. Using the release year of 2026 as an example, users could add 5 years and know they'll receive security fixes on that minor release until 2031. Figuring out this information from "3.15" in the existing versioning scheme would require another lookup, typically to the release schedule PEP.

If the year were baked into the version, one wouldn't need to see the release schedule to know when support was ending, instead one could add 5 years to the year encoded in the version (e.g. for "3.26", 26 + 5 = 31, therefore security support ends in 2031).
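
That arithmetic is trivial to express in code (a toy illustration, not part of Hugo's proposal):

def security_support_end(version, support_years=5):
    # Under the proposed scheme, the minor version encodes the
    # two-digit release year: "3.26" would be released in 2026.
    minor = int(version.split(".")[1])
    return 2000 + minor + support_years

print(security_support_end("3.26"))  # 2031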

Hugo offered multiple proposed versioning schemes, including:

  • Using the release year as minor version (3.YY.micro, "3.26.0")
  • Using the release year as major version (YY.0.micro, "26.0.0")
  • Using the release year and month as major and minor version (YY.MM.micro, "26.10.0")

There were discussions about other options beyond these amongst attendees.

Thomas Wouters, release manager for 3.12 and 3.13, questioned the value-add for adopting a new versioning system. Thomas noted that while the current system is confusing, changing the system in any way also adds confusion for users. Hugo responded that clarity, especially support for security fix and end-of-life dates, was the biggest motivation.

Barry Warsaw wondered if there was a way to test potential new versioning scheme ahead of time to find potential problems. Hugo referenced the deadsnakes project which builds distributions of CPython for Ubuntu. The deadsnakes project previously created a build of Python 3.9 that modified the version to be "3.10" to help discover breakages in projects assuming a single-digit minor version. Hugo also had experience using static code analysis to find other version assumptions in Python projects.

"Python 3 is a brand at this point, and we should stick to it" said Guido van Rossum after sharing concerns that changes to the major version would break the ecosystem more than changes to the minor version. Others voiced concerns about changing the major version "3" including in the "python3" binary and for packaging such as "abi3" tag.

Carol Willing noted that many projects are relying on Python's versioning system and already have those versions "baked in" to warnings in existing releases. Hugo confirmed this is a problem, including for Python itself, which has a few deprecation warnings and messages that reference future Python versions like 3.15. Hugo's plan would be to update these versions for Python and to give plenty of time before the new versioning scheme took effect.

Donghee Na offered up Rust's use of "yearly editions" in the branding of their releases, where the version number is completely separate from the branding of the release. Hugo was concerned that this would add another layer of confusion and would mostly repeat information already found in the release schedule.

Overall the proposal to use the current year as the minor version was well-received; Hugo mentioned that he'd be drafting a PEP for this change.

Carl Meyer cautioned against making any changes to the version scheme before 2026 in order to preserve the 3.14 "π"-thon release which received approval and laughter from the room. Sounds like whatever happens we'll get to have our pie and eat it too. 🥧

Categories: FLOSS Project Planets

Talk Python to Me: #466: Pydantic Performance Tips

Planet Python - Fri, 2024-06-14 04:00
You're using Pydantic and it seems pretty straightforward, right? But could you adopt some simple changes to your code that would make it a lot faster and more efficient? Chances are, you'll find a couple of the tips from Sydney Runkle that will do just that. Join us to talk about Pydantic performance tips here on Talk Python.

Episode sponsors

  • Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
  • Code Comments: https://talkpython.fm/code-comments
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • Sydney Runkle: https://www.linkedin.com/in/sydney-runkle-105a35190/
  • Pydantic: https://pydantic.dev/opensource
  • Performance docs: https://docs.pydantic.dev/latest/concepts/performance/
  • Union tips: https://docs.pydantic.dev/latest/concepts/unions/
  • Sydney's presentation slides: https://docs.google.com/presentation/d/183bn9ecIzOOqfxanrESu7rBaKCI70CX0/edit?usp=sharing&ouid=117072411264002710561&rtpof=true&sd=true
  • JSON to Pydantic: https://jsontopydantic.com
  • Samuel talking FastUI: https://talkpython.fm/episodes/show/449/building-uis-in-python-with-fastui
  • CodeFlash: https://www.codeflash.ai
  • Codspeed: https://codspeed.io
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=R8PL1snHgzY
  • Episode transcripts: https://talkpython.fm/episodes/transcript/466/pydantic-performance-tips

--- Stay in touch with us ---

  • Subscribe to us on YouTube: https://talkpython.fm/youtube
  • Follow Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
  • Follow Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Categories: FLOSS Project Planets

Paul Tagliamonte: Reverse Engineering a Restaurant Pager system 🍽️

Planet Debian - Fri, 2024-06-14 01:07

It’s been a while since I played with something new – been stuck in a bit of a rut with radios recently - working on refining and debugging stuff I mostly understand for the time being. The other day, I was out getting some food and I idly wondered how the restaurant pager system worked. Idle curiosity gave way to the realization that I, in fact, likely had the means and ability to answer this question, so I bought the first set of the most popular looking restaurant pagers I could find on eBay, figuring it’d be a fun multi-week adventure.

Order up!

I wound up buying a Retekess brand TD-158 Restaurant Pager System (they looked like ones I’d seen before and seemed to be low-cost and popular), and quickly after, had a pack of 10 pagers and a base station in-hand. The manual stated that the radios operated at 433 MHz (cool! can do! Love a good ISM band device), and after taking an initial read through the manual for tips on the PHY, I picked out a few interesting things. First is that the base station ID was limited to 0-999, which is weird because it means the limiting factor is likely the base-10 display on the base station, not the protocol – we need enough bits to store 999 – at least 10 bits. Nothing else seemed to catch my eye, so I figured may as well jump right to it.

Not being the type to mess with success, I did exactly the same thing as I did in my christmas tree post, and took a capture at 433.92MHz since it was in the middle of the band, and immediately got deja-vu. Not only was the signal at 433.92MHz, but throwing the packet into inspectrum gave me the identical plot of the OOK encoding scheme.

Not just similar – identical. The only major difference was the baud rate and bit structure of the packets, and the only minor difference was the existence of what I think is a wakeup preamble packet (of all zeros), rather than a preamble symbol that lasted longer than usual PHY symbol (which makes this pager system a bit easier to work with than my tree, IMHO).

Getting down to work, I took some measurements over the course of a few packets and was able to determine the symbol duration was somewhere around 858 microseconds (0.000858 seconds per symbol), which is a weird number, but maybe I’m slightly off or there’s some larger math I’m missing that makes this number satisfyingly round (internal low cost crystal clock or something? I assume this is some hardware constraint with the pager?)

Anyway, good enough. Moving along, let’s try our hand at a demod – let’s just assume it’s all the same as the christmas tree post and demod ones and zeros the same way here. That gives us 26 bits:

00001101110000001010001000

Now, I know we need at least 10 bits for the base station ID, some number of bits for the pager ID, and some bits for the command. This was a capture of me hitting “call” from a base station ID of 55 to a pager with the ID of 10, so let’s blindly look for 10 bit chunks with the numbers we’re looking for:

0000110111 0000001010 001000

Jeez. First try. 10 bits for the base station ID (55 in binary is 0000110111), 10 bits for the pager ID (10 in binary is 0000001010), which leaves us with 6 bits for a command (and maybe something else too?) – which is 8 here. Great, cool, let’s work off that being the case and revisit it if we hit bugs.
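
That layout is easy to sanity-check with a few lines of Python:

def encode_packet(base_id, argument, command):
    # Inferred TD-158 layout: 10-bit base station ID, 10-bit argument
    # (e.g. the pager ID), and a 6-bit command, 26 bits in total.
    return f"{base_id:010b}{argument:010b}{command:06b}"

# The capture above: base station 55 calling pager 10 (command 8)
assert encode_packet(55, 10, 8) == "00001101110000001010001000"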

Besides our data packet, there’s also a “preamble” packet that I’ll add in, in case it’s used for signal detection or wakeup or something – which is fairly easy to do since it’s the same packet structure as the above, just all zeros. Very kind of them to leave it with the same number of bits and encoding scheme – it’s nice that it can live outside the PHY.

Once I got here, I wrote a quick and dirty modulator, and was able to ring up pagers! Unmitigated success and good news – only downside was that it took me a single night, and not the multi-week adventure I was looking for. Well, let’s finish the job and document what we’ve found for the sake of completeness.

Boxing everything up

My best guess on the packet structure is as follows:

base id | argument | command

For a call or F2 operation, the argument is the Pager’s ID code, but for other commands it’s a value or an enum, depending. Here’s a table of my by-hand demodulation of all the packet types the base station produces:

Type   Cmd Id   Description
Call   8        Call the pager identified by the id in argument
Off    60       Request any pagers on the charger power off when power is removed, argument is all zero
F2     40       Program a pager to the specified Pager ID (in argument) and base station
F3     44       Set the reminder duration in seconds specified in argument
F4     48       Set the pager's beep mode to the one in argument (0 is disabled, 1 is slow, 2 is medium, 3 is fast)
F5     52       Set the pager's vibration mode to the one in argument (0 is disabled, 1 is enabled)

Kitchen’s closed for the night

I’m not going to be publishing this code since I can’t think of a good use anyone would have for this besides folks using a low cost SDR and annoying local restaurants; but there’s enough here for folks who find this interesting to try modulating this protocol on their own hardware if they want to buy their own pack of pagers and give it a shot, which I do encourage! It’s fun! Radios are great, and this is a good protocol to hack with – it’s really nice.

All in all, while this wasn’t the multi-week adventure I was looking for, it was still a great exercise and a fun reminder that I’ve come a long way from when I started. It felt a lot like cheating since I was able to infer a lot about the PHY because I’d seen it before, but it was still a great time. I may grab a few more restaurant pagers and see if I can find one with a more exotic PHY to emulate next. I mean why not, I’ve already got the thermal printer libraries working 🖨️

Categories: FLOSS Project Planets

Kirigami tutorial now ported to Qt6

Planet KDE - Thu, 2024-06-13 20:00
After three months, KDE’s Kirigami tutorial has been ported to Qt6. In case you are unaware of what Kirigami is:

  • Qt provides two GUI technologies to create desktop apps: QtWidgets and QtQuick
  • QtWidgets uses only C++ while QtQuick uses QML (plus optional C++ and JavaScript)
  • Kirigami is a library made by KDE that extends QtQuick and provides a lot of niceties and quality-of-life components

Strictly speaking there weren’t that many API changes to Kirigami.
Categories: FLOSS Project Planets

Matthew Palmer: Information Security: "We Can Do It, We Just Choose Not To"

Planet Debian - Thu, 2024-06-13 20:00

Whenever a large corporation disgorges the personal information of millions of people onto the Internet, there is a standard playbook that is followed.

“Security is our top priority”.

“Passwords were hashed”.

“No credit card numbers were disclosed”.

record scratch

Let’s talk about that last one a bit.

A Case Study

This post could have been written any time in the past… well, decade or so, really. But the trigger for my sitting down and writing this post is the recent breach of wallet-finding and criminal-harassment-enablement platform Tile. As reported by Engadget, a statement attributed to Life360 CEO Chris Hulls says

The potentially impacted data consists of information such as names, addresses, email addresses, phone numbers, and Tile device identification numbers.

But don’t worry though; even though your home address is now public information

It does not include more sensitive information, such as credit card numbers

Aaaaaand here is where I get salty.

Why Credit Card Numbers Don’t Matter

Describing credit card numbers as “more sensitive information” is somewhere between disingenuous and a flat-out lie. It was probably included in the statement because it’s part of the standard playbook. Why is it part of the playbook, though?

Not being a disaster comms specialist, I can’t say for sure, but my hunch is that the post-breach playbook includes this line because (a) credit cards are less commonly breached these days (more on that later), and (b) it’s a way to insinuate that “all your financial data is safe, no need to worry” without having to say that (because that statement would absolutely be a lie).

The thing that not nearly enough people realise about credit card numbers is:

  1. The credit card holder is not usually liable for most fraud done via credit card numbers; and

  2. In terms of actual, long-term damage to individuals, credit card fraud barely rates a mention. Identity fraud, Business Email Compromise, extortion, and all manner of other unpleasantness is far more damaging to individuals.

Why Credit Card Numbers Do Matter

Losing credit card numbers in a data breach is a huge deal – but not for the users of the breached platform. Instead, it’s a problem for the company that got breached.

See, going back some years now, there was a wave of huge credit card data breaches. If you’ve been around a while, names like Target and Heartland will bring back some memories.

Because these breaches cost issuing banks and card brands a lot of money, the Payment Card Industry Security Standards Council (PCI-SSC) and the rest of the ecosystem went full goblin mode. Now, if you lose credit card numbers in bulk, it will cost you big. Massive fines for breaches (typically levied by the card brands via the acquiring bank), increased transaction fees, and even the Credit Card Death Penalty (being banned from charging credit cards), are all very big sticks.

Now Comes the Finding Out

In news that should not be surprising, when there are actual consequences for failing to do something, companies take the problem seriously. Which is why “no credit card numbers were disclosed” is such an interesting statement.

Consider why no credit card numbers were disclosed. It’s not that credit card numbers aren’t valuable to criminals – because they are. Instead, it’s because the company took steps to properly secure the credit card data.

Next, you’ll start to consider: if the credit card numbers were secured, why wasn’t the personal information that did get disclosed similarly secured? Information that is far more damaging to the individuals to whom it relates than credit card numbers.

The only logical answer is that it wasn’t deemed financially beneficial to the company to secure that data. The consequences of disclosure for that information isn’t felt by the company which was breached. Instead, it’s felt by the individuals who have to spend weeks of their life cleaning up from identity fraud committed against them. It’s felt by the victim of intimate partner violence whose new address is found in a data dump, letting their ex find them again.

Until there are real, actual consequences for the companies which hemorrhage our personal data (preferably ones that have “percentage of global revenue” at the end), data breaches will continue to happen. Not because they’re inevitable – because as credit card numbers show, data can be secured – but because there’s no incentive for companies to prevent our personal data from being handed over to whoever comes along.

Support my Salt

My salty takes are powered by refreshing beverages. If you’d like to see more of the same, buy me one.

Categories: FLOSS Project Planets

GNU Taler news: Privacy-preserving Subscriptions, Discounts and Tax Deductable Donations

GNU Planet! - Thu, 2024-06-13 18:00
Two independent bachelor theses bring new privacy-focused features to GNU Taler. Christian Blättler designed and implemented token-based subscriptions and discounts in Taler, while Lukas Matyja and Johannes Casaburi's thesis introduces the Donau system, a new type of donation authority.
Categories: FLOSS Project Planets

Python Morsels: Data structures contain pointers

Planet Python - Thu, 2024-06-13 17:20

Data structures, like variables, contain references to objects, rather than the objects themselves.

Table of contents

  1. Referencing the same object in multiple places
  2. Data structures store references, not objects
  3. Avoid referencing the same mutable object
  4. An ouroboros: A list that contains itself
  5. Python's data structures contain pointers to objects

Referencing the same object in multiple places

Let's point a variable row to a list of three zeroes:

>>> row = [0, 0, 0]

Now let's make a new variable that points to a list-of-lists:

>>> boat = [row, row, row]

We now have a list of three lists, each with three zeroes in it:

>>> boat
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]

What would happen if we look up index 1, and then index 1 again, and change that to the number 1?

>>> boat[1][1] = 1

What do you think might happen? What will change in our lists?

We're looking up the second list, and then the second value in the second list, and assigning that value to 1. So we've asked to change the middle item in the middle list to the number 1.

That's not quite what happens though:

>>> boat
[[0, 1, 0], [0, 1, 0], [0, 1, 0]]

Instead, we changed the middle number in all three of our inner lists.
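
We can make the aliasing visible with a quick identity check: all three inner lists are the same object:

>>> boat[0] is boat[1] is boat[2]
True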

Why did this happen?

Well...

Data structures store references, not objects

Lists in Python don't actually …

Read the full article: https://www.pythonmorsels.com/data-structures-contain-pointers/
Categories: FLOSS Project Planets

mark.ie: My LocalGov Drupal contributions for week-ending June 14th, 2024

Planet Drupal - Thu, 2024-06-13 12:00

Here's what I've been working on for my LocalGov Drupal contributions this week. Thanks to Big Blue Door for sponsoring the time to work on these.

Categories: FLOSS Project Planets

The Drop Times: Montreal Welcomes Evolve Drupal: A Premier Event Shaping the Future of Web Development

Planet Drupal - Thu, 2024-06-13 08:54
Montreal is set to host Evolve Drupal tomorrow, a premier event dedicated to the ever-evolving world of Drupal. With over 30 talks spread across three tracks—Digital, Tech, and Design Summit—attendees will have the opportunity to delve into the future of web development, gain wisdom from past projects, expand their professional networks, and grow as a vibrant community of professionals.
Categories: FLOSS Project Planets

GSoC '24 Progress: Week 1 and 2

Planet KDE - Thu, 2024-06-13 08:42

Hi! It has been over two weeks since the coding period began. In this blog post, I will provide a brief summary of my work during the first two weeks.

After spending some time reviewing the code, I decided to start by refactoring the existing code related to ASS format subtitles. This has two main goals: first, to enable Kdenlive to read as much information as possible from ASS subtitles (specifically, the features supported by libass) and load it into memory; and second, to ensure that Kdenlive can save all this information back to the file. Since SubtitleModel already contains a significant amount of code for ASS format subtitles, my work mainly involved refining and expanding upon this existing code while maintaining compatibility.

So far, I have accomplished the following:

  • Added initial support for reading and saving embedded fonts in ASS subtitles
  • Optimized the storage method for subtitle styles
  • Migrated from V4Style to V4+Style

Tasks still to be completed:

  • Modify m_subtitleList to accommodate more information.
  • Write unit tests for each feature

Once these steps are completed, we will have more comprehensive support for ASS format subtitles, marking the end of this phase of the ASS code refactoring. The next focus will be on refactoring the functionality for modifying subtitle styles. These two parts will be my primary focus for the next two weeks. Stay tuned!

Categories: FLOSS Project Planets

When Open Source isn't: Floorp, FUTO...

Planet KDE - Thu, 2024-06-13 08:30

A strange fact recently came into my purview – many users and developers don’t know what Open Source is.

Some time ago, there was a controversy regarding the Floorp web browser (a fork of Firefox) going closed source.

The cause for this was that some company forked Floorp and made a browser based on it. I’ll not comment on the irony that the author of a fork of a project complained that someone forked his project.

Obviously, this triggered a storm of negative reactions on quite a few platforms where fans of Free Software / Open Source software hang out.

The developer responded that this is just temporary, and that the browser will soon be Open Source again. After a while, the repositories became public again and all was fine.

The developer said that Floorp is again Open Source, the angry mob said good, the Floorp browser is again Open Source. And every discussion about Floorp got a plethora of comments that people should stop complaining, that Floorp is Open Source, and that the previous situation was just a misunderstanding.

Open Source

The term Open Source is well defined at opensource.org.

Making the source code of a program publicly available is not enough for something to be Open Source. Having an army of people saying that something is Open Source is not enough for something to be Open Source.

If the license that the code is published under doesn’t conform to the criteria published on opensource.org, the program is not Open Source. A program whose license contains the following sentence is, by definition, not Open Source:

You may not use or distribute this Software or any derivative works in any form for commercial purposes. Examples of commercial purposes would be running business operations, licensing, leasing, or selling the Software, or distributing the Software for use with commercial products.

Floorp private components/LICENSE

This is strangely worded, as it looks like you cannot use the Floorp web browser to access, for example, a work e-mail account, since that would be using it for commercial purposes. This is likely not what the license author intended – the intent of the license is to disallow creating commercial products by forking Floorp.

While it is a valid desire of the author not to have somebody else profit from his work, it is precisely this restriction that makes the Floorp web browser not Open Source.

You can call it ‘source available’, you can call it ‘fair code’, but you cannot call it Open Source.

Update: A few days after this blog was published, Floorp moved the code from the private submodule to the main repository, so it should be Open Source again. Let’s hope it stays that way.

Redefining Open Source

This blog post should have been written when the Floorp thing happened, but I thought it was just a random incident not worth the extra attention.

It seems I was wrong. It is something that people should start paying attention to.

A lot of people – developers and users alike, intentionally or not – misuse the term Open Source, and some of them, like FUTO, even go so far as to redefine it and create their own incompatible ‘The Open Source Definition’ to suit their own purpose.

Open Source Confusion Cases

Now, the main purpose of this post isn’t for me to let off some steam, but to share a great project started by Dan Brown that attempts to find and list all projects which claim to be Open Source while their licenses say otherwise.

It can be found on his GitHub account.

Share your views to FSFE

Albert pointed out that FSFE is also interested in this topic:

The FSFE is looking for examples and thoughts about openwashing; if you feel like it, I’m sure they’ll welcome your input: mastodon.social/@johas/112524760073638652

You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you’re into that sort of thing.
Categories: FLOSS Project Planets

Python Software Foundation: For your consideration: Proposed bylaws changes to improve our membership experience

Planet Python - Thu, 2024-06-13 08:22

This year, as part of our annual election process, the Python Software Foundation Board is offering three bylaws changes for our Members to vote on. These changes are all centered on our membership experience: making it simpler to qualify as a Member for Python-related volunteer work, making it easier to vote, and allowing the Board more options to keep our membership safe and enforce the Code of Conduct.

Voting Members will be asked to vote on these items during the July Board of Directors election. If the majority of voting members vote in favor of any of the changes, those changes will be incorporated into the bylaws and go into immediate effect.

We're sharing these changes with you today as an opportunity to understand why these changes are being proposed, and to give you an opportunity to ask questions of the Board before you vote, either by emailing psf-elections@pyfound.org or membership-wg@pyfound.org, or by responding to the thread on the PSF discussions site.

The text of the changes is available from the following links, all of which show visual representations of additions and deletions to our canonical bylaws repository:

The Board has carefully considered these changes and strongly encourages all Members to vote in favor of them. The rest of this post explains the changes, and why we're putting them to our Voting Members.

Change 1: Merging Contributing and Managing member classes

Since 2017, when we adopted our current membership model, we've had four classes of membership: Supporting (recognition for being a monetary donor), Managing (recognition for volunteer work in the PSF or in community events), Contributing (recognition for volunteer work on open source software), and Fellow (recognition of long-term service to the mission of the PSF).

For almost as long as our membership options have existed, there's been confusion about the distinction between Managing and Contributing members. Both require 5 hours a month of volunteer service, but the distinction between community work and work on software is increasingly out of step with how our community thinks about contributing.

In the future, we want community members to qualify for PSF membership by participating and giving back to the community, either through donating, through volunteer work, or in recognition of long service, without distinguishing between code contributions and non-code contributions.

With this proposed bylaws change, we would merge the Managing and Contributing membership classes. All Managing Members would become Contributing Members, and we would no longer have a Managing Member class. Further, this change would explicitly allow for works of authorship (including documentation, books, or blogs) beyond software to count as volunteer work, provided those works are openly licensed.

We think this would significantly simplify the membership categories, reduce confusion, and make it easier for volunteers who both run events and write code to decide which membership class best applies to them.

Change 2: Simplifying the voter affirmation process by treating past voting activity as intent to continue voting

Our bylaws (section 4.2) require every Member to affirm their intention to vote in an election before they can be issued a ballot. This is intended to ensure that our election reaches a quorum (i.e. one third of ballots issued are actually used to vote), and is therefore valid by our bylaws. Due to technical limitations, we only started requiring this affirmation in practice last year. Since then, we've received feedback that the affirmation process has unintentionally excluded some people who had intended to vote.

It is the Board's intention to make it as easy as possible to vote in our elections; however, we must balance this against legal obligations that require us to maintain a quorum in our elections.

This bylaws change would allow us to treat any member who voted in the immediately previous year's election as having affirmed their intention to vote. The Board believes that voting in a PSF election is a good indicator that a member is likely to vote again, and including them in the quorum calculation is unlikely to put our quorum at risk.

While there may be technical limitations to us implementing this change, the change is necessary to allow the Foundation to alter the affirmation procedure at all. It is the Board's intention that this change would be implemented for the 2025 election should the Bylaws change be accepted by the membership during the 2024 election.

Change 3: Allow for removal of Fellows by a Board vote in response to Code of Conduct violations, removing the need for a vote of the membership

Currently PSF Fellows are awarded membership for life, as a reward for exceptional service to the mission of the Foundation. There are deliberately very few ways to remove a Fellow from the membership.

If a Fellow were found to have violated the Foundation's Code of Conduct in a way that warranted termination of their membership, currently the only way to remove them would be to put their removal to a vote of all Voting Members (per section 4.15 of our Bylaws). We believe that requiring a vote of the membership to remove a Code of Conduct violator from our community would subject members of the community, including people directly impacted by that violator's behavior, to undue distress.

On the other hand, we believe there is significant legal risk that could arise from Code of Conduct violators known to the PSF using their status as a PSF Fellow to enhance their reputation. In cases where the Foundation needs to act in order to continue being able to serve the Python community effectively, we currently have no choice but to name a known Code of Conduct violator as part of a vote put to the membership.

In practice, this requirement limits our ability to effectively enforce our Code of Conduct. This is a disservice to our community.

This proposed change gives the Board, by a majority vote, the ability to terminate the membership of a Fellow as a consequence of breaching any written policy of the Foundation, specifically including our Code of Conduct. This change would allow the Board to act in cases where there is a clear need for a problematic community member to no longer be affiliated with the Foundation, without further perpetuating the trauma caused by that community member's actions.

Categories: FLOSS Project Planets

Drupal Core News: New community initiative: Frontend bundler

Planet Drupal - Thu, 2024-06-13 02:55

Adapted from: https://www.sitback.com.au/insights/article/working-with-javascript-in-d...

As far as I understand it, community initiatives exist because enough people say they’re interested and start working towards the initiative’s goals.

So I thought I would try starting an initiative to solve a problem I see pop up fairly regularly:

Basically: why isn’t there a standard way to install JavaScript dependencies?

Some modules have tried asset-packagist, but there are myriad problems with that.

I had a whinge about it in #australia-nz: https://drupal.slack.com/archives/C45SW3FLM/p1712295645835869 and took up larowlan’s generous offer to try to get a new initiative off the ground.

He introduced me to Théodore (@nod_), the frontend framework manager, and we three had a short discussion around suitable directions to take. This initiative would not be happening without their help and guidance, thank you so much Lee and Théodore 🙇‍♂️

We explored the idea of using import-maps to let the browser handle module imports and agreed that the cascading downloads would be an unacceptable performance burden on non-admin pages.

The result of that meeting was the idea of trying out publishing Drupal modules on npm, or at least an npm-like repository, since @larowlan mentioned that GitLab can provide one. I got started and wrote some scripts for gathering package names and putting them in a central package.json to be downloaded by npm/yarn/whatever.

Then @larowlan pointed out https://github.com/php-forge/foxy, which I had seen but didn’t really understand the power of. What I didn’t understand was that you could define a package.json file inside a composer package, make a couple of tweaks to composer.json, and, without publishing any kind of npm package, foxy would find it and treat it like one.
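
To make that concrete, the opt-in might look roughly like this (a sketch with a made-up module name and version constraints, not files taken from the prototype):

my_module/composer.json:

{
    "name": "drupal/my_module",
    "type": "drupal-module",
    "require": {
        "php-forge/foxy": "^0.1"
    }
}

my_module/package.json:

{
    "name": "my_module",
    "dependencies": {
        "sortablejs": "^1.15.0"
    }
}

The idea, as above, is that foxy spots the package.json inside the composer package and wires its dependencies into the project’s JavaScript install, as if the module had been published to npm.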

Cue a couple of weeks of messing around with foxy, composer and vite, and I have created a working prototype for compiling multiple Drupal modules (including custom modules if desired) in a project, and routing the library system to the new entry points:

https://github.com/darvanen/drupal-js

It requires a few things:

Any module that wants to opt in:

  • Adds php-forge/foxy to require or require-dev in composer.json.

  • Adds a module-name.foxy.yml file to represent the library state when using foxy.

Site builders:

  • Have one or more modules that use foxy in their project

  • Require and enable drupal/foxy

  • Add a provided vite.config.js to their project (could this be done by the foxy module?)

  • Set up a way to run vite build (or their own implementation; one possible wiring is sketched after this list):

    • post-install/update commands

    • pipeline?

    • manually?
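
For the post-install/update command option, one possible wiring in the project’s composer.json looks like this (illustrative only; the prototype may handle this differently):

{
    "scripts": {
        "post-install-cmd": "npx vite build",
        "post-update-cmd": "npx vite build"
    }
}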

This is where you come in

The prototype is just a starting point. I want us to come together to define a new way of working with JavaScript in Drupal that everyone can and will want to use, similar to how drupal-composer/drupal-project pioneered effective usage of composer and was eventually adopted by core. I intend to keep working on this but I want it to be driven by the community, hence the initiative.

Things you can do right now:

  • Spread the word, recruit more people to the initiative, especially if they maintain a module with JS dependencies.

  • Try out the prototype and give feedback - no change is too big to explore.

  • Join the #frontend-bundler-initiative channel to chat about ways forward - bikeshedding is welcome here, we used to call that brainstorming ;)

  • If you have a module with JS dependencies: speak up to have your module included in the prototype, or make a PR.

  • Contribute to the foxy module to get it to import css/image/asset dependencies from the vite manifest.

So what do you say, are you in? Come join me in the channel!

Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Performing an automated migration

Planet Drupal - Thu, 2024-06-13 02:39

Transition your Drupal 7 site to Drupal 10 with ease using the automated migration approach. This latest article from our comprehensive migration guide walks you through configuring the Drupal 10 site, enabling migration modules, and utilizing the Migrate Drupal UI for a smooth transition. Are you ready?
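
As a taste of the first steps: on a Drupal 10 site with drush available, enabling the core migration modules typically looks like this (a sketch of standard commands, not taken from the article):

drush en migrate migrate_drupal migrate_drupal_ui -y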

Read more
Categories: FLOSS Project Planets

Russ Allbery: Security review of tag2upload

Planet Debian - Thu, 2024-06-13 01:03

For some time now, Debian has been discussing a possible enhancement to the way that Debian packages are uploaded to the archive. The basic idea is to allow a package upload to be triggered by pushing a signed tag, with some structured metadata, to Salsa, the instance of GitLab that Debian provides for packaging repositories. This would allow Debian package maintainers to use a more typical Git-first workflow, where releases are triggered by Git tags and the release artifacts are built in a clean CI environment, while still enforcing the existing Debian rules about who is allowed to upload packages.
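
In rough strokes, the maintainer-side workflow would look something like this (an illustrative sketch; the exact tag naming conventions and metadata format are defined by the tag2upload design and are not reproduced here):

git tag -s debian/1.2-3 -m "<structured upload metadata>"   # signed tag; the tag message carries the metadata
git push origin debian/1.2-3                                # pushing to Salsa triggers the upload service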

As part of that effort, I recently completed a detailed security review of the tag2upload design. I sent it to debian-vote as part of the ongoing discussion, but have also posted it at the link above to give it a more permanent home.

This security review may be revised based on the discussion if people point out things that I missed.

Categories: FLOSS Project Planets
