Feeds

Arnaud Rebillout: Firefox: Moving from the Debian package to the Flatpak app (long-term?)

Planet Debian - Tue, 2024-04-02 20:00

First, thanks to Samuel Henrique for giving notice of recent Firefox CVEs in Debian testing/unstable.

At the time I didn't want to upgrade my system (Debian Sid) due to the ongoing t64 transition, so I decided I could install the Firefox Flatpak app instead, and why not stick to it long-term?

This blog post details all the steps, in case others want to go down the same road.

Flatpak Installation

Disclaimer: this section is hardly anything more than a copy/paste of the official documentation, and with time it will get outdated, so you'd better follow the official doc.

First things first, let's install Flatpak:

$ sudo apt update
$ sudo apt install flatpak

Then the next step is to add the Flathub remote repository, from where we'll get our Flatpak applications:

$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

And that's all there is to it! Now come the optional steps.

For GNOME and KDE users, you might want to install a plugin for the software manager specific to your desktop, so that it can support and manage Flatpak apps:

$ which -s gnome-software && sudo apt install gnome-software-plugin-flatpak
$ which -s plasma-discover && sudo apt install plasma-discover-backend-flatpak

And here's an additional check you can do, as it's something that bit me in the past: missing xdg-desktop-portal-* packages, which are required for Flatpak applications to communicate with the desktop environment. Just to be sure, you can check the output of apt search '^xdg-desktop-portal' to see what's available, and compare it with the output of dpkg -l | grep xdg-desktop-portal.
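Spelled out as commands, that comparison looks like this (the exact package names you'll see depend on your desktop environment and Debian release):

$ apt search '^xdg-desktop-portal'    # portal backends available in the archive
$ dpkg -l | grep xdg-desktop-portal   # portal packages currently installed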

As you can see, if you're a GNOME or KDE user, there's a portal backend for you, and it should be installed. For reference, this is what I have on my GNOME desktop at the moment:

$ dpkg -l | grep xdg-desktop-portal | awk '{print $2}'
xdg-desktop-portal
xdg-desktop-portal-gnome
xdg-desktop-portal-gtk

Install the Firefox Flatpak app

This is trivial, but still, there's a question I've always asked myself: should I install applications system-wide (aka. flatpak --system, the default) or per-user (aka. flatpak --user)? Turns out, this question is answered in the Flatpak documentation:

Flatpak commands are run system-wide by default. If you are installing applications for day-to-day usage, it is recommended to stick with this default behavior.

Armed with this new knowledge, let's install the Firefox app:

$ flatpak install flathub org.mozilla.firefox
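If you want to double-check that the installation went through before launching anything, flatpak can list the installed applications (just a sanity check, nothing more):

$ flatpak list --app   # org.mozilla.firefox should show up here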

And that's about it! We can give it a go already:

$ flatpak run org.mozilla.firefox

Data migration

At this point, running Firefox via Flatpak gives me an "empty" Firefox. That's not what I want; instead I want my usual Firefox, with a gazillion tabs already opened, a few extensions, bookmarks and so on.

As it turns out, Mozilla provides a brief doc for data migration, and it's as simple as moving the Firefox data directory around!

To clarify, we'll be copying data:

  • from ~/.mozilla/ -- where the Firefox Debian package stores its data
  • into ~/.var/app/org.mozilla.firefox/.mozilla/ -- where the Firefox Flatpak app stores its data

Make sure that all Firefox instances are closed, then proceed:

# BEWARE! Below I'm erasing data!
$ rm -fr ~/.var/app/org.mozilla.firefox/.mozilla/firefox/
$ cp -a ~/.mozilla/firefox/ ~/.var/app/org.mozilla.firefox/.mozilla/

To avoid confusing myself, it's also a good idea to rename the local data directory:

$ mv ~/.mozilla/firefox ~/.mozilla/firefox.old.$(date --iso-8601=date)

At this point, flatpak run org.mozilla.firefox takes me to my "usual" everyday Firefox, with all its tabs opened, pinned, bookmarked, etc.

More integration?

After following all the steps above, I must say that I'm 99% happy. So far everything works as before, I haven't hit any issues, and I don't even notice that Firefox is running via Flatpak; it's completely transparent.

So where's the 1% of unhappiness? The « Run a Command » dialog from GNOME, the one that shows up via the keyboard shortcut <Alt+F2>. This is how I start my GUI applications, and I usually run two Firefox instances in parallel (one for work, one for personal), using the firefox -p <profile> command.

Given that I ran apt purge firefox before (to avoid confusing myself with two installations of Firefox), now the right (and only) way to start Firefox from the command line is to type flatpak run org.mozilla.firefox -p <profile>. Typing that every time is way too cumbersome, so I need something quicker.

Seems like the most straightforward solution is to create a wrapper script:

$ cat /usr/local/bin/firefox
#!/bin/sh
exec flatpak run org.mozilla.firefox "$@"
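For completeness, here's one way to put that wrapper in place, assuming /usr/local/bin is on your PATH (the Debian default):

$ sudoedit /usr/local/bin/firefox       # paste the two lines above, then save
$ sudo chmod +x /usr/local/bin/firefox  # the script needs the executable bit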

And now I can just hit <Alt+F2> and type firefox -p <profile> to start Firefox with the profile I want, just as before. Neat!

Looking forward: system updates

I usually update my system manually every now and then, via the well-known pair of commands:

$ sudo apt update
$ sudo apt full-upgrade

The downside of introducing Flatpak, i.e. introducing another package manager, is that I'll need to learn new commands to update the software that comes via this channel.

Fortunately, there's really not much to learn. From flatpak-update(1):

flatpak update [OPTION...] [REF...]

Updates applications and runtimes. [...] If no REF is given, everything is updated, as well as appstream info for all remotes.

Could it be that simple? Apparently yes, the Flatpak equivalent of the two apt commands above is just:

$ flatpak update

Going forward, my options are:

  1. Teach myself to run flatpak update in addition to apt update, manually, every time I update my system (see the sketch after this list).
  2. Go crazy: let something automatically update my Flatpak apps, behind my back and without my consent.
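For option 1, a small wrapper script could at least collapse the two package managers into a single command. A minimal sketch (the name update-all is just an example):

$ cat /usr/local/bin/update-all
#!/bin/sh
# Update Debian packages first, then Flatpak apps
set -e
sudo apt update
sudo apt full-upgrade
flatpak update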

I'm actually tempted to go for option 2 here, and I wonder if GNOME Software will do that for me, provided that I installed gnome-software-plugin-flatpak, and that I checked « Software Updates -> Automatic » in the Settings (which I did).

However, I didn't find any documentation regarding what this setting really does, so I can't say whether it will only download updates, or also install them. I'd be happy if it automatically installs new versions of Flatpak apps, but at the same time I'd be very unhappy if it automatically upgrades my Debian system...

So we'll see. Enough for today, hope this blog post was useful!

Categories: FLOSS Project Planets

Dirk Eddelbuettel: ulid 0.3.1 on CRAN: New Maintainer, Some Polish

Planet Debian - Tue, 2024-04-02 19:14

Happy to share that ulid is now (back) on CRAN. It provides universally unique identifiers that are lexicographically sortable, which improves on the more well-known UUID generators.

ulid is a neat little package put together by Bob Rudis a few years ago. It had recently drifted off CRAN so I offered to brush it up and re-submit it. And as tooted earlier today, it took just over an hour to finish that (after the lead-up work I had done, including prior email with CRAN in the loop, the repo transfer from Bob's to my ulid repo, plus of course a wee bit of actual maintenance; see below for more).

The NEWS entry follows.

Changes in version 0.3.1 (2024-04-02)
  • New Maintainer

  • Deleted several repository files no longer used or needed

  • Added .editorconfig, ChangeLog and cleanup

  • Converted NEWS.md to NEWS.Rd

  • Simplified R/ directory to one source file

  • Simplified src/ removing redundant Makevars

  • Added ulid() alias

  • Updated / edited roxygen and README.md documentation

  • Removed vignette which was identical to README.md

  • Switched continuous integration to GitHub Actions

  • Placed upstream (header-only) library into src/ulid/

  • Renamed single interface file to src/wrapper

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

The Drop Times: For Drupal to Remain Well and Alive: An Exclusive Conversation with Tim Doyle

Planet Drupal - Tue, 2024-04-02 17:09
Discover the future of Drupal and the open-source community in our exclusive interview with Tim Doyle, CEO of the Drupal Association. Learn about the innovative Open Web Alliance, Drupal's journey as a Digital Public Good, and the strategic plans shaping the platform's future. Dive into Tim's vision for a collaborative, inclusive, and technologically advanced Drupal ecosystem that champions the open web. Don't miss these insightful revelations and strategies set to redefine the landscape of open-source content management systems.
Categories: FLOSS Project Planets

Drupal Association blog: DrupalCon Portland 2024: The Nonprofit Summit Agenda is here!

Planet Drupal - Tue, 2024-04-02 15:43

I am pleased to share the schedule for the upcoming 2024 DrupalCon Nonprofit Summit. There is a special DrupalCon rate ($395.00) for nonprofit staff and those affiliated with nonprofits, and the summit is included free with your ticket! You can register here.

Relying on community feedback and past experience, we put together an agenda that we hope encompasses the spirit of open source camaraderie and will provide nourishment for the mind and soul. We tried to balance the technical with the strategic, and networking with expertise. We look forward to seeing you there.

Agenda

9:00 am - 9:15 am: Welcome and overview

Julia Kranzthor

9:15 am - 10:30 am: Why Should Nonprofits Use Drupal? The Case for Owning Your Own Data and Using Drupal to Manage It.

Fireside Chat with Tim Lehnen, Johanna Bates, and Jess Snyder

10:30 am - 10:45 am: Break

10:45 am - 11:00 am: Sponsor Case Study #1

11:00 am - 12:15 pm: Breakout Sessions

Round Table Discussions

  • Using Drupal to Promote Engagement With Your Audience: Tools, Challenges, and Measurement

  • Web Analytics for Nonprofits: Google Analytics 4 and Alternatives

  • Thriving as a Lone Wolf: Navigating the Challenges of Being the Only Drupalist at a Nonprofit

  • Migrating from Drupal 7 to Drupal 10

  • Managing a Major Website Rebuild/Migration

  • Birds of a Feather: Topic to be determined on-site

12:15 pm - 1:15 pm: Lunch

A time for relaxing, and networking if you feel like it.

1:15 pm - 2:30 pm: Breakout Sessions

Round Table Discussions

  • Web Accessibility and Site Governance

  • Using Drupal in Small Nonprofits with Limited Staff and Financial Resources

  • Preparing for Impact on Your Website Redesign

  • Development and Hosting Challenges for Nonprofits

  • Leveraging CiviCRM with Drupal: Open Source CRM for Contact Management and Engagement Tracking

  • Birds of a Feather: Topic to be determined on-site

2:30 pm - 2:45 pm: Sponsor Case Study #2

2:45 pm - 3:00 pm: Break

3:00 pm - 4:00 pm: Drupal 10 Migration: How to Stop Kicking the Can (of Worms) Down the Road

Panel discussion with Tim Lehnen and Fran Garcia-Linares

4:00 pm - 5:00 pm: Optional Networking

Wrap up conversations, visit with colleagues.

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #623 (April 2, 2024)

Planet Python - Tue, 2024-04-02 15:30

#623 – APRIL 2, 2024
View in Browser »

Reading and Writing WAV Files in Python

In this tutorial, you’ll learn how to work with WAV audio files in Python using the standard-library wave module. Along the way, you’ll synthesize sounds from scratch, visualize waveforms in the time domain, animate real-time spectrograms, and apply special effects to widen the stereo field.
REAL PYTHON

Designing a Pure Python Web Framework

This blog post introduces Reflex, a Python web framework. It covers what makes Reflex different from other frameworks and shows you sample starting code. See also the associated HN Discussion.
NIKHIL RAO

Creating an Autopilot in X-Plane Using Python

X-Plane is a flight simulator, and Austin is using Python to create an autopilot with proportional integral derivative controllers. Read on to see how it's done.
AUSTIN

Mojo Goes Open Source

MODULAR

PyPI Hiring a Support Specialist (Remote)

PYPI

Discussions

Draft PEP: Sealed Decorator for Static Typing

PYTHON DISCUSS

What Are Some Good Python Codebases to Read?

LOBSTERS

Articles & Tutorials

Using Python in Bioinformatics and the Laboratory

How is Python being used to automate processes in the laboratory? How can it speed up scientific work with DNA sequencing? This week on the show, Chemical Engineering PhD Student Parsa Ghadermazi is here to discuss Python in bioinformatics.
REAL PYTHON podcast

Handling Database Migrations With Alembic

Alembic is a change control tool for database schemas in SQLAlchemy. This article looks at the high-level architecture of how Alembic works, how to add it to your project, and some common workflows you’ll encounter.
PAUL ESCH-LAURENT • Shared by Michael Herman

Python Tricks: A Buffet of Awesome Python Features

Discover Python’s best practices with simple examples and start writing even more beautiful + Pythonic code. “Python Tricks: The Book” shows you exactly how. You’ll master intermediate and advanced-level features in Python with practical examples and a clear narrative. Get the book + video bundle 33% off →
DAN BADER sponsor

Python in List of Best Languages to Learn

The US Bureau of Labor Statistics has identified the top four languages for programmers to learn, and Python made the list. The median annual wage of programmers in the US is expected to rise 25% in the next 5 years.
FORTUNE

Finding Python Easter Eggs

Python has its fair share of hidden surprises, commonly known as Easter eggs. From clever jokes to secret messages, these little mysteries are often meant to be discovered by curious geeks like you!
REAL PYTHON course

PyPI Temporarily Halted New Users and Projects

To fend off a supply-chain attack, PyPI temporarily halted new users and projects for about 10 hours last week. This article discusses why, and the scourge of supply-chain attacks.
ARS TECHNICA

Broadcasting in NumPy

Broadcasting in NumPy is not the most exciting topic, but this article explores it from a narrative perspective. This is not your standard “broadcasting in NumPy” article!
STEPHEN GRUPPETTA • Shared by Stephen Gruppetta

A Better Python Cache for Slow Function Calls

The folks at Sweep AI needed something more persistent than Python’s lru_cache. This post talks about the design behind a file-based cache decorator they’ve recently released.
WILLIAM ZENG

Jupyter & IPython Terminology Explained

Are you trying to understand the differences between Jupyter Notebook, JupyterLab, IPython, Colab, and related terms? This article is for you.
DATA SCHOOL

How I Manage Python in 2024

This post covers the tools one developer uses in their day-to-day process. Read on for info about mise, uv, ruff, and more.
OUTLORE

Fixing a Bug in PyPy’s Incremental GC

A deep dive on hunting a tricky bug in the garbage collection code inside the alternate interpreter PyPy.
CARL FRIEDRICH BOLZ-TEREICK

Projects & Code

django-prose-editor: Rich Text Editing for Django

GITHUB.COM/MATTHIASK

pycountry: ISO Country, Language, Currency and More

GITHUB.COM/PYCOUNTRY

sqlelf: Explore ELF Objects Through the Power of SQL

GITHUB.COM/FZAKARIA

Python Post-Mortem Debugger

GITHUB.COM/COCOLATO • Shared by cocolato

botasaurus: Framework to Build Awesome Scrapers

GITHUB.COM/OMKARCLOUD

Events

Weekly Real Python Office Hours Q&A (Virtual)

April 3, 2024
REALPYTHON.COM

Canberra Python Meetup

April 4, 2024
MEETUP.COM

Sydney Python User Group (SyPy)

April 4, 2024
SYPY.ORG

PyCascades 2024

April 5 to April 9, 2024
PYCASCADES.COM

PyDelhi User Group Meetup

April 6, 2024
MEETUP.COM

Django Girls Ecuador 2024

April 6, 2024
OPENLAB.EC

Happy Pythoning!
This was PyCoder’s Weekly Issue #623.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Sven Hoexter: PKIX: pathLen Constraint on Root Certificates

Planet Debian - Tue, 2024-04-02 15:07

I recently came across an x509 P(rivate)KI Root Certificate which had a pathLen constraint set on the (self-signed) Root Certificate. Since that is not commonly seen, I looked around a bit to get a better understanding of how the pathLen basic constraint should be used.

Primary source is RFC 5280 section 4.2.1.9

The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path.

Since the Root is always self-issued it doesn't count towards the limit, and since it's the last certificate (or the first, depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constraint directly on a Root Certificate.
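To check what a given Root actually sets, openssl can print the basic constraints. Assuming the certificate is at hand as root.pem (a placeholder name), for a Root with a pathLen of 0 the output would look roughly like this:

$ openssl x509 -in root.pem -noout -text | grep -A1 'Basic Constraints'
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0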

Another relevant resource is the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4 "Root CA Basic Constraints" describes it as NOT RECOMMENDED for a Root CA.

Last but not least there is the awesome x509 Limbo project which has a section for validating pathLen constraints. Since the RFC 5280 based assumption is that self-signed certs do not count, they do not check the case of such a constraint on the Root itself, nor what implementations do about it. So the assumption right now is that they properly ignore it.

Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just don't do it.

Categories: FLOSS Project Planets

PyCon: PyCon US 2024: Call for Volunteers and Hatchery Registration now Open!

Planet Python - Tue, 2024-04-02 14:45

Looking to make a meaningful contribution to the Python community? Look no further than PyCon US 2024! Whether you're a seasoned Python pro or a newcomer to the community and looking to get involved, there's a volunteer opportunity that's perfect for you.

Sign-up for volunteer roles is done directly through the PyCon US website. This way, you can view and manage shifts you sign up for through your personal dashboard! You can read up on the different roles to volunteer for and how to sign up on the PyCon US website.

PyCon US is largely organized and run by volunteers. Every year, we need to fill over 300 onsite volunteer hours to ensure everything runs smoothly at the event. And the best part? You don't need to commit a lot of time to make a difference – some shifts are as short as one hour long! You can sign up for as many or as few shifts as you’d like. Even a couple of hours of your time can go a long way in helping us create an amazing experience for attendees.

Keep in mind that you need to be registered for the event to sign up for a volunteer role.

One important way to get involved is to sign up as a Session Chair or Session Runner. This is an excellent opportunity to meet and interact with speakers while helping to ensure that sessions run smoothly. And who knows, you might just learn something new along the way! You can sign up for these roles directly on the Talks schedule.

Volunteer your time at PyCon US 2024 and you’ll be part of a fantastic community that's passionate about Python programming, and you'll help us make this year's conference a huge success. Sign up today for the shifts that call to you and join the fun!

Hatchery Program

First introduced in 2018, the Hatchery program offers pathways for PyCon US attendees to introduce new tracks, activities, summits, demos, etc., at the conference—activities that all share and fulfill the Python Software Foundation’s mission within the PyCon US schedule.

Since its introduction, this program has “hatched” several new tracks that are now staples of our conference, including PyCon US Charlas, Mentored Sprints, and the Maintainer’s Summit. This year, we’ve received eight very compelling proposals. After careful consideration, we have selected four new programs, each of them unique and focused on different aspects of the Python community.

FlaskCon - Friday, May 17, 2024

Join us for a mini conference dedicated to Flask, its community and ecosystem, as well as related web technologies. Meet maintainers and community members, learn about how to get involved, and join us during the sprint days to contribute. Submit your talk proposal today!

Organized by David Lord, Phil Jones, Adam Englander, David Carmichael, Abdur-Rahmaan Janhangeer

Community Organizers Summit - Saturday, May 18, 2024

Do you organize a Conference, Meetup, User Group, Hackathon, or other community event in your area? Are you trying to start a group but don't know where to start? Whether you have 30 years of experience or are looking to create a new event, this summit is for you.

Join us for a summit of Presentations, Panels, and Breakout Sessions about organizing community events.

Organized by Mason Egger, Kevin Horn, and Heather White

Sign-up is required. Register to secure your spot.

Humble Data - Saturday, May 18, 2024

Are you eager to embark on a tech career but unsure where to start? Are you curious about data science? Taking the first steps in this area is hard, but you don’t have to do it alone. Join our workshop for complete beginners and get started in Python data science - even if you’ve never written a single line of code!

We invite those from underrepresented groups to apply to join us for a fun, supportive workshop that will give you the confidence to get started in this exciting area. You can expect plenty of exercises, as well as inspiring talks from those who were once in your shoes. You’ll cover the basics of programming in Python, as well as useful libraries and tools such as Jupyter notebooks, pandas, and Matplotlib.

In this hands-on workshop, you’ll work through a series of beginner-friendly materials at your own pace. You’ll work within small groups, each with an assigned mentor, who will be there to help you with any questions or whenever you get stuck. All you’ll need to bring is a laptop that can connect to the internet and a willingness to learn!

Organized by Cheuk Ting Ho and Jodie Burchell

Sign-up is required. Register to secure your spot.

Documentation Summit - Sunday, May 19, 2024

A full-day summit of talks and panel sessions inviting leaders in documentation to share their experience of what makes good documentation, with discussion of documentation tools such as Sphinx, MkDocs, and themes, common mistakes, and how to avoid them. Accessibility of documentation is also an important topic, so we will also cover talks or discussions regarding the accessibility of documentation.

This summit is aimed at anyone who cares about or is involved in any aspect of open source documentation, such as, but not limited to, technical writers, developers, developer advocates, project maintainers and contributors, accessibility experts, documentation tooling developers, and documentation end-users.

Organized by Cheuk Ting Ho

Sign-up is required. Register to secure your spot.

Register Now

Registration for PyCon US is now open, and all of the Hatchery programs are included as part of your PyCon US registration (no additional cost). Some of the programs require advance sign-up, in which case walk-ins will only be accepted if space is available. Please check each Hatchery program carefully to determine whether registration is required or not.

Head over to your PyCon US Dashboard to add any of the above Hatchery programs to your PyCon US registration. Don't worry, you can always change your mind and cancel later to open up the space for someone else!

Congratulations to all the accepted program organizers! Thank you for bringing forward your fresh ideas to PyCon US. We look forward to seeing you in Pittsburgh.
Categories: FLOSS Project Planets

OSI’s Response to NTIA ‘Dual Use’ RFC 3.27.2024

Open Source Initiative - Tue, 2024-04-02 14:00

March 27, 2024

Mr. Bertram Lee
National Telecommunications and Information Administration (NTIA)
U.S. Department of Commerce
1401 Constitution Avenue NW
Washington, DC 20230

RE: [Docket Number 240216-0052] Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights

Dear Mr. Lee:

The Open Source Initiative (“OSI”) appreciates the opportunity to provide our views on the above referenced matter. As steward of the Open Source Definition, the OSI sets the foundation for Open Source software, a global public good that plays a vital role in the economy and is foundational for most technology we use today. As the leading voice on the policies and principles of Open Source, the OSI helps build a world where the freedoms and opportunities of Open Source software can be enjoyed by all and supports institutions and individuals working together to create communities of practice in which the healthy Open Source ecosystem thrives. One of the most important activities of the OSI, a California public benefit 501(c)(3) organization founded in 1998, is to maintain the Open Source Definition for the good of the community.

The OSI is encouraged by the work of NTIA to bring stakeholders together to understand the lessons from the Open Source software experience in having a recognized, unified Open Source Definition that enables an ecosystem whose value is estimated at $8.8 trillion. As provided below in more detail, it is essential that federal policymakers encourage Open Source AI models to the greatest extent possible, and work with organizations like the OSI which is endeavoring to create a unified, recognized definition of Open Source AI.

The Power of Open Source

Open Source delivers autonomy and personal agency to software users which enables a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of Open Source is higher quality, better reliability, greater flexibility, lower cost, and an end to proprietary lock-in.

Open Source software is widely used across the federal government and in every critical infrastructure sector. “The Federal Government recognizes the immense benefits of Open Source software, which enables software development at an incredible pace and fosters significant innovation and collaboration.” For the last two decades, authoritative direction and educational resources have been given to agencies on the use, management and benefits of Open Source software.

Moreover, Open Source software has direct economic and societal benefits. Open Source software empowers companies to develop, test and deploy services, thereby substantiating market demand and economic viability. By leveraging Open Source, companies can accelerate their progress and focus on innovation. Many of the essential services and technologies of our society and economy are powered by Open Source software, including, e.g., the Internet.

The Open Source Definition has demonstrated that massive social benefits accrue when the barriers to learning, using, sharing and improving software systems are removed. The core criteria of the Open Source Definition – free redistribution; source code; derived works; integrity of the author’s source code; no discrimination against persons or groups; no discrimination against fields of endeavor; distribution of license; license must not be specific to a product; license must not restrict other software; license must be technology-neutral – have given users agency, control and self-sovereignty of their technical choices and a dynamic ecosystem based on permissionless innovation.

A recent study published by the European Commission estimated that companies located in the European Union invested around €1 billion in Open Source Software in 2018, which brought about a positive impact on the European economy of between €65 and €95 billion.

This success and the potency of Open Source software has for the last three decades relied upon the recognized unified definition of Open Source software and the list of Approved Licenses that the Open Source Initiative maintains.

OSI believes this “open” analog is highly relevant to Open Source AI as an emerging technology domain with tremendous potential for public benefit.

Distinguishing the Open Source Definition

The OSI Approved License trademark and program creates a nexus of trust around which developers, users, corporations and governments can organize cooperation on Open Source software. However, it is generally agreed that the Open Source Definition, drafted 26 years ago and maintained by the OSI, does not cover this new era of AI systems.

AI models are not just code; they are trained on massive datasets, deployed on intricate computing infrastructure, and accessed through diverse interfaces and modalities. With traditional software, there was a very clear separation between the code one wrote, the compiler one used, the binary it produced, and what license they had. However, for AI models, many components collectively influence the functioning of the system, including the algorithms, code, hardware, and datasets used for training and testing. The very notion of modifying the source code (which is important in the Open Source Definition) becomes fuzzy. For example, there is the key question of whether the training dataset, the model weights, or other key elements should be considered independently or collectively as the source code for the model/weights that have been trained.

AI (specifically the Models it manifests) includes a variety of technologies, each a vital element of every Model.

This challenge is not new. In its guidance on the use of Open Source software, the US Department of Defense distinguished open systems from open standards, noting that while they are “different from Open Source software, they are complementary and can work well together”:

Open standards make it easier for users to (later) adopt an Open Source software program, because users of open standards aren’t locked into a particular implementation. Instead, users who are careful to use open standards can easily switch to a different implementation, including an OSS implementation. … Open standards also make it easier for OSS developers to create their projects, because the standard itself helps developers know what to do. Creating any interface is an effort, and having a predefined standard helps reduce that effort greatly.

OSS implementations can help create and keep open standards open. An OSS implementation can be read and modified by anyone; such implementations can quickly become a working reference model (a “sample implementation” or an “executable specification”) that demonstrates what the specification means (clarifying the specification) and demonstrating how to actually implement it. Perhaps more importantly, by forcing there to be an implementation that others can examine in detail, resulting in better specifications that are more likely to be used.

OSS implementations can help rapidly increase adoption/use of the open standard. OSS programs can typically be simply downloaded and tried out, making it much easier for people to try it out and encouraging widespread use. This also pressures proprietary implementations to limit their prices, and such lower prices for proprietary software also encourages use of the standard.

With practically no exceptions, successful open standards for software have OSS implementations.

Towards a Unified Vision of what is ‘Open Source AI’

With these essential differentiating elements in mind, last summer, the OSI kicked off a multi-stakeholder process to define the characteristics of an AI system that can be confidently and generally understood to be considered as “Open Source”.

This collaboration utilizes the latest definition of AI system adopted by the Organization for Economic Cooperation and Development (OECD), and which has been the foundation for NIST’s “AI Risk Management Framework” as well as the European Union’s AI Act:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Since its announcement last summer, the OSI has had an open call for papers and held open webinars in order to collect ideas from the community describing precise problem areas in AI and collect suggestions for solutions. More than 6 community reviews – in Europe, Africa, and various locations in the US – took place in 2023, coinciding with a first draft of the Open Source AI Definition. This year, the OSI has coordinated working groups to analyze various foundation models, released three more drafts of the Definition, hosted bi-weekly public town halls for review, and continues to get feedback from a wide variety of stakeholders, including:

  • System Creators (makes AI system and/or component that will be studied, used, modified, or shared through an Open Source license);
  • License Creators (writes or edits the Open Source license to be applied to the AI system or component; includes compliance);
  • Regulators (writes or edits rules governing licenses and systems, e.g. government policy-maker);
  • Licensees (seeks to study, use, modify, or share an Open Source AI system, e.g. AI engineer, health researcher, education researcher);
  • End Users (consumes a system output, but does not seek to study, use, modify, or share the system, e.g. student using a chatbot to write a report, artist creating an image);
  • Subjects (affected upstream or downstream by a system output without interacting with it intentionally; includes advocates for this group, e.g. people with a loan denied, or content creators).
What is Open Source AI?

An Open Source AI is an AI system made available to the public under terms that grant the freedoms to:

  • Use the system for any purpose and without having to ask for permission.
  • Study how the system works and inspect its components.
  • Modify the system for any purpose, including to change its output.
  • Share the system for others to use with or without modifications, for any purpose.

Precondition to exercise these freedoms is to have access to the preferred form to make modifications to the system.

The OSI expects to wrap up and report the outcome of in-person and online meetings and anticipates having the draft endorsed by at least 5 reps for each of the stakeholder groups with a formal announcement of the results in late October.

To address the need to define rules for maintenance and review of this new Open Source AI Definition, the OSI Board of Directors approved the creation of a new committee to oversee the development of the Open Source AI Definition, approve version 1.0, and set rules for the maintenance of the Definition.

Some preliminary observations based on these efforts to date:

  • It is generally recognized, as indicated above, that the Open Source Definition as created for software does not completely cover this new era of Open Source AI. This is not a software-only issue and is not something that can be solved by applying the same exact terms in the new territory of defining Open Source AI. The Open Source AI definition will start from the core motivation of the need to ensure users of AI systems retain their autonomy and personal agency.
  • To the greatest degree practical, Open Source AI should not be limited in scope, allowing users the right to adopt the technology for any purpose. One of the key lessons and underlying successes of the Open Source Definition is that field-of-use restrictions deprive creators of software to utilize tools in a way to affect positive outcomes in society.
  • Reflecting on the past 20-to-30 years of learning about what has gone well and what hasn’t in terms of the open community and the progress it has made, it’s important to understand that openness does not automatically mean ethical, right or just. Other factors such as privacy concerns and safety when developing open systems come into play, and in each element of an AI model – and when put together as a system – there is an ongoing tension between something being open and being safe, or potentially harmful.
  • Open Source AI systems lower the barrier for stakeholders outside of large tech companies to shape the future of AI, enabling more AI services to be built by and for diverse communities with different needs that big companies may not always address.
  • Similarly, Open Source AI systems make it easier for regulators and civil society to assess AI systems for compliance with laws protecting civil rights, privacy, consumers, and workers. They increase transparency, education, testing and trust around the use of AI, enabling researchers and journalists to audit and write about AI systems’ impacts on society.
  • Open source AI systems advance safety and security by accelerating the understanding of their capabilities, risks and harms through independent research, collaboration, and knowledge sharing.
  • Open source AI systems promote economic growth by lowering the barrier for innovators, startups, and small businesses from more diverse communities to build and use AI. Open models also help accelerate scientific research because they can be less expensive, easier to fine-tune, and supportive of reproducible research.

The OSI looks forward to working with NTIA as it considers the comments to this RFI, and stands ready to participate in any follow-on discussions on this or the general topic of ‘Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights’. As shared above, it is essential that federal policymakers encourage Open Source AI models to the greatest extent possible, and work with organizations like the OSI and others who are endeavoring to create a unified, recognized definition of Open Source AI.

Respectfully submitted,
THE OPEN SOURCE INITIATIVE


For more information, contact:

  • Stefano Maffulli, Executive Director
  • Deb Bryant, US Policy Director

 

Categories: FLOSS Research

Drupal Association Journey: Pedro Cambra: Survey on Bookmarking Tool Needs Your Input

Planet Drupal - Tue, 2024-04-02 13:06

TL;DR: I’m requesting members of the Drupal community to help my research about the need for a bookmarking tool by responding to a super quick survey.

As part of my dissertation work for my bachelor’s degree, I’m unsurprisingly working on something related to Drupal. After a lot of consideration regarding a project that could be within a reasonable scope but also allow me to contribute a little bit to the Drupal ecosystem, a chat with Cristina and Christian helped me decide to work on the shortcut module, and try to make improvements before it is marked to be removed from core – and try to avoid that removal, because I believe it could be a useful tool for both the navigation and the dashboards initiatives.

But first things first.

One of the elements I am looking to explore the most in my research is the full process of contribution: from identifying the issue to solve, to getting quantitative data through a community survey to establish that the problem is worth solving, to proposing a solution and getting feedback on it.

I would appreciate it a lot if you could help me achieve my goal by answering the survey I’ve prepared.

#Drupal

Discuss...

Categories: FLOSS Project Planets

Bits from Debian: Bits from the DPL

Planet Debian - Tue, 2024-04-02 13:00

Dear Debianites

This morning I decided to just start writing Bits from the DPL and send whatever I have by 18:00 local time. Here it is, barely proofread, along with all its warts and grammar mistakes! It's slightly long and doesn't contain any critical information, so if you're not in the mood, don't feel compelled to read it!

== Get ready for a new DPL! ==

Soon, the voting period will start to elect our next DPL, and my time as DPL will come to an end. Reading the questions posted to the new candidates on [debian-vote], it takes quite a bit of restraint to not answer all of them myself, I think I can see how that aspect contributed to me being reeled in to running for DPL! In total I've done so 5 times (the first time I ran, Sam was elected!).

Good luck to both [Andreas] and [Sruthi], our current DPL candidates! I've already started working on preparing the handover, and there are multiple requests from teams that have come in recently that will have to wait for the new term, so I hope they're both ready to hit the ground running!

  • [debian-vote] Mailing list: https://lists.debian.org/debian-vote/2024/03/threads.html
  • Platform: https://www.debian.org/vote/2024/platforms/tille [Andreas]
  • Platform: https://www.debian.org/vote/2024/platforms/srud [Sruthi]

== Things that I wish could have gone better ==

  • Communication:

Recently, I saw a t-shirt that read:

Adulthood is saying, 'But after this week things will slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent, but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished, I tend to have prioritised problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise, I do rate communication and transparency very highly and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds into Debian, instead of just letting it heap up. DSA just spent a nice chunk of money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it would have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team).

I don't want to single out DSA there, it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time and money wise) to extend testing on AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it!

I'm not going to push my agendas to the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

  • Founding Debian as a standalone entity:

This was my number one goal for the project this last term, which was a carried over item from my previous terms.

I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail.

In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "The grass is always greener on the other side", or "Next week will go better, until I die"?). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options, and write up a report for the project that can lead to a GR.

Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards. So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

== My terms in a nutshell ==

  • COVID-19 and Debian 11 era:

My first term in 2020 started just as the COVID-19 pandemic became known to spread globally. It was a tough year for everyone, and Debian wasn't immune against its effects either. Many of our contributors got sick, some have lost loved ones (my father passed away in March 2020 just after I became DPL), some have lost their jobs (or other earners in their household have) and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when DebConf20 got cancelled in person, we understood that that was necessary, but it was still more bad news in a year we had too much of it already.

I can't remember if there was ever any kind of formal choice or discussion about this at any time, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that led to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even though it was on screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It brought a lasting cultural change in Debian; some teams still have video meetings now, where they didn't do that before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that were fun, but DebConf21 was also online, and by then we had all developed an online conference fatigue, and while it was another good online event overall, it did start to feel a bit like a zombieconf. After that, we had some really nice events from the Brazilians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should spend some further energy on this, but hey! This isn't a platform so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than what we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a previous Debian Developer that was expelled for his poor behaviour in Debian, who continued to harass members of the Debian project and in other free software communities after his expulsion. This ended up being quite a lot of work since we had to take legal action to protect our community, and eventually also get the police involved. I'm not going to give him the satisfaction by spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here:

https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two consequent votes to address some issues that have affected past votes:

  • https://www.debian.org/vote/2021/vote_003
  • https://www.debian.org/vote/2022/vote_001

In my first term I addressed our delegations that were a bit behind; by the end of my last term all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state. Delegation updates can be very deceiving: sometimes a delegation is completely re-written and it was just 1 or 2 hours of work. Other times, a delegation update can contain one line that has changed, or a change in one team member, that was the result of days' worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third-party directly for hosting. This was quite an admin nightmare: it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could've admittedly chosen a better name, but that can happen later on) for providing hosting at two different hosting providers that we have agreements with, so that people who host things under debian.net have an easy way to do so, while at the same time Debian has more control if a site maintainer goes MIA.

More info:

https://wiki.debian.org/Teams/DebianNet

You might notice some Openstack mentioned there; we had some intention to set up a Debian cloud for hosting these things, which could also be used for additional Debiany things like archive rebuilds, but this has so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company who can sponsor a few racks and servers, please get in touch!)

  • DebConf22 and Debian 12 era:

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf - understandably so, considering that there were still COVID risks and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf in Hamburg. It still feels odd typing that, it feels like I should've been at one before, but my location makes attending them difficult (on a side-note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple EPROMs that you'd find in old printers to the complicated firmware in modern GPUs that basically contain complete operating systems, complete with drivers for the device they're running on. I also showed my shiny new laptop, and explained that it's impossible to install that laptop without non-free firmware (you'd get a black display on d-i or Debian live). Also that you couldn't even use an accessibility mode with audio, since even that depends on non-free firmware these days.

Steve, from the image building team, has said for a while that we need to do a GR to vote for this, and after more discussion at DebConf, I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately in Debian we just don't have the resources for this), but, I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.

At this point, I'd like to give a special thanks to the ftpmasters, image building team and the installer team who worked really hard to get the changes done that were needed in order to make this happen for Debian 12, and for being really proactive about the remaining niggles that were solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong and also proved that it's never too late to fix bugs; everything on my list got eliminated by the time the final freeze hit, which was great! We usually aim to have a release ready about 2 years after the previous release; sometimes there are complications during a freeze and it can take a bit longer. But due to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it's going to be too long for the purpose of this mail. Important questions that did come out of this are: should core Debian packages be team maintained? And how far should the CTTE really be able to override a maintainer? We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem, who very patiently explained a few things to me (after probably having had to do so many times for others already), and to Helmut, who did the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable; it will just take some time and patience, and I have lots of confidence in everyone involved.

UsrMerge wiki page: https://wiki.debian.org/UsrMerge

  • DebConf 23 and Debian 13 era:

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what the most difficult thing I had to do was during my terms as DPL. I answered that nothing particular stood out, and even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period of being DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me how his mother screamed "He was my everything! He was my everything!"; this was by a large margin the hardest day I've ever had in Debian, and I really wasn't ok for even a few weeks after that, and I think the hurt will be with many of us for some time to come. So, a plea again to everyone: please take care of yourself! There's probably more people that love you than you realise.

A special thanks to the DebConf23 team, who did a really good job despite all the uphills they faced (and there were many!).

As DPL, I think that planning for a DebConf is near to impossible; all you can do is show up and just jump into things. I planned to work with Enrico to finish up something that will hopefully save future DPLs some time, and that is a web-based DD certificate creator instead of having the DPL do so manually using LaTeX. It already mostly works, you can see the work so far by visiting https://nm.debian.org/person/ACCOUNTNAME/certificate/ and replacing ACCOUNTNAME with your Debian account name, and if you're a DD, you should see your certificate. It still needs a few minor changes and a DPL signature, but at this point I think that will be finished up when the new DPL starts. Thanks to Enrico for working on this!

Since my first term, I've been trying to find ways to improve all our accounting/finance issues. Tracking what we spend on things, and getting an annual overview, is hard, especially across 3 trusted organisations. The reimbursement process can also be really tedious, especially when you have to provide files in a certain order and combine them into a PDF. So, at DebConf22 we had a meeting with the treasurer team and Stefano Rivera, who said that it might be possible for him to work on a new system as part of his Freexian work. It worked out, and Freexian has funded the development of the system since then, and after DebConf23 we handled the reimbursements for the conference via the new reimbursements site:

https://reimbursements.debian.net

It's still early days, but over time it should be linked to all our TOs and we'll use the same category codes across the board. So, overall, our reimbursement process becomes a lot simpler, and we'll also be able to get information like how much money we've spent on any category in any period. It will also help us to track how much money we have available or how much we spend on recurring costs. Right now that needs manual polling from our TOs. So I'm really glad that this big, long-standing problem in the project is being fixed.

For Debian 13, we're waving goodbye to the KFreeBSD and mipsel ports. But we're also gaining riscv64 and loongarch64 as release architectures! I have 3 different RISC-V based machines on my desk here that I haven't had much time to work with yet; you can expect some blog posts about them soon after my DPL term ends!

As Debian is a unix-like system, we're affected by the [Year 2038 problem], where systems that use 32-bit time in seconds since 1970 run out of available time and will wrap back to 1970 or show other undefined behaviour. A detailed [wiki page] explains how this works in Debian, and currently we're going through a rather large transition to make this possible.

[Year 2038 problem] https://simple.wikipedia.org/wiki/Year_2038_problem
[wiki page] https://wiki.debian.org/ReleaseGoals/64bit-time

I believe this is the right time for Debian to be addressing this: we're still a bit more than a year away from the Debian 13 release, and this provides enough time to test the implementation before 2038 rolls along.
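To make the cut-off concrete, here's a quick illustration in Python (purely illustrative, not part of the Debian transition work, which is about moving packages to a 64-bit time_t):

from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can represent.
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# Prints 2038-01-19 03:14:07+00:00; one second later, a 32-bit
# counter overflows and wraps.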

Of course, big complicated transitions with dependency loops that cause chaos for everyone would still be too easy, so this past weekend (which is a holiday period in most of the west due to Easter weekend) has been filled with dealing with an upstream bug in xz-utils, where a backdoor was placed in this key piece of software. An [Ars Technica] article covers it quite well, so I won't go into all the details here. I mention it because I want to give yet another special thanks to everyone involved in dealing with this on the Debian side. Everyone, from the ftpmasters to the security team and others, was super calm and professional and made quick, high quality decisions. This also led to the archive being frozen on Saturday; this is the first time I've seen that happen since I've been a DD, but I'm sure next week will go better!

[Ars Technica] https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/

== Looking forward ==

It's really been an honour for me to serve as DPL. It might well be my biggest achievement in my life. Previous DPLs range from prominent software engineers to game developers, or people who have done things like complete an Ironman, run other huge open source projects, and serve in big consortiums. Ian Jackson even authored dpkg and is now working on the very interesting [tag2upload service]!

[tag2upload service] https://peertube.debian.social/w/pav68XBWdurWzfTYvDgWRM

I'm a relative nobody, just someone who grew up as a poor kid in South Africa and who really cares about Debian a lot. And, above all, I'm really thankful that I didn't do anything major to screw up Debian for good.

Not unlike learning how to use Debian, and also becoming a Debian Developer, I've learned a lot from this and it's been a really valuable growth experience for me.

I know I can't possibly give all the thanks to everyone who deserves it, so here's a big, big thanks to everyone who has worked so hard and put in many, many hours to make Debian better. I consider you all heroes!

-Jonathan

Categories: FLOSS Project Planets

Real Python: Python Deep Learning: PyTorch vs Tensorflow

Planet Python - Tue, 2024-04-02 10:00

PyTorch vs TensorFlow: What’s the difference? Both are open source Python libraries that use graphs to perform numerical computation on data. Both are used extensively in academic research and commercial code. Both are extended by a variety of APIs, cloud computing platforms, and model repositories.
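To give a quick taste of that similarity, here's a tiny side-by-side sketch (assuming recent releases of both libraries are installed; this is an illustration only, not part of the course material):

import tensorflow as tf
import torch

# The same element-wise computation, once per library.
print(torch.tensor([1.0, 2.0, 3.0]) * 2 + 1)  # tensor([3., 5., 7.])
print(tf.constant([1.0, 2.0, 3.0]) * 2 + 1)   # tf.Tensor([3. 5. 7.], shape=(3,), dtype=float32)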

If they’re so similar, then which one is best for your project?

In this video course, you’ll learn:

  • What the differences are between PyTorch and TensorFlow
  • What tools and resources are available for each
  • How to choose the best option for your specific use case

You’ll start by taking a close look at both platforms, beginning with the slightly older TensorFlow, before exploring some considerations that can help you determine which choice is best for your project. Let’s get started!

Categories: FLOSS Project Planets

Matt Glaman: Ensuring smart_date works for all versions of Drupal 10 and 11

Planet Drupal - Tue, 2024-04-02 09:18

At MidCamp a few weeks ago, Martin Anderson-Clutz tapped me on the shoulder to check out a Smart Date issue for compatibility with Drupal 10.2. As of Drupal 10.2, ListItemBase::extractAllowedValues takes an array as its first argument instead of a string; the method used to explode a newline-separated string into an array for its callers. I took a look at the issue. The change affected the parseValues method in the SmartDateListItemBase class. The parseValues method takes the field's values and passes them to extractAllowedValues, the method whose signature changed as of Drupal 10.2 and carries through to Drupal 11.

The original method essentially handed the field's values, still in the old newline-separated string form, straight to extractAllowedValues, which is exactly what breaks once that method expects an array.
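A minimal sketch of the kind of normalization that bridges both signatures could look like the following (the helper name and body here are hypothetical, not the actual Smart Date patch):

<?php

/**
 * Hypothetical helper, not Smart Date's actual code: accept either the
 * pre-10.2 newline-separated string or the 10.2+ array form.
 */
function smart_date_normalize_allowed_values($values): array {
  if (is_string($values)) {
    // Drupal < 10.2 style: explode the string ourselves, since
    // extractAllowedValues() no longer does that for its callers.
    $values = array_map('trim', explode("\n", $values));
  }
  return $values;
}

With the values normalized up front, the same parseValues body can call extractAllowedValues unchanged on any core version that expects the array form.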

Categories: FLOSS Project Planets

Anwesha Das: Opening up Ansible release to the community

Planet Python - Tue, 2024-04-02 08:52

Transparency, collaboration, inclusivity, and openness lay the foundation of the Open Source community. As a project's maintainers, part of our job is to keep the entry bar for contribution low, collaboration easy, and the governance model fair. The Ansible Community Engineering Team always strives for these goals through our different endeavors.

Ansible has historically been released by Red Hat employees. We planned to open up the release process to the community, and I was asked to work on that. My primary goal was to make releasing Ansible dull and doable by the community. This was my first time dealing with GitHub Actions, and there is still a lot to learn, but we are there now.

The Release Management working group started releasing the Ansible community package using a GitHub Actions workflow from Ansible version 9.3.0. The recent 9.4.0 release has also been made following the same workflow.

Thank you Felix Fontein, Maxwell G, Sviatoslav Sydorenko, and Toshio for helping shape the workflow with your valuable feedback, doing the actual releases, and answering my innumerable queries.

Categories: FLOSS Project Planets

EuroPython: EuroPython 2024: Ticket sales now open! 🐍

Planet Python - Tue, 2024-04-02 08:39

Hey hey, everyone,

We are thrilled to announce that EuroPython is back and better than ever! ✨
EuroPython 2024 will be held 8-14 July at the Prague Congress Centre (PCC), Czech Republic. Details of how to participate remotely will be published soon.

The conference will follow the same structure as the previous editions:

  • Two Workshop/Tutorial Days (8-9 July, Mon-Tue)
  • Three Conference Days (10-12 July, Wed-Fri)
  • Sprint Weekend (13-14 July, Sat-Sun)

Secure your spot at EuroPython 2024 by purchasing your tickets today. For more information and to grab your tickets, visit https://ep2024.europython.eu/tickets before they sell out!

Get your tickets fast before the late-bird prices kick in. 🏃

Looking forward to welcoming you to EuroPython 2024 in Prague! 🇨🇿

🎫 Don't forget to get your ticket at https://ep2024.europython.eu

Cheers,

The EuroPython 2024 Organisers

Categories: FLOSS Project Planets

Qt 6.7 Released!

Planet KDE - Tue, 2024-04-02 05:54

Qt 6.7 is out with lots of large and small improvements for all of us who like to have fun when building modern applications and user experiences.

Categories: FLOSS Project Planets

Specbee: How to Write Your First Test Case Using PHPUnit & Kernel in Drupal

Planet Drupal - Tue, 2024-04-02 04:07
Are you able to imagine a world where your code functions flawlessly, your bugs are scared of your testing routine, and your users can enjoy a seamless experience free from crashes and errors? Well, this only means that you understand the importance of automated testing. With automated testing, Drupal developers can elevate code quality, streamline workflows, and fortify digital ecosystems against errors and bugs. Drupal offers 4 types of PHPUnit tests: Unit, Kernel, Functional, and Functional JavaScript. In this blog post, we'll explore PHPUnit (unit) tests and Kernel tests.

Setting Up PHPUnit in Drupal

For setting up PHPUnit in Drupal, the recommended method is Composer-based:

$ composer require --dev phpunit/phpunit --with-dependencies
$ composer require behat/mink && composer require --dev phpspec/prophecy

(Note: PHPUnit version 11 requires PHP 8.2. This blog is written for Drupal 10 and PHP 8.1.)

Once PHPUnit and its dependencies are installed, the next step involves creating and configuring the phpunit.xml file. Locate the phpunit.xml.dist file in the core folder of the Drupal installation, copy it into the docroot directory, and rename it to phpunit.xml (it is recommended to keep the file in the docroot directory instead of core so that it won't get affected by core updates).

Create the simpletest and browser_output directories. In order to run tests like Kernel and Functional tests, we need to create these two directories and set their permissions to writable:

$ mkdir -p docroot/sites/simpletest/browser_output && chmod -R 777 docroot/sites/simpletest

To run tests locally, we need to configure some values in phpunit.xml:

1. Change <env name="SIMPLETEST_BASE_URL" value="" /> to <env name="SIMPLETEST_BASE_URL" value="https://yoursiteurl.com" />
2. Change <env name="SIMPLETEST_DB" value="" /> to <env name="SIMPLETEST_DB" value="mysql://username:password@yourdbhost/databasename" />
3. Change <env name="BROWSERTEST_OUTPUT_DIRECTORY" value="" /> to <env name="BROWSERTEST_OUTPUT_DIRECTORY" value="fullpath/docroot/sites/simpletest/browser_output" />

(Note: to check the full path of your app, run this from the project root)

$ pwd

Setting Up PHPUnit with Lando

If you want to run your tests from Lando, you'll need to configure the .lando.yml file. We'll leave the defaults for most of the values in the phpunit.xml file, except for where to find the bootstrap.php file. This should be changed to the path in the Lando container, which will be /app/web/core/tests/bootstrap.php. This can be done with sed:

$ sed -i 's|tests\/bootstrap\.php|/app/web/core/tests/bootstrap.php|g' phpunit.xml

Next, edit the .lando.yml file and add the following:

services:
  appserver:
    overrides:
      environment:
        SIMPLETEST_BASE_URL: "http://mysite.lndo.site"
        SIMPLETEST_DB: "mysql://database:database@database/database"
        MINK_DRIVER_ARGS_WEBDRIVER: '["chrome", {"browserName":"chrome","chromeOptions":{"args":["--disable-gpu","--headless"]}}, "http://chrome:9515"]'
  chrome:
    type: compose
    services:
      image: drupalci/webdriver-chromedriver:production
      command: chromedriver --log-path=/tmp/chromedriver.log --verbose --whitelisted-ips=
tooling:
  test:
    service: appserver
    cmd: "php /app/vendor/bin/phpunit -c /app/phpunit.xml"

Modify SIMPLETEST_BASE_URL and SIMPLETEST_DB to point to your Lando site and database credentials as needed.

This does three things:

  • Adds environment variables to the appserver container (the one we'll run the tests in).
  • Adds a new container for the chromedriver image, which is used for running headless JavaScript tests (more on that later).
  • Adds a tooling section that provides a test command in Lando to run our tests.

Important! After updating the .lando.yml file we need to rebuild the containers:

$ lando rebuild -y

Let's run a single test with Lando:

$ lando test core/modules/datetime/tests/src/Unit/Plugin/migrate/field/DateFieldTest.php

Without Lando, verify the tests are working by running a core test. (Note: I keep my phpunit.xml file in the docroot folder and run tests from the docroot directory.)

$ ../vendor/bin/phpunit -c core core/modules/datetime/tests/src/Unit/Plugin/migrate/field/DateFieldTest.php

What is a PHPUnit Test

PHPUnit unit tests are utilized to test small blocks of code and functionalities that do not necessitate a complete Drupal installation. These tests allow us to evaluate the functionality of a class without a full Drupal environment; dependencies such as the Database, Settings, etc. can be substituted by "mock" objects, and no web browser is required.

Before writing unit tests, we should remember the following things:

  • Base Class: \Drupal\Tests\UnitTestCase. To implement a unit test case we need to extend our test class from this base class.
  • Namespace: \Drupal\Tests\mymodule\Unit (or a subdirectory). We need to specify this namespace for our test.
  • Directory location: mymodule/tests/src/Unit (or a subdirectory). To be discovered, the test class must reside in this directory.

Write Your First PHPUnit Test

Step 1: Create a custom module.

Step 2: Create the events_example.info.yml file for the custom module:

name: Events Example
type: module
description: Provides an example of subscribing to and dispatching events.
package: Custom
core_version_requirement: ^9.4 || ^10

Step 3: Create the events_example.services.yml file:

services:
  # Give your service a unique name, convention is to prefix service names
  # with the name of the module that implements them.
  events_example_subscriber:
    # Point to the class that will contain your implementation of
    # \Symfony\Component\EventDispatcher\EventSubscriberInterface
    class: Drupal\events_example\EventSubscriber\EventsExampleSubscriber
    tags:
      - { name: event_subscriber }

Step 4: Create the events_example/src/EventSubscriber/EventsExampleSubscriber.php file for our class:

<?php

namespace Drupal\events_example\EventSubscriber;

use Drupal\events_example\Event\IncidentEvents;
use Drupal\events_example\Event\IncidentReportEvent;
use Drupal\Core\Messenger\MessengerTrait;
use Drupal\Core\StringTranslation\StringTranslationTrait;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Subscribe to IncidentEvents::NEW_REPORT events and react to new reports.
 *
 * In this example we subscribe to all IncidentEvents::NEW_REPORT events and
 * point to two different methods to execute when the event is triggered. In
 * each method we have some custom logic that determines if we want to react
 * to the event by examining the event object, and then display a message to
 * the user indicating whether or not that method reacted to the event.
 *
 * By convention, classes subscribing to an event live in the
 * Drupal/{module_name}/EventSubscriber namespace.
 *
 * @ingroup events_example
 */
class EventsExampleSubscriber implements EventSubscriberInterface {

  use StringTranslationTrait;
  use MessengerTrait;

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    // Return an array of events that you want to subscribe to, mapped to the
    // method on this class that you would like called whenever the event is
    // triggered. A single class can subscribe to any number of events. For
    // organization purposes it's a good idea to create a new class for each
    // unique task/concept rather than just creating a catch-all class for
    // all event subscriptions.
    //
    // See EventSubscriberInterface::getSubscribedEvents() for an explanation
    // of the array's format.
    //
    // The array key is the name of the event you want to subscribe to. Best
    // practice is to use the constant that represents the event as defined
    // by the code responsible for dispatching the event. This way, if, for
    // example, the string name of an event changes, your code will continue
    // to work. You can get a list of event constants for all events
    // triggered by core here:
    // https://api.drupal.org/api/drupal/core%21core.api.php/group/events/8.2.x.
    //
    // Since any module can define and trigger new events there may be
    // additional events available in your application. Look for classes with
    // the special @Event docblock indicator to discover other events.
    //
    // For each event key define an array of arrays composed of the method
    // names to call and optional priorities. The method name here refers to
    // a method on this class to call whenever the event is triggered.
    $events[IncidentEvents::NEW_REPORT][] = ['notifyMario'];

    // Subscribers can optionally set a priority. If more than one subscriber
    // is listening to an event when it is triggered they will be executed in
    // order of priority. If no priority is set the default is 0.
    $events[IncidentEvents::NEW_REPORT][] = ['notifyBatman', -100];

    // We'll set an event listener with a very low priority to catch incident
    // types not yet defined. In practice, this will be the 'cat' incident.
    $events[IncidentEvents::NEW_REPORT][] = ['notifyDefault', -255];

    return $events;
  }

  /**
   * If this incident is about a missing princess, notify Mario.
   *
   * Per our configuration above, this method is called whenever the
   * IncidentEvents::NEW_REPORT event is dispatched. This method is where you
   * place any custom logic that you want to perform when the specific event
   * is triggered.
   *
   * These responder methods receive an event object as their argument. The
   * event object is usually, but not always, specific to the event being
   * triggered and contains data about application state and configuration
   * relative to what was happening when the event was triggered.
   *
   * For example, when responding to an event triggered by saving a
   * configuration change you'll get an event object that contains the
   * relevant configuration object.
   *
   * @param \Drupal\events_example\Event\IncidentReportEvent $event
   *   The event object containing the incident report.
   */
  public function notifyMario(IncidentReportEvent $event) {
    // You can use the event object to access information about the event
    // passed along by the event dispatcher.
    if ($event->getType() == 'stolen_princess') {
      $this->messenger()->addStatus($this->t('Mario has been alerted. Thank you. This message was set by an event subscriber. See @method()', ['@method' => __METHOD__]));
      // Optionally use the event object to stop propagation. If there are
      // other subscribers that have not been called yet this will cause
      // them to be skipped.
      $event->stopPropagation();
    }
  }

  /**
   * Let Batman know about any events involving the Joker.
   *
   * @param \Drupal\events_example\Event\IncidentReportEvent $event
   *   The event object containing the incident report.
   */
  public function notifyBatman(IncidentReportEvent $event) {
    if ($event->getType() == 'joker') {
      $this->messenger()->addStatus($this->t('Batman has been alerted. Thank you. This message was set by an event subscriber. See @method()', ['@method' => __METHOD__]));
      $event->stopPropagation();
    }
  }

  /**
   * Handle incidents not handled by the other handlers.
   *
   * @param \Drupal\events_example\Event\IncidentReportEvent $event
   *   The event object containing the incident report.
   */
  public function notifyDefault(IncidentReportEvent $event) {
    $this->messenger()->addStatus($this->t('Thank you for reporting this incident. This message was set by an event subscriber. See @method()', ['@method' => __METHOD__]));
    $event->stopPropagation();
  }

  /**
   * Simple helper that returns TRUE if a non-empty string is given.
   *
   * @param string $string
   *   The string to check.
   */
  public function checkString($string) {
    return $string ? TRUE : FALSE;
  }

}

Step 5: Create the tests/src/Unit/EventsExampleUnitTest.php file for our unit test:

<?php

namespace Drupal\Tests\events_example\Unit;

use Drupal\Tests\UnitTestCase;
use Drupal\events_example\EventSubscriber\EventsExampleSubscriber;

/**
 * Test events_example EventsExampleSubscriber functionality.
 *
 * @group events_example
 *
 * @ingroup events_example
 */
class EventsExampleUnitTest extends UnitTestCase {

  /**
   * The events_example EventsExampleSubscriber object.
   *
   * @var \Drupal\events_example\EventSubscriber\EventsExampleSubscriber
   */
  protected $eventExampleSubscriber;

  /**
   * {@inheritdoc}
   */
  protected function setUp(): void {
    parent::setUp();
    $this->eventExampleSubscriber = new EventsExampleSubscriber();
  }

  /**
   * Test the simple helper that returns TRUE if a string is present.
   */
  public function testHasString() {
    $this->assertEquals(TRUE, $this->eventExampleSubscriber->checkString('Some String'));
  }

}

Important Notes:

  • The test class name should start or end with "Test", for example EventsExampleUnitTest, and it should extend the base class UnitTestCase.
  • Test methods should start with "test", for example testHasString, otherwise they will not be included in the test run.

Step 6: Let's run our test!

$ lando test modules/custom/events_example/tests/src/Unit/EventsExampleUnitTest.php

or

$ ../vendor/bin/phpunit -c core modules/custom/events_example/tests/src/Unit/EventsExampleUnitTest.php

PHPUnit 9.6.17 by Sebastian Bergmann and contributors.

Testing Drupal\Tests\events_example\Unit\EventsExampleUnitTest
.                                                                   1 / 1 (100%)

Time: 00:00.054, Memory: 10.00 MB

OK (1 test, 1 assertion)

Hooray! Our test has passed!

Write Your First Kernel Test

Kernel tests necessitate specific Drupal environment dependencies, but have no requirement for a web browser. They allow us to assess class functionality without the full Drupal setup or a web browser. However, certain Drupal environment dependencies are indispensable and cannot be easily mocked. Kernel tests are capable of accessing services, the database, and a minimal file system.

Below are the essential prerequisites to consider when writing kernel tests:

  • Base Class: \Drupal\KernelTests\KernelTestBase. To implement a Kernel test we need to extend our test class from this base class.
  • Namespace: \Drupal\Tests\mymodule\Kernel (or a subdirectory). We need to specify this namespace for our test.
  • Directory location: mymodule/tests/src/Kernel (or a subdirectory). To be discovered, the test class must reside in this directory.

Create the tests/src/Kernel/EventsExampleServiceTest.php file for our Kernel test:

<?php

namespace Drupal\Tests\events_example\Kernel;

use Drupal\KernelTests\KernelTestBase;
use Drupal\events_example\EventSubscriber\EventsExampleSubscriber;

/**
 * Test to ensure the 'events_example_subscriber' service is reachable.
 *
 * @group events_example
 *
 * @ingroup events_example
 */
class EventsExampleServiceTest extends KernelTestBase {

  /**
   * {@inheritdoc}
   */
  protected static $modules = ['events_example'];

  /**
   * Test for existence of the 'events_example_subscriber' service.
   */
  public function testEventsExampleService() {
    $subscriber = $this->container->get('events_example_subscriber');
    $this->assertInstanceOf(EventsExampleSubscriber::class, $subscriber);
  }

}

Important Notes:

  • The test class name should start or end with "Test", for example EventsExampleServiceTest, and it should extend the base class KernelTestBase.
  • The kernel test should list the modules it depends on, for example $modules = ['events_example']; other dependent modules can be included too, like $modules = ['events_example', 'user', 'field']. It acts like dependency injection.
  • Test methods should start with "test", for example testEventsExampleService, otherwise they will not be included in the test run.

Let's run our Kernel test:

$ lando test modules/custom/events_example/tests/src/Kernel/EventsExampleServiceTest.php

or

$ ../vendor/bin/phpunit -c core modules/custom/events_example/tests/src/Kernel/EventsExampleServiceTest.php

PHPUnit 9.6.17 by Sebastian Bergmann and contributors.

Testing Drupal\Tests\events_example\Kernel\EventsExampleServiceTest
.                                                                   1 / 1 (100%)

Time: 01:24.129, Memory: 10.00 MB

OK (1 test, 1 assertion)

Woohoo! Another successful test in the books!

Final Thoughts

What next after writing your first test case using PHPUnit and Kernel testing? Well, you can proceed to write more test cases to cover other functionalities or edge cases in your Drupal project. Additionally, you may want to consider integrating your test suite into a continuous integration (CI) pipeline to automate testing and ensure code quality throughout your development process.
Categories: FLOSS Project Planets

Python Bytes: #377 A Dramatic Episode

Planet Python - Tue, 2024-04-02 04:00
Topics covered in this episode:

  • justpath (https://github.com/epogrebnyak/justpath)
  • xz back door
  • LPython (https://lpython.org)
  • dramatic (https://github.com/treyhunner/dramatic)
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=eWnYlxOREu4

About the show

Sponsored by ScoutAPM: https://pythonbytes.fm/scout

Connect with the hosts:

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: justpath (https://github.com/epogrebnyak/justpath)

  • Inspect and refine PATH environment variable on both Windows and Linux.
  • Raw, count, duplicates, invalids, corrections, excellent stuff.
  • Check out the video: https://asciinema.org/a/642726

Brian #2: xz back door

  • In case you kinda heard about this, but not really.
  • Very short version:
    - A Microsoft engineer noticed a performance problem with ssh and tracked it to a particular version update of xz.
    - Further investigation found a multi-year installation of a fairly complex back door into xz by a new-ish contributor, who had nonetheless been contributing over several years; first commit in early 2022.
    - The problem is caught. But if it had succeeded, it would have been bad.
    - Part of how this happened is due to having one primary maintainer on a very widely used tool included in tons of Linux distributions.
  • Some useful articles:
    - Everything I Know About the XZ Backdoor, by Evan Boehs, recommended read (https://boehs.org/node/everything-i-know-about-the-xz-backdoor)
  • Don't think you're affected? Think again if you use homebrew, for example:
    - Update and upgrade Homebrew and xz versions (https://micro.webology.dev/2024/03/29/update-and-upgrade.html)
  • Notes:
    - Open source maintenance burnout is real.
    - Lots of open source projects are maintained by unpaid individuals for long periods of time.
    - Multi-year sneakiness and social bullying is pretty hard to defend against.
    - Handing off projects to another primary maintainer has to be doable. But now I think we need better tools to vet contributors. Maybe? Or would that just suppress contributions?
  • One option to help with burnout:
    - JGMM, Just Give Maintainers Money: Software Needs To Be More Expensive, by Glyph (https://blog.glyph.im/2024/03/software-needs-to-be-more-expensive.html)

Michael #3: LPython (https://lpython.org)

  • LPython aggressively optimizes type-annotated Python code. It has several backends, including LLVM, C, C++, and WASM.
  • LPython's primary tenet is speed.
  • Play with the wasm version here: dev.lpython.org
  • Still in alpha, so keep that in mind.

Brian #4: dramatic (https://github.com/treyhunner/dramatic)

  • Trey Hunner
  • More drama in the software world. This time in Python.
  • Actually, this is just a fun utility to make your Python output more dramatic.
  • More fun output with terminaltexteffects (https://github.com/ChrisBuilds/terminaltexteffects), suggested by Allan.

Extras

Brian:

  • Textual now has a new inline feature in the new release (https://github.com/Textualize/textual/releases/tag/v0.55.0).

Michael:

  • My keynote talk is out: The State of Python in 2024 (https://www.youtube.com/watch?v=coz1CGRxjQ0)
  • Have you browsed your github feed (https://github.com) lately?
  • 3.10, 3.9, 3.8 security updates (https://pythoninsider.blogspot.com/2024/03/python-31014-3919-and-3819-is-now.html)

Joke: Definition of terms (https://python-bytes-static.nyc3.digitaloceanspaces.com/definition-of-methodolgy-terms.jpg)
Categories: FLOSS Project Planets

Python Software Foundation: New Open Initiative for Cybersecurity Standards

Planet Python - Mon, 2024-04-01 23:00

The Python Software Foundation is pleased to announce our participation in co-starting a new Open Initiative for Cybersecurity Standards collaboration with the Apache Software Foundation, the Eclipse Foundation, other code-hosting open source foundations, SMEs, industry players, and researchers. This collaboration is focused on meeting the real challenges of cybersecurity in the open source ecosystem, and demonstrating full cooperation with and supporting the implementation of the European Union’s Cyber Resilience Act (CRA). With our combined efforts, we are optimistic that we will reach our goal of establishing common specifications for secure open source development based on existing open source best practices. 

New regulations, such as those in the CRA, highlight the need for secure-by-design software and strong supply chain security standards. The CRA will lead to standardisation requests from the Commission to the European Standards Organisations, and we foresee requirements from the United States and other regions in the future. As open source foundations, we want to respond to these requests proactively by establishing common specifications for secure software development that meet the expectations of the newly defined role of Open Source Steward.

Open source communities and foundations, including the Python community, have long been practicing and documenting secure software development processes. The starting points for creating common specifications around security are already there, thanks to millions of contributions to hundreds of open source projects. In the true spirit of open source, we plan to learn from, adapt, and build upon what already exists for the collective betterment of our greater software ecosystem. 

The PSF’s Executive Director Deb Nicholson will attend and participate in the initial Open Initiative for Cybersecurity Standards meetings. Later on, various PSF staff members will join in relevant parts of the conversation to help guide the initiative alongside their peers. The PSF looks forward to more investment in cybersecurity best practices by Python and the industry overall. 

This community-driven initiative will have a lasting impact on the future of cybersecurity and our shared open source communities. We welcome you to join this collaborative effort to develop secure open source development specifications. Participate by sharing your knowledge and input, and by raising up existing community contributions. Sign up for the Open Initiative for Process Specifications mailing list to get involved and stay updated on this initiative. Check out the press releases from the Eclipse Foundation and the Apache Software Foundation for more information.

Categories: FLOSS Project Planets

SoK 2024 - Implementing package management features from RKWard into Cantor via a GUI First Blog

Planet KDE - Mon, 2024-04-01 20:00
Introduction

Hi! I’m Krish, an undergraduate student at the University of Rochester studying Computer Science, and this Season of KDE I’m working on implementing package management features from RKWard into Cantor via a GUI. I’m being mentored by Alexander Semke.

In an effort to improve usability and functionality, this project seeks to strengthen Cantor's capabilities as a scientific computing platform by incorporating package management tools modeled after those found in RKWard and RStudio. The goal is to create an intuitive graphical interface within Cantor for managing packages in R, Octave, Julia, and other languages.

Set up development environment for Cantor
  • Used virt-manager to set up a “Kubuntu” virtual machine
  • Installed dependencies for Cantor
  • Open a terminal emulator and in your favorite shell:
cd cantor
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/cantor/usr/local -DCMAKE_BUILD_TYPE=RELEASE
make
make install

Digging into RKWard's package management system

RKWard, an open-source, cross-platform integrated development environment for the R programming language, implements package management using R's built-in package management system and provides a GUI to simplify package installation, updating, and removal. This writeup will discuss the technical aspects of package management in RKWard.

Broadly, RKWard leverages R's built-in package management functions, such as install.packages(), update.packages(), and remove.packages(), to handle package management tasks. These functions interact with R's package repository (CRAN by default) and local package libraries to perform package-related operations.
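For reference, these are the stock R calls involved; this is plain illustrative R (with an arbitrary example package), not RKWard's own code:

install.packages("ggplot2", dependencies = TRUE)  # fetch and install from CRAN
update.packages(ask = FALSE)                      # update everything outdated
remove.packages("ggplot2")                        # uninstall a package
rownames(installed.packages())                    # list installed packages
library(ggplot2)                                  # load a package into the session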

RKWard's package management GUI is built on top of these R functions, allowing users to perform package management tasks without directly interacting with the command-line interface. The GUI provides a more user-friendly experience and enables users to manage packages with just a few clicks.

When a user requests to install or update a package, RKWard performs the following technical steps:

  • Dependency resolution: RKWard checks for dependencies required by the package and resolves them using R's available.packages() and installed.packages() functions. If dependencies are not satisfied, RKWard prompts the user to install them automatically.
  • CMake Configuration: The FindR.cmake script is used during the compilation of RKWard to locate the R installation and its components, including the R library directory. This information is necessary for RKWard to interface with R and manage packages.
  • Package download and installation: RKWard downloads package source code or binary files from CRAN or other repositories using R's download.file() function. The packages are then installed using install.packages() with appropriate options, such as specifying the library directory and installing dependencies.
  • Package library management: RKWard installs packages in the R library directory, which is typically located in the user's home folder. The library directory can be configured in RKWard's settings. RKWard ensures that packages are installed in the correct library directory, depending on the user's R version and operating system.
  • Integration with R Console: RKWard includes an embedded R console, which allows users to see the output of package management commands and interact with them directly if needed.
  • Error Handling: RKWard provides error messages and troubleshooting advice if package installation or loading fails. This can include issues like missing dependencies, compilation errors, or permissions problems.
  • Package loading: After installation, RKWard loads the package into the current R session using R's library() or require() functions, making its functions and datasets available for use.
  • RKWard supports package management on various platforms, including Windows, macOS, and Linux. On Windows and macOS, RKWard typically installs packages as pre-compiled binaries for improved performance and ease of installation. On Linux, RKWard installs packages from source, which may require additional development libraries or tools to be installed on the user's system.

RKWard also supports installing packages from local files, Git repositories, and other sources by providing options to specify custom package repositories or URLs. This flexibility allows users to manage their R packages according to their specific needs.

In summary, RKWard implements package management using R's built-in package management functions and provides a user-friendly GUI to simplify package installation, updating, and removal. By leveraging R's package management system, RKWard enables users to manage their R packages efficiently and effectively, regardless of their operating system or R version.

Next Steps
  • Begin implementing package management features in Cantor.
  • Focus on creating a consistent and user-friendly interface for package management.
Categories: FLOSS Project Planets

Hynek Schlawack: Python Project-Local Virtualenv Management Redux

Planet Python - Mon, 2024-04-01 20:00

One of my first TIL entries was about how you can imitate Node’s node_modules semantics in Python on UNIX-like operating systems. A lot has happened since then (for the better!) and it’s time for an update. direnv still rocks, though.
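If all you want is the direnv flavor of project-local virtualenvs, a minimal .envrc along these lines is the usual starting point (a sketch using direnv's stock layout helper, not necessarily the exact setup from the post):

# .envrc at the project root: direnv creates and activates a
# project-local virtualenv whenever you cd into the directory.
layout python3

Run direnv allow once after creating the file to approve it.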

Categories: FLOSS Project Planets
