Feeds

The Drop Times: Dialogues that Shape the Future.

Planet Drupal - Mon, 2024-04-22 11:37

Dear Readers,

At the core of any vibrant community lies the foundational practice of open and effective communication—where growth is nurtured and innovation is sparked. For The DropTimes, our mission goes beyond the basic dissemination of information; we actively provide a platform for dialogue, connecting the latest technological evolutions with an engaged readership. This integral role embodies our commitment not only to inform but also to catalyze conversation and change, ensuring that every voice can be heard and every insight can be shared, thereby driving Drupal's growth.

The vitality of these discussions within the Drupal community is observable. Whether debating the necessity of new modules for Drupal 10 or discussing the release dates for Drupal 11, each dialogue enriches our collective understanding and propels the platform forward. These conversations, highlighted in our articles and series, testify to the importance of discussing diverse perspectives and disseminating varied opinions. 

By engaging in such dynamic exchanges, the community not only addresses immediate needs and challenges but also shapes the future of the technology. This ongoing dialogue ensures that Drupal remains at the cutting edge, constantly evolving in response to the contributions and concerns of its global user base. With that, let's move on to last week's highlights.

In the second part of the "Drupal's Innovation & Future: 2024 and Beyond" series, Kazima Abbas, a sub-editor with The DropTimes [TDT], takes a closer look at the thoughts and predictions of Drupal experts: Carlos Rincon Sanchez, Oscar Loria, Stella Power, Krishna R P, and Sinduri Guntupalli. The series aims to examine, through meticulous discussions, what might lie ahead for the Drupal platform.

In another interesting exchange, I had the opportunity to sit down with Grzegorz Bartman, the CEO of Droptica, to discuss Droopler 4, the latest version of their Drupal distribution. Droopler is a Drupal distribution, a pre-built website that users can customize and manage to fit their requirements without having to start from scratch. Learn more about Droopler 4 in this in-depth interview with Greg Bartman.

Last week's TDT Spotlight featured Gai Technologies, an organization rooted in the open-source ethos with more than 12 years of experience in web solutions. Elma John, our sub-editor, discussed the inception and journey of Gai Technologies, which is located in the serene setting of the Himalayas, with N. Krishnan, the CEO.

Pantheon, a leading SaaS-based WebOps platform, has entered into a strategic partnership with Lytics, a premier customer data platform (CDP) fully integrated with Google Cloud. Chris Yates, Pantheon’s vice president of products, and James McDermott elaborated on the technical synergies during a detailed discussion with The DropTimes. Read the detailed article to learn how this synergy will benefit Pantheon and Lytics' customers.

TDT is actively seeking volunteers to cover DrupalCon Portland 2024. Meanwhile, the insights from our DrupalCon Pittsburgh 2023 volunteers can be read here. Aiden F Dean Dunn, a then-recent graduate from the University of Pittsburgh, provides a fresh perspective on the event in a conversation with Varun Baker, highlighting the welcoming and diverse nature of the Drupal community. Additionally, the regular registration window for DrupalCon Portland 2024 has ended; late registration is now open.

Drupalers can now win a free ticket to DrupalCon Barcelona 2024 with the promo video contest. All are encouraged to record a brief video—preferably in landscape mode—stating their name and country of origin and send it to the organizers. Participants are also urged to share their videos on social media to encourage others to join the campaign, which aims to highlight the Drupal community's global diversity. The deadline for submissions is April 24, 2024.

The Healthcare Summit at DrupalCon Portland, scheduled for Thursday, May 9, is hailed as a must-attend event that will offer valuable insights for navigating the complexities of the healthcare industry in the digital age. Tickets are now available for Drupal Camping 2024 in Wolfsburg. Priced between 80 € and 100 €, they offer access to four days of camping, chilling, swimming, barbecuing, and, of course, all things Drupal.

There are Drupal events around the globe each week to keep Drupal enthusiasts engaged. A complete list of events for the week is available here.

Jeff Greenberg, a seasoned Drupal Architect at iFACTORY, recently sparked a discussion on LinkedIn about the apparent lack of a Drupal 10 module that enables users to subscribe by email to be notified when new content is posted. This query led to a broad response from various Drupal community experts, each offering insights or potential solutions. Dive into the discussion here.

The organizers of Splash Awards, Deutschland & Österreich, have announced the date for this year's Splash Awards 2024 Germany & Austria, set to take place on November 7 in Berlin. The Singapore Government Digital Services team has recently introduced Purple A11y, a comprehensive open-source accessibility testing tool designed to enhance website usability for persons with disabilities.

The BAT API module, widely recognized for its robust booking and availability management capabilities, has officially launched support for Drupal 10. Baddy Sonja Breidert, CEO and Co-Founder of 1xINTERNET, recently discussed enhancements to the Frontend Editing module they designed to streamline the editor experience. 

In other news, Smile launched a new no-code Drupal profile named Sobki at DrupalCamp Rennes 2024 on March 27.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update April 22

Open Source Initiative - Mon, 2024-04-22 10:42
Comments on the forum
  • A user added in the forum that there is an issue because traditional copyright protection might not apply to model weights, as they are essentially mathematical calculations: “licensing them through any kind of copyright license will not be enforceable !! and this means that anybody can use them without any copyright restriction (assuming that they have been made public) and this means that you cannot enforce any kind of provisions such as attribution, no warranty or copyleft”. They suggest using contractual terms instead of relying on copyright as a workaround, acknowledging that this will trigger a larger conversation.
Comments left on the definition text
  • Clarification needed under “What is Open Source AI”
  1. Discussion on whether “made available” should be changed to “released” or “distributed”
    1. One user pointed out that “made available” is the most appropriate, as the suggested wordings would be antagonistic and limiting
  2. Continuation of last week’s issue regarding defining who these four freedoms are for: deployers, users, or someone else.
    1. Added that a user understands it as “We need essential freedoms to enable users…”
    2. But, then who are we defining as “Users”? Is it the person deploying the AI or the calling prompt?
    3. Another wording is suggested: “Open Source AI is an AI system that is made available under terms that grant, without conditions or restrictions, the rights to…”
  • Clarification is needed under “Preferred form to make modification to a machine learning system”, 
  1. Specifically to the claim: (The following components are not required,) but their inclusion in releases is appreciated.
    1. Clarification regarding whether this means best practice or is merely a suggestion.
    2. Suggestion to change the sentence to “The following components are not required to meet the Open Source AI definition and may be provided for convenience.” This will also “consider if those components are provided, can they be provided under different terms that don’t meet the Open Source AI definition, or do they fall under the same OSI compliant license automatically. “
  2. Question regarding the addition of “may” under data transparency in the 0.0.7 draft definition, which was not included in the 0.0.6 one, considering that the components are described as “required” in the checklist below
    1. (Context: “Sufficiently detailed information on how the system was trained. This may include the training methodologies and techniques, the training data sets used, information about the provenance of those data sets, their scope and characteristics; how the data was obtained and selected, the labelling procedures and data cleaning methodologies.”)
    2. Another user seconds this and further adds that it should be changed to “must”, or something else which is definitive.
Town Hall meeting was held on April 19th

In case you missed it, the town hall was held last Friday. Access the recordings and slides used here.

Categories: FLOSS Research

Real Python: Write Unit Tests for Your Python Code With ChatGPT

Planet Python - Mon, 2024-04-22 10:00

Having a good battery of tests for your code may be a requirement for many Python projects. In practice, writing unit tests is hard and can take a lot of time and effort. Therefore, some developers don’t like to write them. However, with large language models (LLMs) and tools like ChatGPT, you can quickly create robust and complete sets of tests for your Python code.

In Python, you can use multiple different tools for writing tests. The most commonly used tools include doctest, unittest, and pytest. ChatGPT can be of great help in writing tests with any of these tools.

In this tutorial, you’ll:

  • Prompt ChatGPT to create tests using doctest
  • Use ChatGPT to write unittest tests, fixtures, and suites
  • Craft ChatGPT prompts to write pytest tests and fixtures
  • Use alternative prompts for cases where the code isn’t available

To get the most out of this tutorial, you should set up a ChatGPT account and know the basics of interacting with this tool using prompt engineering. You should also know the basics of how to test code in Python.

Get Your Code: Click here to download the free sample code you’ll use to write unit tests for your Python code using ChatGPT.

Benefits of Using ChatGPT for Testing Python Code

Having good and up-to-date unit tests for your code is a must for any Python project. Poorly tested code or code without tests may end up being unreliable and weak. With automated tests, you can ensure and show that your code works correctly in different scenarios. So, having tests is important from the technical and commercial point of view.

Writing good tests is hard and can take a lot of time. That’s why some developers don’t like to write them at all. Using large language models (LLMs) like ChatGPT can be a viable alternative for providing your projects and code with proper tests.

Note: You can use ChatGPT and other LLMs in many useful ways in the context of Python development. Check out the following resources for other use cases and useful insights:

Some of the benefits of using ChatGPT to write tests for your Python code include the following:

  • Efficiency and speed: It can generate unit tests based on specifications or code snippets. This possibility significantly reduces the time that you need to spend writing tests. So you can focus on writing application logic.
  • Coverage improvement: It can suggest tests for edge cases or scenarios that developers might not immediately consider. This way, you can improve your code’s test coverage.
  • Error reduction: It can reduce human error in writing repetitive or boilerplate test code.
  • Learning and onboarding: It can serve as an educational tool for developers who are new to testing frameworks or unfamiliar with best practices in unit testing. The generated tests can help developers learn about testing patterns, assertions, and ways to effectively write tests.

With ChatGPT, you can generate unit tests for your Python code in almost no time. However, note that even though the generated tests can look good, you should still review and possibly refine them.

Note: For this tutorial, you’ll use a free ChatGPT account that’s powered by the GPT-3.5 LLM. You can use this model for free with the default web interface by OpenAI. If you use a paid account with GPT-4, then your results might be more accurate and faster.

Additionally, you’ll use an individual chat thread for each major section in the tutorial. So you’ll use different threads for the doctest, unittest, and pytest sections.

Using a single chat thread to run all the examples in this tutorial may cause the chat to generate significantly different responses because of the past context, which includes previous prompts and responses.

In the following sections, you’ll learn the basics of using ChatGPT as an assistant for creating coherent unit tests for your Python projects using different test frameworks and libraries.

Writing doctest Tests With ChatGPT

Python’s doctest module provides a lightweight testing framework that you can use for test automation. It can read the test cases from your project’s documentation and your code’s docstrings. This framework comes with the Python interpreter, so you’ll have it at your disposal with any Python installation, which is great.

Note: To dive deeper into how to use the doctest module for testing your code, check out Python’s doctest: Document and Test Your Code at Once.

With doctest, you can write code that provides examples of how your code works, simulating REPL sessions. These examples work as documentation and also as test cases or unit tests. Each doctest test needs a statement or expression and an expected result.

You can use ChatGPT to help you out with writing doctest tests. First, you need some code to test. The following function tries to tackle the FizzBuzz challenge, where you return "fizz" for numbers divisible by three, "buzz" for those divisible by five, and "fizz buzz" for those divisible by both three and five.

Go ahead and create a fizzbuzz.py file and add the following code to it:

Python fizzbuzz.py

def fizzbuzz(number):
    if number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    elif number % 15 == 0:
        return "fizz buzz"
    else:
        return number

This function works okay for numbers divisible by 3 or 5. However, there is a slight issue with numbers that are divisible by both. Even though you know this, go ahead and start a new chat thread. Then, prompt ChatGPT to write doctest tests for the function:

You: Write doctest tests for the following function:

Python

def fizzbuzz(number):
    if number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    elif number % 15 == 0:
        return "fizz buzz"
    else:
        return number

ChatGPT:

Certainly! The function you’ve provided implements a basic FizzBuzz algorithm but with a slight logical error. The check for divisibility by 15 should come before checks for 3 and 5, as every number divisible by 15 is also divisible by both 3 and 5, but your current order will never reach the "fizz buzz" case. Here’s a corrected version of your function with the logical order fixed, followed by doctest tests for it:

Python

def fizzbuzz(number):
    """
    Returns 'fizz' if number is divisible by 3,
    'buzz' if number is divisible by 5,
    'fizz buzz' if number is divisible by 15,
    and the number itself otherwise.

    >>> fizzbuzz(3)
    'fizz'
    >>> fizzbuzz(5)
    'buzz'
    >>> fizzbuzz(15)
    'fizz buzz'
    >>> fizzbuzz(4)
    4
    >>> fizzbuzz(30)
    'fizz buzz'
    >>> fizzbuzz(9)
    'fizz'
    >>> fizzbuzz(10)
    'buzz'
    >>> fizzbuzz(2)
    2
    """
    if number % 15 == 0:
        return "fizz buzz"
    elif number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    else:
        return number

To use these doctest tests, you’ll need to include them in the docstring of your function as shown above. Then, you can run the tests with the doctest module by adding the following to your script:

Python

if __name__ == "__main__":
    import doctest
    doctest.testmod()

This code will automatically run the tests defined in the docstrings when you execute the Python file. Remember, the corrected order of conditions in the function ensures that numbers divisible by both 3 and 5 return "fizz buzz", which is crucial for the FizzBuzz logic.

Wow! That's great! The chat not only wrote the doctest tests for you but also fixed the issue, explained important concepts around doctest, and provided instructions on how to run the tests. Note that the tests cover both regular and edge cases, which is pretty convenient.
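As a hedged aside that isn't part of the article's excerpt: if you want doctest to report every example as it runs rather than only failures, testmod() accepts a verbose flag, so the runner block above could be tweaked like this:

if __name__ == "__main__":
    import doctest
    # verbose=True prints each checked example, not just the failures
    doctest.testmod(verbose=True)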

Read the full article at https://realpython.com/chatgpt-unit-tests-python/ »


Categories: FLOSS Project Planets

Bits from Debian: Debian Project Leader Election 2024, Andreas Tille elected.

Planet Debian - Mon, 2024-04-22 08:00

The voting period for the Debian Project Leader election has ended. Please join us in congratulating Andreas Tille as the new Debian Project Leader.

The new term for the project leader started on 2024-04-21.

369 of 1,010 Debian Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2024 page.

Many thanks to all of our Developers for voting.

Categories: FLOSS Project Planets

Vincent Fourmond: QSoas version 3.3 is out

Planet Debian - Mon, 2024-04-22 06:50
Version 3.3 brings in new features, including reverse Laplace transforms and fits, pH fits, and commands for picking points from a dataset, averaging points with the same X value, or performing singular value decomposition.

In addition to these new features, many previous commands were improved, like the addition of a bandcut filter in FFT filtering, better handling of the loading of files produced by QSoas itself, and a button to interrupt the processing of scripts.

There are a lot of other new features, improvements and so on; look for the full list there.

About QSoas
QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 3.3. You can download its source code or precompiled versions for MacOS and Windows for free there. Alternatively, you can clone from the GitHub repository.
Categories: FLOSS Project Planets

LN Webworks: How To Use Cron Jobs For Task Automation In Drupal

Planet Drupal - Mon, 2024-04-22 04:07

The efficiency of any website is a must in this fast-paced world of web development. One of the most important tools for maximizing the efficiency of your Drupal site is the cron job. 

Cron jobs are powerful tools that automate routine website tasks, taking a big load off a developer or site administrator. In this article, I will give you important insights into cron jobs and how to use them within Drupal, followed by how to customize tasks by overriding services of the Ultimate Cron module.

What Are Cron Jobs in Drupal? 

Cron in Drupal works quite well as it coordinates all the behind-the-scenes tasks. It handles everything from clearing caches to checking for content updates and executing scheduled tasks set by your modules.

Categories: FLOSS Project Planets

Zato Blog: Enterprise Python: Integrating with Salesforce

Planet Python - Mon, 2024-04-22 04:00
Enterprise Python: Integrating with Salesforce
2024-04-22, by Dariusz Suchojad

Overview

Salesforce connections are one of the newest additions to Zato 3.2, allowing you to look up and manage Salesforce records and other business data. To showcase them, this article will create a sample Salesforce marketing campaign using nothing more than basic REST APIs combined with plain Python objects, such as dicts.

If you have not done it already, you can download Zato here.

Basic workflow

The scope of our work will be:

  • Creating Salesforce credentials for our integration project
  • Defining a Salesforce connection in Zato
  • Authoring a service that will map input data to the format that Salesforce expects
  • Creating a Zato REST channel that will be invoked through curl during tests
  • Testing the integration
Creating Salesforce credentials

To be able to create a connection to Salesforce in the next step, we need a few credentials. There is a full article about how to prepare them, and this section is the gist of it.

At runtime, based on this information, Zato will obtain the necessary authentication and authorization tokens itself, which means that you can focus solely on the business side of the integration, not on its low-level aspects.

The process of obtaining the credentials needs to be coordinated with an administrator of your organization. To assist in that, the screenshots below explain where to find them.

The credentials are:

  • Username and password
  • Consumer key
  • Consumer secret

The username and password are simply the same credentials that can be used to log in to Salesforce:

Consumer key and secret are properties of a connected app - this is a term that Salesforce uses for API clients that invoke its services. If you are already an experienced Salesforce REST API user, you may know the key and secret under their aliases of "client_id" and "client_secret" - these are the same objects.

Note that when a connected app already exists and you would like to retrieve the key and secret, they will be available under the "View" menu option for the app, not under "Edit" or "Manage".

Defining a Salesforce connection in Zato

With all the credentials in place, we can create a new Salesforce connection in Zato Dashboard, as below.

Authoring an integration service in Python

Above, we created a connection definition that lets Zato obtain session tokens and establish connections to Salesforce. Now, we can create an API service that will make use of such connections.

In the example below, we are using the POST REST method to invoke an endpoint that creates new Salesforce campaigns. In your own integrations, you can invoke any other Salesforce endpoint, using any REST method as needed, by following the same pattern: create a model with input fields, build a Python dict for the request to Salesforce, invoke the endpoint, and map the required fields from the Salesforce response to what your own service returns to its callers.

Note that we use a datamodel-based SimpleIO definition for the service. Among other things, although we are not going to do it here, this would let us offer definitions for this and other services.

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.salesforce import SalesforceClient

# ###########################################################################

@dataclass(init=False)
class CreateCampaignRequest(Model):
    name: str
    segment: str

# ###########################################################################

@dataclass(init=False)
class CreateCampaignResponse(Model):
    campaign_id: str

# ###########################################################################

class CreateCampaign(Service):

    class SimpleIO:
        input = CreateCampaignRequest
        output = CreateCampaignResponse

    def handle(self):

        # This is our input data
        input = self.request.input  # type: CreateCampaignRequest

        # Salesforce REST API endpoint to invoke - note that Zato
        # will add a prefix to it containing the API version.
        path = '/sobjects/Campaign/'

        # Build the request to Salesforce based on what we received
        request = {
            'Name': input.name,
            'Segment__c': input.segment,
        }

        # .. create a reference to our connection definition ..
        salesforce = self.cloud.salesforce['My Salesforce Connection']

        # .. obtain a client to Salesforce ..
        with salesforce.conn.client() as client:  # type: SalesforceClient

            # .. create the campaign now ..
            sf_response = client.post(path, request)

        # .. build our response object ..
        response = CreateCampaignResponse()
        response.campaign_id = sf_response['id']

        # .. and return its ID to our caller.
        self.response.payload = response

# ###########################################################################

Creating a REST channel

Note that we assign HTTP Basic Auth credentials to the channel. In this manner, it is possible for clients of this REST channel to authenticate using a method that they are already familiar with, which simplifies everyone's work - it is Zato that deals with how to authenticate against Salesforce, whereas your API clients use the ubiquitous HTTP Basic Auth method.

Testing

The last step is to invoke the newly created channel:

$ curl http://api:password@localhost:17010/api/campaign/create -d '{"name":"Hello", "segment":"123"}'
{"campaign_id":"8901Z3VHXDTebEJWs"}
$

That is everything - you have just integrated with Salesforce and exposed a REST channel for external applications to integrate with!

More blog posts
Categories: FLOSS Project Planets

Russ Allbery: Review: The Stars, Like Dust

Planet Debian - Sun, 2024-04-21 22:22

Review: The Stars, Like Dust, by Isaac Asimov

Series: Galactic Empire #2
Publisher: Fawcett Crest
Copyright: 1950, 1951
Printing: June 1972
Format: Mass market
Pages: 192

The Stars, Like Dust is usually listed as the first book in Asimov's lesser-known Galactic Empire Trilogy since it takes place before Pebble in the Sky. Pebble in the Sky was published first, though, so I count it as the second book. It is very early science fiction with a few mystery overtones.

Buying books produces about 5% of the pleasure of reading them while taking much less than 5% of the time. There was a time in my life when I thoroughly enjoyed methodically working through a used book store, list in hand, tracking down cheap copies to fill in holes in series. This means that I own a lot of books that I thought at some point that I would want to read but never got around to, often because, at the time, I was feeling completionist about some series or piece of world-building. From time to time, I get the urge to try to read some of them.

Sometimes this is a poor use of my time.

The Galactic Empire series is from Asimov's first science fiction period, after the Foundation series but contemporaneous with their collection into novels. They're set long, long before Foundation, but after humans have inhabited numerous star systems and Earth has become something of a backwater. That process is just starting in The Stars, Like Dust: Earth is still somewhere where an upper-class son might be sent for an education, but it has been devastated by nuclear wars and is well on its way to becoming an inward-looking relic on the edge of galactic society.

Biron Farrill is the son of the Lord Rancher of Widemos, a wealthy noble whose world is one of those conquered by the Tyranni. In many other SF novels, the Tyranni would be an alien race; here, it's a hierarchical and authoritarian human civilization. The book opens with Biron discovering a radiation bomb planted in his dorm room. Shortly after, he learns that his father had been arrested. One of his fellow students claims to be on Biron's side against the Tyranni and gives him false papers to travel to Rhodia, a wealthy world run by a Tyranni sycophant.

Like most books of this era, The Stars, Like Dust is a short novel full of plot twists. Unlike some of its contemporaries, it's not devoid of characterization, but I might have liked it better if it were. Biron behaves like an obnoxious teenager when he's not being an arrogant ass. There is a female character who does a few plot-relevant things and at no point is sexually assaulted, so I'll give Asimov that much, but the gender stereotypes are ironclad and there is an entire subplot focused on what I can only describe as seduction via petty jealousy.

The writing... well, let me quote a typical passage:

There was no way of telling when the threshold would be reached. Perhaps not for hours, and perhaps the next moment. Biron remained standing helplessly, flashlight held loosely in his damp hands. Half an hour before, the visiphone had awakened him, and he had been at peace then. Now he knew he was going to die.

Biron didn't want to die, but he was penned in hopelessly, and there was no place to hide.

Needless to say, Biron doesn't die. Even if your tolerance for pulp melodrama is high, 192 small-print pages of this sort of thing is wearying.

Like a lot of Asimov plots, The Stars, Like Dust has some of the shape of a mystery novel. Biron, with the aid of some newfound companions on Rhodia, learns of a secret rebellion against the Tyranni and attempts to track down its base to join them. There are false leads, disguised identities, clues that are difficult to interpret, and similar classic mystery trappings, all covered with a patina of early 1950s imaginary science. To me, it felt constructed and artificial in ways that made the strings Asimov was pulling obvious. I don't know if someone who likes mystery construction would feel differently about it.

The worst part of the plot thankfully doesn't come up much. We learn early in the story that Biron was on Earth to search for a long-lost document believed to be vital to defeating the Tyranni. The nature of that document is revealed on the final page, so I won't spoil it, but if you try to think of the stupidest possible document someone could have built this plot around, I suspect you will only need one guess. (In Asimov's defense, he blamed Galaxy editor H.L. Gold for persuading him to include this plot, and disavowed it a few years later.)

The Stars, Like Dust is one of the worst books I have ever read. The characters are overwrought, the politics are slapdash and build on broad stereotypes, the romantic subplot is dire and plays out mainly via Biron egregiously manipulating his petulant love interest, and the writing is annoying. Sometimes pulp fiction makes up for those common flaws through larger-than-life feats of daring, sweeping visions of future societies, and ever-escalating stakes. There is little to none of that here. Asimov instead provides tedious political maneuvering among a class of elitist bankers and land owners who consider themselves natural leaders. The only places where the power structures of this future government make sense are where Asimov blatantly steals them from either the Roman Empire or the Doge of Venice.

The one thing this book has going for it — the thing, apart from bloody-minded completionism, that kept me reading — is that the technology is hilariously weird in that way that only 1940s and 1950s science fiction can be. The characters have access to communication via some sort of interstellar telepathy (messages coded to a specific person's "brain waves") and can travel between stars through hyperspace jumps, but each jump is manually calculated by referring to the pilot's (paper!) volumes of the Standard Galactic Ephemeris. Communication between ships (via "etheric radio") requires manually aiming a radio beam at the area in space where one thinks the other ship is. It's an unintentionally entertaining combination of technology that now looks absurdly primitive and science that is so advanced and hand-waved that it's obviously made up.

I also have to give Asimov some points for using spherical coordinates. It's a small thing, but the coordinate systems in most SF novels and TV shows are obviously not fit for purpose.

I spent about a month and a half of this year barely reading, and while some of that is because I finally tackled a few projects I'd been putting off for years, a lot of it was because of this book. It was only 192 pages, and I'm still curious about the glue between Asimov's Foundation and Robot series, both of which I devoured as a teenager. But every time I picked it up to finally finish it and start another book, I made it about ten pages and then couldn't take any more. Learn from my error: don't try this at home, or at least give up if the same thing starts happening to you.

Followed by The Currents of Space.

Rating: 2 out of 10

Categories: FLOSS Project Planets

www-zh-cn @ Savannah: Welcome our new member - integral

GNU Planet! - Sun, 2024-04-21 20:56

Hi, All:

Please join me in welcoming our new member:

 User Details:
-------------
Name:
Login:   integral
Email:   integral@member.fsf.org

I wish integral a wonderful journey in GNU CTT.

Happy Hacking
wxie

Categories: FLOSS Project Planets

Go Deh: Searching OEIS tables

Planet Python - Sun, 2024-04-21 16:32

 

A few months ago I submitted a series to OEIS* that was accepted, yes, but OEIS does not seem to make my series searchable!

*OEIS is the On-Line Encyclopedia of Integer Sequences. I guess table is not in the name, but...

(best viewed on larger than a portrait phone)

Let me explain.

The documentation for OEIS explains that if you have a 2D triangle or table of values rather than a strictly one-dimensional series, then you should antidiagonalise the data and submit the series produced.

They give A003987 as an example. It gives this table:

Table begins
   0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, ...
   1,  0,  3,  2,  5,  4,  7,  6,  9,  8, 11, 10, ...
   2,  3,  0,  1,  6,  7,  4,  5, 10, 11,  8, ...
   3,  2,  1,  0,  7,  6,  5,  4, 11, 10, ...
   4,  5,  6,  7,  0,  1,  2,  3, 12, ...
   5,  4,  7,  6,  1,  0,  3,  2, ...
   6,  7,  4,  5,  2,  3,  0, ...
   7,  6,  5,  4,  3,  2, ...
   8,  9, 10, 11, 12, ...
   9,  8, 11, 10, ...
  10, 11,  8, ...
  11, 10, ...
  12, ...
  ...

The above gets antidiagonalised to the series beginning:

  0, 1, 1, 2, 0, 2, 3, 3, 3, 3, 4, 2, 0, 2, 4, 5, 5, 1, 1, 5, 5, 6, 4,  6, 0, 6, 4, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8, 6, 4, 6, 0, 6, 4, 6, 8, 9,  9, 5, 5, 1, 1, 5, 5, 9, 9, 10, 8, 10, 4, 2, 0, 2, 4, 10, 8, 10, 11,  11, 11, 11, 3, 3, 3, 3, 11, 11, 11, 11, 12, 10, 8, 10, 12, 2, 0, 2,  12, 10, 8, 10, 12, 13, 13, 9, 9, 13, 13, 1, 1, 13, 13, 9, 9, 13, 13
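As a hedged aside (not from the post itself): the mapping from a table cell to its position in the antidiagonalised sequence is easy to compute. Cell (row, col) sits on anti-diagonal d = row + col and, with the ordering used above, at offset row within it, so its 0-based index is d*(d+1)/2 + row (and since this particular table is symmetric, the direction in which each diagonal is read does not change the values). A quick check in Python:

# a minimal sketch: index of table cell (row, col) in the
# antidiagonalised sequence, for the anti-diagonal ordering above
def antidiag_index(row: int, col: int) -> int:
    d = row + col                  # which anti-diagonal the cell is on
    return d * (d + 1) // 2 + row  # offset of the cell within it

# A003987 is the XOR table, so table[row][col] == row ^ col
seq_start = [0, 1, 1, 2, 0, 2, 3, 3, 3, 3, 4, 2, 0, 2, 4]
assert seq_start[antidiag_index(1, 1)] == (1 ^ 1)  # 0
assert seq_start[antidiag_index(2, 1)] == (2 ^ 1)  # 3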

Searching...

If we search for a sub-sequence of the antidiagonalised table, we can find the correct entry.

If, however, we search for a row of values from the table, A003987 is not found!
The values chosen to search for, 5,4,7,6,1,0,3, appear near the end of the table above, which shows that this row of numbers should be followed by a 2.
The table shows 13*13 / 2 ~ 85 values. OEIS has a list of 104 values, so it has the data to search through.

No intuitive search of OEIS tables

It seems to me that the most intuitive way to search a table of values is by row, left to right. There are other ways to search a table (assuming an origin at top left and the table extending to the right and down):

  • By row: L2R, R2L
  • By column: Top2Bottom, B2T
  • By 45 degree diagonals: ↘, ↖, ↙, ↗

OEIS doesn't seem to do these searches on tabular data.

Regenerating a 2D table from antidiagonalised data.

I did play around and created some code to recreate a table as a list of row-lists, in Python, given an OEIS B-file. The options handling is a work in progress, but the main part was being able to generate the table.

# -*- coding: utf-8 -*-
# %%
"""adia_to_table.py file

Generate 2D table from OEIS type anti-diagonalised sequence held in file

Created on Thu Apr  4 18:16:07 2024
@author: paddy3118
"""
from itertools import zip_longest
import math
# from pprint import pp
from antidiagonals import antidiag_triangle_indices


def read_bfile(bfname: str) -> tuple[int | None,  # first index
                                     list[int]]:   # anti-diag values
    """Read B-file bfname

    ## B-File format:
    * Ref: https://oeis.org/SubmitB.html
    * Blank lines ignored
    * Lines beginning with # ignored
    * Lines of two, space-separated integers <index> <indexed-value>

    It is assumed the index increments by one on subsequent lines.
    """
    first_index, values = None, []
    with open(bfname) as bfile:
        for line in bfile:
            ln = line.strip()
            if not ln or ln.startswith('#'):
                continue
            index, value = [int(field) for field in ln.split()[:2]]
            if first_index is None:
                first_index = index
            values.append(value)

    return first_index, values


def antidiag_to_table(sequence: list[int]) -> list[list[int]]:
    """
    Convert anti-diagonalised sequence back to infinite 2D table.

    Parameters
    ----------
    sequence : list[int]
        Anti-diagonalised values from table.

    Returns
    -------
    list[list[int]]
        Table of rows of ints.

    Table rows will fill in from successive sequence values like this:

     1  2  4  7 11 16 ...
     3  5  8 12 17
     6  9 13 18
    10 14 19
    15 20
    21
    .
    .
    .
    """
    # 1, 3, 6, 10, 15, 21, ... rows*(rows+1) / 2
    # min columns in triangular table generation. ~= min rows
    size = len(sequence)                                  # = rows*(rows+1)/2
    rows = math.ceil((-1 + math.sqrt(4 * 2 * size)) / 2)  # solve for rows
    # cols = rows  # last row may be deleted if last anti-diag is part filled.
    # print(f"{(size, cols) = }")

    # Empty (triangular) table of None's
    table = [[None] * (rows - i)
             for i in range(rows)]

    indices = antidiag_triangle_indices()
    col = 0  # for if sequence is empty
    for val, (row, col) in zip(sequence, indices):
        table[row][col] = val

    # Remove unfilled part of last anti-diag of table
    while col > 0:
        row, col = next(indices)
        table[row].pop(-1)
    # remove last row if present and empty
    if table and not table[-1]:
        table.pop(-1)

    return table


def pp_table(table: list[list[int]]) -> None:
    "Pretty-print table of differring row lengths"
    if not table:
        return
    col_width = max(max(len(str(val)) for val in row) for row in table)
    for row in table:
        print(''.join(f"{val:{col_width}}" for val in row))


def transpose(table: list[list[int]]) -> list[list[int]]:
    "Table of rows to x<>y transposed table of new rows"
    fv = math.nan
    tr = [list(new_row)
          for new_row in zip_longest(*table, fillvalue=fv)]
    # remove fillvalues in triangular transposition
    for row in tr:
        try:
            row[row.index(fv):] = []
        except ValueError:
            continue

    return tr


if __name__ == "__main__":
    print("# TEST FILL BY ANTI-DIAGONAL\n")
    for n in range(0, 8):
        print(f"{n = }:\n")
        ans = antidiag_to_table(list(range(1, n+1)))
        pp_table(ans)
        print()

    fname, m = 'b365096.txt', 505
    print(f"\n\n# Data from {fname}, first {m} values:\n")
    ad = read_bfile(fname)
    ans = antidiag_to_table(ad[1][:m])
    pp_table(ans)
    print("\n## Transposed:\n")
    pp_table((tr:=transpose(ans)))

And antidiagonals.py is this:

# -*- coding: utf-8 -*-"""Anti-diagonals:
0,0 0,1 0,2 0,31,0 1,1 1,2 1,32,0 2,1 2,2 2,33,0 3,1 3,2 3,3
Of Square:0,0  0,1 1,0  0,2 1,1 2,0  0,3 1,2 2,1 3,0   1,3 2,2 3,1  2,3 3,2  3,3
of Infinite table:0,0  0,1 1,0  0,2 1,1 2,0  0,3 1,2 2,1 3,0   0,4 1,3 2,2 3,1 4,0 ...

Created on Mon Aug 21 13:36:31 2023
@author: paddy
"""# %% Triangles
from itertools import islice

def antidiag_triangle_indices() -> tuple[int, int]:    x = y = 0    while True:        yield (x, y)        x, y = (x+1, y-1) if y else (0, x+1)

list(islice(antidiag_triangle_indices(), 15))
# %% Rectangles
from itertools import islice

def antidiag_rectangle_indices(sizex: int=4, sizey: int=4) -> tuple[int, int]:    x = y = 0    while True:        yield (x, y)        if (x, y) == (sizex - 1, sizey - 1):            break        x, y = (x+1, y-1)        if x == sizex or y < 0:            u = x + y + 1            x, y = (0, u) if u < sizey else (u - sizey + 1, sizey - 1)
list(antidiag_rectangle_indices(3, 4))


END.

Categories: FLOSS Project Planets

gnulib @ Savannah: GNU gnulib: calling for beta-testers

GNU Planet! - Sun, 2024-04-21 06:47

If you are developer on a package that uses GNU gnulib as part of its build system:

gnulib-tool has been known for being slow for many years. We have listened to your complaints. A rewrite of gnulib-tool in another programming language (Python) is ready for beta-testing. It is between 8 times and 100 times faster than the original gnulib-tool.

Both implementations should behave identically, that is, produce the same generated files and the same output. You can help us ensure this, through the following steps:

1. Make sure you have Python (version 3.7 or newer) installed on your machine.

2. Update your gnulib checkout. (For some packages, it comes as a git submodule named 'gnulib'.) Like this:

  $ git checkout master
  $ git pull

     Set the environment variable GNULIB_SRCDIR, pointing to this checkout.

     If the package is using a git submodule named 'gnulib', it is also advisable to do

  $ git commit -m 'build: Update gnulib submodule to latest.' gnulib

     (as a preparation for step 5, because the --no-git option does not work as expected in all variants of 'bootstrap').

3. Set an environment variable that enables checking that the two implementations behave the same:

  $ export GNULIB_TOOL_IMPL=sh+py


4. Clean the built files of your package:

  $ make -k distclean


5. Regenerate the fetched and generated files of your package. Depending on the package, this may be a command such as

  $ ./bootstrap --no-git --gnulib-srcdir=$GNULIB_SRCDIR

     or

  $ export GNULIB_SRCDIR; ./autopull.sh; ./autogen.sh

     or, if no such script is available:

  $ $GNULIB_SRCDIR/gnulib-tool --update

     If there is a failure, due to differences between the 'sh' and 'py' results, please report it to <bug-gnulib@gnu.org>.

6. If this invocation was successful, you can trust the rewritten gnulib-tool and use it from now on, by setting the environment variable

  $ export GNULIB_TOOL_IMPL=py


7. Continue with

  $ ./configure
  $ make

     as usual.

And enjoy the speed! The rewritten gnulib-tool was implemented by Dmitry Selyutin, Collin Funk, and me.

Categories: FLOSS Project Planets

KDE Gear 24.05 branches created

Planet KDE - Sun, 2024-04-21 06:39
Make sure you commit anything you want to end up in the KDE Gear 24.05
releases to them

Next Dates
  • April 25 2024: 24.05 Freeze and Beta (24.04.80) tag & release
  • May 9, 2024: 24.05 RC (24.04.90) Tagging and Release
  • May 16, 2024: 24.05 Tagging
  • May 23, 2024: 24.05 Release

https://community.kde.org/Schedules/KDE_Gear_24.05_Schedule
Categories: FLOSS Project Planets

The Drop Times: Drupal's Innovation & Future: 2024 and Beyond—Part 2 | Industry Experts' Perspective

Planet Drupal - Sun, 2024-04-21 02:23
Discover what lies ahead for Drupal as we delve into insights and predictions from experts Carlos Rincon Sanchez, Oscar Loria, Stella Power, Krishna R P, and Sinduri Guntupalli. Join us as we explore the future of this resilient platform and ask: What is your vision for Drupal in 2024 and beyond?
Categories: FLOSS Project Planets

Bastian Venthur: Help needed: creating a WSDL file to interact with debbugs

Planet Debian - Sat, 2024-04-20 07:45

I am upstream and Debian package maintainer of python-debianbts, which is a Python library that allows for querying Debian’s Bug Tracking System (BTS). python-debianbts is used by reportbug, the standard tool to report bugs in Debian, and is therefore the glue between reportbug and the BTS.
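For readers who have never used the library, here is a hedged usage sketch (assuming the current python-debianbts API; the package name queried below is just an example):

# a minimal sketch, assuming the current python-debianbts API
import debianbts as bts

# fetch the numbers of bugs filed against a package
bug_numbers = bts.get_bugs(package="python-debianbts")

# fetch the status records and print a short summary of each
for report in bts.get_status(bug_numbers):
    print(report.bug_num, report.severity, report.subject)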

debbugs, the software that powers Debian’s BTS, provides a SOAP interface for querying the BTS. Unfortunately, SOAP is not a very popular protocol anymore, and I’m facing the second migration to another underlying SOAP library as they continue to become unmaintained over time. Zeep, the library I’m currently considering, requires a WSDL file in order to work with a SOAP service, however, debbugs does not provide one. Since I’m not familiar with WSDL, I need help from someone who can create a WSDL file for debbugs, so I can migrate python-debianbts away from pysimplesoap to zeep.

How did we get here?

Back in the olden days, reportbug was querying the BTS by parsing its HTML output. While this worked, it tightly coupled the user-facing presentation of the BTS with critical functionality of the bug reporting tool. The setup was fragile, prone to breakage, and did not allow changing anything in the BTS frontend for fear of breaking reportbug itself.

In 2007, I started to work on reportbug-ng, a user-friendly alternative to reportbug, targeted at users not comfortable using the command line. Early on, I decided to use the BTS’ SOAP interface instead of parsing HTML like reportbug did. In 2008, I extracted the code that dealt with the BTS into a separate Python library, and after some collaboration with the reportbug maintainers, reportbug adopted python-debianbts in 2011 and has used it ever since.

In 2015, I was working on porting python-debianbts to Python 3. During that process, it turned out that its major dependency, SoapPy, had been pretty much unmaintained for years and was blocking the Python 3 transition. Thanks to the help of Gaetano Guerriero, who ported python-debianbts to pysimplesoap, the migration was unblocked and could proceed.

In 2024, almost ten years later, pysimplesoap seems to be unmaintained as well, and I have to look again for alternatives. The most promising one right now seems to be zeep. Unfortunately, zeep requires a WSDL file for working with a SOAP service, which debbugs does not provide.

How can you help?

reportbug (and thus python-debianbts) is used by thousands of users and I have a certain responsibility to keep things working properly. Since I simply don’t know enough about WSDL to create such a file for debbugs myself, I’m looking for someone who can help me with this task.

If you’re familiar with SOAP, WSDL and optionally debbugs, please get in touch with me. I don’t speak Perl, so I’m not really able to read the debbugs code, but I do know some things about the SOAP requests and replies due to my work on python-debianbts, so I’m sure we can work something out.

There is a WSDL file for a debbugs version used by GNU, but I don’t think it’s official and it currently does not work with zeep. It may be a good starting point, though.

The future of debbugs’ API

While we can probably continue to support debbugs’ SOAP interface for a while, I don’t think it’s very sustainable in the long run. A simpler, well documented REST API that returns JSON seems more appropriate nowadays. The queries and replies that debbugs currently supports are simple enough to design a REST API with JSON around it. The benefit would be less complex libraries on the client side and probably easier maintainability on the server side as well. debbugs’ maintainer seemed to be in agreement with this idea back in 2018. I created an attempt to define a new API (HTML render), but somehow we got stuck and no progress has been made since then. I’m still happy to help shaping such an API for debbugs, but I can’t really implement anything in debbugs itself, as it is written in Perl, which I’m not familiar with.

Categories: FLOSS Project Planets

ListenData: 4 Ways to Use ChatGPT API in Python

Planet Python - Sat, 2024-04-20 07:28

In this tutorial, we will explain how to use ChatGPT API in Python, along with examples.

Steps to Access ChatGPT API

Please follow the steps below to access the ChatGPT API.

  1. Visit the OpenAI Platform and sign up using your Google, Microsoft or Apple account.
  2. After creating your account, the next step is to generate a secret API key to access the API. The API key looks like this: sk-xxxxxxxxxxxxxxxxxxxx
  3. If your phone number has not been associated with any other OpenAI account previously, you may get free credits to test the API. Otherwise, you have to add at least 5 dollars to your account, and charges will be based on the usage and the type of model you use. Check out the pricing details on the OpenAI website.
  4. Now you can call the API using the code below.
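The article's own snippet sits behind the link below; as a rough, hedged sketch (assuming the current openai Python package, with a placeholder key, model name and prompt), a call might look like this:

# a minimal sketch, not the article's own code
from openai import OpenAI

# the secret key generated in step 2; reading it from an environment
# variable is safer than hard-coding it like this
client = OpenAI(api_key="sk-xxxxxxxxxxxxxxxxxxxx")

# model name and prompt are placeholder assumptions
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)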
To read this article in full, please click here. This post appeared first on ListenData.
Categories: FLOSS Project Planets

ListenData: How to Build ChatGPT Clone in Python

Planet Python - Sat, 2024-04-20 02:17

In this article, we will see the steps involved in building a chat application and an answering bot in Python using the ChatGPT API and gradio.

Developing a chat application in Python provides more control and flexibility than the ChatGPT website. You can customize and extend the chat application as per your needs. It also helps you integrate with your existing systems and other APIs.
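As a hedged illustration of the general shape of such an app (not the article's own code; the gradio ChatInterface component and the openai package are assumed, and the model name is a placeholder):

# a minimal sketch, assuming the gradio and openai packages
import gradio as gr
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reply(message, history):
    # history is the list of (user, assistant) message pairs kept by gradio
    messages = []
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": message})
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=messages,
    )
    return completion.choices[0].message.content

gr.ChatInterface(reply).launch()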

To read this article in full, please click here. This post appeared first on ListenData.
Categories: FLOSS Project Planets

Python Morsels: Multiline comments in Python

Planet Python - Fri, 2024-04-19 19:00

Python does not have multiline comments. But you can use alternatives like docstrings, editor shortcuts, and conventional commenting methods, depending on your requirements.

Table of contents

  1. Does Python have multi-line comments?
  2. What about triple quotes?
  3. Docstrings versus comments
  4. What if I want to comment-out a whole block of code?
  5. How to comment multiple lines
  6. Use docstrings, your code editor, or version control

Does Python have multi-line comments?

For single-line comments, Python uses the octothorpe character (#), also known as pound, number sign, crunch, and of course, the hashtag character:

# This is a comment
this = "is not a comment"

But what if you want to comment out a whole block of code?

Unlike some programming languages, Python does not have multi-line comments.
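For illustration, a minimal sketch (the names below are made up): the conventional workaround is simply to put a # at the start of every line you want disabled, and most editors will toggle a whole selection for you.

# total = compute_subtotal(cart)   # hypothetical code, commented out
# total = apply_discount(total)
# print(total)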

What about triple quotes?

You might be thinking, wait, …

Read the full article: https://www.pythonmorsels.com/multiline-comments/
Categories: FLOSS Project Planets

Louis-Philippe Véronneau: Montreal's Debian & Stuff - March 2024

Planet Debian - Fri, 2024-04-19 17:45

Time really flies when you are really busy having fun! Our Montréal Debian User Group met on Sunday March 31st and I only just found the time to write our report :)

This time around, 9 of us met at EfficiOS's offices1 to chat, hang out and work on Debian and other stuff!

Here is what we did:

pollo:

  • did some clerical work for the DebConf videoteam
  • tried to book a plane ticket for DC24
  • triaged #1067620 (dependency problem with whipper)
  • closed #1067121 (flaky test in supysonic)
  • closed #1065514 (qpdfview crossbuilding)

tvaz:

tassia:

  • planned & brainstormed for the upcoming Debian usability tests
  • mentored a student/new contributor (justin)
  • babysat a future contributor!
  • closed #1067649
  • learnt about fabre.debian.net & element.debian.social (thanks, pollo!)

viashimo:

  • uploaded puppet-strings 4.1.2-1 to unstable
  • updated various services in personal infra
  • cleaned vagrant-hostmanager and worked on packaging the new upstream release (1.8.10)
  • extended GPG key expiry
  • looked at options for a new backup machine

lavamind:

  • updated puppetdb to 8.4.1

justin:

  • opened #1068152 after a misfortune with #1068151
  • created new contributor accounts (salsa & wiki)
Pictures

Here are pictures of the event. Well, one picture (thanks Tassia!) of the event itself and another one of the crisp Italian lager I drank at the bar after the event :)

  1. Maintainers, amongst other things, of the great LTTng

Categories: FLOSS Project Planets

This week in KDE: sprints, enhancements, and kebabs

Planet KDE - Fri, 2024-04-19 17:18

This week I’m attending two sprints: a KDE e.V. Board sprint which is already done, and a KDE Goals mega-sprint, which begins today! My “Automate and systematize internal processes” goal is represented there and I’m hoping we have a great time brainstorming and fixing stuff. And also that we eat lots of Döner kebabs. So many Döner kebabs. All the Döner kebabs.

New Features

Spectacle regains its “blur factor” feature, so now if you want your blurred areas to be even blurrier, you can do that without having to draw a blur over the same area multiple times (Noah Davis, Spectacle 24.05. Link)

You can now copy the text of a column in System Monitor (Joshua Goins, Plasma 6.1. Link)

UI Improvements

The Plasma Digital Clock’s Islamic calendar feature now respects the locale-specific numbering system, which means for example that people living in Arabic-speaking countries that use western-style Arabic numerals will now see those rather than eastern-style Arabic numerals (Fushan Wen, Plasma 6.0.5. Link)

Removed the “Clear all” button on the fingerprint authorization enrollment UI, because it was a big obvious footgun you could use to destroy all your work to enroll fingerprints (Arthur Zamarin, Plasma 6.1. Link)

Normally I don’t mention changes to Welcome center, but this is a fairly large one: the “Simple by default” page is no longer a big dumb picture of System Settings, but rather shows you a nice mock desktop where you can hover or click on elements to learn what they are and what they do. System Settings is now mentioned on the “Powerful When Needed” page (me: Nate Graham and Oliver Beard, Plasma 6.1. Link):

On System Settings’ Night Light page, the time input fields for manual time mode are no longer a nightmare, because they’ve been replaced with a set of spinboxes (Natalie Clarius, Plasma 6.1. Link):

Yes, ideally these would use a dedicated time picker UI component, but one thing at a time

Information about Night Light transition times is now shown in the tooltip for the Brightness and Color widget (Natalie Clarius, Plasma 6.1. Link):

System Settings’ Screen Locking page has also now adopted the “buttons in the header” paradigm, this time for its “Configure Appearance” button (Jakob Petsovits, Plasma 6.1. Link):

Bug Fixes

Fixed a number of issues in various pieces of KDE software caused by changes in Qt 6.7 (Felix Ernst, Noah Davis, and Fushan Wen, Dolphin & Spectacle 24.05 and Plasma 6.0.4. Link 1, link 2, and link 3)

Spectacle no longer badly misbehaves when asked to take a screenshot of a screen with a scale factor of less than 100% (Noah Davis, Spectacle 24.05. Link)

Fixed an issue whereby Spectacle’s feature to save screenshots with an increasing number sequence would number the screenshots incorrectly in certain circumstances (Noah Davis, Spectacle 24.05. Link)

Closing your laptop’s lid with the keyboard backlight illuminated now correctly restores its brightness level when you open the lid again (Anthony Vital, Plasma 6.0.4. Link)

Fixed multiple issues whereby parts of Plasma would fail to save changed settings if Plasma quickly crashed, exited unexpectedly, or was terminated in an unclean way (David Edmundson, Plasma 6.0.5. Link)

Plasma no longer crashes when removing standalone launcher widgets for apps with no “Open With” context menu action (Fushan Wen, Plasma 6.0.5. Link)

Fixed multiple issues with keyboard focus and selection with search results in Plasma’s Application Dashboard widget—which you may notice is visibly on life support, but this constitutes some of that support (Marco Martin, Plasma 6.0.5. Link)

Plasma’s screen chooser OSD once again works as expected on X11 (Fushan Wen and David Redondo, Plasma 6.0.5. Link 1 and link 2)

The Grid layout for System Monitor sensors once again looks good in constrained spaces, such as a thick Plasma panel (Akseli Lahtinen, Plasma 6.0.5. Link)

Fixed a major regression that prevented LUKS-encrypted disks from being decrypted by KDE software using the Solid framework (Kai Uwe Broulik, Frameworks 6.2, but distros should be backporting it to their 6.1 packages too. Link)

Those increasingly ubiquitous contextual help buttons that open explanatory tooltips when clicked no longer have a weird empty space on the right side of their tooltips (Shubham Arora, Frameworks 6.2. Link):

Fixed a Breeze icon that was not changing its color properly when using non-default color schemes (Evgeniy Harchenko, Frameworks 6.2. Link)

Other bug information of note:

Performance & Technical

Kolourpaint has been ported to Qt6 (Carl Schwan, Kolourpaint 24.05. Link)

On Wayland, implemented support for the remote desktop portal’s ConnectToEIS method. This enables applications like Synergy and Barrier that have Wayland support for sending emulated input events to do so in a more convenient way than using the existing methods. Full support for input-leap is pending, but in progress (David Redondo, Plasma 6.1. Link)

System Settings’ Keyboard page has been ported to QML, retaining most of its prior styling but with a modern and extensible codebase (Evgeny Chesnokov, Plasma 6.1. Link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

The KDE organization has become important in the world, and your time and labor have helped to bring it there! But as we grow, it’s going to be equally important that this stream of labor be made sustainable, which primarily means paying for it. Right now the vast majority of KDE runs on labor not paid for by KDE e.V. (the nonprofit foundation behind KDE, of which I am a board member), and that’s a problem. We’ve taken steps to change this with paid technical contractors—but those steps are small due to growing but still limited financial resources. If you’d like to help change that, consider donating today!

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets
