FLOSS Project Planets

Evgeni Golov: Using Packit to build RPMs for projects that depend on or vendor your code

Planet Debian - Tue, 2024-05-14 16:12

I am a huge fan of Packit as it allows us to provide RPMs to our users and testers directly from a pull-request, thus massively tightening the feedback loop and involving people who otherwise might not be able to apply the changes (for whatever reason) and "quickly test" something out. It's also a great way to validate that a change actually builds in a production environment, where no unnecessary development and test dependencies are installed.

You can also run tests of the built packages on Testing Farm and automate pushing releases into Fedora/CentOS Stream, but this is neither a (plain) Packit advertisement post, nor is that functionality that I can talk about with a certain level of experience.

Adam recently asked why we don't have Packit builds for our Puppet modules and my first answer was: "well, puppet-* doesn't produce a thing we ship directly, so nobody dared to do it".

My second answer was that I had blogged how to test a Puppet module PR with Packit, but I totally agree that the process was a tad cumbersome and could be improved.

Now some madman did it and we all get to hear his story! ;-)

What is the problem anyway?

The Foreman Installer is a bit of Ruby code¹ that provides a CLI to puppet apply based on a set of Puppet modules. As the Puppet modules can also be used outside the installer and have their own lifecycle, they live in separate git repositories and their releases get uploaded to the Puppet Forge. Users however do not want to (and should not have to) install the modules themselves.

So we have to ship the modules inside the foreman-installer package. Packaging 25 modules for two packaging systems (we support Enterprise Linux and Debian/Ubuntu) seems like a lot of work. Especially if you consider that the main foreman-installer package would need to be rebuilt after each module change as it contains generated files based on the modules which are too expensive to generate at runtime.

So we can ship the modules inside the foreman-installer source release, thus vendoring those modules into the installer release.

To do so we use librarian-puppet with a Puppetfile and either a Puppetfile.lock for stable releases or by letting librarian-puppet fetch latest for nightly snapshots.

This works beautifully for changes that land in the development and release branches of our repositories - regardless if it's foreman-installer.git or any of the puppet-*.git ones. It also works nicely for pull-requests against foreman-installer.git.

But because the puppet-* repositories do not map to packages, we assumed it wouldn't work well for pull-requests against those.

How can we solve this?

Well, the "obvious" solution is to build the foreman-installer package via Packit also for pull-requests against the puppet-* repositories. However, as usual, the devil is in the details.

Packit by default clones the repository of the pull-request and tries to create a source tarball from that using git archive. As this might be too simple for many projects, one can define a custom create-archive action that runs after the pull-request has been cloned and produces the tarball instead. We already use that in the Packit configuration for foreman-installer to run the pkg:generate_source rake target which executes librarian-puppet for us.

But now the pull-request is against one of the Puppet modules, so Packit will clone that, not the installer.

We gotta clone foreman-installer on our own. And then point librarian-puppet at the pull-request. Fun.

Cloning is relatively simple, call git clone -- sorry Packit/Copr infrastructure.

But the Puppet module pull-request? One can use :git => 'https://git.example.com/repo.git' in the Puppetfile to fetch a git repository. In fact, that's what we already do for our nightly snapshots. It also supports :ref => 'some_branch_or_tag_name', if the remote HEAD is not what you want.

My brain first went "I know this! GitHub has this magic refs/pull/1/head and refs/pull/1/merge refs you can checkout to get the contents of the pull-request without bothering to add a remote for the source of the pull-request". Well, this requires knowing the ID of the pull-request, and Packit does not expose that in the environment variables available during create-archive.
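(For the record, fetching such a ref by hand is easy when you do know the ID; the 1 here is just a placeholder PR number:

git fetch origin refs/pull/1/head
git checkout FETCH_HEAD

But without the ID available in the environment, that's a dead end.)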

Wait, but we already have a checkout. Can we just say :git => '../.git'? Cloning a .git folder is totally possible after all.

[Librarian] --> fatal: repository '../.git' does not exist
Could not checkout ../.git: fatal: repository '../.git' does not exist

Seems librarian disagrees. Damn. (Yes, I checked, the path exists.)

💡 does it maybe just not like relative paths?! Yepp, using an absolute path absolutely works!

For some reason it ends up checking out the default HEAD of the "real" (GitHub) remote, not of ../. Luckily this can be fixed by explicitly passing :ref => 'origin/HEAD', which resolves to the branch Packit created for the pull-request.
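For illustration, the rewritten Puppetfile entry then ends up looking roughly like this (the module name is an example; the exact line is whatever the sed in the diff below produces):

mod 'theforeman/pulpcore',
  :git => "#{__dir__}/../.git",
  :ref => 'origin/HEAD'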

Now we just need to put all of that together and remember to execute all commands from inside the foreman-installer checkout as that is where all our vendoring recipes etc live.

Putting it all together

Let's look at the diff between the packit.yaml for foreman-installer and the one I've proposed for puppet-pulpcore:

--- a/foreman-installer/.packit.yaml	2024-05-14 21:45:26.545260798 +0200
+++ b/puppet-pulpcore/.packit.yaml	2024-05-14 21:44:47.834162418 +0200
@@ -18,13 +18,15 @@
 actions:
   post-upstream-clone:
     - "wget https://raw.githubusercontent.com/theforeman/foreman-packaging/rpm/develop/packages/foreman/foreman-installer/foreman-installer.spec -O foreman-installer.spec"
+    - "git clone https://github.com/theforeman/foreman-installer"
+    - "sed -i '/theforeman.pulpcore/ s@:git.*@:git => \"#{__dir__}/../.git\", :ref => \"origin/HEAD\"@' foreman-installer/Puppetfile"
   get-current-version:
-    - "sed 's/-develop//' VERSION"
+    - "sed 's/-develop//' foreman-installer/VERSION"
   create-archive:
-    - bundle config set --local path vendor/bundle
-    - bundle config set --local without development:test
-    - bundle install
-    - bundle exec rake pkg:generate_source
+    - bash -c "cd foreman-installer && bundle config set --local path vendor/bundle"
+    - bash -c "cd foreman-installer && bundle config set --local without development:test"
+    - bash -c "cd foreman-installer && bundle install"
+    - bash -c "cd foreman-installer && bundle exec rake pkg:generate_source"
  1. It clones foreman-installer (in post-upstream-clone, as that felt more natural after some thinking)
  2. It adjusts the Puppetfile to use #{__dir__}/../.git as the Git repository, abusing the fact that a Puppetfile is really just a Ruby script (sorry Ben!) and knows the __dir__ it lives in
  3. It fetches the version from the foreman-installer checkout, so it's sort-of reasonable
  4. It performs all building inside the foreman-installer checkout
Can this be used in other scenarios?

I hope so! Vendoring is not unheard of. And testing your "consumers" (dependents? naming is hard) is good style anyway!

  1. three Ruby modules in a trench coat, so to say 

Categories: FLOSS Project Planets

ADCI Solutions: How to quickly integrate Angular with a Drupal website

Planet Drupal - Tue, 2024-05-14 11:50
Our client's website had a questionnaire that ran on a Drupal module. The customer rewrote this block in Angular and asked us to implement it into the site: https://www.adcisolutions.com/work/drupal-angular

[Image: integrating Drupal with Angular]
Categories: FLOSS Project Planets

Matt Glaman: Starshot, recipe to cook up ambitious Drupal applications

Planet Drupal - Tue, 2024-05-14 09:18

This blog post was inspired by my time at DrupalCon Portland and the Driesnote, announcing Starshot.

There has long been a story about Drupal being a series of building blocks for building your own CMS (or other application). Often, it has been compared to building a LEGO® set. The idea is that you have Drupal core and contributed modules, acting as individual pieces, to build an application that meets your desired needs. Oftentimes, this could be done without writing any code. As someone who built with Drupal, this made so much sense. You take disparate components, build on a solid base, and have this magical software built with minimal code (if you choose) that meets your needs. That metaphor always stuck, but it was pretty flawed, and I never really understood why until DrupalCon Portland 2024, last week.

Categories: FLOSS Project Planets

KDE e.V. is looking for a graphic designer for environmental sustainability project

Planet KDE - Tue, 2024-05-14 08:30

KDE e.V., the non-profit organisation supporting the KDE community, is looking for a graphic designer to implement materials (print design, logo design, infographics, etc.) for a new environmental sustainability campaign within KDE Eco. Please see the job ad for more details about this employment opportunity.

We are looking forward to your application.

Categories: FLOSS Project Planets

PyCharm: PyCharm at PyCon US 2024: Engage, Learn, and Celebrate!

Planet Python - Tue, 2024-05-14 07:48

We’re thrilled to announce that the PyCharm team will be part of the vibrant PyCon US 2024 conference in Pittsburgh, Pennsylvania, USA. Join us for a series of engaging activities, expert talks, live demonstrations, and fun quizzes throughout the event! 

Here’s a sneak peek at what we have lined up for you. Unless otherwise specified, all activities mentioned will take place at the JetBrains PyCharm booth. 

Engaging talks and live demonstrations

1. Lies, damned lies, and large language models
Speaker: Jodie Burchell
Dive into the intricacies of large language models (LLMs) with Jodie Burchell’s talk. Discover the reasons behind LLM hallucinations and explore methods like retrieval augmented generation (RAG) to mitigate these issues.
📅 Date and time: May 18, 1:45 pm – 2:15 pm
📍 Location: Ballroom BC
Learn more about this talk

2. XGBoost demo and book signing with Matt Harrison
Get hands-on with XGBoost in this engaging demo, during which you can ask Matt Harrison any questions you have about the gradient boosting library.
📅 Date and time: May 17, 10:30 am
🎁 Giveaway: The first 20 attendees will get a free signed copy of Effective XGBoost.

3. Build a fully local RAG app for a variety of files in just 20 minutes
Presenter: Maria Khalusova
Watch a live demonstration on setting up a fully local RAG application in just 20 minutes. Gain insights from Maria Khalusova, an expert from Unstructured and former Hugging Face employee.
📅 Date and time: May 17, 4:10 pm

4. PyCharm on-demand demos
Discover how to boost your efficiency in data science and web development projects with PyCharm’s latest features. Have any questions or feedback? Our team lead, product manager, and developer advocates will be there to discuss everything PyCharm with you.

5. Qodana on-demand demos
Enhance your team’s code quality with our Qodana static code analysis tool. Drop by the PyCharm booth for on-demand demos and learn how to maintain high-quality, secure, and maintainable code.

Interactive quizzes and giveaways

1. Data science quizzes with Matt Harrison
Kick off the event with our exciting quiz session during the opening reception. Engage in challenging questions and stand a chance to win a copy of Effective Pandas 2.0.
📅 Date and time: May 16, 5:30 pm

2. More quizzes! More prizes!
Continue testing your knowledge on May 17 at 1:20 pm. Win a one-year subscription to PyCharm Professional or a voucher for JetBrains merch.

Special events

1. Share your Python story

Want to share your Python experiences with us? Get interviewed by Paul Everitt at our booth! Our video crew will be on-site throughout the conference to capture your stories and insights. 

2. Talk Python To Me: Live podcast recording
Join Michael Kennedy for a live recording with guests Maria Jose Molina-Contreras, Jessica Greene, and Jodie Burchell.
📅 Date and time: May 18, 10:10 am

3. The Black Python Dev community’s first anniversary
Celebrate the Black Python Developers group’s milestone with Jay Miller.
📅 Date and time: May 18, 12:45 pm

4. Frontend testing office hours with Paul Everitt
Got questions about frontend testing? Paul Everitt has the answers! Visit our booth to engage directly with an expert.
📅 Date and time: May 18, 3:45 pm

Visit our booth

Don’t miss out on these exciting opportunities to deepen your knowledge, network with peers, and celebrate achievements in the Python community. See you there!

Categories: FLOSS Project Planets

Julian Andres Klode: The new APT 3.0 solver

Planet Debian - Tue, 2024-05-14 07:26

APT 2.9.3 introduces the first iteration of the new solver codenamed solver3, now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.

How does it work?

Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies.

Deferring the choices is implemented multiple ways:

First, all install requests recursively mark dependencies that have only a single solution for install, and any packages that are rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their OR group cannot be satisfied by a different package.

Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c.

Not just by the number of solutions, though. One important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Note, though, that Recommended packages do not “nest” in backtracking: the dependencies of a Recommended package are themselves not optional, so they will have to be resolved before the next Recommended package is seen in the queue.

Another important step in deferring choices is extracting the common dependencies of a package across its version and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all.

Decisions about packages are recorded at a certain decision level. If we reach a conflict, we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.
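To make the deferred-choice backtracking concrete, here is a deliberately tiny, self-contained Python sketch (illustrative only; the package data and helper names are made up, and it bears no resemblance to APT's actual C++ implementation):

# Toy backtracking dependency solver that defers choices by always
# taking the OR group with the fewest alternatives first.
DEPS = {                                  # package -> list of OR groups
    "app":  [["liba"], ["x", "y"]],       # app needs liba and (x or y)
    "liba": [], "x": [], "y": [],
}
CONFLICTS = {frozenset({"liba", "x"})}    # liba and x cannot coexist

def conflicts_with(pkg, installed):
    return any(frozenset({pkg, other}) in CONFLICTS for other in installed)

def backtrack(queue, installed):
    if not queue:
        return installed                  # every OR group satisfied
    group, rest = queue[0], queue[1:]
    if any(p in installed for p in group):
        return backtrack(rest, installed) # already satisfied, move on
    for choice in group:                  # each choice opens a decision level
        if conflicts_with(choice, installed):
            continue                      # rejected, try the next alternative
        result = backtrack(sorted(rest + DEPS[choice], key=len),
                           installed | {choice})
        if result is not None:
            return result
    return None                           # all choices failed: backtrack

print(backtrack([["app"]], set()))        # {'app', 'liba', 'y'} (order varies)

The real solver3 of course works iteratively with explicit decision levels and inverse ("do not install") markings rather than plain recursion.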

Comparison to SAT solver design

If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy).

As part of the solving phase, we also construct an implication graph, albeit a partial one: The first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B).

Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning; where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.

What changes can you expect in behavior?

The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones; that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other.

Implementing that policy is rather trivial: We just need to queue obsolete | replacement as a dependency to solve, rather than mark the obsolete package for install.

Another critical difference is the change in the autoremove behavior: The new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.

New features

The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible.

The implication graph building allows us to implement an apt why command that, while not as nicely detailed as aptitude’s, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first, of course, since that is what we record.

What is left to do?

At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach as it is the most natural one; or all conflicts. Currently you get the last conflict which may not be particularly useful.

Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely.

The test suite is not passing yet; I haven’t really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn’t remove those.

We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed.

Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of that complexity you have normally is not there because the manually installed packages and resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already ground us fairly far in what changes we can actually make.

Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I’d like automated feedback on regressions (running solver3 in parallel, checking if result is worse and then submitting an error to the error tracker), on Debian this could just be a role email address to send solver dumps to.

At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

Categories: FLOSS Project Planets

Specbee: Drupal Translation Modules: How to create Multilingual Drupal websites

Planet Drupal - Tue, 2024-05-14 07:24
Want an easy way to extend your market reach and ultimately your sales? Do you feel you need to personalize your website for every user, no matter which country they belong to or what language they speak? Then getting yourself a multilingual website is one of the most effective business strategies you can implement. Not only is it more cost-effective, it also helps in increasing your website traffic and overall Drupal SEO. Drupal CMS has particularly taken up this challenge of providing not only users but also developers with the ability to access Drupal in a language that they prefer. And with Drupal being multilingual out of the box since version 8, it has become an ideal choice for businesses and developers. Powerful Drupal translation modules offer developers granular configuration capabilities where every content entity can be translated. Let's dive right in to learn more about the various multilingual Drupal modules.

What are Multilingual Websites

Multilingual basically means written or available in different languages. Multilingual websites connect better with users from different countries as they immediately add an element of familiarity. Drupal provides an easy and great experience for building a multilingual website. Currently, Drupal supports 100 different languages for translation. Drupal's multilingual features come along with the installation interface: as soon as you install Drupal, it offers a language for your Drupal website based on the browser preference, and the site is installed in that particular language. Drupal provides 4 different translation modules for language and content translation. We can enable the required modules on our site and use them according to our requirements. But first, ensure you're using the latest Drupal version so you can leverage Drupal's built-in multilingual capabilities. Drupal 7 also supports multilingual functionality but requires additional modules and configuration.

The four core Drupal translation modules available in Drupal 10:

Language module
Provides a feature for adding and choosing a new language for your Drupal website. Allows users to configure languages and how page languages are chosen, and to apply languages to content. Enables any non-English language (one or more) for your site's content.

Content translation module
Allows you to translate content entities such as pages, comments, custom blocks, taxonomy terms, users, etc. into different languages.

Interface translation module
Translates the built-in user interface, your added modules, and themes.

Configuration translation module
Provides a translation interface for configuration. Allows you to translate text that is part of the configuration, such as field labels, the site name, views names, and the text used in Views.

Let's catch up with what each Drupal translation module does, its configuration, and how each module can be used on our Drupal website. If you're a content writer/editor/marketer, here's a less technical article especially for you to help you build your first multilingual page/website.

Install Drupal with Multilingual Support

During the installation process, enable the multilingual options. This sets up your Drupal site to handle multiple languages from the start. Make sure to select the appropriate language options during installation.
Drupal Language Module

Navigate to Configuration > Regional and language > Languages to configure language settings. Here, you can add, enable, and configure languages for your site. Drupal supports a wide range of languages out of the box.

Language Switcher Configuration

Set up language switcher blocks or menus to allow users to switch between languages seamlessly. Drupal provides several options for configuring language switchers, including dropdown menus, language blocks, and language negotiation settings. Once the block is placed in a region, users can switch between languages on the web page itself.

Content Translation Module

Enable content translation for the content types you want to be multilingual. Navigate to Configuration > Regional and language > Content language and translation to configure content translation settings. This allows content creators to translate content into different languages. It provides a list of entity types that can be translated, with a configuration option for each content type. Let us consider that content translation is being enabled for the Article content type. It provides an option to decide whether each subtype entity is translatable or not. We can also change the default language for a particular content type, and each field has an option to make its content translatable or not.

The Drupal translation modules also provide an option to input content in the language that suits the user while adding content from the backend interface. Once the above configuration is set up, when we try to add content to the Article content type we can see a select option with the languages installed on our site. We can select any language and add content in that particular language. Once the content is saved, users with translate permissions will see links to translate their content: an additional "Translate" tab along with the "Edit" link, where you'll be able to add translations for each configured language.

Interface Translation Module

The Drupal Interface Translation module is also a core module and can be easily enabled like any other Drupal translation module. Once this module is enabled, it is possible to replace any string in the interface with a customized string. Whenever this module encounters a string, it tries to translate it into the current language of the interface. If a translation is not available, the string is remembered and we can look it up in the untranslated strings table.

Navigate to Configuration > Regional and language > Interface translation to manage interface translations. Here, both translated and untranslated strings are displayed, and we can modify the strings for any installed language. Translations for strings are maintained in a single place, http://localize.drupal.org, and the Localization Update module will automatically import updated translation strings for your selected language. In Drupal 7 and previous versions this was a contributed module; since Drupal 8, it is part of core.

Drupal Configuration Translation Module

The Drupal Configuration Translation module allows configuration to be translated into different languages.
The site name, views names, and other configuration can be translated easily using this Drupal multilingual module.

URL Language Detection and Handling

Configure language detection and handling for URLs. Drupal allows you to set up different URL structures for different languages, such as domain.com/en/ for English and domain.com/es/ for Spanish. Configure language detection methods and URL patterns under Configuration > Regional and language > Languages > Detection and selection.

SEO Considerations

Ensure your multilingual Drupal site is SEO-friendly by using proper hreflang tags and canonical URLs. Drupal provides modules like the hreflang module to help manage hreflang tags for multilingual content. Additionally, configure proper language-specific meta tags and URL structures for better search engine visibility.

Testing and Quality Assurance

Thoroughly test your multilingual Drupal site to ensure that language switching, content translation, and interface translations work correctly across all pages and languages. Pay close attention to user experience and ensure that all translated content is accurate and contextually appropriate.

Shoutout to Manish Saharan for helping us update this article!

Final Thoughts

Having a multilingual Drupal website is a great way to build better and stronger relationships with users and prospective customers. Drupal offers 100 languages to choose from to translate your website effectively. With Drupal translation modules in core, developers now find it easier to install and adapt to a multilingual environment while providing businesses with great digital experiences. As a leading Drupal development company, we provide comprehensive Drupal services, which also include building multilingual Drupal websites keeping in mind the needs of our global clientele.
Categories: FLOSS Project Planets

Web Wash: Download and Install Drupal Starshot

Planet Drupal - Tue, 2024-05-14 07:08

Drupal Starshot is a new packaged and pre-configured version of Drupal that was announced at DrupalCon Portland. It is not a separate fork or rewrite of Drupal.

Drupal Starshot takes advantage of the new Recipes feature introduced in Drupal versions 10.3 and 11.

Recipes allow installing Drupal with a complete, ready-to-use website, going beyond the standard installation profile.

More details about the Drupal Starshot initiative are available on its official page.

Categories: FLOSS Project Planets

EuroPython: Community Post: The Invisible Threads that sustained me in STEM/Tech

Planet Python - Tue, 2024-05-14 06:20
The unconscious influence the Ghanaian tech community has had on my career.

My name is Joana Owusu-Appiah, and I am currently pursuing an MSc degree in Medical Imaging and Applications. I am originally from Ghana, but as my colleague likes to put it, I am currently backpacking through Europe. So depending on when you see this, my location might have changed.

I hold a Bachelor of Science degree in Biomedical Engineering from Kwame Nkrumah University of Science and Technology, Ghana. Prior to commencing my graduate studies, I dabbled in data science and analytics, gaining experience in visualizing and manipulating data using various tools (Python, Power BI, Excel, SQL). My current research focuses on computer vision applications on medical images.

Do I consider myself a woman in tech? I guess if it means knowing how to use a computer (lol) and understanding that photo editing is based on image processing algorithms and deep learning, then I might be close.

Has it always been this way? No.

What changed

I am a first-generation university student. In my country, or how it used to be, growing up, the smarter students were encouraged to pursue General Science in high school because it ultimately ensured job security. In high school, my primary ambition was to attend medical school. However, as a backup plan, I stumbled upon Biomedical Engineering (BME), which fascinated me with its potential. It quickly became my secondary option. Interestingly, everyone I spoke to knew nothing about it. Guess who would jump at any opportunity to give a lecture about this mystery degree? Me!

Side note: My high school biology teacher mentioned that neurons (nerve cells), once damaged, could never be repaired, but he also said that they functioned like wires. I thought to myself, if I merged this pathological accident and the BME I had read about, then I could replace damaged nerves with wires (some day). I ran with this new, uninformed career goal.

Fun fact: I didn't get into medical school, but I did get into the BME program. I quickly realised that technical drawing (a requisite course for all engineering freshers) was definitely not going to equip me to fix Neurons, and that the only viable role for BME graduates in my country was clinical engineering (maintenance and installation of medical equipment - or so I thought). Clinical engineering wasn’t something I wanted to try, so I needed an escape!

Programming looked interesting, but also difficult and meant for very smart people. However, I gave it a shot during covid. PyLadies Ghana was organising a data science boot camp, and I decided to try.

[Heads up: My undergraduate degree had programming courses like Introduction to C and Object-Oriented Programming with Java (I had collaborated with people on some projects then), but for some reason, I couldn't get my brain to enjoy it…]

The Real Reason you’re here

During the boot camp, some of the participants were absorbed into the national online community of Python Ghana because more resources and opportunities were being shared there. It turned out:  I was looking for an escape without any destination. Members of the community seemed very vibrant; there was always a job opening up for grabs, a new free online course or banter on trendy tech topics. My main struggle was finding a niche to belong; what was in tech for me?

My interest in health never waned, so you would usually see me reposting information on female health, breast cancer, etc. The PyLadies Ghana Lead at that time, Abigail Mesrenyame Dogbe, noticed it, and in October (Breast Cancer Awareness Month) she tasked me with helping organise a session for the members of PyLadies Ghana. I moderated the session and it was very successful. My very first visible interaction with the community!

Abigail asked if I wanted to keep contributing to the Communications team (the comms team is the main organising force of PyLadies Ghana) or default to being just a member. I opted for the former. In my eyes, this was a big deal; being asked to stay on the team meant a ton. It was a validation of a certain value I had to offer. I made mistakes, I created terrible designs, and I missed deadlines, but I also learned a lot. I learned how to use tools like Canva and MS Office (Excel, Docs), schedule virtual calls, write official emails, organise events, etc. I was helping with social media engagements, and I didn't even have a vibrant social media presence. I was recommended to help with Public Relations (PR) and social media for a connected tech community (Ghana Data Science Summit - IndabaX Ghana) that organises annual data science conferences.

Two years later, I got the opportunity to mentor ladies in the very bootcamp that led me into the community. The ripple effects of my involvement with PyLadies Ghana are diverse, ranging from giving a lightning talk to speaking to young girls about STEM, to helping organise Django Girls at PyCon Ghana 2022, and more…

STEM outreach for teenage girls on International Women's Day 2023

Unknown to everyone, I had contemplated brushing the study of data science under the carpet as a ‘failed project’ and moving on to something else. Staying committed to the community, watching the members, and participating in events encouraged me to keep trying. I attended conferences, met and saw women who had achieved great things in data science and machine learning, which meant that I could also, through their stories, find a plan to help me get close to what they had done.

I was always fascinated by their work conversations because wow, these women work in tech?! Some community members had secured scholarships and were pursuing higher STEM degrees abroad while others worked for top tech companies.

After covid, my plan for life after school was to either hone my programming skills and get a good job in a Ghanaian tech company and/or find graduate programs that would enable me to work on my Neurons (of course I had developed other interests). I got into a specialised data science and analytics fellowship with Blossom Academy (more about the training here), landed my first tech role through it, and later began my master’s degree.

The Intro slide of the Data science bootcamp I mentored at!

The threads that sustained me in tech were the people, the conversations, and the inclusive atmosphere the Ghanaian community created for people with different personalities to thrive. My journey in STEM can be traced back to that pivotal moment in 2020 when I was offered the opportunity to belong and I seized it!

Categories: FLOSS Project Planets

Doug Hellmann: sphinxcontrib-sqltable 2.1.1 - db cursor fix

Planet Python - Tue, 2024-05-14 04:11
What’s new in 2.1.1?

  • Access cursor before closing the db connection (contributions by dopas21)
  • Use the readthedocs.org theme for docs
Categories: FLOSS Project Planets

Matthew Palmer: "Is This Project Still Maintained?"

Planet Debian - Mon, 2024-05-13 20:00

If you wander around a lot of open source repositories on the likes of GitHub, you’ll invariably stumble over repos that have an issue (or more than one!) with a title like the above. Sometimes sitting open and unloved, often with a comment or two from the maintainer and a bunch of “I’ll help out!” followups that never seemed to pan out. Very rarely, you’ll find one that has been closed, with a happy ending.

These issues always fascinate me, because they say a lot about what it means to “maintain” an open source project, the nature of succession (particularly in a post-Jia Tan world), and the expectations of users and the impedance mismatch between maintainers, contributors, and users. I’ve also recently been thinking about pre-empting this sort of issue, and opening my own issue that answers the question before it’s even asked.

Why These Issues Are Created

As both a producer and consumer of open source software, I completely understand the reasons someone might want to know whether a project is abandoned. It’s comforting to be able to believe that there’s someone “on the other end of the line”, and that if you have a problem, you can ask for help with a non-zero chance of someone answering you. There’s also a better chance that, if the maintainer is still interested in the software, that compatibility issues and at least show-stopper bugs might get fixed for you.

But often there’s more at play. There is a delusion that “maintained” open source software comes with entitlements – an expectation that your questions, bug reports, and feature requests will be attended to in some fashion.

This comes about, I think, in part because there are a lot of open source projects that are energetically supported, where generous volunteers do answer questions, fix reported bugs, and implement things that they don’t personally need, but which random Internet strangers ask for. If you’ve had that kind of user experience, it’s not surprising that you might start to expect it from all open source projects.

Of course, these wonders of cooperative collaboration are the exception, rather than the rule. In many (most?) cases, there is little practical difference between most projects that are “maintained” and those that are formally declared “unmaintained”. The contributors (or, most often, contributor – singular) are unlikely to have the time or inclination to respond to your questions in a timely and effective manner. If you find a problem with the software, you’re going to be paddling your own canoe, even if the maintainer swears that they’re still “maintaining” it.

A Thought Appears

With this in mind, I’ve been considering how to get ahead of the problem and answer the question for the software projects I’ve put out in the world. Nothing I’ve built has anything like what you’d call a “community”; most have never seen an external PR, or even an issue. The last commit date on them might be years ago.

By most measures, almost all of my repos look “unmaintained”. Yet, they don’t feel unmaintained to me. I’m still using the code, sometimes as often as every day, and if something broke for me, I’d fix it. Anyone who needs the functionality I’ve developed can use the code, and be pretty confident that it’ll do what it says in the README.

I’m considering creating an issue in all my repos, titled “Is This Project Still Maintained?”, pinning it to the issues list, and pasting in something I’m starting to think of as “The Open Source Maintainer’s Manifesto”.

It goes something like this:

Is This Project Still Maintained?

Yes. Maybe. Actually, perhaps no. Well, really, it depends on what you mean by “maintained”.

I wrote the software in this repo for my own benefit – to solve the problems I had, when I had them. While I could have kept the software to myself, I instead released it publicly, under the terms of an open licence, with the hope that it might be useful to others, but with no guarantees of any kind. Thanks to the generosity of others, it costs me literally nothing for you to use, modify, and redistribute this project, so have at it!

OK, Whatever. What About Maintenance?

In one sense, this software is “maintained”, and always will be. I fix the bugs that annoy me, I upgrade dependencies when not doing so causes me problems, and I add features that I need. To the degree that any on-going development is happening, it’s because I want that development to happen.

However, if “maintained” to you means responses to questions, bug fixes, upgrades, or new features, you may be somewhat disappointed. That’s not “maintenance”, that’s “support”, and if you expect support, you’ll probably want to have a “support contract”, where we come to an agreement where you pay me money, and I help you with the things you need help with.

That Doesn’t Sound Fair!

If it makes you feel better, there are several things you are entitled to:

  1. The ability to use, study, modify, and redistribute the contents of this repository, under the terms stated in the applicable licence(s).

  2. That any interactions you may have with myself, other contributors, and anyone else in this project’s spaces will be in line with the published Code of Conduct, and any transgressions of the Code of Conduct will be dealt with appropriately.

  3. … actually, that’s it.

Things that you are not entitled to include an answer to your question, a fix for your bug, an implementation of your feature request, or a merge (or even review) of your pull request. Sometimes I may respond, either immediately or at some time long afterwards. You may luck out, and I’ll think “hmm, yeah, that’s an interesting thing” and I’ll work on it, but if I do that in any particular instance, it does not create an entitlement that I will continue to do so, or that I will ever do so again in the future.

But… I’ve Found a Huge and Terrible Bug!

You have my full and complete sympathy. It’s reasonable to assume that I haven’t come across the same bug, or at least that it doesn’t bother me, otherwise I’d have fixed it for myself.

Feel free to report it, if only to warn other people that there is a huge bug they might need to avoid (possibly by not using the software at all). Well-written bug reports are great contributions, and I appreciate the effort you’ve put in, but the work that you’ve done on your bug report still doesn’t create any entitlement on me to fix it.

If you really want that bug fixed, the source is available, and the licence gives you the right to modify it as you see fit. I encourage you to dig in and fix the bug. If you don’t have the necessary skills to do so yourself, you can get someone else to fix it – everyone has the same entitlements to use, study, modify, and redistribute as you do.

You may also decide to pay me for a support contract, and get the bug fixed that way. That gets the bug fixed for everyone, and gives you the bonus warm fuzzies of contributing to the digital commons, which is always nice.

But… My PR is a Gift!

If you take the time and effort to make a PR, you’re doing good work and I commend you for it. However, that doesn’t mean I’ll necessarily merge it into this repository, or even work with you to get it into a state suitable for merging.

A PR is what is often called a “gift of work”. I’ll have to make sure that, at the very least, it doesn’t make anything actively worse. That includes introducing bugs, or causing maintenance headaches in the future (which includes my getting irrationally angry at indenting, because I’m like that). Properly reviewing a PR takes me at least as much time as it would take me to write it from scratch, in almost all cases.

So, if your PR languishes, it might not be that it’s bad, or that the project is (dum dum dummmm!) “unmaintained”, but just that I don’t accept this particular gift of work at this particular time.

Don’t forget that the terms of licence include permission to redistribute modified versions of the code I’ve released. If you think your PR is all that and a bag of potato chips, fork away! I won’t be offended if you decide to release a permanent fork of this software, as long as you comply with the terms of the licence(s) involved.

(Note that I do not undertake support contracts solely to review and merge PRs; that reeks a little too much of “pay to play” for my liking)

Gee, You Sound Like an Asshole

I prefer to think of myself as “forthright” and “plain-speaking”, but that brings to mind that third thing you’re entitled to: your opinion.

I’ve written this out because I feel like clarifying the reality we’re living in, in the hope that it prevents misunderstandings. If what I’ve written makes you not want to use the software I’ve written, that’s fine – you’ve probably avoided future disappointment.

Opinions Sought

What do you think? Too harsh? Too wishy-washy? Comment away!

Categories: FLOSS Project Planets

ThinkDrop Consulting: Run CI/CD with preview environments anywhere with self-hosted Git runners.

Planet Drupal - Mon, 2024-05-13 17:19

GitHub Actions and BitBucket Pipelines are amazing. You can control what is run using yaml files in your codebase. 

You can run just about any command, and they provide a really powerful interface for browsing jobs and logs.

Many people are unaware that you can also control where your scripts are run. If you set up a tool called a Git Runner, you can run Git Actions anywhere, including from your local machine.

Categories: FLOSS Project Planets

ListenData: How to Use Gemini in Python

Planet Python - Mon, 2024-05-13 16:54

In this tutorial, you will learn how to use Google's Gemini AI model in Python.

Steps to Access Gemini API

Follow the steps below to access the Gemini API and then use it in Python.

  1. Visit Google AI Studio website.
  2. Sign in using your Google account.
  3. Create an API key.
  4. Install the Google AI Python library for the Gemini API using the command below: pip install google-generativeai. Then try the minimal sketch below.
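Once the key and library are in place, a minimal usage sketch looks like this (the model name and exact API surface can differ between versions of the library, so treat it as illustrative):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")        # the key created in Google AI Studio
model = genai.GenerativeModel("gemini-pro")    # model name may vary by library version
response = model.generate_content("Write a haiku about Python.")
print(response.text)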
Categories: FLOSS Project Planets

Nonprofit Drupal posts: May Drupal for Nonprofits Chat: Recapping DrupalCon and Nonprofit Summit

Planet Drupal - Mon, 2024-05-13 16:46

Join us THURSDAY, May 16 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)

This month we'll be recapping DrupalCon Portland and the Nonprofit Summit!

And we'll of course also have time to discuss anything else that's on our minds at the intersection of Drupal and nonprofits.  Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

This free call is sponsored by NTEN.org and open to everyone. 

  • Join the call: https://us02web.zoom.us/j/81817469653

    • Meeting ID: 818 1746 9653
      Passcode: 551681

    • One tap mobile:
      +16699006833,,81817469653# US (San Jose)
      +13462487799,,81817469653# US (Houston)

    • Dial by your location:
      +1 669 900 6833 US (San Jose)
      +1 346 248 7799 US (Houston)
      +1 253 215 8782 US (Tacoma)
      +1 929 205 6099 US (New York)
      +1 301 715 8592 US (Washington DC)
      +1 312 626 6799 US (Chicago)

    • Find your local number: https://us02web.zoom.us/u/kpV1o65N

  • Follow along on Google Docs: https://nten.org/drupal/notes

View notes of previous months' calls.

Categories: FLOSS Project Planets

libtool @ Savannah: libtool-2.5.0 released [alpha]

GNU Planet! - Mon, 2024-05-13 15:06

Libtoolers!

The Libtool Team is pleased to announce the release of libtool 2.5.0, an alpha release.

GNU Libtool hides the complexity of using shared libraries behind a
consistent, portable interface. GNU Libtool ships with GNU libltdl, which
hides the complexity of loading dynamic runtime libraries (modules)
behind a consistent, portable interface.

There have been 91 commits by 29 people in the 113 weeks since 2.4.7.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Albert Chu (1)
  Alex Ameen (3)
  Antonin Décimo (3)
  Brad Smith (2)
  Bruno Haible (2)
  Dmitry Antipov (1)
  Florian Weimer (1)
  Gilles Gouaillardet (1)
  Ileana Dumitrescu (24)
  Jakub Wilk (1)
  Jonathan Wakely (2)
  Manoj Gupta (1)
  Mike Frysinger (23)
  Mingli Yu (2)
  Oliver Kiddle (1)
  Olly Betts (1)
  Ozkan Sezer (2)
  Paul Eggert (2)
  Paul Green (1)
  Raul E Rangel (1)
  Richard Purdie (5)
  Sam James (4)
  Samuel Thibault (1)
  Stephen Webb (1)
  Tijl Coosemans (1)
  Tim Rice (1)
  Uwe Kleine-König (1)
  Vadim Zeitlin (1)
  Xiang.Lin (1)

Ileana
 [on behalf of the libtool maintainers]
==================================================================

Here is the GNU libtool home page:
    https://gnu.org/s/libtool/

For a summary of changes and contributors, see:
  https://git.sv.gnu.org/gitweb/?p=libtool.git;a=shortlog;h=v2.5.0
or run this command from a git-cloned libtool directory:
  git shortlog v2.4.7..v2.5.0

Here are the compressed sources:
  https://alpha.gnu.org/gnu/libtool/libtool-2.5.0.tar.gz   (1.9MB)
  https://alpha.gnu.org/gnu/libtool/libtool-2.5.0.tar.xz   (1008KB)

Here are the GPG detached signatures:
  https://alpha.gnu.org/gnu/libtool/libtool-2.5.0.tar.gz.sig
  https://alpha.gnu.org/gnu/libtool/libtool-2.5.0.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  fb3ab5907115b16bf12a0d3d424c79cb0003d02e  libtool-2.5.0.tar.gz
  1DjDF0VdhVVM4vmYvkiGb9QM/L+DTWCzAm9PwO1YPSM=  libtool-2.5.0.tar.gz
  70e2dd113a9460c279df01b2eee319adb99ee998  libtool-2.5.0.tar.xz
  fhDMhjgj1AjsX/6kHUPDckqgiBZldXljydsL77LIecw=  libtool-2.5.0.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libtool-2.5.0.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096 2021-09-23 [SC]
        FA26 CA78 4BE1 8892 7F22  B99F 6570 EA01 146F 7354
  uid   Ileana Dumitrescu <ileanadumi95@protonmail.com>
  uid   Ileana Dumitrescu <ileanadumitrescu95@gmail.com>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key ileanadumi95@protonmail.com

  gpg --recv-keys 6570EA01146F7354

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=libtool&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify libtool-2.5.0.tar.gz.sig

This release was bootstrapped with the following tools:
  Autoconf 2.72e
  Automake 1.16.5
  Gnulib v0.1-6995-g29d705ead1

NEWS

  • Noteworthy changes in release 2.5.0 (2024-05-13) [alpha]


** New features:

  - Pass '-fdiagnostics-color', '-frecord-gcc-switches',
    '-fno-sanitize*', '-Werror', and 'prefix-map' flags.

  - Pass the '-no-canonical-prefixes' linker flag.

  - Pass '-fopenmp=*' for Clang to allow choosing between libgomp and
    libomp.

  - Pass '-shared-libsan', '-static-libsan', 'rtlib=*', and
    'unwindlib=*' for Clang.

  - Expanded process.h inclusion on Windows for more than the
    proprietary MSVC compiler. Other alternative Windows compilers
    also require process.h.

  - Pass 'elf32_x86_64' and 'elf64_x86_64' to the linker on hurd-amd64.

  - Recognize --windows* config triplets.

** Important incompatible changes:

  - Removed test_compile from command line options.

  - By default executables are created with the RUNPATH property for
    the Android linker. RUNPATH works for libraries which are not
    installed in system locations.

  - Removed AC_PROG_SED fallback, as the macro has been supported
    in Autoconf since the 90's.

** Bug fixes:

  - Check for space after -l, -L, and -R linker flags.

  - Updated documentation for tests, the demo directory, and
    elsewhere.

  - Fixed Solaris 11 builds.

  - Clean trailing "/" from sysroot path.

  - Fixed shared library builds for System V.

  - Added mingw to the list of systems not requiring libm.

  - Fixed support for nios2 systems.

  - Fixed linker check for '--whole-archive' support for linkers other
    than ld.

  - Use -Fe instead of -o with MSVC to avoid deprecation warnings.

  - Improved reproducibility of libtool scripts.

  - Avoided MinGW warning by adding CRTIMP.

  - Improved grep portability.

  - Fixed cross-building warnings when checking for file.


** Changes in supported systems or compilers:

  - Removed support for bitrig (--bitrig*).

  - Added support for flang (Fortran LLVM-based) compilers.


Enjoy!

Categories: FLOSS Project Planets

Gábor Hojtsy: Drupal 11 deep dive: watch the recording, present your own (free slides!)

Planet Drupal - Mon, 2024-05-13 14:33

I presented my first ever Drupal 11 deep dive session at DrupalCon Portland 2024 last week. It turned out to not just be about Drupal 11 but also about Starshot and even about Drupal 12, thanks to the coolest future-proofing technology I announced in this talk. Unfortunately, not everyone who wanted to attend fit in, as the room was standing room only, and many turned around and left. But here we go!

I strongly believe in open content. I came to open source from open content 24 or so years ago. So in good tradition, I built this slide deck on slides.com in a way that is easy to share and fork. You can create your own or present directly from my deck with my speaker notes. The content is licensed with a Creative Commons license. I'll keep updating this slideshow, but under different URLs, so people can catch the latest edition of this presentation at Drupal Devdays Burgas next month, for example. See some of you there!

If you can't make it there or plan to present this at your organization or meetup in the meantime, check out the open source slides.

The recording from DrupalCon Portland is below. Unfortunately, I was not well prepared with a subtitling setup. I am exploring good tools and will do better next time! The conference tech crew tried to help in the middle of the session, but unfortunately they could not make it work either. At least we managed to discuss some current Starshot questions while that was attempted. I promised a video with subtitles, which, as it turns out, YouTube nicely delivered, so I will not create a separate recording now. Hope this helps!

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #450 - Certification & Exam Prep

Planet Drupal - Mon, 2024-05-13 14:00

Today we are talking about Certification & Exam Prep, Resources for studying, and tips to get a passing grade with guests Chad Hester & Martin Anderson-Clutz. We’ll also cover Quiz Maker as our module of the week.

For show notes visit: www.talkingDrupal.com/450

Topics
  • Why are exams and certifications important to devs
  • After going through the Talking Drupal Skills Upgrade mini-series, do you feel prepared to take an Acquia certification
  • How should someone get ready
  • What are some struggles people may have getting ready
  • What does the plan look like for someone getting ready
  • Does Acquia provide pre tests
  • Did Skills Upgrade prepare you for this type of assessment
  • What happens if you do not pass
  • How do you know you're ready
  • Tips and tricks for taking a test
  • Where do you take the test
  • Questions to someone who has taken the test
  • Special surprise
Resources

Guests

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Matthew Grasmick - grasmash

MOTW Correspondent

Martin Anderson-Clutz - mandclu

  • Brief description:
    • Have you ever wanted to build and deliver interactive quizzes on your Drupal website?
  • Module name/project name:
  • Brief history
    • How old: created in Apr 2024 (the last couple of weeks) by Roman Chekhaniuk (r_cheh)
    • Versions available: 1.0.5, which works with Drupal 9, 10, and 11
  • Maintainership
    • Actively maintained
    • Not yet opted into Security coverage, but being so new it’s possible they started the process of getting the project reviewed
    • Number of open issues: 0
  • Usage stats:
    • Not currently installed on any sites yet, according to Drupal.org
  • Module features and usage
    • The module defines a number of custom entities to allow your site to define very flexible quizzes that can include options like the amount of time allowed, pass rate, maximum number of attempts, randomizing the sequence of the questions, and more
    • The module also defines custom plugins for questions, responses, and answers, so you can extend it to handle very custom use cases
    • The Quiz module is very popular in this space but the version you can use with modern versions of Drupal is still in alpha, so it’s great to see another option available, especially for sites that don’t need anything as complex as the Opigno LMS
Categories: FLOSS Project Planets

Introducing the Formatting plugin

Planet KDE - Mon, 2024-05-13 13:35

So this is not quite an introduction, since the plugin has been around for almost a year now, having shipped in the 23.04 release; but since I never got around to writing a blog about it, here I am.

In simple words, the formatting plugin allows one to format code easily and quickly. Well, the "quickness" depends on the underlying code formatter, but we try to be as quick as possible. So far, if you wanted to do code formatting from within Kate, the only way to do that was to configure a tool in the External Tools plugin and then invoke it whenever you wanted to format the code. While this works, it wasn't great for a few reasons. Firstly, you would lose undo history. Secondly, the buffer would jump and you would most likely lose your current position in the document. Thirdly, for every language you get a different tool and you need to remember the right tool to invoke on the right document type.

To simplify this, I decided to write a plugin that would expose a minimal UI but still provide a lot of features.

There are basically two ways to use this plugin:

  • Manually using the "Format Document" action.
  • Automatically on save

The correct formatter is invoked based on the document type in all cases. Additionally the plugin will preserve the document's undo history and user's cursor position when formatting the code so that the formatting of code doesn't disrupt user's workflow. This is especially important for automatic formatting on save.

Supported languages:

The current list of supported languages and formatters is as follows:

  • C/C++/ObjectiveC/ObjectiveC++/Protobuf
    • clang-format
  • Javascript/Typescript/JSX/TSX
    • Prettier
  • Json
    • clang-format
    • Prettier
    • jq
  • Dart
    • dartfmt
  • Rust
    • rustfmt
  • XML
    • xmllint
  • Go
    • gofmt
  • Zig
    • zigfmt
  • CMake
    • cmake-format
  • Python
    • autopep8
    • ruff
Configuring

The plugin can be configured in two ways:

  • Globally, from the Configure dialog
  • On a per project basis using the .kateproject file

When reading the config, the plugin will first try to read the config from .kateproject file and then read the global config.

Example:

{ "formatOnSave": true, "formatterForJson": "jq", "cmake-format": { "formatOnSave": false }, "autopep8": { "formatOnSave": false } }

The above

  • enables "format on save" globally
  • specifies "jq" as the formatter for JSON
  • disables "format on save" for cmake-format and autopep8

To configure formatting for a project, first create a .kateproject file and then add a "formatting" object to it. In the "formatting" object you can specify your settings as shown in the previous example. Example:

{ "name": "My Cool Project", "files": [ { "git": 1 } ], "formatting": { "formatterForJson": "clang-format", "autopep8": { "formatOnSave": false } } }
Categories: FLOSS Project Planets

Pages