FLOSS Project Planets

Drupal Association blog: Contributor guide: Maximizing Impactful Contributions

Planet Drupal - Fri, 2024-03-08 09:48

As I have mentioned before, many people and companies have asked me how they could make their contributions more impactful for Drupal and the Drupal Association. The Bounty program has proven successful, and we are exploring new ideas to extend it. However, we don't want to stop there.

That’s why today we are publishing this list of strategic initiatives, issues, and modules where your contribution can be more impactful.

Additionally, we may at some point grant extra credits to some of those issues. For now, if you are not sure where to contribute but want to make sure that your contribution makes a difference, have a look at this list and take your pick.

And keep in mind that this is a work in progress, a living document. Some sections still need proposals, which we will start populating after internal review and depending on the feedback received on the usefulness of this document.

Strategic Initiatives

Strategic initiatives are where some of the most important innovations in Drupal happen. These are often big-picture ideas to add major new features to Drupal, ranging from improving major APIs, to adding better page building, to improving the total cost of ownership with quality-of-life features, and much more.

Participating in a strategic initiative can be challenging but also rewarding. It is not a place for a drive-by contribution - it's a place to join if you have dedicated time to devote, are willing to listen and learn from the existing contributors and initiative leads before you jump in, and have a strong background in related areas.

Find here more information about the current Strategic Initiatives.

Issues

Contributing to individual issues can be less of a long-term commitment than participating in Strategic Initiatives, but it can also be overwhelming because of the sheer number of issues on Drupal.org. It's also very important to follow the issue etiquette guidelines when contributing to issues. Most of all - listen to and respect the project maintainer and their guidance when contributing to issues on their project. It's better to help solve existing issues to show your willingness to help before opening any new ones.

Modules and projects

Drupal is built on the back of a powerful ecosystem of extensions, modules, themes, distributions, etc. These extensions are crucial for supporting the vast variety of industry use cases that Drupal is used for, and oftentimes some of the most important innovations in Drupal begin as contributed extensions. 

These are just a few projects that could use contribution support to help advance Drupal.

Top used patches
  • Wouldn't it be amazing to have a list of the most-used patches, and to propose those as priorities to get fixed? We are working on extracting that list. COMING SOON
  • Would you like to propose a patch or patches for this section? Send me your suggestions, and why they would make a difference, to: alex.moreno@association.drupal.org
Easy picks

Issues that are easy to fix or that just need a little push.

Ideas/others?

Contact me: alex.moreno@association.drupal.org

Educational resources for contribution

We offer some detailed resources that we recommend everyone review when first learning to contribute:

Resource #1: A video introduction to contribution:

https://www.youtube.com/watch?v=lu7ND0JT-8A

Resource #2: A slide deck which goes into greater depth about contribution:

https://docs.google.com/presentation/d/1jvU0-9Fd4p1Bla67x9rGALyE7anmzjhQ4vPUbf4SGhk/edit 

Resource #3: The First Time Contributors Workshop from DrupalCon Global:

https://www.youtube.com/watch?v=0K0uIgKaVNQ

Avoid contribution behavior that seems motivated just to 'game the system'

It's unfortunate, but we do sometimes see contributors who appear and disappear on single issues, performing small, repetitive tasks that could just as easily be handled by automated tools. These issues are generally not eligible for credit anyway, and they often cause frustration for project maintainers. It's not good for your or your company's reputation to contribute in this way.

Resource #4: Abuse of the credit system

These guidelines help clarify what kinds of contributions are not considered acceptable for marketplace credit.

https://www.drupal.org/drupalorg/docs/marketplace/abuse-of-the-contribution-credit-system

For example, we have recently seen separate issues opened for individual phpcs violations, when we prefer to see all of a project's phpcs violations fixed in a single issue.

Categories: FLOSS Project Planets

Droptica: Why is Drupal a Perfect CMS for Higher Education? 8 Reasons

Planet Drupal - Fri, 2024-03-08 09:00

Universities need a solid online presence to attract students, connect with stakeholders, provide valuable information, and foster collaboration among different departments. Choosing the right content management system is therefore crucial so that you can easily develop a user-friendly, attractive, and informative website. This article will explore why Drupal is the ideal CMS for higher education institutions.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #195: Building a Healthy Developer Mindset While Learning Python

Planet Python - Fri, 2024-03-08 07:00

How do you get yourself unstuck when facing a programming problem? How do you develop a positive developer mindset while learning Python? This week on the show, Bob Belderbos from Pybites is here to talk about learning Python and building healthy developer habits.


Categories: FLOSS Project Planets

Web Review, Week 2024-10

Planet KDE - Fri, 2024-03-08 06:06

Let’s go for my web review for the week 2024-10.

KDE Neon shows that the Plasma 6 Linux distro is something truly special | ZDNET

Tags: tech, kde, foss

Another nice review for Plasma 6. Looks like it’s getting mostly very positive reviews. So glad!

https://www.zdnet.com/article/kde-neon-shows-that-the-plasma-6-linux-distro-is-something-truly-special/


CACM Is Now Open Access – Communications of the ACM

Tags: tech, science, research

This is great news, more scientific papers from the past decades will be accessible to everyone.

https://cacm.acm.org/news/cacm-is-now-open-access-2/


French Court Issues Damages Award for Violation of GPL – Copyleft Currents

Tags: tech, copyright, foss, law

This is a nice ruling about GPL violation in France. Gives some more weight to the GPL.

https://heathermeeker.com/2024/02/17/french-court-issues-damages-award-for-violation-of-gpl/


European crash tester says carmakers must bring back physical controls | Ars Technica

Tags: tech, automotive, ux

This is an important request. It has safety implications. It is a non-binding request, of course, but insurance companies pay attention to it, so it could have an impact.

https://arstechnica.com/cars/2024/03/carmakers-must-bring-back-buttons-to-get-good-safety-scores-in-europe/


Progressive Web Apps in EU will work fine in iOS 17.4

Tags: tech, apple, law, criticism

Looks like enough people complained that they had to change course. Good, until the next bad move…

https://appleinsider.com/articles/24/03/01/apple-reverses-course-on-death-of-progressive-web-apps-in-eu


Nvidia bans using translation layers for CUDA software

Tags: tech, nvidia, computation, vendor-lockin

This was only a matter of time before we’d see such a move. This doesn’t bode well for things like ZLUDA.

https://www.tomshardware.com/pc-components/gpus/nvidia-bans-using-translation-layers-for-cuda-software-to-run-on-other-chips-new-restriction-apparently-targets-zluda-and-some-chinese-gpu-makers


Generative AI’s environmental costs are soaring — and mostly secret

Tags: tech, ai, machine-learning, gpt, water, energy, ecology

This is one of the main problems with using those generative models as currently provided. It's time for legislators to step up; we can't let a couple of players hoard energy and water for themselves.

https://www.nature.com/articles/d41586-024-00478-x


We’re told AI neural networks ‘learn’ the way humans do. A neuroscientist explains why that’s not the case

Tags: tech, neural-networks, ai, machine-learning, neuroscience

Friendly reminder that the neural networks we use are very much artificial. They’re also far from working like biological ones do.

https://theconversation.com/were-told-ai-neural-networks-learn-the-way-humans-do-a-neuroscientist-explains-why-thats-not-the-case-183993


Radicle: sovereign code infrastructure

Tags: tech, git, version-control, p2p

Looks like an interesting approach for a new family of development forges. Fully distributed and peer-to-peer; I wonder if it'll pick up.

https://radicle.xyz/


List of 2024 Leap Day Bugs

Tags: tech, time

We’re collectively still failing at handling leap days properly it seems.

https://codeofmatt.com/list-of-2024-leap-day-bugs/


The Hunt for the Missing Data Type

Tags: tech, graph, mathematics, matrix, performance

Indeed, graphs are peculiar beasts. When dealing with graph related problems there are so many choices to make that it’s hard or impossible to come up with a generic solution.

https://www.hillelwayne.com/post/graph-types/


The “missing” graph datatype already exists. It was invented in the ‘70s

Tags: tech, graph, mathematics, performance

A response to “The Hunt for the Missing Data Type” article. There are indeed potential solutions, but they’re not really used/usable in the industry right now. Maybe tomorrow.

https://tylerhou.com/posts/datalog-go-brrr/


Java is becoming more like Rust, and I am here for it! | Josh Austin

Tags: tech, java, type-systems

Don’t fret, this just illustrates the fact that immutable data and algebraic data types are easier to have in Java now. Still, those are very good things to see spread across many languages.

https://joshaustin.tech/blog/java-is-becoming-rust/


CSS for printing to paper

Tags: tech, web, frontend, css, javascript

Nice set of tricks (which might also involve JavaScript, not only CSS) for when you need to format web content for printing.

https://voussoir.net/writing/css_for_printing


DUSt3R: Geometric 3D Vision Made Easy

Tags: tech, 3d, computer-vision

Looks like an interesting pipeline for multi-view stereo reconstruction.

https://dust3r.europe.naverlabs.com/


How I use git worktrees - llimllib notes

Tags: tech, git, version-control, tools

Good reminder that git worktrees exist. They definitely come in handy sometimes.

https://notes.billmill.org/blog/2024/03/How_I_use_git_worktrees.html


Twenty Years Is Nothing – De Programmatica Ipsum

Tags: tech, version-control, git, history

Going back on the history of the introduction of version control in software engineering and how Git ended up so dominant. We often forget there was a time before Git.

https://deprogrammaticaipsum.com/twenty-years-is-nothing/


Google Testing Blog: Increase Test Fidelity By Avoiding Mocks

Tags: tech, tests

This is a good explanation of why you should limit your use of mocks. It also highlights some of the alternatives.

https://testing.googleblog.com/2024/02/increase-test-fidelity-by-avoiding-mocks.html?m=1


I’m a programmer and I’m stupid

Tags: tech, programming, craftsmanship, complexity

Interesting how feeling stupid can actually push you toward good engineering practices, isn’t it?

https://antonz.org/stupid/


Defining, Measuring, and Managing Technical Debt

Tags: tech, technical-debt, cognition

A bit of a high-level view on technical debt. There are a couple of interesting insights though. In particular the lack of good metrics to evaluate technical debt… and the fact that it’s probably about “both the present state and the possible state” of the code base. So it’s very much linked to human cognition and the ability to conceive the “ideal state”.

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10109339


Enabling constraints | Organizing Chaos

Tags: tech, architecture, complexity

Interesting thinking about constraints and their rough classification as restrictive or enabling. I also liked how they’re tied to complexity.

https://jordankaye.dev/posts/enabling-constraints/


The Bureaucratization of Agile. Why Bureaucratic Software Environments… | by Kevin Meadows | Feb, 2024 | Medium

Tags: tech, agile, management, project-management, product-management, culture

A few points to take with a pinch of salt, especially regarding the proposed solutions. Still, it makes a very good point that most failed transformations toward agile organizations are due to a lack of trust and the swapping of one bureaucracy for another.

https://jmlascala71.medium.com/the-bureaucratization-of-agile-025dd5e2d2d0


These companies tried a 4-day workweek. More than a year in, they still love it : NPR

Tags: management, work, life

Interesting outcomes from those experiments, with useful insights from the practices the companies put in place. The failures also bring valuable information.

https://www.npr.org/2024/02/27/1234271434/4-day-workweek-successful-a-year-later-in-uk


Lemmings: Can You Dig It?

Tags: tech, game, history

Very nice documentary about the creation of Lemmings. It’s especially incredible what you can do with a bunch of pixels. This is a lesson in minimalism. And to think it was initially rejected by publishers… This is a fascinating story through and through with a lot of (sometimes surprising) ramifications.

https://www.youtube.com/watch?v=RbAVNKdk9gA


Bye for now!

Categories: FLOSS Project Planets

LN Webworks: How To Protect Your Website With Drupal 10 From Cyber Threats

Planet Drupal - Fri, 2024-03-08 01:44

In 2024, safeguarding your website against a multitude of online threats has become more crucial than ever. With cyberattacks posing significant risks that can potentially cripple your business, ensuring the security and safety of your digital presence is paramount. 

Enter Drupal 10, a robust CMS equipped with advanced features designed to protect your website from these looming dangers. This comprehensive guide will dive into the most prominent threats out there for your website and the key steps you need to take to protect it.

Knowing Potential Threats that Can Harm Your Drupal 10 Website:

Before forging your Drupal 10 security shields, understanding the enemies you face is important. Here's a deeper dive into the most common threats, their tactics, and their potential impact:

Categories: FLOSS Project Planets

www @ Savannah: Malware in Proprietary Software - Latest Additions

GNU Planet! - Thu, 2024-03-07 21:05

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use everyday, and of companies that make use of these techniques.

Here are our latest additions.

February 2024

Proprietary Surveillance

  • Surveillance cameras put in by government A to surveil for it may be surveilling for government B as well. That's because A put in a product made by B with nonfree software.

(Please note that this article misuses the word "hack" to mean "break security.")

January 2024

Malware in Cars

A good privacy law would prohibit cars recording this data about the users' activities. But not just this data—lots of other data too.

DRM in Trains

  • Newag, a Polish railway manufacturer, puts DRM inside trains to prevent third-party repairs.
    • The train's software contains code to detect if the GPS coordinates are near some third party repairers, or the train has not been running for some time. If yes, the train will be "locked up" (i.e. bricked). It was also possible to unlock it by pressing a secret combination of buttons in the cockpit, but this ability was removed by a manufacturer's software update.
    • The train will also lock up after a certain date, which is hardcoded in the software.
    • The company pushes a software update that detects if the DRM code has been bypassed, i.e. the lock should have been engaged but the train is still operational. If yes, the controller cabin screen will display a scary message warning about "copyright violation."


Proprietary Insecurity in LogoFAIL


4K UHD Blu-ray Disks, Super Duper Malware

  • The UHD (Ultra High Definition, also known as 4K) Blu-ray standard involves several types of restrictions, both at the hardware and the software levels, which make “legitimate” playback of UHD Blu-ray media impossible on a PC with free/libre software.
    • DRM - UHD Blu-ray disks are encrypted with AACS, one of the worst kinds of DRM. Playing them on a PC requires software and hardware that meet stringent proprietary specifications, which developers can only obtain after signing an agreement that explicitly forbids them from disclosing any source code.
    • Sabotage - UHD Blu-ray disks are loaded with malware of the worst kinds. Not only does playback of these disks on a PC require proprietary software and hardware that enforce AACS, a very nasty DRM, but developers of software players are forbidden from disclosing any source code. The user could also lose the ability to play AACS-restricted disks anytime by attempting to play a new Blu-ray disk.
    • Tethering - UHD Blu-ray disks are encrypted with keys that must be retrieved from a remote server. This makes repeated updates and internet connections a requirement if the user purchases several UHD Blu-ray disks over time.
    • Insecurity - Playing UHD Blu-ray disks on a PC requires Intel SGX (Software Guard Extensions), which not only has numerous security vulnerabilities, but also was deprecated and removed from mainstream Intel CPUs in 2022.
    • Back Doors - Playing UHD Blu-ray disks on a PC requires the Intel Management Engine, which has back doors and cannot be disabled. Every Blu-ray drive also has a back door in its firmware, which allows the AACS-enforcing organization to "revoke" the ability to play any AACS-restricted disk.


Proprietary Interference

This is a reminder that angry users still have the power to make developers of proprietary software remove small annoyances. Don't count on public outcry to make them remove more profitable malware, though. Run away from proprietary software!

Categories: FLOSS Project Planets

Matt Layman: Do It Live - Building SaaS with Python and Django #185

Planet Python - Thu, 2024-03-07 19:00
In this episode, we deployed all our user setup and Stripe configuration changes to the live site and tested the new flows end to end. Along the way, we found a bug in djstripe as well as some final bugs in the JourneyInbox configuration that prevented things from working. This is why you test!
Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 260 released

Planet Debian - Thu, 2024-03-07 19:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 260. This version includes the following changes:

[ Chris Lamb ]
  • Actually test 7z support in the test_7z set of tests, not the lz4 functionality. (Closes: reproducible-builds/diffoscope#359)
  • In addition, correctly check for the 7z binary being available (and not lz4) when testing 7z.
  • Prevent a traceback when comparing a contentful .pyc file with an empty one. (Re: Debian:#1064973)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Valhalla's Things: Denim Waistcoat

Planet Debian - Thu, 2024-03-07 19:00
Posted on March 8, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

I had finished sewing my jeans, I had a scant 50 cm of elastic denim left.

Unrelated to that, I had just finished drafting a vest with Valentina, after the Cutters’ Practical Guide to the Cutting of Ladies Garments.

A new pattern requires a (wearable) mockup. 50 cm of leftover fabric require a quick project. The decision didn’t take a lot of time.

As a mockup, I kept things easy: single layer with no lining, some edges finished with a topstitched hem and some with bias tape, and plain tape on the fronts, to give more support to the buttons and buttonholes.

I did add pockets: not real welt ones (too much effort on denim), but simple slits covered by flaps.

(Image: the piece, with a slit in the middle that has been finished with topstitching.)

To do them I marked the slits, then I cut two rectangles of pocketing fabric, each as wide as the slit + 1.5 cm (width of the pocket) + 3 cm (allowances), and twice the sum of the desired pocket height + 1 cm (space above the slit) + 1.5 cm (allowances).

Then I put the rectangle on the right side of the denim, aligned so that the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut, turned the pocketing to the wrong side, pressed and topstitched 2 mm from the fold to finish the slit.

(Image: the pocketing seen from the other side; it does not lay flat on the right side of the fabric because the finished slit, hidden in the picture, is pulling it.)

Then I turned the pocketing back to the right side, folded it in half, sewed the side and top seams with a small allowance, pressed and turned it again to the wrong side, where I sewed the seams again to make a french seam.

And finally, a simple rectangular denim flap was topstitched to the front, covering the slits.

I wasn’t as precise as I should have been and the pockets aren’t exactly the right size, but they will do to see if I got the positions right (I think that the breast one should be a cm or so lower, the waist ones are fine), and of course they are tiny, but that’s to be expected from a waistcoat.

The other thing that wasn’t exactly as expected is the back: the pattern splits the bottom part of the back to give it “sufficient spring over the hips”. The book was probably published in 1892, but I had already found when drafting the foundation skirt that its idea of “hips” includes a bit of structure. The “enough steel to carry a book or a cup of tea” kind of structure. I should have expected a lot of spring, and indeed that’s what I got.

To fit the bottom part of the back on the limited amount of fabric I had to piece it, and I suspect that the flat felled seam in the center is helping it stick out; I don’t think it’s exactly bad, but it is a peculiar look.

Also, I had to cut the back on the fold, rather than having a seam in the middle and the grain on a different angle.

Anyway, my next waistcoat project is going to have a linen-cotton lining and silk fashion fabric, and I’d say that the pattern is good enough that I can do a few small fixes and cut it directly in the lining, using it as a second mockup.

As for the wrinkles, there is quite a bit, but it looks something that will be solved by a bit of lightweight boning in the side seams and in the front; it will be seen in the second mockup and the finished waistcoat.

As for this one, it’s definitely going to get some wear as is, in casual contexts. Except. Well, it’s a denim waistcoat, right? With a very different cut from the “get a denim jacket and rip out the sleeves”, but still a denim waistcoat, right? The kind that you cover in patches, right?

And I may have screenprinted a “home sewing is killing fashion” patch some time ago, using the SVG from wikimedia commons / the Home Taping is Killing Music page.

And. Maybe I’ll wait until I have finished the real waistcoat. But I suspect that one, and other sewing / costuming patches may happen in the future.

No regrets, as the words on my seam ripper pin say, right? :D

Categories: FLOSS Project Planets

Quant Drupal Planet Blog Posts: Join us at DrupalSouth Sydney 2024 to talk about the big wins of static Drupal websites

Planet Drupal - Thu, 2024-03-07 18:48

Quant is excited to be presenting this month at DrupalSouth Sydney 2024! Join us to learn about the big wins of static Drupal websites including security, performance, scalability and environmental impact.


Categories: FLOSS Project Planets

Dirk Eddelbuettel: prrd 0.0.6 at CRAN: Several Improvements

Planet Debian - Thu, 2024-03-07 18:05

Thrilled to share that a new version of prrd arrived at CRAN yesterday, its first update in two and a half years. prrd facilitates the parallel running of reverse dependency checks when preparing R packages. It is used extensively for releases I make of Rcpp, RcppArmadillo, RcppEigen, BH, and others.

The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is little or no interdependency between them (besides maybe shared build dependencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session).

This release, the first since 2021, brings a number of enhancements. In particular, the summary function is now improved in several ways. Josh also put in a nice PR that generalizes some setup defaults and values.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.6 (2024-03-06)
  • The summary function has received several enhancements:

    • The extended summary now runs only when failures are seen.

    • The summariseQueue function now displays an anticipated completion time and remaining duration.

    • The use of optional package foghorn has been refined, and refactored, when running summaries.

  • The dequeueJobs.r script can receive a date argument; the date can be parsed via anydate if anytime is present.

  • The enqueueJobs.r script now considers skipped packages when running 'addfailed' while ensuring selected packages are still on CRAN.

  • The CI setup has been updated (twice).

  • Enqueuing and dequeuing functions and scripts now support relative directories; updated documentation (#18 by Joshua Ulrich).

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

GNUnet News: Messenger-GTK 0.9.0

GNU Planet! - Thu, 2024-03-07 18:00
Messenger-GTK 0.9.0

Following the new release of "libgnunetchat" there have been some changes regarding the applications utilizing it. So we are pleased to announce the new release of the Messenger-GTK application. This release will be compatible with libgnunetchat 0.3.0 and GNUnet 0.21.0 upwards.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.9.0
  • Contacts can be blocked and unblocked to filter chat messages.
  • Requests permission to use the camera, to autostart the application, and to run it in the background.
  • Camera sensors can be selected to exchange contact information.

A detailed list of changes can be found in the ChangeLog.

Known Issues
  • Chats still require a reliable connection between GNUnet peers. So this still depends on the upcoming NAT traversal to be used outside of local networks for most users (see #5710).
  • File sharing via the FS service should work in a GNUnet single-user setup, but a multi-user setup breaks it (see #7355).

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org.

messenger-cli 0.2.0

There's also a new release of the terminal application using the GNUnet Messenger service. This release will ensure compatibility with changes in libgnunetchat 0.3.0 and GNUnet 0.21.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Categories: FLOSS Project Planets

GNU Taler news: GNU Taler v0.9.4 released

GNU Planet! - Thu, 2024-03-07 18:00
We are happy to announce the release of GNU Taler v0.9.4.
Categories: FLOSS Project Planets

Drupal Core News: Drupal 11 will be released either on the week of July 29 or week of December 9, 2024

Planet Drupal - Thu, 2024-03-07 15:52

In November 2023, we announced three possible release windows for Drupal 11 based on when beta requirements will be completed. We opened the development branch two weeks ago.

Major version updates of dependencies in Drupal 11 include Symfony 7, jQuery 4, and PHPUnit 10 or 11. Based particularly on our findings with the PHPUnit 10 update, we can already see that the first release window in June will not be possible for Drupal 11.

The two remaining potential release windows for Drupal 11 are as follows:

  • If Drupal 11 beta requirements are done by April 26, 2024: Drupal 11.0.0-beta1 on the week of April 29, 2024. RC1 on the week of July 1, 2024 and stable release on the week of July 29, 2024.
  • If Drupal 11 beta requirements are done later, by September 13, 2024: Drupal 11.0.0-beta1 will be on the week of September 16, 2024. RC1 on the week of November 11, 2024 and stable release on the week of December 9, 2024. In this case, the corresponding Drupal 10.4 releases are planned for the same release windows.

 

Help with getting Drupal 11 ready

Most help is needed with the update to PHPUnit 10, while the Symfony 7 and jQuery 4 update issues also have more work remaining. Join the #d11readiness channel on Drupal Slack to discuss issues live with contributors.

Get involved in person

In the earlier scenario, Drupal 11 will be in beta the week before DrupalCon Portland 2024, while in the later scenario it will be in beta the week before DrupalCon Barcelona 2024. We'll be working on outstanding core issues and updating contributed projects at those events.

Drupal 10.3 will be released on the week of June 17, 2024

While the release dates of its alpha and beta version may be different based on the scenario, Drupal 10.3.0 is planned to have a release candidate on the week of June 3, 2024 and a release on the week of June 17, 2024, independent of when Drupal 11 is released.

Categories: FLOSS Project Planets

Welcome attendees, get to know speakers first hand, and make LibrePlanet a unique experience

FSF Blogs - Thu, 2024-03-07 15:28
We need your help to make the world's premier gathering of free software enthusiasts a success. Would you like to volunteer at LibrePlanet 2024 and play an important part in making the conference a unique experience?
Categories: FLOSS Project Planets


PyBites: A Better Place to Put Your Python Virtual Environments

Planet Python - Thu, 2024-03-07 14:50

Virtual environments are vital if you’re developing Python apps or just writing some Python scripts. They allow you to isolate different app requirements to prevent conflicts and keep your global OS environment clean.

This is super important for many reasons. The most obvious one is requirements isolation. Let’s say that you’re working on two different projects:

  • Project #1 requires Python 3.6 and the “requests” package version 2.1.0.
  • Project #2 requires Python 3.10 and requests 2.31.0.

Installing both Python 3.6 and Python 3.10 globally on your OS is awkward at best, and a single interpreter can only have one version of the “requests” library installed at a time.

The solution is to create a virtual environment, or “venv”, for each project, which isolates it from the other environment and, most importantly, from your OS’s global env. Each venv has its own folder containing all the packages and libraries the project needs to run correctly, including a copy of (or symlink to) the Python interpreter itself. You then activate “venv1” when you’re working on Project #1, or when you finally deploy it or run it as a service or a process. When it comes time to work on Project #2, you just deactivate “venv1” and activate “venv2”.
Better yet, you can have two separate terminal or IDE windows open at the same time, one for each project with its own “venv” activated.
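The isolation described above can be sketched with Python’s standard-library venv module; the project names and temporary location here are illustrative:

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

# Create two throwaway venvs, one per hypothetical project.
base = Path(tempfile.mkdtemp())
for name in ("venv1", "venv2"):
    venv.create(base / name, with_pip=False)  # with_pip=False keeps creation fast

# Ask each venv's interpreter for its prefix: each one is isolated.
bin_dir = "Scripts" if sys.platform == "win32" else "bin"
prefixes = []
for name in ("venv1", "venv2"):
    python = base / name / bin_dir / "python"
    result = subprocess.run(
        [str(python), "-c", "import sys; print(sys.prefix)"],
        capture_output=True, text=True, check=True,
    )
    prefixes.append(result.stdout.strip())

print(prefixes[0] != prefixes[1])  # the two environments are independent
```

Activating a venv in a shell just prepends its bin/ (or Scripts/ on Windows) directory to PATH, which is why two terminals can each have a different venv active at the same time.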

Another great advantage of venvs is the ability to replicate the project’s environment on almost any computer regardless of its OS or currently installed packages. This eliminates the popular “well, it runs on my laptop” excuse when your code doesn’t run on other team members’ machines. Everybody can recreate the exact project’s environment on their machine using venvs and the requirements list conventionally saved in the “requirements.txt” file.

Now, the most common place that you’ll see developers put their virtualenvs is within the same directory where the project they’re developing is saved. This has some great benefits but it also has major drawbacks. I will briefly mention both the pros and cons of this approach, then I will suggest a better place for your ‘venv’ folder, the one I actually use and prefer.

Benefits of creating the “venv” inside the project folder:

Easier to create:

It’s easier when you create a new project to “cd” into it and create the “venv” right inside it.

Easier to activate:

It’s also easier for you to activate the venv by just running “source venv/bin/activate” from within the project folder. No need to remember where this project’s venv lives on your disk and the absolute or relative path to it.

One bundle:

When you create the venv inside the project folder, it’s always included and bundled with the project. If you copy the project, move it, or send it to a team member, the venv is right there with it. In practice, though, venvs often break when moved, because their activation scripts hard-code absolute paths, so you may still need to recreate the venv or “pip install” the requirements again.

IDE optimization:

Most IDEs will discover your venv folder if it’s directly in your project’s folder. Some IDEs will activate the venv automatically for you, while others will at least use it as the default venv without requiring you to set it manually for each project.

Drawbacks of creating the “venv” inside the project folder:

Committing venv to Git is gross:

When you have your venv inside your project’s folder, you risk committing it to Git or your version control system, either by accident or without knowing that it’s a bad practice.

Virtual environments can contain thousands of files and their size can be in gigabytes. Committing them to Git can overload and clutter your source code repo with unnecessary files and cause confusion for anyone trying to clone and run the source code on their machine.

I can hear you saying that the venv folder should be added to the .gitignore file and thus never committed to Git. This is true, but remember that the .gitignore file is not created automatically when you initialize a Git repo. You must create it manually and edit it to exclude the venv folder, or use a common template for Python projects. You can easily forget this step and won’t notice until you see the venv folder on GitHub. Simply adding it to the .gitignore file afterwards won’t remove it from Git history, which makes the repo look unprofessional.
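For reference, a minimal .gitignore covering the usual venv folder names (assuming the venv lives in the project root; adjust to your own naming) would be:

```
# virtual environment folders
venv/
.venv/
env/
```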

Making the project difficult to backup and sync:

Venvs can get huge, both in size and in number of files. If you’re using any system to back up your laptop or sync your home/user folder to a remote location such as a NAS or Nextcloud/Dropbox/Google Drive, you’ll make your life miserable by backing up your venv folders.
I know you can set rules to ignore venvs, but believe me, I’ve tried, and I always forget to add newly created projects to the ignore list.

I often don’t notice until it’s too late, usually after a backup job takes too long to complete, and then I discover after some digging that a venv folder is being backed up to my NAS or, even worse, uploaded to my pCloud folder.

The Solution: A better place for your virtualenvs

Virtual environments are disposable by nature. They contain packages downloaded from the internet and should not be backed up, synced, copied, or moved around. They are also reproducible: using the “requirements.txt” file that’s ideally included in every Python project, you can reinstall the entire content of the venv folder anywhere you have an internet connection.

With that in mind, you should not worry about losing the venv folder or sharing the project with that folder missing. If the person you share it with has any basic Python knowledge, they should be able to replicate its venv easily.

This suggests that you should create the venv in a separate folder outside the project’s folder. The question is “Where?”

Keep all your Virtualenvs in a disposable, central folder outside the project’s folder:

As the heading suggests, the ideal place for venvs is a central folder in a disposable location on your disk, outside your projects’ folders. This can be your “Downloads”, “/var/tmp/”, or even “/tmp” folder: basically any place that backup or sync software would NOT back up or sync by default and would automatically ignore. Most if not all backup/sync apps and systems will ignore your “Downloads” folder and “/var/tmp”. For me, it’s my Downloads folder, where I keep a subfolder for all my venvs.

This guards against human error when you forget to ignore the venv folder manually. You don’t even have to think about it.
So, the venv location must be:

  1. Outside the project’s folder.
  2. Disposable.
  3. Central.

Here’s a breakdown of why:
Why outside the project’s folder?

So that you avoid drawbacks I mentioned above:

  1. Automatically ignored from Git:
    Because they live outside the Git repo, they will never get a chance to be committed or even added to the tracked files.
  2. Clutter-free source code:
    You won’t commit them to Git, so they won’t clutter and overload your source code repo with unnecessary files. As you probably know, your repo should only contain the code you write or incorporate in your project, plus non-reproducible assets, and NOT the source code of packages and libraries you download over the internet, i.e., the contents of venvs.
  3. Automatically excluded from Backups and Sync:
    When you backup or sync a project, you usually wouldn’t include any outside folders in the process.
Why a disposable folder?

Simply put, venvs can be reproduced anytime. No need to back them up as you would your personal documents and valuable source code and assets.

So, having them in a disposable location, like the Downloads folder, is ideal. You don’t have to worry about excluding them from backups or sync; they’re excluded by default.

Why central location for all venvs?

There are several advantages you will gain by having all your venvs in one central folder on your disk:

Easily exclude them from backup and sync:

There are real costs to backing up venv folders or syncing them to the cloud.

Most backup and sync tools have a way of listing the files/folders you want to exclude. This varies between systems: some have an option in the GUI where you specify the list, some require you to include a hidden file with a specific name (like .nobackup in the case of Borg) in every folder you want to exclude, and some simply require that you type out “--exclude {folder name}” in the command that triggers the backup, as is the case with rsync.

Whatever the case may be, you must do it manually for every folder. However, by having all your venvs under one, and only one, folder, you only have to do this exclusion once for the parent folder and be done with it. Every child folder will inherit it and be excluded automatically.
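With rsync, for example, a single pattern in an exclude file passed via --exclude-from covers every venv under the central folder (paths here are illustrative):

```
# ~/.rsync-excludes: one rule excludes all venvs at once
Downloads/virtual-envs/
```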

Easily remember the path to your venvs:

If you apply only my first suggestion and create each venv outside its project’s folder, every project’s venv will end up in a different location, and you’ll find it hard to remember where each one is.

By having all your venvs under one folder like “~/Downloads/venvs/” for example, it’s very easy to link to your venv from anywhere on your system.

An Example Setup:

Finally, an ideal setup, the one I suggest and have actually been using for years, is as follows:

  • The location I use for my venvs is:
    ~/Downloads/virtual-envs/. That is: my home directory > Downloads > virtual-envs.
  • When I start a new project, I immediately create a virtualenv for it under my central venvs folder (~/Downloads/virtual-envs/) using this command:
python3 -m venv ~/Downloads/virtual-envs/project1
  • Then I activate the venv. For quick completion of the path, I hit “Alt + .” (the Alt key and the period key) to insert the last argument of the most recent command, which in this case is “~/Downloads/virtual-envs/project1”. So, instead of typing the path again, I type “source ”, press “Alt + .”, and then append “/bin/activate”, which gives:

source ~/Downloads/virtual-envs/project1/bin/activate

I make it even quicker by hitting “Tab” after /b and /a in the path so that the shell auto-completes “bin” and “activate” respectively.

That’s it. You now have a better setup for your venvs. You don’t have to worry about adding them to the .gitignore file or excluding them from backup and sync software anymore.

Categories: FLOSS Project Planets

TechBeamers Python: How to Connect to PostgreSQL in Python

Planet Python - Thu, 2024-03-07 13:44

PostgreSQL is a powerful open-source relational database management system. In this tutorial, we’ll explore all the steps you need to connect PostgreSQL from Python code. From setting up a PostgreSQL database to executing queries using Python, we’ll cover it all. By the end, you’ll have a solid foundation for seamlessly interacting with PostgreSQL databases in […]

The post How to Connect to PostgreSQL in Python appeared first on TechBeamers.

Categories: FLOSS Project Planets

death and gravity: reader 3.12 released – split search index, changes API

Planet Python - Thu, 2024-03-07 13:00

Hi there!

I'm happy to announce version 3.12 of reader, a Python feed reader library.

What's new? #

Here are the highlights since reader 3.10.

Split the search index into a separate database #

The full-text search index can get almost as large as the actual data, so I've split it into a separate, attached database, which allows backing up only the main database.

(I stole this idea from One process programming notes (with Go and SQLite).)

Change tracking internal API #

To support the search index split, Storage got a change tracking API that allows search implementations to keep in sync with text content changes.

This is a first step towards search backends that aren't tightly coupled to a storage. For example, the SQLite storage uses SQLite's FTS5 extension for search, and a PostgreSQL storage could use Postgres's native full-text support; the new API allows either storage to use something like Elasticsearch instead. (There's still no good way for search to filter/sort results without storage cooperation, so more work is needed here.)

Also, it lays some of the groundwork for searchable tag values by having tag support already built into the API.

Here's how change tracking works (long version):

  • Each entry has a 16-byte random sequence that changes when its text changes.
  • Sequence changes get recorded and are available through the API.
  • Search update() processes pending changes and marks them as done.

While simple on the surface, this prevents a lot of potential concurrency issues that needed special handling before. For example, what if an entry changes during pre-processing, before it is added to the search index? You could use a transaction, but this may keep the database locked for too long. Also, what about search backends where you don't have transactions?

I used Hypothesis and property-based testing to validate the model, so I'm ~99% sure it is correct. A real model checker like TLA+ or Alloy may have been a better tool for it, but I don't know how to use one at this point.
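The three steps above can be modeled in a few lines. This is a toy sketch of the scheme, not reader's actual internal API; all names here are illustrative:

```python
import os

class ToyStorage:
    """Minimal model of text-content change tracking (illustrative only)."""

    def __init__(self):
        self.sequences = {}  # entry id -> current 16-byte sequence
        self.pending = []    # recorded (entry_id, sequence) changes

    def change_text(self, entry_id):
        # Every text change gets a fresh random sequence and is recorded.
        seq = os.urandom(16)
        self.sequences[entry_id] = seq
        self.pending.append((entry_id, seq))

class ToySearch:
    def __init__(self):
        self.indexed = {}  # entry id -> sequence the index was built from

    def update(self, storage):
        # Process pending changes; only a change whose sequence is still
        # current gets indexed. An edit that races with update() is covered
        # by its own, later pending record, so nothing is lost.
        for entry_id, seq in storage.pending:
            if storage.sequences.get(entry_id) == seq:
                self.indexed[entry_id] = seq
        storage.pending.clear()  # mark all recorded changes as done

storage = ToyStorage()
search = ToySearch()
storage.change_text("entry-1")
storage.change_text("entry-1")  # a second edit supersedes the first
search.update(storage)

# The index ends up in sync with the latest content.
print(search.indexed["entry-1"] == storage.sequences["entry-1"])  # prints True
```

The random sequence acts as a version token: comparing it against the current one tells the search backend whether a recorded change is still the latest, without needing a transaction spanning both databases.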

Filter by entry tags #

It is now possible to filter entries by entry tags: get_entries(tags=['tag']).

I did this to see how it would look to implement the has_enclosures get_entries() argument as a plugin (it is possible, but not really worth it).

SQLite storage improvements #

As part of a bigger storage refactoring, I made a few small improvements:

  • Enable write-ahead logging only once, when the database is created.
  • Vacuum the main database after migrations.
  • Require at least SQLite 3.18, since it was required by update_search() anyway.
Python versions #

reader 3.11 (released back in December) adds support for Python 3.12.

That's it for now. For more details, see the full changelog.

Want to contribute? Check out the docs and the roadmap.

Learned something new today? Share this with others, it really helps!

What is reader#

reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.

reader allows you to:

  • retrieve, store, and manage Atom, RSS, and JSON feeds
  • mark articles as read or important
  • add arbitrary tags/metadata to feeds and articles
  • filter feeds and articles
  • full-text search articles
  • get statistics on feed and user activity
  • write plugins to extend its functionality

...all these with:

  • a stable, clearly documented API
  • excellent test coverage
  • fully typed Python

To find out more, check out the GitHub repo and the docs, or give the tutorial a try.

Why use a feed reader library? #

Have you been unhappy with existing feed readers and wanted to make your own, but:

  • never knew where to start?
  • it seemed like too much work?
  • you don't like writing backend code?

Are you already working with feedparser, but:

  • want an easier way to store, filter, sort and search feeds and entries?
  • want to get back type-annotated objects instead of dicts?
  • want to restrict or deny file-system access?
  • want to change the way feeds are retrieved by using Requests?
  • want to also support JSON Feed?
  • want to support custom information sources?

... while still supporting all the feed types feedparser does?

If you answered yes to any of the above, reader can help.

The reader philosophy #
  • reader is a library
  • reader is for the long term
  • reader is extensible
  • reader is stable (within reason)
  • reader is simple to use; API matters
  • reader features work well together
  • reader is tested
  • reader is documented
  • reader has minimal dependencies
Why make your own feed reader? #

So you can:

  • have full control over your data
  • control what features it has or doesn't have
  • decide how much you pay for it
  • make sure it doesn't get closed while you're still using it
  • really, it's easier than you think

Obviously, this may not be your cup of tea, but if it is, reader can help.

Categories: FLOSS Project Planets

Pages