Feeds

EuroPython: EuroPython March 2024 Newsletter

Planet Python - Mon, 2024-03-04 12:07

Hey ya 👋 hope you are having a good start to 2024 ❤️

It’s been a dog’s age since we last spoke. In case you don’t know the reason for our hiatus: we took Elmo’s question of “How is everybody doing?” to its core and had to face the inner truths of our existence.

Luckily, soon the days got longer and the world news got worse, so we had to unlock our doors and come out to organise the bestest of conferences!

We are now pumped to get things rolling and ready to cook up an awesome experience for EuroPython 2024 🚀

🇨🇿EuroPython 2024 - Prague Congress Centre 08-14 July

We are coming back to Prague! We loved last year so much that we decided to have EuroPython 2024 at the Prague Congress Centre again between the 8th and 14th of July, 2024. To stay up to date with the latest news, please visit our website.

We also encourage you to sign up to our monthly community newsletter.

Call for Proposals 🐍

The call for proposals for EuroPython 2024 is OPEN! 🎉

This year we’re aiming for a schedule similar to last year’s: around 120 talks, roughly 16 tutorials, plus posters and special events.

  • Tutorials/Workshops (8-9 July): hands-on, 180-minute sessions with 40-100 participants.
  • Talks (10-12 July): 30- or 45-minute presentations of specific and general interest to the European Python community (including a dedicated PyData track).

We are looking for sessions of all levels: beginner, intermediate, and advanced.

You are also welcome to submit posters! They will be printed in large formats. This is a graphical way to showcase your research, project or technology. Posters are exhibited at the conference, can be read at any time by participants, and can be discussed face-to-face with their authors during the Poster Sessions (between July 10th and 12th).

No matter your level of Python or public speaking experience, EuroPython’s job is to help you bring yourself to our community so we all flourish and benefit from each other’s experience and contributions. 🏆

The deadline for submission is March 6th, 2024 Anywhere on Earth.

We would love to see your proposals bubbling in there. Help us make 2024 the year of another incredible EuroPython!

For more information, have a look at https://ep2024.europython.eu/cfp.

Speaker’s Mentorship Programme 🎤

The Speaker’s Mentorship Programme is back for EuroPython 2024!

We had a call for mentees and mentors last month, and it was a huge success 🚀 So far, we have matched 28 speakers with diligent mentors.

On the 26th of February, we hosted an Ask me Anything session for all the people interested in submitting a proposal. Our programme team was there and answered all the questions raised by interested speakers. We clarified information about the Call for Proposals and shared knowledge about Financial Aid, amongst other info.

If you are curious and would like to see what was discussed, the session recording can be found on YouTube at: https://youtu.be/EiCfmf6QVIA

Call for Contributors

Conference Organisers 💪

To the wonderful humans who signed up to help organise the EuroPython Conference itself, thank you! We are late in replying to everyone and we are thankful for your patience. We got caught between dengue fevers 🦟 and carnivals 🎊.

The good news is that we are now back on track and will reach out to everyone this week.

We had a lot of applications this year, so a selection process was required. But fear nothing! In case you don’t get to work with the most fun team of all time, there is always next year. 🌟

We look forward to working together to make this conference an unforgettable experience for everyone involved 💼✨

Speaker’s Mentors 🎏

To everyone who signed up to mentor on the Speaker’s Mentorship Programme: a HUGE thank you! This would not be possible without your collaboration and work. We are so very proud to have you around learning, sharing, and strengthening the community.

We truly hope you take advantage of the opportunity and develop the skills you’re looking for. After all, we believe the best way to learn is to teach. 📚✨

Proposal Reviewers 📔

We also announced the Call for Proposal Reviewers. These awesome volunteers review the submitted proposals and vote on the ones they think are most suitable for the conference’s programme.

This is crucial work, as it plays a significant part in shaping the EuroPython 2024 schedule!

A massive thank you to everyone who has filled in the form to help. You will get a cucumber lollipop as a sign of our affection. 🍭

We will contact you by the end of the Call for Proposals to coordinate all the fun stuff. Please keep an eye out for our email this month so we can start the labour. ⚒️

Fáilte Ireland Award 🏆

The 2022 edition of the EuroPython Conference (EPC) was in Dublin, Ireland. More old news? You bet! But this time it is relevant, because in November 2023 the 21st edition of the EPC was awarded a Conference Ambassador Recognition Award from Fáilte Ireland, the country’s National Tourism Development Authority.

The evening was fantastic and a delightful way to acknowledge the people and businesses who have hosted conference events on the Emerald Isle. This is our first prize. We’re feeling proud and happy and wanted to share the news! 🥇

One of our members wrote more about the experience on LinkedIn.

Laís Carvalho (EuroPython) and Nicolas Laurance (Python Ireland) hold the Irish-sized whiskey glass 🥃 awarded by Fáilte Ireland

EPS Board

We have a new Board for 2023-2024.

If you want to learn the names of the new board members, head over to the article on our website: https://www.europython-society.org/eps-board-2023-2024/

TL;DR: We hope to do a fine job this year and help the European Python community thrive even more!

If you want to talk to us about anything, give feedback, or ask for a partner to count the stars beside you, shoot us an email at board@europython.eu.

Upcoming Events in Europe

Here are some Python events happening in Europe in the near future.

PyCon Slovakia: March 15-17 2024

PyCon SK will happen in the wonderful city of Bratislava! The conference is organised by the civic association SPy, which focuses on the promotion and dissemination of the Python programming language and other open-source technologies and ideas.

Check out their programme here: https://2024.pycon.sk/en/speakers/index.html. More information can be found on their website: https://2024.pycon.sk/en/index.html

PyCon DE & PyData Berlin: April 22-24 2024

Immerse yourself in three days of Python and PyData excellence at our community-driven conference in the vibrant heart of Berlin. From workshops to live keynote sessions, connect with fellow enthusiasts and experts alike! Join fellow Pythonistas at the iconic BCC venue at Alexanderplatz where we actually hosted EuroPython 2014! Check out their website for more info: https://2024.pycon.de/

PyCon Italy: May 22-25 2024

PyCon Italia 2024 will happen in Florence. The birthplace of the Renaissance will receive a wave of Pythonistas looking to geek out this year. The schedule is online, and you can check it out at their cutely animated website (https://2024.pycon.it/). Beginners will get a special treat: a full day of activities on May 22nd, starting with workshops on Python, Data Science, and Public Speaking.

It’s time to get tickets!

GeoPython: May 27-29 2024

GeoPython 2024 will happen in Basel, Switzerland. Focused on exploring the fascinating fusion of Python programming and the boundless world of Geo, the 9th edition of GeoPython has already published a preliminary schedule (here: https://2024.geopython.net/schedule).

For more information about GeoPython 2024, you can visit their website here: https://2024.geopython.net/

Djangocon.eu: June 5-9 2024

DjangoCon Europe 2024 will happen in Vigo, Spain. Organised by Django practitioners of all levels, the 16th edition of the conference will be hosted in the beautiful Galician city, famous for its amazing food and the Illas Atlánticas.

You can find more information about DjangoCon Europe on their lovely website: https://2024.djangocon.eu/

Py.Jokes

```sh
$ pip install pyjokes
```

```python
import pyjokes

print(pyjokes.get_joke())
```

Child: Dad, why does the Sun rise in the East and set in the West?
Dad: Son, it’s working, don’t touch it!

That’s all, folks! 🏁

Thanks for reading along.

We will have a regular monthly appearance in your inbox from now on. We are happy to tailor our future editions to accommodate what you’d like to see here. Influence what gets to your inbox by dropping us a line at communications@europython.eu.

With joy and excitement,

EuroPython 2024 Team 🤗

Categories: FLOSS Project Planets

The Drop Times: Celebrating Women's Day: Honoring Strength and Contributions

Planet Drupal - Mon, 2024-03-04 10:29
Celebrate the strength and resilience of women everywhere this Women's Day! Join us as we honour the incredible contributions of women, whether in the workplace or at home. Stay tuned for inspiring stories and features celebrating women in Drupal.

As we approach International Women's Day, it's a moment for us to reflect on the incredible strength and contributions of women around the world. This day isn't just about recognizing the achievements of a few but about celebrating the resilience, determination, and diversity of all women, regardless of their roles or backgrounds.

I am a Woman Phenomenally. Phenomenal Woman, that's me.

Maya Angelou, 'And Still I Rise,' poetry collection (1978).

Whether she's in the workforce, leading teams in the corporate world, pursuing her passion in the arts and sciences, or staying at home nurturing families and communities, every woman contributes to the tapestry of our society in her unique way. This diversity of experiences and perspectives enriches our collective journey and propels us forward.

As we honour all women this International Women's Day, we also want to highlight the remarkable women within our Drupal community. Their dedication, expertise, and tireless efforts have played a pivotal role in shaping and advancing Drupal as a platform. From developers to designers, contributors to community leaders, women in Drupal have made invaluable contributions, often overcoming barriers and stereotypes.

Let's take this opportunity not only to celebrate but also to acknowledge the ongoing work needed to ensure equality, diversity, and inclusion within our community and beyond. Let's continue to support and uplift each other, amplifying women's voices in tech and beyond.

Now, let me share some featured interviews and news we covered last week.

Jon Stewart, an expert in Customer Experience Platforms (CXPs) and Marketing Technology (MarTech), shares his insights with Alka Elizabeth in the latest interview. Throughout the discussion, Stewart provides illuminating insights into his strategic decision to harness Drupal as the linchpin of ZenSource. He emphasizes its open-source, enterprise-scale capabilities and alignment with the evolving needs of clients seeking to avoid vendor lock-in and exorbitant maintenance costs.

In another Interview, Krishnan Narayanan, CEO of Gai Technologies, shares his unique journey with Elma John. From bustling city life to the serene hills of Himachal Pradesh, Krishnan reshaped the software development landscape. He delves into the 'Dharamshala Model', a philosophy intertwining quality of life with the craftsmanship of technology. Additionally, Krishnan discusses the transformative power of mentorship and continuous learning in building innovative teams.

Elma provided detailed information in her featured article about the Accessible Tableau Integration Module. This module serves as a pivotal bridge between Tableau's robust visual analytics capabilities and Drupal's versatile web framework, ushering in a new era of accessibility and inclusivity. Crafted by Emilie Young and Gautam Gottipati, with initial support from Kevin Reynen and the University of Colorado, the module's inception was driven by Emilie's idea. Kevin initiated the project on Drupal.org, while Gautam wrote the code, contributing to its development and functionality.

Grzegorz Pietrzak presents a featured article on the trends of Content Management System (CMS) usage on official city websites. The comprehensive study spans 466 cities worldwide and delves into the prevalence of open-source solutions, highlighting the dominance of platforms like WordPress, Drupal, and others. The report categorizes findings based on city size, region, and EU membership, revealing intriguing patterns and insights. Through this research, essential questions about the adoption of open-source systems in the public sector are raised, shedding light on challenges and opportunities for promoting transparency and cost-efficiency in digital governance.

Binny Thomas, Senior Engineer at Axelerant, penned an article detailing his experience at DrupalCon Lille, where he volunteered as an onsite person for The DropTimes. His article provides insights into his overall journey at the event, offering valuable perspectives and firsthand accounts of his involvement.

Drupal enthusiasts have plenty to look forward to with several noteworthy updates and events: DrupalCamp Helsinki 2024 is set to take place on April 26, 2024, with a contribution day following on April 27. This event offers a platform for session proposals spanning various topics, catering to beginners and advanced users alike, both technical and non-technical. Meanwhile, Drupal Iberia is accepting session submissions until March 31. This presents an opportunity for Drupal enthusiasts to share their expertise and insights with the community.

Students have an excellent chance to participate in DrupalCamp Ghent 2024, as free tickets are being offered. Furthermore, it's worth noting that The Drop Times has become a media partner of DrupalCamp Ghent. DrupalCon Portland 2024 is looking for translation assistants who are interested in volunteering. It's a chance to contribute to the event's success and be part of a dynamic team. The detailed schedule for DrupalCon Portland has been released, allowing attendees to plan their itinerary and make the most of the event's offerings.

Additionally, Drupal Camp Asheville is making a return, calling for session proposals and training. It's an opportunity for individuals to share their knowledge and expertise with fellow Drupal enthusiasts. Session recordings from Florida Drupal Camp 2024 are now available online. This enables participants to catch up on missed sessions and benefit from the valuable insights shared during the event. The Drupal community has announced the opening of the Call for Papers for DrupalCon Barcelona.

The Drupal community is actively seeking applications for the Project Update Working Group, offering positions for both full and provisional members. Full members must possess the ability to opt-in for security coverage for Drupal.org projects, coupled with a proven track record in maintaining core or contributed projects. Further details can be found here.

We acknowledge that there are more stories to share. However, due to space constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Lastly, Happy International Women's Day to all the incredible women! Your strength, resilience, and contributions inspire us every day.

Thank you,

Sincerely,
Kazima Abbas
Sub-editor, The Drop Times

Categories: FLOSS Project Planets

Real Python: Python's __all__: Packages, Modules, and Wildcard Imports

Planet Python - Mon, 2024-03-04 09:00

Python has something called wildcard imports, which look like from module import *. This type of import allows you to quickly get all the objects from a module into your namespace. However, using this import on a package can be confusing because it’s not clear what you want to import: subpackages, modules, objects? Python has the __all__ variable to work around this issue.

The __all__ variable is a list of strings where each string represents the name of a variable, function, class, or module that you want to expose to wildcard imports.
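
As a quick illustration, a module can declare which names a wildcard import should expose. Here is a minimal sketch with hypothetical names:

```python
# shapes.py
__all__ = ["Circle", "Square"]  # names exposed by "from shapes import *"

class Circle: ...

class Square: ...

def helper():
    # importable explicitly, but excluded from wildcard imports
    ...
```

After from shapes import *, Circle and Square are available in your namespace, while helper is not, although from shapes import helper still works.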

In this tutorial, you’ll:

  • Understand wildcard imports in Python
  • Use __all__ to control the modules that you expose to wildcard imports
  • Control the names that you expose in modules and packages
  • Explore other use cases of the __all__ variable
  • Learn some benefits and best practices of using __all__

To get the most out of this tutorial, you should be familiar with a few Python concepts, including modules and packages, and the import system.

Get Your Code: Click here to download the free sample code that shows you how to use Python’s __all__ attribute.

Importing Objects in Python

When creating a Python project or application, you’ll need a way to access code from the standard library or third-party libraries. You’ll also need to access your own code from the multiple files that may make up your project. Python’s import system is the mechanism that allows you to do this.

The import system lets you get objects in different ways. You can use:

  • Explicit imports
  • Wildcard imports

In the following sections, you’ll learn the basics of both strategies. You’ll learn about the different syntax that you can use in each case and the result of running an import statement.

Explicit Imports

In Python, when you need to get a specific object from a module or a particular module from a package, you can use an explicit import statement. This type of statement allows you to bring the target object to your current namespace so that you can use the object in your code.

To import a module by its name, you can use the following syntax:

```python
import module [as name]
```

This statement allows you to import a module by its name. The module must be listed in Python’s import path, which is a list of locations where the path-based finder searches when you run an import.
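
You can inspect that import path from a running interpreter. A quick sketch (the output will vary from system to system):

```python
>>> import sys
>>> for entry in sys.path:
...     print(entry)
```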

The part of the syntax that’s enclosed in square brackets is optional and allows you to create an alias of the imported name. This practice can help you avoid name collisions in your code.

As an example, say that you have the following module:

```python
# calculations.py

def add(a, b):
    return float(a + b)


def subtract(a, b):
    return float(a - b)


def multiply(a, b):
    return float(a * b)


def divide(a, b):
    return float(a / b)
```

This sample module provides functions that allow you to perform basic calculations. The containing module is called calculations.py. To import this module and use the functions in your code, go ahead and start a REPL session in the same directory where you saved the file.

Then run the following code:

```python
>>> import calculations
>>> calculations.add(2, 4)
6.0
>>> calculations.subtract(8, 4)
4.0
>>> calculations.multiply(5, 2)
10.0
>>> calculations.divide(12, 2)
6.0
```

The import statement at the beginning of this code snippet brings the module name to your current namespace. To use the functions or any other object from calculations, you need to use fully qualified names with the dot notation.

Note: You can create an alias of calculations using the following syntax:

```python
import calculations as calc
```

This practice allows you to avoid name clashes in your code. In some contexts, it’s also common to use an alias simply to reduce the number of characters you have to type when using qualified names.

For example, if you’re familiar with libraries like NumPy and pandas, then you’ll know that it’s common to use the following imports:

```python
import numpy as np
import pandas as pd
```

Using shorter aliases when you import modules facilitates using their content by taking advantage of qualified names.

You can also use a similar syntax to import a Python package:

Read the full article at https://realpython.com/python-all-attribute/ »


Categories: FLOSS Project Planets

GNU Guix: Identifying software

GNU Planet! - Mon, 2024-03-04 08:58

What does it take to “identify software”? How can we tell what software is running on a machine to determine, for example, what security vulnerabilities might affect it?

In October 2023, the US Cybersecurity and Infrastructure Security Agency (CISA) published a white paper entitled Software Identification Ecosystem Option Analysis that looks at existing options to address these questions. The publication was followed by a request for comments; our comment as Guix developers didn’t make it on time to be published, but we’d like to share it here.

Software identification for cybersecurity purposes is a crucial topic, as the white paper explains in its introduction:

Effective vulnerability management requires software to be trackable in a way that allows correlation with other information such as known vulnerabilities […]. This correlation is only possible when different cybersecurity professionals know they are talking about the same software.

The Common Platform Enumeration (CPE) standard has been designed to fill that role; it is used to identify software as part of the well-known Common Vulnerabilities and Exposures (CVE) process. But CPE is showing its limits as an extrinsic identification mechanism: the human-readable identifiers chosen by CPE fail to capture the complexity of what “software” is.

We think functional software deployment as implemented by Nix and Guix, coupled with the source code identification work carried out by Software Heritage, provides a unique perspective on these matters.

On Software Identification

The Software Identification Ecosystem Option Analysis white paper released by CISA in October 2023 studies options towards the definition of a software identification ecosystem that can be used across the complete, global software space for all key cybersecurity use cases.

Our experience lies in the design and development of GNU Guix, a package manager, software deployment tool, and GNU/Linux distribution, which emphasizes three key elements: reproducibility, provenance tracking, and auditability. We explain in the following sections our approach and how it relates to the goal stated in the aforementioned white paper.

Guix produces binary artifacts of varying complexity from source code: package binaries, application bundles (container images to be consumed by Docker and related tools), system installations, system bundles (container and virtual machine images).

All these artifacts qualify as “software” and so does source code. Some of this “software” comes from well-identified upstream packages, sometimes with modifications added downstream by packagers (patches); binary artifacts themselves are the byproduct of a build process where the package manager uses other binary artifacts it previously built (compilers, libraries, etc.) along with more source code (the package definition) to build them. How can one identify “software” in that sense?

Software is dual: it exists in source form and in binary, machine-executable form. The latter is the outcome of a complex computational process taking source code and intermediary binaries as input.

Our thesis can be summarized as follows:

We consider that the requirements for source code identifiers differ from the requirements to identify binary artifacts.

Our view, embodied in GNU Guix, is that:

  1. Source code can be identified in an unambiguous and distributed fashion through inherent identifiers such as cryptographic hashes.

  2. Binary artifacts, instead, need to be the byproduct of a comprehensive and verifiable build process itself available as source code.

In the next sections, to clarify the context of this statement, we show how Guix identifies source code, how it defines the source-to-binary path and ensures its verifiability, and how it provides provenance tracking.

Source Code Identification

Guix includes package definitions for almost 30,000 packages. Each package definition identifies its origin—its “main” source code as well as patches. The origin is content-addressed: it includes a SHA256 cryptographic hash of the code (an inherent identifier), along with a primary URL to download it.
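
To illustrate what such an inherent identifier looks like, independently of Guix, the following Python sketch computes the SHA256 hash of a source archive (the file name here is hypothetical):

```python
import hashlib

# Content-addressing in a nutshell: the identifier derives from the
# bytes themselves, not from a name or a URL.
with open("hello-2.12.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)  # identical bytes yield the identical identifier, anywhere
```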

Since source is content-addressed, the URL can be thought of as a hint. Indeed, we connected Guix to the Software Heritage source code archive: when source code vanishes from its original URL, Guix falls back to downloading it from the archive. This is made possible thanks to the use of inherent (or intrinsic) identifiers both by Guix and Software Heritage.

More information can be found in this 2019 blog post and in the documents of the Software Hash Identifiers (SWHID) working group.

Reproducible Builds

Guix provides a verifiable path from source code to binaries by ensuring reproducible builds. To achieve that, Guix builds upon the pioneering research work of Eelco Dolstra that led to the design of the Nix package manager, with which it shares the same conceptual foundation.

Namely, Guix relies on hermetic builds: builds are performed in isolated environments that contain nothing but explicitly-declared dependencies—where a “dependency” can be the output of another build process or source code, including build scripts and patches.

An implication is that builds can be verified independently. For instance, for a given version of Guix, guix build gcc should produce the exact same binary, bit-for-bit. To facilitate independent verification, guix challenge gcc compares the binary artifacts of the GNU Compiler Collection (GCC) as built and published by different parties. Users can also compare to a local build with guix build gcc --check.

As with Nix, build processes are identified by derivations, which are low-level, content-addressed build instructions; derivations may refer to other derivations and to source code. For instance, /gnu/store/c9fqrmabz5nrm2arqqg4ha8jzmv0kc2f-gcc-11.3.0.drv uniquely identifies the derivation to build a specific variant of version 11.3.0 of the GNU Compiler Collection (GCC). Changing the package definition—patches being applied, build flags, set of dependencies—, or similarly changing one of the packages it depends on, leads to a different derivation (more information can be found in Eelco Dolstra's PhD thesis).

Derivations form a graph that captures the entirety of the build processes leading to a binary artifact. In contrast, mere package name/version pairs such as gcc 11.3.0 fail to capture the full breadth and depth of what leads to a binary artifact. This is a shortcoming of systems such as the Common Platform Enumeration (CPE) standard: it fails to express whether a vulnerability that applies to gcc 11.3.0 applies to it regardless of how it was built, patched, and configured, or whether certain conditions are required.

Full-Source Bootstrap

Reproducible builds alone cannot ensure the source-to-binary correspondence: the compiler could contain a backdoor, as demonstrated by Ken Thompson in Reflections on Trusting Trust. To address that, Guix goes further by implementing so-called full-source bootstrap: for the first time, literally every package in the distribution is built from source code, starting from a very small binary seed. This gives an unprecedented level of transparency, allowing code to be audited at all levels, and improving robustness against the “trusting-trust attack” described by Ken Thompson.

The European Union recognized the importance of this work through an NLnet Privacy & Trust Enhancing Technologies (NGI0 PET) grant allocated in 2021 to Jan Nieuwenhuizen to further work on full-source bootstrap in GNU Guix, GNU Mes, and related projects, followed by another grant in 2022 to expand support to the Arm and RISC-V CPU architectures.

Provenance Tracking

We define provenance tracking as the ability to map a binary artifact back to its complete corresponding source. Provenance tracking is necessary to allow the recipient of a binary artifact to access the corresponding source code and to verify the source/binary correspondence if they wish to do so.

The guix pack command can be used to build, for instance, container images. Running guix pack -f docker python --save-provenance produces a self-describing Docker image containing the binaries of Python and its run-time dependencies. The image is self-describing because the --save-provenance flag leads to the inclusion of a manifest that describes which revision of Guix was used to produce this binary. A third party can retrieve this revision of Guix and from there view the entire build dependency graph of Python, view its source code and any patches that were applied, and do so recursively for its dependencies.

To summarize, capturing the revision of Guix that was used is all it takes to reproduce a specific binary artifact. This is illustrated by the time-machine command. The example below deploys, at any time on any machine, the specific build artifact of the python package as it was defined in this Guix commit:

```sh
guix time-machine -q --commit=d3c3922a8f5d50855165941e19a204d32469006f \
  -- install python
```

In other words, because Guix itself defines how artifacts are built, the revision of the Guix source coupled with the package name unambiguously identify the package’s binary artifact. As scientists, we build on this property to achieve reproducible research workflows, as explained in this 2022 article in Nature Scientific Data; as engineers, we value this property to analyze the systems we are running and determine which known vulnerabilities and bugs apply.

Again, a software bill of materials (SBOM) written as a mere list of package name/version pairs would fail to capture as much information. The Artifact Dependency Graph (ADG) of OmniBOR, while less ambiguous, falls short in two ways: it is too fine-grained for typical cybersecurity applications (at the level of individual source files), and it only captures the alleged source/binary correspondence of individual files but not the process to go from source to binary.

Conclusions

Inherent identifiers lend themselves well to unambiguous source code identification, as demonstrated by Software Heritage, Guix, and Nix.

However, we believe binary artifacts should instead be treated as the result of a computational process; it is that process that needs to be fully captured to support independent verification of the source/binary correspondence. For cybersecurity purposes, recipients of a binary artifact must be able to map it back to its source code (provenance tracking), with the additional guarantee that they can reproduce the entire build process to verify the source/binary correspondence (reproducible builds and full-source bootstrap). As long as binary artifacts result from a reproducible build process, itself described as source code, identifying binary artifacts boils down to identifying the source code of their build process.

These ideas are developed in the 2022 scientific paper Building a Secure Software Supply Chain with GNU Guix.

Categories: FLOSS Project Planets

PyCharm: Best PyCharm Plugins 2024

Planet Python - Mon, 2024-03-04 07:00

PyCharm is designed to be used straight out of the box, but you can customize it further by installing the right plugins. 

In this article, we’ll be highlighting the top PyCharm plugins that can bring great value to you. They are not available out of the box, but if you’re looking for something to further enhance your productivity, they’re great options. 

Every plugin that we mention in this article can be found on JetBrains Marketplace. In sharing our top picks, we offer some plugins that we recommend trying, especially those that are popular with our users. Whether you’re a seasoned developer or just starting out, these plugins are sure to add value to your development process.

Let’s start by covering some basics. 

How to install PyCharm plugins

You can download the plugin from JetBrains Marketplace. Then, in PyCharm, go to Settings > Plugins > Install Plugin from Disk and select the plugin you just downloaded. 

A much more straightforward way is to find the plugin directly in the IDE. Go to Settings > Plugins > Marketplace, find the plugin you want to install, and then click the Install button. 

After installation, you’ll need to restart PyCharm to activate the new features. You can read more about how to install PyCharm plugins here.

How to disable PyCharm plugins

If you don’t want to use a plugin anymore, you can easily disable it.

To do so, navigate to File > Settings (or PyCharm > Settings on macOS) and select Plugins from the left-hand menu. On the Installed tab, you’ll see a list of your installed plugins. To disable a plugin, simply uncheck its box and apply the changes. 

Top AI coding tools

1. JetBrains AI Assistant

AI Assistant provides AI-powered features for software development based on the JetBrains AI service (please visit the page to find out more details about AI Assistant). The AI service will not be vendor-exclusive and will transparently connect users to different large language models (LLMs) in the future. 

At the moment, the service supports OpenAI’s GPT-3.5 and GPT-4 models and additionally hosts a number of smaller models created by JetBrains.

Its seamless integration with PyCharm means you can access these AI-driven features directly from the IDE.

This plugin is a must-have for developers who want to code smarter and faster while maximizing their code quality. This is only available for paid PyCharm Professional licenses, so student, open source, and educational licenses are excluded.

Features:

  • Use AI Assistant’s chat tool to ask questions about your code and get context-aware answers.
  • Generate a commit message by clicking the Generate Commit Message with AI Assistant button. 

  • Generate refactoring suggestions, documentation for declarations, and error explanations from the output. 
  • AI Assistant creates and maintains a library of user prompts.
  • Get suggestions for the names of classes, functions, variables, and more for selected languages. 
  • Get insights into the code’s functionality and structure.
  • Generate unit tests and create test files under the right directory.
  • Identify and pinpoint issues within the codebase.
  • Easily convert your code to a different programming language.
  • Craft personalized prompts using AI Assistant to suit your unique requirements.

The deep integration between AI Assistant and PyCharm means that suggestions can be made using deep contextual information, including the current file, recently used files, programming languages, dependencies, and the structure of the project. 

The model prompts are created using embedded ML models. This means that the outputs from AI Assistant are as relevant as possible, as they are able to take the users’ intent and current behavior into account.

2. GitHub Copilot

Copilot, developed by GitHub, was the first LLM-powered coding assistant to be released, and it demonstrated the potential of these powerful tools. This plugin acts as a collaborative partner, providing real-time code suggestions and assistance as you work within PyCharm.

Research suggests that Copilot enhances developers’ coding productivity by helping them code faster and reducing the number of repetitive tasks.

Features:

  • Copilot writes unit tests, letting the user select the test cases they want to run.
  • Can’t remember the correct SQL syntax? Copilot generates the SQL code for you.
  • Copilot encourages collaboration by suggesting code changes and offering insights on best practices.

3. Full Line Code Completion

The Full Line Code Completion plugin by JetBrains is based on a deep-learning model that runs on your local machine. This means that your data won’t be shared anywhere, and you can access the functionalities even when offline. 

It goes beyond the standard code suggestion functionality by offering complete lines of code tailored to your current context. This means you can quickly insert code, such as if statements, loops, functions, or class definitions with just a few keystrokes. 

This plugin is a productivity booster, especially for those who frequently write boilerplate code.

Features:

  • Perform multiple checks before getting code suggestions, ensuring the semantic correctness of the generated code.
  • Define custom code templates for frequently used code patterns. You can create your own code snippets and insert them with ease, further improving your coding efficiency.
  • Customize completion shortcuts. For example, choose between the Right Arrow key, the Tab key, the Enter key, etc.
  • Navigate to the plugin’s settings and information directly from the hover menu for quick navigation.
4. LLM by Hugging Face

LLM-IntelliJ by Hugging Face is a plugin that provides code completion powered by machine learning. It suggests code completions in your editor as you type your code.

It is well integrated with the Hugging Face platform, while also allowing you to choose different backends to host models. You can also use many different code models, such as StarCoder and CodeLlama, among others.

Features:

  • Benefit from ghost-text code completion.
  • Choose any model that supports code completion.
  • Works with Hugging Face’s Inference API, TGI, and other backends like Ollama. 
5. Tabnine: AI Code Completion & Chat 

Tabnine helps developers write code more efficiently and offers a built-in chat feature for communication with other developers, streamlining the coding process and promoting teamwork and knowledge sharing.

Features:

  • Tabnine provides context-aware code completions and can generate tests and documentation for existing code.
  • Tabnine’s AI assistant will help you get the right code the first time. It provides code guidance that’s consistent with your team’s best practices, saving costly and frustrating code review iterations.
  • It also improves code consistency across your entire project, suggesting completions that align with your best practices for code that’s easier to read, manage, and maintain.
Top PyCharm Plugins For Data Science

1. Big Data Tools

Big Data Tools simplifies the process of working with big data frameworks and technologies, significantly improving the development and debugging experience. 

The plugin seamlessly integrates with Apache Spark and Apache Hadoop, two of the most widely used big data frameworks. This integration empowers developers to create, debug, and optimize big data applications right from PyCharm.

The Big Data Tools plugin is only available for PyCharm Professional users.

Features:

  • Big Data Tools offers tools for visualizing big data results, which means you can spot trends, anomalies, and patterns in your data more efficiently. 
  • It provides performance monitoring capabilities, allowing developers to identify and rectify performance bottlenecks, which is critical in the world of big data.
2. OpenCV Image Viewer

The OpenCV Image Viewer facilitates the manipulation and analysis of images within PyCharm, making it a valuable tool for tasks ranging from image processing to computer vision research. 

Whether you’re developing algorithms for image recognition, object detection, or image filtering, the OpenCV Image Viewer simplifies the process by providing real-time image processing capabilities. 

Features:

  • You can easily load, display, and manipulate images in popular formats for computer vision tasks.
  • Perform real-time image processing and display results without stopping the debugger. 

This plugin is completely free as of November 2023, but the authors plan to shift to a freemium model soon, with many new features available to paid users. 

Check out our blog post entitled ‘How to quickly display OpenCV images during debugging’ for an easy tutorial that will ease you into using this plugin.

3. Lets-Plot in SciView

Lets-Plot in SciView is a plugin that combines the power of the Lets-Plot library with the SciView window for data visualization and analysis. 

This plugin is only available for PyCharm Professional users. 

Features:

  • Users can create a wide range of data visualizations, including scatter plots, bar charts, heatmaps, and more using Lets-Plot’s extensive capabilities.
  • They can also generate interactive plots and charts that can be customized and explored in real time within PyCharm.
Ready to take your coding to the next level?

JetBrains plugins utilize the latest technologies so you can streamline data science and coding tasks, eliminate repetitive work, and so much more. They boost your productivity, reduce errors, and allow you to tailor PyCharm to your specific needs.

It’s time to code smarter, not harder, and embrace the endless opportunities that PyCharm and its plugins offer.

Just head over to JetBrains Marketplace and start exploring!

Categories: FLOSS Project Planets

Drupal Association blog: Promoting contribution from the user registration

Planet Drupal - Mon, 2024-03-04 06:22

We have made a recent update on drupal.org that you probably haven’t noticed. In fact, although it's a meaningful and important section, I bet you haven’t seen it in months or even years. It's something you would only have seen when you registered on Drupal.org for the first time. I’m talking about the welcome email for newly registered users.

One of the goals we have had recently is improving the way our users register and interact with drupal.org for the first time. Improvements to onboarding should improve what I call the long tail of the (Open Source) contribution pipeline (a concept, the long tail of contribution, that I will explain further in the next few days).

For now, let’s have a look at one of the first things new users in our community saw first:

This is what I called the huge wall of text. Do you remember seeing it for the first time? Did you read any of it at all? Do you remember anything important in that email? Or did you just mark it read and move on?

Fortunately, we've taken a first, incremental step to make improvements. As I said before, this isn't something our existing userbase will see, but the new welcome email to Drupal.org has changed and been simplified quite a bit. Here is the new welcome email:

We have replaced a lot of the wall of text with a simpler set of sections and links, and landing pages on drupal.org. This simplifies the welcome email, but is also going to allow us to track which links are most useful to new users, how many pages they visit on drupal.org, where do they get stuck, what interests them most, etc - and use that to make further refinements over time.

The other section I wanted to include is something that is very important for the Drupal Association, but also for the whole community. I wanted to highlight the contribution area, something that was not even mentioned in the old email. We hope this is an opportunity to foster and promote contribution from new users.

A few weeks ago I also launched a poll around contribution. This poll, combined with these changes to user registration, is aimed at the same goal: improving contribution onboarding. You can still participate if you’d like to; just visit https://www.surveymonkey.com/r/XRTNNM3

Now, if you are curious about what I am calling the long tail of the contribution pipeline, watch this space.

Categories: FLOSS Project Planets

Colin Watson: Free software activity in January/February 2024

Planet Debian - Mon, 2024-03-04 05:39

Two months into my new gig and it’s going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I’ve done has also allowed some helpful introspection.

Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I’ve been spending the bulk of my time on debusine which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian’s summaries now, I’ve also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail.

January 2024
  • I added Incus support to autopkgtest. Incus is a system container and virtual machine manager, forked from Canonical’s LXD. I switched my laptop over to it and then quickly found that it was inconvenient not to be able to run Debian package test suites using autopkgtest, so I tweaked autopkgtest’s existing LXD integration to support using either LXD or Incus.
  • I discovered Perl::Critic and used it to tidy up some poor practices in several of my packages, including debconf. Perl used to be my language of choice but I’ve been mostly using Python for over a decade now, so I’m not as fluent as I used to be and some mechanical assistance with spotting common errors is helpful; besides, I’m generally a big fan of applying static analysis to everything possible in the hope of reducing bug density. Of course, this did result in a couple of regressions (1, 2), but at least we caught them fairly quickly.
  • I did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • I did some routine maintenance to move several of my upstream projects to a new Gnulib stable branch.
  • debmirror includes a useful summary of how big a Debian mirror is, but it hadn’t been updated since 2010 and the script to do so had bitrotted quite badly. I fixed that and added a recurring task for myself to refresh this every six months.
February 2024
  • Some time back I added AppArmor and seccomp confinement to man-db. This was mainly motivated by a desire to support manual pages in snaps (which is still open several years later …), but since reading manual pages involves a non-trivial text processing toolchain mostly written in C++, I thought it was reasonable to assume that some day it might have a vulnerability even though its track record has been good; so man now restricts the system calls that groff can execute and the parts of the file system that it can access. I stand by this, but it did cause some problems that have needed a succession of small fixes over the years. This month I issued DLA-3731-1, backporting some of those fixes to buster.
  • I spent some time chasing a console-setup build failure following the removal of kFreeBSD support, which was uploaded by mistake. I suggested a set of fixes for this, but the author of the change to remove kFreeBSD support decided to take a different approach (fair enough), so I’ve abandoned this.
  • I updated the Debian zope.testrunner package to 6.3.1.
  • openssh:
    • A Freexian collaborator had a problem with automating installations involving changes to /etc/ssh/sshd_config. This turned out to be resolvable without any changes, but in the process of investigating I noticed that my dodgy arrangements to avoid ucf prompts in certain cases had bitrotted slightly, which meant that some people might be prompted unnecessarily. I fixed this and arranged for it not to happen again.
    • Following a recent debian-devel discussion, I realized that some particularly awkward code in the OpenSSH packaging was now obsolete, and removed it.
  • I backported a python-channels-redis fix to bookworm. I wasn’t the first person to run into this, but I rediscovered it while working on debusine and it was confusing enough that it seemed worth fixing in stable.
  • I fixed a simple build failure in storm.
  • I dug into a very confusing cluster of celery build failures (1, 2, 3), and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable thanks to Stefano Rivera. Getting celery back into testing is blocked on the 64-bit time_t transition for now, but once that’s out of the way it should flow smoothly again.
Categories: FLOSS Project Planets

Open Source AI Definition – weekly update Mar 4

Open Source Initiative - Mon, 2024-03-04 05:00

A weekly summary of interesting threads on the forum.

The results from the working groups are in

The groups that analyzed OpenCV and Bloom have completed their work and the results of the votes have been published.

We now have a full overview of the result of the four (Llama2, Pythia, Bloom, OpenCV) working groups and the recommendations that they have produced.

Access our spreadsheet to see the complete overview of the compiled votes. This is a major milestone of the co-design process.

Discussion on access to training data continues

This conversation continues with a new question: what does openness look like when the original datasets are not accessible for privacy-preserving reasons?

Is the definition of “AI system” by the OECD too broad?

Central question: How can an “AI system” be precisely defined to avoid loopholes and ensure comprehensive coverage under open-source criteria?

A loosely defined “AI system” could create loopholes in open-source licensing, potentially allowing publishers to avoid certain criteria. 

Still, defining “AI system” is useful to clarify what constitutes open-source AI: it is needed to outline the necessary components, such as sharing training code and model parameters, while acknowledging the need for further work on aspects such as model architecture.

If you have not already seen it, our fourth town hall meeting was held on 23 February 2024. Access the recording here and the slides here.

A new town hall meeting is scheduled for this week.

Categories: FLOSS Research

LN Webworks: Drupal 10 Theming: All You Need to Know

Planet Drupal - Mon, 2024-03-04 04:58

Drupal 10 is the latest and most advanced version of Drupal, and it has transformed the world of Drupal development services. Right from its release, it has been winning the hearts of users worldwide. Numerous features and tools make Drupal 10 phenomenal, and its enhanced theming capabilities are one of them. Drupal developers and designers can take advantage of these theming capabilities to take the quality of their work to a whole new level. If you are still unfamiliar with Drupal 10 theming, this blog will help you learn everything about it in detail. 

Categories: FLOSS Project Planets

Django Weblog: Django security releases issued: 5.0.3, 4.2.11, and 3.2.25

Planet Python - Mon, 2024-03-04 03:55

In accordance with our security release policy, the Django team is issuing Django 5.0.3, Django 4.2.11, and Django 3.2.25. These releases address the security issue detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2024-27351: Potential regular expression denial-of-service in django.utils.text.Truncator.words()

The django.utils.text.Truncator.words() method (with html=True) and the truncatewords_html template filter were subject to a potential regular expression denial-of-service attack using a suitably crafted string (follow-up to CVE-2019-14232 and CVE-2023-43665).
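
For context, here is roughly how the affected method is called. This is a sketch with an arbitrary input string, not a demonstration of the attack:

```python
from django.utils.text import Truncator

html = "<p>The quick brown fox jumps over the lazy dog</p>"
# html=True enables the HTML-aware code path addressed by these releases;
# an explicit truncate string keeps this runnable outside a configured project.
print(Truncator(html).words(4, truncate=" ...", html=True))
```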

Thanks Seokchan Yoon for the report.

This issue has severity "moderate" according to the Django security policy.

Affected supported versions
  • Django 5.0
  • Django 4.2
  • Django 3.2
Resolution

Patches to resolve the issue have been applied to the 5.0, 4.2, and 3.2 release branches. The patches may be obtained from the following changesets:

The following releases have been issued:

The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance or the django-developers list. Please see our security policies for further information.

Categories: FLOSS Project Planets

Python GUIs: Working With Python Virtual Environments — Setting Your Python Working Environment, the Right Way

Planet Python - Mon, 2024-03-04 01:00

As Python developers, we often need to add functionality to our applications which isn't provided by the standard library. Rather than implement everything ourselves, we can instead install 3rd party Python packages from the official Python package index at PyPI using pip. The pip tool downloads and installs these 3rd party packages into our Python installation so we can immediately use them in our scripts and applications.

The Python standard library is a set of Python packages that come bundled with Python installers. This library includes modules and packages like math and usually Tkinter.

There is a caveat here though: we can only install a single version of a package at any one time. While this isn't a problem when you create your first project, when you create the second, any changes you make to the dependencies will also affect the first project. If any of your dependencies depend on other packages themselves (usually the case!), you may encounter conflicts where fixing the dependencies for one project breaks the dependencies for another.

This is known as dependency hell.

Thankfully, Python has a solution for this: Python virtual environments. Using virtual environments you can manage the packages for each project independently.

In this tutorial, we will learn how to create virtual environments and use them to manage our Python projects and their dependencies. We will also learn why virtual environments are an essential tool in any Python developer's arsenal.

Understand Why We Need Python Virtual Environments

When working on a project, we often rely on specific external or third-party Python package versions. However, as packages get updated, their way of working may change.

If we update a package, the changes in that package may mean other projects stop functioning. To solve this issue, we have a few different options:

  • Continue using the outdated version of the package in all our projects.
  • Update all our projects whenever a new version of the package comes out.
  • Use Python virtual environments to isolate the projects and their dependencies.

Not updating packages means that we won't be able to take advantage of new features or improvements. It can also affect the security of our project because we're not up-to-date with essential security updates and bug fixes for the Python packages our project relies on.

On the other hand, constantly updating our projects to keep them functional isn't much fun. While not always necessary, it quickly can become tedious and impractical, depending on the scale and the number of projects we have.

The last and best option is to use Python virtual environments. They allow us to manage the Python packages needed for each application separately. So we can choose when to update dependencies without affecting other projects that rely on those packages.

Using virtual environments gives us the benefit of being able to update packages without the risk of breaking all our projects at once. It gives us more time to make sure that each project works properly with the updated package version. It also avoids conflicts between package requirements and dependency requirements in our various projects.

Finally, using Python virtual environments is also recommended to avoid system conflicts because updating a package may break essential system tools or libraries that rely on a particular version of the package.

Explore How Python Virtual Environments Work

A virtual environment is a folder containing a lightweight installation of Python. It creates a stripped-down and isolated copy of the base Python installation on our system without requiring us to install Python again.

Because a virtual environment is an isolated copy of our current system Python, we will find a copy of the python and pip executables inside each virtual environment folder. Once we activate a virtual environment, any python or pip commands will point to these executables instead of the ones in our system installation.

We can check where our system currently points Python commands to by running python -c "import sys; print(sys.executable)". If we are not using a virtual environment, this command will show us where the system Python installation is located. Inside a virtual environment it will point to the environment's executable.

Using a virtual environment also means that pip will install external packages in the environment's site folder rather than in the system's. This way, we can keep different versions of a Python package installed in independent virtual environments. However, we still can only have one version of a given package per virtual environment.
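
To make this concrete, here is a small sketch you can run both inside and outside an activated environment; the two paths it prints move into the environment's folder once the environment is active:

```python
import sys
import sysconfig

# The interpreter currently answering "python" commands
print(sys.executable)

# The site-packages directory where pip installs packages
print(sysconfig.get_paths()["purelib"])
```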

Install Python On Your System

If you haven't done so yet, you need to install Python on your development machine before you can use it.

Install Python on Windows or macOS

You can install Python by going to its download page and grabbing the specific installer for either Windows or macOS. Then, you have to run the installer and follow the on-screen instructions. Make sure you select the option to Add Python to PATH during the installation process.

Python is also available for installation through the Microsoft Store on Windows machines.

Install Python on Linux

If you are on Linux, you can check if Python is installed on your machine by running the python3 --version command in a terminal. If this command issues an error, you can install Python from your Linux distribution repository.

You will find that the Python version available in most Linux repositories is relatively old. To work around this issue, you can use tools like pyenv, which let you install and switch between multiple Python versions (see the example below).
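For instance, with pyenv already installed, fetching and selecting a newer interpreter might look like the following. This is only a sketch: the version number is an example, and the exact invocation can vary with your pyenv setup:

$ pyenv install 3.12.2
$ pyenv global 3.12.2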

To install Python from the default repositories on Ubuntu and Debian, execute:

$ sudo apt install python3

You may also need to install pip, the Python package installer, and venv, the module that allows you to create virtual environments. To do this, run the following command:

$ sudo apt install python3-pip python3-venv

Note that only some Linux distributions, including Ubuntu and Debian, require you to install pip and venv separately.

Create Python Virtual Environments

The standard way to create virtual environments in Python is to use the venv module, which is a part of the standard library, so you shouldn't need to install anything additional on most systems.

A virtual environment is a stripped-down and isolated copy of an existing Python installation, so it doesn't require downloading anything.

You'll typically create one virtual environment per project. However, you can also keep custom virtual environments for other purposes on your system.

To add a new virtual environment to a project, go to your project folder and run the following command in a terminal:

$ python -m venv ./venv

If you check inside your project folder now, you'll see a new subfolder named venv. This folder contains the virtual environment you just made.

Using venv, env, or .venv as the virtual environment name is a common and accepted practice in the Python community.
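If you ever need to create environments programmatically rather than from the command line, the same machinery is exposed through the standard-library venv module. Here's a minimal sketch, where the target folder name is just an example:

import venv

# Create a virtual environment under ./venv and bootstrap pip into it
venv.create("./venv", with_pip=True)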

Use Python Virtual Environments

Now that you've successfully created your Python virtual environment, you can start using it to install whatever packages you need for your project. Note that every new virtual environment is like a fresh Python installation, with no third-party packages available.

Unless you pass the --system-site-packages switch to the venv command when you create the virtual environment, it will only contain the Python standard library and a couple of required packages. For any additional packages, you need to use pip to install them.
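For reference, creating an environment that can also see the system's site packages looks like this. It's rarely what you want for isolated project work, so treat it as an opt-in exception:

$ python -m venv --system-site-packages ./venv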

In the following sections, you'll learn how to activate your virtual environment, install packages, manage dependencies, and more.

Activate the Virtual Environment

To start using a virtual environment, you need to activate it. The command to do this depends on your operating system and terminal shell.

On Unix systems, such as macOS and Linux, you can run the following command:

$ source venv/bin/activate

This command activates your virtual environment, making it ready for use. You'll know that because your prompt will change from $ to (venv) $. Go ahead and run the following command to check your current Python interpreter:

(venv) $ python -c "import sys; print(sys.executable)"
/path/to/project/venv/bin/python

The output of this command will contain the path to the virtual environment interpreter. This interpreter is different from your system interpreter.

If you're on Windows and using PowerShell, then you can activate your virtual environment by running the following command:

PS> venv\Scripts\activate

Again, you'll know the environment is active because your prompt will change from PS> to (venv) PS>.
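If you're using the classic Command Prompt (cmd.exe) rather than PowerShell, the activation script is a batch file instead:

C:\> venv\Scripts\activate.bat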

Great! With these steps completed, we're now ready to start using our Python virtual environment.

Install Packages in the Virtual Environment

Once the virtual environment is active, we can start using pip to install any packages our project requires. For example, say you want to install the PyQt GUI framework to create desktop apps. In this case, you can run the following command:

(venv) $ python -m pip install pyqt6

This command downloads and installs PyQt directly from the Python Package Index (PyPI). Now you can start working on your GUI project.

Once you've activated a virtual environment, you can use pip directly without the python -m prefix. However, the python -m pip form is considered best practice because it guarantees that you're running the pip that belongs to the active interpreter.

You can also install multiple packages in one go:

(venv) $ python -m pip install black flake8 mypy pytest

To install multiple packages with a single command, you only have to list the desired packages separated by spaces, as you did in the above command.

Finally, sometimes it's also useful to update a given package to its latest version:

(venv) $ python -m pip install --upgrade pyqt6

The --upgrade flag tells pip to install the latest available version of an already installed package.
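To double-check which version of a package ended up in the environment, pip show prints its metadata. The output below is abbreviated and illustrative:

(venv) $ python -m pip show pyqt6
Name: PyQt6
Version: 6.5.0
...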

Deactivate a Virtual Environment

Once you are done working on your GUI app, you need to deactivate the virtual environment so you can switch back to your system shell or terminal. Remember that you'll have to reactivate this virtual environment next time you need to work on your project.

Here's how you can deactivate a Python virtual environment:

(venv) $ deactivate

This command deactivates your virtual environment and gets you back into your system shell. You can run the python -c "import sys; print(sys.executable)" command to check that your current Python interpreter is your system Python.

Manage the Location of Your Virtual Environments

Although you can place your virtual environments anywhere on your computer, there are a few standardized places for them to live. The most common one is inside the relevant project folder. As you already learned, the folder will usually be named venv or .venv.

If you use Git for version control, make sure to ignore the virtual environment folder by adding it to .gitignore.
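A minimal .gitignore entry covering the common environment folder names could look like this:

venv/
.venv/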

If you want to have all your virtual environments in a central location, then you can place them in your home folder. Directories like ~/.venvs and ~/.virtualenvs are commonly used locations.

The Python extension for Visual Studio Code automatically looks for virtual environments in a couple of places, including inside your current project folder and in specific folders in your home directory (if they exist). If you want it to search additional external folders, go to Settings and configure python.venvFolders or python.venvPath. To avoid confusion, it's a good idea to name each virtual environment in such a folder after the relevant project.

Delete a Python Virtual Environment

If you are no longer working with a given virtual environment, you can get rid of it and all the related Python packages by simply deleting the virtual environment folder.

If you delete the virtual environment associated with a given project, then to run the project again you'll have to create a new virtual environment and reinstall the required packages.
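On Unix-like systems, deleting and recreating an environment boils down to removing the folder and running venv again; a quick sketch assuming the environment lives in ./venv:

$ rm -rf venv
$ python -m venv venv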

Manage Your Project's Dependencies With pip

You've already learned how to create Python virtual environments to isolate the dependencies of your projects and avoid package versioning issues across different projects. You also learned how to install external dependencies with pip.

In the following sections, you'll learn how to efficiently manage a project's dependencies using pip and requirement files.

Generate a Requirement File

An important advantage of using a dedicated virtual environment for each of our Python projects is that the environment will only contain the packages for that specific project. We can use the pip freeze command to get the list of currently installed packages in any active virtual environment:

(venv) $ python -m pip freeze
PyQt6==6.5.0
PyQt6-Qt6==6.5.0
PyQt6-sip==13.5.1

This command also lets us create a text file containing the list of packages our project needs to run. To do this, go ahead and run the following command:

(venv) $ python -m pip freeze > requirements.txt

After running the above command, you'll have a new file called requirements.txt in your project's directory.

By convention, the Python community uses the name requirements.txt for those files containing the list of dependencies of a given project. This file typically lives in the project's root directory.

Run the following command to check the content of this new file:

$ cat requirements.txt
PyQt6==6.5.0
PyQt6-Qt6==6.5.0
PyQt6-sip==13.5.1

As you can see, your requirements.txt file lists those packages that you've installed in the active virtual environment. Once you have this file in place, anyone who needs to run your project from its source will know which packages they need to install. More importantly, they can use the file to install all the dependencies automatically, as you'll learn in the following section.

Install Project Dependencies From a Requirement File

Besides letting us see the list of Python packages our application needs, the requirements file also makes installing those packages into a new virtual environment just one command away:

(new_venv) $ python -m pip install -r requirements.txt

The -r option of pip install allows you to provide a requirement file as an argument. Then, pip will read that file and install all the packages listed in it. If you run pip freeze again, you'll note that this new environment has the same packages installed.

Tweak the Requirements File

Although using pip freeze is quite convenient, it often clutters the requirements file with packages that our application may not rely on directly, such as dependencies of our required packages, or packages that are only needed for development.

The generated file also pins the exact version of each installed package. While this can be useful, it's usually best to keep the requirements file lean. For example, dependencies of our dependencies can often be omitted, since managing those is the responsibility of the packages that require them. That way, it's easy to see which packages our project really requires.

For example, in a GUI project, the requirements.txt file may only need to include PyQt6. In that case it would look like this:

$ cat requirements.txt
PyQt6==6.5.0

Specifying an upper or lower bound for the version of required packages may also be beneficial. To do this, we can replace the exact-version pin (==) with comparison operators such as <= or >=, depending on our needs, as in the example below. If we completely omit the package version, pip will install the latest version available.
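For example, a requirements.txt using a version range instead of an exact pin might look like this; the bounds shown are illustrative:

# Accept any release from 6.5 up to, but not including, 7.0
PyQt6>=6.5,<7.0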

Create a Development Requirement File

We should consider splitting our project requirement file into two separate files. For example, requirements.txt and requirements_dev.txt. This separation lets other developers know which packages are required for your project and which are solely relevant for development and testing purposes.

For example, you may use Black for formatting, flake8 for linting, mypy for type checking, and pytest for testing your code. In this case, your requirements_dev.txt file should look something like this:

$ cat requirements_dev.txt
black
flake8
mypy
pytest

With this file in place, developers who want to contribute to your project can install the required development dependencies easily.
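One handy trick, supported directly by pip, is to have the development file pull in the runtime requirements, so a single command installs everything. A sketch of what requirements_dev.txt could then look like:

-r requirements.txt
black
flake8
mypy
pytest

With that in place, python -m pip install -r requirements_dev.txt sets up a complete development environment in one go.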

Conclusion

By now, you have a good understanding of how Python virtual environments work. They are straightforward to create and make it easy to manage the Python packages you need for developing your applications and projects.

Avoid installing Python packages outside of a virtual environment whenever possible. Creating a dedicated environment for a small Python script may not make sense. However, it's always a good idea to start any Python project that requires external packages by creating its own virtual environment.

Categories: FLOSS Project Planets

Petter Reinholdtsen: RAID status from LSI Megaraid controllers using free software

Planet Debian - Sun, 2024-03-03 16:40

The last few days I have revisited RAID setup using the LSI Megaraid controller. These are a family of controllers called PERC by Dell, present in several old PowerEdge servers, and I recently got my hands on one of these. I had forgotten how to handle this RAID controller in Debian, so I had to take a peek at the Debian wiki page "Linux and Hardware RAID: an administrator's summary" to remember what kind of software is available to configure and monitor the disks and controller. I prefer Free Software alternatives to proprietary tools, as the latter tend to fall into disarray once the manufacturer loses interest, and often do not work with newer Linux distributions. Sadly there is no free software tool to configure the RAID setup, only to monitor it. RAID can provide improved reliability and resilience in a storage solution, but only if it is checked regularly and any broken disks are replaced in time. I thus want to ensure some automatic monitoring is available.

In the discovery process, I came across an old free software tool to monitor PERC2, PERC3, PERC4 and PERC5 controllers, which to my surprise is not present in Debian. To help change that I created a request for packaging of the megactl package, and tried to track down a usable version. The original project site is on SourceForge, but as far as I can tell that project has been dead for more than 15 years. I managed to find a more recent fork on GitHub from user hmage, but it is unclear to me if this is still being maintained; it has not seen many improvements since 2016. A more up-to-date edition is a git fork of the original GitHub fork by user namiltd, and this newer fork seems a lot more promising. The owner of this GitHub repository has replied to change proposals within hours, and has already added some improvements and support for more hardware. Sadly he is reluctant to commit to maintaining the tool, and stated in my first pull request that he thinks a new release should be made based on the git repository owned by hmage. I perfectly understand this reluctance, as I feel the same about maintaining yet another package in Debian when I barely have time to take care of the ones I already maintain, but I do not really have high hopes that hmage will have time to spend on it, and I hope namiltd will change his mind.

In any case, I created a draft package based on the namiltd edition and put it under the debian group on salsa.debian.org. If you own a Dell PowerEdge server with one of the PERC controllers, or any other RAID controller using the megaraid or megaraid_sas Linux kernel modules, you might want to check it out. If enough people are interested, perhaps the package will make it into the Debian archive.

There are two tools provided, megactl for the megaraid Linux kernel module, and megasasctl for the megaraid_sas Linux kernel module. The simple output from the command on one of my machines looks like this (yes, I know some of the disks have problems :).

# megasasctl
a0       PERC H730 Mini      encl:1 ldrv:2  batt:good
a0d0       558GiB RAID 1   1x2  optimal
a0d1      3067GiB RAID 0   1x11 optimal
a0e32s0    558GiB  a0d0  online   errs: media:0  other:19
a0e32s1    279GiB  a0d1  online
a0e32s2    279GiB  a0d1  online
a0e32s3    279GiB  a0d1  online
a0e32s4    279GiB  a0d1  online
a0e32s5    279GiB  a0d1  online
a0e32s6    279GiB  a0d1  online
a0e32s8    558GiB  a0d0  online   errs: media:0  other:17
a0e32s9    279GiB  a0d1  online
a0e32s10   279GiB  a0d1  online
a0e32s11   279GiB  a0d1  online
a0e32s12   279GiB  a0d1  online
a0e32s13   279GiB  a0d1  online
#

In addition to displaying a simple status report, it can also test individual drives and print the various event logs. Perhaps you too find it useful?

In the packaging process I provided some patches upstream to improve the installation and to ensure an AppStream metainfo file is provided listing all supported hardware, allowing isenkram to propose the package on all servers with a relevant PCI card.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppArmadillo 0.12.8.1.0 on CRAN: Upstream Fix, Interface Polish

Planet Debian - Sun, 2024-03-03 16:14

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1130 other packages on CRAN, downloaded 32.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 578 times according to Google Scholar.

This release brings a new upstream bugfix release, Armadillo 12.8.1, prepared by Conrad yesterday. It was delayed for a few hours as CRAN noticed an error in one package, which we all concluded was spurious as it could not be reproduced outside of the one run there. Following from the previous release, we also use the slightly faster ‘Lighter’ header in the examples. And once it got to CRAN, I also updated the Debian package.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.1.0 (2024-03-02)
  • Upgraded to Armadillo release 12.8.1 (Cortisol Injector)

    • Workaround in norm() for yet another bug in macOS accelerate framework
  • Update README for RcppArmadillo usage counts

  • Update examples to use '#include <RcppArmadillo/Lighter>' for faster compilation excluding unused Rcpp features

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Ben Hutchings: FOSS activity in February 2024

Planet Debian - Sun, 2024-03-03 14:28
  • I updated the Linux kernel packages in various Debian suites:
    • buster: Updated linux-5.10 to the latest security update for bullseye, and uploaded it, but it still needs to be approved.
    • bullseye-backports: Updated linux (6.1) to the latest security update from bullseye, and uploaded it.
    • bookworm-backports: Updated linux to the current version in testing, and uploaded it.
  • I reported a regression in documentation builds in the Linux 5.10 stable branch.
Categories: FLOSS Project Planets

Iustin Pop: New corydalis 2024.9.0 release!

Planet Debian - Sun, 2024-03-03 08:15

Obligatory and misused quote: It’s not dead, Jim!

I’ve kind of dropped the ball lately on organising my own photo collection, but February was a pretty good month and I managed to write some more code for Corydalis, ending up with the aforementioned new release.

The release is not a big one, but I did manage to solve one thing that was annoying me greatly: the lack of ability to play videos inline in one of the two picture viewing modes (my preferred mode, in fact). Now, whether you’re browsing through pictures or looking at them one by one, you can in both cases play videos easily and, to some extent, “as it should be”. No user docs for that yet (I actually need to split the manual into user/admin/developer parts).

I did some more internal cleanups, and I’ve enabled building release zips (since that’s how GitHub Actions creates artifacts), which means it should be 10% easier to test this. The remaining 90% is configuring it and pointing it at picture folders and and and, so this is definitely not plug-and-play.

The diff summary between 2023.44.0 and 2024.9.0 is: 56 files changed, 1412 insertions(+), 700 deletions(-). Which is not bad, but also not too much. The biggest churn was, as expected, in the viewer (due to the aforementioned video playing). The “scary” part is that the TypeScript code is now at 7.9% (plus a tiny bit more JS, which I can’t convert yet due to lack of type definitions upstream). I say scary in quotes because I would actually like to know TypeScript better, but no time.

The new release can be seen in action on demo.corydalis.io, and as always, just after release I found two minor issues:

  • The GitHub Actions don’t retrieve the tags by default (actually, they didn’t use to retrieve tags at all; that’s fixed now and just needs configuration), so the public build just says “Corydalis fbe0088, built on Mar 3 2024.” (which is the correct hash value, at least).
  • I don’t have videos on the public web site, so the new functionality is not visible. I’m not sure I want to add real videos (size/bandwidth), hmm 🤨.

Well, there will be future releases. For now, I’ve made an open-source package release, which I didn’t do in a while, so I’m happy 😁. See you!

Categories: FLOSS Project Planets

Paul Wise: FLOSS Activities Feb 2024

Planet Debian - Sun, 2024-03-03 02:52
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Spam: reported 1 Debian bug report
  • Debian BTS usertags: changes for the month
Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: ovito, tahoe-lafs, tpm2-tss-engine
  • Debian wiki: produce HTML dump for a user, unblock IP addresses, approve accounts
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC
Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

Categories: FLOSS Project Planets

This week in KDE: a smooth release

Planet KDE - Sat, 2024-03-02 20:43

Last Tuesday, the KDE Mega-Release came out, and I’m happy to report that it went well. Initial impressions seem to be overwhelmingly positive! I’ve been doing extra bug triage and social media monitoring since then to see if there were any major issues, and so far things look really good on the bug front too. I think our 3 months of QA paid off! So congratulations everyone for a job well done! Hopefully this should help banish those now 16-year-old painful memories of KDE 4. It’s a new KDE now. Harder, better, faster, stronger!

The roll-out in Neon has been a bit rockier, unfortunately. At this point, most of the packaging issues have been fixed, and folks who encountered them are strongly encouraged to update again. We’re doing an investigation into how this happened, so we can prevent it in the future. So thanks for your patience there, Neon users!

Needless to say, the week was full of other bug-fixing activity as well. There were still a few regressions, many of which have already been fixed, amazingly. I am just so impressed with KDE’s contributors this week!

New Features

There’s a new KWin effect called “Hide Cursor” (off by default for now, but try it!) that will automatically hide the pointer after a period of inactivity (Jin Liu, Plasma 6.1. Link)

On System Settings’ Legacy App Permissions page, there’s now an option to allow XWayland apps to eavesdrop on mouse buttons as well (Oliver Beard, Plasma 6.1. Link)

UI Improvements

The message shown on widgets not compatible with Plasma 6 is now clearer (Niccolò Venerandi, Plasma 6.1, though it might end up backported to 6.0.1 or 6.0.2. Link 1 and link 2):

When you try to activate the Cube effect with fewer than 3 virtual desktops, it will now tell you why it’s not working and prompt you to add some more virtual desktops so it will work (Vlad Zahorodnii, Plasma 6.1. Link):

The default top-left hotcorner that triggers Overview once again closes the effect if you trigger it a second time, while Overview is still open (Vlad Zahorodnii, Plasma 6.0.1)

A number of pages in System Settings have been modernized to move buttons that were on the bottom up to the top, and make their placeholder messages more consistent (Shubham Arora, me: Nate Graham, Fushan Wen, and Jakob Petsovits, Plasma 6.1. Link 1, link 2, link 3, link 4, link 5, link 6, link 7, link 8, link 9, link 10, link 11, and link 12):

In Kirigami-based apps, the animation used when moving from one page to another is now a lot nicer and smoother (Devin Lin, Frameworks 6.1. Link)

You know that awkward little line in the toolbars of Kirigami-based apps that separates the sidebar from the content area? Now it has the appearance of a normal toolbar separator line (Carl Schwan, Frameworks 6.1. Link):

Bug Fixes

Taking a screenshot in Spectacle immediately after a screen recording now works (Noah Davis, Spectacle 24.02.1. Link)

VLC’s fullscreen mode once again works (David Edmundson, Plasma 6.0.1. Link)

Fixed a source of brief screen freezes in the X11 session (Vlad Zahorodnii, Plasma 6.0.1. Link)

Fixed a random-seeming crash in Plasma (Fushan Wen, Plasma 6.0.1. Link)

Dragging desktop files or folders onto another screen no longer causes them to temporarily disappear, and the fix for this issue also fixes a crash in Plasma that could be caused by dragging files or folders from the desktop into a folder visible in Dolphin (Marco Martin, Plasma 6.0.1. Link 1 and link 2)

Clicking on the “Defaults” button on System Settings’ Task Switcher page no longer breaks your task switcher until you manually choose it again (Marco Martin, Plasma 6.0.1. Link)

When a panel popup is open, clicking on something else on the panel once again activates that thing instead of just closing the open popup (David Edmundson, Plasma 6.0.1. Link)

There’s once again a blue outline around the active (and now also hovered) virtual desktop in the Desktop Grid view of the Overview effect (Akseli Lahtinen, Plasma 6.0.1. Link)

When you right-click on a panel in Auto-Hide mode and select “Add Widgets…”, the panel no longer frustratingly closes again right after the Widget Explorer opens, which previously prevented you from actually adding a widget to the panel that way, and made you want to throw your computer out the window (Niccolò Venerandi, Plasma 6.0.1. Link)

In the Tile Editor screen, you can no longer break your tile layout by dragging splits on top of other splits (Akseli Lahtinen, Plasma 6.0.1. Link)

Clicking on the search field in the Overview effect no longer closes it (Patrik Fábián, Plasma 6.0.1. Link)

Saving changes made to commands assigned to global shortcuts now works (me: Nate Graham, Plasma 6.0.1. Link)

Fixed some glitches with the new Cube effect: zooming with a scroll (did you even know that was a thing?! I didn’t!) now goes in the direction you would expect, and zooming in or out too far no longer clips away the cube (Vlad Zahorodnii, Plasma 6.0.1. Link 1 and link 2)

When sending a file to a Bluetooth device, the notification that indicates the progress of the transfer no longer shows a broken link after the transfer finishes (Kai Uwe Broulik, Plasma 6.0.1. Link)

MPV windows with the “Keep Aspect Ratio” setting turned on are now full-screenable (Łukasz Patron, Plasma 6.1. Link)

The layout of the “Overwrite this file?” dialog in “Get New [Thing]” windows is no longer visually broken (Akseli Lahtinen, Frameworks 6.1. Link)

Fixed an issue that could cause glitchy horizontal lines to appear on graphs and charts in System Monitor when using a fractional scale factor with certain integrated Intel GPUs (Arjen Hiemstra, Frameworks 6.1. Link)

Other bug information of note:

Automation & Systematization

Created documentation about how to write Appium-based GUI tests for KDE software (Fushan Wen and Thiago Sueto, link)

Created documentation about how to expose C++ models to the QML GUI side in KDE software (Thiago Sueto, link)

Added a test to ensure the functioning of the SystemDialog component, which powers a number of portal-based permission dialogs (Fushan Wen, link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

Thanks to you, our Plasma 6 fundraiser has been a crazy success! The final number of members is an amazing 885. Not quite 1000, but given that the original goal was 500 (which I even viewed as wildly optimistic at the beginning), I’m just bowled over by the level of support here. Thank you everyone for the confidence you’ve shown in us; we’ll try not to screw it up! For those who haven’t donated yet, it’s not too late!

If you’re a developer, let’s continue to focus on bug reports for the next week or two in the software we’re involved with, to make sure that any issues people find get noticed and fixed. I want that perception of quality to continue! We’re building up a good reputation here, so let’s keep pushing for just a bit longer before we pivot to feature work for Plasma 6.1.

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

New blogs.kde.org

Planet KDE - Sat, 2024-03-02 19:00
I'm happy to announce the launch of our revamped Blogs.KDE.org website, now powered by Hugo instead of Drupal 7! This switch was motivated by the fact that Drupal 7 is now reaching end of life and since Blogs.
Categories: FLOSS Project Planets

Ravi Dwivedi: Malaysia Trip

Planet Debian - Sat, 2024-03-02 08:59

Last month, I had a trip to Malaysia and Thailand. I stayed for six days in each of the countries. The selection of these countries was due to both of them granting visa-free entry to Indian tourists for some time window. This post covers the Malaysia part and Thailand part will be covered in the next post. If you want to travel to any of these countries in the visa-free time period, I have written all the questions asked during immigration and at airports during this trip here which might be of help.

I mostly stayed in Kuala Lumpur and went to places around it. Before the trip, I had planned to visit Ipoh and the Cameron Highlands too, but I could not cover them. I found planning a trip to Malaysia a little difficult. The country is divided into two main landmasses: Peninsular Malaysia and Borneo. Then there are more islands: Langkawi, Penang Island, and the Perhentian and Redang Islands. Reaching those islands seemed a little difficult to plan, and I wish to visit more places on my next Malaysia trip.

My first day’s hostel was booked in the Chinatown part of Kuala Lumpur, near the Pasar Seni LRT station. As soon as I checked in and entered my room, I met another Indian named Fletcher, and after that we accompanied each other on the trip. That day, we went to Muzium Negara and Little India. I realized that if you know the right places to buy what you want, Malaysia can be quite cheap. The Malaysian currency is the Malaysian Ringgit (MYR); 1 MYR is equal to 18 INR. For 2 MYR, you can get a good masala tea in Little India, and a masala dosa costs around 4-5 MYR. Vegetarian food is widely available in Kuala Lumpur, thanks to the Tamil community. I also tried Mee Goreng, which was vegetarian, and I found it fine in terms of taste. When I checked about Mee Goreng on Wikipedia, I found out that it is unique to Indian immigrants in Malaysia (and neighbouring countries), but you don’t get it in India!

Mee Goreng, a dish made of noodles in Malaysia.

For the next day, Fletcher had planned a trip to Genting Highlands and pre-booked everything. I also planned to join him, but when we went to KL Sentral to take the bus, his bus tickets were sold out. I could have taken a bus at a different time, but decided to visit some other place for the day and cover Genting Highlands later. At the ticket counter, I met a family from Delhi who wanted to go to Genting Highlands, but since they could not get bus tickets for that day, they bought tickets for the next day and instead planned for Batu Caves. I joined them and went to Batu Caves.

After returning from Batu Caves, we went our separate ways. I went back and took rest at my hostel and later went to Petronas Towers at night. Petronas Towers is the icon of Kuala Lumpur. Having a photo there was a must. I was at Petronas Towers at around 9 PM. Around that time, Fletcher came back from Genting Highlands and we planned to meet at KL Sentral to head for dinner.

Me at Petronas Towers.

We went back to the same place as the day before where I had Mee Goreng. This time we had dosa and a masala tea. Their masala tea from the last day was tasty and that’s why I was looking for them in the first place. We also met a Malaysian family having Indian ancestry dining there and had a nice conversation. Then we went to a place to eat roti canai in Pasar Seni market. Roti canai is a popular non-vegetarian dish in Malaysia but I took the vegetarian version.

Photo with Malaysians.

The next day, we went to Berjaya Times Square, a shopping place which sells pretty cheap items for daily use and souvenirs too. However, I bought souvenirs from Petaling Street, which is in Chinatown. At night, we explored Bukit Bintang, which is the heart of Kuala Lumpur and is famous for its nightlife.

After that, Fletcher went to Bangkok, and I stayed in Malaysia for two more days. The next day, I went to Genting Highlands and took the cable car, which had awesome views. I came back to Kuala Lumpur by night. The remaining day I just roamed around Bukit Bintang. Then I took a flight to Bangkok on 7th Feb, which I will cover in the next post.

In Malaysia, I met so many people from different countries. Apart from people from the Indian subcontinent, I met Syrians, Indonesians (Malaysia seems to be a popular destination for Indonesian tourists) and Burmese people. Meeting people from other cultures is an integral part of travel for me.

My expenses for Food + Accommodation + Travel added to 10,000 INR for a week in Malaysia, while flight costs were: 13,000 INR (Delhi to Kuala Lumpur) + 10,000 INR (Kuala Lumpur to Bangkok) + 12,000 INR (Bangkok to Delhi).

For OpenStreetMap users, good news is Kuala Lumpur is fairly well-mapped on OpenStreetMap.

Tips
  • I bought a local SIM from a shop at the KL Sentral station complex which had “news” in its name (I forgot the exact name, and there are two shops having “news” in their name); it was the cheapest option I could find. The SIM cost 10 MYR for 5 GB of data for a week. If you want to make calls too, you need to spend an extra 5 MYR.

  • 7-Eleven and KK Mart convenience stores are everywhere in the city and they are open all the time (24 hours a day). If you are a vegetarian, you can at least get some bread and cheese from there to eat.

  • A lot of people know English (and many - Indians, Pakistanis, Nepalis - know Hindi) in Kuala Lumpur, so I had no language problems most of the time.

  • For shopping on a budget, you can go to Petaling Street, Berjaya Times Square or Bukit Bintang. In particular, there is a shop named I Love KL Gifts in Bukit Bintang which had very good prices, just near the metro/monorail station. Check out the location of the shop on OpenStreetMap.

Categories: FLOSS Project Planets

Megarelease Teething Problems

Planet KDE - Sat, 2024-03-02 07:16

As many have noticed, Neon’s release of Plasma 6 was not without its problems. We would like to apologise for the totally unexpected packaging problems as “Testing Edition” and “Unstable Edition” had been working just fine without these issues.

Of course the first round of fixes have already hit the “User Edition” archives. Expect more to follow as we continue to ‘Q.A.’ the packages and eliminate as many bugs as we can.

Categories: FLOSS Project Planets
