Feeds

Real Python: Generating QR Codes With Python

Planet Python - Tue, 2024-04-09 10:00

From restaurant e-menus to airline boarding passes, QR codes have numerous applications that impact your day-to-day life and enrich the user’s experience. Wouldn’t it be great to make them look good, too? With the help of this video course, you’ll learn how to use Python to generate beautiful QR codes for your personal use case.

In its most basic format, a QR code contains black squares and dots on a white background, with information that any smartphone or device with a dedicated QR scanner can decode. Unlike a traditional bar code, which holds information horizontally, a QR code holds the data in two dimensions, and it can hold over a hundred times more information.

In this video course, you’ll learn how to:

  • Generate a basic black-and-white QR code
  • Change the size and margins of the QR code
  • Create colorful QR codes
  • Rotate the QR code
  • Replace the static background with an animated GIF
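
Before diving in, here is a minimal sketch of generating a basic black-and-white QR code (this assumes the third-party qrcode package with Pillow installed; the course's own tooling may differ):

import qrcode

# box_size and border control the overall size and the quiet-zone margin.
qr = qrcode.QRCode(box_size=10, border=4)
qr.add_data("https://realpython.com")
qr.make(fit=True)

img = qr.make_image(fill_color="black", back_color="white")
img.save("basic_qr.png")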

Categories: FLOSS Project Planets

Submit your proposal for All Things Open – Doing Business with Open Source

Open Source Initiative - Tue, 2024-04-09 09:28

The supply-side value of widely used Open Source software is estimated at $4.15 billion, while the demand-side value is much larger, at $8.8 trillion. And yet, maintaining a healthy business while producing Open Source software feels more like an art than a science.

The Open Source Initiative wants to facilitate discussions about doing business with and for Open Source.

If you run a business producing Open Source products or your company’s revenue depends on Open Source in any way, we want to hear from you! Share your insights on:

  • How you balance the needs of paying customers with those of partners and non-paying users
  • How you organize your sales, marketing, product and engineering teams to deal with your communities
  • What makes you decide where to draw the lines between pushing fixes upstream and maintaining a private fork
  • Where you see the value of copyleft in software-as-a-service
  • Why you chose a specific license for your product offering and how you deal with external contributions
  • What trends you see in the ecosystem and what effects they are having

We want to hear about these and other topics, from personal experiences and research. Our hope is to provide the ecosystem with accessible resources to better understand the problem space and find solutions.

How it works

We’re tired of panel discussions that start and end at a conference. We want to share knowledge with the widest possible base. We’re going to have a panel at All Things Open, with preparation work before the event.

  • You’ll send your proposals as pitches to OpenSource.net: a title, an abstract (300 words max), and a short bio.
  • Our staff will review the pitches and get back to you, selecting as many as are deemed interesting for publication.
  • We’ll also pick the authors of five of the most interesting articles to be speakers at a panel discussion at ATO, on October 29 in Raleigh, NC. Full conference passes will be offered.
  • Authors of accepted pitches will write a full article (1,200-1,500 words) to be published in the lead-up to ATO.
  • We’ll also select other pitches worth developing into full-length articles that, for whatever reason, didn’t fit into the panel discussion.

Note: Please read and follow the guidelines carefully before submitting your proposal.

Submission Requirements
  • Applications should be submitted via web form
  • Add a title and a pitch, 300 words maximum
  • Include a brief bio, highlighting why you’re the right person to write about this topic
  • Submissions should be well-structured, clear and concise
Evaluation Criteria
  • Relevance to the topic
  • Originality and uniqueness of the submission
  • Clarity and coherence of argumentation
  • Quality of examples and case studies
  • Presenter’s expertise and track record in the field
  • Although the use of generative AI is permitted, pitches evidently written by AI won’t be considered
Timeline
  • Submission deadline: May 17, 2024
  • Notification of acceptance: May 30, 2024
  • Accepted authors must submit their full article by June 30, 2024
  • Articles will be published between July 8 and October 10, 2024
  • The authors of the selected articles will be invited to join a panel by July 20, 2024
  • Event dates: Oct 28, 29, 2024
What to Expect
  • Your submission will be reviewed by a panel of experts in the field
  • If accepted, you will be asked to produce a full article that will be published at opensource.net

We look forward to receiving your submission!

Categories: FLOSS Research

Compelling responses to NTIA’s AI Open Model Weights RFC

Open Source Initiative - Tue, 2024-04-09 08:03

The National Telecommunications and Information Administration (NTIA) posted a request for comments on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights, and it has received 362 comments.

In addition to the Open Source Initiative’s (OSI) joint letter drafted by Mozilla and the Center for Democracy and Technology (CDT), the OSI has also sent a letter of its own, highlighting our multi-stakeholder process to create a unified, recognized definition of Open Source AI.

The following is a list of some comments from nonprofit organizations and companies.

Comments from additional nonprofit organizations
  • Researchers from Stanford University’s Human-centered AI (HAI) and Princeton University recommend that the federal government prioritize understanding of the marginal risk of open foundational models when compared to proprietary ones, and create policies based on this marginal risk. Their response also highlighted several unique benefits of open foundational models, including higher innovation, transparency, diversification, and competitiveness.
  • Wikimedia Foundation recommends that regulatory approaches should support and encourage the development of beneficial uses of open technologies rather than depending on more closed systems to mitigate risks. Wikimedia believes open and widely available AI models, along with the necessary infrastructure to deploy them, could be an equalizing force for many jurisdictions around the world by mitigating historical disadvantages in the ability to access, learn from, and use knowledge.
  • EleutherAI Institute recommends Open Source AI and warns that restrictions on open-weight models are a costly intervention with comparatively little benefit. EleutherAI believes that open models enable people close to the deployment context to have greater control over the capabilities and usage restrictions of their models, study the internal behavior of models during deployment, and examine the training process and especially training data for signs that a model is unsafe to deploy in a specific use case. Open models also lower barriers to entry by making models cheaper to run, and enable users whose use cases require strict guarding of privacy (e.g., medicine, government benefits, personal financial information) to use them.
  • MLCommons recommends the use of standardized benchmarks, which will be a critical component for mitigating the risk of models both with and without widely available open weights. MLCommons believes models with widely available open weights allow the entire AI safety community – including auditors, regulators, civil society, users of AI systems, and developers of AI systems – to engage with the benchmark development process. Together with open data and model code, open weights enable the community to clearly and completely understand what a given safety benchmark is measuring, eliminating any confounding opacity around how a model was trained or optimized.
  • The AI Alliance recommends regulation shaped by independent, evidence-based research on reliable methods of assessing the marginal risks posed by open foundation models; effective risk management frameworks for the responsible development of open foundation models; and balancing regulation with the benefits that open foundation models offer for expanding access to the technology and catalyzing economic growth.
  • The Alliance for Trust in AI recommends that regulation should protect the many benefits of increasing access to AI models and tools. The Alliance for Trust in AI believes that openness should not be artificially restricted based on a misplaced belief that this will decrease risk.
  • Access Now recommends NTIA to think broadly about how developments in AI are reshaping or consolidating corporate power, especially with regard to ‘Big Tech.’ Access Now believes in the development and use of AI systems in a sustainable, resource-friendly way that considers the impact of models on marginalized communities and how those communities intersect with the Global South.
  • Partnership on AI (PAI) recommends NTIA’s work should be informed by the following principles: all foundation models need risk mitigations; appropriate risk mitigations will vary depending on model characteristics; risk mitigation measures, for either open or closed models, should be proportionate to risk; and voluntary frameworks are part of the solution.
  • R Street recommends pragmatic steps towards AI safety, relying on multistakeholder processes to address problems in a more flexible, agile, and iterative fashion. The government should not impose arbitrary limitations on the power of Open Source AI systems, which could result in a net loss of competitive advantage.
  • The Computer and Communications Industry Association (CCIA) recommends assessment based on the risks, highlighting that open models provide the potential for better security, less bias, and lower costs to AI developers and users alike. The CCIA acknowledged that the vast majority of Americans already use systems based on Open Source software (knowingly or unknowingly) on a daily basis.
  • The Information Technology Industry Council (ITI) recommends adopting a risk-based approach with respect to open foundation models, since not all models pose an equivalent degree of risk, and that the risk management is a shared responsibility across the AI value chain.
  • The Center for Data Innovation recommends that U.S. policymakers defend open AI models at the international level as part of its continued embrace of the global free flow of data. It also encourages them to learn lessons from past debates about dual-use technologies, such as encryption, and refrain from imposing restrictions on foundation models because such policies would not only be ultimately ineffective at addressing risk, but they would slow innovation, reduce competition, and decrease U.S. competitiveness.
  • The International Center for Law & Economics recommends that AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears.
  • New America’s Open Technology Institute (OTI) recommends a coordinated interagency approach designed to ensure that the vast potential benefits of a flourishing open model ecosystem serve American interests, in order to counter or at least offset the trend toward dominant closed AI systems and continued concentrations of power in the hands of a few companies.
  • Electronic Privacy Information Center (EPIC) recommends NTIA to grapple with the nuanced advantages, disadvantages, and regulatory hurdles that emerge within AI models along the entire gradient of openness, highlighting that AI models with weights widely available may foster more independent evaluation of AI systems and greater competition compared to closed systems.
  • The Software & Information Industry Association (SIIA) recommends a risk-based approach to foundation models that considers the degree and type of openness. SIIA believes openness has already proved to be a catalyst for research and innovation by essentially democratizing access to models that are cost-prohibitive for many actors in the AI ecosystem to develop on their own.
  • The Future Society recommends that the government should establish risk categories (i.e., designations of “high-risk” or “unacceptable-risk”), thresholds, and risk-mitigation measures that correspond to evaluation outcomes. The Future Society is concerned that overly restrictive policies could lead to market concentration, hindering competition and innovation in both industry and academia. A lack of competition in the AI market can have far-reaching knock-on consequences, including potentially stifling efforts to improve transparency, safety, and accountability in the industry. This, in turn, can impair the ability to monitor and mitigate the risks associated with dual-use foundation models and to develop evidence-based policymaking.
  • The Software Alliance (BSA) recommends NTIA to avoid restricting the availability of open foundation models; ground policies that address risks of open foundation models on empirical evidence; and encourage the implementation of safeguards to enhance the safety of open foundation models. BSA recognizes the substantial benefits that open foundation models provide to both consumers and businesses.
  • The US Chamber of Commerce recommends NTIA to make decisions based on sound science and not unsubstantiated concerns that open models pose an increased risk to society. The US Chamber of Commerce believes that Open-source technology allows developers to build, create, and innovate in various areas that will drive future economic growth.
Comments from companies
  • Meta recommends NTIA to establish common standards for risk assessments, benchmarks and evaluations informed by science, noting that the U.S. national interest is served by the broad availability of U.S.-developed open foundation models. Meta highlighted that Open source democratizes access to the benefits of AI, and that these benefits are potentially profound for the U.S., and for societies around the world. 
  • Google recommends a rigorous and holistic assessment of the technology to evaluate benefits and risks. Google believes that Open models allow users across the world, including in emerging markets, to experiment and develop new applications, lowering barriers to entry and making it easier for organizations of all sizes to compete and innovate.
  • IBM recommends preserving and prioritizing the critical benefits of open innovation ecosystems for AI for increasing AI safety, advancing national competitiveness, and promoting democratization and transparency of this technology. 
  • Intel recommends accountability for responsible design and implementation to help mitigate potential individual and societal harm. This includes establishing robust security protocols and standards to identify, address, and report potential vulnerabilities. Intel believes openness not only allows for faster advancement of technology and innovation, but also faster, transparent discovery of potential harms and community remediation and address. Intel also believes that Open AI development is essential to facilitate innovation and equitable access to AI, as open innovation, open platforms, and horizontal competition help offer choice and build trust. 
  • Stability AI recommends that regulation must support a diverse AI ecosystem – from the large firms building closed products to the everyday developers using, refining, and sharing open technology. Stability AI recognizes that Open models promote transparency, security, privacy, accessibility, competition, and grassroots innovation in AI.
  • Hugging Face recommends establishing standards for best practices building on existing work and prioritizing requirements of safety by design across both the AI development chain and its deployment environments. Hugging Face believes that open-weight models contribute to competition, innovation, and broad understanding of AI systems to support effective and reliable development.
  • GitHub recommends regulatory risk assessment should weigh empirical evidence of possible harm against the benefits of widely available model weights. GitHub believes Open source and widely available AI models support research on AI development and safety, as well as the use of AI tools in research across disciplines. To-date, researchers have credited these models with supporting work to advance the interpretability, safety, and security of AI models; to advance the efficiency of AI models enabling them to use less resources and run on more accessible hardware; and to advance participatory, community-based ways of building and governing AI.
  • Microsoft recommends cultivating a healthy and responsible open source AI ecosystem and ensuring that policies foster innovation and research. This will be achieved through direct engagement with open source communities to understand the impact of policy interventions on them and, as needed, calibrations to address risks of concern while also minimizing negative impacts on innovation and research.
  • Y Combinator recommends NTIA and all stakeholders to realize the immense promise of open-weight AI models while ensuring this technology develops in alignment with our values. Y Combinator believes the degree of openness of AI models is a crucial factor shaping the trajectory of this transformative technology. Highly open models, with weights accessible to a broad range of developers, offer unparalleled opportunities to democratize AI capabilities and promote innovation across domains. Y Combinator has seen firsthand the incredible progress driven by open models, with a growing number of startups harnessing these powerful tools to pioneer groundbreaking applications. 
  • AH Capital Management, L.L.C. (a16z) recommends NTIA to be wary of generalized claims about the risks of Open Models and calls to treat them differently from Closed Models, especially those made by AI companies seeking to insulate themselves from market competition. a16z believes Open Models promote innovation, reduce barriers to entry, protect against bias, and allow such models to leverage and benefit from the collective expertise of the broader artificial intelligence (“AI”) community. 
  • Uber recommends promoting widely available model weights to spur innovation in the field of AI. Uber believes that, by democratizing access to foundational AI models, innovators from diverse backgrounds can build upon existing frameworks, accelerating the pace of technological advancement and increasing competition in the space. Uber also believes widely available model weights, source code, and data are necessary to foster accountability, facilitate collaboration in risk mitigation, and promote ethical and responsible AI development.
  • Databricks recommends regulation of highly capable AI models should focus on consumer-facing deployments and high risk deployments, with the obligations focused on the deployer. Databricks believes that the benefits of open models substantially outweigh the marginal risks, so open weights should be allowed, even at the frontier level.
Categories: FLOSS Research

Ramsalt Lab: WordPress vs Drupal, which is the best CMS?

Planet Drupal - Tue, 2024-04-09 07:57
By Yngve W. Bergheim (CEO), Sven Berg Ryen (Senior Drupal Developer), Sohail Lajevardi (Drupal Frontend Engineer) and Stephan Zeidler (Chief Technical Architect) – 09.04.2024

Content Management Systems (CMS) have revolutionized the way we build and manage websites. Drupal and WordPress are two of the most popular CMS platforms worldwide.

At Ramsalt we have many employees with experience from both CMSes, and in this article we have gathered some of the reasons why Drupal could be a better choice for your needs:

Performance

Flexibility and Complexity

  • WordPress is like Duplo, Drupal is like Lego. Drupal is known for its flexibility in building more complex websites. It’s ideal for users with technical skills or access to a developer.
  • With the Gutenberg editor, the editorial interfaces of WordPress and Drupal converge, so you can get the WordPress feeling combined with the strengths of Drupal.
  • Drupal is often chosen for sites that require complex data organization and for projects that require precise permissions and workflows.

Security

  • Drupal is considered to be the most secure CMS. Drupal has robust security measures, making it a popular choice for government institutions and other large, security-conscious entities.
  • Drupal sites tend to get hacked less often than WordPress sites, which speaks volumes about its robust security measures.
  • WordPress accounted for 96 percent of all hacked CMS sites in 2022.

Multilingual Support

  • Drupal supports multilingual websites by default, which can be a crucial feature for global businesses.

Developer Experience

  • WordPress has a “hacky” architecture, and the developer experience is worse than Drupal’s.
  • Drupal has a clean open source mentality: everything on drupal.org is free to use. WordPress has a more commercial model, where you often have to pay for modules, themes and other extensions.
  • Drupal has very good migration tools, making it easy to migrate from an existing CMS to Drupal.
  • Drupal has granular role and permission handling, whereas in WordPress you have to jump through hoops to get anything beyond a few predefined roles.

Other

  • WordPress was originally made for the blogging community and is struggling to solve bigger challenges.
  • WordPress plugins are “monsters” containing “everything and the kitchen sink” and are not always designed to be expandable through hooks.
  • Drupal services are mainly offered by professional development agencies. While there are a lot of companies offering WordPress services, they tend to be freelancers and advertising agencies without professional developers, so the resulting websites often suffer from poor architecture choices and buggy code, leaving them vulnerable to hackers.
  • Some of the “free” themes and modules constantly nag you to buy into the premium version and there’s no way to turn off the noisy notifications.
  • Plugins in WordPress often don’t work well with each other; enabling one plugin might cause conflicts with another.
  • Drupal’s Layout Builder makes it possible for an editor to create landing pages quickly and easily.
  • Drupal is packed with tools for multichannel publishing, digital asset management, and SEO.

While WordPress is a great platform for beginners and bloggers, Drupal’s flexibility, robust security, superior user access control, multilingual support, scalability, and development opportunities make it a powerful solution for most websites. 

Remember, the choice between Drupal and WordPress depends on your specific needs for the website you intend to build. Both have their strengths and cater to different types of projects.

Contact us for a free talk about your requirements so we can find the best solution for you. 

Categories: FLOSS Project Planets

Python Bytes: #378 Python is on the edge

Planet Python - Tue, 2024-04-09 04:00
Topics covered in this episode:

  • pacemaker - For controlling time per iteration loop in Python (https://github.com/brohrer/pacemaker)
  • PyPI suspends new user registration to block malware campaign (https://www.bleepingcomputer.com/news/security/pypi-suspends-new-user-registration-to-block-malware-campaign/)
  • Python Project-Local Virtualenv Management Redux (https://hynek.me/articles/python-virtualenv-redux/)
  • Python Edge Workers at Cloudflare (https://blog.cloudflare.com/python-workers)
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=4oALfE-zDf8

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training (https://training.talkpython.fm/)
  • The Complete pytest Course (https://courses.pythontest.com/p/the-complete-pytest-course)
  • Patreon Supporters (https://www.patreon.com/pythonbytes)

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: pacemaker - For controlling time per iteration loop in Python

  • Brandon Rohrer
  • Good example of a small bit of code made into a small package.
  • With speedups to dependencies, like with uv, for example, I think we’ll see more small projects.
  • Cool stuff
    • Great README, including quirks that need to be understood by users: “If the pacemaker experiences a delay, it will allow faster iterations to try to catch up. Heads up: because of this, any individual iteration might end up being much shorter than suggested by the pacemaker's target rate.”
    • Nice use of time.monotonic(): deltas are guaranteed to never go back in time regardless of what adjustments are made to the system clock.
  • Watch out for
    • pip install pacemaker-lite, NOT pacemaker. pacemaker is taken by a package named PaceMaker with a repo named pace-maker that hasn’t been updated in 3 years. Not sure if it’s alive.
    • No tests (yet). I’m sure they’re coming. ;) Seriously though, Brandon says this is “a glorified snippet”, and I love the use of packaging to encapsulate shared code. Realistically, small snippet-like packages have functionality that’s probably going to be tested by end-user code. And even if there are tests, users should test the functionality they are depending on.

Michael #2: PyPI suspends new user registration to block malware campaign

  • Incident Report for Python Infrastructure (https://status.python.org/incidents/dc9zsqzrs0bv)
  • PyPI Is Under Attack: Project Creation and User Registration Suspended — Here’s the details (https://medium.com/checkmarx-security/pypi-is-under-attack-project-creation-and-user-registration-suspended-heres-the-details-c3b6291d4579)
    • I hate Medium, but it’s the best details I’ve found so far

Brian #3: Python Project-Local Virtualenv Management Redux

  • Hynek
  • Concise writeup of how Hynek uses various tools for dealing with environments
  • Covers (paren notes are from Brian)
    • In-project .venv directories
    • direnv for handling .envrc files per project (time for me to try this again)
    • uv for pip and pip-compile functionality
    • Installing Python via python.org
    • Using a .python-version-default file (I’ll need to play with this a bit)
      • Works with GH Action setup-python. (ok, that’s cool)
    • Some fish shell scripting
    • Bonus tip on using requires-python in pyproject.toml and extracting it in GH Actions to be able to get the Python exe name, and then be able to pass it to Docker and reference it in a Dockerfile. (very cool)

Michael #4: Python Edge Workers at Cloudflare

  • What are edge workers? (https://developers.cloudflare.com/workers/)
  • Based on workers using Pyodide and WebAssembly
  • This new support for Python is different from how Workers have historically supported languages beyond JavaScript — in this case, we have directly integrated a Python implementation into workerd (https://github.com/cloudflare/workerd), the open-source Workers runtime.
  • Python Workers can import a subset of popular Python packages including FastAPI, Langchain and numpy.
  • Check out the examples repo (https://github.com/cloudflare/python-workers-examples).

Extras

Michael:

  • LPython follow-up from Brian Skinn (https://fosstodon.org/@btskinn/112226004327304352)
  • Featured on Python Bytes badge (https://github.com/epogrebnyak/justpath/issues/26)
  • A little downtime, thanks for the understanding (https://twitter.com/TalkPython/status/1777505296807850101)
    • We were rocking a 99.98% uptime until then. :)

Joke:

  • C++ is not safe for people under 18 (https://devhumor.com/media/gemini-says-that-c-is-not-safe-for-people-under-18)
  • Baseball joke
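
A rough sketch of the loop-pacing idea from Brian #1, using only the standard library (this illustrates the monotonic-clock approach in plain Python; it is not necessarily pacemaker's actual API):

import time

def paced_loop(hz: float, iterations: int) -> None:
    """Run a loop at roughly `hz` iterations per second using the monotonic clock."""
    period = 1.0 / hz
    next_tick = time.monotonic()
    for i in range(iterations):
        print(f"iteration {i}")  # do the real work here
        next_tick += period
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)  # ahead of schedule: wait out the rest of the period
        # behind schedule: skip sleeping and let faster iterations catch up

paced_loop(hz=4, iterations=8)
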
Categories: FLOSS Project Planets

Specbee: How to create custom tokens in Drupal

Planet Drupal - Tue, 2024-04-09 01:59
It’s stuff like this that makes Drupal not just powerful, but also highly customizable and user-friendly. What are we talking about? Tokens! It’s one of the most versatile and handy Drupal modules. Sometimes, users need to establish a specific pattern to programmatically retrieve values. In these instances, tokens come to the rescue, providing a seamless solution. Read on to find out more about tokens and how you can create custom tokens for your Drupal website.

What are Tokens

Tokens in Drupal are primarily used for dynamically inserting data into content, such as user information, node details, or site settings. They make content more personalized and automated without manual intervention, streamlining the editing process and enhancing user experiences. For example, they can be used while sending emails during webform submissions or content moderation. Before creating custom tokens you need to have the Drupal Token module installed on your site. This contributed module already comes with some predefined tokens, which can be used globally.

Steps to Create Custom Tokens

Step 1: Create a custom module

To create a custom token in Drupal, we either need to develop a new custom module or incorporate it into an existing one. For example, let's name the module "Custom Token", with a corresponding directory named "custom_token". After creating this folder, we generate a "custom_token.info.yml" file, where we specify the module details:

name: Custom token
type: module
description: Provides custom tokens.
package: tokens
core_version_requirement: ^10

Step 2: Clear the cache

After adding this code, clear the cache and refresh the page to apply the changes. Next, search for the Custom Token module and install it.

Step 3: Create the custom token

Once the module is installed, create a file named "custom_token.tokens.inc" within the module folder. Inside this file, we'll define the custom tokens. In the given scenario, there's a webform for reviewing article content, and a link to this webform is added to the detail page of articles. The URL to the webform appears as follows: ‘webform/contact_new/test?article=1’. The article field is auto-filled based on the token. Here, the article author is a hidden field that should auto-fill after form submission. Additionally, the article author is a field within the article content type. To dynamically retrieve this data, we need to create a custom token. The code to add inside the "custom_token.tokens.inc" file is provided below.

<?php

/**
 * @file
 * File to add custom token.
 */

use Drupal\Core\Render\BubbleableMetadata;

/**
 * Implements hook_token_info().
 */
function custom_token_token_info() {
  $types['article'] = [
    'name' => t('Custom token'),
    'description' => t('Define custom tokens.'),
  ];
  $tokens['article_title'] = [
    'name' => t('Article title'),
    'description' => t('Token to get current article title.'),
  ];
  $tokens['article_author'] = [
    'name' => t('Article author'),
    'description' => t('Token to get current article author.'),
  ];
  return [
    'types' => $types,
    'tokens' => ['article' => $tokens],
  ];
}

/**
 * Implements hook_tokens().
 */
function custom_token_tokens($type, $tokens, array $data, array $options, BubbleableMetadata $bubbleable_metadata) {
  $replacements = [];
  if ($type == 'article') {
    $nid = \Drupal::request()->query->get('article');
    if ($nid) {
      $node_details = \Drupal::entityTypeManager()->getStorage('node')->load($nid);
    }
    foreach ($tokens as $name => $original) {
      // Find the desired token by name.
      switch ($name) {
        case 'article_author':
          if ($node_details) {
            $user_id = $node_details->field_author->target_id;
            if ($user_id) {
              $user_details = \Drupal::entityTypeManager()->getStorage('user')->load($user_id);
              $replacements[$original] = $user_details->name->value;
            }
          }
          break;

        case 'article_title':
          if ($node_details) {
            $replacements[$original] = $node_details->label();
          }
          break;
      }
    }
  }
  return $replacements;
}

And this is how we can craft custom tokens to suit our specific needs. Once implemented, the webform results will seamlessly display the auto-filled value.

Final Thoughts

Drupal's power lies not just in its functionality, but in its adaptability and ease of use. Tokens are an example of this versatility, since they offer a way to dynamically retrieve data as well as personalize content. Tokens streamline processes and improve user experience, whether they are used for user information, node details, or site settings.
Categories: FLOSS Project Planets

Matthew Palmer: How I Tripped Over the Debian Weak Keys Vulnerability

Planet Debian - Mon, 2024-04-08 20:00

Those of you who haven’t been in IT for far, far too long might not know that next month will be the 16th(!) anniversary of the disclosure of what was, at the time, a fairly earth-shattering revelation: that for about 18 months, the Debian OpenSSL package was generating entirely predictable private keys.

The recent xz-stential threat (thanks to @nixCraft for making me aware of that one), has got me thinking about my own serendipitous interaction with a major vulnerability. Given that the statute of limitations has (probably) run out, I thought I’d share it as a tale of how “huh, that’s weird” can be a powerful threat-hunting tool – but only if you’ve got the time to keep pulling at the thread.

Prelude to an Adventure

Our story begins back in March 2008. I was working at Engine Yard (EY), a now largely-forgotten Rails-focused hosting company, which pioneered several advances in Rails application deployment. Probably EY’s greatest claim to lasting fame is that they helped launch a little code hosting platform you might have heard of, by providing them free infrastructure when they were little more than a glimmer in the Internet’s eye.

I am, of course, talking about everyone’s favourite Microsoft product: GitHub.

Since GitHub was in the right place, at the right time, with a compelling product offering, they quickly started to gain traction, and grow their userbase. With growth comes challenges, amongst them the one we’re focusing on today: SSH login times. Then, as now, GitHub provided SSH access to the git repos they hosted, by SSHing to git@github.com with publickey authentication. They were using the standard way that everyone manages SSH keys: the ~/.ssh/authorized_keys file, and that became a problem as the number of keys started to grow.

The way that SSH uses this file is that, when a user connects and asks for publickey authentication, SSH opens the ~/.ssh/authorized_keys file and scans all of the keys listed in it, looking for a key which matches the key that the user presented. This linear search is normally not a huge problem, because nobody in their right mind puts more than a few keys in their ~/.ssh/authorized_keys, right?

Of course, as a popular, rapidly-growing service, GitHub was gaining users at a fair clip, to the point that the one big file that stored all the SSH keys was starting to visibly impact SSH login times. This problem was also not going to get any better by itself. Something Had To Be Done.

EY management was keen on making sure GitHub ran well, and so despite it not really being a hosting problem, they were willing to help fix this problem. For some reason, the late, great, Ezra Zygmuntowitz pointed GitHub in my direction, and let me take the time to really get into the problem with the GitHub team. After examining a variety of different possible solutions, we came to the conclusion that the least-worst option was to patch OpenSSH to lookup keys in a MySQL database, indexed on the key fingerprint.
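
To make the shape of that change concrete, here is a rough, hypothetical sketch in Python (not the actual OpenSSH patch): the presented public key is reduced to its fingerprint, which then becomes an indexed lookup key instead of a line in a flat file.

import base64
import hashlib

def md5_fingerprint(authorized_key_line: str) -> str:
    """MD5 fingerprint of an OpenSSH public key line ('ssh-rsa AAAA... comment'),
    in the colon-separated hex form ssh-keygen printed back then."""
    blob = base64.b64decode(authorized_key_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Instead of a linear scan over one giant authorized_keys file, the lookup becomes
# a single indexed query, something like:
#   SELECT username, public_key FROM ssh_keys WHERE fingerprint = %s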

We didn’t take this decision on a whim – it wasn’t a case of “yeah, sure, let’s just hack around with OpenSSH, what could possibly go wrong?”. We knew it was potentially catastrophic if things went sideways, so you can imagine how much worse the other options available were. Ensuring that this wouldn’t compromise security was a lot of the effort that went into the change. In the end, though, we rolled it out in early April, and lo! SSH logins were fast, and we were pretty sure we wouldn’t have to worry about this problem for a long time to come.

Normally, you’d think “patching OpenSSH to make mass SSH logins super fast” would be a good story on its own. But no, this is just the opening scene.

Chekov’s Gun Makes its Appearance

Fast forward a little under a month, to the first few days of May 2008. I get a message from one of the GitHub team, saying that somehow users were able to access other users’ repos over SSH. Naturally, as we’d recently rolled out the OpenSSH patch, which touched this very thing, the code I’d written was suspect number one, so I was called in to help.

They're called The Usual Suspects for a reason, but sometimes, it really is Keyser Söze

Eventually, after more than a little debugging, we discovered that, somehow, there were two users with keys that had the same key fingerprint. This absolutely shouldn’t happen – it’s a bit like winning the lottery twice in a row [1] – unless the users had somehow shared their keys with each other, of course. Still, it was worth investigating, just in case it was a web application bug, so the GitHub team reached out to the users impacted, to try and figure out what was going on.

The users professed no knowledge of each other, neither admitted to publicising their key, and couldn’t offer any explanation as to how the other person could possibly have gotten their key.

Then things went from “weird” to “what the…?”. Because another pair of users showed up, sharing a key fingerprint – but it was a different shared key fingerprint. The odds now have gone from “winning the lottery multiple times in a row” to as close to “this literally cannot happen” as makes no difference.

Once we were really, really confident that the OpenSSH patch wasn’t the cause of the problem, my involvement in the problem basically ended. I wasn’t a GitHub employee, and EY had plenty of other customers who needed my help, so I wasn’t able to stay deeply involved in the on-going investigation of The Mystery of the Duplicate Keys.

However, the GitHub team did keep talking to the users involved, and managed to determine the only apparent common factor was that all the users claimed to be using Debian or Ubuntu systems, which was where their SSH keys would have been generated.

That was as far as the investigation had really gotten, when along came May 13, 2008.

Chekov’s Gun Goes Off

With the publication of DSA-1571-1, everything suddenly became clear. Through a well-meaning but ultimately disastrous cleanup of OpenSSL’s randomness generation code, the Debian maintainer had inadvertently reduced the number of possible keys that could be generated by a given user from “bazillions” to a little over 32,000. With so many people signing up to GitHub – some of them no doubt following best practice and freshly generating a separate key – it’s unsurprising that some collisions occurred.

You can imagine the sense of “oooooooh, so that’s what’s going on!” that rippled out once the issue was understood. I was mostly glad that we had conclusive evidence that my OpenSSH patch wasn’t at fault, little knowing how much more contact I was to have with Debian weak keys in the future, running a huge store of known-compromised keys and using them to find misbehaving Certificate Authorities, amongst other things.

Lessons Learned

While I’ve not found a description of exactly when and how Luciano Bello discovered the vulnerability that became CVE-2008-0166, I presume he first came across it some time before it was disclosed – likely before GitHub tripped over it. The stable Debian release that included the vulnerable code had been released a year earlier, so there was plenty of time for Luciano to have discovered key collisions and go “hmm, I wonder what’s going on here?”, then keep digging until the solution presented itself.

The thought “hmm, that’s odd”, followed by intense investigation, leading to the discovery of a major flaw is also what ultimately brought down the recent XZ backdoor. The critical part of that sequence is the ability to do that intense investigation, though.

When I reflect on my brush with the Debian weak keys vulnerability, what sticks out to me is the fact that I didn’t do the deep investigation. I wonder if Luciano hadn’t found it, how long it might have been before it was found. The GitHub team would have continued investigating, presumably, and perhaps they (or I) would have eventually dug deep enough to find it. But we were all super busy – myself, working support tickets at EY, and GitHub feverishly building features and fighting the fires in their rapidly-growing service.

As it was, Luciano was able to take the time to dig in and find out what was happening, but just like the XZ backdoor, I feel like we, as an industry, got a bit lucky that someone with the skills, time, and energy was on hand at the right time to make a huge difference.

It’s a luxury to be able to take the time to really dig into a problem, and it’s a luxury that most of us rarely have. Perhaps an understated takeaway is that somehow we all need to wrestle back some time to follow our hunches and really dig into the things that make us go “hmm…”.

Support My Hunches

If you’d like to help me be able to do intense investigations of mysterious software phenomena, you can shout me a refreshing beverage on ko-fi.

  1. the odds are actually probably more like winning the lottery about twenty times in a row. The numbers involved are staggeringly huge, so it’s easiest to just approximate it as “really, really unlikely”. 

Categories: FLOSS Project Planets

PyBites: Adventures in Import-land, Part II

Planet Python - Mon, 2024-04-08 14:15

“KeyError: 'GOOGLE_APPLICATION_CREDENTIALS'”

It was way too early in the morning for this error. See if you can spot the problem. I hadn’t had my coffee before trying to debug the code I’d written the night before, so it will probably take you less time than it did me.

app.py:

from dotenv import load_dotenv
from file_handling import initialize_constants

load_dotenv()

#...

file_handling.py:

import os

from google.cloud import storage

UPLOAD_FOLDER = None
DOWNLOAD_FOLDER = None

def initialize_cloud_storage():
    """
    Initializes the Google Cloud Storage client.
    """
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
    storage_client = storage.Client()
    bucket_name = #redacted
    return storage_client.bucket(bucket_name)

def set_upload_folder():
    """
    Determines the environment and sets the path to the upload folder accordingly.
    """
    if os.environ.get("FLASK_ENV") in ["production", "staging"]:
        UPLOAD_FOLDER = os.path.join("/tmp", "upload")
        os.makedirs(UPLOAD_FOLDER, exist_ok=True)
    else:
        UPLOAD_FOLDER = os.path.join("src", "upload_folder")
    return UPLOAD_FOLDER

def initialize_constants():
    """
    Initializes the global constants for the application.
    """
    UPLOAD_FOLDER = initialize_upload_folder()
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    return UPLOAD_FOLDER, DOWNLOAD_FOLDER

DOWNLOAD_FOLDER = initialize_cloud_storage()

def write_to_gcs(content: str, file: str):
    "Writes a text file to a Google Cloud Storage file."
    blob = DOWNLOAD_FOLDER.blob(file)
    blob.upload_from_string(content, content_type="text/plain")

def upload_file_to_gcs(file_path: str, gcs_file: str):
    "Uploads a file to a Google Cloud Storage bucket"
    blob = DOWNLOAD_FOLDER.blob(gcs_file)
    with open(file_path, "rb") as f:
        blob.upload_from_file(f, content_type="application/octet-stream")

See the problem?

This very problem was the subject of a recent Pybites article.

When app.py imported initialize_constants from file_handling, the Python interpreter ran

DOWNLOAD_FOLDER = initialize_cloud_storage()

and looked for GOOGLE_APPLICATION_CREDENTIALS from the environment path, but load_dotenv hadn’t added them to the environment path from the .env file yet.

Typically, configuration variables, secret keys, and passwords are stored in a file called .env and then read as environment variables rather than as pure text using a package such as python-dotenv, which is what is being used here.

So, I had a few options.

I could call load_dotenv before importing from file_handling:

from dotenv import load_dotenv

load_dotenv()

from file_handling import initialize_constants

But that’s not very Pythonic.

I could call initialize_cloud_storage inside both upload_file_to_gcs and write_to_gcs

def write_to_gcs(content: str, file: str):
    "Writes a text file to a Google Cloud Storage file."
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    blob = DOWNLOAD_FOLDER.blob(file)
    blob.upload_from_string(content, content_type="text/plain")

def upload_file_to_gcs(file_path: str, gcs_file: str):
    "Uploads a file to a Google Cloud Storage bucket"
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    blob = DOWNLOAD_FOLDER.blob(gcs_file)
    with open(file_path, "rb") as f:
        blob.upload_from_file(f, content_type="application/octet-stream")

But this violates the DRY principle. Plus we really shouldn’t be initializing the storage client multiple times. In fact, we already are initializing it twice in the way the code was originally written.

Going Global

So what about this?

DOWNLOAD_FOLDER = None

def initialize_constants():
    """
    Initializes the global constants for the application.
    """
    global DOWNLOAD_FOLDER
    UPLOAD_FOLDER = initialize_upload_folder()
    DOWNLOAD_FOLDER = initialize_cloud_storage()
    return UPLOAD_FOLDER, DOWNLOAD_FOLDER

Here, we are defining DOWNLOAD_FOLDER as having global scope.

This will work here, because upload_file_to_gcs and write_to_gcs are in the same module. But if they were in a different module, it would break.

Why does it matter?

Well, let’s go back to how Python handles imports. Remember that Python runs any code outside of a function or class at import. That applies to variable (or constant) assignment as well. So if upload_file_to_gcs and write_to_gcs were in another module and imported DOWNLOAD_FOLDER from file_handling, they would be importing it while it was still assigned a value of None. It wouldn’t matter that by the time it was needed, DOWNLOAD_FOLDER would no longer be None inside file_handling; inside this other module, it would still be None.

What would be necessary in this situation would be another function called get_download_folder.

def get_download_folder():
    """
    Returns the current value of the Google Cloud Storage bucket
    """
    return DOWNLOAD_FOLDER

Then, in this other module containing the upload_file_to_gcs and write_to_gcs functions, I would import get_download_folder instead of DOWNLOAD_FOLDER. By importing get_download_folder, you can get the value of DOWNLOAD_FOLDER after it has been assigned to an actual value, because get_download_folder won’t run until you explicitly call it. Which, presumably wouldn’t be until after you’ve let initialize_cloud_storage do its thing.
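
A hypothetical sketch of what that consuming module could look like (the names mirror the article's; this exact module isn't shown in it):

# other_module.py
from file_handling import get_download_folder

def write_to_gcs(content: str, file: str):
    "Writes a text file to a Google Cloud Storage file."
    bucket = get_download_folder()  # resolved at call time, after initialize_constants() has run
    blob = bucket.blob(file)
    blob.upload_from_string(content, content_type="text/plain")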

I have another part of my codebase where I have done this. On my site, I have a tool that helps authors create finetunes of GPT 3.5 from their books. This Finetuner is BYOK, or ‘bring your own key’, meaning that users supply their own OpenAI API key to use the tool. I chose this route because charging authors to fine-tune a model and then charging them to use it, forever, is just not something that benefits either of us. This way, they can take their finetuned model and use it in any of the multiple other BYOK AI writing tools that are out there, and I don’t have to maintain writing software on top of everything else. So the webapp’s form accepts the user’s API key, and after a valid form submit, starts a thread of my Finetuner application.

This application starts in the training_management.py module, which imports set_client and get_client from openai_client.py and passes the user’s API key to set_client right away. I can’t import client directly, because client is None until set_client has been passed the API key, which happens after import.

from openai import OpenAI

client = None

def set_client(api_key: str):
    """
    Initializes OpenAI API client with user API key
    """
    global client
    client = OpenAI(api_key=api_key)

def get_client():
    """
    Returns the initialized OpenAI client
    """
    return client

When the function that starts a fine tuning job starts, it calls get_client to retrieve the initialized client. And by moving the API client initialization into another module, it becomes available to be used for an AI-powered chunking algorithm I’m working on. Nothing amazing. Basically, just generating scene beats from each chapter to use as the prompt, with the actual chapter as the response. It needs work still, but it’s available for authors who want to try it.

A Class Act

Now, we could go one step further from here. The code we’ve settled on so far relies on global names. Perhaps we can get away with this. DOWNLOAD_FOLDER is a constant. Well, sort of. Remember, it’s defined by initializing a connection to a cloud storage container. It’s actually a class. By rights, we should be encapsulating all of this logic inside of another class.

So what could that look like? Well, it should initialize the upload and download folders, and expose them as properties, and then use the functions write_to_gcs and upload_file_to_gcs as methods like this:

class FileStorageHandler:
    def __init__(self):
        self._upload_folder = self._set_upload_folder()
        self._download_folder = self._initialize_cloud_storage()

    @property
    def upload_folder(self):
        return self._upload_folder

    @property
    def download_folder(self):
        return self._download_folder

    def _initialize_cloud_storage(self):
        """
        Initializes the Google Cloud Storage client.
        """
        os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
        storage_client = storage.Client()
        bucket_name = #redacted
        return storage_client.bucket(bucket_name)

    def _set_upload_folder(self):
        """
        Determines the environment and sets the path to the upload folder accordingly.
        """
        if os.environ.get("FLASK_ENV") in ["production", "staging"]:
            upload_folder = os.path.join("/tmp", "upload")
            os.makedirs(upload_folder, exist_ok=True)
        else:
            upload_folder = os.path.join("src", "upload_folder")
        return upload_folder

    def write_to_gcs(self, content: str, file_name: str):
        """
        Writes a text file to a Google Cloud Storage file.
        """
        blob = self._download_folder.blob(file_name)
        blob.upload_from_string(content, content_type="text/plain")

    def upload_file_to_gcs(self, file_path: str, gcs_file_name: str):
        """
        Uploads a file to a Google Cloud Storage bucket.
        """
        blob = self._download_folder.blob(gcs_file_name)
        with open(file_path, "rb") as file_obj:
            blob.upload_from_file(file_obj)

Now, we can initialize an instance of FileStorageHandler in app.py and assign its upload_folder and download_folder properties to UPLOAD_FOLDER and DOWNLOAD_FOLDER.

from dotenv import load_dotenv
from file_handling import FileStorageHandler

load_dotenv()

folders = FileStorageHandler()
UPLOAD_FOLDER = folders.upload_folder
DOWNLOAD_FOLDER = folders.download_folder

Key takeaway

In the example, the error arose because initialize_cloud_storage was called at the top level in file_handling.py. This resulted in Python attempting to access environment variables before load_dotenv had a chance to set them.

I had been thinking of module level imports as “everything at the top runs at import.” But that’s not true. Or rather, it is true, but not accurate. Python executes code based on indentation, and functions are indented within the module. So, it’s fair to say that every line that isn’t indented is at the top of the module. In fact, it’s even called that: top-level code, which is defined as basically anything that is not part of a function, class or other code block.

And top-level code runs when the module is imported. It’s not enough to bury an expression below some functions; it will still run immediately when the module is imported, whether you are ready for it to run or not. Which is really what the argument against global variables and state is all about: managing when and how your code runs.

Understanding top-level code execution at import helped solve the initial error and design a more robust pattern.

Next steps

The downside of using a class is that each time it is instantiated, a new instance is created, with a new connection to the cloud storage. To get around this, something to look into would be the Singleton pattern, which is outside the scope of this article (see the sketch below).
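
For what it's worth, a minimal sketch of that idea (one common approach, not code from the article) is a module-level accessor that creates the handler once and hands back the same instance afterwards:

_handler = None

def get_file_storage_handler():
    """Return the one shared FileStorageHandler, creating it on first use."""
    global _handler
    if _handler is None:
        _handler = FileStorageHandler()
    return _handler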

Also, the code currently doesn’t handle exceptions that might arise during initialization (e.g., issues with credentials or network connectivity). Adding robust error handling mechanisms will make the code more resilient.

Speaking of robustness, I would be remiss if I didn’t point out that a properly abstracted initialization method should retrieve the bucket name from a configuration or .env file instead of leaving it hardcoded in the method itself.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #445 - Drupal Bounty Program

Planet Drupal - Mon, 2024-04-08 14:00

Today we are talking about The Drupal Bounty Program, How it supports innovation, and how you can get involved with guest Alex Moreno. We’ll also cover WebProfiler as our module of the week.

For show notes visit: www.talkingDrupal.com/445

Topics
  • What is the Drupal Bounty program
  • How and when did it start
  • What issues and tasks are included
  • Has the bounty program been successful
  • Why was this program extended
  • Do you see any drawbacks
  • Can anyone participate
  • How are issues for the second round being selected
  • What do you see the future of the bounty program looking like
  • Could this become like other bounty programs with cash
  • Do you think the bounty program will help maintainers get sponsorship
Resources Guests

Alejandro Moreno - alexmoreno.net alexmoreno

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Matt Glaman - mglaman.dev mglaman

MOTW Correspondent

Martin Anderson-Clutz - mandclu

  • Brief description:
    • Have you ever wanted to get detailed performance data for the pages on your Drupal sites? There’s a module for that.
  • Module name/project name: WebProfiler
  • Brief history
    • How old: created in Jan 2014 by Luca Lusso of Italy who was a guest on the show in episode #425
    • Versions available: 10.1.5 which works with Drupal >=10.1.2
  • Maintainership
    • Actively maintained, latest release on Feb 1
    • Security coverage
    • Test coverage
    • Not much in the way of documentation, but the module is largely a wrapper for the Symfony WebProfiler bundle, which has its own section in the Symfony documentation
    • Number of open issues: 36 open issues, 13 of which are bugs
  • Usage stats:
    • 477 sites
  • Module features and usage
    • Once installed the module adds a toolbar to the bottom of your site, within which it will show a variety of data for every page:
    • Route and Controller
    • Memory usage
    • Time to load (with some additional setup)
    • Number of AJAX requests
    • Number of queries run and the total query time
    • Number of blocks visible
    • How many forms are on the profile
    • Lots of other detailed information available through links
    • Reports are saved into the database, so you can dig through additional details such as:
    • Request information like access metadata, cookies, session info, and server parameters, in addition to the request and response headers
    • All of the queries that ran, how long each took, and even a quick way to create an EXPLAIN statement to get deeper insight from your database engine
    • You can also view all the services available, and with a single click open the class file in the IDE of your choice
    • A handy alternative to other performance monitoring tools like XHProf (either as Drupal module, or installed directly into your development environment), or commercial tools like Blackfire or New Relic
    • Discussion
    • Luca’s book Modernizing Drupal 10 Theme Development actually provides a great deep dive into this module
Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update April 8

Open Source Initiative - Mon, 2024-04-08 13:15
Seeking document reviewers for OpenCV
  • This is your final opportunity to register for the review of licenses provided by OpenCV. Join us for our upcoming phase, where we meticulously compare various systems’ documentation against our latest definition to test compatibility.
    • For more information, check the forum
Action on the 0.0.6 draft 
  • Under “The following components are not required, but their inclusion in public releases is appreciated”, a user highlighted that model cards should be a required open component, as their purpose is to promote transparency and accountability
  • Under “What is Open Source AI?”, a user raises a concern regarding “made available to the public”, stating that software carries an Open Source license, even if a copy was only made available to a single person.
    • This will be considered for the next draft
Open Source AI Definition Town Hall – April 5, 2024

Access the slides and the recording of the previous town hall meeting here.

Categories: FLOSS Research

Bastian Blank: Python dataclasses for Deb822 format

Planet Debian - Mon, 2024-04-08 13:00

Python includes helpful support for classes that are designed to just hold some data and not much more: Data Classes. They use plain Python type definitions to specify which fields you can have and some further information for every field. This then generates some useful methods for you, like __init__ and __repr__, and on request more. And given that those type definitions are available to other code, a lot more can be done with them.

There exist several separate packages that build on data classes. For example, you can have data validation from JSON with dacite.

But Debian likes a pretty strange format usually called Deb822, which is in fact derived from the RFC 822 format of e-mail messages. Those files contain individual messages in a well-known format.

So I'd like to introduce some Deb822 format support for Python Data Classes. For now the code resides in the Debian Cloud tool.

Usage

Setup

It uses the standard data classes support and several helper functions. Also you need to enable support for postponed evaluation of annotations.

from __future__ import annotations
from dataclasses import dataclass
from dataclasses_deb822 import read_deb822, field_deb822

Class definition start

Data classes are just normal classes, just with a decorator.

@dataclass
class Package:

Field definitions

You need to specify the exact key to be used for this field.

    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')

Default values are also supported.

    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )

Reading files

for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Full example

from __future__ import annotations
from dataclasses import dataclass
from debian_cloud_images.utils.dataclasses_deb822 import read_deb822, field_deb822
from typing import Optional
import sys


@dataclass
class Package:
    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')
    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )


for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Known limitations
Categories: FLOSS Project Planets

Anwesha Das: Test container image with eercheck

Planet Python - Mon, 2024-04-08 10:25

Execution Environments give us the benefits of containerization by solving issues such as software dependencies and portability. Ansible Execution Environments are Ansible control nodes packaged as container images. There are two kinds of Ansible execution environments:

  • Base, includes the following

    • fedora base image
    • ansible core
    • ansible collections: the following set of collections
      ansible.posix
      ansible.utils
      ansible.windows
  • Minimal, includes the following

    • fedora base image
    • ansible core

I have been the release manager for Ansible Execution Environments. After building the images, I run a series of tests to check whether the versions of the different components in the newly built images are correct. So I wrote eercheck to ease these test steps.

What is eercheck?

eercheck is a command line tool to test Ansible Community Execution Environment before release. It uses podman-py to connect and work with the podman container image, and Python unittest for testing the containers. The project is licensed under GPL-3.0-or-later.
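As a rough sketch of that combination (the socket path and image name below are assumptions for illustration, and the exact calls may differ from what eercheck itself uses):

import unittest

from podman import PodmanClient

SOCKET = "unix:///run/user/1000/podman/podman.sock"  # assumed rootless socket path
IMAGE = "ghcr.io/ansible-community/community-ee-base:latest"  # assumed image name

class TestExecutionEnvironmentImage(unittest.TestCase):

    def test_image_is_present(self):
        with PodmanClient(base_url=SOCKET) as client:
            # Collect every tag known to the local podman instance ..
            tags = [tag for image in client.images.list() for tag in (image.tags or [])]
            # .. and check that the image under test is among them.
            self.assertIn(IMAGE, tags)

if __name__ == "__main__":
    unittest.main()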

How to use eercheck?

Activate the virtual environment in the working directory.

python3 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt

Activate the podman socket.

systemctl start podman.socket --user

Update vars.json with the correct version numbers. Pick the correct versions of the Ansible Collections from the .deps file of the corresponding Ansible community package release. For example, for 9.4.0 the Collection versions can be found here. You can find the appropriate version of the Ansible Community Package here. The check needs to be carried out each time before the release of the Ansible Community Execution Environment.

Execute the program by giving the correct container image id.

./containertest.py image_id

Happy automating.

Categories: FLOSS Project Planets

Real Python: Python News: What's New From March 2024

Planet Python - Mon, 2024-04-08 10:00

While many people went hunting for Easter eggs, the Python community stayed active through March 2024. The free-threaded Python project reached a new milestone, and you can now experiment with disabling the GIL in your interpreter.

The Python Software Foundation does a great job supporting the language with limited resources. They’ve now announced a new position that will support users of PyPI. NumPy is an old workhorse in the data science space. The library is getting a big facelift, and the first release candidate of NumPy 2 is now available.

Dive in to learn more about last month’s most important Python news.

Free-Threaded Python Reaches an Important Milestone

Python’s global interpreter lock (GIL) has been part of the CPython implementation since the early days. The lock simplifies a lot of the code under the hood of the language, but also causes some issues with parallel processing.

Over the years, there have been many attempts to remove the GIL. However, until PEP 703 was accepted by the steering council last year, none had been successful.

The PEP describes how the GIL can be removed based on experimental work done by Sam Gross. It suggests that what’s now called free-threaded Python is activated through a build option. In time, this free-threaded Python is expected to become the default version of CPython, but for now, it’s only meant for testing and experiments.

When free-threaded Python is ready for bigger audiences, the GIL will still be enabled by default. You can then set an environment variable or add a command-line option to try out free-threaded Python:
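The article's snippet is not reproduced in this digest, but a minimal sketch looks roughly like this (the option names follow PEP 703 and the experimental CPython 3.13 builds, so treat the details as assumptions that may still change):

# Launching an interpreter built with --disable-gil (free-threaded build):
#
#   PYTHON_GIL=0 python script.py     # via an environment variable
#   python -X gil=0 script.py         # via a command-line option
#
# From inside Python, you can check whether the running build supports it:
import sysconfig

print(sysconfig.get_config_var("Py_GIL_DISABLED"))  # 1 on free-threaded builds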

Read the full article at https://realpython.com/python-news-march-2024/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Zato Blog: Integrating with Jira APIs

Planet Python - Mon, 2024-04-08 09:44
Integrating with Jira APIs

2024-04-08, by Dariusz Suchojad

Overview

Continuing in the series of articles about newest cloud connections in Zato 3.2, this episode covers Atlassian Jira from the perspective of invoking its APIs to build integrations between Jira and other systems.

There are essentially two use modes of integrations with Jira:

  1. Jira reacts to events taking place in your projects and invokes your endpoints accordingly via WebHooks. In this case, it is Jira that explicitly establishes connections with and sends requests to your APIs.
  2. Jira projects are queried periodically or as a consequence of events triggered by Jira using means other than WebHooks.

The first case is usually more straightforward to conceptualize - you create a WebHook in Jira, point it to your endpoint and Jira invokes it when a situation of interest arises, e.g. a new ticket is opened or updated. I will talk about this variant of integrations with Jira in a future instalment as the current one is about the other situation, when it is your systems that establish connections with Jira.

The reason why it is more practical to first speak about the second form is that, even if WebHooks are somewhat easier to reason about, they do come with their own ramifications.

To start off, assuming that you use the cloud-based version of Jira (e.g. https://example.atlassian.net), you need to have a publicly available endpoint for Jira to invoke through WebHooks. Very often, this is undesirable because the systems that you need to integrate with may be internal ones, never meant to be exposed to public networks.

Secondly, your endpoints need to have a TLS certificate signed by a public Certificate Authority and they need to be accessible on port 443. Again, both of these are something that most enterprise systems will not allow at all or it may take months or years to process such a change internally across the various corporate departments involved.

Lastly, even if a WebHook can be used, it is not always a given that the initial information that you receive in the request from a WebHook will already contain everything that you need in your particular integration service. Thus, you will still need a way to issue requests to Jira to look up details of a particular object, such as tickets, in this way reducing WebHooks to the role of initial triggers of an interaction with Jira, e.g. a WebHook invokes your endpoint, you have a ticket ID on input and then you invoke Jira back anyway to obtain all the details that you actually need in your business integration.

The end situation is that, although WebHooks are a useful concept that I will write about in a future article, they may very well not be sufficient for many integration use cases. That is why I start with integration methods that are alternative to WebHooks.

Alternatives to WebHooks

If, in our case, we cannot use WebHooks then what next? Two good approaches are:

  1. Scheduled jobs
  2. Reacting to emails (via IMAP)

Scheduled jobs will let you periodically inquire with Jira about the changes that you have not processed yet. For instance, with a job definition as below:

Now, the service configured for this job will be invoked once per minute to carry out any integration works required. For instance, it can get a list of tickets since the last time it ran, process each of them as required in your business context and update a database with information about what has been just done - the database can be based on Redis, MongoDB, SQL or anything else.

Integrations built around scheduled jobs make the most sense when you need to make periodic sweeps across large swaths of business data; these are the "Give me everything that changed in the last period" kind of interactions, where you do not know precisely how much data you are going to receive.

In the specific case of Jira tickets, though, an interesting alternative may be to combine scheduled jobs with IMAP connections:

The idea here is that when new tickets are opened, or when updates are made to existing ones, Jira will send out notifications to specific email addresses and we can take advantage of it.

For instance, you can tell Jira to CC or BCC an address such as zato@example.com. Now, Zato will still run a scheduled job but, instead of connecting with Jira directly, that job will look up unread emails in its inbox ("UNSEEN" per the relevant RFC).

Anything that is unread must be new since the last iteration, which means that we can process each such email from the inbox, in this way guaranteeing that we process only the latest updates and dispensing with the need for our own database of already processed tickets. We can extract the ticket ID or other details from the email, look up its details in Jira and then continue as needed.

All the details of how to work with IMAP emails are provided in the documentation but it would boil down to this:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):

    def handle(self):

        conn = self.email.imap.get('My Jira Inbox').conn

        for msg_id, msg in conn.get():

            # Process the message here ..
            process_message(msg.data)

            # .. and mark it as seen in IMAP.
            msg.mark_seen()

The natural question is - how would the "process_message" function extract details of a ticket from an email?

There are several ways:

  1. Each email has a subject of a fixed form - "[JIRA] (ABC-123) Here goes description". In this case, ABC-123 is the ticket ID.
  2. Each email will contain a summary, such as the one below, which can also be parsed:
Summary: Here goes description
Key: ABC-123
URL: https://example.atlassian.net/browse/ABC-123
Project: My Project
Issue Type: Improvement
Affects Versions: 1.3.17
Environment: Production
Reporter: Reporter Name
Assignee: Assignee Name
  3. Finally, each email will have an "X-Atl-Mail-Meta" header with interesting metadata that can also be parsed and extracted:
X-Atl-Mail-Meta: user_id="123456:12d80508-dcd0-42a2-a2cd-c07f230030e5", event_type="Issue Created", tenant="https://example.atlassian.net"

The first option is the most straightforward and likely the most convenient one - simply parse out the ticket ID and call Jira with that ID on input for all the other information about the ticket. How to do it exactly is presented in the next chapter.
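As a sketch of that parsing step (the regular expression below is an assumption about the subject format shown in point 1, not something taken from Jira's documentation):

import re

subject = '[JIRA] (ABC-123) Here goes description'

# Look for a Jira key of the form ABC-123 inside the parentheses.
match = re.search(r'\((?P<key>[A-Z][A-Z0-9]*-\d+)\)', subject)

if match:
    ticket_id = match.group('key')  # -> 'ABC-123'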

Regardless of how we parse the emails, the important part is that we know that we invoke Jira only when there are new or updated tickets - otherwise there would not have been any new emails to process. Moreover, because it is our side that invokes Jira, we do not expose our internal system to the public network directly.

However, from the perspective of the overall security architecture, email is still part of the attack surface so we need to make sure that we read and parse emails with that in view. In other words, regardless of whether it is Jira invoking us or our reading emails from Jira, all the usual security precautions regarding API integrations and accepting input from external resources, all that still holds and needs to be part of the design of the integration workflow.

Creating Jira connections

The above presented the ways in which we can arrive at the step of when we invoke Jira and now we are ready to actually do it.

As with other types of connections, Jira connections are created in Zato Dashboard, as below. Note that you use the email address of a user on whose behalf you connect to Jira but the only other credential is that user's API token previously generated in Jira, not the user's password.

Invoking Jira

With a Jira connection in place, we can now create a Python API service. In this case, we accept a ticket ID on input (called "a key" in Jira) and we return a few details about the ticket to our caller.

This is the kind of a service that could be invoked from a service that is triggered by a scheduled job. That is, we would separate the tasks, one service would be responsible for opening IMAP inboxes and parsing emails and the one below would be responsible for communication with Jira.

Thanks to this loose coupling, everything becomes much more reusable. That the services can be changed independently is only one part; more importantly, with such separation both of them can be reused by future services as well, without tying them rigidly to this one integration alone.

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.common.typing_ import cast_, dictnone
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.jira_ import JiraClient

# ###########################################################################

@dataclass(init=False)
class GetTicketDetailsRequest(Model):
    key: str

@dataclass(init=False)
class GetTicketDetailsResponse(Model):
    assigned_to: str = ''
    progress_info: dictnone = None

# ###########################################################################

class GetTicketDetails(Service):

    class SimpleIO:
        input = GetTicketDetailsRequest
        output = GetTicketDetailsResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: GetTicketDetailsRequest

        # .. create a reference to our connection definition ..
        jira = self.cloud.jira['My Jira Connection']

        # .. obtain a client to Jira ..
        with jira.conn.client() as client:

            # Cast to enable code completion
            client = cast_('JiraClient', client)

            # Get details of a ticket (issue) from Jira
            ticket = client.get_issue(input.key)

            # Observe that ticket may be None (e.g. invalid key), hence this 'if' guard ..
            if ticket:

                # .. build a shortcut reference to all the fields in the ticket ..
                fields = ticket['fields']

                # .. build our response object ..
                response = GetTicketDetailsResponse()
                response.assigned_to = fields['assignee']['emailAddress']
                response.progress_info = fields['progress']

                # .. and return the response to our caller.
                self.response.payload = response

# ###########################################################################

Creating a REST channel and testing it

The last remaining part is a REST channel to invoke our service through. We will provide the ticket ID (key) on input and the service will reply with what was found in Jira for that ticket.

We are now ready for the final step - we invoke the channel, which invokes the service which communicates with Jira, transforming the response from Jira to the output that we need:

$ curl localhost:17010/jira1 -d '{"key":"ABC-123"}'
{
    "assigned_to": "zato@example.com",
    "progress_info": {
        "progress": 10,
        "total": 30
    }
}
$

And this is everything for today - just remember that this is just one way of integrating with Jira. The other one, using WebHooks, is something that I will go into in one of the future articles.

More blog posts
Categories: FLOSS Project Planets

The Drop Times: Small Strides to Dramatic Leaps

Planet Drupal - Mon, 2024-04-08 09:29

Dear Readers,

The DropTimes [TDT], as you know, is a news website with the vision of contributing to the growth of a vibrant community of users and contributors around Drupal through the process of covering and promoting everything happening around Drupal. To borrow our founder, Anoop John's words, 

"What we are doing as TDT is not just running a news website but we are trying to mobilize a whole community of users toward revitalizing the community."

 We are working towards improving the technology, driving the contributions back to the Drupal community, and ultimately contributing back to society at large. We are driving towards something bigger than all of us. 

The growth of such a venture certainly will be slow, like a writer adding a few words to their novel each day, a runner slightly extending their distance each week, or a business steadily enhancing its customer service. These small steps may seem insignificant in isolation, but they compound into significant advancements over months and years. These seemingly minor improvements compound over time, accumulating smaller strides and preparing for the dramatic leaps. The DropTimes has, day by day, accumulated the strength to make bigger leaps.

In digital innovation, embracing new directions and challenging conventional norms often leads to remarkable discoveries. Just as the Drupal community continuously strives to push boundaries and advocate for the principles of openness and community-driven development, we are likewise urged to delve into diverse domains with immense potential for creativity and impact. The DropTimes model is one based on resilience, fosters a culture of continuous learning, and ultimately transforms modest efforts into significant accomplishments. This method underscores the importance of the journey, teaching patience and discipline and proving that steady progress can lead to remarkable success.

While grateful to all the readers and loyal supporters of TDT, we seek your continued support in building something impactful and helping contribute to Drupal and open-source. With that, let's revisit the highlights from last week's coverage.

The DropTimes Community Strategist, Tarun Udayaraj, had an opportunity to converse with Tim Doyle, the first-ever Chief Executive Officer and the appointed leader of the Drupal Association. In this exclusive interview, the CEO of the Drupal Association shares his perspectives on the future of Drupal and the open-source community at large. Read the full interview here.

Preston So is a dynamic figure in software development. He showcases a rich career spanning diverse roles within the tech industry and emphasizes a leadership philosophy rooted in empathy and adaptability. Beyond his professional endeavors, Preston's commitment to the Drupal community is notable, having been a significant part of it since 2007. Learn more about this multifaceted individual and his contributions to open-source with an interview by Elma John, a sub-editor at TDT, "Navigating the Currents of Change: The Multidimensional Journey of Preston So."

In an interview with Kazima Abbas, sub-editor of The DropTimes, Adrian Ababei, the Senior Drupal Architect and CEO at OPTASY, shares his extensive experience in web development and Drupal architecture. He discusses overseeing full-cycle project management, conducting technology research, and leading a team of developers at OPTASY.

The third part of the hit Page Builder series by André Angelantoni, Senior Drupal Architect at HeroDevs, came out last week. "Drupal Page Builders—Part 3: Other Alternative Solutions" discusses alternatives to Paragraphs and Layout Builder. This segment navigates through a variety of server-side rendered page generation solutions, offering a closer look at innovative modules that provide a broader range of page-building capabilities beyond Drupal's native tools.

March has ended, and TDT has successfully concluded its "Women in Drupal" campaign. As the series ends with the third part of Inspiring Inclusion: Celebrating Women in Drupal, The DropTimes reflects on the powerful narratives and insightful messages shared by women Drupalers from around the globe. 

In exciting news, TDT has been announced as the Media Partner for DrupalCon Barcelona 2024 and Drupal Iberia 2024, a testament to the platform's growth and resilience. We are also seeking volunteers from the members of the Drupal Community to help us cover the upcoming DrupalCon Portland 2024.

The Regular Registration window for DrupalCon Portland is now open. Registration for DrupalCon Portland will unlock an additional $100 discount on your ticket for DrupalCon North America 2025, in addition to the Early Bird discount during the Early Bird registration window. 

Every week, we will have some events somewhere around the world. A complete list of events for the week is available here.

In other news, Drupal 10.2.5 is now available, featuring a collection of bug fixes. This patch release addresses various issues to ensure stability for production sites. Janez Urevc has reported a 10% improvement in Drupal core test suite runtime, attributed to Gander, a performance testing framework part of Drupal since version 10.2. The latest WebAIM Million report reveals insights into web accessibility, with Drupal holding strong in the CMS rankings. Discover the subtle shifts in WCAG 2 compliance and the strategic decision to exclude subdomains for improved analysis. 

In another interesting update, Mufeed VH, a 21-year-old from Kerala, India, and founder of Lyminal and Stition.AI, has created Devika, an open-source AI software similar to Devin. Devika, conceived initially as a joke, can understand instructions, conduct research, and write code autonomously.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

EuroPython: EuroPython April 2024 Newsletter

Planet Python - Mon, 2024-04-08 06:42

Hello, Python enthusiasts! 👋

Guess what? We're on the home stretch now, with less than 100 days left until we all rendezvous in the enchanting city of Prague for EuroPython 2024!

Only 91 days left until EuroPython 2024!

Can you feel the excitement tingling in your Pythonic veins?

Let’s look at what's been cooking in the EuroPython pot lately. 🪄🍜

📣 Programme

The curtains have officially closed on the EuroPython 2024 Call for Proposals! 🎬

We've hit a record with an incredible 627 submissions this year!! 🎉

Thank you to each and every one of you brave souls who tossed your hats into the ring! 🎩 Your willingness to share your ideas has truly made this a memorable journey.

🗃️ Community Voting

EuroPython 2024 Community Voting was a blast!

The Community Voting is composed of all past EuroPython attendees and prospective speakers between 2015 and 2024.

We had 297 people contributing, making EuroPython more responsive to the community’s choices. 😎 We can’t thank you enough for helping us hear the voice of the Community.

Now, our wonderful programme crew, along with the team of reviewers and community voters, has been working hard to create the schedule for the conference! 📋✨

💰 Sponsor EuroPython 2024

EuroPython is a volunteer-run, non-profit conference. All sponsor support goes to cover the cost of running the Europython Conference and supporting the community with Grants and Financial Aid.

If you want to support EuroPython and its efforts to make the event accessible to everyone, please consider sponsoring (or asking your employer to sponsor).

Sponsoring EuroPython guarantees you highly targeted visibility and the opportunity to present your company to one of the largest and most diverse Python communities in Europe and beyond!

There are various sponsor tiers and some have limited slots available. This year, besides our main packages, we offer add-ons as optional extras. For more information, check out our Sponsorship brochure.

🐦 We have an Early Bird 10% discount for companies that sign up by April 15th. 🐦

More information at: https://ep2024.europython.eu/sponsor 🫂 Contact us at sponsoring@europython.eu

🎟️ Ticket Sales

The tickets are now open to purchase, and there is a variety of options:

  • Conference Tickets: access to Conference and Sprint Weekends.
  • Tutorial Tickets: access to the Workshop/Tutorial Days and Sprint Weekend (no access to the main conference).
  • Combined Tickets: access to everything during the whole seven-day, i.e. workshops, conference talks and sprint weekend!

We also offer different payment tiers designed to meet each attendee's needs. They are:

Business Tickets: for companies and employees funded by their companies
  • Tutorial Only Business (Net price €400.00 + 21% VAT)
  • Conference Only Business (Net price €500.00 + 21% VAT)
  • Late Bird (Net price €750.00 + 21% VAT)
  • Combined Business (Net price €800.00 + 21% VAT)
  • Late Bird (Net price €1200.00 + 21% VAT)
Personal Tickets: for individuals
  • Tutorial Only Personal (€200.00 incl. 21%VAT)
  • Conference Only Personal (€300.00 incl. 21% VAT)
  • Late Bird (€450.00 incl. 21% VAT)
  • Combined Personal (€450.00 incl. 21% VAT)
  • Late Bird (€675.00 incl. 21% VAT)
Education Tickets: for students and active teachers (Educational ID is required at registration)
  • Conference Only Education (€135.00 incl. 21% VAT)
  • Tutorial Only Education (€100.00 incl. 21% VAT)
  • Combined Education (€210.00 incl. 21% VAT)
Fun fact: Czechia has been ranked among the world's top 20 happiest countries recently.

Seize the chance to grab an EP24 ticket and connect with the delightful community of Pythonistas and happy locals this summer! ☀️

Need more information regarding tickets? Please visit https://ep2024.europython.eu/tickets or contact us at helpdesk@europython.eu.

⚖️ Visa Application

If you require a visa to attend EuroPython 2024 in Prague, now is the time to start preparing.

The first step is to verify if you require a visa to travel to the Czech Republic.

The Czech Republic is a part of the EU and the Schengen Area. If you already have a valid Schengen visa, you may NOT need to apply for a Czech visa. If you are uncertain, please check this website and consult your local consular office or embassy. 🏫

If you need a visa to attend EuroPython, you can lodge a visa application for Short Stay (C), up to 90 days, for the purpose of “Business /Conference”. We recommend you do this as soon as possible.

Please, make sure you read all the visa pages carefully and prepare all the required documents before making your application. The EuroPython organisers are not able nor qualified to give visa advice.

However, we are more than happy to help with the visa support letter issued by the EuroPython Society. Every registered attendee can request one; we only issue visa support letters to confirmed attendees. We kindly ask you to purchase your ticket before filling in the request form.

For more information, please check https://ep2024.europython.eu/visa or contact us at visa@europython.eu. ✈️

💶 Financial Aid

We are also pleased to announce our financial aid, sponsored by the EuroPython Society. The goal is to make the conference open to everyone, including those in need of financial assistance.

Submissions for the first round of our financial aid programme are open until April 21st 2024.

There are three types of grants including:

  • Free Ticket Voucher Grant
  • Travel/Accommodation Grant (reimbursement of travel costs up to €400.)
  • Visa Application Fee Grant (up to €80)
⏰ FinAid timeline

If you apply for the first round and do not get selected, you will automatically be considered for the second round. No need to reapply.

8 March 2024: Applications open
21 April 2024: Deadline for submitting first-round applications
8 May 2024: First round of grant notifications
12 May 2024: Deadline to accept a first-round grant
19 May 2024: Deadline for submitting second-round applications
15 June 2024: Second round of grant notifications
12 June 2024: Deadline to accept a second-round grant
21 July 2024: Deadline for submitting receipts/invoices

Visit https://europython.eu/finaid for information on eligibility and application procedures for Financial Aid grants.

🎤 Public Speaking Workshop for Mentees

We are excited to announce that this year’s Speaker Mentorship Programme comes with an extra package!

We have selected a limited number of mentees for a 5-week interactive course covering the basics of a presentation from start to finish.

The main facilitator is the seasoned speaker Cheuk Ting Ho and the participants will end the course by delivering a talk covering all they have learned.

We look forward to the amazing talks the workshop participants will give us. 🙌

🐍 Upcoming Events in Europe

Here are some upcoming events happening in Europe soon.

Czech Open Source Policy Forum: Apr 24, 2024 (In-Person)

Interested in open source and happen to be near Brno/Czech Republic in April? Join the Czech Open Source Policy Forum and have the chance to celebrate the launch of the Czech Republic's first Open Source Policy Office (OSPO). More info at: https://pretix.eu/om/czospf2024/

OSSCi Prague Meetup: May 16, 2024 (In-Person)

Join the forefront of innovation at OSSci Prague Meetup, where open source meets science. Call for Speakers is open!  https://pydata.cz/ossci-cfs.html

PyCon DE & PyData Berlin: April 22-24 2024

Dive into three days of Python and PyData excellence at Pycon DE! Visit https://2024.pycon.de/ for details.

PyCon Italy: May 22-25 2024

PyCon Italia 2024 will happen in Florence. The schedule is online and you can check it out at their nice website: https://2024.pycon.it/

GeoPython 2024: May 27-29, 2024

GeoPython 2024 will happen in Basel, Switzerland. For more information visit their website: https://2024.geopython.net/

🤭 Py.Jokes

Can you imagine our newsletter without joy and laughter? We can’t. 😾🙅‍♀️❌ Here’s this month's PyJoke:

pip install pyjokes

import pyjokes
print(pyjokes.get_joke())

How many programmers does it take to change a lightbulb? None, they just make darkness a standard!

🐣 See You All Next Month

Before saying goodbye, thank you so much for reading this far.

We can’t wait to reunite with all you amazing people in beautiful Prague again.

Let me remind you how pretty Prague is during summer. 🌺🌼🌺

Rozkvetlá jarní Praha ("Blossoming spring Prague"), March 2024, by Radoslav Vnenčák

Remember to take good care of yourselves, stay hydrated and mind your posture!

Oh, and don’t forget to force encourage your friends to join us at EuroPython 2024! 😌

It’s time again to make new Python memories together!

Looking forward to meeting you all here next month!

With much joy and excitement,

EuroPython 2024 Team 🤗

Categories: FLOSS Project Planets

LN Webworks: How To Create Drupal Custom Entity: Step By Step Guide

Planet Drupal - Mon, 2024-04-08 06:15

Custom Entities are a powerful tool for building complex web applications and content management systems. Entities in Drupal provide a standardized way to store and manipulate data. Custom entity types in Drupal empower developers to define custom functionality, enhance performance, and maintain full control over data structures, supplementing the numerous built-in entity types.

Here are the steps for creating a custom entity.

Drupal 10 Custom Entity Development in easy steps:

Step 1: Create a custom folder for your module.


Choose a short name or machine name for your module.

Categories: FLOSS Project Planets

Golems GABB: Innovative Methods of Integrating Drupal with Other Systems to Expand Your Website's Capabilities

Planet Drupal - Mon, 2024-04-08 05:42
Innovative Methods of Integrating Drupal with Other Systems to Expand Your Website's Capabilities

Editor Mon, 04/08/2024 - 12:42

Given the variety of tools and techniques Drupal offers, it is essential to assess your business needs first. AI, VR, AR, blockchain, and other technologies will keep reshaping industrial processes, so your task is to ensure the overall scalability and versatility of your Drupal site. 
Businesses of any calibre won’t achieve excellent results if they don’t align the server-side and client-side aspects of website development. Expanding website capabilities with Drupal integration will help you keep the momentum and improve online experiences for your audiences. 
The demand for mobile-first designs, as well as emerging technologies and e-commerce growth, make surviving in the niche without implementing innovative methods of performance and communication impossible. Stay tuned to explore the palette of tools and techniques to level up the standards of website architecture and efficiency for your business.

Categories: FLOSS Project Planets

Python Insider: Python 3.11.9 is now available

Planet Python - Mon, 2024-04-08 00:50

  


 This is the last bug fix release of Python 3.11 

This is the ninth maintenance release of Python 3.11

Python 3.11.9 is the latest maintenance release of the Python programming language, containing bug fixes for the 3.11 series. Get it here:

https://www.python.org/downloads/release/python-3119/

Major new features of the 3.11 series, compared to 3.10

Among the major new features and changes:

  • PEP 657 – Include Fine-Grained Error Locations in Tracebacks
  • PEP 654 – Exception Groups and except* (see the short example after this list)
  • PEP 673 – Self Type
  • PEP 646 – Variadic Generics
  • PEP 680 – tomllib: Support for Parsing TOML in the Standard Library
  • PEP 675 – Arbitrary Literal String Type
  • PEP 655 – Marking individual TypedDict items as required or potentially-missing
  • bpo-46752 – Introduce task groups to asyncio
  • PEP 681 – Data Class Transforms
  • bpo-433030 – Atomic grouping ((?>…)) and possessive quantifiers (*+, ++, ?+, {m,n}+) are now supported in regular expressions.
  • The Faster CPython Project is already yielding some exciting results. Python 3.11 is up to 10-60% faster than Python 3.10. On average, we measured a 1.22x speedup on the standard benchmark suite. See Faster CPython for details.
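As a quick illustration of one item from this list, here is a minimal sketch of PEP 654 exception groups and except* (the function and messages are made up):

def do_work():
    # Bundle several unrelated failures into a single exception group.
    raise ExceptionGroup(
        "several things went wrong",
        [ValueError("bad value"), TypeError("bad type")],
    )

try:
    do_work()
except* ValueError as group:
    print("value errors:", group.exceptions)
except* TypeError as group:
    print("type errors:", group.exceptions)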

More resources

And now for something completely different

A kugelblitz is a theoretical astrophysical object predicted by general relativity. It is a concentration of heat, light or radiation so intense that its energy forms an event horizon and becomes self-trapped. In other words, if enough radiation is aimed into a region of space, the concentration of energy can warp spacetime so much that it creates a black hole. This would be a black hole whose original mass–energy was in the form of radiant energy rather than matter, however as soon as it forms, it is indistinguishable from an ordinary black hole.

We hope you enjoy the new releases! Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.

https://www.python.org/psf/

Your friendly release team,
Ned Deily @nad 
Steve Dower @steve.dower 
Pablo Galindo Salgado @pablogsal

Categories: FLOSS Project Planets
