Feeds

The Drop Times: DrupalCon Barcelona 2024 Day 1: Awards, Driesnote, and Community

Planet Drupal - Wed, 2024-09-25 01:20
DrupalCon Barcelona 2024 kicked off with contribution workshops, the Women in Drupal award ceremony, and Dries Buytaert’s 40th State of Drupal address. Key announcements included the upcoming Drupal CMS 1.0 launch, AI integration, and the Experience Builder initiative.
Categories: FLOSS Project Planets

Specbee: Why You Can’t Ignore Google Crawlability, Indexability & Mobile-First Indexing in SEO

Planet Drupal - Wed, 2024-09-25 00:52
Do you ever feel like you’re living ahead of your time when you watch those world-conquering AI movies? Such films resonate because a similar reality no longer feels far away. The world today lives wirelessly and digitally, and for your business to prosper online, mastering SEO is a necessity. To achieve optimal visibility, you need to understand three closely related concepts: crawlability, indexability, and mobile-first indexing. Why all three? They are codependent, and together they form the most crucial pillars determining how search engines discover, understand, and rank your website. Your goal is to make your website stand out in an ever more competitive online space; your tool is SEO. In this blog, we focus on the what and why of all three concepts and the best practices to help your website rank higher in Google search results.

Introduction to Crawlability, Indexability, and Mobile-First Indexing

The three concepts are closely connected, so the differences between them can be confusing without a closer look. Crawlability refers to how easily search engines like Google can discover your web pages. In simple words, Google discovers web pages by crawling them with computer programs called web crawlers or bots. Indexing is the process that follows crawling: indexability refers to whether search engines can add your pages to their index. Google indexes a page by analyzing its content and adding it to a database of billions of pages. Mobile-first indexing means that Google’s crawling process prioritizes the mobile version of your website content over the desktop version to inform rankings.

Why These Concepts Matter in Today’s SEO Landscape

With the continuous evolution of Google and other search engines, your SEO strategies need to be top-notch. With Google prioritizing mobile traffic through its mobile-first initiative, optimizing your website for both crawlability and indexability is the need of the hour. Without proper measures in these areas, your website may be heading toward stunted organic growth, and you don’t want that to happen. So read on till the end.

What is Crawlability in SEO?

Crawlability is the ability of search engines to find and access your web content, a job performed by crawler bots. Google’s crawl bots are computer programs that analyze websites and collect information about them. This information helps Google index websites and show relevant search results. The crawl bots follow a list of web page URLs based on previous crawls and sitemaps. As they visit a page, they detect the links on it and add those links to the list of pages to crawl. New pages then become eligible to appear in SERPs, and changes to existing pages are picked up; dead links are also spotted and used to update the Google index. It’s therefore important to know how to get Google to crawl your site: if your website is not crawlable, some of your web pages won’t be indexed, which hurts your website’s visibility and organic traffic and, in turn, your site’s SEO numbers.

Key Factors Affecting Crawlability

While crawlability is crucial for optimizing your site’s visibility, several factors impact it.
Some of the most crucial ones are described below:

  • Site structure – Your site structure plays a pivotal role in crawlability. If a page isn’t linked from anywhere, it won’t be easily accessible to crawl bots.
  • Internal links – Web crawlers follow links while crawling your site, and they only find pages that are linked from other content. To get Google to crawl your site, make sure you have an internal linking structure across your website so that crawl bots can easily reach every page in your site structure.
  • Robots.txt and meta tags – These two parameters are crucial to making the most of your crawl budget. Avoid directives that prevent crawlers from crawling your pages, and check your meta tags and robots.txt file for common errors (see the sketch after this list):
      ◦ <meta name="robots" content="noindex"/>: this robots meta tag directive blocks your page from being indexed.
      ◦ <meta name="robots" content="nofollow">: the page can still be crawled, but crawlers are told not to follow the links on it.
      ◦ User-agent: * Disallow: /: this robots.txt rule blocks crawlers from accessing any page on your site.
  • Crawl budget – Google offers only limited guidance on crawl budget optimization, because for most sites its crawl bots finish the job without reaching their limit. But B2B and e-commerce sites with several hundred landing pages can run out of crawl budget, so some pages never make it into Google’s index.
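To make the robots.txt point concrete, here is a minimal sketch (not from the original article; the URLs are placeholders) that uses only Python’s standard library to check whether a given URL is blocked for Googlebot:

    import urllib.robotparser

    # Hypothetical URLs; replace them with your own site and page.
    site = "https://www.example.com"
    page = site + "/blog/some-post"

    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(site + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file

    # can_fetch() returns False when a Disallow rule blocks this user agent.
    if parser.can_fetch("Googlebot", page):
        print("Googlebot is allowed to crawl", page)
    else:
        print("Googlebot is blocked from crawling", page)
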
Tools to Check and Improve Crawlability

Various tools in the digital realm claim to improve crawlability. Let me highlight a few of the most popular ones:

Google Search Console

What better tool to analyze Google crawlability than Google’s own? How often does Google crawl a site? Typically somewhere between once every 3 days and once every 4 weeks, though there are no fixed intervals. Google Search Console helps you find and fix crawl errors, and lets you analyze page traffic, keyword data, traffic broken down by device, and page rankings in SERP results. It also provides insights on how to rank higher in Google search results in line with Google’s algorithms, and lets you submit sitemaps and get comprehensive crawl, index, and related information directly from the Google index.

Screaming Frog SEO Spider

This is the tool to use when you want to take charge of two essential jobs at once: crawling your website and assisting with SEO audits. This web crawler helps increase the chances of your web content appearing in search engine results. Analyze page titles and meta descriptions and discover duplicate pages – all actions that impact your site’s SEO. The free version can crawl up to 500 pages on a website and can tell you whether any pages are redirected or whether your site has the dreaded 404 errors. If you need more detailed features, the paid version brings many to the table, including JavaScript rendering, Google Analytics integration, custom source code search, and more.

XML Sitemaps

A roadmap for search engine crawlers, XML sitemaps ensure crawlers reach the important pages that need to be crawled and indexed. For better crawlability, make sure the following requirements are met when creating sitemaps:

  • Choose canonical URLs: identify and include only your preferred (canonical) URLs in the sitemap, to avoid duplicates when the same content is accessible via multiple URLs.
  • CMS-generated sitemaps: most modern CMS platforms can generate sitemaps automatically. On platforms like Drupal, the XML sitemap module lets you create sitemaps that adhere to the sitemaps.org guidelines. Check your CMS documentation for specific instructions on sitemap creation.
  • Manual sitemaps for small sites: for websites with fewer than a few dozen URLs, create a sitemap manually in a text editor, following standard sitemap syntax.
  • Automatic sitemap generation for large sites: for larger websites, use tools or your website’s database to generate sitemaps automatically, and split the sitemap into smaller files if necessary (see the sketch below).
  • Submit the sitemap to Google: submit it using Google Search Console or the Search Console API. You can also reference the sitemap URL in your robots.txt file for easy discovery.

Bear in mind that submitting a sitemap doesn’t ensure Google will crawl every URL. It serves as a hint to Google but does not guarantee crawling or indexing.
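As a rough illustration of automatic sitemap generation, here is a minimal Python sketch (an illustration only, not the article’s method; the URLs are placeholders and would normally come from your CMS or database):

    import xml.etree.ElementTree as ET

    # Placeholder canonical URLs; in practice they come from your CMS or database.
    urls = [
        "https://www.example.com/",
        "https://www.example.com/blog/",
        "https://www.example.com/contact/",
    ]

    # Build a sitemaps.org-compliant <urlset> document.
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for u in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = u

    # Write sitemap.xml; split into several files if you exceed 50,000 URLs.
    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
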
What is Google Indexing in SEO?

Indexing is the process of adding pages of a website to a search engine’s database so they can be displayed in search results when relevant. Search engines like Google crawl websites, follow the links on them to find new pages, and analyze their content to determine its quality and relevance. When they find quality, relevant content, they add those pages to their index. Indexing is one of the main processes of the Google search engine: it allows Google to discover and analyze web content and display it as search results to users, which in turn supports higher SEO rankings.

Factors That Influence Indexability

Given how important indexability is for search engines to work properly, it’s no surprise that several factors can help or hinder it. Below are the common factors that influence the indexability of website pages:

  • Content quality and relevance – Search engine bots favor high-quality content. Well-written, original, informative pages with clear headings and an organized structure that are relevant to users make it into the index.
  • Canonical tags and duplicate content – Search engines avoid indexing duplicate content, and duplicates cause confusion during Google page indexing. Canonical tags are the countermeasure: they help search engines distinguish the original from duplicates and index the relevant pages.
  • Technical issues – Broken links, slow page load times, noindex tags or directives, redirect loops, and similar problems can hurt SEO by preventing your content from being crawled or indexed.

Using Google Search Console for Indexing Your Site

Various tools let you check whether your web pages are crawlable or indexed. Google being the dominant search engine, it’s highly recommended to use Google’s own indexing tools to meet your SEO goals. Let me walk you through a brief guide to requesting indexing for your website through Google Search Console:

  • Verify website ownership: confirm ownership by adding an HTML file or meta tag.
  • Submit your sitemap: submit your sitemap URL in the Sitemaps section to help Google discover your significant pages faster.
  • Use the URL Inspection tool: submit specific pages for re-crawling or direct indexing requests.
  • Monitor coverage issues: use the "Coverage" report to identify and fix indexing issues such as blocked pages.
  • Improve internal linking: ensure proper internal links for better crawling.
  • Enhance content quality: avoid low-quality or duplicate content to improve the chances of being indexed.

Fixing Common Google Indexing Issues

Here are some common Google indexing issues (overlapping with, and in addition to, the crawlability issues above) that can affect your website’s visibility and performance in SERPs:

Low-quality content – Google ranks original, relevant content highest in user search results. If your web content is low quality, scraped, or stuffed with keywords, chances are Google won’t index those pages. Cater to the needs of your users and provide unique, relevant information aligned with the Webmaster Guidelines.

Not mobile-friendly – Google prioritizes mobile-friendliness while crawling web pages; if your content isn’t mobile-friendly, Google is likely not to index it. Implement an adaptable design, improve load times, compress images, get rid of intrusive pop-ups, and keep finger reach in mind when designing your website.

Missing sitemap – Without proper sitemaps it’s much harder to get Google to crawl your site. Use XML sitemaps (rather than HTML ones) to enhance your website’s performance in search engines, and once you’ve created the sitemap, submit it via Google Search Console or reference it in your robots.txt file.

Poor site structure – Websites with poor navigability are often missed by crawl bots, and poor structure negatively impacts both crawlability and indexability. Use a clear website structure and good internal linking for SEO purposes.

Redirect loops – Redirect loops block Google crawl bots from indexing your pages correctly, because the bots get stuck in the loop and cannot crawl your website. Check your .htaccess file or HTML sources for unintentional or incorrect redirects, and use 301 redirects for permanent moves and 302 redirects for temporary ones to prevent this issue.

What is the difference between crawlability and indexability?

Since the two concepts are connected, people often mistake one for the other. Crawling is the process of search engines accessing and reading web pages; indexing is the process of adding pages to the Google index. Indexing comes after crawling, although in some cases search engine bots may index a page without fully reading its content.

What is Mobile-First Indexing?

As the name suggests, mobile-first indexing means Google’s crawl bots prioritize indexing the mobile version of your website content over the desktop version, and this mobile-first index is used to inform rankings. If your content is not optimized for mobile devices, your SEO rankings can suffer, even for desktop users. Mobile-friendly content gives your website an edge in being recognized for relevance and accessibility, improving visibility in search results. Mobile-first indexing is highly prioritized today to meet the expectations of the growing number of mobile users. For a seamless mobile browsing experience, it is crucial to have relevant, navigable content that reduces bounce rates and improves the overall SEO performance of your website.

The Impact of Mobile-First Indexing on SEO

Mobile-first indexing is a necessity today.
It directly affects crawlability and indexability, both essential for SEO. Optimize your website for mobile devices using an adaptable design: you can serve mobile devices through a responsive web design with a mobile-friendly theme, or through a completely separate mobile website. Alternatively, you can build app-based solutions or AMP pages to enhance your website’s performance on mobile devices. Small businesses, e-commerce and B2B websites, and enterprise-level organizations are all heavily affected by the shift to mobile-first indexing, and it’s high time for them to develop mobile-friendly websites that meet their customers’ expectations. SEO professionals have also seen a surge in demand for mobile-responsive website design and development services. Aligning your SEO strategy with Google’s algorithm is therefore key to standing alongside your competitors in this mobile-first landscape.

Are mobile-first indexing and mobile usability the same? Definitely not. A website may or may not be usable on mobile devices, even though its content is indexed mobile-first.

Best Practices for Optimizing Crawlability and Indexability in a Mobile-First World

In a mobile-first world, it is important to optimize your web pages for greater visibility and better performance in search engine results. Below are a few best practices, supported by Google and other search engines, that can help improve the crawlability and indexability of your website with Google’s mobile-first indexing as the priority:

  • Mobile-friendliness: an effective responsive design serves the same HTML code on the same URL regardless of the device used, and your site should adapt to the screen size automatically. According to Google, responsive web designs work best for ease of implementation and maintenance.
  • Navigation: mobile users often operate their devices with one hand, so design your website with fluid, finger-friendly navigation. Implement larger buttons and menus, add ample spacing, and make interactive elements easy to tap. A pro tip: maintain a minimum tap target size of 48 x 48 px.
  • Content consistency: use the same meta robots tags for the desktop and mobile versions of your website, and ensure that your robots.txt file does not block access to the main content, especially on mobile devices. With mobile-first indexing, Google’s crawl bots primarily crawl and index the mobile version of your content.
  • Page speed optimization: avoid lazy-loading critical content that crawlers cannot reach. Slow loading times hurt the experience for mobile users, so optimize your images without compromising quality, minimize HTTP requests, and leverage browser caching. A CDN can help ensure fast loading times for your content across the globe.

Tools and Techniques for Ongoing Optimization

With mobile-first indexing in mind, it makes sense to keep checking your site for mobile-friendliness as trends evolve. Here are a couple of ways to do so:

Use the Mobile-Friendly Test in Google Search Console

Open your account in Google Search Console and click “Mobile-Friendly Test” in the Enhancements section. Enter the URL to be analyzed for mobile-friendliness and click the “Test URL” button.
Review the generated report: it tells you whether or not the page is mobile-friendly and offers suggestions for addressing any issues found. If the page is not mobile-friendly, the issues may include text that is too small, appearance problems, content that is too wide for the screen, or other related usability problems. Once you’ve addressed these issues, re-test the page using the Mobile-Friendly Test tool, and run regular audits like this in Google Search Console to ensure the mobile-friendliness of pages across your site.

Optimize Core Web Vitals

Pay attention to page speed, interactivity, and visual stability to align with Google’s Core Web Vitals. One way to do this is with Google’s PageSpeed Insights tool, which examines web pages and reports on both the mobile and desktop versions of your content, along with improvement suggestions. Simply enter the URL in the tool and click “Analyze” to generate a report that checks your website’s mobile-friendliness with a Core Web Vitals assessment. Better scores mean greater mobile responsiveness.

Common Pitfalls to Avoid in Mobile-First Indexing

  • Ignoring the mobile version of your site: make sure all content, including text and media, is accessible not only on the desktop version of your site but also on mobile devices, to maintain content parity and SEO rankings.
  • Overlooking technical SEO on mobile: mobile content also needs to be indexable for SEO purposes. Ensure that your mobile and desktop sites use the same structured data so that search engines can properly analyze your content.
  • Slow mobile page speed: compress images, leverage browser caching, and minimize code to improve page loading speed on your mobile site and deliver a seamless experience for mobile users.

Final thoughts

Crawlability and indexability are two significant factors that shape the SEO performance of your website. With Google prioritizing mobile-first indexing today, it’s time to roll up your sleeves and build a mobile-focused SEO strategy that aligns with Google’s algorithm and highlights your brand identity. With AI and machine learning increasingly shaping SEO, optimizing for mobile devices isn’t just about adapting to algorithms; it’s about meeting user expectations and staying competitive. A mobile-friendly site with fast loading times, responsive design, and consistent content will improve both search rankings and user experience. As mobile usage grows, embracing mobile-first strategies ensures businesses stay ahead, enhancing both visibility and customer satisfaction in the evolving digital landscape.
Categories: FLOSS Project Planets

Krita 5.2.5 Released!

Planet KDE - Tue, 2024-09-24 20:00

Krita 5.2.5 is here, bringing over 50 bugfixes since 5.2.3 (5.2.4 was a Windows-specific hotfix). Major fixes have been done to audio playback, transform mask calculation and more!

In addition to the core team, special thanks to Maciej Jesionowski, Ralek Kolemios, Freya Lupen, Michael Genda, Rasyuqa A. H., Simon Ra and Sam James for a variety of fixes!

Changes since 5.2.3:
  • Correctly adjust audio playback when animation framerate is changed.
  • Fix no layer being activated on opening a document (Bug 490375)
  • [mlt] Fix incorrect usage of get_frame API (Bug 489146)
  • Fix forbidden cursor blinking when switching tools with shortcuts (Bug 490255)
  • Fix conflicts between mouse and touch actions requested concurrently (Bug 489537)
  • Only check for the presence of bt2020PQColorSpace on Windows (Bug 490301)
  • Run macdeployqt after searching for missing libs (Bug 490181)
  • Fix crash when deleting composition
  • Fix scaling down image with 1px grid spacing (Bug 490898)
  • Fix layer activation issue when opening multiple documents (Bug 490843)
  • Make clipboard pasting code a bit more robust (Bug 490636)
  • Fix a number of issues with frame generation (Bug 486417)
  • A number of changes related to the Qt 6 port.
  • Fix black canvas appearing when "Limit animation frame size" is active (Bug 486417)
  • WebP: fix colorspace export issue when dithering is enabled (Bug 491231)
  • WebP: preserve color profile on export if color model is RGB(A)
  • Fix layer selection when a layer was removed while view was inactive
  • Fix On-Canvas Brush Editor's decimal sliders (Bug 447800, Bug 457744)
  • Make sure file layers are updated when image size or resolution changes (Bug 467257, Bug 470110)
  • Fix Advanced Export of the image with filter masks or layer styles (Bug 476980)
  • Avoid memory leak in the advanced export function
  • Fix mipmaps not being regenerated after transformation was finished or cancelled (Bug 480973)
  • [Gentoo] Don't use xsimd::default_arch in the pixel scaler code
  • KisZug: Fix ODR violation for map_*
  • Fix a crash in Filter Brush when changing the filter type (Bug 478419)
  • PSD: Don't test reference layer for homogeneity check (Bug 492236)
  • Fix an assert that should have been a safe assert (Bug 491665)
  • Set minimum freetype version to 2.11 (Bug 489377)
  • Set Krita Default on restoring defaults (Bug 488478)
  • Fix loading translated news (Bug 489477)
  • Make sure that older files with simple transform masks load fine & Fix infinite loop with combination of clone + transform-mask-on-source (Bug 492320)
  • Fix more cycling updates in clone/transform-masks combinations (Bug 443766)
  • Fix incorrect threaded image access in multiple clone layers (Bug 449964)
  • TIFF: Ignore resolution if set to 0 (Bug 473090)
  • Specific Color Selector: Update labels for HSX (Bug 475551)
  • Specific Color Selector: Fix RGB sliders changing length (Bug 453649)
  • Specific Color Selector: Fix float slider step 1 -> 0.01
  • Specific Color Selector: Fix holding down spinbox arrows (Bug 453366)
  • Fix clone layers resetting the animation cache (Bug 484353)
  • Fix an assert when trying to activate an image snapshot (Bug 492114)
  • Fix redo actions to appear when undoing juggler-compressed actions (Bug 491186)
  • Update cache when cloning perspective assistants (Bug 493185)
  • Fix a warning on undoing flattening a group (Bug 474122)
  • Relink clones to the new layer when flattening (Bug 476514)
  • Fix onion skins rendering on layers with a transform mask (Bug 457136)
  • Fix perspective value for hovering pixel
  • Fix Move and Transform tool to work with Pass-Through groups (Bug 457957)
  • JPEG XL: Export: implement streaming encoding and progress reporting
  • Deselect selection when pasting from the clipboard (Bug 459162)
Download

Windows

If you're using the portable zip files, just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the Krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version of Krita. For reporting crashes, also get the debug symbols folder.

Note: We are no longer making 32-bit Windows builds.

Linux

The separate gmic-qt AppImage is no longer needed.

(If, for some reason, Firefox thinks it needs to load this as text: right-click on the link to download.)

macOS

Note: We’re not supporting macOS 10.13 anymore; 10.14 is the minimum supported version.

Android

We consider Krita on ChromeOS as ready for production. Krita on Android is still beta. Krita is not available for Android phones, only for tablets, because the user interface requires a large screen.

Source code md5sum

For all downloads, visit https://download.kde.org/stable/krita/5.2.5/ and click on "Details" to get the hashes.

Key

The Linux AppImage and the source .tar.gz and .tar.xz tarballs are signed. You can retrieve the public key here. The signatures are here (filenames ending in .sig).

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #648 (Sept. 24, 2024)

Planet Python - Tue, 2024-09-24 15:30

#648 – SEPTEMBER 24, 2024
View in Browser »

Python 3.13 Preview: Free Threading and a JIT Compiler

Get a sneak peek at the upcoming features in Python 3.13 aimed at enhancing performance. In this tutorial, you’ll make a custom Python build with Docker to enable free threading and an experimental JIT compiler. Along the way, you’ll learn how these features affect the language’s ecosystem.
REAL PYTHON
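As a quick aside (not part of the linked tutorial), once you have such a custom build you can check how the interpreter was configured; a minimal sketch:

    import sys
    import sysconfig

    print(sys.version)
    # 1 on a build configured with --disable-gil (free threading);
    # 0 (or None on older versions) on standard builds.
    print("Free-threaded build:", sysconfig.get_config_var("Py_GIL_DISABLED"))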

Let’s Build and Optimize a Rust Extension for Python

Python code too slow? You can quickly create a Rust extension to speed it up. This post shows you how to re-implement some Python code as Rust, connect the Rust to Python, and optimize the Rust for even further performance improvements.
ITAMAR TURNER-TRAURING

Join John Hammond & Daniel Miessler at DevSecCon 2024

Snyk is thrilled to announce DevSecCon 2024, Developing AI Trust Oct 8-9, a FREE virtual summit designed for DevOps, developer and security pros of all levels. Hear from industry pros John Hammond & Daniel Miessler for practical tips on a DevSecOps approach to secure development. Save your spot →
SNYK.IO sponsor

Document Intended Usage Through Tests With doctest

This post covers Python’s doctest which allows you to write tests within your code’s docstrings. This does two things: it gives you tests, but it also documents how your code can be used.
JUHA-MATTI SANTALA
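For readers who haven’t used doctest before, here is a minimal, self-contained illustration (not taken from the linked post) of tests living inside a docstring:

    def add(a, b):
        """Return the sum of a and b.

        >>> add(2, 3)
        5
        >>> add(-1, 1)
        0
        """
        return a + b

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # silent when all examples pass; pass verbose=True to see each one run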

Quiz: Python 3.13: Free-Threading and a JIT Compiler

REAL PYTHON

Quiz: Lists vs Tuples in Python

REAL PYTHON

Articles & Tutorials

Customizing VS Code Through Color Themes

A well-designed coding environment enhances your focus and productivity and makes coding sessions more enjoyable. In this Code Conversation, your instructor Philipp Ascany will guide you step-by-step through the process of finding, installing, and adjusting color themes in VS Code.
REAL PYTHON course

Thriving as a Developer With ADHD

What are strategies for being a productive developer with ADHD? How can you help your team members with ADHD to succeed and complete projects? This week on the show, we speak with Chris Ferdinandi about his website and podcast “ADHD For the Win!”
REAL PYTHON podcast

Build AI Apps with More Flexibility for the Edge

Intel makes it easy to develop AI applications on open frameworks and libraries. Seamlessly integrate into an open ecosystem and build AI solutions with more flexibility and choice. Explore the possibilities available with Intel’s OpenVINO toolkit →
INTEL CORPORATION sponsor

Things I’ve Learned Serving on the Board of the PSF

Simon has been on the Python Software Foundation Board for two years now and has recently returned from a board retreat. This post talks about what he has learned along the way, including just what does the PSF do, PyPI, PyCons, and more.
SIMON WILLISON

Goodhart’s Law in Software Engineering

Goodhart’s law states: “When a measure becomes a target, it ceases to be a good measure.” Whether that’s test coverage, cyclomatic complexity, or code performance, all metrics are proxies and proxies can be gamed.
HILLEL WAYNE

AI-Extracted Asian Building Footprints

This post covers how Mark did a deep dive on a large dataset covering building footprints in Asia. He uses a variety of tools including duckdb and the multiprocessing module in Python.
MARK LITWINTSCHIK

Why We Wrote a New Form Library for Django

“Django comes with a form library, and yet we wrote a total replacement library… Django forms were fundamentally not usable for what we wanted to do.”
KODARE.NET

Case-Insensitive String Class

This article shows how you can create a case-insensitive string class using some basic meta programming with the dunder method __new__.
RODRIGO GIRÃO SERRÃO
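As a flavor of the idea (the linked article’s exact implementation may differ), here is a minimal sketch of a case-insensitive string class built around __new__, __eq__, and __hash__:

    class CaseInsensitiveStr(str):
        """A str subclass that compares and hashes case-insensitively."""

        def __new__(cls, value=""):
            # __new__ is the hook because str is immutable; the value is fixed at creation.
            return super().__new__(cls, value)

        def __eq__(self, other):
            if isinstance(other, str):
                return self.casefold() == other.casefold()
            return NotImplemented

        def __hash__(self):
            return hash(self.casefold())

    print(CaseInsensitiveStr("Hello") == "hello")  # True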

7 Ways to Use Jupyter Notebooks Inside PyCharm

Discover seven ways you can use Jupyter notebooks in PyCharm to explore and work with your data more quickly and effectively.
HELEN SCOTT

Unified Python Packaging With uv

Talk Python interviews Charlie Marsh, the maintainer of ruff, and they talk about uv and other projects at Astral.
KENNEDY & MARSH podcast

It’s Time to Stop Using Python 3.8

Python 3.8 will stop getting security updates in November 2024. You really should upgrade!
ITAMAR TURNER-TRAURING

Projects & Code

LightAPI: Lightweight API Framework

GITHUB.COM/IKLOBATO

peepdb: CLI Tool to View Database Tables

GITHUB.COM/EVANGELOSMEKLIS

dante: Zero-Setup Document Store for Python

GITHUB.COM/SENKO

cookiecutter-uv: A Modern Template for Python Projects

GITHUB.COM/FPGMAAS • Shared by Florian Maas

Django Content Settings: Advanced Admin Editable Settings

DJANGO-CONTENT-SETTINGS.READTHEDOCS.IO • Shared by oduvan

Events

Weekly Real Python Office Hours Q&A (Virtual)

September 25, 2024
REALPYTHON.COM

PyData Paris 2024

September 25 to September 27, 2024
PYDATA.ORG

PiterPy 2024

September 26 to September 27, 2024
PITERPY.COM

PyConf Mini Davao 2024

September 26 to September 27, 2024
DURIANPY.ORG

PyCon JP 2024

September 27 to September 30, 2024
PYCON.JP

PyCon Niger 2024

September 28 to September 30, 2024
PYCON.ORG

PyConZA 2024

October 3 to October 5, 2024
PYCON.ORG

PyCon ES 2024

October 4 to October 6, 2024
PYCON.ORG

Happy Pythoning!
This was PyCoder’s Weekly Issue #648.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppFastAD 0.0.4 on CRAN: Updated Again

Planet Debian - Tue, 2024-09-24 13:15

A new release 0.0.4 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release updates the quick fix in release 0.0.3 from a good week ago. James took a good look and properly disambiguated the statement that led clang to complain, so we are back to compiling as C++17 under all compilers, which makes for a slightly wider reach.

The NEWS file for this release follows.

Changes in version 0.0.4 (2024-09-24)
  • The package now properly addresses a clang warning on empty variadic macros arguments and is back to C++17 (James in #10)

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

The Drop Times: Lupus Decoupled Drupal: Bridging Drupal’s Backend Strength with Frontend Freedom

Planet Drupal - Tue, 2024-09-24 12:00
In an article for The DropTimes, Sinduri Guntupalli explores how Lupus Decoupled Drupal merges the power of Drupal's backend with modern frontend frameworks like Vue.js and Nuxt. The platform offers a flexible, API-driven architecture with custom elements, caching optimizations, and diverse deployment options, providing an efficient solution for both developers and content editors working on complex web projects.
Categories: FLOSS Project Planets

Drupal life hack's: Invoked Controllers as a Service

Planet Drupal - Tue, 2024-09-24 11:25
Submitted by admin on Tue, 09/24/2024 - 18:25
Categories: FLOSS Project Planets

Drupal life hack's: Practical Use Cases of Tagged Services in Drupal

Planet Drupal - Tue, 2024-09-24 10:13
Submitted by admin on Tue, 09/24/2024 - 17:13
Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #468 - Drupal AI

Planet Drupal - Tue, 2024-09-24 10:00

Today we are talking about Artificial Intelligence (AI), how to integrate it with Drupal, and what the future might look like, with guest Jamie Abrahams. We’ll also cover AI SEO Analyzer as our module of the week.

For show notes visit: www.talkingDrupal.com/468

Topics
  • What is AI
  • What is Drupal AI
  • How is it different from other AI modules
  • How do people use AI in Drupal
  • How does Drupal AI make AI easier to integrate in Drupal
  • What is RAG
  • How has Drupal AI evolved from AI Interpolator
  • What does the future of AI look like
Resources

Guests

Jamie Abrahams - freelygive.io yautja_cetanu

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Martin Anderson-Clutz - mandclu.com mandclu

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted an AI-based tool to give your Drupal site’s editors feedback on the SEO readiness of their content? There’s a module for that.
  • Module name/project name: AI SEO Analyzer
  • Brief history
    • How old: created in Aug 2024 by Juhani Väätäjä (j-vee)
    • Versions available: 1.0.0-beta1, which supports Drupal 10.3 and 11
  • Maintainership
    • Actively maintained
    • Number of open issues: none
  • Usage stats:
    • 2 sites
  • Module features and usage
    • Once you enable this module along with the AI module, you can select the default provider, and optionally modify the default prompt that will be used to generate the report
    • With that done, editors (or anyone with the new “view seo reports” permission) will see an “Analyze SEO” tab on nodes throughout the site.
    • Generated reports are stored in the database, for ongoing reference
    • The reports are also revision-specific, so you could run reports on both a published node and a draft revision
    • There’s a separate “create seo reports” permission needed to generate reports. Within the form an editor can modify the default prompt, for example to get suggestions on optimizing for a specific topic, or to add or remove areas from the generated report.
    • By default the report will include areas like topic authority and depth, detailed content analysis, and even technical considerations like mobile responsiveness and accessibility. It’s able to do the latter by generating the full HTML markup of the node, and passing that to the AI provider for analysis
    • It feels like it was just yesterday that the AI module had its first release, so I think it’s great to see that there are community-created additions like this one already evolving as part of Drupal’s AI ecosystem
Categories: FLOSS Project Planets

Real Python: Advanced Python import Techniques

Planet Python - Tue, 2024-09-24 10:00

In Python, you use the import keyword to make code in one module available in another. Imports in Python are important for structuring your code effectively. Using imports properly will make you more productive, allowing you to reuse code while keeping your projects maintainable.

This video course provides a comprehensive overview of Python’s import statement and how it works. The import system is powerful, and this course will teach you how to harness this power. While you’ll cover many of the concepts behind Python’s import system, this video course is mostly example driven, so you’ll learn from the numerous code examples shared throughout.

In this video course, you’ll learn how to:

  • Use modules, packages, and namespace packages
  • Manage namespaces and avoid shadowing
  • Avoid circular imports
  • Import modules dynamically at runtime (see the sketch after this list)
  • Customize Python’s import system
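As a small taste of the dynamic-import topic above, here is a minimal sketch (the module name is just an example) using importlib:

    import importlib

    # Pick a module by name at runtime; "json" is only an illustrative choice.
    module_name = "json"
    module = importlib.import_module(module_name)

    # Use the dynamically imported module like any other.
    print(module.dumps({"dynamic": True}))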

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

PyCharm: PyCharm vs. Jupyter Notebook

Planet Python - Tue, 2024-09-24 08:52

Jupyter notebooks are an important tool for data scientists, providing an easy option for conducting experiments and presenting results. According to our Developer Ecosystem Survey 2023, at least 35% of data professionals use Jupyter notebooks. Furthermore, over 40% of these users spend more than 20% of their working time using these resources.

There are several implementations of notebook technology available to data professionals. First, we’ll look at the well-known Jupyter Notebook platform from Project Jupyter. For the purposes of this article, we’ll refer to the Project Jupyter implementations of notebooks as “vanilla Jupyter” in order to avoid confusion, since there are several other implementations of the tool.

While vanilla Jupyter notebooks can be sufficient for some tasks, there are other cases where it would be better to rely on another tool for working with data. In this article, we’ll outline the key differences between PyCharm Professional and vanilla Jupyter when it comes to data science applications.

What is Jupyter Notebook?

Jupyter Notebook is an open-source platform that allows users to create and share code, visualizations, and text. It’s primarily used for data analysis and scientific research. Although JupyterLab offers some plugins and tools, its capabilities and user experience are significantly more limited than PyCharm’s.

What is PyCharm Professional?

PyCharm is a comprehensive integrated development environment (IDE) that supports a wide range of technologies out of the box, offering deep integration between them. In addition to enhanced support for Jupyter notebooks, PyCharm Professional also provides superior database support, Python script editing, and GitHub integration, as well as support for AI Assistant, Hugging Face, dbt-Core, and much more. 

Feature comparison: PyCharm Pro vs. Jupyter

Language support

While Jupyter notebooks claim to support over 40 programming languages, their usage is limited to the .ipynb format, which makes working with traditional file extensions like .py, .sql, and others less convenient. On the other hand, while PyCharm offers support for fewer languages – Python, JavaScript and TypeScript, SQL, and R (via plugin), along with several markup languages like HTML and CSS – the support is much more comprehensive.

Often, Jupyter notebooks and Python scripts serve different purposes. Notebooks are typically used for prototyping and experimentation, while Python scripts are more suitable for production. In PyCharm Professional, you can work with both of these formats and it’s easy to convert .ipynb files into .py files. See the video below for more information.
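Outside of any IDE, the same conversion can also be scripted. This is just an illustration, not PyCharm’s mechanism; it assumes nbconvert is installed and that notebook.ipynb exists:

    from nbconvert import ScriptExporter

    # Convert a notebook into a plain Python script; "notebook.ipynb" is a placeholder name.
    exporter = ScriptExporter()
    body, resources = exporter.from_filename("notebook.ipynb")

    with open("notebook.py", "w", encoding="utf-8") as f:
        f.write(body)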

The smartest code completion

If you’ve ever written code in PyCharm Professional, you’ll have definitely noticed its code completion capabilities. In fact, the IDE offers several different types of code completion. In addition to the standard JetBrains completion, which provides suggestions based on an extensive understanding of your project and libraries, there’s also runtime completion, which can suggest names of data objects like columns in a pandas or Polars DataFrame, and ML-powered, full-line completion that suggests entire lines of code based on the current file. Additionally, you can enhance these capabilities with LLM-powered tools such as JetBrains AI Assistant, GitHub Copilot, Amazon CodeWhisperer, and others.

In contrast, code completion in Jupyter notebooks is limited. Vanilla Jupyter notebooks lack awareness of your project’s context, there’s no local ML-based code completion, and there’s no runtime code completion for database objects.

Code quality features and debugger

PyCharm offers several tools to enhance your code quality, including smart refactorings, quick-fixes, and AI Assistant – none of which are available in vanilla Jupyter notebooks. 

If you’ve made a mistake in your code, PyCharm Professional will suggest several actions to fix it. These become visible when you click on the lightbulb icon.

PyCharm Professional also inspects code on the file and project level. To see all of the issues you have in your current file, you can click on the image in the top right-hand corner.

While vanilla Jupyter notebooks can highlight any issues after a code cell has been executed (as seen below), they don’t have features that allow you to analyze your entire file or project.

PyCharm provides a comprehensive and advanced debugging environment for both Python scripts and Jupyter notebooks. This debugger allows you to step into your code, running through the execution steps line by line, and pinpointing exactly where an error was made. If you’ve never used the debugger in PyCharm, you can learn how to debug a Jupyter notebook in PyCharm with the help of this blog by Dr. Jodie Burchell. In contrast, vanilla Jupyter offers basic debugging tools such as cell-by-cell execution and interactive %debug commands.
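For context, the %debug workflow in a vanilla notebook looks roughly like this (a sketch spread across two cells; the magic itself is shown as comments because it isn’t plain Python):

    # Cell 1: code that raises an exception.
    def divide(a, b):
        return a / b

    divide(1, 0)  # ZeroDivisionError

    # Cell 2: run the %debug magic right after the failure to open an interactive
    # post-mortem (ipdb) session at the point where the exception was raised:
    #
    #     %debug
    #     ipdb> print(a, b)
    #     ipdb> quit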

Refactorings

Web-based Jupyter notebooks lack refactoring capabilities. If you need to rename a variable, introduce a constant, or perform any other operation, you have to do it manually, cell by cell. In PyCharm Professional, you can access the Refactoring menu via Control + T and use it to make changes in your file faster. You can find more information about refactorings in PyCharm in the video.

Other code-related features

If you forget how to work with a library in vanilla Jupyter notebooks, you need to open another tab in a browser to look up the documentation, taking you out of your development environment and programming flow.

In PyCharm Professional, you can get information about a function or library you’re currently using right in the IDE by hovering over the code.

If you have a subscription to AI Assistant you can also use it for troubleshooting, such as asking it to explain code and runtime errors, as well as finding potential problems with your code before you run it.

Working with tables

DataFrames are one of the most important data formats for the majority of data professionals. In vanilla Jupyter notebooks, if you print a pandas or Polars DataFrame, you’ll see a static output with a limited number of columns and rows shown. Because these DataFrame outputs are static, it is difficult to explore your data without writing additional code.

In PyCharm Professional, you can use interactive tables that allow you to easily view, navigate, sort, and filter data. You can create charts and access essential data insights, including descriptive statistics and missing values – all without writing a single line of code. 

What’s more, the interactive tables are designed to give you a lot of information about your data, including details of: 

  • Data type symbols in the column headers
  • The size of your DataFrame (in our case it is 2390 rows and 82 columns). 
  • Descriptive statistics, missing values, and more.
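For comparison, getting a similar quick overview in a vanilla notebook typically means typing something like the following (a sketch; data.csv and df are placeholders):

    import pandas as pd

    df = pd.read_csv("data.csv")  # hypothetical dataset

    print(df.shape)            # number of rows and columns
    print(df.dtypes)           # data type of each column
    print(df.describe())       # descriptive statistics for numeric columns
    print(df.isna().sum())     # missing values per column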

If you want to get more information about how interactive tables work in PyCharm, check out the documentation.

Versioning and GitHub integration

In PyCharm Professional, you have several version control options, including Git. 

With PyCharm’s GitHub integration, you can see and revert your changes with the help of the IDE’s built-in visual diff tool. This enables you to compare changes between different commits of your notebooks. You can find an in-depth overview of the functionality in this tutorial.

Another incredibly useful feature is the local history, which automatically saves a version history of your changes. This means that if you haven’t committed something, and you need to roll back to an earlier version, you can do so with the click of a button.  

In vanilla Jupyter notebooks, you have to rely on the CLI git tool. In addition, Git is the only way of versioning your work, meaning there is no way to revert changes if you haven’t committed them.

Navigation

When you work on your project in Jupyter Notebook, you always need to navigate either within a given file or the whole project. On the other hand, the navigation functionality in PyCharm is significantly richer.

Beyond the Structure view, which is also present in JupyterLab, you can find additional features for navigating your project in our IDEs. For example, double-pressing Shift will help you find anything in your project or settings.

In addition to that, you can find the specific source of code in your project using PyCharm Professional’s Find Usages, Go to Implementation, Go to Declaration, and other useful features. 

Check out this blog post for more information about navigation in Jupyter notebooks in PyCharm.

Visualizations

In addition to libraries that are available in vanilla Jupyter notebooks, PyCharm also provides further visualization possibilities with the help of interactive tables. This means you don’t have to remember or type boilerplate code to create graphs. 

How to choose between PyCharm and vanilla Jupyter notebooks

Vanilla Jupyter notebooks are a lightweight tool. If you need to do some fast experiments, it makes sense to use this implementation. 

On the other hand, PyCharm Professional is a feature-rich IDE that simplifies the process of working with Jupyter notebooks. If you work on complex projects with a medium or large codebase, or you want a significant boost to your productivity, PyCharm Professional is likely to be more suitable, allowing you to complete your data projects more smoothly and quickly.

Get started with PyCharm Professional

PyCharm Professional is a data science IDE that supports Python, rich databases, Jupyter, Git, Conda, and other technologies right out of the box. Work on projects located in local or remote development environments. Whether you’re developing data pipelines, prototyping machine learning models, or analyzing data, PyCharm equips you with all the tools you need.

The best way to understand the difference between tools is to try them for yourself. We strongly recommend downloading PyCharm and testing it in your real-life projects. 

Download PyCharm Professional and get an extended 60-day trial by using the promo code “PyCharmNotebooks”. The free subscription is available for individual users only.

Activate your 60-day trial

Below are other blog posts that you might find useful for boosting your productivity with PyCharm Professional.

Categories: FLOSS Project Planets

Plasma Browser Integration 2.0

Planet KDE - Tue, 2024-09-24 07:19

I’m pleased to announce the immediate availability of Plasma Browser Integration version 2.0 on the Chrome Web Store and Microsoft Edge Add-ons page. This release updates the extension to Manifest Version 3 which will be required by Chrome soon. The major version bump reflects the amount of work it has taken to achieve this port.

Konqi surfing the world wide web

Plasma Browser Integration bridges the gap between your browser and the Plasma desktop. It lets you share links, find browser tabs and visited websites in KRunner, monitor download progress in the notification center, and control music and video playback anytime from within Plasma, or even from your phone using KDE Connect!

Despite the version number, there aren’t many user-facing changes. This release comes with the usual translation updates, however. Since this release doesn’t bring any advantage to Firefox users over the previous 1.9.1, it will not be provided on the Mozilla add-ons store.

We have taken the opportunity of reworking the extension manifest to make the “history” permission mandatory. When the browser history KRunner module was originally added, the permission was optional, as we feared that a scary permission warning on update might drive users away.

(also see the Changelog Page on our Community Wiki)

Categories: FLOSS Project Planets

The Drop Times: Drupal CMS Expected by 15 Jan, XB Further Away in 2025: A Quick and Dirty Summary of Driesnote

Planet Drupal - Tue, 2024-09-24 07:08
Get ready to witness a transformative era for Drupal! In his 40th State of Drupal address, founder Dries Buytaert announced groundbreaking plans that are set to redefine the platform. With the targeted launch of Drupal CMS 1.0 on January 15, 2025, coinciding with Drupal's 25th anniversary, the platform is gearing up to become the gold standard for no-code website building. From the ambitious Starshot Project aiming to revolutionize site-building for non-developers, to the introduction of the Experience Builder (XB) that brings React into the fold, Drupal is embracing innovation like never before. Plus, with a strong commitment to responsible AI integration and a complete overhaul of its documentation, Drupal is positioning itself at the forefront of web development. Dive into how these exciting developments will shape the future of Drupal and what they mean for the global community!
Categories: FLOSS Project Planets

Data Transparency in Open Source AI: Protecting Sensitive Datasets

Open Source Initiative - Tue, 2024-09-24 06:49

The Open Source Initiative (OSI) is running a blog series to introduce some of the people who have been actively involved in the Open Source AI Definition (OSAID) co-design process. The co-design methodology allows for the integration of diverging perspectives into one just, cohesive and feasible standard. Support and contribution from a significant and broad group of stakeholders is imperative to the Open Source process and is proven to bring diverse issues to light, deliver swift outputs and garner community buy-in.

This series features the voices of the volunteers who have helped shape and are shaping the Definition.

Meet Tarunima Prabhakar

I am the research lead and co-founder at Tattle, a civic tech organization that builds citizen centric tools and datasets to respond to inaccurate and harmful content. My broad research interests are in the intersection of technology, policy and global development. Prior to starting Tattle, I worked as a research fellow at the Center for Long-Term Cybersecurity at UC, Berkeley studying the deployment of behavioral credit scoring algorithms towards financial inclusion goals in the global majority. I’ve also been fortunate to work on award-winning ICTD and data driven development projects with stellar non-profits. My career working in low-resource environments has turned me into an ardent advocate for Open Source development and citizen science movements. 

Protecting Sensitive Datasets

I recently gave a lightning talk at IndiaFOSS where I shared about Uli, a project to co-design solutions to online gendered abuse in Indian languages. As a part of this project, we’re building and maintaining datasets that are useful for machine learning models that detect abuse. The talk highlighted the importance of, and the care that must go into, choosing a license for sensitive data, and why open datasets in Open Source AI should be carefully considered.

With the Uli project, we created a dataset annotated by gender rights activists and researchers who speak Hindi, Tamil and Indian English. Then, we fine-tuned Twitter’s XLM-RoBERTa model to detect gender abuse, which we deployed as a browser plugin. When activated, the Uli plugin would redact abusive tweets from a person’s feed. Another dataset we created was of slur words in the three languages that might be used to target people. Such a list is not only useful for the Uli plugin (these words are redacted from web pages if the plugin is installed) but also for any platform needing to moderate conversations in these languages. At the time of the launch of the plugin, we chose to license the two datasets under an Open Data License (ODL). The model is hosted on Hugging Face and the code is available on GitHub.

As we have continued to maintain and grow Uli, we have reconsidered how we license the data. When thinking about how to license this data, several factors come into play. First, annotating a dataset on abuse is labor-intensive and mentally exhausting, and the expert annotators should be fairly compensated for their expertise. Second, when these datasets are used by platforms for abuse detection, it creates a potential loophole—if abusive users realize the list of flagged words is public, they can change their language to evade moderation.

These concerns have led us to think carefully about how to license the data. On one end of the spectrum, we could continue to make everything open, regardless of commercial use. On the other end, we could keep all the data closed. We’ve historically operated as an Open Source organization, and every decision we make about data access impacts how we license our machine learning models as well. We are trying to find a happy medium that lets us balance the numerous concerns: recognition of effort and effectiveness of the data on one hand, and transparency, adaptability and extensibility on the other.

As we’ve thought about different strategies for data licensing, we haven’t been sure what that would mean for the license of the machine learning models. And that’s partly because we don’t have a clear definition for what “Open Source AI” really means. 

It is for this reason that we’ve closely followed the Open Source Initiative’s (OSI) process for converging on a definition for Open Source AI. OSI has been grappling with the definition of “Open Source AI” as it pertains to the four freedoms: the freedom to use, study, modify, and share. Over the past year, the OSI has been iterating on a definition for Open Source AI, and they’ve reached a point where they propose the following:

  • Open weights: The model weights and parameters should be open.
  • Open source code: The source code used to train the system should be open.
  • Open data or transparent data: Either the dataset should be open, or there should be enough detailed information for someone to recreate the dataset.

It’s important to note that the dataset doesn’t necessarily have to be open. The departure from a stance of maximally open dataset accounts for the complexity in the collection and management of data driving real world ML applications. While frontier models need to deal with copyright and privacy concerns, many smaller projects like ours worry about the uneven power dynamics between those creating the data and the entities using it. In our specific case, opening data also reduces its efficacy.

But having struggled with papers that describe research or data without sharing the dataset itself, I also recognize that ‘enough detailed information’ might not be enough information to repeat, adapt or extend another group’s work. In the end, the question becomes: how much information about the dataset is enough to consider the model “open?” It’s a fine line, and not everyone is comfortable with OSI’s stance on this issue. For our project in particular, we are considering the option of a staggered data release: older data is released under an open data license, while the newest data requires users to request access.

If you have strong opinions on this process, I encourage you to visit the OSI website and leave feedback. The OSI process is influential, and your input on open weights, open code, and their specifications around data openness could shape the future of Open Source AI.

You can learn more about the participatory process behind the Uli dataset here, and about Uli and Tattle on their respective websites. 

How to get involved

The OSAID co-design process is open to everyone interested in collaborating. There are many ways to get involved:

  • Join the forum: share your comments on the drafts.
  • Leave a comment on the latest draft: provide precise feedback on the text of the latest draft.
  • Follow the weekly recaps: subscribe to our monthly newsletter and blog to be kept up-to-date.
  • Join the town hall meetings: we’re increasing the frequency to weekly meetings where you can learn more, ask questions and share your thoughts.
  • Join the workshops and scheduled conferences: meet the OSI and other participants at in-person events around the world.
Categories: FLOSS Research

The Drop Times: Winners of the 2024 Women in Drupal Awards Announced at DrupalCon Barcelona

Planet Drupal - Tue, 2024-09-24 06:38
The 2024 Women in Drupal Awards, sponsored by JAKALA, have recognized Esmeralda Braad-Tijhoff, Pamela Barone, and Alla Petrovska for their exceptional contributions in the Define, Build, and Scale categories, respectively, at DrupalCon Barcelona.
Categories: FLOSS Project Planets

Python Software Foundation: Service Awards given by the PSF: what are they and how they differ

Planet Python - Tue, 2024-09-24 05:00

Do you know someone in the Python community who inspires you and whose contributions to the Python community are outstanding? Other than saying thank you (definitely do this too!), you can also nominate them to receive recognition given by the PSF. In this blog post, we will explain what each of the awards is and how they differ. We hope this will encourage you to nominate your favorite inspirational community member to receive an award!

PSF Community Service Awards

The most straightforward way to acknowledge someone’s volunteer effort serving the Python community is to nominate them for the PSF Community Service Awards (CSA). The awardee will receive:


  • A cash award of $599 USD

  • Free registration at all future PyCon US events


Recipients need not be PSF members and can receive multiple awards if they continue to make outstanding contributions. In addition to individuals, small organizational groups (e.g., the PyCon JP Association in 2021) can also receive the CSA.


The PSF Board reviews nominations quarterly. CSA recipients will be recognized at PyCon US every year.

 

CSA Award Winners 

The PSF Community Service Awards are all about the wonderful and dedicated folks in our community, and we had to take this opportunity to show some of their faces! You can find all of the inspiring PSF CSA recipients on our CSA webpage.  

 

CSA Recipients (left to right, top): Jessica Upani, Mariatta Wijaya, Abigail Mesrenyame Dogbe, Lais Carvalho, Mason Egger
CSA Recipients (left to right, bottom): Kojo Idrissa, Tereza Iofciu, Jessica Greene, Carol Willing, Vicky Twomey-Lee
PyCon JP Association CSA Recipients (left to right): Takayuki Shimizukawa, Shunsuke Yoshida, Jonas Obrist, Manabu Terada, Takanori Suzuki

PSF Distinguished Service Awards

As the highest award the PSF bestows, the Distinguished Service Award (DSA) is a step up from the CSA described above. Recipients of a DSA need to have made significant, sustained, and exemplary contributions with an exceptionally positive impact on the Python community. Recognition takes the form of an award certificate plus a cash award of $5000 USD. As of this writing, there are only 7 DSA awardees in history.


Naomi Ceder is the most recent Distinguished Service Award recipient; she received the award in 2022.

PSF Fellow Membership

Although it is also a form of recognition, PSF Fellow membership is different from the awards above, and the levels of recognition are not directly comparable. Fellows are members who have been nominated for their extraordinary efforts and impact on Python, the community, and the broader Python ecosystem. Fellows are nominated from the broader community, and if they meet the Fellow criteria, they are elevated by a vote of the Fellows Working Group. PSF Fellows are lifetime voting members of the PSF, which means they are eligible to vote in PSF elections and are expected to follow the membership rules and Bylaws of the PSF.

Nominate someone!

We hope this makes the types of recognition given by the PSF clear, as well as gives you confidence in nominating folks in the Python community that you think should be recognized for a CSA, DSA, or as a PSF Fellow. We also hope that this will inspire you to become a Python community member that receives a service award!


Categories: FLOSS Project Planets

Specbee: A practical guide to Personalization with user personas (sample campaigns included!)

Planet Drupal - Tue, 2024-09-24 04:46
You know that moment when Google finishes your search with exactly what you were thinking? Or when you want to just Netflix and chill after a long day but you’re recommended the perfect show that completely blows your mind? That’s personalization in action! But how does this magic happen? It begins with truly understanding your audience by building user personas. From there, you segment them based on key behaviors and preferences, and finally, you deliver exactly what they’re looking for, right when they need it. In this article, we’ll discuss how you can offer personalized content to your audience using some simple-to-implement techniques.

How to create User Personas

You probably know your audience, but do you really understand who they are and what they prefer? That’s where user personas come in. It’s possible to perform a UX research project to identify specific "Personas" and document the needs, desires, and habits of each. Take a look at this decent write-up on Personas. From that article, here’s an example of what a final "Persona" might look like: each Persona is given a name, background, brand preferences, etc. You may have already done this at some point. It can be a helpful practice, but it takes time and a lot of corporate buy-in. So if that’s not a practical choice at the moment, we’d suggest you identify your core target audiences and use technology to give your web visitors a personalized experience. For the sake of this overview, let’s use the term "Segment" instead of "Persona".

Creating a personalized website experience

There are 3 basic steps to successfully manage personalization. It primarily comes down to asking these questions:

  1. Defining segments - What type of people are you targeting?
  2. Connecting users to segments - Is this visitor one of those types of people?
  3. Providing personalized content for each segment - What do we want to show these targets?

1. Defining Segments

At a high level, a "Segment" is simply a user type, and Segments can be identified in any way that you prefer. For instance, let’s imagine your company sells shoes. You might identify Segments like:

  • Casual Shoewear Enthusiast
  • Sports and Fitness Buff
  • Fashion-Conscious Shopper
  • Parents Shopping for Kids
  • Professional or Workwear Buyer
  • Seasonal Shopper
  • Discount Shopper
  • Luxury Buyer
  • Orthopedic Footwear Seeker

Chances are, you already have Segments in place. They could come from existing email list groups or from sales teams managing clients by size, product, or region. These Segments can start simple ("newsletter subscribers") and grow as your efforts grow.

Task 1: Identify and define the Segments that you would like to target.

2. Connect users to Segments

Once target Segments are identified, you need to determine which website visitors belong to which Segments. This is where the tech comes in. You can identify users based on their actions, which can include these and more:

  • Logging in,
  • Filling out a form,
  • Participating in a survey,
  • Accessing a specific landing page,
  • Following an email newsletter link,
  • Clicking a Google/LinkedIn/Facebook ad.

Any defined action that you track will allow you to identify the visitor’s Segment and set a cookie to maintain that identification. Ads and marketing links will have parameters in their URLs; we can discuss more specifics, but here’s a nice general overview. Once a visitor’s Segment is identified, you can present personalized content. A minimal sketch of this identification step is shown below.

Task 2: Determine the specific actions that categorize web visitors into User Segments.
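As an illustration of Task 2 (and a peek ahead at Task 3), here is a minimal sketch of how a site could map a campaign URL parameter to a Segment, persist it in a cookie, and choose a banner for that Segment. Flask, the parameter and cookie names, and the segment/banner values are all illustrative assumptions rather than part of any particular platform.

  # Minimal sketch: assign a visitor to a Segment from a campaign URL
  # parameter, persist it in a cookie, and serve a matching banner.
  # Framework (Flask), parameter names, and segment names are assumptions.
  from flask import Flask, request, make_response, render_template_string

  app = Flask(__name__)

  # Hypothetical mapping from campaign tags to Segments, and from
  # Segments to homepage banners.
  CAMPAIGN_TO_SEGMENT = {
      "sneaker-sale": "sneaker_enthusiast",
      "new-year": "discount_shopper",
  }
  BANNERS = {
      "sneaker_enthusiast": "Sneaker Sale: Up to 50% Off",
      "discount_shopper": "2025 New Year Flash Sale",
      "default": "Welcome to our store",
  }

  @app.route("/")
  def homepage():
      # 1. Try to identify the Segment from the marketing URL parameter.
      campaign = request.args.get("utm_campaign", "")
      segment = CAMPAIGN_TO_SEGMENT.get(campaign)

      # 2. Fall back to a previously set cookie for returning visitors.
      if segment is None:
          segment = request.cookies.get("segment", "default")

      # 3. Serve the personalized banner and remember the Segment.
      banner = BANNERS.get(segment, BANNERS["default"])
      resp = make_response(render_template_string("<h1>{{ b }}</h1>", b=banner))
      resp.set_cookie("segment", segment, max_age=60 * 60 * 24 * 30)
      return resp

The same pattern extends to logins, form submissions, or ad clicks: any tracked action writes the Segment cookie, and every subsequent page view reads it to pick the content to show.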
3. Provide personalized content for each Segment

Now that we know the User Segments (Task 1) and how to identify a website visitor as a member of a specific Segment (Task 2), we provide the personalized content. Here’s an example of 3 visitors (and 4 visits):

  • User A receives an email newsletter with a link related to your marketing sales campaign. They click the link, and when they reach the homepage they see a marketing page banner related to the campaign: "Returning Customers New Arrivals 50% Off."
  • User B visits from a Google ad for a New Year’s campaign. They click the ad, and the marketing page banner reads "2025 New Year Flash Sale."
  • User C visits your website by directly typing in the URL like a caveman would. They see the homepage with a generic marketing page banner. They fill out a form to get more information.
  • User C returns two weeks later, and the homepage displays a marketing page banner that reads, "Welcome Back User C!"

Task 3: Determine the content that should be seen for each Segment.

Sample campaigns for a shoe retailer

Going with the same example of a company that retails shoes, let’s dive into some personalized campaigns you can create for your target audience. Completing the three tasks above might give us the following Campaign outlines for site personalization:

1. Sneaker Sale Campaign

Segment: Sneaker Enthusiasts

User Identification: Visitor is a member of the Segment if they took any of the following actions:

  • Clicked a Google ad for sneakers.
  • Clicked a link in an email campaign featuring sneakers.
  • Visited the "Sneakers" category page on the website.

Personalized Content:

  • Homepage banner showcasing a "Sneaker Sale: Up to 50% Off" promotion.
  • Personalized CTA offering "Exclusive Sneaker Offers" to encourage sign-ups for further deals.

2. Coastal Cities Campaign

Segment: Coastal City Customers

User Identification: Visitor is a member of the Segment if they took any of the following actions:

  • Clicked a link in an email campaign targeting customers in coastal cities (e.g., Miami, Los Angeles, New York).
  • Filled out a form with a coastal city location.
  • Visited the site from an IP address associated with coastal regions.

Personalized Content:

  • Homepage banner promoting "Summer-Ready Footwear for Coastal Living", featuring sandals, lightweight sneakers, and water-resistant shoes.
  • A "Coastal City Style Guide" featured at the top of the blog page, showing trending footwear options for beach and city life.
  • CTA below the banner offering "Exclusive Deals on Summer Styles for Coastal Shoppers", with a localized discount code for users in coastal regions.

Creating a Campaign statement like the ones above might help keep things organized and aid in communication, internally and externally.

Final thoughts

While there are many ways to personalize your content, what we’ve covered today represents some of the simplest yet most effective methods. Personalization can change how you connect with your customers. The key is knowing your audience and delivering the right message at the right time. If this got you excited, know that you are not on your own here! Specbee can work with you on each of these steps, and we always recommend starting small. Talk to one of our experts today to find out how we can help.
Categories: FLOSS Project Planets

Vasudev Kamath: Note to Self: Enabling Secure Boot with UKI on Debian

Planet Debian - Tue, 2024-09-24 02:00

Note

This post is a continuation of my previous article on enabling the Unified Kernel Image (UKI) on Debian.

In this guide, we'll implement Secure Boot by taking full control of the device, removing preinstalled keys, and installing our own. For a comprehensive overview of the benefits and process, refer to this excellent post from rodsbooks.

Key Components

To implement Secure Boot, we need three essential keys:

  1. Platform Key (PK): The top-level key in Secure Boot, typically provided by the motherboard manufacturer. We'll replace the vendor-supplied PK with our own for complete control.
  2. Key Exchange Key (KEK): Used to sign updates for the Signatures Database and Forbidden Signatures Database.
  3. Database Key (DB): Used to sign or verify binaries (bootloaders, boot managers, shells, drivers, etc.).

There's also a Forbidden Signature Key (dbx), which is the opposite of the DB key. We won't be generating this key in this guide.

Preparing for Key Enrollment

Before enrolling our keys, we need to put the device in Secure Boot Setup Mode. Verify the status using the bootctl status command. You should see output similar to the following image:

Generating Keys

Follow these instructions from the Arch Wiki to generate the keys manually. You'll need the efitools and openssl packages. I recommend using rsa:2048 as the key size for better compatibility with older firmware.

After generating the keys, copy all .auth files to the /efi/loader/keys/<hostname>/ folder. For example:

❯ sudo ls /efi/loader/keys/chamunda
db.auth  KEK.auth  PK.auth

Signing the Bootloader

Sign the systemd-boot bootloader with your new keys:

sbsign --key <path-to db.key> --cert <path-to db.crt> \
    /usr/lib/systemd/boot/efi/systemd-bootx64.efi

Install the signed bootloader using bootctl install. The output should resemble this:

Note

If you encounter warnings about mount options, update your fstab with the `umask=0077` option for the EFI partition.

Verify the signature using sbsign --verify:

Configuring UKI for Secure Boot

Update the /etc/kernel/uki.conf file with your key paths:

SecureBootPrivateKey=/path/to/db.key
SecureBootCertificate=/path/to/db.crt

Signing the UKI Image

On Debian, use dpkg-reconfigure to sign the UKI image for each kernel:

sudo dpkg-reconfigure linux-image-$(uname -r) # Repeat for other kernel versions if necessary

You should see output similar to this:

sudo dpkg-reconfigure linux-image-$(uname -r)
/etc/kernel/postinst.d/dracut:
dracut: Generating /boot/initrd.img-6.10.9-amd64
Updating kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukicc7vcxhy --output /tmp/kernel-install.staging.QLeGLn/uki.efi
Wrote signed /tmp/kernel-install.staging.QLeGLn/uki.efi
/etc/kernel/postinst.d/zz-systemd-boot:
Installing kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukit7r1hzep --output /tmp/kernel-install.staging.dWVt5s/uki.efi
Wrote signed /tmp/kernel-install.staging.dWVt5s/uki.efi

Use systemd-boot to enroll your keys:

systemctl reboot --boot-loader-menu=0

Select the enroll option with your hostname in the systemd-boot menu.

After key enrollment, the system will reboot into the newly signed kernel. Verify with bootctl:

Dealing with Lockdown Mode

Secure Boot enables lockdown mode on distro-shipped kernels, which restricts certain features like kprobes/BPF and DKMS drivers. To avoid this, consider compiling the upstream kernel directly, which doesn't enable lockdown mode by default.

As Linus Torvalds has stated, "there is no reason to tie Secure Boot to lockdown LSM." You can read more about Torvalds' opinion on UEFI tied with lockdown.

Next Steps

One thing that remains is automating the signing of systemd-boot on upgrade, which is currently a manual process. I'm exploring dpkg triggers for achieving this, and if I succeed, I will write a new post with details.

Acknowledgments

Special thanks to my anonymous colleague who provided invaluable assistance throughout this process.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20240922 ('Gold Apollo AR924') released

GNU Planet! - Mon, 2024-09-23 16:49

GNU Parallel 20240922 ('Gold Apollo AR924') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Recently executed a flawless live data migration of ~2.4pb using GNU parallel for scale and bash scripts.
    -- @mechanicker@twitter Dhruva

New in this release:

  • --fast disables a lot of functionality to speed up running jobs.
  • Bug fixes and man page updates.

News about GNU Parallel:


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Request or build a package for your favourite distribution (if it is not already there)

  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

Pages