Feeds

Specbee: Improving Drupal SEO: How to Fix Duplicate Content with the Global Redirect Module

Planet Drupal - Tue, 2024-04-23 03:36
As a content writer, one thing that poses a serious concern for professionals in this field is plagiarism: the act of using someone else’s work and passing it off as your own. It is frustrating, and it is ethically and legally unacceptable. Duplicate content is no less of a problem. In fact, it raises concerns not only for content professionals but also for website owners and SEO professionals. Surprisingly, an estimated 25-30% of the content on the internet today is duplicated without people even realizing it. In this blog, we’ll talk about duplicate content, why the issue matters today, and how Drupal helps you solve the problem with convenience.

What is duplicate content

Duplicate content simply means identical content that exists across various URLs on the internet. When identical content shows up on different URLs, search engines typically struggle to determine which URL to prioritize in their search results. This leads to lower rankings for all the URLs that display similar content, while preference shifts to alternative web pages.

Why is it a significant problem

As mentioned earlier, duplicate content can cause issues for marketers, website owners, and SEO professionals. Let me categorize the types of issues it can bring you:

For Search Engines

  • Identification: Search engines struggle to determine which version(s) of content to include in or exclude from their indexes.
  • Link Metrics Distribution: They face uncertainty in distributing link metrics like trust, authority, anchor text, and link equity among multiple versions or a single page.
  • Ranking Ambiguity: It becomes unclear which version(s) should rank for specific search queries.

For Website Owners

  • Search Engine Optimization: To enhance user experience, search engines tend to display only one version of duplicated content. This reduces the visibility of all duplicates.
  • Link Equity Dilution: Inbound links are divided among duplicates rather than consolidating on one page. This disperses the link equity, hurting the search visibility of the content piece.

Consequently, duplicated content fails to achieve the search visibility it could otherwise attain.

What causes duplicate content

Now that you’ve seen how serious the issue of duplicate content is, here are a few common technical causes of content duplication:

  • URL Parameters: This issue arises when you apply URL parameters or tracking codes in your website’s URLs. According to Google, these URL variations are created by pairing a key and a value, separated by an equals sign and linked by an ampersand (for example, example.com/shoes?color=red&sort=price). Consequently, although the URLs may seem distinct, users end up on the same page regardless of the link they click.
  • Session IDs: Similar to URL parameters, session IDs assign a different ID within the URL to each user visiting your site.
  • Multiple Versions of Your Site: This issue arises with websites that have both a www.example.com and an example.com version of their pages. It also applies to sites with an SSL certificate that maintain both HTTP and HTTPS versions of their site.
  • Faceted Navigation: Faceted or filtered navigation allows users to refine results on your site to find the information they are looking for, customizing their search experience. However, search engines may perceive these filtered URLs as duplicated content.

Types of duplicate content

Duplicate content affects your SEO ranking.
Having said that, in terms of search engine optimization, duplicate content comes in two different types:

1. Site-wide/Cross-domain Duplicate Content

Site-wide duplicate content occurs when identical or similar content is available across different pages within the same website or across multiple domains. For instance, some e-commerce platforms may use similar product descriptions on their primary domain (store.com), mobile version (m.store.com), or localized domain versions like store.ca. Such duplication challenges your SEO ranking. If the duplicate content extends beyond a single website to multiple websites, determining which one should appear in organic search results becomes a challenging task, requiring specific strategies to deal with it.

2. Copied Content/Technical Problems

This sounds like a mere oversight, yet it is a major issue. Such duplicate content results from directly copying content to multiple locations or from technical issues that cause similar content to be displayed at various URLs. This may be the case with parameterized URLs lacking canonical tags (e.g., <link rel="canonical" href="https://store.com/product">), duplicate pages without a noindex directive, and copied content published without proper redirection. When canonical tags or redirects are not set up appropriately, search engines may index and attempt to rank nearly identical versions of pages, leading to potential SEO complications.

How Can Google Help Detect Duplicate Content

One of the most popular and verified methods of assessing duplicate content is simply selecting a few words from the site, enclosing them within quotes, and entering them into Google search. Ideally, this test on any page from your website should show your webpage in the search results, without other entries. However, if other websites also appear along with yours, Google considers the top results to be the original source of the content. If you don’t find your webpage at the top of the results, you might have a duplicate content problem. You can repeat this procedure by testing several short, random sentences from your webpage. Additionally, there are various free tools to check for duplicate content on the web, such as Copyscape, Plagspotter, Duplichecker, Smallseotools, and more.

The Global Redirect Module: Fixing Duplicate Content SEO Issues in Drupal

For those who are unaware, Drupal is an open-source content management system that works towards simplifying development and creation processes for its global community. It offers many core and contributed modules that serve its purposes. One such module works towards fixing the issue of duplicate content: the Global Redirect module.

The Global Redirect module is an SEO-friendly module that offers a user-friendly interface to manage your URL path redirects. In Drupal, the alias system can sometimes lead to duplicate URLs, which can affect your website's search engine ranking. This Drupal module tackles the issue by checking for existing aliases and redirecting to the correct URL. Additionally, it manages URL formatting by removing trailing slashes, ensuring clean URL usage, and verifying node permissions and access. As per the module’s documentation, here’s what it does to correct duplicate content:

  • Verifies whether the current URL has an alias and performs a 301 redirect to it if it is not currently in use.
  • Checks for a trailing slash in the current URL, removes it if found, and repeats the first check with the updated request.
  • Determines whether the current URL matches the site's front page and redirects to the front page if there's a match.
  • Ensures that Clean URLs are enabled and confirms whether the current URL is accessed using the clean method rather than the unclean method.
  • Validates access to the URL; if the user lacks access to the path, no redirects occur. This feature helps protect private aliased nodes from exposure.
  • Enforces the case sensitivity of the accessed URL to match the one set by the author/administrator.

In non-technical terms, the Drupal Global Redirect module:

  • Establishes fresh redirects.
  • Detects faulty URL paths (ensure the “Redirect 404” sub-module is activated for this feature).
  • Configures redirects at the domain level (utilize the “Redirect Domain” sub-module for this purpose).
  • Imports existing redirects.

This way, you avoid the risk of having the very same content displayed on multiple URL paths.

How to Configure the Global Redirect Module

Before you configure the module, make sure to download it from https://www.drupal.org/project/globalredirect. Then proceed with the following steps to configure the module to fix the issue of duplicate content on your Drupal site:

  1. Navigate to the Configuration page in your Drupal dashboard.
  2. In the Search and Metadata section, click on the URL redirects option. If you don’t find it, try clearing your Drupal cache and check again.
  3. On the URL redirects page, find the list of created redirects. Click on the +Add redirect button to add a new redirect.
  4. In the Path field, enter the old title or URL alias that you want to redirect from.
  5. In the To field, specify the relative internal path or the absolute external path that you want the old URL to redirect to.
  6. Choose the appropriate Redirect status from the drop-down menu.
  7. Click Save at the bottom of the page to save the redirect configuration.
  8. Navigate to the Settings tab at the top of the page to access additional configuration options for the Global Redirect module.
  9. In the Settings tab, modify the default redirect status and adjust global redirect settings as needed, then click Save Configuration to apply the new settings.

Final Thoughts

To wrap up, duplicate content hurts your search engine rankings and impacts user experience as well as link equity distribution. It poses serious challenges for content creators, website owners, and SEO professionals. Understanding the causes and types of duplicate content can help you devise effective strategies to fix the issue. In Drupal, the Global Redirect module offers a convenient solution to manage URL redirects and prevent duplicated content. While there are many other ways to boost your SEO ranking, Drupal users trust this module to work best for fixing duplicate content issues. It ensures clean URLs, proper redirection, and improved SEO performance on your Drupal site. So, employ effective measures to rank your Drupal website: avoid duplicate content, follow Google’s algorithm guidelines, and stay up to date with the latest technological measures, or connect with an expert Drupal development agency to enhance your Drupal SEO and boost your site’s search engine ranking.
Categories: FLOSS Project Planets

The Drop Times: A Conversation with Dominique de Cooman on Drupal, Mautic, Open DXP and Dropsolid

Planet Drupal - Tue, 2024-04-23 02:07
Join us as we delve into the realm of Dropsolid with Dominique de Cooman. Discover the driving forces behind their innovative integration of Drupal and open-source technology, and gain insights into the evolving landscape of digital experiences. From their founding principles to navigating Drupal's future, Dominique offers a candid exploration of Dropsolid's journey and its impact on the digital realm.
Categories: FLOSS Project Planets

PowerDevil in Plasma 6.0 and beyond

Planet KDE - Mon, 2024-04-22 20:00
Hi, I'm Jakob and this is my new KDE blog. Let's see how this goes, as I haven't blogged for literally decades. I started working on Plasma code sometime last year and hope to play a tiny part in setting the stage for world domination, er, improving user experiences, like so many other awesome and dedicated contributors do every week.
Categories: FLOSS Project Planets

Django Weblog: An open letter regarding the DjangoCon Europe CfP

Planet Python - Mon, 2024-04-22 18:32

As many of you are no doubt aware, there have recently been some community conversations regarding a license agreement that was published as part of the DjangoCon Europe Call for Proposals for organizers. I believe it is fair to characterize the reaction as negative, with expressions of anger, frustration, and confusion towards the DSF for placing burdensome and one-sided requirements on hosting a DjangoCon Europe.

I want to state first and foremost that I added the licensing agreement to the CfP. My deepest apologies to everyone who felt betrayed by it. I hold our conference organizers in the highest regard for the equity, effort, blood, sweat, and tears they put into creating amazing community events that bring us all together. I considered resigning from the board for the pain and confusion that I have caused. I have not because I want to be part of finding a way forward and keeping DjangoCon Europe the calendar event that so many of us eagerly await. It will be up to all of you whether I can be part of that journey back.

The most difficult accusation to face is that of a lack of transparency and of surprising former and prospective organizers with the license. It is difficult for me because I honestly believed I was increasing the level of transparency to organizers. This licensing agreement is not a new document, though I now understand that it is new to, well, everyone. The license has existed in the DSF legal documents repository since at least 2016 and has been signed for at least one DjangoCon Europe. In my years on the board prior to my presidency, on several occasions there were todos to have selected organizers "sign the organizer contract". In addition, I know that DEFNA is party to an almost identical licensing contract with the DSF. It was my intention to provide potential organizers with the contract prior to making a proposal, so that when a proposal was selected the organizers would not be taken aback when presented with the license.

I was mortified when I heard, first in private and then publicly, that no DjangoCon Europe organizers in current memory had ever signed the license. It felt as though the work of some of our community's most dedicated volunteers had, with the click of a keyboard, been turned into an arm's-length transaction among lawyers. At the same time, I was mortified at the potential legal jeopardy the DSF had unknowingly been in by not having license agreements in place for all DjangoCons.

I'd like to touch on several points brought up in the discussion that has come from this.

The first regards the time window of DjangoCon Europe as laid out in the CfP. The DSF has long-standing agreements with the PSF, DEFNA, and EuroPython to not adversely impact each other's conferences by scheduling them too close together. The agreement specifies that no two conferences should be within a month of each other. The sentiment of the agreement is well placed, but I believe it is time to revisit it. With new conferences coming into the fold (it was rightly pointed out that DjangoCon Africa is not mentioned in the license), the ability for conferences to have meaningful remote participation, and a calendar that, due to weather, often has the four stated conferences vying for a 5-7 month window, the one-month gap is difficult to plan around. I am publicly proposing here that the interested parties to this agreement modify it such that the one-month window applies to either region or theme, but not both. That would have, for instance, DjangoCon US avoiding PyCon and DjangoCons Europe and Africa, but not EuroPython, and vice versa.

The second point is in regards to the requirement to not schedule against major religious and cultural holidays. While I did not write that, I embody it. My first DjangoCon was DjangoCon Europe. I was unable to attend DjangoCon US in Chicago while I was living in Chicago as it had been partly scheduled against Rosh Hashanah. So I convinced my boss to send me to France instead and fell in love with the Django community.

Finally, to the comments that the DSF does not care about organizers and has no interest in helping them put on a conference. Unfortunately, the DSF has no inherent knowledge of running conferences, no legal entity outside the US to bring to bear, and no finances to backstop a DjangoCon. We ask for your help every year to run DjangoCon Europe because, quite simply, we can't. There was a meeting in Edinburgh of former organizers to discuss how organizing a DjangoCon Europe could be improved. That meeting was the start of the European Organizers Support working group. It did take time to get started. We have it now. It is the sincerest hope of all of us that it is the first step in improving the experience of European organizers, and hopefully others down the line. It is a slow process, but one I dearly hope bears fruit.

I sincerely hope that we can all still work together to make DjangoCon Europe 2025 a reality. I will do all that I can to make that happen, if you will continue to have me. I do not participate in social media, but I have started a thread on the Django Forum to continue this discussion. For most of the next week I will be unavailable due to the Passover holiday, but it was imperative to me that I get this message to you all as soon as I could.

Thank you for your time, and for letting me serve you,

Chaim Kirby
President, Django Software Foundation

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20240422 ('Børsen') [stable]

GNU Planet! - Mon, 2024-04-22 17:12

GNU Parallel 20240422 ('Børsen') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  I’m a big fan of GNU parallel!
    -- Scott Cain @scottjcain@twitter
 
New in this release:

  • Bug fixes and man page updates.


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #447 - Drupal Single Sign On

Planet Drupal - Mon, 2024-04-22 14:00
Today we are talking about Drupal Single Sign On, the benefits it brings to the Drupal community, and a new book called Fog & Fireflies with guest Tim Lehnen. We’ll also cover Drupal.org Username Field as our module of the week.

For show notes visit: www.talkingDrupal.com/447

Topics
  • What is Single Sign On (SSO)
  • Does Drupal already support SSO
  • Why is SSO on drupal.org important
  • Camps using SSO
  • Other possibilities
  • Gitlab login
  • Cloud IAM
  • Why did the Drupal Association choose Cloud IAM
  • How do you see the collaboration growing
  • Where are we now
  • What are the next steps
  • How far are we from this becoming a reality
  • What does onboarding look like
  • Will third party sites be able to use D.O SSO
  • Can the community help
  • Fog & Fireflies
    • First book
    • Can you buy it now
Resources Guests

Tim Lehnen - aspenthornpress.com hestenet

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Matt Glaman - mglaman.dev mglaman

MOTW Correspondent

Martin Anderson-Clutz - mandclu

  • Brief description:
    • Have you ever wanted to have a field on user profiles specifically designed for drupal.org usernames? There’s a module for that
  • Module name/project name:
  • Brief history
    • How old: created in August 2019 by hussainweb AKA Hussain Abbas of Axelerant
    • Versions available: 2.0.0-beta4
  • Maintainership
    • Actively maintained, that release was made less than 6 months ago
    • Test coverage
    • Number of open issues: 8 open issues, none of which are bugs
  • Usage stats:
    • 1 site
  • Module features and usage
    • The module provides a new field type, along with its own widget and formatter
    • With a simple checkbox, you can get the module to validate that the provided username is registered on drupal.org
    • What’s really powerful about this module is that it can also pull other information from the drupal.org profile, such as first and last name, country, bio, and more
    • It does this by leveraging a Guzzle-based API client for drupal.org that Hussain created as part of a DrupalCon Asia developer contest
    • I believe the intended use of the module is to use a provided drush command to copy the values from the drupal.org username field into other fields, where they would be displayed to site visitors
    • Although this module isn’t something that a lot of sites will need, I could see it being really useful for Drupal camp websites, to automatically collect a lot of the information that many such sites ask users to populate manually
    • I think it’s also an interesting use of the Drupal.org API, and could be a useful reference for anyone needing to implement a custom integration
Categories: FLOSS Project Planets

Contributing is more than just code

Planet KDE - Mon, 2024-04-22 12:00

When thinking about how to contribute to KDE, many people probably still think that you have to write actual code. While it’s true that C++ and QML is at the heart of our applications, it’s just one puzzle piece of many that make up a successful product. Besides donating money to KDE or developers like me individually, there’s much more you can do to support us: promo work, drawing icons, brainstorming ideas, writing documentation, triaging bug reports or writing new ones, or in this case sending the relevant piece of hardware to a developer. Every single contribution counts!

It’s been at least ten years since I last used an optical drive

A key ingredient to KDE’s cross-platform story is Solid, our device integration framework. It lets applications enumerate devices, such as hard drive partitions, USB thumb drives, but also batteries and peripherals, in a platform-independent way. When it comes to hardware, sometimes emulating its behavior is tough and even a virtual machine might not behave exactly the same as the real thing. Here’s the story of how the donation of a portable DVD drive let me unlock a massive performance boost.

On Linux, Solid talks to UDisks2 over DBus to enumerate storage devices. You can actually view all the information yourself by using qdbusviewer or d-feet and navigating to the org.freedesktop.UDisks2 service on the System Bus. The Places panel found in applications like Dolphin, as well as the Device Notifier applet in the System Tray, queries Solid for interesting storage devices to display to the user.

On a typical system, there are plenty of mount points (Snap in particular is notorious for creating lots of loop devices) which we don't want to show. Nevertheless, we have to fetch them all to decide whether they're interesting to us. For example, usually only storage devices explicitly listed in fstab, mounted from /media (your typical USB stick), or originating in the user's home directory (an ISO image in your Downloads folder) are displayed.
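Solid implements these rules in C++, but the gist is easy to restate. The following Python fragment is purely an illustrative sketch of the heuristics just described (the function name and arguments are invented for the example), not Solid's actual code:

    from pathlib import Path

    def is_interesting(mount_point: str, fstab_mounts: set[str], home: str) -> bool:
        # Restatement of the display heuristics described above:
        # fstab entries, /media mounts (USB sticks), and mounts
        # inside the user's home directory (e.g. a loop-mounted ISO).
        path = Path(mount_point)
        return (
            mount_point in fstab_mounts
            or path.is_relative_to("/media")
            or path.is_relative_to(home)
        )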

Currently, owing to Solid’s modular nature and the fact that a lot of its original code was written in KDE 4 times, when many API conveniences in DBus and UDisks didn’t exist yet, Solid uses the DBus Introspectable interface to enumerate all devices. This gives us an XML description of the available interfaces and object paths on the service. As you can imagine, receiving and processing that data string can be quite expensive. Furthermore, for every device that was enumerated, a Solid Device instance is created, which then fetches all properties from all interfaces on the relevant object, which again can be slow. The DBus interfaces an object implements in UDisks2 give us a good idea of what type of storage we’re dealing with; for example, org.freedesktop.UDisks2.Loop contains properties regarding loop devices, such as the original path of the image file that has been mounted, which in turn is also an org.freedesktop.UDisks2.Block device, and so on.

When retrieving a mount point via DBus, you eventually realize that “47” is a forward slash.

There must be a better way to do this, right? There is! It’s called org.freedesktop.DBus.ObjectManager. It lets you fetch all objects and their properties in a single call. This would allow Solid to query everything at once on startup and then only fetch individual properties when they get invalidated or a new device is plugged in at runtime.
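To make the difference concrete, here is what that single round trip looks like from Python using the dbus-python bindings. Solid itself does this from C++/Qt; the snippet below is only a sketch of the DBus traffic involved:

    import dbus

    # Connect to the system bus, where UDisks2 lives.
    bus = dbus.SystemBus()
    udisks = bus.get_object("org.freedesktop.UDisks2", "/org/freedesktop/UDisks2")
    manager = dbus.Interface(udisks, "org.freedesktop.DBus.ObjectManager")

    # One call returns every object with all of its interfaces and properties,
    # instead of one Introspect plus several property fetches per device.
    objects = manager.GetManagedObjects()
    for path, interfaces in objects.items():
        if "org.freedesktop.UDisks2.Block" in interfaces:
            print(path)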

Both encrypted drives and optical media are somewhat special in that they're a drive (or container) containing the actual media or partition. While a USB stick just disappears entirely as you unplug it, a DVD drive will only have its media ejected. It means we need to monitor the drive, check its media availability, and then announce the disc inside of it. However, when I asked fellow KDE developers to test my changes, the patch-set worked fine with the CD-ROM drive emulated in a virtual machine but failed miserably with a legit drive. The situation with Audio CDs was even worse since they don't have a regular file system associated with them. And guess what: there are also CDs that contain both audio and data.

I asked around on KDE’s Matrix channels whether someone might have a spare USB CD-ROM drive and was willing to help. The other day MartinR approached me in the KDE neon channel and said he had a spare one he could mail somewhere. When it arrived a week later, I immediately tried it out (it’s been some time since I’ve seen a device with a USB Y cable) and it indeed let me iron out a bunch of remaining issues with the original patch-set. There are other examples, too, where having the actual hardware is key. For instance, in order to properly develop HDR support in KWin, the developers need an actual screen capable of displaying it.

KWin Wayland running in HDR mode on a portable OLED screen, courtesy of Xaver (the picture of course doesn’t do it justice)

The change isn’t actually merged yet as I am in the process of writing a fake UDisks2 service for Solid. This would let us run a bunch of automated tests, particularly for the weird cases, and ensure that my refactor doesn’t cause any regressions. Solid has unit tests for its general working but not specifically to the way it interacts with UDisks2. A bug in Solid that renders your data accessible (sorry about that encrypted drives bug the other week) or causes the shell or some KDE background service to crash upon plugging in a device would be a disaster.

Having said all of that, let me thank you again very much: without your generous donation I would not have been able to realize this project. On my laptop, the time it took to initialize a KFilePlacesModel went down from 55–60 ms to just under 20 ms. The number of DBus calls it places to the UDisks2 service went down from around 60 “get all properties”, 45 “introspect”, and 15 “get this particular property” calls to a single “get all managed objects” call, plus one “introspect” call I have yet to hunt down. I’m sure our users will very much appreciate a faster-starting Dolphin and a snappier file dialog! Also many thanks to Fabian Vogt and notably Jan R. for continued advice and testing.

If you have a KDE development setup (and if you don’t, go set one up), please test this Solid patch, and let me know if it causes any trouble for you!

Categories: FLOSS Project Planets

The Drop Times: Dialogues that Shape the Future.

Planet Drupal - Mon, 2024-04-22 11:37

Dear Readers,

At the core of any vibrant community lies the foundational practice of open and effective communication—where growth is nurtured, and innovation is sparked. For The DropTimes, our mission transcends beyond the basic dissemination of information; we actively facilitate a platform for dialogue, connecting the latest technological evolutions with an engaged readership. This integral role embodies our commitment to not only inform but also to catalyze conversation and change, ensuring that every voice can be heard and every insight can be shared, thereby driving Drupal's growth.

The vitality of these discussions within the Drupal community is observable. Whether debating the necessity of new modules for Drupal 10 or discussing the release dates for Drupal 11, each dialogue enriches our collective understanding and propels the platform forward. These conversations, highlighted in our articles and series, testify to the importance of discussing diverse perspectives and disseminating varied opinions. 

By engaging in such dynamic exchanges, the community not only addresses immediate needs and challenges but also shapes the future of the technology. This ongoing dialogue ensures that Drupal remains at the cutting edge, constantly evolving in response to the contributions and concerns of its global user base. With that, let's move on to last week's highlights.

In the second part of the "Drupal's Innovation & Future: 2024 and Beyond" series, Kazima Abbas, a sub-editor with The DropTimes [TDT], takes a closer look at the thoughts and predictions of Drupal experts Carlos Rincon Sanchez, Oscar Loria, Stella Power, Krishna R P, and Sinduri Guntupalli. The series aims to examine what might lie ahead for the Drupal platform through meticulous discussions.

In another interesting exchange, I had the opportunity to sit down with Grzegorz Bartman, the CEO of Droptica, to discuss Droopler 4, the latest version of their Drupal distribution. Droopler is a Drupal distribution: a pre-built website that users can customize and manage to fit their requirements without having to start from scratch. Learn more about Droopler 4 in this interview with Greg Bartman.

Last week's TDT Spotlight featured Gai Technologies, an organization rooted in the open-source ethos with more than 12 years of experience in web solutions. Elma John, our sub-editor, discussed the inception and journey of Gai Technologies, which is located in the serene setting of the Himalayas, with N. Krishnan, the CEO.

Pantheon, a leading SaaS-based WebOps platform, has entered into a strategic partnership with Lytics, a premier customer data platform (CDP) fully integrated with Google Cloud. Chris Yates, Pantheon’s vice president of products, and James McDermott elaborated on the technical synergies during a detailed discussion with The DropTimes. Read the detailed article to learn how this synergy will benefit Pantheon and Lytics' customers.

TDT is actively seeking volunteers to cover DrupalCon Portland 2024. Meanwhile, the insights from our DrupalCon Pittsburgh 2023 volunteers can be read here. Aiden F Dean Dunn, a then-recent graduate of the University of Pittsburgh, provides a fresh perspective on the event in a conversation with Varun Baker, highlighting the welcoming and diverse nature of the Drupal community. Additionally, the regular registration window for DrupalCon Portland 2024 has ended; late registration is now open.

Drupalers can now win a free ticket to DrupalCon Barcelona 2024 with the promo video contest. All are encouraged to record a brief video—preferably in landscape mode—stating their name and country of origin and send it to the organizers. Participants are also urged to share their videos on social media to encourage others to join the campaign, which aims to highlight the Drupal community's global diversity. The deadline for submissions is April 24, 2024.

The Healthcare Summit at DrupalCon Portland, scheduled for Thursday, May 9, is hailed as a must-attend event that will offer valuable insights for navigating the complexities of the healthcare industry in the digital age. Tickets are now available for Drupal Camping 2024 in Wolfsburg. Priced between 80 € and 100 €, they offer access to four days of camping, chilling, swimming, barbecuing, and, of course, all things Drupal.

There are Drupal events around the globe each week to keep Drupal enthusiasts engaged. A complete list of events for the week is available here.

Jeff Greenberg, a seasoned Drupal Architect at iFACTORY, recently sparked a discussion on LinkedIn about the apparent lack of a Drupal 10 module that enables users to subscribe by email to be notified when new content is posted. This query led to a broad response from various Drupal community experts, each offering insights or potential solutions. Dive into the discussion here.

The organizers of Splash Awards Deutschland & Österreich have announced the date for this year's Splash Awards 2024 Germany & Austria, set to take place on November 7 in Berlin. The Singapore Government Digital Services team has recently introduced Purple A11y, a comprehensive open-source accessibility testing tool designed to enhance website usability for persons with disabilities.

The BAT API module, widely recognized for its robust booking and availability management capabilities, has officially launched support for Drupal 10. Baddy Sonja Breidert, CEO and Co-Founder of 1xINTERNET, recently discussed enhancements to the Frontend Editing module they designed to streamline the editor experience. 

In other news, Smile launched a new no-code Drupal profile named Sobki at DrupalCamp Rennes 2024 on March 27.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update April 22

Open Source Initiative - Mon, 2024-04-22 10:42
Comments on the forum
  • A user added in the forum that there is an issue: traditional copyright protection might not apply to model weights because they are essentially mathematical calculations. “Licensing them through any kind of copyright license will not be enforceable!! And this means that anybody can use them without any copyright restriction (assuming that they have been made public), and this means that you cannot enforce any kind of provisions such as attribution, no warranty or copyleft.” They suggest using contractual terms instead of relying on copyright as a workaround, acknowledging that this will trigger a larger conversation.
Comments left on the definition text
  • Clarification needed under “What is Open Source AI”:
    1. Discussion on whether “made available” should be changed to “released” or “distributed.”
       • One user pointed out that “made available” is the most appropriate wording, as the suggested alternatives would be antagonistic and limiting.
    2. Continuation of last week’s issue of defining whom the four freedoms are for: deployers, users, or someone else.
       • One user reads it as “We need essential freedoms to enable users…”
       • But then whom are we defining as “users”: the person deploying the AI, or the one issuing the prompt?
       • Another wording was suggested: “Open Source AI is an AI system that is made available under terms that grant, without conditions or restrictions, the rights to…”
  • Clarification needed under “Preferred form to make modification to a machine learning system”:
    1. Specifically regarding the claim “(The following components are not required,) but their inclusion in releases is appreciated”:
       • Is this a statement of best practice, or a mere suggestion?
       • A suggestion was made to change the sentence to “The following components are not required to meet the Open Source AI definition and may be provided for convenience.” This would also “consider if those components are provided, can they be provided under different terms that don’t meet the Open Source AI definition, or do they fall under the same OSI compliant license automatically.”
    2. A question regarding the addition of “may” under data transparency in the 0.0.7 draft definition, which was not in the 0.0.6 one, considering that the components are described as “required” in the checklist below.
       • (Context: “Sufficiently detailed information on how the system was trained. This may include the training methodologies and techniques, the training data sets used, information about the provenance of those data sets, their scope and characteristics; how the data was obtained and selected, the labelling procedures and data cleaning methodologies.”)
       • Another user seconded this and further added that it should be changed to “must”, or something else that is definitive.
Town Hall meeting was held on April 19th

In case you missed it, the town hall was held last Friday. Access the recordings and the slides used here.

Categories: FLOSS Research

Real Python: Write Unit Tests for Your Python Code With ChatGPT

Planet Python - Mon, 2024-04-22 10:00

Having a good battery of tests for your code may be a requirement for many Python projects. In practice, writing unit tests is hard and can take a lot of time and effort. Therefore, some developers don’t like to write them. However, with large language models (LLMs) and tools like ChatGPT, you can quickly create robust and complete sets of tests for your Python code.

In Python, you can use multiple different tools for writing tests. The most commonly used tools include doctest, unittest, and pytest. ChatGPT can be of great help in writing tests with any of these tools.

In this tutorial, you’ll:

  • Prompt ChatGPT to create tests using doctest
  • Use ChatGPT to write unittest tests, fixtures, and suites
  • Craft ChatGPT prompts to write pytest tests and fixtures
  • Use alternative prompts for cases where the code isn’t available

To get the most out of this tutorial, you should set up a ChatGPT account and know the basics of interacting with this tool using prompt engineering. You should also know the basics of how to test code in Python.

Get Your Code: Click here to download the free sample code you’ll use to write unit tests for your Python code using ChatGPT.

Benefits of Using ChatGPT for Testing Python Code

Having good and up-to-date unit tests for your code is a must for any Python project. Poorly tested code or code without tests may end up being unreliable and weak. With automated tests, you can ensure and show that your code works correctly in different scenarios. So, having tests is important from the technical and commercial point of view.

Writing good tests is hard and can take a lot of time. That’s why some developers don’t like to write them at all. Using large language models (LLMs) like ChatGPT can be a viable alternative for providing your projects and code with proper tests.

Note: You can use ChatGPT and other LLMs in many useful ways in the context of Python development. Check out the following resources for other use cases and useful insights:

Some of the benefits of using ChatGPT to write tests for your Python code include the following:

  • Efficiency and speed: It can generate unit tests based on specifications or code snippets. This possibility significantly reduces the time that you need to spend writing tests. So you can focus on writing application logic.
  • Coverage improvement: It can suggest tests for edge cases or scenarios that developers might not immediately consider. This way, you can improve your code’s test coverage.
  • Error reduction: It can reduce human error in writing repetitive or boilerplate test code.
  • Learning and onboarding: It can serve as an educational tool for developers who are new to testing frameworks or unfamiliar with best practices in unit testing. The generated tests can help developers learn about testing patterns, assertions, and ways to effectively write tests.

With ChatGPT, you can generate unit tests for your Python code in almost no time. However, note that even though the generated tests can look good, you should still review and possibly refine them.

Note: For this tutorial, you’ll use a free ChatGPT account that’s powered by the GPT-3.5 LLM. You can use this model for free with the default web interface by OpenAI. If you use a paid account with GPT-4, then your results might be more accurate and faster.

Additionally, you’ll use an individual chat thread for each major section in the tutorial. So you’ll use different threads for the doctest, unittest, and pytest sections.

Using a single chat thread to run all the examples in this tutorial may cause the chat to generate significantly different responses because of the past context, which includes previous prompts and responses.

In the following sections, you’ll learn the basics of using ChatGPT as an assistant for creating coherent unit tests for your Python projects using different test frameworks and libraries.

Writing doctest Tests With ChatGPT

Python’s doctest module provides a lightweight testing framework that you can use for test automation. It can read the test cases from your project’s documentation and your code’s docstrings. This framework comes with the Python interpreter, so you’ll have it at your disposal with any Python installation, which is great.

Note: To dive deeper into how to use the doctest module for testing your code, check out Python’s doctest: Document and Test Your Code at Once.

With doctest, you can write code that provides examples of how your code works, simulating REPL sessions. These examples work as documentation and also as test cases or unit tests. Each doctest test needs a statement or expression and an expected result.
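For instance, a minimal docstring with doctest tests might look like the following. The add() function here is hypothetical and only demonstrates the statement-plus-expected-result format:

    def add(a, b):
        """Return the sum of a and b.

        >>> add(2, 3)
        5
        >>> add(-1, 1)
        0
        """
        return a + b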

You can use ChatGPT to help you out with writing doctest tests. First, you need some code to test. The following function tries to tackle the FizzBuzz challenge, where you return "fizz" for numbers divisible by three, "buzz" for those divisible by five, and "fizz buzz" for those divisible by both three and five.

Go ahead and create a fizzbuzz.py file and add the following code to it:

    # fizzbuzz.py

    def fizzbuzz(number):
        if number % 3 == 0:
            return "fizz"
        elif number % 5 == 0:
            return "buzz"
        elif number % 15 == 0:
            return "fizz buzz"
        else:
            return number

This function works okay for numbers divisible by 3 or 5. However, there is a slight issue with numbers that are divisible by both. Even though you know this, go ahead and start a new chat thread. Then, prompt ChatGPT to write doctest tests for the function:

You: Write doctest tests for the following function:

    def fizzbuzz(number):
        if number % 3 == 0:
            return "fizz"
        elif number % 5 == 0:
            return "buzz"
        elif number % 15 == 0:
            return "fizz buzz"
        else:
            return number

ChatGPT:

Certainly! The function you’ve provided implements a basic FizzBuzz algorithm but with a slight logical error. The check for divisibility by 15 should come before checks for 3 and 5, as every number divisible by 15 is also divisible by both 3 and 5, but your current order will never reach the "fizz buzz" case. Here’s a corrected version of your function with the logical order fixed, followed by doctest tests for it:

    def fizzbuzz(number):
        """
        Returns 'fizz' if number is divisible by 3,
        'buzz' if number is divisible by 5,
        'fizz buzz' if number is divisible by 15,
        and the number itself otherwise.

        >>> fizzbuzz(3)
        'fizz'
        >>> fizzbuzz(5)
        'buzz'
        >>> fizzbuzz(15)
        'fizz buzz'
        >>> fizzbuzz(4)
        4
        >>> fizzbuzz(30)
        'fizz buzz'
        >>> fizzbuzz(9)
        'fizz'
        >>> fizzbuzz(10)
        'buzz'
        >>> fizzbuzz(2)
        2
        """
        if number % 15 == 0:
            return "fizz buzz"
        elif number % 3 == 0:
            return "fizz"
        elif number % 5 == 0:
            return "buzz"
        else:
            return number

To use these doctest tests, you’ll need to include them in the docstring of your function as shown above. Then, you can run the tests with the doctest module by adding the following to your script:

    if __name__ == "__main__":
        import doctest
        doctest.testmod()

This code will automatically run the tests defined in the docstrings when you execute the Python file. Remember, the corrected order of conditions in the function ensures that numbers divisible by both 3 and 5 return "fizz buzz", which is crucial for the FizzBuzz logic.

Wow! That’s great! The chat not only wrote the doctest tests for you but also fixed the issue, explained important concepts around doctest, and provided instructions on how to run the tests. Note that the tests cover both regular and edge cases, which is pretty convenient.
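As a side note, you don’t strictly need the testmod() block to run these tests. The doctest module can also be pointed at a file from the command line, which is standard doctest behavior rather than anything specific to the generated code:

    $ python -m doctest -v fizzbuzz.py

The -v flag makes doctest print each example as it runs, so you can confirm that all of the generated tests execute and pass.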

Read the full article at https://realpython.com/chatgpt-unit-tests-python/ »


Categories: FLOSS Project Planets

Bits from Debian: Debian Project Leader Election 2024, Andreas Tille elected.

Planet Debian - Mon, 2024-04-22 08:00

The voting period for the Debian Project Leader election has ended. Please join us in congratulating Andreas Tille as the new Debian Project Leader.

The new term for the project leader started on 2024-04-21.

369 of 1,010 Debian Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2024 page.

Many thanks to all of our Developers for voting.

Categories: FLOSS Project Planets

Vincent Fourmond: QSoas version 3.3 is out

Planet Debian - Mon, 2024-04-22 06:50
Version 3.3 brings in new features, including reverse Laplace transforms and fits, pH fits, and commands for picking points from a dataset, averaging points with the same X value, or performing singular value decomposition.

In addition to these new features, many previous commands were improved, like the addition of a bandcut filter in FFT filtering, better handling of the loading of files produced by QSoas itself, and a button to interrupt the processing of scripts.

There are a lot of other new features, improvements and so on, look for the full list there. About QSoas
QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 3.3. You can download its source code or precompiled versions for macOS and Windows free of charge there. Alternatively, you can clone from the GitHub repository.
Categories: FLOSS Project Planets

LN Webworks: How To Use Cron Jobs For Task Automation In Drupal

Planet Drupal - Mon, 2024-04-22 04:07

Efficiency is a must for any website in the fast-paced world of web development. One of the most important tools for maximizing the efficiency of your Drupal site is the cron job.

Cron jobs are powerful tools that automate routine website tasks, taking a big load off a developer or site administrator. In this article, I will give you important insights into cron jobs and how to use them within Drupal, followed by how to customize tasks by overriding services of the Ultimate Cron module.

What Are Cron Jobs in Drupal?

Cron in Drupal coordinates all the behind-the-scenes tasks. It handles everything from clearing caches to checking for content updates and executing scheduled tasks set by your modules.

Categories: FLOSS Project Planets

Zato Blog: Enterprise Python: Integrating with Salesforce

Planet Python - Mon, 2024-04-22 04:00
Enterprise Python: Integrating with Salesforce

2024-04-22, by Dariusz Suchojad

Overview

Salesforce connections are one of the newest additions to Zato 3.2, allowing you to look up and manage Salesforce records and other business data. To showcase it, this article will create a sample Salesforce marketing campaign using nothing more than basic REST APIs combined with plain Python objects, such as dicts.

If you have not done it already, you can download Zato here.

Basic workflow

The scope of our works will be:

  • Creating Salesforce credentials for our integration project
  • Defining a Salesforce connection in Zato
  • Authoring a service that will map input data to the format that Salesforce expects
  • Creating a Zato REST channel that will be invoked through curl during tests
  • Testing the integration
Creating Salesforce credentials

To be able to create a connection to Salesforce in the next step, we need a few credentials. There is a full article about how to prepare them, and this section is the gist of it.

At runtime, based on this information, Zato will obtain the necessary authentication and authorization tokens itself, which means that you will only focus on the business side of the integration, not on its low-level aspects.

The process of obtaining the credentials needs to be coordinated with an administrator of your organization. To assist in that, the screenshots below explain where to find them.

The credentials are:

  • Username and password
  • Consumer key
  • Consumer secret

The username and password are simply the same credentials that can be used to log in to Salesforce:

Consumer key and secret are properties of a connected app - this is a term that Salesforce uses for API clients that invoke its services. If you are already an experienced Salesforce REST API user, you may know the key and secret under their aliases of "client_id" and "client_secret" - these are the same objects.

Note that when a connected app already exists and you would like to retrieve the key and secret, they will be available under the "View" menu option for the app, not under "Edit" or "Manage".

Defining a Salesforce connection in Zato

With all the credentials in place, we can create a new Salesforce connection in Zato Dashboard, as below.

Authoring an integration service in Python

Above, we created a connection definition that lets Zato obtain session tokens and establish connections to Salesforce. Now, we can create an API service that will make use of such connections.

In the example below, we are using the POST REST method to invoke an endpoint that creates new Salesforce campaigns. In your own integrations, you can invoke any other Salesforce endpoint, using any REST method as needed, by following the same pattern: create a model with input fields, build a Python dict for the request to Salesforce, invoke the endpoint, and map the required fields from the Salesforce response to the response that your own service returns to its own callers.

Note that we use a datamodel-based SimpleIO definition for the service. Among other things, although we are not going to do it here, this would let us offer definitions for this and other services.

    # -*- coding: utf-8 -*-

    # stdlib
    from dataclasses import dataclass

    # Zato
    from zato.server.service import Model, Service

    # ###########################################################################

    if 0:
        from zato.server.connection.salesforce import SalesforceClient

    # ###########################################################################

    @dataclass(init=False)
    class CreateCampaignRequest(Model):
        name: str
        segment: str

    # ###########################################################################

    @dataclass(init=False)
    class CreateCampaignResponse(Model):
        campaign_id: str

    # ###########################################################################

    class CreateCampaign(Service):

        class SimpleIO:
            input = CreateCampaignRequest
            output = CreateCampaignResponse

        def handle(self):

            # This is our input data
            input = self.request.input # type: CreateCampaignRequest

            # Salesforce REST API endpoint to invoke - note that Zato
            # will add a prefix to it containing the API version.
            path = '/sobjects/Campaign/'

            # Build the request to Salesforce based on what we received
            request = {
                'Name': input.name,
                'Segment__c': input.segment,
            }

            # .. create a reference to our connection definition ..
            salesforce = self.cloud.salesforce['My Salesforce Connection']

            # .. obtain a client to Salesforce ..
            with salesforce.conn.client() as client: # type: SalesforceClient

                # .. create the campaign now ..
                sf_response = client.post(path, request)

            # .. build our response object ..
            response = CreateCampaignResponse()
            response.campaign_id = sf_response['id']

            # .. and return its ID to our caller.
            self.response.payload = response

    # ###########################################################################

Creating a REST channel

Note that we assign HTTP Basic Auth credentials to the channel. In this manner, it is possible for clients of this REST channel to authenticate using a method that they are already familiar with, which simplifies everyone's work: it is Zato that deals with how to authenticate against Salesforce, whereas your API clients use the ubiquitous HTTP Basic Auth method.

Testing

The last step is to invoke the newly created channel:

    $ curl http://api:password@localhost:17010/api/campaign/create -d '{"name":"Hello", "segment":"123"}'
    {"campaign_id":"8901Z3VHXDTebEJWs"}
    $

That is everything - you have just integrated with Salesforce and exposed a REST channel for external applications to integrate with!

More blog posts
Categories: FLOSS Project Planets

Russ Allbery: Review: The Stars, Like Dust

Planet Debian - Sun, 2024-04-21 22:22

Review: The Stars, Like Dust, by Isaac Asimov

Series: Galactic Empire #2
Publisher: Fawcett Crest
Copyright: 1950, 1951
Printing: June 1972
Format: Mass market
Pages: 192

The Stars, Like Dust is usually listed as the first book in Asimov's lesser-known Galactic Empire Trilogy since it takes place before Pebble in the Sky. Pebble in the Sky was published first, though, so I count it as the second book. It is very early science fiction with a few mystery overtones.

Buying books produces about 5% of the pleasure of reading them while taking much less than 5% of the time. There was a time in my life when I thoroughly enjoyed methodically working through a used book store, list in hand, tracking down cheap copies to fill in holes in series. This means that I own a lot of books that I thought at some point that I would want to read but never got around to, often because, at the time, I was feeling completionist about some series or piece of world-building. From time to time, I get the urge to try to read some of them.

Sometimes this is a poor use of my time.

The Galactic Empire series is from Asimov's first science fiction period, after the Foundation series but contemporaneous with their collection into novels. They're set long, long before Foundation, but after humans have inhabited numerous star systems and Earth has become something of a backwater. That process is just starting in The Stars, Like Dust: Earth is still somewhere where an upper-class son might be sent for an education, but it has been devastated by nuclear wars and is well on its way to becoming an inward-looking relic on the edge of galactic society.

Biron Farrill is the son of the Lord Rancher of Widemos, a wealthy noble whose world is one of those conquered by the Tyranni. In many other SF novels, the Tyranni would be an alien race; here, it's a hierarchical and authoritarian human civilization. The book opens with Biron discovering a radiation bomb planted in his dorm room. Shortly after, he learns that his father has been arrested. One of his fellow students claims to be on Biron's side against the Tyranni and gives him false papers to travel to Rhodia, a wealthy world run by a Tyranni sycophant.

Like most books of this era, The Stars, Like Dust is a short novel full of plot twists. Unlike some of its contemporaries, it's not devoid of characterization, but I might have liked it better if it were. Biron behaves like an obnoxious teenager when he's not being an arrogant ass. There is a female character who does a few plot-relevant things and at no point is sexually assaulted, so I'll give Asimov that much, but the gender stereotypes are ironclad and there is an entire subplot focused on what I can only describe as seduction via petty jealousy.

The writing... well, let me quote a typical passage:

There was no way of telling when the threshold would be reached. Perhaps not for hours, and perhaps the next moment. Biron remained standing helplessly, flashlight held loosely in his damp hands. Half an hour before, the visiphone had awakened him, and he had been at peace then. Now he knew he was going to die.

Biron didn't want to die, but he was penned in hopelessly, and there was no place to hide.

Needless to say, Biron doesn't die. Even if your tolerance for pulp melodrama is high, 192 small-print pages of this sort of thing is wearying.

Like a lot of Asimov plots, The Stars, Like Dust has some of the shape of a mystery novel. Biron, with the aid of some newfound companions on Rhodia, learns of a secret rebellion against the Tyranni and attempts to track down its base to join them. There are false leads, disguised identities, clues that are difficult to interpret, and similar classic mystery trappings, all covered with a patina of early 1950s imaginary science. To me, it felt constructed and artificial in ways that made the strings Asimov was pulling obvious. I don't know if someone who likes mystery construction would feel differently about it.

The worst part of the plot thankfully doesn't come up much. We learn early in the story that Biron was on Earth to search for a long-lost document believed to be vital to defeating the Tyranni. The nature of that document is revealed on the final page, so I won't spoil it, but if you try to think of the stupidest possible document someone could have built this plot around, I suspect you will only need one guess. (In Asimov's defense, he blamed Galaxy editor H.L. Gold for persuading him to include this plot, and disavowed it a few years later.)

The Stars, Like Dust is one of the worst books I have ever read. The characters are overwrought, the politics are slapdash and build on broad stereotypes, the romantic subplot is dire and plays out mainly via Biron egregiously manipulating his petulant love interest, and the writing is annoying. Sometimes pulp fiction makes up for those common flaws through larger-than-life feats of daring, sweeping visions of future societies, and ever-escalating stakes. There is little to none of that here. Asimov instead provides tedious political maneuvering among a class of elitist bankers and landowners who consider themselves natural leaders. The only places where the power structures of this future government make sense are where Asimov blatantly steals them from either the Roman Empire or the Doge of Venice.

The one thing this book has going for it — the thing, apart from bloody-minded completionism, that kept me reading — is that the technology is hilariously weird in that way that only 1940s and 1950s science fiction can be. The characters have access to communication via some sort of interstellar telepathy (messages coded to a specific person's "brain waves") and can travel between stars through hyperspace jumps, but each jump is manually calculated by referring to the pilot's (paper!) volumes of the Standard Galactic Ephemeris. Communication between ships (via "etheric radio") requires manually aiming a radio beam at the area in space where one thinks the other ship is. It's an unintentionally entertaining combination of technology that now looks absurdly primitive and science that is so advanced and hand-waved that it's obviously made up.

I also have to give Asimov some points for using spherical coordinates. It's a small thing, but the coordinate systems in most SF novels and TV shows are obviously not fit for purpose.

I spent about a month and a half of this year barely reading, and while some of that is because I finally tackled a few projects I'd been putting off for years, a lot of it was because of this book. It was only 192 pages, and I'm still curious about the glue between Asimov's Foundation and Robot series, both of which I devoured as a teenager. But every time I picked it up to finally finish it and start another book, I made it about ten pages and then couldn't take any more. Learn from my error: don't try this at home, or at least give up if the same thing starts happening to you.

Followed by The Currents of Space.

Rating: 2 out of 10

Categories: FLOSS Project Planets

www-zh-cn @ Savannah: Welcome our new member - integral

GNU Planet! - Sun, 2024-04-21 20:56

Hi, All:

Please join me in welcoming our new member:

 User Details:
-------------
Name:
Login:   integral
Email:   integral@member.fsf.org

I wish integral a wonderful journey in GNU CTT.

Happy Hacking
wxie

Categories: FLOSS Project Planets

Go Deh: Searching OEIS tables

Planet Python - Sun, 2024-04-21 16:32

 

A few months ago I submitted a sequence to OEIS* that was accepted; yes, but OEIS does not seem to make my sequence searchable!

*OEIS is the Online Encyclopedia of Integer Sequences. I guess table is not in the name, but...

(best viewed on larger than a portrait phone)

Let me explain.

The documentation for OEIS explains that if you have a 2D triangle or table of values, rather than a strictly one-dimensional sequence, you should antidiagonalise the data and submit the resulting sequence.

They give as an example A003987. Its entry shows this table:

Table begins
   0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, ...
   1,  0,  3,  2,  5,  4,  7,  6,  9,  8, 11, 10, ...
   2,  3,  0,  1,  6,  7,  4,  5, 10, 11,  8, ...
   3,  2,  1,  0,  7,  6,  5,  4, 11, 10, ...
   4,  5,  6,  7,  0,  1,  2,  3, 12, ...
   5,  4,  7,  6,  1,  0,  3,  2, ...
   6,  7,  4,  5,  2,  3,  0, ...
   7,  6,  5,  4,  3,  2, ...
   8,  9, 10, 11, 12, ...
   9,  8, 11, 10, ...
  10, 11,  8, ...
  11, 10, ...
  12, ...
  ...

The above gets antidiagonalised to the series beginning:

  0, 1, 1, 2, 0, 2, 3, 3, 3, 3, 4, 2, 0, 2, 4, 5, 5, 1, 1, 5, 5, 6, 4,  6, 0, 6, 4, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8, 6, 4, 6, 0, 6, 4, 6, 8, 9,  9, 5, 5, 1, 1, 5, 5, 9, 9, 10, 8, 10, 4, 2, 0, 2, 4, 10, 8, 10, 11,  11, 11, 11, 3, 3, 3, 3, 11, 11, 11, 11, 12, 10, 8, 10, 12, 2, 0, 2,  12, 10, 8, 10, 12, 13, 13, 9, 9, 13, 13, 1, 1, 13, 13, 9, 9, 13, 13
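To make the mapping concrete, here is a small sketch of my own (not from the OEIS entry): reading the table by its anti-diagonals reproduces the head of that sequence. A003987 tabulates bitwise XOR, so entry (r, c) is r XOR c, which lets us build the table in one line.

# Sketch: read a table (list of row-lists) by anti-diagonals,
# each anti-diagonal running from top row down to left column.
def antidiagonalise(table):
    """Yield table[r][d - r] over anti-diagonals d = r + c = 0, 1, 2, ..."""
    for d in range(len(table)):
        for r in range(d + 1):
            yield table[r][d - r]

# A003987 is the XOR table: entry (r, c) is r XOR c.
xor_table = [[r ^ c for c in range(6)] for r in range(6)]
print(list(antidiagonalise(xor_table)))
# [0, 1, 1, 2, 0, 2, 3, 3, 3, 3, 4, 2, 0, 2, 4, 5, 5, 1, 1, 5, 5]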

Searching...

If we search for a sub-sequence of the antidiagonalised sequence, we can find the correct entry.

If, however, we search for a row of values from the table, A003987 is not found!
The values chosen to search for, 5, 4, 7, 6, 1, 0, 3, appear near the end of the table above, which shows that that row of numbers should be followed by a 2.
The table shows 13×14/2 = 91 values, and OEIS has a list of 104 values for the entry, so it has the data to search through.

No intuitive search of OEIS tables

It seems to me that the most intuitive way to search a table of values is by row, left to right. There are other ways to search a table (assuming an origin at top left, with the table extending right and down):

  • By row: L2R, R2L
  • By column: T2B, B2T
  • By 45-degree diagonals: ↘, ↖, ↙, ↗

OEIS doesn't seem to do these searches on tabular data.
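As a sketch of the kind of search I mean (my code, not OEIS functionality): once a table is regenerated as a list of row-lists, a row-wise, left-to-right search is only a few lines of Python.

# Sketch: search a table (list of row-lists) for a contiguous
# left-to-right run of values in any row.
def row_contains(table, target):
    """True if target appears as a contiguous run, L2R, in some row."""
    t = len(target)
    return any(
        row[i:i + t] == target
        for row in table
        for i in range(len(row) - t + 1)
    )

xor_table = [[r ^ c for c in range(14)] for r in range(14)]
print(row_contains(xor_table, [5, 4, 7, 6, 1, 0, 3]))  # True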

Regenerating a 2D table from antidiagonalised data.

I played around and created some code to recreate a table, as a list of row-lists in Python, given an OEIS B-file. The options handling is a work in progress, but the main part, generating the table, works.

# -*- coding: utf-8 -*-
# %%
"""adia_to_table.py file
Generate 2D table from OEIS type anti-diagonalised sequence held in file

Created on Thu Apr  4 18:16:07 2024

@author: paddy3118
"""
from itertools import zip_longest
import math
# from pprint import pp
from antidiagonals import antidiag_triangle_indices


def read_bfile(bfname: str) -> tuple[int | None,  # first index
                                     list[int]]:   # anti-diag values
    """Read B-file bfname

    ## B-File format:
    * Ref: https://oeis.org/SubmitB.html
    * Blank lines ignored
    * Lines beginning with # ignored
    * Lines of two, space-separated integers <index> <indexed-value>

    It is assumed the index increments by one on subsequent lines.
    """
    first_index, values = None, []
    with open(bfname) as bfile:
        for line in bfile:
            ln = line.strip()
            if not ln or ln.startswith('#'):
                continue
            index, value = [int(field) for field in ln.split()[:2]]
            if first_index is None:
                first_index = index
            values.append(value)

    return first_index, values


def antidiag_to_table(sequence: list[int]) -> list[list[int]]:
    """
    Convert anti-diagonalised sequence back to infinite 2D table.

    Parameters
    ----------
    sequence : list[int]
        Anti-diagonalised values from table.

    Returns
    -------
    list[list[int]]
        Table of rows of ints.

    Table rows will fill in from successive sequence values like this:

     1  2  4  7 11 16 ...
     3  5  8 12 17
     6  9 13 18
    10 14 19
    15 20
    21
    .
    .
    .
    """
    # 1, 3, 6, 10, 15, 21, ... rows*(rows+1) / 2
    # min columns in triangular table generation. ~= min rows
    size = len(sequence)                                  # = rows*(rows+1)/2
    rows = math.ceil((-1 + math.sqrt(4 * 2 * size)) / 2)  # solve for rows
    # cols = rows  # last row may be deleted if last anti-diag is part filled.
    # print(f"{(size, cols) = }")

    # Empty (triangular) table of None's
    table = [[None] * (rows - i)
             for i in range(rows)]

    indices = antidiag_triangle_indices()
    col = 0  # for if sequence is empty
    for val, (row, col) in zip(sequence, indices):
        table[row][col] = val

    # Remove unfilled part of last anti-diag of table
    while col > 0:
        row, col = next(indices)
        table[row].pop(-1)
    # remove last row if present and empty
    if table and not table[-1]:
        table.pop(-1)

    return table


def pp_table(table: list[list[int]]) -> None:
    "Pretty-print table of differing row lengths"
    if not table:
        return
    col_width = max(max(len(str(val)) for val in row) for row in table)
    for row in table:
        print(''.join(f"{val:{col_width}}" for val in row))


def transpose(table: list[list[int]]) -> list[list[int]]:
    "Table of rows to x<>y transposed table of new rows"
    fv = math.nan
    tr = [list(new_row)
          for new_row in zip_longest(*table, fillvalue=fv)]
    # remove fillvalues in triangular transposition
    for row in tr:
        try:
            row[row.index(fv):] = []
        except ValueError:
            continue

    return tr


if __name__ == "__main__":
    print("# TEST FILL BY ANTI-DIAGONAL\n")
    for n in range(0, 8):
        print(f"{n = }:\n")
        ans = antidiag_to_table(list(range(1, n+1)))
        pp_table(ans)
        print()

    fname, m = 'b365096.txt', 505
    print(f"\n\n# Data from {fname}, first {m} values:\n")
    ad = read_bfile(fname)
    ans = antidiag_to_table(ad[1][:m])
    pp_table(ans)
    print("\n## Transposed:\n")
    pp_table((tr := transpose(ans)))


And antidiagonals.py is this:

# -*- coding: utf-8 -*-"""Anti-diagonals:
0,0 0,1 0,2 0,31,0 1,1 1,2 1,32,0 2,1 2,2 2,33,0 3,1 3,2 3,3
Of Square:0,0  0,1 1,0  0,2 1,1 2,0  0,3 1,2 2,1 3,0   1,3 2,2 3,1  2,3 3,2  3,3
of Infinite table:0,0  0,1 1,0  0,2 1,1 2,0  0,3 1,2 2,1 3,0   0,4 1,3 2,2 3,1 4,0 ...

Created on Mon Aug 21 13:36:31 2023
@author: paddy
"""# %% Triangles
from itertools import islice

def antidiag_triangle_indices() -> tuple[int, int]:    x = y = 0    while True:        yield (x, y)        x, y = (x+1, y-1) if y else (0, x+1)

list(islice(antidiag_triangle_indices(), 15))
# %% Rectangles
from itertools import islice

def antidiag_rectangle_indices(sizex: int=4, sizey: int=4) -> tuple[int, int]:    x = y = 0    while True:        yield (x, y)        if (x, y) == (sizex - 1, sizey - 1):            break        x, y = (x+1, y-1)        if x == sizex or y < 0:            u = x + y + 1            x, y = (0, u) if u < sizey else (u - sizey + 1, sizey - 1)
list(antidiag_rectangle_indices(3, 4))


END.

Categories: FLOSS Project Planets

gnulib @ Savannah: GNU gnulib: calling for beta-testers

GNU Planet! - Sun, 2024-04-21 06:47

If you are a developer on a package that uses GNU gnulib as part of its build system:

gnulib-tool has been known to be slow for many years. We have listened to your complaints. A rewrite of gnulib-tool in another programming language (Python) is ready for beta-testing. It is between 8 and 100 times faster than the original gnulib-tool.

Both implementations should behave identically, that is, produce the same generated files and the same output. You can help us ensure this through the following steps:

1. Make sure you have Python (version 3.7 or newer) installed on your machine.

2. Update your gnulib checkout. (For some packages, it comes as a git submodule named 'gnulib'.) Like this:

  $ git checkout master
  $ git pull

     Set the environment variable GNULIB_SRCDIR, pointing to this checkout.

     If the package is using a git submodule named 'gnulib', it is also advisable to do

  $ git commit -m 'build: Update gnulib submodule to latest.' gnulib

     (as a preparation for step 5, because the --no-git option does not work as expected in all variants of 'bootstrap').

3. Set an environment variable that enables checking that the two implementations behave the same:

  $ export GNULIB_TOOL_IMPL=sh+py


4. Clean the built files of your package:

  $ make -k distclean


5. Regenerate the fetched and generated files of your package. Depending on the package, this may be a command such as

  $ ./bootstrap --no-git --gnulib-srcdir=$GNULIB_SRCDIR

     or

  $ export GNULIB_SRCDIR; ./autopull.sh; ./autogen.sh

     or, if no such script is available:

  $ $GNULIB_SRCDIR/gnulib-tool --update

     If there is a failure, due to differences between the 'sh' and 'py' results, please report it to <bug-gnulib@gnu.org>.

6. If this invocation was successful, you can trust the rewritten gnulib-tool and use it from now on, by setting the environment variable

  $ export GNULIB_TOOL_IMPL=py


7. Continue with

  $ ./configure
  $ make

     as usual.

And enjoy the speed! The rewritten gnulib-tool was implemented by Dmitry Selyutin, Collin Funk, and me.

Categories: FLOSS Project Planets

KDE Gear 24.05 branches created

Planet KDE - Sun, 2024-04-21 06:39
Make sure you commit anything you want to end up in the KDE Gear 24.05 releases to them.

Next Dates
  • April 25, 2024: 24.05 Freeze and Beta (24.04.80) tag & release
  • May 9, 2024: 24.05 RC (24.04.90) Tagging and Release
  • May 16, 2024: 24.05 Tagging
  • May 23, 2024: 24.05 Release

https://community.kde.org/Schedules/KDE_Gear_24.05_Schedule
Categories: FLOSS Project Planets
