Feeds

TechBeamers Python: How to Check If Python List is Empty

Planet Python - Tue, 2023-12-26 07:05

Checking whether a Python list is empty is a fundamental task in programming. An empty list often signals the absence of data or the need for specific handling. In this tutorial, we will explore various methods to check if a Python list is empty, covering different aspects and providing multiple examples. Understanding these techniques is […]
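As a minimal taste of what the tutorial covers (the helper function below is my own illustration, not code from the post), the idiomatic check relies on an empty list being falsy:

```python
def describe(items):
    """Return a short description of a list, treating [] specially."""
    # Idiomatic check: an empty list is falsy in a boolean context.
    if not items:
        return "empty"
    # More explicit alternatives: len(items) == 0, or items == [].
    return f"{len(items)} item(s)"

print(describe([]))      # empty
print(describe([1, 2]))  # 2 item(s)
```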

The post How to Check If Python List is Empty appeared first on TechBeamers.

Categories: FLOSS Project Planets

Specbee: A Deep Dive into the Webform Module for Drupal 10

Planet Drupal - Tue, 2023-12-26 06:47
The Webform module is the most powerful and flexible form builder and submission manager for Drupal. It gives site builders the power to easily create complex forms instantly. It comes with a sensible set of default settings, while also letting you customize it as per your requirements. Check out the blog Drupal 9 Webform Module - A Brief Tutorial to get started with the Webform module for your Drupal 9 or 10 website; it will help you understand the basics easily.

The Webform module ships with tons of interesting features! Allow me to present a concise overview of some of the noteworthy features and functionalities that make this module a powerhouse in the world of Drupal. This is an updated version of the article last published on 12 October 2021, designed to keep you in the loop with Drupal 10 and its latest developments.

Altering forms & elements

Any form, element, and their related settings can be altered by using their respective hooks. Below are a few of the hooks that are available; you can find more in the webform.api.php file:

Form hooks
        ◦ hook_webform_submission_form_alter() - Perform alterations before a webform submission form is rendered.
Element hooks
        ◦ hook_webform_element_alter() - Alter webform elements.
Option hooks
        ◦ hook_webform_options_alter() - Alter webform options.
Handler hooks
        ◦ hook_webform_handler_invoke_alter() - Act on a webform handler when a method is invoked.
More hooks
        ◦ hook_webform_access_rules_alter() - Alter the list of access rules that should be managed on a per-webform level.

YAML Source

The Webform module started as the YAML Form module, which allowed people to build forms by writing YAML markup. At some point, the YAML Form module gained a UI and became the Webform module for Drupal 8. YAML provides a simple, easy-to-learn markup language for building and bulk-editing a webform's elements.
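For illustration, here is a minimal sketch of the kind of YAML source a webform's elements can have (the element names and titles are hypothetical, not taken from the original post):

```yaml
name:
  '#type': textfield
  '#title': 'Your Name'
  '#required': true
email:
  '#type': email
  '#title': 'Your Email'
message:
  '#type': textarea
  '#title': 'Message'
```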
The (View) Source page allows developers to edit a webform's render array using YAML markup. Developers can use it to hand-code webforms: quickly alter a webform's labels, cut-and-paste multiple elements, reorder elements, and add custom properties and markup to elements. [Screenshots in the original post: a Contact form with the drag-and-drop UI; the Contact form's YAML source code.]

Conditional fields

Webform lets you add conditional logic to the elements within your form. Let us consider a small example where we need to conditionally handle the visibility of elements based on the value of another element within the form. Here's an example form with two steps: STEP 1, a Radios element with the options 'Email' and 'Mobile Number', and STEP 2, a Fieldset with two elements, 'Email' and 'Mobile Number'. [Screenshots: form build page; form view page.]

In this example, I would like to show the 'Email' field if the 'Email' option is chosen in Step 1, and show the 'Mobile Number' field if the 'Mobile Number' option is chosen instead. To achieve that, edit the 'Email' field, click on the 'Conditions' tab, choose the 'State' as 'Visible' and set the 'Trigger/Value' to 'STEP 1 [Radios] value is email'. Similarly, follow the same steps to add conditional logic to the 'Mobile Number' field and set the 'Trigger/Value' to 'STEP 1 [Radios] value is mobile_number'. [Screenshots: setting up conditional logic; the form when 'Email' is chosen; the form when 'Mobile Number' is chosen.]

Custom options properties

Webform lets you add custom option properties to your form elements. Imagine a scenario where you want to conditionally handle the options of a radio element based on the value of a different element within the form. How would you do that? Well, I did not find any way to handle it through the Conditional logic settings in the UI.
But there is a provision to set 'custom options properties' on your element, where you write the required conditional logic targeting the options within the element using YAML. Here is an example with two radio elements: based on the option selected in the first element, the visibility of the options within the second element should change. If 'Type A' is chosen, then 'Option 1' and 'Option 2' should be visible in the second element; similarly, if 'Type B' is chosen, then 'Option 3' and 'Option 4' should be visible. To achieve this, edit the second element, go to the 'Advanced' tab, scroll down to the 'Options (custom) properties' section and write the necessary logic in YAML. [Screenshots: form build page; form view page before adding custom options properties; setting up the options properties; the form when 'Type A' is chosen; the form when 'Type B' is chosen.]

Webform submission email handlers

Email handlers

An email handler sends a webform submission via email. To add an email handler to your webform, go to 'Settings' and then the 'Emails/Handlers' tab. Next, click on the 'Add email' / 'Add handler' button. As shown in the original post's screenshot, on the 'General' tab add the 'Title' and set the 'Send to' and 'Send from' details. Add the message 'Subject' and 'Body' as required and save the configuration form. And that's about it: your handler gets fired whenever the form is submitted.

You can also set conditional email handlers on your webform, i.e., trigger different email handlers based on the value of certain elements within the form. For example, let us consider a 'Select' element with the values 'Type 1' and 'Type 2'. If the user submits 'Type 1', trigger the 'Email - Type 1' handler, which has its 'To' address set to 'user1@gmail.com'. If the user submits 'Type 2', trigger the 'Email - Type 2' handler, which has its 'To' address set to 'user2@gmail.com'. To add conditional logic to your email handlers, create one handler and name it 'Email - Type 1'.
Set the 'To' address to 'user1@gmail.com', switch to the 'Conditions' tab, choose the 'State' as 'Visible' and set the 'Trigger/Value' to 'Select Type [Select] value is type_1'. Similarly, create the second handler and name it 'Email - Type 2'. Set the 'To' address to 'user2@gmail.com', switch to the 'Conditions' tab, choose the 'State' as 'Visible' and set the 'Trigger/Value' to 'Select Type [Select] value is type_2'.

Scheduled email handlers

The scheduled email handler extends the Webform module's email handler to allow emails to be scheduled. To use this feature, enable the 'Webform Scheduled Email Handler' sub-module. To schedule sending an email of the form submissions, click on the 'Add handler' button and select the 'Scheduled Email' handler. There is only one extra config setting in the 'Scheduled Email' handler compared to the normal email handler: the 'Schedule email date' under the General settings tab. Set the date to trigger your handler, and when the next cron runs, your email will be sent!

Generate PDF copies of webform submissions

The Webform module lets us download and export webform submissions as PDF documents. It also supports sending these PDF documents as email attachments and customising the templates and CSS of the PDF documents. How do we get this working? To begin with, you need to enable the Webform Entity Print (PDF) sub-module (to download and export the PDF documents) and the Webform Entity Print (PDF) Attachment sub-module (to send these PDF documents as email attachments). Please note: this requires the Entity Print module to be installed, which allows you to print any Drupal entity (Drupal 7, 9 and 10) or View (Drupal 9 and 10 only) to PDF.

Download the PDF of a single submission

Visit your webform's results page, view the submission you want to download and scroll down to click on 'Download PDF'.
Export multiple webform submissions as PDF documents in a zip file

Visit your webform's results page, go to the 'Download' exports tab, choose the export format 'PDF documents' and hit Download. You will get all your webform submissions as PDFs in a single *.tar.gz or *.zip file.

Customise the templates and CSS of the PDF documents

Go to your webform's general settings tab (/admin/structure/webform/manage/{your-webform-name}/settings) and scroll down to the 'Entity print' settings section. Add the necessary header and footer template details (tokens are supported) and any CSS to be applied to your PDF. [Screenshots: customising the PDF with a default header, footer and some CSS; the PDF after customizing.]

Attach these PDF documents to your emails

Start by adding an element of type PDF to your webform. Add the PDF element details like its title, view mode, file name, etc. and save it. The PDF element is now added to your results column (note: it is not added to the form itself). When you click on it, you will be redirected to the PDF document. The next step is to attach it to your emails. For that, go into Settings > Emails/Handlers, edit any of your email handlers, check the PDF element, enable 'Include files as attachments' and save the handler. The next time the webform is submitted, the PDF will be attached to the email along with the webform submission details.

Finding Help

There are different ways through which you can seek help with the Webform module.
Here is a list of a few sources:

  • Documentation, Cookbook & Screencasts - https://www.drupal.org/docs/contributed-modules/webform
  • Webform Issue Queue - https://www.drupal.org/project/issues/webform
  • Drupal Answers - http://drupal.stackexchange.com
  • Slack - you can always post your queries about the Webform module in the #webform channel of the Drupal Slack workspace. People from the community, even the module maintainer themselves, are always around and kind enough to help you with your problems.

A HUGE shoutout to Jacob Rockowitz for his relentless support of the Drupal 9/10 Webform module. Webform wouldn't be what it is now without him.

Final Thoughts

There's just so much this versatile Webform module can do that we cannot possibly cover it all in one article. If you love Webform as much as we do, we'd encourage you to get involved and contribute in any way possible. Looking for professional Drupal services and expertise to build your next Drupal website with beneficial modules such as this? We would love to hear from you.
Categories: FLOSS Project Planets

Golems GABB: Enhancing Site Performance with Drupal's BigPipe Module

Planet Drupal - Tue, 2023-12-26 06:04

Nowadays, the stakes are getting higher and higher. Speed is of the essence for the ordinary user, and every web page and developer has to keep up. This is where Drupal's BigPipe module takes the stage.
BigPipe is not a simple addition but a powerful tool for delivering faster page load times. It achieves this by fragmenting a web page into smaller units and prioritizing their loading order. Following this logic, the most crucial parts are loaded first, while the others catch up later. Let's delve deeper into this marvelous technology and learn more about web page optimization using Drupal's BigPipe module.

Categories: FLOSS Project Planets

LN Webworks: Top 7 Secrets No One Told You For The Success of Drupal Web Development Projects

Planet Drupal - Tue, 2023-12-26 04:52

Drupal has solidified its position as a go-to content management system and web development framework, earning the trust of businesses and developers worldwide. As more organizations lean on Drupal for their web development needs, the demand for top-tier projects, handled by a seasoned Drupal development company, is on the rise.

In this article, we're delving into the seven secrets that set the best Drupal web development projects apart: core industry insights from top Drupal website development companies that improve your chances of project success.

Categories: FLOSS Project Planets

The Drop Times: The Essential API Client for Seamless JavaScript Integration

Planet Drupal - Tue, 2023-12-26 02:54
Explore the unfolding saga of Drupal's leap into JavaScript with the official API Client. Delve into insights from contributors Coby Sher and Pratik Kamble, offering a glimpse into the initiative's transformative journey. Discover how this development is set to reshape the Drupal and JavaScript integration.
Categories: FLOSS Project Planets

TechBeamers Python: Multiple Ways to Loop Through a Dictionary in Python

Planet Python - Tue, 2023-12-26 00:09

Dictionaries are a fundamental data structure in Python, offering a powerful way to store and organize data. When it comes to working with dictionaries, one common task is iterating through their key-value pairs. In this tutorial, we will delve into various methods to loop through a dictionary in Python, exploring different aspects and providing multiple […]
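As a quick sketch of the main patterns the tutorial explores (the variable names here are my own), the three standard ways to iterate a dictionary:

```python
inventory = {"apples": 3, "pears": 5}

# 1. Iterating a dict directly yields its keys
#    (in insertion order since Python 3.7).
keys = list(inventory)

# 2. .values() iterates the values only.
total = sum(inventory.values())

# 3. .items() yields (key, value) pairs -- usually the most convenient form.
pairs = [(name, count) for name, count in inventory.items()]

print(keys)   # ['apples', 'pears']
print(total)  # 8
print(pairs)  # [('apples', 3), ('pears', 5)]
```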

The post Multiple Ways to Loop Through a Dictionary in Python appeared first on TechBeamers.

Categories: FLOSS Project Planets

Russ Allbery: Review: A Study in Honor

Planet Debian - Mon, 2023-12-25 22:37

Review: A Study in Honor, by Claire O'Dell

Series: Janet Watson Chronicles #1
Publisher: Harper Voyager
Copyright: July 2018
ISBN: 0-06-269932-6
Format: Kindle
Pages: 295

A Study in Honor is a near-future science fiction novel by Claire O'Dell, a pen name for Beth Bernobich. You will see some assertions, including by the Lambda Literary Award judges, that it is a mystery novel. There is a mystery, but... well, more on that in a moment.

Janet Watson was an Army surgeon in the Second US Civil War when New Confederacy troops overran the lines in Alton, Illinois. Watson lost her left arm to enemy fire. As this book opens, she is returning to Washington, D.C. with a medical discharge, PTSD, and a field replacement artificial arm scavenged from another dead soldier. It works, sort of, mostly, despite being mismatched to her arm and old in both technology and previous wear. It does not work well enough for her to resume her career as a surgeon.

Watson's plan is to request a better artificial arm from the VA (the United States Department of Veterans Affairs, which among other things is responsible for the medical care of wounded veterans). That plan meets a wall of unyielding and uninterested bureaucracy. She has a pension, but it's barely enough for cheap lodging. A lifeline comes in the form of a chance encounter with a former assistant in the Army, who has a difficult friend looking to split the cost of an apartment. The name of that friend is Sara Holmes.

At this point, you know what to expect. This is clearly one of the many respinnings of Arthur Conan Doyle. This time, the setting is in the future and Watson and Holmes are both black women, but the other elements of the setup are familiar: the immediate deduction that Watson came from the front, the shared rooms (2809 Q Street this time, sacrificing homage for the accuracy of a real address), Holmes's tendency to play an instrument (this time the piano), and even the title of this book, which is an obvious echo of the title of the first Holmes novel, A Study in Scarlet.

Except that's not what you'll get. There are a lot of parallels and references here, but this is not a Holmes-style detective novel.

First, it's only arguably a detective novel at all. There is a mystery, which starts with a patient Watson sees in her fallback job as a medical tech in the VA hospital and escalates to a physical attack, but that doesn't start until a third of the way into the book. It certainly is not solved through minute clues and leaps of deduction; instead, that part of the plot has the shape of a thriller rather than a classic mystery. There is a good argument that the thriller is the modern mystery novel, so I don't want to overstate my case, but I think someone who came to this book wanting a Doyle-style mystery would be disappointed.

Second, the mystery is not the heart of this book. Watson is. She, like Doyle's Watson, is the first-person narrator, but she is far more present in the book. I have no idea how accurate O'Dell's portrayal of Watson's PTSD is, but it was certainly compelling and engrossing reading. Her fight for basic dignity and her rage at the surface respect and underlying disinterested hostility of the bureaucratic war machinery is what kept me turning the pages. The mystery plot is an outgrowth of that and felt more like a case study than the motivating thread of the plot.

And third, Sara Holmes... well, I hesitate to say definitively that she's not Sherlock Holmes. There have been so many versions of Holmes over the years, even apart from the degree to which a black woman would necessarily not be like Doyle's character. But she did not remind me of Sherlock Holmes. She reminded me of a cross between James Bond and a high fae.

This sounds like a criticism. It very much is not. I found this high elf spy character far more interesting than I have ever found Sherlock Holmes. But here again, if you came into this book hoping for a Holmes-style master detective, I fear you may be wrong-footed.

The James Bond parts will be obvious when you get there and aren't the most interesting (and thankfully the misogyny is entirely absent). The part I found more fascinating is the way O'Dell sets Holmes apart by making her fae rather than insufferable. She projects effortless elegance, appears and disappears on a mysterious schedule of her own, thinks nothing of reading her roommate's diary, leaves meticulously arranged gifts, and even bargains with Watson for answers to precisely three questions. The reader does learn some mundane explanations for some of this behavior, but to be honest I found them somewhat of a letdown. Sara Holmes is at her best as a character when she tacks her own mysterious path through a rather grim world of exhausted war, penny-pinching bureaucracy, and despair, pursuing an unexplained agenda of her own while showing odd but unmistakable signs of friendship and care.

This is not a romance, at least in this book. It is instead a slowly-developing friendship between two extremely different people, one that I thoroughly enjoyed.

I do have a couple of caveats about this book. The first is that the future US in which it is set is almost pure Twitter doomcasting. Trump's election sparked a long slide into fascism, and when that was arrested by the election of a progressive candidate backed by a fragile coalition, Midwestern red states seceded to form the New Confederacy and start a second civil war that has dragged on for nearly eight years. It's a very specific mainstream liberal dystopian scenario that I've seen so many times it felt like a cliche even though I don't remember seeing it in a book before. This type of future projection of current fears is of course not new for science fiction; Cold War nuclear war novels are probably innumerable. But I had questions, such as how a sparsely-populated, largely non-industrial, and entirely landlocked set of breakaway states could maintain a war footing for eight years. Despite some hand-waving about covert support, those questions are not really answered here.

The second problem is that the ending of this book kind of falls apart. The climax of the mystery investigation is unsatisfyingly straightforward, and the resulting revelation is a hoary cliche. Maybe I'm just complaining about the banality of evil, but if I'd been engrossed in this book for the thriller plot, I think I would have been annoyed.

I wasn't, though; I was here for the characters, for Watson's PTSD and dogged determination, for Sara's strangeness, and particularly for the growing improbable friendship between two women with extremely different life experiences, emotions, and outlooks. That part was great, regardless of the ending.

Do not pick this book up because you want a satisfying deductive mystery with bumbling police and a blizzard of apparently inconsequential clues. That is not at all what's happening here. But this was great on its own terms, and I will be reading the sequel shortly. Recommended, although if you are very online expect to do a bit of eye-rolling at the setting.

Followed by The Hound of Justice.

Rating: 8 out of 10

Categories: FLOSS Project Planets

Russ Allbery: krb5-strength 3.3

Planet Debian - Mon, 2023-12-25 22:24

krb5-strength is a toolkit of plugins and support programs for password strength checking for Kerberos KDCs, either Heimdal or MIT. It also includes a password history mechanism for Heimdal KDCs.

This is a maintenance release, since there hadn't been a new release since 2020. It contains the normal sort of build system and portability updates, and the routine bump in the number of hash iterations used for the history mechanism to protect against (some) brute force attacks. It also includes an RPM spec file contributed by Daria Phoebe Brashear, and some changes to the Perl dependencies to track current community recommendations.

You can get the latest version from the krb5-strength distribution page.

Categories: FLOSS Project Planets

Sergio Talens-Oliag: GitLab CI/CD Tips: Automatic Versioning Using semantic-release

Planet Debian - Mon, 2023-12-25 18:30

This post describes how I’m using semantic-release on gitlab-ci to manage versioning automatically for different kinds of projects following a simple workflow (a develop branch where changes are added or merged to test new versions, a temporary release/#.#.# to generate the release candidate versions and a main branch where the final versions are published).

What is semantic-release

It is a Node.js application designed to manage project versioning information in Git repositories using a continuous integration system (in this post we will use gitlab-ci).

How does it work

By default semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish.

If, after analyzing the commits, the version number has to be changed, the command updates the files we tell it to (e.g. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository.

When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually, or we could run extra jobs when building the final assets; if those fail it is not a big issue anyway, version numbers are cheap and infinite, so we can skip releases if needed).

Commit messages and versioning

The commit messages must follow a known format, the default module used to analyze them uses the angular git commit guidelines, but I prefer the conventional commits one, mainly because it’s a lot easier to use when you want to update the MAJOR version.

The commit message format used must be:

<type>(optional scope): <description>

[optional body]

[optional footer(s)]
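For example, a hypothetical commit message in this format that would trigger a MAJOR version bump under the rules described later in the post:

```
feat(api): add JSON export

The export command now supports a machine-readable output format.

BREAKING CHANGE: the default export format is now JSON instead of CSV.
```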

The system supports three types of branches: release, maintenance and pre-release, but for now I’m not using maintenance ones.

The branches I use and their types are:

  • main as release branch (final versions are published from there)
  • develop as pre release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
  • release/#.#.# as pre release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged with main they are deleted)

On the release branch (main) the version number is updated as follows:

  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last version change (it looks for tags on the same branch).
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR number is going to be changed and there is a commit with type fix in the commits found since the last version change.
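The three rules above can be sketched in a few lines of Python. This is a simplified illustration of the behaviour, not semantic-release's actual implementation; `commits` is assumed to be a list of (header, body) pairs:

```python
def next_release_version(last, commits):
    """Compute the next MAJOR.MINOR.PATCH from conventional commit messages.

    A '!' after the type/scope or a 'BREAKING CHANGE:' footer bumps MAJOR,
    a 'feat' commit bumps MINOR and a 'fix' commit bumps PATCH.
    """
    major, minor, patch = map(int, last.split("."))
    bump = None
    for header, body in commits:
        type_scope = header.split(":", 1)[0]  # e.g. 'feat(api)!' or 'fix'
        if type_scope.endswith("!") or "BREAKING CHANGE:" in body:
            bump = "major"
            break
        ctype = type_scope.split("(", 1)[0]
        if ctype == "feat" and bump != "major":
            bump = "minor"
        elif ctype == "fix" and bump is None:
            bump = "patch"
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return last  # nothing to release

print(next_release_version("1.3.2", [("fix: typo", "")]))  # 1.3.3
```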

On the pre release branches (develop and release/#.#.#) the version and pre release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to correctly compute the next version).

The version number is updated as follows:

  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last released version.

    In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.

  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version.

    In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.

  3. The PATCH number is incremented if neither the MAJOR nor the MINOR number is going to be changed and there is a commit with type fix in the commits found since the last released version.

    In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.

  4. The pre release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2, and so on).

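The pre-release arithmetic above can be sketched as follows (again a simplified illustration of the behaviour, not semantic-release's actual code):

```python
def next_prerelease(last_release, current_pre, bump, label):
    """Compute the next pre-release version on develop/release branches.

    last_release: last published final version, e.g. "1.3.2"
    current_pre:  the pre-release currently on the branch, or None
    bump:         "major" | "minor" | "patch" (from the commit analysis)
    label:        "SNAPSHOT" on develop, "rc" on release/#.#.# branches
    """
    major, minor, patch = map(int, last_release.split("."))
    if bump == "major":
        target = f"{major + 1}.0.0"
    elif bump == "minor":
        target = f"{major}.{minor + 1}.0"
    else:
        target = f"{major}.{minor}.{patch + 1}"
    if current_pre and current_pre.startswith(f"{target}-{label}."):
        # Same target version as before: only the pre-release counter moves.
        n = int(current_pre.rsplit(".", 1)[1])
        return f"{target}-{label}.{n + 1}"
    return f"{target}-{label}.1"

print(next_prerelease("1.3.2", "1.4.0-rc.1", "minor", "rc"))  # 1.4.0-rc.2
```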
How do we manage its configuration

Although the system is designed to work with nodejs projects, it can be used with multiple programming languages and project types.

For nodejs projects the usual place to put the configuration is the project’s package.json, but I prefer to use the .releaserc file instead.

As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

Container used to run it

As we run the command on a gitlab-ci job we use the image built from the following Dockerfile:

Dockerfile:

# Semantic release image
FROM golang:alpine AS tmpl-builder
#RUN go install github.com/krakozaure/tmpl@v0.4.0
RUN go install github.com/sto/tmpl@v0.4.0-sto.2

FROM node:lts-alpine
COPY --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl
RUN apk update &&\
    apk upgrade &&\
    apk add curl git jq openssh-keygen yq zip &&\
    npm install --location=global\
      conventional-changelog-conventionalcommits@6.1.0\
      @qiwi/multi-semantic-release@7.0.0\
      semantic-release@21.0.7\
      @semantic-release/changelog@6.0.3\
      semantic-release-export-data@1.0.1\
      @semantic-release/git@10.0.1\
      @semantic-release/gitlab@9.5.1\
      @semantic-release/release-notes-generator@11.0.4\
      semantic-release-replace-plugin@1.2.7\
      semver@7.5.4\
    &&\
    rm -rf /var/cache/apk/*
CMD ["/bin/sh"]

Note:

The versions of some of the components are not the latest ones, I try to review them from time to time but you know the saying: if it ain’t broken, don’t fix it.

Note:

The image includes some tools and modules I use on other projects, like qiwi/multi-semantic-release, a fork of semantic-release that supports publishing multiple packages from a single repository (a monorepo).

I’ll probably write about how I use it on a future post.

How and when is it executed

The job that runs semantic-release is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don’t want to create a new version that does not compile or that fails the unit tests).

The job is something like the following:

semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+.\d+.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release

Where the SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above and the sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:

  • The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
  • The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with an SSH key.
The sr_gen_releaserc_json function

The code for the sr_gen_releaserc_json function is the following:

sr_gen_releaserc_json() {
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" | sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}

Almost all the variables used in the function are defined by GitLab, except ASSETS_DIR and PROJECT_TYPE; in the complete pipelines, ASSETS_DIR is defined in a common file included by all the pipelines, and the project type is defined in the .gitlab-ci.yml file of each project.
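
Note the ${PROJECT_TYPE:-nodejs} expansion at the top of the function: it falls back to nodejs only when PROJECT_TYPE is unset or empty. A minimal illustration:

```shell
# ${VAR:-default} uses the default when VAR is unset or empty.
unset PROJECT_TYPE
project_type="${PROJECT_TYPE:-nodejs}"
echo "$project_type"   # -> nodejs

PROJECT_TYPE="python"
project_type="${PROJECT_TYPE:-nodejs}"
echo "$project_type"   # -> python
```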

If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:

{
  "plugins": [
{{- if .sr_local_plugin }}
    "{{ .sr_local_plugin }}",
{{- end }}
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
{{- if .replacements }}
    [
      "semantic-release-replace-plugin",
      { "replacements": {{ .replacements | toJson }} }
    ],
{{- end }}
    "@semantic-release/release-notes-generator",
{{- if eq .branch "main" }}
    [
      "@semantic-release/changelog",
      { "changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog" }
    ],
{{- end }}
    [
      "@semantic-release/git",
      {
        "assets": {{ if .assets }}{{ .assets | toJson }}{{ else }}[]{{ end }},
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "https://gitlab.com/", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
{{- if .rc_branch }}
    { "name": "{{ .rc_branch }}", "prerelease": "rc" },
{{- end }}
    "main"
  ]
}

The values used to process the template are defined in a file built on the fly (releaserc_values.yaml) that includes the following keys and values:

  • branch: the name of the current branch
  • sr_local_plugin: the path to the local plugin we use (shown later)
  • rc_branch: the name of the current rc branch. We only set this value when we are actually processing one, because semantic-release allows only one branch to match the rc prefix; if we used a wildcard (i.e. release/*) and users kept more than one release/#.#.# branch open at the same time, the calls to semantic-release would fail.
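
The rc-branch detection in sr_gen_releaserc_json relies on a sed match against the release/#.#.# pattern; isolated, it behaves like this (the helper name is ours, for illustration):

```shell
# Same regex and sed invocation as in sr_gen_releaserc_json (GNU sed).
rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'

is_rc_branch() {
  # Prints the branch name when it matches release/#.#.#, nothing otherwise
  echo "$1" | sed -ne "/$rc_branch_regex/{p}"
}

is_rc_branch "release/1.2.3"   # -> release/1.2.3
is_rc_branch "release/1.2"     # -> (nothing, patch component missing)
is_rc_branch "develop"         # -> (nothing)
```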

The template also uses a values_${project_type}_project.yaml file that includes settings specific to the project type; the one for nodejs is as follows:

replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"${nextRelease.version}\""
assets:
  - "CHANGELOG.md"
  - "package.json"

The replacements section is used to update the version field in the relevant files of the project (in our case the package.json file), and the assets section lists the files that will be committed to the repository when the release is published. Looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because updating the file on other branches creates a merge nightmare, and we are only interested in it for released versions anyway.
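
As an illustration of what the replacement amounts to, here is a shell sketch with a made-up package.json; the actual substitution is performed by semantic-release-replace-plugin, not by sed:

```shell
# Sketch only: mimic the from/to pair of values_nodejs_project.yaml with
# GNU sed on a sample package.json (the plugin does this internally).
next_version="1.4.0"
cat >/tmp/package.json <<'EOF'
{
  "name": "example-project",
  "version": "1.3.2"
}
EOF
sed -i -e "s/\"version\": \".*\"/\"version\": \"${next_version}\"/" \
  /tmp/package.json
grep '"version"' /tmp/package.json   # prints the line with version 1.4.0
```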

The local plugin renames the package.json_disabled file back to package.json if present, and it prints the last and next versions in the logs with a format that can be easily parsed using sed.
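
For example, given a hypothetical captured log, the version markers could be pulled out like this:

```shell
# Hypothetical log excerpt: the timestamps and prefixes are made up;
# only the LAST_VERSION= / NEXT_VERSION= markers come from the plugin.
cat >/tmp/sr.log <<'EOF'
[10:00:01] [semantic-release] > analyzeCommits: LAST_VERSION=1.3.2
[10:00:02] [semantic-release] > verifyRelease: NEXT_VERSION=1.4.0
EOF
last_version="$(sed -ne 's/.*LAST_VERSION=//p' /tmp/sr.log)"
next_version="$(sed -ne 's/.*NEXT_VERSION=//p' /tmp/sr.log)"
echo "last=${last_version} next=${next_version}"   # -> last=1.3.2 next=1.4.0
```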

local-plugin.cjs

// Minimal plugin to:
// - rename the package.json_disabled file to package.json if present
// - log the semantic-release last & next versions
function verifyConditions(pluginConfig, context) {
  var fs = require('fs');
  if (fs.existsSync('package.json_disabled')) {
    fs.renameSync('package.json_disabled', 'package.json');
    context.logger.log(`verifyConditions: renamed 'package.json_disabled' to 'package.json'`);
  }
}

function analyzeCommits(pluginConfig, context) {
  if (context.lastRelease && context.lastRelease.version) {
    context.logger.log(`analyzeCommits: LAST_VERSION=${context.lastRelease.version}`);
  }
}

function verifyRelease(pluginConfig, context) {
  if (context.nextRelease && context.nextRelease.version) {
    context.logger.log(`verifyRelease: NEXT_VERSION=${context.nextRelease.version}`);
  }
}

module.exports = { verifyConditions, analyzeCommits, verifyRelease }

The git_push_setup function

The code for the git_push_setup function is the following:

git_push_setup() {
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}

The function assumes that the GITLAB_REPOSITORY_TOKEN variable (set in the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects where we are going to use this function.

The SR_USER_NAME and SR_USER_EMAIL variables can be defined in a common file or in the CI/CD variables section of the project or group we want to work with. The script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD default value of type variable (that is why the keyfile is created on the fly), and git is configured to use it if the variable is not empty.

Warning:

Keep in mind that the variables GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that, you have to make the develop, main and release/* branches protected too).

Warning:

The semantic-release user has to be able to push to all the projects on those protected branches. It is a good idea to create a dedicated user and add it as a MAINTAINER on the projects we want (MAINTAINERS need to be able to push to the branches), or, if you are using GitLab with a Premium license, you can use the API to allow the semantic-release user to push to the protected branches without allowing it for any other user.

The semantic-release command

Once we have the .releaserc.json file and the git configuration ready, we run the semantic-release command.

If the branch we are working with has one or more commits that will increment the version, the tool does the following (note that the steps described are the ones executed with the configuration we have generated):

  1. It detects the commits that will increment the version and calculates the next version number.
  2. Generates the release notes for the version.
  3. Applies the replacements defined on the configuration (in our example updates the version field on the package.json file).
  4. Updates the CHANGELOG.md file adding the release notes if we are going to publish the file (when we are on the main branch).
  5. Creates a commit if all or some of the files listed in the assets key have changed, using the commit message we have defined and replacing the variables with their current values.
  6. Creates a tag with the new version number and the release notes.
  7. As we are using the gitlab plugin, after tagging it also creates a release on the project with the tag name and the release notes.
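
The release-rule part of step 1 can be approximated with a toy shell function (this is a deliberately simplified stand-in for the commit-analyzer plugin; it only looks at the subject prefix and ignores BREAKING CHANGE footers):

```shell
# Toy mapping of conventional-commit subjects to bump levels, mirroring
# the releaseRules in releaserc.json.tmpl (feat -> minor, fix/perf/revert
# -> patch, "!" after the type -> major). Not the real analyzer.
bump_for_subject() {
  case "$1" in
    *"!:"*) echo "major" ;;
    feat:*|"feat("*) echo "minor" ;;
    fix:*|"fix("*|perf:*|revert:*) echo "patch" ;;
    *) echo "none" ;;
  esac
}

bump_for_subject "feat: add login form"     # -> minor
bump_for_subject "fix(ui): align buttons"   # -> patch
bump_for_subject "feat!: new config format" # -> major
bump_for_subject "docs: update readme"      # -> none
```
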
Notes about the git workflows and merges between branches

It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish; that has two important implications:

  1. On pre-release branches we need to have the commit that includes the tag with the released version; if we don't have it, the next version is not calculated correctly.
  2. It is a bad idea to squash commits when merging one branch into another; if we do that, we lose the information semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, …​) we miss all the messages that would otherwise go into the CHANGELOG.md file.

To make sure that we have the right commits on the pre-release branches, we should merge the main branch changes into the develop branch after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it into develop.

The important thing about that MR is that it must not be squashed; if we do that, the tag commit will probably be lost, so we need to be careful.

To merge the changes directly we can run the following code:

# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push

If we can’t push directly to develop, we can create an MR by pushing the followup branch after committing the changes, but we have to make sure that we don’t squash the commits when merging, or it will not work as we want.

Note:

We haven’t discussed the release/#.#.# branches because our assumption is that they don’t exist after a release is published (the branch is deleted after merging it into main); new release candidate branches will be created from develop, and the commits included with the followup will already be present.

Categories: FLOSS Project Planets

TechBeamers Python: A Beginner’s Guide to Linked Lists in Python

Planet Python - Mon, 2023-12-25 15:23

Linked lists are like a series of connected boxes, where each box holds a piece of information and points to the next box. They’re a cool way to organize data in Python. Let’s check out how to use linked lists in Python in a simple and friendly way. Understanding Linked Lists What is a Linked […]

The post A Beginner’s Guide to Linked Lists in Python appeared first on TechBeamers.


John Goerzen: The Grumpy Cricket (And Other Enormous Creatures)

Planet Debian - Mon, 2023-12-25 15:23

This Christmas, one of my gifts to my kids was a text adventure (interactive fiction) game for them. Now that they’ve enjoyed it, I’m releasing it under the GPL v3.

As interactive fiction, it’s like an e-book, but the reader is also the player, guiding the exploration of the world.

The Grumpy Cricket is designed to be friendly for a first-time player of interactive fiction. There is no way to lose the game or to die. There is an in-game hint system providing context-sensitive hints anytime the reader types HINT. There are splashes of humor throughout that got all three of my kids laughing.

I wrote it in 2023 for my kids, which range in age from 6 to 17. That’s quite a wide range, but they all were enjoying it.

You can download it, get the source, or play it online in a web browser at https://www.complete.org/the-grumpy-cricket/


Talking Drupal: Talking Drupal #430 - Drupal in 2024

Planet Drupal - Mon, 2023-12-25 14:00

Today we are talking about Drupal in 2024, What we are looking forward to with Drupal 11, and the Drupal Advent Calendar with James Shields. We’ll also cover Drupal 10.2 as our module of the week.

For show notes visit: www.talkingDrupal.com/430

Topics
  • Advent calendar
  • Selection process
  • Popularity
  • Next year
  • Drupal features in 2024
  • Drupal 11
    • Project browser
    • Recipes / Starter templates
    • Automated updates
    • Gitlab
    • Smaller core
  • Predictions
Resources

Guests

James Shields - lostcarpark.com lostcarpark

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Martin Anderson-Clutz - mandclu
Ron Northcutt - community.appsmith.com rlnorthcutt

MOTW Correspondent

Martin Anderson-Clutz - mandclu

Drupal 10.2

  • Improvements include
    • Technology Updates
      • PHP 8.3
      • Includes capabilities that previously required contrib projects
      • File name sanitization
      • A search filter on the permissions page
    • End Users
      • Performance enhancements and improved caching APIs
      • Support for PHP Fibers to accelerate handling things like asynchronous remote calls
    • Content Creators
      • Revision UI for media
      • Wider editing area in Claro on large screens
      • The return of “Show blocks” in CKEditor 5, missing until now
    • Site Builders
      • Field creation UI has a new, more visual interface, and an updated workflow
      • Block visibility can now be based on the HTTP response status, for example to make it visible or invisible on 404 or 403 responses
      • Tour module is no longer enabled by default for the Standard and Umami profiles
      • New “negated regular expression” operator for views filters (string/integer), to exclude results matching a provided pattern
    • Site Owners
      • Announcements Feed is now stable and included in the Standard profile
      • The functionality in the experimental Help Topics module has been merged into the main Help module, so the Help Topics module is now deprecated
      • New permission: Use help pages
    • Developers
      • A fairly sizable change is a move to use native PHP attributes instead of doctrine annotations to declare metadata for plugin classes. Work is already underway to get core code converted, and an issue has been opened to have rector do this conversion for contrib projects
      • A new DeprecationHelper::backwardsCompatibleCall() method to help write Drupal extensions that support multiple versions of core
      • A PerformanceTestBase is now in core, to support automated testing of performance metrics
      • A new #config_target property in ConfigFormBase to simplify creating configuration forms
      • Symfony mailer is now a composer dependency of core
      • New decimal primitive data type
      • Expanded configuration validation, Symfony autowiring support, HTML5 output from the HTML utility class is now default, and more
      • In addition to these and the features highlighted in the official announcement, there are three pages of change records for the 10.2.0 release, and we’ll include a link to those in the show notes

TechBeamers Python: Python vs C++ – Is Python or C++ Better?

Planet Python - Mon, 2023-12-25 12:48

Python and C++ are both powerful programming languages, but they cater to different needs and come with distinct features. Deciding which language is better depends on various factors, including the nature of the project, performance requirements, ease of development, and personal preferences. In this tutorial, we will explore different aspects of Python and C++ to […]

The post Python vs C++ – Is Python or C++ Better? appeared first on TechBeamers.


TechBeamers Python: 10 Python Beginner Projects for Ultimate Practice

Planet Python - Mon, 2023-12-25 10:59

In this tutorial, we’ll explore a variety of Python beginner projects that cover different aspects of programming. As you know programming is best learned by doing, and Python is an excellent language for beginners due to its simplicity and readability. Each project here will give you new concepts and build on the skills you’ve acquired. […]

The post 10 Python Beginner Projects for Ultimate Practice appeared first on TechBeamers.


TechBeamers Python: Beginner’s Guide to Datetime Format in Python

Planet Python - Mon, 2023-12-25 07:02

Handling dates and times is a common task in programming, and Python’s datetime module provides a robust framework for working with these values. In this tutorial, we’ll explore how to use the datetime format in Python, covering the basics, custom formatting, localization, and parsing datetime strings. 1. Introduction to datetime Module Before diving into formatting, […]

The post Beginner’s Guide to Datetime Format in Python appeared first on TechBeamers.


PyPy: PyPy v7.3.14 release

Planet Python - Sun, 2023-12-24 23:22
PyPy v7.3.14: release of python 2.7, 3.9, and 3.10

The PyPy team is proud to release version 7.3.14 of PyPy.

Highlights of this release are compatibility with HPy-0.9, cffi 1.16, additional C-API interfaces, and more python3.10 fixes.

The release includes three different interpreters:

  • PyPy2.7, which is an interpreter supporting the syntax and the features of Python 2.7 including the stdlib for CPython 2.7.18+ (the + is for backported security updates)

  • PyPy3.9, which is an interpreter supporting the syntax and the features of Python 3.9, including the stdlib for CPython 3.9.18.

  • PyPy3.10, which is an interpreter supporting the syntax and the features of Python 3.10, including the stdlib for CPython 3.10.13.

The interpreters are based on much the same codebase, thus the multiple releases. This is a micro release; all APIs are compatible with the other 7.3 releases. It follows the 7.3.13 release on Sept 29, 2023.

We recommend updating. You can find links to download the v7.3.14 releases here:

https://pypy.org/download.html

We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work. If PyPy is helping you out, we would love to hear about it and encourage submissions to our blog via a pull request to https://github.com/pypy/pypy.org

We would also like to thank our contributors and encourage new people to join the project. Since the last release we have contributions from three new contributors. PyPy has many layers and we need help with all of them: bug fixes, PyPy and RPython documentation improvements, or general help with making RPython's JIT even better.

If you are a python library maintainer and use C-extensions, please consider making a HPy / CFFI / cppyy version of your library that would be performant on PyPy. In any case, both cibuildwheel and the multibuild system support building wheels for PyPy.

What is PyPy?

PyPy is a Python interpreter, a drop-in replacement for CPython. It's fast (see the PyPy and CPython 3.7.4 performance comparison) due to its integrated tracing JIT compiler.

We also welcome developers of other dynamic languages to see what RPython can do for them.

We provide binary builds for:

  • x86 machines on most common operating systems (Linux 32/64 bits, Mac OS 64 bits, Windows 64 bits)

  • 64-bit ARM machines running Linux (aarch64).

  • Apple M1 arm64 machines (macos_arm64).

  • s390x running Linux

PyPy supports Windows 32-bit, Linux PPC64 big- and little-endian, and Linux ARM 32 bit, but does not release binaries. Please reach out to us if you wish to sponsor binary releases for those platforms. Downstream packagers provide binary builds for debian, Fedora, conda, OpenBSD, FreeBSD, Gentoo, and more.

What else is new?

For more information about the 7.3.14 release, see the full changelog.

Please update, and continue to help us make pypy better.

Cheers, The PyPy Team

