Feeds

From GitLab to Microsoft Store

Planet KDE - Wed, 2023-12-20 03:26
KDE Project:

This is an update on the ongoing migration of jobs from Binary Factory to KDE's GitLab. Since the last blog post, a lot has happened.

A first update of Itinerary was submitted to Google Play directly from our GitLab.

Ben Cooksley has added a service for publishing our websites. Most websites are now built and published on our GitLab with only 5 websites remaining on Binary Factory.

Julius Künzel has added a service for signing macOS apps and DMGs. This allows us to build signed installers for macOS on GitLab.

The service for signing and publishing Flatpaks has gone live. Nightly Flatpaks built on our GitLab are now available at https://cdn.kde.org/flatpak/. For easy installation, builds created since yesterday include .flatpakref and .flatpakrepo files.

Last, but not least, similar to the full CI/CD pipeline for Android we now also have a full CI/CD pipeline for Windows. For Qt 5 builds this pipeline consists of the following GitLab jobs:

  • windows_qt515 - Builds the project with MSVC and runs the automatic tests.
  • craft_windows_qt515_x86_64 - Builds the project with MSVC and creates various installation packages including (if enabled for the project) a *-sideload.appx file and a *.appxupload file.
  • sign_appx_qt515 - Signs the *-sideload.appx file with KDE's signing certificate. The signed app package can be downloaded and installed without using the Microsoft Store.
  • microsoftstore_qt515 - Submits the *.appxupload package to the Microsoft Store for subsequent publication. This job doesn't run automatically.

Notes:

  • The craft_windows_qt515_x86_64 job also creates .exe installers. Those installers are not yet signed on GitLab, i.e. Windows should warn you when you try to install them. For the time being, you can download signed .exe installers from Binary Factory.
  • There are also jobs for building with MinGW, but MinGW builds cannot be used for creating app packages for the Microsoft Store. (It's still possible to publish apps with MinGW installers in the Microsoft Store, but that's a different story.)

The workflow for publishing an update of an app in the Microsoft Store as I envision it is as follows:

  1. You download the signed sideload app package, install it on a Windows (virtual) machine (after uninstalling a previously installed version) and perform a quick test to ensure that the app isn't completely broken.
  2. Then you trigger the microsoftstore_qt515 job to submit the app to the Microsoft Store. This creates a new draft submission in the Microsoft Partner Center. The app is not published automatically. To actually publish the submission you have to log into the Microsoft Partner Center and commit the submission.

Enabling the Windows CD Pipeline for Your Project
If you want to start building Windows app packages (APPX) for your project then add the craft-windows-x86-64.yml template for Qt 5 or the craft-windows-x86-64-qt6.yml template for Qt 6 to the .gitlab-ci.yml of your project. Additionally, you have to add a .craft.ini file with the following content to the root of your project to enable the creation of the Windows app packages.

[BlueprintSettings]
kde/applications/myapp.packageAppx = True

kde/applications/myapp must match the path of your project's Craft blueprint.

When you have successfully built the first Windows app packages then add the craft-windows-appx-qt5.yml or the craft-windows-appx-qt6.yml template to your .gitlab-ci.yml to get the sign_appx_qt* job and the microsoftstore_qt* job.

To enable signing, your project (more precisely, a branch of your project) needs to be cleared for using the signing service. This is done by adding your project to the project settings of the appxsigner. Similarly, to enable submission to the Microsoft Store, your project needs to be cleared by adding it to the project settings of the microsoftstorepublisher. If you have carefully curated metadata in the store entry of your app that shouldn't be overwritten by data from your app's AppStream data, then have a look at the keep setting for your project. I recommend using keep sparingly, if at all, because at least for text content you will deprive people using the store of all the translations added by our great translation teams to your app's AppStream data.

Note that the first submission to the Microsoft Store has to be done manually.

Categories: FLOSS Project Planets

Python Bytes: #365 Inheritance, but not Inheritance!

Planet Python - Wed, 2023-12-20 03:00
Topics covered in this episode:

  • Hatch v1.8 - https://hatch.pypa.io/latest/blog/2023/12/11/hatch-v180/
  • svcs: A Flexible Service Locator for Python - https://svcs.hynek.me/en/stable/
  • Steering Council 2024 Term Election Results - https://discuss.python.org/t/steering-council-election-results-2024-term/40851
  • Python protocols. When to use them in your projects to abstract and decoupling - https://typethepipe.com/post/python-protocols-when-to-use
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=x2X0GVX9624

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training (https://training.talkpython.fm/)
  • The Complete pytest Course (https://courses.pythontest.com/p/the-complete-pytest-course)
  • Patreon Supporters (https://www.patreon.com/pythonbytes)

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Michael #1: Hatch v1.8

  • Hatch now manages installing Python for you.
  • Hatch can build .app and .exe stand-alone binaries for you.
  • The macOS ones are signed (signed!).
  • Discussion at https://discuss.python.org/t/announcement-hatch-v1-8-0/40865

Brian #2: svcs: A Flexible Service Locator for Python

  • Hynek
  • A library to help structure and test Python web applications.
  • "svcs (pronounced services) is a dependency container for Python. It gives you a central place to register factories for types/interfaces and then imperatively acquire instances of those types with automatic cleanup and health checks."
  • Benefits:
    • Eliminates tons of repetitive boilerplate code,
    • unifies acquisition and cleanups of services,
    • provides full static type safety for them,
    • simplifies testing through loose coupling,
    • improves live introspection and monitoring with health checks.
  • Hynek has started a YouTube channel, and is starting with an explanation of svcs: https://www.youtube.com/watch?v=d1elMD9WgpA
  • Yes, Hynek, we want more videos. I like that it's not a beginner level.
  • My request for future videos: just past beginner, and also intermediate level.
  • There are plenty of basics videos out there, not as many filling the gaps between beginner and production.

Michael #3: Steering Council 2024 Term Election Results

  • The 2024 Term Python Steering Council is: Pablo Galindo Salgado, Gregory P. Smith, Emily Morehouse, Barry Warsaw, Thomas Wouters.
  • Full results are available in PEP 8105 (https://peps.python.org/pep-8105/#results).
  • How do you become a candidate? Candidates must be nominated by a core team member. If the candidate is a core team member, they may nominate themselves.

Brian #4: Python protocols. When to use them in your projects to abstract and decoupling

  • Carlos Vecina
  • "Protocols are an alternative (or a complement) to inheritance, abstract classes and Mixins."
  • Understanding interactions between ABC, MixIns and Protocols in Python
  • With examples (a minimal Protocol sketch also follows at the end of these notes)

Extras

Brian:

  • Donations. It's a decent time of the year to donate to projects that help you:
    • Python Software Foundation (https://www.python.org/psf/donations/2023-q4-drive/)
    • Django Software Foundation (https://www.djangoproject.com/fundraising/)
    • Python Bytes (https://www.patreon.com/pythonbytes)
    • Also, look for "Sponsor this project" links in GitHub for projects you depend on.

Michael:

  • Mastodon guidelines (mine):
    • If you have a picture and description, I'll probably follow you back.
    • If you have posts that seem relevant, +1.
    • If you have a verified webpage, +1.
    • If your account is private, I won't. I don't really understand that choice, since private group messages already exist and the profile itself is public.
  • Speaking of Mastodon: I had a productive conversation with the PSF and others around masks and conferences.
  • Dropbox spooks users by sending data to OpenAI for AI search features (https://arstechnica.com/information-technology/2023/12/dropbox-spooks-users-by-sending-data-to-openai-for-ai-search-features/)
    • There was a comment in the above article to the effect of "Once you give your data to a third party (even trusted like Dropbox), you no longer control that data." That sent me searching and thinking…
    • sync.com? Proton Drive? Nextcloud? filen.io? icedrive.net?
    • ownCloud's recent CVE makes me a bit nervous of self-hosted options.
    • Either way, Cryptomator (https://cryptomator.org) is very interesting.
  • Beyond privacy, this got me thinking: just how many hours of dev time have been diverted to add mediocre-at-best AI features to everything?
  • I'm doing a big digital decluttering and have lots to say on that soon.
  • Not submitting my talks to PyCascades this year.
  • But I did submit 3 talks to PyCon US. 🤞
  • I will be giving the keynote at PyCon Philippines (https://pycon-2024.python.ph).

Joke: The dream is dead? https://mastodon.social/@tveskov/111289358585305218
Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar day 20 - Event Organizers Working Group (EOWG)

Planet Drupal - Wed, 2023-12-20 02:00

Once again, welcome back to the Drupal Advent Calendar. It’s day 20, and time to open our next door. Today Leslie Glynn (leslieg) joins us to talk about the Event Organizers Working Group (EOWG).

I bet a lot of you learned about Drupal and the awesome Drupal Community at a local Drupal event. I went to the Western Massachusetts Drupal Camp back in 2011 after I was assigned a Drupal website to support at work and had no idea what Drupal even was. I attended many great sessions, met a lot of folks in the local Drupal community and even purchased the “Definitive Guide to Drupal 7” (remember all…

Tags
Categories: FLOSS Project Planets

Seth Michael Larson: Security Developer-in-Residence Weekly Report #22

Planet Python - Tue, 2023-12-19 19:00

Published 2023-12-20 by Seth Larson

This critical role would not be possible without funding from the OpenSSF Alpha-Omega project. Massive thank-you to Alpha-Omega for investing in the security of the Python ecosystem!

This week was all about working on Software Bill-of-Materials tooling and documentation for CPython. I published a new resource to the CPython core developer guide. This documents Software Bill-of-Materials and all of the tooling and processes for adding, updating, and removing dependencies. I'll continue to add to this document as more is developed in this project.

During an upgrade to CPython's ensurepip module, the bundled pip wheel was upgraded to version 23.3.2. However, during the upgrade there was some confusion about what to do with an SBOM CI failure, because the Developer Guide documentation wasn't live yet. This resulted in the SBOM becoming out-of-date.

I fixed the SBOM ahead of the 3.13.0a3 release and automated the pip SBOM metadata discovery, since pip is part of a packaging ecosystem, which isn't the case for most of CPython's dependencies in the source tree.

Next steps for the SBOM infrastructure for CPython include adding Windows dependencies into the SBOMs released for the Windows installers and doing discovery work on macOS installers.

Other items
  • The OpenSSF published its annual report for 2023, which contained a bunch of highlights from the Python ecosystem and Alpha-Omega's engagement with the Python Software Foundation (including all the work I've done this year!). Give it a read if you're interested in a one-stop shop for big things happening in the open source ecosystem.
  • Switched to using make regen-configure for the CPython release process now that the Makefile target is available for all currently supported CPython release streams.
  • Reviewed the secret scanning payload proposed by GitGuardian. This payload would allow PyPI to alert users when secrets are uploaded with donated secret scanning expertise from GitGuardian.

That's all for this week! 👋 If you're interested in more you can read last week's report.

Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.

This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

KDE's 6th Megarelease - Beta 2

Planet KDE - Tue, 2023-12-19 19:00
Plasma 6, Frameworks and Gear draw closer

Every few years we port the key components of our software to a new version of Qt, taking the opportunity to remove cruft and leverage the updated features the most recent version of Qt has to offer us.

We are now just over two months away from KDE's megarelease. At the end of February 2024 we will publish Plasma 6, Frameworks 6 and a whole new set of applications in a special edition of KDE Gear all in one go.

If you have been following the updates here and here, you will know we are deep in the testing phase, and today KDE is making available the second Beta version of all the software we will include in the megarelease.

As with the Alpha and the first Beta, this is a preview intended for developers and testers. The software in this second beta release is reaching stability fast, but it is still not 100% safe to use in a production environment. We still recommend you continue using stable versions of Plasma, Frameworks and apps for your everyday work. But if you do use this, watch out for bugs and report them promptly, so we can solve them.

Read on to find out more about KDE's 6th Megarelease, what it covers, and how you can help make the new versions of Plasma, KDE's apps and Frameworks a success now.

Plasma

Plasma is KDE's flagship desktop environment. Plasma is like Windows or macOS, but is renowned for being flexible, powerful, lightweight and configurable. It can be used at home, at work, for schools and research.

Plasma 6 is the upcoming version of Plasma that integrates the latest version of Qt, Qt 6, the framework upon which Plasma is built.

Plasma 6 incorporates new technologies from Qt and other constantly evolving tools, providing new features, better support for the latest hardware, and support for the hardware and software technologies to come.

You can be part of the new Plasma. Download and install a Plasma 6-powered distribution (like Neon Unstable) to a test machine and start trying all its features. Check the Contributing section below to find out how you can deliver reports of what you find to the developers.

KDE Gear

KDE Gear is a collection of applications produced by the KDE community. Gear includes file explorers, music and video players, text and video editors, apps to manage social media and chats, email and calendaring applications, travel assistants, and much more.

Developers of these apps also rely on the Qt toolbox, so most of the software will also be adapted to use the new Qt6 toolset and we need you to help us test them too.

Frameworks

KDE's Frameworks add tools created by the KDE community on top of those provided by the Qt toolbox. These tools give developers more and easier ways of developing interfaces and functionality that work on more platforms.

Among many other things, KDE Frameworks provide

  • widgets (buttons, text boxes, etc.) that make building your apps easier and their looks more consistent across platforms, including Windows, Linux, Android and macOS
  • libraries that facilitate storing and retrieving configuration settings
  • icon sets, and technologies that make it easier to integrate the translation workflow into applications

KDE's Frameworks also rely heavily on Qt and will also be upgraded to adapt them to the new version 6. This change will add more features and tools, enable your applications to work on more devices, and give them a longer shelf life.

Contributing

KDE relies on volunteers to create, test and maintain its software. You can help too by...

  • Reporting bugs -- When you come across a bug while testing the software included in this Beta Megarelease, you can report it so developers can work on it and remove it. When reporting a bug
    • make sure you understand when the bug is triggered so you can give developers a guide on how to check it for themselves
    • check you are using the latest version of the software you are testing, just in case the bug has been solved in the meantime
    • go to KDE's bug-tracker and search for your bug to make sure it does not get reported twice
    • if no-one has reported the bug yet, fill in the bug report, giving all the details you think are significant.
    • keep tabs on the report, just in case developers need more details.
  • Solving bugs -- Many bugs are easy to solve. Some just require changing a version number or tweaking the name of a library to its new name. If you have some basic programming knowledge of C++ and Qt, you too can help carry the weight of debugging KDE's software for the grand release in February.
  • Joining the development effort -- You may have a deeper knowledge of development and would like to contribute to KDE with your own solutions. This is the perfect moment to get involved in KDE and contribute with your own code.
  • Donating to KDE -- Creating, debugging and maintaining the large catalogue of software KDE distributes to users requires a lot of resources, many of which cost money. Donating to KDE helps keep the day-to-day operation of KDE running smoothly and allows developers to concentrate on creating great software. KDE is currently running a drive to encourage more people to become contributing supporters, but you can also give one-time donations if you want.
A note on pre-release software

Pre-release software is only suited for developers and testers. Alpha/Beta/RC software is unfinished, will be unstable and will contain bugs. It is published so volunteers can trial-run it, identify its problems, and report them so they can be solved before the publication of the final product.

The risks of running pre-release software are many. Apart from the hit to productivity produced by instability and the lack of features, using pre-release software can lead to data loss and, in extreme cases, damage to hardware. That said, the latter is highly unlikely in the case of KDE software.

The version of the software included in KDE's 6th Megarelease is beta software. We strongly recommend you do not use it as your daily driver.

If, despite the above, you want to try the software distributed in KDE's 6th Megarelease, you do so under your sole responsibility and in the understanding that your main aim, as a tester, is to help us by providing feedback and your know-how regarding the software. Please see the Contributing section above.

Categories: FLOSS Project Planets

Signed container images with buildah, podman and cosign via GitHub Actions

Planet KDE - Tue, 2023-12-19 18:00

All the Toolbx and Distrobox container images and the ones in my personal namespace on Quay.io are now signed using cosign.

How to set this up was not really well documented so this post is an attempt at that.

First we will look at how to set up a GitHub workflow using GitHub Actions to build multi-architecture container images with buildah and push them to a registry with podman. Then we will sign those images with cosign (sigstore) and detail what is needed to configure signature validation on the host. Finally we will detail the remaining work needed to be able to do the entire process only with podman.

Full example ready to go

If you just want to get going, you can copy the content of my github.com/travier/cosign-test repo and start building and pushing your containers. I recommend keeping only the cosign.yaml workflow for now (see below for the details).

“Minimal” GitHub workflow to build containers with buildah / podman

You can find those actions at github.com/redhat-actions.

Here is an example workflow with the Containerfile in the example subfolder:

name: "Build container using buildah/podman" env: NAME: "example" REGISTRY: "quay.io/example" on: # Trigger for pull requests to the main branch, only for relevant files pull_request: branches: - main paths: - 'example/**' - '.github/workflows/cosign.yml' # Trigger for push/merges to main branch, only for relevant files push: branches: - main paths: - 'example/**' - '.github/workflows/cosign.yml' # Trigger every Monday morning schedule: - cron: '0 0 * * MON' permissions: read-all # Prevent multiple workflow runs from racing to ensure that pushes are made # sequentialy for the main branch. Also cancel in progress workflow runs for # pull requests only. concurrency: group: ${{ github.workflow }}-${{ github.ref }} cancel-in-progress: ${{ github.event_name == 'pull_request' }} jobs: build-push-image: runs-on: ubuntu-latest steps: - name: Checkout repo uses: actions/checkout@v4 - name: Setup QEMU for multi-arch builds shell: bash run: | sudo apt install qemu-user-static - name: Build container image uses: redhat-actions/buildah-build@v2 with: # Only select the architectures that matter to you here archs: amd64, arm64, ppc64le, s390x context: ${{ env.NAME }} image: ${{ env.NAME }} tags: latest containerfiles: ${{ env.NAME }}/Containerfile layers: false oci: true - name: Push to Container Registry uses: redhat-actions/push-to-registry@v2 # The id is unused right now, will be used in the next steps id: push if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main' with: username: ${{ secrets.BOT_USERNAME }} password: ${{ secrets.BOT_SECRET }} image: ${{ env.NAME }} registry: ${{ env.REGISTRY }} tags: latest

This should let you test changes to the image via builds in pull requests and publish the changes only once they are merged.

You will have to setup the BOT_USERNAME and BOT_SECRET secrets in the repository configuration to push to the registry of your choice.

If you prefer to use the GitHub internal registry then you can use:

env:
  REGISTRY: ghcr.io/${{ github.repository_owner }}
...
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}

You will also need to set the job permissions to be able to write GitHub Packages (container registry):

permissions:
  contents: read
  packages: write

See the Publishing Docker images GitHub Docs.

You should also configure the GitHub Actions settings as follows:

  • In the “Actions permissions” section, you can restrict allowed actions to: “Allow <username>, and select non-<username>, actions and reusable workflows”, with “Allow actions created by GitHub” selected and the following additional actions: redhat-actions/*,
  • In the “Workflow permissions” section, you can select the “Read repository contents and packages permissions” and select the “Allow GitHub Actions to create and approve pull requests”.

  • Make sure to add all the required secrets in the “Secrets and variables”, “Actions”, “Repository secrets” section.
Signing container images

We will use cosign to sign container images. With cosign, you get two main options to sign your containers:

  • Keyless signing: Sign containers with ephemeral keys by authenticating with an OIDC (OpenID Connect) protocol supported by Sigstore.
  • Self managed keys: Generate a “classic” long-lived key pair.

We will choose the “self managed keys” option here as it is easier to set up for verification on the host in podman. I will likely make another post once I figure out how to set up keyless signature verification in podman.

Generate a key pair with:

$ cosign generate-key-pair

Enter an empty password as we will store this key in plain text as a repository secret (COSIGN_PRIVATE_KEY).

Then you can add the steps for signing with cosign at the end of your workflow:

# Include at the end of the workflow previously defined

      - name: Login to Container Registry
        uses: redhat-actions/podman-login@v1
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}

      - uses: sigstore/cosign-installer@v3.3.0
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'

      - name: Sign container image
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        run: |
          cosign sign -y --key env://COSIGN_PRIVATE_KEY ${{ env.REGISTRY }}/${{ env.NAME }}@${{ steps.push.outputs.digest }}
        env:
          COSIGN_EXPERIMENTAL: false
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}

We need to explicitly login to the container registry to get an auth token that will be used by cosign to push the signature to the registry.

This step sometimes fails, likely due to a race condition that I have not been able to figure out yet. Retrying failed jobs usually works.

You should then update the GitHub Actions settings to allow the new actions as follows:

redhat-actions/*,
sigstore/cosign-installer@*,

Configuring podman on the host to verify image signatures

First, we copy the public key to a designated place in /etc:

$ sudo mkdir /etc/pki/containers
$ curl -O "https://.../cosign.pub"
$ sudo cp cosign.pub /etc/pki/containers/
$ sudo restorecon -RFv /etc/pki/containers

Then we set up the registry config to tell it to use sigstore signatures:

$ cat /etc/containers/registries.d/quay.io-example.yaml
docker:
  quay.io/example:
    use-sigstore-attachments: true

$ sudo restorecon -RFv /etc/containers/registries.d/quay.io-example.yaml

And then we update the container signature verification policy to:

  • Default to reject everything
  • Then for the docker transport:
    • Verify signatures for containers coming from our repository
    • Accept all other containers from other registries

If you do not plan on using containers from other registries, you can even be stricter here and only allow your containers to be used.

/etc/containers/policy.json:

{ "default": [ { "type": "reject" } ], "transports": { "docker": { ... "quay.io/example": [ { "type": "sigstoreSigned", "keyPath": "/etc/pki/containers/quay.io-example.pub", "signedIdentity": { "type": "matchRepository" } } ], ... "": [ { "type": "insecureAcceptAnything" } ] }, ... } }

See the full man page for containers-policy.json(5).

You should now be good to go!

What about doing everything with podman?

Using this workflow, there is a (small) time window where the container images are pushed to the registry but not signed.

One option to avoid this problem would be to push the container to a “temporary” tag first, sign it, and then copy the signed container to the latest tag.

Another option is to use podman to push and sign the container image “at the same time”. However podman still needs to push the image first and then sign it so there is still a possibility that signing fails and that you’re left with an unsigned image (this happened to me during testing).

Unfortunately for us, the version of podman available in the version of Ubuntu used for the GitHub Runners (22.04) is too old to support signing containers. We thus need to use a newer podman from a container image to work around this.

Here is the same workflow, adapted to only use podman for signing:

name: "Build container using buildah, push and sign it using podman" env: NAME: "example" REGISTRY: "quay.io/example" REGISTRY_DOMAIN: "quay.io" on: pull_request: branches: - main paths: - 'example/**' - '.github/workflows/podman.yml' push: branches: - main paths: - 'example/**' - '.github/workflows/podman.yml' schedule: - cron: '0 0 * * MON' permissions: read-all # Prevent multiple workflow runs from racing to ensure that pushes are made # sequentialy for the main branch. Also cancel in progress workflow runs for # pull requests only. concurrency: group: ${{ github.workflow }}-${{ github.ref }} cancel-in-progress: ${{ github.event_name == 'pull_request' }} jobs: build-push-image: runs-on: ubuntu-latest container: image: quay.io/travier/podman-action options: --privileged -v /proc/:/host/proc/:ro steps: - name: Checkout repo uses: actions/checkout@v4 - name: Setup QEMU for multi-arch builds shell: bash run: | for f in /usr/lib/binfmt.d/*; do cat $f | sudo tee /host/proc/sys/fs/binfmt_misc/register; done ls /host/proc/sys/fs/binfmt_misc - name: Build container image uses: redhat-actions/buildah-build@v2 with: archs: amd64, arm64, ppc64le, s390x context: ${{ env.NAME }} image: ${{ env.NAME }} tags: latest containerfiles: ${{ env.NAME }}/Containerfile layers: false oci: true - name: Setup config to enable pushing Sigstore signatures if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main' shell: bash run: | echo -e "docker:\n ${{ env.REGISTRY_DOMAIN }}:\n use-sigstore-attachments: true" \ | sudo tee -a /etc/containers/registries.d/${{ env.REGISTRY_DOMAIN }}.yaml - name: Push to Container Registry # uses: redhat-actions/push-to-registry@v2 uses: travier/push-to-registry@sigstore-signing if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main' with: username: ${{ secrets.BOT_USERNAME }} password: ${{ secrets.BOT_SECRET }} image: ${{ env.NAME }}

This uses two additional workarounds for missing features:

  • There is no official container image that includes both podman and buildah right now, thus I made one: github.com/travier/podman-action
  • The redhat-actions/push-to-registry Action does not support signing yet (issue#89). I’ve implemented support for self-managed key signing in pull#90. I’ve not looked at keyless signing yet.

You will also have to allow running my actions in the repository settings. In the “Actions permissions” section, you should use the following actions:

redhat-actions/*,
travier/push-to-registry@*,

Conclusion

The next steps are to figure out all the missing bits for keyless signing and replicate this entire process in GitLab CI.

Categories: FLOSS Project Planets

FSF Events: Free Software Directory meeting on IRC: Friday, December 22, starting at 12:00 EST (17:00 UTC)

GNU Planet! - Tue, 2023-12-19 17:46
Join the FSF and friends on Friday, December 22, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
Categories: FLOSS Project Planets

Our work isn't over: Keep fighting for the freedom to learn

FSF Blogs - Tue, 2023-12-19 17:32
IDAD may be over, but the fight isn't. Read a summary of this year's activities, and learn how you can continue to take action to help end DRM.
Categories: FLOSS Project Planets

FSF Blogs: Our work isn't over: Keep fighting for the freedom to learn

GNU Planet! - Tue, 2023-12-19 17:32
IDAD may be over, but the fight isn't. Read a summary of this year's activities, and learn how you can continue to take action to help end DRM.
Categories: FLOSS Project Planets

remotecontrol @ Savannah: Tennessee Tech's research will impact hundreds of businesses

GNU Planet! - Tue, 2023-12-19 16:39

https://www.tntech.edu/news/releases/22-23/tech-tapped-to-lead-multi-state-consortium-on-electric-grid-modernization-backed-with-largest-grant-in-tech-history.php

https://www.tennessean.com/story/sponsor-story/tennessee-tech/2023/11/22/tennessee-techs-research-will-impact-hundreds-of-businesses/71658586007/

"This is expected to include seven rural electric utilities, one energy tech startup, 60 electrical engineering firms and 400 freelance software developers. The work will impact 191 counties across Tennessee, Ohio, Pennsylvania and West Virginia."

"Students will experiment in a real-time simulated environment so electric utilities can provide cost-effective testing and solutions prior to the implementation."

Categories: FLOSS Project Planets

James Bennett: Show Python deprecation warnings

Planet Python - Tue, 2023-12-19 14:48

This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.

Let this be a warning

Python provides the ability to issue a warning as a step below raising an exception; warnings are issued by calling the warnings.warn() function, which at minimum …
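As a minimal, self-contained sketch of the warnings.warn() API mentioned above (old_api() and new_api() are invented names, not code from the article):

import warnings

def new_api():
    return 42

def old_api():
    # Hypothetical deprecated function, used only for illustration.
    warnings.warn(
        "old_api() is deprecated; use new_api() instead",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not to this frame
    )
    return new_api()

if __name__ == "__main__":
    # Since Python 3.7, DeprecationWarning is shown by default only when it is
    # attributed to code in __main__; a filter like this one (or the -W flag,
    # e.g. python -W always::DeprecationWarning) surfaces it everywhere.
    warnings.simplefilter("always", DeprecationWarning)
    old_api()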

Read full entry

Categories: FLOSS Project Planets

Matthew Garrett: Making SSH host certificates more usable

Planet Debian - Tue, 2023-12-19 14:48
Earlier this year, after GitHub accidentally committed their private RSA SSH host key to a public repository, I wrote about how better support for SSH host certificates would allow this sort of situation to be handled in a user-transparent way without any negative impact on security. I was hoping that someone would read this and be inspired to fix the problem, but sadly that didn't happen, so I've actually written some code myself.

The core part of this is straightforward - if a server presents you with a certificate associated with a host key, then make the trust in that host be whoever signed the certificate rather than just trusting the host key. This means that if someone needs to replace the host key for any reason (such as, for example, them having published the private half), you can replace the host key with a new key and a new certificate, and as long as the new certificate is signed by the same key that the previous certificate was, you'll trust the new key and key rotation can be carried out without any user errors. Hurrah!

So obviously I wrote that bit and then thought about the failure modes and it turns out there's an obvious one - if an attacker obtained both the private key and the certificate, what stops them from continuing to use it? The certificate isn't a secret, so we basically have to assume that anyone who possesses the private key has access to it. We may have silently transitioned to a new host key on the legitimate servers, but a hostile actor able to MITM a user can keep on presenting the old key and the old certificate until it expires.

There are two ways to deal with this - either have short-lived certificates (ie, issue a new certificate every 24 hours or so even if you haven't changed the key, and specify that the certificate is invalid after those 24 hours), or have a mechanism to revoke the certificates. The former is viable if you have a very well-engineered certificate issuing operation, but still leaves a window for an attacker to make use of the certificate before it expires. The latter is something SSH has support for, but the spec doesn't define any mechanism for distributing revocation data.

So, I've implemented a new SSH protocol extension that allows a host to send a key revocation list to a client. The idea is that the client authenticates to the server, receives a key revocation list, and will no longer trust any certificates that are contained within that list. This seems simple enough, but a naive implementation opens the client to various DoS attacks. For instance, if you simply revoke any key contained within the received KRL, a hostile server could revoke any certificates that were otherwise trusted by the client. The easy way around this is for the client to ensure that any revoked keys are associated with the same CA that signed the host certificate - that way a compromised host can only revoke certificates associated with that CA, and can't interfere with anyone else.

Unfortunately that still means that a single compromised host can still trigger revocation of certificates inside that trust domain (ie, a compromised host a.test.com could push a KRL that invalidated the certificate for b.test.com), because there's no way in the KRL format to indicate that a given revocation is associated with a specific hostname. This means we need a mechanism to verify that the KRL update is legitimate, and the easiest way to handle that is to sign it. The KRL format specifies an in-band signature but this was deprecated earlier this year - instead KRLs are supposed to be signed with the sshsig format. But we control both the server and the client, which means it's easy enough to send a detached signature as part of the extension data.

Putting this all together: you ssh to a server you've never contacted before, and it presents you with a host certificate. Instead of the host key being added to known_hosts, the CA key associated with the certificate is added. From now on, if you ssh to that host and it presents a certificate signed by that CA, it'll be trusted. Optionally, the host can also send you a KRL and a signature. If the signature is generated by the CA key that you already trust, any certificates in that KRL associated with that CA key will be incorporated into local storage. The expected flow if a key is compromised is that the owner of the host generates a new keypair, obtains a new certificate for the new key, and adds the old certificate to a KRL that is signed with the CA key. The next time the user connects to that host, they receive the new key and new certificate, trust it because it's signed by the same CA key, and also receive a KRL signed with the same CA that revokes trust in the old certificate.

Obviously this breaks down if a user is MITMed with a compromised key and certificate immediately after the host is compromised - they'll see a legitimate certificate and won't receive any revocation list, so will trust the host. But this is the same failure mode that would occur in the absence of keys, where the attacker simply presents the compromised key to the client before trust in the new key has been created. This seems no worse than the status quo, but means that most users will seamlessly transition to a new key and revoke trust in the old key with no effort on their part.

The work in progress tree for this is here - at the point of writing I've merely implemented this and made sure it builds, not verified that it actually works or anything. Cleanup should happen over the next few days, and I'll propose this to upstream if it doesn't look like there's any showstopper design issues.

comments
Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #608 (Dec. 19, 2023)

Planet Python - Tue, 2023-12-19 14:30

#608 – DECEMBER 19, 2023
View in Browser »

Build a Hangman Game With Python and PySimpleGUI

In this step-by-step tutorial, you’ll learn how to write the game of hangman in Python with a PySimpleGUI-based interface. You’ll see how to structure the game, build its GUI, and program the game’s logic and rules.
REAL PYTHON

Why if TYPE_CHECKING?

“Typechecking is brittle yet important”, learn more about where it works and where it doesn’t and what that might mean for your code.
VICKI BOYKIS

Build Invincible Apps With Temporal’s Python SDK

Get an introduction to Temporal’s Python SDK by walking through our easy, free tutorials. Learn how to build Temporal applications using Python, including building a data pipeline Workflow and a subscription Workflow. Get started here →
TEMPORAL sponsor

Real-World match/case

The match statement was added in Python 3.10. This article covers a real-world example use case and shows its power.
NED BATCHELDER
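For anyone who hasn't tried structural pattern matching yet, a minimal sketch (the handle() function and the command strings are invented for illustration, not taken from Ned's article):

def handle(command: str) -> str:
    # match/case (Python 3.10+) destructures the split command.
    match command.split():
        case ["go", direction]:
            return f"moving {direction}"
        case ["look"]:
            return "you see a wall"
        case _:
            return "unknown command"

print(handle("go north"))  # moving north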

Call for Proposals: Pycascades-2024

PRETALX.COM

Python Memory-Safe Says Latest CISA Recommendations

SETH LARSON

Python 3.11.7 Released

CPYTHON DEV BLOG

Python Jobs

Senior Python Architect and Tech Lead (America)

Six Feet Up

Software Engineer - Intern (Summer 2024) (Dallas, TX, USA)

Causeway Capital Management

Python Tutorial Editor (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

Getting the First Match From a Python List or Iterable

In this video course, you’ll learn about the best ways to get the first match from a Python list or iterable. You’ll look into two different strategies, for loops and generators, and compare their performance. Then you’ll finish up by creating a reusable function for all your first matching needs.
REAL PYTHON course

Pytest Daemon: 10X Local Test Iteration Speed

Discord has a large Python monolith with lots of imports, which now takes 13 seconds to start up. On the server that’s not a problem but to run a test it is. Ruby’s solution is to have a daemon that hot loads a test on a process that already has the imports completed.
RUBY FEINSTEIN

5 Ways to Flatten a List of Lists

Flattening a list-of-lists is the process of taking each of the inner lists’ contents and putting them together in a single list. There are several ways of attacking this problem in Python, and this article shows 5 methods ranked from worst to best.
RODRIGO GIRÃO SERRÃO
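A quick illustration of the problem (the matrix data is invented; the nested comprehension shown here is just one common approach, and the article ranks several alternatives):

matrix = [[1, 2], [3, 4], [5]]
# A nested comprehension flattens exactly one level of nesting.
flat = [item for row in matrix for item in row]
print(flat)  # [1, 2, 3, 4, 5]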

Django: Sanitize Incoming HTML Fragments With nh3

Allowing users to input HTML in comments or blog posts is problematic, it can lead to exploits on your site. For years the Django community used django-bleach, but since its deprecation, Adam has been using nh3, a Rust-based HTML sanitizer.
ADAM JOHNSON

Hide Those Terminal Secrets!

Michael Kennedy of Talk Python fame does a lot of screen casts, and one of his concerns is accidentally exposing data from a terminal when doing so. This quick article covers the Warp terminal’s “redaction” feature for just this situation.
MICHAEL KENNEDY

How to Deploy Reflex Apps to Fly.io

Reflex is a recent entrant to the world of Python web frameworks. Fly.io is a hosting provider that lets you host your applications in production really quickly. This article shows you how to deploy Reflex applications on Fly.io.
SIDDHANT GOEL • Shared by Siddhant Goel

Serialize Your Data With Python

In this in-depth tutorial, you’ll explore the world of data serialization in Python. You’ll compare and use different data serialization formats, serialize Python objects and executable code, and handle HTTP message payloads.
REAL PYTHON

You Are Never Taught How to Build Quality Software

Learning how to build quality software is not part of computer science education. How do we learn it? See also the associated HN discussion.
FLORIAN BELLMANN

Use “pip install” Safely

The correct extra arguments to pip install can make it less risky to execute. Learn what arguments James suggests and how to use each of them.
JAMES BENNETT

Use Unittest’s subtest Helper

When iterating over multiple arguments in testing a function, the subtest feature of the unittest module helps you track which part failed.
JAMES BENNETT
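A minimal sketch of the subTest feature (the TestSquares case is invented, not from the article):

import unittest

class TestSquares(unittest.TestCase):
    def test_squares(self):
        for n, expected in [(2, 4), (3, 9), (4, 16)]:
            # Each iteration is reported separately, so a failure names the n that broke.
            with self.subTest(n=n):
                self.assertEqual(n * n, expected)

if __name__ == "__main__":
    unittest.main()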

Create a Subscription SaaS Application With Django and Stripe

All the technical details of creating a subscription SaaS business using the Python-based Django web framework and Stripe payment processor.
SAAS PEGASUS

The Key to the key Parameter in Python

A parameter named key is present in several Python functions, such as sorted(). This article explores what it is and how to use it.
STEPHEN GRUPPETTA • Shared by Stephen Gruppetta
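A tiny illustration of the key parameter (the words list is invented; the article goes deeper):

words = ["pear", "Banana", "apple"]
# key= receives each element and returns the value to sort by.
print(sorted(words))                 # ['Banana', 'apple', 'pear'] (default: by code point)
print(sorted(words, key=str.lower))  # ['apple', 'Banana', 'pear'] (case-insensitive)
print(sorted(words, key=len))        # ['pear', 'apple', 'Banana'] (by length)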

CPython Dynamic Dispatch Internals

Just how many lines of C does it take to execute a + b in Python? This article goes into detail about the CPython internals.
ABHINAV UPADHYAY

Projects & Code

python-skyfield: Elegant Astronomy for Python

GITHUB.COM/SKYFIELDERS

Above: Invisible Network Protocol Sniffer

GITHUB.COM/WEARECASTER

mlx: MLX: An Array Framework for Apple Silicon

GITHUB.COM/ML-EXPLORE

The Eval Game: Test Your Python Skills

OSKAERIK.GITHUB.IO • Shared by Oskar Eriksson

kanban-python: Kanban Terminal App Written in Python

GITHUB.COM/ZALOOG

Events

Weekly Real Python Office Hours Q&A (Virtual)

December 20, 2023
REALPYTHON.COM

PyData Bristol Meetup

December 21, 2023
MEETUP.COM

PyLadies Dublin

December 21, 2023
PYLADIES.COM

Python Sheffield

December 26, 2023
GOOGLE.COM

PyCon Somalia 2023

December 27 to December 29, 2023
PYCON.ORG.SO

Happy Pythoning!
This was PyCoder’s Weekly Issue #608.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

PyCharm: Join the Private Preview of Our New Tool for Your ML Experiments!

Planet Python - Tue, 2023-12-19 13:29

In response to the rapid growth of the data science market, we’re enhancing support for data science libraries, ML models, and MLOps, including by collaborating with ML vendors.

We’re starting with a private preview of our brand-new tool for ML experiments. It lets you set up and launch an experiment from local code on a Virtual Machine (VM) in the cloud directly from PyCharm (a Command Line Interface is also available). We’re granting free access for cloud resources during the preview!

If you are interested in participating, please fill out the following form:

Apply to join the Preview

Why use this new tool?

Effortless user experience:
  • Launch experiments with just one click directly from your local setup.
  • Enjoy a seamless PyCharm experience, just like you’re used to.
  • Take advantage of automated setup for your environment and data.
Convenient scheduling features:
  • Access VMs on demand, with a variety of hardware options to suit your specific needs.
  • Benefit from automatic VM lifecycle management.
  • No additional DevOps expertise is required.


Your ML experiments are just a click away! Submit your details to apply for priority access to the upcoming preview.

Categories: FLOSS Project Planets

Promet Source: Ask Us Anything about Drupal 10 Webinar Recap

Planet Drupal - Tue, 2023-12-19 13:10
Drupal 10 celebrated the one-year anniversary of its release this month, and despite both Drupal 8 and 9 having reached end of life, the Drupal Community's pace of upgrading to Drupal 10 lags far behind where it needs to be. There's much to love about Drupal 10, and Promet recently assembled three of our top in-house experts on the topic for a free webinar designed to answer the big questions and help remove barriers to upgrading ASAP. Here's a link to the conversation along with a recap of the main points.
Categories: FLOSS Project Planets

Christine Lemmer-Webber: In Unexpected Places: a music video

GNU Planet! - Tue, 2023-12-19 12:15

Today my girlfriend Vivi Langdon (EncryptedWhispers) and I released the music video In Unexpected Places, available on both YouTube and PeerTube. It's based off of Vivi's song by the same name, available on BandCamp and on Vivi's Funkwhale instance! It features some kind of retro video-game'y aesthetics, but its story departs from what you might expect from that while still leveraging it; more on this below.

Everything was made with Blender, including some hand-drawn effects using Blender's excellent Grease Pencil! The video is also an open movie project; you can download the full .blend file, which is CC BY-SA 4.0, and play around with everything yourself. The eye rig is especially fun to control and the ships are too!

All in all, this project took about 6 months of "free time" (read: weekends, especially at Hack & Craft) to animate. I played the music over and over again until I had a good story to accompany it, and then on a car trip I drew the storyboard on a pile of index cards from the passenger seat while Morgan was driving. When I got home I pulled up Blender quickly and timed everything with the storyboard. That storyboard was meant just for me, so it doesn't really look particularly good, but it convinced me that this was going to work, and the final animation matches that hastily drawn storyboard very closely!

The title was Vivi's choosing and was selected after I storyboarded the video. It references the phrase "love is found in unexpected places", pointing both to the eye falling in love with the cube and to the fact that this music video was made shortly after Vivi and I fell in love (which also happened in ways that felt surprising to be swept up into, in a good way), and a lot of the depth of our relationship grew during the half-year of making the video.

Some commentary about the narrative

You should watch the video yourself first before reading this section to develop your own interpretation! But if you are interested in my interpretation as author of the animation and its script, here we go.

The video is a commentary about violence and the ways we are conditioned. Without having watched the entire narrative, having just seen clips of the "action sequences", this might look like a video game, and a viewer used to playing video games of the genre seen (of which I am exactly the type of person who both plays and makes games of this genre, including with these tropes) would be most likely to identify with and assume that the two ships which circle and attack the eye are the "heroes" of the piece. Of course, we can see from the comments here that most people are identifying with the eye, which was the intention. Which then puts the viewer at a seeming contradiction: identifying with the character they would normally be attacking.

Of course, villain-subversion tropes are hardly new, particularly within the last couple of decades. But here I wanted to do so without any written story or narrative. In particular, the question is to highlight social conditioning. If I play a game and I see an eyeball in space, I'm gonna want to shoot it! But what will you think the next time you see a floating eyeball in a video game and assume you're supposed to hurt it? The goal is to get you thinking about the way we are conditioned from small signals towards aggression and confrontation and to instead take a moment to try to approach from a point of empathy. That the eye falls in love with the cube at the beginning, and then when disabled or killed at the end is unable to "see" the cube it fell in love with, is meant to be a bridge to help the viewer along that journey. (The goal here is to less criticize video game violence and more the kinds of conditioning we experience in general, though you could use it to criticize that too if you like.)

Likewise, aside from the two ships that destroy the eye, there were two prior ships that also seemed to be aggressive against the eye. These are what I called the "scanner ships": they gather information on the eye, and they confuse and irritate it but not in terms of long term damage the way the "player ships" do, not directly. Instead what they are doing is gathering information to send the players, hence the "speech balloons with antennae on them". This is your briefing at the start of the level, telling you in Star Fox or whatever what your objective is. Their goal is to provide something "fun" for the players to attack and destroy so that they may level up. They represent the kind of media that we consume which pre-conditions us against empathy and towards aggression.

Of course, this is just one interpretation, and it is not the only valid one. I have heard some interesting interpretations about this video being about the way we are pushed into engagement driven consumption platforms and how it hurts us, and another about this being about the way we wake up within the world and have to struggle with what is thrown at us, and another which was just "whoa cool, videogames!", and all of those are valid.

Regardless, I hope you enjoy. As a side note: no sound effects were added to the music video, all of those were in the song. I shaped the story around the song... you could say that the story "grew" from the song. So amongst all the rest of the story above, the real goal was to deliver an experience. I hope you enjoyed it!

Categories: FLOSS Project Planets

Steinar H. Gunderson: Waiting for updates

Planet Debian - Tue, 2023-12-19 09:06

Apple M1: A CPU that's twice as fast, that requires you to run an OS that's 5x as slow.

(OK, the Asahi people are doing great work and one day I'll try running their product, but for macOS development you kind of need macOS)

Categories: FLOSS Project Planets

Real Python: Python Basics Exercises: Reading and Writing Files

Planet Python - Tue, 2023-12-19 09:00

Files play a key role in computing, as they store and transfer data. You likely come across numerous files on a daily basis. In Python Basics: Reading and Writing Files, you dove into the world of file manipulation using Python.

In this video course, you’ll practice:

  • Differentiating between text and binary files
  • Using character encodings and line endings
  • Manipulating file objects in Python
  • Reading and writing character data with different file modes
  • Using open(), Path.open(), and the with statement
  • Leveraging the csv module to manipulate CSV data
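
As a quick, self-contained taste of the calls listed above (Path.open(), the with statement, and the csv module; people.csv is just a placeholder file name, not a file from the course):

import csv
from pathlib import Path

path = Path("people.csv")  # placeholder file name

# Write a couple of rows of character data (newline="" is recommended for csv).
with path.open(mode="w", encoding="utf-8", newline="") as file:
    writer = csv.writer(file)
    writer.writerow(["name", "language"])
    writer.writerow(["Ada", "Python"])

# Read it back.
with path.open(mode="r", encoding="utf-8", newline="") as file:
    for row in csv.reader(file):
        print(row)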

This video course is part of the Python Basics series, which complements the book Python Basics: A Practical Introduction to Python 3. Additionally, you can explore other Python Basics courses.

Please note that throughout this course, you’ll be using IDLE to interact with Python. If you’re new to Python, then you might want to check out Python Basics: Setting Up Python before diving into this course.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Tag1 Consulting: Unraveling the ETL Process: Extract

Planet Drupal - Tue, 2023-12-19 07:47

Our latest episode of Tag1 Team Talks is an insightful guide through the Extract phase of the ETL (Extract, Transform, Load) process in Drupal migrations. Hosted by Janez Urevc, the episode features experts Mike Ryan and Benji Fisher, who offer a deep understanding of data extraction relevant to migrations from older versions of Drupal (6,7,8) or other CMS platforms altogether and more.

Categories: FLOSS Project Planets

SeanB: Introducing YAML Bundles: The easiest way to maintain your content types!

Planet Drupal - Tue, 2023-12-19 07:21

Let’s face it — manually creating and maintaining a lot of content types in Drupal can be a real pain. The repetitive nature of tasks, inconsistencies in field creation, and the time-consuming process of updating settings across multiple content types are familiar struggles for developers working in teams.

At TwelveBricks, we maintain sites with very different content models. Too much time was spent setting up and maintaining those models, so we finally decided to do something about it!

Enter YAML Bundles: A Developer’s new best friend

YAML Bundles is a pragmatic approach to streamlining content type management through YAML-based Drupal plugins. It allows developers to define fields and content types from custom modules, making it a lot easier to add or update fields and content types.

The module defines 3 important plugin types:

  • Field types
    By defining common field types and their form/display settings (like widgets and formatters), you can remove repetitive configuration from the bundle definitions.
  • Entity types
    You can define common settings for entity types to remove even more repetitive configurations from the bundle definitions. We have also integrated support for a bunch of other contrib modules we often use, to save even more time.
  • Entity bundles
    You can use the defined fields in entity bundles, complete with customizable form and display settings that can be overridden on a field-by-field basis. The default settings of the entity types can also be overridden if you need to.

The module ships with defaults for the most common field and entity types, but you can easily override them. The test suite also serves as documentation for all the available plugin settings and options; you can check out the plugin definitions in the yaml_bundles_test module.

YAML Bundles’ Drush Tools

To use the power of YAML bundles, we’ve added a couple of helpful Drush commands that will revolutionize your content type management.

drush yaml-bundles:create-bundles
This command uses the defined plugins to (re)create all the specified bundles, fields, and settings. Whether you’re starting from scratch or updating an existing configuration, it generates all of your content types in one go.

drush yaml-bundles:create-bundle
You don’t have to use the bundle plugins if you don’t want to. For those who prefer a more hands-on approach, this command offers the flexibility to create a new content type directly through Drush.

drush yaml-bundles:create-field
This command simplifies the process of adding new fields to existing content types using default plugin configurations. With the ability to customize form and view displays, this command makes adding fields to different content types much easier.

Integrating the modules we all love

YAML Bundles comes with out-of-the-box support for several popular Drupal modules, including:

  • Pathauto: Define path aliases for entities and bundles effortlessly.
  • Content Translation: Translate fields, labels, and settings into multiple languages.
  • Content Moderation: Integrate content types into existing workflows.
  • Field Group: Group fields for your content type directly from the YAML plugins.
  • Simple Sitemap: Ensure your entity/bundle is included in the sitemap.
  • Maxlength: Define maximum lengths for text fields if needed.
  • Layout Builder: Utilize layout builder for your entity/bundle.
  • Search API: Index entities/bundles and defined fields using the powerful Search API.
Let’s define a content type

To define default settings and fields for an entity type in your custom module, create a [mymodule].yaml_bundles.entity_type.yml file. In this file, you can create all the defaults you need for the entity type.

# Add the default settings and fields for node types.
node:
  description: ''
  langcode: en

  # Make sure our plugin gets preference over the yaml_bundle plugin.
  weight: 1

  # Default content type settings.
  new_revision: true
  preview_mode: 0
  display_submitted: false

  # Enable content translation.
  content_translation:
    enabled: true
    bundle_settings:
      untranslatable_fields_hide: '1'
    language_alterable: true
    default_langcode: current_interface

  # Add the content to the sitemap by default.
  sitemap:
    index: true
    priority: 0.5
    changefreq: 'weekly'
    include_images: false

  # Enable layout builder.
  layout_builder: true

  # Enable content moderation.
  workflow: content

  # Add the content to the default search index.
  search_indexes:
    - default

  # Create the full and teaser view displays.
  view_displays:
    - teaser

  # Add the default fields.
  fields:
    langcode:
      label: Language
      weight: -100
      form_displays:
        - default
    title:
      label: Title
      search: true
      search_boost: 20
      weight: -98
      form_displays:
        - default
    field_meta_description:
      label: Meta description
      type: text_plain
      search: true
      search_boost: 20
      maxlength: 160
      maxlength_label: 'The content is limited to @limit characters, remaining: <strong>@remaining</strong>'
      weight: -25
      form_displays:
        - default

  # Translate the default fields.
  translations:
    nl:
      fields:
        langcode:
          label: Taal
        title:
          label: Titel
        field_meta_description:
          label: Metabeschrijving
          maxlength_label: 'De inhoud is beperkt tot @limit tekens, resterend: <strong>@remaining</strong>'
    de:
      fields:
        langcode:
          label: Sprache
        title:
          label: Titel
        field_meta_description:
          label: Meta-Beschreibung
          maxlength_label: 'Der Inhalt ist auf @limit-Zeichen beschränkt, verbleibend: <strong>@remaining</strong>'

To define a content type in your custom module, create a [mymodule].yaml_bundles.bundle.yml file. In this file, you can create all the bundle definitions you need.

node.news:
  label: News
  description: A description for the news type.
  langcode: en

  # Add generic node type settings.
  help: Help text for the news bundle.

  # Enable a custom path alias for the bundle. Requires the pathauto module to
  # be enabled.
  path: 'news/[node:title]'

  # Configure the simple_sitemap settings for the bundle. Requires the
  # simple_sitemap module to be enabled.
  sitemap:
    priority: 0.5

  # Configure the search API index boost for the bundle. Requires the
  # search_indexes to be configured and the search_api module to be enabled.
  boost: 1.5

  # Configure the fields for the bundle. For base fields, the field only needs
  # a label. For custom fields, the type needs to be specified. The type
  # configuration from the yaml_bundles.field_type plugins will be merged with
  # the field configuration to allow the definition to be relatively simple.
  # Generic defaults for a field type can be configured in the
  # yaml_bundles.field_type plugins.
  #
  # See yaml_bundles.yaml_bundles.field_type.yml for the list of supported
  # field types and their configuration properties.
  fields:

    field_date:
      type: datetime
      label: Date
      required: false
      search: true
      search_boost: 1
      cardinality: 1
      field_default_value:
        -
          default_date_type: now
          default_date: now
      form_displays:
        - default
      view_displays:
        - full
        - teaser

    field_body:
      type: text_long
      label: Text
      required: true
      search: true
      search_boost: 1
      form_displays:
        - default
      view_displays:
        - full

    field_image:
      type: image
      label: Image
      required: true
      search: true
      form_displays:
        - default
      view_displays:
        - full
        - teaser

    field_category:
      type: list_string
      label: Category
      required: false
      search: true
      options:
        category_1: Category 1
        category_2: Category 2
      form_displays:
        - default
      view_displays:
        - full
        - teaser

    field_link:
      type: link
      label: Link
      required: true
      search: true
      cardinality: 2
      form_displays:
        - default
      view_displays:
        - full
        - teaser

That’s all folks!

We’re excited about the potential YAML Bundles brings to the Drupal ecosystem, and we can’t wait for you to experience the difference it makes in your projects. Please check it out and let us know what you think!

Download YAML Bundles | Documentation | Example plugins

Sponsored by TwelveBricks — The easiest CMS for your business

Building a new website can be a time-consuming and expensive process, but it doesn’t have to be. TwelveBricks offers an affordable and easy-to-use (Award-winning!) Content Management System (CMS) for a fixed monthly price.

We’ve optimized the workflow for editors so they can fully focus on their content without worrying about the appearance. With built-in analytics and extensive SEO tools, optimizing content becomes even easier. Each website can be delivered in just 2 days!

If you’re looking for a new website and don’t want to wait for months, spend a fortune, or compromise on quality, check out our website at https://www.twelvebricks.com/en.

Categories: FLOSS Project Planets
