Feeds

Gunnar Wolf: Do you have a minute..?

Planet Debian - Thu, 2024-10-31 18:07

…to talk about the so-called “Intellectual Property”?

Categories: FLOSS Project Planets

Qt Creator 15 Beta2 released

Planet KDE - Thu, 2024-10-31 08:55

We are happy to announce the release of Qt Creator 15 Beta2!

Categories: FLOSS Project Planets

John Cook: How hard is constraint programming?

Planet Python - Thu, 2024-10-31 07:52

I’ve been writing code for the Z3 SMT solver for several months now. Here are my findings.

Python is used here as the base language. Python/Z3 feels like a two-layer programming model—declarative code for Z3, imperative code for Python. In this it seems reminiscent of C++/CUDA programming for NVIDIA GPUs—in that case, mixed CPU and GPU imperative code. Either case is a clever combination of methodologies that is surprisingly fluent and versatile, even if the two layers never blend into a seamless conceptual whole.

Other comparisons:

  • Both have two separate memory spaces (CUDA CPU/GPU memories for one; pure Python variables and Z3 variables for the other).
  • Both can be tricky to debug. In earlier days, CUDA had no debugger, so one had to fall back to the trusty “printf” statement (for a while it didn’t even have that!). If the code crashed, you might get no output at all. To my knowledge, Z3 has no dedicated debugger. If the problem being solved comes back as satisfiable, you can print out the discovered model variables, but if satisfiability fails, you get very little information. Like some other novel platforms, something of a “black box.”
  • In both cases, programmer productivity can be well-served by developing custom abstractions. I developed a Python class to manage multidimensional arrays of Z3 variables, which was a huge time saver (a minimal sketch follows this list).
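For illustration, here is a minimal sketch of such an abstraction. It is not the author's actual class; the name Z3IntGrid and the grid layout are hypothetical.

from z3 import Int, Solver, sat

class Z3IntGrid:
    # A small helper managing a 2-D grid of named Z3 integer variables.
    def __init__(self, name, rows, cols):
        # One named Z3 Int per cell, e.g. "cell_1_2".
        self.cells = [[Int(f"{name}_{r}_{c}") for c in range(cols)]
                      for r in range(rows)]

    def __getitem__(self, index):
        row, col = index
        return self.cells[row][col]

grid = Z3IntGrid("cell", 3, 3)
solver = Solver()
solver.add(grid[0, 0] + grid[0, 1] == 5)  # constraints read like ordinary array code
if solver.check() == sat:
    print(solver.model())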

There are differences too, of course.

  • In Python, “=” is assignment, but in Z3, one only has “==”, logical or numeric equality, not assignment per se. Variables are set once and can’t be changed—sort of a “write-once variables” programming model—as is natural to logic programming.
  • Code speed optimization is challenging. Code modifications for Z3 constraints/variables can have extreme and unpredictable runtime effects, so it’s hard to optimize. Z3 is solving an NP-complete problem after all, so runtimes can theoretically increase massively. Speedups can be massive as well; one round of changes I made gave a 2000X speedup on a test problem. The runtime of CUDA code can be unpredictable to a lesser degree, depending on the PTX and SASS code generation phases and the aggressive code optimizations of the CUDA compiler. However, it seems easier to “see through” CUDA code, down to the metal, to understand expected performance, at least for smaller code fragments. The Z3 solver can output statistics of the solve, but these are hard for a non-expert to interpret and act on.
  • Z3 provides many, many algorithmic tuning parameters (“tactics”), though it’s hard to reason about which ones to pick. Autotuners like FastSMT might help. There have also been some efforts to develop tools to visualize the solve process, which might help as well. (A small sketch of the solver API and explicit tactic selection follows this list.)
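As a small illustrative sketch (not from the post), the following shows how “==” states a constraint rather than performing assignment, and how a chain of tactics can be chosen explicitly; the specific tactic names are just common examples, not a recommendation:

from z3 import Int, Solver, Then, sat

x, y = Int("x"), Int("y")

s = Solver()
s.add(x == y + 2)  # '==' asserts a constraint; it does not assign to x
s.add(y == 3)      # y is effectively "written once" by the solver
if s.check() == sat:
    print(s.model())  # e.g. [y = 3, x = 5]

# The same problem, solved with an explicitly chosen sequence of tactics.
t = Then("simplify", "solve-eqs", "smt")
s2 = t.solver()
s2.add(x == y + 2, y == 3)
print(s2.check())
print(s2.statistics())  # statistics are available, though hard to interpret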

It would be great to see more modern tooling support and development of community best practices to help support Z3 code developers.

The post How hard is constraint programming? first appeared on John D. Cook.
Categories: FLOSS Project Planets

drunomics: Drupal 11 Released - Key Features and Modernised Technology

Planet Drupal - Thu, 2024-10-31 07:00
Drupal 11 was released on August 2, 2024, introducing a suite of new features and enhancements aimed at improving functionality, performance, and user experience. In this article, we will highlight the most significant updates, and emphasise the importance of ongoing maintenance for your Drupal sites.
Categories: FLOSS Project Planets

Preparing for the European Cyber Resilience Act (CRA)

Planet KDE - Thu, 2024-10-31 06:26

In an era where digital security is paramount, the European Union is taking steps to improve cybersecurity legislation with the introduction of the European Union Cyber Resilience Act (CRA). As the European Union has now adopted the CRA, Qt Group continues to work towards making our products CRA compliant and supporting our customers with their compliance.

Categories: FLOSS Project Planets

eGenix.com: PyDDF Python Herbst Sprint 2024

Planet Python - Thu, 2024-10-31 05:00

This announcement is for a Python sprint in Düsseldorf, Germany; the original was published in German.

Announcement

Python Meeting Autumn Sprint 2024 in
Düsseldorf

Saturday, 2024-11-09, 10:00–18:00
Sunday, 2024-11-10, 10:00–18:00

Eviden / Atos Information Technology GmbH, Am Seestern 1, 40547 Düsseldorf

Information: The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend with the kind support of Eviden Deutschland.

The sprint takes place on the weekend of 9–10 November 2024 at the Eviden / Atos office, Am Seestern 1, in Düsseldorf. The following topic areas have already been suggested:
  • AI/ML: image recognition with Azure Computer Vision
  • AI/ML: extracting text and metadata from press websites with the help of a local LLM
  • AI/ML: transcription of video/audio files with Whisper
  • Kodi add-ons for ARD, ZDF, and ARTE
Of course, participants may propose and work on additional topics.
Registration, costs, and further information

All further details and registration can be found on the Meetup sprint page:

IMPORTANT: Without prior registration we cannot prepare building access, so spontaneous registration on the day of the sprint will probably not work.

Participants should also register in the PyDDF Telegram group, since that is where we coordinate:
About the Python Meeting Düsseldorf

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel offers a good overview of the talks; we publish videos of the presentations there after the meetings.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.

Have fun!

Marc-André Lemburg, eGenix.com

Categories: FLOSS Project Planets

CXX-Qt 0.7 Release

Planet KDE - Thu, 2024-10-31 04:30

We just released CXX-Qt version 0.7!

CXX-Qt is a set of Rust crates for creating bidirectional Rust ⇄ C++ bindings with Qt. It supports integrating Rust into C++ applications using CMake or building Rust applications with Cargo. CXX-Qt provides tools for implementing QObject subclasses in Rust that can be used from C++, QML, and JavaScript.

For 0.7, we have stabilized the cxx-qt bridge macro API, and there have been many internal refactors to ensure that we have a consistent baseline to support going forward. We encourage developers to reach out if they find any unclear areas or missing features so that we can put them on the roadmap, as this may be the last opportunity to adapt the API. In the next releases, we’re looking towards stabilizing cxx-qt-build and getting the cxx-qt-lib APIs ready for 1.0.

Check out the new release through the usual channels:

Some of the most notable developer-facing changes:

Stabilized #[cxx_qt::bridge] macro

CXX-Qt 0.7 reaches a major milestone by stabilizing the bridge macro that is at the heart of CXX-Qt. You can now depend on your CXX-Qt bridges to remain compatible with future CXX-Qt versions. As we’re still pre-1.0, we may still introduce very minor breaking changes to fix critical bugs in the edge-cases of the API, but the vast majority of bridges should remain compatible with future versions.

This stabilization is also explicitly limited to the bridge API itself. Breaking changes may still occur in e.g. cxx-qt-lib, cxx-qt-build, and cxx-qt-cmake. We plan to stabilize those crates in the next releases.

Naming Changes

The handling of names internally has been refactored to ensure consistency across all usages. During this process, implicit automatic case conversion has been removed, so cxx_name and rust_name are now used to specify differing Rust and C++ names. Since automatic case conversion is useful, it can be explicitly enabled with the per-extern-block attributes auto_cxx_name and auto_rust_name, while still complementing CXX. For more details on how these attributes can be used, visit the attributes page in the CXX-Qt book.

// with 0.6 implicit automatic case conversion
#[cxx_qt::bridge]
mod ffi {
    unsafe extern "RustQt" {
        #[qobject]
        #[qproperty(i32, my_number)] // myNumber in C++
        type MyObject = super::MyObjectRust;

        fn my_method(self: &MyObject); // myMethod in C++
    }
}

// with 0.7 cxx_name / rust_name
#[cxx_qt::bridge]
mod ffi {
    unsafe extern "RustQt" {
        #[qobject]
        #[qproperty(i32, my_number, cxx_name = "myNumber")]
        type MyObject = super::MyObjectRust;

        #[cxx_name = "myMethod"]
        fn my_method(self: &MyObject);
    }
}

// with 0.7 auto_cxx_name / auto_rust_name
#[cxx_qt::bridge]
mod ffi {
    #[auto_cxx_name] // <-- enables automatic cxx_name generation within the `extern "RustQt"` block
    unsafe extern "RustQt" {
        #[qobject]
        #[qproperty(i32, my_number)] // myNumber in C++
        type MyObject = super::MyObjectRust;

        fn my_method(self: &MyObject); // myMethod in C++
    }
}

cxx_file_stem Removal

In previous releases, the output filename of generated C++ files used the cxx_file_stem attribute of the CXX-Qt bridge. This has been changed to use the filename of the Rust source file including the directory structure.

Previously, the code below would generate a C++ header path of my_file.cxxqt.h. After the changes, the cxx_file_stem must be removed and the generated C++ header path changes to crate-name/src/my_bridge.cxxqt.h. This follows a similar pattern to CXX.

// crate-name/src/my_bridge.rs

// with 0.6 a file stem was specified
#[cxx_qt::bridge(cxx_file_stem = "my_file")]
mod ffi {
    ...
}

// with 0.7 the file path is used
#[cxx_qt::bridge]
mod ffi {
    ...
}

Build System Changes

The internals of the build system have changed so that dependencies are automatically detected and configured by cxx-qt-build, libraries can pass build information to cxx-qt-build, and a CXX-Qt CMake module is now available that provides convenience wrappers around corrosion. This means that the cxx-qt-lib-headers crate has been removed and only cxx-qt-lib is required; the separate -headers crates that existed before are no longer needed. Previously, some features were enabled by default in cxx-qt-lib; now these are all opt-in. We have provided full and qt_full as a convenience to enable all features, but we recommend opting in to only the specific features you need.

We hope to improve the API of cxx-qt-build in the next cycle to match the internal changes and become more modular.

Further Improvements

CXX-Qt can now be successfully built for WASM, with documented steps available in the book and CI builds for WASM to ensure continued support.

Locking generation on the C++ side for all methods has been removed, which simplifies generation and improves performance. Using queue from cxx_qt::CxxQtThread is still safe, as it provides locking, but it is up to the developer to avoid incorrect multi-threading in C++ code (as in the CXX crate). Note that Qt generally works well here, with the signal/slot mechanism working safely across threads.

As with most releases, there are more Qt types wrapped in cxx-qt-lib and various other changes are detailed in the CHANGELOG.

Make sure to subscribe to the KDAB YouTube channel, where we’ll post more videos on CXX-Qt in the coming weeks.

Thanks to all of our contributors that helped us with this release:
  • Ben Ford
  • Laurent Montel
  • Matt Aber
  • knox (aka @knoxfighter)
  • Be Wilson
  • Joshua Goins
  • Alessandro Ambrosano
  • Alexander Kiselev
  • Alois Wohlschlager
  • Darshan Phaldesai
  • Jacob Alexander
  • Sander Vocke

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post CXX-Qt 0.7 Release appeared first on KDAB.

Categories: FLOSS Project Planets

Help fight the proprietary software monsters!

Planet KDE - Wed, 2024-10-30 17:55

KDE’s yearly fundraiser is now live, with the theme of spooooky proprietary software. Go check it out. No, really! It’s great!

I think this one absolutely nails it, because the stories there are relatable. They describe common problems with proprietary software that most of us have personally experienced on our journeys to the FOSS world, and how FOSS fixes them.

Let me share some of mine:

  • When I was a kid, I liked to make movies with my friends and add wacky special effects using a program called AlamDV. I even bought a license to it! After a year, it broke and the developer released version 2, which I dutifully also bought a new license for. Unfortunately, none of my AlamDV 1 projects opened in it. They were lost to the wind.
  • Similarly, I also used Apple’s iMovie editing app. At a certain point, they changed it completely to have a totally different UI and no longer open old projects. Still a kid, I never managed to figure out the new UI and all my old projects were lost forever.
  • A lot of the digital art I made as a kid was saved in Apple’s .pict file format, which even they eventually dropped support for. When I moved to Linux, I had to write a script to open these files individually and take screenshots of them in order to not lose them forever.
  • I’ve been able to consistently recycle older computers and keep them relevant with Plasma. Both of my kids have perfectly serviceable hand-me-down computers revitalized with Fedora KDE. My wife’s 10-year-old laptop is a testbed for KDE Linux.
  • My sister-in-law just last weekend was complaining to me about AI in Photoshop, and was very receptive to the idea of ditching Microsoft and Adobe software entirely. It’s a big turn-off to artists.

This stuff is real, and the work we do has significant impact. It’s not just a toy for nerds. It’s not a basement science project for bored tinkerers. It’s the way computers should be, and can be if enough of us donate our skills, time, and money towards the goal.

How will the fundraised money be used? Principally, to help KDE e.V. balance its budget and stop operating at a loss (about -110k last year, projected -70k this year) due to the legal requirement to spend down large lump-sum donations in a timely manner. We can sustain this level of deficit spending for a few more years, but of course would prefer not to. It’s been a tough environment for nonprofits, and you might have heard that the GNOME Foundation recently ran into financial trouble and had to cut back. We want to avoid that! The sooner we’re operating at a surplus again, the sooner we can expand our sponsorship of engineering work beyond its current level.

So go donate today, and make a difference in the most important movement in software today!

Categories: FLOSS Project Planets

Lullabot: Transforming eBooks: From PDFs to Accessible Web Experiences

Planet Drupal - Wed, 2024-10-30 14:14

When it comes to digital content, accessibility isn't just a nice-to-have. It's essential. That's why we recently took on the challenge of transforming our eBook collection from PDFs into a fully accessible web format. We often help our clients clean up their PDFs, and absent very specific circumstances, we recommend avoiding them as web content. Measured against our own advice, our own website was lacking.

Categories: FLOSS Project Planets

The Python Show: 49 - EdgeDB and Python with Yury Selivanov

Planet Python - Wed, 2024-10-30 13:31

In this episode of The Python Show Podcast, we welcome Yury Selivanov as our guest. Yury is a core CPython developer and co-founder of EdgeDB and MagicStack.

We chatted about many different topics, including the following:

  • Core Python development

  • EdgeDB and how it differs from relational databases

  • Python without the GIL

  • Python subinterpreters

  • Memhive

  • and more!

Links

Learn more about our guest and the topics we talked about with the following links:

Categories: FLOSS Project Planets

DevCollaborative: Why and How to Install Google Search Console on Your Drupal or WordPress Website

Planet Drupal - Wed, 2024-10-30 12:48

Use Google Search Console to be a better listener by understanding which search queries are bringing visitors to your website.

Categories: FLOSS Project Planets

Evolving Web: Is Drupal the Right fit? T-Shirt Sizing for Your Next Website Project

Planet Drupal - Wed, 2024-10-30 11:20

As a member of the leadership team for Drupal CMS, the new product that makes Drupal more accessible to marketers and content teams, I’ve spent the last three months engaging with various teams about their CMS decisions. While I take notes on marketing tools, ease of use, benefits of open source, DXP capabilities, and SaaS options, the core of these conversations often revolves around people and culture.

Who Decides on a CMS?

The decision-maker for a CMS can vary: sometimes it’s a marketer or an IT professional, other times it’s the “head of digital,” or even an agency hired to handle the organization’s digital needs. Although many focus on features, decisions often hinge on feelings, prior experiences, and familiarity. Ultimately, the decisions reflect the experiences of those in the room.
This isn’t to say that the right technical fit isn’t important; rather, it often takes a backseat to personal experiences. It’s crucial to communicate why Drupal aligns with an organization’s digital strategy based on its goals.

Three Types of Websites: Why Drupal is a Great Fit

Let’s categorize websites into three types and discuss why Drupal suits each.

1. Cornerstone of Your Digital Strategy

Every organization needs a digital front door. For established brands, this digital presence serves as the foundation of their online strategy. A known brand must maintain consistent online expression, while an unknown brand needs to tell its story effectively, helping users recognize its voice and identity.
Users want to quickly understand if they’re in the right place and how to connect. They expect seamless integration with third-party tools and easy access to internal data.

Why Choose Drupal?

Drupal excels here because it goes beyond basic content management, offering flexibility for both internal and external users. It supports:

  • Integration with third-party tools and data management.
  • Enterprise-grade workflows and content management.
  • Custom features and transactions.
  • Tailored information architecture.
  • A blend of structured content and marketing pages.

 Planned Parenthood Direct's mobile-first website serves as a cornerstone of its digital strategy, focusing on creating a distinct sub-brand that resonates with younger audiences while maintaining alignment with the larger Planned Parenthood identity. The site leverages bold colours, playful design elements, and clear messaging to connect with users, driving them to download the app and access reproductive healthcare services.
2. CMS Platform

Many organizations manage a complex ecosystem of websites, often hindered by internal politics and multiple CMSs that lead to inconsistent branding.
A successful CMS platform balances flexibility with guidelines, making site creation easy while adhering to the organization’s branding and content strategy. It often requires standardization of third-party tools.

Why Choose Drupal?

Drupal’s modularity simplifies standardization across websites. It supports:

  • Configuration management to allow control over customization
  • Flexibility that enables governance at both the platform and individual site levels
  • As a widely adopted solution in enterprises, it benefits from optimized hosting tools designed for multi-site management (e.g., Pantheon Custom Upstreams, Acquia Site Factory).

 Evolving Web partnered with the Bibliothèque et Archives nationales du Québec (BAnQ) to redesign its website, building a platform where staff can curate content and users can search across multiple data sources.
3. Marketing Microsite Designed to Scale

Not every organization is large; some startups aim to create single-purpose websites quickly. These organizations need to build fast without sacrificing security or accessibility. Often, marketers seek easy drag-and-drop tools for rapid site creation.

While Drupal has traditionally been overlooked for quick projects, Drupal CMS provides a solution that fosters familiarity among a broader audience, because it lowers the barrier to entry and speeds up the timeline to launch a website. When marketers can create a website quickly, it enhances creativity and ownership, and frees up more time to focus on content and marketing strategy. Drupal CMS will be especially important for making the case for using Drupal for these types of projects.

Why Choose Drupal?

Drupal allows for the rapid launch of marketing sites, which can later scale into a digital cornerstone for an established brand. In particular, Drupal CMS will support:

  • Built-in AI tools for site building that free up time to focus on content strategy and on handling the influx of feature requests and content decisions that marketers often face
  • Allowing small sites to leverage the same modules and recipes available to larger sites
  • No limitations to scaling up a small website to accommodate more content, authors, or functionality
Conclusion: If Drupal Wins, We All Win

Increased usage of Drupal leads to a better experience for everyone involved—developers, site builders, marketers, and content teams. As an open-source platform, Drupal's growth benefits the broader community, including government and non-profit organizations. Improving Drupal enhances a public good rather than enriching proprietary solutions. 

If you're looking to talk more about Drupal and Drupal CMS, don’t hesitate to get in touch.

If Drupal wins, we all win.

+ more awesome articles by Evolving Web
Categories: FLOSS Project Planets

Real Python: Python Closures: Common Use Cases and Examples

Planet Python - Wed, 2024-10-30 10:00

In Python, a closure is typically a function defined inside another function. This inner function grabs the objects defined in its enclosing scope and associates them with the inner function object itself. The resulting combination is called a closure.

Closures are a common feature in functional programming languages. In Python, closures can be pretty useful because they allow you to create function-based decorators, which are powerful tools.

In this tutorial, you’ll:

  • Learn what closures are and how they work in Python
  • Get to know common use cases of closures
  • Explore alternatives to closures

To get the most out of this tutorial, you should be familiar with several Python topics, including functions, inner functions, decorators, classes, and callable instances.

Get Your Code: Click here to download the free sample code that shows you how to use closures in Python.

Take the Quiz: Test your knowledge with our interactive “Python Closures: Common Use Cases and Examples” quiz. You’ll receive a score upon completion to help you track your learning progress.

Getting to Know Closures in Python

A closure is a function that retains access to its lexical scope, even when the function is executed outside that scope. When the enclosing function returns the inner function, then you get a function object with an extended scope.

In other words, closures are functions that capture the objects defined in their enclosing scope, allowing you to use them in their body. This feature allows you to use closures when you need to retain state information between consecutive calls.
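For example, here is a minimal sketch (not taken from the tutorial) of a closure that retains a running count between calls; the names make_counter and counter are illustrative:

def make_counter():
    count = 0

    def counter():
        nonlocal count  # rebind the name captured from the enclosing scope
        count += 1
        return count

    return counter

increment = make_counter()
print(increment())  # 1
print(increment())  # 2 -- the state persists between calls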

Closures are common in programming languages that are focused on functional programming, and Python supports closures as part of its wide variety of features.

In Python, a closure is a function that you define in and return from another function. This inner function can retain the objects defined in the non-local scope right before the inner function’s definition.

To better understand closures in Python, you’ll first look at inner functions because closures are also inner functions.

Inner Functions

In Python, an inner function is a function that you define inside another function. This type of function can access and update names in their enclosing function, which is the non-local scope.

Here’s a quick example:

>>> def outer_func():
...     name = "Pythonista"
...     def inner_func():
...         print(f"Hello, {name}!")
...     inner_func()
...

>>> outer_func()
Hello, Pythonista!

>>> greeter = outer_func()
>>> print(greeter)
None

In this example, you define outer_func() at the module level or global scope. Inside this function, you define the name local variable. Then, you define another function called inner_func(). Because this second function lives in the body of outer_func(), it’s an inner or nested function. Finally, you call the inner function, which uses the name variable defined in the enclosing function.

When you call outer_func(), inner_func() interpolates name into the greeting string and prints the result to your screen.

Note: To learn more about inner functions, check out the Python Inner Functions: What Are They Good For? tutorial.

In the above example, you defined an inner function that can use the names in the enclosing scope. However, when you call the outer function, you don’t get a reference to the inner function. The inner function and the local names won’t be available outside the outer function.

In the following section, you’ll learn how to turn an inner function into a closure, which makes the inner function and the retained variables available to you.

Function Closures

All closures are inner functions, but not all inner functions are closures. To turn an inner function into a closure, you must return the inner function object from the outer function. This may sound like a tongue twister, but here’s how you can make outer_func() return a closure object:

>>> def outer_func():
...     name = "Pythonista"
...     def inner_func():
...         print(f"Hello, {name}!")
...     return inner_func
...

>>> outer_func()
<function outer_func.<locals>.inner_func at 0x1066d16c0>

>>> greeter = outer_func()
>>> greeter()
Hello, Pythonista!

Read the full article at https://realpython.com/python-closure/ »


Categories: FLOSS Project Planets

Ned Batchelder: GitHub action security: zizmor

Planet Python - Wed, 2024-10-30 08:16

Zizmor is a new tool to check your GitHub action workflows for security concerns. I found it really helpful to lock down actions.

Action workflows can be esoteric, and continuous integration is not everyone’s top concern, so it’s easy for them to have subtle flaws. A tool like zizmor is great for drawing attention to them.

When I ran it, I had a few issues to fix:

  • Some data available to actions is manipulable by unknown people, so you have to avoid interpolating it directly into shell commands. For example, you might want to add the branch name to the action summary:

        - name: "Summarize"
          run: |
            echo "### From branch ${{ github.ref }}" >> $GITHUB_STEP_SUMMARY

    But github.ref is a branch name chosen by the author of the pull request. It could have a shell injection which could let an attacker exfiltrate secrets. Instead, put the value into an environment variable, then use it to interpolate:

        - name: "Summarize"
          env:
            REF: ${{ github.ref }}
          run: |
            echo "### From branch ${REF}" >> $GITHUB_STEP_SUMMARY

  • The actions/checkout step should avoid persisting credentials:

        - name: "Check out the repo"
          uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
          with:
            persist-credentials: false

  • In steps where I was pushing to GitHub, this meant I needed to explicitly set a remote URL with credentials:

        - name: "Push digests to pages"
          env:
            GITHUB_TOKEN: ${{ secrets.token }}
          run: |
            git config user.name nedbat
            git config user.email ned@nedbatchelder.com
            git remote set-url origin https://x-access-token:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git
There were some other things that were easy to fix, and of course, you might have other issues. One improvement to zizmor: it could link to explanations of how to fix the problems it finds, but it wasn’t hard to find resources, like GitHub’s Security hardening for GitHub Actions.

William Woodruff is zizmor’s author. He was incredibly responsive when I had problems or questions about using zizmor. If you hit a snag, write an issue. It will be a good experience.

If you are like me, you have repos lying around that you don’t think about much. These are a special concern, because their actions could be years old, and not well maintained. These dusty corners could be a good vector for an attack. So I wanted to check all of my repos.

With Claude’s help I wrote a shell script to find all git repos I own and run zizmor on them. It checks the owner of the repo because my drive is littered with git repos I have no control over:

#!/bin/bash
# zizmor-repos.sh

echo "Looking for workflows in repos owned by: $*"

# Find all git repositories in current directory and subdirectories
find . \
    -type d \( \
        -name "Library" \
        -o -name "node_modules" \
        -o -name "venv" \
        -o -name ".venv" \
        -o -name "__pycache__" \
    \) -prune \
    -o -type d -name ".git" -print 2>/dev/null \
| while read gitdir; do
    # Get the repository directory (parent of .git)
    repo_dir="$(dirname "$gitdir")"

    # Check if .github/workflows exists
    if [ -d "${repo_dir}/.github/workflows" ]; then
        # Get the GitHub remote URL
        remote_url=$(git -C "$repo_dir" remote get-url origin)

        # Check if it's our repository
        # Handle both HTTPS and SSH URL formats
        for owner in $*; do
            if echo "$remote_url" | grep -q "github.com[/:]$owner/"; then
                echo ""
                echo "Found workflows in $owner repository: $repo_dir"
                ~/.cargo/bin/zizmor $repo_dir/.github/workflows
            fi
        done
    fi
done

After fixing issues, it’s very satisfying to see:

% zizmor-repos.sh nedbat BostonPython
Looking for workflows in repos owned by: nedbat BostonPython

Found workflows in nedbat repository: ./web/stellated
🌈 completed ping-nedbat.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./web/nedbat_nedbat
🌈 completed build.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./scriv
🌈 completed tests.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./lab/gh-action-tests
🌈 completed matrix-play.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./aptus/trunk
🌈 completed kit.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./cog
🌈 completed ci.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./dinghy/nedbat
🌈 completed test.yml
🌈 completed daily-digest.yml
🌈 completed docs.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./dinghy/sample
🌈 completed daily-digest.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./coverage/badge-samples
🌈 completed samples.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./coverage/django_coverage_plugin
🌈 completed tests.yml
No findings to report. Good job!

Found workflows in nedbat repository: ./coverage/trunk
🌈 completed dependency-review.yml
🌈 completed publish.yml
🌈 completed codeql-analysis.yml
🌈 completed quality.yml
🌈 completed kit.yml
🌈 completed python-nightly.yml
🌈 completed coverage.yml
🌈 completed testsuite.yml
No findings to report. Good job!

Found workflows in BostonPython repository: ./bospy/about
🌈 completed past-events.yml
No findings to report. Good job!

Nice.

Categories: FLOSS Project Planets

Russell Coker: Links October 2024

Planet Debian - Wed, 2024-10-30 06:04

David Brin wrote an interesting article about AI ecosystems and how humans might work with machines on creative projects [1]. Also, he’s right about “influencers” being like fungi.

Cory Doctorow wrote an interesting post about DRM, coalitions, and cheating [2]. It seems that people like me who want “trusted computing” to secure their own computers don’t fit well in any of the coalitions.

The CHERI capability system for using extra hardware to validate jump addresses is an interesting advance in computer science [3]. The lecture is from the seL4 Summit; this sort of advance in security goes well with a formally proven microkernel. I hope that this becomes a checkbox when ordering a custom RISC-V design.

Bunnie wrote an insightful blog post about how the Mossad might have gone about implementing the exploding pager attack [4]. I guess we will see a lot more of this in future, it seems easy to do.

Interesting blog post about Control Flow Integrity in the V8 engine of Chrome [5].

Interesting blog post about the new mseal() syscall which can be used by CFI among other things [6].

This is the Linux kernel documentation about the Control-flow Enforcement Technology (CET) Shadow Stack [7]. Unfortunately not enabled in Debian/Unstable yet.

ARM added support for Branch Target Identification in version 8.5 of the architecture [8].

The CEO of Automattic has taken his dispute with WPEngine to an epic level; this video catalogues it. I wonder what is wrong with him [9].

NuShell is an interesting development in shell technology which runs on Linux and Windows [10].

Interesting article about making a computer game without coding using ML [11]. I doubt that it would be a good game, but maybe educational for kids.

Krebs has an insightful article about location tracking by phones which is surprisingly accurate [12]. He has provided information on how to opt out of some of it on Android, but we need legislative action!

Interesting YouTube video about how to make a 20kW microwave oven and what it can do [13]. Don’t do this at home, or anywhere else!

The Void editor is an interesting project, a fork of VSCode that supports DIRECT connections to LLM systems where you don’t have their server acting as a middle-man and potentially snooping [14].

Related posts:

  1. Links August 2024 Bruce Schneier and Kim Córdova wrote an insightful article about...
  2. Links September 2024 CNA Insider has an insightful documentary series about Chinese illegal...
  3. Links June 2024 Modos Labs have released the design of an e-ink display...
Categories: FLOSS Project Planets

Promet Source: Secure Our World: Cybersecurity Awareness Month 2024

Planet Drupal - Wed, 2024-10-30 05:44
Takeaway: October is Cybersecurity Awareness Month. For 21 years, the U.S. government has set aside this month to help organizations strengthen their online security and protect sensitive information. The 2024 theme is "Secure Our World." This reminds us that protecting sensitive information is a shared responsibility across all departments and roles.
Categories: FLOSS Project Planets

Dirk Eddelbuettel: gcbd 0.2.7 on CRAN: More Mere Maintenance

Planet Debian - Tue, 2024-10-29 21:10

Another pure maintenance release, 0.2.7, of the gcbd package is now on CRAN. The gcbd package proposes a benchmarking framework for LAPACK and BLAS operations (as these libraries can be exchanged in a plug-and-play sense on suitable OSs) and records results in a local database. Its original motivation was to also compare to GPU-based operations. However, it is both challenging to keep CUDA working, and packages on CRAN providing the basic functionality appear to come and go, so testing the GPU feature can be difficult. The main point of gcbd is now to actually demonstrate that ‘yes indeed’ we can just swap BLAS/LAPACK libraries without any change to R, or R packages. The ‘configure / rebuild R for xyz’ often seen with ‘xyz’ being Goto or MKL is simply plain wrong: you really can just swap them (on proper operating systems, and R configs – see the package vignette for more). But no matter how often we aim to correct this record, it invariably raises its head another time.

This release accommodates a CRAN change request as we were referencing the (now only suggested) package gputools. As hinted in the previous paragraph, it was once on CRAN but is not right now so we adjusted our reference.

CRANberries also provides a diffstat report for the latest release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Seth Michael Larson: How to export OPML from Omnivore

Planet Python - Tue, 2024-10-29 20:00

Published 2024-10-30 by Seth Larson

Omnivore recently announced they were bought by ElevenLabs, which is an AI company funded by Trump-supporting VC firm Andreessen Horowitz. As a part of this deal, they are shuttering the service on an extremely tight deadline.

This is very disappointing to me; I've previously recommended Omnivore to others and have donated for the past year that I've used the service. It goes without saying that I want nothing to do with Omnivore.

At the recommendation of a few friends I am going to try out Feedbin, which costs $5/month (the same price that I was willing to donate to Omnivore) and has a generous 30-day trial. This service has been around for much longer and appears not to be adding AI garbage to their app (thank you, Feedbin!).

The Omnivore team has unhelpfully given an extremely tight deadline to migrate your content out before they shutter the service: November 15th. Exporting your content (links, tags, rendered HTML pages) worked fine for me but I had to restart the export process once. Please do this ASAP to avoid losing your data.

I will need to write a few scripts to import the content into Feedbin. But your data export doesn't give you your RSS or newsletter subscriptions (again, screw you Omnivore).

So I wrote a short Python script to do that. First install the dependencies (OmnivoreQL and PyOPML):

$ python -m pip install omnivoreql pyopml

You'll also need to create an Omnivore API token and place the value in the script:

import opml
from omnivoreql import OmnivoreQL

omnivore_api_token = "<YOUR API TOKEN GOES HERE>"
omnivore = OmnivoreQL(api_token=omnivore_api_token)
subscriptions = omnivore.get_subscriptions()

with open("newsletters.txt", mode="w") as newsletters_fileobj:
    newsletters_fileobj.truncate()
    feeds_opml = opml.OpmlDocument(
        title="Omnivore Feeds Export"
    )
    for subscription in subscriptions["subscriptions"]["subscriptions"]:
        if subscription["newsletterEmail"] is not None:
            newsletters_fileobj.write(subscription["name"] + "\n")
        else:
            feeds_opml.add_rss(
                text=subscription["name"],
                title=subscription["name"],
                xml_url=subscription["url"],
                description=subscription["description"],
            )

with open("feeds.opml", mode="wb") as feeds_fileobj:
    feeds_fileobj.truncate()
    feeds_opml.dump(feeds_fileobj, pretty=True)

After running the script you'll have two files: feeds.opml and newsletters.txt.

The feeds.opml file can be imported into RSS readers that support the OPML format (Feedbin and many other services do).
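If you want a quick sanity check that the export worked before importing it anywhere, a short snippet like the following (my own addition, using only the standard library) counts the exported feeds:

import xml.etree.ElementTree as ET

tree = ET.parse("feeds.opml")
outlines = tree.findall(".//outline")
print(f"Exported {len(outlines)} feeds")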

The newsletters.txt file is mostly to remind you which newsletters you've subscribed to. Because these readers use custom email addresses to handle newsletters you'll need to manually set these subscriptions up on the new reader service you choose to use.

If there are any issues with the script above, get in contact and I'll try to fix them.

Have thoughts or questions? Let's chat over email or social:

sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org


Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

Armin Ronacher: Make It Ephemeral: Software Should Decay and Lose Data

Planet Python - Tue, 2024-10-29 20:00

Most software that exists today does not forget. Creating software that remembers is easy, but designing software that deliberately “forgets” is a bit more complex. By “forgetting,” I don't mean losing data because it wasn’t saved or losing it randomly due to bugs. I'm referring to making a deliberate design decision to discard data at a later time. This ability to forget can be an incredibly beneficial property for many applications. Most importantly, software that forgets enables different user experiences.

I'm willing to bet that your cloud storage or SaaS applications likely serve as dumping grounds for outdated, forgotten files and artifacts. This doesn’t have to be the case.

Older computer software often aimed to replicate physical objects and experiences. This approach (skeuomorphism) was about making digital interfaces feel familiar by resembling older physical objects in appearance and behavior, even though they didn't need to. Ironically, though, skeuomorphism, despite focusing on look and feel, rarely considers some of the hidden affordances of the physical world. Critically, digital software rarely features degradation. Yes, the trash bin was created as an approximation of this, but the bin seemingly did not make it farther than file or email management software. It also does not go far enough.

In the physical world, much of what we create has a natural tendency to decay and that is really useful information. A sticky note on a monitor gathers dust and fades. A notebook fills with notes and random scribbles, becomes worn, and eventually ends up in a cabinet to finally end its life discarded in a bin. We probably all clear out our desk every couple of months, tossing outdated items to keep the space manageable. When I do that, a key part of this is quickly judging how “old” some paper looks. But even without regular cleaning, things are naturally lost or discarded over time on my desk. Yet software rarely behaves this way. I think that’s a problem.

When data is kept indefinitely by default, it changes our relationship with that software. People sometimes may hesitate to create anything in shared spaces for fear of cluttering them, while others might indiscriminately litter them. In file-based systems, this may be manageable, but in shared SaaS applications, everything created (dashboards, notebooks, diagrams) lingers indefinitely and remains searchable and discoverable. This persistence seems advantageous but can quickly lead to more and more clutter.

Adding new data to software is easy. Scheduling it for automatic deletion is a bit harder. Simulating any kind of “visual decay” to hint at age or relevance is rarely seen in today's software, though it wouldn't be all that hard to add. I'm not convinced that the work required to implement any of those things is why they don't exist; I think it's more likely that there is a belief that keeping stuff around forever is a benefit over the limitations of the real world.

The reality is that even though the entities we create stick around forever, the information contained within them ages badly. Of the 30-odd "test" dashboards in our Datadog installation, most no longer show data. The same is true for hundreds of notebooks. We have a few thousand notebooks, and quite a few of them at this point are anchored to data that is past the retention period or reference metrics that are gone.

In shared spaces with lots of users, few things are intended to last forever. I hope that it will become more popular for software to take age into account more intentionally. For instance, one could start fading out old documents that are rarely maintained or refreshed. I want software to hide old documents, dashboards, and the like, and most critically that includes not showing them in search. I don't want to accidentally navigate to old and unused dashboards in the midst of an incident.

Sorting by frequency of use is insufficient to me. Ideally, software would embrace an “ephemeral by default” approach. While there’s some risk of data loss, the deletion can be made purely virtual (at least for a while). Imagine dashboard software with built-in “garbage collection”: everything created starts with a short time-to-live (say, 30 days), after which it moves to a “to sort” folder. If it’s not actively sorted and saved within six months, it's moved to the trash and eventually deleted.
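As a purely hypothetical sketch of such a policy (my own illustration, not something any existing product implements), the lifecycle could be as simple as:

from datetime import datetime, timedelta, timezone

TTL = timedelta(days=30)          # initial time-to-live
SORT_GRACE = timedelta(days=180)  # time allowed in the "to sort" folder

def lifecycle_state(created_at, kept=False, now=None):
    # Return 'active', 'to_sort', or 'trash' for a document.
    now = now or datetime.now(timezone.utc)
    if kept:
        return "active"  # someone explicitly sorted and saved it
    age = now - created_at
    if age <= TTL:
        return "active"
    if age <= TTL + SORT_GRACE:
        return "to_sort"
    return "trash"  # eligible for (virtual) deletion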

This idea extends far beyond dashboards! Wiki and information management software like Notion could benefit from decaying notes, as the information they hold often becomes outdated quickly. I routinely encounter more outdated pages than current ones. While outright deletion may not be the solution, irrelevant notes and documents showing up in searches add to the clutter and make finding useful information harder. “But I need my data sometimes years later,” I hear you say. What about making that intentional? Archive them in yearbooks. Make me intentionally “dig into the archives” if I really have to. There are many very intentional ways of dealing with this problem.

And even if software does not want to go down that path, I would at least wish for scheduled deletion. I will forget to delete, and I'm lazy; given the tools available, I rarely clean up. Yet for many of the things I create, I already know I really only need them for a week or two. So give me a button I can press to schedule deletion. Then I don't have to remember to clean up after myself a few months later; I can make that call today, when I create the thing.

Categories: FLOSS Project Planets

Pages