Feeds

Spyder IDE: Reusable research Birds of a Feather session at Scipy 2023: Goals and challenges

Planet Python - Tue, 2023-12-19 07:00

The Spyder team and collaborators hosted a Birds of a Feather (BoF) session at SciPy 2023, focused on moving beyond just scripts and notebooks toward truly reproducible, reusable research. Here, we’ll recap the motivation and goals of the BoF and share the common challenges participants brought up around notebooks and the move toward reproducible, reusable research. In our next post, we’ll follow up with some of the tips, tools, platforms and strategies attendees brought up as ways to address them, including using Spyder! We'd like to thank Juanita Gomez for helping organize the BoF, Hari for his hard work compiling a summary of the outcomes, and everyone for attending and sharing such great ideas and insights!

The trouble with notebooks

The overwhelming majority of current scientific code is siloed away into one-off scripts and notebooks, where the only real mechanism for reusing and building upon them is good old copy and paste. In order to keep "building upon the shoulders of giants", we need to achieve not only reproducibility of individual results but also true reusability of research methods that can be shared, built upon, and deployed by researchers across the world.

In particular, scripts and notebooks are not typically very reproducible or reusable, as users generally cannot easily import them, specify dependencies, extend them or use them for another project (without copy/paste and managing multiple code versions by hand). Additionally, for notebooks specifically, authors and readers alike cannot easily track them in Git (with clean diffs), lint, type check, test or format them with standard Python tools, or interoperate with most other non-notebook-specific ecosystems.

To address these pressing issues, the Spyder team and interested community members convened a Birds of a Feather (BoF) session, "Beyond Notebooks: From reproducible to reusable research", at the SciPy 2023 conference in Austin, TX, where we invited attendees to share their tools and workflows for reusable science, and explored how we can encourage users to expand beyond the current notebook-centric monoculture and toward more holistic, modular and interoperable approaches to conducting research and developing scientific code. The goal was to not only share and discuss ideas and insights on the topic among the roughly 50 BoF participants, but also to help inform future guides and resources on this topic, to be hosted on central platforms like the Scientific Python organization, as is currently in progress.

Goals and themes

The BoF was motivated by the following key questions:

  • What is reusable research and why is it important?
  • What tools and techniques do people have to share for effective reusable research?
  • How can we integrate reusable research into existing workflows?
  • How do we teach students and researchers about reusable research, and encourage them to practice it?

The resulting community ideas and insights centered on three related themes:

  • How can we make existing notebooks more readable, reproducible and reusable?
  • How can notebooks be progressively migrated to Python modules for basic reusability?
  • How can the community simplify and advocate for the process of creating fully reusable Python packages?

Common challenges

Participants commented that students mostly get introduced to notebooks through classes, in contexts that are very different from how they would use them for their research, and there isn't a good resource to hand them when they have a question or are confused. Others responded that they think this should be part of the curriculum, questioning why people are learning machine learning using Jupyter notebooks without actually learning how to use Jupyter notebooks themselves, and noting that many folks don't come from a traditional computer science background and may not know about all these tools.

It was also remarked that because students are only exposed to notebooks, they don't necessarily want to reach for other tools even when those would be more appropriate down the line. Participants suggested addressing this by encouraging students to use IDEs like Spyder and JupyterLab, which offer many features for reusability and reproducibility while still allowing them to take advantage of notebooks.

In particular, one former Spyder developer commented that they feel we should show students how to use tools like debuggers and make it easier for them to do so, but give them the choice of whether they want to use those tools; the right approach is not necessarily telling them what tool to use, but providing documentation and exposure to those tools so students can pick the best option for themselves. Others remarked in response that we do want to give students options, though many might not need a full debugger.

One library worker mentioned that they often only have an hour to introduce users to Python, and use Google Colab notebooks because it makes it a lot easier for students to get started with Python than having to download and install an IDE; but students then tend to stay with the tool they know and continue to use it. Another participant mentioned they are a big fan of using videos to reach students rather than having them read documentation, as they feel students are much more likely to watch them.

The discussion shifted to tools in larger organizations, with a participant commenting "It's one thing when it's students, but how do you do that when it's your whole organizational culture that needs to change?" One participant responded saying she's a student herself, and no one ever really talked to her about IDEs, explained what they were or why you'd want to use one. She remarked that it's important for teachers to actually train students to use the proper tools, but that she has no idea how to approach it when it comes to coworkers.

Another participant suggested "nerd sniping" as an effective way to handle this in larger organizations: figure out the team's biggest pain point, usually something that should be automated, and then get them to follow better practices by showing how these tools can fix that problem. Others agreed that it's really about awareness: if you show someone a cool tool, most people will decide to adopt it on their own, but there will always be some who might not want that.

Finally, it was brought up that students might be familiar with Python or R, but Git is a completely different animal, and it is quite challenging to factor it into education. People like writers would really benefit from Git, but it's really hard to get them to use it, and people might not be aware of how inefficient their workflows are, because that's all they know.

Next up

Now that we’ve surfaced the reproducibility and reusability challenges that participants brought up at the BoF, stay tuned for our next blog post coming up soon, where we’ll share all the helpful tips, cool tools, awesome platforms and useful strategies attendees suggested to help address them. Until then, happy Spydering!

Categories: FLOSS Project Planets

Qt Contributor’s Summit 2023

Planet KDE - Tue, 2023-12-19 06:58

Earlier this month I traveled to winterly Berlin for the Qt Contributor’s Summit. After having contributed many patches to Qt in the past months to make the upcoming Plasma 6 really shine, I decided to attend for the first time this year to meet some more of the faces behind our beloved UI toolkit.

Welcome to Qt Contributor’s Summit 2023

The event took place over the course of two days adjacent to Qt World Summit at Estrel Hotel in Neukölln – a massive hotel, congress, and entertainment complex, and actually the largest one in Europe. It literally took me longer to walk from its main entrance to the venue than to get from Sonnenallee S-Bahn station to the entrance.

Thursday morning at 9:30 after registering and picking up our badges, Volker Hilsheimer of The Qt Group opened the event and gave a recap on the state of the Qt Project and where it’s headed. Following that was a panel discussion on how to attract more external contributors to Qt. Being a library consumed by applications rather than an end-user product on its own certainly makes it hard to excite people to contribute or give them a reason to scratch their own itch.

After a copious lunch we started diving into discussions and workshops, typically three tracks in parallel. They were usually scheduled for 30 minutes, which I found way too short for any kind of meaningful outcome. The first meeting I attended revolved around “Qt – Connected First” and how Qt networking APIs could be made more capable and easy to use, particularly in the realm of OAuth2 and JWT. The need for supporting the fetch API in QML was also emphasized. Next I joined “moc in 202x and beyond” with Fabian Kosmale, where we discussed ways the Meta-Object Compiler (which gives you signals and slots, and is basically just a glorified pre-processor) could understand actual C++ language constructs. After that I listened to a discussion on improving the API review process of Qt.

Finally, there was a one hour slot by Volker on evolving QIcon and theming that came out very promising. Linux desktops for the longest time have had a standardized way of loading and addressing themed icons and Qt’s own icon infrastructure is built around that. In recent years, however, most other major platforms, particularly macOS and iOS, Android, and Windows, gained the ability to provide standardized assets for many icons typically found in an application. They even took it a step further and support additional hints for an icon, for example whether it should be filled or just an outline, rounded or sharp, varying levels of “progress” (e.g. the WiFi icon might consume a hint on what signal strength it should represent), and of course dynamic colorization.

Most of those icons are actually implemented using font glyphs and so-called font parameter “tags”. Qt 6.7 laid the groundwork for manipulating those through a new QFont::Tag API and will ship with a “platform icon engine” for the aforementioned operating systems. In KDE we’re quite excited about it since we also dynamically colorize our Breeze icons based on the current color scheme. This is currently done by our own KIconEngine and will not work when run in other environments like Gnome or Android, where we instead have to ship a dedicated “Breeze Dark” icon-set using white rather than black icons. There’s now also a QIcon::ThemeIcon enum containing a list of well-known icon names (such as “undo” or “open file”) which will map to the respective native icon name depending on the current platform. And if this wasn’t thrilling enough, Qt SVG also received some love and, among other things, gained support for various patterns and filters, including Gaussian blur.

… Walking in a Winter Wonderland …

We then headed out to a pizza place we didn’t believe would actually fit the thirty or so of us that were looking for dinner there. The next morning began with a presentation on Cyber Security by Kai Köhne on how to deal better with CVEs in Qt, since Qt also ships a lot of 3rd-party code. This was then followed by Marc Mutz and a session on the state of C++20 in Qt and, of course, co-routines. After lunch we continued discussing the Cyber Security topic. Thereafter Thiago Macieira explained how broken QSharedMemory and friends actually are and that there’s no real way to salvage them. The biggest user of it seems to be QtSingleApplication, which I believe should actually be a core feature provided by Qt. There are also a few questionable uses within KDE, with the most important one being in the KSycoca / KService database.

I then switched rooms to a joint session about cleaning up QMetaType where we scrolled through the code a bit and tried to figure out what problem some of it is actually trying to solve. Finally, Fabian presented his work on extending the QML type system, most notably for supporting std::variant and, by proxy, std::optional. Currently, if an API accepts multiple types of input, such as “model” on a Repeater or ListView taking a proper QAbstractItemModel* as well as a simple Array of values or even just a plain number, this has to be implemented using a property of type QVariant. This doesn’t make for self-documenting code and poses problems at runtime, where at best assigning an unsupported type will print a warning on the console. Using std::variant one could declare the expected input up front. Likewise, rather than using QVariant to return undefined if no value exists, std::optional would make it obvious what the main type is but that it can be “nulled”. Furthermore, we discussed type-safe ways to declare the expected signature for a JS callback function such as the “provider” functions in TableView.

We then wrapped up the conference, collected our T-Shirts and whatever leftover merchandise and headed back towards home. Many thanks to Tobias Hunger and his family for their hospitality as well as The Qt Group for sponsoring my travel.

Categories: FLOSS Project Planets

CodersLegacy: How to disable the Cache in Selenium

Planet Python - Tue, 2023-12-19 06:08

Selenium is a powerful tool for automating web applications, but one common challenge faced by automation testers is dealing with browser caching. Caching can sometimes interfere with the accuracy of your Selenium tests, leading to unexpected results. In this tutorial, we’ll explore how to disable the cache in Selenium to ensure that your automated tests run smoothly and produce reliable results.

Why Disable Cache in Selenium?

Browser caching is a mechanism that stores copies of web pages to reduce loading times. While this is beneficial for regular browsing, it can be a hindrance during automated testing.

For example, during stress testing of a website, if the cache is used after the initial page load, then it will not be a good stress test, as the only time the server is hit is on the initial page load. Furthermore, caching may lead to outdated content being served, making it difficult to verify the correctness of your web application through Selenium tests.

How to Disable the Browser Cache in Selenium

Disabling the browser cache is no easy task. Selenium itself has little control over the browser cache, so we cannot directly disable it. It also does not help that other factors come into play and make things difficult, such as different browsers exhibiting different behaviors, and different versions of each browser causing certain methods to fail or become outdated.

There are various techniques that we have compiled here for you to try out, with a short description of how they work and what effect they have on the cache.


1. Setting cache size to 0

This sets the size of the cache to 0, ideally preventing the cache from ever forming.

from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disk-cache-size=0")  # <--

service = ChromeService(executable_path=ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=chrome_options)

Note that arguments can vary from browser to browser, so this may not work for browsers other than Chrome.
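
For Firefox specifically, a similar effect can often be achieved through browser preferences instead of a command-line flag. This sketch is not part of the original technique above; the preference names are Firefox settings that may change between browser versions, so treat them as an assumption to verify:

from selenium import webdriver

firefox_options = webdriver.FirefoxOptions()
# Hypothetical cache-disabling preferences; verify against your
# Firefox version before relying on them.
firefox_options.set_preference("browser.cache.disk.enable", False)
firefox_options.set_preference("browser.cache.memory.enable", False)

# driver = webdriver.Firefox(options=firefox_options)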

2. Use incognito mode

Activating incognito mode disables many things like cache and cookies.

from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--incognito")  # <--

service = ChromeService(executable_path=ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=chrome_options)

For Firefox, you will use the --private argument instead. Other browsers may have different versions of this argument too.

from webdriver_manager.firefox import GeckoDriverManager
from selenium.webdriver.firefox.service import Service as FirefoxService
from selenium import webdriver

firefox_options = webdriver.FirefoxOptions()
firefox_options.add_argument("--private")

service = FirefoxService(executable_path=GeckoDriverManager().install())
driver = webdriver.Firefox(service=service, options=firefox_options)

3. Forcing a hard Refresh

You can reload the page without the cache in most browsers by using a keyboard shortcut. For Chrome and Firefox, the shortcut is CTRL + SHIFT + R (Command instead of Ctrl on Apple devices). These keys must be sent on an element, however, so locate an important/big container element on the web page you are accessing (e.g. body), and send these keys on that element.

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver.get("https://example.com")
element = driver.find_element(By.XPATH, "//body")
element.send_keys(Keys.CONTROL, Keys.SHIFT, "R")

CTRL + F5 is another popular shortcut for refreshing that works on most browsers.

4. Automate the Clear Cache Action

One foolproof method of bypassing the cache is to simply automate the clear cache action in the browser, the way you normally would (as a human). On the Chrome browser, if you visit the URL chrome://settings/clearBrowserData, you will see a popup window for clearing browsing data.

All we have to do is automate the “click” action for that button. This can be done with the following code:

import time

from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.actions import mouse_button
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
service = ChromeService(executable_path=ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=chrome_options)

driver.get("chrome://settings/clearBrowserData")
element = driver.find_element(By.XPATH, "//settings-ui")

time.sleep(1)
element.send_keys(mouse_button.MouseButton.LEFT)
element.send_keys(Keys.ENTER)

time.sleep(3)
driver.quit()

This code looks a little complex because of the various imports and setup statements required. It is also a bit complicated due to the fact that Chrome uses a rather strange way of creating the popup, meaning we cannot directly access the button. Hence, we have to select the popup container (settings-ui), click on it (left mouse button) to focus it, and then press the Enter key to submit the clear cache request.

This marks the end of the “How to disable the Cache in Selenium” Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comment section below.

The post How to disable the Cache in Selenium appeared first on CodersLegacy.

Categories: FLOSS Project Planets

Announcing Brise theme

Planet KDE - Tue, 2023-12-19 05:00

Brise theme is yet another fork of Breeze. The name comes from Brise being both the French and German translation of Breeze.

As some people know, I’m contributing quite a lot to the Breeze style for the Plasma 6 release and I don’t intend to stop doing that. Both git repositories share the same git history, and I didn’t massively rename all the C++ classes from BreezeStyle to BriseStyle, to make it as easy as possible to backport commits from one repository to the other. There are also no plans to make this the new default style for Plasma.

My goal with this Qt style is to have a style that is not a big departure from the Breeze you know, but that does contain some small cosmetic changes. This serves as a place where I can experiment with new ideas and, if they prove popular, move them to Breeze.

Here is a breakdown of all the changes I made so far.

  • I made Brise coinstallable with Breeze, so that users can have both installed simultaneously. I kept the changes minimal to avoid merge conflicts while doing so.

  • I increased the border radius of all the elements from 3 pixels to 5 pixels. This value is configurable between small (3 pixels), medium (5 pixels) and large (7 pixels). A merge request was opened in Breeze and might make it into Plasma 6.1. The only difference is that in Breeze the default will likely remain 3 pixels for the time being.

Cute buttons and frames with 5 pixels border radius

  • Add a separator between the search field and the title in the standard KDE config windows, which serves as an extension of the separator between the list of settings categories and the settings page. This is mostly to match System Settings and other Kirigami applications. There is a pending merge request for this in Breeze as well.
  • A new tab style that removes the blue line from the active tab and introduces other small changes. Non-editable tabs now also fill the entire horizontal space available. I’m not completely happy with the look yet, so no merge request has been submitted to Breeze.

Separator in the toolbar and the new tabs

  • Remove outlines from menu and combobox items. My goal is to go in the same direction as KirigamiAddons.RoundedItemDelegate.

Menu without outlines

  • Ensure that all the controls have the same height. Currently a small discrepancy in height is noticeable when they are in the same row. The patch is still a bit hacky and needs some wider testing on a large range of apps to ensure no regressions, but it is also an improvement I will definitely submit upstream once I feel it’s ready.

Here, in these two screenshots, every control is 35 pixels high.

Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar day 19 - ECA Commerce

Planet Drupal - Tue, 2023-12-19 02:00

Welcome back to the Drupal Advent calendar for another door opening! Today Nic Laflin (nicxvan) is here to tell us about the ECA Commerce module.

Today's module integrates two important Drupal ecosystems: Commerce and ECA!

In Drupal 7, if you were building a commerce site, the Rules module was essential for setting up notifications and pricing changes. Rules had a rich ecosystem, and you could react to almost any event, check some conditions, and perform some action in Drupal. Even though Rules is currently available for Drupal 9 and 10, the jump to 8 took some time and the ecosystem lost some of...

Categories: FLOSS Project Planets

François Marier: Filtering your own spam using SpamAssassin

Planet Debian - Tue, 2023-12-19 01:20

I know that people rave about GMail's spam filtering, but it didn't work for me: I was seeing too many false positives. I personally prefer to see some false negatives (i.e. letting some spam through), but to reduce false positives as much as possible (and ideally have a way to tune this).

Here's the local SpamAssassin setup I have put together over many years. In addition to the parts I describe here, I also turn off greylisting on my email provider (KolabNow) because I don't want to have to wait for up to 10 minutes for a "2FA" email to go through.

This setup assumes that you download all of your emails to your local machine. I use fetchmail for this, though similar tools should work too.

Three tiers of emails

The main reason my setup works for me, despite my receiving hundreds of spam messages every day, is that I split incoming emails into three tiers via procmail:

  1. not spam: delivered to inbox
  2. likely spam: quarantined in a soft_spam/ folder
  3. definitely spam: silently deleted

I only ever have to review the likely spam tier for false positives, which is on the order of 10-30 spam emails a day. I never even see the hundreds that are silently deleted due to a very high score.

This is implemented based on a threshold in my .procmailrc:

# Use spamassassin to check for spam
:0fw: .spamassassin.lock
| /usr/bin/spamassassin

# Throw away messages with a score of > 12.0
:0
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*\*\*
/dev/null

:0:
* ^X-Spam-Status: Yes
$HOME/Mail/soft_spam/

# Deliver all other messages
:0:
${DEFAULT}
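
The three tiers boil down to simple score thresholds. As a plain Python sketch of the routing logic (illustrative only, not code from the actual setup; 5.0 is SpamAssassin's default required_hits and 12.0 the deletion threshold used here):

def classify_message(spam_score):
    """Route a message into one of the three tiers.

    Mirrors the procmail recipe: very high scores are silently
    dropped, anything SpamAssassin flags as spam is quarantined,
    and the rest is delivered normally.
    """
    if spam_score >= 12.0:
        return "/dev/null"    # definitely spam: silently deleted
    if spam_score >= 5.0:
        return "soft_spam/"   # likely spam: quarantined for review
    return "inbox"            # not spam: delivered to inbox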

I also use the following ~/.muttrc configuration to easily report false negatives/positives and examine my likely spam folder via a shortcut in mutt:

unignore X-Spam-Level
unignore X-Spam-Status

macro index S "c=soft_spam/\n" "Switch to soft_spam"

# Tell mutt about SpamAssassin headers so that I can sort by spam score
spam "X-Spam-Status: (Yes|No), (hits|score)=(-?[0-9]+\.[0-9])" "%3"
folder-hook =soft_spam 'push ol'
folder-hook =spam 'push ou'

# <Esc>d = de-register as non-spam, register as spam, move to spam folder.
macro index \ed "<enter-command>unset wait_key\n<pipe-entry>spamassassin -r\n<enter-command>set wait_key\n<save-message>=spam\n" "report the message as spam"

# <Esc>u = unregister as spam, register as non-spam, move to inbox folder.
macro index \eu "<enter-command>unset wait_key\n<pipe-entry>spamassassin -k\n<enter-command>set wait_key\n<save-message>=inbox\n" "correct the false positive (this is not spam)"

Custom SpamAssassin rules

In addition to the default ruleset that comes with SpamAssassin, I've also accrued a number of custom rules over the years.

The first set comes from the (now defunct) SpamAssassin Rules Emporium. The second set is the one that backs bugs.debian.org and lists.debian.org. Note this second one includes archived copies of some of the SARE rules and so I only use some of the rules in the common/ directory.

Finally, I wrote a few custom rules of my own based on specific kinds of emails I have seen slip through the cracks. I haven't written any of those in a long time and I suspect some of my rules are now obsolete. You may want to do your own testing before you copy these outright.

In addition to rules to match more spam, I've also written a ruleset to remove false positives in French emails coming from many of the above custom rules. I also wrote a rule to give a bonus to any email that comes with a patch:

describe FM_PATCH Includes a patch
body FM_PATCH /\bdiff -pruN\b/
score FM_PATCH -1.0

since it's not very common in spam emails

SpamAssassin settings

When it comes to my system-wide SpamAssassin configuration in /etc/spamassassin/, I enable the following plugins:

loadplugin Mail::SpamAssassin::Plugin::AntiVirus
loadplugin Mail::SpamAssassin::Plugin::AskDNS
loadplugin Mail::SpamAssassin::Plugin::ASN
loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold
loadplugin Mail::SpamAssassin::Plugin::Bayes
loadplugin Mail::SpamAssassin::Plugin::BodyEval
loadplugin Mail::SpamAssassin::Plugin::Check
loadplugin Mail::SpamAssassin::Plugin::DKIM
loadplugin Mail::SpamAssassin::Plugin::DNSEval
loadplugin Mail::SpamAssassin::Plugin::FreeMail
loadplugin Mail::SpamAssassin::Plugin::FromNameSpoof
loadplugin Mail::SpamAssassin::Plugin::HashBL
loadplugin Mail::SpamAssassin::Plugin::HeaderEval
loadplugin Mail::SpamAssassin::Plugin::HTMLEval
loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch
loadplugin Mail::SpamAssassin::Plugin::ImageInfo
loadplugin Mail::SpamAssassin::Plugin::MIMEEval
loadplugin Mail::SpamAssassin::Plugin::MIMEHeader
loadplugin Mail::SpamAssassin::Plugin::OLEVBMacro
loadplugin Mail::SpamAssassin::Plugin::PDFInfo
loadplugin Mail::SpamAssassin::Plugin::Phishing
loadplugin Mail::SpamAssassin::Plugin::Pyzor
loadplugin Mail::SpamAssassin::Plugin::Razor2
loadplugin Mail::SpamAssassin::Plugin::RelayEval
loadplugin Mail::SpamAssassin::Plugin::ReplaceTags
loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
loadplugin Mail::SpamAssassin::Plugin::SpamCop
loadplugin Mail::SpamAssassin::Plugin::TextCat
loadplugin Mail::SpamAssassin::Plugin::TxRep
loadplugin Mail::SpamAssassin::Plugin::URIDetail
loadplugin Mail::SpamAssassin::Plugin::URIEval
loadplugin Mail::SpamAssassin::Plugin::VBounce
loadplugin Mail::SpamAssassin::Plugin::WelcomeListSubject
loadplugin Mail::SpamAssassin::Plugin::WLBLEval

Some of these require extra helper packages or Perl libraries to be installed. See the comments in the relevant *.pre files.

My ~/.spamassassin/user_prefs file contains the following configuration:

required_hits 5
ok_locales en fr

# Bayes options
score BAYES_00 -4.0
score BAYES_40 -0.5
score BAYES_60 1.0
score BAYES_80 2.7
score BAYES_95 4.0
score BAYES_99 6.0
bayes_auto_learn 1
bayes_ignore_header X-Miltered
bayes_ignore_header X-MIME-Autoconverted
bayes_ignore_header X-Evolution
bayes_ignore_header X-Virus-Scanned
bayes_ignore_header X-Forwarded-For
bayes_ignore_header X-Forwarded-By
bayes_ignore_header X-Scanned-By
bayes_ignore_header X-Spam-Level
bayes_ignore_header X-Spam-Status

as well as manual score reductions due to false positives, and manual score increases to help push certain types of spam emails over the 12.0 definitely spam threshold.

Finally, I have the FuzzyOCR package installed since it has occasionally flagged some spam that other tools had missed. It is a little resource intensive though and so you may want to avoid this one if you are filtering spam for other people.

As always, feel free to leave a comment if you do something else that works well and that's not included in my setup. This is a work-in-progress.

Categories: FLOSS Project Planets

Anarcat: (Re)introducing screentest

Planet Python - Mon, 2023-12-18 22:46

I have accidentally rewritten screentest, an old X11/GTK2 program that I was previously using to, well, test screens.

Screentest is dead

It was removed from Debian in May 2023 but had already missed two releases (Debian 11 "bullseye" and 12 "bookworm") due to release critical bugs. The stated reason for removal was:

The package is orphaned and its upstream is no longer developed. It depends on gtk2, has a low popcon and no reverse dependencies.

So I had little hope of seeing this program back in Debian. The git repository shows little activity, the last commit being two years ago. Interestingly, I do not quite remember everything it was testing, but I do remember using it to find dead pixels, confirm native resolution, and do various pixel-peeping. Here's a screenshot of one of the screentest screens:

Now, I think it's safe to assume this program is dead and buried, and anyways I'm running wayland now, surely there's something better?

Well, no. Of course not. Someone would know about it and tell me before I go on a random coding spree in a fit of procrastination... riiight? At least, the Debconf video team didn't seem to know of any replacement. They actually suggested I just "invoke gstreamer directly" and "embrace the joy of shell scripting".

Screentest reborn

So, I naively did exactly that and wrote a horrible shell script. Then I realized the next step was to write a command-line parser and monitor geometry guessing, and thought "NOPE, THIS IS WHERE THE SHELL STOPS", and rewrote the whole thing in Python.

Now, screentest lives as a ~400-line Python script, half of which is unit test data and command-line parsing.

Why screentest

Some smarty pants is going to complain and ask why the heck one would need something like that (and, well, someone already did), so maybe I can lay down a list of use cases:

  • testing color output, in broad terms (answering the question of "is it just me or is this projector really yellow?")

  • testing focus and keystone ("this looks blurry, can you find a nice sharp frame in that movie to adjust focus?")

  • testing for native resolution and sharpness ("does this projector really support 4k for 30$? that sounds like bullcrap")

  • looking for dead pixels ("i have a new monitor, i hope it's intact")

What does screentest do?

Screentest displays a series of "patterns" on screen. The list of patterns is actually hardcoded in the script, copy-pasted from this list from the videotestsrc gstreamer plugin, but you can pass any pattern supported by your gstreamer installation with --patterns. A list of patterns relevant to your installation is available with the gst-inspect-1.0 videotestsrc command.

By default, screentest goes through all patterns. Each pattern runs indefinitely until you close the window, then the next pattern starts.

You can restrict to a subset of patterns, for example this would be a good test for dead pixels:

screentest --patterns black,white,red,green,blue

This would be a good sharpness test:

screentest --patterns pinwheel,spokes,checkers-1,checkers-2,checkers-4,checkers-8

A good generic test is the classic SMPTE color bars and is the first in the list, but you can run only that test with:

screentest --patterns smpte

(I will mention, by the way, that as a system administrator with decades of experience, it is nearly impossible to type SMPTE without first typing SMTP and re-typing it again a few times before I get it right. I fully expect this post to have numerous typos.)

Here's an example of the SMPTE pattern from Wikipedia:

For multi-monitor setups, screentest also supports specifying which output to use for the native resolution, with --output. Failing that, it will look at the available outputs and use the first one it finds. If it fails to find anything, you can specify a resolution with --resolution WIDTHxHEIGHT.

I have tried to make it go full screen by default, but stumbled on a bug in Sway that crashes gst-launch. If your Wayland compositor supports it, you can possibly enable full screen with --sink waylandsink fullscreen=true. Otherwise it will create a new window that you will have to make fullscreen yourself.

For completeness, there's also an --audio flag that will emit the classic "drone", a sine wave at 440Hz at 40% volume (via the audiotestsrc gstreamer plugin). And there's an --overlay-name option to show the pattern name, in case you get lost and want to start with one of them again.

How this works

Most of the work is done by gstreamer. The script merely generates a pipeline and calls gst-launch to show the output. That both limits what it can do and makes it much easier to use than figuring out gst-launch yourself.
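To give a flavor of that approach, here's a minimal Python sketch of "generate a pipeline, hand it to gst-launch". This is a hypothetical reconstruction, not the actual screentest code; the function names (build_pipeline, show_patterns) and defaults are made up for illustration:

```python
import shlex
import subprocess


def build_pipeline(pattern, width, height):
    """Build the gst-launch-1.0 command line for a single test pattern."""
    caps = f"video/x-raw,width={width},height={height}"
    return ["gst-launch-1.0"] + shlex.split(
        f"videotestsrc pattern={pattern} ! {caps} ! autovideosink"
    )


def show_patterns(patterns, width=1920, height=1080, dry_run=False):
    """Show each pattern in turn; gst-launch blocks until its window is closed."""
    for pattern in patterns:
        cmd = build_pipeline(pattern, width, height)
        print(" ".join(cmd))  # echo the command being run, as screentest does
        if not dry_run:
            subprocess.run(cmd, check=False)


if __name__ == "__main__":
    show_patterns(["smpte", "black", "white"], dry_run=True)
```

Running the sketch with dry_run=True just prints the gst-launch-1.0 invocations, which is itself a decent way to learn what the pipelines look like before running them for real.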

There might be some additional patterns that could be useful, but I think those are better left to gstreamer. I, for example, am somewhat nostalgic of the Philips circle pattern that used to play for TV stations that were off-air in my area. But that, in my opinion, would be better added to the gstreamer plugin than into a separate thing.

The script shows which command is being run, so it's a good introduction to gstreamer pipelines. Advanced users (and the video team) will possibly not need screentest and will design their own pipelines with their own tools.

I've previously worked with ffmpeg pipelines (in another such procrastinated coding spree, video-proxy-magic), and I found gstreamer more intuitive, even though it might be slightly less powerful.

In retrospect, I should probably have picked a new name, to avoid clashing with the namespace already used by the project, which is now on GitHub. Who knows, it might come back to life after this blog post; it would not be the first time.

For now, the project lives alongside the rest of my scripts collection, but if there's sufficient interest, I might move it to its own git repository. Comments, feedback, contributions are as usual welcome. And naturally, if you know something better for this kind of stuff, I'm happy to learn more about your favorite tool!

So now I have finally found something to test my projector, which will likely confirm what I've already known all along: that it's kind of a piece of crap and I need to get a proper one.

Categories: FLOSS Project Planets


James Bennett: Running async tests in Python

Planet Python - Mon, 2023-12-18 20:21

This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.

A-sync-ing feeling

Async Python can be useful in the right situation, but one of the tricky things about it is that it requires a bit more effort to run than normal synchronous …

Read full entry

Categories: FLOSS Project Planets

Dirk Eddelbuettel: tinythemes 0.0.1 at CRAN: New Package

Planet Debian - Mon, 2023-12-18 19:28

Delighted to announce a new package that arrived on CRAN today: tinythemes. It repackages the theme_ipsum_rc() function by Bob Rudis from his hrbrthemes package in a zero (added) dependency way. A simple example (also available as a demo inside the package in the next update) contrasts the default style (on the left) with the one added by this package (on the right):

The GitHub repo also shows this little example: total dependencies of hrbrthemes over what ggplot2 installs:

> db <- tools::CRAN_package_db()
> deps <- tools::package_dependencies(c("ggplot2", "hrbrthemes"), recursive=TRUE, db=db)
> Filter(\(x) x != "ggplot2", setdiff(deps[[2]], deps[[1]]))
 [1] "extrafont"         "knitr"             "rmarkdown"         "htmltools"
 [5] "tools"             "gdtools"           "extrafontdb"       "Rttf2pt1"
 [9] "Rcpp"              "systemfonts"       "gfonts"            "curl"
[13] "fontquiver"        "base64enc"         "digest"            "ellipsis"
[17] "fastmap"           "evaluate"          "highr"             "xfun"
[21] "yaml"              "bslib"             "fontawesome"       "jquerylib"
[25] "jsonlite"          "stringr"           "tinytex"           "cachem"
[29] "memoise"           "mime"              "sass"              "fontBitstreamVera"
[33] "fontLiberation"    "shiny"             "crul"              "crayon"
[37] "stringi"           "cpp11"             "urltools"          "httpcode"
[41] "fs"                "rappdirs"          "httpuv"            "xtable"
[45] "sourcetools"       "later"             "promises"          "commonmark"
[49] "triebeard"
>

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

sudo without a `setuid` binary or SSH over a UNIX socket

Planet KDE - Mon, 2023-12-18 18:00

In this post, I will detail how to replace sudo (a setuid binary) by using SSH over a local UNIX socket.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

This is related to the work of the Confined Users SIG in Fedora.

Why bother?

The main benefit of this approach is that it enables root access to the host from any unprivileged toolbox / distrobox container. This is particularly useful on Fedora Atomic desktops (Silverblue, Kinoite, Sericea, Onyx) or Universal Blue (Bluefin, Bazzite) for example.

As a side effect of this setup, we also get the following security advantages:

  • No longer rely on sudo as a setuid binary for privileged operations.
  • Access control via a physical hardware token (here a Yubikey) for each privileged operation.
Setting up the server

Create the following systemd units:

/etc/systemd/system/sshd-unix.socket:

[Unit]
Description=OpenSSH Server Unix Socket
Documentation=man:sshd(8) man:sshd_config(5)

[Socket]
ListenStream=/run/sshd.sock
Accept=yes

[Install]
WantedBy=sockets.target

/etc/systemd/system/sshd-unix@.service:

[Unit]
Description=OpenSSH per-connection server daemon (Unix socket)
Documentation=man:sshd(8) man:sshd_config(5)
Wants=sshd-keygen.target
After=sshd-keygen.target

[Service]
ExecStart=-/usr/sbin/sshd -i -f /etc/ssh/sshd_config_unix
StandardInput=socket

Create a dedicated configuration file /etc/ssh/sshd_config_unix:

# Deny all non key based authentication methods
PermitRootLogin prohibit-password
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no

# Only allow access for specific users
AllowUsers root tim

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server

Enable and start the new socket unit:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sshd-unix.socket

Add your SSH Key to /root/.ssh/authorized_keys.

Setting up the client

Install socat and use the following snippet in ~/.ssh/config:

Host host.local
    User root
    # We use `run/host/run` instead of `/run` to transparently work in and out of containers
    ProxyCommand socat - UNIX-CLIENT:/run/host/run/sshd.sock
    # Path to your SSH key. See: https://tim.siosm.fr/blog/2023/01/13/openssh-key-management/
    IdentityFile ~/.ssh/keys/localroot
    # Force TTY allocation to always get an interactive shell
    RequestTTY yes
    # Minimize log output
    LogLevel QUIET

Test your setup:

$ ssh host.local
[root@phoenix ~]#

Shell alias

Let’s create a sudohost shell “alias” (function) that you can add to your Bash or ZSH config to make using this command easier:

# Get an interactive root shell or run a command as root on the host
sudohost() {
    if [[ ${#} -eq 0 ]]; then
        ssh host.local "cd \"${PWD}\"; exec \"${SHELL}\" --login"
    else
        ssh host.local "cd \"${PWD}\"; exec \"${@}\""
    fi
}

Test the alias:

$ sudohost id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ sudohost pwd
/var/home/tim
$ sudohost ls
Desktop Downloads
...

We’ll keep a distinct alias for now as we’ll still have a need for the “real” sudo in our toolbox containers.

Security?

As-is, this setup is basically a free local root for anything running under your current user that has access to your SSH private key. This is however likely already the case on most developers' workstations if you are part of the wheel, sudo or docker groups, as any code running under your user can edit your shell config and set a backdoored alias for sudo, or run arbitrary privileged containers via Docker. sudo itself is not a security boundary as commonly configured by default.

To truly increase our security posture, we would instead need to remove sudo (and all other setuid binaries) and run our session under a fully unprivileged, confined user, but that’s for a future post.

Setting up U2F authentication with an sk-based SSH key-pair

To make it more obvious when commands are run as root, we can set up SSH authentication using U2F, here with a Yubikey as an example. While this, by itself, does not, strictly speaking, increase the security of this setup, it makes it harder to run commands without you being somewhat aware of it.

First, we need to figure out which algorithms are supported by our Yubikey:

$ lsusb -v 2>/dev/null | grep -A2 Yubico | grep "bcdDevice" | awk '{print $2}'

If the value is 5.2.3 or higher, then we can use ed25519-sk, otherwise we’ll have to use ecdsa-sk to generate the SSH key-pair:

$ ssh-keygen -t ed25519-sk
# or
$ ssh-keygen -t ecdsa-sk

Add the new sk-based SSH public key to /root/.ssh/authorized_keys.

Update the server configuration to only accept sk-based SSH key-pairs:

/etc/ssh/sshd_config_unix:

# Only allow sk-based SSH key-pairs authentication methods
PubkeyAcceptedKeyTypes sk-ecdsa-sha2-nistp256@openssh.com,sk-ssh-ed25519@openssh.com
...

Restricting access to a subset of users

You can also further restrict the access to the UNIX socket by configuring classic user/group UNIX permissions:

/etc/systemd/system/sshd-unix.socket:

...
[Socket]
...
SocketUser=tim
SocketGroup=tim
SocketMode=0660
...

Then reload systemd’s configuration and restart the socket unit.

Next steps: Disabling sudo

Now that we have a working alias to run privileged commands, we can disable sudo access for our user.

Important backup / pre-requisite step

Make sure that you have a backup and are able to boot from a LiveISO in case something goes wrong.

Set a strong password for the root account. Make sure that you can locally log into the system via a TTY console.

If you have the classic sshd server enabled and listening on the network, make sure to disable remote login as root or password logins.

Removing yourself from the wheel / sudo groups

Open a terminal running as root (i.e. don't use sudo for those commands) and remove your user from the wheel or sudo groups using:

$ usermod -rG wheel tim

You can also update the sudo config to remove access for users that are part of the wheel group:

# Comment / delete this line
%wheel ALL=(ALL) ALL

Removing the setuid binaries

To fully benefit from the security advantage of this setup, we need to remove the setuid binaries (sudo and su).

If you can, uninstall sudo and su from your system. This is usually not possible due to package dependencies (su is part of util-linux on Fedora).

Another option is to remove the setuid bit from the sudo and su binaries:

$ chmod u-s $(which sudo)
$ chmod u-s $(which su)

You will have to re-run those commands after each update on classic systems.

Setting this up for Fedora Atomic desktops is a little bit different as /usr is read only. This will be the subject of an upcoming blog post.

Conclusion

Like most of the time with security, this is not a silver bullet that will make your system "more secure" (TM). I have been working on this setup as part of my investigation into reducing our reliance on setuid binaries and figuring out alternatives for common use cases.

Let me know if you found this interesting as that will likely motivate me to write the next part!

References
Categories: FLOSS Project Planets

PreviousNext: Improving Drupal with the help of your clients

Planet Drupal - Mon, 2023-12-18 16:44

Our client, ServiceNSW, is a committed open-source contributor, working closely with us to improve their customer experience while sharing these advances with the Drupal community.

by adam.bramley / 19 December 2023

How is client-backed contribution made possible?

It helps when you work with a client that understands the value of contributing development time back to the Drupal community. ServiceNSW are members of Drupal and have co-presented with us at DrupalSouth, so they’re truly invested.

Solutions to client challenges, such as core patches or contributor modules, require upfront work. Doing this in a community setting is far more beneficial, allowing everyone to contribute and further improve it. That’s why SNSW recognises the future benefits of investing in the work done now. 

We also put a lot of focus on performance and security. This means SNSW receives the latest upgrades for both Drupal core and contributed modules, helping move issues along and ensuring they have the latest and greatest, including being one of our first clients to move to Drupal 10.1. In fact, during the lead-up to the release of Drupal 10.1, we committed over a dozen large core issues in collaboration with the SNSW development team.

The patches we worked on pre Drupal 10.1 upgrade

Over a period of three months, in the lead-up to Drupal 10.1, we targeted patches that were large and/or conflicted with other patches we were using. These were becoming increasingly hard to maintain. SNSW understood that these fixes would be a net gain to developer productivity and an improvement for the community.

  1. Issue #3198868: Add delay to queue suspend
  2. Issue #2867001: Don't treat suspending of a queue as erroneous
  3. Issue #2745179: Uncaught exception in link formatter if a link field has malformed data (a 7-year-old bug!)
  4. Issue #3059026: Catch and handle exceptions in PathFieldItemList
  5. Issue #3311595: Html::transformRootRelativeUrlsToAbsolute() replaces "\r\n" with " \n"
  6. Issue #2859042: Impossible to update an entity revision if the field value you are updating matches the default revision
  7. Issue #2791693: Remove sample date from date field error message and title attribute (another 7 year old one!)
  8. Issue #2831233: Field tokens for "historical data" fields (revisions) contain a hyphen, breaking twig templates and throwing an assertion error
  9. Issue #3007424: Multiple usages of FieldPluginBase::getEntity do not check for NULL, leading to WSOD
Revisions everywhere!

One of our largest pieces of work was Implementing a generic revision UI.

Originally opened in 2014, this issue paved the way for one of the most sought-after features from our client - having Revisions for all entity types and a consistent user experience for them.

This was originally committed to the SNSW codebase in July of 2018 using this patch when we added a Block Content Type for a Notice feature on the website.

After ~3.5 years, ~250 comments, and a huge effort from PreviousNext and SNSW developers, along with many other community members, it was committed to 10.1.x.

This spawned several other core issues for other entity types:

  • Block Content - This was also committed to 10.1 alpha.
  • Media - which is committed and will be available in 10.2.0!
  • Taxonomy terms - which is currently RTBC and looking promising for 10.3!

Plus contributed projects to extend contributed module entity types with revisioning support, such as Micro-content Revision UI.

The patches committed to Drupal 10.1 that we were able to remove

With all this pre-work, we were well positioned when the 10.1 upgrade came around. As you may have noticed, we like to get the ball rolling early, and we had a Pull Request going for the 10.1 upgrade in late June (the day 10.1.0 was released, in fact). This allowed us to figure out which modules needed help, what patches needed re-rolling, and to catch any bugs early.

It wasn't until mid-August that the PR was finally merged, with multiple developers touching it every now and then whenever there was some movement.

Here's a full list of Drupal core patches we were able to remove, thanks to the contributions from SNSW.

  1. Issue #2350939: Implement a generic revision UI
  2. Issue #2809291: Add "edit block $type" permissions
  3. Issue #1984588: Add Block Content revision UI
  4. Issue #3315042: Remaining tasks for "edit block $type" permissions
  5. Issue #2859042: Impossible to update an entity revision if the field value you are updating matches the default revision
  6. Issue #3311595: Html::transformRootRelativeUrlsToAbsolute() replaces "\r\n" with " \n"
  7. Issue #3007424: Multiple usages of FieldPluginBase::getEntity do not check for NULL, leading to WSOD
  8. Issue #2831233: Field tokens for "historical data" fields (revisions) contain a hyphen, breaking twig templates and throwing an assertion error
  9. Issue #3059955: It is possible to overflow the number of items allowed in Media Library
  10. Issue #3123666: Custom classes for pager links do not work with Claro theme
  11. Issue #2867001: Don't treat suspending of a queue as erroneous
  12. Issue #3198868: Add delay to queue suspend
  13. Issue #2984504: Access to 'Reset to alphabetical' denied for users without administer permission
  14. Issue #3309157: RevisionLogInterface is typehinted as always returning entity/ids, but cannot guarantee set/existing values
  15. Issue #2634022: ViewsPluginInterface::create() inherits from nothing, breaking PHPStan-strict
  16. Issue #3349507: DateTimePlus::createFromDateTime should accept DateTimeInterface
Service NSW, a true Drupal partner

Service NSW has (at the time of writing this post) contributed to 19 Drupal core issues that were committed over the past three months.

We look forward to continuing this incredible partnership and contributing in the coming months!

Categories: FLOSS Project Planets

Drupal Association blog: The DrupalCon Nonprofit Summit is back in 2024: Unlocking the Power of Drupal for Social Good

Planet Drupal - Mon, 2023-12-18 16:16

When I joined the Drupal Association in July, I underestimated how moved I would be by the collective power of the community. A throwback to my organizing roots, I reveled in the eclectic excitement surrounding the innovation and collaboration of the application, evolution, and marketing of Drupal.

I remember discovering open source software myself, over 10 years ago. The worker’s center I worked for housed an instance of CiviCRM in Drupal and we used it to track our members — as we served a vulnerable population, it was paramount to keep the data safe and away from clandestine subpoenas and prying eyes.

Drupal responds to a fundamental need in the nonprofit sector – the ability to own, control, and share data. Joining the Drupal Association as the Director of Philanthropy allows me to work within the nonprofit sector to leverage the power of Drupal for greater impact, and I yearned for an opportunity to collaborate with others with the same perspective.

The Drupal Association was remiss to let the Nonprofit Summit lapse at DrupalCon Pittsburgh, but… I am thrilled to reintroduce the Nonprofit Summit at DrupalCon Portland!

The network of nonprofits in the Drupal Community is strong and vibrant and has been a joy to work with and learn from. Judging by the extraordinary talent represented by its organizers, Jess Snyder and Johanna Bates, the Nonprofit Summit will be a dynamic and inspiring one-day event bringing together passionate professionals from the nonprofit sector to delve into the transformative potential of Drupal.

Join us for a day of discovery, collaboration, and inspiration as we collectively unlock the full potential of Drupal for social good. Facilitated discussions, round table group sessions, and an opportunity to learn and inspire one another are just a few of the features we plan to bring to the summit this year.

The Nonprofit Summit will be on Thursday, 9 May, the 4th day of DrupalCon, after three days of expert speakers, networking, and contribution. Tickets go on sale 6 February. And we’re especially pleased to announce that the Drupal Association will subsidize the cost of tickets for those in the nonprofit sector, offering special pricing for the conference and summit! The conference rate for nonprofits is $395 and includes the summit.

Mark your calendars, spread the word, and get ready to be part of a community dedicated to making a lasting impact. The Nonprofit Drupal Summit is back and ready to shape the future of digital philanthropy. See you there!

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #429 - The Drupal Association Board

Planet Drupal - Mon, 2023-12-18 14:00

Today we are talking about the Drupal Association Board, Its Strategic Initiatives, and The Future of Drupal with guest Baddý Sonja Breidert. We’ll also cover Advent Calendar as our module of the week.

For show notes visit: www.talkingDrupal.com/429

Topics
  • Former member of Board of Drupal Association
  • What does the board do
  • How does the board operate
  • Are there term limits
  • How does someone get on the board
  • Strategic Initiatives
    • Innovation
    • Marketing
    • Fundraising
  • Now that you are no longer on the board what’s next
  • CEO of 1xInternet
  • How did you get started with Drupal
Resources Guests

Baddý Sonja Breidert - 1xinternet.de/en baddysonja

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Ron Northcutt - community.appsmith.com rlnorthcutt

MOTW Correspondent

Martin Anderson-Clutz - @mandclu Advent Calendar

  • Brief description:
    • Have you ever wanted to reveal content a day-at-a-time, in an interactive advent calendar? There’s a module for that.
  • Brief history
    • How old: created less than a month ago in Nov 2023 by listener James Shields, whose drupal.org username is lostcarpark
    • Versions available: 1.0.0-beta3 release, which works with Drupal 10.1 and newer
  • Maintainership
    • Actively maintained, latest release made earlier today
    • Test coverage
    • Number of open issues: 5, 3 of which are bugs, but all but one are now marked as fixed
  • Usage stats:
    • 6 sites
  • Module features and usage
    • James actually created a Drupal advent calendar a year ago, on his website lostcarpark.com. The idea was to showcase a new module every day, similar to advent calendars that provide a chocolate or a toy each day, hidden behind a cardboard door
    • James’ initial version displayed the content in a traditional calendar format, using the Calendar View module. What he really wanted, however, was a way to present the content using clickable doors to reveal new entries
    • The new Advent Calendar module provides a new view display, so you can configure what content type or other filters to apply, and use fields to specify what information to show
    • The module uses a Single Directory Component for display, hence the 10.1 requirement
    • There is also an “Advent Calendar Quickstart” submodule that sets up everything for you, including a content type, view, and 24 nodes to populate it for you
    • Each site visitor gets to “open” the door to new content as it is published each day. For authenticated users, which doors have been opened is stored as user data, and for anonymous users it’s kept in local storage via Javascript
    • In addition to this being an interesting module in its own right, the advent calendar James has created this year is also a community effort. He’s managed to enlist a wide variety of contributors to write about modules or aspects of the Drupal community that they’re passionate about, so it’s a great way to up your Drupal game. You can open a new door yourself every day at https://lostcarpark.com/advent-calendar-2023
Categories: FLOSS Project Planets

XWayland Video Bridge 0.4

Planet KDE - Mon, 2023-12-18 10:42

An updated stable release of XWayland Video Bridge is out now for packaging.

https://download.kde.org/stable/xwaylandvideobridge/

sha256 ea72ac7b2a67578e9994dcb0619602ead3097a46fb9336661da200e63927ebe6

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

Changes

  • Also skip the switcher
  • Do not start in an X11 session and opt out of session management
Categories: FLOSS Project Planets

The Drop Times: Once Upon a Time...

Planet Drupal - Mon, 2023-12-18 10:19

Dear Readers,

It's that time of the year when everyone brushes off the cobwebs of the long 11 months to gather them in a corner of retrospection. We are searching out planners and diaries to create to-do lists, write down resolutions, and set goals, all while peering back at 2023 before we bid farewell.

In the grand narrative of the year, one acronym echoed across the globe, shaping industries and transforming the way we work: AI. And has it made my work easier? Certainly! It's no longer only about working hard but working smart. I sought the assistance of AI to write about Drupal, and interestingly, it offered me a story on Drupal in a very old-fashioned way.

"Once upon a time in the world of web development, an ambitious individual namely Dries Buytaert ventured into the creation of Drupal. Later, Dries along with a community of passionate individuals came together under the banner of open-source technology and their journey transformed Drupal into a powerful platform that empowered countless businesses and organizations to build and manage their online presence with ease."

Our world has expanded and contracted, all within the grasp of our fingertips. A robust online presence has become indispensable for the growth of any venture, and Drupal has emerged as a stalwart, now embodied in its latest iteration, Drupal 10.2. This update brings enhanced features, fortified security, and many opportunities for developers and users alike.

Milestones are not conquered in isolation; they are a testament to the collective effort. Hence, The DropTimes introduces a new segment dedicated to spotlighting the stories of organizations that embody innovation and unwavering commitment to the open-source community and Drupal. In the inaugural edition of Spotlight, we shine the light on SparkFabrik. Elma John, our sub-editor, combined SparkFabrik's rich history of success, open-source commitment, and vision for the future into a comprehensive feature.

In an exclusive interview by Kazima Abbas, Andrew Berry shares his experiences from Evolve Drupal Toronto, insights into Drupal's unique community spirit, and the story behind his contributions to Lullabot. Kazima also had the opportunity to correspond with Brian Perry, coordinator of API Client Initiative, to gain more insights about the initiative. The DropTimes was fortunate to publish Ignacio Díaz-Roncero Fraile's detailed overview of the Component-Based Design using Single Directory Components (SDC) in Drupal.

Fame often comes with a baggage of criticisms and opinions, and it's not unlikely for people to have polarising views. Recently, one such demonstration occupied the front page of Hacker News, followed by a revisit by Dries Buytaert to a 16-year-old blog post that announced his then start-up, Acquia, and I was able to capture the essence of that discussion in a recent article.

The program schedule for DrupalCon Portland 2024 has been unveiled, offering a comprehensive overview of the conference's daily activities. The Drupal France and Francophonie association has announced its eleventh DrupalCamp, which is set to take place at the Maison des Associations in Rennes from March 28 to March 30, 2024. Axess hosts a webinar on Drupal 10.2 Features & DrupalCon Lille Highlights on December 19, 2023. Also, look at the list of events that will keep the Drupalers engaged this week.

Meanwhile, after the announcement of Drupal 10.2, Adam Bramley has been appointed as a new co-maintainer in the Drupal Core. Drupal Association published blog posts announcing the addition of Lenny Moskalyk to the Drupal Board and progress updates on the EU Cyber Resiliency Act response.

There are more stories available out there. But the compulsion to limit the selection of stories is forcing us to put a hard break on further exploration—happy Drupaling folks.

To get timely updates, follow us on LinkedIn, Twitter and Facebook.

Thank you,

Sincerely

Alka Elizabeth
Sub-Editor, TheDropTimes

Categories: FLOSS Project Planets

Real Python: Enhance Your Flask Web Project With a Database

Planet Python - Mon, 2023-12-18 09:00

Adding a database to your Flask project comes with many advantages. By connecting a database, you’ll be able to store user information, monitor user interactions, and maintain dynamic site content.

In this tutorial, you’ll learn how to:

  • Hide secrets in environment variables
  • Formulate your database structure with models
  • Connect your Flask app with a database
  • Receive data from users with forms

By the end of this tutorial, you’ll have a robust understanding of how to connect your Flask app to a database and how to leverage this connection to expand your project’s capabilities.
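As a taste of the first bullet point, hiding secrets in environment variables usually boils down to reading configuration with `os.environ` instead of hard-coding it. Here's a minimal sketch; the `get_secret` helper and the `SECRET_KEY` name are illustrative, not part of the tutorial's actual code:

```python
import os

# Hypothetical helper: read a required secret from the environment.
# In a real Flask app, you'd feed the result into app.config.
def get_secret(name, default=None):
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Normally set in the shell or a .env file, never committed to source control
os.environ["SECRET_KEY"] = "dev-only-key"
print(get_secret("SECRET_KEY"))
```

Failing loudly when a required variable is missing, rather than falling back to an insecure default, is the design choice that keeps secrets out of your repository.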

Get Your Code: Click here to download the free source code for connecting a database to your Flask project.

Prerequisites

You’ll gain the most value from this tutorial if you’ve already created one or more Flask web projects yourself.

Also, you should be comfortable using the terminal and have basic knowledge of Python. Although it helps to know about virtual environments and pip, you’ll learn how to set everything up as you work through the tutorial.

Project Overview

In this tutorial, you’ll continue to work on an existing Flask project. The project at hand is a public message board.

You’ll start with an existing Flask project and then implement forms and a database. In the end, you can post and save messages:

In the demo video above, you get an impression of how you can interact with the message board. However, the functionality is very similar to other interactions with a web app. If you feel inspired to adjust the project to your needs, then this Flask project is the perfect starting ground for that.

In the next step, you’ll download the source code of the Flask boilerplate project. However, you’ll notice that the codebase is quite generic so that you can transfer the instructions of this tutorial into your own Flask project.

Get Started

In this section, you’ll download all the requirements that you need for this tutorial and set up the development environment. Generally, you can leverage this tutorial to expand any Flask project that you’re currently working on. However, if you want to follow along closely, then you should perform the steps outlined below.

Grab the Prerequisites

To hit the ground running, you’ll build upon an existing Flask boilerplate project. That way, you don’t need to create a Flask project from scratch. Instead, you can focus on the main objectives of this tutorial, like adding a database and working with forms.

The code that you need is already in place for you. All you need to do is download the source code by clicking the link below:

Get Your Code: Click here to download the free source code for connecting a database to your Flask project.

Alternatively, you can follow the Flask boilerplate project tutorial. Either way, you should end up with a folder structure that looks like this:

rp_flask_board/
│
└── board/
    │
    ├── static/
    │   └── styles.css
    │
    ├── templates/
    │   │
    │   ├── pages/
    │   │   ├── about.html
    │   │   └── home.html
    │   │
    │   ├── _navigation.html
    │   └── base.html
    │
    ├── __init__.py
    └── pages.py

Once you’ve got the folder structure for your Flask project in place, you can read on to prepare the development environment that you’ll need to work on your web app.

Prepare Your Development Environment

Before you continue working on your Flask project, it’s a good idea to create and activate a virtual environment. That way, you’re installing any project dependencies not system-wide but only in your project’s virtual environment.
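For illustration, the standard library's venv module can create such an environment programmatically, which is equivalent to running python -m venv venv in a terminal. This sketch builds a throwaway environment in a temporary directory just to show where the activation script ends up:

```python
import pathlib
import tempfile
import venv

# Create a throwaway virtual environment (illustrative only; in a real
# project you'd run `python -m venv venv` inside the project folder).
target = pathlib.Path(tempfile.mkdtemp()) / "venv"
venv.create(target, with_pip=False)

# The activation script lives in bin/ on macOS/Linux, Scripts/ on Windows
activate = target / "bin" / "activate"
if not activate.exists():
    activate = target / "Scripts" / "activate"
print(activate.exists())
```

Once activated, `pip install` puts packages into the environment's own site-packages directory instead of your system Python.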

Read the full article at https://realpython.com/flask-database/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Zato Blog: Automating telecommunications networks with Python and SFTP

Planet Python - Mon, 2023-12-18 07:00
Automating telecommunications networks with Python and SFTP 2023-12-18, by Dariusz Suchojad

In the realm of telecommunications, the Secure File Transfer Protocol (SFTP) serves as a critical mechanism for secure and reliable file exchange between different network components, devices, and systems, whether that means updating configurations, monitoring the network, exchanging customer data, or facilitating software updates. Python, in turn, is an ideal tool for automating telecommunications networks thanks to its readability and versatility.

Let's dive into how to employ the two effectively and efficiently using the Zato integration and automation platform.

Dashboard

The first step is to define a new SFTP connection in your Dashboard, as in the screenshots below.

The form lets you provide all the default options that apply to each SFTP connection - remote host, what protocol to use, whether file metadata should be preserved during transfer, logging level and other details that you would typically provide.

Simply fill it out with the same details that you would use for a command line-based SFTP connection.

Pinging

The next thing, right after the creation of a new connection, is to ping it to check if the server is responding.

Pinging opens a new SFTP connection and runs the ping command - in the screenshot above it was ls . - a practically no-op command whose sole purpose is to let the connection confirm that commands in fact can be executed, which proves the correctness of the configuration.

This will return either the details of why a connection could not be established or, if it was successful, the response time.

Cloud SFTP console

Having validated the configuration by pinging it, we can now execute SFTP commands straight in Dashboard from a command console:

Any SFTP command, or even a series of commands, can be sent and responses retrieved immediately. It is also possible to increase the logging level for additional SFTP protocol-level details.

This makes it possible to rapidly prototype file transfer functionality as a series of scripts that can be next moved as they are to Python-based services.

Python automation

Now, in Python, your API automation services have access to an extensive array of capabilities - from executing transfer commands individually or in batches to the usage of SFTP scripts previously created in your Dashboard.

Here is how Python can be used in practice:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MySFTPService(Service):
    def handle(self):

        # Connection to use
        conn_name = 'My SFTP Connection'

        # Get a handle to the connection object
        conn = self.out.sftp[conn_name].conn

        # Execute an arbitrary script with one or more SFTP commands, like in web-admin
        my_script = 'ls -la /remote/path'
        conn.execute(my_script)

        # Ping a remote server to check if it responds
        conn.ping()

        # Download an entry, possibly recursively
        conn.download('/remote/path', '/local/path')

        # Like .download but remote path must point to a file (exception otherwise)
        conn.download_file('/remote/path', '/local/path')

        # Makes the contents of a remote file available on output
        out = conn.read('/remote/path')

        # Uploads a local file or directory to remote path
        conn.upload('/local/path', '/remote/path')

        # Writes input data out to a remote file
        data = 'My data'
        conn.write(data, '/remote/path')

        # Create a new directory
        conn.create_directory('/path/to/new/directory')

        # Create a new symlink
        conn.create_symlink('/path/to/new/symlink')

        # Create a new hard-link
        conn.create_hardlink('/path/to/new/hardlink')

        # Delete an entry, possibly recursively, no matter what kind it is
        conn.delete('/path/to/delete')

        # Like .delete but path must be a directory
        conn.delete_directory('/path/to/delete')

        # Like .delete but path must be a file
        conn.delete_file('/path/to/delete')

        # Like .delete but path must be a symlink
        conn.delete_symlink('/path/to/delete')

        # Get information about an entry, e.g. modification time, owner, size and more
        info = conn.get_info('/remote/path')
        self.logger.info(info.last_modified)
        self.logger.info(info.owner)
        self.logger.info(info.size)
        self.logger.info(info.size_human)
        self.logger.info(info.permissions_oct)

        # A boolean flag indicating if path is a directory
        result = conn.is_directory('/remote/path')

        # A boolean flag indicating if path is a file
        result = conn.is_file('/remote/path')

        # A boolean flag indicating if path is a symlink
        result = conn.is_symlink('/remote/path')

        # List contents of a directory - items are in the same format that .get_info uses
        items = conn.list('/remote/path')

        # Move (rename) remote files or directories
        conn.move('/from/path', '/to/path')

        # An alias to .move
        conn.rename('/from/path', '/to/path')

        # Change mode of entry at path
        conn.chmod('600', '/path/to/entry')

        # Change owner of entry at path
        conn.chown('myuser', '/path/to/entry')

        # Change group of entry at path
        conn.chgrp('mygroup', '/path/to/entry')

Summary

Given how important SFTP is in telecommunications, having a convenient and easy way to automate it using Python is an essential ability in a network engineer's skillset.

Thanks to the SFTP connections in Zato, you can prototype SFTP scripts in Dashboard and employ them in API services right after that. To complement it, a full Python API is available for programmatic access to remote file servers.

Combined, the features make it possible to create scalable and reusable file transfer services in a quick and efficient manner using the most convenient programming language, Python.

Next steps
  • Click here to read more about using Python and Zato in telecommunications
  • Start the tutorial which will guide you how to design and build Python API services for automation and integrations
More insights
Categories: FLOSS Project Planets

LN Webworks: Why is Drupal the Top Choice for Big Organizations: Top 8 Reasons

Planet Drupal - Mon, 2023-12-18 04:32

The way people use the internet is changing, and they want websites to be faster, more personalized, user-friendly, and secure. The content on a website is crucial for its success. To keep up with the ever-evolving needs of customers, your business requires a web content management system (CMS) that fits the bill. When it comes to content-focused CMS solutions, Drupal for Large Organizations is the perfect fit. 

According to W3Techs, Drupal is the chosen CMS for 2.4% of all websites. It's not just for big enterprises; even small companies find Drupal and its development services highly useful. 

Categories: FLOSS Project Planets

Specbee: Strategic Drupal Partnerships: The Michael J Fox Foundation's Drupal Story

Planet Drupal - Mon, 2023-12-18 03:24
The magic often starts with a Drupal decision. Quite a line, but that’s what we all at Specbee firmly believe. We're confident the good folks at the Drupal Association would agree with us. They've even crafted a collaborative series called "Beyond the Build," sharing stories of projects implemented with Drupal. We were recently interviewed for an episode of this series, along with our client, The Michael J Fox Foundation for Parkinson’s Research, where we discussed their website redesign, the strategic selection of Drupal as their enabling technology, and the role Specbee played in shaping their transformative process. Keep scrolling for the whole scoop.

The Who’s Who

Let’s start by introducing the hosts and guests of the interview “Beyond the Build Episode 3: Specbee and The Michael J. Fox Foundation for Parkinson’s Research”.

  • Kelly Daleney: Director of Development, Drupal Association
  • Nathan Roach: Director of Marketing, Axelerant
  • Sean Keating: Vice President, Digital Strategies at The Michael J. Fox Foundation for Parkinson’s Research
  • Jim Barnthouse: Vice President, Sales and Marketing, Specbee

Click to watch the complete video.

Lights, Camera, Drupal

When Sean and his team set out to revamp their website, they recognized the critical need for a stable, reliable platform. Given that the website stands as the most significant tool in building, accessing, and growing the Michael J Fox Foundation community, ensuring a well-defined information architecture and intuitive user experience was absolutely essential. Drupal, with its open-source charm and vast community, caught their attention. In conversation with Kelly and Nathan, Sean reveals more about why Drupal was the chosen one for them.

Open Source Synergy

The Foundation's emphasis on making research accessible aligns seamlessly with Drupal's open-source nature. Open Access Publications and open data repositories are integral to Parkinson's research, and they operate with unwavering commitment to availability, transparency, and openness. The success of this collaboration underscores the powerful synergy between open-source technology and open science principles.

Community Connection

In addition to its open-source nature, The Michael J. Fox Foundation was drawn to Drupal due to its large, vibrant, and supportive community, providing a robust foundation for their digital initiatives. A large community not only ensured ongoing support but also facilitated continuous innovation, keeping their digital presence at the forefront of technological advancements. In contrast to other closed-source, proprietary communities that provided little opportunity to enhance their capabilities or find the right developers, Drupal stood out as a great advantage.

Accessibility Advantage

In the site revamp, Drupal's accessibility features took center stage, aligning with the foundation's goal to reach a diverse audience of donors, patients, families, and industry researchers in the Parkinson’s community.

User-Friendly Research

By leveraging Drupal’s flexibility, the foundation is making significant strides in simplifying the discovery and archiving of Parkinson’s research papers. This directly benefits industry researchers, contributing to a more efficient and streamlined process.

Choosing the Right Drupal Partner

As Sean discusses why Drupal is their preferred CMS, he also talks about the decision to team up with Specbee and enhance their collaboration. The Michael J. Fox Foundation also collaborates with Drupal agency Lullabot to maintain their Drupal website. He states, "Specbee is currently our top choice for working on projects and adding features to the website. It's been a fantastic partnership." But what criteria did they consider when evaluating Drupal development companies like Specbee?

  • Commitment to Drupal beyond development - Specbee's commitment to Drupal and significant core contributions to the community, as emphasized by Sean, played a pivotal role in their decision. In their Request for Proposal (RFP), they stressed the importance of finding a Drupal company that is wholly dedicated and committed to the platform.
  • Strong local project management - Emphasizing the significance of local project management, Sean highlights its crucial role in interactions with overseas companies. To avoid communication issues, they sought a company with a local presence model for project management.
  • Aligning operational style - Sean stressed the significance of seeking organizations that harmonize with the operational style of one's team. The Michael J. Fox Foundation operates with agility, favoring quick changes, small iterations, and innovation. Engaging with a company that mirrors the preference for adaptability over strict pre-defined structures ensures a collaborative partnership that resonates with the organization's operational rhythm.
  • Consistent and accessible technology team - He emphasized the need for a development team that's consistently available and easy to communicate with, a criterion perfectly met by Specbee.
  • Easy access to leadership team - Sean insisted on staying away from invisible companies. The ability to meet, know, and interact with the leadership team was a non-negotiable, and Specbee delivered on this front.
  • Good references - Listening to reference stories, especially how Specbee handles challenges, is underscored as crucial by Sean.

According to Sean, these aspects earned Specbee "gold stars."

The Takeaway

For nonprofits and organizations looking to start a Drupal engagement and to leverage the Drupal community, Sean shares his valuable insights:

  • Knowledge is Power: Familiarize yourself with the community, technology, and market before diving into conversations with potential partners. But first, start understanding your organization's short- and long-term needs and goals.
  • Culture Fit Matters: Look for organizations aligned with your team's work style. Whether agile or more structured, finding a cultural match is key. Find a partner attuned to your team's way of working for a seamless and successful project experience.
  • True Costs: Don't base decisions solely on cost. Understand the true costs involved in building, configuring, and supporting a solution over time, and recognize the nuanced distinctions in how companies source engineering staff. It's not merely a matter of comparing hourly rates; delve deeper into the factors shaping those rates, whether an organization follows a specific cost structure or has a distinct approach to staffing. Consider the underlying factors that contribute to costs, not just the bottom line.

Final Thoughts

A big thank you to Sean for the generous words and for selecting Specbee as The Michael J Fox Foundation's Drupal partner. We absolutely love working with you! Special thanks to Kelly, Nathan, and the entire Drupal Association team for putting together this fantastic video.
Categories: FLOSS Project Planets

Pages