Feeds
Simon's Blog: Drupal Settings.php Snippets for Debug
You can simply copy and paste the following into your public_html/sites/default/settings.php, and toggle on or off whichever lines are relevant to your needs:
$config['system.logging']['error_level'] = 'verbose'; // [A] (Drupal 8+) Turn on verbose debug message reporting.
//$conf['error_level'] = 2; // [A] (Drupal 7) Turn on verbose debug message reporting (equivalent to navigating to Administration → Configuration → Development → Logging and errors and selecting "All messages").
// [A] -------------------------------------------
error_reporting(E_ALL); // [A] Enable PHP error reporting. (For local Drupal development, you can also enable error reporting, display_errors, and display_startup_errors to help you debug and fix major runtime errors.)
ini_set('display_errors', TRUE); // [A] Display PHP errors.
ini_set('display_startup_errors', TRUE); // [A] Display PHP startup errors.
$settings['twig_debug'] = TRUE; // [B] Twig debug - turn on Twig debug mode.
$settings['twig_auto_reload'] = TRUE; // [B] Twig debug - turn on Twig template auto reload.
$settings['twig_cache'] = FALSE; // [B] Twig debug - turn off the Twig cache.
// [B] ------------------------------------------
$settings['cache']['bins']['render'] = 'cache.backend.null'; // [B] Disable caching - disable render caching.
$settings['cache']['bins']['page'] = 'cache.backend.null'; // [B] Disable caching - disable the page cache.
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null'; // [B] Disable caching - disable the dynamic page cache.
$settings['cache']['default'] = 'cache.backend.null'; // [B] Disable caching - disable the backend cache.
$config['system.performance']['css']['preprocess'] = FALSE; // [C] Turn off aggregated CSS (see: https://www.drupal.org/docs/develop/development-tools/disabling-and-debugging-caching).
$config['system.performance']['js']['preprocess'] = FALSE; // [C] Turn off aggregated JS (see: https://www.drupal.org/docs/develop/development-tools/disabling-and-debugging-caching).
$settings['update_free_access'] = FALSE; // [D] Control free access to /update.php (set to TRUE to allow running update.php without logging in).
$settings['rebuild_access'] = TRUE; // [D] Enable access to /rebuild.php. (This setting allows Drupal's PHP and database cached storage to be cleared via the rebuild.php page. Access to this page can also be gained by generating a query string from rebuild_token_calculator.sh and using those parameters in a request to rebuild.php.)
$settings['skip_permissions_hardening'] = TRUE; // [E] Skip file system permissions hardening. (The system module periodically checks the permissions of your site's site directory to ensure that it is not writable by the website user. For sites managed with a version control system, this can cause problems when files in that directory, such as settings.php, are updated, because the user pulling in the changes won't have permission to modify files in the directory.)
$settings['extension_discovery_scan_tests'] = TRUE; // [E] Allow test modules and themes to be installed. (Drupal ignores test modules and themes by default for performance reasons; during development it can be useful to install test extensions for debugging purposes.)
$settings['trusted_host_patterns'] = array(); // [E] Turn off trusted host pattern checking. (If, for instance, you only need verbose debugging, comment out every line except $config['system.logging']['error_level'] = 'verbose';.)
Seth Michael Larson: SEGA Genesis & Mega Drive games and ROMs from Steam
Published 2024-11-20 by Seth Larson
TL;DR: SEGA is discontinuing the "SEGA Mega Drive and Genesis Classics" collection on December 6th. This is an affordable way to purchase these games and ROMs compared to the original cartridges. Buy the games you are interested in while you still can.
In particular, Dr. Robotnik's Mean Bean Machine is one of my favorite games. I created copy-cat games when I was first learning how to program computers. I already own this game twice over, as a Genesis cartridge and in the Sonic Mega Collection for the GameCube, but neither of those formats makes it easy to get at the ROM itself to play elsewhere.
So I heard you like beans.
That's where the SEGA Mega Drive and Genesis Classics comes in. This launcher provides uncompressed ROMs that are easily accessible after purchasing the game. For the below instructions, I am using Ubuntu 24.04 as my operating system. Here's what I did:
- Download the Steam launcher for Linux.
- Purchase Dr. Robotnik's Mean Bean Machine on Steam for $4.99 USD.
- Download the "SEGA Mega Drive and Genesis Classics" launcher and the Dr. Robotnik's Mean Bean Machine "DLC". You don't have to launch the game through Steam.
- Navigate to ~/.steam/steam/steamapps/common/Sega\ Classics/uncompressed\ ROMs.
- ROM files can be found in this directory. Their file extension will be either .SGD or .68K. These can be changed to .bin to be recognized by emulators for Linux like Kega Fusion.
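If you have more than a handful of ROMs, renaming them by hand gets tedious. Here's a minimal Python sketch of the idea, assuming the default Steam library path above and a hypothetical ~/roms destination folder:

from pathlib import Path
import shutil

# Default Steam library path for the launcher (adjust if yours differs).
rom_dir = Path.home() / ".steam/steam/steamapps/common/Sega Classics/uncompressed ROMs"
out_dir = Path.home() / "roms"  # hypothetical destination folder
out_dir.mkdir(exist_ok=True)

for rom in rom_dir.iterdir():
    # ROMs ship with .SGD or .68K extensions; emulators expect .bin.
    if rom.suffix.upper() in {".SGD", ".68K"}:
        shutil.copy(rom, out_dir / (rom.stem + ".bin"))
        print(f"Copied {rom.name} -> {rom.stem}.bin")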
From here, you should be able to load these ROMs into any emulator. Happy gaming!
Have thoughts or questions? Let's chat over email or social:
sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org
Want more articles like this one? Get notified of new posts by subscribing to the RSS feed or the email newsletter. I won't share your email or send spam, only whatever this is!
Want more content now? This blog's archive has ready-to-read articles. I also curate a list of cool URLs I find on the internet.
Find a typo? This blog is open source, pull requests are appreciated.
Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0
Arnaud Rebillout: Installing an older Ansible version via pipx
... and therefore, hosts running Debian Buster are now unsupported.
Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:
$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:
ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]
Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.
How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.
Pipx to the rescue

TL;DR:

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython  # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.
Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to setup Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.
First thing to know: pipx install ansible won't cut it, it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands.
The output should look something like that:
$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done! ✨ 🌟 ✨

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.
Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.
Now, let's open a new terminal and check if we're good:
$ which ansible
/home/me/.local/bin/ansible
$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.
What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.
Injecting Python dependencies needed by collections

Quickly enough, I realized something odd: apparently, the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:
# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
    "msg": "An unhandled exception occurred while running the lookup plugin 'dig'. Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:
$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done! ✨ 🌟 ✨

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.
Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there are more quirks and rough edges, drop me an email so that I can update this blog post.
Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/
Aurelien Jarno: AI crawlers should be smarter
It would be fantastic if all those AI companies dedicated some time to make their web crawlers smarter (what about using AI?). Nowadays most of them still stupidly follow every link on a Git frontend.
Hint: Changing the display options does not provide more training data!
PyCoder’s Weekly: Issue #656 (Nov. 19, 2024)
#656 – NOVEMBER 19, 2024
View in Browser »
TUI applications require a full terminal which most IDEs don’t implement. To make matters more complicated, TUIs use the same calls that many command line debuggers use, making it hard to deal with breakpoints. This article teaches you how to debug a Textual TUI program.
MIKE DRISCOLL
In this tutorial, you’ll learn how to write dictionary comprehensions in Python. You’ll also explore the most common use cases for dictionary comprehensions and learn about some bad practices that you should avoid when using them in your code.
REAL PYTHON
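As a quick taste of the topic, here's a minimal illustration (not taken from the tutorial itself):

# Build a dict from an iterable in a single expression.
squares = {n: n ** 2 for n in range(5)}
print(squares)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

# A common use case: inverting an existing mapping.
prices = {"apple": 1.5, "banana": 0.5}
inverted = {value: key for key, value in prices.items()}
print(inverted)  # {1.5: 'apple', 0.5: 'banana'}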
The Trunk Flaky Test public beta is open! You can now detect, quarantine, and eliminate flaky tests from your codebase. Discover insights from our analysis of 20.2 million CI jobs and see how Trunk can unblock pipelines and stop reruns. Access is free. Check out our getting started guide here →
TRUNK sponsor
A collection of Python puzzles. You are given a test file, and should write an implementation that passes the tests. All done in your browser.
GPTENGINEER.RUN
Entertainment-based content may appear educational, but it is not effective for learning. To truly learn, one should seek out long-form, challenging content that requires effort and engagement. Educators should prioritize creating meaningful, in-depth content that fosters deep learning.
X.COM
How do you build a sustainable open-source project and community? What lessons can be learned from Python’s history and the current mess that the WordPress community is going through? This week on the show, we speak with Paul Everitt from JetBrains about navigating open-source funding and the start of the Python Software Foundation.
REAL PYTHON podcast
Most Django scaling guides focus on theoretical maximums. But real scaling isn’t about handling hypothetical millions of users - it’s about systematically eliminating bottlenecks as you grow. Here’s how to do it right, based on patterns that work in production.
ANDREW
Simplify workloads and elevate customer service. Build customized AI assistants that respond to voice prompts with powerful language and comprehension capabilities. Personalized AI assistance based on your unique needs with Intel’s OpenVINO toolkit.
INTEL CORPORATION sponsor
When people compare pandas and Polars, they usually bring up topics such as lazy execution, Rust, null values, multithreading, and query optimisation. Yet there’s one innovation which people often overlook: non-elementary group-by aggregations.
MARCO GORELLI • Shared by Marco Gorelli
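For context, an elementary group-by aggregates a bare column per group, while a non-elementary one aggregates a whole expression. A minimal sketch, assuming a recent Polars version with the group_by API:

import polars as pl

df = pl.DataFrame({"group": ["a", "a", "b"], "value": [1.0, 3.0, 5.0]})

# Non-elementary: the aggregated expression is built from the column,
# here counting the values that sit above their per-group mean.
result = df.group_by("group").agg(
    (pl.col("value") > pl.col("value").mean()).sum().alias("above_mean")
)
print(result)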
PyPI now supports digital attestations. This feature lets Python package maintainers verify the authenticity and integrity of their uploads with cryptographically verifiable attestations, adding an extra layer of security and trust.
SARAH GOODING • Shared by Sarah Gooding
On October 29th, two DSF steering council members resigned, triggering an election earlier than planned. This note explains what that means and how you can get involved.
DJANGO SOFTWARE FOUNDATION
This post from Michael Kennedy talks about moving Talk Python’s hosting environment from Digital Ocean to Hetzner. It details everything involved in a move like this.
TALK PYTHON
In this video course, you’ll learn how to use Python format specifiers within an f-string to allow you to neatly format a float to your required precision.
REAL PYTHON course
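The core idea fits in a few lines (a small illustration):

value = 3.14159265
print(f"{value:.2f}")    # '3.14' - two digits after the decimal point
print(f"{value:10.3f}")  # '     3.142' - right-aligned in a width of 10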
This tracker tests the compatibility of the 500 most popular packages with Python 3.13’s free-threading and subinterpreter features.
PYTHON.TIPS • Shared by Vita Midori
November 20, 2024
REALPYTHON.COM
November 21, 2024
MEETUP.COM
November 21, 2024
PYLADIES.COM
November 22 to November 27, 2024
PYCON.ORG.AU
November 25 to December 1, 2024
PLONECONF.ORG
November 25, 2024
MEETUP.COM
November 30 to December 1, 2024
PYCONWROCLAW.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #656.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
FSF Blogs: Winter holidays are coming: Time for a free software tale
Python Insider: Python 3.14.0 alpha 2 released
Alpha 2? But Alpha 1 only just came out!
https://www.python.org/downloads/release/python-3140a2/
This is an early developer preview of Python 3.14
Major new features of the 3.14 series, compared to 3.13

Python 3.14 is still in development. This release, 3.14.0a2, is the second of seven planned alpha releases.
Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.
During the alpha phase, features may be added up until the start of the beta phase (2025-05-06) and, if necessary, may be modified or deleted up until the release candidate phase (2025-07-22). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Many new features for Python 3.14 are still being planned and written. Among the major new features and changes so far:
- PEP 649: deferred evaluation of annotations
- PEP 741: Python configuration C API
- PEP 761: Python 3.14 and onwards no longer provides PGP signatures for release artifacts. Instead, Sigstore is recommended for verifiers.
- Improved error messages
- (Hey, fellow core developer, if a feature you find important is missing from this list, let Hugo know.)
The next pre-release of Python 3.14 will be 3.14.0a3, currently scheduled for 2024-12-17.
More resources

- Online documentation
- PEP 745, 3.14 Release Schedule
- Report bugs at https://github.com/python/cpython/issues
- Help fund Python and its community
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from a chilly Helsinki with snow on the way,
Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa
Drupal Association blog: Celebrating Success: DrupalCon Barcelona 2024 Event Impact Recap
Welcome to the Event Impact Recap of DrupalCon Barcelona 2024. This year’s conference not only showcased the vibrant spirit of our global network but also highlighted the achievements and successes that emerged from this remarkable gathering. As we look forward to upcoming events in Singapore and Atlanta, let's take a minute to celebrate what we accomplished together in Barcelona!
At every DrupalCon, we unite the global Drupal community—crafted by the community, for the community. Our mission is to foster an inclusive environment where Drupal Certified Partners, Agencies, Marketers, End Users, Developers, Site Builders, and Community Organizers come together to train, learn, network, see old friends and make new ones, and grow their careers. We strive to create a vibrant space that celebrates collaboration and innovation, providing opportunities for personal and professional development.
Through shared knowledge, diverse perspectives, and active engagement, DrupalCon serves as a beacon for Drupal enthusiasts, empowering them to contribute to the future of open-source software. Together, we will shape the next generation of digital experiences, ensuring that Drupal continues to thrive, grow and innovate worldwide.
Key Highlights from DrupalCon Barcelona 2024

Attendance and Engagement

With 1,087 registered attendees and an impressive 96% check-in rate, DrupalCon Barcelona brought together a passionate community of Drupal enthusiasts and professionals. Notably, 307 participants received complimentary registrations (that’s 31%!) for their roles as speakers, scholarship recipients, or planners, reinforcing our commitment to inclusivity and accessibility.
Among the attendees, 27% were first-time DrupalCon participants, while 33.8% had attended four or more times. An impressive 79.1% of attendees expressed their intention to recommend DrupalCon to friends or colleagues, highlighting the event’s value.
Global Representation

DrupalCon Barcelona truly exemplified our global reach, with attendees from 66 countries across six continents. This diversity enriched our discussions and collaborations, showcasing the power of Drupal as a unifying platform.
Registrations Per Country

United Kingdom 122; Spain 113; Germany 111; Belgium 102; United States 97; France 58; India 33; Netherlands 31; Norway 30; Denmark 27; Switzerland 24; Austria 23; Sweden 22; Finland 21; Bulgaria 20; Poland 16; Portugal 15; Ireland 15; Italy 15; Greece 13; Canada 12; Czech Republic 9; Georgia 9; Romania 8; Serbia 7; Brazil 7; Ukraine 6; Australia 5; Lithuania 5; Belarus 5; Hungary 5; Mexico 5; Slovakia 4; Japan 4; Slovenia 3; Uruguay 3; Iceland 3; Estonia 2; Czechia 2; España 2; Israel 2; Armenia 2; Croatia 2; Ghana 2; Schweiz 1; Nicaragua 1; Singapore 1; Thailand 1; Cyprus 1; Turkey 1; Åland Islands 1; Luxembourg 1; Algeria 1; Magyarország 1; Niger 1; Antigua 1; Bangladesh 1; Saudi Arabia 1; Tunisia 1; Peru 1; Argentina 1; Philippines 1; Colombia 1; Burkina Faso 1; Afghanistan 1; Iran 1

DriesNote and Starshot

A standout moment was the DriesNote, which attracted 810 attendees eager to learn about the future of Drupal CMS and the role of AI in expanding our marketplace. The insights shared during this session sparked lively discussions and innovative ideas.
The Starshot track and Makers and Takers tracks were immensely popular, with the top session, "Drupal AI: The Golden Era of the Web," drawing 520 attendees. These sessions not only highlighted cutting-edge topics but also fostered collaboration and knowledge sharing among participants.
Sponsorship Support

DrupalCon Barcelona 2024 was made possible by the generous support of our sponsors:
- Diamond Sponsors: 4
- Platinum Sponsors: 6
- Gold Sponsors: 3
- Silver Sponsors: 12
- Module Sponsors: 11
- Village Sponsors: 5
- Media Sponsors: 3
- Scholarship Sponsors: 3
- Total Sponsors: 30
In total, we had 30 sponsors whose commitment to the Drupal community was essential for the event and the overall community growth and success. Their support underscores the strength of our partnerships and shared goals.
Volunteer Contributions

The success of DrupalCon Barcelona was greatly aided by 208 dedicated volunteers, who contributed their time and talents across various roles—from session review committees and help desks to contribution monitors and photographers. Their hard work and enthusiasm were crucial in creating a welcoming and productive environment for all.
Looking Ahead

As we reflect on the achievements and connections fostered at DrupalCon Barcelona 2024, I feel optimistic about the future of Drupal. This event was not just a conference; it was a celebration of collaboration, knowledge sharing, and community spirit.
I extend my heartfelt gratitude to everyone who contributed to this success—from attendees and volunteers to sponsors and organizers. Together, we can carry this momentum forward as we embark on the next chapter of Drupal's journey at DrupalCon Singapore, DrupalCon Atlanta, and beyond.
Here’s to continued growth, innovation, and the vibrant spirit of the Drupal community! I hope to see many of you in Singapore in December, where we will be getting a sneak peek of Drupal CMS ahead of its release in January 2025; tickets are available on the DrupalCon Singapore website.
PyCharm: Code Faster with JetBrains AI in PyCharm
PyCharm 2024.3 comes with many improvements to JetBrains AI to help you code faster. I’m going to walk you through some of these updates in this blog post.
Natural language inline AI prompt

You can now use JetBrains AI by typing straight into your editor in natural language, without opening the AI Assistant tool window. If you use either IntelliJ IDEA or PyCharm, you might already be familiar with natural language AI prompts, but let me walk you through the process.
If you’re typing in the gutter, you can start typing your request straight into the editor and then press Tab. Here’s an example of one such request:
write a script to capture a date input from a user and print it out prefixed by a message stating that their birthday is on that date

You can then iterate on the initial input by clicking on the purple block in the gutter or by pressing ⌘\ or Ctrl+\ and pressing Enter:
add error handling so that when a birthday is in the future, we don't accept it

You can use ⌘\ or Ctrl+\ to keep iterating until you’re happy with the result. For example, we can use the prompt:
print out the day of the week as well as their birthday dateAnd then:
change the format of day_of_week to short

This feature is available for Python, JavaScript, TypeScript, JSON, and YAML files.
Let’s look at some more examples. We can get JetBrains AI Assistant to help us generate new code with a prompt like this:
Write code that lists the latest polls, shows poll details, handles voting, updates votes, and displays poll results, ensuring only published polls are accessible.

Or add some error handling to our code:
Add edge case handling to this code

Remember, context is everything. Where you start your natural language prompt is important, as PyCharm uses the placement of your caret to figure out the context. You don’t need to prefix your query with a ? or $ if you start typing in the gutter, because the context is the file, but if your caret is indented, you’ll need to start your query with the ? or $ character so PyCharm knows you’re crafting a natural language query.
In this example, we want to refactor existing code, so we need to prefix our query with the ? character:
?create a dedicated function for printing the schedule and remove the code from here

Running code in the Python console

We know that JetBrains AI can generate code for you, but now you can run that code in the Python console without leaving the AI Assistant tool window by clicking the green run arrow.
For example, let’s say you have the following prompt:
Create a python script that asks for a birthday date in standard format yyyy-MM-dd then converts it and prints it back out in a written format such as 22nd January 1991

You can now click the green run arrow on the top-right of the code snippet to run it in your Python console:
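For reference, the generated script might look something like this minimal sketch (an illustration of the idea, not the exact AI output):

from datetime import datetime

def ordinal(day: int) -> str:
    # 1 -> "1st", 2 -> "2nd", 11 -> "11th", 22 -> "22nd", ...
    if 10 <= day % 100 <= 20:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(day % 10, "th")
    return f"{day}{suffix}"

raw = input("Enter your birthday (yyyy-MM-dd): ")
birthday = datetime.strptime(raw, "%Y-%m-%d")
print(f"Your birthday is {ordinal(birthday.day)} {birthday.strftime('%B %Y')}")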
Even more features

In addition to the new functionality for natural language and code completion for PyCharm highlighted above, there are several other improvements to JetBrains AI.
Faster code completion

We have introduced a new model for faster cloud-based completion with AI Assistant, which is showing very promising results.
Faster documentation

If documentation isn’t your thing, you can now hand off writing your Python docstrings to JetBrains AI. If you type either single or double quotes to enter a docstring and then press Return, you’ll see a prompt that says Generate with AI Assistant. Click that prompt and let JetBrains AI generate the documentation for you:
Help at your fingertips

We all need a little help now and again, and we can get JetBrains AI to help us here too. We’ve added a /docs prompt to the JetBrains AI tool window. This prompt will query the PyCharm documentation to save you from switching out of the context you’re working in!
Ability to choose your LLM

For AI Chat, you can now select a different LLM from the drop-down menu in the chat window itself. There are lots of options for you to choose from:
More context in Jupyter notebooks

We’ve also improved how JetBrains AI works for data scientists. JetBrains AI now recognizes DataFrames and variables in your notebook. You can prefix your DataFrame or variable with # so that JetBrains AI considers it as part of the context.
Summary

JetBrains AI is available inside PyCharm, right where you need it. This release brings many improvements, from writing in natural language inside the editor and running AI-generated Python snippets in the console to generating documentation.
Remember, if you’re in the gutter, you can start typing in natural language and then press Tab to get AI Assistant to generate the code. If you’re inside a method or function, you need to prefix your natural language query with either ? or $. You can then iterate on the generated code as many times as you like as you build out your new functionality and explore further.
Real Python: Working With TOML and Python
TOML—Tom’s Obvious Minimal Language—is a reasonably new configuration file format that the Python community has embraced over the last couple of years. TOML plays an essential part in the Python ecosystem. Many of your favorite tools rely on TOML for configuration, and you’ll use pyproject.toml when you build and distribute your own packages.
In this video course, you’ll learn more about TOML and how you can use it. In particular, you’ll:
- Learn and understand the syntax of TOML
- Use tomli and tomllib to parse TOML documents (see the sketch after this list)
- Use tomli_w to write data structures as TOML
- Use tomlkit when you need more control over your TOML files
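As a taste of the standard-library side, here's a minimal sketch using tomllib, which is part of the standard library since Python 3.11 (tomli offers the same API on older versions):

import tomllib  # standard library since Python 3.11

toml_doc = """
[project]
name = "my-package"
version = "0.1.0"
"""

# Parse a TOML string into a plain Python dict.
data = tomllib.loads(toml_doc)
print(data["project"]["name"])  # my-package

# Files must be opened in binary mode, e.g.:
# with open("pyproject.toml", "rb") as f:
#     config = tomllib.load(f)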
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Mike Driscoll: How to Debug Your Textual Application
Textual is a great Python package for creating a lightweight, powerful, text-based user interface. That means you can create a GUI in your terminal with Python without learning curses! But what happens when you encounter some problems that require debugging your application? A TUI takes over your terminal, which means you cannot see anything from Python’s print() statement.
Wait? What about your IDE? Can that help? Actually no. When you run a TUI, you need a fully functional terminal to interact with it. PyCharm doesn’t work well with Textual. WingIDE doesn’t even have a terminal emulator. Visual Studio Code also doesn’t work out of the box, although you may be able to make it work with a custom json or yaml file. But what do you do if you can’t figure that out?
That is the crux of the problem and what you will learn about in this tutorial: How to debug Textual applications!
Getting Started

To get the most out of this tutorial, make sure you have installed Textual’s development tools by using the following command:
python -m pip install textual-dev --upgrade

Once you have the latest version of textual-dev installed, you may continue!
Debugging with Developer Mode

When you want to debug a Textual application, you need to open two terminal windows. On Microsoft Windows, you can open two PowerShell or two Command Prompt windows. In the first terminal, run this command:
textual console

The Textual console will listen for any Textual application running in developer mode. But first, you need some kind of application to test with. Open up your favorite Python IDE and create a new file called hello_textual.py. Then enter the following code into it:
from textual.app import App, ComposeResult
from textual.widgets import Button

class WelcomeButton(App):
    def compose(self) -> ComposeResult:
        yield Button("Exit")

    def on_button_pressed(self) -> None:
        self.mount(Button("Other"))

if __name__ == "__main__":
    app = WelcomeButton()
    app.run()

To run a Textual application, use the other terminal you opened earlier, the one that isn’t running the Textual console. Then run this command:
textual run --dev hello_textual.py

You will see the following in your terminal:
If you switch over to the other terminal, you will see a lot of output that looks something like this:
Now, if you want to test that you are reaching a part of your code in Textual, you can add a print() call to your on_button_pressed() method. You can also use self.log.info(), which you can read about in the Textual documentation.
Let’s update your code to include some logging:
from textual.app import App, ComposeResult
from textual.widgets import Button

class WelcomeButton(App):
    def compose(self) -> ComposeResult:
        yield Button("Exit")
        print("The compose() method was called!")

    def on_button_pressed(self) -> None:
        self.log.info("You pressed a button")
        self.mount(Button("Other"))

if __name__ == "__main__":
    app = WelcomeButton()
    app.run()

Now, when you run this code, you can check your Textual Console for output. The print() statement should be in the Console without you doing anything other than running the code. You must click the button to get the log statement in the Console.
Here is what the log output will look like in the Console:
And here is an example of what you get when you print() to the Console:
There’s not much difference here, eh? Either way, you get the information you need and if you need to print out Python objects, this can be a handy debugging tool.
If you find the output in the Console to be too verbose, you can use -x or --exclude to exclude log groups. Here’s an example:
textual console -x SYSTEM -x EVENT -x DEBUG -x INFO

In this version of the Textual Console, you are suppressing SYSTEM, EVENT, DEBUG, and INFO messages.
Launch your code from earlier and you will see that the output in your Console is greatly reduced:
Now, let’s learn how to use notification as a debugging tool.
Debugging with Notification

If you like using print() statements, then you will love that Textual’s App() class provides a notify() method. You can call it anywhere in your application using self.app.notify(), along with a message. If you are in your App class, you can reduce the call to simply self.notify().
Let’s take the example from earlier and update it to use the notify method instead:
from textual.app import App, ComposeResult
from textual.widgets import Button

class WelcomeButton(App):
    def compose(self) -> ComposeResult:
        yield Button("Exit")

    def on_button_pressed(self) -> None:
        self.mount(Button("Other"))
        self.notify("You pressed the button!")

if __name__ == "__main__":
    app = WelcomeButton()
    app.run()

The notify() method takes the following parameters:
- message – The message you want to display in the notification
- title – An optional title to add to the message
- severity – The message’s severity, which translates to a different color for the notification. You may use “information”, “error” or “warning”
- timeout – The timeout in seconds for how long to show the message
Try editing the notification to use more of these features. For example, you could update the code above to use this instead:
self.notify("You pressed the button!", title="Info Message", severity="error")

Textual’s App class also provides a bell() method you can call to play the system bell. You could add this to really get the user’s attention, assuming they have the system bell enabled on their computer.
Wrapping Up

Debugging your TUI application successfully is a skill. You need to know how to find errors, and Textual’s dev mode makes this easier. While it would be great if a Python IDE had a fully functional terminal built into it, that is a very niche need. So it’s great that Textual included the tooling you need to figure out your code.
Give these tips a try, and you’ll soon be able to debug your Textual applications easily!
The post How to Debug Your Textual Application appeared first on Mouse Vs Python.
Melissa Wen: Display/KMS Meeting at XDC 2024: Detailed Report
XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.
Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.
Short on Time?
- Catch the lightning talk summarizing the meeting here (you can even speed up 2x):
- For a quick written summary, scroll down to the TL;DR section.
This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting):
- Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
- Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling.
- HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes.
- Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
- Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.
While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.
Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)
Link: https://indico.freedesktop.org/event/6/contributions/383/
Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed follow-ups to talks with new ideas and ongoing community efforts.
The final agenda covered five topics in the scheduled order:
- How to share drivers between V4L2 and DRM for bridge-like components (new topic);
- Real-time Scheduling (problems encountered after the Display Next hackfest);
- HDR/Color Management (ofc);
- Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
- (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).
Similar to the hackfest, the meeting agenda evolved over the conference. During the 3 hours of meeting, I coordinated the room and discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers to provide a detailed report of the many topics discussed.
From his notes, let’s dive into the key discussions!
How to share drivers between V4L2 and KMS for bridge-like components

Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.
- Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
- Potential Solutions:
- Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
- Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
- Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.
We had discussed real-time scheduling during this year's Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while progressing on this front.
- Context: Non-blocking page flips can, on rare occasions, take a long time and, for that reason, get a SIGKILL if the thread doing the atomic commit is scheduled with a real-time policy.
- Action items:
- Explore alternative backtraces during the busy wait (e.g., ftrace).
- Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).
This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in-person in conferences and meetings over the last years.
Here’s a breakdown of the key points raised at this meeting:
- Talk: Color operations for Linux color pipeline on AMD devices: On the previous day, Alex Hung (AMD) presented the implementation of this API in the AMD display driver.
- NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs.
- Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now.
- Upstream Patches: The relevant upstream patches can be found here. [As was humorously noted, this series is eagerly awaiting your “Acked-by” (approval).]
- Compositor Side: The compositor developers have also made significant progress.
- KDE has already implemented and validated the API through an experimental implementation in Kwin.
- Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. Action point: port Gamescope to the generic KMS API.
- Weston has also begun exploring implementation, and we might see something from them by the end of the year.
- Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s effort, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place.
Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!
Display Mux

During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.
- Context:
- During this year's Linux Display Next hackfest, Mario Limonciello (AMD) introduced the topic and led a discussion on Display Mux.
- Daniel Dadap (NVIDIA) retook this discussion with the XDC 2024 talk: Dynamic Switching of Display Muxes on Hybrid GPU Systems.
- Key Considerations:
- HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here.
- Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured?
- Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
- Action points:
- Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
- Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.
In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.
- Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.
To address this issue, we discussed several potential improvements:
- Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
- Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
- Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution.
By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.
A Big Thank You!

Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Thanks also to Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.
This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.
Stay tuned for future updates!
Ned Batchelder: Loop targets
I posted a Python tidbit about how for loops can assign to other things than simple variables, and many people were surprised or even concerned:
params = {
    "query": QUERY,
    "page_size": 100,
}
# Get page=0, page=1, page=2, ...
for params["page"] in itertools.count():
data = requests.get(SEARCH_URL, params).json()
if not data["results"]:
break
...
This code makes successive GET requests to a URL, with a params dict as the data payload. Each request uses the same data, except the “page” item is 0, then 1, 2, and so on. It has the same effect as if we had written it:
for page_num in itertools.count():
    params["page"] = page_num
    data = requests.get(SEARCH_URL, params).json()
One reply asked if there was a new params dict in each iteration. No, loops in Python do not create a scope, and never make new variables. The loop target is assigned to exactly as if it were an assignment statement.
As a Python Discord helper once described it,
While loops are “if” on repeat. For loops are assignment on repeat.
A loop like for <ANYTHING> in <ITER>: will take successive values from <ITER> and do an assignment exactly as this statement would: <ANYTHING> = <VAL>. If the assignment statement is ok, then the for loop is ok.
We’re used to seeing for loops that do more than a simple assignment:
for i, thing in enumerate(things):
    ...
for x, y, z in zip(xs, ys, zs):
...
These work because Python can assign to a number of variables at once:
i, thing = 0, "hello"
x, y, z = 1, 2, 3
Assigning to a dict key (or an attribute, or a property setter, and so on) in a for loop is an example of Python having a few independent mechanisms that combine in uniform ways. We aren’t used to seeing exotic combinations, but you can reason through how they would behave, and you would be right.
You can assign to a dict key in an assignment statement, so you can assign to it in a for loop. You might decide it’s too unusual to use, but it is possible and it works.
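For instance, an attribute works as a loop target too. A tiny illustration in the same spirit:

class Point:
    pass

p = Point()

# Each iteration assigns to p.x exactly as "p.x = value" would.
for p.x in range(3):
    print(p.x)  # 0, 1, 2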
Zato Blog: IMAP and OAuth2 Integrations with Microsoft 365
This is the first in a series of articles about automation of and integrations with Microsoft 365 cloud products using Python and Zato.
We start off with IMAP automation by showing how to create a scheduled Python service that periodically pulls latest emails from Outlook using OAuth2-based connections.
IMAP and OAuth2
Microsoft 365 requires all IMAP connections to use OAuth2. This can be challenging to configure in server-side automation and orchestration processes, so Zato offers an easy way that lets you read and send emails without a need for getting into low-level OAuth2 details.
Consider a common orchestration scenario: a business partner sends automated emails with attachments that need to be parsed, after which some information needs to be extracted and processed accordingly.
Before OAuth2, an automation process would receive from Azure administrators a dedicated IMAP account with a username and password.
Now, however, in addition to creating an IMAP account, administrators will need to create and configure a few more resources that the orchestration service will use. Note that the password to the IMAP account will never be used.
Administrators need to:
- Register an Azure client app representing your service that uses IMAP
- Grant this app a couple of Microsoft Graph application permissions:
- Mail.ReadWrite
- Mail.Send
Next, administrators need to give you a few pieces of information about the app:
- Application (client) ID
- Tenant (directory) ID
- Client secret
Additionally, you still need to receive the IMAP username (an e-mail address). It is just that you do not need its corresponding password.
In Dashboard

The first step is to create a new connection in your Zato Dashboard. This will establish an OAuth2-using connection that Zato will manage, and your Python code will not have to do anything else: all the underlying OAuth2 tokens will keep refreshing as needed, and the platform will take care of everything.
Having received the configuration details from Azure administrators, you can open your Zato Dashboard and navigate to IMAP connections:
Fill out the form as below, choosing "Microsoft 365" as the server type. The other type, "Generic IMAP" is used for the classical case of IMAP with a username and password:
Change the secret and click Ping to confirm that the connection is configured correctly:
In Python

Use the code below to receive emails. Note that it merely needs to refer to a connection definition by its name and there is no need for any usage of OAuth2 here:
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):
    def handle(self):

        # Connect to a Microsoft 365 IMAP connection by its name ..
        conn = self.email.imap.get('My Automation').conn

        # .. get all messages matching filter criteria ("unread" by default) ..
        for msg_id, msg in conn.get():

            # .. and access each of them.
            self.logger.info(msg.data)

This is everything that is needed for integrations with IMAP using Microsoft 365, although we can still go further. For instance, to create a scheduled job to periodically invoke the service, go to the Scheduler section in the Dashboard:
In this case, we decide to have a job that runs once per hour:
As expected, clicking OK will suffice for the job to start in background. It is as simple as that.
More resources

➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
Specbee: Your essential guide to Multilingual SEO and Hreflang (and how Drupal makes it easier)
Dirk Eddelbuettel: RcppArmadillo 14.2.0-1 on CRAN: New Upstream Minor
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1191 other packages on CRAN, downloaded 37.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 603 times according to Google Scholar.
Conrad released a minor version 14.2.0 a few days ago after we spent about two weeks with several runs of reverse-dependency checks covering corner cases. After a short delay at CRAN due to a false positive on a test, a package failing tests which had also failed under the previous version, and some concern over new deprecation warnings when using the headers directly as e.g. the mlpack R package does, we are now on CRAN. I noticed a missing feature under large ‘64bit word’ (for large floating-point matrices) and added an exporter for icube going to double to support the 64-bit integer range (as we already did, of course, for vectors and matrices). Changes since the last CRAN release are summarised below.
Changes in RcppArmadillo version 14.2.0-1 (2024-11-16)

- Upgraded to Armadillo release 14.2.0 (Smooth Caffeine)
- Faster handling of symmetric matrices by inv() and rcond()
- Faster handling of hermitian matrices by inv(), rcond(), cond(), pinv(), rank()
- Added solve_opts::force_sym option to solve() to force the use of the symmetric solver
- More efficient handling of compound expressions by solve()
- Added exporter specialisation for icube for the ARMA_64BIT_WORD case
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Nonprofit Drupal posts: November Drupal for Nonprofits Chat
Join us THURSDAY, November 21 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)
We don't have anything specific on the agenda this month, so we'll have plenty of time to discuss anything that's on our minds at the intersection of Drupal and nonprofits. Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!
All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.
This free call is sponsored by NTEN.org and open to everyone.
- Join the call: https://us02web.zoom.us/j/81817469653
- Meeting ID: 818 1746 9653
- Passcode: 551681
- One tap mobile:
  +16699006833,,81817469653# US (San Jose)
  +13462487799,,81817469653# US (Houston)
- Dial by your location:
  +1 669 900 6833 US (San Jose)
  +1 346 248 7799 US (Houston)
  +1 253 215 8782 US (Tacoma)
  +1 929 205 6099 US (New York)
  +1 301 715 8592 US (Washington DC)
  +1 312 626 6799 US (Chicago)
- Find your local number: https://us02web.zoom.us/u/kpV1o65N
- Follow along on Google Docs: https://nten.org/drupal/notes
Talking Drupal: Talking Drupal #476 - Off The Cuff #10
Today we are talking about some things that are on our minds, including the DOJ Accessibility ruling, Drupal CMS Event Recipes, and Tooling for core development, with our hosts. We’ll also cover @font-your-face as our module of the week.
For show notes visit: https://www.talkingDrupal.com/476
Topics

- DOJ Accessibility Ruling
- Drupal CMS
- Tooling for core development
- Open University
- Accessibility ruling
- PHPUnit testing
- Drupal Events Recipes
Martin Anderson-Clutz - mandclu.com mandclu
Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Martin Anderson-Clutz - mandclu.com mandclu
Joshua "Josh" Mitchell - joshuami.com joshuami
MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted to add and manage web fonts for your Drupal site, directly within the admin interface? There’s a module for that.
- Module name/project name:
- Brief history
- How old: created in May 2010 by Scott Reynen, but the most recent release was by Henrique Mendes (hmendes) of CI&T
- Versions available: 7.x-2.8 and 4.0.0, the latter of which supports Drupal 9.4 and 10.
- Maintainership
- Actively maintained
- Security coverage
- Test coverage
- Documentation, but looks like it might be ready for a refresh
- Number of open issues: 48 open issues, 8 of which are bugs against the current branch
- Usage stats:
- 32,213 sites
- Module features and usage
- The module provides an interface to browse fonts from Google, Adobe, Typekit, and more
- License restrictions for fonts are clearly indicated
- When you find a font you want to use, you just click “enable”. You don’t need to write any CSS or define a library, and it’s easy to mix-and-match fonts from different providers. It can even make it easier to include your own local fonts
- The module includes submodules for the different font providers, so you enable the submodules based on where you want to use fonts from
- Then you can import the fonts for those providers, though you do need an API key to import fonts from Google
- The module does also have an API, so you can write your own modules to integrate with other font providers, or access the information about available fonts
C.J. Collier: Managing HPE SAS Controllers
Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.
hpacucli is the wrong command. Use ssacli instead.
$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
    882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
    57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
    476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
      curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
        | gpg --no-default-keyring --keyring "${KR}" --import ; \
    done
$ gpg --list-keys --no-default-keyring --keyring "${KR}"
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)
pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1
pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
    | sudo dd of=/etc/apt/sources.list.d status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60 min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)

   Internal Drive Cage at Port 1I, Box 2, OK
   Internal Drive Cage at Port 2I, Box 2, OK
   Port Name: 1I (Mixed)
   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (1.64 TB, RAID 6, OK)
      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379 (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238