Feeds
Qt Quick 3D survey - November 2024
The Qt Quick 3D project was first announced and released in 2019. Since then, it has seen numerous enhancements. We have concentrated on boosting performance, introducing new features, and expanding the module's capabilities.
Emergency and weather alerts
I had previously mentioned efforts to bring public emergency and weather alerts to free software and free infrastructure, which was also my initial motivation to work on push notifications. With that moving forward, it’s time to explain a bit more about what’s happening there.

FOSS Public Alert Server

As part of the initial push notification work I had written a simple prototype server which aggregates alerts from about a hundred countries and allows clients to subscribe to be notified about alerts in a given area of interest.
With the push notification client and server parts for KDE released and deployed, that’s now the next thing to tackle.
After teaming up with FOSS Warn (a free alternative to the proprietary emergency and weather alert apps in Germany), who need the same kind of infrastructure, there’s now even NLnet funding to turn this into production-ready software.
Work is going to happen in this repository on KDE’s Gitlab instance, and can be followed on this Mastodon account.
[Image: a public emergency alert push notification]

Common Alerting Protocol (CAP)

For this to scale globally we need standardized data models, data formats, and protocols. Thankfully those have existed for many years in the form of OASIS’ Common Alerting Protocol (CAP) and are widely used.
CAP describes an XML format for alert messages, covering categorization, severity, certainty, urgency, affected area, and multi-lingual descriptions and instructions.
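As an illustration, here is a minimal sketch of pulling a few core fields out of a CAP 1.2 message with Python’s standard library; the message itself is a hypothetical, heavily abbreviated example, and real alerts carry many more elements:

    # Sketch: reading core fields of a CAP 1.2 alert (hypothetical sample message).
    import xml.etree.ElementTree as ET

    CAP = "{urn:oasis:names:tc:emergency:cap:1.2}"  # CAP 1.2 XML namespace

    sample = """<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
      <identifier>example-2024-11-01-0001</identifier>
      <info>
        <event>Severe Thunderstorm</event>
        <urgency>Expected</urgency>
        <severity>Severe</severity>
        <certainty>Likely</certainty>
        <headline>Severe thunderstorm warning</headline>
        <area><areaDesc>Example County</areaDesc></area>
      </info>
    </alert>"""

    alert = ET.fromstring(sample)
    for info in alert.iter(f"{CAP}info"):
        print(info.findtext(f"{CAP}event"),
              info.findtext(f"{CAP}severity"),
              info.findtext(f"{CAP}area/{CAP}areaDesc"))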
Built on top of that format are CAP feeds, which are basically RSS or Atom feeds of CAP alert messages.
There are also a number of national or agency-specific profiles which, for example, define additional information elements or specify the use of existing fields more precisely for their use case.
From a technical point of view none of that is particularly challenging; we already have support for CAP alerts in KWeatherCore, for example.
CAP is widespread for internal use in various international, national, or sub-national emergency warning systems, and for connecting alerting authorities with, e.g., media broadcasters or mobile network providers. In many countries CAP feeds are also publicly available, which is the interesting part for us here.
CAP Workshop

After getting in touch with other people working in this area, we were invited to the CAP Workshop that happened last week: a rather unassuming name for a three-day international conference with national delegations from more than 120 countries, several UN agencies, and NGOs like the International Red Cross/Red Crescent, and quite a bit more formal than what we are used to from other events.
Unfortunately I couldn’t be there in person and only followed this online, but even that provided quite a few interesting insights:
- OASIS presented its work on extended event codes, which should help with better display and filtering of alerts in client applications.
- Several talks covered the “usability” of alerts, i.e. how to word and present warnings so that they are understood by everyone and result in the intended reaction. Much of this is in the hands of the alert issuers, but client software can help here as well, e.g. regarding accessibility (TTS, both visual and audible notifications, translations, and understandable language in general).
- Google showed the various places where they integrate alerts in their products: displayed on maps, discoverable via search, location-dependent push notifications, integrated with weather forecasts, and in routing before entering an affected area. This can be inspiration for what we could do as well.
- A talk presented work on Mexico’s earthquake early warning system. As earthquakes can’t be predicted, it uses the fact that an alert can outrace the shockwave and thus might still reach people further away in time (the same way tsunami warnings work, just on a much shorter timescale). Their estimate, from a simulation of past earthquakes, was that 10 seconds of warning make a difference of 10,000 lives. That’s a level of responsiveness our current CAP feed polling approach is not going to reach; it would need direct listening to the alert radio signals.
- There are several CAP feeds that aren’t publicly available yet; whenever such a case came up, there was a push in the following Q&A session to change that. Good, we need that.
There are plenty of things to do here. On the software development side, there’s getting the server code production-ready and deployed, building monitoring tools for it, turning the demo app into a reusable client library, and integrating that in places where it makes sense.
Equally important, though, is finding more CAP feeds and, in some cases, additional data sets to resolve the area codes used in alerts to geographic polygons. This often needs local knowledge and an understanding of the local languages. And where there are no CAP feeds yet, nagging your local authorities to change that also helps.
We should soon get a dedicated Matrix channel for coordinating this, and we’ll be at 38C3 and FOSDEM 2025 if you want to talk about this in person.
Nick Coghlan: The origin of venvstacks
There has been a longstanding gap in the Python packaging ecosystem that has somewhat annoyed me, but not enough to do anything about it: we haven't really had a good way to compose multiple layers of Python virtual environments together, allowing large dependencies (like AI and machine learning libraries) to be shared across multiple different application environments without having to install them directly into the base runtime environment.
Utilities for collecting up an entire Python runtime, an application, and all its dependencies into a single deployable artifact have existed since before the turn of the century.
We've had standardised virtual environments (allowing multiple applications to share a base Python runtime and its directly installed third party packages) for almost as long.
We've had zip applications for a long time as well (and other utilities which build on that feature).
We've had tools like wagon which allow us to ship a bundle of prebuilt Python wheel archives and install them on a destination system without needing to download anything else from the internet at installation time.
We've had tools like conda (and more recently uv), which make intelligent use of hard links on local systems to avoid making duplicate copies of completely identical versions of packages.
We've technically had platform-specific mechanisms like Linux container images, where the contents of an environment can be built up across multiple container image layers, with the lower layers shared across multiple image definitions, but we've lacked a convenient way to handle the dependency management complications involved in using these tools to share large Python libraries.
But we've never had something which specifically took full advantage of the way Python's import system works to enable robust structural decomposition of Python applications into independently updatable subcomponents (with a granularity larger than single packages).
All of this history meant that I was thoroughly intrigued when a mutual acquaintance introduced me to the creators of the LM Studio personal AI desktop application to discuss a Python packaging problem looming on their technical road map. It was clear from user demand and the rate of evolution in the Python AI/ML ecosystem that they needed a way to ship Python AI/ML components directly to their users without waiting for those capabilities to be made available through native interfaces in other languages (such as Swift, C++, or JavaScript), but it wasn't obvious to them how they could integrate that capability into LM Studio without making the application installation process substantially more complicated for their users.
What started as a consulting contract for a technical proof of concept, and has since turned into a permanent position with the organisation, proved fruitful, and the result is the recently published open source venvstacks utility, which is specifically designed to enable the kind of portable deterministic artifact publishing setup that LM Studio needed, including:
- Base runtime layers (based on python-build-standalone)
- Framework layers (for shipping large dependencies, such as Apple MLX or PyTorch)
- Application layers (including additional unpackaged "launch modules" for app execution)
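Under the hood, this kind of layering leans on Python's standard import machinery rather than anything exotic. As a rough sketch of one such mechanism (an illustration of the general technique, not necessarily venvstacks' actual implementation; all paths here are hypothetical), a *.pth file in an application layer's site-packages can splice a shared framework layer onto sys.path at startup:

    # Sketch: the `site` module reads *.pth files in site-packages at startup
    # and appends each listed directory to sys.path, so an application
    # environment can see a shared framework layer without copying it.
    import sysconfig
    from pathlib import Path

    # Hypothetical location of a shared framework layer (e.g. one holding PyTorch).
    framework_site = Path("/opt/stacks/framework-torch/lib/python3.12/site-packages")

    # site-packages of the application layer's own virtual environment.
    app_site = Path(sysconfig.get_paths()["purelib"])

    # One directory per line; `site` adds each existing path to sys.path on startup.
    (app_site / "framework-layer.pth").write_text(f"{framework_site}\n", encoding="utf-8")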
There are certainly still some technical limitations to be addressed (the dynamic linking problem with layering virtual environments like this is notorious amongst Python packaging experts for a reason), but even in its current form, venvstacks is already capable enough to power the recent inclusion of Apple MLX support in LM Studio.
Russ Allbery: Review: Overdue and Returns
Review: Overdue and Returns, by Mark Lawrence
Publisher: Mark Lawrence
Copyright: June 2023, February 2024
ASIN: B0C9N51M6Y, B0CTYNQGBX
Format: Kindle
Pages: 99

Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review.
These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain.
There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter.
I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical.
I would not waste your time with these even if you are enjoying the novels.
"Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn.
Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5)
"Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6)
"About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5)
Rating: 5 out of 10
Paul Wise: FLOSS Activities October 2024
This month I didn't have any particular focus. I just worked on issues in my info bubble.
Changes

- ArchiveBot: improve dashboard filtering
- Debian wiki pages: ArmPorts, Exploits
- FLOSS license needed for ThreadTree
- Features in ThreadTree (1 2 3 4 5), systemd-cron
- Warnings in kraft, python3-pypandoc
All work was done on a volunteer basis.
Taavi Väänänen: Custom domains on the Wikimedia Cloud VPS web proxy
The shared web proxy used on Wikimedia Cloud VPS now has technical support for using arbitrary domains (and not just wmcloud.org subdomains) in proxy names. I think this is a good example of how software slowly evolves over time as new requirements emerge, with each new addition building on top of the previous ones.
According to the edit history on Wikitech, the web proxy service has its origins in 2012, although the current idea, where you create a proxy and map it to a specific instance and port, was only introduced a year later. (Before that, it just directly mapped the subdomain to the VPS instance with the same name.)
There were some smaller changes over the following years, like the migration to acme-chief for TLS certificate management, but the overall logic stayed very similar until 2020, when the wmcloud.org domain was introduced. That was implemented by adding a config option listing all possible domains, so future domain additions would be as simple as adding the new domain to that list in the configuration.
Then the changes start becoming more frequent:
- In 2022, for my Terraform support project, a bunch of logic, including the list of supported backend domains, was moved from the frontend code to the backend. This also made it possible to dynamically change which projects can use which domain suffixes for their proxies.
- Then, early this year, I added support for zones restricted to a single project, because we wanted to use the proxy for the *.svc.toolforge.org Toolforge infrastructure domains instead of coming up with a new system for that use case. This also added support for using different TLS certificates for different domains, so that we would not have to have a single giant certificate with all the names.
- Finally, the last step was to add two new features to the proxy system: support for adding a proxy at the apex of a domain, as well as support for domains that are not managed in Designate (the Cloud VPS/OpenStack auth DNS service). In addition, we needed a bit of config to ensure http-01 challenges get routed to the acme-chief instance.
SystemSeed.com: Video: An Introduction to Human-Centred Design
Watch the recording of 'An Introduction to Human-Centred Design', presented by Elise West at DrupalCon Barcelona 2024
Tamsin Fox-Davies Thu, 10/31/2024 - 22:23

Qt Creator 15 Beta2 released
We are happy to announce the release of Qt Creator 15 Beta2!
John Cook: How hard is constraint programming?
I’ve been writing code for the Z3 SMT solver for several months now. Here are my findings.
Python is used here as the base language. Python/Z3 feels like a two-layer programming model—declarative code for Z3, imperative code for Python. In this it seems reminiscent of C++/CUDA programming for NVIDIA GPUs—in that case, mixed CPU and GPU imperative code. Either case is a clever combination of methodologies that is surprisingly fluent and versatile, albeit not a perfect blend of seamless conceptual cohesion.
Other comparisons:
- Both have two separate memory spaces (CUDA CPU/GPU memories for one; pure Python variables and Z3 variables for the other).
- Both can be tricky to debug. In earlier days, CUDA had no debugger, so one had to fall back to the trusty “printf” statement (for a while it didn’t even have that!). If the code crashed, you might get no output at all. To my knowledge, Z3 has no dedicated debugger. If the problem being solved comes back as satisfiable, you can print out the discovered model variables, but if satisfiability fails, you get very little information. Like some other novel platforms, something of a “black box.”
- In both cases, programmer productivity can be well served by developing custom abstractions. I developed a Python class to manage multidimensional arrays of Z3 variables; it was a huge time saver. A sketch of the idea follows this list.
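For illustration, this is roughly what such an abstraction could look like (a hypothetical reconstruction using the z3-solver package, not the author's actual class):

    from z3 import Distinct, Int, Solver, sat

    class Z3Grid:
        """Hypothetical helper: a named 2-D grid of Z3 integer variables."""
        def __init__(self, name, rows, cols):
            self.cells = [[Int(f"{name}_{r}_{c}") for c in range(cols)]
                          for r in range(rows)]

        def __getitem__(self, pos):
            r, c = pos
            return self.cells[r][c]

        def flat(self):
            return [v for row in self.cells for v in row]

    # Usage: a 3x3 grid of pairwise-distinct values in 1..9.
    grid = Z3Grid("g", 3, 3)
    solver = Solver()
    solver.add(Distinct(*grid.flat()))
    solver.add([v >= 1 for v in grid.flat()] + [v <= 9 for v in grid.flat()])
    if solver.check() == sat:
        print(solver.model()[grid[0, 0]])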
There are differences too, of course.
- In Python, “=” is assignment, but Z3 only has “==”, logical or numeric equality, not assignment per se. Variables are set once and can’t be changed (a sort of “write-once variables” programming model), as is natural to logic programming. A tiny example follows this list.
- Code speed optimization is challenging. Code modifications for Z3 constraints/variables can have extreme and unpredictable runtime effects, so it’s hard to optimize. Z3 is solving an NP-complete problem after all, so runtimes can theoretically increase massively. Speedups can be massive also; one round of changes I made gave 2000X speedup on a test problem. Runtime of CUDA code can be unpredictable to a lesser degree, depending on the PTX and SASS code generation phases and the aggressive code optimizations of the CUDA compiler. However, it seems easier to “see through” CUDA code, down to the metal, to understand expected performance, at least for smaller code fragments. The Z3 solver can output statistics of the solve, but these are hard to actionably interpret for a non-expert.
- Z3 provides many, many algorithmic tuning parameters (“tactics”), though it’s hard to reason about which ones to pick. Autotuners like FastSMT might help. Also, there have been some efforts to develop tools to visualize the solve process; these might help as well.
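To make the write-once, constraint-style “==” semantics concrete, here is a small illustrative sketch:

    from z3 import Int, Solver, sat

    x, y = Int("x"), Int("y")
    solver = Solver()
    # '==' states a relation that must hold; nothing is assigned. The solver
    # searches for values making all constraints true simultaneously.
    solver.add(x == y + 2, y >= 0, x <= 5)
    if solver.check() == sat:
        model = solver.model()
        print(model[x], model[y])  # one satisfying assignment, e.g. 2 0
    else:
        print("unsat")  # on failure, Z3 reports little beyond this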
It would be great to see more modern tooling support and development of community best practices to help support Z3 code developers.
The post How hard is constraint programming? first appeared on John D. Cook.

drunomics: Drupal 11 Released - Key Features and Modernised Technology
Preparing for the European Cyber Resilience Act (CRA)
In an era where digital security is paramount, the European Union is taking steps to improve cybersecurity legislation with the introduction of the Cyber Resilience Act (CRA). Now that the European Union has adopted the CRA, Qt Group continues to work towards making our products CRA compliant and supporting our customers with their compliance.
eGenix.com: PyDDF Python Herbst Sprint 2024
The following announces a Python sprint in Düsseldorf, Germany.
Python Meeting Herbst Sprint 2024 in Düsseldorf
Saturday, 09.11.2024, 10:00-18:00
Sunday, 10.11.2024, 10:00-18:00
Eviden / Atos Information Technology GmbH, Am Seestern 1, 40547 Düsseldorf
Information

The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend with the kind support of Eviden Deutschland. The sprint takes place on the weekend of 09./10.11.2024 at the Eviden / Atos office, Am Seestern 1, in Düsseldorf.

The following topic areas have already been suggested as ideas:

- AI/ML: image recognition with Azure Computer Vision
- AI/ML: extracting text and metadata from press pages with the help of a local LLM
- AI/ML: transcribing video/audio files with Whisper
- Kodi add-ons for ARD, ZDF and ARTE

Everything else, including registration, can be found on the Meetup sprint page:

IMPORTANT: Without registration we cannot prepare building access, so spontaneous registration on the day of the sprint will most likely not work.

Participants should also join the PyDDF Telegram group, since that is where we coordinate:

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel offers a good overview of the talks; we publish videos of the talks there after the meetings.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.
Marc-André Lemburg, eGenix.com
CXX-Qt 0.7 Release
We just released CXX-Qt version 0.7!
CXX-Qt is a set of Rust crates for creating bidirectional Rust ⇄ C++ bindings with Qt. It supports integrating Rust into C++ applications using CMake or building Rust applications with Cargo. CXX-Qt provides tools for implementing QObject subclasses in Rust that can be used from C++, QML, and JavaScript.
For 0.7, we have stabilized the cxx-qt bridge macro API, and there have been many internal refactors to ensure that we have a consistent baseline to support going forward. We encourage developers to reach out if they find any unclear areas or missing features, so we can put them on the roadmap, as this may be the last opportunity to adapt the API. In the next releases, we’re looking towards stabilizing cxx-qt-build and getting the cxx-qt-lib APIs ready for 1.0.
Check out the new release through the usual channels:
Some of the most notable developer-facing changes:

Stabilized #[cxx_qt::bridge] macro

CXX-Qt 0.7 reaches a major milestone by stabilizing the bridge macro that is at the heart of CXX-Qt. You can now depend on your CXX-Qt bridges to remain compatible with future CXX-Qt versions. As we’re still pre-1.0, we may still introduce very minor breaking changes to fix critical bugs in the edge cases of the API, but the vast majority of bridges should remain compatible with future versions.
This stabilization is also explicitly limited to the bridge API itself. Breaking changes may still occur in e.g. cxx-qt-lib, cxx-qt-build, and cxx-qt-cmake. We plan to stabilize those crates in the next releases.
Naming Changes

The handling of names internally has been refactored to ensure consistency across all usages. During this process, implicit automatic case conversion has been removed, so cxx_name and rust_name are now used to specify differing Rust and C++ names. Since the automatic case conversion is useful, it can be explicitly enabled per extern block using the attributes auto_cxx_name and auto_rust_name, while still complementing CXX. For more details on how these attributes can be used, visit the attributes page in the CXX-Qt book.
    // with 0.6 implicit automatic case conversion
    #[cxx_qt::bridge]
    mod ffi {
        unsafe extern "RustQt" {
            #[qobject]
            #[qproperty(i32, my_number)] // myNumber in C++
            type MyObject = super::MyObjectRust;

            fn my_method(self: &MyObject); // myMethod in C++
        }
    }

    // with 0.7 cxx_name / rust_name
    #[cxx_qt::bridge]
    mod ffi {
        unsafe extern "RustQt" {
            #[qobject]
            #[qproperty(i32, my_number, cxx_name = "myNumber")]
            type MyObject = super::MyObjectRust;

            #[cxx_name = "myMethod"]
            fn my_method(self: &MyObject);
        }
    }

    // with 0.7 auto_cxx_name / auto_rust_name
    #[cxx_qt::bridge]
    mod ffi {
        #[auto_cxx_name] // <-- enables automatic cxx_name generation within the `extern "RustQt"` block
        unsafe extern "RustQt" {
            #[qobject]
            #[qproperty(i32, my_number)] // myNumber in C++
            type MyObject = super::MyObjectRust;

            fn my_method(self: &MyObject); // myMethod in C++
        }
    }

cxx_file_stem Removal

In previous releases, the output filename of generated C++ files used the cxx_file_stem attribute of the CXX-Qt bridge. This has been changed to use the filename of the Rust source file, including the directory structure.
Previously, the code below would generate a C++ header path of my_file.cxxqt.h. After the changes, the cxx_file_stem must be removed and the generated C++ header path changes to crate-name/src/my_bridge.cxxqt.h. This follows a similar pattern to CXX.
    // crate-name/src/my_bridge.rs

    // with 0.6 a file stem was specified
    #[cxx_qt::bridge(cxx_file_stem = "my_file")]
    mod ffi { ... }

    // with 0.7 the file path is used
    #[cxx_qt::bridge]
    mod ffi { ... }

Build System Changes

The internals of the build system have changed so that dependencies are automatically detected and configured by cxx-qt-build, libraries can pass build information to cxx-qt-build, and a CXX-Qt CMake module is now available providing convenience wrappers around Corrosion. This means that the cxx-qt-lib-headers crate has been removed and only cxx-qt-lib is required; the separate -headers crates that existed before are no longer needed. Previously, some features were enabled by default in cxx-qt-lib. Now these are all opt-in. We have provided full and qt_full as a convenience to enable all features; however, we recommend opting in to only the specific features you need.
We hope to improve the API of cxx-qt-build in the next cycle to match the internal changes and become more modular.
Further Improvements

CXX-Qt can now be successfully built for WASM, with documented steps available in the book and CI builds for WASM to ensure continued support.
Locking generation on the C++ side for all methods has been removed, which simplifies generation and improves performance. Using queue from cxx_qt::CxxQtThread is still safe, as it provides locking, but it is up to the developer to avoid incorrect multi-threading in C++ code (as in the CXX crate). Note that Qt generally works well here, with the signal/slot mechanism working safely across threads.
As with most releases, there are more Qt types wrapped in cxx-qt-lib and various other changes are detailed in the CHANGELOG.
Make sure to subscribe to the KDAB YouTube channel, where we’ll post more videos on CXX-Qt in the coming weeks.
Thanks to all of our contributors that helped us with this release:

- Ben Ford
- Laurent Montel
- Matt Aber
- knox (aka @knoxfighter)
- Be Wilson
- Joshua Goins
- Alessandro Ambrosano
- Alexander Kiselev
- Alois Wohlschlager
- Darshan Phaldesai
- Jacob Alexander
- Sander Vocke
About KDAB
If you like this article and want to read similar material, consider subscribing via our RSS feed.
Subscribe to KDAB TV for similar informative short video content.
KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.
The post CXX-Qt 0.7 Release appeared first on KDAB.
Help fight the proprietary software monsters!
KDE’s yearly fundraiser is now live, with the theme of spooooky proprietary software. Go check it out — no, really! It’s great!
I think this one absolutely nails it, because the stories there are relatable. They describe common problems with proprietary software most of us have personally experienced in our journeys to the FOSS world, and how FOSS fixes them.
Let me share some of mine:
- When I was a kid, I liked to make movies with my friends and add wacky special effects using a program called AlamDV. I even bought a license to it! After a year, it broke and the developer released version 2, which I dutifully also bought a new license for. Unfortunately, none of my AlamDV 1 projects opened in it. They were lost to the wind.
- Similarly, I also used Apple’s iMovie editing app. At a certain point, they changed it completely to have a totally different UI and no longer open old projects. Still a kid, I never managed to figure out the new UI and all my old projects were lost forever.
- A lot of the digital art I made as a kid was saved in Apple’s .pict file format, which even they eventually dropped support for. When I moved to Linux, I had to write a script to open these files individually and take screenshots of them in order to not lose them forever.
- I’ve been able to consistently recycle older computers and keep them relevant with Plasma. Both of my kids have perfectly serviceable hand-me-down computers revitalized with Fedora KDE. My wife’s old 10 year-old laptop is a testbed for KDE Linux.
- My sister-in-law just last weekend was complaining to me about AI in Photoshop, and was very receptive to the idea of ditching Microsoft and Adobe software entirely. It’s a big turn-off to artists.
This stuff is real, and the work we do has significant impact. It’s not just a toy for nerds. It’s not a basement science project for bored tinkerers. It’s the way computers should be, and can be if enough of us donate our skills, time, and money towards the goal.
How will the fundraised money be used? Principally, to help KDE e.V. balance its budget and stop operating at a loss (about -110k last year, projected -70k this year) due to the legal requirement to spend down large lump-sum donations in a timely manner. We can sustain this level of deficit spending for a few more years, but of course would prefer not to. It’s been a tough environment for nonprofits, and you might have heard that the GNOME Foundation recently ran into financial trouble and had to cut back. We want to avoid that! The sooner we’re operating at a surplus again, the sooner we can expand our sponsorship of engineering work beyond its current level.
So go donate today, and make a difference in the most important movement in software today!
Lullabot: Transforming eBooks: From PDFs to Accessible Web Experiences
When it comes to digital content, accessibility isn't just a nice-to-have. It's essential. That's why we recently took on the challenge of transforming our eBook collection from PDFs into a fully accessible web format. We often help our clients clean up their PDFs, and absent very specific circumstances, we recommend avoiding them as web content. Judged by our own advice, our own website was lacking.
The Python Show: 49 - EdgeDB and Python with Yury Selivanov
In this episode of The Python Show Podcast, we welcome Yury Selivanov as our guest. Yury is a core CPython developer and co-founder of EdgeDB and MagicStack.
We chatted about many different topics, including the following:
Core Python development
EdgeDB and how it differs from relational databases
Python without the GIL
Python subinterpreters
Memhive
and more!
Learn more about our guest and the topics we talked about with the following links:
Yury’s GitHub page
EdgeDB on GitHub
EdgeDB’s website
PyCon 2024 - Yury Selivanov: Overcoming GIL with subinterpreters and immutability
An article about Memhive
DevCollaborative: Why and How to Install Google Search Console on Your Drupal or WordPress Website
Use Google Search Console to be a better listener by understanding which search queries are bringing visitors to your website.
Evolving Web: Is Drupal the Right Fit? T-Shirt Sizing for Your Next Website Project
As a member of the leadership team for Drupal CMS, the new product that makes Drupal more accessible to marketers and content teams, I’ve spent the last three months engaging with various teams about their CMS decisions. While I take notes on marketing tools, ease of use, benefits of open source, DXP capabilities, and SaaS options, the core of these conversations often revolves around people and culture.
Who Decides on a CMS?

The decision-maker for a CMS can vary: sometimes it’s a marketer or an IT professional, other times it’s the “head of digital,” or even an agency hired to handle the organization’s digital needs. Although many focus on features, decisions often hinge on feelings, prior experiences, and familiarity. Ultimately, the decisions reflect the experiences of those in the room.
This isn’t to say that the right technical fit isn’t important; rather, it often takes a backseat to personal experiences. It’s therefore crucial to communicate why Drupal aligns with an organization’s digital strategy based on its goals.
Let’s categorize websites into three types and discuss why Drupal suits each.
1. Cornerstone of Your Digital Strategy

Every organization needs a digital front door. For established brands, this digital presence serves as the foundation of their online strategy. A known brand must maintain consistent online expression, while an unknown brand needs to tell its story effectively, helping users recognize its voice and identity.
Users want to quickly understand if they’re in the right place and how to connect. They expect seamless integration with third-party tools and easy access to internal data.
Drupal excels here because it goes beyond basic content management, offering flexibility for both internal and external users. It supports:
- Integration with third-party tools and data management.
- Enterprise-grade workflows and content management.
- Custom features and transactions.
- Tailored information architecture.
- A blend of structured content and marketing pages.
2. CMS Platform
Many organizations manage a complex ecosystem of websites, often hindered by internal politics and multiple CMSs that lead to inconsistent branding.
A successful CMS platform balances flexibility with guidelines, making site creation easy while adhering to the organization’s branding and content strategy. It often requires standardization of third-party tools.
Drupal’s modularity simplifies standardization across websites. It supports:
- Configuration management to allow control over customization
- Flexibility that enables governance at both the platform and individual site levels
- Optimized hosting tools designed for multi-site management (e.g., Pantheon Custom Upstreams, Acquia Site Factory), a benefit of Drupal’s wide enterprise adoption
3. Marketing Microsite Designed to Scale
Not every organization is large; some startups aim to create single-purpose websites quickly. These organizations need to build fast without sacrificing security or accessibility. Often, marketers seek easy drag-and-drop tools for rapid site creation.
While Drupal has traditionally been overlooked for quick projects, Drupal CMS provides a solution that fosters familiarity among a broader audience, because it lowers the barrier to entry and speeds up the timeline to launch a website. When marketers can create a website quickly, it enhances creativity and ownership, and frees up more time to focus on content and marketing strategy. Drupal CMS will be especially important for making the case for using Drupal for these types of projects.
Why Choose Drupal?

Drupal allows for the rapid launch of marketing sites, which can later scale into a digital cornerstone for an established brand. In particular, Drupal CMS will support:
- Built-in AI tools for site building that free up time for content strategy and for handling the influx of feature requests and content decisions that site teams often face
- Allowing small sites to leverage the same modules and recipes available to larger sites
- No limitations on scaling up a small website to accommodate more content, authors, or functionality
Increased usage of Drupal leads to a better experience for everyone involved—developers, site builders, marketers, and content teams. As an open-source platform, Drupal's growth benefits the broader community, including government and non-profit organizations. Improving Drupal enhances a public good rather than enriching proprietary solutions.
If you're looking to talk more about Drupal and Drupal CMS, don’t hesitate to get in touch.
If Drupal wins, we all win.