Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

The Drop Times: Drupal is Alive and Well in India: Tim Doyle

Fri, 2024-03-15 09:49
Tim Doyle, CEO of the Drupal Association, reflects on a vibrant gathering of industry leaders and young talents in India, showcasing the potential and enthusiasm within the country's Drupal community.
Categories: FLOSS Project Planets

Acquia Developer Portal Blog: Running Nightwatch tests in Acquia Code Studio

Fri, 2024-03-15 09:49

Nightwatch is an automated testing framework for web applications and websites. It uses the W3C WebDriver API to simulate real user interactions in a web browser environment, such as Chrome, to test the complete flow of an application, end-to-end.

In this tutorial, we'll see how we can extend the Acquia AutoDevOps template that comes out-of-the-box with Code Studio to also run Nightwatch tests for our application so we can ensure critical features remain functional as we further develop and maintain it.

  1. A simple Nightwatch test

    To start, let's first add a simple Nightwatch test to verify Drupal is up and running by navigating to the Drupal login page and

Categories: FLOSS Project Planets

Acquia Developer Portal Blog: EvolveDrupal Guide

Fri, 2024-03-15 09:49
About the Host, Evolving Web

Evolving Web is the agency partner that empowers organizations to adapt to the ever-changing digital landscape. We’re a team of 90+ technologists, storytellers and creatives who believe that a dynamic and adaptable digital presence is the most powerful way for clients to create meaningful connections. Through strategy, technology, design, marketing, and training, we set stories in motion through digital platforms designed for growth. 

Evolving Web was founded in 2007 by two passionate Drupalists on the ideals that define the open source ethos: transparency, collaboration, and continuous improvement. Then and now, we believe in the power of open source to drive innovation and growth. Our team regularly contributes to the open source community through code, documentation, sponsorship, strategic initiatives, and events like EvolveDrupal. 

Categories: FLOSS Project Planets

Acquia Developer Portal Blog: Getting Started with Acquia Cloud IDE: A Code Editor as a Service

Fri, 2024-03-15 09:49

There are two main challenges developers face when setting up and managing their local environments: 

  1. Lack of local resources (You need a powerful machine to run Docker.)
  2. Lots of time spent configuring environments, especially if you want to build something more sophisticated

While it may be fun experimenting with Linux and everything related to DevOps, sometimes you just don’t have the time. Sound familiar?

Don’t take my word for it. Check out this Twitter poll conducted by a former colleague:

The results could not be more clear. Developers want simplicity. Fair, right? You have enough stress and problems to solve as you work to meet tight deadlines.

Categories: FLOSS Project Planets

Lullabot: Lullabot Podcast: Ease of Updates, Compounded Peace of Mind

Thu, 2024-03-14 23:00

We look into the challenges and solutions in managing Drupal and other website updates across hundreds of websites, a topic brought to life by members of Lullabot's Support and Maintenance team and Jenna Tollerson from the State of Georgia.

Learn how Lullabot deals with update processes, drastically reducing manual labor while enhancing security and efficiency through automation.

"It's almost like adding another whole developer to your team!"

Categories: FLOSS Project Planets

Dries Buytaert: Building my own temperature and humidity monitor

Thu, 2024-03-14 09:04

Last fall, we toured the Champagne region in France, famous for its sparkling wines. We explored the ancient, underground cellars where Champagne undergoes its magical transformation from grape juice to sparkling wine. These cellars, often 30 meters deep and kilometers long, maintain a constant temperature of around 10-12°C, providing the perfect conditions for aging and storing Champagne.

25 meters underground in a champagne tunnel, which often stretches for miles/kilometers.

After sampling various Champagnes, we returned home with eight cases to store in our home's basement. However, unlike those deep cellars, our basement is just a few meters deep, prompting a simple question that sent me down a rabbit hole: how does our basement's temperature compare?

Rather than just buying a thermometer, I decided to build my own "temperature monitoring system" using open hardware and custom-built software. After all, who needs a simple solution when you can spend evenings tinkering with hardware, sensors, wires and writing your own software? Sometimes, more is more!

The basic idea is this: track the temperature and humidity of our basement every 15 minutes and send this information to a web service. This web service analyzes the data and alerts us if our basement becomes too cold or warm.

I launched this monitoring system around Christmas last year, so it's been running for nearly three months now. You can view the live temperature and historical data trends at https://dri.es/sensors. Yes, publishing our basement's temperature online is a bit quirky, but it's all in good fun.

A screenshot of my basement temperature dashboard.

So far, the temperature in our basement has been ideal for storing wine. However, I expect it will change during the summer months.

In the rest of this blog post, I'll share how I built the client that collects and sends the data, as well as the web service backend that processes and visualizes that data.

Hardware used

For this project, I bought:

  1. Adafruit ESP32-S3 Feather: A microcontroller board with Wi-Fi and Bluetooth capabilities, serving as the central processing unit of my project.
  2. Adafruit SHT4x sensor: A high-accuracy temperature and humidity sensor.
  3. 3.7v 500mAh battery: A small and portable power source.
  4. STEMMA QT / Qwiic JST SH 4-pin cable: To connect the sensor to the board without soldering.

The total hardware cost was $32.35 USD. I like Adafruit a lot, but it's worth noting that their products often come at a higher cost. You can find comparable hardware for as little as $10-15 elsewhere. Adafruit's premium cost is understandable, considering how much valuable content they create for the maker community.

An ESP32-S3 development board (middle) linked to an SHT41 temperature and humidity sensor module (left) and powered by a battery pack (right).

Client code for Adafruit ESP32-S3 Feather

I developed the client code for the Adafruit ESP32-S3 Feather using the Arduino IDE, a widely used platform for developing and uploading code to Arduino-compatible boards.

The code measures temperature and humidity every 15 minutes, connects to WiFi, and sends this data to https://dri.es/sensors, my web service endpoint.

One of my goals was to create a system that could operate for a long time without needing to recharge the battery. The ESP32-S3 supports a "deep sleep" mode in which it powers down almost all of its functions, except for the clock and memory. By placing the ESP32-S3 into deep sleep mode between measurements, I was able to significantly reduce power consumption.

Now that you understand the high-level design goals, including deep sleep mode, I'll share the complete client code below. It includes detailed code comments, making it self-explanatory.

[code c]#include "Adafruit_SHT4x.h"
#include "Adafruit_MAX1704X.h"
#include "WiFiManager.h"
#include "ArduinoJson.h"
#include "HTTPClient.h"

// The Adafruit_SHT4x sensor is a high-precision, temperature and humidity
// sensor with I2C interface.
Adafruit_SHT4x sht4 = Adafruit_SHT4x();

// The Adafruit ESP32-S3 Feather comes with a built-in MAX17048 LiPoly / LiIon
// battery monitor. The MAX17048 provides accurate monitoring of the battery's
// voltage. Utilizing the Adafruit library not only helps us obtain the raw
// voltage data from the battery cell, but also converts this data into a more
// intuitive battery percentage or charge level. We will pass on the battery
// percentage to the web service endpoint, which can visualize it or use it to
// send notifications when the battery needs recharging.
Adafruit_MAX17048 maxlipo;

void setup() {
  Serial.begin(115200);

  // Wait for the serial connection to establish before proceeding further.
  // This is crucial for boards with native USB interfaces. Without this loop,
  // initial output sent to the serial monitor is lost. This code is not
  // needed when running on battery.
  //delay(1000);

  // Generates a unique device ID from a segment of the MAC address.
  // Since the MAC address is permanent and unchanged after reboots,
  // this guarantees the device ID remains consistent. To achieve a
  // compact ID, only a specific portion of the MAC address is used,
  // specifically the range between 0x10000 and 0xFFFFF. This range
  // translates to a hexadecimal string of a fixed 5-character length,
  // giving us roughly 1 million unique IDs. This approach balances
  // uniqueness with compactness.
  uint64_t chipid = ESP.getEfuseMac();
  uint32_t deviceValue = ((uint32_t)(chipid >> 16) & 0x0FFFFF) | 0x10000;
  char device[6]; // 5 characters for the hex representation + the null terminator.
  sprintf(device, "%x", deviceValue); // Use '%x' for lowercase hex letters

  // Initialize the SHT4x sensor:
  if (sht4.begin()) {
    Serial.println(F("SHT4 temperature and humidity sensor initialized."));
    sht4.setPrecision(SHT4X_HIGH_PRECISION);
    sht4.setHeater(SHT4X_NO_HEATER);
  } else {
    Serial.println(F("Could not find SHT4 sensor."));
  }

  // Initialize the MAX17048 sensor:
  if (maxlipo.begin()) {
    Serial.println(F("MAX17048 battery monitor initialized."));
  } else {
    Serial.println(F("Could not find MAX17048 battery monitor!"));
  }

  // Insert a short delay to ensure the sensors are ready and their data is stable:
  delay(200);

  // Retrieve temperature and humidity data from SHT4 sensor:
  sensors_event_t humidity, temp;
  sht4.getEvent(&humidity, &temp);

  // Get the battery percentage and calibrate if it's over 100%:
  float batteryPercent = maxlipo.cellPercent();
  batteryPercent = (batteryPercent > 100) ? 100 : batteryPercent;

  WiFiManager wifiManager;

  // Uncomment the following line to erase all saved WiFi credentials.
  // This can be useful for debugging or reconfiguration purposes.
  // wifiManager.resetSettings();

  // This WiFi manager attempts to establish a WiFi connection using known
  // credentials, stored in RAM. If it fails, the device will switch to Access
  // Point mode, creating a network named "Temperature Monitor". In this mode,
  // connect to this network, navigate to the device's IP address (default IP
  // is 192.168.4.1) using a web browser, and a configuration portal will be
  // presented, allowing you to enter new WiFi credentials. Upon submission,
  // the device will reboot and try connecting to the specified network with
  // these new credentials.
  if (!wifiManager.autoConnect("Temperature Monitor")) {
    Serial.println(F("Failed to connect to WiFi ..."));

    // If the device fails to connect to WiFi, it will restart to try again.
    // This approach is useful for handling temporary network issues. However,
    // in scenarios where the network is persistently unavailable (e.g. router
    // down for more than an hour, consistently poor signal), the repeated
    // restarts and WiFi connection attempts can quickly drain the battery.
    ESP.restart();

    // Mandatory delay to allow the restart process to initiate properly:
    delay(1000);
  }

  // Send collected data as JSON to the specified URL:
  sendJsonData("https://dri.es/sensors", device, temp.temperature, humidity.relative_humidity, batteryPercent);

  // WiFi consumes significant power so turn it off when done:
  WiFi.disconnect(true);

  // Enter deep sleep for 15 minutes. The ESP32-S3's deep sleep mode minimizes
  // power consumption by powering down most components, except the RTC. This
  // mode is efficient for battery-powered projects where constant operation
  // isn't needed. When the device wakes up after the set period, it runs
  // setup() again, as the state isn't preserved.
  Serial.println(F("Going to sleep for 15 minutes ..."));
  ESP.deepSleep(15 * 60 * 1000000); // 15 mins * 60 secs/min * 1,000,000 μs/sec.
}

bool sendJsonData(const char* url, const char* device, float temperature, float humidity, float battery) {
  StaticJsonDocument<200> doc;

  // Round floating-point values to one decimal place for efficient data
  // transmission. This approach reduces the JSON payload size, which is
  // important for IoT applications running on batteries.
  doc["device"] = device;
  doc["temperature"] = String(temperature, 1);
  doc["humidity"] = String(humidity, 1);
  doc["battery"] = String(battery, 1);

  // Serialize JSON to a string:
  String jsonData;
  serializeJson(doc, jsonData);

  // Initialize an HTTP client with provided URL:
  HTTPClient httpClient;
  httpClient.begin(url);
  httpClient.addHeader("Content-Type", "application/json");

  // Send a HTTP POST request:
  int httpCode = httpClient.POST(jsonData);

  // Close the HTTP connection:
  httpClient.end();

  // Print debug information to the serial console:
  Serial.println("Sent '" + jsonData + "' to " + String(url) + ", return code " + httpCode);

  return (httpCode == 200);
}

void loop() {
  // The ESP32-S3 resets and runs setup() after waking up from deep sleep,
  // making this continuous loop unnecessary.
}[/code]

Further optimizing battery usage

When I launched my thermometer around Christmas 2023, the battery was at 88%. Today, it is at 52%. Some quick math suggests it's using approximately 12% of its battery per month. Given its current rate of usage, it needs recharging about every 8 months.

Connecting to the WiFi and sending data are by far the main power drains. To extend the battery life, I could send updates less frequently than every 15 minutes, only send them when there is a change in temperature (which is often unchanged or only different by 0.1°C), or send batches of data points together. Any of these methods would work for my needs, but I haven't implemented them yet.

Alternatively, I could hook the microcontroller up to a 5V power adapter, but where is the fun in that? It goes against the project's "more is more" principle.

Handling web service requests

With the client code running on the ESP32-S3 and sending sensor data to https://dri.es/sensors, the next step is to set up a web service endpoint to receive this incoming data.

As I use Drupal for my website, I implemented the web service endpoint in Drupal. Drupal uses Symfony, a popular PHP framework, for large parts of its architecture. This combination offers an easy but powerful way to implement web services, similar to what you'd find in other modern server-side web frameworks like Laravel or Django.

Here is what my Drupal routing configuration looks like:

[code yaml]sensors.sensor_data:
  path: '/sensors'
  methods: [POST]
  defaults:
    _controller: '\Drupal\sensors\Controller\SensorMonitorController::postSensorData'
  requirements:
    _permission: 'access content'[/code]

The above configuration directs Drupal to send POST requests made to https://dri.es/sensors to the postSensorData() method of the SensorMonitorController class.

The implementation of this method handles request authentication, validates the JSON payload, and saves the data to a MariaDB database table. Pseudo-code:

[code php]public function postSensorData(Request $request): JsonResponse {
  $content = $request->getContent();
  $data = json_decode($content, TRUE);

  // Validate the JSON payload:
  …

  // Authenticate the request:
  …

  $device = DeviceFactory::getDevice($data['device']);
  if ($device) {
    $device->recordSensorEvent($data);
  }

  return new JsonResponse(['message' => 'Thank you!']);
}[/code]

For testing your web service, you can use tools like cURL:

[code bash]$ curl -X POST -H "Content-Type: application/json" -d '{"device":"0xdb123", "temperature":21.5, "humidity":42.5, "battery":90.0}' https://localhost/sensors[/code]

While cURL is great for quick tests, I use PHPUnit tests for automated testing in my CI/CD workflow. This ensures that everything keeps working, even when upgrading Drupal, Symfony, or other components of my stack.
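
What those tests assert depends on your setup, but as a rough sketch, a smoke test along these lines, using PHPUnit and Guzzle against a locally reachable copy of the site (the base URL, device ID, and readings below are made-up test values), covers the happy path:

[code php]<?php

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

/**
 * Smoke test for the /sensors endpoint (illustrative only; the base URL,
 * device ID, and sensor values are made-up test data).
 */
class SensorEndpointTest extends TestCase {

  public function testPostSensorDataReturns200(): void {
    // 'verify' => FALSE tolerates the self-signed certificates common in
    // local development environments.
    $client = new Client(['base_uri' => 'https://localhost', 'verify' => FALSE]);

    $response = $client->post('/sensors', [
      'json' => [
        'device' => '0xdb123',
        'temperature' => 21.5,
        'humidity' => 42.5,
        'battery' => 90.0,
      ],
    ]);

    // The controller returns a simple JSON acknowledgement on success.
    $this->assertSame(200, $response->getStatusCode());
    $body = json_decode((string) $response->getBody(), TRUE);
    $this->assertArrayHasKey('message', $body);
  }

}[/code]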

Storing sensor data in a database

The primary purpose of $device->recordSensorEvent() in SensorMonitorController::postSensorData() is to store sensor data into a SQL database. So, let's delve into the database design.

My main design goals for the database backend were:

  1. Instead of storing every data point indefinitely, only keep the daily average, minimum, maximum, and the latest readings for each sensor type across all devices.
  2. Make it easy to add new devices and new sensors in the future. For instance, if I decide to add a CO2 sensor for our bedroom one day (a decision made in my head but not yet pitched to my better half), I want that to be easy.

To this end, I created the following MariaDB table:

[code sql]CREATE TABLE sensor_data (
  date DATE,
  device VARCHAR(255),
  sensor VARCHAR(255),
  avg_value DECIMAL(5,1),
  min_value DECIMAL(5,1),
  max_value DECIMAL(5,1),
  min_timestamp DATETIME,
  max_timestamp DATETIME,
  readings SMALLINT NOT NULL,
  UNIQUE KEY unique_stat (date, device, sensor)
);[/code]

A brief explanation for each field:

  • date: The date for each sensor reading. It doesn't include a time component as we aggregate data on a daily basis.
  • device: The device ID of the device providing the sensor data, such as 'basement' or 'bedroom'.
  • sensor: The type of sensor, such as 'temperature', 'humidity' or 'co2'.
  • avg_value: The average value of the sensor readings for the day. Since individual readings are not stored, a rolling average is calculated and updated with each new reading using the formula: avg_value = avg_value + (new_value - avg_value) / new_total_readings. This method can accumulate minor rounding errors, but simulations show these are negligible for this use case.
  • min_value and max_value: The daily minimum and maximum sensor readings.
  • min_timestamp and max_timestamp: The exact moments when the minimum and maximum values for that day were recorded.
  • readings: The number of readings (or measurements) taken throughout the day, which is used for calculating the rolling average.

In essence, the recordSensorEvent() method needs to determine if a record already exists for the current date. Depending on this determination, it will either insert a new record or update the existing one.

In Drupal this process is streamlined with the merge() function in Drupal's database layer. This function handles both inserting new data and updating existing data in one step.

[code php]private function updateDailySensorEvent(string $sensor, float $value): void {
  $timestamp = \Drupal::time()->getRequestTime();
  $date = date('Y-m-d', $timestamp);
  $datetime = date('Y-m-d H:i:s', $timestamp);

  $connection = Database::getConnection();
  $result = $connection->merge('sensor_data')
    ->keys([
      'device' => $this->id,
      'sensor' => $sensor,
      'date' => $date,
    ])
    ->fields([
      'avg_value' => $value,
      'min_value' => $value,
      'max_value' => $value,
      'min_timestamp' => $datetime,
      'max_timestamp' => $datetime,
      'readings' => 1,
    ])
    ->expression('avg_value', 'avg_value + ((:new_value - avg_value) / (readings + 1))', [':new_value' => $value])
    ->expression('min_value', 'LEAST(min_value, :value)', [':value' => $value])
    ->expression('max_value', 'GREATEST(max_value, :value)', [':value' => $value])
    ->expression('min_timestamp', 'IF(LEAST(min_value, :value) = :value, :timestamp, min_timestamp)', [':value' => $value, ':timestamp' => $datetime])
    ->expression('max_timestamp', 'IF(GREATEST(max_value, :value) = :value, :timestamp, max_timestamp)', [':value' => $value, ':timestamp' => $datetime])
    ->expression('readings', 'readings + 1')
    ->execute();
}[/code]

Here is what the query does:

  • It checks if a record for the current sensor and date exists.
  • If not, it creates a new record with the sensor data, including the initial average, minimum, maximum, and latest value readings, along with the timestamp for these values.
  • If a record does exist, it updates the record with the new sensor data, adjusting the average value, and updating minimum and maximum values and their timestamps if the new reading is a new minimum or maximum.
  • The function also increments the count of readings.

For those not using Drupal, similar functionality can be achieved with MariaDB's INSERT ... ON DUPLICATE KEY UPDATE command, which allows for the same conditional insert or update logic based on whether the specified unique key already exists in the table.
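
For illustration, a hand-written sketch of that single-statement form for the table above might look like this. The values are made up, and the timestamp assignments are placed before the min/max columns so their comparisons still see the previous values:

[code sql]INSERT INTO sensor_data
  (date, device, sensor, avg_value, min_value, max_value, min_timestamp, max_timestamp, readings)
VALUES
  ('2024-01-01', '0xdb123', 'temperature', 21.0, 21.0, 21.0, '2024-01-01 00:00:00', '2024-01-01 00:00:00', 1)
ON DUPLICATE KEY UPDATE
  avg_value = avg_value + ((VALUES(avg_value) - avg_value) / (readings + 1)),
  min_timestamp = IF(VALUES(min_value) < min_value, VALUES(min_timestamp), min_timestamp),
  min_value = LEAST(min_value, VALUES(min_value)),
  max_timestamp = IF(VALUES(max_value) > max_value, VALUES(max_timestamp), max_timestamp),
  max_value = GREATEST(max_value, VALUES(max_value)),
  readings = readings + 1;[/code]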

Here are example queries, extracted from MariaDB's General Query Log to help you get started:

[code sql]INSERT INTO sensor_data (device, sensor, date, min_value, min_timestamp, max_value, max_timestamp, readings)
VALUES ('0xdb123', 'temperature', '2024-01-01', 21, '2024-01-01 00:00:00', 21, '2024-01-01 00:00:00', 1);

UPDATE sensor_data
SET min_value = LEAST(min_value, 21),
    min_timestamp = IF(LEAST(min_value, 21) = 21, '2024-01-01 00:00:00', min_timestamp),
    max_value = GREATEST(max_value, 21),
    max_timestamp = IF(GREATEST(max_value, 21) = 21, '2024-01-01 00:00:00', max_timestamp),
    readings = readings + 1
WHERE device = '0xdb123' AND sensor = 'temperature' AND date = '2024-01-01';[/code]

Generating graphs

With the data securely stored in the database, the next step involved generating the graphs. To accomplish this, I wrote some custom PHP code that generates Scalable Vector Graphics (SVGs).

Given that this blog post is already quite long, I'll spare you the details. For now, those curious can use the 'View source' feature in their web browser to examine the SVGs on the thermometer page.

Conclusion

It's fun how a visit to the Champagne cellars in France sparked an unexpected project. Choosing to build a thermometer rather than buying one allowed me to dive back into an old passion for hardware and low-level software.

I also like taking control of my own data and software. It gives me a sense of control and creativity.

As Drupal's project lead, using Drupal for an Internet-of-Things (IoT) backend brought me unexpected joy. I just love the power and flexibility of open-source platforms like Drupal.

As a next step, I hope to design and 3D print a case for my thermometer, something I've never done before. And as mentioned, I'm also considering integrating additional sensors. Stay tuned for updates!

Categories: FLOSS Project Planets

Dries Buytaert: Drupal adventures in Japan and Australia

Thu, 2024-03-14 09:04

Next week, I'm traveling to Japan and Australia. I've been to both countries before and can't wait to return – they're among my favorite places in the world.

My goal is to connect with the local Drupal community in each country, discussing the future of Drupal, learning from each other, and collaborating.

I'll also be connecting with Acquia's customers and partners in both countries, sharing our vision, strategy and product roadmap. As part of that, I look forward to spending some time with the Acquia teams as well – about 20 employees in Japan and 35 in Australia.

I'll present at a Drupal event in Tokyo the evening of March 14th at Yahoo! Japan.

While in Australia, I'll be attending Drupal South, held at the Sydney Masonic Centre from March 20-22. I'm excited to deliver the opening keynote on the morning of March 20th, where I'll delve into Drupal's past, present, and future.

I look forward to being back in Australia and Japan, reconnecting with old friends and the local communities.

Categories: FLOSS Project Planets

Dries Buytaert: Two years later: is my Web3 website still standing?

Thu, 2024-03-14 09:04

Two years ago, I launched a simple Web3 website using IPFS (InterPlanetary File System) and ENS (Ethereum Name Service). Back then, Web3 tools were getting a lot of media attention and I wanted to try it out.

Since I set up my Web3 website two years ago, I basically forgot about it. I didn't update it or pay attention to it for two years. But now that we hit the two-year mark, I'm curious: is my Web3 website still online?

At that time, I also stated that Web3 was not fit for hosting modern web applications, except for a small niche: static sites requiring high resilience and infrequent content updates.

I was also curious to explore the evolution of Web3 technologies to see if they became more applicable for website hosting.

My original Web3 experiment

In my original blog post, I documented the process of setting up what could be called the "Hello World" of Web3 hosting. I stored an HTML file on IPFS, ensured its availability using "pinning services", and made it accessible using an ENS domain.

For those with a basic understanding of Web3, here is a summary of the steps I took to launch my first Web3 website two years ago:

  1. Purchased an ENS domain name: I used a crypto wallet with Ethereum to acquire dries.eth through the Ethereum Name Service, a decentralized alternative to the traditional DNS (Domain Name System).
  2. Uploaded an HTML File to IPFS: I uploaded a static HTML page to the InterPlanetary File System (IPFS), which involved running my own IPFS node and utilizing various pinning services like Infura, Fleek, and Pinata. These pinning services ensure that the content remains available online even when my own IPFS node is offline.
  3. Accessed the website: I confirmed that my website was accessible through IPFS-compatible browsers.
  4. Mapped my webpage to my domain name: As the last step, I linked my IPFS-hosted site to my ENS domain dries.eth, making the web page accessible under an easy domain name.

If the four steps above are confusing to you, I recommend reading my original post. It is over 2,000 words, complete with screenshots and detailed explanations of the steps above.

Checking the pulse of various Web3 services

As the first step in my check-up, I wanted to verify if the various services I referenced in my original blog post are still operational.

The results, listed below, are really encouraging: Ethereum, ENS, IPFS, Filecoin, Infura, Fleek, Pinata, and web3.storage are all operational.

The two main technologies – ENS and IPFS – are both actively maintained and developed. This indicates that Web3 technology has built a robust foundation.

  • ENS: A blockchain-based naming protocol offering DNS for Web3, mapping domain names to Ethereum addresses. Still around in February 2024? Yes.
  • IPFS: A peer-to-peer protocol for storing and sharing data in a distributed file system. Still around in February 2024? Yes.
  • Filecoin: A blockchain-based storage network and cryptocurrency that incentivizes data storage and replication. Still around in February 2024? Yes.
  • Infura: Provides tools and infrastructure to manage content on IPFS, and other tools for developers to connect their applications to blockchain networks and deploy smart contracts. Still around in February 2024? Yes.
  • Fleek: A platform for building websites using IPFS and ENS. Still around in February 2024? Yes.
  • Pinata: Provides tools and infrastructure to manage content on IPFS, and more recently Farcaster applications. Still around in February 2024? Yes.
  • web3.storage: Provides tools and infrastructure to manage content on IPFS with support for Filecoin. Still around in February 2024? Yes.

Is my Web3 website still up?

Seeing all these Web3 services operational is encouraging, but the ultimate test is to check if my Web3 webpage, dries.eth, remained live. It's one thing for these services to work, but another for my site to function properly. Here is what I found in a detailed examination:

  1. Domain ownership verification: A quick check on etherscan.io confirmed that dries.eth is still registered to me. Relief!
  2. ENS registrar access: Using my crypto wallet, I could easily log into the ENS registrar and manage my domains. I even successfully renewed dries.eth as a test.
  3. IPFS content availability: My webpage is still available on IPFS, thanks to having pinned it two years ago. Logging into Fleek and Pinata, I found my content on their admin dashboards.
  4. Web3 and ENS gateway access: I can visit dries.eth using a Web3 browser, and also via an IPFS-compatible ENS gateway like https://dries.eth.limo/ – a privacy-centric service, new since my initial blog post.

The verdict? Not only are these Web3 services still operational, but my webpage also continues to work!

This is particularly noteworthy given that I haven't logged into these services, performed any maintenance, or paid any hosting fees in two years (the pinning services I'm using have a free tier).

Visit my Web3 page yourself

For anyone interested in visiting my Web3 page (perhaps your first Web3 visit?), there are several methods to choose from, each with a different level of Web3-ness.

  • Use a Web3-enabled browser: Browsers such as Brave and Opera offer built-in ENS and IPFS support. They can resolve ENS addresses and interpret IPFS addresses, making it as easy to navigate IPFS content as if it were traditional web content served via HTTP or HTTPS.
  • Install a Web3 browser extension: If your favorite browser does not support Web3 out of the box, adding a browser extension like MetaMask can help you access Web3 applications. MetaMask works with Chrome, Firefox, and Edge. It enables you to use .eth domains for doing Ethereum transactions or for accessing content on IPFS.
  • Access through an ENS gateway: For those looking for the simplest way to access Web3 content without installing anything new, using an ENS gateway, such as eth.limo, is the easiest method. This gateway maps ENS domains to DNS, offering direct navigation to Web3 sites like mine at https://dries.eth.limo/. It serves as a simple bridge between Web2 (the conventional web) and Web3.

Streamlining content updates with IPNS

In my original post, I highlighted various challenges, such as the limitations for hosting dynamic applications, the cost of updates, and the slow speed of these updates. Although these issues still exist, my initial analysis was conducted with an incomplete understanding of the available technology. I want to delve deeper into these limitations, and refine my previous statements.

Some of these challenges stem from the fact that IPFS operates as a "content-addressed network". Unlike traditional systems that use URLs or file paths to locate content, IPFS uses a unique hash of the content itself. This hash is used to locate and verify the content, but also to facilitate decentralized storage.

While the principle of addressing content by a hash is super interesting, it also introduces some complications: whenever content is updated, its hash changes, making it tricky to link to the updated content. Specifically, every time I updated my Web3 site's content, I had to update my ENS record and pay a transaction fee on the Ethereum network.

At the time, I wasn't familiar with the InterPlanetary Name System (IPNS). IPNS, not to be confused with IPFS, addresses this challenge by assigning a mutable name to content on IPFS. You can think of IPNS as providing an "alias" or "redirect" for IPFS addresses: the IPNS address always stays the same and points to the latest IPFS address. It effectively eliminates the necessity of updating ENS records with each content change, cutting down on expenses and making the update process more automated and efficient.

To leverage IPNS, you have to take the following steps:

  1. Upload your HTML file to IPFS and receive an IPFS hash.
  2. Publish this hash to IPNS, creating an IPNS hash that directs to the latest IPFS hash.
  3. Link your ENS domain to this IPNS hash. Since the IPNS hash remains constant, you only need to update your ENS record once.

Without IPNS, updating content involved:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the ENS record with the new IPFS hash, which costs some Ether and can take a few minutes.

With IPNS, updating content involves:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the IPNS record to reference this new hash, which is free and almost instant.
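
To make that concrete, here is roughly what the update looks like with the IPFS command-line tools. The hashes are placeholders, and it assumes a local IPFS node that holds the key used for the original IPNS publish:

[code bash]# Upload the revised file; the CID in the output is the new IPFS hash.
$ ipfs add index.html
> added <new-ipfs-hash> index.html

# Point the existing IPNS name at the new hash. The IPNS address itself stays
# the same, so the ENS record does not need to change.
$ ipfs name publish /ipfs/<new-ipfs-hash>
> Published to <ipns-hash>: /ipfs/<new-ipfs-hash>[/code]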

Although IPNS is a faster and more cost-effective approach compared to the original method, it still carries a level of complexity. There is also a minor runtime delay due to the extra redirection step. However, I believe this tradeoff is worth it.

Updating my Web3 site to use IPNS

With this newfound knowledge, I decided to use IPNS for my own site. I generated an IPNS hash using both the IPFS desktop application (see screenshot) and IPFS' command line tools:

[code bash]$ ipfs name publish /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q
> Published to k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a: /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q[/code]

The IPFS Desktop application showing my index.html file with an option to 'Publish to IPNS'.

After generating the IPNS hash, I was able to visit my site in Brave using the IPFS protocol at ipfs://bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q, or via the IPNS protocol at ipns://k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a.

My Web3 site in Brave using IPNS.

Next, I updated the ENS record for dries.eth to link to my IPNS hash. This change cost me 0.0011 ETH (currently $4.08 USD), as shown in the Etherscan transaction. Once the transaction was processed, dries.eth began directing to the new IPNS address.

A transaction confirmation on the ENS website, showing a successful update for dries.eth.

Rolling back my IPNS record in ENS

Unfortunately, my excitement was short-lived. A day later, dries.eth stopped working. IPNS records, it turns out, need to be kept alive – a lesson learned the hard way.

While IPFS content can be persisted through "pinning", IPNS records require periodic "republishing" to remain active. Essentially, the network's Distributed Hash Table (DHT) may drop IPNS records after a certain amount of time, typically 24 hours. To prevent an IPNS record from being dropped, the owner must "republish" it before the DHT forgets it.
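
For anyone who does run their own always-on node, keeping a record alive can be as simple as republishing it on a schedule. A crontab sketch, with a placeholder hash and assuming the IPFS daemon and publishing key live on that machine:

[code bash]# Republish the IPNS record twice a day, well within the ~24 hour expiry window.
0 */12 * * * ipfs name publish /ipfs/<current-ipfs-hash>[/code]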

I found out that the pinning services I use – Dolphin, Fleek and Pinata – don't support IPNS republishing. Looking into it further, it turns out few IPFS providers do.

During my research, I discovered Filebase, a small Boston-based company with fewer than five employees that I hadn't come across before. Interestingly, they provide both IPFS pinning and IPNS republishing. However, to pin my existing HTML file and republish its IPNS hash, I had to subscribe to their service at a cost of $20 per month.

Faced with the challenge of keeping my IPNS hash active, I found myself at a crossroads: either fork out $20 a month for a service like Filebase that handles IPNS republishing for me, or take on the responsibility of running my own IPFS node.

Of course, the whole point of decentralized storage is that people run their own nodes. However, considering the scope of my project – a single HTML file – the effort of running a dedicated node seemed disproportionate. I'm also running my IPFS node on my personal laptop, which is not always online. Maybe one day I'll try setting up a dedicated IPFS node on a Raspberry Pi or similar setup.

Ultimately, I decided to switch my ENS record back to the original IPFS link. This change, documented in the Etherscan transaction, cost me 0.002 ETH (currently $6.88 USD).

Although IPNS works, or can work, it just didn't work for me. Despite the setback, the whole experience was a great learning journey.

(Update: A couple of days after publishing this blog post, someone kindly recommended https://dwebservices.xyz/, claiming their free tier includes IPNS republishing. Although I haven't personally tested it yet, a quick look at their about page suggests they might be a promising solution.)

Web3 remains too complex for most people

Over the past two years, Web3 hosting hasn't disrupted the mainstream website hosting market. Despite the allure of Web3, mainstream website hosting is simple, reliable, and meets the needs of nearly all users.

Even though a major upgrade of the Ethereum network, its transition to a Proof of Stake (PoS) consensus mechanism, reduced energy consumption by over 99%, environmental concerns about the carbon footprint of blockchain technologies continue to create challenges for the widespread adoption of Web3. (Note: ENS operates on the blockchain but IPFS does not.)

As I went through the check-up, I discovered islands of innovation and progress. Wallets and ENS domains got easier to use. However, the overall process of creating a basic website with IPFS and ENS remains relatively complex compared to the simplicity of Web2 hosting.

The need for a SQL-compatible Web3 database

Modern web applications like those built with Drupal and WordPress rely on a technology stack that includes a file system, a domain name system (e.g. DNS), a database (e.g. MariaDB or MySQL), and a server-side runtime environment (e.g. PHP).

While IPFS and ENS offer decentralized alternatives for the first two, the equivalents for databases and runtime environments are less mature. This limits the types of applications that can easily move from Web2 to Web3.

A major breakthrough would be the development of a decentralized database that is compatible with SQL, but currently, this does not seem to exist. Ensuring data integrity and confidentiality across multiple nodes without a central authority, while meeting the throughput demands of modern web applications, may simply be too hard a problem to solve.

After all, blockchains, as decentralized databases, have been in development for over a decade, yet lack support for the SQL language and fall short in speed and efficiency required for dynamic websites.

The need for a distributed runtime

Another critical component for modern websites is the runtime environment, which executes the server-side logic of web applications. Traditionally, this has been the domain of PHP, Python, Node.js, Java, etc.

WebAssembly (WASM) could emerge as a potential solution. It could make for an interesting decentralized solution as WASM binaries can be hosted on IPFS.

However, when WASM runs on the client-side – i.e. in the browser – it can't deliver the full capabilities of a server-side environment. This limitation makes it challenging to fully replicate traditional web applications.

So for now, Web3's applications are quite limited. While it's possible to host static websites on IPFS, dynamic applications requiring database interactions and server-side processing are difficult to transition to Web3.

Bridging the gap between Web2 and Web3

In the short term, the most likely path forward is blending decentralized and traditional technologies. For example, a website could store its static files on IPFS while relying on traditional Web2 solutions for its dynamic features.

Looking to the future, initiatives like OrbitDB's peer-to-peer database, which integrates with IPFS, show promise. However, OrbitDB lacks compatibility with SQL, meaning applications would need to be redesigned rather than simply transferred.

Web3 site hosting remains niche

Even the task of hosting static websites, which don't need a database or server-side processing, is relatively niche within the Web3 ecosystem.

As I wrote in my original post: "In its current state, IPFS and ENS offer limited value to most website owners, but tremendous value to a very narrow subset of all website owners." This observation remains accurate today.

IPFS and ENS stand out for their strengths in censorship resistance and reliability. However, for the majority of users, the convenience and adequacy of Web2 for hosting static sites often outweigh these benefits.

The key to broader acceptance of new technologies, like Web3, hinges on either discovering new mass-market use cases or significantly enhancing the user experience for existing ones. Web3 has not found a universal application or surpassed Web2 in user experience.

The popularity of SaaS platforms underscores this point. They dominate not because they're the most resilient or robust options, but because they're the most convenient. Despite the benefits of resilience and autonomy offered by Web3, most individuals opt for less resilient but more convenient SaaS solutions.

Conclusion

Despite the billions invested in Web3 and notable progress, its use for website hosting still has significant limitations.

The main challenge for the Web3 community is to either develop new, broadly appealing applications or significantly improve the usability of existing technologies.

Website hosting falls into the category of existing use cases.

Unfortunately, Web3 remains mostly limited to static websites, as it does not yet offer robust alternatives to SQL databases and server-side runtime.

Even within the limited scope of static websites, improvements to the user experience have been marginal, focused on individual parts of the technology stack. The overall end-to-end experience remains complex.

Nonetheless, the fact that my Web3 page is still up and running after two years is encouraging, showing the robustness of the underlying technology, even if its current use remains limited. I've grown quite fond of IPFS, and I hope to do more useful experiments with it in the future.

All things considered, I don't see Web3 taking the website hosting world by storm any time soon. That said, over time, Web3 could become significantly more attractive and functional. All in all, keeping an eye on this space is definitely fun and worthwhile.

Categories: FLOSS Project Planets

Dries Buytaert: Acquia a Leader in the 2024 Gartner Magic Quadrant for Digital Experience Platforms

Thu, 2024-03-14 09:04

For the fifth year in a row, Acquia has been named a Leader in the Gartner Magic Quadrant for Digital Experience Platforms (DXP).

Acquia received this recognition from Gartner based on both the completeness of product vision and ability to execute.

Central to our vision and execution is a deep commitment to openness. Leveraging Drupal, Mautic and open APIs, we've built the most open DXP, empowering customers and partners to tailor our platform to their needs.

Our emphasis on openness extends to ensuring our solutions are accessible and inclusive, making them available to everyone. We also prioritize building trust through data security and compliance, integral to our philosophy of openness.

We're proud to be included in this report and thank our customers and partners for their support and collaboration.

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Jim Murphy, Mike Lowndes, John Field - February 21, 2024.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Categories: FLOSS Project Planets

Dries Buytaert: Satoshi Nakamoto's Drupal adventure

Thu, 2024-03-14 09:04

Martti Malmi, an early contributor to the Bitcoin project, recently shared a fascinating piece of internet history: an archive of private emails between himself and Satoshi Nakamoto, Bitcoin's mysterious founder.

The identity of Satoshi Nakamoto remains one of the biggest mysteries in the technology world. Despite extensive investigations, speculative reports, and numerous claims over the years, the true identity of Bitcoin's creator(s) is still unknown.

Martti Malmi released these private conversations in reaction to a court case focused on the true identity of Satoshi Nakamoto and the legal entitlements to the Bitcoin brand and technology.

The emails provide some interesting details into Bitcoin's early days, and might also provide some new clues about Satoshi's identity.

Satoshi and Martti worked together on a variety of different things, including the relaunch of the Bitcoin website. Their goal was to broaden public understanding and awareness of Bitcoin.

And to my surprise, the emails reveal they chose Drupal as their preferred CMS! (Thanks to Jeremy Andrews for making me aware.)

The emails detail Satoshi's hands-on involvement, from installing Drupal themes, to configuring Drupal's .htaccess file, to exploring Drupal's multilingual capabilities.

At some point in the conversation, Satoshi expressed reservations about Drupal's forum module.

For what it is worth, this proves that I'm not Satoshi Nakamoto. Had I been, I'd have picked Drupal right away, and I would never have questioned Drupal's forum module.

Jokes aside, as Drupal's Founder and Project Lead, learning about Satoshi's use of Drupal is a nice addition to Drupal's rich history. Almost every day, I'm inspired by the unexpected impact Drupal has.

Categories: FLOSS Project Planets

Dries Buytaert: The little HTTP Header Analyzer that could

Thu, 2024-03-14 09:04

HTTP headers play a crucial part in making your website fast and secure. For that reason, I often inspect HTTP headers to troubleshoot caching problems or review security settings.
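
For a quick manual check, a curl one-liner like this (the URL and header names are just examples) shows the caching and security headers a server sends back:

[code bash]$ curl -sI https://dri.es | grep -iE 'cache-control|expires|strict-transport-security|content-security-policy'[/code]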

The complexity of the HTTP standard and the challenge of remembering all the best practices led me to develop an HTTP Header Analyzer four years ago.

It is pretty simple: enter a URL, and the tool will analyze the headers sent by your webserver, CMS or web application. It then explains these headers, offers a score, and suggests possible improvements.

For a demonstration, click https://dri.es/headers?url=https://www.reddit.com. As the URL suggests, it will analyze the HTTP headers of Reddit.com.

I began this as a weekend project in the early days of COVID, seeing it as just another addition to my toolkit. At the time, I simply added it to my projects page but never announced or mentioned it on my blog.

So why write about it now? Because I happened to check my log files and, lo and behold, the little scanner that could clocked in more than 5 million scans, averaging over 3,500 scans a day.

So four years and five million scans later, I'm finally announcing it to the world!

If you haven't tried my HTTP header analyzer, check it out. It's free, easy to use, requires no sign-up, and is built to help improve your website's performance and security.

The crawler works with all websites, but naturally, I added some special checks for Drupal sites.

I don't have any major plans for the crawler. At some point, I'd like to move it to its own domain, as it feels a bit out of place as part of my personal website. But for now, that isn't a priority.

For the time being, I'm open to any feedback or suggestions and will commit to making any necessary corrections or improvements.

It's rewarding to know that this tool has made thousands of websites faster and safer! It's also a great reminder to share your work, even in the simplest way – you never know the impact it could have.

Categories: FLOSS Project Planets

Dries Buytaert: Acquia retrospective 2023

Thu, 2024-03-14 09:04

At the beginning of every year, I publish a retrospective that looks back at the previous year at Acquia. I also discuss the changing dynamics in our industry, focusing on Content Management Systems (CMS) and Digital Experience Platforms (DXP).

If you'd like, you can read all of my retrospectives for the past 15 years: 2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009.

Resilience and growth amid market turbulence

At the beginning of 2023, interest rates were 4.5%. Technology companies, investors, and PE firms were optimistic, anticipating modest growth. However, as inflation persisted and central banks raised rates more than expected, this optimism dwindled.

The first quarter also saw a regional bank crisis, notably the fall of Silicon Valley Bank, which many tech firms, including Acquia, relied on. Following these events, the market's pace slowed, and the early optimism shifted to a more cautious outlook.

Despite these challenges, Acquia thrived. We marked 16 years of revenue increase, achieved record renewal rates, and continued our five-year trend of rising profitability. 2023 was another standout year for Acquia.

One of our main objectives for 2023 was to expand our platform through M&A. However, tighter credit lending, valuation discrepancies, and economic uncertainty complicated these efforts. By the end of 2023, with the public markets rebounding, the M&A landscape showed slight improvement.

In November, we announced Acquia's plan to acquire Monsido, a platform for improving website accessibility, content quality, SEO, privacy, and performance. The acquisition closed last Friday. I'm excited about expanding the value we offer to our customers and look forward to welcoming Monsido's employees to Acquia.

Working towards a safer, responsible and inclusive digital future

Looking ahead to 2024, I anticipate these to be the dominant trends in the CMS and DXP markets:

  • Converging technology ecosystems: MACH and Jamstack are evolving beyond their original approaches. As a result, we'll see their capabilities converge with one another, and with those of traditional CMSes. I wrote extensively about this in Jamstack and MACH's journey towards traditional CMS concepts.
  • Navigating the cookie-less future: Marketers will need to navigate a cookie-less future. This means organizations will depend more and more on data they collect from their own digital channels (websites, newsletters, video platforms, etc).
  • Digital decentralization: The deterioration of commercial social media platforms has been a positive development in my opinion. I anticipate users will continue to reduce their time on these commercial platforms. The steady shift towards open, decentralized alternatives like Mastodon, Nostr, and personal websites is a welcome trend.
  • Growth in digital accessibility: The importance of accessibility is growing and will become even more important in 2024 as organizations prepare for enforcement of the European Accessibility Act in 2025. This trend isn't just about responding to legislation; it's about making sure digital experiences are inclusive to everyone, including individuals with disabilities.
  • AI's impact on digital marketing and websites: As people start getting information directly from Artificial Intelligence (AI) tools, organic website traffic will decline. Just like with the cookie-less future, organizations will need to focus more on growing their own digital channels with exclusive and personalized content.
  • AI's impact on website building: We'll witness AI's evolution from assisting in content production to facilitating the building of applications. Instead of laboriously piecing together landing pages or campaigns with a hundred clicks, users will simply be able to guide the process with AI prompts. AI will evolve to become the new user interface for complex tasks.
  • Cybersecurity prioritization: As digital landscapes expand, so do vulnerabilities. People and organizations will become more protective of their personal and privacy data, and will demand greater control over the sharing and storage of their information. This means a growing focus on regulation, more strict compliance rules, automatic software updates, AI-driven monitoring and threat detection, passwordless authentication, and more.
  • Central content and data stores: Organizations are gravitating more and more towards all-in-one platforms that consolidate data and content. This centralization enables businesses to better understand and anticipate customer needs, and deliver better, personalized customer experiences.

While some of these trends suggest a decline in the importance of traditional websites, other trends point towards a positive future for websites. On one side, the rise of AI in information gathering will decrease the need for traditional websites. On the other side, the decline of commercial social media and the shift to a cookie-less future suggest that websites will continue to be important, perhaps even more so.

What I like most about many of these trends is that they are shaping a more intuitive, inclusive, and secure digital future. Their impact on end-users will be profound, making every interaction more personal, accessible, and secure.

However, I suspect the ways in which we do digital marketing will need to change quite a bit. Marketing teams will need to evolve how they generate leads. They'll have to use privacy-friendly methods to develop strong customer relationships and offer more value than what AI tools provide.

This means getting closer to customers with content that is personal and relevant. The use of intent data, first-party data and predictive marketing for determining the "next best actions" will continue to grow in importance.

It also means that more content may transition into secure areas such as newsletters, members-only websites, or websites that tailor content dynamically for each user, where it can't be mined by AI tools.

All this bodes well for CMSes, Customer Data Platforms (CDPs), personalization software, Account-Based Marketing (ABM), etc. By utilizing these platforms, marketing teams can better engage with individuals and offer more meaningful experiences. Acquia is well-positioned based on these trends.

Reaffirming our DXP strategy, with a focus on openness

On a personal front, my title expanded from CTO to CTO & Chief Strategy Officer. Since Acquia's inception, I've always played a key role in shaping both our technology and business strategies. This title change reflects my ongoing responsibilities.

Until 2018, Acquia mainly focused on CMS. In 2018, we made a strategic shift from being a leader in CMS to becoming a leader in DXP. We have greatly expanded our product portfolio since then. Today, Acquia's DXP includes CMS, Digital Asset Management (DAM), Customer Data Platform (CDP), Marketing Automation, Digital Experience Optimization, and more. We've been recognized as leaders in DXP by analyst firms including Gartner, GigaOm, Aragon Research and Omdia.

As we entered 2023, we felt we had successfully executed our big 2018 strategy shift. With my updated title, I spearheaded an effort to revise our corporate strategy and figure out what comes next. The result was that we reaffirmed our commitment to our core DXP market, with the goal of creating the best "Open DXP" in the market.

We see "Open" as a key differentiator. As part of our updated strategy, we explicitly defined what "Open" means to us. While this topic deserves a blog post on its own, I will touch on it here.

Being "Open" means we actively promote integrations with third-party vendors. When you purchase an Acquia product, you're not just buying a tool; you're also buying into a technology ecosystem.

However, our definition of "Open" extends far beyond mere integrations. It's also about creating an inclusive environment where everyone is empowered to participate and contribute to meaningful digital experiences in a safe and secure manner. Our updated strategy, while still focused on the DXP ecosystem, champions empowerment, inclusivity, accessibility, and safety.

A slide from our strategy presentation, summarizing our definition of an Open DXP. The definition is the cornerstone of Acquia's "Why"-statement.

People who have followed me for a while know that I've long advocated for an Open Web, promoting inclusivity, accessibility, and safety. It's inspiring to see Acquia fully embrace these principles, a move I hope will inspire not just me, but our employees, customers, and partners too. It's not just a strategy; it's a reflection of our core values.

It probably doesn't come as a surprise that our updated strategy aligns with the trends I outlined above, many of which also point towards a safer, more responsible, and inclusive digital future. Our enthusiasm for the Monsido acquisition is also driven by these core principles.

Needless to say, our strategy update is about much more than a commitment to openness. Our commitment to openness drives a lot of our strategic decisions. Here are a few key examples to illustrate our direction.

  • Expanding into the mid-market: Acquia has primarily catered to the enterprise and upper mid-market sectors. We're driven by the belief that an open platform, dedicated to inclusivity, accessibility, and safety, enhances the web for everyone. Our commitment to contributing to a better web is motivating us to broaden our reach, making expanding into the mid-market a logical strategic move.
  • Expanding partnerships, empowering co-creation: Our partnership program is well-established with Drupal, and we're actively expanding partnerships for Digital Asset Management (DAM), CDP, and marketing automation. We aim to go beyond a standard partner program by engaging more deeply in co-creation with our partners, similar to what we do in the Open Source community. The goal is to foster an open ecosystem where everyone can contribute to developing customer solutions, embodying our commitment to empowerment and collaboration. We've already launched a marketplace in 2023, Acquia Exchange, featuring more than 100 co-created solutions, with the goal of expanding to 500 by the end of 2024.
  • Be an AI-fueled organization: In 2023, we launched numerous AI features and we anticipate introducing even more in 2024. Acquia already adheres to responsible AI principles. This aligns with our definition of "Open", emphasizing accountability and safety for the AI systems we develop. We want to continue to be a leader in this space.
Stronger together

We've always been very focused on our greatest asset: our people. This year, we welcomed exceptional talent across the organization, including two key additions to our Executive Leadership Team (ELT): Tarang Patel, leading Corporate Development, and Jennifer Griffin Smith, our Chief Market Officer. Their expertise has already made a significant impact.

Eight awards for "Best Places to Work 2023" in various categories such as IT, mid-size workplaces, female-friendly environments, wellbeing, and appeal for millennials, across India, the UK, and the USA.

In 2023, we dedicated ourselves to redefining and enhancing the Acquia employee experience, committing daily to its principles through all our programs. This focus, along with our strong emphasis on diversity, equity, and inclusion (DEI), has cultivated a culture of exceptional productivity and collaboration. As a result, we've seen not just record-high employee retention rates but also remarkable employee satisfaction and engagement. Our efforts have earned us various prestigious "Best Place to Work" awards.

Customer-centric excellence, growth, and renewals

Our commitment to delivering great customer experiences is evident in the awards and recognition we received, many of which are influenced by customer feedback. These accolades include recognition on platforms like TrustRadius and G2, as well as the prestigious 2023 CODiE Award. Acquia received 32 awards for leadership in various categories across its products.

As mentioned earlier, we delivered consistently excellent, and historically high, renewal rates throughout 2023. It means our customers are voting with their feet (and wallets) to stay with Acquia.

Furthermore, we achieved remarkable growth within our customer base with record rates of expansion growth. Not only did customers choose to stay with Acquia, they chose to buy more from Acquia as well.

To top it all off, we experienced a substantial increase in the number of customers who were willing to serve as references for Acquia, endorsing our products and services to prospects.

Many of the notable customer stories for 2023 came from some of the world's most recognizable organizations, including:

  • Nestlé: With thousands of brand sites hosted on disparate technologies, Nestlé brand managers had difficulty maintaining, updating, and securing brand assets globally. Not only was it a challenge to standardize and govern the multiple brands, it was costly to maintain resources for each technology and inefficient with work being duplicated across sites. Today, Nestlé uses Drupal, Acquia Cloud Platform and Acquia Site Factory to face these challenges. Nearly all (90%) of Nestlé sites are built using Drupal. Across the brand's entire portfolio of sites, approximately 60% are built on a shared codebase – made possible by Acquia and Acquia Site Factory.
  • Novartis: The Novartis product sites in the U.S. were fragmented across multiple platforms, with different approaches and capabilities and varying levels of technical debt. This led to uncertainty in the level of effort and time to market for new properties. Today, the Novartis platform built with Acquia and EPAM has become a model within the larger Novartis organization for how a design system can seamlessly integrate with Drupal to build a decoupled front end. The new platform allows Novartis to create new or move existing websites in a standardized design framework, leading to more efficient development cycles and more functionality delivered in each sprint.
  • US Drug Enforcement Administration: The U.S. DEA wanted to create a campaign site to increase public awareness regarding the increasing danger of fake prescription pills laced with fentanyl. Developed with Tactis and Acquia, the campaign website One Pill Can Kill highlights the lethal nature of fentanyl. The campaign compares real and fake pills through videos featuring parents and teens who share their experiences with fentanyl. It also provides resources and support for teens, parents, and teachers and discusses the use of Naloxone in reversing the effects of drug overdose.
  • Cox Automotive: Cox Automotive uses first-party data through Acquia Campaign Studio for better targeted marketing. With their Automotive Marketing Platform (AMP) powered by Acquia, they access real-time data and insights, delivering personalized messages at the right time. The results? Dealers using AMP see consumers nine times more likely to purchase within 45 days and a 14-fold increase in sales gross profit ROI.

I'm proud of outcomes like these: they show how valuable our DXP is to our customers.

Product innovation

In 2023, we remained focused on solving problems for our current and future customers. We use both quantitative and qualitative data to assess areas of opportunities and run hypothesis-driven experiments with design prototypes, hackathons, and proofs-of-concept. This approach has led to hundreds of improvements across our products, both by our development teams and through partnerships. Below are some key innovations that have transformed the way our customers operate:

  • We released many AI features in 2023, including AI assistants and automations for Acquia DAM, Acquia CDP, Acquia Campaign Studio, and Drupal. This includes: AI assist during asset creation in Drupal and Campaign Studio, AI-generated descriptions for assets and products in DAM, auto-tagging in DAM with computer vision, next best action/channel predictions in CDP, ML-powered customer segmentation in CDP, and much more.
  • Our Drupal Acceleration Team (DAT) worked with the Drupal community on a major upgrade of the Drupal field UI, which makes it significantly faster and more user-friendly to perform content modeling. We also open sourced Acquia Migrate Accelerate as part of the run-up to the Drupal 7 community end-of-life in January 2025. Finally, DAT contributed to a number of major ongoing initiatives including Project Browser, Automatic Updates, Page Building, Recipes, and more that will be seen in later versions of Drupal.
  • We launched a new trial experience for Acquia Cloud Platform, our Drupal platform. Organizations can now explore Acquia's hosting and developer tools to see how their Drupal applications perform on our platform.
  • Our Kubernetes-native Drupal hosting platform backed by AWS, Acquia Cloud Next, continued to roll out to more customers. Over two-thirds of our customers are now enjoying Acquia Cloud Next, which provides them the highest levels of performance, self-healing, and dynamic scaling. We've seen a 50% decrease in critical support tickets since transitioning customers to Acquia Cloud Next, all while maintaining an impressive uptime record of 99.99% or higher.
  • Our open source marketing automation tool, Acquia Campaign Studio, is now running on Acquia Cloud Next as its core processing platform. This consolidation benefits everyone: it streamlines and accelerates innovation for us while enabling our customers to deliver targeted and personalized messages at a massive scale.
  • We decided to make Mautic a completely independent Open Source project, letting it grow and change freely. We've remained the top contributor ever since.
  • Marketers can now easily shape the Acquia CDP data model using low-code tools, custom attributes and custom calculations features. This empowers all Acquia CDP users, regardless of technical skill, to explore new use cases.
  • Acquia CDP's updated architecture enables nearly limitless elasticity, which allows the platform to scale automatically based on demand. We put this to the test during Black Friday, when our CDP efficiently handled billions of events. Our new architecture has led to faster, more consistent processing times, with speeds improving by over 70%.
  • With Snowflake as Acquia's data backbone, Acquia customers can now collaborate on their data within their organization and across business units. Customers can securely share and access governed data while preserving privacy, offering them advanced data strategies and solutions.
  • Our DAM innovation featured 47 updates and 13 new integrations. These updates included improved Product Information Management (PIM) functionality, increased accessibility, and a revamped search experience. Leveraging AI, we automated the generation of alt-text and product descriptions, which streamlines content management. Additionally, we established three partnerships to enhance content creation, selection, and distribution in DAM: Moovly for AI-driven video creation and translation, Vizit for optimizing content based on audience insights, and Syndic8 for distributing visual and product content across online commerce platforms.
  • With the acquisition of Monsido and new partnerships with VWO (tools for optimizing website engagement and conversions) and Conductor (SEO platform), Acquia DXP now offers an unparalleled suite of tools for experience optimization. Acquia already provided the best tools to build, manage and operate websites. With these additions, Acquia DXP also offers the best solution for experience optimization.
  • Acquia also launched Acquia TV, a one-stop destination for all things digital experience. It features video podcasts, event highlights, product breakdowns, and other content from a diverse collection of industry voices. This is a great example of how we use our own technology to connect more powerfully with our audiences. It's something our customers strive to do every day.
Conclusion

In spite of the economic uncertainties of 2023, Acquia had a remarkable year. We achieved our objectives, overcame challenges, and delivered outstanding results. I'm grateful to be in the position that we are in.

Our achievements in 2023 underscore the importance of putting our customers first and nurturing exceptional teams. Alongside effective management and financial responsibility, these elements fuel ongoing growth, irrespective of economic conditions.

Of course, none of our results would be possible without the support of our customers, our partners, and the Drupal and Mautic communities. Last but not least, I'm grateful for the dedication and hard work of all Acquians who made 2023 another exceptional year.

Categories: FLOSS Project Planets

Dries Buytaert: The Watchmaker's Approach to Web Development

Thu, 2024-03-14 09:04

Since 1999, I've been consistently working on this website, making it one of my longest-standing projects. Even after all these years, the satisfaction of working on my website remains strong. Remarkable, indeed.

During rare moments of calm — be it a slow holiday afternoon, a long flight home, or the early morning stillness — I'm often drawn to tinkering with my website.

When working on my website, I often make small tweaks and improvements. Much like a watchmaker meticulously fine-tuning the gears of an antique clock, I pay close attention to details.

This holiday, I improved the lazy loading of images in my blog posts, leading to a perfect Lighthouse score. A perfect score isn't necessary, but it shows the effort and care I put into my website.

Screenshot of Lighthouse scores via https://pagespeed.web.dev/.

I also validated my RSS feeds, uncovering a few opportunities for improvement. Like a good Belgian school boy, I promptly implemented these improvements, added new PHPUnit tests and integrated these into my CI/CD pipeline. Some might consider this overkill for a personal site, but for me, it's about mastering the craft, adhering to high standards, and building something that is durable.
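
As an illustration of what such a test can look like (this is my own sketch, not the actual test from this site; the feed URL and class name are placeholders), a PHPUnit test can fetch the feed, confirm it is well-formed XML, and check that the required RSS channel elements are present:

[code php]
<?php

use PHPUnit\Framework\TestCase;

final class RssFeedTest extends TestCase {

  public function testFeedIsWellFormedAndComplete(): void {
    // Fetch the feed (placeholder URL; a local fixture file works too).
    $xml = file_get_contents('https://example.com/rss.xml');
    $this->assertNotFalse($xml, 'The feed could not be fetched.');

    // Collect libxml errors instead of emitting warnings.
    libxml_use_internal_errors(TRUE);
    $feed = simplexml_load_string($xml);
    $this->assertNotFalse($feed, 'The feed is not well-formed XML.');

    // RSS 2.0 requires a channel with a title, link, and description.
    $this->assertSame('rss', $feed->getName());
    $this->assertNotEmpty((string) $feed->channel->title);
    $this->assertNotEmpty((string) $feed->channel->link);
    $this->assertNotEmpty((string) $feed->channel->description);
  }

}
[/code]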

Last year, I added 135 new photos to my website, a way for me to document my adventures and family moments. As the year drew to a close, I made sure all new photos have descriptive alt-texts, ensuring they're accessible to all. Writing alt-texts can be tedious, yet it's these small but important details that give me satisfaction.

Just like the watchmaker working on an antique watch, it's not just about keeping time better; it's about cherishing the process and craft. There is something uniquely calming about slowly iterating on the details of a website. I call it The Watchmaker's Approach to Web Development, where the process holds as much value as the result.

I'm thankful for my website as it provides me a space where I can create, share, and unwind. Why share all this? Perhaps to encourage more people to dive into the world of website creation and maintenance.

Categories: FLOSS Project Planets

Dries Buytaert: Effortless inspecting of Drupal database queries

Thu, 2024-03-14 09:04

Drupal's database abstraction layer is great for at least two reasons. First, it ensures compatibility with various database systems like MySQL, MariaDB, PostgreSQL and SQLite. Second, it improves security by preventing SQL injection attacks.

My main concern with using any database abstraction layer is the seemingly reduced control over the final SQL query. I like to see and understand the final query.
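
To make that trade-off concrete, here is a small illustration of what the abstraction layer looks like from the Drupal side (my sketch, not from the original post; the table and field names are just examples). Values passed to condition() become bound placeholders, which is the injection protection mentioned above. Casting a Select object to a string reveals the generated SQL with placeholder names, but not the bound values, which is exactly the gap the General Query Log fills.

[code php]
// A query built with Drupal's database abstraction layer
// (example table and fields).
$query = \Drupal::database()->select('node_field_data', 'n')
  ->fields('n', ['nid', 'title'])
  ->condition('n.status', 1)
  ->condition('n.type', 'article');

// Casting the Select object to a string prints the SQL with placeholder
// names, roughly:
// SELECT n.nid AS nid, n.title AS title
// FROM {node_field_data} n
// WHERE (n.status = :db_condition_placeholder_0) AND (n.type = :db_condition_placeholder_1)
print (string) $query;

// Execute the query and key the result by nid => title.
$titles = $query->execute()->fetchAllKeyed();
[/code]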

To inspect and debug database queries, I like to use MariaDB's General Query Log. When enabled, this feature captures every query directly in a text file.

I like this approach for a few reasons:

  • It directly captures the query in MariaDB instead of Drupal. This provides the most accurate view of the final SQL queries.
  • It enables real-time monitoring of all queries in a separate terminal window, removing the need to insert debug statements into code.
  • It records both successful and unsuccessful queries, making it a handy tool for debugging purposes.

If you're interested in trying it out, here is how I set up and use MariaDB's General Query Log.

First, log in as the MySQL super user. Since I'm using DDEV for my development, the command to do that looks like this:

[code bash]$ ddev mysql -u root -proot[/code]

After logging in as the MySQL super user, you need to set a few global variables:

[code shell]
SET global general_log = 1;
SET global log_output = 'file';
SET global general_log_file = '/home/dries/queries.txt';
[/code]

This will start logging all queries to a file named 'queries.txt' in my home directory.

DDEV employs containers, isolated environments designed for operating specific software such as databases. To view the queries, you must log into the database container.

[code bash]$ ddev ssh -s db[/code]

Now, you can easily monitor the queries in real-time from a separate terminal window:

[code bash]$ tail -f /home/dries/queries.txt[/code]

This is a simple yet effective way to monitor all database queries that Drupal generates.

Categories: FLOSS Project Planets

Dries Buytaert: The new old: Jamstack and MACH's journey towards traditional CMS concepts

Thu, 2024-03-14 09:04

In recent years, new architectures like MACH and Jamstack have emerged in the world of Content Management Systems (CMS) and Digital Experience Platforms (DXP).

In some ways, they have challenged traditional models. As a result, people sometimes ask for my take on MACH and Jamstack. I've mostly refrained from sharing my perspective to avoid controversy.

However, recognizing the value of diverse viewpoints, I've decided to share some of my thoughts. I hope it contributes positively to the ongoing evolution of these technologies.

The Jamstack is embracing its inner CMS

Jamstack, born in 2015, began as an approach for building static sites. The term Jamstack stands for JavaScript, APIs and Markup. Jamstack is an architectural approach that decouples the web experience layer from content and business logic. The web experience layer is often pre-rendered as static pages (markup), served via a Content Delivery Network (CDN).

By 2023, the Jamstack community underwent a considerable transformation. It evolved significantly from its roots, increasingly integrating traditional CMS features, such as "content previews". This shift is driven by the need to handle more complex websites and to make the Jamstack more user-friendly for marketers, moving beyond its developer-centric origins.

Netlify, a leader in the Jamstack community, has acknowledged this shift. As part of that, they are publicly moving away from the term "Jamstack" towards being a "composable web platform".

Many traditional CMSes, including Drupal, have long embraced the concept of "composability". For example, Drupal has been recognized as a composable CMS for two decades and offers advanced composable capabilities.

This expansion reflects a natural progression in technology, where software tends to grow more complex and multifaceted over time, often referred to as the law of increasing complexity. What starts simple becomes more complex over time.

I believe the Jamstack will continue adding traditional CMS features such as menu management, translation workflows, and marketing technology integrations until they more closely resemble traditional CMSes.

Wake-up call: CMSes are hybrid now

The Jamstack is starting to look more like traditional CMSes, not only in features, but also in architecture. Traditional CMSes can work in two main ways: "coupled" or "integrated", where they separate the presentation layer from the business logic using an approach called Model-View-Controller (MVC), or "decoupled", where the front-end and back-end are split using web service APIs, like the Jamstack does.

Modern CMSes integrate well with various JavaScript frameworks, have GraphQL backends, can render static pages, and much more.

This combination of both methods is known as "hybrid". Most traditional CMSes have adopted this hybrid approach. For example, Drupal has championed a headless approach for over a decade, predating both the terms "Jamstack" and "Headless".

Asserting that traditional CMSes are "monolithic" and "outdated" is a narrow-minded view held by people who have overlooked their evolution. In reality, today's choice is between a purely Headless or a Hybrid CMS.

Redefining "Jamstack" beyond overstated claims

As the Jamstack becomes more versatile, the original "Jamstack" name and definition feel restrictive. The essence of Jamstack, once centered on creating faster, secure websites with a straightforward deployment process, is changing.

It's time to move beyond some of the original value proposition, not just because of its evolution, but also because many of the Jamstack's advantages have been overstated for years:

In short, Jamstack, initially known for its static site generation and simplicity, is growing into something more dynamic and complex. This is a positive evolution driven by the market's need. This evolution narrows the gap between Jamstack and traditional CMSes, though they still differ somewhat – Jamstack offers a pure headless approach, while traditional CMSes offer both headless and integrated options.

Last but not least, this shift is making Jamstack more similar to MACH, leading to my opinion on the latter.

The key difference between MACH and traditional CMSes

Many readers, especially people in the Drupal community, might be unaware of MACH. MACH is an acronym for Microservices, API-first, Cloud-native SaaS, Headless. It specifies an architectural approach where each business function is a distinct cloud service, often created and maintained by separate vendors and integrated by the end user.

Imagine creating an online store with MACH certified solutions: you'd use different cloud services for each aspect – one for handling payments, another for your product catalog, and so on. A key architectural concept of MACH is that each of these services can be developed, deployed, and managed independently.

At first glance, that doesn't look too different from Drupal or WordPress. Platforms like WordPress and Drupal are already integration-rich, with Drupal surpassing 50,000 different modules in 2023. In fact, some of these modules integrate Drupal with MACH services. For example, when building an online store in Drupal or WordPress, you'd install modules that connect with a payment service, SaaS-based product catalog, and so on.

At a glance, this modular approach seems similar to MACH. Yet, there is a distinct contrast: Drupal and WordPress extend their capabilities by adding modules to a "core platform", while MACH is a collection of independent services, operating without relying on an underlying core platform.

MACH could benefit from "monolithic" features

Many in the MACH community applaud the absence of a core platform, often dismissing platforms that have one as "monolithic". Calling a CMS like Drupal "monolithic" is, in my opinion, a misleading and simplistic marketing strategy. From a technical perspective, Drupal's core platform is exceptionally modular, allowing all components to be swapped.

Rather than (mis)labeling traditional CMSes as "monolithic", a more accurate way to explain the difference between MACH and traditional CMSes is to focus on the presence or absence of a "core platform".

More importantly, what is often overlooked is the vital role a core platform plays in maintaining a consistent user experience, centralized account management, handling integration compatibility and upgrades, streamlining translation and content workflows across integrations, and more. The core platform essentially offers "shared services" aimed to help improve the end user and developer experience. Without a core platform, there can be increased development and maintenance costs, a fragmented user experience, and significant challenges in composability.

This aligns with a phenomenon that I call "MACH fatigue". Increasingly, organizations come to terms with the expenses of developing and maintaining a MACH-based system. Moreover, end-users often face a fragmented user experience. Instead of a seamlessly integrated interface, they often have to navigate multiple disconnected systems to complete their tasks.

To me, it seems that an ideal solution might be described as "loosely-coupled architectures with a highly integrated user experience". Such architectures allow the flexibility to mix and match the best systems (like your preferred CMS, CRM, and commerce platform). Meanwhile, a highly integrated user experience ensures that these combinations are seamless, not just for the end users but also for content creators and experience builders.

At present, traditional platforms like Drupal are closer to this ideal compared to MACH solutions. I anticipate that in the coming years, the MACH community will focus on improving the cost-effectiveness of MACH platforms. This will involve creating tools for managing dependencies and ensuring version compatibility, similar to the practices embraced by Drupal with Composer.

Efforts are already underway to streamline the MACH user experience with "Digital Experience Composition (DXC) tools". DXC acts as a layer over a MACH architecture, offering a user interface that breaks down various MACH elements into modular "Lego blocks". This allows both developers and marketers to assemble these blocks into a digital experience. Users familiar with traditional CMSes might find this a familiar concept, as many CMS platforms include DXC elements as integral features within their core platform.

Traditional CMSes like Drupal can also take cues from Jamstack and MACH, particularly in their approaches to web services. For instance, while Drupal effectively separates business logic from the presentation layer using the MVC pattern, it primarily relies on internal APIs. A shift towards more public web service APIs could enhance Drupal's flexibility and innovation potential.

Conclusions

In recent years, we've witnessed a variety of technical approaches in the CMS/DXP landscape, with MACH, Jamstack, decoupled, and headless architectures each carving out their paths.

Initially, these paths appeared to diverge. However, we're now seeing a trend of convergence, where different systems are learning from each other and integrating their unique strengths.

The Jamstack is evolving beyond its original focus on static site generation into a more dynamic and complex approach, narrowing the gap with traditional CMSes.

Similarly, to bridge the divide with traditional CMSes, MACH may need to broaden its scope to encompass shared services commonly found in the core platform of traditional CMSes. That would help with developer cost, composability and user-friendliness.

In the end, the success of any platform is judged by how effectively it delivers a good user experience and cost efficiency, regardless of its architecture. The focus needs to move away from architectural considerations to how these technologies can create more intuitive, powerful platforms for end users.

Categories: FLOSS Project Planets

MidCamp - Midwest Drupal Camp: Beware the ides of March!

Wed, 2024-03-13 21:01
Beware the ides of March!

With just one week until we meet in person for MidCamp 2024, book your ticket before the standard discount pricing ends on March 14 at 11:59 pm CT to save $100!

Session Schedule

We’ve got a great line-up this year! All sessions on Wednesday and Thursday (March 20/21) are included in the price of your MidCamp registration. We encourage you to start planning your days -- and get ready for some great learning opportunities!

Expanded Learning Tickets

Know any students or individuals seeking to expand their Drupal knowledge?

We have heavily discounted tickets ($25!) available for students and those wanting to learn more about Drupal to join us at MidCamp and learn more about the community!

There are sessions for everyone—topics range from Site Building and DevOps to Project Management and Design.

Get your tickets now!

Volunteer for some exclusive swag!

If you know you'll be joining us in Chicago, why not volunteer some of your spare time and get some exclusive swag in return! Check out our non-code opportunities to get involved.

Stay In The Loop

Please feel free to ask on the MidCamp Slack and come hang out with the community online. We will be making announcements there from time to time. We’re also on Twitter and Mastodon.

We can’t wait to see you next week! 

The MidCamp Team

Categories: FLOSS Project Planets

The Drop Times: Zoocha: The Drupal Development Specialists in the UK

Wed, 2024-03-13 17:34
Discover the journey of Zoocha, a leading Drupal Development Agency renowned for its commitment to innovation, sustainability, and client success. Since its inception in 2009, Zoocha has demonstrated a profound dedication to open-source technology and agile methodologies, earning notable certifications and accolades. With a portfolio boasting collaborations with high-profile clients across various sectors, Zoocha excels in delivering customized Drupal solutions that meet unique client needs.

This article delves into Zoocha's strategies for engaging talent, promoting Drupal among younger audiences, and its ambitious goal to achieve carbon neutrality by 2025. Explore how Zoocha's commitment to quality, security, and environmental sustainability positions it as a trusted partner in the dynamic world of web development.
Categories: FLOSS Project Planets

Four Kitchens: Custom Drush commands with Drush Generate

Wed, 2024-03-13 14:59

Marc Berger

Senior Backend Engineer

Always looking for a challenge, Marc tries to add something new to his toolbox for every project and build — be it a new CSS technology, creating custom APIs, or testing out new processes for development.

Recently, one of our clients had to retrieve some information from their Drupal site during a CI build. They needed to know the internal Drupal path from a known path alias. Common Drush commands don’t provide this information directly, so we decided to write our own custom Drush command. It was a lot easier than we thought it would be! Let’s get started.

Note: This post is based on commands and structure for Drush 12.

While we can write our own Drush command from scratch, let’s discuss a tool that Drush already provides us: the drush generate command. Drush 9 added support to generate scaffolding and boilerplate code for many common Drupal coding tasks such as custom modules, themes, services, plugins, and many more. The nice thing about using the drush generate command is that the code it generates conforms to best practices and Drupal coding standards — and some generators even come with examples as well. You can see all available generators by simply running drush generate without any arguments.

Step 1: Create a custom module

To get started, a requirement to create a new custom Drush command in this way is to have an existing custom module already in the codebase. If one exists, great. You can skip to Step 2 below. If you need a custom module, let’s use Drush to generate one:

drush generate module

Drush will ask a series of questions such as the module name, the package, any dependencies, and if you want to generate a .module file, README.md, etc. Once the module has been created, enable the module. This will help with the autocomplete when generating the custom Drush command.

drush en <machine_name_of_custom_module>

Step 2: Create custom Drush command boilerplate

First, make sure you have a custom module where your new custom Drush command will live and make sure that module is enabled. Next, run the following command to generate some boilerplate code:

drush generate drush:command-file

This command will also ask some questions, the first of which is the machine name of the custom module. If that module is enabled, it will autocomplete the name in the terminal. You can also tell the generator to use dependency injection if you know what services you need to use. In our case, we need to inject the path_alias.manager service. Once generated, the new command class will live here under your custom module:

<machine_name_of_custom_module>/src/Drush/Commands

Let’s take a look at this newly generated code. We will see the standard class structure and our dependency injection at the top of the file:

<?php

namespace Drupal\custom_drush\Drush\Commands;

use Consolidation\OutputFormatters\StructuredData\RowsOfFields;
use Drupal\Core\StringTranslation\StringTranslationTrait;
use Drupal\Core\Utility\Token;
use Drupal\path_alias\AliasManagerInterface;
use Drush\Attributes as CLI;
use Drush\Commands\DrushCommands;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * A Drush commandfile.
 *
 * In addition to this file, you need a drush.services.yml
 * in root of your module, and a composer.json file that provides the name
 * of the services file to use.
 */
final class CustomDrushCommands extends DrushCommands {

  use StringTranslationTrait;

  /**
   * Constructs a CustomDrushCommands object.
   */
  public function __construct(
    private readonly Token $token,
    private readonly AliasManagerInterface $pathAliasManager,
  ) {
    parent::__construct();
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('token'),
      $container->get('path_alias.manager'),
    );
  }

Note: The generator adds a comment about needing a drush.services.yml file. This requirement is deprecated and will be removed in Drush 13, so you can ignore it if you are using Drush 12. In our testing, this file does not need to be present.

Further down in the new class, we will see some boilerplate example code. This is where the magic happens:

/**
 * Command description here.
 */
#[CLI\Command(name: 'custom_drush:command-name', aliases: ['foo'])]
#[CLI\Argument(name: 'arg1', description: 'Argument description.')]
#[CLI\Option(name: 'option-name', description: 'Option description')]
#[CLI\Usage(name: 'custom_drush:command-name foo', description: 'Usage description')]
public function commandName($arg1, $options = ['option-name' => 'default']) {
  $this->logger()->success(dt('Achievement unlocked.'));
}

This new Drush command doesn’t do very much at the moment, but provides a great jumping-off point. The first thing to note at the top of the function are the new PHP 8 attributes that begin with the #. These replace the previous PHP annotations that are commonly seen when writing custom plugins in Drupal. You can read more about the new PHP attributes.

The different attributes tell Drush what our custom command name is, description, what arguments it will take (if any), and any aliases it may have.

Step 3: Create our custom command

For our custom command, let’s modify the code so we can get the internal path from a path alias:

/**
 * Command description here.
 */
#[CLI\Command(name: 'custom_drush:internal-path', aliases: ['intpath'])]
#[CLI\Argument(name: 'pathAlias', description: 'The path alias, must begin with /')]
#[CLI\Usage(name: 'custom_drush:internal-path /path-alias', description: 'Supply the path alias and the internal path will be retrieved.')]
public function getInternalPath($pathAlias) {
  if (!str_starts_with($pathAlias, "/")) {
    $this->logger()->error(dt('The alias must start with a /'));
  }
  else {
    $path = $this->pathAliasManager->getPathByAlias($pathAlias);
    if ($path == $pathAlias) {
      $this->logger()->error(dt('There was no internal path found that uses that alias.'));
    }
    else {
      $this->output()->writeln($path);
    }
  }
  //$this->logger()->success(dt('Achievement unlocked.'));
}

What we’re doing here is changing the name of the command so it can be called like so:

drush custom_drush:internal-path <path>

Or via the alias:

drush intpath <path>

The <path> is a required argument (such as /my-amazing-page) because of how it is declared in the getInternalPath method. By passing a path, this method first checks whether the path starts with /. If it does, it performs an additional check to see whether an internal path exists for that alias. If so, it returns the internal path, i.e., /node/1234. Lastly, errors are reported via the logger() method and the resolved path is printed with output()->writeln(), both inherited from the DrushCommands class. It's a simple command, but one that helped us automatically set config during a CI job.

Table output

Note the boilerplate code also generated another example below the first — one that will provide output in a table format:

/**
 * An example of the table output format.
 */
#[CLI\Command(name: 'custom_drush:token', aliases: ['token'])]
#[CLI\FieldLabels(labels: [
  'group' => 'Group',
  'token' => 'Token',
  'name' => 'Name',
])]
#[CLI\DefaultTableFields(fields: ['group', 'token', 'name'])]
#[CLI\FilterDefaultField(field: 'name')]
public function token($options = ['format' => 'table']): RowsOfFields {
  $all = $this->token->getInfo();
  foreach ($all['tokens'] as $group => $tokens) {
    foreach ($tokens as $key => $token) {
      $rows[] = [
        'group' => $group,
        'token' => $key,
        'name' => $token['name'],
      ];
    }
  }
  return new RowsOfFields($rows);
}

In this example, no argument is required, and it will simply print out the list of tokens in a nice table:

 ------------ ------------------ -----------------------
  Group        Token              Name
 ------------ ------------------ -----------------------
  file         fid                File ID
  node         nid                Content ID
  site         name               Name
  ...          ...                ...

Final thoughts

Drush is a powerful tool, and like many parts of Drupal, it’s expandable to meet different needs. While I shared a relatively simple example to solve a small challenge, the possibilities are open to retrieve all kinds of information from your Drupal site to use in scripting, CI/CD jobs, reporting, and more. And by using the drush generate command, creating these custom solutions is easy, follows best practices, and helps keep code consistent.

Further reading

The post Custom Drush commands with Drush Generate appeared first on Four Kitchens.

Categories: FLOSS Project Planets

Tag1 Consulting: The DDEV Local Development Environment: Talking with Maintainer Randy Fay

Wed, 2024-03-13 07:40

Randy Fay, the maintainer of DDEV discusses the key features and functionalities of DDEV, a Docker-based development environment that streamlines setting up and managing local development for applications (no Docker knowledge is required). Whether you're creating applications in Python, PHP, or other languages, DDEV will save you tremendous time and effort. It also works great for managing multiple projects, or working with a large distributed team, ensuring everyone’s configurations remain in sync. Randy also demos DDEV, showcasing how fast and easy it is to set up a local Drupal development environment quickly. Additionally, he touches upon the history and future of DDEV, and the critical role of the DDEV user community in both supporting the project and shaping it’s development. This interview is perfect for any developer interested in efficient development tools, current DDEV users, or anyone curious about local development technologies and best practices. --- For a transcript of this video, see The DDEV Local Development Environment- Talking with Randy Fay --- ## Links - DDEV: ddev.com - Docs https://ddev.readthedocs.io - CMS Quickstarts https://ddev.readthedocs.io/en/stable/users/quickstart/ - DDEV 2023 Review https://ddev.com/blog/2023-review - [DDEV 2024 Plans](https://ddev.com/blog/2024-plans...

Read more michaelemeyers Wed, 03/13/2024 - 04:40
Categories: FLOSS Project Planets

Talking Drupal: Skills Upgrade 2

Wed, 2024-03-13 00:00

Welcome back to “Skills Upgrade” a Talking Drupal mini-series following the journey of a D7 developer learning D10. This is episode 2.

Topics
  • Review Chad's goals for the previous week
    • DDEV Installation
    • Docker for Mac vs other options
    • IDE Setup
  • Review Chad's questions
  • Tasks for the upcoming week
    • DDEV improve performance
    • Install Drupal 10
    • Install drupal/core dependencies
    • Configure and test phpcs
    • Test phpstan
    • Set up settings.local.php
    • Install devel module
Resources

  • DDEV Performance
  • DDEV Quickstart
  • Drupal Core Dependencies
  • How to Implement Drupal Code Standards
  • Running PHPStan On Drupal Custom Modules
  • Why you should care about using settings.local.php
  • Rancher Desktop

  • Chad's Drupal 10 Learning Curriculum & Journal
  • Chad's Drupal 10 Learning Notes

Hosts

AmyJune Hineline - @volkswagenchick

Guests

Chad Hester - chadkhester.com @chadkhest
Mike Anello - DrupalEasy.com @ultimike

Categories: FLOSS Project Planets
