Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

The Drop is Always Moving: For the first time, Drupal Starshot adds a team of advisors to the leadership team. This council will provide strategic input and feedback to help ensure Starshot meets the needs of key stakeholders and end-users. Members are...

Fri, 2024-07-12 08:47

For the first time, Drupal Starshot adds a team of advisors to the leadership team. This council will provide strategic input and feedback to help ensure Starshot meets the needs of key stakeholders and end-users. Members are announced now at https://www.drupal.org/about/starshot/blog/announcing-the-drupal-starshot-advisory-council

Categories: FLOSS Project Planets

Drupal Starshot blog: Announcing the Drupal Starshot Advisory Council

Fri, 2024-07-12 08:27

I'm excited to announce the formation of the Drupal Starshot Advisory Council. When I announced Starshot's Leadership Team, I explained that we are innovating on the leadership model by adding a team of advisors. This council will provide strategic input and feedback to help ensure Drupal Starshot meets the needs of key stakeholders and end-users.

The Drupal Starshot initiative represents an ambitious effort to expand Drupal's reach and impact. To guide this effort, we've established a diverse Advisory Council that includes members of the Drupal Starshot project team, Drupal Association staff and Board of Directors, representatives from Drupal Certified Partners, Drupal Core Committers, and last but not least, individuals representing the target end-users for Drupal Starshot. This ensures a wide range of perspectives and expertise to inform the project's direction and decision-making.

The initial members include:

The council has been meeting monthly to receive updates from myself and the Drupal Starshot Leadership Team. Members will provide feedback on project initiatives, offer recommendations, and share insights based on their diverse experiences and areas of expertise.

In addition to guiding the strategic direction of Drupal Starshot, the Advisory Council will play a vital role in communication and alignment between the Drupal Starshot team, the Drupal Association, Drupal Core, and the broader Drupal community.

I'm excited to be working with this accomplished group to make the Drupal Starshot vision a reality. Together we can expand the reach and impact of Drupal, and continue advancing our mission to make the web a better place.

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Categories: FLOSS Project Planets

The Drop is Always Moving: Announced in https://www.drupal.org/drupalorg/blog/ending-packagesdrupalorg-support-for-composer-1, Drupal Composer 1 support is being phased out! 1️⃣ New Drupal.org packages/releases will not be available for Composer 1...

Fri, 2024-07-12 05:39

Announced in https://www.drupal.org/drupalorg/blog/ending-packagesdrupalorg-support-for-composer-1, Drupal Composer 1 support is being phased out!
1️⃣ New Drupal.org packages/releases will not be available for Composer 1 after Aug 12, 2024.
2️⃣ Composer 1 support for older packages will be dropped after Oct 1, 2024.

Categories: FLOSS Project Planets

Dries Buytaert: Building my own temperature and humidity monitor

Thu, 2024-07-11 15:09

Last fall, we toured the Champagne region in France, famous for its sparkling wines. We explored the ancient, underground cellars where Champagne undergoes its magical transformation from grape juice to sparkling wine. These cellars, often 30 meters deep and kilometers long, maintain a constant temperature of around 10-12°C, providing the perfect conditions for aging and storing Champagne.

25 meters underground in a Champagne tunnel; these tunnels often stretch for kilometers.

After sampling various Champagnes, we returned home with eight cases to store in our home's basement. However, unlike those deep cellars, our basement is just a few meters deep, prompting a simple question that sent me down a rabbit hole: how does our basement's temperature compare?

Rather than just buying a thermometer, I decided to build my own "temperature monitoring system" using open hardware and custom-built software. After all, who needs a simple solution when you can spend evenings tinkering with hardware, sensors, wires and writing your own software? Sometimes, more is more!

The basic idea is this: track the temperature and humidity of our basement every 15 minutes and send this information to a web service. This web service analyzes the data and alerts us if our basement becomes too cold or warm.

I launched this monitoring system around Christmas last year, so it's been running for nearly three months now. You can view the live temperature and historical data trends at https://dri.es/sensors. Yes, publishing our basement's temperature online is a bit quirky, but it's all in good fun.

A screenshot of my basement temperature dashboard.

So far, the temperature in our basement has been ideal for storing wine. However, I expect it will change during the summer months.

In the rest of this blog post, I'll share how I built the client that collects and sends the data, as well as the web service backend that processes and visualizes that data.

Hardware used

For this project, I bought:

  1. Adafruit ESP32-S3 Feather: A microcontroller board with Wi-Fi and Bluetooth capabilities, serving as the central processing unit of my project.
  2. Adafruit SHT4x sensor: A high-accuracy temperature and humidity sensor.
  3. 3.7v 500mAh battery: A small and portable power source.
  4. STEMMA QT / Qwiic JST SH 4-pin cable: To connect the sensor to the board without soldering.

The total hardware cost was $32.35 USD. I like Adafruit a lot, but it's worth noting that their products often come at a higher cost. You can find comparable hardware for as little as $10-15 elsewhere. Adafruit's premium cost is understandable, considering how much valuable content they create for the maker community.

An ESP32-S3 development board (middle) linked to a Sensirion SHT41 temperature and humidity sensor (left) and powered by a battery pack (right).

Client code for Adafruit ESP32-S3 Feather

I developed the client code for the Adafruit ESP32-S3 Feather using the Arduino IDE, a widely used platform for developing and uploading code to Arduino-compatible boards.

The code measures temperature and humidity every 15 minutes, connects to WiFi, and sends this data to https://dri.es/sensors, my web service endpoint.

One of my goals was to create a system that could operate for a long time without needing to recharge the battery. The ESP32-S3 supports a "deep sleep" mode where it powers down almost all its functions, except for the clock and memory. By placing the ESP32-S3 into deep sleep mode between measurements, I was able to significantly reduce power consumption.

Now that you understand the high-level design goals, including deep sleep mode, I'll share the complete client code below. It includes detailed code comments, making it self-explanatory.

[code c]
#include "Adafruit_SHT4x.h"
#include "Adafruit_MAX1704X.h"
#include "WiFiManager.h"
#include "ArduinoJson.h"
#include "HTTPClient.h"

// The Adafruit_SHT4x sensor is a high-precision, temperature and humidity
// sensor with an I2C interface.
Adafruit_SHT4x sht4 = Adafruit_SHT4x();

// The Adafruit ESP32-S3 Feather comes with a built-in MAX17048 LiPoly / LiIon
// battery monitor. The MAX17048 provides accurate monitoring of the battery's
// voltage. Utilizing the Adafruit library, not only helps us obtain the raw
// voltage data from the battery cell, but also converts this data into a more
// intuitive battery percentage or charge level. We will pass on the battery
// percentage to the web service endpoint, which can visualize it or use it to
// send notifications when the battery needs recharging.
Adafruit_MAX17048 maxlipo;

// The setup() function is used to initialize the device's hardware and
// communications. It's executed once at startup. Here, we begin serial
// communication, initialize sensors, connect to Wi-Fi, and send initial
// data.
void setup() {
  Serial.begin(115200);

  // Wait for the serial connection to establish before proceeding further.
  // This is crucial for boards with native USB interfaces. Without this loop,
  // initial output sent to the serial monitor is lost. This code is not
  // needed when running on battery.
  //delay(1000);

  // Generates a unique device ID from a segment of the MAC address.
  // Since the MAC address is permanent and unchanged after reboots,
  // this guarantees the device ID remains consistent. To achieve a
  // compact ID, only a specific portion of the MAC address is used,
  // specifically the range between 0x10000 and 0xFFFFF. This range
  // translates to a hexadecimal string of a fixed 5-character length,
  // giving us roughly 1 million unique IDs. This approach balances
  // uniqueness with compactness.
  uint64_t chipid = ESP.getEfuseMac();
  uint32_t deviceValue = ((uint32_t)(chipid >> 16) & 0x0FFFFF) | 0x10000;
  char device[6]; // 5 characters for the hex representation + the null terminator.
  sprintf(device, "%x", deviceValue); // Use '%x' for lowercase hex letters

  // Initialize the SHT4x sensor:
  if (sht4.begin()) {
    Serial.println(F("SHT4 temperature and humidity sensor initialized."));
    sht4.setPrecision(SHT4X_HIGH_PRECISION);
    sht4.setHeater(SHT4X_NO_HEATER);
  }
  else {
    Serial.println(F("Could not find SHT4 sensor."));
  }

  // Initialize the MAX17048 sensor:
  if (maxlipo.begin()) {
    Serial.println(F("MAX17048 battery monitor initialized."));
  }
  else {
    Serial.println(F("Could not find MAX17048 battery monitor!"));
  }

  // Insert a short delay to ensure the sensors are ready and their data is stable:
  delay(200);

  // Retrieve temperature and humidity data from SHT4 sensor:
  sensors_event_t humidity, temp;
  sht4.getEvent(&humidity, &temp);

  // Get the battery percentage and calibrate if it's over 100%:
  float batteryPercent = maxlipo.cellPercent();
  batteryPercent = (batteryPercent > 100) ? 100 : batteryPercent;

  WiFiManager wifiManager;

  // Uncomment the following line to erase all saved WiFi credentials.
  // This can be useful for debugging or reconfiguration purposes.
  // wifiManager.resetSettings();

  // This WiFi manager attempts to establish a WiFi connection using known
  // credentials, stored in RAM. If it fails, the device will switch to Access
  // Point mode, creating a network named "Temperature Monitor". In this mode,
  // connect to this network, navigate to the device's IP address (default IP
  // is 192.168.4.1) using a web browser, and a configuration portal will be
  // presented, allowing you to enter new WiFi credentials. Upon submission,
  // the device will reboot and try connecting to the specified network with
  // these new credentials.
  if (!wifiManager.autoConnect("Temperature Monitor")) {
    Serial.println(F("Failed to connect to WiFi ..."));

    // If the device fails to connect to WiFi, it will restart to try again.
    // This approach is useful for handling temporary network issues. However,
    // in scenarios where the network is persistently unavailable (e.g. router
    // down for more than an hour, consistently poor signal), the repeated
    // restarts and WiFi connection attempts can quickly drain the battery.
    ESP.restart();

    // Mandatory delay to allow the restart process to initiate properly:
    delay(1000);
  }

  // Send collected data as JSON to the specified URL:
  sendJsonData("https://dri.es/sensors", device, temp.temperature, humidity.relative_humidity, batteryPercent);

  // WiFi consumes significant power so turn it off when done:
  WiFi.disconnect(true);

  // Enter deep sleep for 15 minutes. The ESP32-S3's deep sleep mode minimizes
  // power consumption by powering down most components, except the RTC. This
  // mode is efficient for battery-powered projects where constant operation
  // isn't needed. When the device wakes up after the set period, it runs
  // setup() again, as the state isn't preserved.
  Serial.println(F("Going to sleep for 15 minutes ..."));
  ESP.deepSleep(15 * 60 * 1000000); // 15 mins * 60 secs/min * 1,000,000 μs/sec.
}

bool sendJsonData(const char* url, const char* device, float temperature, float humidity, float battery) {
  StaticJsonDocument<200> doc;

  // Round floating-point values to one decimal place for efficient data
  // transmission. This approach reduces the JSON payload size, which is
  // important for IoT applications running on batteries.
  doc["device"] = device;
  doc["temperature"] = String(temperature, 1);
  doc["humidity"] = String(humidity, 1);
  doc["battery"] = String(battery, 1);

  // Serialize JSON to a string:
  String jsonData;
  serializeJson(doc, jsonData);

  // Initialize an HTTP client with the provided URL:
  HTTPClient httpClient;
  httpClient.begin(url);
  httpClient.addHeader("Content-Type", "application/json");

  // Send a HTTP POST request:
  int httpCode = httpClient.POST(jsonData);

  // Close the HTTP connection:
  httpClient.end();

  // Print debug information to the serial console:
  Serial.println("Sent '" + jsonData + "' to " + String(url) + ", return code " + httpCode);

  return (httpCode == 200);
}

void loop() {
  // The ESP32-S3 resets and runs setup() after waking up from deep sleep,
  // making this continuous loop unnecessary.
}
[/code]

Further optimizing battery usage

When I launched my thermometer around Christmas 2023, the battery was at 88%. Today, it is at 52%. Some quick math suggests it's using approximately 12% of its battery per month. Given its current rate of usage, it needs recharging about every 8 months.

Connecting to the WiFi and sending data are by far the main power drains. To extend the battery life, I could send updates less frequently than every 15 minutes, only send them when there is a change in temperature (which is often unchanged or only different by 0.1°C), or send batches of data points together. Any of these methods would work for my needs, but I haven't implemented them yet.

Alternatively, I could hook the microcontroller up to a 5V power adapter, but where is the fun in that? It goes against the project's "more is more" principle.

Handling web service requests

With the client code running on the ESP32-S3 and sending sensor data to https://dri.es/sensors, the next step is to set up a web service endpoint to receive this incoming data.

As I use Drupal for my website, I implemented the web service endpoint in Drupal. Drupal uses Symfony, a popular PHP framework, for large parts of its architecture. This combination offers an easy but powerful way to implement web services, similar to other modern server-side web frameworks like Laravel and Django.

Here is what my Drupal routing configuration looks like:

[code yaml]
sensors.sensor_data:
  path: '/sensors'
  methods: [POST]
  defaults:
    _controller: '\Drupal\sensors\Controller\SensorMonitorController::postSensorData'
  requirements:
    _access: 'TRUE'
[/code]

The above configuration directs Drupal to send POST requests made to https://dri.es/sensors to the postSensorData() method of the SensorMonitorController class.

The implementation of this method handles request authentication, validates the JSON payload, and saves the data to a MariaDB database table. Pseudo-code:

[code php]
public function postSensorData(Request $request) : JsonResponse {
  $content = $request->getContent();
  $data = json_decode($content, TRUE);

  // Validate the JSON payload:
  …

  // Authenticate the request:
  …

  $device = DeviceFactory::getDevice($data['device']);
  if ($device) {
    $device->recordSensorEvent($data);
  }

  return new JsonResponse(['message' => 'Thank you!']);
}
[/code]
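
The validation step is elided above. As an illustration only, a hypothetical validation helper could look like the sketch below; the field names follow the JSON payload used throughout this post, but the method name and the exact checks are my assumptions, not the author's code.

[code php]
// Hypothetical payload check (not from the original post): reject malformed
// JSON, missing fields, and implausible sensor values.
private function isValidPayload(?array $data): bool {
  if (!is_array($data) || empty($data['device']) || !is_string($data['device'])) {
    return FALSE;
  }
  foreach (['temperature', 'humidity', 'battery'] as $key) {
    // The client sends numbers formatted as strings, so is_numeric()
    // accepts both forms.
    if (!isset($data[$key]) || !is_numeric($data[$key])) {
      return FALSE;
    }
  }
  // Humidity and battery are percentages; temperatures far outside this
  // range would indicate a sensor or transmission error.
  return $data['humidity'] >= 0 && $data['humidity'] <= 100
    && $data['battery'] >= 0 && $data['battery'] <= 100
    && $data['temperature'] > -50 && $data['temperature'] < 60;
}
[/code]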

For testing your web service, you can use tools like cURL:

[code bash]$ curl -X POST -H "Content-Type: application/json" -d '{"device":"0xdb123", "temperature":21.5, "humidity":42.5, "battery":90.0}' https://localhost/sensors[/code]

While cURL is great for quick tests, I use PHPUnit tests for automated testing in my CI/CD workflow. This ensures that everything keeps working, even when upgrading Drupal, Symfony, or other components of my stack.
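
For reference, such a test could be built on Drupal's BrowserTestBase. The following is a minimal sketch under my own assumptions: the module name 'sensors' comes from the routing example above, and the test class itself is hypothetical, not the author's actual test suite.

[code php]
use Drupal\Tests\BrowserTestBase;

/**
 * Hypothetical functional test for the /sensors endpoint.
 */
class SensorEndpointTest extends BrowserTestBase {

  protected static $modules = ['sensors'];
  protected $defaultTheme = 'stark';

  public function testPostSensorData(): void {
    // Post a well-formed reading, mirroring the cURL example above.
    $response = $this->getHttpClient()->post($this->buildUrl('/sensors'), [
      'json' => [
        'device' => '0xdb123',
        'temperature' => 21.5,
        'humidity' => 42.5,
        'battery' => 90.0,
      ],
      // Return the response instead of throwing on HTTP errors.
      'http_errors' => FALSE,
    ]);
    $this->assertEquals(200, $response->getStatusCode());
  }

}
[/code]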

Storing sensor data in a database

The primary purpose of $device->recordSensorEvent() in SensorMonitorController::postSensorData() is to store sensor data into a SQL database. So, let's delve into the database design.

My main design goals for the database backend were:

  1. Instead of storing every data point indefinitely, only keep the daily average, minimum, maximum, and the latest readings for each sensor type across all devices.
  2. Make it easy to add new devices and new sensors in the future. For instance, if I decide to add a CO2 sensor for our bedroom one day (a decision made in my head but not yet pitched to my better half), I want that to be easy.

To this end, I created the following MariaDB table:

[code sql]
CREATE TABLE sensor_data (
  date DATE,
  device VARCHAR(255),
  sensor VARCHAR(255),
  avg_value DECIMAL(5,1),
  min_value DECIMAL(5,1),
  max_value DECIMAL(5,1),
  min_timestamp DATETIME,
  max_timestamp DATETIME,
  readings SMALLINT NOT NULL,
  UNIQUE KEY unique_stat (date, device, sensor)
);
[/code]

A brief explanation for each field:

  • date: The date for each sensor reading. It doesn't include a time component as we aggregate data on a daily basis.
  • device: The device ID of the device providing the sensor data, such as 'basement' or 'bedroom'.
  • sensor: The type of sensor, such as 'temperature', 'humidity' or 'co2'.
  • avg_value: The average value of the sensor readings for the day. Since individual readings are not stored, a rolling average is calculated and updated with each new reading using the formula: avg_value = avg_value + (new_value - avg_value) / new_total_readings (see the sketch after this list). This method can accumulate minor rounding errors, but simulations show these are negligible for this use case.
  • min_value and max_value: The daily minimum and maximum sensor readings.
  • min_timestamp and max_timestamp: The exact moments when the minimum and maximum values for that day were recorded.
  • readings: The number of readings (or measurements) taken throughout the day, which is used for calculating the rolling average.
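
To make the rolling-average update concrete, here is a minimal PHP sketch of that one step. It is an illustration of the formula above, not code from the post, and the function name is mine.

[code php]
// Update a running average without storing individual readings.
// $avg is the current average over $readings readings; returns the new
// average after folding in $newValue.
function updateRollingAverage(float $avg, int $readings, float $newValue): float {
  return $avg + ($newValue - $avg) / ($readings + 1);
}

// Example: after readings 20.0 and 22.0, the average is 21.0.
$avg = updateRollingAverage(20.0, 1, 22.0); // 21.0
[/code]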

In essence, the recordSensorEvent() method needs to determine if a record already exists for the current date. Depending on this determination, it will either insert a new record or update the existing one.

In Drupal this process is streamlined with the merge() function in Drupal's database layer. This function handles both inserting new data and updating existing data in one step.

[code php]
private function updateDailySensorEvent(string $sensor, float $value): void {
  $timestamp = \Drupal::time()->getRequestTime();
  $date = date('Y-m-d', $timestamp);
  $datetime = date('Y-m-d H:i:s', $timestamp);

  $connection = Database::getConnection();
  $result = $connection->merge('sensor_data')
    ->keys([
      'device' => $this->id,
      'sensor' => $sensor,
      'date' => $date,
    ])
    ->fields([
      'avg_value' => $value,
      'min_value' => $value,
      'max_value' => $value,
      'min_timestamp' => $datetime,
      'max_timestamp' => $datetime,
      'readings' => 1,
    ])
    ->expression('avg_value', 'avg_value + ((:new_value - avg_value) / (readings + 1))', [':new_value' => $value])
    ->expression('min_value', 'LEAST(min_value, :value)', [':value' => $value])
    ->expression('max_value', 'GREATEST(max_value, :value)', [':value' => $value])
    ->expression('min_timestamp', 'IF(LEAST(min_value, :value) = :value, :timestamp, min_timestamp)', [':value' => $value, ':timestamp' => $datetime])
    ->expression('max_timestamp', 'IF(GREATEST(max_value, :value) = :value, :timestamp, max_timestamp)', [':value' => $value, ':timestamp' => $datetime])
    ->expression('readings', 'readings + 1')
    ->execute();
}
[/code]

Here is what the query does:

  • It checks if a record for the current sensor and date exists.
  • If not, it creates a new record with the sensor data, including the initial average, minimum, maximum, and latest value readings, along with the timestamp for these values.
  • If a record does exist, it updates the record with the new sensor data, adjusting the average value, and updating minimum and maximum values and their timestamps if the new reading is a new minimum or maximum.
  • The function also increments the count of readings.

For those not using Drupal, similar functionality can be achieved with MariaDB's INSERT ... ON DUPLICATE KEY UPDATE command, which allows for the same conditional insert or update logic based on whether the specified unique key already exists in the table.

Here are example queries, extracted from MariaDB's General Query Log to help you get started:

[code sql]
INSERT INTO sensor_data
  (device, sensor, date, min_value, min_timestamp, max_value, max_timestamp, readings)
VALUES
  ('0xdb123', 'temperature', '2024-01-01', 21, '2024-01-01 00:00:00', 21, '2024-01-01 00:00:00', 1);

UPDATE sensor_data
SET
  min_value = LEAST(min_value, 21),
  min_timestamp = IF(LEAST(min_value, 21) = 21, '2024-01-01 00:00:00', min_timestamp),
  max_value = GREATEST(max_value, 21),
  max_timestamp = IF(GREATEST(max_value, 21) = 21, '2024-01-01 00:00:00', max_timestamp),
  readings = readings + 1
WHERE device = '0xdb123' AND sensor = 'temperature' AND date = '2024-01-01';
[/code]

Generating graphs

With the data securely stored in the database, the next step involved generating the graphs. To accomplish this, I wrote some custom PHP code that generates Scalable Vector Graphics (SVGs).

Given that this blog post is already quite long, I'll spare you the details. For now, those curious can use the 'View source' feature in their web browser to examine the SVGs on the thermometer page.
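
That said, the general approach is straightforward. Below is a minimal, self-contained sketch of turning a series of daily averages into an SVG polyline; it illustrates the technique, but it is not the actual code behind dri.es/sensors, and the function name and input format are my assumptions.

[code php]
// Render a list of daily average readings as a simple SVG line chart.
function renderTemperatureSvg(array $dailyAverages, int $width = 400, int $height = 100): string {
  $min = min($dailyAverages);
  $max = max($dailyAverages);
  $range = max($max - $min, 0.1); // Avoid division by zero on flat data.

  $points = [];
  $step = $width / max(count($dailyAverages) - 1, 1);
  foreach (array_values($dailyAverages) as $i => $value) {
    // Scale each reading into the viewport; SVG's Y axis points down.
    $x = round($i * $step, 1);
    $y = round($height - (($value - $min) / $range) * $height, 1);
    $points[] = "$x,$y";
  }

  return '<svg xmlns="http://www.w3.org/2000/svg" width="' . $width . '" height="' . $height . '">'
    . '<polyline fill="none" stroke="black" points="' . implode(' ', $points) . '"/>'
    . '</svg>';
}

// Example: echo renderTemperatureSvg([10.5, 11.2, 10.9, 11.8]);
[/code]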

Conclusion

It's fun how a visit to the Champagne cellars in France sparked an unexpected project. Choosing to build a thermometer rather than buying one allowed me to dive back into an old passion for hardware and low-level software.

I also like taking control of my own data and software. It gives me a sense of control and creativity.

As Drupal's project lead, using Drupal for an Internet-of-Things (IoT) backend brought me unexpected joy. I just love the power and flexibility of open-source platforms like Drupal.

As a next step, I hope to design and 3D print a case for my thermometer, something I've never done before. And as mentioned, I'm also considering integrating additional sensors. Stay tuned for updates!

Categories: FLOSS Project Planets

Dries Buytaert: Drupal adventures in Japan and Australia

Thu, 2024-07-11 15:09

Next week, I'm traveling to Japan and Australia. I've been to both countries before and can't wait to return – they're among my favorite places in the world.

My goal is to connect with the local Drupal community in each country, discussing the future of Drupal, learning from each other, and collaborating.

I'll also be connecting with Acquia's customers and partners in both countries, sharing our vision, strategy and product roadmap. As part of that, I look forward to spending some time with the Acquia teams as well – about 20 employees in Japan and 35 in Australia.

I'll present at a Drupal event in Tokyo the evening of March 14th at Yahoo! Japan.

While in Australia, I'll be attending Drupal South, held at the Sydney Masonic Centre from March 20-22. I'm excited to deliver the opening keynote on the morning of March 20th, where I'll delve into Drupal's past, present, and future.

I look forward to being back in Australia and Japan, reconnecting with old friends and the local communities.

Categories: FLOSS Project Planets

Dries Buytaert: Two years later: is my Web3 website still standing?

Thu, 2024-07-11 15:09

Two years ago, I launched a simple Web3 website using IPFS (InterPlanetary File System) and ENS (Ethereum Name Service). Back then, Web3 tools were getting a lot of media attention and I wanted to try it out.

Since I set up my Web3 website two years ago, I basically forgot about it. I didn't update it or pay attention to it for two years. But now that we hit the two-year mark, I'm curious: is my Web3 website still online?

At that time, I also stated that Web3 was not fit for hosting modern web applications, except for a small niche: static sites requiring high resilience and infrequent content updates.

I was also curious to explore the evolution of Web3 technologies to see if they became more applicable for website hosting.

My original Web3 experiment

In my original blog post, I documented the process of setting up what could be called the "Hello World" of Web3 hosting. I stored an HTML file on IPFS, ensured its availability using "pinning services", and made it accessible using an ENS domain.

For those with a basic understanding of Web3, here is a summary of the steps I took to launch my first Web3 website two years ago:

  1. Purchased an ENS domain name: I used a crypto wallet with Ethereum to acquire dries.eth through the Ethereum Name Service, a decentralized alternative to the traditional DNS (Domain Name System).
  2. Uploaded an HTML File to IPFS: I uploaded a static HTML page to the InterPlanetary File System (IPFS), which involved running my own IPFS node and utilizing various pinning services like Infura, Fleek, and Pinata. These pinning services ensure that the content remains available online even when my own IPFS node is offline.
  3. Accessed the website: I confirmed that my website was accessible through IPFS-compatible browsers.
  4. Mapped my webpage to my domain name: As the last step, I linked my IPFS-hosted site to my ENS domain dries.eth, making the web page accessible under an easy domain name.

If the four steps above are confusing to you, I recommend reading my original post. It is over 2,000 words, complete with screenshots and detailed explanations of the steps above.

Checking the pulse of various Web3 services

As the first step in my check-up, I wanted to verify if the various services I referenced in my original blog post are still operational.

The results, displayed in the table below, are really encouraging: Ethereum, ENS, IPFS, Filecoin, Infura, Fleek, Pinata, and web3.storage are all operational.

The two main technologies – ENS and IPFS – are both actively maintained and developed. This indicates that Web3 technology has built a robust foundation.

All of these services were still operational in February 2024:

  • ENS: A blockchain-based naming protocol offering DNS for Web3, mapping domain names to Ethereum addresses.
  • IPFS: A peer-to-peer protocol for storing and sharing data in a distributed file system.
  • Filecoin: A blockchain-based storage network and cryptocurrency that incentivizes data storage and replication.
  • Infura: Provides tools and infrastructure to manage content on IPFS, as well as tools for developers to connect their applications to blockchain networks and deploy smart contracts.
  • Fleek: A platform for building websites using IPFS and ENS.
  • Pinata: Provides tools and infrastructure to manage content on IPFS, and more recently Farcaster applications.
  • web3.storage: Provides tools and infrastructure to manage content on IPFS, with support for Filecoin.

Is my Web3 website still up?

Seeing all these Web3 services operational is encouraging, but the ultimate test is to check if my Web3 webpage, dries.eth, remained live. It's one thing for these services to work, but another for my site to function properly. Here is what I found in a detailed examination:

  1. Domain ownership verification: A quick check on etherscan.io confirmed that dries.eth is still registered to me. Relief!
  2. ENS registrar access: Using my crypto wallet, I could easily log into the ENS registrar and manage my domains. I even successfully renewed dries.eth as a test.
  3. IPFS content availability: My webpage is still available on IPFS, thanks to having pinned it two years ago. Logging into Fleek and Pinata, I found my content on their admin dashboards.
  4. Web3 and ENS gateway access: I can visit dries.eth using a Web3 browser, and also via an IPFS-compatible ENS gateway like https://dries.eth.limo/ – a privacy-centric service, new since my initial blog post.

The verdict? Not only are these Web3 services still operational, but my webpage also continues to work!

This is particularly noteworthy given that I haven't logged in to these services, performed any maintenance, or paid any hosting fees for two years (the pinning services I'm using have a free tier).

Visit my Web3 page yourself

For anyone interested in visiting my Web3 page (perhaps your first Web3 visit?), there are several methods to choose from, each with a different level of Web3-ness.

  • Use a Web3-enabled browser: Browsers such as Brave and Opera offer built-in ENS and IPFS support. They can resolve ENS addresses and interpret IPFS addresses, making it as easy to navigate IPFS content as if it were traditional web content served over HTTP or HTTPS.
  • Install a Web3 browser extension: If your favorite browser does not support Web3 out of the box, adding a browser extension like MetaMask can help you access Web3 applications. MetaMask works with Chrome, Firefox, and Edge. It enables you to use .eth domains for doing Ethereum transactions or for accessing content on IPFS.
  • Access through an ENS gateway: For those looking for the simplest way to access Web3 content without installing anything new, using an ENS gateway, such as eth.limo, is the easiest method. This gateway maps ENS domains to DNS, offering direct navigation to Web3 sites like mine at https://dries.eth.limo/. It serves as a simple bridge between Web2 (the conventional web) and Web3.

Streamlining content updates with IPNS

In my original post, I highlighted various challenges, such as the limitations for hosting dynamic applications, the cost of updates, and the slow speed of these updates. Although these issues still exist, my initial analysis was conducted with an incomplete understanding of the available technology. I want to delve deeper into these limitations, and refine my previous statements.

Some of these challenges stem from the fact that IPFS operates as a "content-addressed network". Unlike traditional systems that use URLs or file paths to locate content, IPFS uses a unique hash of the content itself. This hash is used to locate and verify the content, but also to facilitate decentralized storage.

While the principle of addressing content by a hash is super interesting, it also introduces some complications: whenever content is updated, its hash changes, making it tricky to link to the updated content. Specifically, every time I updated my Web3 site's content, I had to update my ENS record, and pay a transaction fee on the Ethereum network.

At the time, I wasn't familiar with the InterPlanetary Name System (IPNS). IPNS, not to be confused with IPFS, addresses this challenge by assigning a mutable name to content on IPFS. You can think of IPNS as providing an "alias" or "redirect" for IPFS addresses: the IPNS address always stays the same and points to the latest IPFS address. It effectively eliminates the necessity of updating ENS records with each content change, cutting down on expenses and making the update process more automated and efficient.

To leverage IPNS, you have to take the following steps:

  1. Upload your HTML file to IPFS and receive an IPFS hash.
  2. Publish this hash to IPNS, creating an IPNS hash that directs to the latest IPFS hash.
  3. Link your ENS domain to this IPNS hash. Since the IPNS hash remains constant, you only need to update your ENS record once.

Without IPNS, updating content involved:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the ENS record with the new IPFS hash, which costs some Ether and can take a few minutes.

With IPNS, updating content involves:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the IPNS record to reference this new hash, which is free and almost instant.

Although IPNS is a faster and more cost-effective approach compared to the original method, it still carries a level of complexity. There is also a minor runtime delay due to the extra redirection step. However, I believe this tradeoff is worth it.

Updating my Web3 site to use IPNS

With this newfound knowledge, I decided to use IPNS for my own site. I generated an IPNS hash using both the IPFS desktop application (see screenshot) and IPFS' command line tools:

[code bash]
$ ipfs name publish /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q
> Published to k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a: /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q
[/code]

The IPFS Desktop application showing my index.html file with an option to 'Publish to IPNS'.

After generating the IPNS hash, I was able to visit my site in Brave using the IPFS protocol at ipfs://bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q, or via the IPNS protocol at ipns://k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a.

My Web3 site in Brave using IPNS.

Next, I updated the ENS record for dries.eth to link to my IPNS hash. This change cost me 0.0011 ETH (currently $4.08 USD), as shown in the Etherscan transaction. Once the transaction was processed, dries.eth began directing to the new IPNS address.

A transaction confirmation on the ENS website, showing a successful update for dries.eth.

Rolling back my IPNS record in ENS

Unfortunately, my excitement was short-lived. A day later, dries.eth stopped working. IPNS records, it turns out, need to be kept alive – a lesson learned the hard way.

While IPFS content can be persisted through "pinning", IPNS records require periodic "republishing" to remain active. Essentially, the network's Distributed Hash Table (DHT) may drop IPNS records after a certain amount of time, typically 24 hours. To prevent an IPNS record from being dropped, the owner must "republish" it before the DHT forgets it.
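
For those running their own node, republishing can be automated. As a rough illustration (my assumption, not something from the original post), a small PHP script run from cron could ask a self-hosted IPFS (Kubo) daemon, listening on its default RPC port 5001, to republish the content:

[code php]
// Republish the IPNS record for a pinned IPFS path so the DHT does not
// drop it. Intended to run from cron, e.g. every 12 hours. The CID is the
// one used for my site above.
$cid = 'bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q';
$endpoint = 'http://127.0.0.1:5001/api/v0/name/publish?arg=' . urlencode('/ipfs/' . $cid);

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_POST, TRUE);           // Kubo's RPC API only accepts POST.
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$response = curl_exec($ch);
curl_close($ch);

// On success, the daemon returns JSON containing the IPNS name and the
// IPFS path it now points to.
print $response . PHP_EOL;
[/code]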

I found out that the pinning services I use – Dolphin, Fleek and Pinata – don't support IPNS republishing. Looking into it further, it turns out few IPFS providers do.

During my research, I discovered Filebase, a small Boston-based company with fewer than five employees that I hadn't come across before. Interestingly, they provide both IPFS pinning and IPNS republishing. However, to pin my existing HTML file and republish its IPNS hash, I had to subscribe to their service at a cost of $20 per month.

Faced with the challenge of keeping my IPNS hash active, I found myself at a crossroads: either fork out $20 a month for a service like Filebase that handles IPNS republishing for me, or take on the responsibility of running my own IPFS node.

Of course, the whole point of decentralized storage is that people run their own nodes. However, considering the scope of my project – a single HTML file – the effort of running a dedicated node seemed disproportionate. I'm also running my IPFS node on my personal laptop, which is not always online. Maybe one day I'll try setting up a dedicated IPFS node on a Raspberry Pi or similar setup.

Ultimately, I decided to switch my ENS record back to the original IPFS link. This change, documented in the Etherscan transaction, cost me 0.002 ETH (currently $6.88 USD).

Although IPNS works, or can work, it just didn't work for me. Despite the setback, the whole experience was a great learning journey.

(Update: A couple of days after publishing this blog post, someone kindly recommended https://dwebservices.xyz/, claiming their free tier includes IPNS republishing. Although I haven't personally tested it yet, a quick look at their about page suggests they might be a promising solution.)

Web3 remains too complex for most people

Over the past two years, Web3 hosting hasn't disrupted the mainstream website hosting market. Despite the allure of Web3, mainstream website hosting is simple, reliable, and meets the needs of nearly all users.

Ethereum's transition to a Proof of Stake (PoS) consensus mechanism was a significant upgrade that reduced the network's energy consumption by over 99%. Even so, environmental considerations, especially the carbon footprint associated with blockchain technologies, continue to create challenges for the widespread adoption of Web3 technologies. (Note: ENS operates on the blockchain but IPFS does not.)

As I went through the check-up, I discovered islands of innovation and progress. Wallets and ENS domains got easier to use. However, the overall process of creating a basic website with IPFS and ENS remains relatively complex compared to the simplicity of Web2 hosting.

The need for a SQL-compatible Web3 database

Modern web applications like those built with Drupal and WordPress rely on a technology stack that includes a file system, a domain name system (e.g. DNS), a database (e.g. MariaDB or MySQL), and a server-side runtime environment (e.g. PHP).

While IPFS and ENS offer decentralized alternatives for the first two, the equivalents for databases and runtime environments are less mature. This limits the types of applications that can easily move from Web2 to Web3.

A major breakthrough would be the development of a decentralized database that is compatible with SQL, but currently, this does not seem to exist. Ensuring data integrity and confidentiality across multiple nodes without a central authority, while meeting the throughput demands of modern web applications, may simply be too hard a problem.

After all, blockchains, as decentralized databases, have been in development for over a decade, yet lack support for the SQL language and fall short in speed and efficiency required for dynamic websites.

The need for a distributed runtime

Another critical component for modern websites is the runtime environment, which executes the server-side logic of web applications. Traditionally, this has been the domain of PHP, Python, Node.js, Java, etc.

WebAssembly (WASM) could emerge as a potential solution. It could make for an interesting decentralized solution as WASM binaries can be hosted on IPFS.

However, when WASM runs on the client-side – i.e. in the browser – it can't deliver the full capabilities of a server-side environment. This limitation makes it challenging to fully replicate traditional web applications.

So for now, Web3's applications are quite limited. While it's possible to host static websites on IPFS, dynamic applications requiring database interactions and server-side processing are difficult to transition to Web3.

Bridging the gap between Web2 and Web3

In the short term, the most likely path forward is blending decentralized and traditional technologies. For example, a website could store its static files on IPFS while relying on traditional Web2 solutions for its dynamic features.

Looking to the future, initiatives like OrbitDB's peer-to-peer database, which integrates with IPFS, show promise. However, OrbitDB lacks compatibility with SQL, meaning applications would need to be redesigned rather than simply transferred.

Web3 site hosting remains niche

Even the task of hosting static websites, which don't need a database or server-side processing, is relatively niche within the Web3 ecosystem.

As I wrote in my original post: "In its current state, IPFS and ENS offer limited value to most website owners, but tremendous value to a very narrow subset of all website owners." This observation remains accurate today.

IPFS and ENS stand out for their strengths in censorship resistance and reliability. However, for the majority of users, the convenience and adequacy of Web2 for hosting static sites often outweigh these benefits.

The key to broader acceptance of new technologies, like Web3, hinges on either discovering new mass-market use cases or significantly enhancing the user experience for existing ones. Web3 has not found a universal application or surpassed Web2 in user experience.

The popularity of SaaS platforms underscores this point. They dominate not because they're the most resilient or robust options, but because they're the most convenient. Despite the benefits of resilience and autonomy offered by Web3, most individuals opt for less resilient but more convenient SaaS solutions.

Conclusion

Despite the billions invested in Web3 and notable progress, its use for website hosting still has significant limitations.

The main challenge for the Web3 community is to either develop new, broadly appealing applications or significantly improve the usability of existing technologies.

Website hosting falls into the category of existing use cases.

Unfortunately, Web3 remains mostly limited to static websites, as it does not yet offer robust alternatives to SQL databases and server-side runtime.

Even within the limited scope of static websites, improvements to the user experience have been marginal, focused on individual parts of the technology stack. The overall end-to-end experience remains complex.

Nonetheless, the fact that my Web3 page is still up and running after two years is encouraging, showing the robustness of the underlying technology, even if its current use remains limited. I've grown quite fond of IPFS, and I hope to do more useful experiments with it in the future.

All things considered, I don't see Web3 taking the website hosting world by storm any time soon. That said, over time, Web3 could become significantly more attractive and functional. All in all, keeping an eye on this space is definitely fun and worthwhile.

Categories: FLOSS Project Planets

Dries Buytaert: Acquia a Leader in the 2024 Gartner Magic Quadrant for Digital Experience Platforms

Thu, 2024-07-11 15:09

For the fifth year in a row, Acquia has been named a Leader in the Gartner Magic Quadrant for Digital Experience Platforms (DXP).

Acquia received this recognition from Gartner based on both the completeness of product vision and ability to execute.

Central to our vision and execution is a deep commitment to openness. Leveraging Drupal, Mautic and open APIs, we've built the most open DXP, empowering customers and partners to tailor our platform to their needs.

Our emphasis on openness extends to ensuring our solutions are accessible and inclusive, making them available to everyone. We also prioritize building trust through data security and compliance, integral to our philosophy of openness.

We're proud to be included in this report and thank our customers and partners for their support and collaboration.

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Jim Murphy, Mike Lowndes, John Field - February 21, 2024.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Categories: FLOSS Project Planets

Dries Buytaert: Satoshi Nakamoto's Drupal adventure

Thu, 2024-07-11 15:09

Martti Malmi, an early contributor to the Bitcoin project, recently shared a fascinating piece of internet history: an archive of private emails between himself and Satoshi Nakamoto, Bitcoin's mysterious founder.

The identity of Satoshi Nakamoto remains one of the biggest mysteries in the technology world. Despite extensive investigations, speculative reports, and numerous claims over the years, the true identity of Bitcoin's creator(s) is still unknown.

Martti Malmi released these private conversations in reaction to a court case focused on the true identity of Satoshi Nakamoto and the legal entitlements to the Bitcoin brand and technology.

The emails provide some interesting details into Bitcoin's early days, and might also provide some new clues about Satoshi's identity.

Satoshi and Martti worked together on a variety of different things, including the relaunch of the Bitcoin website. Their goal was to broaden public understanding and awareness of Bitcoin.

And to my surprise, the emails reveal they chose Drupal as their preferred CMS! (Thanks to Jeremy Andrews for making me aware.)

The emails detail Satoshi's hands-on involvement, from installing Drupal themes, to configuring Drupal's .htaccess file, to exploring Drupal's multilingual capabilities.

At some point in the conversation, Satoshi expressed reservations about Drupal's forum module.

For what it is worth, this proves that I'm not Satoshi Nakamoto. Had I been, I'd have picked Drupal right away, and I would never have questioned Drupal's forum module.

Jokes aside, as Drupal's Founder and Project Lead, learning about Satoshi's use of Drupal is a nice addition to Drupal's rich history. Almost every day, I'm inspired by the unexpected impact Drupal has.

Categories: FLOSS Project Planets

Drupal.org blog: Ending Packages.Drupal.org support for Composer 1

Thu, 2024-07-11 12:48

To prepare Drupal.org infrastructure for providing automatic updates for Drupal and upgrading Drupal.org itself, we are removing support for Composer 1 on Packages.Drupal.org.

  • New Drupal.org packages & releases will not be available for Composer 1 after August 12, 2024.
  • Composer 1 support will be dropped after October 1, 2024.

See Preparing your site for Composer 2, the documentation for updating Drupal site codebases to Composer 2.

See also Packagist.org’s own announcement, Deprecating Packagist.org support for Composer 1.x.

Less than 1% of our Composer traffic comes from Composer 1. Drupal’s automatic updates require Composer 2. Packagist.org has already reduced support for Composer 1. So now is a good time to upgrade to Composer 2, if you have not already.

Follow #3201223: Deprecate composer 1 for detailed status updates.

Categories: FLOSS Project Planets

mark.ie: My LocalGov Drupal contributions for week-ending July 12th, 2024

Thu, 2024-07-11 12:00

Here's what I've been working on for my LocalGov Drupal contributions this week. Thanks to Big Blue Door for sponsoring the time to work on these.

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending July 12th, 2024

Thu, 2024-07-11 08:59

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

PreviousNext: Co-contribution with clients: A revision UI API for all entity types

Thu, 2024-07-11 01:11

The tale of an eight-year, collaborative effort to build a generic revision UI into Drupal 10.1.0, bringing a major piece of functionality to core.

by lee.rowlands / 11 July 2024

As we discussed in our previous post, Improving Drupal with the help of your clients, we’re fortunate to work with a client like ServiceNSW that is committed to open-source contribution. So when their challenges require solutions that will also benefit the whole Drupal community, they're on board!

In the beginning, there were nodes

Since Drupal 4.7 was released in 2006, nodes have had a revision user interface (UI). The UI allows editors to view revision history and specific revisions, as well as revert and delete revisions.

A lot has changed since Drupal 4.7. We received revision support for many more entities, but Node remained the only one with a revision UI in core.

Supporting client needs through contrib 

Our client, Service NSW, makes heavy use of block content entities for Notices displayed throughout the site. These are regularly updated. Editors need to be able to see what has changed and when, revert to previous versions, and view revision logs when needed. 

Since Drupal 8, much of the special treatment of Node entities has been replaced with generic Entity API functionality. Nodes were no longer the only tool in the content-modelling toolbox, with this one exception: revision UI.

The code for node's revision UI lives in the node module. It’s dependent on hard-coded permission checking and uses routing and forms outside the entity API.

This meant that for every additional entity type for which Service NSW needed a revision UI, those parts needed to be recreated repeatedly.

As you can imagine, this approach quickly becomes hard to maintain due to the amount of duplication. 

The journey to core

Having identified that Drupal core needed a generic entity revision UI API (it already had generic APIs for entity routing, editing, viewing and access), we set to work on this missing piece of the puzzle.

We found an existing core issue and, in 2015, posted our first patch for it.

This began an 8-year journey to bring a major piece of functionality to core.

Over the course of many re-rolls, we released contributed modules built on top of the patch:

Finally, with the release of Drupal 10.1.0 in 2023, any entity type could opt into a revision UI. The 10.1.0 release enabled it for Block Content entities, making that contributed module obsolete. Later in 2023, the release of Drupal 10.2.0 saw Media entities use this new API. In early 2024, support for Taxonomy terms was added and released in 10.3.0.

Challenges along the way

The biggest challenges encountered were keeping the patch up to date with core as it changed and navigating the contribution process. Over the years, there have been over 120 patch files and 300+ comments on the issue!

Another challenge was the lack of an access API for checking access to revisions. 

The entity API supported a set of entity access operations — view, update, delete — but no revision operations were considered. The node module had hard-coded permissions e.g. 'view all revisions' and 'revert all revisions'. 

To have a generic entity revision UI API, we needed a generic way to check access to the operations the UI would make available.

Initially, we tried to include this with the revision UI changes. However, it became increasingly difficult to land both major pieces of functionality simultaneously. So, in 2019, this was split into a separate issue, and the original issue was postponed.

With efforts from our team, Service NSW and many other individuals and companies in the Drupal community, this made it into Drupal core in 2021. It was first available in Drupal 9.3.0. Adding a whole new major access API is not without its challenges, though. Unfortunately, this change resulted in a security release shortly after 9.3.0 came out. Luckily it was caught and fixed before many sites had updated to 9.3.0.

Collaborative contribution

Adding a new feature to Drupal core is a large undertaking. Doing it in a client-agency collaboration provides an ideal model for how open source should work. 

Developers from PreviousNext and Service NSW worked with the broader Drupal community to bring this feature to fruition.

Our developers have experience contributing to core and were able to guide Service NSW developers through the process. Being credited on large features like this is a major feather in the cap for both individual developers and their organisations.

Wrapping up

Together, we helped integrate a generic revision UI into Drupal 10.1.0. All of the developers involved received issue credits for their work. 

This was a significant effort over eight years, requiring collaboration with individuals and organisations in the wider Drupal community to build consensus. This level of shared commitment helps drive the Drupal open source project forward, recognising that what benefits one can benefit all.

So, what are the next big features you and your clients could work on? Or is there something you want to bring to core, as an individual, group or organisation? Either way, we’d love to chat and collaborate!

Contributors
  • dpi
  • acbramley
  • jibran
  • manuel garcia
  • chr.fritsch
  • AaronMcHale
  • Nono95230
  • capysara
  • darvanen
  • ravi.shankar
  • Spokje
  • thhafner
  • larowlan
  • smustgrave
  • mstrelan
  • mikestar5
  • andregp
  • joachim
  • nterbogt
  • shubhangi1995
  • catch
  • mkalkbrenner
  • Berdir
  • Sam152
  • Xano
Issue links
Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Syntax and structure of migration files

Wed, 2024-07-10 12:11

In the previous article, we saw what a migration file looks like. We made some changes without going too deep into explaining the syntax or structure of the file. Today, we are exploring the language in which migration files are written and the different sections it contains.

Categories: FLOSS Project Planets

amazee.io: amazee.io Launches New Tokyo Cloud Region on AWS

Tue, 2024-07-09 20:00
Discover our new Tokyo Cloud Region on AWS, offering flexible, scalable, and secure PaaS solutions for optimized application delivery and hosting in Japan.
Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Announces HeroDevs as Inaugural Partner for Drupal 7 Extended Security Support Provider Program

Tue, 2024-07-09 14:33

PORTLAND, Ore., 10 July 2024—The Drupal Association is pleased to announce HeroDevs as the inaugural partner for the new Drupal 7 Extended Security Support Provider Program. This initiative aims to support Drupal 7 users by carefully vetting providers to deliver extended security support services beyond the 5 January 2025 end-of-life (EOL) date.

The Drupal 7 Extended Security Support Provider Program allows organizations that cannot migrate from Drupal 7 to newer versions by the EOL date to continue using a version of Drupal 7 that is secure and compliant. This program complements the Association’s D7 Certified Migration Providers Program, which helps organizations find the right partner to transition their sites from Drupal 7 to Drupal 10.

HeroDevs has successfully met the stringent requirements established by the Drupal Association to become a certified provider with its secure, seamless drop-in replacement of Drupal 7 and core modules. 

“HeroDevs has demonstrated strong expertise in finding and fixing security and compatibility issues for major open-source libraries like Drupal,” Tim Doyle, CEO of the Drupal Association, said. “This program underscores the Drupal Association’s dedication to providing qualified options for organizations using Drupal 7 so they can stay secure while they figure out their next steps for upgrading.”

As organizations prepare for the transition from Drupal 7, HeroDevs will provide the necessary support to keep their sites secure and operational.

Joe Eames, VP of Partnership at HeroDevs, added, “We are honored to be recognized as the inaugural partner of this important program. At HeroDevs, we are creating a more sustainable, secure web and Drupal is a major part of that. We aim to help organizations maintain a secure and compliant web presence – all while giving open source creators and maintainers the freedom to innovate.” 

For more information about the HeroDevs Drupal 7 Never-Ending Support (NES), click here.

About the Drupal Association

The Drupal Association is a non-profit organization that fosters and supports the Drupal software project, the community, and its growth. Our mission is to drive innovation and adoption of Drupal as a high-impact digital public good, hand-in-hand with our open source community. Through various initiatives, events, and programs, the Drupal Association helps ensure the ongoing development and success of the Drupal project.

Categories: FLOSS Project Planets

drunomics: Custom Elements UI: quicker changes to your decoupled Drupal site

Tue, 2024-07-09 08:01
The latest version of the Custom Elements module empowers developers building headless Drupal solutions. With a user-friendly interface, it’s now easier to modify output entities, adjust properties, and change formats. At Drupal Developer Days Burgas, attendees explored the Custom Elements UI and discussed Lupus Decoupled, an efficient stack for decoupled Drupal applications.

New Custom Elements module version

The Custom Elements module is an essential building block in the technology stack that drunomics uses to build headless Drupal solutions, facilitating output of pages in either 'custom elements' or JSON format, as the front end requires.

The newest version of the module features a user interface to modify any entity that is part of the output: any property can be included or excluded, and the output format can be changed, without writing any Drupal/PHP code. This allows a developer to more easily change the backend API output and the decoupled frontend consuming that output at the same time, making for faster turnaround on changes to your website.

Our talk at Drupal Developer Days Burgas

Roderik Muit and Alexandru Ieremia chaired an informal Birds of a Feather session at Drupal Developer Days in Burgas in June 2024 to present the new changes to any interested parties. They also prepared some information about the larger Lupus Decoupled stack for attendees who might not be familiar with it yet.

After the presentation, an animated discussion followed. Some people were curious how the Custom Elements UI works, what the code behind it looks like, and how to write their own 'formatters'.

Another attendee said that Lupus Decoupled seemed to exactly satisfy his need to address the resource-heavy JSON:API queries on his current main website. He was encouraged to try out a demo and to ask any questions in our issue queue or in #lupus-decoupled on Drupal Slack. Attendees were assured that Lupus Decoupled is ready to use (for experienced developers) and completely open source.

The new Custom Elements version with the UI to alter output is currently available as a development release; we are working to finalize a beta release as soon as possible.

Categories: FLOSS Project Planets

Drupal Association blog: Celebrating Success: DrupalCon Portland 2024 Event Impact Recap

Tue, 2024-07-09 07:46

Welcome to the Event Impact Recap of DrupalCon Portland 2024, a benchmark event in North America that not only marked a significant milestone for the Drupal community but also holds a special place in my journey. Having served as a contractor for DrupalCon Portland and now stepping into the role of the new Community Programs Director with the Drupal Association, I am thrilled to share the highlights and successes of this remarkable gathering. My goal is to share an Impact Report with the community after each DrupalCon, presenting the data and feedback on the event. Please view the slides.

Key Highlights from DrupalCon Portland 2024:

  • Attendance and Engagement:
    • With 1,368 registered attendees and an impressive 97.8% check-in rate, DrupalCon Portland 2024 brought together a vibrant community of Drupal enthusiasts and professionals.
    • Of the 1,368 registered attendees, 438 (about one third) received comped registrations for volunteering, speaking, or other roles at the conference.
    • The event saw 3,249 hotel rooms booked in Portland, OR, highlighting its impact on the local economy and hospitality sector. It’s worth noting that these were only the rooms booked through our block; many more were booked outside it, making an even bigger impact on the local business community.
    • A post-event survey rated the overall experience at DrupalCon Portland 2024 at 4.21/5.
    • 32% of attendees said this was their first DrupalCon.
    • Eight scholarship grants were awarded to members of the community.
    • 95% of attendees said they would attend a future DrupalCon
  • Global Representation:
    • Attendees from 6 continents, 35 countries, and 46 states joined us, demonstrating Drupal's global reach and community diversity.
  • Specialized Summits:
    • Five Summits (Government, Higher Ed, Nonprofit, Healthcare, and Community) attracted 476 attendees, facilitating deep dives into crucial Drupal topics and fostering collaboration.
  • DriesNote and Starshot:
    • A highlight of the event was the DriesNote, attended in person by 950 people eager to hear about Starshot, an exciting new initiative (view the recording on the Drupal Association YouTube page). This session not only informed but also inspired attendees about the future of Drupal.
    • Two BOFs were hosted, providing platforms for continued discussions and community engagement beyond the main sessions.
  • Sponsorship:
    • DrupalCon Portland 2024 was made possible thanks to the generous support of our sponsors:
      • Presenting Sponsors: 2
      • Champion Sponsors: 6
      • Advocate Sponsors: 11
      • Exhibitors: 28
      • Total Sponsors: 47
    • The conference wouldn't have been possible without the dedication and partnership of these organizations. Their support underscores their commitment to the Drupal community and its ongoing success.
  • Volunteer Contributions:
    • The success of DrupalCon Portland 2024 was further bolstered by 28 dedicated volunteers, including local ambassadors, logistics contributors, and translation assistants, who collectively contributed 221.5 hours onsite.

I am deeply honored to now step into the role of the new Community Programs Director with the Drupal Association. With over 18 years of experience in event planning, field marketing, nonprofit management, and community engagement, I am excited to leverage my skills to enhance community programs and initiatives within the Drupal ecosystem. My goal is to foster even stronger connections, facilitate meaningful collaborations, and support the growth and inclusivity of the Drupal community.

As we reflect on the achievements and connections fostered at DrupalCon Portland 2024, I am filled with optimism about the future of Drupal and the potential for continued growth and innovation within our community, and am excited to be a part of DrupalCon Barcelona, DrupalCon Singapore, DrupalCon Atlanta and many more for years to come!

DrupalCon Portland 2024 was not just an event but a celebration of collaboration, knowledge sharing, and community spirit. I extend my heartfelt gratitude to everyone who contributed to its success, from attendees and volunteers to sponsors and organizers. Let's carry this momentum forward as we embark on the next chapter of Drupal's journey together.

- Meghan Harrell
Community Programs Director
Drupal Association

Categories: FLOSS Project Planets

Specbee: Simplifying content duplication with Quick Node Clone module in Drupal

Tue, 2024-07-09 07:18
If you’re a marketer, you know how much content cloning can simplify your life. It lets you duplicate blog posts, landing pages, articles, product listings, and forum posts effortlessly. If you’re familiar with Drupal, you know that nodes are fundamental content entities that represent individual pieces of content on a site. Creating similar content nodes in Drupal can be time-consuming, especially when you have to duplicate them manually. Fortunately, there’s a solution: the Quick Node Clone module. In this blog post, we’ll explore how this handy module can streamline your content creation process in Drupal.

What is the Quick Node Clone module?

The Quick Node Clone module allows Drupal users to swiftly duplicate existing nodes with just a few clicks. It saves you time and effort by eliminating the need to recreate content from scratch.

How to install the module

Getting started with the Quick Node Clone module is straightforward; simply follow these steps (see the command sketch at the end of this post):
  • Download the module from Drupal.org or use Composer to install it.
  • Enable the module in the Drupal administration interface.
  • Clear the cache for the changes to take effect.

Configuring the module

Once the module is installed, you can customize its settings to suit your needs:
  • Text to prepend to title: the text entered in this field is prepended to the title of the cloned node and is visible on the node clone page.
  • Clone publication status of original: if checked, the publication status is cloned from the original node. If unchecked, the publication status comes from the default publish status of that node’s content type.
  • Exclusion list: if you don’t want certain field values to be cloned, you can choose the particular content type and exclude any of its fields. The module also supports Paragraphs, allowing you to exclude any paragraph field from being cloned, just like node fields.

How to use Quick Node Clone

Using the Quick Node Clone module is simple:
  • Navigate to the node you want to duplicate.
  • Click the "Clone" button (its exact placement depends on your Drupal configuration).
  • Optionally, make any necessary changes to the cloned node.
  • Save the cloned node, and you’re done!

Permissions

The module provides a set of permissions: for any content type, you can grant permission to clone its nodes. Additionally, the "Administer Quick Node Clone Settings" permission grants access to the module’s configuration page at /admin/config/quick-node-clone.

Hooks provided by the module

1. hook_cloned_node_alter()

Let’s consider a practical example where we want to modify certain properties of the cloned node:

/**
 * Implements hook_cloned_node_alter().
 */
function mymodule_cloned_node_alter($cloned_node, $original_node) {
  // Change the title of the cloned node.
  $cloned_node->setTitle('Modified Title');

  // Check if the cloned node has a specific field and update its value.
  if ($cloned_node->hasField('field_example')) {
    $cloned_node->set('field_example', 'New Field Value');
  }
}

Here, mymodule should be replaced with the machine name of your custom module; $cloned_node is the cloned node object that you can modify, and $original_node is the original node being cloned, providing context for your alterations.

2. hook_cloned_node_paragraph_field_alter()

Let’s consider an example scenario where we want to update the value of a specific paragraph field during the cloning process:

/**
 * Implements hook_cloned_node_paragraph_field_alter().
 */
function mymodule_cloned_node_paragraph_field_alter($paragraph, $field_name, $settings) {
  // Check if the paragraph has a field named 'field_place' and update its value.
  if ($paragraph->hasField('field_place')) {
    $paragraph->set('field_place', 'New Changed Place');
  }
}

Again, mymodule should be replaced with the machine name of your custom module; $paragraph is the cloned paragraph entity that you can modify, $field_name is the name of the paragraph field being processed, and $settings provides additional information about the field.

Final thoughts

The Quick Node Clone module is a valuable tool for Drupal users looking to streamline their content creation process. By simplifying the duplication of nodes, it saves you time and effort, allowing you to focus on more important tasks. Give it a try on your Drupal site and experience the benefits firsthand!
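For reference, assuming a Composer-based Drupal site with Drush available, the installation steps described above might look like this on the command line (quick_node_clone is the module’s project machine name on Drupal.org; verify it against the project page):

composer require drupal/quick_node_clone
drush en quick_node_clone -y
drush cr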
Categories: FLOSS Project Planets
