Feeds
Real Python: pandas GroupBy: Grouping Real World Data in Python
Whether you’ve just started working with pandas and want to master one of its core capabilities, or you’re looking to fill in some gaps in your understanding about .groupby(), this course will help you to break down and visualize a pandas GroupBy operation from start to finish.
This course is meant to complement the official pandas documentation and the pandas Cookbook, where there are self-contained, bite-sized examples. Here, however, you’ll focus on three more involved walkthroughs that use real-world datasets.
In this course, you’ll cover:
- How to use pandas GroupBy operations on real-world data
- How the split-apply-combine chain of operations works
- How to decompose the split-apply-combine chain into steps
- How to categorize methods of a pandas GroupBy object based on their intent and result
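As a taste of the split-apply-combine pattern the course covers, here is a minimal sketch (the dataset and column names are invented for illustration; the course itself uses real-world data):

```python
import pandas as pd

# Tiny invented dataset standing in for the real-world data used in the course
df = pd.DataFrame({
    "state": ["CA", "CA", "NY", "NY"],
    "population": [10, 20, 5, 15],
})

# Split rows by state, apply a sum to each group, combine into one Series
totals = df.groupby("state")["population"].sum()
print(totals)
```

The result is one aggregated value per group, indexed by the grouping key.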
Sitback Solutions: Making the polyfill.io vulnerability a thing of the past for Drupal
Russell Coker: More About Kogan 5120*2160 Monitor
On the 18th of May I blogged about my new 5120*2160 monitor [1]. One thing I noted was that one Netflix movie had run in an aspect ratio that used all space on the monitor. I still don't know if the movie in question was cropped in a letterbox manner, but other Netflix shows in "full screen" mode don't extend to both edges. Also one movie I downloaded was in 3840*1608 resolution, which is almost exactly the same aspect ratio as my monitor. I wonder if some company is using 5120*2160 screens for TVs; 4K and FullHD are rumoured to be cheaper than most other resolutions partly due to TV technology being used for monitors. There is the Anamorphic Format of between 2.35:1 and 2.40:1 [2] which is a close match for the 2.37:1 of my new monitor.
I tried out the HDMI audio on a Dell laptop and my Thinkpad Yoga Gen3 and found it to be of poor quality. It seemed limited to 2560*1440; at this time I'm not sure how much of the fault is due to the laptops and how much is due to the monitor. The monitor docs state that it needs HDMI version 2.1, which was released in 2017, and my Thinkpad Yoga Gen3 was released in 2018 so probably doesn't have that. The HDMI cable in question did 4K resolution on my previous monitor so it should all work at a minimum of 4K resolution.
The switching between inputs is a problem. If I switch between DisplayPort for my PC and HDMI for a laptop the monitor will usually timeout before the laptop establishes a connection and then switch back to the DisplayPort input. So far I have had to physically disconnect the input source I don't want to use. The DisplayPort switch that I've used doesn't seem designed to work with resolutions higher than 4K.
Iâve bought a new USB-C dock which is described as doing 8K which means that as my Thinkpad is described as supporting 5120Ă2880@60Hz over USB-C I should be able to get 5120*2160 without any problems, however for unknown reasons I only get 4K. For work Iâm using a Dell Latitude 7400 2in1 thatâs apparently only capable of 4096*2304 @24 Hz which is less pixels than 5120*2160 and it will also only do 4K resolution. But for both those cases itâs still a significant improvement over 2560*1440. I tested with a Dell Latitude 7440 which gave the full 5120*2160 resolution, I was unable to find specs on what the maximum resolution of the 7440 is. I also have bought DisplayPort switch rated at 8K resolution. I got a switch that doesnât also do USB because the ones that do 8K resolution and USB are about $70. The only KVM switch I saw for 8K resolution at a reasonable price was one designed for switching between two laptops and there doesnât seem to be any adaptors to convert from regular DisplayPort to USB-C alternative mode so that wasnât viable. Currently I have the old KVM switch used for USB only (for keyboard and mouse) and the new switch which only does DisplayPort. So I have two buttons to push when switching between input sources which isnât too bad.
It seems that for PCs resolutions with more pixels than 4K are as difficult and inconvenient now as 4K was 6 years ago when I started doing it. If you want higher than 4K resolution to just work at this time then you need Apple hardware.
The monitor has a series of modes for different types of output. I've found "standard" to be good for text and "movie" to be good for watching movies/TV and for playing RTS games. I previously wrote about how to use ddcutil to use a monitor as a KVM switch [3]; unfortunately I can't do this with the new monitor, as the time that the monitor waits for a good signal on a new input after changing is shorter than the time taken for Linux on the laptops I'm using to enable HDMI output. I've found the following commands to do the basics.
# get display mode
ddcutil getvcp DC
# set standard mode
ddcutil setvcp DC 0
# set movie mode
ddcutil setvcp DC 03

Now that I have that going, the next thing I want to do is to have it switch between "standard" and "movie" modes when I switch keyboard focus.
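One possible starting point for that (my own sketch, not code from this post) is to wrap the two ddcutil invocations in Python, assuming ddcutil is on the PATH and VCP feature DC behaves as shown above:

```python
import subprocess

# VCP feature code DC selects the display mode on this monitor;
# values taken from the ddcutil commands shown above
MODES = {"standard": "0", "movie": "03"}

def mode_command(mode):
    # Build the ddcutil command line for a given mode
    return ["ddcutil", "setvcp", "DC", MODES[mode]]

def set_display_mode(mode):
    # Requires ddcutil installed and i2c access to the monitor
    subprocess.run(mode_command(mode), check=True)

print(mode_command("movie"))
```

A focus-watching daemon could then call set_display_mode() whenever the active window changes.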
- [1] https://etbe.coker.com.au/2024/05/18/kogan-5120Ă2160-40-monitor/
- [2] https://en.wikipedia.org/wiki/Anamorphic_format
- [3] https://etbe.coker.com.au/2022/07/19/ddc-switch/
Related posts:
- Kogan 5120*2160 40″ Monitor I've just got a new Kogan 5120*2160 40″ curved monitor....
- Dell 32″ 4K Monitor and DisplayPort Switch After determining that the Philips 43″ monitor was too large...
- DDC as a KVM Switch With the recent resurgence in Covid19 I've been working from...
DrupalEasy: Getting ready to run your first migration
Migrating content into Drupal is an extremely useful skill for most Drupal developers. Often, the most difficult step in learning to do migrations is the first one - that of getting everything set up and running your first migration to ensure that everything is working as expected.
Most Drupal migration-related blog posts and documentation that I've seen either completely ignore the setup process or gloss over it before diving into the details of writing migrations. Both here and when working with participants in our Professional Module Development course, I like to ensure a solid understanding of this process to build not only skills, but confidence.
This blog post will explain how to set up and run your first (very simple) Drupal migration. The process I will outline is actually the same steps I do when beginning to write a custom migration - just to make sure I have everything "hooked up" properly before I start getting into the more complex aspects of the project.
I generally write migrations on my local machine, using DDEV as the local development environment.
Configuration migrations vs. Plugin migrations

When creating a custom migration, one of the initial decisions to be made is whether you'll write the migrations as plugins or configuration entities. I've always used configuration entities, but there are pros and cons to both approaches. Here, we will focus on configuration entity migrations.
There are some minor differences in the workflow presented below when using plugin migrations. For more information on the differences, I recommend Mauricio Dinarte's article.
Core and contrib modules used

If you're planning on following along with this article, the following modules should be installed and enabled:
Drupal core:
- Migrate
- Migrate Drupal
Contrib:
- Migrate tools (for Drush commands) https://www.drupal.org/project/migrate_tools
- Migrate plus (migrations as configuration, additional process plugins, and more) https://www.drupal.org/project/migrate_plus
This blog post will demonstrate importing a small portion of user data from a Drupal 7 site into a Drupal 10 site. Normally, the first step is setting up the source data: in this case a Drupal 7 database. I normally create a new d7 database on the same MariaDB server as the Drupal 10 site (using DDEV, this is quite easy) and then import the Drupal 7 database into it.
Next, we have to tell the Drupal 10 site about the d7 source database. This can be done by adding the following database connection array to the bottom of your settings.local.php file (which I know you're using!):
$databases['migrate']['default'] = array(
  'driver' => 'mysql',
  'database' => 'd7',
  'username' => 'db',
  'password' => 'db',
  'host' => 'db',
  'port' => 3306,
  'prefix' => '',
);

Note the database key is migrate and the database name is d7. Everything else is identical to the regular Drupal 10 database credentials in DDEV. If you're using Lando or another local development environment, then your database connection array may be different.
Custom migration module

Next, we need a custom module to house our migration. The easiest way to create one is via Drush's generate command:
$ drush generate module

 Welcome to module generator!

 Module name: My d7 migration
 Module machine name [my_d7_migration]:
 Module description: Migrations from Drupal 7 site.
 Package [Custom]:
 Dependencies (comma separated): migrate, migrate_drupal, migrate_plus
 Would you like to create module file? [No]:
 Would you like to create install file? [No]:
 Would you like to create README.md file? [No]:

 The following directories and files have been created or updated:
 • /var/www/html/web/modules/custom/my_d7_migration/my_d7_migration.info.yml

Once created, go ahead and enable the module as well:
$ drush en my_d7_migration

Migration group configuration

As this blog post will be writing migrations as configuration entities, create a new config/install directory in your new module.
We'll define a migration group to which all of our migrations will belong - this is what connects each migration to the source d7 database. Create a new migrate_plus.migration_group.my_group.yml file in the new config/install directory with the following contents:
id: my_d7_migration
label: My Drupal 7 migration.
description: Content from Drupal 7 site.
source_type: Source Drupal 7 site
shared_configuration:
  source:
    key: migrate
  migration_tags:
    - Drupal 7
    - my_d7_migration

The most important bit of this configuration is the source key of migrate; this links up this migration group with the d7 database via the migrate key we previously configured. Then, any migration created that is in this group automatically has access to the source d7 database.
Simple sample migration configuration

Next, let's look at a super simple example migration for user entities. This will not be a complete user migration, but rather just enough to migrate something. Many migration blog posts, documentation pages and other sources provide guidance and examples for writing migration configuration files (remember, this blog post is focused on all of the configuration and mechanics that normally aren't covered in other places).
Here's the user migration configuration we'll use. Create a new /config/install/migrate_plus.migration.user.yml file in your custom module with the following contents:
id: user
label: Migrate D7 users.
migration_group: my_d7_migration
source:
  plugin: d7_user
process:
  name: name
  pass: pass
  mail: mail
  created: created
  access: access
  login: login
  status: status
  timezone: timezone
destination:
  plugin: entity:user

Reputable Drupal migration articles and documentation will explain that migrations need a source (extraction), some processing (transform), and a destination (where to load the data). Together, these three concepts are often called ETL. Each of these concepts is easy to spot in our sample configuration file:
- The data is coming from the d7_user plugin (provided by Drupal core), with the data source being the my_d7_migration group which we configured in a previous step.
- The processing of the data in this simple migration is just field mapping (to: from). Many migration configurations have transformations as part of this section, often provided by Drupal process plugins.
- The destination is Drupal user entities.

Again, these instructions are specific to Drupal migration configurations created as configuration entities. Instructions for migration configurations written as plugins are slightly different (and not covered in this article).
As we are dealing with configuration entities, they must be imported into the active configuration store - which, by default, is the Drupal database. This is easily accomplished with the drush cim --partial command. This must be run each and every time a migration configuration file is modified. This is one of the (few, in my opinion) downsides of writing migrations as configurations.
Next, I often check the status of the migration via the drush migrate:status command. When using the Migrate Drupal module, it is recommended to always use the --group option; otherwise the output of migrate:status can get a bit messy (due to all the default migrations that will be displayed).

The drush migrate:import and migrate:rollback commands should be self-explanatory. Each can be used with either the --group option or with a migration name (as shown below). I almost always use the --update option on migrate:import for updating previously imported data.
Finally, keep the drush migrate:reset command in your back pocket when writing custom migrations. If the migration crashes, you'll need to use this to reset its status from processing to idle in order to run it again.
Here's a full set of the commands specific to the sample user migration, migration group, and custom module created in this article:
$ drush cim --partial --source=modules/custom/my_d7_migration/config/install
$ drush migrate:status --group=whatever
$ drush migrate:import user --update
$ drush migrate:reset user
$ drush migrate:rollback user

What's next?

What I've presented in this article is 90% of what I regularly use when running migrations. Sure, there are a few edge cases where oddities occur, but I believe this is a solid base of knowledge to start with.
Once all this makes sense, I encourage you to utilize other blog posts and documentation to extend the user migration we started, or write new ones based on other resources. Once you're comfortable writing additional migrations, learning how to create your own process plugins is the natural next step. As process plugins are written as PHP classes, some knowledge of module development is necessary.
Interested in learning more about Drupal module development? Check out DrupalEasy's 15-week Professional Module Development course!
Additional resources

- UnderstandDrupal.com - loads of resources about Drupal migrations from Mauricio Dinarte.
- Tag1's recent series about migrations (also written by Mauricio Dinarte!):
- Migration related modules (definitely not an exhaustive list, but a solid start):
- Migrate plus
- Migrate tools
- Migrate devel
- Migrate process extra - process plugins for migrations.
- Migrate conditions - additional process plugins for migrations.
- Migrate source CSV - allows migrations from .csv files.
- Migration tools - even more process plugins and additional helper classes and methods for migration.
- Migrate upgrade - Drush support for the Drupal core migrations from Drupal 6 and 7.
Lead image generated by OpenAI.
Python Bytes: #393 Dare enter the Bash dungeon?
Golems GABB: Best SEO Practices for Drupal Websites in 2024
If you are enthusiastic about Best SEO Practices for Drupal Websites in 2024 and want to grow your website traffic, you are just in the right place! Drupal is like a wise friend on the web who helps you complete tasks quickly without understanding many programming details. Also, it has some cool tricks that make your website both Google-friendly and user-friendly. And here we go, deeper into optimizing your SEO for Drupal!
Drupal SEO Checklist in 2024

All right, let's make a rough Drupal SEO checklist.
Specbee: Top 8 Drupal modules that can improve user engagement
Building and Running QML Android Applications
Qt Creator uses Gradle for project building. Once your project code is ready, select the Android kit for building in Qt Creator. Click build, and Qt Creator will generate Gradle configuration files for you. However, you may encounter errors such as:
Execution failed for task ':processDebugResources'.
A failure occurred while executing com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask$TaskAction
Android resource linking failed
aapt2 E 07-23 15:59:44 51907 51907 LoadedArsc.cpp:94] RES_TABLE_TYPE_TYPE entry offsets overlap actual entry data.
aapt2 E 07-23 15:59:44 51907 51907 ApkAssets.cpp:149] Failed to load resources table in APK '/home/zhy/Android/Sdk/platforms/android-35/android.jar'.
This is because the current Gradle plugin version does not support android-35. See "How to fix 'Execution failed for task :app:processDebugResources. Android resource linking failed' [Android/Flutter]" on Stack Overflow.
To resolve this issue, you need to modify the Gradle configuration.
Navigate to the /build/Qt_6_7_2_Clang_arm64_v8a-Debug/android-build folder in your project.
Open the build.gradle file and locate the dependencies block:
dependencies {
    classpath 'com.android.tools.build:gradle:7.4.1'
}

com.android.tools.build:gradle:7.4.1 refers to the Android Gradle Plugin (AGP). This default version is from 2022 and is quite outdated.
You can find the latest version here: Maven Repository: com.android.tools.build » gradle
Update the plugin version to a newer one, for example:
dependencies {
    classpath 'com.android.tools.build:gradle:8.4.1'
}

Additionally, when upgrading the plugin version, ensure compatibility with the Gradle version.
Check the relationship between the Gradle and AGP versions here: Android Gradle plugin 8.5 release notes | Android Studio | Android Developers
Therefore, you should use at least Gradle version 8.6 to support this plugin.
Navigate to the ./gradle/wrapper folder and open the gradle-wrapper.properties file. This file defines the Gradle version used by the project.
Find the line: distributionUrl=https\://services.gradle.org/distributions/gradle-8.3-bin.zip
Change it to:
distributionUrl=https\://services.gradle.org/distributions/gradle-8.6-bin.zip
This update specifies that the project will use Gradle version 8.6.
After making these changes, click on build. It will automatically download the specified version of Gradle and then download the necessary Gradle plugins.
If successful, you can find the built APK at ./build/outputs/apk/debug.
Running and Installing APK

In a Linux environment, you can use the adb command to install the APK on your Android device. Alternatively, you can use Qt Creator for one-click deployment.
After connecting your Android device via USB, use the adb command to install the APK:
adb install android-build-debug.apk

Make sure that your Android device has Developer Mode enabled. You can find specific instructions on how to enable Developer Mode based on your device model through an online search.
Christoph Schiessl: How to Serve a Directory of Static Files with FastAPI
Quansight Labs Blog: Introducing the 2024 Labs Internship Program Cohort
Russell Coker: Blog Comments
The Akismet WordPress anti-spam plugin has changed its policy to not run on sites that have adverts, which includes mine. Without it I get an average of about 1 spam comment per hour, and the interface for removing spam takes more mouse actions than desired. For email spam it's about the same volume, half of which is messages with SpamAssassin scores high enough to go into the MaybeSpam folder (which I go through every few weeks) and half of which goes straight to my inbox. But fortunately moving spam to a folder where I can later use it to train Bayesian classification is a much faster option on PC and is also something I can do from my phone MUA.
As an experiment I have configured my blog to only take comments from registered users. It will be interesting to see how many spammers make it through that and to also see feedback from genuine people. People who can't comment can tell me about it via the contact methods listed here [1].
I previously wrote about other ways of dealing with hostile comments [2]. Blogging seems to be less popular nowadays so a Planet specific forum doesn't seem a viable option. It's a pity; I think that YouTube and Facebook have taken over from blogs and that's not a good thing.
Related posts:
- Blog Comments John Scalzi wrote an insightful post about the utility of...
- comment spam The war on comment-spam has now begun. It appears that...
- wp-spamshield Yesterday I installed the wp-spamshield plugin for WordPress [1]. It...
Martin-Éric Racine: dhcpcd replacing dhclient for Trixie... or perhaps networkd?
My work on overhauling dhcpcd as the prime replacement for ISC's discontinued DHCP client is done. The package has achieved stability, both upstream and at Debian. The only remaining points are bug #1038882 to swap the Priorities of isc-dhcp-client and dhcpcd-base in the repository's override, and swapping ifupdown's search order to put dhcpcd first.
Meanwhile, ifupdown's de-facto maintainer prompted me to solicit opinions on which of the 4 ifupdown implementations should ship with a minimal installation for Trixie. This, in turn, re-opened the debate of what should be Debian's default network configuration framework (see the thread starting with this post).
networkd

Given how most Debian ports (except for Hurd) ship with systemd, which includes networkd disabled by default, many people in the thread feel that this should become the default network configuration tool for minimal installations. As it happens, most of my hosts fit that scenario, so I figured that I would give another go at testing networkd on one host.
I used the following minimalistic /etc/systemd/network/dhcp.network:
[Match]
Name=en* wl*

[Network]
DHCP=yes

This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update /etc/resolv.conf without installing resolvconf or systemd-resolved.
However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):
- Temporary addresses are not enabled by default. Worse, the setting is ignored if it was enabled by sysctl during bootup. This is a major privacy issue. Adding IPv6PrivacyExtensions=yes to the above exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.
- Networkd uses EUI64 addresses by default. This is another major privacy issue, since EUI64 addresses are forensically traceable to the interface's MAC address. Worse, the setting is ignored if stable-privacy was enabled by sysctl during bootup. To top it all, networkd does stable-privacy using systemd's time-proven brain-dead approach of reinventing the wheel: instead of merely setting the kernel's address generation mode to 3 and letting it configure the secret address, it expects the secret address to be spelled out in the systemd unit.
Conclusion: networkd works well enough for someone configuring an IPv4-only network from 20 years ago, but is utterly inadequate for IPv6 or dual-stack installations, doubly so on a software distribution that claims to care about privacy and network security.
Talking Drupal: Talking Drupal #460 - Preconfigured CMS Solutions
Today we are talking about Preconfigured CMS Solutions, How they can help your business, and The best way to build them in Drupal with guests Baddy Sonja Breidert and Dr. Christoph Breidert.
For show notes visit: www.talkingDrupal.com/460
Topics

- Spain
- What is a Preconfigured CMS / Drupal Solution
- Who is the audience
- What business objectives can preconfigured solutions solve
- What are the ingredients
- How do you manage theming
- How do you manage customized design
- What do you do if your client has a need that your preconfigured solution does not solve
- What about Starshot
- Did the two of you meet over Drupal
- How do you manage work life balance
Christoph Breidert - 1xINTERNET breidert
Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Baddý Sonja Breidert - 1xINTERNET baddysonja
MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted to customize the way Google Maps appear on your Drupal site? There's a module for that.
- Module name/project name:
- Brief history
- How old: created in Mar 2014 by iampuma, but recent releases are by Artem Dmitriiev (a.dmitriiev) of 1xINTERNET
- Versions available: 7.x-2.0, 8.x-1.7, and 8.x-2.6 versions available, the last of which works with Drupal 8, 9, 10, and 11
- Maintainership
- Actively maintained, latest release a week ago
- Security coverage
- Has a Documentation page and lots of information on the project page
- Number of open issues: 8 open issues, 1 of which is a bug against the current branch, though it was actually fixed in the latest release
- Usage stats:
- 1,764 sites
- Module features and usage
- The module allows your Drupal site to use custom map styles, which you can copy and paste from SnazzyMaps.com, or create from scratch using a configuration widget on Github that is linked from the project page
- You will be able to use custom markers by using the System Stream Wrapper module
- You can also specify popups for the markers, using a field or a view mode
- If you use the companion styled_google_views module, you can also show multiple locations, and define clustering options
- Styled Google Map also has integration with Google's Directions service, so visitors can easily get turn-by-turn directions for how to reach their chosen location
- The module also includes a demo submodule you can use to quickly set up a working example to illustrate all the different options available using Styled Google Map
The Drop Times: Modernizing Your Web Presence with Drupal
Hello Tech Enthusiasts,
In this edition, we're exploring why Drupal is a smart choice for your website development needs.
Drupal's modular design makes it highly scalable and flexible, allowing it to support everything from small blogs to large enterprise sites. This adaptability ensures your website can grow and evolve alongside your business without needing extensive overhauls.
Managing content on Drupal is straightforward, thanks to its intuitive interface. Even non-technical users can easily handle updates and changes. Additionally, with numerous themes and modules, customization is virtually limitless, enabling businesses to tailor their sites to specific requirements.
Security is another area where Drupal excels. Regular updates and a robust security framework protect your site from vulnerabilities. Plus, being open-source, Drupal benefits from a vibrant community that continually contributes to its development, ensuring continuous improvements and access to a wealth of resources.
Businesses across various industries have successfully used Drupal to create dynamic, secure, and scalable websites. For example, the University of Oxford leverages Drupal to manage extensive content libraries and facilitate complex workflows efficiently. You can read more about their experience here.
Drupal offers a powerful and versatile CMS solution for modern website development. Its scalability, ease of use, extensive customization options, and strong security make it an ideal choice for businesses aiming to enhance their web presence. Supported by an active community, Drupal continues to meet the diverse needs of its users.
With that, now explore some of the highlighted stories we covered last week:
Join The DropTimes (TDT) for its milestone 100th interview with Dries Buytaert, the innovative founder of Drupal. Interviewed by Anoop John, Founder and Lead of The DropTimes, this conversation explores Drupal's rich history and transformative journey.
Learn how to migrate new content efficiently using Drupal's Migrate API. Jaymie Strecker's second part of this tutorial, "Using Drupal Migrations to Modify Content Within a Site," provides detailed examples of creating paragraphs, nodes, URL aliases, and redirects.
The 2024 Drupal Association board election is now open. Community members can nominate themselves until 30 July 2024. Voting begins on 15 August.
The Drupal Association's annual survey is now open! Digital service providers worldwide are invited to share their insights on market trends, growth opportunities, and more. Participate by September 4 to contribute to the comprehensive report and see the results presented at the CEO Dinner during DrupalCon Barcelona.
Varbase 10.0.0 has arrived, bringing a host of enhancements and new features to this comprehensive Drupal distribution. Built on Drupal 10, it offers improved performance, PHP 8.1 compatibility, and a modern CKEditor 5 integration.
Drupal Starshot is recruiting track leads to drive crucial project tracks, focusing on areas like Drupal Recipes, installer enhancements, and product marketing. Apply by July 31st to contribute to this innovative initiative and help shape the future of Drupal.
Mark Conroy introduces the LocalGov KeyNav module, enabling keyboard shortcuts for navigating LocalGov Drupal websites. This module, sponsored by Big Blue Door, enhances site usability with customizable key sequences for quick access to various pages.
DrupalCon Barcelona 2024 invites registered attendees to submit proposals for Birds of a Feather (BoF) sessions. BoFs provide an inclusive, informal environment to discuss Drupal, technology topics, or personal interests. Submit your BoF proposal by September 25, 2024, to join the conversation.
GĂĄbor Hojtsy and Lauri Timmanee have released the open-source slide deck and video recording from their keynote on Drupal Starshot at Drupal Developer Days 2024. Access these resources for your events and presentations.
Sponsorship opportunities are now available for DrupalCamp Pune 2024, which will take place on October 19-20. This is a great opportunity to engage with the global Drupal community and showcase your brand.
Submit your session proposals for Twin Cities Drupal Camp 2024 by August 15. The camp, scheduled for September 12-13, 2024, welcomes diverse topics related to Drupal and web development.
We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.
To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.
Thank you,
Sincerely
Kazima Abbas
Sub-editor, The DropTimes.
Peter Bengtsson: Converting Celsius to Fahrenheit round-up
In the last couple of days, I've created variations of a simple algorithm to demonstrate how Celsius and Fahrenheit seem to relate to each other if you "mirror the number".
It wasn't supposed to be about the programming language. Still, I used Python in the first one and I noticed that since the code is simple, it could be fun to write variants of it in other languages.
- Converting Celsius to Fahrenheit with Python
- Converting Celsius to Fahrenheit with TypeScript
- Converting Celsius to Fahrenheit with Go
- Converting Celsius to Fahrenheit with Ruby
- Converting Celsius to Fahrenheit with Crystal
- Converting Celsius to Fahrenheit with Rust
It was a fun exercise.
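For reference, the "mirror the number" observation can be sketched like this (my own minimal Python version, not the code from those posts; the digit-reversal approximation only works well for certain two-digit values):

```python
def c_to_f(celsius):
    # Exact Celsius-to-Fahrenheit conversion
    return celsius * 9 / 5 + 32

def mirror(n):
    # Reverse the digits of a (zero-padded) two-digit number, e.g. 16 -> 61
    return int(str(n).zfill(2)[::-1])

for c in (4, 16, 28):
    # mirror(c) lands close to the exact Fahrenheit value
    print(c, c_to_f(c), mirror(c))
```

For example 16°C is exactly 60.8°F, while mirroring 16 gives 61.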
And speaking of fun, I couldn't help but throw in a benchmark using hyperfine that measures, essentially, how fast these CLIs can start up. The results look like this:

Summary
  ./conversion-rs ran
    1.31 ± 1.30 times faster than ./conversion-go
    1.88 ± 1.33 times faster than ./conversion-cr
    7.15 ± 4.64 times faster than bun run conversion.ts
   14.27 ± 9.48 times faster than python3.12 conversion.py
   18.10 ± 12.35 times faster than node conversion.js
   67.75 ± 43.80 times faster than ruby conversion.rb

It doesn't prove much that you didn't already expect. But it's fun to see how fast Python 3.12 has become at starting up.
Head on over to https://github.com/peterbe/temperature-conversion to play along. Perhaps you can see some easy optimizations (speed and style).
libc @ Savannah: The GNU C Library version 2.40 is now available
The GNU C Library
=================
The GNU C Library version 2.40 is now available.
The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.
The GNU C Library is primarily designed to be a portable
and high performance C library. It follows all relevant
standards including ISO C11 and POSIX.1-2017. It is also
internationalized and has one of the most complete
internationalization interfaces known.
The GNU C Library webpage is at http://www.gnu.org/software/libc/
Packages for the 2.40 release may be downloaded from:
       http://ftpmirror.gnu.org/libc/
       http://ftp.gnu.org/gnu/libc/
The mirror list is at http://www.gnu.org/order/ftp.html
Distributions are encouraged to track the release/* branches
corresponding to the releases they are using. The release
branches will be updated with conservative bug fixes and new
features while retaining backwards compatibility.
NEWS for version 2.40
=====================
Major new features:
- The <stdbit.h> header type-generic macros have been changed when using
 GCC 14.1 or later to use __builtin_stdc_bit_ceil etc. built-in functions
 in order to support unsigned __int128 and/or unsigned _BitInt(N) operands
 with arbitrary precisions when supported by the target.
- The GNU C Library now supports a feature test macro _ISOC23_SOURCE to
 enable features from the ISO C23 standard. Only some features from
 this standard are supported by the GNU C Library. The older name
 _ISOC2X_SOURCE is still supported. Features from C23 are also enabled
 by _GNU_SOURCE, or by compiling with the GCC options -std=c23,
 -std=gnu23, -std=c2x or -std=gnu2x.
- The following ISO C23 function families (introduced in TS
 18661-4:2015) are now supported in <math.h>. Each family includes
 functions for float, double, long double, _FloatN and _FloatNx, and a
 type-generic macro in <tgmath.h>.
 - Exponential functions: exp2m1, exp10m1.
 - Logarithmic functions: log2p1, log10p1, logp1.
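The point of these variants is accuracy near zero, where the naive expressions lose precision to cancellation. As a rough illustration of what exp2m1 and log2p1 compute, here is a Python emulation via expm1/log1p (my own sketch, not glibc's implementation):

```python
import math

def exp2m1(x: float) -> float:
    """2**x - 1, computed without cancellation for small x."""
    return math.expm1(x * math.log(2))

def log2p1(x: float) -> float:
    """log2(1 + x), computed without cancellation for small x."""
    return math.log1p(x) / math.log(2)

x = 1e-18
print(2**x - 1)   # naive form rounds to 0.0 in double precision
print(exp2m1(x))  # preserves the tiny ~6.9e-19 result
```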
- A new tunable, glibc.rtld.enable_secure, can be used to run a program
 as if it were a setuid process. This is currently a testing tool to allow
 more extensive verification tests for AT_SECURE programs and not meant to
 be a security feature.
- On Linux, the epoll header was updated to include epoll ioctl definitions
 and the related structure added in Linux kernel 6.9.
- The fortify functionality has been significantly enhanced for building
 programs with clang against the GNU C Library.
- Many functions have been added to the vector library for aarch64:
   acosh, asinh, atanh, cbrt, cosh, erf, erfc, hypot, pow, sinh, tanh
- On x86, memset can now use non-temporal stores to improve the performance
 of large writes. This behaviour is controlled by a new tunable
 x86_memset_non_temporal_threshold.
Deprecated and removed features, and other changes affecting compatibility:
- Architectures which use a 32-bit seconds-since-epoch field in struct
 lastlog, struct utmp, struct utmpx (such as i386, powerpc64le, rv32,
 rv64, x86-64) switched from a signed to an unsigned type for that
 field. This allows these fields to store timestamps beyond the year
 2038, until the year 2106. Please note that applications are still
 expected to migrate off the interfaces declared in <utmp.h> and
 <utmpx.h> (except for login_tty) due to locking and session management
 problems.
- __rseq_size now denotes the size of the active rseq area (20 bytes
 initially), not the size of struct rseq (32 bytes initially).
Security related changes:
The following CVEs were fixed in this release, details of which can be
found in the advisories directory of the release tarball:
 GLIBC-SA-2024-0004:
   ISO-2022-CN-EXT: fix out-of-bound writes when writing escape
   sequence (CVE-2024-2961)
 GLIBC-SA-2024-0005:
   nscd: Stack-based buffer overflow in netgroup cache (CVE-2024-33599)
 GLIBC-SA-2024-0006:
   nscd: Null pointer crash after notfound response (CVE-2024-33600)
 GLIBC-SA-2024-0007:
   nscd: netgroup cache may terminate daemon on memory allocation
   failure (CVE-2024-33601)
 GLIBC-SA-2024-0008:
   nscd: netgroup cache assumes NSS callback uses in-buffer strings
   (CVE-2024-33602)
The following bugs were resolved with this release:
 [19622] network: Support aliasing with struct sockaddr
 [21271] localedata: cv_RU: update translations
 [23774] localedata: lv_LV collates Y/y incorrectly
 [23865] string: wcsstr is quadratic-time
 [25119] localedata: Change Czech weekday names to lowercase
 [27777] stdio: fclose does a linear search, takes ages when many FILE*
   are opened
 [29770] libc: prctl does not match manual page ABI on powerpc64le-
   linux-gnu
 [29845] localedata: Update hr_HR locale currency to €
 [30701] time: getutxent misbehaves on 32-bit x86 when _TIME_BITS=64
 [31316] build: Fails test misc/tst-dirname "Didn't expect signal from
   child: got `Illegal instruction'" on non SSE CPUs
 [31317] dynamic-link: [RISCV] static PIE crashes during self
   relocation
 [31325] libc: mips: clone3 is wrong for o32
 [31335] math: Compile glibc with -march=x86-64-v3 should disable FMA4
   multi-arch version
 [31339] libc: arm32 loader crash after cleanup in 2.36
 [31340] manual: A bad sentence in section 22.3.5 (resource.texi)
 [31357] dynamic-link: $(objpfx)tst-rtld-list-diagnostics.out rule
   doesn't work with test wrapper
 [31370] localedata: wcwidth() does not treat
   DEFAULT_IGNORABLE_CODE_POINTs as zero-width
 [31371] dynamic-link: x86-64: APX and Tile registers aren't preserved
   in ld.so trampoline
 [31372] dynamic-link: _dl_tlsdesc_dynamic doesn't preserve all caller-
   saved registers
 [31383] libc: _FORTIFY_SOURCE=3 and __fortified_attr_access vs size of
   0 and zero size types
 [31385] build: sort-makefile-lines.py doesn't check variable with _
   nor with "^# variable"
 [31402] libc: clone (NULL, NULL, ...) clobbers %r7 register on
   s390{,x}
 [31405] libc: Improve dl_iterate_phdr using _dl_find_object
 [31411] localedata: Add Latgalian locale
 [31412] build: GCC 6 failed to build i386 glibc on Fedora 39
 [31429] build: Glibc failed to build with -march=x86-64-v3
 [31468] libc: sigisemptyset returns true when the set contains signals
   larger than 34
 [31476] network: Automatic activation of single-request options break
   resolv.conf reloading
 [31479] libc: Missing #include <sys/rseq.h> in sched_getcpu.c may
   result in a loss of rseq acceleration
 [31501] dynamic-link: _dl_tlsdesc_dynamic_xsavec may clobber %rbx
 [31518] manual: documentation: FLT_MAX_10_EXP questionable text, evtl.
   wrong,
 [31530] localedata: Locale file for Moksha - mdf_RU
 [31553] malloc: elf/tst-decorate-maps fails on ppc64el
 [31596] libc: On the llvm-arm32 platform, dlopen("not_exist.so", -1)
   triggers segmentation fault
 [31600] math: math: x86 ceill traps when FE_INEXACT is enabled
 [31601] math: math: x86 floor traps when FE_INEXACT is enabled
 [31603] math: math: x86 trunc traps when FE_INEXACT is enabled
 [31612] libc: arc4random fails to fallback to /dev/urandom if
   getrandom is not present
 [31629] build: powerpc64: Configuring with "--with-cpu=power10" and
   'CFLAGS=-O2 -mcpu=power9' fails to build glibc
 [31640] dynamic-link: POWER10 ld.so crashes in
   elf_machine_load_address with GCC 14
 [31661] libc: NPROCESSORS_CONF and NPROCESSORS_ONLN not available in
   getconf
 [31676] dynamic-link: Configuring with CC="gcc -march=x86-64-v3"
   --with-rtld-early-cflags=-march=x86-64 results in linker failure
 [31677] nscd: nscd: netgroup cache: invalid memcpy under low
   memory/storage conditions
 [31678] nscd: nscd: Null pointer dereferences after failed netgroup
   cache insertion
 [31679] nscd: nscd: netgroup cache may terminate daemon on memory
   allocation failure
 [31680] nscd: nscd: netgroup cache assumes NSS callback uses in-buffer
   strings
 [31682] math: [PowerPC] Floating point exception error for math test
   test-ceil-except-2 test-floor-except-2 test-trunc-except-2
 [31686] dynamic-link: Stack-based buffer overflow in
   parse_tunables_string
 [31695] libc: pidfd_spawn/pidfd_spawnp leak an fd if clone3 succeeds
   but execve fails
 [31719] dynamic-link: --enable-hardcoded-path-in-tests doesn't work
   with -Wl,--enable-new-dtags
 [31730] libc: backtrace_symbols_fd prints different strings than
   backtrace_symbols returns
 [31753] build: FAIL: link-static-libc with GCC 6/7/8
 [31755] libc: procutils_read_file doesn't start with a leading
   underscore
 [31756] libc: write_profiling is only in libc.a
 [31757] build: Should XXXf128_do_not_use functions be excluded?
 [31759] math: Extra nearbyint symbols in libm.a
 [31760] math: Missing math functions
 [31764] build: _res_opcodes should be a compat symbol only
 [31765] dynamic-link: _dl_mcount_wrapper is exported without prototype
 [31766] stdio: IO_stderr _IO_stdin_ _IO_stdout should be compat
   symbols
 [31768] string: Extra stpncpy symbol in libc.a
 [31770] libc: clone3 is in libc.a
 [31774] libc: Missing __isnanf128 in libc.a
 [31775] math: Missing exp10 exp10f32x exp10f64 fmod fmodf fmodf32
   fmodf32x fmodf64 in libm.a
 [31777] string: Extra memchr strlen symbols in libc.a
 [31781] math: Missing math functions in libm.a
 [31782] build: Test build failure with recent GCC trunk (x86/tst-cpu-
   features-supports.c:69:3: error: parameter to builtin not valid:
   avx5124fmaps)
 [31785] string: loongarch: Extra strnlen symbols in libc.a
 [31786] string: powerpc: Extra strchrnul and strncasecmp_l symbols in
   libc.a
 [31787] math: powerpc: Extra llrintf, llrintf, llrintf32, and
   llrintf32 symbols in libc.a
 [31788] libc: microblaze: Extra cacheflush symbol in libc.a
 [31789] libc: powerpc: Extra versionsort symbol in libc.a
 [31790] libc: s390: Extra getutent32, getutent32_r, getutid32,
   getutid32_r, getutline32, getutline32_r, getutmp32, getutmpx32,
   getutxent32, getutxid32, getutxline32, pututline32, pututxline32,
   updwtmp32, updwtmpx32 in libc.a
 [31797] build: g++ -static requirement should be able to opt-out
 [31798] libc: pidfd_getpid.c is miscompiled by GCC 6.4
 [31802] time: difftime is pure not const
 [31808] time: The supported time_t range is not documented.
 [31840] stdio: Memory leak in _IO_new_fdopen (fdopen) on seek failure
 [31867] build: "CPU ISA level is lower than required" on SSE2-free
   CPUs
 [31876] time: "Date and time" documentation fixes for POSIX.1-2024 etc
 [31883] build: ISA level support configure check relies on bashism /
   is otherwise broken for arithmetic
 [31892] build: Always install mtrace.
 [31917] libc: clang mq_open fortify wrapper does not handle 4 argument
   correctly
 [31927] libc: clang open fortify wrapper does not handle argument
   correctly
 [31931] time: tzset may fault on very short TZ string
 [31934] string: wcsncmp crash on s390x on vlbb instruction
 [31963] stdio: Crash in _IO_link_in within __gcov_exit
 [31965] dynamic-link: rseq extension mechanism does not work as
   intended
 [31980] build: elf/tst-tunables-enable_secure-env fails on ppc
Release Notes
=============
https://sourceware.org/glibc/wiki/Release/2.40
Contributors
============
This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports. These include:
Adam Sampson
Adhemerval Zanella
Alejandro Colomar
Alexandre Ferrieux
Amrita H S
Andreas K. Hüttel
Andreas Schwab
Andrew Pinski
Askar Safin
Aurelien Jarno
Avinal Kumar
Carlos Llamas
Carlos O'Donell
Charles Fol
Christoph Müllner
DJ Delorie
Daniel Cederman
Darius Rad
David Paleino
Dragan Stanojević (Nevidljivi)
Evan Green
Fangrui Song
Flavio Cruz
Florian Weimer
Gabi Falk
H.J. Lu
Jakub Jelinek
Jan Kurik
Joe Damato
Joe Ramsay
Joe Simmons-Talbott
Joe Talbott
John David Anglin
Joseph Myers
Jules Bertholet
Julian Zhu
Junxian Zhu
Konstantin Kharlamov
Luca Boccassi
Maciej W. Rozycki
Manjunath Matti
Mark Wielaard
MayShao-oc
Meng Qinggang
Michael Jeanson
Michel Lind
Mike FABIAN
Mohamed Akram
Noah Goldstein
Palmer Dabbelt
Paul Eggert
Philip Kaludercic
Samuel Dobron
Samuel Thibault
Sayan Paul
Sergey Bugaev
Sergey Kolosov
Siddhesh Poyarekar
Simon Chopin
Stafford Horne
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Wilco Dijkstra
Xi Ruoyao
Xin Wang
Yinyu Cai
YunQiang Su
We would like to call out the following and thank them for their
tireless patch review:
Adhemerval Zanella
Alejandro Colomar
Andreas K. Hüttel
Arjun Shankar
Aurelien Jarno
Bruno Haible
Carlos O'Donell
DJ Delorie
Dmitry V. Levin
Evan Green
Fangrui Song
Florian Weimer
H.J. Lu
Jonathan Wakely
Joseph Myers
Mathieu Desnoyers
Maxim Kuvyrkov
Michael Jeanson
Noah Goldstein
Palmer Dabbelt
Paul Eggert
Paul E. Murphy
Peter Bergner
Philippe Mathieu-Daudé
Sam James
Siddhesh Poyarekar
Simon Chopin
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Xi Ruoyao
Zack Weinberg
--
Andreas K. Hüttel
dilfridge@gentoo.org
Gentoo Linux developer
(council, toolchain, base-system, perl, releng)
https://wiki.gentoo.org/wiki/User:Dilfridge
https://www.akhuettel.de/
Real Python: Logging in Python
Recording relevant information during the execution of your program is a good practice as a Python developer when you want to gain a better understanding of your code. This practice is called logging, and it's a very useful tool for your programmer's toolbox. It can help you discover scenarios that you might not have thought of while developing.
These records are called logs, and they can serve as an extra set of eyes that are constantly looking at your application's flow. Logs can store information, like which user or IP accessed the application. If an error occurs, then logs may provide more insights than a stack trace by telling you the state of the program before the error and the line of code where it occurred.
Python provides a logging system as part of its standard library. You can add logging to your application with just a few lines of code.
In this tutorial, you'll learn how to:
- Work with Python's logging module
- Set up a basic logging configuration
- Leverage log levels
- Style your log messages with formatters
- Redirect log records with handlers
- Define logging rules with filters
When you log useful data from the right places, you can debug errors, analyze the application's performance to plan for scaling, or look at usage patterns to plan for marketing.
You'll do the coding for this tutorial in the Python standard REPL. If you prefer Python files, then you'll find a full logging example as a script in the materials of this tutorial. You can download this script by clicking the link below:
Get Your Code: Click here to download the free sample code that you'll use to learn about logging in Python.
Take the Quiz: Test your knowledge with our interactive "Logging in Python" quiz. You'll receive a score upon completion to help you track your learning progress. In this quiz, you'll test your understanding of Python's logging module. With this knowledge, you'll be able to add logging to your applications, which can help you debug errors and analyze performance.
If you commonly use Python's print() function to get information about the flow of your programs, then logging is the natural next step for you. This tutorial will guide you through creating your first logs and show you how to make logging grow with your projects.
Starting With Python's Logging Module
The logging module in Python's standard library is a ready-to-use, powerful module that's designed to meet the needs of beginners as well as enterprise teams.
Note: Since logs offer a variety of insights, the logging module is often used by other third-party Python libraries, too. Once you're more advanced in the practice of logging, you can integrate your log messages with the ones from those libraries to produce a homogeneous log for your application.
To leverage this versatility, it's a good idea to get a better understanding of how the logging module works under the hood. For example, you could take a stroll through the logging module's source code.
The main component of the logging module is the logger. You can think of the logger as a reporter in your code that decides what to record, at what level of detail, and where to store or send these records.
Exploring the Root Logger
To get a first impression of how the logging module and a logger work, open the Python standard REPL and enter the code below:
>>> import logging
>>> logging.warning("Remain calm!")
WARNING:root:Remain calm!
The output shows the severity level before each message along with root, which is the name the logging module gives to its default logger. This output uses the default format, which can be configured to include details like a timestamp.
In the example above, you're sending a message on the root logger. The log level of the message is WARNING. Log levels are an important aspect of logging. By default, there are five standard levels indicating the severity of events. Each has a corresponding function that can be used to log events at that level of severity.
Note: There's also a NOTSET log level, which you'll encounter later in this tutorial when you learn about custom logging handlers.
Here are the five default log levels, in order of increasing severity:
Log Level  Function            Description
DEBUG      logging.debug()     Provides detailed information that's valuable to you as a developer.
INFO       logging.info()      Provides general information about what's going on with your program.
WARNING    logging.warning()   Indicates that there's something you should look into.
ERROR      logging.error()     Alerts you to an unexpected problem that's occurred in your program.
CRITICAL   logging.critical()  Tells you that a serious error has occurred and may have crashed your app.
The logging module provides you with a default logger that allows you to get started with logging without needing to do much configuration. However, the logging functions listed in the table above reveal a quirk that you may not expect:
>>> logging.debug("This is a debug message")
>>> logging.info("This is an info message")
>>> logging.warning("This is a warning message")
WARNING:root:This is a warning message
>>> logging.error("This is an error message")
ERROR:root:This is an error message
>>> logging.critical("This is a critical message")
CRITICAL:root:This is a critical message
Notice that the debug() and info() messages didn't get logged. This is because, by default, the logging module logs messages with a severity level of WARNING or above. You can change that by configuring the logging module to log events of all levels.
Read the full article at https://realpython.com/python-logging/ »
What every developer should know about time
What time is it? It's 9:30 in the morning.
But what does that mean? This isn't true everywhere in the world, and who decided a day has 24 hours? What is a second? What is time?
What is time, really?
Time, as we understand it, is the ongoing sequence of existence and events that occur in a seemingly irreversible flow from the past, through the present, and into the future.
General relativity reveals that the perception of time can vary for different observers: the concept of what time it is now holds significance only relative to a specific observer. On your wristwatch, time advances at a consistent rate of one second per second. However, if you observe a distant clock, its rate will be influenced by both velocity and gravity: a clock moving faster than you will appear to tick more slowly.
Ignoring relativistic effects for a moment, physics provides a practical definition of time: time is "what a clock reads". Observing a certain number of repetitions of a standard cyclical event constitutes one standard unit, such as the second.
Time units
Even in ancient times, two notable physical phenomena could be observed cyclically and helped quantify the passage of time:
- The Earth's orbit around the Sun results in evident, cyclical events, and defines the "year" as a unit of time.
- The Sun crossing the local meridian, i.e. the time it takes for the Sun to reach the same angle in the sky. The interval between two successive instances of this event identifies the solar "day".
Note that the solar day, as a time unit, is affected by both the Earth's own rotation and its revolution around the Sun: since the Earth moves by roughly 1° around the Sun each day, the Earth has to rotate by roughly 361° for the Sun to cross the local meridian again.
This is different from the "sidereal day", which is the amount of time it takes for the Earth to rotate by 360°, since it is computed from the apparent motion of distant astronomical objects instead of the Sun.
However, the solar day is typically the unit of interest for us humans and our daily lives.
How many days in a year?
Although the Earth doesn't rotate a discrete number of times in its year, and hence the year cannot be subdivided into a discrete number of days, humans have devised numerous calendar systems to organize days for social, religious, and administrative purposes.
Pope Gregory XIII introduced the Gregorian calendar in 1582, and it has become the predominant calendar system in use today. It supplanted the Julian calendar, rectifying inaccuracies in the handling of leap years, which are additional days inserted to synchronize the calendar year with the solar year. Over a complete leap cycle spanning 400 years, the average duration of a Gregorian calendar year is 365.2425 days. While the majority of years consist of 365 days, 97 out of every 400 years are designated as leap years, extending to 366 days.
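The Gregorian rule (97 leap years per 400) is compact enough to express directly; a short Python sketch:

```python
def is_leap(year: int) -> bool:
    """Gregorian rule: every 4th year, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Count leap years over one full 400-year cycle
leap_count = sum(is_leap(y) for y in range(2000, 2400))
print(leap_count)              # 97 leap years per cycle
print(365 + leap_count / 400)  # average year length: 365.2425 days
```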
Despite the Gregorian calendar being the de-facto standard, other calendars exist:
- Islamic Calendar: A lunar calendar consisting of 12 months in a year of 354 or 355 days.
- Hebrew Calendar: A lunisolar calendar used by Jewish communities.
- Chinese Calendar: A lunisolar calendar that determines traditional festivals and holidays in Chinese culture.
We all know that there are 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day, right? As you might expect, these magic numbers aren't a fundamental property of nature, but rather the result of two historical choices:
- Dividing the day into 24 hours: This stems from ancient civilizations where early cultures divided the cycle of day and night into a convenient number of units, likely influenced by the phases of the moon or other natural patterns. The ancient Egyptians used sundial subdivisions and split the cycle into two halves of 12 hours each: they already used a duodecimal system, likely influenced by the number of finger joints in our hands, excluding thumbs, and the ease of counting them. Twenty-four then became a common choice for counting hours.
- Subdividing hours into 60 minutes and minutes into 60 seconds: This preference for base-60 systems comes from even earlier civilizations like the Sumerians and Babylonians. Their number systems were based on 60, again possibly due to the ease of counting on fingers and knuckles. This base-60 system was later adopted by the Greeks and eventually influenced our timekeeping.
Historically, dividing the natural day from sunrise to sunset into twelve hours, as the Romans also did, meant that the hour had a variable length that depended on season and latitude.
Equal hours were later standardized as 1/24 of the mean solar day.
Counting time
Historically, the concept of a second relied on the Earth's rotation, but the variability of the latter made this definition inadequate for scientific accuracy. In 1967, the second was redefined using the consistent properties of the caesium-133 atom. Presently, a second is defined as the time it takes for 9,192,631,770 oscillations of the radiation emitted during the transition between two energy states of caesium-133.
TAI and atomic clocks
Atomic clocks utilize this definition of the second for precise time measurement.
An array of atomic clocks around the globe synchronizes time through consensus: the clocks collectively determine the accurate time, with each clock adjusting to align with the consensus, known as International Atomic Time (TAI).
The International Bureau of Weights and Measures (BIPM), located near Paris, France, computes TAI by taking the weighted average of more than 300 atomic clocks from around the world, where the most stable clocks receive more weight in the calculation.
Operating on atomic seconds, TAI serves as the standard for calibrating various timekeeping instruments, including quartz and GPS clocks.
GPS time
GPS satellites carry multiple atomic clocks for redundancy and better synchronization, but the accuracy of GPS timing requires several corrections, including for both Special and General Relativity:
- According to Special Relativity, clocks moving at high speeds will run slower relative to those on the ground. GPS satellites travel at approximately 14,000 km/h, which results in their clocks ticking slower by about 7.2 microseconds per day.
- According to General Relativity, clocks in a weaker gravitational field run faster than those closer to a massive object. The altitude of GPS satellites (about 20,200 km) causes their clocks to tick faster by about 45.8 microseconds per day.
- The net effect: combined, these relativistic effects cause the satellite clocks to run faster than ground-based clocks by about 38.6 microseconds per day.
To compensate, the satellite clocks are pre-adjusted on Earth before launch to tick slower than the ground clocks.
Once in orbit, GPS satellites continuously broadcast signals containing the exact transmission time and their positions.
A GPS receiver on the ground picks up these signals from multiple satellites and compares the reception time with the transmission time to calculate the time delays.
Using these delays, the receiver determines its distance from each satellite and, through trilateration, calculates its own position and the current time.
Precise timekeeping on GPS satellites is thus necessary, since small timing errors of 38.6 microseconds per day would otherwise accumulate into positional errors of roughly 11.4 km each day.
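The figures quoted above can be cross-checked in a few lines of Python (a back-of-the-envelope sketch using the constants from this article; the light-travel distance over 38.6 µs lands in the same ballpark as the ~11.4 km cited):

```python
# Relativistic clock drifts quoted above, in microseconds per day
special = -7.2   # velocity effect: satellite clocks tick slower
general = 45.8   # weaker gravity: satellite clocks tick faster
net_us = special + general           # net drift: 38.6 us/day

C = 299_792_458                      # speed of light, m/s
error_km = net_us * 1e-6 * C / 1000  # ranging error accumulated per day
print(f"{net_us:.1f} us/day -> ~{error_km:.1f} km/day")
```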
Notably, corrections must also consider that the elliptical satellite orbits cause variations in time dilation and gravitational frequency shifts over time. This eccentricity affects the clock rate difference between the satellite and the receiver, increasing or decreasing it based on the satellite's altitude.
UTC and leap seconds
While TAI provides precise seconds, the Earth's rotation is irregular: it gradually slows due to tidal acceleration.
For practical purposes, on the other hand, civil time is defined to agree with the Earth's rotation.
Coordinated Universal Time (UTC) serves as the international standard for timekeeping. This time scale uses the same atomic seconds as TAI but adjusts for variations in the Earth's rotation by adding or omitting leap seconds as necessary.
To determine whether a leap second is required, universal time UT1 is used to measure mean solar time by monitoring the Earth's rotation relative to the Sun.
The variance between UT1 and UTC dictates the necessity of a leap second in UTC.
UTC thus has precise seconds and is always kept within 0.9 seconds of UT1 to stay synchronized with astronomical time: an additional second may be inserted at the end of June or December to compensate.
This is why the second is now precisely defined in the International System of Units (SI) as the atomic second, while days, hours, and minutes are only mentioned in the SI Brochure as accepted units for explanatory purposes: their use is deeply embedded in history and culture, but their definition is not precise, as it refers to the mean solar day and cannot account for the inconsistent rotational and orbital motion of the Earth.
Local time
Although UTC provides a universal time standard, individuals require a "local" time that is consistent with the position of the Sun, ensuring that local noon (when the Sun reaches its zenith) roughly aligns with 12:00 PM.
In a specific region, coordinating daily activities such as work and travel is more convenient when people adhere to a shared local time. For instance, if I start traveling to another nearby city at 9:00 in the morning, I can anticipate that it will be 11:00 AM after two hours of travel.
On the other hand, if I schedule a flight to a faraway destination and the airline informs me that the arrival time is 11:00 AM local time, I can generally infer that the sun will be high in the sky.
Time zones
Time zones are designed to delineate geographical regions within which a convenient and consistent standard time is used.
A time zone is a set of rules that define a local time relative to the standard incremental time, i.e. UTC.
Imagine segmenting the Earth into 24 sections, each approximately 15 degrees of longitude wide. This would roughly define geographical areas that differ by an hour, a time range small enough that people living within each area can agree on a local time.
For this purpose, time at zero degrees longitude (the Prime Meridian) serves as the reference point. Each country or political entity declares its preferred time zone, identified by its deviation from UTC, which spans from UTC−12:00 to UTC+14:00. Typically, these offsets are whole-hour increments, though certain zones like India and Nepal deviate by an additional 30 or 45 minutes.
This system ensures that different regions around the globe set their clocks according to their respective time zones.
There are actually more time zones than countries, even though time zones generally correspond to hourly divisions of the Earth's geography.
At the boundaries of UTC offsets, the International Date Line (IDL) roughly follows the 180° meridian but zigzags to avoid splitting countries or territories, creating a sharp time difference where one side can be a whole day ahead of the other, i.e. when jumping from UTC−12 to UTC+12.
Some regions have chosen offsets that extend beyond the conventional range for economic or practical reasons. For example, Kiribati adjusted its time zones to ensure that all its islands are on the same calendar day. As a result, some areas of Kiribati observe UTC+14, which is why the UTC offset range extends from −12 to +14.
Daylight Saving Time
Additionally, some nations adjust their clocks forward by one hour during the warmer months. This practice, known as Daylight Saving Time (DST), aims to synchronize business hours with daylight hours, potentially boosting economic activity by extending daylight for commercial endeavors. The evening daylight extension also holds the promise of conserving energy by reducing reliance on artificial lighting. Typically, clocks are set forward in the spring and set back in the fall.
DST is less prevalent near the equator, due to minimal variation in sunrise and sunset times, as well as at high latitudes, where a one-hour shift could lead to dramatic changes in daylight patterns. Consequently, certain countries maintain the same local time year-round.
Nigeria always observes West Africa Time (WAT), that is, UTC+1. Japan observes Japan Standard Time (JST), that is, UTC+9.
Other countries instead observe DST during the summer by following a different UTC offset. That's why Germany and other countries in Europe observe both Central European Time (CET), UTC+1, and Central European Summer Time (CEST), UTC+2, during the year, and that's where CEST gets its name.
Interestingly, while the United Kingdom follows Greenwich Mean Time (GMT) UTC+0 in winter and British Summer Time (BST) UTC+1 in summer, Iceland uses GMT all year.
Time zone databases
Conceptually, a time zone is a region of the Earth that shares the same local time rules, which can include both a standard time and a daylight saving time; each corresponds to a local offset from UTC.
These conventions and rules change over time, which is why time zone databases exist and are regularly updated to reflect historical changes.
A standard reference is the IANA time zone database, also known as tz database or tzdata, which maintains a list of the existing time zones and their rules.
On Linux, macOS, and other Unix systems, you can browse the time zone database at the standard path `/usr/share/zoneinfo/`.
Hereâs a breakdown of the essential elements for navigating time zone databases:
- The "time zone identifier" is typically a unique label in the form Area/Location, e.g. `America/New_York`.
- "Time zone abbreviations" are short forms used to denote specific time zones, often indicating whether standard or daylight saving time is in effect. For example, CET (Central European Time) corresponds to UTC+01:00 and CEST (Central European Summer Time) to UTC+02:00.
- "Standard times" are the official local times observed in a region when daylight saving time is not in effect. For example, Central European Time (CET, UTC+01:00) is the standard time for many European countries during the winter months.
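With Python's `zoneinfo` module you can also explore the set of IANA identifiers programmatically. A small sketch, assuming tz data is installed on the system:

```python
from zoneinfo import available_timezones

# The full set of IANA time zone identifiers known to the system
zones = available_timezones()

print(len(zones))                       # several hundred identifiers
print("America/New_York" in zones)      # True
print(sorted(z for z in zones if z.startswith("Pacific/"))[:3])
```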
For all practical purposes, UTC serves as the foundation for tracking the passage of time and as the reference for defining a local time.
However, all would be futile without a standardized representation of time that allows for consistent and accurate recording, processing, and exchange of time-related data across different systems.
Time formats are crucial for ensuring that time data is interpretable and usable by both humans and machines.
Some common formats have become the de-facto standards for interoperable time data.
ISO 8601, RFC 3339
ISO 8601 is a globally recognized standard for representing dates, times, and durations. It offers great versatility, accommodating a variety of date and time formats.
ISO 8601 strings represent date and time in a human-readable format with optional time zone information. The standard uses the Gregorian calendar, even for dates that predate its introduction (the proleptic Gregorian calendar).
- Dates use the format `YYYY-MM-DD`, such as `2024-05-28`.
- Times use the format `HH:MM:SS`, such as `14:00:00`.
- Combined date and time use the format `YYYY-MM-DDTHH:MM:SS`, for example `2024-05-28T14:00:00`.
- Strings can include a time zone offset, for example `2024-05-28T14:00:00+02:00`.
- `Z` indicates Zulu time, the military name for UTC, for example `2024-05-28T14:00:00Z`.
ISO 8601 is further refined by RFC 3339 for use in internet protocols.
RFC 3339 is a profile of ISO 8601, meaning it adheres to the standard but imposes additional constraints to ensure consistency and avoid ambiguities:
- Always specifies the time zone, either as `Z` for UTC or as an offset. Example: `2024-05-28T14:00:00-05:00`.
- Supports fractional seconds, such as in `2024-05-28T14:00:00.123Z`.
- Fractional seconds, if used, must be preceded by a decimal point and can have any number of digits.
- Does not allow the truncated formats of ISO 8601, such as `2024-05` for May 2024.
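As a sketch of these formats in practice, Python's standard `datetime` module can produce and parse such strings. Note that `isoformat()` emits the `+00:00` offset form rather than `Z`, so the `Z` form is produced here with `strftime` instead:

```python
from datetime import datetime, timezone

now = datetime(2024, 5, 28, 14, 0, 0, tzinfo=timezone.utc)

# RFC 3339 / ISO 8601 output with an explicit offset
print(now.isoformat())                     # 2024-05-28T14:00:00+00:00

# The equivalent "Z" (Zulu) form
print(now.strftime("%Y-%m-%dT%H:%M:%SZ"))  # 2024-05-28T14:00:00Z

# Parsing a string that carries a local offset
parsed = datetime.fromisoformat("2024-05-28T14:00:00+02:00")
print(parsed.utcoffset())                  # two hours ahead of UTC
```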
Neither standard is limited to UTC: they can express local times with offsets, and even times before UTC was introduced. In practice, though, they are most often used to represent UTC timestamps.
Since leap seconds are added to UTC to account for irregularities in the Earth's rotation, ISO 8601 allows for the representation of a 60th second in a minute, which is the leap second.
For example, the latest leap second occurred at the end of 2016, so `2016-12-31T23:59:60+00:00` is a valid ISO 8601 timestamp.
Unix time
Unix time, or POSIX time, is a method of keeping track of time by counting the number of non-leap seconds that have passed since the Unix epoch. The Unix epoch is set at 00:00:00 UTC on January 1, 1970.
The Unix time format is thus a signed integer counting the number of non-leap seconds since the epoch.
For example, the Unix timestamp `1653484800` represents a specific second in time, that is 00:00:00 UTC on May 25, 2022.
For dates and times before the epoch, Unix time is represented as negative integers.
Unix time can also represent time with higher precision using fractional seconds (e.g., Unix timestamps with milliseconds, microseconds, or nanoseconds).
Note that Unix time is timezone-agnostic.
In addition, Unix time does not account for leap seconds, so it does not have a straightforward way to represent the 60th second (leap second) of a minute.
According to the POSIX.1 standard, Unix time handles a leap second by repeating the previous second, meaning that time appears to "stand still" for one second when fractional seconds are not considered. However, some systems, such as Google's, use a technique called "leap smear": instead of inserting a leap second abruptly, they gradually adjust the system clock over a longer period, such as 24 hours, to distribute the one-second difference smoothly. As a result, certain Unix timestamps can be ambiguous: 1483228800 might refer to either the leap second itself (2016-12-31 23:59:60) or the second right after it (2017-01-01 00:00:00).
Note that every day in Unix time consists of exactly 86400 seconds. As a consequence, when a leap second occurs, the difference between two Unix times does not equal the true number of elapsed seconds between them. Most applications don't require this level of accuracy, however, so Unix times remain handy to compare and manipulate with basic additions and subtractions.
Clocks and synchronization
From atomic clocks to individual computers, time is synchronized through a hierarchical and redundant system using GPS, NTP, and other methods to ensure accuracy.
GPS satellites carry atomic clocks and broadcast time signals. Receivers on Earth can use these signals to determine precise time.
In Network Time Protocol (NTP), servers synchronize with atomic clocks via GPS or other means, and form a hierarchical distribution network of time information.
Radio stations, such as WWV/WWVH, can also broadcast time signals that can be picked up by radio receivers and used for synchronization.
Hardware clocks
Computers have hardware "clocks", or "timers", that synchronize all components and signals on the motherboard and are also used to track time.
Typically, modern clocks are generated by quartz crystal oscillators of about 20 MHz: an oscillator on the motherboard ticks the "system clock", and other clocks are derived from it by multiplying or dividing its frequency with phase-locked loops.
For example, the CPU clock is generated from the system clock and serves the same purpose, but is only used on the CPU itself.
Since the clock determines the speed at which instructions are executed, and the CPU needs to perform more operations per unit of time than the motherboard, the CPU clock runs at a much higher frequency: e.g. 4 GHz for a CPU core.
Some computers also allow the user to change the multipliers in order to "overclock" or "underclock" the CPU. Generally, this is used to speed up the processing capabilities of the computer, at the expense of increased power consumption, heat, and unreliability.
OS clock
Backed by hardware clocks, the operating system manages different software counters, still called "clocks", such as the "system clock" used to track time.
The OS configures the hardware timers to generate periodic interrupts, and each interrupt allows the OS to increment a counter that represents the system uptime.
The actual time is derived from the uptime counter, typically starting from a predefined epoch (e.g., January 1, 1970, for Unix-based systems).
Real-Time Clock
Modern computers have a separate hardware component dedicated to keeping track of time even when the computer is powered off: the Real-Time Clock (RTC).
The RTC is usually implemented as a small integrated circuit on the motherboard, powered by a small battery such as a coin-cell battery. It has its own low-power oscillator, separate from the main system clock.
The operating system reads the RTC during boot through standard communication protocols like I2C or SPI, and initializes the system time.
The OS periodically writes the system time back to the RTC to account for any adjustments (e.g., NTP updates).
Although a system clock is more precise for short-term operations due to high-frequency ticks, an RTC is sufficient for keeping the current date and time over long periods.
NTP
All hardware clocks drift due to temperature changes, aging hardware, and other factors.
This results in inaccurate timekeeping for the operating system, so regular synchronization of system time with NTP servers is needed to correct for this drift.
NTP adjusts the system clock of the OS in one of two ways:
- By "stepping", abruptly changing the time.
- By "slewing", slowly adjusting the clock speed so that it drifts toward the correct time: hardware timers continue to produce interrupts at their usual rate while the OS alters the rate at which it counts them.
Slewing should be preferred, but it's only applicable for correcting small time inaccuracies.
NTP servers also provide leap second information, and systems need to be configured to handle these adjustments correctly, often by "smearing" the leap second over a long period, e.g. 24 hours.
Different OS clocks
The operating system generally keeps several time counters for use in various scenarios.
In Linux, for instance, `CLOCK_REALTIME` represents the current wall clock or time-of-day. However, other counters like `CLOCK_MONOTONIC` and `CLOCK_BOOTTIME` serve different purposes:
- `CLOCK_MONOTONIC` remains unaffected by discontinuous changes in the system time (such as adjustments by NTP or manual changes by the system administrator) and excludes leap seconds.
- `CLOCK_BOOTTIME` is similar to `CLOCK_MONOTONIC` but also accounts for any time the system spends in suspension.
For example, if you need to calculate the elapsed time between two events on a single machine without a reboot occurring in between, `CLOCK_MONOTONIC` is a suitable choice. This is commonly used for measuring time intervals, such as in code performance benchmarking.
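A minimal Python sketch of this practice: `time.monotonic()` wraps a monotonic OS clock, making it suitable for measuring intervals regardless of NTP or manual adjustments. The Linux-specific constants are shown as comments, since their availability varies by platform.

```python
import time

start = time.monotonic()
time.sleep(0.01)  # stand-in for the work being measured
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f}s")

# On Linux, the underlying clocks are also directly accessible:
#   time.clock_gettime(time.CLOCK_MONOTONIC)
#   time.clock_gettime(time.CLOCK_BOOTTIME)   # Linux-specific
#   time.clock_gettime(time.CLOCK_REALTIME)
```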
Best practices
Storing universal time
For many applications, such as log lines, time zones and Daylight Saving Time (DST) rules are not directly relevant when storing a timestamp: what matters is the exact moment in (universal) time when the event occurred, i.e. the relative ordering of events.
When storing timestamps, it is crucial to store them in a consistent and unambiguous manner.
Stored timestamps, as well as time calculations, should therefore be in UTC, with no local offset applied. This avoids errors due to local time changes such as DST.
Timestamps should be converted to local time zones only for display purposes.
As for formatting, Unix timestamps are the preferred time format since they are simple numbers: space-efficient and easy to compare and manipulate directly.
UTC timestamp strings as defined in RFC 3339 are useful when displaying time in a human-readable format.
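Putting the two together, a small Python sketch of the store-in-Unix-time, format-at-the-edges approach:

```python
import time
from datetime import datetime, timezone

# Store: a plain Unix timestamp, which is timezone-agnostic
event_ts = int(time.time())

# Display: format as an RFC 3339 string only at the edges of the system
display = datetime.fromtimestamp(event_ts, tz=timezone.utc).isoformat()
print(display)

# The stored value round-trips losslessly
assert datetime.fromisoformat(display).timestamp() == event_ts
```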
Time zones and DST become relevant when converting, displaying, or processing timestamps in user-facing applications, especially when dealing with future events.
Here are some considerations for handling future events:
- Store the base timestamp of the event in Unix time. This ensures that the event's reference time is unambiguous and consistent.
- Convert the stored UTC timestamp to the local time zone when displaying the event to users. Ensure that the conversion respects the UTC offset and DST rules of the target time zone.
- Keep in mind that time zone rules change from time to time, for instance when a country stops observing DST, and the UTC timestamp would then be represented by a different future local time.
Time zone databases, like the IANA time zone database, are regularly updated to reflect historical changes. Applications must use the latest versions of these databases to ensure accurate timestamp conversions, correctly interpreting and displaying timestamps.
By all means, use established libraries or your language's facilities to do time zone conversions and formatting. Do not implement these rules yourself.
Storing local time
Time zone databases must be updated because time zone rules and DST schedules can and do change. Governments and regulatory bodies may adjust time zone boundaries, introduce or abolish DST, or change the start and end dates of DST.
This is especially relevant if the application user is more interested in preserving the local time information of a scheduled event than its UTC time.
Indeed, while storing UTC time can help you track when the next solar eclipse will happen, imagine scheduling a conference call for 1 PM on some day next year and storing that time as a UTC timestamp. If the government changes or abolishes its DST rules, the timestamp still refers to the instant that was formerly going to be 1 PM on that date, but converting it back to local time would yield a different wall-clock result.
For these situations storing UTC time is not ideal. Instead, calendar apps might store the eventâs time with a day/date, a time-of-day, and a timezone, as three separate values; the UTC time can be derived if needed, given also the timezone rules that currently apply when deriving it.
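A Python sketch of this approach, with a hypothetical event record and `Europe/Berlin` chosen purely as an example zone (it assumes tz data is available on the system):

```python
from datetime import date, datetime, time, timezone
from zoneinfo import ZoneInfo

# Hypothetical record: date, time-of-day, and zone kept as separate values
event = {"date": date(2025, 6, 1), "time": time(13, 0), "zone": "Europe/Berlin"}

def event_to_utc(event):
    """Derive the UTC instant using the time zone rules in effect today."""
    local = datetime.combine(event["date"], event["time"],
                             tzinfo=ZoneInfo(event["zone"]))
    return local.astimezone(timezone.utc)

# June 1, 2025 falls under CEST (UTC+2), so 13:00 local is 11:00 UTC
print(event_to_utc(event).isoformat())  # 2025-06-01T11:00:00+00:00
```

If the zone's rules change before the event, rerunning the derivation yields the updated UTC instant, which is exactly the behavior a calendar app wants.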
However, it should be considered that converting local time to UTC is not always unambiguous: because of DST, some local times occur twice, e.g. when clocks are set backward, and some local times never occur at all, e.g. when clocks are set forward and an hour is skipped.
Ideally, invalid or ambiguous times should not be stored in the first place, and stored times should be validated when new timezone rules are introduced.
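Python models this ambiguity with the `fold` attribute (PEP 495). A sketch using `Europe/Berlin`, where 02:30 local time occurs twice on the night DST ends (again assuming tz data is available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

# On 2024-10-27 clocks go back from 03:00 CEST to 02:00 CET,
# so 02:30 local time happens twice that night.
first = datetime(2024, 10, 27, 2, 30, tzinfo=berlin)           # fold=0: CEST
second = datetime(2024, 10, 27, 2, 30, fold=1, tzinfo=berlin)  # fold=1: CET

print(first.utcoffset(), second.utcoffset())  # +2:00 vs +1:00
```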
Additional challenges arise with:
- Recurrent events, where changes to timezone rules can alter offsets for some instances while leaving others unaffected.
- Events where users come from multiple timezones, necessitating a single coordinating time zone for the recurrence. However, offsets must be recalculated for each involved time zone and every occurrence.
For exploring the topic of recurrent events and calendars further, RFC 5545 might be of interest since it presents the iCalendar data format for representing and exchanging calendaring and scheduling information.
For instance, it also describes a grammar for specifying "recurrence rules": "the last work day of the month" can be represented as `FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1`. Note that recurrence rules may still generate instances with an invalid date or a nonexistent local time, which must be accounted for.
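As an illustration of that particular rule, here is a standard-library-only Python sketch (the function name is ours, not from RFC 5545) that computes "the last work day of the month":

```python
import calendar
from datetime import date

def last_workday(year, month):
    """FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1: last Mon-Fri of the month."""
    # Start from the last calendar day of the month...
    last = date(year, month, calendar.monthrange(year, month)[1])
    # ...and walk back past any weekend days (5 = Saturday, 6 = Sunday)
    while last.weekday() > 4:
        last = last.replace(day=last.day - 1)
    return last

print(last_workday(2024, 6))  # 2024-06-28 (June 30, 2024 is a Sunday)
```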
Parsing and formatting time
Different time formats serve various purposes in timekeeping, incorporating concepts like date and time, universal time, calendar systems, and time zones. The specific format needed depends on the use case.
For example, using the naming convention from the W3C guidelines, "incremental time" like Unix time measures time in fixed integer units that increase from a specific point. In contrast, "floating time" refers to a date or time value in a calendar that is not tied to a specific instant. Applications use floating time values when the local wall time is more relevant than a precise timeline position; these values remain constant regardless of the user's time zone.
Floating times are serialized in ISO 8601 formats without local time offsets or time zone identifiers (like `Z` for UTC).
However, many programming libraries make it all too easy to deserialize an ISO 8601 string into an incremental time (e.g. Unix time) aligned with UTC, introducing errors by implicitly assuming time zone information.
Programmers should carefully consider the intended use of time information to collect, parse, and format it correctly. Here are some guidelines for handling time inputs:
- Use UTC or a consistent time zone when creating time-based values to facilitate comparisons across sources.
- Allow users to choose a time zone and associate it with their session or profile.
- Use exemplar cities to help users identify the correct time zone.
- Consider the country as a hint, since most have a single time zone.
- For data with local time offsets, adjust the zone offset to UTC before performing any date/time sensitive operations.
- For time data not related to time zones, like birthdays, use a representation that does not imply a time zone or local time offset.
- Since user input can be messy, avoid strict pattern matching and use a "lenient parse" approach: https://unicode.org/reports/tr35/tr35-10.html#Date_Format_Patterns
- Be mindful of how libraries parse user input, for instance using the correct calendar: https://ericasadun.com/2018/12/25/iso-8601-yyyy-yyyy-and-why-your-year-may-be-wrong/
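A small Python sketch of the naive-versus-aware pitfall: `datetime.fromisoformat` happily returns a value with no time zone attached, and any UTC assumption should be made explicit rather than implicit:

```python
from datetime import datetime, timezone

s = "2024-05-28T14:00:00"  # no offset: a floating time
dt = datetime.fromisoformat(s)

# A naive datetime carries no time zone: don't silently treat it as UTC
print(dt.tzinfo)  # None

# If UTC is genuinely intended, say so explicitly
as_utc = dt.replace(tzinfo=timezone.utc)
print(as_utc.isoformat())  # 2024-05-28T14:00:00+00:00
```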
In practice, timekeeping and clocks are neither fully reliable nor consistent.
In theory, we have the means to agree on the notion of what time it is now: we know how to synchronize computer systems worldwide.
The real problem is not even that information takes time to travel between systems; it is that everything can break.
What makes it hard to build distributed systems that agree on a specific value is the combination of them being asynchronous and having physical components subject to failures.
Certain results provide insights into the challenges of achieving specific properties in distributed systems under particular conditions:
- The "FLP result," from a 1985 paper, establishes that in an asynchronous system no deterministic algorithm can guarantee consensus if even a single process may fail: given unbounded processing delays, it is impossible to distinguish a crashed process from one that is merely slow.
- The "CAP theorem" states that a distributed system can provide at most two of the following three properties: Consistency (every read sees the most recent write), Availability (every request eventually receives a response), and Partition tolerance (the system keeps functioning despite network partitions). Since partitions cannot be ruled out on unreliable networks, it's impossible to guarantee both consistency and availability simultaneously.
- The "PACELC theorem," introduced in 2010, extends the CAP theorem by highlighting that even in the absence of network partitions, distributed systems must make a trade-off between latency and consistency.
The net outcome is that, in theory, a single absolute value of time is not meaningful in distributed systems. In practice, we can certainly reduce the margin of error and de-risk our applications.
Here are some further considerations that can be relevant:
- Ensure all nodes in the distributed system synchronize their clocks using NTP or similar protocols to minimize time drift between nodes. Use multiple NTP servers for redundancy and reliability.
- Ensure that NTP servers and clients are configured to handle leap seconds correctly.
- Consider also using GPS to provide an additional layer of accuracy.
- Besides NTP, evaluate Precision Time Protocol (PTP) for applications that require nanosecond-level synchronization.
- Account for network latency, for example by estimating round-trip times (RTTs), to avoid discrepancies caused by delays in message delivery.
- Ensure timestamps have sufficient precision for the applicationâs requirements. Millisecond or microsecond precision may be necessary for high-frequency events.
- Ensure that all nodes log events using UTC to maintain consistency in distributed logs.
- Use distributed databases that handle time synchronization internally, such as Google Spanner, which uses TrueTime to provide globally consistent timestamps. This is mostly relevant for applications that require precise transaction ordering, such as financial systems or inventory management.
- Consider using logical clocks (e.g., Lamport Timestamps or Vector Clocks) to order events in a causally consistent manner across distributed nodes, independent of physical time.
- Consider Hybrid Logical Clocks (HLC) to provide a more reliable timestamp mechanism that combines physical and logical clocks.
- Evaluate the use of Conflict-free Replicated Data Types (CRDT) to avoid the need of time or coordination for performing data updates.
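To illustrate the logical-clock idea mentioned above, here is a minimal, simplified Lamport clock sketch in Python; a toy for intuition, not a production implementation:

```python
class LamportClock:
    """Minimal Lamport logical clock: a counter, not wall time."""

    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance on a local event and return the new timestamp."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp to attach to an outgoing message."""
        return self.tick()

    def receive(self, msg_time):
        """Merge an incoming message's timestamp: max(local, remote) + 1."""
        self.time = max(self.time, msg_time) + 1
        return self.time

# Node a sends a message to node b
a, b = LamportClock(), LamportClock()
t_send = a.send()           # a's clock: 1
t_recv = b.receive(t_send)  # b's clock: max(0, 1) + 1 = 2
print(t_send, t_recv)       # the receive is ordered after the send
```

The key property: if event X causally precedes event Y, then X's timestamp is strictly smaller than Y's, independent of any physical clock.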
- UTC was formerly known as Greenwich Mean Time (GMT) because the Prime Meridian was arbitrarily designated to pass through the Royal Observatory in Greenwich.
- The acronym UTC is a compromise between the English "Coordinated Universal Time" and the French "Temps Universel Coordonné." The international advisory group selected UTC to avoid favoring any specific language. In contrast, TAI stands for the French "Temps Atomique International."
- Leap seconds need not be announced more than six months in advance, posing a challenge for second-accurate planning beyond this period.
- NTP servers can use the leapfile directive in ntpd to announce an upcoming leap second. Most end-user installations do not implement this, relying instead on their upstream servers to handle it correctly.
- Leap seconds can theoretically be both added and subtracted, although no leap second has ever been subtracted yet.
- The 32-bit representation of Unix time will overflow on January 19, 2038, known as the Year 2038 problem. Transitioning to a 64-bit representation is necessary to extend the date range.
- The year 2000 problem: https://en.wikipedia.org/wiki/Year_2000_problem
- Atomic clocks are extremely precise. The primary time standard in the United States, the National Institute of Standards and Technology (NIST)'s caesium fountain clock, NIST-F2, has a time measurement uncertainty of only 1 second over 300 million years.
- Leap seconds will be phased out by 2035, accepting a slow drift of UTC away from astronomical time rather than the complexities of managing leap seconds.
- In 1998 the Swatch corporation introduced ".beat time" for its watches, a decimal universal time system without time zones: https://en.wikipedia.org/wiki/Swatch_Internet_Time
- February 30 has been a real date at least twice in history: https://www.timeanddate.com/date/february-30.html
- Year 0 does not exist: https://en.wikipedia.org/wiki/Year_zero
What was discussed here is just the tip of the iceberg: a brief overview of the topic.
Some further resources might be interesting to deepen our understanding of timekeeping, its challenges, and edge cases:
- A collection of falsehoods programmers believe about time: https://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time
- Some more fallacies with relative counter-examples: https://yourcalendricalfallacyis.com/
- A list of misconceptions about time zones in particular: https://www.zainrizvi.io/blog/falsehoods-programmers-believe-about-time-zones/
- Fun video by Computerphile about the madness of time and time zones: https://www.youtube.com/watch?v=-5wpm-gesOY
- An illustrated comparison between ISO 8601 and RFC 3339: https://ijmacd.github.io/rfc3339-iso8601/
- The `clock_gettime()` manpage, which explains the different OS clocks: https://man7.org/linux/man-pages/man2/clock_gettime.2.html
- How Google smears leap seconds: https://developers.google.com/time/smear
- Instructional video by Computerphile about how NTP stratums work and synchronize: https://www.youtube.com/watch?v=BAo5C2qbLq8
- PTP, and how Meta is using it over NTP: https://engineering.fb.com/2022/11/21/production-engineering/future-computing-ptp/
- Explainer on Hybrid Logical Clocks (HLC): https://sergeiturukin.com/2017/06/26/hybrid-logical-clocks.html
- How Google Spanner and its time API work, managing clock uncertainty: https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf
- The paper on the FLP result, named after its authors: https://groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf
- Some perspectives on the CAP theorem and its implications: https://groups.csail.mit.edu/tds/papers/Gilbert/Brewer2.pdf
- A well-written piece about the problems with simultaneity in distributed systems: https://queue.acm.org/detail.cfm?id=2745385
- A Stack Exchange answer detailing the confusion about 12:00 AM/PM and mapping a zero-based 24-hour time format to a one-based 12-hour AM/PM time format: https://ell.stackexchange.com/questions/152603/why-11-am-1-hour-1200-pm/152729#152729
- An article by Martin Fowler that provides a structured approach for managing recurring events in computer systems: https://martinfowler.com/apsupp/recurring.pdf
- W3C guidelines for working with time and timezones: https://w3c.github.io/timezone/
- A Wikipedia page about error analysis in GPS, with further references: https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System
- A paper introducing a relativistic framework to establish coordinate time on the Moon: https://arxiv.org/abs/2402.11150
- The Time-Nuts mailing list, to take the rabbit hole to the next level: http://www.leapsecond.com/time-nuts.htm
Wim Leers: XB week 8: design locomotive gathering steam
Last week was very quiet, July 1-7's #experience-builder Drupal Slack meeting was even more tumbleweed-y…
… but while that was very quiet, the Acquia UX team's design locomotive was getting started in the background, with two designs ready to be implemented posted by Lauri, who is working with that team to get his Experience Builder (XB) product vision visualized:
- #3458503: Improve the page hierarchy display, with Gaurav "Gauravvvv" enthusiastically jumping on it!
- #3458863: Add initial implementation of top bar
On top of that, a set of placeholder issues was posted: choices to be made that need user research before an informed design can be crafted:
- #3459088: Define built-in components and categorization for components
- #3459229: Allow saving component compositions as sections
- #3459234: Allow directly creating/editing entity title and meta fields
- #3459098: Define strategy for keyboard shortcuts
- #3459236: Redirect users directly to Experience Builder when creating new content or editing content
More to come on the design front! :)
Missed a prior week? See all posts tagged Experience Builder.
Goal: make it possible to follow high-level progress by reading ~5 minutes/week. I hope this empowers more people to contribute when their unique skills can best be put to use!
For more detail, join the #experience-builder Slack channel. Check out the pinned items at the top!
Speaking of design, Hemant âguptahemantâ Gupta and Kristen âkristenpolâ Pol are volunteering on behalf of their respective employers to implement the initial design system prior to DrupalCon Barcelona.
Front end
- Jesse "jessebaker" Baker landed an important refactor (#3456946: Refactor Preview.tsx React component to prevent duplicate backend calls), with reviews from Ben "bnjmnm" Mullins and Lee "larowlan" Rowlands
- Jesse also landed a small foundational piece of the UI: #3458723: Implement context menu for hovered component, which is also a crucial stepping stone for future accessibility work
- Harumi "hooroomoo" Jang got #3456084: Add initial implementation of primary menu landed and filed the follow-up #3458617: Close menu on drag in primary menu, which subsequently spawned an interesting discussion about the design between Jesse and Lauri
- Jesse got #3459235: Implement front-end (React) routing library going, with enthusiastic participation from Bálint "balintbrews" Kléri and Ronald "roaguicr" Aguilar
- Ben opened #3458369: DDEV support for Cypress tests after witnessing how painful it was for Jesse to get Cypress talking to Drupal inside DDEV
Jesse reported two bugs that would be excellent starting points for contributing to XB:
- #3459333: Undo/redo - user can undo the loading of the initial state (regression)
- #3458535: Middle click + drag doesn’t work correctly if middle click is inside preview iframe
And Lauri added a feature request that builds on top of the aforementioned foundational UI piece Jesse & Ben landed (#3459249: Allow opening the contextual menu by right clicking a component); that too is a great entry point.
Back end
And, finally, on the back-end side:
- Ted "tedbow" Bowman pushed forward #3456008: Add support for matching SDC prop shape: {type: string, enum: …} but got stuck. He opened #3458580 as a blocker… and upon my return that Friday, I discovered that it was actually working correctly, but the implementation was very confusing; not surprising, considering this code dated back to a PoC branch from the chaotic origins. So, instead, the logic was made a lot clearer, with better comments. If you're interested in the deepest parts of how Drupal handles structured data, you'll probably find it an intriguing commit. 1
- I'd missed it the previous week, but Lee actually made a huge push forward on #3454519: [META] Support component types other than SDC, turning a high-level concept into a concrete proposal in the form of a massive draft MR, with lots of interesting concepts, all rooted in his real-world experience with Layout Builder. An interesting discussion between Alex "effulgentsia" Bronstein and Lee ensued :)
Thanks to Lauri for reviewing this!
-
Sadly, it’s an inevitability that this is still complex: Drupal 8 introduced Typed Data API into the pre-existing Field API, and then later the Symfony validation constraint API was integrated into both. Getting a full picture of what constraints apply to an entire entity field data tree is not something Drupal core provides, so that picture must be manually crafted, which is what that code does. ↩︎