Feeds

Christoph Schiessl: How to Serve a Directory of Static Files with FastAPI

Planet Python - Mon, 2024-07-22 20:00
Learn how to serve a static site using FastAPI. Perfect for locally testing statically generated websites, for instance, with `httpie`.
Categories: FLOSS Project Planets

Quansight Labs Blog: Introducing the 2024 Labs Internship Program Cohort

Planet Python - Mon, 2024-07-22 20:00
Meet the interns joining us for our third-annual summer internship.
Categories: FLOSS Project Planets

Russell Coker: Blog Comments

Planet Debian - Mon, 2024-07-22 18:05

The Akismet WordPress anti-spam plugin has changed its policy to not run on sites that have adverts, which includes mine. Without it I get an average of about one spam comment per hour, and the interface for removing spam takes more mouse actions than desired. For email spam it’s about the same volume: half of it is messages with SpamAssassin scores high enough to go into the MaybeSpam folder (which I go through every few weeks), and half goes straight to my inbox. Fortunately, moving spam to a folder where I can later use it to train Bayesian classification is a much faster option on a PC, and it’s also something I can do from my phone MUA.

As an experiment I have configured my blog to only take comments from registered users. It will be interesting to see how many spammers make it through that and to also see feedback from genuine people. People who can’t comment can tell me about it via the contact methods listed here [1].

I previously wrote about other ways of dealing with hostile comments [2]. Blogging seems to be less popular nowadays, so a Planet-specific forum doesn’t seem a viable option. It’s a pity; I think that YouTube and Facebook have taken over from blogs, and that’s not a good thing.

Related posts:

  1. Blog Comments John Scalzi wrote an insightful post about the utility of...
  2. comment spam The war on comment-spam has now begun. It appears that...
  3. wp-spamshield Yesterday I installed the wp-spamshield plugin for WordPress [1]. It...
Categories: FLOSS Project Planets

Martin-Éric Racine: dhcpcd replacing dhclient for Trixie... or perhaps networkd?

Planet Debian - Mon, 2024-07-22 14:38

My work on overhauling dhcpcd as the prime replacement for ISC's discontinued DHCP client is done. The package has achieved stability, both upstream and at Debian. The only remaining points are bug #1038882 to swap the Priorities of isc-dhcp-client and dhcpcd-base in the repository's override, and swapping ifupdown's search order to put dhcpcd first.

Meanwhile, ifupdown's de-facto maintainer prompted me to solicit opinions on which of the 4 ifupdown implementations should ship with a minimal installation for Trixie. This, in turn, re-opened the debate of what should be Debian's default network configuration framework (see the thread starting with this post).

networkd

Given that most Debian ports (except for Hurd) ship with systemd, which includes networkd (disabled by default), many people in the thread feel that it should become the default network configuration tool for minimal installations. As it happens, most of my hosts fit that scenario, so I figured that I would give another go at testing networkd on one host.

I used the following minimalistic /etc/systemd/network/dhcp.network:

    [Match]
    Name=en* wl*

    [Network]
    DHCP=yes

This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update /etc/resolv.conf without installing resolvconf or systemd-resolved.

However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):

  1. Temporary addresses are not enabled by default. Worse, the setting is ignored if it was enabled by sysctl during bootup. This is a major privacy issue. Adding IPv6PrivacyExtensions=yes to the above exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.
  2. Networkd uses EUI64 addresses by default. This is another major privacy issue, since EUI64 addresses are forensically traceable to the interface's MAC address. Worse, the setting is ignored if stable-privacy was enabled by sysctl during bootup. To top it all, networkd does stable-privacy using systemd's time-proven brain-dead approach of reinventing the wheel: instead of merely setting the kernel's address generation mode to 3 and letting it configure the secret address, it expects the secret address to be spelled out in the systemd unit.

Conclusion: networkd works well enough for someone configuring an IPv4-only network from 20 years ago, but is utterly inadequate for IPv6 or dual-stack installations, doubly so on a software distribution that claims to care about privacy and network security.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #460 - Preconfigured CMS Solutions

Planet Drupal - Mon, 2024-07-22 14:00

Today we are talking about Preconfigured CMS Solutions, how they can help your business, and the best way to build them in Drupal with guests Baddy Sonja Breidert and Dr. Christoph Breidert.

For show notes visit: www.talkingDrupal.com/460

Topics
  • Spain
  • What is a Preconfigured CMS / Drupal Solution
  • Who is the audience
  • What business objectives can preconfigured solutions solve
  • What are the ingredients
  • How do you manage theming
  • How do you manage customized design
  • What do you do if your client has a need that your preconfigured solution does not solve
  • What about Starshot
  • Did the two of you meet over Drupal
  • How do you manage work life balance
Resources

Guests

Christoph Breidert - 1xINTERNET breidert

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Baddý Sonja Breidert - 1xINTERNET baddysonja

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to customize the way Google Maps appear on your Drupal site? There's a module for that.
  • Module name/project name: Styled Google Map
  • Brief history
    • How old: created in Mar 2014 by iampuma, but recent releases are by Artem Dmitriiev (a.dmitriiev) of 1xINTERNET
    • Versions available: 7.x-2.0, 8.x-1.7, and 8.x-2.6, the last of which works with Drupal 8, 9, 10, and 11
  • Maintainership
    • Actively maintained, latest release a week ago
    • Security coverage
    • Has a Documentation page and lots of information on the project page
    • Number of open issues: 8 open issues, 1 of which is a bug against the current branch, though it was actually fixed in the latest release
  • Usage stats:
    • 1,764 sites
  • Module features and usage
    • The module allows your Drupal site to use custom map styles, which you can copy and paste from SnazzyMaps.com or create from scratch using a configuration widget on GitHub that is linked from the project page
    • You can also use custom markers by using the System Stream Wrapper module
    • You can also specify popups for the markers, using a field or a view mode
    • If you use the companion styled_google_views module, you can also show multiple locations, and define clustering options
    • Styled Google Map also has integration with Google's Directions service, so visitors can easily get turn-by-turn directions for how to reach their chosen location
    • The module also includes a demo submodule you can use to quickly set up a working example to illustrate all the different options available using Styled Google Map
Categories: FLOSS Project Planets

The Drop Times: Modernizing Your Web Presence with Drupal

Planet Drupal - Mon, 2024-07-22 12:05

Hello Tech Enthusiasts,

In this edition, we’re exploring why Drupal is a smart choice for your website development needs.

Drupal’s modular design makes it highly scalable and flexible, allowing it to support everything from small blogs to large enterprise sites. This adaptability ensures your website can grow and evolve alongside your business without needing extensive overhauls.

Managing content on Drupal is straightforward, thanks to its intuitive interface. Even non-technical users can easily handle updates and changes. Additionally, with numerous themes and modules, customization is virtually limitless, enabling businesses to tailor their sites to specific requirements.

Security is another area where Drupal excels. Regular updates and a robust security framework protect your site from vulnerabilities. Plus, being open-source, Drupal benefits from a vibrant community that continually contributes to its development, ensuring continuous improvements and access to a wealth of resources.

Businesses across various industries have successfully used Drupal to create dynamic, secure, and scalable websites. For example, the University of Oxford leverages Drupal to manage extensive content libraries and facilitate complex workflows efficiently. You can read more about their experience here.

Drupal offers a powerful and versatile CMS solution for modern website development. Its scalability, ease of use, extensive customization options, and strong security make it an ideal choice for businesses aiming to enhance their web presence. Supported by an active community, Drupal continues to meet the diverse needs of its users.

With that, now explore some of the highlighted stories we covered last week:

Join The DropTimes (TDT) for its milestone 100th interview with Dries Buytaert, the innovative founder of Drupal. Interviewed by Anoop John, Founder and Lead of The DropTimes, this conversation explores Drupal’s rich history and transformative journey.

Learn how to migrate new content efficiently using Drupal's Migrate API. Jaymie Strecker's second part of this tutorial, "Using Drupal Migrations to Modify Content Within a Site," provides detailed examples of creating paragraphs, nodes, URL aliases, and redirects.

The 2024 Drupal Association board election is now open. Community members can nominate themselves until 30 July 2024. Voting begins on 15 August.

The Drupal Association's annual survey is now open! Digital service providers worldwide are invited to share their insights on market trends, growth opportunities, and more. Participate by September 4 to contribute to the comprehensive report and see the results presented at the CEO Dinner during DrupalCon Barcelona.

Varbase 10.0.0 has arrived, bringing a host of enhancements and new features to this comprehensive Drupal distribution. Built on Drupal 10, it offers improved performance, PHP 8.1 compatibility, and a modern CKEditor 5 integration.

Drupal Starshot is recruiting track leads to drive crucial project tracks, focusing on areas like Drupal Recipes, installer enhancements, and product marketing. Apply by July 31st to contribute to this innovative initiative and help shape the future of Drupal.

Mark Conroy introduces the LocalGov KeyNav module, enabling keyboard shortcuts for navigating LocalGov Drupal websites. This module, sponsored by Big Blue Door, enhances site usability with customizable key sequences for quick access to various pages.

DrupalCon Barcelona 2024 invites registered attendees to submit proposals for Birds of a Feather (BoF) sessions. BoFs provide an inclusive, informal environment to discuss Drupal, technology topics, or personal interests. Submit your BoF proposal by September 25, 2024, to join the conversation.

Gábor Hojtsy and Lauri Timmanee have released the open-source slide deck and video recording from their keynote on Drupal Starshot at Drupal Developer Days 2024. Access these resources for your events and presentations.

Sponsorship opportunities are now available for DrupalCamp Pune 2024, which will take place on October 19-20. This is a great opportunity to engage with the global Drupal community and showcase your brand.

Submit your session proposals for Twin Cities Drupal Camp 2024 by August 15. The camp, scheduled for September 12-13, 2024, welcomes diverse topics related to Drupal and web development.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Kazima Abbas
Sub-editor, The DropTimes.
 

Categories: FLOSS Project Planets

Peter Bengtsson: Converting Celsius to Fahrenheit round-up

Planet Python - Mon, 2024-07-22 10:29

In the last couple of days, I've created variations of a simple algorithm to demonstrate how Celsius and Fahrenheit seem to relate to each other if you "mirror the number".
It wasn't supposed to be about the programming language. Still, I used Python in the first one and I noticed that since the code is simple, it could be fun to write variants of it in other languages.
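
As a rough illustration of the idea (my own sketch, not the code from any of the posts; the function names are made up), the digit-mirroring observation can be checked against the exact formula:

    def celsius_to_fahrenheit(celsius: float) -> float:
        # Exact conversion formula.
        return celsius * 9 / 5 + 32

    def mirror(n: int) -> int:
        # "Mirror the number": reverse its decimal digits, e.g. 16 -> 61.
        return int(str(n)[::-1])

    for c in range(10, 100):
        exact = celsius_to_fahrenheit(c)
        approx = mirror(c)
        if abs(exact - approx) <= 1:  # the trick only works for a few values
            print(f"{c}°C is roughly {approx}°F (exactly {exact:.1f}°F)")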

  1. Converting Celsius to Fahrenheit with Python
  2. Converting Celsius to Fahrenheit with TypeScript
  3. Converting Celsius to Fahrenheit with Go
  4. Converting Celsius to Fahrenheit with Ruby
  5. Converting Celsius to Fahrenheit with Crystal
  6. Converting Celsius to Fahrenheit with Rust

It was a fun exercise.

And speaking of fun, I couldn't help but throw in a benchmark using hyperfine that measures, essentially, how fast these CLIs can start up. The results look like this:

    Summary
      ./conversion-rs ran
        1.31 ± 1.30 times faster than ./conversion-go
        1.88 ± 1.33 times faster than ./conversion-cr
        7.15 ± 4.64 times faster than bun run conversion.ts
       14.27 ± 9.48 times faster than python3.12 conversion.py
       18.10 ± 12.35 times faster than node conversion.js
       67.75 ± 43.80 times faster than ruby conversion.rb

It doesn't prove much that you didn't already expect, but it's fun to see how fast Python 3.12 has become at starting up.

Head on over to https://github.com/peterbe/temperature-conversion to play along. Perhaps you can see some easy optimizations (speed and style).

Categories: FLOSS Project Planets

libc @ Savannah: The GNU C Library version 2.40 is now available

GNU Planet! - Mon, 2024-07-22 10:29

The GNU C Library
=================

The GNU C Library version 2.40 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library.  It follows all relevant
standards including ISO C11 and POSIX.1-2017.  It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.40 release may be downloaded from:
        http://ftpmirror.gnu.org/libc/
        http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

Distributions are encouraged to track the release/* branches
corresponding to the releases they are using.  The release
branches will be updated with conservative bug fixes and new
features while retaining backwards compatibility.

NEWS for version 2.40
=====================

Major new features:

  • The <stdbit.h> header type-generic macros have been changed when using GCC 14.1 or later to use __builtin_stdc_bit_ceil etc. built-in functions in order to support unsigned __int128 and/or unsigned _BitInt(N) operands with arbitrary precisions when supported by the target.

  • The GNU C Library now supports a feature test macro _ISOC23_SOURCE to enable features from the ISO C23 standard.  Only some features from this standard are supported by the GNU C Library.  The older name _ISOC2X_SOURCE is still supported.  Features from C23 are also enabled by _GNU_SOURCE, or by compiling with the GCC options -std=c23, -std=gnu23, -std=c2x or -std=gnu2x.

  • The following ISO C23 function families (introduced in TS 18661-4:2015) are now supported in <math.h>.  Each family includes functions for float, double, long double, _FloatN and _FloatNx, and a type-generic macro in <tgmath.h>.

    - Exponential functions: exp2m1, exp10m1.

    - Logarithmic functions: log2p1, log10p1, logp1.

  • A new tunable, glibc.rtld.enable_secure, can be used to run a program as if it were a setuid process. This is currently a testing tool to allow more extensive verification tests for AT_SECURE programs and not meant to be a security feature.

  • On Linux, the epoll header was updated to include epoll ioctl definitions and the related structure added in Linux kernel 6.9.

  • The fortify functionality has been significantly enhanced for building programs with clang against the GNU C Library.

  • Many functions have been added to the vector library for aarch64: acosh, asinh, atanh, cbrt, cosh, erf, erfc, hypot, pow, sinh, tanh

  • On x86, memset can now use non-temporal stores to improve the performance of large writes. This behaviour is controlled by a new tunable x86_memset_non_temporal_threshold.

Deprecated and removed features, and other changes affecting compatibility:

  • Architectures which use a 32-bit seconds-since-epoch field in struct lastlog, struct utmp, struct utmpx (such as i386, powerpc64le, rv32, rv64, x86-64) switched from a signed to an unsigned type for that field.  This allows these fields to store timestamps beyond the year 2038, until the year 2106.  Please note that applications are still expected to migrate off the interfaces declared in <utmp.h> and <utmpx.h> (except for login_tty) due to locking and session management problems.

  • __rseq_size now denotes the size of the active rseq area (20 bytes initially), not the size of struct rseq (32 bytes initially).

Security related changes:

The following CVEs were fixed in this release, details of which can be
found in the advisories directory of the release tarball:

  GLIBC-SA-2024-0004:
    ISO-2022-CN-EXT: fix out-of-bound writes when writing escape
    sequence (CVE-2024-2961)

  GLIBC-SA-2024-0005:
    nscd: Stack-based buffer overflow in netgroup cache (CVE-2024-33599)

  GLIBC-SA-2024-0006:
    nscd: Null pointer crash after notfound response (CVE-2024-33600)

  GLIBC-SA-2024-0007:
    nscd: netgroup cache may terminate daemon on memory allocation
    failure (CVE-2024-33601)

  GLIBC-SA-2024-0008:
    nscd: netgroup cache assumes NSS callback uses in-buffer strings
    (CVE-2024-33602)

The following bugs were resolved with this release:

  [19622] network: Support aliasing with struct sockaddr
  [21271] localedata: cv_RU: update translations
  [23774] localedata: lv_LV collates Y/y incorrectly
  [23865] string: wcsstr is quadratic-time
  [25119] localedata: Change Czech weekday names to lowercase
  [27777] stdio: fclose does a linear search, takes ages when many FILE*
    are opened
  [29770] libc: prctl does not match manual page ABI on powerpc64le-
    linux-gnu
  [29845] localedata: Update hr_HR locale currency to €
  [30701] time: getutxent misbehaves on 32-bit x86 when _TIME_BITS=64
  [31316] build: Fails test misc/tst-dirname "Didn't expect signal from
    child: got `Illegal instruction'" on non SSE CPUs
  [31317] dynamic-link: [RISCV] static PIE crashes during self
    relocation
  [31325] libc: mips: clone3 is wrong for o32
  [31335] math: Compile glibc with -march=x86-64-v3 should disable FMA4
    multi-arch version
  [31339] libc: arm32 loader crash after cleanup in 2.36
  [31340] manual: A bad sentence in section 22.3.5 (resource.texi)
  [31357] dynamic-link: $(objpfx)tst-rtld-list-diagnostics.out rule
    doesn't work with test wrapper
  [31370] localedata: wcwidth() does not treat
    DEFAULT_IGNORABLE_CODE_POINTs as zero-width
  [31371] dynamic-link: x86-64: APX and Tile registers aren't preserved
    in ld.so trampoline
  [31372] dynamic-link: _dl_tlsdesc_dynamic doesn't preserve all caller-
    saved registers
  [31383] libc: _FORTIFY_SOURCE=3 and __fortified_attr_access vs size of
    0 and zero size types
  [31385] build: sort-makefile-lines.py doesn't check variable with _
    nor with "^# variable"
  [31402] libc: clone (NULL, NULL, ...) clobbers %r7 register on
    s390{,x}
  [31405] libc: Improve dl_iterate_phdr using _dl_find_object
  [31411] localedata: Add Latgalian locale
  [31412] build: GCC 6 failed to build i386 glibc on Fedora 39
  [31429] build: Glibc failed to build with -march=x86-64-v3
  [31468] libc: sigisemptyset returns true when the set contains signals
    larger than 34
  [31476] network: Automatic activation of single-request options break
    resolv.conf reloading
  [31479] libc: Missing #include <sys/rseq.h> in sched_getcpu.c may
    result in a loss of rseq acceleration
  [31501] dynamic-link: _dl_tlsdesc_dynamic_xsavec may clobber %rbx
  [31518] manual: documentation: FLT_MAX_10_EXP questionable text, evtl.
    wrong,
  [31530] localedata: Locale file for Moksha - mdf_RU
  [31553] malloc: elf/tst-decorate-maps fails on ppc64el
  [31596] libc: On the llvm-arm32 platform, dlopen("not_exist.so", -1)
    triggers segmentation fault
  [31600] math: math: x86 ceill traps when FE_INEXACT is enabled
  [31601] math: math: x86 floor traps when FE_INEXACT is enabled
  [31603] math: math: x86 trunc traps when FE_INEXACT is enabled
  [31612] libc: arc4random fails to fallback to /dev/urandom if
    getrandom is not present
  [31629] build: powerpc64: Configuring with "--with-cpu=power10" and
    'CFLAGS=-O2 -mcpu=power9' fails to build glibc
  [31640] dynamic-link: POWER10 ld.so crashes in
    elf_machine_load_address with GCC 14
  [31661] libc: NPROCESSORS_CONF and NPROCESSORS_ONLN not available in
    getconf
  [31676] dynamic-link: Configuring with CC="gcc -march=x86-64-v3"
    --with-rtld-early-cflags=-march=x86-64 results in linker failure
  [31677] nscd: nscd: netgroup cache: invalid memcpy under low
    memory/storage conditions
  [31678] nscd: nscd: Null pointer dereferences after failed netgroup
    cache insertion
  [31679] nscd: nscd: netgroup cache may terminate daemon on memory
    allocation failure
  [31680] nscd: nscd: netgroup cache assumes NSS callback uses in-buffer
    strings
  [31682] math: [PowerPC] Floating point exception error for math test
    test-ceil-except-2 test-floor-except-2 test-trunc-except-2
  [31686] dynamic-link: Stack-based buffer overflow in
    parse_tunables_string
  [31695] libc: pidfd_spawn/pidfd_spawnp leak an fd if clone3 succeeds
    but execve fails
  [31719] dynamic-link: --enable-hardcoded-path-in-tests doesn't work
    with -Wl,--enable-new-dtags
  [31730] libc: backtrace_symbols_fd prints different strings than
    backtrace_symbols returns
  [31753] build: FAIL: link-static-libc with GCC 6/7/8
  [31755] libc: procutils_read_file doesn't start with a leading
    underscore
  [31756] libc: write_profiling is only in libc.a
  [31757] build: Should XXXf128_do_not_use functions be excluded?
  [31759] math: Extra nearbyint symbols in libm.a
  [31760] math: Missing math functions
  [31764] build: _res_opcodes should be a compat symbol only
  [31765] dynamic-link: _dl_mcount_wrapper is exported without prototype
  [31766] stdio: IO_stderr _IO_stdin_ _IO_stdout should be compat
    symbols
  [31768] string: Extra stpncpy symbol in libc.a
  [31770] libc: clone3 is in libc.a
  [31774] libc: Missing __isnanf128 in libc.a
  [31775] math: Missing exp10 exp10f32x exp10f64 fmod fmodf fmodf32
    fmodf32x fmodf64 in libm.a
  [31777] string: Extra memchr strlen symbols in libc.a
  [31781] math: Missing math functions in libm.a
  [31782] build: Test build failure with recent GCC trunk (x86/tst-cpu-
    features-supports.c:69:3: error: parameter to builtin not valid:
    avx5124fmaps)
  [31785] string: loongarch: Extra strnlen symbols in libc.a
  [31786] string: powerpc: Extra strchrnul and strncasecmp_l symbols in
    libc.a
  [31787] math: powerpc: Extra llrintf, llrintf, llrintf32, and
    llrintf32 symbols in libc.a
  [31788] libc: microblaze: Extra cacheflush symbol in libc.a
  [31789] libc: powerpc: Extra versionsort symbol in libc.a
  [31790] libc: s390: Extra getutent32, getutent32_r, getutid32,
    getutid32_r, getutline32, getutline32_r, getutmp32, getutmpx32,
    getutxent32, getutxid32, getutxline32, pututline32, pututxline32,
    updwtmp32, updwtmpx32 in libc.a
  [31797] build: g++ -static requirement should be able to opt-out
  [31798] libc: pidfd_getpid.c is miscompiled by GCC 6.4
  [31802] time: difftime is pure not const
  [31808] time: The supported time_t range is not documented.
  [31840] stdio: Memory leak in _IO_new_fdopen (fdopen) on seek failure
  [31867] build: "CPU ISA level is lower than required" on SSE2-free
    CPUs
  [31876] time: "Date and time" documentation fixes for POSIX.1-2024 etc
  [31883] build: ISA level support configure check relies on bashism /
    is otherwise broken for arithmetic
  [31892] build: Always install mtrace.
  [31917] libc: clang mq_open fortify wrapper does not handle 4 argument
    correctly
  [31927] libc: clang open fortify wrapper does not handle argument
    correctly
  [31931] time: tzset may fault on very short TZ string
  [31934] string: wcsncmp crash on s390x on vlbb instruction
  [31963] stdio: Crash in _IO_link_in within __gcov_exit
  [31965] dynamic-link: rseq extension mechanism does not work as
    intended
  [31980] build: elf/tst-tunables-enable_secure-env fails on ppc

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.40

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports.  These include:

Adam Sampson
Adhemerval Zanella
Alejandro Colomar
Alexandre Ferrieux
Amrita H S
Andreas K. Hüttel
Andreas Schwab
Andrew Pinski
Askar Safin
Aurelien Jarno
Avinal Kumar
Carlos Llamas
Carlos O'Donell
Charles Fol
Christoph Müllner
DJ Delorie
Daniel Cederman
Darius Rad
David Paleino
Dragan Stanojević (Nevidljivi)
Evan Green
Fangrui Song
Flavio Cruz
Florian Weimer
Gabi Falk
H.J. Lu
Jakub Jelinek
Jan Kurik
Joe Damato
Joe Ramsay
Joe Simmons-Talbott
Joe Talbott
John David Anglin
Joseph Myers
Jules Bertholet
Julian Zhu
Junxian Zhu
Konstantin Kharlamov
Luca Boccassi
Maciej W. Rozycki
Manjunath Matti
Mark Wielaard
MayShao-oc
Meng Qinggang
Michael Jeanson
Michel Lind
Mike FABIAN
Mohamed Akram
Noah Goldstein
Palmer Dabbelt
Paul Eggert
Philip Kaludercic
Samuel Dobron
Samuel Thibault
Sayan Paul
Sergey Bugaev
Sergey Kolosov
Siddhesh Poyarekar
Simon Chopin
Stafford Horne
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Wilco Dijkstra
Xi Ruoyao
Xin Wang
Yinyu Cai
YunQiang Su

We would like to call out the following and thank them for their
tireless patch review:

Adhemerval Zanella
Alejandro Colomar
Andreas K. Hüttel
Arjun Shankar
Aurelien Jarno
Bruno Haible
Carlos O'Donell
DJ Delorie
Dmitry V. Levin
Evan Green
Fangrui Song
Florian Weimer
H.J. Lu
Jonathan Wakely
Joseph Myers
Mathieu Desnoyers
Maxim Kuvyrkov
Michael Jeanson
Noah Goldstein
Palmer Dabbelt
Paul Eggert
Paul E. Murphy
Peter Bergner
Philippe Mathieu-Daudé
Sam James
Siddhesh Poyarekar
Simon Chopin
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Xi Ruoyao
Zack Weinberg

--
Andreas K. Hüttel
dilfridge@gentoo.org
Gentoo Linux developer
(council, toolchain, base-system, perl, releng)
https://wiki.gentoo.org/wiki/User:Dilfridge
https://www.akhuettel.de/

Categories: FLOSS Project Planets

Week 8 recap

Planet KDE - Mon, 2024-07-22 10:06
Since our main problem up to this point is to deal with the constant micro-pixel calls, we decided to discard all these < 1 pixel movements by filtering out distances of less than 1 pixel before they go down the pipeline in KisToolFreehandHelper::pai...
Categories: FLOSS Project Planets

Real Python: Logging in Python

Planet Python - Mon, 2024-07-22 10:00

Recording relevant information during the execution of your program is a good practice as a Python developer when you want to gain a better understanding of your code. This practice is called logging, and it’s a very useful tool for your programmer’s toolbox. It can help you discover scenarios that you might not have thought of while developing.

These records are called logs, and they can serve as an extra set of eyes that are constantly looking at your application’s flow. Logs can store information, like which user or IP accessed the application. If an error occurs, then logs may provide more insights than a stack trace by telling you the state of the program before the error and the line of code where it occurred.

Python provides a logging system as part of its standard library. You can add logging to your application with just a few lines of code.

In this tutorial, you’ll learn how to:

  • Work with Python’s logging module
  • Set up a basic logging configuration
  • Leverage log levels
  • Style your log messages with formatters
  • Redirect log records with handlers
  • Define logging rules with filters

When you log useful data from the right places, you can debug errors, analyze the application’s performance to plan for scaling, or look at usage patterns to plan for marketing.

You’ll do the coding for this tutorial in the Python standard REPL. If you prefer Python files, then you’ll find a full logging example as a script in the materials of this tutorial. You can download this script by clicking the link below:

Get Your Code: Click here to download the free sample code that you’ll use to learn about logging in Python.

Take the Quiz: Test your knowledge with our interactive “Logging in Python” quiz. You’ll receive a score upon completion to help you track your learning progress.

If you commonly use Python’s print() function to get information about the flow of your programs, then logging is the natural next step for you. This tutorial will guide you through creating your first logs and show you how to make logging grow with your projects.

Starting With Python’s Logging Module

The logging module in Python’s standard library is a ready-to-use, powerful module that’s designed to meet the needs of beginners as well as enterprise teams.

Note: Since logs offer a variety of insights, the logging module is often used by other third-party Python libraries, too. Once you’re more advanced in the practice of logging, you can integrate your log messages with the ones from those libraries to produce a homogeneous log for your application.

To leverage this versatility, it’s a good idea to get a better understanding of how the logging module works under the hood. For example, you could take a stroll through the logging module’s source code.

The main component of the logging module is something called the logger. You can think of the logger as a reporter in your code that decides what to record, at what level of detail, and where to store or send these records.

Exploring the Root Logger

To get a first impression of how the logging module and a logger work, open the Python standard REPL and enter the code below:

    >>> import logging
    >>> logging.warning("Remain calm!")
    WARNING:root:Remain calm!

The output shows the severity level before each message along with root, which is the name the logging module gives to its default logger. This output uses the default format, which can be configured to include things like a timestamp or other details.

In the example above, you’re sending a message on the root logger. The log level of the message is WARNING. Log levels are an important aspect of logging. By default, there are five standard levels indicating the severity of events. Each has a corresponding function that can be used to log events at that level of severity.

Note: There’s also a NOTSET log level, which you’ll encounter later in this tutorial when you learn about custom logging handlers.

Here are the five default log levels, in order of increasing severity:

    Log Level | Function           | Description
    DEBUG     | logging.debug()    | Provides detailed information that’s valuable to you as a developer.
    INFO      | logging.info()     | Provides general information about what’s going on with your program.
    WARNING   | logging.warning()  | Indicates that there’s something you should look into.
    ERROR     | logging.error()    | Alerts you to an unexpected problem that’s occurred in your program.
    CRITICAL  | logging.critical() | Tells you that a serious error has occurred and may have crashed your app.

The logging module provides you with a default logger that allows you to get started with logging without needing to do much configuration. However, the logging functions listed in the table above reveal a quirk that you may not expect:

    >>> logging.debug("This is a debug message")
    >>> logging.info("This is an info message")
    >>> logging.warning("This is a warning message")
    WARNING:root:This is a warning message
    >>> logging.error("This is an error message")
    ERROR:root:This is an error message
    >>> logging.critical("This is a critical message")
    CRITICAL:root:This is a critical message

Notice that the debug() and info() messages didn’t get logged. This is because, by default, the logging module logs the messages with a severity level of WARNING or above. You can change that by configuring the logging module to log events of all levels.
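
One common way to do that is with logging.basicConfig(); for instance, in a fresh interpreter session:

    >>> import logging
    >>> logging.basicConfig(level=logging.DEBUG)  # must run before the first log call
    >>> logging.debug("This is a debug message")
    DEBUG:root:This is a debug message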

Read the full article at https://realpython.com/python-logging/ »

Categories: FLOSS Project Planets

What every developer should know about time

Planet KDE - Mon, 2024-07-22 07:44
What time is it? It’s 9:30 in the morning.

But what does that mean? This isn’t true everywhere in the world, and who decided a day has 24 hours? What is a second? What is time?

What is time, really?

Time, as we understand it, is the ongoing sequence of existence and events that occur in a seemingly irreversible flow from the past, through the present, and into the future.

General relativity reveals that the perception of time can vary for different observers: the concept of what time it is now holds significance only relative to a specific observer. On your wristwatch, time advances at a consistent rate of one second per second. However, if you observe a distant clock, its rate will be influenced by both velocity and gravity: a clock moving faster than you will appear to tick more slowly.

Ignoring relativistic effects for a moment, physics provides a practical definition of time: time is “what a clock reads”. Observing a certain number of repetitions of a standard cyclical event constitutes one standard unit, such as the second.

Time units

Even in ancient times, two notable physical phenomena could be observed cyclically and helped quantify the passage of time:

  • The Earth’s orbit around the Sun results in evident, cyclical events, and defines the “year” as a unit of time.
  • The Sun crossing the local meridian, i.e. returning to the same position in the sky. The interval between two successive instances of this event defines the solar “day”.

Note that the solar day, as a time unit, is affected by both the Earth’s own rotation and its revolution around the Sun: since the Earth moves by roughly 1° around the Sun each day, the Earth has to rotate by roughly 361° for the Sun to cross the local meridian.

This is different from the “sidereal day”, which is the amount of time it takes for the Earth to rotate by 360°, since it is computed from the apparent motion of distant astronomical objects instead of the Sun.
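
As a rough back-of-the-envelope check (not part of the original article), the length of the sidereal day follows from that extra degree of rotation:

    # The Earth advances ~360/365.2422 degrees along its orbit per solar day,
    # so it must spin through ~361 degrees between two successive solar noons.
    SOLAR_DAY_S = 24 * 60 * 60                 # 86,400 seconds
    extra_deg = 360 / 365.2422                 # ~0.986 degrees per day
    sidereal_day_s = SOLAR_DAY_S * 360 / (360 + extra_deg)
    print(sidereal_day_s)                      # ~86,164 s, i.e. about 23 h 56 min 4 s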

However, the solar day is typically the unit of interest for us humans and our daily lives.

How many days in a year?

Although the Earth doesn’t rotate a whole number of times in a year, and hence the year cannot be subdivided into a whole number of days, humans have devised numerous calendar systems to organize days for social, religious, and administrative purposes.

Pope Gregory XIII implemented the Gregorian calendar in 1582, which has become the predominant calendar system in use today. This calendar supplanted the Julian calendar, rectifying inaccuracies by incorporating leap years, which are additional days inserted to synchronize the calendar year with the solar year. Over a complete leap cycle spanning 400 years, the average duration of a Gregorian calendar year is 365.2425 days. While the majority of years consist of 365 days, 97 out of every 400 years are designated as leap years, extending to 366 days.
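
To make the leap-year rule concrete, here is a minimal Python check (a sketch of the rule described above, not code from the article):

    def is_leap(year: int) -> bool:
        # Gregorian rule: every 4th year is a leap year,
        # except century years that are not divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # 97 leap years per 400-year cycle yields the 365.2425-day average year.
    print(sum(is_leap(y) for y in range(2000, 2400)))  # 97
    print(365 + 97 / 400)                              # 365.2425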

Despite the Gregorian calendar being the de-facto standard, other calendars exist:

  • Islamic Calendar: A lunar calendar consisting of 12 months in a year of 354 or 355 days.
  • Hebrew Calendar: A lunisolar calendar used by Jewish communities.
  • Chinese Calendar: A lunisolar calendar that determines traditional festivals and holidays in Chinese culture.

How many seconds in a day?

We all know that there are 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day, right? As it can be expected, these magic numbers aren’t a fundamental property of nature, but rather a result of two historical choices:

  • Dividing the day into 24 hours: This stems from ancient civilizations where early cultures divided the cycle of day and night into a convenient number of units, likely influenced by the phases of the moon or other natural patterns. Ancient Egyptians used sundial subdivisions and split the cycle into two halves of 12 hours each: they already used a duodecimal system, likely influenced by the number of joints in our hands, excluding thumbs, and the ease of counting them. Twenty-four then became a common choice for counting hours.
  • Subdividing hours into 60 minutes and minutes into 60 seconds: This preference for base-60 systems comes from even earlier civilizations like the Sumerians and Babylonians. Their number systems were based on 60, again possibly due to the ease of counting on fingers and knuckles. This base-60 system was later adopted by the Greeks and eventually influenced our timekeeping.

Historically, dividing the natural day from sunrise to sunset into twelve hours, as the Romans also did, meant that the hour had a variable length that depended on season and latitude.

Equal hours were later standardized as 1/24 of the mean solar day.

Counting time

Historically, the concept of a second relied on the Earth’s rotation, but variability of the latter made this definition inadequate for scientific accuracy. In 1967, the second was redefined using the consistent properties of the caesium-133 atom. Presently, a second is defined as the time it takes for 9,192,631,770 oscillations of radiation during the transition between two energy states in caesium-133.

TAI and atomic clocks

Atomic clocks utilize the definition of a second for precise time measurement.

An array of atomic clocks globally synchronizes time through consensus: the clocks collectively determine the accurate time, with each clock adjusting to align with the consensus, known as International Atomic Time (TAI).

The International Bureau of Weights and Measures (BIPM) located near Paris, France, computes the International Atomic Time (TAI) by taking the weighted average of more than 300 atomic clocks from around the world, where the most stable clocks receive more weight in the calculation.

Operating on atomic seconds, TAI serves as the standard for calibrating various timekeeping instruments, including quartz and GPS clocks.

GPS time

Specifically, GPS satellites carry multiple atomic clocks for redundancy and better synchronization, but the accuracy of GPS timing requires different corrections, such as for both Special and General Relativity:

  • According to Special Relativity, clocks moving at high speeds will run slower relative to those on the ground. GPS satellites travel at approximately 14,000 km/h, which results in their clocks ticking slower by about 7.2 microseconds per day.
  • According to General Relativity, clocks in a weaker gravitational field run faster than those closer to a massive object. The altitude of GPS satellites (about 20,200 km) causes their clocks to tick faster by about 45.8 microseconds per day.
  • As the net effect, the combined relativistic effects cause the satellite clocks to run faster than ground-based clocks by about 38.6 microseconds per day.

To compensate, the satellite clocks are pre-adjusted on Earth before launch to tick slower than the ground clocks.
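
A quick sanity check of those figures (plain arithmetic, not how GPS firmware actually applies the correction):

    # Per day: special relativity slows the satellite clocks, while the weaker
    # gravity at ~20,200 km altitude speeds them up; the gravitational term wins.
    special_us = -7.2     # microseconds per day (clock runs slower)
    general_us = 45.8     # microseconds per day (clock runs faster)
    net_us = special_us + general_us
    print(net_us)         # ~38.6 microseconds per day fast

    # Left uncorrected, at the speed of light that clock error corresponds to a
    # ranging error on the order of 11-12 km per day:
    print(net_us * 1e-6 * 299_792_458)  # ~11,572 m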

Once in orbit, GPS satellites continuously broadcast signals containing the exact transmission time and their positions.

A GPS receiver on the ground picks up these signals from multiple satellites and compares the reception time with the transmission time to calculate the time delays.

Using these delays, the receiver determines its distance from each satellite and, through trilateration, calculates its own position and the current time.

Precise timekeeping on GPS satellites is thus necessary, since small errors of 38.6 microseconds would result in big positional errors of roughly 11.4 km accumulated each day.

Notably, corrections must also consider that the elliptical satellite orbits cause variations in time dilation and gravitational frequency shifts over time. This eccentricity affects the clock rate difference between the satellite and the receiver, increasing or decreasing it based on the satellite’s altitude.

UTC and leap seconds

While TAI provides precise seconds, the Earth presents an irregular rotation: it gradually slows due to tidal acceleration.

On the other hand, for practical purposes, civil time is defined to agree with the Earth’s rotation.

Coordinated Universal Time (UTC) serves as the international standard for timekeeping. This time scale uses the same atomic seconds as TAI but adjusts for variations in the Earth’s rotation by adding or omitting leap seconds as necessary.

To determine if a leap second is required, universal time UT1 is utilized to measure mean solar time by monitoring the Earth’s rotation relative to the sun.

The variance between UT1 and UTC dictates the necessity of a leap second in UTC.

UTC thus has precise seconds and is always kept within 0.9 seconds of UT1 to stay synchronized with astronomical time: an additional second may be added to the last UTC minute of June or December to compensate.

This is why the second is now precisely defined in the International System of Units (SI) as the atomic second, while days, hours, and minutes are only mentioned in the SI Brochure as accepted units for explanatory purposes: their use is deeply embedded in history and culture, but their definition is not precise as it refers to the mean solar day and it cannot account for the inconsistent rotational and orbital motion of the Earth.

Local time

Although UTC provides a universal time standard, individuals require a “local” time that is consistent with the position of the sun, ensuring that local noon (when the sun reaches its highest point in the sky) roughly aligns with 12:00 PM.

In a specific region, coordinating daily activities such as work and travel is more convenient when people adhere to a shared local time. For instance, if I start traveling to another nearby city at 9:00 in the morning, I can anticipate that it will be 11:00 AM after two hours of travel.

On the other hand, if I schedule a flight to a destination far away and the airline informs me that the arrival time is 11:00 AM local time, I can generally infer that the sun will be high in the sky.

Time zones

Time zones are designed to delineate geographical regions within which a convenient and consistent standard time is utilized.

A time zone is a set of rules that define a local time relative to the standard incremental time, i.e. UTC.

Imagine segmenting the Earth into 24 sections, each spanning approximately 15 degrees of longitude. This would roughly define geographical areas whose solar time differs by about an hour, a difference small enough that people living within each area can agree on a local time.

For this purpose, the Coordinated Universal Time (UTC), located at zero degrees longitude (the Prime Meridian), serves as the reference point. Each country or political entity declares its preferred time zone identified by its deviation from UTC, which spans from UTC−12:00 to UTC+14:00. Typically, these offsets are whole-hour increments, though certain zones like India and Nepal deviate by an additional 30 or 45 minutes.

This system ensures that different regions around the globe set their clocks according to their respective time zones.

There are actually more time zones than countries, even though time zones generally correspond to hourly divisions of the Earth’s geography.

At the boundaries of UTC offsets, the International Date Line (IDL) roughly follows the 180° meridian but zigzags to avoid splitting countries or territories, creating a sharp time difference where one side can be a whole day ahead of the other: i.e. when jumping from UTC-12 to UTC+12.

Some regions have chosen offsets that extend beyond the conventional range for economic or practical reasons. For example, Kiribati adjusted its time zones to ensure that all its islands are on the same calendar day. As a result, some areas of Kiribati observe UTC+14, which is why the UTC offset range extends from -12 to +14.

Daylight Saving Time

Additionally, some nations adjust their clocks forward by one hour during warmer months. This practice, known as Daylight Saving Time (DST), aims to synchronize business hours with daylight hours, potentially boosting economic activity by extending daylight for commercial endeavors. The evening daylight extension also holds the promise of conserving energy by reducing reliance on artificial lighting. Typically, clocks are set forward in the spring and set back in the fall.

DST is less prevalent near the equator due to minimal variation in sunrise and sunset times, as well as in high latitudes, where a one-hour shift could lead to dramatic changes in daylight patterns. Consequently, certain countries maintain the same local time year-round.

Nigeria always observes West Africa Time (WAT), that is UTC+1. Japan observes Japan Standard Time (JST), that is UTC+9.

Instead, other countries observe DST during the summer by following a different UTC offset. That’s why Germany and other countries in Europe observe both Central European Time (CET) UTC+1 and Central European Summer Time (CEST) UTC+2 during the year, and that’s why CEST has such a name.

Interestingly, while the United Kingdom follows Greenwich Mean Time (GMT) UTC+0 in winter and British Summer Time (BST) UTC+1 in summer, Iceland uses GMT all year.

Time zone databases

Conceptually, a time zone is a region of the Earth that shares the same standard time, which can include both a standard time and a daylight saving time; these correspond to local time offsets with respect to UTC.

These conventions and rules change over time, hence why time zone databases exist and are regularly updated to reflect historical changes.

A standard reference is the IANA time zone database, also known as tz database or tzdata, which maintains a list of the existing time zones and their rules.

On Linux, macOS and Unix systems, you can navigate the timezone database at the standard path `/usr/share/zoneinfo/`.
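
In Python, for example, the standard-library zoneinfo module (Python 3.9+) consults the same IANA rules, so you can resolve an Area/Location identifier without reading those files yourself; a minimal sketch:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # backed by the IANA tz database

    # The same instant rendered in two zones, identified by Area/Location names.
    now_utc = datetime.now(tz=ZoneInfo("UTC"))
    print(now_utc.astimezone(ZoneInfo("America/New_York")))
    print(now_utc.astimezone(ZoneInfo("Europe/Berlin")))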

Here’s a breakdown of the essential elements for navigating time zone databases:

  • The “time zone identifier” is typically a unique label in the form of Area/Location, e.g. America/New_York.
  • “Time zone abbreviations” are short forms used to denote specific time zones, often indicating whether it is standard or daylight saving time. For example, CET (Central European Time) that corresponds to UTC+01:00 and CEST (Central European Summer Time) that corresponds to UTC+02:00.
  • “Standard times” are the official local times observed in a region when daylight saving time is not in effect. For example, Central European Time (CET, UTC+01:00) is the standard time for many European countries during the winter months.

Time formats

For all practical purposes, UTC serves as the foundation for tracking the passage of time and as the reference for defining a local time.

However, all would be futile without a standardized representation of time that allows for consistent and accurate recording, processing, and exchange of time-related data across different systems.

Time formats are crucial for ensuring that time data is interpretable and usable by both humans and machines.

Some common formats have become the de-facto standards for interoperable time data.

ISO 8601, RFC 3339

ISO 8601 is a globally recognized standard for representing dates, times, and durations. It offers great versatility, accommodating a variety of date and time formats.

ISO 8601 strings represent date and time in a human-readable format with optional timezone information. It uses the Gregorian calendar, even for dates that precede its genesis.

  • Dates use the format `YYYY-MM-DD`, such as `2024-05-28`.
  • Times use the format `HH:MM:SS`, such as `14:00:00`.
  • Combined date and time can be formatted as `YYYY-MM-DDTHH:MM:SS`, for example `2024-05-28T14:00:00`.
  • ISO 8601 strings can include timezone offsets, for example `2024-05-28T14:00:00+02:00`.
  • `Z` is used to indicate Zulu time, the military name for UTC, for example `2024-05-28T14:00:00Z`.

ISO 8601 is further refined by RFC 3339 for use in internet protocols.

RFC 3339 is a profile of ISO 8601, meaning it adheres to the standard but imposes additional constraints to ensure consistency and avoid ambiguities:

  • Always specifies the time zone, either as Z for UTC or an offset. Example: `2024-05-28T14:00:00-05:00`.
  • Supports fractional seconds, such as in `2024-05-28T14:00:00.123Z`.
  • Fractional seconds, if used, must be preceded by a decimal point and can have any number of digits.
  • Does not allow the use of the truncated format of ISO 8601, such as `2024-05` for May 2024.

Both standards are not limited to representing only UTC times, as they can also express times that occurred before the introduction of UTC. Practically, they are most often used to represent UTC timestamps.
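
For illustration, Python's datetime can both emit and parse such strings (a small sketch; an explicit offset is used because fromisoformat() only accepts a trailing `Z` from Python 3.11 onward):

    from datetime import datetime, timezone

    dt = datetime(2024, 5, 28, 14, 0, tzinfo=timezone.utc)
    print(dt.isoformat())  # 2024-05-28T14:00:00+00:00

    parsed = datetime.fromisoformat("2024-05-28T14:00:00+02:00")
    print(parsed.astimezone(timezone.utc))  # 2024-05-28 12:00:00+00:00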

Since leap seconds are added to UTC to account for irregularities in the Earth’s rotation, ISO 8601 allows for the representation of the 60th second in a minute, which is the leap second.

For example, the latest leap second occurred at the end of 2016, so `2016-12-31T23:59:60+00:00` is a valid ISO 8601 time stamp.

Unix time

Unix time, or POSIX time, is a method of keeping track of time by counting the number of non-leap seconds that have passed since the Unix epoch. The Unix epoch is set at 00:00:00 UTC on January 1, 1970.

The Unix time format is thus a signed integer counting the number of non-leap seconds since the epoch.

For example, the Unix timestamp `1653436800` represents a specific second in time, that is 00:00:00 UTC on May 25, 2022.
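
A quick way to verify such a value (a sketch using Python's standard library):

    from datetime import datetime, timezone

    ts = 1653436800
    print(datetime.fromtimestamp(ts, tz=timezone.utc))             # 2022-05-25 00:00:00+00:00
    print(datetime(2022, 5, 25, tzinfo=timezone.utc).timestamp())  # 1653436800.0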

For dates and times before the epoch, Unix time is represented as negative integers.

Unix time can also represent time with higher precision using fractional seconds (e.g., Unix timestamps with milliseconds, microseconds, or nanoseconds).

Note that Unix time is timezone-agnostic.

In addition, Unix time does not account for leap seconds, so it does not have a straightforward way to represent the 60th second (leap second) of a minute.

According to the POSIX.1 standard, Unix time handles a leap second by repeating the previous second, meaning that time appears to “stand still” for one second when fractional seconds are not considered. As a result, certain Unix timestamps can be ambiguous: 1483228800 might refer to either the start of the leap second (2016-12-31 23:59:60) or one second later at the end of it (2017-01-01 00:00:00). However, some systems, such as Google’s, use a technique called “leap smear”: instead of inserting a leap second abruptly, they gradually adjust the system clock over a longer period, such as 24 hours, to distribute the one-second difference smoothly.

Note that every day in Unix time consists of exactly 86400 seconds. On the other hand, when a leap second occurs, the difference between two Unix times is not equal to the true duration in seconds of the interval between them. Most applications, however, don’t require this level of accuracy, so Unix times are handy to compare and manipulate with basic additions and subtractions.

Clocks and synchronization

From atomic clocks to individual computers, time is synchronized through a hierarchical and redundant system using GPS, NTP, and other methods to ensure accuracy.

GPS satellites carry atomic clocks and broadcast time signals. Receivers on Earth can use these signals to determine precise time.

In Network Time Protocol (NTP), servers synchronize with atomic clocks via GPS or other means, and form a hierarchical distribution network of time information.

Radio stations, such as WWV/WWVH, can also broadcast time signals that can be picked up by radio receivers and used for synchronization.

Hardware clocks

Computers have hardware “clocks”, or “timers”, that synchronize all components and signals on the motherboard and are also used to track time.

Typically, modern clocks are generated by quartz crystal oscillators of about 20MHz: an oscillator on the motherboard ticks the “system clock”, and other clocks are generated from it by multiplying or dividing the frequency of the first oscillator thanks to phase-locked loops.

For example, the CPU clock is generated from the system clock and has the same purpose, but is only used on the CPU itself.

Since the clock determines the speed at which instructions are executed and the CPU needs to perform more operations per unit of time than the motherboard, the CPU clock runs at a higher frequency: e.g. 4GHz for a CPU core.

Some computers also allow the user to change the multipliers in order to “overclock” or “underclock” the CPU. Generally, this is used to speed up the processing capabilities of the computer, at the expense of increased power consumption, heat, and unreliability.

OS clock

Backed by hardware clocks, the operating system manages different software counters, still called “clocks”, such as the “system clock” which is used to track time.

The OS configures the hardware timers to generate periodic interrupts, and each interrupt allows the OS to increment a counter that represents the system uptime.

The actual time is derived from the uptime counter, typically starting from a predefined epoch (e.g., January 1, 1970, for Unix-based systems).

Real Time Clock

Modern computers have a separate hardware component dedicated to keeping track of time even when the computer is powered off: the Real-Time Clock (RTC).

The RTC is usually implemented as a small integrated circuit on the motherboard, powered by a small battery such as a coin-cell battery. It has its own low-power oscillator, separate from the main system clock.

The operating system reads the RTC during boot through standard communication protocols like I2C or SPI, and initializes the system time.

The OS periodically writes the system time back to the RTC to account for any adjustments (e.g., NTP updates).

Although a system clock is more precise for short-term operations due to high-frequency ticks, an RTC is sufficient for keeping the current date and time over long periods.

NTP

All hardware clocks drift due to temperature changes, aging hardware, and other factors.

This results in inaccurate timekeeping for the operating system, so regular synchronization of system time with NTP servers is needed to correct for this drift.

NTP adjusts the system clock of the OS in one of two ways:

  • By “stepping”, abruptly changing the time.
  • By “slewing”, slowly adjusting the clock speed so that it drifts toward the correct time: hardware timers continue to produce interrupts at their usual rate while the OS alters the rate at which it counts hardware interrupts.

Slewing should be preferred, but it’s only applicable for correcting small time inaccuracies.

NTP servers also provide leap second information, and systems need to be configured to handle these adjustments correctly, often by “smearing” the leap second over a long period, e.g. 24 hours.

Different OS clocks

The operating system generally keeps several time counters for use in various scenarios.

In Linux, for instance, `CLOCK_REALTIME` represents the current wall clock or time-of-day. However, other counters like `CLOCK_MONOTONIC` and `CLOCK_BOOTTIME` serve different purposes:

  • `CLOCK_MONOTONIC` remains unaffected by discontinuous changes in the system time (such as adjustments by NTP or manual changes by the system administrator) and excludes leap seconds.
  • `CLOCK_BOOTTIME` is similar to `CLOCK_MONOTONIC` but also accounts for any time the system spends in suspension.

For example, if you need to calculate the elapsed time between two events on a single machine without a reboot occurring in between, `CLOCK_MONOTONIC` is a suitable choice. This is commonly used for measuring time intervals, such as in code performance benchmarking.
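
For instance, Python exposes these counters through the time module; a minimal sketch (the CLOCK_BOOTTIME constant is Linux-specific, hence the guard):

import time

start = time.monotonic()            # CLOCK_MONOTONIC: unaffected by NTP steps or manual changes
time.sleep(0.1)                     # the work being measured
elapsed = time.monotonic() - start

wall = time.time()                  # CLOCK_REALTIME: wall-clock time, which may jump
print(f"elapsed={elapsed:.3f}s wall={wall:.0f}")

# On Linux, CLOCK_BOOTTIME additionally counts time spent in suspend.
if hasattr(time, "CLOCK_BOOTTIME"):
    print("seconds since boot (incl. suspend):", time.clock_gettime(time.CLOCK_BOOTTIME))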

Best practices

Storing universal time

For many applications, such as in log lines, time zones and Daylight Saving Time (DST) rules are not directly relevant when storing a timestamp: what matters is the exact moment in (universal) time when the event occurred, i.e. the relative ordering of events.

When storing timestamps, it is crucial to store them in a consistent and unambiguous manner.

Stored timestamps, as well as time calculations, should therefore be in UTC, i.e. without any local offset. This avoids errors due to local time changes such as DST.

Timestamps should be converted to local time zones only for display purposes.

As for formatting, Unix timestamps are the preferred storage format since they are simple numbers, space-efficient, and easy to compare and manipulate directly.

UTC timestamp strings as defined in RFC 3339 are useful when displaying time in a human-readable format.
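
A minimal Python sketch of this convention, using only the standard library: keep a Unix timestamp for storage and comparisons, and produce an RFC 3339 string only at display time:

from datetime import datetime, timezone

# Store: a plain Unix timestamp (seconds since the epoch, always UTC).
stored = datetime.now(timezone.utc).timestamp()

# Display: convert back to an aware datetime and render it as RFC 3339.
moment = datetime.fromtimestamp(stored, tz=timezone.utc)
print(moment.isoformat(timespec="seconds"))  # e.g. 2024-07-22T12:34:56+00:00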

Time zones and DST become relevant when converting, displaying, or processing timestamps in user-facing applications, especially when dealing with future events.

Here are some considerations for handling future events:

  • Store the base timestamp of the event in Unix time. This ensures that the event’s reference time is unambiguous and consistent.
  • Convert the stored UTC timestamp to the local time zone when displaying the event to users. Ensure that the conversion respects the UTC offset and DST rules of the target time zone.
  • Keep in mind that time zone rules change from time to time, for instance when a country stops observing DST; the stored UTC timestamp would then map to a different future local time.

Time zone databases, like the IANA time zone database, are regularly updated to reflect historical changes. Applications must use the latest versions of these databases to ensure accurate timestamp conversions, correctly interpreting and displaying timestamps.

By all means, use established libraries or your language’s facilities to do timezone conversions and formatting. Do not implement these rules yourself.
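
As an example of relying on such facilities, Python's zoneinfo module (backed by the IANA database shipped with the system, or by the tzdata package) performs the display-time conversion; the zone names and dates below are purely illustrative:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A stored event time, kept in UTC.
event_utc = datetime(2025, 3, 15, 18, 0, tzinfo=timezone.utc)

# Converted only for display, honouring each zone's offset and DST rules.
print(event_utc.astimezone(ZoneInfo("Europe/Berlin")))      # 2025-03-15 19:00:00+01:00
print(event_utc.astimezone(ZoneInfo("America/New_York")))   # 2025-03-15 14:00:00-04:00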

Storing local time

Time zone databases must be updated because time zone rules and DST schedules can and do change. Governments and regulatory bodies may adjust time zone boundaries, introduce or abolish DST, or change the start and end dates of DST.

This is especially relevant if the application user is more interested in preserving the local time information of a scheduled event than its UTC time.

Indeed, while storing UTC time can help you track when the next solar eclipse will happen, imagine scheduling a conference call for 1 PM on some day next year and storing that time as a UTC timestamp. If the government changes or abolishes its DST then the timestamp still refers to the instant in time that was formerly going to be 1 PM at that date, but converting it back to local time would yield a different result.

For these situations storing UTC time is not ideal. Instead, calendar apps might store the event’s time as three separate values: a date, a time-of-day, and a time zone; the UTC time can then be derived on demand, using the time zone rules in force at the moment of derivation.

However, converting local time to UTC is not always unambiguous: because of DST, some local timestamps happen twice (when the clock is set backward) and some never happen at all (when the clock is set forward, skipping an hour).

Ideally, invalid or ambiguous times should not be stored in the first place, and stored times should be validated when new timezone rules are introduced.
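
A sketch of how such validation might look in Python with zoneinfo and the PEP 495 fold attribute; the zone and the 2025 DST transition dates are illustrative:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

TZ = ZoneInfo("America/New_York")

def is_ambiguous(local: datetime) -> bool:
    # A wall-clock time is ambiguous if its two possible readings (fold=0/1) have different offsets.
    return local.replace(fold=0).utcoffset() != local.replace(fold=1).utcoffset()

def is_nonexistent(local: datetime) -> bool:
    # A skipped wall-clock time does not survive a round trip through UTC.
    roundtrip = local.astimezone(timezone.utc).astimezone(TZ)
    return roundtrip.replace(tzinfo=None) != local.replace(tzinfo=None)

print(is_ambiguous(datetime(2025, 11, 2, 1, 30, tzinfo=TZ)))   # True: clocks are set back, 01:30 happens twice
print(is_nonexistent(datetime(2025, 3, 9, 2, 30, tzinfo=TZ)))  # True: clocks jump forward, 02:30 never happens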

Additional challenges arise with:

  • Recurrent events, where changes to timezone rules can alter offsets for some instances while leaving others unaffected.
  • Events where users come from multiple timezones, necessitating a single coordinating time zone for the recurrence. However, offsets must be recalculated for each involved time zone and every occurrence.

For exploring the topic of recurrent events and calendars further, RFC 5545 might be of interest since it presents the iCalendar data format for representing and exchanging calendaring and scheduling information.

For instance, it also describes a grammar for specifying “recurrence rules” where “the last work day of the month” could be represented as “FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1”, even though recurrence rules may generate recurrence instances with an invalid date or nonexistent local time, and that must still be accounted for.
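
For illustration, the third-party python-dateutil library can expand such rules; a minimal sketch assuming it is installed and that occurrences start in July 2024:

from datetime import datetime
from itertools import islice

from dateutil.rrule import rrulestr

# "The last work day of the month" as an RFC 5545 recurrence rule.
rule = rrulestr("FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1", dtstart=datetime(2024, 7, 1))

for occurrence in islice(rule, 3):
    print(occurrence.date())  # 2024-07-31, 2024-08-30, 2024-09-30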

Parsing and formatting time

Different time formats serve various purposes in timekeeping, incorporating concepts like date and time, universal time, calendar systems, and time zones. The specific format needed depends on the use case.

For example, using the naming convention from the W3C guidelines, “incremental time” like Unix time measures time in fixed integer units that increase from a specific point. In contrast, “floating time” refers to a date or time value in a calendar that is not tied to a specific instant. Applications use floating time values when the local wall time is more relevant than a precise timeline position. These values remain constant regardless of the user’s time zone.

Floating times are serialized in ISO8601 formats without local time offsets or time zone identifiers (like Z for UTC).

However, many programming libraries can make it too easy to deserialize an ISO8601 string into an incremental time (e.g. Unix time) aligned with UTC, leading to errors by implicitly assuming timezone information.

Programmers should carefully consider the intended use of time information to collect, parse, and format it correctly. Here are some guidelines for handling time inputs:

  • Use UTC or a consistent time zone when creating time-based values to facilitate comparisons across sources.
  • Allow users to choose a time zone and associate it with their session or profile.
  • Use exemplar cities to help users identify the correct time zone.
  • Consider the country as a hint, since most have a single time zone.
  • For data with local time offsets, adjust the zone offset to UTC before performing any date/time sensitive operations.
  • For time data not related to time zones, like birthdays, use a representation that does not imply a time zone or local time offset.
  • Since user input can be messy, avoid strict pattern matching and use a “lenient parse” approach: https://unicode.org/reports/tr35/tr35-10.html#Date_Format_Patterns
  • Be mindful of how libraries parse user input, for instance using the correct calendar: https://ericasadun.com/2018/12/25/iso-8601-yyyy-yyyy-and-why-your-year-may-be-wrong/
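
To make the deserialization pitfall above concrete, here is a small Python sketch of treating a missing offset as an explicit decision rather than an implicit assumption (the timestamps are illustrative):

from datetime import datetime, timezone

floating = datetime.fromisoformat("2024-07-22T13:00:00")        # no offset: a floating time
anchored = datetime.fromisoformat("2024-07-22T13:00:00+00:00")  # offset present: an exact instant

for parsed in (floating, anchored):
    if parsed.tzinfo is None:
        # Do not silently assume UTC or local time; attach a zone as a deliberate policy choice.
        print("floating time, zone must be decided explicitly:", parsed)
    else:
        print("unambiguous instant:", parsed.astimezone(timezone.utc))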

Distributed systems

Timekeeping and clocks are hardly reliable and consistent.

In theory, we have the means to agree on the notion of what time it is now: we know how to synchronize computer systems worldwide.

The real problem is not even that information takes time to travel between systems; it is that everything can break.

What makes it hard to build distributed systems that agree on a specific value is the combination of them being asynchronous and having physical components subject to failures.

Certain results provide insights into the challenges of achieving specific properties in distributed systems under particular conditions:

  • The “FLP result,” a 1985 paper, establishes that in an asynchronous system no deterministic algorithm can guarantee consensus if even a single process may fail: with unbounded processing and message delays, it is impossible to tell whether a process has crashed or is merely slow.
  • The “CAP theorem” explains that any distributed system can achieve at most two of the following three properties: Consistency (ensuring all data is up to date), Availability (ensuring the system is accessible), and Partition Tolerance (ensuring the system continues to function despite network partitions). In unreliable systems, it’s impossible to guarantee both consistency and availability simultaneously.
  • The “PACELC theorem,” introduced in 2010, extends the CAP theorem by highlighting that even in the absence of network partitions, distributed systems must make a trade-off between latency and consistency.

The net outcome is that in theory having a single absolute value of time is not meaningful in distributed systems. In practice, we can certainly plan for reducing the margin of error and de-risking our applications.

Here are some further considerations that can be relevant:

  • Ensure all nodes in the distributed system synchronize their clocks using NTP or similar protocols to minimize time drift between nodes. Use multiple NTP servers for redundancy and reliability.
  • Ensure that NTP servers and clients are configured to handle leap seconds correctly.
  • Consider also using GPS to provide an additional layer of accuracy.
  • Besides NTP, evaluate Precision Time Protocol (PTP) for applications that require nanosecond-level synchronization.
  • Account for network latency, for example by estimating round-trip times (RTTs), to avoid discrepancies caused by delays in message delivery.
  • Ensure timestamps have sufficient precision for the application’s requirements. Millisecond or microsecond precision may be necessary for high-frequency events.
  • Ensure that all nodes log events using UTC to maintain consistency in distributed logs.
  • Use distributed databases that handle time synchronization internally, such as Google Spanner, which uses TrueTime to provide globally consistent timestamps. This is mostly relevant for applications that require precise transaction ordering, such as financial systems or inventory management.
  • Consider using logical clocks (e.g., Lamport Timestamps or Vector Clocks) to order events in a causally consistent manner across distributed nodes, independent of physical time (see the sketch after this list).
  • Consider Hybrid Logical Clocks (HLC) to provide a more reliable timestamp mechanism that combines physical and logical clocks.
  • Evaluate the use of Conflict-free Replicated Data Types (CRDT) to avoid the need for time or coordination when performing data updates.
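
As a toy illustration of the logical-clock idea from the list above (a sketch, not production code), a Lamport clock orders events without consulting physical time at all:

class LamportClock:
    """Minimal Lamport clock: a counter that captures causal order, independent of wall-clock time."""

    def __init__(self) -> None:
        self.time = 0

    def tick(self) -> int:
        self.time += 1          # local event
        return self.time

    def send(self) -> int:
        return self.tick()      # timestamp attached to an outgoing message

    def receive(self, remote_time: int) -> int:
        self.time = max(self.time, remote_time) + 1   # merge with the sender's view
        return self.time

a, b = LamportClock(), LamportClock()
stamp = a.send()         # a: 1
b.tick()                 # b: 1, an unrelated local event
print(b.receive(stamp))  # 2: the receive is ordered after everything a knew about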

Fun facts

  • UTC was formerly known as Greenwich Mean Time (GMT) because the Prime Meridian was arbitrarily designated to pass through the Royal Observatory in Greenwich.
  • The acronym UTC is a compromise between the English “Coordinated Universal Time” and the French “Temps Universel Coordonné.” The international advisory group selected UTC to avoid favoring any specific language. In contrast, TAI stands for the French “Temps Atomique International.”
  • Leap seconds need not be announced more than six months in advance, posing a challenge for second-accurate planning beyond this period.
  • NTP servers can use the leapfile directive in ntpd to announce an upcoming leap second. Most end-user installations do not implement this, relying instead on their upstream servers to handle it correctly.
  • Leap seconds can theoretically be both added and subtracted, although no leap second has ever been subtracted yet.
  • The 32-bit representation of Unix time will overflow on January 19, 2038, known as the Year 2038 problem. Transitioning to a 64-bit representation is necessary to extend the date range.
  • The year 2000 problem: https://en.wikipedia.org/wiki/Year_2000_problem
  • Atomic clocks are extremely precise. The primary time standard in the United States, the National Institute of Standards and Technology (NIST)’s caesium fountain clock, NIST-F2, has a time measurement uncertainty of only 1 second over 300 million years.
  • Leap seconds will be phased out by 2035, accepting a slow drift of UTC away from astronomical time in exchange for avoiding the complexities of managing leap seconds.
  • In 1998 the Swatch corporation introduced the “.beat time” for their watches, a decimal universal time system without time zones: https://en.wikipedia.org/wiki/Swatch_Internet_Time
  • February 30 has been a real date at least twice in history: https://www.timeanddate.com/date/february-30.html
  • Year 0 does not exist: https://en.wikipedia.org/wiki/Year_zero

Further reads

What was discussed here is just the tip of the iceberg, a brief overview of the topic.

Some further resources might be interesting to deepen our understanding of timekeeping, its challenges, and edge cases:

Categories: FLOSS Project Planets

Wim Leers: XB week 8: design locomotive gathering steam

Planet Drupal - Mon, 2024-07-22 07:29

Last week was very quiet, July 1–7’s #experience-builder Drupal Slack meeting was even more tumbleweed-y…

… but while that was very quiet, the Acquia UX team’s design locomotive was getting started in the background, with two designs ready-to-be-implemented posted by Lauri — he’s working with that team to get his Experience Builder (XB) product vision visualized:

  1. #3458503: Improve the page hierarchy display — with Gaurav “Gauravvvv” enthusiastically jumping on it!
  2. #3458863: Add initial implementation of top bar

On top of that, a set of placeholder issues was posted — choices to be made that need user research before an informed design can be crafted:

More to come on the design front! :)

Missed a prior week? See all posts tagged Experience Builder.

Goal: make it possible to follow high-level progress by reading ~5 minutes/week. I hope this empowers more people to contribute when their unique skills can best be put to use!

For more detail, join the #experience-builder Slack channel. Check out the pinned items at the top!

Speaking of design, Hemant “guptahemant” Gupta and Kristen “kristenpol” Pol are volunteering on behalf of their respective employers to implement the initial design system prior to DrupalCon Barcelona.

Front end

Front-end starting points for contributing to XB

Jesse reported two bugs that would be excellent starting points for contributing to XB:

And Lauri added a feature request that builds on top of the aforementioned foundational UI piece Jesse & Ben landed (#3459249: Allow opening the contextual menu by right clicking a component) — that too is a great entry point.

Back end

And, finally, on the back-end side:

Thanks to Lauri for reviewing this!

  1. Sadly, it’s an inevitability that this is still complex: Drupal 8 introduced Typed Data API into the pre-existing Field API, and then later the Symfony validation constraint API was integrated into both. Getting a full picture of what constraints apply to an entire entity field data tree is not something Drupal core provides, so that picture must be manually crafted, which is what that code does. ↩︎

Categories: FLOSS Project Planets

The Drop Times: Why Should Agencies Respond to Drupal Business Survey?

Planet Drupal - Mon, 2024-07-22 06:59
"I will turn every stone, and I will turn them on time," notes a very confident Janne Kalliola. Over a decade ago, Janne's idea to host a dinner for Drupal business leaders at DrupalCon Prague 2013 sparked the Drupal Business Survey, now in its ninth year. In an interview with Alka Elizabeth, Janne discusses the survey's origins, its evolution, and the key players involved. He shares the benefits for the Drupal community, significant changes, and expectations for this year's survey, highlighting its role in gathering vital insights for Drupal businesses and agencies worldwide.
Categories: FLOSS Project Planets

Talk Python to Me: #471: Learning and teaching Pandas

Planet Python - Mon, 2024-07-22 04:00
If you want to get better at something, oftentimes the path is pretty clear. If you want to get better at swimming, you go to the pool and practice your strokes and put in time doing the laps. If you want to get better at mountain biking, hit the trails and work on drills focusing on different aspects of riding. You can do the same for programming. Reuven Lerner is back on the podcast to talk about his book Pandas Workout. We dive into strategies for learning Pandas and Python as well as some of his workout exercises.

Episode sponsors

Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
Scalable Path: https://talkpython.fm/scalablepath
Talk Python Courses: https://talkpython.fm/training

Links from the show

Reuven Lerner on Twitter: https://twitter.com/reuvenmlerner
Pandas Workout Book: https://www.manning.com/books/pandas-workout
Bamboo Weekly: Solar eclipse: https://www.bambooweekly.com/bw-61-solar-eclipse/
Bamboo Weekly: Avocado hand: https://www.bambooweekly.com/bw-73-avocado-hand/
Scaling data science across Python and R: https://talkpython.fm/episodes/show/236/scaling-data-science-across-python-and-r
Watch this episode on YouTube: https://www.youtube.com/watch?v=x1-t4xRS9XA
Episode transcripts: https://talkpython.fm/episodes/transcript/471/learning-and-teaching-pandas

--- Stay in touch with us ---
Subscribe to us on YouTube: https://talkpython.fm/youtube
Follow Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
Follow Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Categories: FLOSS Project Planets

Zato Blog: LDAP and Active Directory as Python API Services

Planet Python - Mon, 2024-07-22 04:00
2024-07-22, by Dariusz Suchojad

LDAP and Active Directory often play a key role in the management of a company's network resources, yet it is not always very convenient to query a directory directly using the LDAP syntax and protocol that few people truly specialize in. This is why in this article we are using Zato to offer a REST API on top of directory services so that API clients can use REST and JSON instead.

Installing Zato

Start off by installing Zato - if you are not sure what to choose, pick the Docker Quickstart option and this will set up a working environment in a few minutes.

Creating connections

Once Zato is running, connections can be easily created in its Dashboard (by default, http://127.0.0.1:8183). Navigate to Connections -> Outgoing -> LDAP ..

.. and then click Create a new connection which will open a form as below:

The same form works for both regular LDAP and Active Directory - in the latter case, make sure that Auth type is set to NTLM.

The most important information is:

  • User credentials
  • Authentication type
  • Server or servers to connect to

Note that if authentication type is not NTLM, user credentials can be provided using the LDAP syntax, e.g. uid=MyUser,ou=users,o=MyOrganization,dc=example,dc=com.

Right after creating a connection be sure to set its password too - the password assigned by default is a randomly generated one.

Pinging

It is always prudent to ping a newly created connection to ensure that all the information entered was correct.

Note that if you have more than one server in a pool then the first available one of them will be pinged - it is the whole pool that is pinged, not a particular part of it.

Active Directory as a REST service

As the first usage example, let's create a service that will translate JSON queries into LDAP lookups - given a username or email, the service will return basic information about the person's account, such as first and last name.

Note that the conn object returned by client.get() below is capable of running any commands that its underlying Python library offers - in this case we are only using searches but any other operation can also be used, e.g. add or modify as well.

# -*- coding: utf-8 -*-

# stdlib
from json import loads

# Bunch
from bunch import bunchify

# Zato
from zato.server.service import Service

# Where in the directory we expect to find the user
search_base = 'cn=users, dc=example, dc=com'

# On input, we are looking users up by either username or email
search_filter = '(&(|(uid={user_info})(mail={user_info})))'

# On output, we are interested in username, first name, last name and the person's email
query_attributes = ['uid', 'givenName', 'sn', 'mail']

class ADService(Service):
    """ Looks up users in AD by their username or email.
    """
    class SimpleIO:
        input_required = 'user_info'
        output_optional = 'message', 'username', 'first_name', 'last_name', 'email'
        response_elem = None
        skip_empty_keys = True

    def handle(self):

        # Connection name to use
        conn_name = 'My AD Connection'

        # Get a handle to the connection pool
        with self.out.ldap[conn_name].conn.client() as client:

            # Get a handle to a particular connection
            with client.get() as conn:

                # Build a filter to find a user by
                user_info = self.request.input['user_info']
                user_filter = search_filter.format(user_info=user_info)

                # Returns True if query succeeds and has any information on output
                if conn.search(search_base, user_filter, attributes=query_attributes):

                    # This is where the actual response can be found
                    response = conn.entries

                    # In this case, we expect at most one user matching input criteria
                    entry = response[0]

                    # Convert it to JSON for easier handling ..
                    entry = entry.entry_to_json()

                    # .. and load it from JSON to a Python dict
                    entry = loads(entry)

                    # Convert to a Bunch instance to get dot access to dictionary keys
                    entry = bunchify(entry['attributes'])

                    # Now, actually produce a JSON response. For simplicity's sake,
                    # assume that users have only one of email or other attributes.
                    self.response.payload.message = 'User found'
                    self.response.payload.username = entry.uid[0]
                    self.response.payload.first_name = entry.givenName[0]
                    self.response.payload.last_name = entry.sn[0]
                    self.response.payload.email = entry.mail[0]

                else:
                    # No business response = no such user found
                    self.response.payload.message = 'No such user'

After creating a REST channel, we can invoke the service from command line, thus confirming that we can offer the directory as a REST service:

$ curl "localhost:11223/api/get-user?user_info=MyOrganization\\MyUser" ; echo
{
    "message": "User found",
    "username": "MyOrganization\\MyUser",
    "first_name": "First",
    "last_name": "Last",
    "email": "address@example.com"
}
$

More resources

➤ Python API integration tutorial
What is a Network Packet Broker? How to automate networks in Python?
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?

More blog posts
Categories: FLOSS Project Planets

You can contribute to KDE with non-C++ code

Planet KDE - Sun, 2024-07-21 20:00
Not everything made by KDE uses C++. This is probably obvious to some people, but it’s worth mentioning nevertheless. And I don’t mean this as just “well duh, KDE uses QtQuick which is written with C++ and QML”. I also don’t mean this as “well duh, Qt has a lot of bindings to other languages”. I mean explicitly “KDE has tools written primarily in certain languages and specialized formats”. Note that I said “specialized formats”.
Categories: FLOSS Project Planets

Seth Michael Larson: YouTube without YouTube Shorts

Planet Python - Sun, 2024-07-21 20:00

Published 2024-07-22 by Seth Larson

Back in October 2023 I wrote about how “For You” wasn't for me, and my dislike for infinitely-scrolling algorithmically-suggested content. In that post I mentioned that I removed YouTube from my phone due to the “YouTube Shorts” feature which functions similar to TikTok.

That didn't last long. I enjoy many creators on YouTube, so eventually I ended up reinstalling the app. But my problem persisted: I would find myself opening up YouTube and occasionally scrolling through Shorts for longer than I wanted to.

What I wanted was YouTube without Shorts, but there didn't seem to be any way to disable them in the app. I was complaining about this to a friend, who out of nowhere mentioned that disabling your “Watch History” will disable the YouTube Shorts infinite feed. My mind was blown! 🤯


No more YouTube Shorts! 🥳

I disabled my Watch History and deleted my existing history and sure enough, no Shorts! (also no home page, which I can also live without). Here's how to do it:

  • Open YouTube app, select “You” (bottom-right on iOS)
  • Settings cog in the top-right
  • Manage all history, wait for the Google Account page to load
  • Select “Turn Off” for YouTube History
  • Select “Manage History” and delete all history
  • Go back to the home page and see that Shorts no longer loads!
  • Might have to restart the app, if I remember correctly I had to?

Lesson learned: complain about hyper-specific problems to your friends more often.

Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.

This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Issues extending Polis and adjusting our Goals

Planet Debian - Sun, 2024-07-21 08:58

Here comes the 3rd article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH.

Enjoy also this read on Guido's work on Polis,
Mike

Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals (this article)
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - Issues extending Polis and adjusting our Goals

After the initial implementation of limited branding support, user feedback and the involvement of a UX designer led to the conclusion that we needed more far-reaching changes to the user interface in order to reduce visual clutter, rearrange and improve UI elements, and provide better integration with the websites in which conversations are embedded.

Challenges when visualizing Data in Polis

Polis visualizes groups using a spatial projection of users based on similarities in voting behavior and places them in two to five groups using a clustering algorithm. During our testing and evaluation users were rarely able to interpret the visualization and often intuitively made incorrect assumptions e.g. by associating the filled area of a group with its significance or size. After consultation with a member of the Multi-Agent Systems (MAS) Group at the University of Groningen we chose to temporarily replace the visualization offered by Polis with simple bar charts representing agreement or disagreement with statements of a group or the majority. We intend to revisit this and explore different forms of visualization at a later point in time.

The different factors playing into the weight attached to statements, which determine the pseudo-random order in which they are presented for voting (“comment routing”), proved difficult to explain to stakeholders and users, and the admission by Polis’ authors of the ad-hoc and heuristic nature of the algorithm used [1] led to the decision to temporarily remove this feature. Instead, statements should be placed into three groups, namely

  1. metadata questions,
  2. seed statements,
  3. and participant statements

Statements should then be sorted by group but in a fully randomized order within the group so that metadata questions would be presented before seed statements which would be presented before participant’s statements. This simpler method was deemed sufficient for the scale of our pilot projects, however we intend to revisit this decision and explore different methods of “comment routing” in cooperation with our scientific partners at a later point in time.

An evaluation of the requirements for implementing mandatory authentication and adding support for additional authentication methods to Polis showed that significant changes to both the administration and participation frontend were needed due to a lack of an abstraction layer or extension mechanism and the current authentication providers being hardcoded in many parts of the code base.

A New Frontend is born: Particiapp

Based on the implementation details of the participation frontend, the invasive nature of the changes required, and the overhead of keeping up with active upstream development, it became clear that a different, more flexible approach to development was needed. This ultimately led to the creation of Particiapp, a new Open Source project providing the building blocks and necessary abstraction layers for rapid prototyping and experimentation with different frontends which are compatible with but independent from Polis.

  1. Small, Christopher T., Bjorkegren, Michael, Erkkilä, Timo, Shaw, Lynette and Megill, Colin (2021). Polis: Scaling deliberation by mapping high dimensional opinion spaces. Recerca. Revista de Pensament i Anàlisi, 26(2), pp. 1-26. 

Categories: FLOSS Project Planets

KDE Gear 24.08 branches created

Planet KDE - Sun, 2024-07-21 06:17
Make sure you commit anything you want to end up in the KDE Gear 24.08
releases to them

Next Dates  
  • July 25, 2024: 24.08 Freeze and Beta (24.07.80) tag & release
  • August  8, 2024: 24.08 RC (24.07.90) Tagging and Release
  • August 15, 2024: 24.08 Tagging
  • August 22, 2024: 24.08 Release

https://community.kde.org/Schedules/KDE_Gear_24.08_Schedule

Categories: FLOSS Project Planets

Russell Coker: SE Linux Policy for Dell Management

Planet Debian - Sun, 2024-07-21 04:57

The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I’ve pasted in the policy. It probably needs some changes for recent software, it was last tested on a PowerEdge T320 and prior to that was used on a PowerEdge R710 both of which are old hardware and use different management software to the recent hardware. One would hope that the recent software would be much better but usually such hope is in vain. I deliberately haven’t submitted this for inclusion in the reference policy because it’s for proprietary software and also it permits many operations that we would prefer not to permit.

The policy is after the break because it’s larger than you want on a Planet feed. But first I’ll give a few selected lines that are bad in a noteworthy way:

  1. sys_admin means the ability to break everything
  2. dac_override means break Unix permissions
  3. mknod means a daemon creates devices due to a lack of udev configuration
  4. sys_rawio means someone didn’t feel like writing a device driver, maintaining a device driver for DKMS is hard and getting a driver accepted upstream requires writing quality code, in any case this is a bad sign.
  5. self:lockdown is being phased out, but used to mean bypassing some integrity protections, that would usually be related to sys_rawio or similar.
  6. dev_rx_raw_memory is bad, reading raw memory allows access to pretty much everything and execute of raw memory is something I can’t imagine a good use for, the Reference Policy doesn’t use this anywhere!
  7. dev_rw_generic_chr_files usually means a lack of udev configuration as udev should do that.
  8. storage_raw_write_fixed_disk shouldn’t be needed for this sort of thing, it doesn’t do anything that involves managing partitions.

Now, without network access or other obvious means of remote control, this level of access, while excessive, isn’t necessarily going to allow bad things to happen due to an outside attack. But if there are bugs in the software there’s nothing to stop it from giving the worst results.

allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)

allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)

The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.


Here is dellsrvadmin.te:

policy_module(dellsrvadmin,1.0.0)

require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}

type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)

type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_t)

type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)

domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)

modutils_domtrans(dellsrvadmin_t)

allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;
allow dell_datamgrd_t event_device_t:chr_file { read write };

allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;

modutils_domtrans(dell_datamgrd_t)
can_exec(dell_datamgrd_t, dmidecode_exec_t)

allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;

kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)
kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)

# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)

corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)

dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)

files_search_tmp(dell_datamgrd_t)
files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)
files_read_usr_files(dell_datamgrd_t)

logging_search_logs(dell_datamgrd_t)
miscfiles_read_localization(dell_datamgrd_t)

storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)

can_exec(mon_local_test_t, dellsrvadmin_exec_t)

allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };

allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };

dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)

allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;
allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;

kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)
kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)

corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)
corecmd_shell_entry_type(dellsrvadmin_t)

files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)

logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)

Here is dellsrvadmin.fc:

/opt/dell/srvadmin/sbin/.*                      --      gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd         --      gen_context(system_u:object_r:dell_datamgrd_t,s0)
/opt/dell/srvadmin/bin/.*                       --      gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?                            gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?          gen_context(system_u:object_r:dellsrvadmin_var_t,s0)

Related posts:

  1. creating a new SE Linux policy module Creating a simple SE Linux policy module is not difficult....
  2. Debian SE Linux policy bug checkmodule -m -o local.mod local.te semodule_package -o local.pp -m local.mod...
  3. New SE Linux Policy for Squeeze I have just uploaded refpolicy version 0.2.20100524-1 to Unstable. This...
Categories: FLOSS Project Planets
