Feeds

Vincent Bernat: Why content providers need IPv6

Planet Debian - Sun, 2024-06-23 17:30

IPv4 is an expensive resource. However, many content providers are still IPv4-only. The most common reason is that IPv4 is here to stay and IPv6 is an additional complexity.[1] This mindset may seem selfish, but there are compelling reasons for a content provider to enable IPv6, even when they have enough IPv4 addresses available for their needs.

Disclaimer

It’s been a while since this article has been in my drafts. I started it when I was working at Shadow, a content provider, while I now work for Free, an internet service provider.

Why do ISPs need IPv6?

Providing a public IPv4 address to each customer is quite costly when each IP address costs US$40 on the market. For fixed access, some consumer ISPs are still providing one IPv4 address per customer.[2] Other ISPs provide, by default, an IPv4 address shared among several customers. For mobile access, most ISPs distribute a shared IPv4 address.

There are several methods to share an IPv4 address:[3]

NAT44
The customer device is given a private IPv4 address, which is translated to a public one by a service provider device. This device needs to maintain a state for each translation.
464XLAT and DS-Lite
The customer device translates the private IPv4 address to an IPv6 address or encapsulates IPv4 traffic in IPv6 packets. The provider device then translates the IPv6 address to a public IPv4 address. It still needs to maintain a state for the NAT64 translation.
Lightweight IPv4 over IPv6, MAP-E, and MAP-T
The customer device encapsulates IPv4 in IPv6 packets or performs a stateless NAT46 translation. The provider device uses a binding table or an algorithmic rule to map IPv6 tunnels to IPv4 addresses and ports. It does not need to maintain a state.
Solutions to share an IPv4 address across several customers. Some of them require the ISP to keep state, some don't.

All these solutions require a translation device in the ISP’s network. This device represents a non-negligible cost in terms of money and reliability. As half of the top 1000 websites support IPv6 and the biggest players can deliver most of their traffic using IPv6,[4] ISPs have a clear path to reduce the cost of translation devices: provide IPv6 by default to their customers.

Why do content providers need IPv6?

Content providers should expose their services over IPv6 primarily to avoid going through the ISP’s translation devices. This doesn’t help users who don’t have IPv6 or users with a non-shared IPv4 address, but it provides a better service for all the others.

Why would the service be better delivered over IPv6 than over IPv4 when a translation device is in the path? There are two main reasons for that:[5]

  1. Translation devices introduce additional latency due to their geographical placement inside the network: it is easier and cheaper to only install these devices at a few points in the network instead of putting them close to the users.

  2. Translation devices are an additional point of failure in the path between the user and the content. They can become overloaded or malfunction. Moreover, as they are not used for the five most visited websites, which serve their traffic over IPv6, the ISPs may not be incentivized to ensure they perform as well as the native IPv6 path.

Looking at Google statistics, half of the users reach Google over IPv6. Moreover, their latency is lower.[6] In the US, all the nationwide mobile providers have IPv6 enabled.

For France, we can refer to the annual ARCEP report: in 2022, 72% of fixed users and 60% of mobile users had IPv6 enabled, with projections of 94% and 88% for 2025. Starting from this projection, since all mobile users go through a network translation device, content providers can deliver a better service for 88% of them by exposing their services over IPv6. If we exclude Orange, which has 40% of the market share on consumer fixed access, enabling IPv6 should positively impact more than 55% of fixed access users.

In conclusion, content providers aiming for the best user experience should expose their services over IPv6. By avoiding translation devices, they can ensure fast and reliable content delivery. This is crucial for latency-sensitive applications, like live streaming, but also for websites in competitive markets, where even slight delays can lead to user disengagement.

  1. A way to limit this complexity is to build IPv6 services and only provide IPv4 through reverse proxies at the edge. ↩︎

  2. In France, this includes non-profit ISPs, like FDN and Milkywan. Additionally, Orange, the previously state-owned telecom provider, supplies non-shared IPv4 addresses. Free also provides a dedicated IPv4 address for customers connected to the point-to-point FTTH access. ↩︎

  3. I use the term NAT instead of the more correct term NAPT. Feel free to do a mental substitution. If you are curious, check RFC 2663. For a survey of the IPv6 transition technologies enumerated here, have a look at RFC 9313. ↩︎

  4. For AS 12322, Google, Netflix, and Meta are delivering 85% of their traffic over IPv6. Also, more than half of our traffic is delivered over IPv6. ↩︎

  5. An additional reason is for fighting abuse: blacklisting an IPv4 address may impact unrelated users who share the same IPv4 as the culprits. ↩︎

  6. IPv6 may not be the sole reason the latency is lower: users with IPv6 generally have a better connection. ↩︎

Categories: FLOSS Project Planets

Better arguments make arguments better

Planet KDE - Sun, 2024-06-23 08:08
Cut through bullshit arguments fast and make project discussions more productive.
Categories: FLOSS Project Planets

Sahil Dhiman: How I Write Blogs - June 2024 Edition

Planet Debian - Sun, 2024-06-23 03:02

I wrote about my blog writing methodology back in April 2021. My writing method has undergone a significant shift since then, so I thought I’d write this update.

New blog topics are added to my note-taking app quite frequently now. Occasionally going through the list, I merge topics, change the order to prioritize certain topics, or simply drop ideas that don’t seem worth a write-up. Because of this, I have the liberty to work on blogs according to my mood. Writing the last one was tiring, so I chose to work on an easy one, i.e. this blog.

Topic decided, everything starts on Etherpad now. Etherpad has this nice font and sync feature, which lets me write from any device of choice. The actual writing usually happens in the morning, right after I wake up. For most topics, I quickly jot down pointers and keep expanding them over the course of multiple days at a leisurely pace. Though sometimes that adds too many pieces to the puzzle and takes additional time to put everything in flow. New pointers keep getting added while I write. Nowadays, pictures too dot my blog, which I rarely used to do earlier. I have also come to believe in using fewer external links, as they break the reader’s flow; if someone is sufficiently motivated to learn more about something, finding useful sources isn’t hard.

Once the first draft comes into being, I run it through LanguageTool for spelling corrections (which are typically many) and to fix grammatical issues. After that, I read the complete write-up in one go for the first time to form a coherent storyline: moving paragraphs around to build structure, adding explainers wherever something new or unexplained is introduced, trimming elaborate sentences, and making amendments wherever required. Another round of LanguageTool follows. All set, I try to space out my final read before publishing, which helps me find additional mistakes or loopholes.

When everything is set, I run hugo to generate the site and rsync everything to the web server. A final git sync closes the publication part.

After a day or two, I come back to read the blog on the website. This entails another round of finding and fixing trivial mistakes. After this, it’s set for good.

Nowadays, in addition to being on my blog, everything is syndicated on Planet FSCI and Planet Debian, which has given it more visibility. As someone who’s a lot into infrastructure and the Internet, I do pay attention to the logs on my server, but as an exercise disconnected from whether the blog is being read or not. More hits on the blog don’t translate into any gratification for me, at least from a writer’s point of view. Occasionally, people do mention my blog, which does flatter me. Four years and nearly a hundred posts later, I still wonder how I kept on writing for this long.

Categories: FLOSS Project Planets

Brett Cannon: My impressions of ReScript

Planet Python - Sat, 2024-06-22 19:35

I maintain a GitHub Action called check-for-changed-files. For the purpose of this blog post what the action does isn't important, but the fact that I authored it originally in TypeScript is. See, one day I tried to update the NPM dependencies. Unfortunately, that update broke everything in a really bad way due to how the libraries I used to access PR details changed and how the TypeScript types changed. I had also gotten tired of updating the NPM dependencies for security concerns I didn't have, since this code was only run in CI by others for their own use (i.e. regex denial-of-service isn't a big concern). As such I was getting close to burning out on the project, as it was nothing but a chore to keep it up-to-date and I wasn't motivated to keep the code up-to-date since TypeScript felt more like a cost than a benefit for such a small code base where I'm the sole maintainer (there's only been one other contributor to the project since the initial commit 4.5 years ago). I converted the code base to JavaScript in hopes of simplifying my life and it went better than I expected, but it still wasn't enough to keep me interested in the project.

And so I did what I needed to in order to be engaged with the project again: I rewrote it in another programming language that could run easily under Node. 😁 I decided I wanted to do the rewrite piecemeal to make sure I could tell quickly whether I was going to like the eventual outcome, rather than doing a complete rewrite from scratch and being unhappy with where I ended up (doing this while on parental leave made me prioritize my spare time immensely, so failing fast was paramount). During my parental leave I learned Gleam because I loved their statement on expectations for community conduct on their homepage, but while it does compile to JavaScript, I realized it works better when JavaScript is used as an escape hatch instead of using Gleam to port an existing code base, and so it wasn't a good fit for this use case.

My next language to attempt the rewrite with was ReScript, thanks to my friend Dusty liking it. One of the first things I liked about the language was that it had a clear migration path from JavaScript to ReScript in 5 easy steps. And since step 1 was "wrap your JavaScript code in %%raw blocks and change nothing" and step 5 was the optional "clean up" step, there were really only 3 main steps (I did have a hiccup with step 1, though, due to a bug not escaping backticks for template literals appropriately, but it was a mostly mechanical change to undo the template literals and switch to string concatenation).

A key thing that drew me to the language is its OCaml history. ReScript can have very strict typing, but ReScript's OCaml background also means there's type inference, so the typing doesn't feel that heavy. ReScript also has a functional programming leaning, which I appreciate.

💡 When people say "ML" for "machine learning", it still throws me, as I instinctively think they are actually referring to "Standard ML".

But having said all of that, ReScript does realize folks will be migrating or working with a preexisting JavaScript code base or libraries, and so it tries to be pragmatic for that situation. For instance, while the language has roots in OCaml, the syntax would feel comfortable to JavaScript developers. While supporting a functional style of programming, the language still has things like if/else and for loops. While the language is strongly typed, ReScript has things like its object type, where the types of the fields can be inferred based on usage, to make it easier to bring over JavaScript objects.

As part of the rewrite I decided to lean in on testing to help make sure things worked as I expected them to. But I ran into an issue where the first 3 testing frameworks I looked into didn't work with ReScript 11 (which came out in January 2024 and is the latest major version as I write this). Luckily the 4th one, rescript-zora, worked without issue (it also happens to be by my friend Dusty, so I was able to ask questions of the author directly 😁; I initially avoided it so I wouldn't pester him about stuff, but I made up for it by contributing back). Since ReScript's community isn't massive, it isn't unexpected to have some delays in projects keeping up with stuff. Luckily the ReScript forum is active, so you can get your questions answered quickly if you get stuck. Aside from this hiccup and the one involving %%raw and template literals, the process was overall rather smooth.

In the end I would say the experience was a good one. I liked the language and transitioning from JavaScript to ReScript went relatively smoothly. As such, I have ported check-for-changed-files over to ReScript permanently in the 1.2.1 release, and hopefully no one noticed the switch. 🤞

Categories: FLOSS Project Planets

KDE PIM Sprint June 2024

Planet KDE - Sat, 2024-06-22 03:00

Last weekend I visited Toulouse for the annual KDE PIM sprint. Besides discussing many topics and getting a lot of work done this being in France also had some culinary benefits.

Photo by Carl Schwan.

Sprint

There are several reports already on Planet KDE about what we did.

And there are also the meeting notes in the wiki.

So instead of repeating that I’ll just focus on a few things I worked on. What I’ll repeat though is their thanks to Étincelle Coworking for providing the venue and to Kévin for organizing everything.

Getting everyone together for a few days is extremely valuable and productive, and your donations to KDE e.V. help to make that possible!

Getting KMime ready for KDE Frameworks

Work on moving more libraries or classes from PIM to Frameworks had been paused for some time while we were doing the transition to Qt 6. But that’s done now, so we can continue.

My main focus for this is KMime, a library for parsing and generating emails. KMime has its origins at the beginning of the century, and while for email-related code that’s actually more on the younger side, it’s mature and optimized in what it does, and just needs a bit of polish to catch up with current standards.

This includes:

  • Cleaning up some legacy API from a time when attachments were sort of an afterthought (to KMime but also the email standards).
  • Making more of the API const-correct so parsing a message doesn’t accidentally modify it.
  • Moving some code out of KMime that isn’t really in scope but just had been put there to solve dependency issues in the past.

While doing this and reviewing and adjusting consumer code we also found some duplicated, inefficient or no longer used code in libraries built on top of KMime that got subsequently cleaned up.

Moving classes to KDE Frameworks

Moving code to Frameworks isn’t just about entire libraries, it can also be about a single class or method. MultiplyingLineEdit is such a case that was requested to move, and similar to KMime needed a couple of cleanups.

Sometimes it’s however also about the other way around, phasing out custom infrastructure that is meanwhile equally or better served by code in Frameworks. Pimcommon::ConfigureImmutableWidgetUtils is an example for that, with KConfigDialogManager being a more comprehensive replacement. Porting the remaining uses also progressed.

Travel

Travelling to Toulouse for the sprint also provided some opportunity for field-testing Itinerary.

  • The new Deutsche Bahn BahnCard “replacement documents” are recognized by Itinerary and got me through ticket checks successfully.
  • For the first time I got correct automatic transfer suggestions between railway stations in Paris (ie. take the direct Metro line rather than some nonsensical long distance train chain), as well as to the venue in Toulouse (ie. recommended to walk rather than taking the Metro for one stop), thanks to Transitous. New and stricter preconditions for automatic transfers also help with improving the quality of the suggestions.

And we also got a few improvements out of this:

  • Added support for creating events from OSM office elements, given the sprint was hosted in the rooms of a co-working space.
  • Company capital as sometimes mentioned in the fine print of French business documents (such as hotel bookings) doesn’t confuse the generic price extractor anymore, so your hotel stay is no longer mistakenly shown to cost 3.000.000 Euro.

The return trip turned out particularly exciting with a rare railway electricity outage, which provided ideas for how to extend Itinerary’s alternative connection search.

Categories: FLOSS Project Planets

This week in KDE: Plasma 6.1 cleanups

Planet KDE - Sat, 2024-06-22 02:38

Plasma 6.1 has been released to good reviews! We’ve spent the week fixing issues reported so far, as always. So far we’re in good shape here, with almost all the big issues fixed already. We’re still tracking a few more, such as cases where triple buffering introduced stuttering, or random QML widgets and System Settings pages failing to launch until Qt’s QML cache folder is cleared (if you do this, please save it first and attach it to the bug report).

So this time, let’s start with the bug fixes:

Bug Fixes

Moving the pointer right after the screen locks to make it unlock immediately no longer just breaks it entirely instead. In addition, made the screen locker more robust against failure in a few more cases (Xaver Hugl, Plasma 6.1.1. Link 1, link 2, and link 3)

Re-fixed the bug of desktop files dragged to another screen disappearing until Plasma was restarted. This also fixed a case where Plasma could crash when dragging files from the desktop to some folders in Dolphin (David Edmundson, Plasma 6.1.1. Link 1 and link 2)

In Plasma’s new zoomed-out Edit Mode, moving non-center-aligned panels to a different screen edge once again works rather than crashing Plasma (Marco Martin, Plasma 6.1.1. Link)

KWin’s Cube effect can once again be opened reliably (David Edmundson, Plasma 6.1.1. Link)

KWin’s Zoom effect and ICC color profiles now get along better (Xaver Hugl, Plasma 6.1.1. Link)

KWin’s Shake Cursor effect now works as expected with every global animation speed, including animations entirely disabled (Vlad Zahorodnii, Plasma 6.1.1. Link)

Fixed a recent visual regression in KWin’s Glide effect (Vlad Zahorodnii, Plasma 6.1.1. Link)

Widgets on the Plasma desktop that are dragged while in the new zoomed-out Edit Mode are now connected to the pointer as expected (Marco Martin, Plasma 6.1.1. Link)

In Discover, the “still looking” busy indicator is no longer visually broken for the first search you make after launching the app (Akseli Lahtinen, Plasma 6.1.1. Link)

Fixed that weird blur glitch when a floating panel de-floats (Marco Martin, KDE Frameworks 6.3.1. Link)

System Settings’ Audio Volume page no longer causes System Settings to crash when you open it for a second time (David Redondo, Qt 6.7.2. Link)

When you drag pinned Task Manager icons to re-arrange them, other icons no longer come along for the ride and ruin everything (Niccolò Venerandi, Qt 6.7.2. Link)

Other bug information of note:

UI Improvements

You can now resize the sidebar and the playlist bar in Elisa to suit your preferences. This is enabled by the recent work to get a better resizable split view control, so expect more of these soon! (Jack Hill, Elisa 24.08.0. Link)

Removed Filelight’s back and forward buttons because they were confusing when paired with the “go up” action (Han Young, Filelight 24.08.0. Link)

A number of countries that use Metric units with the US Letter paper size (e.g. Canada) now get the correct paper size set in System Settings’ Region & Language Page (Han Young, Plasma 6.1.1. Link)

By default, Breeze-themed windows can now be dragged only from their logical and visually distinct header areas, rather than every empty area. This is particularly helpful for control-heavy apps with lots of draggable UI elements, such as Kdenlive. You can of course change this back if you’d like (me: Nate Graham, Plasma 6.2.0. Link)

In Discover, Flatpak runtimes on the Updates page are now shown in a separate “Application Support” category to clarify their purpose (Ivan Tkachenko, Plasma 6.2.0. Link)

Plasma’s Weather widget now tries to guide you away from weather stations provided by wetter.com, which doesn’t even include current temperature data! These weather stations will now only be shown as a fallback if no other better weather stations are available (Ismael Asensio, Plasma 6.2.0. Link)

Plasma’s Bluetooth widget no longer shows devices you’ve blocked (Ivan Tkachenko, Plasma 6.2.0. Link)

In System Monitor, you can now hide a column from the context menu you’ll see when right-clicking on its header (James Graham, Plasma 6.2.0. Link)

Reverted a change made a few months ago to force single-click on Folder View popups in list mode. These are file views and not menus — despite any superficial similarity — so we now treat them like file views and respect your click preference (me: Nate Graham, Plasma 6.2.0. Link)

System Settings’ Quick Settings page is now visible in the sidebar rather than the header, matching other KDE apps (me: Nate Graham, Plasma 6.2.0. Link)

The Get New [thing] dialogs now use a more compact view style by default, improving information density (me: Nate Graham, Frameworks 6.4. Link)

Automation & Systematization

Added some GUI tests for various functionality in KWrite (Antoine Herlicq, link)

Added an autotest to verify screen arrangements after XWayland scales change (Xaver Hugl, link)

Added an autotest to verify that power profiles work as expected (Fushan Wen, link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

The KDE organization has become important in the world, and your time and labor have helped to bring it there! But as we grow, it’s going to be equally important that this stream of labor be made sustainable, which primarily means paying for it. Right now the vast majority of KDE runs on labor not paid for by KDE e.V. (the nonprofit foundation behind KDE, of which I am a board member), and that’s a problem. We’ve taken steps to change this with paid technical contractors — but those steps are small due to growing but still limited financial resources. If you’d like to help change that, consider donating today!

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

Abhijith PA: abhijithpa.me to abhijithpa.in

Planet Debian - Sat, 2024-06-22 02:33

I let go of my domain abhijithpa.me. It was getting expensive and I don’t fancy it anymore. I never actually purchased this domain; it came with a package offer. I then kept it for a couple of years.

I have now bought a new domain abhijithpa.in and pointed everything to it.

So if you are seeing abhijithpa.me with a lot of content, that is not me. It’s either domain squatters or someone impersonating me.

Categories: FLOSS Project Planets

Promet Source: [Study] U.S. Government CMS Preferences and Trends

Planet Drupal - Sat, 2024-06-22 01:47
Takeaway: Government CMS preferences vary dramatically by level and size: while smaller cities favor specialized proprietary solutions, larger entities and States prefer flexible open-source platforms or enterprise solutions. That's why it's important to pick a CMS that fits your government’s context and needs.
Categories: FLOSS Project Planets

Russ Allbery: Review: And the Stars Will Sing

Planet Debian - Sat, 2024-06-22 01:04

Review: And the Stars Will Sing, by Michelle Browne

Series: The Meaning Wars #1
Publisher: Michelle Browne
Copyright: 2012, 2021
Printing: 2021
ASIN: B0075G7GEA
Format: Kindle
Pages: 85

And the Stars Will Sing is a self-published science fiction novella, the first of a (currently) five book series. I believe it may be Browne's first publication, although I don't have a good data source to confirm.

Crystal Weiss is a new graduate from Mars, about to leave the solar system for her first job assignment: installation of a permanent wormhole in the vicinity of Messier 14. Her expertise is the placement calculations. The heavy mathematical lifting is of course done by computers, but humans have to do the mapping and some of the guidance. And the Stars Will Sing is an epistolary novel, told in the form of her letters to her friend Sarah.

I feel bad when I stumble across a book like this. I want to stick with my habit of writing a review of each book I read, but it's one thing to pan a bad book by a famous author and another thing to pick on a self-published novella that I read due to some recommendation or mention whose details I've forgotten. Worse, I think this wasn't even the recommended book; I looked up the author, saw that the first of a series was on sale, and thought "oh, hey, I like epistolary novels and I'm in the mood for some queer space opera."

This book didn't seem that queer (there is a secondary lesbian relationship but the main relationship seemed rather conventional), but I'll get to the romance in a moment.

I was not the reader for this book. There's a reason why most of the books I read are from traditional publishers; I'm too critical of a reader for a lot of early self-published work. It's not that I dislike self-publishing as a concept — many self-published books are excellent and the large publishers have numerous problems — but publishers enforce a quality bar. Inconsistently, unfairly, and by rejecting a lot of good work, but still, they do. I'm fairly sure traditional publishers would have passed on this book; the quality of the writing isn't there yet.

(It's certainly a better book than I could have written! But that's why I'm writing my reviews over in my quiet corner of the Internet and not selling fiction to other people.)

The early chapters aren't too bad, although they have a choppy, cliched style that more writing experience usually smoothes out. The later chapters have more dialogue, enough that I started wondering how Crystal could remember that much dialogue verbatim to put into a letter, and it's not good. All of the characters talk roughly the same (even the aliens), the dialogue felt even more cliched than the rest of the writing, and I started getting distracted by the speech tags.

Crystal comes across as very young, impulsive, and a drama magnet who likes being all up in her coworkers' business. None of these are objective flaws in the book, but I could tell early on that I was going to find her annoying. She has a heavily-foreshadowed enemies-to-lovers thing with one of her male coworkers. Her constant complaining about him at the start of the story was bad enough, but the real problem is that in the very few places where he has more personality than plastic lawn furniture, he's being obnoxious to Crystal. I'm used to being puzzled by a protagonist's choice in love interests, but this one felt less like an odd personality choice and more a lack of writing skill. Even if the relationship is being set up for failure (not true by the end of this book), you've got to help me understand what the protagonist saw in him or was getting out of the relationship.

The plot was so predictable that it ironically surprised me. I was sure that some sort of twist or complication was coming, but no. I will give Browne some credit for writing a slightly more realistic character reaction to violence than most SF authors, but there was nothing in the plot to hold my interest. The world-building was generic science fiction with aliens. It had a few glimmers of promise, but there was some sort of psychic hand-waving involved in siting wormholes that didn't work for me and the plot climax made no sense to me whatsoever.

This is the kind of bad book that I don't want to hold against the writer. Twelve years later and with numerous other novels and novellas under her belt, her writing is probably much better. I do think this book would have benefited from an editor telling her it wasn't good enough for publication yet, but that's not how the Kindle self-publishing world works. Mostly, this is my fault: I half-followed a recommendation into an area of publishing that I know from past experience I should avoid without a solid review from an equally critical reader.

Followed by The Stolen, a two-story collection.

Rating: 3 out of 10

Categories: FLOSS Project Planets

automake @ Savannah: automake 1.16.92 pretest release candidate

GNU Planet! - Fri, 2024-06-21 18:01

automake 1.16.92 pretest release candidate released. Please test if you can, so 1.17 will be as reliable as we can make it. Announcement:
https://lists.gnu.org/archive/html/autotools-announce/2024-06/msg00001.html

Categories: FLOSS Project Planets

Dirk Eddelbuettel: nanotime 0.3.9 on CRAN: Bugfix

Planet Debian - Fri, 2024-06-21 16:52

A quick bug fix release 0.3.9 for our nanotime package is now on CRAN, following up on the 0.3.8 release made this week. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

The 0.3.8 release added an accurate parameter for POSIXct conversions, and it turns out that this did not test as expected on arm64, so we disabled the test on that platform. The NEWS snippet below has the full details.

Changes in version 0.3.9 (2024-06-21)
  • Condition two tests to not run on arm64 (Dirk in #129 fixing #128)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Ned Batchelder: Coverage at a crossroads

Planet Python - Fri, 2024-06-21 08:14

This is an interesting time for coverage.py: I’m trying to make use of new facilities in Python to drastically reduce the execution-time overhead, but it’s raising tricky questions about how coverage should work.

The current situation is a bit involved. I’ll try to explain, but this will get long, with a few sections. Come talk in Discord about anything here, or anything about coverage at all.

How coverage works today

Much of this is discussed in How coverage.py works, but I’ll give a quick overview here to set the stage for the rest of this post.

Trace function

Coverage knows what code is executed because it registers a trace function which gets called for each line of Python execution. This is the source of the overhead: calling a function is expensive. Worse, for statement coverage, we only need one bit of information for each line: was it ever executed? But the trace function will be invoked for every execution of the line, taking time but giving us no new information.
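To make this concrete, here is a minimal sketch of a trace-function-based line collector, using a sys.settrace callback with illustrative names (tracer, executed). It is not coverage.py's actual tracer, which is more involved and implemented in C for speed, but it shows why every line execution costs a function call:

import sys

executed = set()  # (filename, line number) pairs seen so far

def tracer(frame, event, arg):
    # Every executed line fires a "line" event, even lines already recorded,
    # which is where the per-line overhead comes from.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer  # keep tracing inside this frame

sys.settrace(tracer)
# ... run the code being measured ...
sys.settrace(None)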

Arcs

The other thing to know about how coverage works today is arcs. Coverage measures branch coverage by tracking the previous line that was executed, then the trace function can record an arc: the previous line number and the current line number as a pair. Taken all together, these arcs show how execution has moved through the code.

Most arcs are uninteresting. Consider this simple program:

1  print("Hello")
2  print("world")
3  print("bye")

This will result in arcs (1, 2) and (2, 3). Those tell us that lines 1, 2, and 3 were all executed, but nothing more interesting. Lots of arcs are this kind of straight-line information.

But when there are choices in the execution path, arcs tell us about branches taken:

1  a = 1
2  if a == 1:
3      print("a is one!")
4  else:
5      print("a isn't one!")
6  print("Done")

Now we’ll collect these arcs during execution: (1, 2), (2, 3), (3, 6). When coverage.py analyzes the code, it will determine that there were two possible arcs that didn’t happen: (2, 5) and (5, 6).

The set of all possible arcs is used to determine where the branches are. A branch is a line that has more than one possible arc leaving it. In this case, the possible arcs include (2, 3) and (2, 5), so line 2 is a branch, and only one of the possible arcs happened, so line 2 is marked as a partial branch.
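As a rough illustration of that analysis (a hypothetical helper, not coverage.py's internals), you can group the statically possible arcs by starting line and compare them with the arcs that actually ran:

from collections import defaultdict

def classify_branches(possible_arcs, executed_arcs):
    possible = defaultdict(set)
    taken = defaultdict(set)
    for start, end in possible_arcs:
        possible[start].add(end)
    for start, end in executed_arcs:
        taken[start].add(end)
    # A branch is a line with more than one possible next line.
    branches = {line for line, dests in possible.items() if len(dests) > 1}
    # A partial branch is a branch where only some destinations were taken.
    partial = {line for line in branches
               if 0 < len(taken[line]) < len(possible[line])}
    return branches, partial

# The example above: only the (2, 3) side of the branch on line 2 ran.
branches, partial = classify_branches(
    possible_arcs={(1, 2), (2, 3), (2, 5), (3, 6), (5, 6)},
    executed_arcs={(1, 2), (2, 3), (3, 6)},
)
# branches == {2}, partial == {2}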

SlipCover

SlipCover is a completely different implementation of a coverage measurement tool, focused on minimal execution overhead. They’ve accomplished this in a few clever ways, primarily by instrumenting the code to directly announce what is happening rather than using a trace function. Synthetic bytecode or source lines are inserted into your code to record data during execution without using a trace function.

SlipCover’s author (Juan Altmayer Pizzorno) and I have been talking for years about how SlipCover and coverage.py each work, with the ultimate goal to incorporate SlipCover-like techniques into coverage.py. SlipCover is an academic project, so was never meant to be widely used and maintained long-term.

One of the ways that SlipCover reduces overhead is to remove instrumentation once it has served its purpose. After a line has been marked as executed, there is no need to keep that line’s inserted bytecode. The extra tracking code can be removed to avoid its execution overhead.

Instrumenting and un-instrumenting code is complex. With Python 3.12, we might be able to get the best aspects of instrumented code without having to jump through complicated hoops.

Python 3.12: sys.monitoring

Python 3.12 introduced a new way for Python to track execution. The sys.monitoring feature lets you register for events when lines are executed. This is similar to a classic trace function, but the crucial difference is you can disable the event line-by-line. Once a line has reported its execution to coverage.py, that line’s event can be disabled, and the line will run at full speed in the future without the overhead of reporting to coverage. Other lines in the same file will still report their events, and they can each be disabled once they have fired once. This gives us low overhead because lines mostly run at full speed.
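Here is a rough sketch of how the sys.monitoring API can be used for statement coverage. This illustrates the mechanism rather than coverage.py's implementation; the tool name and variable names are made up:

import sys

mon = sys.monitoring
TOOL = mon.COVERAGE_ID     # tool id reserved for coverage-like tools
lines_seen = set()         # (filename, line number) pairs

def line_event(code, line_number):
    lines_seen.add((code.co_filename, line_number))
    # Returning DISABLE turns this event off for this exact line, so it runs
    # at full speed from now on; other lines keep reporting until seen once.
    return mon.DISABLE

mon.use_tool_id(TOOL, "toy-line-coverage")
mon.register_callback(TOOL, mon.events.LINE, line_event)
mon.set_events(TOOL, mon.events.LINE)
# ... run the code being measured ...
mon.set_events(TOOL, mon.events.NO_EVENTS)
mon.free_tool_id(TOOL)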

Coverage.py has supported line coverage in 3.12 since 7.4.0 with only 5% overhead or so.

Unfortunately, branch coverage is a different story. sys.monitoring has branch events, but they are disabled based only on the “from” line, not on the “from/to” pair. In our example above, line 2 could branch to 3 or to 5. When sys.monitoring tells us line 2 branched to 3, we have to keep the event enabled until we get an event announcing line 2 branching to line 5. This means we continue to suffer event overhead during execution of the 2-to-3 branch, plus we have to do our own bookkeeping to recognize when both branch destinations have been seen before we can disable the event.

As a result, branch coverage doesn’t get the same advantages from sys.monitoring as statement coverage.
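For comparison, a sketch of the extra bookkeeping a BRANCH-event approach needs. Here possible_destinations is a hypothetical mapping that would have to come from static analysis of the bytecode (left empty in this sketch); the point is that the event can only be disabled once every possible destination of a branch has been observed:

import sys

mon = sys.monitoring
TOOL = mon.COVERAGE_ID

# Hypothetical: (code object, branch instruction offset) -> set of possible
# destination offsets, produced by static analysis. Left empty here.
possible_destinations = {}
seen = {}  # same keys -> destination offsets observed so far

def branch_event(code, instruction_offset, destination_offset):
    key = (code, instruction_offset)
    seen.setdefault(key, set()).add(destination_offset)
    wanted = possible_destinations.get(key)
    if wanted is not None and seen[key] >= wanted:
        # Only once all destinations have fired can we stop paying for this
        # event; until then every execution of the branch reports.
        return mon.DISABLE

mon.use_tool_id(TOOL, "toy-branch-coverage")
mon.register_callback(TOOL, mon.events.BRANCH, branch_event)
mon.set_events(TOOL, mon.events.BRANCH)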

I’ve been talking to the core Python devs about changing how sys.monitoring works for branches. But even if we come to a workable design, because of Python’s yearly release cycle it wouldn’t be available until 3.14 in October 2025 at the earliest.

Using lines for branches

In SlipCover, Juan came up with an interesting approach to use sys.monitoring line events for measuring branch coverage. SlipCover already had a few ways of instrumenting code, rewriting the code during import to add statements that report execution data. The new idea was to add lines at the destinations of branches. The lines don’t have to do anything. If sys.monitoring reports that the line was executed, then it must mean that the branch to that line happened.

As an example, our code from above would be rewritten to look something like this:

a = 1
if a == 1:                  # line 2
    NO-OP                   # line A, marked as 2->3
    print("a is one!")      # line 3
else:                       
    NO-OP                   # line B, marked as 2->5
    print("a isn't one!")   # line 5
print("Done")

If sys.monitoring reports that line A was executed, we can record a branch from 2 to 3 and disable the event for line A. This reduces the overhead and still lets us later get an event for line B when line 2 branches to line 5.

This seems to give us the best of all the approaches: events can be disabled for each choice from a branch independently, and the inserted lines can be as minimal as possible.

Problems

There are a few problems with adapting this clever approach to coverage.py.

Moving away from arcs

This technique changes the data we collect during execution. We no longer get a (1, 2) arc, for example. Overall, that’s fine because that arc isn’t involved in computing branches anyway. But arcs are used as intermediate data throughout coverage.py, including its extensive test suite. How can we move to arc-less measurement on 3.12+ without refactoring many tests and while still supporting Pythons older than 3.12?

I’ve gotten a start on adapting some test helpers to avoid having to change the tests, so this might not be a big blocker.

Is every multi-arc a branch?

Another problem is how coverage.py determines branches. As I mentioned above, coverage.py statically analyzes code to determine all possible arcs. Any line that could arc to more than one next line is considered a branch. This works great for classic branches like if statements. But what about this code?

1  def func(x):
2      try:
3          if x == 10:
4              print("early return")
5              return
6      finally:
7          print("finally")
8      print("finished")

If you look at line 7, there are two places it could go next. If x is 10, line 7 will return from the function because of the return on line 5. If x is not 10, line 7 will be followed by line 8. Coverage.py’s static analysis understands these possibilities and includes both (7, return) and (7, 8) in the possible arcs, so it considers line 7 a branch. But is it? The conditional here is really on line 3, which is already considered a branch.

I mention this as a problem here because the clever NO-OP rewriting scheme depends on being able to insert a line at a destination that clearly indicates where the branch started from. In this finally clause, where would we put the NO-OP line for the return? The rewriting scheme breaks down if the same destination can be reached from different starting points.

But maybe those cases are exactly the ones that shouldn’t be considered branches in the first place?

Where are we?

I’ve been hacking on this in a branch in the coveragepy repo. It’s a complete mess with print-debugging throughout just to see where this idea could lead.

For measuring performance, we have a cobbled-together benchmark tool in the benchmark directory of the repo. I could use help making it more repeatable and useful if you are interested.

I’m looking for thoughts about how to resolve some of these issues, and how to push forward on a faster coverage.py. There’s now a #coverage-py channel in the Python Discord where I’d welcome feedback or ideas about any of this, or anything to do with coverage.py.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #209: Python's Command-Line Utilities & Music Information Retrieval Tools

Planet Python - Fri, 2024-06-21 08:00

What are the built-in Python modules that can work as useful command-line tools? How can these tools add more functionality to Windows machines? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.

Categories: FLOSS Project Planets

Web Review, Week 2024-25

Planet KDE - Fri, 2024-06-21 07:55

Let’s go for my web review for the week 2024-25.

Proton is transitioning towards a non-profit structure

Tags: tech, internet, ethics, privacy

Very interesting move. I wish them well!

https://proton.me/blog/proton-non-profit-foundation


Licensing teams will target unwitting Oracle Java users • The Register

Tags: tech, java

Oracle doing Oracle things I guess… The surprising bit to me is the fact that so many people still seem to use Java SE while there are other excellent alternatives.

https://www.theregister.com/2024/06/20/oracle_java_licence_teams/


Microsoft Refused to Fix Flaw Years Before SolarWinds Hack — ProPublica

Tags: tech, microsoft, security

A deep dive into the events which led to the SolarWinds breaches. The responsibility from Microsoft as an organization is staggering. Their handling of security matters massively failed once more. I don’t get how governmental agencies or other companies can still turn to Microsoft with sensitive data.

https://www.propublica.org/article/microsoft-solarwinds-golden-saml-data-breach-russian-hackers


Microsoft delays Recall again, won’t debut it with new Copilot+ PCs after all | Ars Technica

Tags: tech, microsoft, security

Very unsurprising, the harm is probably done though. They’ll have to work hard for their reputation to recover (even though it was probably low already).

https://arstechnica.com/gadgets/2024/06/microsoft-delays-data-scraping-recall-feature-again-commits-to-public-beta-test/


Edward Snowden Says OpenAI Just Performed a “Calculated Betrayal of the Rights of Every Person on Earth”

Tags: tech, gpt, surveillance

It was already hard to trust this company, but now… that clearly gives an idea of the kind of monetization channels they’re contemplating.

https://futurism.com/the-byte/snowden-openai-calculated-betrayal


GitHub Copilot Chat: From Prompt Injection to Data Exfiltration · Embrace The Red

Tags: tech, ai, machine-learning, gpt, copilot, security, privacy

The creative ways to exfiltrate data from chat systems built with LLMs…

https://embracethered.com/blog/posts/2024/github-copilot-chat-prompt-injection-data-exfiltration/


I Will Fucking Piledrive You If You Mention AI Again — Ludicity

Tags: tech, ai, machine-learning, gpt, data-science, criticism, funny

OK, this is a rant about the state of the market and people drinking kool-aid. A bit long but I found it funny and well deserved at times.

https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/


Block AI training on a web site

Tags: tech, ai, machine-learning, gpt, self-hosting, criticism

Since there are ways to offset the plagiarism a bit, let’s do it. Obviously it’s not perfect but that’s a start.

https://blog.zgp.org/block-ai-training-on-a-web-site/


How free software hijacked Philip Hazel’s life

Tags: tech, foss, maintenance, life, history

Very interesting piece… shows how someone can end up maintaining something essential for decades. This is a lesson for us all.

https://lwn.net/SubscriberLink/978463/be23210c163a2107/


DDoS attacks can threaten the independent Internet

Tags: tech, networking, security, self-hosting, internet

This is indeed a real concern… with no proper solution in sight.

https://www.macchaffee.com/blog/2024/ddos-attacks/


We don’t know what’s happening on our networks

Tags: tech, networking, security

On the peculiarities of running a network for a university… this is an interesting way to frame it as basically being an ISP with benefits.

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/OurNetworkTrafficIsUnknown


Why you shouldn’t parse the output of ls - Greg’s Wiki

Tags: tech, shell, scripting

This is indeed an easy mistake to make. It’s better avoided.

https://mywiki.wooledge.org/ParsingLs


Versioning FreeCAD files with git - lambda.cx blog

Tags: tech, tools, git, cad

Interesting trick for a zip based format containing mostly text.

https://blog.lambda.cx/posts/freecad-and-git/


Joining Strings in Python: A “Huh” Moment - Veronica Writes

Tags: tech, python, memory, performance

Interesting dive into how join() and generators behave in CPython.

https://berglyd.net/blog/2024/06/joining-strings-in-python/


Understanding a Python closure oddity

Tags: tech, programming, python

That’s what happens when references are half hidden in a language. You think each closure gets a different copy, but in fact they all refer to the same object.

https://utcc.utoronto.ca/~cks/space/blog/python/UnderstandingClosureOddity


Regular JSON – Neil Madden

Tags: tech, json, security

JSON, its grammar and the security implications. The approach of looking at a restricted subset is interesting.

https://neilmadden.blog/2023/05/31/regular-json/


Demystifying Rust’s ? Operator

Tags: tech, programming, rust

Ever wondered how this operator is implemented in Rust? It’s not that complicated.

https://blog.sulami.xyz/posts/demystifying-rusts-questionmark-operator/


I’ve Stopped Using Box Plots. Should You? | Nightingale

Tags: tech, data-visualization

Why box plots are hard to grasp and probably badly designed. There are good alternatives out there though.

https://nightingaledvs.com/ive-stopped-using-box-plots-should-you/


When To Write a Simulator

Tags: tech, complexity, probability, simulation

Some problems are indeed tackled faster by having a simulation allowing to explore potential solutions. It’s tempting to go very formal and theoretical but it’d require more effort and be more error prone.

https://sirupsen.com/napkin/problem-16-simulation


Major version numbers may not be sacred, but backwards compatibility is

Tags: tech, library, api, maintenance

Good musing about major version numbers and backward compatibility. It is indeed important to communicate breaking changes properly and to not have those too often.

https://blog.cessen.com/post/2022_07_09_major_version_numbers_may_not_be_sacred


What’s hidden behind “just implementation details” | nicole@web

Tags: tech, software, programming, work, complexity

It might not look like a lot from the outside, but “just implementation details” in fact hides quite some work and complexity.

https://ntietz.com/blog/whats-behind-just-implementation/


A Note on Essential Complexity

Tags: tech, software, organization, complexity

Very nice piece about the various types of complexities we encounter in our trade, and what we can or should do about it.

https://olano.dev/blog/a-note-on-essential-complexity


Simple sabotage for software · Erik Bernhardsson

Tags: tech, software, management

This is a funny pretense, and yet… If any of this reminds you of a real context, those would be paper cuts. Have enough of those and indeed the organization might grind to a halt.

https://erikbern.com/2023/12/13/simple-sabotage-for-software.html


Never, Sometimes, Always - lukeplant.me.uk

Tags: tech, requirements, software, product-management

This is indeed a good way to classify events probability in requirements. It definitely impacts how you handle them in software.

https://lukeplant.me.uk/blog/posts/never-sometimes-always/


Start Presentations on the Second Slide - by Kent Beck

Tags: tech, communication, talk

Nice trick, definitely should use it more often.

https://tidyfirst.substack.com/p/start-presentations-on-the-second


On Ultra-Processed Content - Cal Newport

Tags: tech, information, social-media, criticism

Indeed the analogy with “ultra-processed food” is an interesting one in the information context.

https://calnewport.com/on-ultra-processed-content/


Bye for now!

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending June 21st, 2024

Planet Drupal - Fri, 2024-06-21 07:54

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

Bits from Debian: Looking for the artwork for Trixie the next Debian release

Planet Debian - Fri, 2024-06-21 06:00

Each release of Debian has a shiny new theme, which is visible on the boot screen, the login screen and, most prominently, on the desktop wallpaper. Debian plans to release Trixie, the next version, next year. As ever, we need your help in creating its theme! You have the opportunity to design a theme that will inspire thousands of people while they work on their Debian systems.

For the most up to date details, please refer to the wiki.

We would also like to take this opportunity to thank Juliette Taka Belin for doing the Emerald theme for bookworm.

The deadline for submissions is: 2024-09-19

The artwork is usually picked based on which themes look the most:

  • "Debian": admittedly not the most defined concept, since everyone has their own take on what Debian means to them.
  • "plausible to integrate without patching core software": as much as we love some of the insanely hot looking themes, some would require heavy GTK+ theming and patching GDM/GNOME.
  • "clean / well designed": without becoming something that gets annoying to look at a year down the road. Examples of good themes include Joy, Lines, Softwaves and futurePrototype.

If you'd like more information or details, please post to the Debian Desktop mailing list.

Categories: FLOSS Project Planets

mark.ie: A bash script to set up Drupal for local development using DDEV

Planet Drupal - Fri, 2024-06-21 05:45

Last week I wrote about how to set up Drupal for core contributing using DDEV. This week I decided to write a bash script so I wouldn't have to remember what I did; it would "just work".

Categories: FLOSS Project Planets
