Feeds
2024 End-of-Year Review: Open Source AI Definition v1.0
The release of version 1.0 of the Open Source AI Definition (OSAID) marks an important milestone on a journey to ensure that AI systems are innovative and aligned with the foundational principles of Open Source: the freedoms to use, study, modify and share.
Drafting a definition through collaboration
The OSAID is a testament to the power of global collaboration. Over the past two years, OSI convened a coalition of stakeholders—developers, data scientists, legal experts, policymakers and end users from all over the world. This diverse group coalesced through in-person workshops, online town halls and intensive co-design sessions to craft version 1.0 of the definition.
Key milestones of 2024
System Analysis: As part of the co-design process, working groups were formed to discuss which AI system components should be required to satisfy the four freedoms for AI. This included assessing how data, models, training methods and legal agreements adhere to Open Source principles. The analysis provided invaluable insights into the gaps that exist in current AI practices and outlined actionable steps to bridge these gaps. It also helped refine the OSAID’s criteria, ensuring they remain both practical and comprehensive.
System Evaluation: Several AI systems were assessed against the OSAID’s criteria. While models like OLMo (AI2), Pythia (Eleuther AI), CrystalCoder (LLM360) and T5 (Google) met the requirements, others like LLaMA2 (Meta), Phi-2 (Microsoft), Mixtral (Mistral) and Grok (X/Twitter) fell short, spotlighting the critical need for transparent frameworks in AI development. Other models such as BLOOM (BigScience), Starcoder2 (BigCode) and Falcon (TII) would pass if they changed their license. This evaluation process also revealed areas where certain models could improve to better align with Open Source standards, demonstrating the OSAID’s role as a constructive guide for future developments.
Stable Release of OSAID 1.0: After extensive global consultation, OSI released the first stable version of the definition at the All Things Open conference in Raleigh, NC. This marked the culmination of two years of dialogue, research and iteration. The stable release provides a comprehensive framework to evaluate AI systems against the core principles of openness.
Global Endorsements: The OSAID has garnered endorsements from over 20 organizations, including Mozilla Foundation, Eleuther AI, CommonCrawl Foundation and the Eclipse Foundation, alongside support from more than 100 individuals. These endorsements validate the OSAID’s importance and its potential to shape the future of AI development.
Events and conferences
Throughout 2024, OSI actively participated in 23 conferences around the world to engage with diverse communities. Highlights include FOSDEM (February – Brussels), Columbia Convening on openness and AI (February – New York), Open Source Summit NA (April – Seattle), PyCon (May – Pittsburgh), AI_Dev Europe (June – Paris), OSPOs for Good (July – New York), KubeCon + AI_dev Hong Kong (August – Hong Kong), Open Source Congress (August – Beijing), Deep Learning Indaba (September – Dakar), India FOSS (September – Bengaluru), Open Source Summit Europe (September – Vienna), Nerdearla (September – Buenos Aires), Training Data in OSAI (October – Paris) and All Things Open (October – Raleigh). A full timeline of in-person and online events is available here.
Publications and voices of the OSAID
The OSI published over 60 blog posts about Open Source AI in 2024. One of the highlights is the Voices of the OSAID series, which ran stories about a few of the people involved in the Open Source AI Definition co-design process, featuring 10 volunteers who have helped shape and continue to shape the definition. These stories highlight the diversity and passion of the community, bringing a human element to the often technical discussions around Open Source and AI.
Press coverage
The work around the Open Source AI Definition was cited over 180 times in the press worldwide, educating readers and countering misinformation. Our work was featured in The New York Times, The Verge, TechCrunch, ZDNET, InfoWorld, Ars Technica, IEEE Spectrum, and MIT Technology Review, among other top media outlets.
Looking ahead
The release of OSAID 1.0 is not the end but the beginning of a new chapter. As we transition into 2025, OSI remains committed to continuing regular updates to the definition and evaluating AI systems and licenses to ensure alignment with Open Source principles.
As the Open Source community moves forward, your involvement is more critical than ever. We encourage more organizations and individuals to endorse and implement the OSAID. By broadening its reach, OSI aims to establish the OSAID as the global benchmark for open AI systems. Together, we can ensure that AI remains a tool for permissionless innovation.
OSI extends its deepest gratitude to the sponsors, volunteers and participants who made 2024 a banner year for Open Source AI. Let’s continue to build a future where technology serves everyone, everywhere. As we celebrate this year’s accomplishments, we look forward to what we can achieve together in 2025 and beyond. Please consider donating or sponsoring the OSI.
One more striped wallpaper
I recently saw one of my old branded “stripes” wallpapers in a screenshot of FreeBSD by someone on X, and that triggered me to make a new wallpaper in a similar style.
There was a call for artwork for the next Debian release – Trixie, and I made a modified version of one of my old wallpapers for it. As it was not chosen to be the default in Trixie, I decided to post it here for people who might like it.
It is, like all my wallpapers, a calm non-distracting one. (it is much prettier full-4k-size than in the thumbnail below)
Trixie Tracks
If you like it, you can download it from Debian’s Wiki – in 1920x1080 and 4k versions. There is also a version with the Debian logo there for inspiration if you want to create a custom distribution-branded one.
You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.
Python Insider: Python 3.14.0 alpha 3 is out
O Alpha 3, O Alpha 3, how lovely are your branches!
https://www.python.org/downloads/release/python-3140a3/
This is an early developer preview of Python 3.14
Major new features of the 3.14 series, compared to 3.13
Python 3.14 is still in development. This release, 3.14.0a3, is the third of seven planned alpha releases.
Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.
During the alpha phase, features may be added up until the start of the beta phase (2025-05-06) and, if necessary, may be modified or deleted up until the release candidate phase (2025-07-22). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Many new features for Python 3.14 are still being planned and written. Among the major new features and changes so far:
- PEP 649: deferred evaluation of annotations (a short sketch of the new behavior follows this list)
- PEP 741: Python configuration C API
- PEP 761: Python 3.14 and onwards no longer provides PGP signatures for release artifacts. Instead, Sigstore is recommended for verifiers.
- Improved error messages
- (Hey, fellow core developer, if a feature you find important is missing from this list, let Hugo know.)
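As a minimal sketch of what PEP 649's deferred evaluation of annotations enables (run under a 3.14 alpha): annotations are no longer evaluated when a function or class is defined, so bare forward references work without string quotes. The class below is invented purely for illustration:

class Node:
    # With deferred evaluation, this annotation is not evaluated at
    # definition time, so referring to Node itself needs no quotes.
    def link(self, other: Node) -> None:
        self.next = other

# The annotation is only evaluated when it is actually inspected,
# by which point Node exists:
print(Node.link.__annotations__)
# {'other': <class '__main__.Node'>, 'return': None}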
The next pre-release of Python 3.14 will be 3.14.0a4, currently scheduled for 2025-01-14.
More resources
- Online documentation
- PEP 745, 3.14 Release Schedule
- Report bugs at https://github.com/python/cpython/issues
- Help fund Python and its community
A mince pie is a small, round covered tart filled with “mincemeat”, usually eaten during the Christmas season – the UK consumes some 800 million each Christmas. Mincemeat is a mixture of things like apple, dried fruits, candied peel and spices, and originally would have contained meat chopped small, but rarely nowadays. They are often served warm with brandy butter.
According to the Oxford English Dictionary, the earliest mention of Christmas mince pies is by Thomas Dekker, writing in the aftermath of the 1603 London plague, in Newes from Graues-end: Sent to Nobody (1604):
Ten thousand in London swore to feast their neighbors with nothing but plum-porredge, and mince-pyes all Christmas.
Here’s a meaty recipe from Rare and Excellent Receipts, Experienc’d and Taught by Mrs Mary Tillinghast and now Printed for the Use of her Scholars Only (1678):
- How to make Mince-pies.
To every pound of Meat, take two pound of beef Suet, a pound of Corrants, and a quarter of an Ounce of Cinnamon, one Nutmeg, a little beaten Mace, some beaten Cloves, a little Sack & Rose-water, two large Pippins, some Orange and Lemon peel cut very thin, and shred very small, a few beaten Carraway-seeds, if you love them the Juyce of half a Lemon squez’d into this quantity of meat; for Sugar, sweeten it to your relish; then mix all these together and fill your Pie. The best meat for Pies is Neats-Tongues, or a leg of Veal; you may make them of a leg of Mutton if you please; the meat must be parboyl’d if you do not spend it presently; but if it be for present use, you may do it raw, and the Pies will be the better.
Enjoy the new release
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from a snowy and slippery Helsinki,
Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa
Gunnar Wolf: The science of detecting LLM-generated text
While artificial intelligence (AI) applications for natural language processing (NLP) are no longer something new or unexpected, nobody can deny the revolution and hype that started, in late 2022, with the announcement of the first public version of ChatGPT. By then, synthetic translation was well established and regularly used, many chatbots had started attending users’ requests on different websites, voice recognition personal assistants such as Alexa and Siri had been widely deployed, and complaints of news sites filling their space with AI-generated articles were already commonplace. However, the ease of prompting ChatGPT or other large language models (LLMs) and getting extensive answers–its text generation quality is so high that it is often hard to discern whether a given text was written by an LLM or by a human–has sparked significant concern in many different fields. This article was written to present and compare the current approaches to detecting human- or LLM-authorship in texts.
The article presents several different ways LLM-generated text can be detected. The first, and main, taxonomy followed by the authors is whether the detection can be done aided by the LLM’s own functions (“white-box detection”) or only by evaluating the generated text via a public application programming interface (API) (“black-box detection”).
For black-box detection, the authors suggest training a classifier to discern the origin of a given text. Although this works at first, this task is doomed from its onset to be highly vulnerable to new LLMs generating text that will not follow the same patterns, and thus will probably evade recognition. The authors report that human evaluators find human-authored text to be more emotional and less objective, and use grammar to indicate the tone of the sentiment that should be used when reading the text–a trait that has not been picked up by LLMs yet. Human-authored text also tends to have higher sentence-level coherence, with less term repetition in a given paragraph. The frequency distribution for more and less common words is much more homogeneous in LLM-generated texts than in human-written ones.
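The article does not prescribe a specific implementation; the following is only a minimal sketch of the classifier idea using scikit-learn, where the sample texts, feature choices and model are placeholders rather than anything the authors describe:

# Minimal black-box detection sketch: train a classifier on labeled
# human- vs. LLM-written texts (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["I wrote this note by hand after lunch."]      # placeholder samples
llm_texts = ["As an AI language model, I can certainly help."]  # placeholder samples

texts = human_texts + llm_texts
labels = [0] * len(human_texts) + [1] * len(llm_texts)

# Character n-grams pick up on stylistic regularities such as term
# repetition and word-frequency distributions mentioned above.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Probability that a new text is LLM-generated, according to this toy model.
print(detector.predict_proba(["Some new text to score"])[0][1])

As the article notes, such a classifier is only as good as its training data and tends to break down on text from newer models.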
White-box detection includes strategies whereby the LLMs will cooperate in identifying themselves in ways that are not obvious to the casual reader. This can include watermarking, be it rule based or neural based; in this case, both processes become a case of steganography, as the involvement of a LLM is explicitly hidden and spread through the full generated text, aiming at having a low detectability and high recoverability even when parts of the text are edited.
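The article surveys such schemes rather than giving code. The toy sketch below only illustrates the detection side of a rule-based "green list" idea on whitespace tokens, with a keyed hash standing in for the vocabulary partition and sampling bias a real scheme would apply inside the model:

# Toy sketch of rule-based watermark detection; not from the article.
import hashlib

def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
    # A keyed hash of (previous token, candidate token) decides the partition.
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "secret") -> float:
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

# Watermarked generations would show a green fraction well above 0.5;
# unwatermarked human text should hover around 0.5.
print(green_fraction("some sample text to score for the watermark"))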
The article closes by listing the authors’ concerns about all of the above-mentioned technologies. Detecting an LLM, be it with or without the collaboration of the LLM’s designers, is more of an art than a science, and methods deemed as robust today will not last forever. We also cannot assume that LLMs will continue to be dominated by the same core players; LLM technology has been deeply studied, and good LLM engines are available as free/open-source software, so users needing to do so can readily modify their behavior. This article presents itself as merely a survey of methods available today, while also acknowledging the rapid progress in the field. It is timely and interesting, and easy to follow for the informed reader coming from a different subfield.
Specbee: Off-page SEO explained - How to strengthen your website’s authority
Improvements to Mozilla’s Searchfox Code Browser
Mozilla is the maker of the famous Firefox web browser and the birthplace of the likes of Rust and Servo (read more about Embedding the Servo Web Engine in Qt).
Firefox is a huge, multi-platform, multi-language project with 21 million lines of code back in 2020, according to their own blog post. Navigating in projects like those is always a challenge, especially at the cross-language boundaries and in platform-specific code.
To improve working with the Firefox code-base, Mozilla hosts an online code browser tailored for Firefox called Searchfox. Searchfox analyzes C++, JavaScript, various IDLs (interface definition languages), and Rust source code and makes them all browsable from a single interface with full-text search, semantic search, code navigation, test coverage report, and git blame support. It’s the combination of a number of projects working together, both internal to Mozilla (like their Clang plugin for C++ analysis) and external (such as rust-analyzer).
It takes a whole repository in and separately indexes C++, Rust, JavaScript and now Java and Kotlin source code. All those analyses are then merged together across platforms, before running a cross-reference step and building the final index used by the web front-end available at searchfox.org.
Mozilla asked KDAB to help them add Java and Kotlin support to Searchfox in preparation for the merge of Firefox for Android into the main mozilla-central repository, and to enhance its C++ support with macro expansions. Let’s dive into the details of those tasks.
Java/Kotlin Support
Mozilla merged the Firefox for Android source code into the main mozilla-central repository that Searchfox indexes. To add support for that new Java and Kotlin code to Searchfox, we reused open-source tooling built by Sourcegraph around the SemanticDB and SCIP code indexing formats. (Many thanks to them!)
Sourcegraph’s semanticdb-javac and semanticdb-kotlinc compiler plugins are integrated into Firefox’s CI system to export SemanticDB artifacts. The Searchfox indexer fetches those SemanticDB files and turns them into a SCIP index, using scip-semanticdb. That SCIP index is then consumed by the existing Searchfox-internal scip-indexer tool.
In the process, a couple of upstream contributions were made to rust-analyzer (which also emits SCIP data) and scip-semanticdb.
A few examples of Searchfox at work:
- Searching for the Java class: org.mozilla.geckoview.GeckoSession$PromptDelegate$AutocompleteRequest shows the definition, a superclass, some uses in Java source code and some uses in Kotlin tests.
- Searching for the Java interface method: org.mozilla.geckoview.Autocomplete$StorageDelegate$onAddressFetch shows the definition, a couple of users, and a couple of implementers across Java and Kotlin code.
- Querying the callers of a method with up to 2 indirections: calls-to:’org::mozilla::geckoview::Autofill::Session::getDefaultDimensions’ depth:2
If you want to dive into more details, see the feature request on Bugzilla, the implementation and further discussion on GitHub and the release announcement on the mozilla dev-platform mailing list.
Java/C++ Cross-language Support
GeckoView is an Android wrapper around Gecko, the Firefox web engine. It extensively uses cross-language calls between Java and C++.
Searchfox already had support for cross-language interfaces, thanks to its IDL support. We built on top of that to support direct cross-language calls between Java and C++.
First, we identified the different ways the C++ and Java code interact and call each other. There are three ways Java methods marked with the native keyword call into C++:
- Case A1: By default, the JVM will search for a matching C function to call based on its name (a minimal standalone JNI sketch of this convention follows these two lists). For instance, calling org.mozilla.gecko.mozglue.GeckoLoader.nativeRun from Java will call Java_org_mozilla_gecko_mozglue_GeckoLoader_nativeRun on the C++ side.
- Case A2: This behavior can be overridden at runtime by calling the JNIEnv::RegisterNatives function on the C++ side to point at another function.
- Case A3: GeckoView has a code generator that looks for Java items decorated with the @WrapForJNI and native annotations and generates a C++ class template meant to be used through the Curiously Recurring Template Pattern. This template provides an Init static member function that does the right JNIEnv::RegisterNatives calls to bind the Java methods to the implementing C++ class’s member functions.
We also identified two ways the C++ code calls Java methods:
- Case B1: directly with JNIEnv::Call… functions.
- Case B2: GeckoView’s code generator also looks for Java methods marked with @WrapForJNI (without the native keyword this time) and generates a C++ wrapper class and member functions with the right JNIEnv::Call… calls.
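To make case A1's naming convention concrete, here is a minimal, self-contained JNI sketch. The package, class and library names are invented for illustration and are not taken from the Firefox code base:

// Greeter.java (hypothetical example)
package org.example;

public class Greeter {
    // Declared native: the implementation lives in C/C++.
    public static native String greet(String name);

    static {
        System.loadLibrary("greeter"); // loads libgreeter.so / greeter.dll
    }

    public static void main(String[] args) {
        System.out.println(greet("Searchfox"));
    }
}

// greeter.c (hypothetical example)
// Case A1: the JVM resolves the native method purely by name, so the
// exported symbol must follow the Java_<package>_<class>_<method> pattern.
#include <jni.h>

JNIEXPORT jstring JNICALL
Java_org_example_Greeter_greet(JNIEnv *env, jclass clazz, jstring name) {
    // Keep the sketch simple: ignore the argument and return a constant.
    return (*env)->NewStringUTF(env, "Hello from C");
}

For case A2, the C++ side would instead call JNIEnv::RegisterNatives at runtime to bind the same Java declaration to an arbitrarily named function, which is why that case cannot be recognized from the symbol name alone.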
Only the C++ side has the complete view of the bindings; so that’s where we decided to extract the information from, by extending Mozilla’s existing Clang plugin.
First, we defined custom C++ annotations bound_as and binding_to that the clang plugin transforms into the right format for the cross-reference analysis. This means we can manually set the binding information:
class __attribute__((annotate("binding_to", "jvm", "class", "S_jvm_sample/Jni#"))) CallingJavaFromCpp {
  __attribute__((annotate("binding_to", "jvm", "method", "S_jvm_sample/Jni#javaStaticMethod().")))
  static void javaStaticMethod() {
    // Wrapper code
  }

  __attribute__((annotate("binding_to", "jvm", "method", "S_jvm_sample/Jni#javaMethod().")))
  void javaMethod() {
    // Wrapper code
  }

  __attribute__((annotate("binding_to", "jvm", "getter", "S_jvm_sample/Jni#javaField.")))
  int javaField() {
    // Wrapper code
    return 0;
  }

  __attribute__((annotate("binding_to", "jvm", "setter", "S_jvm_sample/Jni#javaField.")))
  void javaField(int) {
    // Wrapper code
  }

  __attribute__((annotate("binding_to", "jvm", "const", "S_jvm_sample/Jni#javaConst.")))
  static constexpr int javaConst = 5;
};

class __attribute__((annotate("bound_as", "jvm", "class", "S_jvm_sample/Jni#"))) CallingCppFromJava {
  __attribute__((annotate("bound_as", "jvm", "method", "S_jvm_sample/Jni#nativeStaticMethod().")))
  static void nativeStaticMethod() {
    // Real code
  }

  __attribute__((annotate("bound_as", "jvm", "method", "S_jvm_sample/Jni#nativeMethod().")))
  void nativeMethod() {
    // Real code
  }
};

(This example is, in fact, extracted from our test suite, jni.cpp vs Jni.java.)
Then, we wrote some heuristics that try to identify cases A1 (C functions named Java_…), A3 and B2 (C++ code generated from @WrapForJNI decorators) and automatically generate these annotations. Cases A2 and B1 (manually calling JNIEnv::RegisterNatives or JNIEnv::Call… functions) are rare enough in the Firefox code base and impossible to recognize reliably, so it was decided not to cover them for now. Developers who wish to declare such bindings can manually annotate them.
After this point, we used Searchfox’s existing analysis JSON format and mostly re-used what was already available from IDL support. When triggering the context menu for a binding wrapper or bound function, the definitions in both languages are made available, with “Go to” actions that jump over the generally irrelevant binding internals.
The search results also display both sides of the bridge, for instance:
- searching for the mozilla::widget::GeckoViewSupport::Open C++ member function links to its Java binding org.mozilla.geckoview.GeckoSession$Window.open.
- searching for the org.mozilla.geckoview.GeckoSession.getCompositorFromNative Java method links to its generated C++ binding mozilla::java::GeckoSession::GetCompositor.
If you want to dive into more details, see the feature request and detailed problem analysis on Bugzilla, the implementation and further discussion on GitHub, and the release announcement on the Mozilla dev-platform mailing list.
Displaying Interactive Macro Expansions
Aside from this Java/Kotlin-related work, we also added support for displaying and interacting with macro expansions. This was inspired by KDAB’s own codebrowser.dev, but improves it to:
- Display all expansion variants, if they differ across platforms or by definition
- Make macros fully indexed and interactive
This work mainly happened in the Mozsearch Clang plugin to extract macro expansions during the pre-processing stage and index them with the rest of the top-level code.
Again, if you want more details, the feature request is available on Bugzilla and the implementation and further technical discussion is on GitHub.
Summary
Because of the many technologies it makes use of, from compiler plugins and code analyzers written in many languages, to a web front-end written using the usual HTML/CSS/JS, by way of custom tooling and scripts in Rust, Python and Bash, Searchfox is a small but complex and really interesting project to work on. KDAB successfully added Java/Kotlin code indexing, including analyzing their C++ bindings, and are starting to improve Searchfox’s C++ support itself, first with fully-indexed macro expansions and next with improved templates support.
About KDAB
If you like this article and want to read similar material, consider subscribing via our RSS feed.
Subscribe to KDAB TV for similar informative short video content.
KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.
The post Improvements to Mozilla’s Searchfox Code Browser appeared first on KDAB.
Russ Allbery: Review: Iris Kelly Doesn't Date
Review: Iris Kelly Doesn't Date, by Ashley Herring Blake
Series: Bright Falls #3
Publisher: Berkley Romance
Copyright: October 2023
ISBN: 0-593-55058-7
Format: Kindle
Pages: 381

Iris Kelly Doesn't Date is a sapphic romance novel (probably a romantic comedy, although I'm bad at romance subgenres). It is the third book in the Bright Falls series. In the romance style, it has a new set of protagonists, but the protagonists of the previous books appear as supporting characters and reading this will spoil the previous books.
Among the friend group we were introduced to in Delilah Green Doesn't Care, Iris was the irrepressible loudmouth. She's bad at secrets, good at saying whatever is on her mind, and has zero desire to either get married or have children. After one of the side plots of Astrid Parker Doesn't Fail, she has sworn off dating entirely.
Iris is also now a romance novelist. Her paper store didn't get enough foot traffic to justify staying open, so she switched her planner business to online only and wrote a romance novel that was good enough to get a two-book deal. Now she needs to write a second book and she has absolutely nothing. Her own avoidance of romantic situations is not helping, but neither is her meddling family who are convinced her choices about marriage and family can be overturned with sufficient pestering. She desperately needs to shake up her life, get out of her creative rut, and do something new. Failing that, she'll settle for meeting someone in a bar and having some fun.
Stevie is a barista and actress living in Portland. Six months ago, she broke up with Adri, her creative partner, girlfriend of six years, and the first person with whom she had a serious relationship. More precisely, Adri broke up with her. They're still friends, truly, even though that friendship is being seriously strained by Adri dating Vanessa, another member of their small and close-knit friend group. Stevie has occasionally-crippling anxiety, not much luck in finding real acting roles in Portland, and a desperate desire to not make waves. Ren, the fourth member of their friend group, thinks Stevie needs a new relationship, or at least a fling. That's how Stevie, with Ren as backup and encouragement, ends up at the same bar with Iris.
The resulting dance and conversation was rather fun for both Stevie and Iris. The attempted one-night stand afterwards was a disaster due to Stevie's anxiety, and neither of them expected to see the other again. Stevie therefore felt safe pretending they'd hit it off to get her friends off her back. When Iris's continued restlessness lands her in an audition for Adri's fundraiser play that she also talked Stevie into performing in, this turns into a full-blown fake dating trope.
These books continue to be impossible to put down. I'm not sure what Blake is doing to make the pacing so perfect, but as with the previous books of the series I found this utterly compulsive reading. I started it in the afternoon, took a break in the evening for a few hours, and then finished it at 2am.
I wasn't sure if a book focused on Iris would work as well, but I need not have worried. Iris Kelly Doesn't Date is both more dramatic and more trope-centered than the earlier books, but Blake handles that in a way that fits Iris's personality and wasn't annoying even to a reader like me, who has an aversion to many types of relationship drama. The secret is Stevie, and specifically having the other protagonist be someone with severe anxiety.
No was never a very easy word for Stevie when it came to Adri, when it came to anyone, really. She could handle the little stuff — do you want a soda, have you seen this movie, do you like onions on your pizza — but the big stuff, the stuff that caused disappointed expressions and down-turned mouths... yeah, she sucked at that part. Her anxiety would flare, and she'd spend the next week convinced her friends hated her, she'd die alone and miserable, and wasn't worth a damn to anyone. Then, when said friend or family member eventually got ahold of her to tell her that, no, of course they didn't hate her, why in the world would she think that, her anxiety would crest once again, convincing her that she was terrible at understanding people and could never trust her own brain to make heads or tails of any social situation.
This is a spot-on description of a particular type of anxiety, but also this is the perfect protagonist to pair with Iris. Throughout the series, Iris has always been the ride-or-die friend, the person who may have no idea how to help but who will show up anyway and at least try to distract you. Stevie's anxiety makes Iris feel protective, which reveals one of the best sides of Iris's personality, and then the protectiveness plays off against Iris's own relationship issues and tendency to avoid taking anything too seriously. It's one of those relationships that starts a bit one-sided and then becomes mutually supporting once Stevie gets her feet under her. That's a relationship pattern I really enjoy reading about.
As with the rest of the series, the friendship dynamics are great. Here we get to see two friend groups at work: Iris's, which we've seen in the previous two volumes and which expanded interestingly in Astrid Parker Doesn't Fail, and Stevie's, which is new. I liked all of these people, even Adri in her own way (although she's the hardest to like). The previous happily-ever-afters do get a bit awkward here, but Blake tries to make that part of the plot and also avoids most of the problem of somewhat-boring romantic bliss by spreading the friendship connections a bit wider.
Stevie's friend group formed at orientation at Reed College, and that let me put my finger on another property of this series: essentially all of the characters are from a very specific social class. They're nearly all arts people (bookstore owner, photographer, interior decorator, actress, writer, director), they've mostly gone to college, and while most of them don't have lots of money, there's always at least one person in each friend group with significant wealth. Jordan, from the previous book, is a bit of an exception since she works in a trade (a carpenter), but she still acts like someone from that same social class. It's a bit like reading Jane Austen novels and realizing that the protagonists are drawn from a very specific and very narrow portion of society.
This is not a complaint, to be clear; I have no objections to reading about a very specific social class. But if one has already read lots of books about this class of people, I could see that diminishing the appeal of this series a bit. There are a lot of assumptions baked into the story that aren't really questioned, such as the ubiquity of therapists. (I don't know how Stevie affords one on a barista salary.) There are also some small things in the terminology (therapy speak, for example) and in the specific type of earnestness with which the books attempt to be diverse on most axes other than social class that I suspect may grate a bit for some readers. If that's you, this is your warning.
There is a third-act breakup here, just like the previous volumes. There is also a defense of the emotional punch of third-act breakups in romance novels in the book itself, put into Iris's internal monologue, so I suspect that's the author's answer to critics like myself who don't like the trope. I was less frustrated by this one because it fit the drama level of the protagonists, but I'll also know to expect a third-act breakup in any Blake novel I read in the future.
But, all that said, the summary once again is that I loved this book and could not put it down. Iris is dramatic and occasionally self-destructive but has a core of earnest empathy that makes her easy to like. She's exactly the sort of extrovert who is soothing to introverts rather than draining because she carries the extrovert load of social situations. Stevie is adorably earnest and thoughtful beneath her anxiety. The two of them are wildly different and yet remarkably good together, and I loved reading their story.
Highly recommended, along with the whole series. Start with Delilah Green Doesn't Care; if you like that, you're in for a treat.
Content note: This book is also rather sex-forward and pretty explicit in the sex scenes, maybe a touch more than Astrid Parker Doesn't Fail. If that is or is not your thing in romance novels, be aware going in.
Rating: 9 out of 10
Glyph Lefkowitz: DANGIT
Over the last decade, it has become a common experience to be using a social media app, and to perceive that app as saying something specific to you. This manifests in statements like “Twitter thinks Rudy Giuliani has lost his mind”, “Facebook is up in arms about DEI”, “Instagram is going crazy for this new water bottle”, “BlueSky loves this bigoted substack”, or “Mastodon can’t stop talking about Linux”. Sometimes this will even be expressed with “the Internet” as a metonym for the speaker’s preferred social media: “the Internet thinks that Kate Middleton is missing”.
However, even the smallest of these networks comprises literal millions of human beings, speaking dozens of different languages, many of whom never interact with each other at all. The hot takes that you see from a certain excitable sub-community, on your particular timeline or “for you” page, are not necessarily representative of “the Internet” — at this point, a group that represents a significant majority of the entire human population.
If I may coin a phrase, I will refer to these as “Diffuse, Amorphous, Nebulous, Generalized Internet Takes”, or DANGITs, which handily evokes the frustrating feeling of arguing against them.
A DANGIT is not really a new “internet” phenomenon: it is a specific expression of the availability heuristic.
If we look at our device and see a bunch of comments in our inbox, particularly if those comments have high salience via being recent, emotive, and repeated, we will naturally think that this is what The Internet thinks. However, just because we will naturally think this does not mean that we will accurately think it.
It is worth keeping this concept in mind when participating in public discourse because it leads to a specific type of communication breakdown. If you are arguing with a DANGIT, you will feel like you are arguing with someone with incredibly inconsistent, hypocritical, and sometimes even totally self-contradictory views. But to be self-contradictory, one needs to have a self. And if you are arguing with 9 different people from 3 different ideological factions, all making completely different points and not even taking time to agree on the facts beforehand, of course it’s going to sound like cacophonous nonsense. You’re arguing with the cacophony, it’s just presented to you in a way that deceives you into thinking that it’s one group.
There are subtle variations on this breakdown; for example, it can also make people’s taste seem incoherent. If it seems like one week the Interior Designer internet loves stark Scandinavian minimalism, and the next week baroque Rococo styles are making a comeback, it might seem like The Internet has no coherent sense of taste, and these things don’t go together. That’s because it doesn’t! Why would you expect it to?
Most likely, you are simply seeing some posts from minimalists, and then, separately, some posts from Rococo aficionados. Any particular person’s feed may be dedicated to a specific, internally coherent viewpoint, aesthetic, or ideology, but if you dump them all into a blender to separate them from their context, of course they will look jumbled together.
This is what social media does. It is context collapse as a service. Even if you eliminate engagement-maximizing algorithms and view everything perfectly chronologically, even if you have the world’s best trust & safety team making sure that there is nothing harmful and no disinformation, social media — like email — inherently remains that context-collapsing blender. There’s no way for it not to be; if two people you follow, who do not follow and are not aware of each other, are both posting unrelated things at the same time, you’re going to see them at around the same time.
Do not argue with a DANGIT. Discussions on the internet are famously Pyrrhic battles to begin with, but if you argue with a DANGIT it’s not that you will achieve a Pyrrhic victory; you cannot possibly achieve any victory, because you are shadowboxing an imagined consensus where none exists.
You can’t win against something that isn’t there.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more things like it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!
Dirk Eddelbuettel: #45: Some r-ci Updates
Welcome to post 45 in the $R^4 series!
We introduced r-ci here in post #32 nearly four years ago. It has found pretty widespread use and adoption, and we received a few kind words then (in the linked issue) and also more recently (in a follow-up comment) from which we merrily quote:
[…] almost 3 years later on and I have had zero problems with this CI setup. For people who want reliable R software, resources like these are invaluable.
And while we followed up with post #41 about r2u for simple continuous integration, we may not have posted when we based r-ci on r2u (for the obvious Linux usage case). So let’s make time now for a (comparatively smaller) update, and updated usage examples.
We made two changes in the last few days. One is an (obvious in hindsight) simplification. Given that the bootstrap step was always executed and needed no parameters, we pulled it into a new aggregated setup simply called r-ci that includes it, so that it can be omitted as a step in the yaml file. Second, we recently needed Fortran on macOS too, and realized it was not installed by default, so we just added that as well.
With that, a real and used example is now as simple as the screenshot to the left (and hence one ‘paragraph’ shorter); a rough sketch of its shape follows below. The trained eye will no doubt observe that there is nothing specific to a given repo. And that is basically the key feature: we can simply copy this file around and get fast, easy and reliable CI by taking advantage of the underlying robustness of r2u solving all dependencies automagically and reliably. The option to enable macOS is also solid and compelling as the GitHub runners are fast (but more ‘expensive’ in how they count against the limit of minutes—so again a tradeoff to make), as is the option to run coverage if one so desires. Some of my repos do too.
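Since the screenshot is not reproduced in this text version, here is a rough sketch of the shape such a workflow file takes. This is an approximation rather than the exact file from the post: the download URL, step names and run.sh sub-commands are recalled from earlier r-ci documentation and should be checked against the r-ci website, and the separate bootstrap step is omitted because, as described above, it is now folded into the setup:

# .github/workflows/ci.yaml -- illustrative sketch only; use the current
# snippet from the r-ci site for real projects.
name: ci
on: [push, pull_request]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup
        run: curl -OLs https://eddelbuettel.github.io/r-ci/run.sh && chmod 0755 run.sh
      - name: Dependencies
        run: ./run.sh install_all
      - name: Tests
        run: ./run.sh run_tests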
Take a look at the r-ci website, which has more examples for the other supported CI services it can be used with, and feel free to ask questions as issues in the repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. Please report excessive re-aggregation in third-party for-profit settings.
Mike Driscoll: The Python Countdown to Christmas 2024 Giveaway
Happy Holidays and Merry Christmas from me to you! I have been giving away hundreds of Python books and courses for Christmas for the last couple of years!
https://www.blog.pythonlibrary.org/wp-content/uploads/2024/12/Countdown-to-christmas-1.mp4
From now until Christmas, I will be giving away hundreds more. You can start learning Python for free using my books or courses.
All you have to do is follow me on one of these platforms and watch out for my post that describes how to get a free book or course:
The post The Python Countdown to Christmas 2024 Giveaway appeared first on Mouse Vs Python.
Talking Drupal: Talking Drupal #480 - Ripple Makers
Today we are talking about The Ripple Makers program, How it benefits Drupal Association members, and Why it’s important to Drupal with guest Julia Kranzthor. We’ll also cover Migrate Boost as our module of the week.
For show notes visit: https://www.talkingDrupal.com/480
Topics
- What is Ripple Makers
- Taxes
- Why did the Drupal Association (DA) membership program need overhauling
- Are DA individual memberships different than Ripple Makers
- Do people have to sign up if they are already a DA member
- Coming up with the benefits
- Where did the name come from
- Does this have new benefits
- What has the impact been
- Ripple Makers
- Drupal Certified Partner (DCP)
- Drupal staff page
- Migrate Boost
- workbench_moderation
- pathauto
- xmlsitemap
- search_api
- search_api_algolia
Julia Kranzthor - JR_KThor
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Suzanne Dergacheva - evolvingweb.com pixelite
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted to disable hooks to accelerate your Drupal migration? There’s a module for that.
- Module name/project name:
- Brief history
- How old: created in Sep 2023 by our own Nic Laflin
- Versions available: 1.0.1, compatible with Drupal 10 and 11
- Maintainership
- Actively maintained
- Security coverage
- Documentation README / project page have instructions
- Number of open issues: none!
- Usage stats:
- 119 sites
- Module features and usage
- Having hooks fire during a migration can significantly slow down the process, and what’s worse, it can also cause some significant problems, for example sending email notifications every time a node is created
- You disable hooks by defining an array in your settings.php file, either an array of specific hooks you want to disable, or an array of modules for which you want to disable all hooks (a rough, hedged sketch follows this list)
- This was a capability available for the Drupal 7 Migrate module, but hasn’t been available in the Migrate API in Drupal core since version 8, so this module can be invaluable if you’re working on a sizable migration
- Hopefully there are a lot of folks working on migrations ahead of the January 5 EOL for Drupal 7, so I thought this module would be timely
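As a rough sketch of the settings.php approach described in this list, the configuration might look something like the following. The setting keys used here are placeholders, not the module's documented names; check the Migrate Boost README / project page for the exact variable names. The module names are simply the ones listed in the show notes above:

<?php
// settings.php -- illustrative sketch only; the real setting keys come
// from the Migrate Boost documentation.

// Hypothetical key: disable every hook provided by these modules while
// migrations run.
$settings['migrate_boost_disabled_modules'] = [
  'workbench_moderation',
  'pathauto',
  'xmlsitemap',
  'search_api',
  'search_api_algolia',
];

// Hypothetical key: disable only specific hooks instead of whole modules.
$settings['migrate_boost_disabled_hooks'] = [
  'mymodule_node_insert',
];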
The Drop is Always Moving: Drupal 11.1.0 is now available!
Drupal 11.1.0 is now available! The first feature release of Drupal 11 improves the recipe system, introduces support for hooks written as classes, makes Workspaces more flexible and enhances performance.
Read more at https://www.drupal.org/blog/drupal-11-1-0
Drupal blog: Drupal 11.1.0 is now available
The first feature release of Drupal 11 improves the recipe system, introduces support for hooks written as classes, makes Workspaces more flexible and enhances performance.
Recipe system improvements
The Recipe system allows packages to be configured with dependencies in a repeatable way. Drupal 11.1 now allows recipes to take user input (for example, API keys for remote services). Recipes can now also use configuration actions to add new blocks, enable layout builder for content types, clone configuration entities, and so on.
Hooks can be written as classes
Drupal's unique hook system allows modifying forms, data updates, site processes, render structures, and even the ordering of other hooks. After long-running efforts by many contributors, it is now possible to also define hooks and hook implementations with object-oriented techniques that are more in line with modern PHP code design practices. This will also make Drupal's code easier to understand for PHP developers familiar with other projects. All runtime core hooks have been converted to object-oriented implementations.
With this new functionality, magic global functions like the following will no longer be needed:
function hook_entity_insert(EntityInterface $entity) {
  // DO STUFF
}

Instead, developers can use the new Hook attribute on methods:

class ExampleHooks {
  #[Hook('entity_insert')]
  public function entityInsert(EntityInterface $entity): void {
    // DO STUFF
  }
}

New icon management API
A dedicated API has been added to allow modules and themes to define icon packs. Within each pack is a series of icons each with a unique identifier that the system can then use. Modules and themes can alter icon packs.
Workspaces user interface separated into its own module
As part of a larger plan to use workspaces for content moderation, the user interface of the Workspaces module was moved to a separate Workspaces UI module. For new sites, if you want to enable Workspaces with the user interface, you now need to install this module.
Improvements to the initial experience after installation
We revisited Drupal core's default configuration to better reflect most users' needs. In this release, date formats were made easier to read. The user registration process also now defaults to administrator-created accounts, in order to avoid new sites being flooded with spam accounts in the moderation queue. When creating a new node type, Drupal core will no longer automatically add a body field, allowing site builders to choose their own content model without having to delete defaults they don't want first and reducing potential conflicts for platforms built on Drupal core such as Drupal CMS and the upcoming Experience Builder.
New views entity reference filter
A new generic entity reference views filter has been added, which makes it possible to render exposed views filters as a select list or autocomplete of available entities. This may now be used by contributed modules and will be enabled for core entity types in future releases.
Render caching for forms
Forms built with the form API can now opt in to render caching, improving page loading performance in a variety of situations. We will be gradually opting forms in Drupal core into render caching, and may opt all forms into render caching by default in a future major release.
Improved browser and CDN caching for JavaScript and CSS
Drupal's asset aggregation algorithm has been improved to reduce variation in CSS and JavaScript aggregates. Differences between pages which may have produced different but similar aggregates in the past, for example because libraries were requested in a different order, will now result in a single file instead. This improves CDN cache hit rates and reduces the amount of JavaScript and CSS that visitors will download when visiting multiple pages on a site. This builds on several previous recent improvements to Drupal core's asset aggregation since Drupal 10.1 and also unblocks further improvements which are planned for future minor releases.
PHP 8.4 is supported
The PHP team is doing a fantastic job of improving the language and performance of PHP. PHP 8.4 was released in November, and Drupal 11.1 fully supports it.
Drupal CMS 1.0 will be based on Drupal 11.1
Drupal 11.1 will be the basis of Drupal CMS 1.0, which will be released on January 15 on Drupal's 24th birthday. Many of the underlying improvements introduced in Drupal core will help compose an improved user experience in Drupal CMS. The first release candidate of Drupal CMS was already based on Drupal 11.1 RC. Stay tuned!
Drupal 10.4 will be available soon
The next Long-Term Support (LTS) release of Drupal 10 will be released this week. Drupal 10 will be supported until the release of Drupal 12 in mid- to late 2026. Long-Term Support for Drupal 10 is managed with a new maintenance minor release every 6 months that receives twelve months of support. This allows the maintenance minor to adapt to evolving dependencies. And it gives more flexibility for sites to move to Drupal 11 when they are ready.
The same will happen when Drupal 10 is end-of-life and Drupal 12 is released: Drupal 11 will transition to Long-Term Support, with its own maintenance minors every six months. This release schedule allows sites to move from one LTS version to the next if that is the best strategy for their needs.
Core maintainer team updates
Since Drupal 11.0, Adam Hoenich has stepped down from being a Migrate subsystem maintainer as he moved on to be a key committer for Drupal CMS. We thank Adam for his contributions!
Want to get involved?
If you are looking to make the leap from Drupal user to Drupal contributor, or you want to share resources with your team as part of their professional development, there are many opportunities to deepen your Drupal skill set and give back to the community. Check out the Drupal contributor guide. You are more than welcome to join us at DrupalCon Atlanta in March 2025 to attend sessions, network, and enjoy mentorship for your first contributions.
The Drop Times: Countdown to the Big Drop
Dear Readers,
The Drupal CMS release candidate made its debut at DrupalCon Singapore 2024, marking the beginning of an exciting new era for Drupal. This release offers a first look at what’s being called the most user-friendly version of Drupal yet. But this is just the beginning. The full launch of Drupal CMS v1 is set for January 15, 2025 — exactly one month away! With the countdown officially on, the Drupal community is gearing up for a wave of activity, excitement, and preparation leading up to the big day.
At The DropTimes, we’re ready to keep you plugged into every development. Over the next month, we’ll be bringing you exclusive insights from track leads, in-depth looks at each of the tracks, and timely updates on project progress. We’ll also be covering the many Drupal CMS launch parties taking place around the world. This isn’t just a software release — it’s a moment of celebration for the Drupal community and a glimpse into the future of the platform.
But we don’t want to do it alone — we want to hear from 'you'! Do you have thoughts on Drupal CMS or ideas for where it should head next? Are you planning a launch party? We want to know! If there’s a track you believe deserves more attention or a new feature you’d like to see, let’s get your voice out there. The DropTimes is here to amplify community voices and spark conversation. The next chapter for Drupal is about to begin, and together, we can help shape it. Email us at editor@thedroptimes.com. Stay tuned as we count down to January 15!
Let's take a look at the important stories from the last week.
Interview
DrupalCon Singapore 2024
- A Look into the Key Insights and Perspectives Shared by Dries Buytaert at DrupalCon Singapore 2024
- Winners of the First-Ever Splash Awards Asia Announced at DrupalCon Singapore 2024
- Clock's Ticking: One Month Until Drupal 7 End-of-Life
- 2025 Nonprofit Summit: Drupal Association Calls for Breakout Leaders!
- Drupal Open University Makes Exciting Progress!
- Drupal 7 Security Updates Released Ahead of End-of-Life Deadline
- Florida DrupalCamp Unveils Proposed Session Lineup for 2025 Event
- Sponsorship Opportunities Open for DrupalCamp Finland 2025
- Greece Winter Sprint 2024: A Triumphant Gathering for the Drupal Community
- MidCamp 2025 Update: Bi-Weekly Planning Meetings Now on Wednesdays
- Events This Week: Dec 16 - 22, 2024
- LPI and OS JobHub Launches 2025 Open Source Professionals Job Survey
- SparkFabrik Hosts Event to Celebrate Drupal CMS Launch on January 15, 2025
- QED42 Introduces AI DXP to Simplify AI Integration and Streamline Workflows
We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.
To get timely updates, follow us on LinkedIn, Twitter and Facebook. You can also join us on Drupal Slack at #thedroptimes.
Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.
The Drop Times: The Dutch Government Works on Open Source with a Drupal Developers Day
Freelock Blog: Build a membership application system
Drupal, with the Events, Conditions, and Actions (ECA) module, can build sophisticated applications without a single line of custom code. You can build full applications using a handful of Drupal modules.
Real Python: Dictionaries in Python
Python dictionaries are a powerful built-in data type that allows you to store key-value pairs for efficient data retrieval and manipulation. Learning about them is essential for developers who want to process data efficiently. In this tutorial, you’ll explore how to create dictionaries using literals and the dict() constructor, as well as how to use Python’s operators and built-in functions to manipulate them.
By learning about Python dictionaries, you’ll be able to access values through key lookups and modify dictionary content using various methods. This knowledge will help you in data processing, configuration management, and dealing with JSON and CSV data.
By the end of this tutorial, you’ll understand that:
- A dictionary in Python is a mutable collection of key-value pairs that allows for efficient data retrieval using unique keys.
- Both dict() and {} can create dictionaries in Python. Use {} for concise syntax and dict() for dynamic creation from iterable objects.
- dict() is a class used to create dictionaries. However, it’s commonly called a built-in function in Python.
- .__dict__ is a special attribute in Python that holds an object’s writable attributes in a dictionary.
- Python dict is implemented as a hashmap, which allows for fast key lookups.
To get the most out of this tutorial, you should be familiar with basic Python syntax and concepts such as variables, loops, and built-in functions. Some experience with basic Python data types will also be helpful.
Get Your Code: Click here to download the free sample code that you’ll use to learn about dictionaries in Python.
Take the Quiz: Test your knowledge with our interactive “Python Dictionaries” quiz. You’ll receive a score upon completion to help you track your learning progress.
Getting Started With Python Dictionaries
Dictionaries are one of Python’s most important and useful built-in data types. They provide a mutable collection of key-value pairs that lets you efficiently access and mutate values through their corresponding keys:
>>> config = {
...     "color": "green",
...     "width": 42,
...     "height": 100,
...     "font": "Courier",
... }

>>> # Access a value through its key
>>> config["color"]
'green'

>>> # Update a value
>>> config["font"] = "Helvetica"
>>> config
{'color': 'green', 'width': 42, 'height': 100, 'font': 'Helvetica'}

A Python dictionary consists of a collection of key-value pairs, where each key corresponds to its associated value. In this example, "color" is a key, and "green" is the associated value.
Dictionaries are a fundamental part of Python. You’ll find them behind core concepts like scopes and namespaces as seen with the built-in functions globals() and locals():
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, ...}

The globals() function returns a dictionary containing key-value pairs that map names to objects that live in your current global scope.
Python also uses dictionaries to support the internal implementation of classes. Consider the following demo class:
>>> class Number:
...     def __init__(self, value):
...         self.value = value
...

>>> Number(42).__dict__
{'value': 42}

The .__dict__ special attribute is a dictionary that maps attribute names to their corresponding values in Python classes and objects. This implementation makes attribute and method lookup fast and efficient in object-oriented code.
You can use dictionaries to approach many programming tasks in your Python code. They come in handy when processing CSV and JSON files, working with databases, loading configuration files, and more.
Python’s dictionaries have the following characteristics:
- Mutable: The dictionary values can be updated in place.
- Dynamic: Dictionaries can grow and shrink as needed.
- Efficient: They’re implemented as hash tables, which allows for fast key lookup.
- Ordered: Starting with Python 3.7, dictionaries keep their items in the same order they were inserted.
The keys of a dictionary have a couple of restrictions. They need to be:
- Hashable: This means that you can’t use unhashable objects like lists as dictionary keys.
- Unique: This means that your dictionaries won’t have duplicate keys.
In contrast, the values in a dictionary aren’t restricted. They can be of any Python type, including other dictionaries, which makes it possible to have nested dictionaries.
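For instance, here is a short, illustrative REPL session showing a nested dictionary used as a value and the error you get when trying to use an unhashable key (the variable names are made up for the example):

>>> profile = {"name": "Ada", "skills": {"math": 10, "python": 9}}
>>> # A dictionary is a perfectly valid value, which gives you nesting
>>> profile["skills"]["python"]
9
>>> # A list is unhashable, so it can't be used as a key
>>> profile[["favorite", "colors"]] = ["green"]
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'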
It’s important to note that dictionaries are collections of pairs. So, you can’t insert a key without its corresponding value or vice versa. Since they come as a pair, you always have to insert a key with its corresponding value.
Note: In some situations, you may want to add keys to a dictionary without deciding what the associated value should be. In those cases, you can use the .setdefault() method to create keys with a default or placeholder value.
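As a small, illustrative example of .setdefault() (the names are invented for the example):

>>> inventory = {"apples": 5}
>>> # Missing key: it is created with the placeholder value, which is returned
>>> inventory.setdefault("pears", 0)
0
>>> # Existing key: the current value is kept and returned
>>> inventory.setdefault("apples", 0)
5
>>> inventory
{'apples': 5, 'pears': 0}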
Read the full article at https://realpython.com/python-dicts/ »
PyCharm: 7 Reasons You Should Use dbt Core in PyCharm
dbt Core is a modern data transformation framework. It doesn’t extract or load data and is only responsible for the T in the ELT (extract-load-transform) process. dbt connects to your data warehouse and helps you prepare your data so it can later be used to answer business questions.
In this blog post, we’ll talk about the top benefits of dbt and the advantages of using it in PyCharm Professional. To make the most of these features, you should be familiar with the framework. If you know SQL well, you’ll likely find it easy to use, and if you are a total novice in the field, you can use the dbt portal to get acquainted with it.
Why you should use dbt
- Modularity and code reusability – Transformations can be saved into modular, reusable models. For instance, in this example the model int_count_customer.sql has a reference to stg_day_customer.sql and reuses its code. (A hedged sketch of such a model appears after this list.)
- Versioning – dbt projects can be stored in version control systems like Git or GitHub. This allows you to track changes, collaborate with other team members, and maintain a record of all transformations.
- Testing – dbt allows you to write tests for your data models easily and check whether the data has any duplicates or null values. Additionally, you can even create specific rules to test against, and you can perform tests on both the model and the project levels.
- Documentation – dbt auto-generates documentation for data models, ensuring that team members and stakeholders all understand the data lineage and model definitions in the same way.
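To make the modularity and testing points above concrete, here is a hedged sketch of a dbt model and its schema tests. The model and column names echo the example mentioned in the first bullet but are assumptions rather than code from an actual project, and newer dbt versions may prefer data_tests: over tests::

-- models/int_count_customer.sql (illustrative sketch)
-- The {{ ref() }} call is what makes models modular: dbt resolves it to
-- the stg_day_customer model and derives the dependency graph from it.
select
    customer_id,
    count(*) as day_count
from {{ ref('stg_day_customer') }}
group by customer_id

# models/schema.yml (illustrative sketch)
# Generic tests: dbt checks the column for duplicates and null values.
version: 2
models:
  - name: int_count_customer
    columns:
      - name: customer_id
        tests:
          - unique
          - not_null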
To summarize, dbt brings best practices in engineering to the field of data analysis, allowing you to produce higher-quality results while providing you with a straightforward and intuitive workflow.
These benefits are just the tip of the iceberg when it comes to what the tool can do.
How PyCharm streamlines your dbt workflow
Having established the benefits of dbt, we can now turn to the 7 key reasons to use it in PyCharm:
1. User-friendly onboarding – PyCharm streamlines the initial setup. As demonstrated in this video, setting up a project and configuring the necessary settings is straightforward.
2. Unified workspace for databases and dbt – PyCharm’s integrated database plugin powered by JetBrains DataGrip makes handling SQL databases significantly easier. Since it’s compatible with all databases that dbt works with, you don’t have to worry about juggling multiple tools. You can focus on data modeling and instantly view outcomes all in one place. To cover even a small number of the plugin’s features would take hours, but luckily we have a nice set of webinars dedicated to PyCharm’s functionality for databases: Visual SQL Development with PyCharm.
3. Git and dbt integration – In one interface, you can easily clone the repo, track any changes, manage branches, resolve conflicts, and collaborate with teammates.
4. Autocompletion for your .yml and jinja-template SQL files – People love using PyCharm because of its smart autocompletion, which it, of course, offers for dbt as well.
5. Local history – This feature lets you undo recent changes if they cause problems. You can also compare different versions to see what was changed and check whether updates were made correctly.
6. AI Assistant – AI Assistant is really helpful, especially if you’re just starting with dbt Core. It is context-aware, and in addition to having it answer your questions in the AI chat, you can have it generate code and fix problems for you, streamlining your work with data models. It also saves you from worrying about what to write in commit messages by composing them for you.
7. Project navigation – PyCharm excels in project navigation, offering features like fast search functionality and the Go to Declaration feature, both of which allow you to navigate through your dbt models effortlessly.
That’s just a glimpse of the benefits PyCharm already offers for dbt, and our support is still in its early stages. We invite you to test it out and share your insights. Whether you have suggestions for features or want to let us know about areas for improvement, we’re eager to hear from you.
Get started with PyCharm by using the promo code dbt-PyCharm to get a 3-month free trial.
Redeem your code
Want to learn how to use dbt in PyCharm? Head to the documentation page to learn more about the IDE’s dbt support.
Eager to learn more about dbt in general? Take a look at this post on the experience of using dbt and this analysis of deeper dbt concepts by Pavel Finkelshteyn.