Feeds
Smartbees: How to Add and Customize The Drupal Admin Toolbar Module?
Increasing productivity and effectiveness is key in many professions, including Drupal development. One of the tools that helps you achieve this is the Drupal Admin Toolbar module. Thanks to it, you can easily access key administrative functions and navigate through admin panels. In this article, you will learn more about the Drupal Admin Toolbar's features, benefits, and configuration methods. You will discover the possibilities this tool has to offer and how it can streamline the management of your Drupal-based website.
Real Python: Quiz: Using Python's pip to Manage Your Projects' Dependencies
In this quiz, you’ll test your understanding of Python’s standard package manager, pip. You’ll revisit the concepts behind pip, important commands, and how to install packages.
Drupal life hack's: Mastering Dependency Injection in Drupal: A Practical Guide
Python Bytes: #401 We must replace uWSGI with something else
Specbee: Cooking up irresistible Drupal websites with Recipes
Tryton News: Security Release for issues #13505 and #13506
Albert Cervera has found that trytond allows users to execute reports for records to which they have no read access, and also to execute reports that are limited to a set of groups to which the user does not belong.
Impact
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Confidentiality: Low
- Integrity: None
- Availability: None
There is no known workaround.
Resolution
All affected users should upgrade trytond to the latest version.
Affected versions per series:
- trytond:
- 7.2: <= 7.2.8
- 7.0: <= 7.0.17
- 6.0: <= 6.0.51
Non-affected versions per series:
- trytond:
- 7.2: >= 7.2.9
- 7.0: >= 7.0.18
- 6.0: >= 6.0.52
Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.
Russ Allbery: Review: The Book That Broke the World
Review: The Book That Broke the World, by Mark Lawrence
Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366

The Book That Broke the World is high fantasy and a direct sequel to The Book That Wouldn't Burn. You should not start here. In a delightful break from normal practice, the author provides a useful summary of the previous volume at the start of this book to jog your memory.
At the end of The Book That Wouldn't Burn, the characters were scattered and in various states of corporeality after some major revelations about the nature of the Library and the first appearance of the insectile Skeer. The Book That Broke the World picks up where it left off, and there is a lot more contact with the Skeer, but my guess that they would be the next viewpoint characters does not pan out. Instead, we get a new group and a new protagonist: Celcha, who sees angels who come to visit her brother.
I have complaints, but before I launch into those, I should say that I liked this book apart from the totally unnecessary cannibalism. (I'll get to that.) Livira is a bit sidelined, which is regrettable, but Celcha and her brother are interesting new characters, and both Arpix and Clovis, supporting characters in the first book, get some excellent character development. Similar to the first book, this is a puzzle box story full of world-building tidbits with intellectually-satisfying interactions. Lawrence elaborates and complicates his setting in ways that don't contradict earlier parts of the story but create more room and depth for the characters to be creative. I came away still invested in this world and eager to find out how Lawrence pulls the world-building and narrative threads together.
The biggest drawback of this book is that it's not new. My thought after finishing the first book of the series was that if Lawrence had enough world-building ideas to fill three books to that same level of density, this had the potential of being one of my favorite fantasy series of all time. By the end of the second book, I concluded that this is not the case. Instead of showing us new twists and complications the way the first book did throughout, The Book That Broke the World mostly covers the same thematic ground from some new angles. It felt like Lawrence was worried the reader of the first book may not have understood the theme or the world-building, so he spent most of the second book nailing down anything that moved.
I found that frustrating. One of the best parts of The Book That Wouldn't Burn was that Lawrence trusted the reader to keep up, which for me hit the glorious but rare sweet spot of pacing where I was figuring out the world at roughly the same pace as the characters. It surprised me in some very enjoyable ways. The Book That Broke the World did not surprise me. There are a few new things, which I enjoyed, and a few elaborations and developments of ideas, which I mostly enjoyed, but I saw the big plot twist coming at least fifty pages before it happened and found the aftermath more annoying than revelatory. It doesn't help that the plot rests on character misunderstandings, one of my least favorite tropes.
One of the other disappointments of this book is that the characters stop using the Library as a library. The Library at the center of this series is a truly marvelous piece of world-building with numerous fascinating features that are unrelated to its contents, but Livira used it first and foremost as a repository of books. The first book was full of characters solving problems by finding a relevant book and reading it.
In The Book That Broke the World, sadly, this is mostly gone. The Library is mostly reduced to a complicated Big Dumb Object setting. It's still a delightful bit of world-building, and we learn about a few new features, but I only remember two places where the actual books are important to the story. Even the book referenced in the title is mostly important as an artifact with properties unrelated to the words that it contains or to the act of reading it. I think this is a huge lost opportunity and something I hope Lawrence fixes in the last book of the trilogy.
This book instead focuses on the politics around the existence of the Library itself. Here I'm cautiously optimistic, although a lot is going to depend on the third book. Lawrence has set up a three-sided argument between groups that I will uncharitably describe as the libertarian techbros, the "burn it all down" reactionaries, and the neoliberal centrist technocrats. All three of those positions suck, and Lawrence had better be setting the stage for Livira to find a different path. Her unwillingness to commit to any of those sides gives me hope, but bringing this plot to a satisfying conclusion is going to be tricky. I hope I like what Lawrence comes up with, but it feels far from certain.
It doesn't help that he's started delivering some points with a sledgehammer, and that's where we get to the unnecessary cannibalism. Thankfully this is a fairly small part of the tail end of the book, but it was an unpleasant surprise that I did not want in this novel and that I don't think made the story any better.
It's tempting to call the cannibalism gratuitous, but it does fit one of the main themes of this story, namely that humans are depressingly good at using any rule-based object in unexpected and nasty ways that are contrary to the best intentions of the designer. This is the fundamental challenge of the Library as a whole and the question that I suspect the third book will be devoted to addressing, so I understand why Lawrence wanted to emphasize his point. The reason why there is cannibalism here is directly related to a profound misunderstanding of the properties of the library, and I detected an echo of one of C.S. Lewis's arguments in The Last Battle about the nature of Hell.
The problem, though, is that this is Satanic baby-killerism, to borrow a term from Fred Clark. There are numerous ways to show this type of perversion of well-intended systems, which I know because Lawrence used other ones in the first book that were more subtle but equally effective. One of the best parts of The Book That Wouldn't Burn is that there were few real villains. The conflict was structural, all sides had valid perspectives, and the ethical points of that story were made with some care and nuance.
The problem with cannibalism as it's used here is not merely that it's gross and disgusting and off-putting to the reader, although it is all of those things. If I wanted to read horror, I would read horror novels. I don't appreciate surprise horror used for shock value in regular fantasy. But worse, it's an abandonment of moral nuance. The function of cannibalism in this story is like the function of Satanic baby-killers: it's to signal that these people are wholly and irredeemably evil. They are the Villains, they are Wrong, and they cease to be characters and become symbols of what the protagonists are fighting. This is destructive to the story because it's designed to provoke a visceral short-circuit in the reader and let the author get away with sloppy story-telling. If the author needs to use tactics like this to point out who is the villain, they have failed to set up their moral quandary properly.
The worst part is that this was entirely unnecessary because Lawrence's story-telling wasn't sloppy and he set up his moral quandary just fine. No one was confused about the ethical point here. I as the reader was following without difficulty, and had appreciated the subtlety with which Lawrence posed the question. But apparently he thought he was too subtle and decided to come back to the point with a pile-driver. I think that seriously injured the story. The ethical argument here is much more engaging and thought-provoking when it's more finely balanced.
That's a lot of complaints, mostly because this is a good book that I badly wanted to be a great book but which kept tripping over its own feet. A lot of trilogies have weak second books. Hopefully this is another example of the mid-story sag, and the finale will be worthy of the start of the story. But I have to admit the moral short-circuiting and the de-emphasis of the actual books in the library have me a bit nervous. I want a lot out of the third book, and I hope I'm not asking this author for too much.
If you liked the first book, I think you'll like this one too, with the caveat that it's quite a bit darker and more violent in places, even apart from the surprise cannibalism. But if you've not started this series, you may want to wait for the third book to see if Lawrence can pull off the ending.
Followed by The Book That Held Her Heart, currently scheduled for publication in April of 2025.
Rating: 7 out of 10
Dirk Eddelbuettel: nanotime 0.3.10 on CRAN: Update
A minor update 0.3.10 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.
This release updates one S4 method to accommodate very recent changes in r-devel, for which CRAN had reached out. This concerns the setdiff() method when applied to two nanotime objects. As it only affected R 4.5.0, due next April, and only when built in the last two or so weeks, it will not have been visible to many users, if any. In any event, it now works again for that setup too, and should be going forward.
We also retired one demo function from the very early days; it apparently relied on ggplot2 features that have since moved on. If someone would like to help out and resurrect the demo, please get in touch. We also cleaned out some no-longer-used tests and updated DESCRIPTION to what is required now. The NEWS snippet below has the full details.
Changes in version 0.3.10 (2024-09-16)
- Retire several checks for Solaris in test suite (Dirk in #130)
- Switch to Authors@R in DESCRIPTION as now required by CRAN
- Accommodate R-devel change for setdiff (Dirk in #133 fixing #132)
- No longer ship defunct ggplot2 demo (Dirk fixing #131)
Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Oliver Davies' daily list: Next week is DrupalCon Barcelona
Next week is DrupalCon in Barcelona.
I'm not speaking this year, but if you're there and want to discuss automated testing, test-driven development, static analysis, Sculpin, Tailwind CSS, Nix, or anything else, I'll be there all week and would love to meet up and chat.
Open Source AI Definition – Weekly update September 16
- OSI invites individuals and organizations to endorse the Open Source AI Definition (OSAID). Endorsers will have their name and affiliation listed in the press release for Release Candidate 1 (RC1), which is expected to be finalized by the end of September. Those endorsing version 0.0.9 will be contacted again to confirm their support if there are any changes leading up to RC1.
- @mjbommar encourages reviewing the U.S. Copyright Office’s guidance on text and data mining (TDM) exceptions, which provides clear explanations and limitations, especially focusing on non-commercial, scholarly, and teaching uses. He emphasizes that the TDM guidance operates within narrow parameters that are often misunderstood or overlooked.
- @quaid proposes adding nuance to the Open Source AI (OSAI) Definition by introducing two designations: OSAI D+ (with open data) and OSAI D- (without open data, due to legitimate reasons beyond the creator’s control). He suggests using a dataset certificate of origin (dataset DCO) for self-verification to ensure compliance.
- @kjetilk agrees that verification is key but questions whether data information alone is sufficient for verification. He highlights that verifying rights to the data may not always be possible.
- @stefano appreciates the quadrant system’s clarity and confirms @quaid’s proposal for OSAI D- to be reserved for those with legitimate reasons for not sharing data.
- @thesteve0 expresses skepticism about broadening the “Open Source” label. He argues that without access to both data and code, AI models cannot truly be Open Source and suggests labeling such models as “open weights” instead.
- @shujisado notes the importance of data access in AI, pointing out that OSAID requires detailed information about how data is sourced, including provenance and selection criteria. He also discusses potential legal and ethical reasons for not sharing datasets.
- @Shamar raises concerns about “openwashing” in AI, where developers might distribute a model with a different dataset, undermining trust. He argues that distinguishing between OSAI D+ and D- risks legal complications for derivative works, suggesting that models without open data should not be considered truly open.
- @zack supports the idea of a tiered system (D+ and D-) as an improvement over the current situation, as it incentivizes progress from D- to D+. He is skeptical about verifiability but sees potential in the branding aspect of the proposal.
- @stefano asks @arandal about suggested edits, which include renaming data as "source data," allowing open-source AI developers to require downstream modifications with open data, and permitting downstream developers to use open data to fine-tune models trained on non-public data. He further asks whether arandal sees training data as being to model weights what source code is to binary code.
- @shujisado agrees with @stefano and points out that while many interpret OSD-compliant licenses to include CC4 and CC0, OSI has not officially evaluated Creative Commons licenses for compliance. He highlights concerns about CC0’s patent defense, which could be crucial for datasets.
- @mjbommar echoes the concerns about patent defense, noting it as a critical issue in both software and data licensing.
- @Shamar supports the first two suggestions but argues that models trained on non-public data cannot meet an “Open Source AI” definition, as they limit the freedom to study and modify, which are core principles of Open Source.
- @nick shares an article by Nathan Lambert, reviewed by key figures in the Open Source AI space, discussing the challenges of training data and the current Open Source AI definition. @Percy Liang's view (on X) is highlighted: he suggests that releasing an entire dataset is neither sufficient nor necessary for Open Source AI, and emphasizes the need for detailed code of the data processing pipeline for transparency, beyond just releasing the dataset.
- @shujisado discusses the legal nuances of using U.S. government documents in AI training, emphasizing that while they may be used in the U.S., legal complications arise in other jurisdictions.
- @Shamar stresses that Open Source AI should provide all the necessary data and processing information to recreate a system; otherwise, calling it Open Source is "open washing."
- @Shamar proposes a clearer distinction between “source data” and “processing information” in the Open Source AI definition to ensure transparency and reproducibility. He suggests source data should be publicly available under the same terms that allowed its original use, while the process used to train the system should be shared under an Open Source license. His formulation aims to prevent loopholes that could lead to open-washing and emphasizes the importance of granting all four freedoms (study, modify, distribute, and use) to qualify as Open Source AI.
- @nick disagrees, arguing that @Shamar's proposal misunderstands the difference between the rights to use data for training and the rights to distribute it. He also challenges the claim that exact replication of AI systems can be guaranteed, even with access to the same data.
- The sixteenth edition of our town hall meetings was held on the 13th of September. If you missed it, the recording and slides can be found here.
Nonprofit Drupal posts: September Drupal for Nonprofits Chat
Join us THURSDAY, September 19 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)
We don't have anything specific on the agenda this month, so we'll have plenty of time to discuss anything that's on our minds at the intersection of Drupal and nonprofits. Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!
All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.
This free call is sponsored by NTEN.org and open to everyone.
- Join the call: https://us02web.zoom.us/j/81817469653
- Meeting ID: 818 1746 9653
- Passcode: 551681
- One tap mobile:
  +16699006833,,81817469653# US (San Jose)
  +13462487799,,81817469653# US (Houston)
- Dial by your location:
  +1 669 900 6833 US (San Jose)
  +1 346 248 7799 US (Houston)
  +1 253 215 8782 US (Tacoma)
  +1 929 205 6099 US (New York)
  +1 301 715 8592 US (Washington DC)
  +1 312 626 6799 US (Chicago)
- Find your local number: https://us02web.zoom.us/u/kpV1o65N
- Follow along on Google Docs: https://nten.org/drupal/notes
drunomics: Why we don't use GraphQL
At drunomics we have been building decoupled Drupal sites for more than five years. During this time, GraphQL has always been a popular choice for decoupled Drupal sites among professional or enterprise projects, thanks to the well-maintained GraphQL contrib module. Still, I've advised against using GraphQL for various enterprise projects, even though it was sometimes appealing to customers. In this blog post, I'd like to summarize why we don't use GraphQL:
General complexity
GraphQL is not only a new query language to learn for both frontend and backend developers; the backend also has to support any kind of query the frontend makes. On the frontend side of things, additional libraries and tooling are needed to handle the protocol.
Loose contracts
GraphQL gives a lot of power to frontend developers, but that power comes at a huge price: no defined, or only a very loosely defined, contract, i.e. the data model or, more specifically, the GraphQL schema layered on top. Based upon this loose contract the frontend may compose any kind of query, which the backend has to support. Which leads to the next point:
Complex queries
When the backend exposes the Drupal data schema directly, a lot of things can potentially be leaked unintentionally, and changing things may become hard, because: who knows which data properties the frontend uses and queries for? It's quite hard to optimize for every use case.
However, the backend may compose its own GraphQL schema and provide exactly the data model the client (the frontend) needs. That's indeed a great option to have, but it requires additional work and code to translate between the schema and the real data model behind it. It makes it possible to change the underlying data model and schema mapping while keeping the same or a compatible GraphQL schema and output. But is the code performing the mapping performant enough? Does it work correctly? That's quite hard to tell without knowing exactly which queries to optimize and test for. So things are, or become, complex.
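To make that mapping layer concrete, here is a minimal sketch in Python using the graphene library (the field names and data are hypothetical, and Drupal's GraphQL module does the equivalent in PHP): the schema exposes a small, stable contract, and a resolver translates internal storage names into it.

```python
import graphene  # pip install graphene; an illustrative stand-in for Drupal's PHP layer

# The "real" data model, roughly what raw internal field storage might look like.
RAW_NODES = {1: {"title": "Hello", "field_body_value": "Lorem ipsum", "status": 1}}

class Article(graphene.ObjectType):
    # The public schema is deliberately small and stable.
    title = graphene.String()
    body = graphene.String()

class Query(graphene.ObjectType):
    article = graphene.Field(Article, id=graphene.Int(required=True))

    def resolve_article(root, info, id):
        # The resolver is the mapping code the post warns about: it lets internal
        # storage change without breaking the public contract, but it is extra
        # code that must be correct and fast for every query the frontend invents.
        raw = RAW_NODES[id]
        return Article(title=raw["title"], body=raw["field_body_value"])

schema = graphene.Schema(query=Query)
print(schema.execute("{ article(id: 1) { title body } }").data)
```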
Performance
First of all, GraphQL is bad for caching since it makes use of POST requests. The typical work-around is to use shortened, hashed queries and to access them via GET requests, which can help mitigate the issue. But this comes at the cost of tying the deployed frontend and backend versions together, thus increasing overall system and deployment complexity. That way, the main GraphQL advantage, flexibility at the frontend, gets lost. So this is not an easy or great compromise to make.
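As a rough sketch of that work-around in Python, loosely following Apollo's persisted-queries convention (the endpoint is hypothetical): the full query is registered with the server at deploy time, and the client sends only its hash in a cacheable GET request.

```python
import hashlib
import json
import requests  # the GraphQL endpoint below is hypothetical

QUERY = "query Teasers { articles(limit: 10) { title path } }"

# Client and server must agree on this hash at build/deploy time, which is
# exactly the frontend/backend version coupling described above.
query_hash = hashlib.sha256(QUERY.encode()).hexdigest()

# The query text never travels; only its hash goes out in a GET request,
# so CDNs and reverse proxies can cache the response by URL.
response = requests.get(
    "https://example.com/graphql",
    params={"extensions": json.dumps(
        {"persistedQuery": {"version": 1, "sha256Hash": query_hash}}
    )},
)
print(response.status_code)
```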
Client-driven data fetching
With GraphQL, the web browser (or, generally, the client) sends a query to the server, specifying the exact data it needs. While this can help reduce payload size, it puts the client in the "driving seat". That often leads to additional round trips: based upon the first response, additional data is often required for rendering, and it has to be fetched in one or more further requests to the server, increasing latency.
In contrast, when the server is in the "driving seat", it can efficiently run all queries and resolve additional data itself, and then send the resulting data over the slower network once.
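A hypothetical Python sketch of the difference (the endpoints are made up): the client-driven version blocks rendering on a second round trip, while the server-driven version crosses the slow network once.

```python
import requests  # both endpoints below are hypothetical

BASE = "https://example.com/api"

# Client in the driving seat: the first response reveals that more data is
# needed, so rendering waits for a second round trip over the slow network.
article = requests.get(f"{BASE}/articles/42").json()
author = requests.get(f"{BASE}/users/{article['author_id']}").json()

# Server in the driving seat: one purpose-built endpoint resolves the article
# and its author in-process, where lookups are cheap, and responds once.
page = requests.get(f"{BASE}/pages/article/42").json()
```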
Security
GraphQL queries can expose sensitive data if not properly secured. This can be mitigated by implementing proper authentication and authorization mechanisms. However, this can easily get very complex: since the server does not know the queries needed by the client, it has to handle every possible combination a client may request. Unfortunately, it's commonly rather easy for attackers to purposely write computationally very expensive GraphQL queries and send them to the server, thus opening the door for DoS or DDoS attacks.
Besides that, due to the complexity of the backend having to cover all possible combinations, the danger of data leaking accidentally becomes rather high.
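As a toy illustration of one mitigation, this sketch rejects overly nested queries before executing them; real servers do this on the parsed query AST and usually add cost analysis and rate limiting as well.

```python
# Naive depth guard: count brace nesting in the raw query string.
# Illustrative only; a production server analyzes the parsed AST instead.
MAX_DEPTH = 5

def query_depth(query: str) -> int:
    depth = max_depth = 0
    for char in query:
        if char == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif char == "}":
            depth -= 1
    return max_depth

hostile = "{ articles { comments { author { articles { comments { author { name } } } } } } }"
if query_depth(hostile) > MAX_DEPTH:
    print("rejected: query too deep")  # refuse before doing any expensive work
```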
The conclusion
GraphQL comes with a couple of issues, which are, as usual, solvable. That's a price one might be willing to pay in certain situations, if the benefits are worth it. So, is using GraphQL a good idea? As so often, it depends. But in my experience, it's more often not than it is.
Alternatives are RESTful
The typical alternative to GraphQL is a RESTful API. As usual, with Drupal there are a couple of good options:
- Drupal comes with JSON:API out of the box, which is a great feature to have. While it's a good fit in certain situations, it also faces some of the issues mentioned above, most notably "Client-driven data fetching" and "Loose contracts". (A minimal request sketch follows this list.)
- Developers may use Drupal's API to provide custom-coded RESTful endpoints for the client. That addresses all the concerns mentioned, but requires backend development time for every feature and, most notably, careful planning. This comes with the downside of frontend developers losing flexibility. (By the way, that flexibility is what GraphQL is loved for!)
- Configurable RESTful endpoints. In order to improve the development process and regain flexibility in the frontend, we developed a solution for providing custom RESTful endpoints that are configurable via Drupal, by frontend developers. For that, we improved the Custom Elements module, which is part of Lupus Decoupled Drupal, such that it integrates with Drupal's configuration system and provides a UI for customizing output by entity view mode. That way, in many situations, we can tick all the boxes while enabling frontend developers to work efficiently. I'll share more details about the new Custom Elements UI in a dedicated blog post later this week.
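For reference, here is the minimal JSON:API request sketch mentioned above, using Drupal core's standard /jsonapi paths (the site URL is hypothetical). Sparse fieldsets trim the payload, but notice that the client is still the one driving the data fetching:

```python
import requests  # the site URL is hypothetical; the /jsonapi paths ship with Drupal core

resp = requests.get(
    "https://example.com/jsonapi/node/article",
    params={"fields[node--article]": "title,path", "page[limit]": 10},
)
for item in resp.json()["data"]:
    print(item["attributes"]["title"])
```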
FSF Blogs: Free software in the EU needs your help! Join the international effort before September 20
FSF Events: Free Software Directory meeting on IRC: Friday, September 20, starting at 12:00 EDT (16:00 UTC)
Talking Drupal: Talking Drupal #467 - Config Actions System
Today we are talking about the Config Actions System, what it does, and how it helps with Drupal Recipes with guests Alex Pott and Adam Globus-Hoenich. We'll also cover the Events recipe as our module of the week.
For show notes visit: www.talkingDrupal.com/467
Topics
- Explain Config Actions
- Is this related to the Actions UI
- How are config actions used in Drupal
- How will the average user interact with Config Actions
- What does non-destructive mean
- Where did the Config Action system come from
- Future of the Config Action system
- How can people help out
- How does the Config Action system help with Drupal CMS
Guests
Alex Pott - alexpott
Adam Globus-Hoenich - phenaproxima
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Nate Dentzau - dentzau.com nathandentzau
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted to set up and configure a robust events system in your Drupal website, in just a few seconds? There’s a recipe for that.
- Module name/project name: Events
- Brief history
- How old: originally created in Mar 2013 as a distribution, but reborn as a recipe in July 2024
- Versions available: 1.0.0-alpha3, compatible with Drupal 10.3 and 11
- Maintainership
- Actively maintained
- Security coverage? - no stable release
- Documentation in the works
- Number of open issues: 1 open issue, which is a bug
- Usage stats: not tracked for recipes
- Maintainer(s): mandclu
- Module features and usage
- Listeners probably won’t be surprised to hear that Smart Date is at the heart of what you’ll get when you apply the Events recipe
- You will have an Event content type, and a view to list upcoming and past events
- The recipe will also set up add-to-calendar links on your event page, making it easy for your site visitors to be reminded of when your event will take place
- There are companion recipes to add a calendar view, to be able to associate locations (with maps), and to add event registration
- A modified version of the Events recipe has already been integrated into Drupal CMS, so it will be even easier to apply for a site based on that
- Internally it makes use of the createIfNotExists and setComponents config actions, which is why I thought it would be relevant to today’s discussion
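For a flavor of what those actions look like inside a recipe's recipe.yml, here is a minimal, hypothetical sketch (the content type and field names are invented for illustration, not taken from the actual Events recipe):

```yaml
name: 'Events (illustrative sketch)'
install:
  - smart_date
config:
  actions:
    node.type.event:
      # Create the Event content type only if the site doesn't already have it,
      # which is part of what makes applying a recipe non-destructive.
      createIfNotExists:
        name: Event
        type: event
    core.entity_view_display.node.event.default:
      # Place (or update) field components on the default view display.
      setComponents:
        - name: field_event_date
          options:
            type: smartdate_default
```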
FSF Blogs: Free software in the EU needs your help! Join the ongoing Digital Europe Freedom Programme consultation by September 20
mark.ie: Live Previews for LocalGov Microsites Design Changes
I had great fun today expanding what was a proof-of-concept module from last year into a very usable live preview module for LGD Microsites.
Plasma 6.2 Beta in KDE neon Testing Edition
Back from the fun of Akademy in Würzburg, we can now get to the important task of testing the Plasma 6.2 beta. It's now in KDE neon testing edition, which we build from the Git branches that will be used to make the 6.2 final release in 2.5 weeks' time. Grab it now, or if you don't have a machine to install it on, you can try the Docker images using the simple command `neondocker -p -e testing`.