Feeds
The Drop is Always Moving: Drupal 11.1.0 is now available! The first feature release of Drupal 11 improves the recipe system, introduces support for hooks written as classes, makes Workspaces more flexible, and enhances performance. Read more at https://www.drupal.org/blog/drupal-11-1-0
Drupal blog: Drupal 11.1.0 is now available
The first feature release of Drupal 11 improves the recipe system, introduces support for hooks written as classes, makes Workspaces more flexible and enhances performance.
Recipe system improvements
The Recipe system allows packages to be configured with dependencies in a repeatable way. Drupal 11.1 now allows recipes to take user input (for example, API keys for remote services). Recipes can now also use configuration actions to add new blocks, enable layout builder for content types, clone configuration entities, and so on.
Hooks can be written as classes
Drupal's unique hook system allows modifying forms, data updates, site processes, render structures, and even the ordering of other hooks. After long-running efforts by many contributors, it is now possible to also define hooks and hook implementations with object-oriented techniques that are more in line with modern PHP code design practices. This will also make Drupal's code easier to understand for PHP developers familiar with other projects. All runtime core hooks have been converted to object-oriented implementations.
With this new functionality, magic global functions like the following will no longer be needed:
function hook_entity_insert(EntityInterface $entity) {
  // DO STUFF
}

Instead, developers can use the new Hook attribute on methods:

class ExampleHooks {

  #[Hook('entity_insert')]
  public function entityInsert(EntityInterface $entity): void {
    // DO STUFF
  }

}

New icon management API
A dedicated API has been added to allow modules and themes to define icon packs. Within each pack is a series of icons, each with a unique identifier that the system can then use. Modules and themes can alter icon packs.
Workspaces user interface separated into its own module
As part of a larger plan to use workspaces for content moderation, the user interface of the Workspaces module was moved to a separate Workspaces UI module. For new sites, if you want to enable Workspaces with the user interface, you now need to install this module.
Improvements to the initial experience after installation
We revisited Drupal core's default configuration to better reflect most users' needs. In this release, date formats were made easier to read. The user registration process also now defaults to administrator-created accounts, in order to avoid new sites being flooded with spam accounts in the moderation queue. When creating a new node type, Drupal core will no longer automatically add a body field, allowing site builders to choose their own content model without having to delete unwanted defaults first, and reducing potential conflicts for platforms built on Drupal core such as Drupal CMS and the upcoming Experience Builder.
New views entity reference filter
A new generic entity reference views filter has been added, which makes it possible to render exposed views filters as a select list or autocomplete of available entities. This may now be used by contributed modules and will be enabled for core entity types in future releases.
Render caching for forms
Forms built with the Form API can now opt in to render caching, improving page loading performance in a variety of situations. We will be gradually opting forms in Drupal core into render caching, and may opt all forms into render caching by default in a future major release.
Improved browser and CDN caching for JavaScript and CSS
Drupal's asset aggregation algorithm has been improved to reduce variation in CSS and JavaScript aggregates. Differences between pages which may have produced different but similar aggregates in the past, for example because libraries were requested in a different order, will now result in a single file instead. This improves CDN cache hit rates and reduces the amount of JavaScript and CSS that visitors will download when visiting multiple pages on a site. This builds on several recent improvements to Drupal core's asset aggregation since Drupal 10.1 and also unblocks further improvements planned for future minor releases.
PHP 8.4 is supported
The PHP team is doing a fantastic job of improving the language and performance of PHP. PHP 8.4 was released in November, and Drupal 11.1 fully supports it.
Drupal CMS 1.0 will be based on Drupal 11.1
Drupal 11.1 will be the basis of Drupal CMS 1.0, which will be released on January 15, on Drupal's 24th birthday. Many of the underlying improvements introduced in Drupal core will help compose an improved user experience in Drupal CMS. The first release candidate of Drupal CMS was already based on Drupal 11.1 RC. Stay tuned!
Drupal 10.4 will be available soon
The next Long-Term Support (LTS) release of Drupal 10 will be released this week. Drupal 10 will be supported until the release of Drupal 12 in mid- to late 2026. Long-Term Support for Drupal 10 is managed with a new maintenance minor release every 6 months that receives twelve months of support. This allows the maintenance minor to adapt to evolving dependencies. It also gives sites more flexibility to move to Drupal 11 when they are ready.
The same will happen when Drupal 10 is end-of-life and Drupal 12 is released: Drupal 11 will transition to Long-Term Support, with its own maintenance minors every six months. This release schedule allows sites to move from one LTS version to the next if that is the best strategy for their needs.
Core maintainer team updates
Since Drupal 11.0, Adam Hoenich has stepped down from being a Migrate subsystem maintainer as he moved on to be a key committer for Drupal CMS. We thank Adam for his contributions!
Want to get involved?
If you are looking to make the leap from Drupal user to Drupal contributor, or you want to share resources with your team as part of their professional development, there are many opportunities to deepen your Drupal skill set and give back to the community. Check out the Drupal contributor guide. You are more than welcome to join us at DrupalCon Atlanta in March 2025 to attend sessions, network, and enjoy mentorship for your first contributions.
The Drop Times: Countdown to the Big Drop
Dear Readers,
The Drupal CMS release candidate made its debut at DrupalCon Singapore 2024, marking the beginning of an exciting new era for Drupal. This release offers a first look at what’s being called the most user-friendly version of Drupal yet. But this is just the beginning. The full launch of Drupal CMS v1 is set for January 15, 2025 — exactly one month away! With the countdown officially on, the Drupal community is gearing up for a wave of activity, excitement, and preparation leading up to the big day.
At The DropTimes, we’re ready to keep you plugged into every development. Over the next month, we’ll be bringing you exclusive insights from track leads, in-depth looks at each of the tracks, and timely updates on project progress. We’ll also be covering the many Drupal CMS launch parties taking place around the world. This isn’t just a software release — it’s a moment of celebration for the Drupal community and a glimpse into the future of the platform.
But we don’t want to do it alone — we want to hear from 'you'! Do you have thoughts on Drupal CMS or ideas for where it should head next? Are you planning a launch party? We want to know! If there’s a track you believe deserves more attention or a new feature you’d like to see, let’s get your voice out there. The DropTimes is here to amplify community voices and spark conversation. The next chapter for Drupal is about to begin, and together, we can help shape it. Email us at editor@thedroptimes.com. Stay tuned as we count down to January 15!
Let's take a look at the important stories from the last week.
Interview
- DrupalCon Singapore 2024 - A Look into the Key Insights and Perspectives Shared by Dries Buytaert at DrupalCon Singapore 2024
- Winners of the First-Ever Splash Awards Asia Announced at DrupalCon Singapore 2024
- Clock's Ticking: One Month Until Drupal 7 End-of-Life
- 2025 Nonprofit Summit: Drupal Association Calls for Breakout Leaders!
- Drupal Open University Makes Exciting Progress!
- Drupal 7 Security Updates Released Ahead of End-of-Life Deadline
- Florida DrupalCamp Unveils Proposed Session Lineup for 2025 Event
- Sponsorship Opportunities Open for DrupalCamp Finland 2025
- Greece Winter Sprint 2024: A Triumphant Gathering for the Drupal Community
- MidCamp 2025 Update: Bi-Weekly Planning Meetings Now on Wednesdays
- Events This Week: Dec 16 - 22, 2024
- LPI and OS JobHub Launch 2025 Open Source Professionals Job Survey
- SparkFabrik Hosts Event to Celebrate Drupal CMS Launch on January 15, 2025
- QED42 Introduces AI DXP to Simplify AI Integration and Streamline Workflows
We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.
To get timely updates, follow us on LinkedIn, Twitter and Facebook. You can also join us on Drupal Slack at #thedroptimes.
Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.
The Drop Times: The Dutch Government Works on Open Source with a Drupal Developers Day
Freelock Blog: Build a membership application system
Drupal, with the Events, Conditions, and Actions (ECA) module, can build sophisticated applications without a single line of custom code. You can build full applications using a handful of Drupal modules.
Real Python: Dictionaries in Python
Python dictionaries are a powerful built-in data type that allows you to store key-value pairs for efficient data retrieval and manipulation. Learning about them is essential for developers who want to process data efficiently. In this tutorial, you’ll explore how to create dictionaries using literals and the dict() constructor, as well as how to use Python’s operators and built-in functions to manipulate them.
By learning about Python dictionaries, you’ll be able to access values through key lookups and modify dictionary content using various methods. This knowledge will help you in data processing, configuration management, and dealing with JSON and CSV data.
By the end of this tutorial, you’ll understand that:
- A dictionary in Python is a mutable collection of key-value pairs that allows for efficient data retrieval using unique keys.
- Both dict() and {} can create dictionaries in Python. Use {} for concise syntax and dict() for dynamic creation from iterable objects.
- dict() is a class used to create dictionaries. However, it’s commonly called a built-in function in Python.
- .__dict__ is a special attribute in Python that holds an object’s writable attributes in a dictionary.
- Python dict is implemented as a hashmap, which allows for fast key lookups.
To get the most out of this tutorial, you should be familiar with basic Python syntax and concepts such as variables, loops, and built-in functions. Some experience with basic Python data types will also be helpful.
Get Your Code: Click here to download the free sample code that you’ll use to learn about dictionaries in Python.
Take the Quiz: Test your knowledge with our interactive “Python Dictionaries” quiz. You’ll receive a score upon completion to help you track your learning progress.
Getting Started With Python Dictionaries
Dictionaries are one of Python’s most important and useful built-in data types. They provide a mutable collection of key-value pairs that lets you efficiently access and mutate values through their corresponding keys:
Python

>>> config = {
...     "color": "green",
...     "width": 42,
...     "height": 100,
...     "font": "Courier",
... }

>>> # Access a value through its key
>>> config["color"]
'green'

>>> # Update a value
>>> config["font"] = "Helvetica"

>>> config
{'color': 'green', 'width': 42, 'height': 100, 'font': 'Helvetica'}

A Python dictionary consists of a collection of key-value pairs, where each key corresponds to its associated value. In this example, "color" is a key, and "green" is the associated value.
Dictionaries are a fundamental part of Python. You’ll find them behind core concepts like scopes and namespaces as seen with the built-in functions globals() and locals():
Python

>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, ...}

The globals() function returns a dictionary containing key-value pairs that map names to objects that live in your current global scope.
Python also uses dictionaries to support the internal implementation of classes. Consider the following demo class:
Python

>>> class Number:
...     def __init__(self, value):
...         self.value = value
...

>>> Number(42).__dict__
{'value': 42}

The .__dict__ special attribute is a dictionary that maps attribute names to their corresponding values in Python classes and objects. This implementation makes attribute and method lookup fast and efficient in object-oriented code.
You can use dictionaries to approach many programming tasks in your Python code. They come in handy when processing CSV and JSON files, working with databases, loading configuration files, and more.
Python’s dictionaries have the following characteristics:
- Mutable: The dictionary values can be updated in place.
- Dynamic: Dictionaries can grow and shrink as needed.
- Efficient: They’re implemented as hash tables, which allows for fast key lookup.
- Ordered: Starting with Python 3.7, dictionaries keep their items in the same order they were inserted.
The keys of a dictionary have a couple of restrictions. They need to be:
- Hashable: This means that you can’t use unhashable objects like lists as dictionary keys.
- Unique: This means that your dictionaries won’t have duplicate keys.
In contrast, the values in a dictionary aren’t restricted. They can be of any Python type, including other dictionaries, which makes it possible to have nested dictionaries.
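To make these rules concrete, here is a small illustrative snippet (the inventory data is made up for this example): strings and tuples are hashable and work as keys, lists are not, and values can be anything, including other dictionaries.

Python

>>> inventory = {
...     "apples": {"count": 3, "origin": "local"},  # a nested dictionary as a value
...     (2024, 12): "holiday stock",                # a tuple is hashable, so it works as a key
... }
>>> inventory["apples"]["count"]
3

>>> inventory[["a", "list"]] = "oops"  # a list is unhashable, so it can't be a key
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'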
It’s important to note that dictionaries are collections of pairs, so you can’t insert a key without its corresponding value, or vice versa.
Note: In some situations, you may want to add keys to a dictionary without deciding what the associated value should be. In those cases, you can use the .setdefault() method to create keys with a default or placeholder value.
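For instance, here is a minimal sketch of .setdefault() with a hypothetical settings dictionary: the method only creates the key when it’s missing, and it always returns the value stored under that key.

Python

>>> settings = {"color": "green"}

>>> # "font" doesn't exist yet, so it's created with the placeholder value
>>> settings.setdefault("font", "Courier")
'Courier'

>>> # "color" already exists, so its current value is kept and returned
>>> settings.setdefault("color", "red")
'green'

>>> settings
{'color': 'green', 'font': 'Courier'}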
Read the full article at https://realpython.com/python-dicts/ »
PyCharm: 7 Reasons You Should Use dbt Core in PyCharm
dbt Core is a modern data transformation framework. It doesn’t extract or load data and is only responsible for the T in the ELT (extract-load-transform) process. dbt connects to your data warehouse and helps you prepare your data so it can later be used to answer business questions.
In this blog post, we’ll talk about the top benefits of dbt and the advantages of using it in PyCharm Professional. To make the most of these features, you should be familiar with the framework. If you know SQL well, you’ll likely find it easy to use, and if you are a total novice in the field, you can use the dbt portal to get acquainted with it.
Why you should use dbt
- Modularity and code reusability – Transformations can be saved into modular, reusable models. For instance, in this example the model int_count_customer.sql has a reference to stg_day_customer.sql and reuses its code.
- Versioning – dbt projects can be stored in version control systems like Git or GitHub. This allows you to track changes, collaborate with other team members, and maintain a record of all transformations.
- Testing – dbt allows you to write tests for your data models easily and check whether the data has any duplicates or null values. Additionally, you can even create specific rules to test against, and you can perform tests on both the model and the project levels.
- Documentation – dbt auto-generates documentation for data models, ensuring that team members and stakeholders all understand the data lineage and model definitions in the same way.
To summarize, dbt brings best practices in engineering to the field of data analysis, allowing you to produce higher-quality results while providing you with a straightforward and intuitive workflow.
These benefits are just the tip of the iceberg when it comes to what the tool can do.
How PyCharm streamlines your dbt workflow
Having established the benefits of dbt, we can now turn to the 7 key reasons to use it in PyCharm:
1. User-friendly onboarding – PyCharm streamlines the initial setup. As demonstrated in this video, setting up a project and configuring the necessary settings is straightforward.
2. Unified workspace for databases and dbt – PyCharm’s integrated database plugin powered by JetBrains DataGrip makes handling SQL databases significantly easier. Since it’s compatible with all databases that dbt works with, you don’t have to worry about juggling multiple tools. You can focus on data modeling and instantly view outcomes all in one place. To cover even a small number of the plugin’s features would take hours, but luckily we have a nice set of webinars dedicated to PyCharm’s functionality for databases: Visual SQL Development with PyCharm.
3. Git and dbt integration – In one interface, you can easily clone the repo, track any changes, manage branches, resolve conflicts, and collaborate with teammates.
4. Autocompletion for your .yml and jinja-template SQL files – People love using PyCharm because of its smart autocompletion, which it, of course, offers for dbt as well.
5. Local history – This feature lets you undo recent changes if they cause problems. You can also compare different versions to see what was changed and check whether updates were made correctly.
6. AI Assistant – AI Assistant is really helpful, especially if you’re just starting with dbt Core. It is context-aware, and in addition to having it answer your questions in the AI chat, you can have it generate code and fix problems for you, streamlining your work with data models. It also saves you from worrying about what to write in commit messages by composing them for you.
7. Project navigation – PyCharm excels in project navigation, offering features like fast search functionality and the Go to Declaration feature, both of which allow you to navigate through your dbt models effortlessly.
That’s just a glimpse of the benefits PyCharm already offers for dbt, and our support is still in its early stages. We invite you to test it out and share your insights. Whether you have suggestions for features or want to let us know about areas for improvement, we’re eager to hear from you.
Get started with PyCharm by using the promo code dbt-PyCharm to get a 3-month free trial.
Redeem your code
Want to learn how to use dbt in PyCharm? Head to the documentation page to learn more about the IDE’s dbt support.
Eager to learn more about dbt in general? Take a look at this post on the experience of using dbt and this analysis of deeper dbt concepts by Pavel Finkelshteyn.
The Drop Times: QED42 Debuts AI-Powered Twig-to-SDC Module
CKEditor: CKEditor 5 introduces self-service licensing and version override for Drupal
Qt Creator 15 - CMake Update
Here are the new CMake features and fixes in Qt Creator 15:
Python Bytes: #414 Because we are not monsters
Zato Blog: HL7 FHIR Integrations in Python
HL7 FHIR, pronounced "fire", is a data model and message transfer protocol designed to facilitate the exchange of information among systems used in health care settings.
In such environments, a FHIR server will assume the role of a central repository of health records with other systems integrating with it, potentially in a hub-and-spoke fashion, thus letting the FHIR server become a unified and consistent source of data that would otherwise stay confined to a silo of each individual health information system.
While FHIR is the way forward, the current reality of health care systems is that much of the useful and actionable information is distributed and scattered among many individual data sources - paper-based directories, applications or databases belonging to the same or different enterprises - and that directly hampers progress towards delivering good health care. Anyone who has watched health providers copy-and-paste the same information from one application to another, without access to already existing data - not to mention people lacking an easy way to access their own data about themselves - can understand what the lack of interoperability looks like from the outside.
The challenges that integrators face are two-fold. On the one hand, the already existing systems, including software as well as medical appliances, were often not designed, and in many cases are still not being designed, for the contemporary inter-connected world. On the other hand, FHIR in itself is a relatively new technology, which means that it is not straightforward to re-use existing skills and competencies.
Zato is an open-source platform that makes it possible to integrate systems with FHIR using Python. Specifically, its support for FHIR enables quick on-boarding of integrators who may be new to health care interoperability, who are coming to FHIR with previous experience or interest in web development technologies, and who need an easy way to get started with and to navigate the complex landscape of health care integrations.
Connecting to FHIR servers
Outgoing FHIR connections are what allow Python-based services to communicate with FHIR servers. Throughout the rest of the chapter, the following definition will be used. It connects to a live, publicly available FHIR server.
Filling out the form below will suffice; there is no need for any server restarts. This principle, that restarts are not needed, applies throughout the platform: whenever you change any piece of configuration, it will be automatically propagated as necessary.
- Name: FHIR.Sample
- Address: https://simplifier.net/zato
- Security: No security definition (we will talk about security later)
- TLS CA Certs: Default bundle
In Python code, you obtain client connections to FHIR servers through self.out.hl7.fhir objects, as in the example below which first refers to the server by its name and then looks up all the patients in the server.
The structure of the Patient resource that we expect to receive can be found here.
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class FHIService1(Service):
    name = 'demo.fhir.1'

    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'FHIR.Sample'

        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # This is how we can refer to patients
            patients = client.resources('Patient')

            # Get all active patients, sorted by their birth date
            result = patients.sort('active', '-birthdate')

            # Log the result that we received
            for elem in result:
                self.logger.info('Received -> %s', elem['name'])

Invoking the service will store in logs the data expected:
INFO - Received -> [{'use': 'official', 'family': 'Chalmers', 'given': ['Peter', 'James']}]

For comparison, this is what the FHIR server displays in its frontend - the information is the same.
Storing data in FHIR servers
To save information in a FHIR server, create the required resources and call .save to permanently store the data in the server. Resources can be saved either individually (as in the example below) or as a bundle.
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class CommandsService(Service):
    name = 'demo.fhir.2'

    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'FHIR.Sample'

        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # First, create a new patient
            patient = client.resource('Patient')

            # Save the patient in the FHIR server
            patient.save()

            # Create a new appointment object
            appointment = client.resource('Appointment')

            # Who will attend it
            participant = {
                'actor': patient,
                'status': 'accepted'
            }

            # Fill out the information about the appointment
            appointment.status = 'booked'
            appointment.participant = [participant]
            appointment.start = '2022-11-11T11:11:11.111+00:00'
            appointment.end = '2022-12-22T22:22:22.222+00:00'

            # Save the appointment in the FHIR server
            appointment.save()

Learning what FHIR resources to use
The "R" in FHIR stands for "Resources" and the sample code above uses resources such as Patient or Appointment, but how does one learn what other resources exist and what they look like? In other words, how does one learn the underlying data model?
First, you need to get familiar with the spec itself which, in addition to textual information, offers visualizations of the data model. For instance, here is the description of the Observation object, including details such as all the attributes an Observation is composed of as well as their multiplicities.
Secondly, do spend time with FHIR servers such as Simplifier. Use Zato services to create test resources, look them up and compare the results with what the spec says. There is no substitute for experimentation when learning a new data model.
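As an illustration of that kind of experimentation, here is a rough sketch that follows the same pattern as the services above to create a test Observation. The resource contents below (a LOINC-coded heart rate reading) are only an example and not taken from this article.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class ObservationService(Service):
    name = 'demo.fhir.3'

    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'FHIR.Sample'

        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # Create a new Observation resource
            observation = client.resource('Observation')

            # Minimal attributes: a status and a coded description of what was observed
            observation.status = 'final'
            observation.code = {
                'coding': [{
                    'system': 'http://loinc.org',
                    'code': '8867-4',
                    'display': 'Heart rate'
                }]
            }

            # The measured value (illustrative only)
            observation.valueQuantity = {
                'value': 72,
                'unit': 'beats/minute'
            }

            # Save the observation in the FHIR server
            observation.save()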
FHIR security
Outgoing FHIR connections can be secured in several ways, depending on what a given FHIR server requires:
- With Basic Auth definitions
- With OAuth definitions
- With SSL/TLS. If the server is not a public one (e.g. it is in a private network with a private IP address), you may need to upload the server's certificate to Zato first before using SSL/TLS, because without it the server's certificate may be rejected.
While FHIR is what new deployments use, it is worth adding that there are still other HL7 versions frequently seen in integrations:
- Version 2, using its own MLLP protocol
- Version 3, using XML
Both of them can be used in Zato services, in both directions. For instance, it is possible to both receive HL7 v2 messages as well as to send them to external applications. It is also possible to send v2 messages using REST in addition to MLLP.
➤ Read more about using Python in API integrations
➤ Start the tutorial which will guide you how to design and build Python API services for interoperability, integrations and automation
➤ Visit the support center for more articles and FAQ
➤ Open-source iPaaS in Python
Handling incorrect warnings and a limited functionality in QML Code Editor in Qt Creator 14.0 and 15.0
We've recently discovered that the QML code editor in Qt Creator 14.0 and 15.0 is not working as expected out of the box. The QML Language Server integration is currently broken, and we’d like to address it openly and provide solutions for those affected.
Russ Allbery: Review: Finders
Review: Finders, by Melissa Scott
Series: Firstborn, Lastborn #1
Publisher: Candlemark & Gleam
Copyright: 2018
ISBN: 1-936460-87-4
Format: Kindle
Pages: 409

Finders is a far future science fiction novel with cyberpunk vibes. It is the first of a series, but the second (and, so far, only other) book of the series is a prequel. It stands alone reasonably well (more on that later).
Cassilde Sam is a salvor. That means she specializes in exploring ancient wrecks and ruins left behind by the Ancients and salvaging materials that can be reused. The most important of those are what are called Ancestral elements: BLUE, which can hold programming; GOLD, which reacts to BLUE instructions; RED, which produces actions or output; and GREEN, the rarest and most valuable, which powers everything else. Cassilde and her partner Dai Winter file claims on newly-discovered or incompletely salvaged Ancestor sites and then extract elemental material and anything else of value in their small salvage ship.
Cassilde is also dying. She has Lightman's, an incurable degenerative disease that can only be treated with ever-increasing quantities of GREEN. It's hard to sleep, hard to get warm, hard to breathe, and eventually she'll run out of money to pay for the GREEN and she'll die.
To push that day off into the future, she and Dai need work. The good news is that the wreckage of a new Ancestor sky palace was discovered in a long orbit and will create enough salvage work for every experienced salvor in the system. The bad news is that they're not qualified to bid on it. They need a scholar with a class-one license to bid on the best sections, and they haven't had a reliable scholar since their former partner and lover Summerland Ashe picked the opposite side in the Troubles and left the Fringe for the Entente, the more densely settled and connected portion of human space.
But, unexpectedly and suspiciously, Ashe may be back and offering to work with them again.
So, first, I love this setting. This is far from the first SF novel that is set in the aftermath of a general collapse of human civilization and revolving around discovering lost mysteries. Most examples of that genre are post-apocalyptic novels limited to Earth or the local solar system, but Kate Elliott's Unconquerable Sun comes immediately to mind. It's also not the first space archaeology series I've read; Kristine Kathryn Rusch's story series starting with "Diving into the Wreck" also came to mind. But I don't recall the last time I've seen the author sell the setting so effectively.
This is a world with starships and spaceports and clearly advanced technology, but it feels like a post-collapse society that's built on ruins. It's not just that technology runs on half-understood Ancestral elements and states fight over control of debris fields. It's also that the society repurposes Ancestral remnants in ways that both they and the reader know weren't originally intended, and that sometimes are more ingenious or efficient than how the Ancestors probably used them. There's a creative grittiness here that reminds me of good cyberpunk.
It's not just good atmospheric writing, though. Scott makes a world-building decision that is going to sound trivial when I say it, but that has brilliant implications for the rest of the setting. There was not just one collapse; there were two.
The Ancestor civilization, presumed to be the first human civilization, has passed into myth, quite literally when it comes to the stories around its downfall in the aftermath of a war against AIs. After the Ancestors came the Successors, who followed a similar salvage and rebuild approach and got as far as inventing their own warp drive technology that was based on but different than the Ancestor technology. Then they also collapsed, leaving their adapted technology and salvage operations layered over Ancestor sites. Cassilde's civilization is the third human starfaring civilization, and it is very specifically the third, neither the second nor one of dozens.
This has so many small but effective implications that improve this story. A fall happened twice, so it feels like a pattern that makes Cassilde's civilization paranoid, but it happened for two very different reasons, so there is room to argue against it being a pattern. Salvage is harder because of the layering of Ancestor and Successor activity. Successors had their own way of controlling technology that is not accessible to Cassilde and her crew but is also not how the technology was intended to be used, which sends small ripples of interesting complexity through the background. And salvors are competing not only against each other but also against Successor salvage operations for which they have fragmentary records. It's a beautifully effective touch.
Melissa Scott has been publishing science fiction for forty years, and it shows in this book. The protagonists are older characters: established professionals with resource problems but also social connections and an earned reputation, people who are trying to do a job and live their lives, not change the world. The writing is competent, deft, and atmospheric, with the confidence of long practice, but it also has the feel of an earlier era of science fiction. I mentioned the cyberpunk influence, which shows in the grittiness of the descriptions, the marginality of the characters in society, and the background theme of repurposing and reusing technology in unintended ways. This is the sort of book that feels solidly in the center of science fiction, without the genre mixing into either fantasy or romance that has become somewhat more common, and also without the dramatics of space opera (although the reader discovers that the stakes of this novel may be higher than anyone realized).
And yet, so much of this book is about navigating a complicated romantic relationship, and that's where the story structure felt a bit odd. Cassilde, Dai, and Ashe were a polyamorous triad (polyamory also shows up in Scott's excellent Roads of Heaven series), and much of the first third of the book deals with the fracturing of trust with Ashe and their renegotiation of that relationship given his return. This is refreshingly written as the thoughtful interaction of three adults who take issues of trust seriously, but that also means it's much less dramatic than it sounds, and that means this book starts exceptionally slow. Scott is going somewhere, and the slow build became engrossing around the midpoint of the book, but I had to fight to stick with it at the start.
About 80% of the way through this book, I had no idea how Scott was going to wrap things up in the pages remaining and was bracing myself for some sort of series cliffhanger. This is not what happens; the plot is not fully resolved in every detail, but it reaches a conclusion of sorts that does not mandate a sequel. I did think the end was a little bit unsatisfying, though, and I want another book that explores the implications of the ending. I think it would have to be a much different book, and the tonal shift might be stark.
I've had this book on my to-read list for a while and kept putting it off because I wasn't sure I was in the mood for something precarious and gritty. This turned out to be an accurate worry: this is literally a book about salvaging the pieces of something full of wonders inextricably connected to dangers. You have to be in a cyberpunk sort of mood. But I've never read a bad Melissa Scott book, and this is no exception. The simplicity and ALL-CAPSNESS of the Ancestral elements grated a bit, but apart from that, the world-building is exceptional and well worth the trip. Recommended, although be warned that, if you're like me, it may not grab you from the first page.
Followed by Fallen, but that book is a prequel that does not share any protagonists.
Content notes: disability and degenerative illness in a universe where magical cures are possible, so be warned if that specific thematic combination is not what you're looking for.
Rating: 7 out of 10
Krita Monthly Update - Edition 21
Welcome to the @Krita-promo team's November 2024 development and community update.
Development Report
Community Bug Hunt Ended
The Community Bug Hunt has ended, with dozens of bugs fixed and over a hundred more bug reports closed. Huge thanks to everyone who participated, and if you missed it, the plan is to make this a regular occurrence.
Can't wait for the next bug hunt to be scheduled? Neither will the bug reports! Help in investigating them is appreciated anytime!
Community Report
November 2024 Monthly Art Challenge Results
For the "Fluffy" theme, 22 members submitted 26 original artworks. And the winner is… Most "Fluffy" by @steve.improvthis, featuring three different fluffy submissions. Be sure to check out the other two as well!
The December Art Challenge is Open Now
For the December Art Challenge, @steve.improvthis has chosen "Tropical" as the theme, with the optional challenge of using new or unfamiliar brushes. See the full brief for more details, and find yourself a place in the sun!
Featured Artwork
Best of Krita-Artists - October/November 2024
Seven images were submitted to the Best of Krita-Artists Nominations thread, which was open from October 15th to November 11th. When the poll closed on November 14th, these five wonderful works made their way onto the Krita-Artists featured artwork banner:
Ocean | Krita by @Gurkirat_Singh
Winter palace by @Sad_Tea
Order by @Valery_Sazonov
Curly, 10-24 by @Celes
Afternoon Magic by @zeki
Ways to Help Krita
Krita is Free and Open Source Software developed by an international team of sponsored developers and volunteer contributors.
Visit Krita's funding page to see how user donations keep development going, and explore a one-time or monthly contribution. Or check out more ways to Get Involved, from testing, coding, translating, and documentation writing, to just sharing your artwork made with Krita.
The Krita-promo team has put out a call for volunteers; come join us and help keep these monthly updates going.
Notable Changes
Notable changes in Krita's development builds from Nov. 12 - Dec. 11, 2024.
Stable branch (5.2.9-prealpha):
- General: Fix rounding errors in opacity conversion, which prevented layered 50% brushstrokes from adding up to 100%. (bug report) (Change, by Dmitry Kazakov)
- General: Fix snapping to grid at the edge of the canvas. (bug report) (Change, by Dmitry Kazakov)
- General: Disable snapping to image center by default, as it can cause confusion. (bug report) (Change, by Dmitry Kazakov)
- Calligraphy Tool: Fix following existing shape in the Calligraphy Tool. (bug report) (Change, by Dmitry Kazakov)
- Layers: Fix "Copy into new Layer" to copy vector data when a vector shape is active. (bug report) (Change, by Dmitry Kazakov)
- Selections: Fix the vector selection mode to not create 0px selections, and to select the canvas before subtracting if there is no existing selection. (bug report, CCbug report) (Change, by Dmitry Kazakov)
- General: Add Unify Layers Color Space action. (Change, by Dmitry Kazakov)
- Layers: Don't allow moving a mask onto a locked layer. (Change, by Maciej Jesionowski)
- Linux: Capitalize the .AppImage file extension to match the convention expected by launchers. (bug report) (Change, by Dmitry Kazakov)
Bug fixes:
- Color Management: Update display rendering when blackpoint compensation or LCMS optimizations are toggled, not just when the display color profile is changed. (bug report) (Change, by Dmitry Kazakov)
Features:
- Text: Implement Convert to Shape for bitmap fonts. (Change, by Wolthera van Hövell)
- Filters: Add Fast Color Overlay filter, which overlays a solid color using a configurable blending mode. (Change, by Maciej Jesionowski)
- Brush Engines: Add Pattern option to "Auto Invert For Eraser" mode. (Change, by Dmitry Kazakov)
- Wide Gamut Color Selector Docker: Add option to hide the Minimal Shade Selector rows. (Change, by Wolthera van Hövell)
- Wide Gamut Color Selector Docker: Show the Gamut Mask toolbar when the selector layout supports it. (Change, by Wolthera van Hövell)
- Layers: Add a warning icon for layers with a different color space than the image. (Change 1, by Dmitry Kazakov, and Change 2, by Timothée Giet)
- Pop-Up Palette: Add an option to sort the color history ring by last-used instead of by color. (bug report) (Change, by Dmitry Kazakov)
- Export Layers Plugin: Add option to use incrementing prefix on exported layers. (wish bug report) (Change, by Ross Rosales)
Pre-release versions of Krita are built every day for testing new changes.
Get the latest bugfixes in Stable "Krita Plus" (5.2.9-prealpha): Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64
Or test out the latest Experimental features in "Krita Next" (5.3.0-prealpha); feedback and bug reports are appreciated: Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64
GNU Taler news: GNU Taler 0.14 released
Brian Okken: Testing some tidbits with pytest
Over 20 years of bug squashing
The open source project I have worked on for the longest time is KDE, and there more specifically Kate.
This means I have been looking at user bug reports for over 20 years now.
The statistics tell me our team has received more than 9,000 bug reports since around 2001 (just for Kate; this excludes the libraries like KTextEditor that we maintain, too).
Kate Bug Statistics
That is a bit more than one bug per day for over two decades.
And as the statistics show, especially in the last few years we were able to keep the open bug count down, which means we fixed a lot of them.
Given we are a small team, I think that is a nice achievement.
We have not just survived for over 20 years; we are still alive and kicking, and not just a still-compiling zombie project.
Thanks a lot to all the people who are contributing to this success!
Let’s keep this up in the next year and the ones following.
Freelock Blog: Automatically post to Mastodon or other remote APIs
The ECA Helper module provides an action to make an arbitrary HTTP POST to any URL. That's all that's necessary to post to Mastodon from Drupal, if you have a Mastodon account. I've been using this functionality to automatically post these advent calendar posts for the past week.
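For context, the request that the ECA action ends up making is just an authenticated HTTP POST to Mastodon's statuses endpoint. Here is a rough standalone sketch of the equivalent call in Python; the instance URL and access token are placeholders, not values from this post:

import requests

# Placeholder values: use your own instance URL and an access token
# created under your Mastodon account's Development settings.
INSTANCE = "https://mastodon.example"
ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"

def post_status(text: str) -> dict:
    """Publish a status on Mastodon and return the created status as JSON."""
    response = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"status": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    post_status("New advent calendar post is up!")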
Real Python: Build Enumerations of Constants With Python's Enum
Python’s enum module offers a way to create enumerations, a data type allowing you to group related constants. You can define an enumeration using the Enum class, either by subclassing it or using its functional API. This tutorial will guide you through the process of creating and using Python enums, comparing them to simple constants, and exploring specialized types like IntEnum, IntFlag, and Flag.
Enumerations provide benefits over simple constants by offering a structured, readable, and maintainable way to manage sets of constant values. They ensure type safety, prevent value reassignment, and facilitate iteration and member access. By learning to utilize Python’s enum types, you enhance your ability to write organized and efficient code.
By the end of this tutorial, you’ll understand that:
- An enum in Python groups related constants in a single data type using the Enum class.
- You create enumerations by subclassing Enum or using the module’s functional API.
- Using Enum over simple constants provides structure, prevents reassignment, and enhances code readability.
- Enum, IntEnum, IntFlag, and Flag differ in their support for integer operations and bitwise flags.
- Enums can work with data types like integers or strings, boosting their flexibility.
- You access enumeration members using dot notation, call notation, or subscript notation.
- You can iterate over enum members using loops or the .__members__ attribute.
To follow along with this tutorial, you should be familiar with object-oriented programming and inheritance in Python.
Get Your Code: Click here to download the free sample code that you’ll use to build enumerations in Python.
Getting to Know Enumerations in Python
Several programming languages, including Java and C++, have a native enumeration or enum data type as part of their syntax. This data type allows you to create sets of named constants, which are considered members of the containing enum. You can access the members through the enumeration itself.
Enumerations come in handy when you need to define an immutable and discrete set of similar or related constant values that may or may not have semantic meaning in your code.
Days of the week, months and seasons of the year, Earth’s cardinal directions, a program’s status codes, HTTP status codes, colors in a traffic light, and pricing plans of a web service are all great examples of enumerations in programming. In general, you can use an enum whenever you have a variable that can take one of a limited set of possible values.
Python doesn’t have an enum data type as part of its syntax. Fortunately, Python 3.4 added the enum module to the standard library. This module provides the Enum class for supporting general-purpose enumerations in Python.
Enumerations were introduced by PEP 435, which defines them as follows:
An enumeration is a set of symbolic names bound to unique, constant values. Within an enumeration, the values can be compared by identity, and the enumeration itself can be iterated over. (Source)
Before this addition to the standard library, you could create something similar to an enumeration by defining a sequence of similar or related constants. To this end, Python developers often used the following idiom:
Python

>>> RED, GREEN, YELLOW = range(3)

>>> RED
0
>>> GREEN
1

Even though this idiom works, it doesn’t scale well when you’re trying to group a large number of related constants. Another inconvenience is that the first constant will have a value of 0, which is falsy in Python. This can be an issue in certain situations, especially those involving Boolean tests.
Note: If you’re using a Python version before 3.4, then you can create enumerations by installing the enum34 library, which is a backport of the standard-library enum. The aenum third-party library could be an option for you as well.
In most cases, enumerations can help you avoid the drawbacks of the above idiom. They’ll also help you produce more organized, readable, and robust code. Enumerations have several benefits, some of which relate to ease of coding:
- Allowing for conveniently grouping related constants in a sort of namespace
- Allowing for additional behavior with custom methods that operate on either enum members or the enum itself
- Providing quick and flexible access to enum members
- Enabling direct iteration over members, including their names and values
- Facilitating code completion within IDEs and editors
- Enabling type and error checking with static checkers
- Providing a hub of searchable names
- Mitigating spelling mistakes when using the members of an enumeration
They also make your code robust by providing the following benefits:
- Ensuring constant values that can’t be changed during the code’s execution
- Guaranteeing type safety by differentiating the same value shared across several enums
- Improving readability and maintainability by using descriptive names instead of mysterious values or magic numbers
- Facilitating debugging by taking advantage of readable names instead of values with no explicit meaning
- Providing a single source of truth and consistency throughout the code
Now that you know the basics of enumerations in programming and in Python, you can start creating your own enum types by using Python’s Enum class.
Creating Enumerations With Python’s Enum
Python’s enum module provides the Enum class, which allows you to create enumeration types. To create your own enumerations, you can either subclass Enum or use its functional API. Both options will let you define a set of related constants as enum members.
In the following sections, you’ll learn how to create enumerations in your code using the Enum class. You’ll also learn how to set automatically generated values for your enums and how to create enumerations containing alias and unique values. To kick things off, you’ll start by learning how to create an enumeration by subclassing Enum.
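As a small preview, here is a minimal sketch of an enumeration created by subclassing Enum, reusing the seasons example mentioned earlier. It also shows the three access notations and the member iteration covered in this tutorial:

Python

>>> from enum import Enum

>>> class Season(Enum):
...     WINTER = 1
...     SPRING = 2
...     SUMMER = 3
...     FALL = 4
...

>>> # Dot, call, and subscript notation all reach the same member
>>> Season.WINTER
<Season.WINTER: 1>
>>> Season(1)
<Season.WINTER: 1>
>>> Season["WINTER"]
<Season.WINTER: 1>

>>> # Enumerations are iterable
>>> [member.name for member in Season]
['WINTER', 'SPRING', 'SUMMER', 'FALL']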
Read the full article at https://realpython.com/python-enum/ »