Feeds
Matt Layman: Deploy Your Own Web App With Kamal 2
HDR and color management in KWin, part 5: HDR on SDR laptops
This one required a few other features to be implemented first, so let’s jump right in.
Matching reference luminances
A big part of what a desktop compositor needs to get right with HDR content is to show SDR and HDR content properly side by side. KWin 6.0 added an SDR brightness slider for that purpose, but that’s only half the equation - what about the brightness of HDR content?
When we say “HDR”, usually that refers to a colorspace with the rec.2020 primaries and the perceptual quantizer (PQ) transfer function. A transfer function describes how to calculate a real brightness value from the “electrical” signal encoded in the content - PQ specifically has encoded values from 0 to 1 and brightness values from 0 to 10000 nits. For reference, your typical office monitor does around 300 or 400 nits at maximum brightness setting, and many newer phones can go a bit above 1000 nits.
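The PQ transfer function mentioned above is defined in SMPTE ST 2084. As an illustrative sketch (Python rather than the GLSL a compositor would actually use), decoding an encoded signal back to absolute luminance looks like this; the constants are from the specification:

```python
# SMPTE ST 2084 (PQ) EOTF: map an encoded signal in [0, 1] to an
# absolute luminance in nits. Constants from the ST 2084 specification.

M1 = 2610 / 16384          # ≈ 0.1593
M2 = 2523 / 4096 * 128     # ≈ 78.84
C1 = 3424 / 4096           # ≈ 0.8359
C2 = 2413 / 4096 * 32      # ≈ 18.85
C3 = 2392 / 4096 * 32      # ≈ 18.69

def pq_eotf(signal: float) -> float:
    """Convert a PQ-encoded value in [0, 1] to luminance in nits."""
    e = signal ** (1 / M2)
    num = max(e - C1, 0.0)
    return 10000.0 * (num / (C2 - C3 * e)) ** (1 / M1)
```

An encoded value of 1.0 decodes to 10000 nits and 0.0 to 0 nits, matching the range described above; a mid-grey signal of 0.5 already sits at only around 92 nits, which shows how strongly PQ allocates code values to the dark end.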
Now if we want to show HDR content on an HDR screen, the most straight forward thing to do would be to just calculate the brightness values, write them to the screen and be done with it, right? That’s what KWin did up to Plasma 6.1, but it’s far from ideal. Even if your display can show the full range of requested brightness values, you might want to adjust the brightness to match your environment - be it brighter or darker than the room the content was optimized for - and when there’s SDR things in HDR content, like subtitles in a video, that should ideally match other SDR content on the screen as well.
Luckily, there is a preexisting relationship between HDR and SDR that we can use: The reference luminance. It defines how bright SDR white is - which is why another name for it is simply “SDR white”.
As we want to keep the brightness slider working, we don’t map SDR content to the fixed reference luminance of any HDR transfer function; instead we map both SDR and HDR content to the SDR brightness setting. If we have an HDR video that uses the PQ transfer function, its reference luminance is 203 nits. If your SDR brightness setting is at 406 nits, KWin will simply multiply the brightness of the HDR video by a factor of 2.
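The mapping itself is just a ratio. A minimal sketch (function names are illustrative, not KWin’s; 203 nits is the PQ reference white from ITU-R BT.2408):

```python
# Scale content luminance so that its reference white lands on the
# user's SDR brightness setting. Both SDR and HDR content go through
# the same mapping, so they stay consistent side by side.

PQ_REFERENCE_NITS = 203.0

def map_to_sdr_setting(content_nits: float, sdr_brightness_nits: float,
                       reference_nits: float = PQ_REFERENCE_NITS) -> float:
    """Scale a luminance so reference white equals the SDR setting."""
    return content_nits * (sdr_brightness_nits / reference_nits)
```

With the slider at 406 nits, a PQ pixel at the 203-nit reference comes out at exactly 406 nits - the same as SDR white on that screen.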
This not only means that we can make SDR and HDR content fit together nicely on HDR screens, it also means we now know what to do when we have HDR content on an SDR screen: we map the reference luminance from the video to SDR white on the screen. That’s of course not enough to make it look nice though…
Tone mapping
Especially with HDR presented on an SDR screen, but also on many HDR screens, it will happen that the content brightness exceeds the display capabilities. To handle this, starting with Plasma 6.2, whenever the HDR metadata of the content says it’s brighter than the display can go, KWin will apply tone mapping.
Doing this tone mapping in RGB can result in changing the content quite badly though. Let’s take a look by using the most simple “tone mapping” function there is, clipping. It just limits the red, green and blue values separately to the brightness that the screen can show.
If we have a pixel with the value [2.0, 0.0, 2.0] and a maximum brightness of 1.0, it gets mapped to [1.0, 0.0, 1.0] - the same purple, just darker. But if the pixel has the values [2.0, 0.0, 1.0], then it also gets mapped to [1.0, 0.0, 1.0], even though the source color was significantly more red!
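The clipping behaviour is trivial to demonstrate (a few lines of illustrative Python, not compositor code):

```python
def clip_tonemap(rgb, max_brightness=1.0):
    """Per-channel clipping: the simplest possible 'tone mapping'."""
    return [min(c, max_brightness) for c in rgb]

# Pure purple survives: both over-range channels clip equally.
# But the reddish purple [2.0, 0.0, 1.0] produces the *same* output,
# losing its red dominance entirely.
```

Two clearly different source colors collapsing to one output is exactly the hue distortion described above.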
To fix that, KWin’s tone mapping uses ICtCp. This is a color space developed by Dolby, in which the perceived brightness (aka Intensity) is separated from the chroma components (Ct = blue-yellow, Cp = red-green), which is perfect for tone mapping. KWin’s shaders thus transform the RGB content to ICtCp, apply a brightness mapping function to only the intensity component, and then convert back to RGB.
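A full RGB→ICtCp round trip needs an LMS matrix and PQ encoding, but the core idea - compress only a brightness measure and keep the channel ratios - can be sketched much more simply. This is a crude stand-in, not KWin’s shader: it uses the maximum channel as the intensity proxy and a plain scale-down as the mapping curve:

```python
def tonemap_preserving_hue(rgb, display_peak=1.0):
    """Compress over-bright pixels uniformly so channel ratios (and
    therefore hue) survive - a simplified stand-in for the ICtCp
    approach, where only the intensity component gets remapped."""
    intensity = max(rgb)
    if intensity <= display_peak:
        return list(rgb)
    scale = display_peak / intensity
    return [c * scale for c in rgb]
```

Unlike per-channel clipping, the reddish purple [2.0, 0.0, 1.0] now maps to [1.0, 0.0, 0.5] - darker, but still twice as red as it is blue.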
The result of that algorithm looks like this:
RGB clipping | KWin 6.2’s tone mapping | MPV’s tone mapping

As you can see, there are still some color changes going on in comparison to MPV’s algorithm; this is partly because the tone mapping curve still needs some more adjustments, and partly because we also still need to do a similar mapping for colors that the screen can’t actually show. It’s already a large improvement though, and does better than the built-in tone mapping functionality in many HDR screens.
When tone mapping HDR content on SDR screens, we always end up reducing the brightness of the overall image, so that we have some brightness values to map the really bright highlights in the video to - otherwise everything just slightly over the reference luminance would look like an overexposed blob of color, as you can see in the “RGB clipping” image. There are ways around that though…
HDR on SDR laptop displays
To explain the reasoning behind this, it helps to first have a look at what even makes a display “HDR”. In many cases it’s just marketing nonsense, a label that’s put on displays to make them seem more fancy and desirable, but in others there’s an actual tangible benefit to it.
Let’s take OLED displays as an example, as it’s considered one of the display technologies where HDR really shines. When you drive an OLED at high brightness levels, it becomes quite inefficient, it draws a lot of power and generates a lot of heat. Both of these things can only be dealt with to a limited degree, so OLED displays can generally only be used with relatively low average brightness levels. They can go a lot brighter than the average in a small part of the screen though, and that’s why they benefit so much from HDR - you can show a scene that’s on average only 200 nits bright, with the sky in the image going up to 300 nits, the sun going up to 1000 nits and the ground only doing 150 nits.
Now let’s compare that to SDR laptop displays. In the case of most LCDs, you have a single backlight LED for the whole screen, and when you move the brightness slider, the power the backlight is driven at is changed. So there’s no way to make parts of the screen brighter than the rest on a hardware level… but that doesn’t mean there isn’t a way to do it in software!
When we want to show HDR content and the brightness slider is below 100%, KWin increases the backlight level to get a peak brightness that matches the relative peak brightness of that content (as far as that’s possible). At the same time it changes the colorspace description on the output to match that change: While the reference luminance stays the same, the maximum luminance of the transfer function gets increased in proportion to the increase in backlight brightness.
The result is that SDR white gets mapped to a reduced RGB value, which is meant to exactly counteract the increase in brightness that we’re applying with the backlight, while HDR content that goes beyond the reference luminance gets to use the full brightness range.
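The arithmetic behind this can be sketched in a few lines (illustrative names, not KWin’s API). The backlight is boosted by some factor, capped at full power, and SDR content is divided by the same factor so its physical luminance is unchanged:

```python
def backlight_boost(content_peak_rel: float, slider_level: float) -> float:
    """Backlight multiplier needed to reach the content's peak relative
    to reference white, capped at full backlight power.
    slider_level is the current brightness setting in (0, 1]."""
    return min(content_peak_rel, 1.0 / slider_level)

def sdr_compensation(boost: float) -> float:
    """RGB scale applied to SDR content to cancel the backlight boost."""
    return 1.0 / boost
```

With the slider at 50% and HDR content peaking at 2x reference, the backlight doubles and SDR pixels are halved: SDR white looks the same, while highlights get the extra headroom. Content peaking at 4x reference still only gets a 2x boost, since the backlight is already at full power.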
Increasing the backlight power of course doesn’t come without downsides; black levels and power usage both get increased, so this is only ever active if there’s HDR content on the screen with valid HDR metadata that signals brightness levels going beyond the reference luminance.
As always, capturing HDR content with a phone camera is quite difficult, but I think you can at least sort of see the effect:
without backlight adjustment | with backlight adjustment

This feature has been merged into KWin’s git master branch and will be available on all laptop displays starting with Plasma 6.3. I really recommend trying it for yourself once it reaches your distribution!
TestDriven.io: Avoid Counting in Django Pagination
FSF Anniversary Logo Contest
FSF Blogs: Forty years of commitment to software freedom
Forty years of commitment to software freedom
PreviousNext: PowerBI Dashboard: Addressing content currency
Post co-authored with NSW Resources. A critical issue with the management of content currency on our Drupal website, nsw.gov.au/nswresources, required an innovative solution to provide us with an automated content audit process.
by luhur.rizal / 6 November 2024

Due to the complexity and size of our Drupal web presence, ensuring each page was up-to-date and reviewed by the appropriate business unit became increasingly challenging. We needed a tool to track how long it had been since a page was reviewed, set specific periods for future reviews and easily identify the page owners for each section of the website. Furthermore, with the required frequency of daily updates to the site, the solution had to be ‘live’ to accurately reflect these changes.
Choosing the right solution
To tackle these challenges, we collaborated with our Drupal web development partner, PreviousNext, to create a live .csv file of all relevant web pages. This file included custom metadata detailing each page’s review frequency, page owner, date of last page update, date of last content review, publishing status and the next scheduled review date. By using this .csv file as a data source, we built a user-friendly content audit report dashboard in Microsoft Power BI.
The PowerBI dashboard provides executives with a high-level overview of which sections of the website are most in need of review. A complementary dashboard for ‘content champions’ offers a more granular view of the status of each individual page, enabling targeted content management.
Power BI implementation
Implementing this solution involved several steps:
Internal stakeholder consultation
We engaged with the various business units in the NSW Resources division to identify page owners and establish appropriate review periods for each section of the website.
Metadata assignment
Metadata bulk-uploaded to the pages included the custom metadata fields created for the project, such as review periods and page owners.
Data manipulation in PowerBI
Data from the .csv file was manipulated within PowerBI to ensure that columns were in the correct format. We created a 'Review status' column based on the next date of review to provide clear visibility. We also filtered out any unpublished or archived content to make the report more streamlined.
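The derived 'Review status' logic is straightforward to express outside PowerBI too. The following Python sketch is purely illustrative - the field names, thresholds and colour labels are assumptions, not the actual NSW Resources schema:

```python
# Hypothetical sketch of a traffic-light review status: given a page's
# last review date and its review period, compute the next review date
# and classify it. The 30-day "due soon" threshold is an assumption.
from datetime import date, timedelta

def review_status(last_review: date, period_days: int, today: date) -> str:
    next_review = last_review + timedelta(days=period_days)
    if today > next_review:
        return "red"       # overdue for review
    if (next_review - today).days <= 30:
        return "amber"     # review due within a month
    return "green"         # up to date
```

Each page row in the .csv would carry the inputs; the dashboard then only needs to colour by the resulting label.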
PowerBI build
Using the dataset from the website’s metadata and Google Analytics, we built a comprehensive dashboard in PowerBI Desktop and then uploaded it to PowerBI Service for broader access. Live links to the web pages were integrated into the dashboards for easy navigation.
Executive overview report
We developed a high-level summary report that shows how many pages each business unit is responsible for and includes Google Analytics page views from the past 30 days.
Content audit report
This report provides filtering by review status and sub-areas within each business unit, offering direct links to the listed web pages and conditional formatting utilising a traffic light system for review status indicators.
Overall tracker
Designed to be used exclusively by the NSW Resources website team, this site-wide and document overview provides an overall tracker, including documents, events, and articles that are not part of other reports.
Internal integrations
PowerBI reports were integrated into Microsoft Teams and the SharePoint intranet, facilitating use across the business.
Internal work request form
The form was updated to distinguish whether a web update is part of a comprehensive content audit review, therefore requiring a review period reset, or just a minor adjustment that means the review status remains unchanged.
A content audit that works for everyone
The project required extensive consultation to define the scope and needs of each business unit. Following this, identifying the correct page owners, along with setting appropriate review periods, posed significant challenges.
As business units sometimes want entire website sections to be marked as reviewed with a change in the review period, the Drupal-side dashboard allowed for bulk changes to both owners and review periods by uploading a revised version of the .csv file, saving substantial time.
Understanding the correct licensing requirements for PowerBI was another challenge. After consulting with our internal IT team, a group workspace was set up under an enterprise agreement, and an individual licence was obtained for the team member managing the dashboards.
Testing the PowerBI dashboard
To ensure effectiveness, the solution was initially tested in a development environment that mirrored the production site. This approach allowed us to test the limitations and user experience prior to going live. During this phase, we tested the bulk upload of the .csv file to update page metadata.
A soft launch of the content audit dashboard provided valuable insights, such as the realisation that a three-month review period was too short, given the number of pages each business unit manages.
As a result of this testing period, we made minor adjustments, such as requiring a defined review period for each page and allowing users to opt-out of updates considered a ‘review’ for reporting purposes. This might include, for example, correcting a typo, which doesn’t constitute a page review in the context of the content audit dashboard.
Transforming content auditing
This solution significantly enhanced reporting capabilities across NSW Resources, reducing the need for manual intervention. Page updates are now reflected in the report dashboard automatically.
The PowerBI dashboards offer real-time updates and clear visibility of page ownership and review status, making it easier for business units to manage their content.
Business units can independently track the currency of their pages without needing data from the digital team, streamlining the process and increasing efficiency.
Future plans for the dashboard
The solution will continue to evolve. We plan to use the work done on the PowerBI integration to inform future website improvements with the potential for further Google Analytics data integration.
A document audit is currently a separate and opt-in process for business units, but future plans may involve greater integration.
Conclusion
Overall, this innovative solution addresses a critical need for NSW Resources by providing a robust, automated and user-friendly content audit process that adapts to the dynamic nature of our Drupal website.
PyCoder’s Weekly: Issue #654 (Nov. 5, 2024)
#654 – NOVEMBER 5, 2024
View in Browser »
What goes into building a spreadsheet application in Python that runs in the browser? How do you make it launch quickly, and where do you store the cells of data? This week on the show, we speak with Chris Laffra about his project, PySheets, and his book “Communication for Engineers.”
REAL PYTHON podcast
Python 3.13 included a new version of the REPL which has the ability to define keyboard shortcuts. This article shows you how to create one and warns you about potential hangups.
TREY HUNNER
Say goodbye to managing failures, network outages, flaky endpoints, and long-running processes. Temporal ensures your code never fails. Period. PLUS, you can get started today on Temporal Cloud with $1,000 free credits on us →
TEMPORAL TECHNOLOGIES sponsor
To better understand just where the performance cost of running tests comes from, Anders ran a million empty tests. This post talks about what he did and the final results.
ANDERS HOVMOLLER
Currently, CPython signs its artifacts with both PGP and Sigstore. Removing the PGP signature has been proposed, but that has implications: Sigstore is still new enough that many Linux distributions don’t support it yet.
JOE BROCKMEIER
In this video course, you’ll learn what magic methods are in Python, how they work, and how to use them in your custom classes to support powerful features in your object-oriented code.
REAL PYTHON course
As GenAI and LLMs rapidly evolve, the impact of data leaks and unsafe AI outputs makes it critical to secure your AI infrastructure. Learn how MLOps and ML Platform teams can use the newly launched Guardrails Pro to secure AI operations — enabling faster, safer adoption of LLMs at scale →
GUARDRAILS sponsor
In the real world, things decay over time. In the digital world things get kept forever, and sometimes that shouldn’t be so. Designing for deletion is hard.
ARMIN RONACHER
Bite code! does their monthly Python news wrap-up. Check out stories on 3.13, proposed template strings, dependency groups in pyproject.toml, and more.
BITE CODE!
This project uses a computer vision solution to automate retail product inventory, using YOLOv8 and image embeddings for precise detection.
ALBERT FERRÉ • Shared by Albert Ferré
Context managers enable you to create “template” code with initialization and clean up to make the code that uses them easier to read and understand.
JUHA-MATTI SANTALA
This post celebrating ten years of Django Girls talks about how it got started, what they’re hoping to do, and how you can get involved.
DJANGO GIRLS
This quick TIL post talks about five useful pytest options that let you control what tests to run with respect to failing tests.
RODRIGO GIRÃO SERRÃO
This post shows you how to return values from coroutines that have been concurrently executed using asyncio.gather().
JASON BROWNLEE
This list contains the recorded talks from the PyBay 2024 conference.
YOUTUBE video
CRISTIANOPIZZAMIGLIO.COM • Shared by Cristiano Pizzamiglio
jamesql: In-Memory NoSQL Database in Python

Events

Weekly Real Python Office Hours Q&A (Virtual)
November 6, 2024
REALPYTHON.COM
November 7, 2024
MEETUP.COM
November 7, 2024
SYPY.ORG
November 9, 2024
MEETUP.COM
November 12, 2024
PITERPY.COM
November 14 to November 16, 2024
PYCON.SE
November 16 to November 17, 2024
PYCON.HK
November 16 to November 17, 2024
PYCON.JP
November 16 to November 18, 2024
PYTHON.IE
Happy Pythoning!
This was PyCoder’s Weekly Issue #654.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Droptica: Product Search Engine in Drupal with Apache Solr Integration - How-to Guide
Product search is a key function in e-commerce today. This article will show how to create an advanced product search engine based on Drupal and its integration with Apache Solr. By combining Drupal, Droopler installation profile, and Solr, a powerful tool can be created to make it easier for customers to navigate and search large data sets faster. I encourage you to read the blog post or watch the video in the “Nowoczesny Drupal” series (the video is in Polish).
FSF Events: Free Software Directory meeting on IRC: Friday, November 8, starting at 12:00 EST (17:00 UTC)
Real Python: Introduction to Web Scraping With Python
Web scraping is the process of collecting and parsing raw data from the Web, and the Python community has come up with some pretty powerful web scraping tools.
The Internet hosts perhaps the greatest source of information on the planet. Many disciplines, such as data science, business intelligence, and investigative reporting, can benefit enormously from collecting and analyzing data from websites.
In this video course, you’ll learn how to:
- Parse website data using string methods and regular expressions
- Parse website data using an HTML parser
- Interact with forms and other website components
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Specbee: How to fix SEO rankings after your Drupal website migration
Real Python: Quiz: Variables in Python: Usage and Best Practices
In this quiz, you’ll test your understanding of Variables in Python: Usage and Best Practices.
By working through this quiz, you’ll revisit how to create and assign values to variables, change a variable’s data type dynamically, use variables to create expressions, counters, accumulators, and Boolean flags, follow best practices for naming variables, and create, access, and use variables in their scopes.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Performance of data import in LabPlot
In many cases, importing data into LabPlot for further analysis and visualization is the first step in the application:
LabPlot supports many different formats (CSV, Origin, SAS, Stata, SPSS, MATLAB, SQL, JSON, binary, OpenDocument Spreadsheets (ods), Excel (xlsx), HDF5, MQTT, Binary Logging Format (BLF), FITS, NetCDF, ROOT (CERN), LTspice, Ngspice) and we plan to add support for even more formats in the future. All of these formats have their reasons for existence as well as advantages and disadvantages. However, the performance of reading the data varies greatly between the different formats and also between the different CPU generations. In this post, we’ll show how long it takes to import a given amount of data in four different formats – ASCII/CSV, Binary, HDF5, and netCDF.
This post is not about promoting any of the formats, nor is it about doing very sophisticated measurements with different amounts and types of data and extensive CPU benchmarking. Rather, it’s about what you can (roughly) expect in terms of performance on the new and not so new hardware with the current implementation in LabPlot.
For this exercise, we import a data set with 1 integer column and 5 columns of float values (Brownian motion for 5 “particles”, one integer column for the index) with 50 million rows, which results in 300 million numerical values:
We take 6 measurements for each format, ignore the first measurement (it is almost always an outlier, since the kernel’s disk cache makes subsequent reads of the same file faster), and calculate the averages:
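That measurement protocol can be sketched in a few lines (illustrative Python, not LabPlot code):

```python
# Average import timings while discarding the first, cold-cache run:
# the first read hits the disk, subsequent reads are served from the
# kernel's page cache and are therefore systematically faster.

def import_time_average(timings_sec):
    if len(timings_sec) < 2:
        raise ValueError("need a warm-up run plus at least one measurement")
    warm = timings_sec[1:]          # drop the cold-cache outlier
    return sum(warm) / len(warm)
```

For example, a run of [12.0, 5.0, 5.2, 4.8, 5.1, 4.9] seconds averages to 5.0 s - the 12 s cold read would otherwise dominate the mean.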
As expected, the file formats that deal with binary representation internally (Binary, HDF5, NetCDF) provide significantly better performance compared to ASCII, and this difference becomes larger the slower the CPU is. The performance of HDF5 and NetCDF is almost the same because the newer version of NetCDF is based on HDF5.
The implementation in the data import code is straightforward. Ignoring for a moment the complexity of the different options affecting the behavior of the parser, the different data types and other subtleties, once everything is set up it’s just a matter of iterating over the data, parsing it and converting it into the internal structures. The logic inside the loop is fixed, and linear behavior with respect to the total number of values to read is expected. This expectation is confirmed using the same CPU (we took the fastest CPU from the table above) and varying the total number of rows with a fixed number of columns:
The performance of the import is even more critical when dealing with external data that is frequently modified. In order to provide a smooth visualization in LabPlot for such “live data”, it’s important to optimize all the steps involved here, like the import of the new data itself, as well as the recalculation in the algorithms (smoothing, etc.) and in the visualization part. For the next release(s), we’re working to further optimize the implementation to handle more performance-critical scenarios. The results of these activities, funded by the NLnet grant, will be the subject of a dedicated post soon.
Talk Python to Me: #484: From React to a Django+HTMX based stack
Tryton News: Tryton Release 7.4
We are proud to announce the 7.4 release of Tryton .
This release provides many bug fixes, performance improvements and some fine tuning.
You can give it a try on the demo server, use the docker image or download it here.
As usual upgrading from previous series is fully supported.
Here is a list of the most noticeable changes:
Changes for the User

Clients

The Many2Many widget now has a restore button to revert the removal of records before saving.
The CSV export window stays open after the export is done so you can refine your export without having to redo all of the configuration.
It also supports exporting and importing translatable fields with a language per column.
The error messages displayed when there is a problem with the CSV import have been improved to include the row and column number of the value that caused the error.
The management window for the favourites has been removed and replaced by a simple “last favorite first” order.
The focus goes back to the search entry after performing a search/refresh.
You can now close a tab by middle clicking on it (as is common in other software).
Web Client

The left menu and the attachment preview can now be resized so the user can make them the optimal size for their screen.
Accounting

The minimal chart of accounts has been replaced by a universal chart of accounts which is a good base for IFRS and US GAAP.
It is now possible to copy an accounting move from a closed period. The closed period will be replaced by the current period after accepting the warning.
The payments are now numbered to make it easier to identify them inside the application.
An option has been added to the parties to allow direct debits to be created based on the balance instead of the accounting lines.
We’ve added a button on the Stripe payments and Stripe and Braintree customers to allow an update to be forced. This helps when fixing missed webhooks.
When a stock move is cancelled, the corresponding stock account move is now cancelled automatically.
But it is now no longer possible to cancel a done stock move that has been included in a calculation used for Anglo-Saxon accounting.
It is now possible to deactivate an agent so that they are no longer used for future orders.
Company

It is now possible to add a company logo. This is then displayed in the header of generated documents.
Incoterm

A warning is now raised when the incoterm of a shipment is different from the original document (such as the sale or purchase).
Party

We’ve added more identifiers for parties like the United Kingdom Unique Taxpayer Reference, Taiwanese Tax Number, Turkish tax identification number, El Salvador Tax Number, Singapore’s Unique Entity Number, Montenegro Tax Number and Kenya Tax Number.
Product

We’ve added a wizard to manage the replacement of products. Once there is no more stock of the replaced product in any of the warehouses, the stock on all pending orders is replaced automatically.
A description can now be set for each product image.
There is now a button on the price list form to open the list of lines. This is helpful when the price list has a lot of lines.
Production

It is now possible to cancel a done production. All its stock moves are then cancelled.
The Bill of Materials now has an auto-generated internal code.
Purchase

The wizard to handle exceptions has been improved to clearly display the list of lines to recreate and the list of lines to ignore.
The menu entry Parties associated to Purchases has been removed in favour of the per party reporting.
The purchase amendment now supports amending the quantity of a purchase line using the secondary unit.
Quality

It is now no longer possible to delete non-pending inspections.
Sale

The wizards to handle exceptions have been improved to clearly display the list of lines to recreate and the list of lines to ignore.
The menu entry Parties associated to Sales has been removed in favor of the per party reporting.
A warning is now raised when the user tries to submit a complaint for the same origin as an existing complaint.
The reporting can be grouped per promotion.
From a promotion, it is now possible to list the sales related to it.
The coupon number of a promotion can now be reused once the previous promotion has expired.
The sale amendment now supports amending the quantity of a sale line using the secondary unit.
Stock

It is now possible to cancel a done shipment. When this happens the stock moves of the shipment are cancelled.
The task to reschedule late shipments now includes any shipment that is not yet done.
The supplier shipments no longer have a default planned date.
The customer shipments now have an extra state, Shipped, before the Done state.
The lot trace now shows the inventory as a document.
The package weight and the warehouse are now criteria that can be used when selecting a shipping method.
Changes for the System Administrator

The clients automatically retry 5 times on a 503 Service Unavailable response. They respect the Retry-After value if it is set in the response header. This is useful when performing short maintenance on the server without causing an interruption for the users.
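The retry behaviour described here can be sketched as follows - an illustrative Python outline, not the actual Tryton client code:

```python
# Retry up to 5 times on HTTP 503, honouring the Retry-After header
# when the server sets one. `send` stands in for whatever performs the
# actual request and returns (status_code, headers, body).
import time

MAX_RETRIES = 5

def request_with_retry(send, default_wait=1.0):
    for attempt in range(MAX_RETRIES + 1):
        status, headers, body = send()
        if status != 503:
            return status, body
        if attempt < MAX_RETRIES:
            wait = float(headers.get("Retry-After", default_wait))
            time.sleep(wait)
    return status, body   # still 503 after exhausting the retries
```

During a short maintenance window the server answers 503 with a Retry-After hint, the client waits and retries, and the user never sees an error as long as the server comes back within the retry budget.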
The scheduled tasks now show when they are running and prevent the user from editing them (as they are locked anyway).
We also store their last duration for a month by default. So the administrator can analyze and find slow tasks.
It is now possible to configure a license key for the TinyMCE editor.
Also TinyMCE has been updated to version 7.
It is now possible to configure the command to use to convert a report to a different format. This allows the use of an external service like document-converter.
Accounting

The Accounting Party group has been merged into the Accounting group.
We now raise a warning when the user is changing one of the configured credentials used on external services. This is to prevent accidental modification.
Document Incoming

It is now possible to set a maximum size for the content of the document incoming requests.
Inbound Email

It is now possible to set a maximum size for the inbound email requests.
Web Shop

There is now a scheduled task that updates the cache that contains the product data feeds.
Changes for the Developer

Server

The ORM supports SQL Range functions and operators to build exclusion constraints. This allows, for example, the use of non-overlapping constraints using an index.
On PostgreSQL the btree_gist extension may be needed, otherwise the ORM will fall back to locking the table when querying.
The SQLite backend adds simple SQL constraints to the table schema.
The relational fields with a filter are no longer copied by default. This was a frequent source of bugs as the same relational field without the filter was already copied so it generated duplicates.
We’ve added a sparkline tool to generate textual sparklines. This allows the removal of the pygal dependency.
The activate_modules from testing now accepts a list of setup methods that are run before taking the backup. This speeds up any other tests which restore the backup as they then do not need to run those setup methods.
The backend now has a method to estimate the number of rows in a table. This is faster than counting when we only need an estimate, for example when choosing between a join and a sub-query.
We’ve added a ModelSQL.__setup_indexes__ method that prepares the indexes once the Pool has been loaded.
It is now possible to generate many sequential numbers in a single call. This allows, for example, a group of invoices to be numbered with a single call.
The backend now uses JSONB by default for MultiSelection fields. It was already supported, but the database needed to be altered to activate the feature.
You can now define the cardinality (low, normal or high) for the index usage. This allows the backend to choose an optimal type of index to create.
We now have tools that apply the typing to columns of an SQLite query. This is needed because SQLite doesn’t do a good job of supporting CAST.
The RPC responses are now compressed if their size is large enough and the client accepts it.
The ModelView._changed_values and ModelStorage._save_values are now methods instead of properties. This makes it easier to debug errors because AttributeError exceptions are no longer hidden.
The scheduled task runner now uses a pool of processes for better parallelism and management. Only the running task is now locked.
We’ve added an environment variable TEST_NETWORK so we can avoid running tests that require network access.
There is now a command line option for exporting translations and storing them as a po file in the corresponding module.
Tryton sets the python-format flag in the po file for the translations containing python formats. This allows Weblate (our translation service) to check if the translations keep the right placeholders.
The payment amounts are now cached on the account move line to improve the performance when searching for lines to pay.
The payment amounts now have to be greater than or equal to zero.
Only purchase lines of type line can be used as an origin for a stock move.
Only sale lines of type line can be used as an origin for a stock move.
The fields from the Sale Shipment Cost Module are now all prefixed with sale_.
Cancelled moves are no longer included in the shipment and package measurements.
Django Weblog: Django bugfix release issued: 5.1.3
Today we've issued the 5.1.3 bugfix release.
The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.
KDE Plasma 6.2.3, Bugfix Release for November
Tuesday, 5 November 2024. Today KDE releases a bugfix update to KDE Plasma 6, versioned 6.2.3.
Plasma 6.2 was released in October 2024 with many feature refinements and new modules to complete the desktop experience.
This release adds two weeks' worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:
- Bluedevil: Correct PIN entry behavior. Commit.
- KWin: Backends/drm: don't set backlight brightness to 1 in HDR mode. Commit. Fixes bug #495242
- KDE GTK Config: Gracefully handle decoration plugin failing to load. Commit.
Talking Drupal: Talking Drupal #474 - Revolt Event Loop
Today we are talking about the Revolt Event Loop, what it is, and why it matters, with guest Alexander Varwijk (farvag). We’ll also cover IEF Complex Widget Dialog as our module of the week.
For show notes visit: https://www.talkingDrupal.com/474
Topics
- What is an event loop
- Why does Drupal need an event loop
- What will change in core to implement this
- What problem does this solve
- Does this make Cron cleaner and long running processes faster
- What impact will this have on contrib
- How would contrib use this loop
- What does this mean for database compatibility
- What inspired this change
- Test instability
- Why Revolt
- Will this help with Drupal AI
- Adopt the Revolt event loop for async task orchestration
- revoltphp/event-loop was added as a dependency to Drupal Core
- Add "EventLoop::run" to Drupal Core
- Migrate BigPipe and the Renderer code that's currently built with fibers
- Revolt Playground that shows converting some Fiber implementations from Drupal to the Event Loop
- DrupalCon Barcelona Talk about "Why Async Drupal a Big Deal Is"
- Async PHP libraries
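For readers who don't write PHP, the core idea behind Revolt, a single loop interleaving many cooperative tasks on one thread, can be sketched with Python's asyncio. This is an analogy only, not Revolt's actual API:

```python
import asyncio

# Analogy only: Revolt is a PHP event loop, but the core idea is the
# same as asyncio: tasks yield control at await points, and one loop
# resumes whichever task is ready next.

async def task(name, delay, log):
    await asyncio.sleep(delay)  # yields control back to the loop
    log.append(name)

async def main():
    log = []
    # Both tasks run concurrently on a single thread; the loop resumes
    # the one whose timer fires first.
    await asyncio.gather(task('fast', 0.01, log), task('slow', 0.02, log))
    return log

print(asyncio.run(main()))  # prints ['fast', 'slow']
```

The same principle is what lets BigPipe and the Renderer interleave work instead of blocking on each piece sequentially.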
Alexander Varwijk - alexandervarwijk.com Kingdutch
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Joshua "Josh" Mitchell - joshuami.com joshuami
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted to use Inline Entity Forms but have the dependent form open in a dialog? There’s a module for that.
- Module name/project name:
- IEF Complex Widget Dialog
- Brief history
- How old: created in Mar 2020 by dataweb, though recent releases are by Chris Lai (chrisck), a fellow Canadian
- Versions available: 2.1.1 and 2.2.2, the latter of which is compatible with Drupal 8.8 or newer, all the way up to Drupal 11
- Maintainership
- Actively maintained, latest release in the past month
- Number of open issues: 4 open issues, none of which are bugs against the current version
- Usage stats:
- 273 sites
- Module features and usage
- When you install the module, your Inline Entity Form widget configuration will have a new checkbox, to “Enable Popup for IEF”
- Includes specialized handling for different kinds of entities, like nodes, users, and taxonomy terms
- Will handle not just the creation forms, but editing entities, and also duplicating or deleting entities
- Not something you would always need, but it can be very useful when the entity form you want to use, or even the parent form, is complex
- I should also add that IEF supports form modes, so often I’ll create an “embedded” form mode that exposes fewer elements, for example hiding the fields for URL alias, sticky, and so on. So I would start there, but if the content creation experience still feels complex, then IEF Complex Widget Dialog might be a nice way to help