Desktop icons are surprisingly hard!
I spent the past three weeks refactoring and fixing legacy code (the oldest of which was from 2013) that handled the positioning of Plasma desktop icons, and how that data was saved and loaded.
Here's the merge request if you're curious: plasma-desktop: Refactor icon positioner saving and loading
The existing code worked sometimes, but there were oddities like race conditions (icon positioning happened in a weird order) and backend code mixed in with frontend code.
Now, I am not blaming anyone for this. Code has a tendency to get a bit weird, especially over long periods of time, and especially in open source projects where anyone can tinker with it.
You know how wired earbuds always, always get tangled when you put them in a drawer or your pocket for a few seconds? Codebases do the exact same thing when multiple people are working on them, fixing each others' bugs. Everyone has a different way of thinking, so it's only natural that things get a bit tangled up over time.
So sometimes you need someone to look at the tangled codebase and try to clear it up a bit.
Reading code is the hardest part
When going through old code, especially code with barely any comments, it can take a very long time to understand what is actually going on. I honestly spent most of my time trying to understand how the thing even works: what is called when, where the icon positions are updated, and so on.
When I finally had some understanding of what was happening, I could start cleaning things up. I renamed a lot of the old methods to be (hopefully) more descriptive, and moved backend code, like saving icon positions, from the frontend back to the backend.
Screens and icons
Every screen (PC monitor, TV…) tends to have its own quirks. Some, when connected through a DisplayPort adapter, tell your PC they're disconnected when the PC goes into screen-saving mode. Others stay connected but show a blank screen.
One big issue with the icon positions was that when a screen got turned off, the system thought there was no screen anymore and started removing items from the desktop.
That's fair. Why show desktop icons on a screen that doesn't exist? But when you have a monitor that tells your PC, "Okay, I'm disconnecting now!" when the PC says it's time to sleep, wrong things would happen.
This is now handled by checking whether the screen is in use. When a screen is not in use, we simply do nothing with the icons. No need to touch them at all.
Stripes and screen resolution
Our icon positioning algorithm uses something called "stripes."
Every resolution has its own number of stripes, and each stripe contains an array of icons or blank spots.
So if your screen resolution is, let's say, 1920x1080, we calculate how many stripes and how many items per stripe will fit on that screen.
Stripe1: 1 2 3 4 5 6 7
Stripe2: 1 2 3 4 5 6 7
Stripe3: 1 2 3 4 5 6 7
And so on…

But when you change your screen resolution or scale factor, how many icon stripes you have and how many icons fit on each stripe will change.
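To make that concrete, here's a minimal Python sketch of the stripe calculation. This is not the actual plasma-desktop code (which is C++/QML), and the icon cell size is a made-up value for illustration:

```python
# Illustrative sketch only; the real implementation lives in plasma-desktop
# (C++/QML). The cell size below is an assumption, not the real value.
CELL_W, CELL_H = 120, 140  # hypothetical pixel size of one icon slot

def stripe_layout(width: int, height: int) -> tuple[int, int]:
    """Return (stripes, items_per_stripe) for a given resolution,
    treating each stripe as one row of icon slots."""
    stripes = max(1, height // CELL_H)
    items_per_stripe = max(1, width // CELL_W)
    return stripes, items_per_stripe

print(stripe_layout(1920, 1080))  # (7, 16) with these made-up cell sizes
print(stripe_layout(3440, 1440))  # (10, 28): a completely different grid
```

Changing the resolution or scale factor effectively swaps in a different grid, which is why an arrangement can't simply be reused as-is.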
So if you have one of those screens that looks to the system like it's been unplugged when it goes into sleep mode, the stripe layout would previously collapse to 1 row and 1 column, and the icon positioner would panic and shove all the icons into that single 1,1 slot.
Then, when you'd turn the screen back on, the icon positioner would wonder what just happened and restore the proper stripe count and size. But by that point it would have lost all the positioning coordinate data while shoving the icons into that one minuscule spot, so instead it would reset the icon positions… leaving users wondering why their desktop icon arrangement is now gone.
Here, too, we have to check whether the screen is in use. But there were other problems.
Saving icon positions
The prior code saved the icon positions every time the positions changed. Makes sense.
But it didn't account for the screen being off… so the icon positions would get saved while the desktop was in a faulty state. This also caused frustration: someone arranges the icons how they wish, but then the screen does something weird and the icons get saved in the wrong places again.
The icon positions were updated after almost every draw call, if the positions had changed. This meant saving happened rather often, no matter what had moved the icons.
We had to separate user actions from computer actions. If the computer moves the icons, we ideally do not save their positions, unless something drastic has happened, like a resolution change.
The icon positions are saved per resolution, so if you move icons around while they're displayed on a 3440x1440 screen and then change the resolution to 1920x1080, each resolution keeps its own arrangement. This part of the codebase did not previously work: it would always override the old configuration, which caused headaches.
So now we only save icon positions when:
- The user adds or removes a desktop icon
- The user moves a desktop icon
- The user changes the desktop resolution
This makes the icon position saving much less random, since it's done only after explicit user actions.
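As a rough sketch of that rule (hedged Python for illustration; the real code is C++/QML in plasma-desktop, and every name here is invented):

```python
from dataclasses import dataclass, field

@dataclass
class Screen:
    width: int
    height: int
    in_use: bool = True

@dataclass
class IconPositioner:
    screen: Screen
    positions: dict = field(default_factory=dict)  # icon name -> slot

    def on_positions_changed(self, caused_by_user: bool) -> None:
        # Persist only after explicit user actions (add/remove/move icon,
        # resolution change); ignore moves made by the computer, e.g. while
        # panels are still loading or the screen is off.
        if self.screen.in_use and caused_by_user:
            self.save_positions()

    def save_positions(self) -> None:
        # Arrangements are keyed per resolution, so 3440x1440 and
        # 1920x1080 keep separate layouts.
        key = f"{self.screen.width}x{self.screen.height}"
        print(f"saving {len(self.positions)} icon positions for {key}")

positioner = IconPositioner(Screen(1920, 1080))
positioner.on_positions_changed(caused_by_user=False)  # computer move: ignored
positioner.on_positions_changed(caused_by_user=True)   # user move: saved
```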
Margin errors
The last thing that caused headaches with the icon positioning was that the area available on the desktop for icons was determined before the panels were loaded. When the panels loaded, they would reduce the space available for desktop icons, and that area would keep resizing until everything was ready.
In the previous code, this would cause icons to move, which updated their positions, which then saved their positions.
So let's say you arrange your icons nicely, but the next time you boot into Plasma, your panels start shoving the poor icon area around and the icons have to move out of the way… and now they're all in the wrong places.
This was already partially fixed by not saving when the computer moves the icons around: we load the icon positions only when the screen is in use and we are done listing the icons on the desktop. Part of the margin changes happen while the screen is off.
We still need to fix the loading part; ideally we'd load the icon area last, so that it gets the margins it expects and doesn't shuffle around while panels are still appearing. But that was out of scope for this merge request.
Conclusions
It may not sound like much, but this was a lot of work. I spent days just thinking about the problem, trying to understand what was happening and how to improve it.
Luckily, with a lot of help from reviewers and testers, I got things working much better than they used to. I am quite "all over the place" when I solve problems, so I appreciate the patience they had with me and my questions. :D
What I mostly wished for while working on this was more inline code comments. You don't need to comment on the obvious things, but everything else could use something. It's hard to gauge what is obvious and what is not, but that kind of answers the question: if you don't know whether something is obvious, it likely isn't, so add a comment about it.
I do hope that the desktop icons act more reliably after all these changes. If you spot bugs, do report them at https://bugs.kde.org.
Thanks for reading! :)
PS. The funniest thing to me about all of this is that I do not like having any icons on my desktop. :'D
KDE Ships Frameworks 6.8.0
Friday, 8 November 2024
KDE today announces the release of KDE Frameworks 6.8.0.
KDE Frameworks are 72 addon libraries to Qt which provide a wide variety of commonly needed functionality in mature, peer reviewed and well tested libraries with friendly licensing terms. For an introduction see the KDE Frameworks release announcement.
This release is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.
New in this version
Baloo
Bluez Qt
- Simplify PendingCallPrivate. Commit.
- Add mimetype icons for text/x-typst. Commit.
- Monochromize not-explicitly-colorized symbolic folder icons. Commit. Fixes bug #494721
- Add CI for static builds on Linux. Commit.
- Unify common parts of index.theme for breeze and breeze-dark. Commit.
- Sync index.theme changes from breeze to breeze-dark. Commit. Fixes bug #494399
- Rename spinbox-* icons to value-*. Commit.
- FindKF6: Print custom message when required components are not found. Commit.
- Add a directory check when appending a module dir to qmlimportscanner. Commit.
- Add Python bindings. Commit.
- Break enums onto multiple lines. Commit.
- Set import paths for QML modules to all CMake search paths. Commit.
- Remove the old/unused SIP-based Python binding generation infrastructure. Commit.
- ECMGeneratePkgConfigFile: try to deduce additional include dirs. Commit.
- Fix custom definitions for generated pkgconfig files. Commit.
- Fix QM loader unit tests with a static Qt. Commit.
- Don't fall back to qmlplugin dump on static Qt builds. Commit.
- Retire Qt5 Android CI. Commit.
- Automatically install dependent targets of QML modules in static builds. Commit.
- Allow to specify an export set for targets installed by finalize_qml_module. Commit.
- Don't check websites in Appstream tests. Commit.
- Fix Duration's operator- accidentally adding instead of subtracting. Commit.
- Add CI for static builds on Linux. Commit.
- Fix compilation with Qt 6.9 (dev). Commit.
- Add test for passing unknown codec to codecForName. Commit.
- Fix buffer overflow in Codec::codecForName. Commit.
- Reset palette to default-constructed one when scheme is unset. Commit.
- Don't call activateSchemeInternal in init unless really needed. Commit.
- Add CI for static builds on Linux. Commit.
- Add linux-qt6-static CI. Commit.
- Kwindowconfig: If sizes are same as default, revert them to default when saving. Commit. See bug #494377
- Add CI for static builds on Linux. Commit.
- Correctly install QML module in a static build. Commit.
- Add CI for static builds on Linux. Commit.
- Fix IM protocol resource data initialization in static builds. Commit.
- Make KJob::elapsedTime const. Commit.
- Fix absolute path generation into (not installed) header. Commit.
- KPluginMetaData: reduce string allocation. Commit.
- Update git blame ignore file. Commit.
- Reformat code with clang-format. Commit.
- Kjob: add elapsedTime(), which returns the ms the job ran. Commit.
- Add CI for static builds on Linux. Commit.
- Install QML module correctly when building statically. Commit.
- ExportUrlsToPortal: use QScopeGuard::dismiss for the success code path. Commit.
- Upload new file sq.xml. Commit.
- UserMetadata: complete Windows implementation. Commit.
- Add WITH_X11 option to re-enable X11 code after runtime cleanup. Commit.
- Add CI for static builds on Linux. Commit.
- Add namespace for Android as required by newer gradle versions. Commit.
- Add CI for static builds on Linux. Commit.
- Correctly install static QML modules. Commit.
- Fix misunderstanding of All Saints Day in Swedish calendar. Commit.
- Add CI for static builds on Linux. Commit.
- Allow explicit setting of Python3 fallback executable path. Commit.
- Use raw pointer for some pimpl'd public classes. Commit.
- Add missing include. Commit.
- Trigger binding reevaluation on language change. Commit.
- Propagate QML dependency for a static build with KTranscript enabled. Commit.
- Add CI for static builds on Linux. Commit.
- Reduce temporary allocations. Commit.
- Modernize member initialization. Commit.
- Fix container size type narrowing warnings. Commit.
- Remove commented-out KLocale leftovers. Commit.
- Align argument names between definition and declaration. Commit.
- Re-evaluate the languages we translate to on QEvent::LanguageChange. Commit.
- Cleanup KLocalizedContext d-ptr handling. Commit.
- Special-case the language fallback for country-less English. Commit.
- Use QStringView for locale splitting. Commit.
- Port to KStandardActions. Commit.
- Postpone spawning KColorSchemeManager instance. Commit.
- Add CI for static builds on Linux. Commit.
- Init mimeType icons on demand. Commit.
- Set up KColorSchemeManager on Android as well. Commit.
- Reduce temporary allocations. Commit.
- TGA: Fixed GrayA image loading error. Commit.
- Exr: Fix read/write with openexr 3.3. Commit. Fixes bug #494571
- JXL improvements. Commit.
- JXR: Fixed image reading on sequential devices. Commit.
- Simplified read/verify header process. Commit.
- Minor: use existing variables which contain these strings. Commit.
- Fix crash from HTTPProtocol::del() which calls with inputData=nullptr. Commit.
- Port away from Qt::Core5Compat when using Qt 6.7 or newer. Commit.
- Remove unused KConfigWidgets dependency. Commit.
- Port to KStandardActions. Commit.
- Add missing KColorScheme link. Commit.
- Add missing include. Commit.
- Include DBus error in log when communication with kpasswdserver fails. Commit.
- Update git blame ignore file. Commit.
- Reformat code with clang-format. Commit.
- Copyjob/transferjob: use KJob::startElapsedTimer. Commit.
- Http worker: handle dav[s] protocol. Commit. Fixes bug #365356
- [KFileFilterCombo] Fix setting 'All' filter as default. Commit.
- KNewFileMenu: Prevent using home directory as template directory. Commit. Fixes bug #494679
- [KFileFilter] Ignore label when comparing filters. Commit.
- [KFileFilter] Remove excess spaces in logging. Commit.
- [KFileFilterCombo] More verbose logging when not finding a filter. Commit.
- [http] Inline handleRedirection into the metaDataChanged slot. Commit.
- [webdav] Handle redirections which add trailing slashes. Commit. See bug #484580
- Copyjob: prefer custom struct over std::pair. Commit.
- PreviewJob: use standard thumbnailer when caching is disabled. Commit.
- KDirListerTest: improve test stability. Commit.
- Tests: Make sure KIO::UDSEntryList can be compared. Commit.
- Expose UDSEntry equal operator to KIO namespace. Commit.
- Core/copyjob: report speed when copying multiple files. Commit. See bug #391199
- KPreview: store standard thumbnails in /tmp subfolder. Commit.
- Preview: better clean after standard thumbnailer. Commit. Fixes bug #493274
- Openurljob.cpp: Avoid opening files in endless loop if mimetype is set to open with xdg-open. Commit. Fixes bug #494335
- [KFilePlacesView] Improve automatic resize heuristic. Commit. Fixes bug #449544
- Workerinterface: remove unused #include. Commit.
- Add translation context to admin security warning. Commit.
- Kfileitem: linkDest prevent readlink error when file is not a symlink. Commit.
- Check that admin worker was installed by root. Commit.
- Clean up Properties dialog to follow HIG, improve UX, remove frames and fix padding regression. Commit. Fixes bug #484789
- TrashSizeCache: Use correct flags for QDirIterator. Commit. See bug #479283
- TitleSubtitle: Don't explicitly set renderType. Commit.
- Upper bound for overlaysheet width. Commit.
- SelectableLabel: fix a11y properties. Commit.
- Fix Kirigami Application (Qt6) template. Commit. Fixes bug #494478
- SelectableLabel: Use onPressedChanged. Commit. See bug #481293
- Reformat code with clang-format. Commit.
- Icon: Always respect the animated property. Commit. Fixes bug #466357
- Adjust tst_qicon for desktop theme. Commit.
- Fix loading desktop theme. Commit. Fixes bug #491294
- Fix presumable typos confusing background and foreground colors. Commit. See bug #491294
- Always print Theme file loading errors. Commit. See bug #491294
- SelectableLabel: override default padding values more completely. Commit. Fixes bug #495256
- Fix icon for positive state of InlineMessage. Commit.
- SelectableLabel: fix binding loop warning on cursorShape. Commit.
- ScrollablePage: Add properties to set if the scrollbars are interactive. Commit.
- SelectableLabel: use property alias instead of direct binding, expose more through aliases. Commit.
- Dialog: fix multiple binding loops (again). Commit.
- Make the close button actually close. Commit.
- Layout: Reverse the stacking order of items inserted into ToolBarLayout. Commit. See bug #490929
- Modify SelectableLabel to use TextEdit instead. Commit. See bug #493581
- Cleanup and fix static QML module installation. Commit.
- Disable PageRow gesture on android. Commit.
- Make OverlaySheet look exactly like Dialog. Commit. Fixes bug #489357
- Top align icon in multiline InlineMessage. Commit.
- Install QML module correctly when building statically. Commit.
- Fix QML unit tests when building against a static Qt. Commit.
- Make unit tests independent of QtWidgets. Commit.
- Don't hardcode library type. Commit.
- Kbihash: adapt to source incompatible change in Qt. Commit.
- Hide arrowButton in KWidgetJobTracker on startup. Commit.
- Add dedicated WITH_X11 option to avoid automagic. Commit.
- Make sure the action's dialog closes. Commit. Fixes bug #492998
- Put qnetworkreplys in a self-aborting unique_ptr. Commit. See bug #492998
- Parent the xml loader's httpjob. Commit. See bug #492998
- Filecopyworker: try to gracefully quit the thread, then terminate it. Commit. See bug #492998
- Typo--. Commit.
- Add namespace for Android as required by newer gradle. Commit.
- Add CI for static builds on Linux. Commit.
- Define undeprecated Capabilities key in JSON metadata, define JSON schema, remove obsolete key. Commit.
- Vi mode: Don't infinite loop in searcher. Commit.
- Remove unused var. Commit.
- Fix ignores. Commit.
- Less deprecated stuff used. Commit.
- Don't temporarily clear document URL during openUrl(). Commit.
- Only discard completion if the cursor was at the end of line. Commit.
- Update git blame ignore file. Commit.
- Reformat code with clang-format. Commit.
- Fix implicit conversion of Qt::Key in Qt 6.9. Commit.
- Try to avoid unwanted completions. Commit.
- Fix session restore of file type. Commit. Fixes bug #492201
- Make ViewPrivate::displayRangeChanged public. Commit.
- Set DocumentPrivate::m_reloading to false only if loading. Commit.
- Give a more proper name to the test. Commit.
- Fix multiblock range handling when unwrapping line. Commit. Fixes bug #494826
- Fix line removal not handled properly in KateTemplateHandler. Commit. Fixes bug #434093
- Inline blocksize into buffer. Commit.
- Improve MovingRangeTest::benchCheckValidity. Commit.
- Improve TextRange::checkValidity performance. Commit.
- Do all testing in clean temp dirs. Commit.
- Add a swap file test. Commit.
- Add benchmarks for moving stuff. Commit.
- Use std::vector for cursor storage. Commit.
- Allow disabling editorconfig. Commit. Fixes bug #471008
- Import i18n scripts from grantlee. Commit. Fixes bug #492237
- Fix "now" tag to allow single quoted strings. Commit.
- Add CI for static builds on Linux. Commit.
- Fix time entry in locales with mixed-case AM/PM suffixes. Commit.
- Add CI for static builds on Linux. Commit.
- Don't use Oxygen style in KSeparator. Commit.
- KMessageWidget: Improve accessibility. Commit.
- Simplify code: use erase remove. Commit.
- Fix window position not being restored. Commit. Fixes bug #493401
- Add CI for static builds on Linux. Commit.
- TextArea: Make placeholder wrap. Commit.
- Restore MediaChanged handling for Audio CDs. Commit.
- Support reproducible builds by omitting host paths in bison/yacc outputs. Commit.
- [udisks] Don't add/remove devices in slotMediaChanged. Commit. See bug #464149
- Port implicit QByteArray, QChar and QString conversions in iokit. Commit.
- Drop unfinished Power API. Commit.
- Fstabwatcher: use libmount monitor on Linux. Commit.
- Fstabhandling: use libmount in Linux. Commit.
- Add linux-qt6-static CI. Commit.
- Remove ASPELL runtime dependency from plugin building check. Commit.
- Provide SONNET_NO_BACKENDS option to deactivate build failures with no backends. Commit.
- Add CI for static builds on Linux. Commit.
Kdenlive 24.08.3 released
The last maintenance release of the 24.08 series is out.
- Fix crash caused by incorrect codec passed on opening subtitle. Commit. Fixes bug #495410.
- Fix shadowed variable causing incorrect clip removal on project opening, fix crash opening project with timeline clip missing in bin. Commit. See bug #493486.
- Fix qml crash building timeline with Qt 6.8 – ensure context property exists before setting source. Commit. See bug #495335.
- Fix generate proxy when frame size is above a value not using the current project setting. Commit.
- Fix shadow variable causing clip removal on project opening. Commit.
- Fix monitor seek to prev/next keyframe not working in rotoscoping. Commit.
- Fix missing built-in LUT files not correctly fixed on project open. Commit. See bug #494726.
- Fix clip jobs like stabilize creating invalid folders. Commit.
- Fix freeze loading project with invalid folder id. Commit.
- Don’t invalidate timeline preview when replacing an audio clip in bin. Commit.
- Ensure monitor is cleared and ruler hidden when no clip or a folder is selected in bin. Commit.
Drupal life hack's: Configuring a Custom Permission Provider Service in Drupal 9/10 Modules
Python Software Foundation: PSF Grants Program Updates: Workgroup Charter, Future, & Refresh (Part 2)
Building on Part 1 of this PSF Grants Program Update, we are pleased to share updates to the Grants Workgroup (workgroup) Charter. We have outlined all the changes below in a chart, but there are a couple of changes that we’d like to highlight to grant applicants. These updates in particular will change how and when you apply, and hopefully reduce blockers to getting those applications in and ready for review. Because we are just sharing these updates, we are happy to be flexible on these changes but hope to see all applicants adhere to the changes starting around January 2025.
- Increase overall process time frame to 8 weeks (formerly 6 weeks). We want to be realistic about how long the process takes and we know that going over our projection can cause pain for applicants. We hope to turn around applications in 6 weeks in most cases, but planning for the extra two weeks can make a big difference for everyone involved!
- Our application form requires that you set the event date out to 6 weeks in advance. We will wait to update that to 8 weeks in advance until January 2025.
- It’s important to note that this time frame begins only once all required information has been received, not exactly from the day the application is submitted. Make sure to check the email you provided on the application to see if the workgroup Chair has any questions regarding your request!
- Add a statement of support for accessibility services. In line with the PSF’s mission to support and facilitate the growth of a diverse community, we are explicitly stating in the charter that we will consider funding accessibility services. For established events (have 2 or more events in the past with more than 200 participants at the last event), we are open to considering accessibility-related requests such as live captioning, sign language interpretation, or certified child care.
- To review these types of requests, we will need sufficient documentation such as quotes, certifications, or any other relevant information.
- Add guidelines around program/schedule review. Previously undocumented, we were checking event programs/schedules to ensure a Python focus as well as a diversity of speakers. Because of event organizing time frames, we often received grant requests before the schedule was available. Moving forward we are accepting 1 of 3 options:
- The program/schedule for the event
- A tentative schedule or list of accepted speakers/sessions for the event
- Programs from previous editions of the event if available, a link to the event’s call for proposals, which should state a required Python focus for the event as well as a statement in support of a diverse speaker group, and a description of the efforts that are being made to ensure a diversity of speakers.
Still on our Grants Program refresh to-do list is:
- Mapping Board-mandated priorities for the Grants Program to policy
- Charter adjustments as needed, based on the priority mapping
- Main documentation page re-write
- Budget template update
- Application form overhaul
- Transparency report for 2024
- Exploration and development of other resources that our grant applicants would find useful
Our community is ever-changing and growing, and we plan to be there every step of the way and continue crafting the Grants Program to serve Pythonistas worldwide. If you have questions or comments, we welcome and encourage you to join us at our monthly Grants Program Office Hour sessions on the PSF Discord.
Python Software Foundation: PSF Grants Program Updates: Workgroup Charter, Future, & Refresh (Part 1)
Time has flown by since we received the community call last December for greater transparency and better processes around our Grants Program. PSF staff have produced a Grants Program Transparency Report and begun holding monthly Grants Program Office Hours. The PSF Board also invested in a third-party retrospective and launched a major refresh of all areas of our Grants program.
To provide the Grants Program more support, we assigned Marie Nordin, PSF Community Communications Manager, to support the Grants Program alongside Laura Graves, Senior Accountant. Marie has stepped into the Grants Workgroup Chair role to relieve Laura after 3+ years. Thank you, Laura! Marie has been leading the initiatives and work related to the Grants Program in collaboration with Laura.
Behind the scenes, PSF staff has been working with the PSF Board and the Grants Workgroup (workgroup) to translate the feedback we’ve received and the analysis we’ve performed into action, starting with the Grants Workgroup Charter. A full breakdown of updates to the charter can be found in Part 2 of this update.
The PSF Board spent time on their recent retreat to explore priorities for the program going forward. We also ran a more thorough workgroup membership renewal process based on the updated charter to support quicker grant reviews and votes through active workgroup engagement. We’re excited to share refresh progress, updates, and plans for the future of the program later on in this post!
Meanwhile, the attention our Grants Program has received in the past year has resulted in something wonderful: we're getting more requests than ever. Our call to historically underrepresented regions to request funds has been answered in some areas, and we are thrilled! For example, in the African region, we granted around 65K in 2023 and over 140K already this year! And, year to date in 2024 we have awarded more grant funding than we did in all of 2023. The other side of this coin presents us with a new issue: the budget for the program.
Up until this year, we’ve been able to grant at least partial funding to the majority of requests we’ve received while staying within our guidelines and maintaining a feasible annual budget. With more eligible requests incoming, every “yes” brings us closer to the ceiling of our grant budget. In addition to the increased quantity of requests, we are receiving requests for higher amounts. Inflation and the tech crunch have been hitting event organizers everywhere (this includes the PSF-produced PyCon US), and we are seeing that reflected in the number and size of the grant requests we are receiving.
Moving forward, with the increased quantity and amount of eligible grant requests, we will need to take steps to ensure we are balancing grant awards with sustainability for our Grants Program, and the Foundation overall. We know that the most important part of any changes to the Grants Program is awareness and two-way communication with the community. We aim to do that as early and transparently as we possibly can. That means we aren't changing anything about how we award grants today or even next week, but within the next couple of months. Please keep an eye on our blog and social accounts (Mastodon, X, LinkedIn) for news about upcoming changes, and make sure to share this post with your fellow Python event and initiative organizers.
The purpose of the PSF Grants Workgroup (workgroup) is to review, approve, and deny grant funding proposals for Python conferences, training workshops, Meetups, development projects, and other related Python initiatives. The workgroup charter outlines processes, guidelines, and membership requirements for the workgroup. Small changes have been made to the charter over the years, but it’s been some time since any significant changes were implemented.
During the summer of 2024, Marie, workgroup chair (hi 👋 it’s me writing this!), and Laura worked on updates for the charter. The updates focused on how to make the Grants Program processes and guidelines work better for the workgroup, the PSF Board, and most especially, the community we serve.
After many hours of discussing pain points, running scenarios, exploring possible guidelines, and drafting the actual wording, Marie and Laura introduced proposed updates for the charter to the Board in July. After a month of review and 1:1 meetings with the PSF Board and workgroup members, the updated charter went to a vote with the PSF Board on August 14th and was approved unanimously.
The workgroup has been operating under its new charter for a couple of months. Before we shared broadly with the community, we wanted to make sure the updates didn’t cause unintended consequences, and we were ready to walk back anything that didn’t make sense. Turns out, our hard work paid off, and the updates have been mostly working as we hoped. We will continue to monitor the impact of the changes and make any adjustments in the next Charter update. Read up on the Grants Workgroup Charter updates in Part 2 of this blog post!
Jonathan Dowland: John Carpenter's "The Fog"
A gift from my brother. Coincidentally, I've had John Carpenter's "Halloween" echoing around my head for weeks: I've been deconstructing it and trying to learn to play it.
Oliver Davies' daily list: Should Drush be in Drupal core?
I've used Drush - the Drupal shell - to interact with my Drupal applications on the command line since I started around 2008.
It's always been part of my Drupal experience.
From installing Drupal and performing routine actions such as enabling modules and clearing caches to, in newer Drupal versions, performing migrations and generating Storybook stories.
Many projects I work on have custom Drush commands to perform tasks from the command line.
This week, I created a new Drupal 11 project for a client using the drupal/core-recommended package and initially forgot to install Drush so I could install Drupal.
I'm surprised Drush isn't in Drupal core or a dependency of the recommended package.
There is a basic Drupal CLI at core/scripts/drupal, but I wonder if we'll see a fully-featured CLI tool like Drush included with Drupal core, similar to Symfony's console or Laravel's artisan commands.
For me, including Drush would be an obvious choice.
KDE Gear 24.08.3
Over 180 individual programs plus dozens of programmer libraries and feature plugins are released simultaneously as part of KDE Gear.
Today they all get new bugfix source releases with updated translations, including:
- neochat: Adjustments to make it work with the newly released libquotient 0.9 (Commit)
- kdevelop: MesonManager: remove test suites when a project is closing (Commit, fixes bug #427157)
- kdevelop: Fix a qml crash building timeline with Qt 6.8 (Commit, fixes bug #495335)
Distro and app store packagers should update their application packages.
- 24.08 release notes for information on tarballs and known issues.
- Package download wiki page
- 24.08.3 source info page
- 24.08.3 full changelog
Drupal Starshot blog: Callout for a new design system for Experience Builder and Drupal CMS
If you are paying close attention to the Drupal CMS roadmap, you may have noticed that our focus has mostly been on CMS features and the administrative user interface. Many people have asked: What about themes?
Drupal CMS will initially ship with Olivero, which is the default theme for Drupal core in the Standard profile. Of course, Experience Builder will completely change the way we build sites, and that includes support for design systems and single-directory components. In order to support this initially, the Starshot Demo Design System was developed (very quickly!) to show how design systems can be integrated with XB. We will also develop some components for Olivero so that Drupal CMS and eventually core have something to demo with XB.
Now, we are planning for what comes next. So we are seeking a strategic partner to collaborate on designing and implementing a comprehensive design system for our post-v1 integration with Experience Builder for Drupal CMS.
The goal for this initiative is to create a modern and versatile design system that provides designers and front-end developers tools to accelerate their adoption of Drupal as their digital platform, by enabling them to easily adapt it to their own brand. This design system will enable content marketers to efficiently build landing pages and campaigns, allowing them to execute cohesive marketing strategies while maintaining the brand integrity.
Since it’s a big commitment for anyone, we are dividing the scope of work between design and implementation. We welcome applicants with expertise in one area who wish to specialize, as well as those who are equipped to handle the complete lifecycle of the design system, from initial design to full technical implementation and integration.
For more details, including information on how to apply, check out the full brief.
Interested partners should submit their proposals by 6 December, and we will announce the selected proposal(s) the week of 16 December. If you have questions before then, we'll host a webinar the week of 19 November. You can also find us on Slack in #starshot or #experience-builder in the meantime.
We are looking forward to seeing your proposals!
Bits from Debian: Bits from the DPL
Dear Debian community,
this is Bits from DPL for October. In addition to a summary of my recent activities, I aim to include newsworthy developments within Debian that might be of interest to the broader community. I believe this provides valuable insights and fosters a sense of connection across our diverse projects. Also, I welcome your feedback on the format and focus of these Bits, as community input helps shape their value.
Ada Lovelace Day 2024
As outlined in my platform, I'm committed to increasing the diversity of Debian developers. I hope the recent article celebrating Ada Lovelace Day 2024, featuring interviews with women in Debian, will serve as an inspiring motivation for more women to join our community.
MiniDebConf Cambridge
This was my first time attending the MiniDebConf in Cambridge, hosted at the ARM building. I thoroughly enjoyed the welcoming atmosphere of both MiniDebCamp and MiniDebConf. It was wonderful to reconnect with people who hadn't made it to the last two DebConfs, and, as always, there was plenty of hacking, insightful discussions, and valuable learning.
If you missed the recent MiniDebConf, there's a great opportunity to attend the next one in Toulouse. It was recently decided to include a MiniDebCamp beforehand as well.
FTPmaster accepts MRs for DAK
At the recent MiniDebConf in Cambridge, I discussed potential enhancements for DAK to make life easier for both FTP Team members and developers. For those interested, the document "Hacking on DAK" provides guidance on setting up a local DAK instance and developing patches, which can be submitted as MRs.
As a perfectly random example of such improvements, some older MR, "Add commands to accept/reject updates from a policy queue", might give you some inspiration.
At MiniDebConf, we compiled an initial list of features that could benefit both the FTP Team and the developer community. While I had preliminary discussions with the FTP Team about these items, not all ideas had consensus. I aim to open a detailed, public discussion to gather broader feedback and reach a consensus on which features to prioritize.
- Accept+Bug report
Sometimes, packages are rejected not because of DFSG-incompatible licenses but due to other issues that could be resolved within an existing package (as discussed in my DebConf23 BoF, "Chatting with ftpmasters"[1]). During the "Meet the ftpteam" BoF (a log/transcription of the BoF can be found here), a new option was proposed for FTP Team members reviewing packages in NEW, to be used until the MR gets accepted:
Accept + Bug Report
This option would allow a package to enter Debian (in unstable or experimental) with an automatically filed RC bug report. The RC bug would prevent the package from migrating to testing until the issues are addressed. To ensure compatibility with the BTS, which only accepts bug reports for existing packages, a delayed job (24 hours post-acceptance) would file the bug.

- Binary name changes - for instance, if done to experimental, not via NEW
When binary package names change, currently the package must go through the NEW queue, which can delay the availability of updated libraries. Allowing such packages to bypass the queue could expedite this process. A configuration option to enable this bypass specifically for uploads to experimental may be useful, as it avoids requiring additional technical review for experimental uploads.
Previously, I believed the requirement for binary name changes to pass through NEW was due to a missing feature in DAK, possibly addressable via an MR. However, in discussions with the FTP Team, I learned this is a matter of team policy rather than technical limitation. I haven't found this policy documented, so it may be worth having a community discussion to clarify and reach consensus on how we want to handle binary name changes to get the MR sensibly designed.
- Remove dependency tree
When a developer requests the removal of a package – whether entirely or for specific architectures – RM bugs must be filed for the package itself as well as for each package depending on it. It would be beneficial if the dependency tree could be automatically resolved, allowing either:
a) the DAK removal tooling to remove the entire dependency tree after prompting the bug report author for confirmation, or
b) the system to auto-generate corresponding bug reports for all packages in the dependency tree.

The latter option might be better suited for implementation in an MR for reportbug. However, given the possibility of large-scale removals (for example, targeting specific architectures), having appropriate tooling for this would be very beneficial.
In my opinion, the proposed DAK enhancements aim to support both FTP Team members and uploading developers. I'd be very pleased if these ideas spark constructive discussion and inspire volunteers to start working on them, possibly even preparing to join the FTP Team.
On the topic of ftpmasters: an ongoing discussion with SPI lawyers is currently reviewing the non-US agreement established 22 years ago. Ideally, this review will lead to a streamlined workflow for ftpmasters, removing certain hurdles that were originally put in place due to legal requirements, which were updated in 2021.
Contacting teams
My outreach efforts to Debian teams have slowed somewhat recently. However, I want to emphasize that anyone from a packaging team is more than welcome to reach out to me directly. My outreach emails aren't following any specific order, just my own somewhat naïve view of Debian, which I'm eager to make more informed.
Recently, I received two very informative responses: one from the Qt/KDE Team, which thoughtfully compiled input from several team members into a shared document. The other was from the Rust Team, where I received three quick, helpful replies–one of which included an invitation to their upcoming team meeting.
Interesting readings on our mailing lists
I consider the following threads on our mailing lists interesting reading and would like to add some comments.
Sensible languages for younger contributors
Though the discussion on debian-devel about programming languages took place in September, I only recently caught up with it. I strongly believe Debian must continue evolving to stay relevant for the future.
"Everything must change, so that everything can stay the same." -- Giuseppe Tomasi di Lampedusa, The Leopard
I encourage constructive discussions on integrating programming languages in our toolchain that support this evolution.
Concerns regarding the "Open Source AI Definition"
A recent thread on the debian-project list discussed the "Open Source AI Definition". This topic will impact Debian in the future, and we need to reach an informed decision. I'd be glad to see more perspectives in the discussions, particularly on finding a sensible consensus, understanding how FTP Team members view their delegated role, and considering whether their delegation might need adjustments for clarity on this issue.
Kind regards,
Andreas.
ImageX: AI in Drupal: Latest Demos of the Incredible Capabilities
Authored by Nadiia Nykolaichuk.
AI is shifting our perception of the impossible. It does things that past generations would have never imagined. Indeed, it has long since become a routine to ask AI assistants to play music, turn on the lights, or even order groceries. With the advance of generative AI, boosting content management through various AI-driven tasks has also become increasingly common.
ClearlyDefined at SOSS Fusion 2024: a collaborative solution to Open Source license compliance
This past month, the Open Source Security Foundation (OpenSSF) hosted SOSS Fusion in Atlanta, an event that brought together a diverse community of leaders and innovators from across the digital security spectrum. The conference, held on October 22-23, explored themes central to today’s technological landscape: AI security, diversity in technology, and public policy for Open Source software. Industry thought leaders like Bruce Schneier, Marten Mickos, and Cory Doctorow delivered keynotes, setting the tone for a conference that emphasized collaboration and community in creating a secure digital future.
Amidst these pressing topics, the Open Source Initiative in collaboration with GitHub and SAP presented ClearlyDefined—an innovative project aimed at simplifying software license compliance and metadata management. Presented by Nick Vidal of the Open Source Initiative, along with E. Lynette Rayle from GitHub and Qing Tomlinson from SAP, the session highlighted how ClearlyDefined is transforming the way organizations handle licensing compliance for Open Source components.
What is ClearlyDefined?
ClearlyDefined is a project with a powerful vision: to create a global crowdsourced database of license metadata for every software component ever published. This ambitious mission seeks to help organizations of all sizes easily manage compliance by providing accurate, up-to-date metadata for Open Source components. By offering a single, reliable source for license information, ClearlyDefined enables organizations to work together rather than in isolation, collectively contributing to the metadata that keeps Open Source software compliant and accessible.
The problem: redundant and inconsistent license management
In today’s Open Source ecosystem, managing software licenses has become a significant challenge. Many organizations face the repetitive task of identifying, correcting, and maintaining accurate licensing data. When one component has missing or incorrect metadata, dozens—or even hundreds—of organizations using that component may duplicate efforts to resolve the same issue. ClearlyDefined aims to eliminate redundancy by enabling a collaborative approach.
The solution: crowdsourcing compliance with ClearlyDefined
ClearlyDefined provides an API and user-friendly interface that make it easy to access and contribute license metadata. By aggregating and standardizing licensing data, ClearlyDefined offers a powerful solution for organizations to enhance SBOMs (Software Bill of Materials) and license information without the need for extensive re-scanning and data correction. At the conference, Nick demonstrated how developers can quickly retrieve license data for popular libraries using a simple API call, making license compliance seamless and scalable.
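As a small illustration, here's a Python sketch of that kind of API call. The coordinate format (type/provider/namespace/name/revision, with "-" for an empty namespace) follows ClearlyDefined's public API; treat the exact response fields as an assumption that may evolve:

```python
# Fetch license metadata for one component from the ClearlyDefined API.
# Coordinates: type/provider/namespace/name/revision ("-" = no namespace).
import requests

coords = "npm/npmjs/-/lodash/4.17.21"  # example component and version
resp = requests.get(f"https://api.clearlydefined.io/definitions/{coords}")
resp.raise_for_status()
definition = resp.json()

# The declared license is expected under licensed.declared (based on the
# public definition schema; hedged with .get() in case fields change).
declared = definition.get("licensed", {}).get("declared")
print(f"{coords}: declared license = {declared}")  # e.g. MIT
```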
In addition, organizations that encounter incomplete or incorrect metadata can easily update it through ClearlyDefined’s platform, creating a feedback loop that benefits the entire Open Source community. This crowdsourcing approach means that once an organization fixes a licensing issue, that data becomes available to all, fostering efficiency and accuracy.
Key components of ClearlyDefined’s platform
1. API and User Interface: Users can access ClearlyDefined data through an API or the website, making it simple for developers to integrate license checks directly into their workflows.
2. Human curation and community collaboration: To ensure high data quality, ClearlyDefined employs a curation workflow. When metadata requires updates, community members can submit corrections that go through a human review process, ensuring accuracy and reliability.
3. Integration with popular package managers: ClearlyDefined supports various package managers, including npm and pypi, and has recently expanded to support Conda, a popular choice among data science and AI developers.
Real-world use cases: GitHub and SAP’s adoption of ClearlyDefined
During the presentation, representatives from GitHub and SAP shared how ClearlyDefined has impacted their organizations.
– GitHub: ClearlyDefined’s licensing data powers GitHub’s compliance solutions, allowing GitHub to manage millions of licenses with ease. Lynette shared how they initially onboarded over 17 million licenses through ClearlyDefined, a number that has since grown to over 40 million. This database enables GitHub to provide accurate compliance information to users, significantly reducing the resources required to maintain licensing accuracy. Lynette showcased the harvesting process and the curation process. More details about how GitHub is using ClearlyDefined are available here.
– SAP: Qing discussed how ClearlyDefined’s approach has streamlined SAP’s Open Source compliance efforts. By using ClearlyDefined’s data, SAP reduced the time spent on license reviews and improved the quality of metadata available for compliance checks. SAP’s internal harvesting service integrates with ClearlyDefined, ensuring that critical license metadata is consistently available and accurate. SAP has contributed to the ClearlyDefined project and, most notably, together with Microsoft, has optimized the database schema and reduced the database operational cost by more than 90%. More details about how SAP is using ClearlyDefined are available here.
Why ClearlyDefined matters
ClearlyDefined is a community-driven initiative with a vision to address one of Open Source’s biggest challenges: ensuring accurate and accessible licensing metadata. By centralizing and standardizing this data, ClearlyDefined not only reduces redundant work but also fosters a collaborative approach to license compliance.
The platform’s Open Source nature and integration with existing package managers and APIs make it accessible and scalable for organizations of all sizes. As more contributors join the effort, ClearlyDefined continues to grow, strengthening the Open Source community’s commitment to compliance, security, and transparency.
Join the ClearlyDefined community
ClearlyDefined is always open to new contributors. With weekly developer meetings, an open governance model, and continuous collaboration with OpenSSF and other Open Source organizations, ClearlyDefined provides numerous ways to get involved. For anyone interested in shaping the future of license compliance and data quality in Open Source, ClearlyDefined offers an exciting opportunity to make a tangible impact.
At SOSS Fusion, ClearlyDefined’s presentation showcased how an open, collaborative approach to license compliance can benefit the entire digital ecosystem, embodying the very spirit of the conference: working together toward a secure, inclusive, and sustainable digital future.
Download the slides and see a summarized presentation transcript below.
ClearlyDefined presentation transcript
Hello, folks, good morning! Let’s start by introducing ClearlyDefined, an exciting project. My name is Nick Vidal, and I work with the Open Source Initiative. With me today are Lynette Rayle from GitHub and Qing Tomlinson from SAP, and we’re all very excited to be here.
Introduction to ClearlyDefined’s mission
So, what’s the mission of ClearlyDefined? Our mission is ambitious—we aim to crowdsource a global database of license metadata for every software component ever published. This would benefit everyone in the Open Source ecosystem.
The problem ClearlyDefined addresses
There’s a critical problem in the Open Source space: compliance and managing SBOMs (Software Bill of Materials) at scale. Many organizations struggle with missing or incorrect licensing metadata for software components. When multiple organizations use a component with incomplete or wrong license metadata, they each have to solve it individually. ClearlyDefined offers a solution where, instead of every organization doing redundant work, we can collectively work on fixing these issues once and make the corrected data available to all.
ClearlyDefined’s solution
ClearlyDefined enables organizations to access license metadata through a simple API. This reduces the need for repeated license scanning and helps with SBOM generation at scale. When issues arise with a component’s license metadata, organizations can contribute fixes that benefit the entire community.
Getting started with ClearlyDefined
To use ClearlyDefined, you can access its API directly from your terminal. For example, let’s say you’re working with a JavaScript library like Lodash. By calling the API, you can get all license metadata for a specific version of Lodash at your fingertips.
Once you incorporate this licensing metadata into your workflow, you may notice some metadata that needs updating. You can curate that data and contribute it back, so everyone benefits. ClearlyDefined also provides a user-friendly interface for this, making it easier to contribute.
Open Source and community contributions
ClearlyDefined is an Open Source initiative, hosted on GitHub, supporting various package managers (e.g., npm, pypi). We work to promote best practices and integrate with other tools. Recently, we’ve expanded our scope to support non-SPDX licenses and Conda, a package manager often used in data science projects.
Integration with other tools
ClearlyDefined integrates with GUAC, an OpenSSF project that consumes ClearlyDefined data. This integration broadens the reach and utility of ClearlyDefined’s licensing information.
Case studies and community impact
I’d like to hand it over to Lynette from GitHub, who will talk about how GitHub uses ClearlyDefined and why it’s critical for license compliance.
GitHub’s use of ClearlyDefined
Hello, I’m Lynette, a developer at GitHub working on license compliance solutions. ClearlyDefined has become a key part of our workflows. Knowing the licenses of our dependencies is crucial, as legal compliance requires correct attributions. By using ClearlyDefined, we’ve streamlined our process and now manage over 40 million licenses. We also run our own harvester to contribute back to ClearlyDefined and scale our operations.
SAP’s adoption of ClearlyDefined
Hi, my name is Qing. At SAP, we co-innovate and collaborate with Open Source, ensuring a clean, well-maintained software pool. ClearlyDefined has streamlined our license review process, reducing time spent on scanning and enhancing data quality. SAP’s journey with ClearlyDefined began in 2018, and since then, we’ve implemented large-scale automation for our Open Source compliance and continuously contribute curated data back to the community.
Community and governance
ClearlyDefined thrives on community involvement. We recently elected members to our Steering and Outreach Committees to support the platform and encourage new contributors. Our weekly developer meetings and active Discord channel provide opportunities to engage, share knowledge, and collaborate.
Q&A highlights
- PURLs as Package Identifiers: We’re exploring support for PURLs as an internal coordinate system.
- Data Quality Issues: Data quality is our top priority. We plan to implement routines to scan for common issues, ensuring accurate metadata across the platform.
Thank you all for joining us today. If you’re interested in contributing, please reach out and become part of this collaborative community.
Members Newsletter – November 2024
After more than two years of collaboration, information gathering, global workshopping, testing, and an in-depth co-design process, we have an Open Source AI Definition.
The purpose of version 1.0 is to establish a workable standard for developers, researchers, and educators to consider how they may design evaluations for AI systems’ openness. The meaningful ability to fork and control their AI will foster permissionless, global innovation. It was important to drive a stake in the ground so everyone has something to work with. It’s version 1.0, so going forward, the process allows for improvement, and that’s exactly what will happen.
Over 150 individuals took part in the OSAID forum; nearly 15K subscribers to the OSI newsletter were kept up to date with the latest news about the OSAID; and 2M unique visitors to the OSI website were exposed to the OSAID process. There were 50+ co-design working group volunteers representing 29 countries, including participants from Africa, Asia, Europe, and the Americas.
Future versions of OSAID will continue to be informed by the feedback we receive from various stakeholder communities. The fundamental principles and aim will not change, but, as our (collective) understanding of the technology improves and the technology itself evolves, we might need to update it to clarify or even change certain requirements. To enable this, the OSI Board voted to establish an AI sub-committee that will develop appropriate mechanisms for updating the OSAID in consultation with stakeholders. It will be fully formed in the months ahead.
Please continue to stay involved, as diverse voices and experiences are required to ensure Open Source AI works for the good of us all.
Stefano Maffulli
Executive Director, OSI
I hold weekly office hours on Fridays with OSI members: book time if you want to chat about OSI’s activities, if you want to volunteer or have suggestions.
News from the OSI
The Open Source Initiative Announces the Release of the Industry’s First Open Source AI Definition
Open and public co-design process culminates in a stable version of Open Source AI Definition, ensures freedoms to use, study, share and modify AI systems.
Other highlights:
- How we passed the AI conundrums
- ClearlyDefined at SOSS Fusion 2024
- ClearlyDefined’s Steering and Outreach Committees Defined
- The Open Source Initiative Supports the Open Source Pledge
Article from ZDNet
For 25 years, OSI’s definition of open-source software has been widely accepted by developers who want to build on each other’s work without fear of lawsuits or licensing traps. Now, as AI reshapes the landscape, tech giants face a pivotal choice: embrace these established principles or reject them.
Other highlights:
- The Gap Between Open and Closed AI Models Might Be Shrinking. Here’s Why That Matters (Time)
- Meta’s military push is as much about the battle for open-source AI as it is about actual battles (Fortune)
- OSI unveils Open Source AI Definition 1.0 (InfoWorld)
- We finally have an ‘official’ definition for open source AI (TechCrunch)
- Read all press mentions from this past month
News from OSI affiliates:
- OpenSSF: SOSS Fusion 2024: Uniting Security Minds for the Future of Open Source (Security Boulevard)
- Mozilla Foundation: How Mozilla’s President Defines Open-Source AI (Forbes)
News from OpenSource.net:
- OpenSource.Net turns one with a redesign
- How to make reviewing pull requests a better experience
- Closing the Gap: Accelerating environmental Open Source
The State of Open Source Survey
In collaboration with the Eclipse Foundation and Open Source Initiative (OSI).
Jobs
- Lead OSI’s public policy agenda and education.
- Bloomberg is seeking a Technical Architect to join their OSPO team.
Events
Upcoming events:
- Nerdearla Mexico (November 7-9, 2024 – Mexico City)
- SeaGL (November 8-9, 2024 – Seattle)
- SFSCON (November 8-9, 2024 – Bolzano)
- KubeCon + CloudNativeCon North America (November 12-15, 2024 – Salt Lake City)
- OpenForum Academy Symposium (November, 13-14, 2024 – Boston)
- The Linux Foundation Legal Summit (November 18-19, 2024 – Napa)
- The Linux Foundation Member Summit (November 19-21, 2024 – Napa)
- Open Source Experience (December 4-5, 2024 – Paris)
- KubeCon + CloudNativeCon India (December 11-12, 2024 – Delhi)
- EU Open Source Policy Summit (January 31, 2025 – Brussels)
- FOSDEM (February 1-2, 2025 – Brussels)
CFPs:
- FOSDEM 2025 EU-Policy Devroom – an event organized by the OSI, OpenForum Europe, the Eclipse Foundation, The European Open Source Software Business Association, the European Commission Open Source Programme Office, and the European Commission.
- PyCon US 2025: the Python Software Foundation kicks off Website, CfP, and Sponsorship!
- GitHub
Interested in sponsoring, or partnering with, the OSI? Please see our Sponsorship Prospectus and our Annual Report. We also have a dedicated prospectus for the Deep Dive: Defining Open Source AI. Please contact the OSI to find out more about how your company can promote open source development, communities and software.
Get to vote for the OSI Board by becoming a member
Let’s build a world where knowledge is freely shared, ideas are nurtured, and innovation knows no bounds!
mark.ie: LocalGov Drupal (LGD): A Digital Public Good Transforming Government Services
LocalGov Drupal is the epitome of the principles of a Digital Public Good.
Drupal In the News: Drupal CMS: Groundbreaking New Version of Drupal Detailed at DrupalCon Singapore 2024
MARINA BAY, Singapore, 6 November, 2024—Drupal CMS, the groundbreaking package built on Drupal core with the marketer in mind, will launch on 15 January 2025. Conference attendees at DrupalCon Singapore 2024 will have the exclusive opportunity to be the first to learn more about Drupal CMS directly from Drupal’s founder, Dries Buytaert.
Learn how Drupal CMS will enable site builders without any Drupal experience to easily create a new site using their browser, marking one of the most significant launches in Drupal history.
Alongside the Drupal Association leadership team, Dries will unveil key features of Drupal CMS, making DrupalCon Singapore 2024 a can’t-miss event for anyone in the Open Source community. Occurring one month before the release of Drupal CMS, DrupalCon Singapore 2024 is an exclusive opportunity for attendees to join in the conversation surrounding Drupal CMS directly with its creators.
“The product strategy is for Drupal CMS to be the gold standard for no-code website building,” said Dries. “Our goal is to empower non-technical users like digital marketers, content creators, and site-builders to create exceptional digital experiences without requiring developers.”
DrupalCon Singapore 2024, 9-11 December 2024, is a premier gathering of Drupal and Open Source professionals. Over three days, the conference will showcase the latest Drupal trends, facilitate networking opportunities, and offer a platform for thought leadership in the Open Source landscape.
Key features of DrupalCon Singapore 2024 include:
- Keynotes, sessions, and panels: The Driesnote and Drupal CMS Panel are two highlights amongst a packed schedule of insightful sessions.
- Contribution Day: Contribution Day is where attendees grow and learn by helping to make Drupal even better. Giving back to the project is crucial in an Open Source community, as the Drupal project is developed by a community of people who work together to innovate the software.
- Birds of a Feather (BoFs): BoFs provide the perfect setting for connecting with like-minded attendees who share your interests.
- Splash Awards: Celebrate the work and creativity of the global Drupal community with this awards ceremony, which recognises outstanding projects built with Drupal.
- Networking Opportunities: Network with experts from around the globe who create ambitious digital experiences.
Register for DrupalCon Singapore 2024 at https://events.drupal.org/singapore2024 and join the next chapter in Drupal’s evolution!
Real Python: How to Reset a pandas DataFrame Index
In this tutorial, you’ll learn how to reset a pandas DataFrame index, the reasons why you might want to do this, and the problems that could occur if you don’t.
Before you start your learning journey, you should familiarize yourself with how to create a pandas DataFrame. Knowing the difference between a DataFrame and a pandas Series will also prove useful to you.
In addition, you may want to use the data analysis tool Jupyter Notebook as you work through the examples in this tutorial. Alternatively, JupyterLab will give you an enhanced notebook experience, but feel free to use any Python environment you wish.
As a starting point, you’ll need some data. To begin with, you’ll use the band_members.csv file included in the downloadable materials that you can access by clicking the link below:
Get Your Code: Click here to download the free sample code you’ll use to learn how to reset a pandas DataFrame index.
The table below describes the data from band_members.csv that you’ll begin with:
Column Name      PyArrow Data Type   Description
first_name       string              First name of member
last_name        string              Last name of member
instrument       string              Main instrument played
date_of_birth    string              Member’s date of birth

As you’ll see, the data has details of the members of the rock band The Beach Boys. Each row contains information about its various members, both past and present.
Note: In case you’ve never heard of The Beach Boys, they’re an American rock band formed in the early 1960s.
Throughout this tutorial, you’ll be using the pandas library to allow you to work with DataFrames, as well as the newer PyArrow library. The PyArrow library provides pandas with its own optimized data types, which are faster and less memory-intensive than the traditional NumPy types that pandas uses by default.
If you’re working at the command line, you can install both pandas and pyarrow using the single command python -m pip install pandas pyarrow. If you’re working in a Jupyter Notebook, you should use !python -m pip install pandas pyarrow. Regardless, you should do this within a virtual environment to avoid clashes with the libraries you use in your global environment.
Once you have the libraries in place, it’s time to read your data into a DataFrame:
Python

>>> import pandas as pd
>>> beach_boys = pd.read_csv(
...     "band_members.csv"
... ).convert_dtypes(dtype_backend="pyarrow")

First, you used import pandas to make the library available within your code. To construct the DataFrame and read it into the beach_boys variable, you used pandas’ read_csv() function, passing band_members.csv as the file to read. Finally, by passing dtype_backend="pyarrow" to .convert_dtypes(), you convert all columns to pyarrow types.
If you want to verify that pyarrow data types are indeed being used, then beach_boys.dtypes will satisfy your curiosity:
Python

>>> beach_boys.dtypes
first_name       string[pyarrow]
last_name        string[pyarrow]
instrument       string[pyarrow]
date_of_birth    string[pyarrow]
dtype: object

As you can see, each data type contains [pyarrow] in its name.
If you wanted to analyze the date information thoroughly, then you would parse the date_of_birth column to make sure dates are read as a suitable pyarrow date type. This would allow you to analyze by specific days, months or years, and so on, as commonly found in pivot tables.
The date_of_birth column is not analyzed in this tutorial, so the string data type it’s being read as will do. Later on, you’ll get the chance to hone your skills with some exercises. The solutions include the date parsing code if you want to see how it’s done.
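If you’re curious, a minimal sketch of that date parsing could look like the following (assuming the day-month-year strings shown above; the tutorial’s own solution may differ):

Python

>>> beach_boys["date_of_birth"] = pd.to_datetime(
...     beach_boys["date_of_birth"], format="%d-%b-%Y"
... ).convert_dtypes(dtype_backend="pyarrow")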
Now that the file has been loaded into a DataFrame, you’ll probably want to take a look at it:
Python

>>> beach_boys
  first_name last_name instrument date_of_birth
0      Brian    Wilson       Bass   20-Jun-1942
1       Mike      Love  Saxophone   15-Mar-1941
2         Al   Jardine     Guitar   03-Sep-1942
3      Bruce  Johnston       Bass   27-Jun-1942
4       Carl    Wilson     Guitar   21-Dec-1946
5     Dennis    Wilson      Drums   04-Dec-1944
6      David     Marks     Guitar   22-Aug-1948
7      Ricky    Fataar      Drums   05-Sep-1952
8    Blondie   Chaplin     Guitar   07-Jul-1951

DataFrames are two-dimensional data structures similar to spreadsheets or database tables. A pandas DataFrame can be considered a set of columns, with each column being a pandas Series. Each column also has a heading, which is the name property of the Series, and each row has a label, which is referred to as an element of its associated index object.
The DataFrame’s index is shown to the left of the DataFrame. It’s not part of the original band_members.csv source file, but is added as part of the DataFrame creation process. It’s this index object you’re learning to reset.
The index of a DataFrame is an additional column of labels that helps you identify rows. When used in combination with column headings, it allows you to access specific data within your DataFrame. The default index labels are a sequence of integers, but you can use strings to make them more meaningful. You can actually use any hashable type for your index, but integers, strings, and timestamps are the most common.
Note: Although indexes are certainly useful in pandas, an alternative to pandas is the new high-performance Polars library, which eliminates them in favor of row numbers. This may come as a surprise, but aside from being used for selecting rows or columns, indexes aren’t often used when analyzing DataFrames. Also, row numbers always remain sequential when rows are added or removed in a Polars DataFrame. This isn’t the case with indexes in pandas.
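As a quick preview of why resetting matters, here is a minimal sketch using the beach_boys DataFrame above: dropping rows leaves gaps in the default integer index, and reset_index(drop=True) renumbers it from zero.

Python

>>> shorter = beach_boys.drop(index=[0, 1])  # index now starts at 2
>>> shorter.reset_index(drop=True)           # index renumbered from 0, old index discarded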
Read the full article at https://realpython.com/pandas-reset-index/ »
Julien Tayon: The crudest CRUD of them all : the smallest CRUD possible in 150 lines of python
To begin with, I am not really motivated to start with a full-fledged MVC (Model View Controller) framework à la Django, because there is a lot of boilerplate and setup to get through before seeing a result. But it does have a lot of features I want, including authentication, authorization, and security handling.
For prototypes we normally favour lightweight frameworks (à la Flask) and CRUD.
The CRUD approach factors everything a framework does into a single dynamic form that adapts itself to the model: from the Python class declaration alone it generates the HTML forms for entering data, the tabulated views, the REST endpoints, and the search over them, as well as the database model. One language to rule them all: PYTHON. With enough talent, you could even generate from Python the JavaScript that handles autocompletion on the generated view.
But before using a CRUD framework, we need a cruder one: ugly, disgusting, but useful for a human, before building the REST APIs, writing the Python classes, the HTML forms, and the controllers.
I call this the crudest CRUD of them all.
Think hard about what you want when prototyping...
- to write no CONTROLLERS; the Flask documentation takes a very verbose approach to exposing routes and writing them, and writing controllers for inserting into and searching the database is boring;
- to write as few HTML views as possible; one and only one would be great;
- to avoid having to fiddle with the many files that separation of concerns demands: the fewer Python files and classes you touch, the better;
- to avoid having to write SQL or use an ORM (at least a verbose, declarative one);
- show me your code and you can mesmerize and even fool me; however, show me your data structures and I'll know everything I need to know about your application: data structures should be right under your nose, in readable form, in the code;
- to have AT LEAST one endpoint for inserting and searching, so that curl can be used to begin automation and testing, preferably in a factorisable fashion;
- only one point of failure is accepted.
Once we set these few conditions, we see that whatever we do, WE NEED a dynamic HTTP server at the core. Python being the topic here, we are gonna do it in Python.
What is the simplest dynamic web server in Python?
The reference implementation of WSGI, which is the crudest WSGI server of them all: wsgiref. And you don't need to download it, since it's provided in the Python stdlib.
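To make that concrete, here is a minimal sketch of a wsgiref application (illustrative only; the greeting body and the port are placeholders, not part of the final server):

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Every WSGI app receives the request environ and a start_response callable...
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    # ...and returns an iterable of bytes for the response body.
    return [b"<h1>Hello from wsgiref</h1>"]

make_server("", 5000, app).serve_forever()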
First things first, we are gonna add a default view so that we can serve a static HTML page listing the minimal HTML needed to interact with the data: sets of inputs and forms.
Here, we stop. And we see that these forms are describing the data model.
Wouldn't it be nice if we could easily parse the HTML form with a tool from the standard library, html.parser, and deduce the database model from it? And beyond plain fields, could we add relationships? And, since we are dreaming: what about creating the tables on the fly from the form if they don't exist?
Encoding the relationships does require hijacking a convention: whenever the parser encounters a form field named whatever_id, it deduces that it is a foreign key to table « whatever », column « id ».
Once this is done, we can parse the HTML, do some magic to match HTML input types to database types (an adapter), and it's almost over. We can even dream of creating the database itself if it does not exist, in a one-liner for SQLite.
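As an aside, here is what that one-liner could look like with SQLAlchemy (a sketch; the full server below targets PostgreSQL and uses sqlalchemy_utils to create the database instead):

from sqlalchemy import create_engine

# SQLite creates the app.db file on first connection,
# so no explicit create_database() step is needed.
engine = create_engine("sqlite:///app.db")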
We just need to throw all frugality of dependencies out the window and spoil our karma of « digital sobriety » by adding the almighty SQLAlchemy, the crudest (but still heavy) ORM when it comes to the introspective features needed to map database objects to Python objects in a clear, consistent way. With this, just one function is needed in the controller to switch between inserting (POST method) and searching (GET).
Well, that only works if the DOM is passed in the request. So of course I see the criticisms coming:
- we can't pass the DOM in the request, because the HTML form ignores the DOM;
- aren't you scared of a 413 (Payload Too Large) error in the GET method if you pass the DOM?
Since we are human, we would also like the form to be readable when served, because, well, humans don't read the source and can't see the name attributes of the inputs. A bit of polish on the raw HTML would be nice. It would also give consistency, and it would reduce the required size of the form being sent. Here, JavaScript again is the right answer. Fine, we serve the static page at the top of the controller. Let's use jQuery to make it terse enough. Oh, and if we have JavaScript, wouldn't it be able to clone the relevant part of the invented model tag inside every form, so that we can pass the relevant part of the DOM to the controller?
I think we have everything to write the crudest CRUD server of them all :D
Happy code reading:

import multipart
from wsgiref.simple_server import make_server
from json import dumps
from html.parser import HTMLParser
from urllib.parse import parse_qsl, urlparse

from sqlalchemy import create_engine, MetaData, Table, Column, select
from sqlalchemy import Integer, Float, Date, DateTime, Time, UnicodeText, ForeignKey
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy_utils import database_exists, create_database

engine = create_engine("postgresql://jul@192.168.1.32/pdca")
if not database_exists(engine.url):
    create_database(engine.url)

tables = dict()

class HTMLtoData(HTMLParser):
    """Parse the served HTML and create one SQL table per <form>."""

    def __init__(self):
        global engine, tables
        self.cols = []
        self.table = ""
        self.tables = []
        self.engine = engine
        self.meta = MetaData()
        super().__init__()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input":
            # "id" becomes the primary key.
            if attrs.get("name") == "id":
                self.cols += [Column("id", Integer, primary_key=True)]
                return
            # The naming convention: whatever_id is a foreign key to whatever.id.
            try:
                if attrs.get("name").endswith("_id"):
                    table, _ = attrs.get("name").split("_")
                    self.cols += [Column(attrs["name"], Integer, ForeignKey(table + ".id"))]
                    return
            except Exception as e:
                print(e)
            # The adapter: map HTML input types to database column types.
            if attrs["type"] in ("email", "url", "phone", "text"):
                self.cols += [Column(attrs["name"], UnicodeText)]
            if attrs["type"] == "number":
                if attrs["step"] == "any":
                    self.cols += [Column(attrs["name"], Float)]
                else:
                    self.cols += [Column(attrs["name"], Integer)]
            if attrs["type"] == "date":
                self.cols += [Column(attrs["name"], Date)]
            if attrs["type"] == "datetime":
                self.cols += [Column(attrs["name"], DateTime)]
            if attrs["type"] == "time":
                self.cols += [Column(attrs["name"], Time)]
        if tag == "form":
            # The form's action path names the table.
            self.table = urlparse(attrs["action"]).path[1:]

    def handle_endtag(self, tag):
        if tag == "form":
            self.tables += [Table(self.table, self.meta, *self.cols)]
            tables[self.table] = self.tables[-1]
            self.table = ""
            self.cols = []
            # Create the tables on the fly if they don't exist yet.
            with engine.connect() as cnx:
                self.meta.create_all(engine)
                cnx.commit()

html = """<!doctype html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<script>
$(document).ready(function() {
    // Decorate each form with a legend and insert/search submit buttons.
    $("form").each((i, el) => {
        $(el).wrap("<fieldset>" + el.action + "</fieldset>");
        $(el).append("<input type=submit value=insert formmethod=post >" +
                     "<input type=submit value=search formmethod=get />");
    });
    // Label every visible input with its name attribute.
    $("input:not([type=hidden],[type=submit])").each((i, el) => {
        $(el).before("<label>" + el.name + "</label><br/>");
        $(el).after("<br>");
    });
});
</script>
</head>
<body>
<form action=/user >
    <input type=number name=id />
    <input type=text name=name />
    <input type=email name=email >
</form>
<form action=/event >
    <input type=number name=id />
    <input type=date name=date />
    <input type=text name=text />
    <input type=number name=user_id />
</form>
</body>
</html>
"""

router = dict({"": lambda fo: html})

def simple_app(environ, start_response):
    fo, fi = multipart.parse_form_data(environ)
    fo.update(**{k: dict(
        name=v.filename,
        content=v.file.read().decode("utf-8", "backslashreplace"),
        content_type=v.content_type,
    ) for k, v in fi.items()})
    table = route = environ["PATH_INFO"][1:]
    fo.update(**dict(parse_qsl(environ["QUERY_STRING"])))
    start_response("200 OK", [("Content-type", "text/html; charset=utf-8")])
    # (Re)create the schema from the HTML itself.
    try:
        HTMLtoData().feed(html)
    except KeyError:
        pass
    # Let automap introspect the database into Python classes.
    metadata = MetaData()
    metadata.reflect(bind=engine)
    Base = automap_base(metadata=metadata)
    Base.prepare()
    if route in tables.keys():
        with Session(engine) as session:
            Item = getattr(Base.classes, table)
            # POST inserts a row built from the submitted fields.
            if environ.get("REQUEST_METHOD", "GET") == "POST":
                new_item = Item(**{k: v for k, v in fo.items() if v and not k.startswith("_")})
                session.add(new_item)
                session.commit()
                fo["insert_result"] = new_item.id
            # GET searches with the submitted fields as exact filters.
            if environ.get("REQUEST_METHOD") == "GET":
                result = []
                for elt in session.execute(
                        select(Item).filter_by(**{k: v for k, v in fo.items()
                                                  if v and not k.startswith("_")})).all():
                    result += [{k.name: getattr(elt[0], k.name) for k in tables[table].columns}]
                fo["search_result"] = result
    return [router.get(route, lambda fo: dumps(fo.dict, indent=4, default=str))(fo).encode()]

print("Crudest CRUD of them all on port 5000...")
make_server("", 5000, simple_app).serve_forever()
1xINTERNET blog: Why choosing a reliable migration partner is crucial for a successful transition from Drupal 7
The end of life of Drupal 7 is just around the corner and selecting the right migration partner is crucial for a smooth, cost-effective, and future-proof transition. Find out how 1xINTERNET and Pantheon’s unique solution can support your organisation!