Feeds
Real Python: Managing Dependencies With Python Poetry
When your Python project relies on external packages, you need to make sure you’re using the right version of each package. After an update, a package might not work as it did before. A dependency manager like Python Poetry helps you specify, install, and resolve external packages in your projects. This way, you can be sure that you always work with the correct dependency version on every machine.
In this video course, you’ll learn how to:
- Create a new project using Poetry
- Add Poetry to an existing project
- Configure your project through pyproject.toml
- Pin your project’s dependency versions
- Install dependencies from a poetry.lock file
- Run basic Poetry commands using the Poetry CLI
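To make that workflow concrete, here is a minimal command-line sketch of the basics listed above (the project and package names are placeholders; the commands themselves are standard Poetry CLI):

```
# Create a new project managed by Poetry (generates pyproject.toml)
poetry new my-project
cd my-project

# Add a dependency: Poetry records a version constraint in pyproject.toml
# and pins the exact resolved version in poetry.lock
poetry add requests

# On another machine, recreate the same environment from the lock file
poetry install
```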
Highlights from the Digital Public Goods Alliance Annual Members Meeting 2024
This month, I had the privilege of representing the Open Source Initiative at the Digital Public Goods Alliance (DPGA) Annual Members Meeting. Held in Singapore, this event marked our second year participating as members, following our first participation in Ethiopia. It was an inspiring gathering of innovators, developers and advocates working to create a thriving ecosystem for Digital Public Goods (DPGs) as part of UNICEF's initiative to advance the United Nations' sustainable development goals (SDGs).
Keynote insights
The conference began with an inspiring keynote by Liv Marte Nordhaug, CEO of the DPGA, who made a call for governments and organizations to incorporate:
- Open Source first principles
- Open data at scale
- Interoperable Digital Public Infrastructure (DPI)
- DPGs as catalysts for climate change action
These priorities underscored the critical role of DPGs in fostering transparency, accountability and innovation across sectors.
DPGs and Open Source AI
A standout feature of the first day of the event was the Open Source AI track led by Amreen Taneja, DPGA Standards Lead, which encompassed three dynamic sessions:
- Toward AI Democratization with Digital Public Goods: This session explored the role of DPGs in contributing to the democratization of AI technologies, including AI use, development and governance, to ensure that everyone benefits from AI technology equally.
- Fully Open Public Interest AI with Open Data: This session highlighted the need for open AI infrastructure supported by accessible, high-quality datasets, especially in global majority countries. Discussions revolved around how open training datasets ought to be licensed to ensure open, public-interest AI.
- Creating Public Value with Public AI: This session examined real-world applications of generative AI in public services. Governments and NGOs showcased how AI-enabled tools can effectively tackle social challenges, leveraging Open Source solutions within the AI stack.
The second day of the event was marked by the DPG Product Fair, which provided a science-fair-style platform for showcasing DPGs. Notable examples included:
- India's eGov DIGIT Open Source platform, serving over 1 billion citizens with robust digital infrastructure.
- Singapore's Open Government Products, which leverages Open Source and enables open collaboration, allowing this small nation to expand its impact together with other Southeast Asian nations.
One particularly engaging session was Sarah Espaldon's "Improving Government Services with Legos." This presentation from the Singapore government highlighted the benefits of modular DPGs in enhancing service delivery and building flexible DPI capabilities.
Privacy best practices for DPGs
A highlight from the third and final day of the event was the privacy-focused workshop co-hosted by the DPGA and the Open Knowledge Foundation. As privacy becomes a central concern for DPGs, the DPGA introduced its Standard Expert Group to refine privacy requirements and develop best practice guidelines. The interactive session provided invaluable feedback, driving forward the development of robust privacy standards for DPGs.
Looking ahead
The event reaffirmed the potential of Open Source technologies to transform global public goods. As we move forward, the Open Source Initiative is committed to advancing these conversations, including around the Open Source AI Definition, and to fostering a more inclusive digital ecosystem. A special thanks to the dedicated DPGA team (Liv Marte Nordhaug, Lucy Harris, Max Kintisch, Ricardo Miron, Luciana Amighini, Bolaji Ayodeji, Lea Gimpel, Pelin Hizal Smines, Jon Lloyd, Carol Matos, Amreen Taneja, and Jameson Voisin) whose efforts made this conference a success. We look forward to another year of impactful collaboration and to the Annual Members Meeting next year in Brazil!
Emanuele Rocca: Building Debian packages The Right Way
There is more than one way to do it, but it seems that The Right Way to build Debian packages today is using sbuild with the unshare backend. The most common backend before the rise of unshare was schroot.
The official Debian Build Daemons have recently transitioned to using sbuild with unshare, providing a strong motivation to consider making the switch. Additionally, the new approach means: (1) no need to configure schroot, and (2) no need to run the build as root.
Here are my notes about moving to the new setup, for future reference and in case they may be useful to others.
First I installed the required packages:
```
apt install sbuild mmdebstrap uidmap
```

Then I created the following script to update my chroots every night:
```bash
#!/bin/bash

for arch in arm64 armhf armel; do
    HOME=/tmp mmdebstrap --quiet --arch=$arch --include=ca-certificates --variant=buildd unstable \
        ~/.cache/sbuild/unstable-$arch.tar http://127.0.0.1:3142/debian
done
```

In the script, I'm calling mmdebstrap with --quiet because I don't want to get any output on successful execution. The script is running in cron with email notifications, and I only want to get a message if something goes south. I'm setting HOME=/tmp for a similar reason: the unshare user does not have access to my actual home directory, and by default dpkg tries to use $HOME/.dpkg.cfg as the configuration file. By overriding HOME I avoid the following message to standard error:
```
dpkg: warning: failed to open configuration file '/home/ema/.dpkg.cfg' for reading: Permission denied
```

Then I added the following to my sbuild configuration file (~/.sbuildrc):
```perl
$chroot_mode = 'unshare';
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXXXX';
```

The first option sets the sbuild backend to unshare, whereas unshare_tmpdir_template is needed on Bookworm to ensure that the build process runs in memory rather than on disk for performance reasons. Starting with Trixie, /tmp is a tmpfs by default, so the setting won't be needed anymore.
Packages for different architectures can now be built as follows:
```
# Tarball used: ~/.cache/sbuild/unstable-arm64.tar
$ sbuild --dist=unstable hello

# Tarball used: ~/.cache/sbuild/unstable-armhf.tar
$ sbuild --dist=unstable --arch=armhf hello
```

If you have any comments or suggestions about any of this, please let me know.
Specbee: The All-New Drupal CMS: Find Answers to Your 13 Most Common Questions
Armin Ronacher: Constraints are Good: Python's Metadata Dilemma
There is currently an effort underway to build a new universal lockfile standard for Python, most of which is taking place on the Python discussion forum. This initiative has highlighted the difficulty of creating a standard that satisfies everyone. It has become clear that different Python packaging tools have slightly different ideas of what a lockfile is supposed to look like or even be used for.
In those discussions, however, another small aspect re-emerged: Python has a metadata problem. Python's metadata system is too complex and suffers from what I would call a "lack of constraints".
JavaScript: Example of Useful Constraints
JavaScript provides an excellent example of how constraints can simplify and improve a system. In JavaScript, metadata is straightforward. Whether you develop against a package locally or use a package from npm, metadata is represented the same way. There is a single package.json file that contains the most important metadata of a package, such as name, version or dependencies. This simplicity imposes significant but beneficial constraints:
- There is a 1:1 relationship between an npm package and its metadata. Every npm package has a single package.json file that is the source of truth of metadata. Metadata is trivially accessible, even programmatically, via require('packageName/package.json').
- Dependencies (and all other metadata) are consistent across platforms and architectures. Platform-specific binaries are handled via a filter mechanism (os and cpu) paired with optionalDependencies. [1]
- All metadata is static, and updates require explicit changes to package.json prior to distribution or installation. Tools are provided to manipulate that metadata, such as npm version patch, which will edit the file in-place.
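To make that concrete, here is a minimal, hypothetical package.json of the kind described above; everything a resolver needs lives in this one static file:

```json
{
  "name": "example-package",
  "version": "1.2.3",
  "dependencies": {
    "left-pad": "^1.3.0"
  },
  "optionalDependencies": {
    "example-package-linux-arm64": "1.2.3"
  }
}
```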
These constraints offer several benefits:
- Uniform behavior regardless of whether a dependency is installed locally or from a remote source. There is not even a filesystem layout difference between what comes from git or npm. This enables things like replacing an installed dependency with a local development copy, without change in functionality.
- There is one singular source of truth for all metadata. You can edit package.json and any consumer of that metadata can just monitor that file for changes. No complex external information needs to be consulted.
- Resolvers can rely on a single API call to fetch dependency metadata for a version, improving efficiency. Practically this also means that the resolver only needs to hit a single URL to retrieve all possible dependencies of a dependency. [2]
- It makes auditing much easier because there are fewer moving parts and just one canonical location for metadata.
In contrast, Python has historically placed very few constraints on metadata. For example, the old setup.py based build system essentially allowed arbitrary code execution during the build process. At one point it was at least strongly suggested that the version produced by that build step better match what is uploaded to PyPI. However, in practice, if you lie about the version that is okay too. You could upload a source distribution to PyPI that claims it's 2.0 but will in fact install 2.0+somethinghere or a completely different version entirely.
What happens is that both before a package is published to PyPI and when a package is installed locally after downloading, the metadata is generated from scratch. Not only does that mean the metadata does not have to match, it also means that it's allowed to be completely different. It's absolutely okay for a package to claim it depends on cool-dependency on your machine, but on uncool-dependency on my machine. Or to depend on different packages depending on the time of day or the phase of the moon.
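As an illustration (not from the original post), a setup.py like the following sketch is perfectly legal and produces metadata that depends on the machine and moment it runs on:

```python
# setup.py -- hypothetical example of machine-dependent metadata
import datetime
import platform
from setuptools import setup

setup(
    name="example-package",
    # The version is computed at build time, so two machines (or two
    # moments in time) can produce different metadata for the "same" release.
    version="2.0+" + datetime.date.today().strftime("%Y%m%d"),
    # Even the dependency list can vary by environment.
    install_requires=(
        ["cool-dependency"] if platform.system() == "Linux"
        else ["uncool-dependency"]
    ),
)
```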
Editable installs and caching are particularly problematic since metadata could become invalid almost immediately after being written. [3]
Some of this has been somewhat improved because the new pyproject.toml standard encourages static metadata. However, build systems are entirely allowed to override that by falling back to what is called "dynamic metadata", and this is something that is commonly done.
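In pyproject.toml terms, opting out of static metadata is a one-line declaration. A minimal sketch (the dynamic field is standard PEP 621; the setuptools backend here is just an example):

```toml
[project]
name = "example-package"
# "version" is deliberately not written down here; the build backend
# computes it, so no tool can know it without running a build step.
dynamic = ["version"]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
```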
In practice this system incurs a tremendous tax on everybody, one that is easily missed.
Disjointed and complex metadata access: there is no clear relationship between a PyPI package name and the installed Python modules. If you know what the PyPI package name is, you can access metadata via importlib.metadata. Metadata is not read from pyproject.toml, even if it's static; instead, the package name is used to look up the metadata in the .dist-info folder installed into site-packages (specifically the METADATA file therein).
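In practice that lookup looks like the following sketch (requests used as an example package; note the key is the distribution name on PyPI, not the module you import):

```python
from importlib.metadata import metadata, version

# Both calls read the installed .dist-info/METADATA in site-packages,
# not whatever pyproject.toml currently says.
print(version("requests"))
print(metadata("requests").get_all("Requires-Dist"))
```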
Mandatory metadata re-generation: As a consequence, if you edit pyproject.toml to change a piece of metadata, you need to re-install the package for that metadata to be updated in the .dist-info. People commonly forget to do that, so desynchronized metadata is very common. This is true even for static metadata today!
Unclear cache invalidation: Because metadata can be dynamic, it's not clear when you should automatically re-install a package. It's not enough to just track pyproject.toml for changes when dynamic metadata is used. uv, for instance, has a really complex, explicit cache management system so that one can help uv detect outdated metadata. This is obviously non-standardized, requires uv to understand version control systems, and is not shared with other tools. For instance, if you know that the version information incorporates the git hash, you can tell uv to pay attention to git commits.
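That escape hatch looks roughly like the following (syntax from my reading of uv's documentation; treat the exact keys as an assumption and check the current docs):

```toml
# pyproject.toml -- assumed syntax for uv's explicit cache invalidation
[tool.uv]
# Rebuild the package's metadata when these inputs change, e.g. when the
# version string embeds the current git commit.
cache-keys = [{ file = "pyproject.toml" }, { git = { commit = true } }]
```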
Fragmented metadata storage: even where generated metadata is stored is complex. Different systems have slightly different behavior for storing that metadata.
- When working locally (e.g., editable installs), what happens depends on the build system:
  - If setuptools is used, metadata is written into two locations: the legacy <PACKAGE_NAME>.egg-info/PKG-INFO file, and additionally the new location for metadata inside site-packages, in a <PACKAGE_NAME>.dist-info/METADATA file.
  - If hatch and most other modern build systems are used, metadata is only written into site-packages (into <PACKAGE_NAME>.dist-info/METADATA).
  - If no build system is configured, it depends a bit on the installer. pip will, even for an editable install, build a wheel with setuptools; uv will only build a wheel and make the metadata available if one runs uv build. Otherwise the metadata is not available (in theory it could be found in pyproject.toml for as long as it's not dynamic).
- For source distributions (sdists), first the build step happens as in the section before. Afterwards the metadata is thrown into a PKG-INFO file. It's currently placed in two locations in the sdist: PKG-INFO in the root and <PACKAGE_NAME>.egg-info/PKG-INFO. That metadata, however, I believe is only used for PyPI; when installing the sdist locally, the metadata is regenerated from pyproject.toml (or, if setuptools is used, setup.py). That's also why metadata can change from what's in the sdist to what's there after installation.
- For wheels the metadata is placed in <PACKAGE_NAME>.dist-info/METADATA exclusively. Wheels have static metadata, so no build step takes place. What is in the wheel is always used.
Dynamic metadata makes resolvers slow: Dynamic metadata makes the job of resolvers and installers very hard and slows them down. Today, for instance, advanced resolvers like poetry or uv are sometimes not able to install the right packages, because they assume that dependency metadata is consistent across sdists and wheels. However, there are a lot of sdists available on PyPI that publish incomplete dependency metadata (whatever the build step for the sdist created on the developer's machine is what gets cached on PyPI).
Not getting this right can be the difference between hitting one static URL with all the metadata, versus downloading a zip file, creating a virtualenv, installing build dependencies, generating an entire sdist and then reading the final generated metadata. That's many orders of magnitude of difference in execution time.
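For releases with static metadata, the cheap path exists today: PyPI's JSON API returns the declared dependencies of a version in a single request. A small sketch using that real endpoint (requests 2.32.3 as an example release):

```python
import json
from urllib.request import urlopen

# One GET returns name, version, and requires_dist for this release --
# no download, no virtualenv, no build step involved.
with urlopen("https://pypi.org/pypi/requests/2.32.3/json") as resp:
    info = json.load(resp)["info"]

print(info["requires_dist"])  # the declared dependencies, if static
```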
This also extends to caching. If the metadata can constantly change, how would a resolver cache it? Is it required to build all possible source distributions to determine the metadata as part of resolving?
Cognitive complexity: The system introduces an enormous cognitive overhead which makes it very hard to understand for users, particularly when things go wrong. Incorrectly cached metadata can be almost impossible to debug for a user because they do not understand what is going on. Their pyproject.toml shows the right information, yet for some reason it behaves incorrectly. Most people don't know what "egg info" or "dist info" is. Or why an sdist has metadata in a different location than a wheel or a local checkout.
Having support for dynamic metadata also means that developers continue to maintain elaborate and confusing systems. For instance, there is a plugin for hatch that dynamically creates a readme [4], requiring even arbitrary Python code to run to display documentation. There are plugins to automatically change versions to incorporate git version hashes. As a result, to figure out which version you actually have installed, it's not enough to look into a single file; you might have to rely on a tool to tell you what's going on.
The challenge with dynamic metadata in Python is vast, but unless you are writing a resolver or packaging tool, you're not going to experience the pain as much. You might in fact quite enjoy the power of dynamic metadata. Unsurprisingly, bringing up the idea of removing it is very badly received. There are so many workflows seemingly relying on it.
At this point, fixing this problem might be really hard because it's a social problem more than a technical one. Had the constraints been placed there in the first place, these weird use cases would never have emerged. But because the constraints were not there, people were free to go to town leveraging the flexibility, with all the consequences that causes.
I think at this point it's worth moving the cheese, but it's unclear if this can be done through a standard. Maybe the solution will be for tools like uv or poetry to warn if dynamic metadata is used and strongly discourage it. Then over time the users of packages that use dynamic metadata will start to urge the package authors to stop using it.
The cost of dynamic metadata is real, but it's felt only in small ways all the time. You notice it a bit when your resolver is slower than it has to be, you notice it if your packaging tool installs the wrong dependency, you notice it if you need to read the manual for the first time when you need to reconfigure your cache-key or force a package to constantly reinstall, you notice it if you need to re-install your local dependencies over and over for them not to break. There are many ways you notice it. You don't notice it as a roadblock, just as a tiny, tiny tax. Except that it is a tax we all pay, and it makes the user experience significantly worse compared to what it could be.
The deeper lesson here is that if you give developers too much flexibility, they will inevitably push the boundaries and that can have significant downsides as we can see. Because Python's packaging ecosystem lacked constraints from the start, imposing them now has become a daunting challenge. Meanwhile, other ecosystems, like JavaScript's, took a more structured approach early on, avoiding many of these pitfalls entirely.
[1] You can see how this works in action for sentry-cli, for instance. The @sentry/cli package declares all its platform-specific dependencies as optionalDependencies (relevant package.json). Each platform build has a filter in its package.json for os and cpu. For instance, this is what the arm64 linux binary dependency looks like: package.json. npm will attempt to install all optional dependencies, but it will skip over the ones that are not compatible with the current platform.
[2] For @sentry/cli at version 2.39.0, for instance, this means that this singular URL will return all the information that a resolver needs: registry.npmjs.org/@sentry/cli/2.39.0
[3] A common error in the past was to receive a pkg_resources.DistributionNotFound exception when trying to run a script in local development.
[4] I got some flak on Bluesky for throwing readme generators under the bus. While they do not present the same problem that metadata like dependencies and versions does, they do still increase the complexity. In an ideal world, what you find in site-packages represents what you have in your version control, and there is a README.md file right there. That's what you have in JavaScript, Rust and plenty of other ecosystems. What we have, however, is a build step (either dynamic or copying) taking that readme file and placing it in an RFC 5322 header encoded file in a dist-info. So instead of "command clicking" on a dependency and finding the readme, we need special tools or arcane knowledge if we want to read the readme files locally.

Sandro Knauß: Akademy 2024 in Würzburg
In order to prepare for Akademy, I started some days beforehand to give my Librem 5 (an Open Hardware phone) another try, and ended up with a non-starting Plasma 6. This issue was actually already known, but hadn't been addressed. In the end I arrived at Akademy with phosh (which is GNOME-based) installed on my Librem 5, in order to have something working.
I met Bushan and Bart, who took care of it, and the issue was fixed two days later, so I could finally install Plasma 6 on it. The last time I tested my Librem 5, with Plasma 5, it felt sluggish and didn't work well. But this time I was impressed by how well the system reacts. Sure, there are some rough edges here and there, but in the bigger picture it is quite usable. One annoying issue is that the camera only works with one app, and the other issue is the battery capacity: you have to charge it once a day. For lack of a QR reader that can use the camera, getting data onto the phone was quite challenging. Unfortunately the conference WiFi isolated the devices from each other, so I couldn't use KDE Connect to transfer data. In the end the only way to import data was taking five photos of the QR code to import my D-Ticket into Itinerary.
Having a device with Plasma Mobile, it was directly used for an experiment: how well does Dolphin work on a Plasma Mobile device? Together with Felix Ernst we tried it out and were quite impressed that Dolphin works very well on Plasma Mobile after some simple modifications to the UI. That resulted in a patch to add a mobile UI for Dolphin (!826).
With more time to play with my Librem 5, I also found a bug in KWeather: it is missing a Refresh option when used in a Plasma Mobile environment (#493656).
Akademy is a good place to identify and solve some issues. It is always like that: you chat with someone, they can tell you who to ask about your concrete question, and in the end you can solve things that seemed unsolvable at the beginning.
There was also time to look into the travelling app Itinerary. A lot of people are faced with a lot of real-world issues when not in their home town. Itinerary is the best travelling app I know of. It can import nearly every ticket you have, can get location information from restaurant websites, and allows routing to that place. It adds a lot of useful information while travelling, like current delays, platform changes, live updates for elevators, weather information at the destination, and a station map, and it offers all those features with a strong focus on privacy.
In detail I found some small things to improve:
- If you search for a bus ride and enter the correct name for the bus stop, it will still add some walk from and to the station. The issue here is that we use different backends, and not all backends share the same geo coordinates. That's why Itinerary needs to add some heuristics to delete those paths.
- Instead of displaying just a small station map of one bus stop in the inner city, it showed the complete Würzburg inner city, as there is one big park around the inner city (named "Ringpark").
- Würzburg has a quite big bus station, but the platform information was missing in the map, so we tweaked the CSS to display the platforms. To be sure that we don't fix only Würzburg, we also checked whether Greifswald and Aix-en-Provence follow the same naming scheme.
I additionally learned that it has a lot of details that help people who have special needs. That is the reason why Daniel Kraut wants to get Itinerary available for iOS. Now that Daniel has spoken out about this goal, others have already started to implement the first steps towards building apps for iOS.
This year I was volunteering to help out at Akademy. For me it was a lot of fun to meet everyone at the infodesk or to help the speakers set up the beamer and microphone. It is also a good opportunity to meet many new faces and get in contact with them.

I also see room for improvement. As we were quite busy at the Welcome Event handing out the badges to everyone, I couldn't answer the questions from newcomers, as the queue was too long. I propose that some people volunteer to be available for questions from newcomers. Often it is hard for newcomers to make their first contact(s) in a new community, and there is a lot of space for improvement in making it easier for them to join. Some ideas in my head: make an event for the newcomers to give them some links into the community and show that everyone is friendly; arrange the tables at the BoFs in a circle, so everyone can see each other (it was also hard for me to understand everyone, as they mostly spoke towards the front). And then BoFs are sometimes full of very specific words, and if you are not already deep into the topic, you are lost. I can see the problem: on the one side, BoFs are the place where the people who already know the topic want to get things done; on the other side, newcomers join BoFs, are overwhelmed by too many new words, get frustrated, and think that they are not welcome. Maybe at least everyone should introduce themselves by name and ask new faces why they joined the BoF, to help them join in.
I'm happy that the food provided for the attendees was very delicious, and that I'm not the only one who eats mostly vegetarian, with a big share of the food even being vegan.
At the conference, the KDE Eco initiative really caught me, as I see a lot of new possibilities in giving people more reasons to switch to an Open Source system. The talk from Natalie was great to see: how pupils get excited about Open Source and also help their grandparents move to a Linux system. As I will also start to work as a teacher, I really got ideas about what I can do at school. Together with Joseph and Nicole, we finally started to think about how to run an exploration of what kind of old hardware KDE software still runs on. The ones with the oldest hardware will get an old KDE shirt. For more information see #40.
The conference was very motivating for me; I even still had energy in the evenings to do some Debian packaging, finally pushed kweathercore to Debian, and started to work on KWeather. Now I'm even more interested in the KDE apps focusing on the mobile world, as I now have some hardware that can actually use those apps.
I really enjoyed the workshop on how to contribute to Qt by Volker Hilsheimer, especially the way Volker explained things in a very friendly manner, answered every question, and sometimes postponed questions but came back to them later. All in all, I now have a good overview of how Qt does development and how I can fix bugs.
The day trip to Rothenburg ob der Tauber was very interesting for me. It was the first time I visited the town, but in my memory it feels like I already know it. I grew up reading a lot of comic albums, including the good sci-fi comic album series "Yoko Tsuno" created by the Belgian writer Roger Leloup. Yoko Tsuno is an electronics engineer, raised in Japan but now living in Belgium. In "On the Edge of Life" she helps her friend Ingard, who actually lives in Rothenburg. Leloup invested a lot of time travelling there to make his drawings as accurate as possible.
In order not to have a hard cut from Akademy back to normal life, I had lunch with Carlos to discuss KDE Neon and how we can improve the interaction with Debian. In the future this should have less friction and make both communities work together more smoothly. Additionally, as I used to develop on KDEPIM with the help of Docker images based on Neon, I asked for a kf6 dev meta package. That should help get rid of most hand-written lists of dev packages in the Dockerfile, in order to make it simpler for new contributors to start hacking on KDEPIM.
The rest of the day I finally found time to do the normal tourist stuff: going to the Wine Bridge and taking a walk to the castle of Würzburg. Unfortunately you hear a lot of car noise up there, but I could finally relax in a Japanese-designed garden.
Finally, on Saturday, I started my trip back. The trains towards Eberswalde were disrupted and I needed to find an alternative route. I got a little bit nervous, as it was the first time I travelled with only my Librem 5 and Itinerary, and I needed to reach the next train in less than two minutes. With the indoor maps provided, I could prepare my run through the train station, so I successfully reached my next train.
By the way, even if you only use KDE software, I would recommend everyone to join Akademy ;)
KDE Plasma 6.2.4, Bugfix Release for November
Tuesday, 26 November 2024. Today KDE releases a bugfix update to KDE Plasma 6, versioned 6.2.4.
Plasma 6.2 was released in October 2024 with many feature refinements and new modules to complete the desktop experience.
This release adds three weeks' worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:
- libkscreen Doctor: clarify the meaning of max. brightness zero. Commit. Fixes bug #495557
- Plasma Workspace: Battery Icon, Add headphone icon. Commit.
- Plasma Audio Volume Control: Fix speaker test layout for Pro-Audio profile. Commit. Fixes bug #495752
Ruqola 2.3.2
Ruqola 2.3.2 is a feature and bugfix release of the Rocket.chat app.
It includes many fixes for RocketChat 7.0.
New features:
- Fix administrator refresh user list
- Fix menu when we select video conference message
- Fix RocketChat 7.0 server support
- Fix create video message
- Fix update cache when we change video/attachment description
- Fix export message job
- Fix show userOffline when we have a group
- Fix enable/disable ok button when search room in team dialog
- Fix crash when we remove room in team dialog
- Fix update channel selection when we reconnect server
URL: https://download.kde.org/stable/ruqola/
Source: ruqola-2.3.2.tar.xz
SHA256: 57c8ff6fdeb4aba286425a1bc915db98ff786544a3ada9dec39056ca4b587837
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Riddell jr@jriddell.org
https://jriddell.org/jriddell.pgp
Talking Drupal: Talking Drupal #477 - Drupal Association CTO Then & Now
Today we are talking about being the CTO of the Drupal Association, how the job has changed, and how it's impacted Drupal with guests Josh Mitchell & Tim Lehnen. We'll also cover Automatic Anchors as our module of the week.
For show notes visit: https://www.talkingDrupal.com/477
Topics
- How long ago were you CTO, Josh
- Tim when did you take over
- DA infrastructure
- Drupal Credit System
- Josh's proudest moment
- Tim's proudest moment
- Growth
- Josh if you could do one thing differently
- Tim if you could make one change
- Future of the CTO job
- OOP Hook conversion
- Oregon State University Open Source Lab
- Whuffie: Cory Doctorow's Down and Out in the Magic Kingdom
- Rethink weighing of contrib projects and credits
Tim Lehnen - aspenthornpress.com hestenet
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Joshua "Josh" Mitchell - joshuami.com joshuami
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted headings on your Drupal site to have unique id values, so links can be created to take users to specific parts of any page? There's a module for that.
- Module name/project name: Automatic Anchors
- Brief history
- How old: created in Jun 2020 by Chris Komlenic (komlenic) of Penn State
- Versions available: 2.1.1-beta1, which supports Drupal 8.8, 9, and 10
- Maintainership
- Test coverage
- Number of open issues: x open issues, y of which are bugs against the current branch
- Usage stats:
- 137 sites
- Module features and usage
- By default, the module automatically generates ids on h2, h3, h4, h5, and h6 elements within the page content
- Even if two headings have the same content, the module will make sure their ids are unique, as well as making sure they are i18n-friendly, use hyphens instead of spaces, and are short enough to be useful
- The module wonât interfere with or change manually-added or already-existing HTML ids
- There's a permission to view helpful links on each heading that make the ids obvious and easy to copy
- Configuration options include the root element it should look within (defaults to the body tag), which elements should get ids, what content to use for the displayed links, and whether or not to generate ids on admin pages
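As a purely illustrative before/after of the kind of markup transformation described above (the id format and dedup suffix are assumptions, not taken from the module's docs):

```html
<!-- Before: headings in page content have no ids -->
<h2>Getting Started</h2>
<h2>Getting Started</h2>

<!-- After: unique, hyphenated, length-limited ids are generated -->
<h2 id="getting-started">Getting Started</h2>
<h2 id="getting-started-2">Getting Started</h2>
```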
Mike Driscoll: Black Friday Python Deals 2024
Black Friday and Cyber Monday are nearly here, so it’s time to do a Python sale! All my books and courses are 35% off until December 4th if you use this code: BF24 at checkout.
You can learn about any of the following topics in my books and courses:
- Basic Python (Python 101)
- PDF Processing (ReportLab book)
- Excel and Python
- Image processing (Pillow book)
- Python Logging (NEW this year!)
- Jupyter Notebook
- JupyterLab 101 (NEW this month!)
- and more
Check out my books and courses today!
Other Python Sales
- Sundeep Agarwal: ~50% off Sundeep's all-book and Python bundles with code FestiveOffer
- Python Jumpstart with Python Morsels: 50% off Trey Hunner's brand new Python course, an introduction to Python that's very hands-on ($99 instead of $199)
- Rodrigo: 50% off Rodrigo's all-books bundle with code BF24
- The Python Coding Place: 40% off The Python Coding Book and 40% off a lifetime membership to The Python Coding Place with code black2024
- Adam Johnson's Django-related Deals for Black Friday 2024
The post Black Friday Python Deals 2024 appeared first on Mouse Vs Python.
Trey Hunner: New Python Jumpstart course
I’ve just recently launched a self-paced introduction to Python that is extremely hands-on. It’s called Python Jumpstart and it’s based on introductory Python curriculum that I have been iterating on for years.
Learn Python by writing Python code
We do not learn by putting information into the brain. Our brains simply don't retain knowledge that way.
Learning happens from repeatedly attempting to retrieve information from the brain. When it comes to Python, that means writing Python code.
So unlike most Python courses, Python Jumpstart is not focused on watching videos and rewriting code seen within videos. Instead, this new Python course is based around learning Python by writing Python code.
How Python Jumpstart is structured
Python Jumpstart includes learning Python by solving 46 short Python exercises.
Before each exercise, you’ll watch a 5 minute video explaining a new Python topic. You’ll then attempt the exercise to put one or more Python topics into practice.
You won’t solve many of the exercises on your first try and that’s okay. These exercises are designed to be revisited a few times over the course of weeks, until you’re satisfied with your solution.
This is not a sprint
This structured path to Python proficiency includes:
- Carefully crafted exercises that build real understanding
- Short, detailed explanations that won’t waste your time
- A proven teaching approach that focuses on active learning
- Spaced repetition to help concepts stick
This is not a course for impatient or passive learners. Learning takes repetitive effort spaced over many days and this course is structured to embrace that fact.
If you spend 30 minutes on Python Jumpstart each day, I estimate that you’ll complete this course in about 7 weeks. You’ll spend the large majority of that time writing Python code.
This course will take time, but it will be time well-spent.
Launch week special: 50% off until December 2
Through Monday December 2, 2024, you can get lifetime access to Python Jumpstart for $99. After this launch week, Python Jumpstart will be $199.
Ready to jumpstart your Python learning journey?
Real Python: Speed Up Your Python Program With Concurrency
Concurrency refers to the ability of a program to manage multiple tasks at once, improving performance and responsiveness. It encompasses different models like threading, asynchronous tasks, and multiprocessing, each offering unique benefits and trade-offs. In Python, threads and asynchronous tasks facilitate concurrency on a single processor, while multiprocessing allows for true parallelism by utilizing multiple CPU cores.
Understanding concurrency is crucial for optimizing programs, especially those that are I/O-bound or CPU-bound. Efficient concurrency management can significantly enhance a program's performance by reducing wait times and better utilizing system resources.
In this tutorial, you'll learn how to:
- Understand the different forms of concurrency in Python
- Implement multi-threaded and asynchronous solutions for I/O-bound tasks
- Leverage multiprocessing for CPU-bound tasks to achieve true parallelism
- Choose the appropriate concurrency model based on your program's needs
To get the most out of this tutorial, you should be familiar with Python basics, including functions and loops. A rudimentary understanding of system processes and CPU operations will also be helpful. You can download the sample code for this tutorial by clicking the link below:
Get Your Code: Click here to download the free sample code that you'll use to learn about speeding up your Python program with concurrency.
Take the Quiz: Test your knowledge with our interactive "Python Concurrency" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python Concurrency
In this quiz, you'll test your understanding of Python concurrency. You'll revisit the different forms of concurrency in Python, how to implement multi-threaded and asynchronous solutions for I/O-bound tasks, and how to achieve true parallelism for CPU-bound tasks.

Exploring Concurrency in Python
In this section, you'll get familiar with the terminology surrounding concurrency. You'll also learn that concurrency can take different forms depending on the problem it aims to solve. Finally, you'll discover how the different concurrency models translate to Python.

What Is Concurrency?
The dictionary definition of concurrency is simultaneous occurrence. In Python, the things that are occurring simultaneously are called by different names, including these:
- Thread
- Task
- Process
At a high level, they all refer to a sequence of instructions that run in order. You can think of them as different trains of thought. Each one can be stopped at certain points, and the CPU or brain that's processing them can switch to a different one. The state of each train of thought is saved so it can be restored right where it was interrupted.
You might wonder why Python uses different words for the same concept. It turns out that threads, tasks, and processes are only the same if you view them from a high-level perspective. Once you start digging into the details, you'll find that they all represent slightly different things. You'll see more of how they're different as you progress through the examples.
Now, you'll consider the simultaneous part of that definition. You have to be a little careful because, when you get down to the details, you'll discover that only multiple system processes can enable Python to run these trains of thought at literally the same time.
In contrast, threads and asynchronous tasks always run on a single processor, which means they can only run one at a time. They just cleverly find ways to take turns to speed up the overall process. Even though they don't run different trains of thought simultaneously, they still fall under the concept of concurrency.
Note: Threads in most other programming languages often run in parallel. To learn why Python threads can't, check out What Is the Python Global Interpreter Lock (GIL)?
If you're curious about even more details, then you can also read about Bypassing the GIL for Parallel Processing in Python or check out the experimental free threading introduced in Python 3.13.
The way the threads, tasks, or processes take turns differs. In a multi-threaded approach, the operating system actually knows about each thread and can interrupt it at any time to start running a different thread. This mechanism is also true for processes. It's called preemptive multitasking since the operating system can preempt your thread or process to make the switch.
Preemptive multitasking is handy in that the code in the thread doesn't need to do anything special to make the switch. It can also be difficult because of that at any time phrase. The context switch can happen in the middle of a single Python statement, even a trivial one like x = x + 1. This is because Python statements typically consist of several low-level bytecode instructions.
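You can see this for yourself with the standard dis module; even this trivial statement compiles to several bytecode instructions, and a preemptive switch can land between any two of them:

```python
import dis

# x = x + 1 is not atomic: it loads x, adds 1, then stores the result.
# A thread switch between the load and the store is what makes races possible.
dis.dis(compile("x = x + 1", "<example>", "exec"))
```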
On the other hand, asynchronous tasks use cooperative multitasking. The tasks must cooperate with each other by announcing when they're ready to be switched out without the operating system's involvement. This means that the code in the task has to change slightly to make it happen.
The benefit of doing this extra work upfront is that you always know where your task will be swapped out, making it easier to reason about the flow of execution. A task won't be swapped out in the middle of a Python statement unless that statement is appropriately marked. You'll see later how this can simplify parts of your design.
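In Python's async syntax, those announcements are the await expressions. A minimal sketch of two cooperating tasks (the names and sleep duration are illustrative):

```python
import asyncio

async def worker(name: str) -> None:
    for i in range(3):
        print(f"{name}: step {i}")
        # Explicit switch point: the task announces it can be swapped out
        # here, and nowhere else, which makes the flow easy to reason about.
        await asyncio.sleep(0.1)

async def main() -> None:
    # Both workers run concurrently on one thread, taking turns at awaits.
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
```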
What Is Parallelism?
So far, you've looked at concurrency that happens on a single processor. What about all of those CPU cores your cool, new laptop has? How can you make use of them in Python? The answer is to execute separate processes!
A process can be thought of as almost a completely different program, though technically, it's usually defined as a collection of resources including memory, file handles, and things like that. One way to think about it is that each process runs in its own Python interpreter.
Because they're different processes, each of your trains of thought in a program leveraging multiprocessing can run on a different CPU core. Running on a different core means that they can actually run at the same time, which is fabulous. There are some complications that arise from doing this, but Python does a pretty good job of smoothing them over most of the time.
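As a minimal sketch of that idea, the standard library's multiprocessing pool spreads a CPU-bound function across cores, with each worker in its own interpreter (the function and inputs here are illustrative):

```python
from multiprocessing import Pool

def cpu_bound(n: int) -> int:
    # Busy arithmetic to keep one core occupied.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Each worker is a separate process with its own Python interpreter,
    # so the chunks genuinely run in parallel on multiple CPU cores.
    with Pool() as pool:
        print(pool.map(cpu_bound, [10**6] * 4))
```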
Read the full article at https://realpython.com/python-concurrency/ »
Qt support on Apple platforms
With the release of Qt 6.8.1 and 6.5.8, we are updating our documentation to clarify Qt's support for newly released Apple operating system versions.
This Week in KDE Apps: Bugfixing Week
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps.
This week, we are continuing to prepare for the KDE Gear 24.12.0 release, with a focus on bugfixing now that we've entered the feature freeze period.
Meanwhile, and as part of the 2024 end-of-year fundraiser, you can "Adopt an App" in a symbolic effort to support your favorite KDE app. This week, we are particularly grateful to mdPlusPlus and txemaq for supporting Dolphin; mdPlusPlus, Greg Helding and Archie Lamb for Okular; Henning Lammert and Thibault Molleman for Filelight; Nithanim, Dominik Perfler, and Thibault Molleman for Spectacle; Vladimir Solomatin, Akseli Lahtinen, Haakon Johannes Tjelta Meihack, and Nithanim for Kate; Henning Lammert and Marco Ryll for Kasts; GhulDev, Anders Lund, Tobias Junghans, and William Wojciechowski for Konsole; Piwix for KWrite; Gabriel Klavans for Tokodon; Matthew Lamont for Kontact; and Gabriel Karlsson for Itinerary.
Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world. So consider donating today!
Getting back to all that's new in the KDE App scene, let's dig in!
Global Changes
The KIO implementation for SFTP used by many KDE applications, like Dolphin, Gwenview, and many others, now correctly closes the network connection if a fatal error occurs, which means it is now possible to reconnect immediately without having to wait a few minutes. (Harald Sitter, 24.12.0. Link)
Many apps received some small bug fixes to ensure they work correctly on Haiku OS. (Luc Schrijvers, Link 1, link 2 and many more)
Dolphin - Manage your files
Dolphin now sorts more naturally when comparing filenames by excluding extensions. So now "a.txt" appears before "a 2.txt". (Eren Karakas, 24.12.0. Link)
Haruna - Media player
Left-clicking on a video now plays or pauses it by default. (Nate Graham, 25.04.0. Link)
Karp - KDE arranger for PDFs
Karp is a new PDF arranger and editor by Tomasz Bojczuk which has just finished the incubation phase.
Tomasz was active this week and added an option to select the PDF version of the resulting PDF (Link) and made it possible to move multiple pages at the same time (Link).
Kate - Advanced Text Editor
The build plugin, which allows you to trigger a rebuild from Kate's interface, now supports multiple projects being open at the same time without having to constantly reload the list of targets every time you switch projects. (Waqar Ahmed, 25.04.0. Link)
The ctag indexing no longer happens on the root and home folders, as it makes no sense there and just wastes CPU cycles. (Waqar Ahmed, 24.12.0. Link)
Fix getting a PATH when launching Kate outside of the console on macOS. (Waqar Ahmed, 24.12.0. Link)
KMyMoney - Personal finance manager based on double-entry bookkeeping
It is no longer possible to apply category filters on the "Net Worth report" of KMyMoney, as this was producing erroneous results. (Thomas Baumgart, 5.2. Link)
KRDC - Connect with RDP or VNC to another computer
Fix loading the gateway server address in the settings. (Fabio Bas, 24.12.0. Link)
Fix a crash when scrolling; the app was previously sending empty scroll events which were rejected by the remote server. (Fabio Bas, 24.12.0. Link)
Konqueror - KDE File Manager & Web Browser
Stefano fixed various issues with the Plasma Activities integration inside Konqueror. We now, for example, wait for the Plasma Activity service to be ready before restoring activities when starting Konqueror. (Stefano Crocco, 24.12.0. Link)
Konsole - Use the command line interface
We added the Campbell color scheme from Microsoft. (Mingcong Bai, 25.04.0. Link)
We fixed a few font rendering issues in Konsole. (Matan Ziv-Av, 24.12.0. Link)
Okular - View and annotate documents
Changed the default value of the "scroll overlap" feature from 0% to 10%, which means that when you scroll down in a document using Page Down or Space Bar, the bottom 10% of the previous page will remain visible at the top of the view. This helps you retain your spatial awareness when quickly navigating. (David Cerenius, 25.04.0. Link)
Merkuro Calendar - Manage your tasks and events with speed and ease
Claudio fixed many issues with the day and month views. Now, clicking on a day in the month view will open the day view on the selected day and not just a random one, the current day will be correctly highlighted, some sizing issues are fixed, and the month view won't appear as disabled anymore in some situations. (Claudio Cambra, 24.12.0. Link 1, link 2 and link 3)
When double-clicking on an empty space in the month view, the incidence editor will use the selected date as its start date. (Claudio Cambra, 24.12.0. Link)
NeoChat - Chat on Matrix
We fixed the sed-edit feature in NeoChat, which allows you to type a sed expression like s/foo/bar to edit your previous message. (James Graham, 24.12.0. Link)
On mobile devices, NeoChat won't open the space homepage when trying to just switch the selected space. (James Graham, 24.12.0. Link)
Implemented MSC4228: Search Redirection to harmlessly redirect searches for harmful and potentially illegal content.
OptiImage Image optimizer to reduce the size of imagesIt is now possible to remove an image from the list of images to optimize. (Soumyadeep Ghosh, Link)
Soumyadeep also fixed an issue where the same image could be added multiple times. (Soumyadeep Ghosh, Link)
Skanpage: Scan multi-page documents and images
Ported the export dialog in Skanpage to use Kirigami.Dialog, giving it a nicer and more consistent appearance. (Thomas Duckworth, 25.04.0. Link)
Ported Skanpage to use KIO, which allows saving scanned documents to remote folders. (Alexander Stippich, 25.04.0. Link)
Telly Skout: A convergent Kirigami TV guide
Fixed the list of favorite TV channels being empty after opening a different page. (Plata Hill, 24.12.0. Link)
Tokodon: Browse the Fediverse
Joshua added titles to the profile pages, so they are no longer empty. (Joshua Goins, 24.12.0. Link)
Quote posts are now better detected. (Joshua Goins, 24.12.0. Link)
It is now possible to start a new chat from the conversation page. (Joshua Goins, 24.12.0. Link)
…And Everything Else
This blog only covers the tip of the iceberg! If you're hungry for more, check out Nate's blog about Plasma and be sure not to miss his This Week in Plasma series, where every Saturday he covers all the work being put into KDE's Plasma desktop environment.
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get Involved
The KDE organization has become important in the world, and your time and contributions have helped us get there. As we grow, we're going to need your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don't have to be a programmer, either. There are many things you can do: you can help hunt and confirm bugs, and maybe even solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.
You can also help us by donating. Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world.
To get your application mentioned here, please ping us on Invent or on Matrix.
The Drop Times: The Art of Growing Together
The open-source ecosystem thrives on collaboration, innovation, and inclusivity. Within this space, Drupal has consistently emerged as a beacon of robust, community-driven development. A significant contribution to this success comes from Drupal companies stepping up to sponsor Free and Open Source Software (FOSS) events, ensuring the platform's visibility and fostering its growth.
Some of them are doing more than just sponsoring events: they are cultivating spaces where Drupal can shine amidst the broader open-source community. By anchoring Drupal's presence at these gatherings, they highlight the platform's strengths, such as its flexibility, scalability, and vibrant community. Their efforts underscore the idea that Drupal is not just a content management system but a cornerstone of the open-source movement.
Recently, Nico Grienauer wrote an article in the Drop Times about acolono's presence at "Kiss the Globe," an interdisciplinary event held in Vienna, Austria. Acolono was one of the event's sponsors, but its involvement did not end there: the company offered discounted tickets to Drupal developers, enabling more contributors to attend. This approach democratizes access to knowledge-sharing opportunities, allowing developers to engage with the broader open-source world while representing Drupal.
Such measures ensure Drupal's representation and help create a collaborative bridge between diverse technologies and communities. This cross-pollination of ideas enhances the value of open-source events and reflects the cooperative ethos that defines Drupal.
In a world where competition often overshadows collaboration, these efforts remind us of the power of community. Such initiatives deserve appreciation as they propel Drupal forward and uphold the principles of open source: accessibility, inclusivity, and innovation.
Now, let's look at the important stories from the past week.
In an interview with Alka Elizabeth, Martin Anderson-Clutz reflects on his experiences presenting at NEDCamp. He delves into his work under the Starshot initiative, which aims to enhance Drupal's capabilities for event management. He highlights the evolution of Drupal recipes, explores the complexities of event-based tools, and shares developments slated for upcoming releases. He also discusses how collaboration within the Drupal community has shaped tools and practices, offering insights into the future of event management and site-building with Drupal.
For acolono, Open Source is not just a technical solution but a philosophy. It represents collaboration, innovation, and shared responsibility: values that align closely with the ethos of Kiss the Globe. By leveraging Drupal, acolono develops digital tools that are both innovative and environmentally conscious, reinforcing the agency's belief that technology can be a driver of sustainability. Nico Grienauer writes about acolono's partnership with the Kiss the Globe event and its giant push for Drupal.
Bernardo Martinez writes about how DrupalCamp Atlanta regained momentum in 2024 after one of its lead organizers stepped down last year. Earlier in the week, we published his report on this year's Open-Source/DrupalCamp Chattanooga.
This week, November 25 to December 1, 2024, features several Drupal events, including the MidCamp 2025 Planning Meeting, the Event Platform Initiative Discussion, and the Dutch Splash Awards. Have a look at the Drop Times' article on events this week.
Pamela Barone has provided updates on the progress of Drupal CMS version 1 and outlined plans for new work tracks to advance the platform further. With version 1 nearing completion, successful tracks like Project Browser and Workspaces as Content Moderation continue to drive improvements.
DrupalSouth has officially announced its committee for 2024, bringing together a group of dedicated professionals to lead the organization's activities and initiatives. The newly appointed committee includes individuals with diverse experience, ready to advance the Drupal community in the region.
DrupalCon Singapore 2024 kicks off within a month. The grand event, which brings together Drupal enthusiasts from Asia and beyond, promises to be a goldmine of networking opportunities. Have a look at the social events planned for the conference.
DrupalNYC invites Drupal enthusiasts to participate in its upcoming "Contribution Day," scheduled for Friday, December 13, 2024, from 9:00 AM to 4:00 PM. The event encourages both new and seasoned contributors to collaborate on advancing the Drupal ecosystem.
DrupalCamp Poland 2025, the country's largest Drupal-focused conference, will occur on June 7 in Warsaw. Organizers have announced that the call for session proposals is now open, inviting speakers to submit their topics for consideration.
A new podcast series dedicated to LibreOffice topics has launched. The series, titled LibreOffice Podcast, is being published on PeerTube; the first episode, released on 20 November, discussed "Marketing Free Software".
James Abrahams, Director at FreelyGive Ltd, recently shared an update on the integration of AI agents into Drupal, sparking a wide-ranging discussion among experts on LinkedIn. His insights focused on the critical role of evaluations in AI systems, as well as the challenges of creating intuitive tools for non-developers within the Drupal ecosystem.
With that, let's wind up this week's newsletter.
Thomas Alias K
Sub-editor, The Drop Times
Golems GABB: Twig & PHP Templating in Drupal 11
Welcome! This is your complete guide to Twig and PHP templating in Drupal 11. As you may know, Twig and PHP are central to frontend and backend development in Drupal; if you know how they work, you can create beautiful, effective, and easy-to-maintain websites.
Today, the Golems Drupal company explores Twig's streamlined template engine and PHP's robust backend logic. Our blog will be helpful whether you are a skilled Drupal developer, a website or business owner, or just starting your path in this field. We will dive into the details of Twig and PHP in Drupal to help you better understand how they work together, so your digital experiences can be crafted more effectively.
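As a taste of the pattern the guide covers, here is a minimal, hypothetical sketch of Twig-style templating. Twig's {{ ... }} and {% ... %} syntax is closely mirrored by Python's Jinja2 (the engine Twig's syntax was modeled on), used here purely for illustration; in Drupal 11 the template itself would live in a *.html.twig file and be rendered by PHP:

```python
# Illustration only: Jinja2 mirrors Twig's {{ ... }} / {% ... %} syntax.
# In Drupal 11 this markup would live in a *.html.twig theme file.
from jinja2 import Template  # pip install jinja2

template = Template(
    "<h1>{{ title }}</h1>\n"
    "<ul>\n"
    "{% for item in items %}"
    "  <li>{{ item }}</li>\n"
    "{% endfor %}"
    "</ul>"
)
print(template.render(title="Hello, Drupal", items=["Twig", "PHP", "Drupal 11"]))
```

The separation shown here is the core idea: the backend (PHP in Drupal's case) prepares the variables, while the template only decides how they are displayed.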