Feeds
PyCoder’s Weekly: Issue #610 (Jan. 2, 2024)
In this tutorial, you’ll explore the process of creating a boilerplate for a Flask web project. It’s a great starting point for any scalable Flask web app that you wish to develop in the future, from basic web pages to complex web applications.
REAL PYTHON
Slides related to the upcoming JIT commit for Python 3.13. Note, GitHub paginates it if you don’t download it, so click the “More Pages” button to keep reading.
GITHUB.COM/BRANDTBUCHER
Although 2023 was full of AI news in computer science, it wasn’t the only news. This article summarizes the breakthroughs in 2023.
BILL ANDREWS
Cosine similarity is a check to see if two vectors point in the same direction, regardless of magnitude. This test is frequently used in some machine learning algorithms. This article details the various steps in speeding up the code, starting with vanilla Python and going all the way down to hand tuned assembly language.
ASH VARDANIAN
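For context, the vanilla-Python baseline the article starts from looks roughly like this (an illustrative sketch, not the author's exact code):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same
    direction, 0.0 orthogonal, -1.0 opposite, magnitude ignored."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Each hand-tuned rewrite in the article keeps this same contract while shaving cycles off the dot product and norms.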
It’s been a fascinating year for the Python language and community. PyCoder’s Weekly included over 1,500 links to articles, blog posts, tutorials, and projects in 2023. Christopher Trudeau is back on the show this week to help wrap up everything by sharing some highlights and Python trends from across the year.
REAL PYTHON podcast
Python includes soft keywords: tokens that are important to the parser but can also be used as variable names. This article shows you what a soft keyword is and how to find them in Python 3.12 (both the easy and hard way).
RODRIGO GIRÃO SERRÃO
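The "easy way" is presumably the standard library's keyword module, which has listed soft keywords since Python 3.9. A quick sketch:

```python
import keyword

# Soft keywords are only reserved in specific grammatical positions,
# so they stay usable as ordinary identifiers everywhere else.
print(keyword.softkwlist)  # on 3.12: ['_', 'case', 'match', 'type']

match = "still a perfectly valid variable name"  # no SyntaxError
```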
This is a cross-language developer survey of tools used in the industry. It includes questions about AI adoption, cloud tools, and more. 54% of respondents use Python as their most frequent language.
JETBRAINS
This article shows you how to take advantage of some of the newer async mechanisms in Django to build a messaging app. Things that used to require a third-party library are now part of the framework.
TOM DEKAN • Shared by Tom Dekan
The Boyer-Moore majority vote algorithm looks for an element that appears more than n/2 times in a sequence using O(n) time. This article shows you how it works using Python code.
GITHUB.COM/NAUGHTYCONSTRICTOR • Shared by Mohammed Younes ELFRAIHI
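A compact sketch of the algorithm (my own transcription, not the linked repository's code):

```python
def majority_element(seq):
    """Boyer-Moore majority vote: O(n) time, O(1) extra space.
    Returns the element appearing more than len(seq)/2 times,
    or None if no such majority exists."""
    candidate, count = None, 0
    for item in seq:
        if count == 0:
            candidate = item
        count += 1 if item == candidate else -1
    # A second pass verifies the candidate really is a majority.
    if seq and sum(1 for item in seq if item == candidate) > len(seq) // 2:
        return candidate
    return None
```

The verification pass matters: without it, the survivor of the voting loop is only a majority *candidate*.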
Many developers dread the idea of becoming a manager, but there are some things you can only learn by doing. This article outlines why management might be the right thing for you.
CHARITY MAJORS
Redowan has strong opinions on reserving dataclasses for data-class purposes only: their methods should have no data modification side-effects. This article outlines why.
REDOWAN DELOWAR
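In that spirit, a minimal sketch: a frozen dataclass whose methods return new instances instead of mutating state (Point is an illustrative example, not from the article):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def translated(self, dx: float, dy: float) -> "Point":
        # No side effects: build and return a new Point.
        return Point(self.x + dx, self.y + dy)
```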
Django 5 was recently released and this in-depth article covers what changed, how to upgrade from an earlier version, and how the Django version numbering system works.
ERIC MATTHES
“Figuring out how much parallelism your program can use is surprisingly tricky.” This article shows you why it is complicated and what you can determine.
ITAMAR TURNER-TRAURING
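One of the wrinkles: os.cpu_count() reports the machine's CPUs, not the ones your process is actually allowed to use under affinity masks or container quotas. A sketch of a more honest count (my illustration of the general point, not the article's code):

```python
import os

def usable_cpu_count() -> int:
    """CPUs this process may actually run on; under CPU affinity or
    container quotas this can be lower than the machine-wide count."""
    try:
        return len(os.sched_getaffinity(0))  # Linux and some Unixes
    except AttributeError:
        return os.cpu_count() or 1
```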
A quick tip on how to set an environment variable so that pip refuses to install a package unless in an active virtual environment.
DANIEL ROY GREENFIELD
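The variable in question is pip's PIP_REQUIRE_VIRTUALENV; the matching "am I in a virtual environment?" check can also be done from Python itself (a sketch):

```python
import sys

def in_virtualenv() -> bool:
    """Inside a venv, sys.prefix points at the environment while
    sys.base_prefix still points at the base interpreter."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)
```

With the environment variable set (export PIP_REQUIRE_VIRTUALENV=true), pip refuses to install when this check would be False.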
This post talks about how to store configuration for your script and how and when to load the information into your program.
ROBERT RODE
Knowing when to raise the right exception is important, but often you don’t have to: Python might do it for you.
JAMES BENNETT
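A tiny illustration of the idea (my example, not the article's): indexing a dict already raises a precise KeyError, so a manual guard adds nothing:

```python
def get_port(config: dict) -> int:
    # No need for "if 'port' not in config: raise KeyError('port')";
    # the subscript below raises KeyError('port') for us.
    return config["port"]

caught = False
try:
    get_port({})
except KeyError:
    caught = True
```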
January 3, 2024
REALPYTHON.COM
January 4, 2024
MEETUP.COM
January 4, 2024
SYPY.ORG
January 9, 2024
PITERPY.COM
January 9, 2024
MEETUP.COM
January 10 to January 11, 2024
NOKIDBEHIND.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #610.
Matthew Garrett: Dealing with weird ELF libraries
But for this to work, two things are necessary: when we build a binary, there has to be a way to reference the relevant library functions in the binary; and when we run a binary, the library code needs to be mapped into the process.
(I'm going to somewhat simplify the explanations from here on - things like symbol versioning make this a bit more complicated but aren't strictly relevant to what I was working on here)
For the first of these, the goal is to replace a call to a function (eg, printf()) with a reference to the actual implementation. This is the job of the linker rather than the compiler (eg, if you use the -c argument to tell gcc to simply compile to an object rather than linking an executable, it's not going to care about whether or not every function called in your code actually exists or not - that'll be figured out when you link all the objects together), and the linker needs to know which symbols (which aren't just functions - libraries can export variables or structures and so on) are available in which libraries. You give the linker a list of libraries, it extracts the symbols available, and resolves the references in your code with references to the library.
But how is that information extracted? Each ELF object has a fixed-size header that contains references to various things, including a reference to a list of "section headers". Each section has a name and a type, but the ones we're interested in are .dynstr and .dynsym. .dynstr contains a list of strings, representing the name of each exported symbol. .dynsym is where things get more interesting - it's a list of structs that contain information about each symbol. This includes a bunch of fairly complicated stuff that you need to care about if you're actually writing a linker, but the relevant entries for this discussion are an index into .dynstr (which means the .dynsym entry isn't sufficient to know the name of a symbol, you need to extract that from .dynstr), along with the location of that symbol within the library. The linker can parse this information and obtain a list of symbol names and addresses, and can now replace the call to printf() with a reference to libc instead.
(Note that it's not possible to simply encode this as "Call this address in this library" - if the library is rebuilt or is a different version, the function could move to a different location)
Experimentally, .dynstr and .dynsym appear to be sufficient for linking a dynamic library at build time - there are other sections related to dynamic linking, but you can link against a library that's missing them. Runtime is where things get more complicated.
When you run a binary that makes use of dynamic libraries, the code from those libraries needs to be mapped into the resulting process. This is the job of the runtime dynamic linker, or RTLD[2]. The RTLD needs to open every library the process requires, map the relevant code into the process's address space, and then rewrite the references in the binary into calls to the library code. This requires more information than is present in .dynstr and .dynsym - at the very least, it needs to know the list of required libraries.
There's a separate section called .dynamic that contains another list of structures, and it's the data here that's used for this purpose. For example, .dynamic contains a bunch of entries of type DT_NEEDED - this is the list of libraries that an executable requires. There's also a bunch of other stuff that's required to actually make all of this work, but the only thing I'm going to touch on is DT_HASH. Doing all this re-linking at runtime involves resolving the locations of a large number of symbols, and if the only way you can do that is by reading a list from .dynsym and then looking up every name in .dynstr that's going to take some time. The DT_HASH entry points to a hash table - the RTLD hashes the symbol name it's trying to resolve, looks it up in that hash table, and gets the symbol entry directly (it still needs to resolve that against .dynstr to make sure it hasn't hit a hash collision - if it has it needs to look up the next hash entry, but this is still generally faster than walking the entire .dynsym list to find the relevant symbol). There's also DT_GNU_HASH which fulfills the same purpose as DT_HASH but uses a more complicated algorithm that performs even better. .dynamic also contains entries pointing at .dynstr and .dynsym, which seems redundant but will become relevant shortly.
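The classic System V hash behind DT_HASH is small enough to sketch in Python (transcribed from the well-known C reference implementation; illustrative only):

```python
def elf_hash(name: bytes) -> int:
    """System V ELF hash, as used to index DT_HASH buckets."""
    h = 0
    for byte in name:
        h = ((h << 4) + byte) & 0xFFFFFFFF
        g = h & 0xF0000000
        if g:
            h ^= g >> 24
        h &= ~g & 0xFFFFFFFF
    return h

bucket = elf_hash(b"printf") % 17  # 17 is a hypothetical nbucket value
```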
So, .dynsym and .dynstr are required at build time, and both are required along with .dynamic at runtime. This seems simple enough, but obviously there's a twist and I'm sorry it's taken so long to get to this point.
I bought a Synology NAS for home backup purposes (my previous solution was a single external USB drive plugged into a small server, which had uncomfortable single point of failure properties). Obviously I decided to poke around at it, and I found something odd - all the libraries Synology ships were entirely lacking any ELF section headers. This meant no .dynstr, .dynsym or .dynamic sections, so how was any of this working? nm asserted that the libraries exported no symbols, and readelf agreed. If I wrote a small app that called a function in one of the libraries and built it, gcc complained that the function was undefined. But executables on the device were clearly resolving the symbols at runtime, and if I loaded them into ghidra the exported functions were visible. If I dlopen()ed them, dlsym() couldn't resolve the symbols - but if I hardcoded the offset into my code, I could call them directly.
Things finally made sense when I discovered that if I passed the --use-dynamic argument to readelf, I did get a list of exported symbols. It turns out that ELF is weirder than I realised. As well as the aforementioned section headers, ELF objects also include a set of program headers. One of the program header types is PT_DYNAMIC. This typically points to the same data that's present in the .dynamic section. Remember when I mentioned that .dynamic contained references to .dynsym and .dynstr? This means that simply pointing at .dynamic is sufficient, there's no need to have separate entries for them.
The same information can be reached from two different locations. The information in the section headers is used at build time, and the information in the program headers at run time[3]. I do not have an explanation for this. But if the information is present in two places, it seems obvious that it should be possible to reconstruct the missing section headers in my weird libraries? So that's what this does. It extracts information from the DYNAMIC entry in the program headers and creates equivalent section headers.
There's one thing that makes this more difficult than it might seem. The section header for .dynsym has to contain the number of symbols present in the section. And that information doesn't directly exist in DYNAMIC - to figure out how many symbols exist, you're expected to walk the hash tables and keep track of the largest number you've seen. Since every symbol has to be referenced in the hash table, once you've hit every entry the largest number is the number of exported symbols. This seemed annoying to implement, so instead I cheated, added code to simply pass in the number of symbols on the command line, and then just parsed the output of readelf against the original binaries to extract that information and pass it to my tool.
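Worth noting (my observation, not from the post): for old-style DT_HASH tables the count is actually stored up front, since the table's second 32-bit word, nchain, equals the number of .dynsym entries; it's DT_GNU_HASH that forces the walk described above. A sketch:

```python
import struct

def dt_hash_symbol_count(dt_hash: bytes) -> int:
    """DT_HASH starts with two 32-bit words: nbucket, then nchain.
    The chain array parallels .dynsym, so nchain is the symbol count."""
    nbucket, nchain = struct.unpack_from("<II", dt_hash, 0)
    return nchain

# Hypothetical table header: 3 buckets, 7 symbols.
count = dt_hash_symbol_count(struct.pack("<II", 3, 7))
```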
Somehow, this worked. I now have a bunch of library files that I can link into my own binaries to make it easier to figure out how various things on the Synology work. Now, could someone explain (a) why this information is present in two locations, and (b) why the build-time linker and run-time linker disagree on the canonical source of truth?
[1] "Shared object" is the source of the .so filename extension used in various Unix-style operating systems
[2] You'll note that "RTLD" is not an acronym for "runtime dynamic linker", because reasons
[3] For environments using the GNU RTLD, at least - I have no idea whether this is the case in all ELF environments
Luca Saiu: Languages and complexity, Part I: why I love Anki
Hynek Schlawack: How to Ditch Codecov for Python Projects
Codecov’s unreliability breaking CI on my open source projects has been a constant source of frustration for me for years. I have found a way to enforce coverage over a whole GitHub Actions build matrix that doesn’t rely on third-party services.
Chapter Three: 15 Tips for Writing Better Web Copy
Thomas Koch: Good things come ... state folder
Just a little while ago (10 years), I proposed the addition of a state folder to the XDG basedir specification and expanded the article XDGBaseDirectorySpecification in the Debian wiki. Recently I learned that version 0.8 (from May 2021) of the spec finally includes a state folder.
Granted, I wasn’t the first to have this idea (2009), nor the one who actually made it happen.
Now, please go ahead and use it! Thank you.
Thomas Koch: Know your tools - simple backup with rsync
I’ve been using rsync for years and still did not know its full powers. I just wanted a quick and dirty simple backup but realised that rsnapshot is not in Debian anymore.
However, you can do much of what rsnapshot does with rsync alone nowadays.
The --link-dest option (manpage) solves the part of creating hardlinks to a previous backup (found here). So my backup program becomes this shell script in ~/backups/backup.sh:
#!/bin/sh
SERVER="${1}"
BACKUP="${HOME}/backups/${SERVER}"
SNAPSHOTS="${BACKUP}/snapshots"
FOLDER=$(date --utc +%F_%H-%M-%S)
DEST="${SNAPSHOTS}/${FOLDER}"
LAST=$(ls -d1 ${SNAPSHOTS}/????-??-??_??-??-??|tail -n 1)

rsync \
    --rsh="ssh -i ${BACKUP}/sshkey -o ControlPath=none -o ForwardAgent=no" \
    -rlpt \
    --delete --link-dest="${LAST}" \
    ${SERVER}::backup "${DEST}"

The script connects to rsync in daemon mode as outlined in section “USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION” in the rsync manpage. This allows referencing a “module” as the source that is defined on the server side as follows:
[backup]
    path = /
    read only = true
    exclude from = /srv/rsyncbackup/excludelist
    uid = root
    gid = root

The important bit is the read only setting, which protects the server against somebody with access to the ssh key overwriting files on the server via rsync and thus gaining full root access.
Finally the command prefix in ~/.ssh/authorized_keys runs rsync as daemon with sudo and the specified config file:
command="sudo rsync --config=/srv/rsyncbackup/config --server --daemon ."

The sudo setup is left as an exercise for the reader as mine is rather opinionated.
Unfortunately I have not managed to configure systemd timers in the way I wanted and therefore opened an issue: “Allow retry of timer triggered oneshot services with failed conditions or asserts”. Any help there is welcome!
Thomas Koch: Missing memegen
Back at $COMPANY we had an internal meme-site. I had some reputation in my team for creating good memes. When I watched Episode 3 of Season 2 of Yes, Prime Minister yesterday, I really missed a place to post memes.
This is the full scene. Please watch it or even the full episode before scrolling down to the GIFs. I had a good laugh for some time.
With Debian, I could just download the episode from somewhere on the net with youtube-dl and easily create two GIFs using ffmpeg, with and without subtitle:
ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 tmp/tragic.gif

ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 \
    -vf "subtitles=tmp/sub.srt:force_style='Fontsize=60'" tmp/tragic_with_subtitle.gif

And this sub.srt file:
1
00:00:10,000 --> 00:00:12,000
Tragic.

I believe one needs to install the libavfilter-extra variant to burn the subtitle into the GIF.
Some
space
to
hide
the
GIFs.
The Prime Minister just learned that his predecessor, who was about to publish embarrassing memoirs, died of a sudden heart attack:
I can’t actually think of a meme with this GIF, that the internal thought police community moderation would not immediately take down.
For a moment I thought that it would be fun to have a Meme-Site for Debian members. But it is probably not the right time for this.
Maybe somebody likes the above GIFs though and wants to use them somewhere.
Thomas Koch: lsp-java coming to debian
The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement many different plugins for all different programming languages. With LSP an editor just needs to talk LSP and can immediately provide typical IDE features.
I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.
I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with piping curl into sh to set up their programming environment.
I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.
Unfortunately the lsp-java mode does not yet work over tramp. Once this is solved, I could run emacs on my host and only isolate the code and language server inside the VM.
The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).
In detail, I uploaded these packages:
Thomas Koch: Waiting for a STATE folder in the XDG basedir spec
The XDG Base Directory specification proposes default homedir folders for the categories DATA (~/.local/share), CONFIG (~/.config) and CACHE (~/.cache). One category, however, is missing: STATE. This category has been requested several times but nothing happened.
Examples for state data are:
- history files of shells, repls, anything that uses libreadline
- logfiles
- state of application windows on exit
- recently opened files
- last time application was run
- emacs: bookmarks, ido last directories, backups, auto-save files, auto-save-list
The missing STATE category is especially annoying if you’re managing your dotfiles with a VCS (e.g. via VCSH) and you care to keep your homedir tidy.
If you’re as annoyed as me about the missing STATE category, please voice your opinion on the XDG mailing list.
Of course it’s a very long way until applications really use such a STATE directory. But without a common standard it will never happen.
Cleaning up KDE's metadata - the little things matter too
Lots of my KDE contributions revolve around plugin code and their metadata, meaning I have a good overview of where and how metadata is used. In this post, I will highlight some recent changes and show you how to utilize them in your Plasma Applets and KRunner plugins!
Applet and Containment metadata

Applets (or Widgets) are one of Plasma's main selling points regarding customizability. Next to user-visible information like the name, description and categories, there is a need for some technical metadata properties. This includes X-Plasma-API-Minimum-Version for the compatible versions, the ID and the package structure, which should always be “Plasma/Applet”.
For integrating with the system tray, applets had to specify the X-Plasma-NotificationArea and X-Plasma-NotificationAreaCategory properties. The first one says that the applet may be shown in the system tray, and the second one says which category it belongs in. But since we don't want any applets without categories in there, the first value is redundant and may be omitted! Also, it was treated like a boolean value, but only the strings "true" or "false" were expected. I stumbled upon this when correcting the types in metadata files.
This was noticed due to preparations for improving the JSON linting we have in KDE. I will resume working on it and might also blog about it :).
What most applets in KDE specify is X-KDE-MainScript, which determines the entrypoint of the applet. This is usually “ui/main.qml”, but in some cases the value differs. When working with applets, it is confusing to first have to look at the metadata in order to find the correct file. This key was removed in Plasma6 and the file is always ui/main.qml. Since this was the default value for the property, you may even omit it for Plasma5.
The same filename convention is also enforced for QML config modules (KCMs).
What all applets needed to specify was X-Plasma-API. This is typically set to "declarativeappletscript", but its value was de facto never used; we only enforced that it was present. This was historically needed because in Plasma4, applets could be written in other scripting languages. From Plasma6 onward, you may omit this key.
In the Plasma repositories, this allowed me to clean up over 200 lines of JSON data.
metadata.desktop files of Plasma5 addons

Just in case you were wondering: we have migrated from metadata.desktop files to only metadata.json files in Plasma6.
This makes providing metadata more consistent and efficient. In case your projects still use the old format, you can run desktoptojson -i pathto/metadata.desktop and remove the file afterward.
See https://develop.kde.org/docs/plasma/widget/properties/#kpackagestructure for more detailed information. You can even do the conversion when targeting Plasma5 users!
Another nice addition is that “/runner” will now be the default object path, meaning you can omit this one key. Check out the template to get started: https://invent.kde.org/frameworks/krunner/-/tree/master/templates/runner6python. DBus-Runners that worked without deprecations in Plasma5 will continue to do so in Plasma6! For C++ plugins, I will write a porting-guide-like blog post soonish, because I have started porting my own plugins to work with Plasma6.
Finally, I want to wish you all a happy and productive new year!
Real Python: HTTP Requests With Python's urllib.request
If you need to perform HTTP requests using Python, then the widely used Requests library is often the way to go. However, if you prefer to use only standard-library Python and minimize dependencies, then you can turn to urllib.request instead.
In this video course, you’ll:
- Learn the essentials of making basic HTTP requests with urllib.request
- Explore the inner workings of an HTTP message and how urllib.request represents it
- Grasp the concept of handling character encodings in HTTP messages
- Understand common hiccups when using urllib.request and learn how to resolve them
If you’re already familiar with HTTP requests such as GET and POST, then you’re well prepared for this video course. Additionally, you should have prior experience using Python to read and write files, ideally using a context manager.
In the end, you’ll discover that making HTTP requests doesn’t have to be a frustrating experience, despite its reputation. Many of the challenges people face in this process stem from the inherent complexity of the Internet. The good news is that the urllib.request module can help demystify much of this complexity.
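As a taste of what the course covers, a minimal GET with charset handling might look like this (a sketch; the URL is a placeholder):

```python
from urllib.request import urlopen

def fetch_text(url: str) -> str:
    """Fetch a URL and decode the body using the charset the server
    declares, falling back to UTF-8 when none is declared."""
    with urlopen(url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset)

# text = fetch_text("https://example.com")
```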
Palantir: Planning Your Drupal 7 Migration: Organizational Groundwork
How to ensure a smooth Drupal 7 migration with content, design, and technology audits
In this article, we focus on tackling the organizational challenges that come with the technical realities of the migration away from Drupal 7 to newer, up-to-date versions of the content management system. We outline a strategic blueprint for a successful Drupal 7 migration, centered around three critical audits:
- Content audit: Evaluating which content should be carried over to the new platform.
- Design audit: Seizing opportunities to enhance the site’s design during the rebuild.
- Technical audit: Refining the site architecture and meticulously planning the technical aspects of the migration.
This is the perfect catalyst and opportunity for your organization not just to transition your digital presence, but to reassess and reconstruct it to meet your future goals and challenges.
As the clock ticks towards January 2025, the End of Life (EOL) for Drupal 7 looms on the horizon. Moving away from Drupal 7 marks a critical juncture. Since Drupal 8 and above are significantly different in architecture from Drupal 7, the move requires a comprehensive migration. The good news is that the upgrade paths from Drupal 8 and above are smoother, so the step up from Drupal 7 represents a unique, one-time effort, which will pay itself off in longer site life cycles and higher ROI on that investment.
The core principle guiding this migration strategy is the alignment of technical implementation and design with the initial content audit. This approach ensures that the rebuild is driven by content needs, rather than forcing content to conform to a predetermined site structure.
Some key organizational challenges when navigating this technical shift away from Drupal 7 revolve around governance: understanding the existing content on your website, assigning responsibility for it, and ensuring its relevance and quality throughout the migration process.
This technical inflection point can and should also spark a broader debate about the extent of redesign and transformation needed during the migration. IT and marketing teams should be discussing, in a nutshell, “What do we have? What can be better? What do we need to make that happen?”
Palantir.net is a Drupal Association–Certified Migration Partner. We have the technical expertise, experience, and strategic insight required to help steer you through this vital transition. Whether through our Continuous Delivery Portfolio (CDP) or web modernization services, Palantir is ready to assist you in navigating the complexities of the D7 EOL upgrade.
Content audit: Decluttering the house

At the heart of any successful migration lies a well-executed content audit, a responsibility primarily shouldered by the Marketing team. This vital process streamlines the migration by identifying what content truly needs to be transferred to the new platform and what can be retired.
The essence of a content audit

The key questions to address during the audit are: What content do we need? What can we do without? These decisions should be data-driven, relying on analytics to assess the relevance and usage of the content. Key metrics include page views, content validity, and user engagement.
If you don’t like having a cluttered house, you don’t just decide to build a slightly bigger house, then move all your clutter into it. It would be a better idea to take a look at what’s actually cluttering your house first, then decide what you need to build. In the same way, letting content drive your technical decisions is the better approach.
The complexity of a given migration is often more dependent on the variety of different content types than the volume of content. Once a system for migrating a specific type of content, like blog posts, is developed, it can run autonomously while the technical team focuses on other content types. For this reason, a site with numerous content types requires a more intricate migration plan compared to one with fewer types. A content audit can help reduce the number of content types and with it the resulting effort needed.
Tips for conducting a successful content audit

The following considerations can help make your content audit a smooth and effective process:
- Develop a comprehensive content inventory: Start by cataloging every piece of content on your website. This step is crucial as it allows you to see the full scope of your existing content and understand what requires improvement, discarding, or migration. Document key details of each content piece, such as URLs, titles, purpose, format, number of internal links, and other relevant information.
- Make data-driven decisions: Use tools like Google Analytics and Google Search Console to review content performance, examining metrics like traffic, engagement, bounce rates, time on page, and conversion rates. This quantitative analysis helps inform your content strategy and guides decisions on what content needs updating or removal.
- Complement data with qualitative decisions: Compare your content against benchmarks that align with your goals and audience needs. Assess the content for user experience, compliance with your style guide, and potential improvements. Decide on actions for each content piece, such as keeping, updating, consolidating, or removing, based on their relevance, quality, and performance.
- Involve a content strategist: An expert content strategist can help with all the above tasks and aid you in preparing a migration framework. They will help align your content with your marketing and branding goals, as well as UX design and information architecture. If you don’t have an internal content strategist, Palantir can provide one if we help you with your migration.
Conducting a content audit does more than just streamline the migration process. It can also unveil opportunities for a redesign of your site’s information and content architecture, aligned with a new content strategy. This process is not just about elimination, but also about discovery — uncovering what content is most valued by users. Not only are you finding out what you don’t need, but you’re hopefully finding out what's really important as well.
Given that moving from Drupal 7 to Drupal 10 essentially entails a complete site rebuild, there lies a golden opportunity to design the site around the content. This approach ensures that the site architecture complements and enhances the content, rather than forcing content to fit into a pre-existing structure.
The insights won here feed into the second crucial stage of a Drupal 7 migration: the design audit.
Design audit: An opportunity for enhancing UX

A design audit is where you and your Marketing team evaluate the current design’s effectiveness and explore the potential for a redesign. It goes hand-in-hand with a content audit.
Design audit objectives

- Evaluate current design effectiveness: Before deciding on a redesign, critically assess how well your current design serves your content and users. Does it facilitate easy navigation? Is it aesthetically pleasing and functionally efficient?
- Consider compatibility with Drupal 10: Drupal 10 brings new features and capabilities. The design audit of a Drupal 7 website usually reveals a rigid, template-based layout system, limiting content display versatility. By migrating to Drupal 10 and utilizing its advanced Layout Builder, the redesign can offer dynamic, user-friendly layouts, enhancing user engagement and providing flexibility for content strategy adaptations.
Note that, while migrating away from Drupal 7, you essentially rebuild your site. Even if you choose to retain the existing design, adapting the look and feel to the newer Drupal version will require some level of reworking as well. If your existing design seems incompatible or would require extensive modifications, it might be more efficient to opt for a new design.
- Align design with content strategy: The design should complement and enhance the content, not overshadow it. A design audit should involve close coordination with content strategists to ensure that the design facilitates the content’s purpose and enhances user engagement.
- Explore modern design trends: Technology and design trends evolve rapidly. Use this migration as an opportunity to refresh your website’s look and feel to stay relevant and appealing to your audience.
- Accessibility enhancement: Focus on improving the overall user experience for everyone. This includes optimizing the site for various devices and improving accessibility, for instance, compliance with A11Y guidelines.
Palantir not only offers technical expertise in migration processes but also provides skilled designers who can collaborate seamlessly with your team. Our designers are adept at working alongside content strategists, helping you end up with a cohesive system that supports and enhances your content strategy, so that every aspect of your site’s design is driven by and aligned with your overall content goals.
Technical audit: Engineering a future-ready framework
Next up, your internal IT team should perform a comprehensive technical audit. If necessary, this stage can overlap with the content audit. However, we recommend that your migration be primarily driven by the insights gained from your content audit.
The ultimate goal of the technical audit is to prepare for the new Drupal environment: understanding how the identified technical elements will function in the new system and planning for any necessary adaptations or development work.
Data architecture audit
The technical audit begins with a detailed analysis of how data is structured in the current Drupal 7 site. This involves examining the entity relationships and the architecture of data storage. Understanding how different pieces of content are interlinked and how they reference each other is essential. This step not only overlaps with the content audit but also sets the stage for a smooth technical transition to Drupal 10.
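To make this concrete: in Drupal 7, field-to-bundle attachments live in the `field_config_instance` table, so one rough first pass is to export that table and group fields by content type. The snippet below is a hypothetical sketch over an inlined sample export — not a tool from the actual migration process — but it illustrates the kind of inventory this step produces:

```python
import json

# Hypothetical export of a few rows from Drupal 7's field_config_instance
# table: one row per field attached to a bundle (content type).
rows = json.loads("""[
  {"entity_type": "node", "bundle": "article", "field_name": "field_image"},
  {"entity_type": "node", "bundle": "article", "field_name": "field_tags"},
  {"entity_type": "node", "bundle": "page", "field_name": "field_image"}
]""")

# Group fields by bundle: fields shared across bundles are candidates
# for consolidation when rebuilding the data model in Drupal 10.
by_bundle = {}
for row in rows:
    key = (row["entity_type"], row["bundle"])
    by_bundle.setdefault(key, []).append(row["field_name"])

for (entity, bundle), fields in sorted(by_bundle.items()):
    print(f"{entity}/{bundle}: {', '.join(sorted(fields))}")
```

On a real site you would export the full table (e.g. via a database query) rather than inlining sample rows, but the grouping logic is the same.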
Custom functionality and integration evaluation
A critical aspect of the technical audit is assessing any custom functionality or third-party integrations present in the current system. This includes custom modules, single sign-on mechanisms, and other unique features. Each custom element that you migrate is something you have to maintain, potentially throughout the lifetime of the site. The decision to migrate these elements should be based on their current value and necessity. During the audit, aim to determine which functionalities are essential and how they can be adapted or redeveloped for Drupal 10.
Driving collaborative decision-making
Collaboration between the IT/technical, marketing, content strategy, and design teams is vital in deciding what to keep (and migrate) and what to discard, whether site content, architecture, code, or functionality. The technical audit, by outlining the functionalities and integrations of the current site, guides the planning and decision-making that follows the insights you gain from the content and design audits.
Conclusion: Charting the course of a Drupal migration
As we’ve seen, the journey away from Drupal 7 involves three main audits:
- the content audit, which acts as a decluttering exercise;
- the design audit, seizing opportunities to enhance user experience;
- and the technical audit, engineering a future-ready framework.
The content audit is the central pillar, and content strategy should drive the technical implementation and design decisions. This approach ensures a migration process where content seamlessly integrates into an efficient, updated site structure, rather than being confined by it.
Palantir is here to help and guide you through a successful migration from Drupal 7 to the future of your digital presence. We are a Drupal Association–certified migration partner with years of experience in intricate migration processes. Our expertise in content strategy, design innovation, and technical proficiency makes us an ideal full-service partner for navigating the complexities of a D7 end-of-life upgrade.
If you’re considering this critical step in your digital strategy, we invite you to explore how Palantir’s Continuous Delivery Portfolio (CDP) and web modernization services can transform your digital presence.
LN Webworks: Why Media Business Should Choose Drupal As Their First Priority CMS: 5 Big Reasons
Media has changed a lot with new technology. Organizations need to connect with their audience using modern methods. Big media companies use digital tech to reach people on different platforms and make more money.
Top media networks have many websites and social media pages to reach more people. They create websites for different groups. In the age of Web 3.0, having a strong online presence is crucial. Drupal helps with that, improving the customer experience and increasing conversion by 25% for one media client we worked with.
Why is Drupal Preferred Over Other CMS?
Media relies on content, and that content needs to bring in revenue in a scalable way. To achieve this while keeping things easy to manage, you need the right Content Management System (CMS). Considering the constant flow of new content, choosing the right CMS is crucial.
Django Weblog: Django bugfix releases issued: 4.2.9 and 5.0.1
Today we've issued the 5.0.1 and 4.2.9 bugfix releases.
The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.
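For projects installed from PyPI, picking up a bugfix release usually just means pinning one of the patched versions above. As a minimal illustration, a deployment check could gate on the announced versions — the `is_patched` helper below is our hypothetical sketch, not part of Django:

```python
# Hypothetical helper: check a dotted Django version string against the
# patched releases announced above (4.2.9 and 5.0.1). Expects at least
# three dot-separated numeric components, e.g. "4.2.9".
def is_patched(version: str) -> bool:
    major, minor, patch = (int(part) for part in version.split(".")[:3])
    if (major, minor) == (4, 2):
        return patch >= 9
    if (major, minor) == (5, 0):
        return patch >= 1
    # Series newer than 5.0 already include these fixes; older
    # series are out of scope for this check.
    return (major, minor) > (5, 0)

# In a live project you would feed it the running version, e.g.:
#   import django
#   assert is_patched(django.get_version())
```

In practice, most projects would simply run `python -m pip install --upgrade "Django==4.2.9"` (or `"Django==5.0.1"`) and rely on their dependency pins rather than a runtime check.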