Feeds
Russell Coker: Links February 2023
Vox has an insightful interview with the author of “Slouching Towards Utopia: An Economic History of the Twentieth Century” [1]. The main claim of that book is that “The 140 years from 1870 to 2010 of the long twentieth century were, I strongly believe, the most consequential years of all humanity’s centuries”. A claim that seems well supported.
PostMarketOS is an interesting OS for hardware designed for Android [2]. It is based on Alpine Linux, and is small and modular. If you want to change something, just change that package, not the entire image. Another aim is to have as much commonality between devices as possible: all phones with the same CPU family can run the same packages, apart from the kernel and maybe some hardware-related utilities. Abhijith PA blogged about getting started with pmOS; it seems easy to do [3].
Interesting article about gay samurai [4]. Regarding sex with men or women “an elderly arbiter, after hearing the impassioned arguments of the two sides, counsels that the wisest course is to follow both paths in moderation, thereby helping to prevent overindulgence in either”. Wow.
The SCP project is an interesting collaborative SciFi/horror fiction project [5] based on an organisation that aims to Secure and Contain dangerous objects and beings and Protect the world from them. The series of stories about the Anti-Memetics Division [6] is a good place to start reading.
- [1] https://tinyurl.com/2y6ynv4k
- [2] https://postmarketos.org/
- [3] https://abhijithpa.me/2022/Running-postmarketos-on-my-phone/
- [4] https://www.tofugu.com/japan/gay-samurai/
- [5] https://scp-wiki.wikidot.com/
- [6] https://qntm.org/scp
Lemberg Solutions: Drupal Commerce + SAP Integration: Solutions and Benefits
Python Bytes: #325 It's called a merge conflict
Specbee: Migrate to Drupal 9 (or 10) Without Losing Your Hard-Earned SEO Ranking
Website migrations are never an easy decision and we get it. The fear of their SEO rankings being negatively affected often holds site owners back from migrating their CMS or upgrading from an older version. After all, it has been a long and hard process to get your website to the top of Google's search result pages, and you don't want all that effort to go to waste.
However, this common concern can be addressed and mitigated before and during the migration process. With meticulous planning and a systematic migration approach, a website migration will not affect your SEO. Instead, with a CMS like Drupal that offers SEO and performance optimization techniques, your SEO ranking should see an upward trend.
In this article, we’ll discuss why a website migration to Drupal 9 does not have to mean sacrificing your SEO ranking. We'll go through some of the best practices and tips as well as what you need to do if you see a drop in ranking after the migration.
Why Migrate to Drupal 9 (or 10)
Most of our clients migrate/upgrade their CMS to Drupal 9 for one big reason: to fuel their business growth! Drupal 9 offers the high-performance tools and features needed to take your business to the next level.
Let’s get started with understanding why migrating your CMS to Drupal is important, especially in terms of SEO:
- Upgrading your CMS to the latest version of Drupal will bring more features, stability, and security to your website while also increasing the performance of your site by using the latest technologies.
- Drupal allows for easy management of important on-page optimization elements like meta tags, URLs, meta descriptions, titles and others that are vital to enhance your ranking.
- Drupal is SEO-ready straight out of the box! A variety of built-in and contributed SEO-boosting modules can be easily integrated with a Drupal website to enhance its indexability.
- Drupal’s clean and well-structured code makes it easier for search engines to understand your website’s content.
- The highly customizable nature of Drupal enables you to tailor it to meet your SEO strategy's specific requirements.
A migration involves moving and mapping all your website's content, data, and functionality from an old version to a newer one. It's like charting a new course: the old and new sites are never going to be identical. Still, a CMS migration done right should not cause your SEO ranking to suffer.
Ideally, a website redesign or CMS migration is risk-free when no URL or structural changes are expected. But here are a few situations when you should be concerned:
- When you’re changing domain names and your new URL structure is completely different from the old one. Search engines may treat these pages as new pages, and they will lose their existing SEO juice.
- When internal links are lost during migration due to various reasons like a change in the URL structure, content reorganization or any manual migration errors.
- When the content is not migrated properly and leads to duplicate content.
- When a migration causes broken links which can lead to bad user experiences and consequently a dip in SEO ranking.
- When there are problems with the website’s crawlability and indexability because of technical errors during a migration.
We cannot emphasize enough how important an SEO audit is before a migration.
Just like you would thoroughly examine and fix your car before a big road trip to ensure a smooth and safe journey, an SEO audit can help you identify and avoid potential technical issues or SEO problems before a migration. It also allows you to plan for redirects, establish a baseline for measuring the impact on SEO performance and ensure current best practices.
What happens during an SEO Audit?
Your ideal Drupal agency should provide you with a comprehensive SEO audit checklist before planning the CMS migration. Read this article to find out how to evaluate a Drupal partner for your next project.
Here are some of the most significant elements that are analyzed during an SEO audit:
- Check if robots.txt exists and is configured properly to make the website crawlable
- Verify that sitemap.xml exists and is optimized
- Check that clean URLs are enabled for SEO-friendly URLs
- Check that appropriate meta information and tags are present for web pages
- Check if structured data is enabled for the site
- Verify that a canonical URL is set for all the pages
- Check that titles and descriptions are optimized
- Check for duplicate content
- Check for broken links
- Find out if analytics tools are present on the application for tracking
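Several items on this checklist are easy to script. Here is a minimal sketch in Python, using the third-party requests library, with example.com and the sample paths standing in for your own site:
import requests

SITE = "https://example.com"  # placeholder; use your own site

# Check that the crawlability essentials respond.
for path in ("/robots.txt", "/sitemap.xml"):
    resp = requests.get(SITE + path, timeout=10)
    print(path, "->", resp.status_code)

# Spot-check a few important URLs for broken links.
for url in (SITE + "/", SITE + "/about", SITE + "/contact"):  # example URLs
    resp = requests.get(url, timeout=10)
    if resp.status_code >= 400:
        print("BROKEN:", url, resp.status_code)
A real audit would walk the whole sitemap rather than a hand-picked list, but the same pattern applies.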
Before you begin the migration process, make sure to create a full backup of your website to ensure that you have a copy of all your website's files and data.
Do: Benchmark current keyword rankings
Benchmarking old rankings is an important step when migrating a website to a new domain or URL structure. It helps you understand how your website is currently performing in search engines and identify any potential issues that may affect your SEO efforts after the migration.
Do: Benchmark organic traffic levels
It helps you monitor any changes in organic traffic after the migration and allows you to identify any issues that may be affecting your SEO efforts.
Do: Keep the same URL structure
Try to keep the same URL structure of your website, if possible. This will help to maintain the authority of your website and avoid any broken links.
Do: On-page optimization
On-page optimization is crucial when migrating a website to ensure that your site is optimized for search engines and user experience. Here are some steps to take for on-page optimization during a website migration:
- Update content
- Optimize meta elements
- Use header tags
- Optimize images
- Improve page speed
- Implement structured data
Drupal is a popular content management system (CMS) that provides several SEO modules that can help with website migration. Here are some SEO modules you may want to consider when migrating a Drupal website:
- Pathauto
- Redirect
- Metatag
- XML sitemap
- Google Analytics
- Schema.org
It is important to test all contact forms, thank you pages, and conversion codes while migrating a website. Here are some tips to help you ensure that these elements are working correctly after the migration:
- Test all contact forms - Make sure to test all contact forms on your website to ensure that they are working correctly. This includes testing the form fields, validation messages, and submission process.
- Verify thank you pages - Check that all thank you pages are working properly and have the correct URLs. Test them to ensure that they load correctly after form submissions or other actions.
- Check conversion codes - If you have any conversion codes installed on your website, such as Google Analytics or Facebook pixel, make sure to check that they are working properly. Verify that the codes are firing correctly on the appropriate pages and that they are tracking conversions accurately.
- Update any changes - If you make any changes to your contact forms, thank you pages, or conversion codes during the migration process, make sure to update them on the new website as well. This will help ensure that everything continues to work correctly.
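These checks can also be scripted into a quick post-migration smoke test. A minimal Python sketch, where the form URL, field names, and thank-you path are hypothetical placeholders for your own site's values:
import requests

# Hypothetical form endpoint and fields; adjust to your own site.
FORM_URL = "https://example.com/contact"
payload = {"name": "Test User", "email": "test@example.com", "message": "Post-migration smoke test"}

resp = requests.post(FORM_URL, data=payload, allow_redirects=True, timeout=10)
print("status:", resp.status_code)
print("landed on:", resp.url)
# A healthy form typically returns 200 and ends up on the thank-you page.
if resp.status_code != 200 or "thank-you" not in resp.url:
    print("WARNING: form submission may be broken")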
Update your Sitemap.xml and Robots.txt files to reflect any changes in your website's URL structure. Read more about sitemaps and Drupal’s XML sitemap modules here.
Do: Monitor performance
Monitoring a website after migration is an important step to ensure that everything is functioning correctly and to identify any issues that may arise. Here are some steps you can take to monitor your Drupal 9 website’s performance after migration:
- Monitor traffic and rankings
- Check for broken links
- Monitor website speed
- Monitor server errors
- Test forms and conversions
Don't: Delete your old site immediately
We know we already mentioned this in our Do’s, but we can’t stress enough how important this step is! Even after the website migration, it is recommended not to delete your old site immediately. There are several reasons to keep your old site around for a while, like backup and recovery, content comparison, and redirects.
Don't: Move to live before testing/reviewing it completely
It's important to thoroughly test and review the new site before pushing it live to ensure that it is functioning correctly and there are no errors or issues that could harm your SEO. By taking the time to test and review the new site, you can identify and fix any potential issues before they impact your rankings and traffic. Make sure you have completed these activities before pushing it live:
- Checked all links
- Verified title tags and meta descriptions
- Tested site speed
- Verified site structure and content
- Tested contact forms
- Ensured that all content and pictures are present on the new page
- Confirmed URL structure and 301 redirects are set up correctly
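The last item, verifying 301 redirects, is especially worth automating. A minimal Python sketch, assuming you keep a mapping of old URLs to new ones (the entries below are made up):
import requests

# Old URL -> expected new URL; the entries here are made up.
redirect_map = {
    "https://example.com/old-about": "https://example.com/about-us",
    "https://example.com/old-blog/post-1": "https://example.com/blog/post-1",
}

for old_url, expected in redirect_map.items():
    # Don't follow redirects: we want the raw status code and Location header.
    resp = requests.get(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and location == expected:
        print("OK:", old_url, "->", location)
    else:
        print("BAD REDIRECT:", old_url, resp.status_code, location)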
To minimize the potential negative impact of a website migration, it is generally advisable to avoid scheduling it during peak traffic periods when the site is experiencing its highest levels of user activity. This is because any disruptions to the site's functionality or accessibility during these times could lead to a poor user experience and potentially harm your search engine rankings or revenue. Instead, consider scheduling the migration during a time when traffic levels are typically lower, such as weekends or overnight, to minimize the risk of disruption and ensure a smoother transition for your users.
What happens if there’s a drop in ranking after a migration?
Let’s get straight to the point. If you notice a drop in your SEO ranking after a migration:
- Keep calm. Take a step back and reassess the situation. Many times the drop is temporary because search engines will need to re-crawl your website.
- Check if this is happening due to an update in the algorithm
- Use Google Analytics to identify the pages that have been affected the most and are getting the least organic traffic
- Create a list of those URLs. Analyze these pages for URL structure, broken links, duplicate content, page errors, canonical URLs and other content changes.
- Use a page performance testing tool like GTMetrix and check if the performance has been affected. Follow best page speed practices (optimized images, CSS and other files) to fix this issue.
- If you have changed your hosting provider along with the migration, find out if there’s a performance issue because of the server change.
- Make sure all the pages are indexable (at least the ones you want to rank)
A successful migration process starts with an in-depth analysis of your current website’s structure, content, and code to identify any potential SEO risks. During this analysis, you should also consider factors such as which CMS version you are currently running, the cost and timeline of the migration process, and how to ensure that your SEO rankings remain intact during the transition. Keep checking your index status in the search console to make sure everything is in order once the migration is complete. Finally, it always helps to communicate regularly with your new hosting provider to ensure that all the performance issues are taken care of in a timely manner.
A CMS migration does not have to negatively impact your SEO ranking. In fact, a migration to Drupal 9 (or 10), can potentially increase your SEO rankings due to the improved speed and performance of your website. If you’re looking for a 100% Drupal-first company that specializes in Drupal migrations, then look no further than Specbee. Our certified experts have completed numerous successful migrations to Drupal 9 and can help ensure that your website remains SEO-friendly. Contact us today for a free consultation and find out how we can help you migrate with ease.
Author: Shefali Shetty
Meet Shefali Shetty, Director of Marketing at Specbee. An enthusiast for Drupal, she enjoys exploring and writing about the powerhouse. While not working or actively contributing back to the Drupal project, you can find her watching YouTube videos trying to learn to play the Ukulele :)
Utkarsh Gupta: FOSS Activities in February 2023
Here’s my (forty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.
Debian
This was my 50th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/
There’s a bunch of things I do, both, technical and non-technical. Here are the things I did this month:
Uploads
- ruby-delayed-job (4.1.9-1~bpo11+1) - Backport to bullseye.
- ruby-delayed-job-active-record (4.1.6-3~bpo11+1) - Backport to bullseye.
- ruby-globalid (0.6.0-1~bpo11+1) - Backport to bullseye.
- ruby-tzinfo (2.0.4-1~bpo11+2) - Backport to bullseye.
- rails (2:6.1.7+dfsg-3~bpo11+1) - Backport to bullseye.
- ruby-commonmarker (0.23.6-1~bpo11+1) - Backport to bullseye.
- ruby-csv (3.2.2-1~bpo11+1) - Backport to bullseye.
- ruby-task-list (2.3.2-2~bpo11+1) - Backport to bullseye.
- ruby-i18n (1.10.0-2~bpo11+1) - Backport to bullseye.
- ruby-mini-magick (4.11.0-1~bpo11+1) - Backport to bullseye.
- ruby-net-ldap (0.17.0-1~bpo11+1) - Backport to bullseye.
- ruby-roadie-rails (3.0.0-1~bpo11+1) - Backport to bullseye.
- ruby-roadie (5.1.0-1~bpo11+1) - Backport to bullseye.
- ruby-sanitize (6.0.0-1~bpo11+1) - Backport to bullseye.
- ruby-nokogiri (1.13.1+dfsg-2~bpo11+1) - Backport to bullseye.
- ruby-mini-portile2 (2.8.0-1~bpo11+2) - Backport to bullseye.
- ruby-webrick (1.7.0-3~bpo11+2) - Backport to bullseye.
- ruby-zip (2.3.0-2~bpo11+1) - Backport to bullseye.
- gem2deb (2.1~bpo11+1) - Backport to bullseye.
- ruby-actionpack-action-caching (1.2.2-1~bpo11+1) - Backport to bullseye.
- ruby-nokogiri (1.13.5+dfsg-2~bpo11+1) - Backport to bullseye.
- redmine (5.0.4-2~bpo11+1) - Backport to bullseye.
- rails (2:6.1.7+dfsg-3~bpo11+2) - Backport to bullseye.
- ruby-roadie-rails (3.0.0-1~bpo11+2) - Backport to bullseye.
- redmine (5.0.4-4~bpo11+1) - Backport to bullseye.
- redmine (5.0.4-5~bpo11+1) - Backport to bullseye.
- ruby-web-console (4.2.0-1~bpo11+1) - Backport to bullseye.
- libyang2 (2.1.30-2) - Adding DEP8 test for yangre.
- redmine (5.0.4-3) - Add patch to stop unnecessary recursive chown’ing (Fixes: #1022816, #1022817).
- redmine (5.0.4-4) - Set DH_RUBY_IGNORE_TESTS to all (Fixes: #1031308).
- python-jira (3.4.1-1) - New upstream version, v3.4.1.
- Looked up some Release team documentation.
- Sponsored php-font-lib and php-dompdf-svg-lib for William.
- Granted DM rights for php-dompdf.
- Mentoring for newcomers.
- Reviewed micro bits for Nilesh, new uploads and changes.
- Ruby sprints.
- Bug work (on BTS and #debian-ruby) for rails and redmine.
- Moderation of -project mailing list.
A huge thanks to Freexian for sponsoring my Debian work and Entrouvert for sponsoring the Redmine backports. :D
Ubuntu
This was my 25th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/
I mostly worked on different things, I guess.
I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D
Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).
This was my forty-first month as a Debian LTS and thirty-second month as a Debian ELTS paid contributor.
I worked for 24.25 hours for LTS and 28.50 hours for ELTS.
- Fixed CVE-2022-47016 for tmux and uploaded to buster via 2.8-3+deb10u1, but decided to not roll the DLA for the package as the CVE got rejected upstream.
- Issued DLA 3359-1, fixing CVE-2019-13038 and CVE-2021-3639, for libapache2-mod-auth-mellon. For Debian 10 buster, these problems have been fixed in version 0.14.2-1+deb10u1.
- Issued DLA 3360-1, fixing CVE-2021-30151 and CVE-2022-23837, for ruby-sidekiq. For Debian 10 buster, these problems have been fixed in version 5.2.3+dfsg-1+deb10u1.
- Worked on ruby-rails-html-sanitizer and added notes to the security-tracker. TL;DR: we need newer methods in ruby-loofah to make the patches for ruby-rails-html-sanitizer backportable.
- Started to look at another set of packages meanwhile.
- Issued ELA 813-1, fixing CVE-2017-12618 and CVE-2022-25147, for apr-util. For Debian 8 jessie, these problems have been fixed in version 1.5.4-1+deb8u1. For Debian 9 stretch, these problems have been fixed in version 1.5.4-3+deb9u1.
- Issued ELA 814-1, fixing CVE-2022-39286, for jupyter-core. For Debian 9 stretch, these problems have been fixed in version 4.2.1-1+deb9u1.
- Issued ELA 815-1, fixing CVE-2022-44792 and CVE-2022-44793, for net-snmp. For Debian 8 jessie, these problems have been fixed in version 5.7.2.1+dfsg-1+deb8u6. For Debian 9 stretch, these problems have been fixed in version 5.7.3+dfsg-1.7+deb9u5.
- Helped facilitate RabbitMQ’s update queries by one of our customers.
- Started to look at another set of packages meanwhile.
- Triaged ruby-loofah, ruby-sinatra, tmux, ruby-sidekiq, libapache2-mod-auth-mellon, jupyter-core, net-snmp, apr-util, and rabbitmq-server.
- Helped and assisted new contributors joining Freexian (LTS/ELTS/internally).
- Answered questions (& discussions) on IRC (#debian-lts and #debian-elts) and Matrix.
- Participated and helped fellow members with their queries via private mail and chat.
- General and other discussions on LTS private and public mailing list.
- Attended the monthly LTS meeting.
Until next time.
:wq for today.
Codementor: What's inside a programming language
Axelerant Blog: How To Shift-Left With Accessibility
Web accessibility is the practice of designing and building web solutions that everyone can use, no matter what limitations they have. This means that users with low or no vision, color blindness, trouble with motor skills, or inability to hear properly can use any accessible website or application.
Sumana Harihareswara - Cogito, Ergo Sumana: PyCon 2023: "Argument Clinic" & Mitigating COVID Risk
KDE Plasma 5.27.2, Bugfix Release for February
Tuesday, 28 February 2023. Today KDE releases a bugfix update to KDE Plasma 5, versioned 5.27.2.
Plasma 5.27 was released in February 2023 with many feature refinements and new modules to complete the desktop experience.
This release adds a week's worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:
- Discover: don't claim 3rd-party repos are part of the OS on Debian derivatives. Commit.
- Dr Konqi: add Plasma Welcome to mappings file. Commit.
- Sddm: Focus something useful when switching between alternative login screens. Commit.
Reproducible Builds (diffoscope): diffoscope 237 released
The diffoscope maintainers are pleased to announce the release of diffoscope version 237. This version includes the following changes:
* autopkgtest: only install aapt and dexdump on architectures where they are available. (Closes: #1031297)
* comparators/pdf:
  + Drop backward compatibility assignment.
  + Fix flake warnings, potentially reinstating PyPDF 1.x support (untested).
You can find out more by visiting the project homepage.
PyBites: Feel Comfortable with Git?
Folks come to me to ask for help with Git.
Sometimes they can’t guess what git subcommand they need. (Git 2.37 has 169.)
Sometimes they know what subcommand they want, but don’t know what flags to use. (git log now has 149 flags and options.)
Sometimes they issued a command, and Git didn’t do what they expected.
Maybe you’ve had one of those problems yourself. Typically, their problem isn’t Git.
Usually, what they want Git to do is something it can do, easily. They’re just asking Git for it the wrong way.
Usually these folks just have the wrong mental model of how Git works. They’ve learned a bunch of commands, drawn mental cartoons of how Git works, and then typed in a command based on that model.
They’re frustrated because they’ve built the wrong mental model.
The questions sometimes seem like a student saying,
“Dr. Haemer, Dr. Haemer! I understood everything you said, except the difference between a loop and a CPU.”
I almost never answer with, “Oh, you just need to add the flag --foobarmumble,” or “You need to use git frabitz instead of git zazzle,” or “Git just can’t do that.”
Instead, it’s, “Aha. You just need to understand how Git works … the big picture. Let’s start there.”
Q: “Wait. You’re telling me that the best way for me to solve my Git problems is to understand what it’s doing?”
A: “Yup.”
Git’s magic isn’t in pieces hidden from view; its magic is its simple, open design.
You can watch it work, under the hood, yourself. And should. I suspect making it easy to watch also made it easier for Linus Torvalds to debug.
I’ll show you.
Everything from here on out will be on the command line.
GUI interfaces to Git are just layered on top of shell-level equivalents.
Working on the command line removes an obfuscating layer.
Oh, and I’m using “Git” to mean the whole, distributed, version-control system, and git when I mean the command.
I’m also going to assume you’re using Linux or something like it:
Unix, OS/X, Penguin, BSD, …
Linus designed and wrote both Linux and Git. Though Git is now pretty portable, guessing which OS you can expect it to make the most sense on is not much of a challenge.
Watching Git at Work
Begin by making a directory to work in:
$ mkdir /tmp/scratch
$ cd /tmp/scratch
$ ls -a
That’s empty all right. Now put it under Git control.
$ git init
$ ls # no files
$ ls -a # ah! a hidden directory
The .git directory is where Git will stuff everything it knows about.
You haven’t even created any files of your own, much less committed any. What did that git init command put into .git?
The most useful tool to explore this is the tree command,
which lays out directory hierarchies for you to see.
If your operating system didn’t supply tree by default,
stop for a second to install it with your favorite package manager: apt, brew, … whatever.
$ tree .git
Now you’re cooking.
Spend a few minutes looking through everything that’s there.
It’s mostly empty directories, plus a few files that are obviously boilerplate and templates.
Nothing useful.
Git knows it’s there, though. Try these:
$ git status # before removing .git
$ rm -rf .git
$ git status # after removing it
Ask Git a question, and it looks for answers in .git .
Want to wipe out a git repo and start over? Just remove .git .
Okay, now put it back.
$ git init
$ tree .git
That was easy. Next, make an empty file.
$ touch my-empty-file
Does that do anything to .git?
$ tree .git
Doesn’t look like it. What’s Git think?
$ git status
Now it sees a file outside of .git but there’s no information about that file inside of .git . And that’s what “untracked” means.
What would change if it were tracked? It tells you to use git add to track it, so try that. Why not? You know that if something goes wrong, you can just start over with rm -rf .git
$ git add my-empty-file
$ tree .git
Oho!
There’s a new file here.
.git/objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391
And, since that’s in .git, Git sees it, too.
$ git status
Much of the work you do with Git adds and queries objects in .git/objects, and that’s where I’ll be pointing out things from now on.
Go ahead and commit it, and watch what changes.
$ git commit -m"My first commit: an empty file."
$ git status
$ tree .git/objects
The original .git/objects/e6/9de29* is still there,
but you’ve created two new objects, so now you have three.
Two of the three are Git’s version of a file and of a directory.
Linus calls objects that hold files “blobs,” and objects that hold directories, “trees.”
– e6/9de29*: a blob for the empty file
– bb/216ad*: a tree for the directory containing that empty file
To stave off a potential mental mix-up, take a short pause to think through that, carefully.
You’re juggling *two* filesystems here. One is your OS’s filesystem, which has a directory called .git/objects/, with subdirectories and files.
The second is Git’s filesystem, which stores all its pieces as objects in that first filesystem. You’re going to explore this second filesystem.
Notice, especially, that the blob object for the empty file isn’t stored under the tree object in your OS’s filesystem. That blob is “in” that tree only in Git’s view of the world, and you’re about to see how that’s done.
Calling the files and directories for Git’s filesystems “blobs” and “trees” will help you keep straight which of the two filesystems you’re talking about.
Trees
In the Unix filesystem, you look at a directory’s content with the ls command. In Linus’s Git filesystem, you can use git ls-tree .
Try that now.
$ git ls-tree bb216ad97a6d296d1feedbc3e097343ce93f8f43
Git sees that this tree has one blob (file), called `my-empty-file`, that it has permissions 100644, and that the blob is e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
Linus sticks that blob in e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391 so it doesn’t have to store every object in the same directory in the parent’s filesystem. That’s just an implementation detail.
If you’ve already guessed that the file bb/216ad* is where Git put the tree bb216ad97a6d296d1feedbc3e097343ce93f8f43, you’ve guessed correctly.
You’re already building a new, detailed, and *correct* mental model of what Git’s doing.
But what’s that third file?
Commits
To take your model to the next level, first take a peek inside those files.
$ cat .git/objects/e69de29*
$ cat .git/objects/bb216ad*
Ugh. They’re encoded in some weird way, so cat isn’t useful.
Fortunately, Linus provides git cat-file -p, which decodes and shows the contents of objects in his file system.
$ git cat-file -p e69de29
Well, that doesn’t seem to do anything, right? Oh. Wait. That blob was the empty file. There’s no contents to show. Duh.
I’ll pause to point out a piece of syntax: There’s no slash in that name.
It’s e69de29, not e6/9de29. Linus spread Git’s objects out across subdirectories, but Git still thinks of them without the slashes.
Again: those subdirectories are just an implementation detail.
Luckily, Git also lets you abbreviate names with the first few characters. You can type git cat-file -p e69de29, not
git cat-file -p e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
Since there was nothing in the blob, let’s look at the tree.
$ git cat-file -p bb216ad
Now you’re cooking! That’s the tree all right. So, what’s the third object? Might as well peek at it.
$ git cat-file -p 659e774
(Your third object will be named something different from mine, but at this point, I bet you can work out what to type to see yours.)
You’re looking at the commit itself.
Notice what’s in it: your name, your email address, and your commit message. Here’s another way to see the same information:
$ git log
So, git log looks in .git for your commit object, reads the information, and formats it in a pretty way. And now you see what that leading line of the log comment, “commit …”, means. It’s Git’s name for that commit object.
You can also see that the first line of the commit object says,
tree bb216ad97a6d296d1feedbc3e097343ce93f8f43
So now you see how Git is connecting up all the pieces.
– A commit object keeps track of meta-information about the commit and points at the tree being committed.
– A tree keeps track of the blobs in it, and their human names.
– A blob stores the contents of a file.
What You Now Know
– Git stores all its information in the directory .git/
– git init creates that directory
– Linus implements a user-level filesystem with the files under .git/objects/.
– Each Git object is stored in a subdirectory of .git/objects/. The subdirectory is the first two characters of the name. This is an implementation detail to keep you from having to store every object in the same directory in your OS’s filesystem.
The name of the object in your OS’s file
.git/objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391
is e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 ,
which Git lets you abbreviate as e69de29, thank goodness.
– There are three important flavors of objects: blobs, trees, and commits.
– Blobs are files, trees are directories, and commits are, um, commits.
– The objects are encoded, but you can use git cat-file -p to look inside them.
– You can use git cat-file -p to see the contents of a blob.
– You can use git cat-file -p to see the contents of a tree: a list of objects, with their types, permissions and their human names.
git ls-tree serves up the same information in a slightly nicer format.
– You can use git cat-file -p to see the contents of a commit: the commit message, timestamp, committer, and the tree you committed.
git log will format that information in loads of different, friendly ways.
– All these are bound together: commits point at trees, trees point at blobs and other trees.
– The Git filesystem is completely user-visible. You can see the whole thing. The Linux filesystem implements directories and files in a similar way — no surprise — but the implementation details are hidden. You can get a directory listing with ls, but you can’t actually open up a directory and look at its guts with cat.
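You can even decode those objects without Git’s help. Here is a minimal Python sketch (loose objects are zlib-deflated, with a “<type> <size>” header before a NUL byte — a detail the walkthrough above only hints at):
import zlib
from pathlib import Path

# The blob for the empty file, created earlier in this walkthrough.
path = Path(".git/objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391")

raw = zlib.decompress(path.read_bytes())
# A loose object is "<type> <size>" + NUL byte + content.
header, _, content = raw.partition(b"\x00")
print(header)   # b'blob 0' -- a blob of size 0
print(content)  # b'' -- the empty file, as expected
Try the same on your tree and commit objects; git cat-file -p is doing little more than this, plus some pretty-printing.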
There’s a lot more here to explore:
– tags & branches,
– SHA1s and object encoding,
– configuration files,
– remotes with their fetches, pushes & pulls,
– merges & rebases,
– indexes & packfiles,
– the git command and its subcommands,
…
Yes, Git is big. But its design is also simple.
Now you know you can watch and understand how all these work. You can see into Git’s secrets for yourself.
If you want some guidance along the way, I can recommend some resources:
– One is Ian Miell’s Learn Git the Hard Way, available on Kindle for $10.
I think it does a good job of teaching Git by teaching how it works.
– A second is Git Under the Hood, a set of videos that I did for Pearson, and available either directly, or through O’Reilly.
– You can even see what Git looked like at the beginning and every step of its evolution: Linus Torvalds started writing Git on April 3, 2005, released it three days later, and made it self-hosting the next day. You can clone the source with git clone https://github.com/git/git, and then check out the very first version, or any version after that.
Talking Drupal: Talking Drupal #388 - Valhalla Content Hub
Today we are talking about Valhalla Content Hub with Shane Thomas.
For show notes visit: www.talkingDrupal.com/388
Topics
- Joining Netlify
- Changes at Gatsby
- What is a content hub
- How does that differ from a content repo
- What is Valhalla
- How does it work
- Data stitching with GraphQL
- Can you massage / normalize data
- Benefits
- Privacy
- Production examples
- How is it structured
- Do you have to use Gatsby
- Integrations with Drupal
- Timing
- Cost
- How to sign up
Shane Thomas - www.codekarate.com/ @smthomas3
Hosts
Nic Laflin - www.nLighteneddevelopment.com @nicxvan
John Picozzi - www.epam.com @johnpicozzi
Jacob Rockowitz - www.jrockowitz.com @jrockowitz
MOTW Correspondent
Martin Anderson-Clutz - @mandclu
Entity Share
You configure one site to be the Server that provides the entities, and choose which content types or bundles will be available, and in which languages.
The Drop Times: Importance of Synergy
"The whole is greater than the sum of its parts," said Aristotle. It is especially relevant while talking about a free software ecosystem.
In functional logic, it is helpful to break things up into smaller units so that they become manageable. There would be more focus, and bugs are easier to identify.
The non-core modules that follow the strict guidelines for quality code are the building blocks contributing to Drupal's greatness. The insistence on quality is what binds these compartments seamlessly. Each team has its role. But their collective can deliver much more than these individual parts ever could.
The synergy between different constituent units is paramount in a loosely knit community formed around superior technology and a grand philosophy. Entities working in this space should constantly meet in some way or another and share their ideas to achieve this synergy. The DrupalCons and DrupalCamps are always facilitating this catch-up game.
DrupalCon Pittsburgh Early Bird Registration is now open and is available through April 02. But the deadline to apply for a scholarship will end tomorrow. Early Bird Registration for the 6th annual DrupalCamp Ruhr will also end tomorrow. DrupalCamp Florida is now over, and here is a look back. Read our interview with Melissa Bent and April Sides, published as part of DrupalCamp Florida. DrupalSouth (New Zealand and Australia) has called for paper submissions for their upcoming event in Wellington. They have opened registrations for the camp, and the first 50 registrants will get an early bird offer. If you are eager to attend Drupal training, you can consider registering for the training sessions at DrupalCamp New Jersey. Fan tickets are available for DrupalCamp Poland. Here is a list of current sponsors for Drupal Developer Days Vienna. Some sponsoring slots for the NERD Summit might still be open. The four-day DrupalCamping Wolfsburg, fashioned as a BAR Camp, has limited tickets, and those interested could rush for registration.
This March, we have the DrupalCamp NJ and the NERD Summit coming up. MidCamp is in April. DrupalSouth Wellington, The Stanford WebCamp, DrupalCamp Ruhr, and DrupalCamp Poland will follow in May. Not long after, we have the first annual DrupalCon of this year in Pittsburgh by the beginning of June, just after the DrupalJam. In the same month, we have Drupal Camp Asheville and Drupal Developer Days Vienna. Let these gatherings be an excellent start for your Drupal journey if you are new to the community. For those already here, it is time to synergize with the rest. That is for this week. Thank you.
Sincerely,
Sebin A. Jacob
Editor-in-Chief, The Drop Times
Daniel Lange: Thunderbird gpg key import
Thunderbird, srsly?
5MB (or 4.8MiB) import limit. Sure. My modest pubring (111 keys) is 18MB. The Debian keyring is 28MB.
Maybe, just maybe, add another 0 to that if statement?
So, until that happens, workarounds ...
Option 1:
Export each pubkey into a separate file. The import dialog allows you to select them all in one go. But - of course - it will ask for confirmation for each. So prepare some valerian tea.
gpg --with-colons --list-public-keys | grep ^pub | cut -d : -f 5 | xargs -I {} -n 1 gpg -ao {}.pub --export {}
Option 2:
Strip all the signatures, so Thunderbird gets a smaller file to chew on. This uses pgp-clean from signing-party.
gpg --with-colons --list-public-keys | grep ^pub | cut -d : -f 5 | xargs pgp-clean -s >> there_you_go_thunderbird.pub
Option 1 will retain the signatures on individual keys; Option 2 will not.
CTI Digital: How Drupal Has Evolved to Make Content Editors Lives Easier
Drupal has come a long way since its inception as a content management system (CMS) in 2001. Over the years, Drupal has continued to evolve and improve, positioning itself as a top choice for organisations looking to build a dynamic and engaging online presence.
One of the most significant changes in Drupal's evolution has been its focus on becoming more user-friendly for content editors. In this blog, we’ll explore some of the biggest changes that have occurred as Drupal has shifted its positioning to be more user-focused.
Chromatic Insights: How to Add Tugboat Live Previews to Drupal Contrib Modules
CTI Digital: Drupal Through The Years: The Evolution of Drupal
Drupal has long been known as a powerful and flexible content management system (CMS), but it’s also well known for its complexity. In the early days of Drupal, creating and managing content required a deep understanding of the platform, its architecture and many intricacies, making it challenging for non-technical users to navigate.
However, over the years, Drupal has made significant changes to become more user-friendly and accessible for content editors. In this blog, we’ll take a closer look at the evolution of Drupal and the changes that Drupal and the community have made to create a more accessible platform for content editors.
Real Python: Using NumPy reshape() to Change the Shape of an Array
The main data structure that you’ll use in NumPy is the N-dimensional array. An array can have one or more dimensions to structure your data. In some programs, you may need to change how you organize your data within a NumPy array. You can use NumPy’s reshape() to rearrange the data.
The shape of an array describes the number of dimensions in the array and the length of each dimension. In this tutorial, you’ll learn how to change the shape of a NumPy array to place all its data in a different configuration. When you complete this tutorial, you’ll be able to alter the shape of any array to suit your application’s needs.
In this tutorial, you’ll learn how to:
- Change the shape of a NumPy array without changing its number of dimensions
- Add and remove dimensions in a NumPy array
- Control how data is rearranged when reshaping an array with the order parameter
- Use a wildcard value of -1 for one of the dimensions in reshape()
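As a quick preview of those last two points, here is a small sketch with a six-element array (not taken from the article itself), showing the -1 wildcard and the order parameter:
>>> import numpy as np
>>> a = np.arange(6)
>>> a.reshape(2, -1)  # -1 tells NumPy to infer this dimension (here, 3)
array([[0, 1, 2],
       [3, 4, 5]])
>>> a.reshape(2, 3, order="F")  # Fortran-style, column-major ordering
array([[0, 2, 4],
       [1, 3, 5]])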
For this tutorial, you should be familiar with the basics of NumPy and N-dimensional arrays. You can read NumPy Tutorial: Your First Steps Into Data Science in Python to learn more about NumPy before diving in.
Supplemental Material: Click here to download the image repository that you’ll use with NumPy reshape().
Install NumPy
You’ll need to install NumPy to your environment to run the code in this tutorial and explore reshape(). You can install the package using pip within a virtual environment. Select either the Windows or Linux + macOS tab below to see instructions for your operating system:

Windows:
PS> python -m venv venv
PS> .\venv\Scripts\activate
(venv) PS> python -m pip install numpy

Linux + macOS:
$ python -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install numpy

It’s a convention to use the alias np when you import NumPy. To get started, you can import NumPy in the Python REPL:
>>> import numpy as np
Now that you’ve installed NumPy and imported the package in a REPL environment, you’re ready to start working with NumPy arrays.
Understand the Shape of NumPy Arrays
You’ll use NumPy’s ndarray in this tutorial. In this section, you’ll review the key features of this data structure, including an array’s overall shape and number of dimensions.
You can create an array from a list of lists:
>>> import numpy as np
>>> numbers = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
>>> numbers
array([[1, 2, 3, 4],
       [5, 6, 7, 8]])
The function np.array() returns an object of type np.ndarray. This data structure is the main data type in NumPy.
You can describe the shape of an array using the length of each dimension of the array. NumPy represents this as a tuple of integers. The array numbers has two rows and four columns. Therefore, this array has a (2, 4) shape:
>>> numbers.shape
(2, 4)
You can represent the same data using a different shape:
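For example, you can reshape numbers into four rows and two columns (a minimal sketch of the idea):
>>> numbers.reshape(4, 2)
array([[1, 2],
       [3, 4],
       [5, 6],
       [7, 8]])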
Both of these arrays contain the same data. The array with the shape (2, 4) has two rows and four columns and the array with the shape (4, 2) has four rows and two columns. You can check the number of dimensions of an array using .ndim:
>>> numbers.ndim
2
The array numbers is two-dimensional (2D). You can arrange the same data contained in numbers in arrays with a different number of dimensions:
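For instance, the same eight values can become a three-dimensional (2, 2, 2) array, or be flattened into one dimension (a short sketch; the full article linked below develops this further):
>>> numbers.reshape(2, 2, 2)
array([[[1, 2],
        [3, 4]],

       [[5, 6],
        [7, 8]]])
>>> numbers.reshape(8)
array([1, 2, 3, 4, 5, 6, 7, 8])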
Read the full article at https://realpython.com/numpy-reshape/ »
Python for Beginners: Working With an XML File in Python
XML files are used to store data as well as to transmit data in software systems. This article discusses how to read, write, update, and delete data from an XML file in Python. For this task, we will use the xmltodict module in python.
Table of Contents
- What is an XML File?
- Create an XML File in Python
- Read an XML File in Python
- Add a New Section to an XML File in Python
- Update Value in an XML File Using Python
- Delete data From XML File in Python
- Conclusion
What is an XML File?
XML (eXtensible Markup Language) is a markup language that is used to store and transmit data. It is similar to HTML in structure. But, unlike HTML, XML is designed to store and manipulate data, not to display data. XML uses a set of markup symbols to describe the structure and meaning of the data it contains, and it can be used to store any type of data, including text, numbers, dates, and other information.
An XML file is a plain text file that contains XML code, which can be read by a wide range of software applications, including web browsers, text editors, and specialized XML tools. The structure of an XML file consists of elements, which are defined by tags, and the data within those elements is stored as text or other data types.
The syntax for declaring data in an XML file is as follows.
<field_name> value </field_name>
To understand this, consider the following example.
<?xml version="1.0"?> <employee> <name>John Doe</name> <age>35</age> <job> <title>Software Engineer</title> <department>IT</department> <years_of_experience>10</years_of_experience> </job> <address> <street>123 Main St.</street> <city>San Francisco</city> <state>CA</state> <zip>94102</zip> </address> </employee>The above XML string is a simple XML document that describes the details of an employee.
- The first line of the document, <?xml version="1.0"?>, is the XML declaration and it specifies the version of XML that is being used in the document.
- The root element of the document is <employee>, which contains several other elements:
- The <name> element stores the name of the employee, which is “John Doe”.
- The <age> element stores the age of the employee, which is “35”.
- The <job> element contains information about the employee’s job, including the <title> (Software Engineer), the <department> (IT), and the <years_of_experience> (10) elements.
- The <address> element contains information about the employee’s address, including the <street> (123 Main St.), <city> (San Francisco), <state> (CA), and <zip> (94102) elements.
Each of these elements is nested within the parent <employee> element, creating a hierarchical structure. This structure allows for the data to be organized in a clear and concise manner, making it easy to understand and process.
XML is widely used for data exchange and storage because it is platform-independent, meaning that XML data can be transported and read on any platform or operating system, without the need for proprietary software. It is also human-readable, which makes it easier to debug and maintain, and it is extensible, which means that new elements can be added as needed, without breaking existing applications that use the XML data.
XML files are saved using the .xml extension. Now, we will discuss approaches to read and manipulate XML files using the xmltodict module. You can install this module using pip by executing the following command in your command prompt.
pip3 install xmltodict
Create an XML File in Python
To create an XML file in python, we can use a python dictionary and the unparse() method defined in the xmltodict module. For this, we will use the following steps.
- First, we will create a dictionary containing the data that needs to be put into the XML file.
- Next, we will use the unparse() method to convert the dictionary to an XML string. The unparse() method takes the python dictionary as its input argument and returns the XML representation of the string.
- Now, we will open an XML file in write mode using the open() function. The open() function takes the file name as its first input argument and the literal “w” as its second input argument. After execution, it returns a file pointer.
- Next, we will write the XML string into the file using the write() method. The write() method, when invoked on the file pointer, takes the XML string as its input argument and writes it to the file.
- Finally, we will close the file using the close() method.
After execution of the above steps, the XML file will be saved in the file system. You can observe this in the following example.
import xmltodict

employee = {'employee': {'name': 'John Doe',
                         'age': '35',
                         'job': {'title': 'Software Engineer',
                                 'department': 'IT',
                                 'years_of_experience': '10'},
                         'address': {'street': '123 Main St.',
                                     'city': 'San Francisco',
                                     'state': 'CA',
                                     'zip': '94102'}}}

file = open("employee.xml", "w")
xml_string = xmltodict.unparse(employee)
file.write(xml_string)
file.close()
Instead of using the write() method, we can directly write the XML data into the file using the unparse() method. For this, we will pass the python dictionary as the first input argument and the file pointer as the second input argument to the unparse() method. After execution of the unparse() method, the data will be saved to the file.
You can observe this in the following example.
import xmltodict

employee = {'employee': {'name': 'John Doe',
                         'age': '35',
                         'job': {'title': 'Software Engineer',
                                 'department': 'IT',
                                 'years_of_experience': '10'},
                         'address': {'street': '123 Main St.',
                                     'city': 'San Francisco',
                                     'state': 'CA',
                                     'zip': '94102'}}}

file = open("employee.xml", "w")
xmltodict.unparse(employee, file)
file.close()
The output file looks as follows.
XML File
Read an XML File in Python
To read an XML file in python, we will use the following steps.
- First, we will open the file in read mode using the open() function. The open() function takes the file name as its first input argument and the python literal “r” as its second input argument. After execution, it returns a file pointer.
- Once we get the file pointer, we will read the file using the read() method. The read() method, when invoked on the file pointer, returns the file contents as a python string.
- Now, we have read the XML file into a string. Next, we will parse it using the parse() method defined in the xmltodict module. The parse() method takes the XML string as its input and returns the contents of the XML string as a python dictionary.
- After parsing the contents of the XML file, we will close the file using the close() method.
After executing the above steps, we can read the XML file into a python dictionary. You can observe this in the following example.
import xmltodict file=open("employee.xml","r") xml_string=file.read() print("The XML string is:") print(xml_string) python_dict=xmltodict.parse(xml_string) print("The dictionary created from XML is:") print(python_dict) file.close()Output:
The XML string is:
<?xml version="1.0" encoding="utf-8"?>
<employee><name>John Doe</name><age>35</age><job><title>Software Engineer</title><department>IT</department><years_of_experience>10</years_of_experience></job><address><street>123 Main St.</street><city>San Francisco</city><state>CA</state><zip>94102</zip></address></employee>
The dictionary created from XML is:
{'employee': {'name': 'John Doe', 'age': '35', 'job': {'title': 'Software Engineer', 'department': 'IT', 'years_of_experience': '10'}, 'address': {'street': '123 Main St.', 'city': 'San Francisco', 'state': 'CA', 'zip': '94102'}}}
Add a New Section to an XML File in Python
To add a new section to an existing XML file, we will use the following steps.
- We will open the XML file in “r+” mode using the open() function. This will allow us to modify the file. Then, we will read it into a python dictionary using the read() method and the parse() method.
- Next, we will add the desired data to the python dictionary using key-value pairs.
- After adding the data to the dictionary, we will erase the existing data from the file. For this, we will first go to the start of the file using the seek() method. Then, we will erase the file contents using the truncate() method.
- Next, we will write the updated dictionary as XML to the file using the unparse() method.
- Finally, we will close the file using the close() method.
After executing the above steps, new data will be added to the XML file. You can observe this in the following example.
import xmltodict file=open("employee.xml","r+") xml_string=file.read() python_dict=xmltodict.parse(xml_string) #add a single element python_dict["employee"]["boss"]="Aditya" #Add a section with nested elements python_dict["employee"]["education"]={"University":"MIT", "Course":"B.Tech", "degree":"Hons."} file.seek(0) file.truncate() xmltodict.unparse(python_dict,file) file.close()The output file looks as follows.
XML file after adding data
In this example, you can observe that we have added a single element as well as a nested element to the XML file.
To add a single element to the XML file, we just need to add a single key-value pair to the dictionary. To add an entire section, we need to add a nested dictionary.
Update Value in an XML File Using Python
To update a value in the XML file, we will first read it into a python dictionary. Then, we will update the values in the dictionary. Finally, we will write the dictionary back into the XML file as shown below.
import xmltodict file=open("employee.xml","r+") xml_string=file.read() python_dict=xmltodict.parse(xml_string) #update values python_dict["employee"]["boss"]="Chris" python_dict["employee"]["education"]={"University":"Harvard", "Course":"B.Sc", "degree":"Hons."} file.seek(0) file.truncate() xmltodict.unparse(python_dict,file) file.close()Output:
XML file after the update
Delete data From XML File in Python
To delete data from the XML file, we will first read it into a python dictionary. Then, we will delete the key-value pairs from the dictionary. Next, we will dump the dictionary back into the XML file using the unparse() method. Finally, we will close the file using the close() method as shown below.
import xmltodict file=open("employee.xml","r+") xml_string=file.read() python_dict=xmltodict.parse(xml_string) #delete single element python_dict["employee"].pop("boss") #delete nested element python_dict["employee"].pop("education") file.seek(0) file.truncate() xmltodict.unparse(python_dict,file) file.close()Output:
XML file after deletion
In this example, you can observe that we have deleted a single element as well as a nested element from the XML file.
Conclusion
In this article, we have discussed how to perform create, read, update, and delete operations on an XML file in python using the xmltodict module. To learn more about XML files, you can read this article on how to convert XML to YAML in Python. You might also like this article on how to convert JSON to XML in python.
I hope you enjoyed reading this article. Stay tuned for more informative articles.
Happy Learning!
The post Working With an XML File in Python appeared first on PythonForBeginners.com.
Daniel Lange: Getting gpg to import signatures again
The GnuPG (gpg) ecosystem has been played with a bit in 2019 by adding fake signatures en masse to well known keys. The main result is that the SKS Keyserver network based on the OCaml software of the same name is basically history. A few other keyservers have come up like Hagrid (Rust) and Hockeypuck (Go) but there seems to be no clear winner yet. In case you missed it in 2019, see my take on cleaning these polluted keys.
Now the changed defaults in gpg to "mitigate" this issue are trickling down to even the conservative distributions. Debian Bullseye has self-sigs-only on gpg 2.2.27 and it looks like Debian Bookworm will get gpg 2.2.40. This would add import-clean but Daniel Kahn Gillmor patched it out. He argues correctly that this new default could delete data from good locally stored pubkeys.
This all ends in you getting some random combination of self-sigs-only and / or import-clean depending on which Linux distribution and version you happen to use.
Better to be explicit. I recommend adding:
# disable new gpg defaults
keyserver-options no-self-sigs-only
keyserver-options no-import-clean
to your ~/.gnupg/gpg.conf to make sure you can manage signatures yourself and receive them from keyservers or local imports as intended.
In case you care: See info gnupg --index-search=keyserver-options for the fine documentation. Of course apt install info first to be able to read info pages. 'cause who still uses them in 2023? Oh, wait...