Feeds
Dan Yeaw: A Big Job Change
I recently changed jobs, and now I am an Engineering Manager for OSS at Anaconda!
This is my second major career pivot, and I thought I would share why I decided to make the change. Even though being a Submarine Officer, being a Functional Safety Lead, and working on OSS are very different, they share a common thread of leadership and deeply technical engineering. I’m excited to bring these skills into my new role.
Goodbye to Ford
I spent the last 11 years leading Functional Safety at Ford. It was incredibly rewarding to grow from an individual contributor to a manager and eventually to leading a global team dedicated to ensuring Ford vehicles were safe.
While this role let me support Functional Safety Engineers across the company, I started to miss getting hands-on with technical contributions since most of my time was focused on leading the team.
Looking back, there are a couple of things I wish had been different at Ford:
- A strong bias toward external talent in executive leadership
- Too much bureaucracy, especially in approval processes
Having a good mix of new talent join an organization is so important because fresh ideas and perspectives can make a big difference. However, in Ford’s software engineering areas, about 90% of the executive leadership roles were filled by people from outside the company. While I wasn’t aiming for higher leadership roles, this clear preference for external hires made it feel like developing and retaining internal talent wasn’t a priority. As you might expect, it took new leaders a while to adapt, and there was a lot of turnover.
On top of that, the approval process for things like hiring and travel was overly complicated. Simple approvals could take months with no feedback. This culture of control slowed everything down. Delegating authority—like giving managers a budget and headcount to work with and holding them accountable—would have made things so much smoother and faster.
The thing I’ll miss most about Ford is the people. I loved collaborating with all the Functional Safety Engineers and everyone else I worked with. I wish them all the best in the future!
Hello Anaconda
I am now an Engineering Manager for Open Source Software at Anaconda, where I lead a team of engineers working on amazing projects like:
...and more!
Over the last seven years, I’ve been contributing to open source projects, especially in Python. Getting the chance to lead a team that does this full-time feels like a dream come true.
One of the things I’m most excited about with these projects is how they help make programming more accessible. BeeWare, for example, makes it possible to create apps for mobile and desktop, and PyScript lets you write Python directly in your web browser. Both tools are fantastic for helping anyone pick up Python, build something useful, and share it with others. Meanwhile, Jupyter and fsspec are key tools for data science, making it easier to analyze diverse datasets and integrate different data sources.
I’m thrilled to have the opportunity to strengthen the open-source communities around these projects, encourage healthier collaboration, and create more value for Python users by connecting these tools with the broader PyData ecosystem.
Real Python: Basic Data Types in Python: A Quick Exploration
Python data types are fundamental to the language, enabling you to represent various kinds of data. You use basic data types like int, float, and complex for numbers, str for text, bytes and bytearray for binary data, and bool for Boolean values. These data types form the core of most Python programs, allowing you to handle numeric, textual, and logical data efficiently.
Understanding Python data types involves recognizing their roles and how to work with them. You can create and manipulate these data types using built-in functions and methods. You can also convert between them when necessary. This versatility helps you manage data effectively in your Python projects.
By the end of this tutorial, you’ll understand that:
- Python’s basic data types include int, float, complex, str, bytes, bytearray, and bool.
- You can check a variable’s type using the type() function in Python.
- You can convert data types in Python using functions like int(), float(), str(), and others.
- Despite being dynamically typed, Python does have data types.
- The most essential data types in Python can be categorized as numeric, sequence, binary, and Boolean.
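To make those takeaways concrete, here’s a minimal REPL sketch (mine, not the article’s) using only built-in functions:

```python
>>> type(42)                # check a value's type
<class 'int'>
>>> float("3.5")            # str -> float conversion
3.5
>>> int(3.9)                # float -> int truncates toward zero
3
>>> str(2 + 3j)             # complex -> str
'(2+3j)'
>>> bool(b"")               # empty bytes (and other empty sequences) are falsy
False
```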
In this tutorial, you’ll learn only the basics of each data type. To learn more about a specific data type, you’ll find useful resources in the corresponding section.
Get Your Code: Click here to download the free sample code that you’ll use to learn about basic data types in Python.
Take the Quiz: Test your knowledge with our interactive “Basic Data Types in Python: A Quick Exploration” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Basic Data Types in Python: A Quick Exploration
Take this quiz to test your understanding of the basic data types that are built into Python, like numbers, strings, bytes, and Booleans.
Python’s Basic Data Types
Python has several built-in data types that you can use out of the box because they’re built into the language. From all the built-in types available, you’ll find that a few of them represent basic objects, such as numbers, strings and characters, bytes, and Boolean values.
Note that the term basic refers to objects that can represent data you typically find in real life, such as numbers and text. It doesn’t include composite data types, such as lists, tuples, dictionaries, and others.
In Python, the built-in data types that you can consider basic are the following:
| Class | Basic Type |
| --- | --- |
| int | Integer numbers |
| float | Floating-point numbers |
| complex | Complex numbers |
| str | Strings and characters |
| bytes, bytearray | Bytes |
| bool | Boolean values |

In the following sections, you’ll learn the basics of how to create, use, and work with all of these built-in data types in Python.
Integer Numbers
Integer numbers are whole numbers with no decimal places. They can be positive or negative numbers. For example, 0, 1, 2, 3, -1, -2, and -3 are all integers. Usually, you’ll use positive integer numbers to count things.
In Python, the integer data type is represented by the int class:
```python
>>> type(42)
<class 'int'>
```

In the following sections, you’ll learn the basics of how to create and work with integer numbers in Python.
Integer Literals
When you need to use integer numbers in your code, you’ll often use integer literals directly. Literals are constant values of built-in types spelled out literally, such as integers. Python provides a few different ways to create integer literals. The most common way is to use base-ten literals that look the same as integers look in math:
```python
>>> 42
42
>>> -84
-84
>>> 0
0
```

Here, you have three integer numbers: a positive one, a negative one, and zero. Note that to create negative integers, you need to prepend the minus sign (-) to the number.
Python has no limit to how long an integer value can be. The only constraint is the amount of memory your system has. Beyond that, an integer can be as long as you need:
```python
>>> 123123123123123123123123123123123123123123123123 + 1
123123123123123123123123123123123123123123123124
```

For a really, really long integer, you can get a ValueError when converting it to a string:
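The article is cut off here, but the behavior it refers to is CPython’s integer-to-string conversion limit (added in Python 3.11, with a default of 4300 digits). A hedged sketch of what triggers, and lifts, that limit:

```python
import sys

big = 10 ** 6000                  # creating a huge integer is fine
try:
    text = str(big)               # int -> str beyond the 4300-digit default raises
except ValueError as exc:
    print(exc)                    # "Exceeds the limit (4300 digits) for integer string conversion..."

sys.set_int_max_str_digits(7000)  # opt in to a higher limit (Python 3.11+)
print(len(str(big)))              # 6001
```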
Read the full article at https://realpython.com/python-data-types/ »
Real Python: A Practical Introduction to Web Scraping in Python
Python web scraping allows you to collect and parse data from websites programmatically. With powerful libraries like urllib, Beautiful Soup, and MechanicalSoup, you can fetch and manipulate HTML content effortlessly. By automating data collection tasks, Python makes web scraping both efficient and effective.
You can build a Python web scraping workflow using only the standard library by fetching a web page with urllib and extracting data using string methods or regular expressions. For more complex HTML or more robust workflows, you can use the third-party library Beautiful Soup, which simplifies HTML parsing. By adding MechanicalSoup to your toolkit, you can even enable interactions with HTML forms.
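As a rough sketch of that workflow (not taken from the tutorial itself; it assumes the third-party bs4 package is installed and reuses the demo URL that appears later in the article):

```python
from urllib.request import urlopen

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

url = "http://olympus.realpython.org/profiles/aphrodite"
html = urlopen(url).read().decode("utf-8")  # fetch raw HTML with the standard library

soup = BeautifulSoup(html, "html.parser")   # parse it with Beautiful Soup
print(soup.title.string)                    # the page's <title> text
print(soup.get_text())                      # all visible text, tags stripped
```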
By the end of this tutorial, you’ll understand that:
- Python is well-suited for web scraping due to its extensive libraries, such as Beautiful Soup and MechanicalSoup.
- You can scrape websites with Python by fetching HTML content using urllib and extracting data using string methods or parsers like Beautiful Soup.
- Beautiful Soup is a great choice for parsing HTML documents with Python effectively.
- Data scraping may be illegal if it violates a website’s terms of use, so always review the website’s acceptable use policy.
This tutorial guides you through extracting data from websites using string methods, regular expressions, and HTML parsers.
Note: This tutorial is adapted from the chapter “Interacting With the Web” in Python Basics: A Practical Introduction to Python 3.
The book uses Python’s built-in IDLE editor to create and edit Python files and interact with the Python shell, so you’ll see occasional references to IDLE throughout this tutorial. However, you should have no problems running the example code from the editor and environment of your choice.
Source Code: Click here to download the free source code that you’ll use to collect and parse data from the Web.
Take the Quiz: Test your knowledge with our interactive “A Practical Introduction to Web Scraping in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
A Practical Introduction to Web Scraping in Python
In this quiz, you'll test your understanding of web scraping in Python. Web scraping is a powerful tool for data collection and analysis. By working through this quiz, you'll revisit how to parse website data using string methods, regular expressions, and HTML parsers, as well as how to interact with forms and other website components.
Scrape and Parse Text From Websites
Collecting data from websites using an automated process is known as web scraping. Some websites explicitly forbid users from scraping their data with automated tools like the ones that you’ll create in this tutorial. Websites do this for two possible reasons:
- The site has a good reason to protect its data. For instance, Google Maps doesn’t let you request too many results too quickly.
- Making many repeated requests to a website’s server may use up bandwidth, slowing down the website for other users and potentially overloading the server such that the website stops responding entirely.
Before using your Python skills for web scraping, you should always check your target website’s acceptable use policy to see if accessing the website with automated tools is a violation of its terms of use. Legally, web scraping against the wishes of a website is very much a gray area.
Important: Please be aware that the following techniques may be illegal when used on websites that prohibit web scraping.
For this tutorial, you’ll use a page that’s hosted on Real Python’s server. The page that you’ll access has been set up for use with this tutorial.
Now that you’ve read the disclaimer, you can get to the fun stuff. In the next section, you’ll start grabbing all the HTML code from a single web page.
Build Your First Web Scraper
One useful package for web scraping that you can find in Python’s standard library is urllib, which contains tools for working with URLs. In particular, the urllib.request module contains a function called urlopen() that you can use to open a URL within a program.
In IDLE’s interactive window, type the following to import urlopen():
```python
>>> from urllib.request import urlopen
```

The web page that you’ll open is at the following URL:
```python
>>> url = "http://olympus.realpython.org/profiles/aphrodite"
```

To open the web page, pass url to urlopen():
```python
>>> page = urlopen(url)
```

urlopen() returns an HTTPResponse object:
```python
>>> page
<http.client.HTTPResponse object at 0x105fef820>
```

Read the full article at https://realpython.com/python-web-scraping-practical-introduction/ »
Stack Abuse: Performance Optimization for Django-Powered Websites on Shared Hosting
Running a Django site on shared hosting can be really agonizing. It's budget-friendly, sure, but it comes with strings attached: sluggish response time and unexpected server hiccups. It kind of makes you want to give up.
Luckily, with a few fixes here and there, you can get your site running way smoother. It may not be perfect, but it gets the job done. Ready to level up your site? Let’s dive into these simple tricks that’ll make a huge difference.
Know Your Limits, Play Your Strengths
But before we dive deeper, let's do a quick intro to Django. A website that is built on the Django web framework is called a Django-powered website.
Django is an open-source framework written in Python. It can easily handle spikes in traffic and large volumes of data. Platforms like Netflix, Spotify, and Instagram have a massive user base, and they have Django at their core.
Shared hosting is a popular choice among users when it comes to Django websites, mostly because it's affordable and easy to set up. But since you're sharing resources with other websites, you are likely to struggle with:
- Limited resources (CPU, storage, etc.)
- Noisy neighbor effect
However, that's not the end of the world. You can achieve a smoother run by:
- Reducing server load
- Regular monitoring
- Contacting your hosting provider
These tricks help a lot, but shared hosting can only handle so much. If your site is still slow, it might be time to think about cheap dedicated hosting plans.
But before you start looking for a new hosting plan, let's make sure your current setup doesn't have any loose ends.
Flip the Debug Switch (Off!)
Once your Django site goes live, the first thing you should do is turn DEBUG off. When DEBUG is on, Django shows detailed error pages, which makes troubleshooting a lot easier during development. But it backfires in production, because those error pages can reveal sensitive information to anyone who triggers an error.
To turn DEBUG off, simply set it to False in your settings.py file:

```python
DEBUG = False
```

Next, don’t forget to configure ALLOWED_HOSTS. This setting controls which domains can access your Django site. Without it, your site might be vulnerable to unwanted traffic. Add your domain name to the list like this:
```python
ALLOWED_HOSTS = ['yourdomain.com', 'www.yourdomain.com']
```

With DEBUG off and ALLOWED_HOSTS locked down, your Django site is already more secure and efficient. But there’s one more trick that can take your performance to the next level.
Cache! Cache! Cache!
Imagine every time someone visits your site, Django processes the request and renders a response. What if you could save those results and serve them instantly instead? That’s where caching comes in.
Caching is like putting your site’s most frequently used data in the fast lane. You can use tools like Redis to keep your data in RAM. For API responses or database query results, in-memory caching can prove to be a game changer.
To be more specific, there's also Django's built-in caching:
- Queryset caching: If your system repeatedly runs the same database queries, keep the query results.
- Template fragment caching: This feature caches the parts of your page that almost always remain the same (headers, sidebars, etc.) to avoid unnecessary rendering.
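As a rough sketch of both ideas, assuming a local Redis instance and a hypothetical Post model and post_list view (the names are mine, not the article’s):

```python
# settings.py: a minimal Redis-backed cache (this backend is built into Django 4.0+)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# views.py: queryset caching plus full-response caching
from django.core.cache import cache
from django.http import HttpResponse
from django.views.decorators.cache import cache_page

from .models import Post  # hypothetical model, not from the article


def recent_posts():
    # Keep the query's results in the cache for five minutes
    # instead of hitting the database on every request.
    return cache.get_or_set("recent-posts", lambda: list(Post.objects.all()[:10]), 300)


@cache_page(60 * 15)  # cache the whole rendered response for 15 minutes
def post_list(request):
    return HttpResponse(", ".join(post.title for post in recent_posts()))
```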
Your database is the backbone of your Django site. Django makes database interactions easy with its ORM (Object-Relational Mapping). But if you’re not careful, those queries can quietly become a bottleneck.
- Use .select_related() and .prefetch_related()
When querying related objects, Django can make multiple database calls without you even realizing it. These can pile up and slow your site.
Instead of this:
```python
posts = Post.objects.all()
for post in posts:
    print(post.author.name)  # Multiple queries: one per post's author
```

Use this:

```python
posts = Post.objects.select_related('author')
for post in posts:
    print(post.author.name)  # One query for all authors
```

- Avoid the N+1 Query Problem: The N+1 query problem happens when you unknowingly run one query for the initial data and an additional query for each related object. Always check your queries using tools like Django Debug Toolbar to spot and fix these inefficiencies.
- Index Your Database: Indexes help your database find data faster. Identify frequently searched fields and ensure they’re indexed. In Django, you can add indexes as shown in the first sketch after this list.
- Query Only What You Need: Fetching unnecessary data wastes time and memory. Use .only() or .values() to retrieve only the fields you actually need (see the second sketch after this list).
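Here’s the index example promised in the list above, using a hypothetical Post model (the field names are assumptions):

```python
from django.db import models


class Post(models.Model):
    title = models.CharField(max_length=200)
    created_at = models.DateTimeField(db_index=True)  # single-column index

    class Meta:
        # Composite index for queries that filter on both fields together.
        indexes = [models.Index(fields=["title", "created_at"])]
```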
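And for the last item, a sketch of trimming queries down to the fields you need, with the same hypothetical model:

```python
# Model instances with only the title column loaded eagerly;
# accessing a deferred field later costs an extra query per instance.
titles = Post.objects.only("title")

# Plain dicts instead of model instances: lighter when you just need values.
rows = Post.objects.values("id", "title")
```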
Static files (images, CSS, and JavaScript) can put a heavy load on your server. But have you ever thought of offloading them to a Content Delivery Network (CDN)? A CDN caches your content across a network of servers and delivers it from locations close to your users. The steps are as follows:
- Set Up a CDN (e.g., Cloudflare, AWS CloudFront): A CDN will cache your static files and serve them from locations closest to your clients.
- Use Dedicated Storage (e.g., AWS S3, Google Cloud Storage): Store your files in a service designed for static content. Use Django’s storages library.
- Compress and Optimize Files: Minify your CSS and JavaScript files and compress images to reduce file sizes. Use tools like django-compressor to automate this process.
By offloading static files, you’ll free up server storage and improve your site’s speed. It’s one more thing off your plate!
Lightweight Middleware, Heavyweight Impact
Middleware sits between your server and your application. It processes every request and response.
Check your MIDDLEWARE setting and remove anything you don’t need. Use Django’s built-in middleware whenever you can because it’s faster and more reliable. If you create custom middleware, make sure it’s simple and only does what’s really necessary. Keeping middleware lightweight reduces server strain and uses fewer resources.
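As an illustration, here’s a minimal sketch of a lightweight custom middleware; the timing header is my example, not something from the article:

```python
import time


class TimingHeaderMiddleware:
    """Do one small thing per request: record how long it took."""

    def __init__(self, get_response):
        self.get_response = get_response  # called once at server startup

    def __call__(self, request):
        start = time.perf_counter()
        response = self.get_response(request)
        response["X-Request-Duration"] = f"{time.perf_counter() - start:.3f}s"
        return response
```

It would then be registered by adding its dotted path to the MIDDLEWARE list.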
Frontend First Aid
Your frontend is the first thing users see, so a slow, clunky interface can leave a bad impression. Using your frontend the right way can dramatically improve the user experience.
- Minimize HTTP Requests: Combine CSS and JavaScript files to reduce the number of requests.
- Optimize Images: Use tools like TinyPNG or ImageOptim to compress images without losing quality.
- Lazy Load Content: Delay loading images or videos until they’re needed on the screen.
- Enable Gzip Compression: Compress files sent to the browser to reduce load times (a sketch follows this list).
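For the compression item, Django ships a built-in GZipMiddleware; here’s a minimal sketch of enabling it (the surrounding middleware list is illustrative):

```python
# settings.py: GZipMiddleware goes first so it compresses the fully rendered response.
MIDDLEWARE = [
    "django.middleware.gzip.GZipMiddleware",
    "django.middleware.security.SecurityMiddleware",
    # ...the rest of your middleware...
]
```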
In the end, the key to maintaining a Django site is constant monitoring. By using tools like Django Debug Toolbar or Sentry, you can quickly identify performance issues.
Once you have a clear picture of what’s happening under the hood, measure your site’s performance with tools like New Relic or Google Lighthouse. These tools will help you prioritize where to make improvements. With this knowledge, you can optimize your code, tweak settings, and ensure your site runs smoothly.
This Week in Plasma: end-of-year bug fixing
Lots of KDE folks are winding down for well-deserved end-of-year breaks, but that didn't stop a bunch of people from landing some awesome changes anyway! This will be a short one, and I may skip next week as many of us are going to be focusing on family time. But in the meantime, check out what we have here:
Notable UI Improvements
When applying screen settings fails due to a graphics driver issue, the relevant page in System Settings now tells you about it, instead of failing silently. (Xaver Hugl, 6.3.0. Link)
Added a new Breeze open-link icon with the typical "arrow pointing out of the corner of a square" appearance, which should start showing up in places where web URLs are opened from things that don't clearly look like blue clickable links. (Carl Schwan, Frameworks 6.10. Link)
Notable Bug Fixes
Fixed one of the most common recent Powerdevil crashes. (Jakob Petsovits, 6.2.5. Link)
Recording a specific window in Spectacle and OBS now produces a recording with the correct scale when using any screen scaling. (Xaver Hugl, 6.2.5. Link)
When using a WireGuard VPN, the "Persistent keepalive" setting now works. (Adrian Thiele, 6.2.5. Link)
Implemented multiple fixes and improvements for screen brightness and dimming. (Jakob Petsovits, 6.3.0. Link 1, link 2, link 3, and link 4)
Auto-updates in Discover now work again! (Harald Sitter, 6.3.0. Link)
Vastly improved game controller joystick support in Plasma, fixing many weird and random-seeming bugs. (Arthur Kasimov, 6.3.0. Link)
For printers that report per-color ink levels, System Settings' Printers page now displays the ink level visualization in the actual ink colors again. (Kai Uwe Broulik, 6.3.0. Link)
Pager widgets on very thin floating panels are now clickable in all the places they're supposed to be clickable. (Niccolò Venerandi, 6.3.0. Link)
Wallpapers with very very special EXIF metadata can no longer generate text labels that escape from their intended boundaries on Plasma's various wallpaper chooser views. (Jonathan Riddell and Nate Graham, Frameworks 6.10. Link)
Fixed one of the most common Qt crashes affecting Plasma and KDE apps. (Fabian Kosmale, Qt 6.8.2. Link)
Other bug information of note:
- 2 Very high priority Plasma bugs (same as last week). Current list of bugs
- 34 15-minute Plasma bugs (up from 32 last week). Current list of bugs
- 125 KDE bugs of all kinds fixed over the last week. Full list of bugs
Notable in Performance & Technical
Significantly reduced the CPU usage of System Monitor during the time after you open the app but before you visit the History page. More CPU usage fixes are in the pipeline, too! (Arjen Hiemstra, 6.2.5. Link)
Plasma Browser Integration now works for the Flatpak-packaged version of Firefox. (Harald Sitter, 6.3.0. Link)
How You Can Help
KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.
You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine!
You don’t have to be a programmer, either. Many other opportunities exist:
- Triage and confirm bug reports, maybe even identify their root cause
- Contribute designs for wallpapers, icons, and app interfaces
- Design and maintain websites
- Translate user interface text items into your own language
- Promote KDE in your local community
- …And a ton more things!
You can also help us by donating to our yearly fundraiser! Any monetary contribution — however small — will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.
To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.
Michael Prokop: Grml 2024.12 – codename Adventgrenze
We did it again™! Just in time, we’re excited to announce the release of Grml stable version 2024.12, code-named ‘Adventgrenze’! (If you’re not familiar with Grml, it’s a Debian-based live system tailored for system administrators.)
This new release is built on Debian trixie, and for the first time, we’re introducing support for 64-bit ARM CPUs (arm64 architecture)!
I’m incredibly proud of the hard work that went into this release. A significant amount of behind-the-scenes effort went into reworking our infrastructure and redesigning the build process. Special thanks to Chris and Darsha – our Grml developer days in November and December were a blast!
For a detailed overview of the changes between releases 2024.02 and 2024.12, check out our official release announcement. And, as always, after a release comes the next one – exciting improvements are already in the works!
BTW: recently we also celebrated 20(!) years of Grml releases. If you’re a Grml and/or grml-zsh user, please join us in celebrating and send us a postcard!
ComputerMinds.co.uk: Views Data Export: Sprint 1 Summary
As explained in the previous article in the series I've started working on maintaining Views Data Export again.
I've decided to document my work in two-week 'sprints', and so this article is about what I did in Sprint 1.
Sprint progress
At the start of the sprint, the Drupal.org issue queue contained:
- 204 open bugs
- 276 other open issues
So that's a total of 480 open issues.
By the end it looked like this:
- 91 open bugs
- 17 fixed issues
- 81 other open issues
So that's a total of 189 open issues, a 60% reduction from before!
Key goals
In this sprint I wanted to:
- Tame the issue queues on Drupal.org and get a handle on what the common frustrations and missing features were.
- Read and understand all of the code in the Drupal 8.x-1.x branch.
Taming the issue queue
As mentioned in a previous article, I decided to close down pretty much all the tickets for the Drupal 7 version of the module. This is the codebase that I'm most familiar with, but it was causing a lot of noise in the issue queue, so getting rid of it was a great first step, and a pretty easy one.
https://www.drupal.org/project/views_data_export/issues/3492246 was my ticket where I detailed what I was going to do, and then I went about doing that.
This felt immensely good! I went through each Drupal 7 ticket, gave it a quick scan, and then pasted in my prepared closing statement. It took just over an hour and was like taking a trip down memory lane: seeing all those old issues come up and remembering when I originally triaged some of them.
After this initial round of work, I've also been working in the 8.x-1.x queue to close out duplicate and solved issues. I've been focussing on support requests which are usually super quick to evaluate and close out. However, this means that I've not really had a chance to look through all the feature requests and bugs, so I still don't really have a handle on what's needed/broken with the module.
Understanding the code
I had a good old read of the code. There's some really great stuff in there, and there's some obvious room for improvement.
But at least I know what the code does now and can see some obvious problems. Also, the codebase is small and there are some automated tests, so we've got a great platform to get going with.
Giving direction
There were a few tickets for 8.x-1.x where contributors were making great contributions, and I was able to provide some guidance on how to implement a feature or resolve a bug. I feel like the issue queue has been lacking any kind of technical leadership, so many tickets are collections of patches where developers fix the problem they have in quite a specific way. I'm really looking forward to giving some direction to these contributions and then, at some point, committing and releasing the great work!
Future roadmap/goals
I'm not committing to doing these exactly, or in any particular order, but this is my high-level list of hopes/dreams/desires. I'll copy and paste this into the next sprint summary article and adjust it as required.
- Get the project page updated with information relevant to Drupal 8.x-1.x version of the module
- Update the documentation on Drupal.org
- Not have any duplicate issues on Drupal.org
MidCamp - Midwest Drupal Camp: Last Chance Proposal Help: MidCamp 2025 Session Proposal Workshop
Missed the last Session Proposal Workshop? Don't worry; we have another one in January right before the submission deadline!
🚀 Ready to take your session ideas to the next level? Whether you're a seasoned speaker or a first-time presenter, the MidCamp 2025 Session Proposal Workshop is here to help you craft standout submissions.
📅 Date: January 7, 2025
🕒 Time: 3:00 PM - 4:00 PM CST
🌐 Location: Virtual via MidCamp Slack (#speakers channel)
This workshop will be led by Aaron Feledy, a seasoned Drupal contributor and experienced speaker. Aaron brings years of expertise in proposal crafting and conference speaking, offering practical advice to help you refine and elevate your session submissions.
Why Attend?
Submitting a session proposal can be daunting—but it doesn't have to be! This workshop is designed to guide you through the process, from brainstorming topics to refining your submission. Our expert facilitators will share insider tips on what makes a proposal stand out to reviewers and resonate with attendees.
What You’ll Learn:
- How to choose and frame a compelling topic
- Crafting clear, concise, and engaging abstracts
- Tips for tailoring your proposal to different audiences
- Insight into the MidCamp review process
Ready to submit? Session submissions for MidCamp 2025 are now open! Visit the MidCamp 2025 session submission page for guidelines and start your journey to the stage.
How to Join:
Simply join the MidCamp Slack and head over to the #speakers channel on January 7th at 3:00 PM CST. No registration required—just jump in and start collaborating!
Old New Blog
I started this blog back in 2010. Back then I used Wordpress and it worked reasonably well. In 2018 I decided to switch to a statically generated site, mostly because the Wordpress blog felt slow to load and was a hassle to maintain. Back then the go-to static site generator was Jekyll, so I went with that. Lately I’ve been struggling with it, though, because in order to keep all the plugins working, I needed to use older versions of Ruby, which meant I had to use Docker to build the blog locally. Overall, it felt like too much work, and for the past few years I’ve been eyeing Hugo - more so since Carl and others migrated most of KDE websites to it. I mean, if it’s good enough for KDE, it’s good enough for me, right?
So this year I finally got around to doing the switch. I migrated all the content from Jekyll. This time I actually went through every single post, converted it to proper Markdown, and fixed formatting, images, etc. It was a nice trip down memory lane, reading all the old posts, remembering all the sprints and Akademies… I also took the opportunity to clean up the tags and categories so that they are more consistent and useful.
Finally, I rewrote the theme. I originally ported the template from Wordpress to Jekyll, but it was a bit of a mess, and responsivity was “hacked” in via JavaScript. Web development (and my skills) have come a long way since then, so I was able to leverage more modern CSS and HTML features to make the site look the same but be more responsive and accessible.
Comments
When I switched from Wordpress to Jekyll, I was looking for a way to preserve comments. I found Isso, which is basically a small CGI server backed by SQLite that you can run on the server and embed into your static website through JavaScript. It could also natively import comments from Wordpress, which is the main reason why I went with it, I think. Isso was not perfect (although development has picked up again in the past few years) and it kept breaking for me. I think it hasn’t worked on my blog for the past few years, and I just couldn’t be bothered to fix it. So I decided to ditch it in favor of another solution…
I wanted to keep the comments for old posts by generating them as static HTML from Isso’s SQLite database, but alas, the database file was empty. Looks like I lost all comments at some point in 2022. It sucks, but I guess it’s not the end of the world. Due to the nature of how Isso worked, not even the Wayback Machine was able to archive the comments, so I guess they are lost forever…
For this new blog, I decided to use Carl’s approach of embedding replies from a Mastodon post. I think it’s a neat idea, and it’s probably the most reliable solution for comments on a static blog (that I don’t have to pay for, host myself, or deal with privacy concerns or advertising).
I have some more ideas regarding the comments system, but that’s for another post ;-) Hopefully I’ll get to blog more often now that I have a shiny new blog!
Happy Holidays 🎄
Enjoy the holidays and see you in 2025 🥳!
ImageX: Exploring New Features for Content Editors in Drupal as We Step into 2025
Freelock Blog: Automatically post to BlueSky
Since the 2024 election, the BlueSky social network has exploded in popularity, and appears to be replacing the cesspool that used to be Twitter. I'm not much of a social media person -- I much prefer hanging out in smaller spaces with people with shared interests. If you're like me, I would highly recommend finding a Mastodon server that caters to your interests, where you're sure to find rewarding conversations.
Noah Meyerhans: Local Development VM Management
A coworker asked recently about how people use VMs locally for dev work, so I figured I’d take a few minutes to write up a bit about what I do. There are many use cases for local virtual machines in software development and testing. They’re self-contained, meaning you can make a mess of them without impacting your day-to-day computing environment. They can run different distributions, kernels, and even entirely different operating systems from the one you use regularly. Etc. They’re also cheaper than cloud services and provide finer grained control over the resources.
I figured I’d share a little bit about how I manage different virtual machines in case anybody finds this useful. This is what works for me, but it won’t necessarily work for you, or maybe you’ve already got something better. I’ve found it easy to work with and lightweight, and it’s easy to evolve as my needs change.
Use short-lived VMs
Rather than keep a long-lived “development” VM around that you customize over time, I recommend automating the common customizations and provisioning new VMs regularly. If I’m working on reproducing a bug or testing a change prior to submitting it upstream, I’ll do this work in a VM and delete the VM when I’m done. When provisioning VMs this frequently, though, walking through the installation process for every new VM is tedious and a waste of time. Most of my work is done in Debian, so I start with images generated daily by the cloud team. These images are available for multiple releases and architectures. The ‘nocloud’ variant boots to a root prompt and can be useful directly, or the ‘generic’ images can be used for cloud-init based customization.
Automating image preparation
This makefile lets me do something like make image and get a new qcow2 image with the latest build of a given Debian release (sid by default, with others available by specifying DIST):
```make
DATESTAMP=$(shell date +"%Y-%m-%d")
FLAVOR?=generic
ARCH?=$(shell dpkg --print-architecture)
DIST?=sid
RELEASE=$(DIST)
URL_PATH=https://cloud.debian.org/images/cloud/$(DIST)/daily/latest/

ifeq ($(DIST),trixie)
RELEASE=13
endif
ifeq ($(DIST),bookworm)
RELEASE=12
endif
ifeq ($(DIST),bullseye)
RELEASE=11
endif

debian-$(DIST)-$(FLAVOR)-$(ARCH)-daily.tar.xz:
	curl --fail --connect-timeout 20 -LO \
		$(URL_PATH)/debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz

$(DIST)-$(FLAVOR)-$(DATESTAMP).qcow2: debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz
	tar xvf debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz
	qemu-img convert -O qcow2 disk.raw $@
	rm -f disk.raw
	qemu-img resize $@ 20g
	qemu-img snapshot -c untouched $@

image: $(DIST)-$(FLAVOR)-$(DATESTAMP).qcow2

.PHONY: image
```

Customize the VM environment with cloud-init
While the ‘nocloud’ images can be useful, I typically find that I want to apply the same modifications to each new VM I launch, and they don’t provide facilities for automating this. The ‘generic’ images, on the other hand, run cloud-init by default. Using cloud-init, I can create my user account, point apt at local mirrors, install my preferred tools, ensure the root filesystem is resized to make full use of the backing storage, etc.
The cloud-init configuration on the generic images will read from a local config drive, which can contain an ISO9660 (cdrom) filesystem image. This image can be generated from a subdirectory containing the various cloud-init input files using the following make syntax:
```make
IMDS_FILES=$(shell find seedconfig -path '*/.git/*' \
	-prune -o -type f -name '*.in.json' -print) \
	seedconfig/openstack/latest/user_data

seed.iso: $(IMDS_FILES)
	genisoimage -V config-2 -o $@ -J -R -m '*~' -m '.git' seedconfig
```

With the image in place, the VM can be created with:
```sh
qemu-system-x86_64 -machine q35,accel=kvm -cpu host -m 4g \
	-drive file=${img},index=0,if=virtio,media=disk \
	-drive file=seed.iso,media=cdrom,format=raw,index=2,if=virtio \
	-nic user -nographic
```

This invokes qemu with the root volume and ISO image attached as disks, uses an emulated “q35” machine with the host’s CPU and KVM acceleration, the userspace network stack, and a serial console. The first time the VM boots, cloud-init will apply the configuration from the cloud-config available in the ISO9660 filesystem.
Alternatives to cloud-init
virt-customize is another tool that accomplishes the same type of customization. I use cloud-init because it works directly with cloud providers in addition to local VM images. You could also use something like ansible.
Variations
I have a variant of this that uses a bridged network, which I’ll write more about later. The bridge is nice because it’s more featureful, with full support for IPv6, etc., but it needs a bit more infrastructure in place.
It can also be helpful to use 9p or virtfs to share filesystem state between the host and the VM. I don’t tend to rely on these, and will instead use rsync or TRAMP for moving files around.
Containers are also useful, of course, and there are plenty of times when the full isolation of a VM is not worth the overhead.
The Drop Times: An Enriching Experience to Carry Forward: Reflections from DrupalCon Asia
Web Review, Week 2024-51
Let’s go for my web review for the week 2024-51.
Advice for First-Time Open Source Contributors
Tags: tech, foss, community
Definitely a good list of advice for first-time contributors.
https://www.yegor256.com/2024/12/15/open-source-beginner-advice.html
Tags: tech, internet, geospatial
IRIS² is the friendly reminder that tens of thousands of low-orbit satellites is not the only design… and likely not the smartest one.
https://www.theverge.com/2024/12/16/24322358/iris2-starlink-rival-europe-date-cost
Tags: tech, tv, attention-economy, advertisement
The TV market is really turning into an anti-consumer one.
Tags: tech, social-media, bluesky, fediverse, architecture
Yet another long piece in this interesting and in-depth conversation about Bluesky. The fact that it stays civil is called out explicitly, and this is appreciated.
https://dustycloud.org/blog/re-re-bluesky-decentralization/
Tags: tech, social-media, moderation, bluesky, politics
Bluesky is already hitting growth pains regarding moderation and its guidelines. By being centralized it is also more at risk within the current US political climate.
Tags: tech, social-media, linkedin, ai, machine-learning, gpt, fake
Kind of unsurprising, right? I mean, LinkedIn is clearly a deformed version of reality where people write like corporate drones most of the time. It was only a matter of time until robot-generated content became prevalent there; it’s just harder to spot since even humans aren’t behaving genuinely there.
https://www.wired.com/story/linkedin-ai-generated-influencers/
Tags: tech, internet, web, ai, machine-learning, gpt, fake, knowledge
Indeed, we’ll have to relearn “internet hygiene”, it is changing quickly now that we prematurely unleashed LLM content on the open web.
https://www.late-review.com/p/ai-and-internet-hygiene
Tags: tech, ai, machine-learning, gpt, criticism
A good balanced post on the topic. Maybe we’ll finally see a resurgence of real research innovation and not just stupid scaling at all costs. Reliability will stay the important factor of course and this one is still hard to crack.
https://www.aisnakeoil.com/p/is-ai-progress-slowing-down
Tags: tech, analogic, ai, machine-learning, neural-networks, hardware
It looks like analog chips for neural network workloads are on the verge of finally becoming reality. This would reduce consumption by an order of magnitude and hopefully more later on. These are very early days for this new attempt; let’s see if it delivers on its promises.
https://spectrum.ieee.org/analog-ai-2669898661
Tags: tech, hardware, foss
A good question, it is somewhat of a grey area at times. We need to come up with better answers.
https://mjg59.dreamwidth.org/70895.html
Tags: tech, databases, sqlite, asynchronous, rust, system, filesystem
Interesting explanation of a research paper exploring the possibility of a faster SQLite by focusing on async I/O.
https://avi.im/blag/2024/faster-sqlite/
Tags: tech, java, tools
I wouldn’t use it as much as advocated in this article; still, this is a good reminder that Java has become way more approachable for smaller programs in recent years.
https://horstmann.com/unblog/2024-12-11/index.html
Tags: tech, career, complexity, learning
It tries hard not to be a “get off my lawn” post. It clearly points at some kinds of disconnects, though, and they’re real. I guess it’s to be expected with the breadth of our industry. There are so many abstractions piled onto each other that it’s difficult to explore them all.
https://rakhim.exotext.com/web-developers-a-growing-disconnect
Tags: tech, design, pattern, java, type-systems
One of my favorite of the traditional design patterns in object oriented languages. Now obviously when you get pattern matching in your language… you don’t need the visitor pattern anymore.
https://nipafx.dev/java-visitor-pattern-pointless/
Tags: tech, project-management, estimates
I don’t exactly use this approach to factor in the uncertainty… but I guess there’s something to be made out of this proposal. I’ll keep it in mind for my next project.
https://ntietz.com/blog/estimating-projects-short-sale/
Tags: leadership, management, communication
Interesting ideas about leadership lacking in impact. Indeed, it should be seen as a communal function; it’s not about individuals each leading in their own direction. Think about it in a systemic way.
https://suzansfieldnotes.substack.com/p/the-one-way-i-know-a-team-is-in-trouble
Tags: ecology, politics, law, energy
This is not all bad news, there are a few things to rejoice about.
Bye for now!
Real Python: The Real Python Podcast – Episode #232: Exploring Modern Sentiment Analysis Approaches in Python
What are the current approaches for analyzing emotions within a piece of text? Which tools and Python packages should you use for sentiment analysis? This week, Jodie Burchell, developer advocate for data science at JetBrains, returns to the show to discuss modern sentiment analysis in Python.
LostCarPark Drupal Blog: Drupal Advent Calendar day 20 - Navigation
It’s day 20 of the Drupal Advent Calendar, and today we’re looking at the admin UI Navigation. Joining us today are Pablo López and Matthew Oliveira, so let’s look into it…
The aim of the Navigation track is to provide a better site management experience for Drupal users. It does not provide a specific recipe or feature to Drupal CMS. Navigation is a core experimental module. However, the Navigation track provides key integration points to Drupal CMS that will help other tracks to highlight their features in the new Navigation left sidebar.
The navigation sidebar provides an improved interface for site builders and content creators
Since Navigation has replaced Toolbar in Drupal CMS…
CKEditor: Unlock New Levels of Drupal Content Editing: Webinar Recap
Talk Python to Me: #489: Anaconda Toolbox for Excel and more with Peter Wang
New LabPlot User Documentation
In recent weeks we have been working on transferring LabPlot’s documentation to a new format.
We decided to move the documentation from the DocBook and MediaWiki formats to the Sphinx/reStructuredText framework. In our view, Sphinx offers a user-friendly and flexible way to create and manage documentation. Easy math typing and code formatting also come along. Additionally, Sphinx supports basic syntax checks and modern documentation practices, such as versioning and integration with various output formats like HTML, PDF, and ePub.
The new user’s manual is available on a dedicated page: https://docs.labplot.org. Please check it out and let us know what you think.
The manual still needs to be supplemented with new content, so we encourage you to contribute to the documentation, e.g. by fixing and adding new sections or updating images, as collaborative efforts can lead to a more comprehensive resource for everyone. Please check the Git repository dedicated to the documentation to find more details on how to help make it better.