Feeds
Quansight Labs Blog: Towards Inclusive Documentation: the PyData Sphinx Theme, Before and After Accessibility Fixes
Bits from Debian: Bits from the DPL
Dear Debian community,
these are my bits from the DPL for August.
Happy Birthday Debian
On 16th of August Debian celebrated its 31st birthday. Since I'm unable to write a better text than our great publicity team, I'm simply linking to their article for those who might have missed it:
https://bits.debian.org/2024/08/debian-turns-31.html
Removing more packages from unstable
Helmut Grohne argued for more aggressive package removal and sought consensus on a way forward. He provided six examples of processes where packages that are candidates for removal are consuming valuable person-power. I’d like to add that the Bug of the Day initiative (see below) also frequently encounters long-unmaintained packages with popcon votes sometimes as low as zero, and often fewer than ten.
Helmut's email included a list of packages that would meet the suggested removal criteria. There was some discussion about whether a popcon vote should be included in these criteria, with arguments both for and against it. Although I support including popcon, I acknowledge that Helmut has a valid point in suggesting it be left out.
While I’ve read several emails in agreement, Scott Kitterman made a valid point: "I don't think we need more process. We just need someone to do the work of finding the packages and filing the bugs." I agree that this is crucial to ensure an automated process doesn’t lead to unwanted removals. However, I don’t see "someone" stepping up to file RM bugs against other maintainers' packages. As long as we have strict ownership of packages, many people are hesitant to touch a package, even for fixing it. Asking for its removal might be even less well-received. Therefore, if an automated procedure were to create RM bugs based on defined criteria, it could help reduce some of the social pressure.
In this regard, the opinion of Niels Thykier is interesting: "As much as I want automation, I do not mind the prototype starting as a semi-automatic process if that is what it takes to get started."
Charles Plessy put the urgency of removing packages into words: "So as of today, it is much less work to keep a package rotting than removing it." My observation when trying to fix the Bug of the Day exactly fits this statement.
I would love for this discussion to lead to more aggressive removals that we can agree upon, whether they are automated, semi-automated, or managed by a person processing an automatically generated list (supported by an objective procedure). To use an analogy: I’ve found that every image collection improves with aggressive pruning. Similarly, I’m convinced that Debian will improve if we remove packages that no longer serve our users well.
DEP14 / DEP18
There are two DEPs that affect our workflow for maintaining packages—particularly for those who agree on using Git for Debian packages. DEP-14 recommends a standardized layout for Git packaging repositories, which benefits maintainers working across teams and makes it easier for newcomers to learn a consistent repository structure.
DEP-14 stalled for various reasons. Sam Hartman suspected it might be because 'it doesn't bring sufficient value.' However, the assumption that git-buildpackage is incompatible with DEP-14 is incorrect, as confirmed by its author, Guido Günther. Thus one of the two key tools for Debian Git repositories (besides dgit) fully supports DEP-14, though the migration from the previous default is somewhat complex.
Some investigation into mass-converting older formats to DEP-14 was conducted by the Perl team, as Gregor Herrmann pointed out.
The discussion about DEP-14 resurfaced with the suggestion of DEP-18. Guido Günther proposed the title ‘Encourage Continuous Integration and Merge Request-Based Collaboration for Debian Packages’, which more accurately reflects the DEP's technical intent.
Otto Kekäläinen, who initiated DEP-18 (thank you, Otto), provided a good summary of the current status. He also assembled a very helpful overview of Git and GitLab usage in other Linux distros.
More Salsa CI
As a result of the DEP-18 discussion, Otto Kekäläinen suggested implementing Salsa CI for our top popcon packages.
I believe it would be a good idea to enable CI by default across Salsa whenever a new repository is created.
Progress in Salsa migration
In my campaign, I stated that I aim to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. Today, it stands at 2,187 (UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%' ;).
After a third of my DPL term (OMG), we've made significant progress, reducing the amount in question (369 packages) by nearly half. I'm pleased with the support from the DDs who moved their packages to Salsa. Some packages were transferred as part of the Bug of the Day initiative (see below).
Bug of the Day
As announced in my 'Bits from the DPL' talk at DebConf, I started an initiative called Bug of the Day. The goal is to train newcomers in bug triaging by enabling them to tackle small, self-contained QA tasks. We have consistently identified target packages and resolved at least one bug per day, often addressing multiple bugs in a single package.
In several cases, we followed the Package Salvaging procedure outlined in the Developers Reference. Most instances were either welcomed by the maintainer or did not elicit a response. Unfortunately, there was one exception where the recipient of the Package Salvage bug expressed significant dissatisfaction. The takeaway is to balance formal procedures with consideration for the recipient’s perspective.
I'm pleased to confirm that the Matrix channel has seen an increase in active contributors. This aligns with my hope that our efforts would attract individuals interested in QA work. I’m particularly pleased that, within just one month, we have had help with both fixing bugs and improving the code that aids in bug selection.
As I aim to introduce newcomers to various teams within Debian, I also take the opportunity to learn about each team's specific policies myself. I rely on team members' assistance to adapt to these policies. I find that gaining this practical insight into team dynamics is an effective way to understand the different teams within Debian as DPL.
Another finding from this initiative, which aligns with my goal as DPL, is that many of the packages we addressed are already on Salsa but have not been uploaded, meaning their VCS fields are not published. This suggests that maintainers are generally open to managing their packages on Salsa. For packages that were not yet on Salsa, the move was generally welcomed.
Publicity team wants you
The publicity team has decided to resume regular meetings to coordinate their efforts. Given my high regard for their work, I plan to attend their meetings as frequently as possible, which I began doing with the first IRC meeting.
During discussions with some team members, I learned that the team could use additional help. If anyone interested in supporting Debian with non-packaging tasks reads this, please consider introducing yourself to debian-publicity@lists.debian.org. Note that this is a publicly archived mailing list, so it's not the best place for sharing private information.
Kind regards
Andreas.
#! code: Drupal 11: Batch Processing Using Drush
This is the second part of a series of articles looking at the Batch API in Drupal. The Batch API is a system in Drupal that allows data to be processed in small chunks in order to prevent timeout errors or memory problems.
In the previous article we looked at how to set up the batch process using a form, with the batch methods being contained within the structure of the form class. When the form was submitted the batch process ran through 1,000 items and then printed out a result at the end.
Whilst there is nothing wrong with running the Batch API with everything in a form class, it is normally better to abstract the batch processing code into a separate class.
Using a separate batch class to contain the process and finish methods is a much better way of setting things up as it allows you to abstract away the batch process from the action that starts it. This means that you can start the batch from anywhere, even a Drush command.
Allowing your batch processes to be run via Drush is a really powerful feature for a module to include. It means that any big process that can be run by a user can also be run automatically via a Drush command.
The Batch Class
To create a batch class I normally create a directory called "Batch" inside the module "/src" directory that contains any batch classes I need to define. The contents of the class are the two batch methods from the form class used previously, namely the batchProcess() and batchFinished() methods.
The following shows the basic structure of this class.
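A minimal sketch of what such a class can look like (hypothetical module and class names; only the batchProcess()/batchFinished() pairing and the src/Batch location come from this article):

<?php

namespace Drupal\mymodule\Batch;

/**
 * Illustrative batch class; the real work goes into batchProcess().
 */
class ItemProcessor {

  /**
   * Batch operation callback: processes one chunk of work.
   *
   * Drupal appends the batch context as the final argument of every
   * operation (an array, or a DrushBatchContext when run via Drush).
   */
  public static function batchProcess(int $id, &$context): void {
    // Do the work for this item, then record it for the finished callback.
    $context['results'][] = $id;
    $context['message'] = 'Processing item ' . $id;
  }

  /**
   * Batch finished callback: runs once after all operations complete.
   */
  public static function batchFinished(bool $success, array $results, array $operations): void {
    if ($success) {
      \Drupal::messenger()->addMessage(count($results) . ' items processed.');
    }
  }

}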
Colin Watson: Free software activity in August 2024
All but about four hours of my Debian contributions this month were sponsored by Freexian. (I ended up going a bit over my 20% billing limit this month.)
You can also support my work directly via Liberapay.
man-db and friends
I released libpipeline 1.5.8 and man-db 2.13.0.
Since autopkgtests are great for making sure we spot regressions caused by changes in dependencies, I added one to man-db that runs the upstream tests against the installed package. This required some preparatory work upstream, but otherwise was surprisingly easy to do.
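For illustration, the declaration for such a test is small; a hypothetical sketch of a debian/tests/control entry (not the actual man-db packaging) could look like this:

# debian/tests/control (hypothetical sketch)
# "upstream" is an executable in debian/tests/ that runs the upstream
# test suite; "@" depends on the binary packages built from this
# source, so the suite exercises the installed package.
Tests: upstream
Depends: @, build-essential
Restrictions: allow-stderr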
OpenSSH
I fixed the various 9.8 regressions I mentioned last month: socket activation, libssh2, and Twisted. There were a few other regressions reported too: TCP wrappers support, openssh-server-udeb, and xinetd were all broken by changes related to the listener/per-session binary split, and I fixed all of those.
Once all that had made it through to testing, I finally uploaded the first stage of my plan to split out GSS-API support: there are now openssh-client-gssapi and openssh-server-gssapi packages in unstable, and if you use either GSS-API authentication or key exchange then you should install the corresponding package in order for upgrades to trixie+1 to work correctly. I’ll write a release note once this has reached testing.
Multiple identical results from getaddrinfo
I expect this is really a bug in a chroot creation script somewhere, but I haven’t been able to track down what’s causing it yet. My sbuild chroots, and apparently Lucas Nussbaum’s as well, have an /etc/hosts that looks like this:
$ cat /var/lib/schroot/chroots/sid-amd64/etc/hosts
127.0.0.1 localhost
127.0.1.1 [...]
127.0.0.1 localhost ip6-localhost ip6-loopback

The last line clearly ought to be ::1 rather than 127.0.0.1; but things mostly work anyway, since most code doesn’t really care which protocol it uses to talk to localhost. However, a few things try to set up test listeners by calling getaddrinfo("localhost", ...) and binding a socket for each result. This goes wrong if there are duplicates in the resulting list, and the test output is typically very confusing: it looks just like what you’d see if a test isn’t tearing down its resources correctly, which is a much more common thing for a test suite to get wrong, so it took me a while to spot the problem.
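A minimal Python sketch of that test-listener pattern shows the failure mode (arbitrary port, illustration only, not code from the affected test suites):

import socket

# One listener per getaddrinfo() result; with 127.0.0.1 listed twice
# in /etc/hosts, the second bind() on the same address fails with
# EADDRINUSE even though no listener leaked.
listeners = []
for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        "localhost", 5000, type=socket.SOCK_STREAM):
    sock = socket.socket(family, socktype, proto)
    sock.bind(sockaddr)
    sock.listen()
    listeners.append(sock)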
I ran into this in both python-asyncssh (#1052788, upstream PR) and Ruby (ruby3.1/#1069399, ruby3.2/#1064685, ruby3.3/#1077462, upstream PR). The latter took a while since Ruby isn’t one of my languages, but hey, I’ve tackled much harder side quests. I NMUed ruby3.1 for this since it was showing up as a blocker for openssl testing migration, but haven’t done the other active versions (yet, anyway).
OpenSSL vs. cryptography
I tend to care about openssl migrating to testing promptly, since openssh uploads have a habit of getting stuck on it otherwise.
Debian’s OpenSSL packaging recently split out some legacy code (cryptography that’s no longer considered a good idea to use, but that’s sometimes needed for compatibility) to an openssl-legacy-provider package, and added a Recommends on it. Most users install Recommends, but package build processes don’t; and the Python cryptography package requires this code unless you set the CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 environment variable, which caused a bunch of packages that build-depend on it to fail to build.
After playing whack-a-mole setting that environment variable in a few packages’ build process, I decided I didn’t want to be caught in the middle here and filed an upstream issue to see if I could get Debian’s OpenSSL team and cryptography’s upstream talking to each other directly. There was some moderately spirited discussion and the issue remains open, but for the time being the OpenSSL team has effectively reverted the change so it’s no longer a pressing problem.
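For reference, the per-package workaround amounts to a one-line export in debian/rules, along these lines (a sketch, not any specific package’s rules file):

# Tell python-cryptography not to load OpenSSL's legacy provider
# during build and tests.
export CRYPTOGRAPHY_OPENSSL_NO_LEGACY = 1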
GCC 14 regressions
Continuing from last month, I fixed build failures in pccts (NMU) and trn4.
Python team
I upgraded alembic, automat, gunicorn, incremental, referencing, pympler (fixing compatibility with Python >= 3.10), python-aiohttp, python-asyncssh (fixing CVE-2023-46445, CVE-2023-46446, and CVE-2023-48795), python-avro, python-multidict (fixing a build failure with GCC 14), python-tokenize-rt, python-zipp, pyupgrade, twisted (fixing CVE-2024-41671 and CVE-2024-41810), zope.exceptions, zope.interface, zope.proxy, zope.security, zope.testrunner. In the process, I added myself to Uploaders for zope.interface; I’m reasonably comfortable with the Zope Toolkit and I seem to be gradually picking up much of its maintenance in Debian.
A few of these required their own bits of yak-shaving:
- python-aiohttp 3.10.0 needed fixes in blinkpy (#1077981, upstream PR) and python-yalexs (#1077985, upstream PR).
- twisted 24.7.0 needed fixes in pytest-twisted (cherry-picked existing upstream commit), python-daphne (cherry-picked existing upstream PR), and python-tornado (#1078411, upstream PR).
I improved some Multi-Arch: foreign tagging (python-importlib-metadata, python-typing-extensions, python-zipp).
I fixed build failures in pipenv, python-stdlib-list, psycopg3, and sen, and fixed autopkgtest failures in autoimport (upstream PR), python-semantic-release and rstcheck.
Upstream for zope.file (not in Debian) filed an issue about a test failure with Python 3.12, which I tracked down to a Python 3.12 compatibility PR in zope.security.
I made python-nacl build reproducibly (upstream PR).
I moved aliased files from / to /usr in timekpr-next (#1073722).
Installer team
I applied a patch from Ubuntu to make os-prober support building with the noudeb profile (#983325).
Guido Günther: Free Software Activities August 2024
Another short status update of what happened on my side last month.
Quite a bit of time went into helping organize the FrOSCon FOSS on Mobile dev room (day 1, day 2, summary) but that was all worth it and fun - so was releasing Phosh 0.41.0 (which incidentally happened right before FrOSCon). A three-year-old MR to xdg-spec to add call categories landed (thanks Matthias), allowing us to finally provide proper feedback for e.g. IM calls too. The rest was some OSK improvements (around Indic language support via varnam and layout configuration), some Cell Broadcast advancements (thanks to NGI0 for supporting this) but also some fixes. Here are the details:
Phosh
- Debug crash when swiping away keyboard on lockscreen (MR).
- Fix outdated clock when swiping back from lockscreen plugins (MR)
- Avoid deprecation warning (MR)
- Better handle mobile network generation bit masks (MR)
- Improve docs that end up in the libphosh-rs docs (MR)
- Modernize ModemManager backend in preparation for Cellbroadcast support (MR)
- Remove hacks from Cell Broadcast support MR (MR). Still draft, but not much to do left once the ModemManager side has landed
- Remove deprecated UI props and add a check so they don't creep back in (MR)
- Allow to use ASAN when feedbackd is a subproject (MR)
- Fix crash when Wi-Fi hot spot quick setting gets disabled (MR)
- Don't allow to change hotspot state on the lock screen (MR)
- Prepare and release Phosh 0.41.0~rc1 and Phosh 0.41.0
- Prepare 0.41.1 (MR)
- Don't reject gesture when we cross another surface (MR)
- Drop redundant enums (MR)
- Remember last used panel (MR)
- Fix initial state of move up/down popovers (MR)
- Allow to select OSK layouts (MR). This ensures only actually available layouts can be selected. Currently used by phosh-osk-stub but can easily be extended to squeekboard once it provides the information.
- Allow to run doc build locally and fix CI (MR)
- Release 0.0.2
- Allow to open OSK Settings panel when screen is not locked (MR)
- Unswap Enter and Backspace (MR)
- Bug fix release 0.41.1
- Use varnam_learn() for better completions in the varnam completer (MR)
- Export layout information (MR)
- Reduce flicker when launching settings (MR)
- Add release helpers (MR)
- VM images to test nightly packages https://salsa.debian.org/agx/phosh-recipes (based on Mobian's recipes)
- Upload Phosh 0.41.0~rc1 and 0.41.0 releases
- Robustify release script a bit (MR)
- Enable binding lib in phosh (MR)
- Move govarnam and varnam schemes packages into the input method team
- Upload varnam schemes to sid (MR)
- Make varnam-schemes reproducible, add autopkgtests and run upstream test during build (MR)
- Build wlroots with xcb-errors support (MR)
- Help mobian-recipes with newer debos: (MR)
- Rework most bits of Cell Broadcast to move it closer to undraft status (MR). (Remaining bits affect enabling of unsolicited messages and setting channels).
- Fix some deprecations (MR)
- Make pairing dialog adaptive (MR)
- Allow to use with Phosh without imposing more API/ABI guarantees (MR)
- Fix crash when hitting an error condition (which could then bring down the whole session): (MR)
- Install the udev rule via meson (MR) to make it easier for distros to pick up rule changes
- Sync packaging with Debian (MR)
- Document used gsettings (MR)
- Update information at matrix.org (MR)
- Implement more unified push bits (MR)
- Document things a bit (MR)
- Chase libcmatrix API changes (MR)
- More cleanups (MR, MR, MR)
- Publish docs (MR)
- Ship introspection data (MR)
- Release 0.0.2 (MR) (Release announcement)
- Catch up with libcmatrix API changes (MR)
- Avoid broken URLs when using ntfy (MR)
- Improve error message when not running in CI (MR)
- Drop outdated comments (MR)
- Propose some hints for Mobile clients (MR)
- Propose new sound name for cell broadcasts (MR)
- Make reproducible (MR)
- Don't ignore errors in build scripts (MR)
- Allow to run test against installed schemes (MR)
- Fix build with recent ruby (MR)
- Helped a bit to set up the event
- Gave a short Phosh status update
If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.
Steinar H. Gunderson: Zyxel GS1900 firmware source dump
I asked Zyxel for a source dump for GPLed firmware on their GS1900-8HP switches, and after months, they finally obliged (they seemingly had no idea that it should just be, well, available). So I'm dumping it here in case anyone else wants it.
I haven't tried actually building it, but notably, it seems to contain the entire CLI, since they base it on Quagga's vtysh (which is GPL).
Tryton News: Newsletter August 2024
In the last month we focused on fixing bugs, improving the behaviour of things, and resolving performance issues - building on the changes from our last release. We also added some new features which we would like to introduce to you in this newsletter.
For an in-depth overview of the Tryton issues, please take a look at our issue tracker or see the issues and merge requests filtered by label.
Changes for the User
Sales, Purchases and Projects
Now Tryton warns the user when submitting another complaint for the same origin.
Accounting, Invoicing and Payments
Now Tryton allows copying account moves from closed periods. When copying a move from a closed period, we now set the period of the duplicate to the current period and the date to the current date, while informing the user about the changes.
Stock, Production and Shipments
Our Sendcloud integration now adds package weight and warehouse as shipping method criteria.
User Interface
Now we add a searcher for summary fields, which searches the whole description.
We’ve now merged the HTML edit and translate buttons into a single button that asks for the language before opening, with the current language pre-selected.
System Data and Configuration
Now you can configure the license key for TinyMCE.
Now Tryton supports the UTR (United Kingdom Unique Taxpayer Reference) identifier.
New Documentation
We've created or updated the following documentation:
- Server Modules (ir, res, report): look-up changes…
- account_es_sii: look-up changes…
- account_invoice_correction: look-up changes…
- account_invoice_history: look-up changes…
- account_invoice_line_standalone: look-up changes…
- account_invoice_secondary_unit: look-up changes…
- account_payment_braintree: look-up changes…
- account_payment_sepa_cfonb: look-up changes…
- account_statement_coda: look-up changes…
We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 7.2.
Russ Allbery: Review: Reasons Not to Worry
Review: Reasons Not to Worry, by Brigid Delaney
Publisher: Harper
Copyright: 2022
Printing: October 2023
ISBN: 0-06-331484-3
Format: Kindle
Pages: 295

Reasons Not to Worry is a self-help non-fiction book about stoicism, focusing specifically on quotes from Seneca, Epictetus, and Marcus Aurelius. Brigid Delaney is a long-time Guardian columnist who has written on a huge variety of topics, including (somewhat relevantly to this book) her personal experiences trying weird fads.
Stoicism is having a moment among the sort of men who give people life advice in podcast form. Ryan Holiday, a former marketing executive, has made a career out of being the face of stoicism in everyone's podcast (and, of course, hosting his own). He is far from alone. If you pay attention to anyone in the male self-help space right now (Cal Newport, in my case), you have probably heard something vague about the "wisdom of the stoics."
Given that the core of stoicism is easily interpreted as a strategy for overcoming your emotions with logic, this isn't surprising. Philosophies that lean heavily on college dorm room logic, discount emotion, and argue that society is full of obvious flaws that can be analyzed and debunked by one dude with some blog software and a free afternoon have been very popular in tech circles for the past ten to fifteen years, and have spread to some extent into popular culture. Intriguingly, though, stoicism is a system of virtue ethics, which means it is historically in opposition to consequentialist philosophies like utilitarianism, the ethical philosophy behind effective altruism and other related Silicon Valley fads.
I am pretty exhausted with the whole genre of men talking to each other about how to live a better life — Cal Newport by himself more than satisfies the amount of that I want to absorb — but I was still mildly curious about stoicism. My education didn't provide me with a satisfying grounding in major historical philosophical movements, so I occasionally look around for good introductions. Stoicism also has some reputation as an anxiety-reduction technique, and I could use more of those. When I saw a Discord recommendation for Reasons Not to Worry that specifically mentioned its lack of bro perspective, I figured I'd give it a shot.
Reasons Not to Worry is indeed not a bro book, although I would have preferred fewer appearances of the author's friend Andrew, whose opinions on stoicism I could not possibly care less about. What it is, though, is a shallow and credulous book that falls squarely in the middle of the lightweight self-help genre. Delaney is here to explain why stoicism is awesome and to convince you that a school of Greek and Roman philosophers knew exactly how you should think about your life today. If this sounds quasi-religious, well, I'll get to that.
Delaney does provide a solid introduction to stoicism that I think is a bit more approachable than reading the relevant Wikipedia article. In her presentation, the core of stoicism is the practice of four virtues: wisdom, courage, moderation, and justice. The modern definition of "stoic" as someone who is impassive in the presence of pleasure or pain is somewhat misleading, but Delaney does emphasize a goal of ataraxia, or tranquility of mind. By making that the goal rather than joy or pleasure, stoicism tries to avoid the trap of the hedonic treadmill in favor of a more achievable persistent contentment.
As an aside, some quick Internet research makes me doubt Delaney's summary here. Other material about stoicism I found focuses on apatheia and associates ataraxia with Epicureanism instead. But I won't start quibbling with Delaney's definitions; I'm not qualified and this review is already too long.
The key to ataraxia, in Delaney's summary of stoicism, is to focus only on those parts of life we can control. She summarizes those as our character, how we treat others, and our actions and reactions. Everything else — wealth, the esteem of our colleagues, good health, good fortune — is at least partly outside of our control, and therefore we should enjoy it when we have it but try to be indifferent to whether it will last. Attempting to control things that are outside of our control is doomed to failure and will disturb our tranquility. Essentially all of this book is elaborations and variations on this theme, specialized to some specific area of life like social media, anxiety, or grief and written in the style of a breezy memoir.
If you're familiar with modern psychological treatment frameworks like cognitive behavioral therapy or acceptance and commitment therapy, this summary of stoicism may sound familiar. (Apparently this is not an accident; the predecessor to CBT used stoicism as a philosophical basis.) Stoicism, like those treatment approaches, tries to refocus your attention on the things that you can improve and de-emphasizes the things outside of your control. This is a lot of the appeal, at least to me (and I think to Delaney as well).
Hearing that definition, you may have some questions. Why those virtues specifically? They sound good, but all virtues sound good almost by definition. Is there any measure of your success in following those virtues outside your subjective feeling of ataraxia? Does the focus on only things you can control lead to ignoring problems only mostly outside of your control, where your actions would matter but only to a small degree? Doesn't this whole philosophy sound a little self-centered? What do non-stoic virtue ethics look like, and why do they differ from stoicism? What is the consequentialist critique of stoicism?
This is where the shortcomings of this book become clear: Delaney is not very interested in questions like this. There are sections on some of those topics, particularly the relationship between stoicism and social justice, but her treatment is highly unsatisfying. She raises the question, talks about her doubts about stoicism's applicability, and then says that, after further thought, she decided stoicism is entirely consistent with social justice and the stoics were right after all. There is a little bit more explanation than that, but not much. Stoicism can apparently never be wrong; it can only be incompletely understood.
Self-help books often fall short here, and I suspect this may be what the audience wants. Part of the appeal of the self-help genre is artificial certainty. Becoming a better manager, starting a business, becoming more productive, or working out an entire life philosophy are not problems amenable to a highly approachable and undemanding book. We all know that at some level, but the seductive allure of the self-help genre is the promise of simplifying complex problems down to a few approachable bullet points. Here is a life philosophy in a neatly packaged form, and if you just think deeply about its core principles, you will find they can be applied to any situation and any doubts you were harboring will turn out to be incorrect.
I am all too familiar with this pattern because it's also how fundamentalist Christianity works. The second time Delaney talked about her doubts about the applicability of stoicism and then claimed a few pages later that those doubts disappeared with additional thought and discussion, my radar went off. This book was sounding less like a thoughtful examination of one specific philosophy out of many and more like the soothing adoption of religious certainty by a convert. I was therefore entirely unsurprised when Delaney all but says outright in the epilogue that she's adopted stoicism as her religion and approaches it with the same dedicated practice that she used to bring to Catholicism. I think this is where a lot of self-help books end up, although most of them don't admit it.
There's nothing wrong with this, to be clear. It sounds like she was looking for a non-theistic religion, found one that she liked, and is excited to tell other people about it. But it's a profound mismatch with what I was looking for in an introduction to stoicism. I wanted context, history, and a frank discussion of the problems with adopting philosophy to everyday issues. I also wanted some acknowledgment that it is highly unlikely that a few men who lived 2000 years ago in a wildly different social context, and with drastically limited information about cultures other than their own, figured out a foolproof recipe for how to approach life. The subsequent two millennia of philosophical debates prove that stoicism didn't end the argument, and that a lot of other philosophers thought that stoicism got a few things wrong. You would never know that from this book.
What I wanted is outside the scope of this sort of undemanding self-help book, though, and this is the problem that I keep having with philosophy. The books I happen across are either nigh-incomprehensibly dense and academic, or they're simplified into catechism. This was the latter. That's probably more the fault of my reading selection than it is the fault of the book, but it was still annoying.
What I will say for this book, and what I suspect may be the most useful property of self-help books in general, is that it prompts you to think about basic stoic principles without getting in the way of your thoughts. It's like background music for the brain: nothing Delaney wrote was very thorny or engaging, but she kept quietly and persistently repeating the basic stoic formula and turning my thoughts back to it. Some of those thoughts may have been useful? As a source of prompts for me to ponder, Reasons Not to Worry was therefore somewhat successful. The concept of not trying to control things outside of my control is simple but valid, and it probably didn't hurt me to spend a week thinking about it.
"It kind of works as an undemanding meditation aid" is not a good enough reason for me to recommend this book, but maybe that's what someone else is looking for.
Rating: 5 out of 10
Andrew Cater: Debian release weekend - media team update 202408311900 UTC
We're doing fairly well: the Debian release team have been working really hard on a double point release today. This is the final release for Bullseye as 11.11 as it moves to LTS.

The 12.7 Bookworm install media are finishing tests - it's been quite a long day so far. For 11.11 we're part way through media tests.
We've been joined by a lot of enthusiastic folk from Cape Town who've been a great help. Always nice to see old friends and new people join us on IRC - and they've just joined us for a short video call.
This has gone well: two release day media checking and bug-squashing groups on two continents is excellent.
Dear Cape Town - feel free to join us for the next time and we'll hold the video call open for longer. If we don't see any of you here in Cambridge for mini-Debconf, we'll meet up in Brest for Debconf 25.
Russell Coker: Links August 2024
Bruce Schneier and Kim Córdova wrote an insightful article about the changes that corporations make to culture as technical debt [1]. We need anti-trust laws to be enforced before it’s too late!
Cory Doctorow wrote an insightful blog post about companies that are “too big to care” [3]. We need to break up those monopolies.
Science Alert has an interesting article on plans to get renewable energy by drilling into the magma chamber of an active volcano [4]. What I want to know is whether using the energy could reduce the power of an eruption or even prevent it from happening.
Bruce Schneier wrote an interesting article about Crowdstrike and the market incentives for brittle systems [5]. Also we need to have more formally proven software and more use of systems like seL4.
Dave’s Garage on YouTube has an interesting video about modern Mainframes [6]. Their IO capacity dwarfs the memory bandwidth of most PC servers.
Rolling Stone has an interesting story about the consequences of being a CIA agent in al Qaeda [9].
- [1] https://tinyurl.com/2d49n8al
- [2] https://tinyurl.com/26vnjl6e
- [3] https://tinyurl.com/27njkm72
- [4] https://tinyurl.com/yvrnxq6w
- [5] https://tinyurl.com/298p9zxv
- [6] https://www.youtube.com/watch?v=ouAG4vXFORc
- [7] https://www.youtube.com/watch?v=iMwepyyaj8I
- [8] https://www.youtube.com/watch?v=jW2NSrzcrIQ
- [9] https://tinyurl.com/2dfgty8z
Carl Trachte: Scalable Vector Graphics Followup
A quick follow up to the last post:
1) I spelled scalable wrong in the title; hopefully it's fixed.
2) The svg partial logo that rendered in Blogger did not render on the Planet Python feed.
3) The logo svg is too big for Blogger. I need to find a way to make a smaller (file size) one.
One more try with svg in Blogger. The png DAG Hamilton logo first:
And one more attempt:
Vincent Bernat: Fixing layout shifts caused by web fonts
In 2020, Google introduced Core Web Vitals metrics to measure some aspects of real-world user experience on the web. This blog has consistently achieved good scores for two of these metrics: Largest Contentful Paint and Interaction to Next Paint. However, optimizing the third metric, Cumulative Layout Shift, which measures unexpected layout changes, has been more challenging. Let’s face it: optimizing for this metric is not really useful for a site like this one. But getting a better score is always a good distraction. 💯
To prevent the “flash of invisible text” when using web fonts, developers should set the font-display property to swap in @font-face rules. This method allows browsers to initially render text using a fallback font, then replace it with the web font after loading. While this improves the LCP score, it causes content reflow and layout shifts if the fallback and web fonts are not metrically compatible. These shifts negatively affect the CLS score. CSS provides properties to address this issue by overriding font metrics when using fallback fonts: size-adjust, ascent-override, descent-override, and line-gap-override.
Two comprehensive articles explain each property and their computation methods in detail: Creating Perfect Font Fallbacks in CSS and Improved font fallbacks.
Interactive tuning tool
Instead of computing each property from font average metrics, I put together a tool for interactively tuning fallback fonts.1
Instructions
1. Load your custom font.
2. Select a fallback font to tune.
3. Adjust the size-adjust property to match the width of your custom font with the fallback font. With a proportional font, it is not possible to achieve a perfect match.
4. Fine-tune the ascent-override property. Aim to align the final dot of the last paragraph while monitoring the font’s baseline. For more precise adjustment, disable the “Set line height” option.
5. Modify the descent-override property. The goal is to make the two boxes match. You may need to alternate between this and the previous property for optimal results.
6. If necessary, adjust the line-gap-override property. This step is typically not required.
The process needs to be repeated for each fallback font. Some platforms may not include certain fonts. Notably, Android lacks most fonts found in other operating systems. It replaces Georgia with Noto Serif, which is not metrically-compatible.
Tool
This tool is not available from the Atom feed.
Results
For the body text of this blog, I get the following CSS definition:

@font-face {
  font-family: Merriweather;
  font-style: normal;
  font-weight: 400;
  src: url("../fonts/merriweather.woff2") format("woff2");
  font-display: swap;
}
@font-face {
  font-family: "Fallback for Merriweather";
  src: local("Noto Serif"), local("Droid Serif");
  size-adjust: 98.3%;
  ascent-override: 99%;
  descent-override: 27%;
}
@font-face {
  font-family: "Fallback for Merriweather";
  src: local("Georgia");
  size-adjust: 106%;
  ascent-override: 90.4%;
  descent-override: 27.3%;
}
font-family: Merriweather, "Fallback for Merriweather", serif;

After a month, the CLS metric improved to 0:
Recent Core Web Vitals scores for vincent.bernat.ch
About custom fonts
Using safe web fonts or a modern font stack is often simpler. However, I prefer custom web fonts. Merriweather and Iosevka, which are used in this blog, enhance the reading experience. An alternative approach could be to use Georgia as a serif option. Unfortunately, most default monospace fonts are ugly.
Furthermore, paragraphs that combine proportional and monospace fonts can create visual disruption. This occurs due to mismatched vertical metrics or weights. To address this issue, I adjust Iosevka’s metrics and weight to align with Merriweather’s characteristics.
1. Similar tools already exist, like the Fallback Font Generator, but they were missing a few features, such as the ability to load the fallback font or to have decimals for the CSS properties. And no source code. ↩︎
Andrew Cater: Debian release weekend - Bullseye and Bookworm 20240831
A double length Debian release
means the Release Team don't get much peace
What with last minute breaks
And the time that it takes
Treat them with respect today, please
The media team's on the hook
As we follow our normal play book
With laptops all primed
The images are timed
Once we're told we'll start taking our look
This is the last time for 11
And for Bookworm, it's just 12.7
Give us time for each test
As we all do our best
With our ThinkPads - I see at least seven :)
Getting ready for Akademy
Next week Akademy, KDE’s annual community conference, will take place in Würzburg, Germany. There are a few features that I actually began during various conferences throughout the years to address real-world problems. I decided to have another look at the ones that would be most useful for people travelling to Akademy from abroad or who will be giving a presentation there.
Connect to a Wi-Fi network by scanning its QR code
Wifi QR Code Scanner
Two years ago at Akademy in Barcelona the venue Wifi details were printed onto our attendee badges as a QR code. Already during the conference I started adding a scanner to the networks menu using our fantastic Prison Framework scanner feature. However, with Qt 6 on the horizon and its significant changes to Qt Multimedia I didn’t pursue it further at the time. Then, whenever I attended another sprint or got to a hotel where they handed out Wifi QR codes, I was reminded that I still hadn’t finished the thing.
Still, all prerequisites under the hood have long been merged. The first task was to move the MeCard parser (the format used for those QR codes) from the QRCA scanner app into a shared location. Since MeCard can even contain WPA-EAP configuration I wanted to have the logic for setting up NetworkManager accordingly shared, too. In preparation for adding a sub-page for the camera viewfinder I replaced the full-screen QR code generator with an integrated one.
A huge challenge was that we either need to add a new connection or update an existing one with new credentials. It took me forever to figure out why it wouldn’t update the Wifi passphrase on my network: I’m meanwhile using WPA3-SAE at home! The code for setting up NetworkManager from MeCard didn’t handle “T=SAE” and thus didn’t generate any credentials. Funnily enough, our MeCard generator did. Once that was sorted, the code worked as intended.
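For the curious, the Wi-Fi payload of such QR codes is a simple key-value string; a rough Python sketch of the parsing involved (illustrative only, ignoring the format’s escaping rules, and not the actual Prison/NetworkManager code) might look like this:

def parse_wifi_qr(payload):
    """Parse a Wi-Fi QR payload like 'WIFI:T:SAE;S:MyNet;P:secret;;'."""
    assert payload.startswith("WIFI:")
    fields = {}
    for part in payload[len("WIFI:"):].split(";"):
        key, sep, value = part.partition(":")
        if sep:
            fields[key] = value
    return fields

conf = parse_wifi_qr("WIFI:T:SAE;S:MyNet;P:secret;;")
# Treating only T=WPA as "needs a passphrase" silently skips WPA3
# networks; T=SAE must generate credentials as well.
needs_passphrase = conf.get("T") in ("WPA", "SAE")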
However, after a lot of back and forth I abandoned the idea of putting it in the Network applet directly. Using Qt Multimedia, potentially loading a bunch of backends and codecs, is probably not something we want to have in the shell. Instead, I will look into adding a button to the Network settings module. In the meantime, you can always use QRCA to scan a Wifi code.
Location-aware time zone setting
Last year’s Akademy was located in Thessaloniki, Greece, which is in a different time zone for many of its attendees. Our phones automatically adjust to that using the cellular network and we don’t even think about it. Frequently during the conference people were confused about which talk would come next when their laptop and phone disagreed on the time. Therefore, I wrote a daemon that automatically adjusts the time zone based on your physical location.
Enable location-based time zone option in “Date & Time” settings
For privacy, this uses KDE’s own GeoIP endpoint and is opt-in. It is also disabled when connected to a VPN (the endpoint could be in an entirely different location after all) or a metered connection. We’re considering adding a checkbox to Plasma’s Welcome Center on its “Privacy” page to make the feature more discoverable for laptop users. However, there are still a few quirks to be ironed out to ensure that our users don’t bring down the server hosting that service, since by definition you cannot put that sort of thing behind a CDN.
I can’t wait to travel to Würzburg next week. Given the convenient location I am positive that a lot of people and “old-timers” that I haven’t seen around in a while will stop by to say hello. Come join us, attendance is free and it’s always great fun!
Getting ready for Akademy
Next Saturday, this year’s Akademy starts in Würzburg, Germany. After a rather long absence – the last Akademy I attended in person was in Mechelen (2008) – I am very much looking forward to meeting old friends and making new ones. Mainly due to my own summer vacation plans and conflicting family matters, I was not able to make it to the event in recent years. Now, as the venue is only just over 100 kilometers from my home and I have no other commitments, I’m traveling to Würzburg.
My plan is to arrive on Friday late afternoon and join the welcome event. The conference schedule is pretty loaded and I have only decided on a few talks I definitely want to visit. For more, I need to take another look at the schedule and also the list of planned BoFs on Monday and Tuesday.
Topics I am interested in are (not in a particular order):
1. Getting KMyMoney to be properly built on the CI/CD for MacOS (both ARM and x86_64)
2. Meeting people in person I only know from the online world
3. Finding out what is needed on the project configuration (Gitlab, metadata) side to move onto KF6
For 1. I hope to get some more background information out of the KDE’s CI and CD Infrastructure talk held by Ben, Hannah, Julius and Volker on Sunday, and the CI/CD BoF on Tuesday. For the other topics I will see.
Looking forward to seeing many of you KDE fellows next week in Würzburg!
Getting ready for Akademy 2024
In less than a week this year’s Akademy starts in Würzburg, Germany, and as usual I’m very much looking forward to that :)
Talks
Akademy starts with two days full of presentations. Online participation is possible as well, you can register here.
I have two talks myself.
KDE’s CI and CD Infrastructure
Sunday, 14:30 in room 1, together with Ben, Hannah and Julius, details.
We want to give an overview of the many parts and options we have around continuous integration and deployment since the migration from Jenkins to Gitlab, to enable more people to do changes and adjustments there themselves.
Finding your way around Akademy - Venue maps and indoor routing in Kongress
Saturday, 18:00 in room 1, details.
A lightning talk about the use of OSM indoor venue maps in our conference companion app Kongress.
BoFs
Following the talks there will be three days of meetings on various topics (“BoFs”). There are fewer BoFs I’m (co-)hosting or otherwise directly involved in than last year:
- KDE PIM (Monday 14:00)
- CI/CD (Tuesday 15:00)
While that should be a bit less stressful, those will be very busy days nevertheless, with already now too many sessions on the schedule I’d like to participate in.
And of course every event essentially is just one big Itinerary BoF in disguise.
Qt Contributor Summit
Right before Akademy there’s also the Qt Contributor Summit in Würzburg.
I have been asked to give a short presentation on experiences with migrating to Qt 6 and maintaining software on top of Qt 6, both from the KDE and from my work perspective. Currently that’s planned for the morning talks session on Thursday.
Kongress
And as usual there’s also code to finish before Akademy. This time the important one has been the venue map integration for Kongress. Not only does this have to be ready for my talk about it, it’s also meant to be used at Akademy.
Time-dependent maps
A big part still missing in the previous post was support for time-dependent content. While conceptually clear (OSM has time-conditional tags and we have a parser/evaluator for the opening hours expressions used in those) I was still hoping we might get away without that.
And we would now (again), but there was an intermediate state of the Akademy schedule until two days ago where “Room 1” referred to a different physical room on Saturday and Sunday than during the rest of the week. While that might sound confusing, this is actually not unusual, so properly supporting all that makes sense anyway.
With the rooms now having distinct names again I can’t easily demonstrate the full power of time-dependent content anymore, but you’ll still notice that the venue map looks different depending on which day you open it (by opening it from events on different dates, or by waiting a week).
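For reference, the conditional tags mentioned above reuse OSM’s opening-hours grammar; a generic expression looks like this (an invented example, not Akademy’s actual map data):

Mo-Fr 08:00-18:00; Sa 10:00-14:00; 2024 Sep 07-2024 Sep 08 off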
Android dark mode support
To look good for the occasion we now also have proper Android dark mode support in KDE Frameworks, thanks to Julius’s recent work on icon recoloring, and the necessary changes in Kongress will be part of 24.08.1.
Kongress on Android in dark mode.
xCal support
With September being packed with conferences, another problem has become pressing for Kongress as well: Pretalx (a widely used open source conference management software) apparently removed access to the schedule in iCal format, something Kongress relies on. Akademy isn’t affected due to using a different system, but SotM or the Matrix Conference are.
As a solution we could roll out quickly I added basic xCal read support to KF::CalendarCore, a format Pretalx still supports. This has been integrated yesterday and should make it in the September update of KDE Frameworks.
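For reference, xCal is the XML encoding of iCalendar defined in RFC 6321; a hand-written minimal event (an illustration, not actual Pretalx output) looks roughly like this:

<icalendar xmlns="urn:ietf:params:xml:ns:icalendar-2.0">
  <vcalendar>
    <components>
      <vevent>
        <properties>
          <uid><text>talk-42@example.org</text></uid>
          <dtstart><date-time>2024-09-07T10:00:00Z</date-time></dtstart>
          <summary><text>Opening talk</text></summary>
        </properties>
      </vevent>
    </components>
  </vcalendar>
</icalendar>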
Itinerary
Since conferences involve travel for most people, Itinerary can’t go without attention either.
For making coordination of train trips easier, the “Share Journey” links you’ll get from Deutsche Bahn’s website can now be directly imported.
Akademy and all its sub-events are machine-readable on their corresponding websites as usual, and thus can also be imported by pasting/opening/dropping the corresponding URLs in(to) Itinerary.
The day trip shows some limitations though, we don’t have a good way to model that yet. It’s now imported as an event, but ideally it would be two bus trips, so you have the most crucial information (places and times of the bus departures) included.
Itinerary supports bus trips, but we have a few specialties here. The departure location of the return trip is still unknown (expecting that to be where we get dropped off), so this needs to be editable. For bus trips however Itinerary doesn’t allow arbitrary changes when editing, so you can’t accidentally end up with something that can no longer be matched against realtime data. That’s however irrelevant for chartered buses, those don’t show up on any schedule anyway.
So that bus trip is actually less of a bus trip in the public transport sense, and more an individual transport trip (ie. similar to taking your own bike or car). And that’s something we don’t support in Itinerary at all yet. If you have input on how that should look, I’d be happy to hear it in the Itinerary Matrix channel!
Hoping nobody gets lost due to an Itinerary bug on the way and very much looking forward to seeing many of you next week in Würzburg!
Carl Trachte: Scaleable Vector Graphics (svg) - Decomposing and Scaling Elements for Blogger
Last post I lamented the failure of any of the svg graphs I had for DAG Hamilton workflows to render in Blogger. I set out to rectify this and have had some initial success.
My first attempt to get svg to show up in Blogger has the DAG Hamilton logo as its subject. My approach was to bring the svg elements down to the most basic primitives I could manage, then scale and translate their individual coordinates to bring them into the view.

Fortunately, I was able to find an example from someone who had successfully rendered svg in Blogger. The subject blog post is eleven years old. It appears svg never totally caught on for some platforms. Nonetheless, I used this as a template, and, after some initial success, managed to render the Hamilton logo.
The post will step through the individual path components of the Hamilton logo (7) and show the code used to transform the coordinates to make the svg elements render at an appropriate location and size. The individual path elements that make up the logo are a large number of 3 coordinate bezier curves. The manner in which the curves are listed is very format specific to the platform that created them. Unfortunately, I cannot recall which online png to svg converter I used to create the svg. Portable Inkscape kept crashing on the large png file, so I brute forced the issue by using an online converter.
The first element of the logo is basically a purple background for the whole logo. A gap or divit can be seen just left of center on the upper part. I'll cover the manual "fixing" of this further on.
Second path (orange)
Third path (deep purple)
Fourth path (yellow top point)
Fifth path (light orangey left leg were the star to face out from the screen)
Sixth path (medium purple little triangle in the center)
Seventh path (light orangey little triangle to the right of center)
Fixing that niggling divit on the top half of the logo (the thin subvertical line).
Final product. This is the one svg inline drawing I was able to show (content size limitations of Blogger?)
The full logo refuses to render (although I swear it did before). Well, at least there is an svg element showing up on the page (a purple star). <sigh> Another png . . .
The code. I'm not strong on HTML, but it was necessary to edit this post inline. As part of that exercise, I included the post outline generation as part of the script.
# python 3.12
# blog_post_make_outline.py
"""
Attempt to scale, translate, and inline svg
elements for display in Blogger.
"""
import re
import pprint
import sys
import copy
import xml.etree.ElementTree as ET
# REGEX patterns.
PATHPAT = r'[<]path[ ]d[=]["]'
MPAT = r'M([-]*[0-9]+[.]*[0-9]*)[ ]([-]*[0-9]+[.]*[0-9]*)[ ]'
# first bezier curve coord
BEZPAT = (r'C([-]*[0-9]+[.]*[0-9]*)[ ]([-]*[0-9]+[.]*[0-9]*)[ ]'
# second bezier curve coord
r'([-]*[0-9]+[.]*[0-9]*)[ ]([-]*[0-9]+[.]*[0-9]*)[ ]'
# third bezier curve coord
r'([-]*[0-9]+[.]*[0-9]*)[ ]([-]*[0-9]+[.]*[0-9]*)[ ]')
ZPAT = r'Z[ ]'
FILLPAT = (r'["] fill[=]["]([#][A-F0-9][A-F0-9][A-F0-9]'
r'[A-F0-9][A-F0-9][A-F0-9])["][ ]')
# transform="translate(4756.96875,109.12890625)"
# transform="translate(5619,4112)"/>
TRANSFORMPAT = (r'transform[=]["]translate[(]'
r'([-]*[0-9]+[.]*[0-9]*)[,]([-]*[0-9]+[.]*[0-9]*)[)]["]')
# Output formats/constants.
BEZFMT = ('C{0:.7f} {1:.7f} '
'{2:.7f} {3:.7f} '
'{4:.7f} {5:.7f} ')
PATHFMT_OPEN = '<path d="'
PATHFMT_1 = 'M{mstartx:.5f} {mstarty:.5f} {path:s} Z '
PATHFMT_2 = '" fill="{fill:s}" transform="translate({translatex:.7f},{translatey:.7f})"'
PATHFMT_CLOSE = ' />'
SVG_TAG_OPEN = ('<svg xmlns="http://www.w3.org/2000/svg" '
'xmlns:xlink="http://www.w3.org/1999/xlink" '
"width='500px' height='500px'>")
SVG_TAG_CLOSE = '</svg>'
def parse_path(pathstring):
"""
Capture path elements in a dictionary.
pathstring is the svg string for the path (one line).
For a path comprised entirely of bezier curves
in the format (all one line):
<ns0:path d="M0 0 C2.54601018 1.57157072 5.09846344 3.13131386 7.65625 4.68359375 C39.179 . . . 0 0 Z M-690.96875 4007.87109375 C-707.702 . . . Z " fill="#C3368C" transform="translate(4756.96875,109.12890625)" />
Returns dictionary.
"""
retval = {}
patpath = re.compile(PATHPAT)
match = patpath.match(pathstring)
startindex = match.span()[1]
mpat = re.compile(MPAT)
# MPAT
match = mpat.match(pathstring[startindex:])
mpatgroups = match.groups()
retval['mpatgroups'] = []
retval['mpatgroups'].append(mpatgroups)
startindex += match.span()[1]
bezpat = re.compile(BEZPAT)
zpat = re.compile(ZPAT)
retval['paths'] = []
while match:
pathpoints = []
# BEZPAT
match = bezpat.match(pathstring[startindex:])
while match:
pathpoints.append(match.groups())
startindex += match.span()[1]
match = bezpat.match(pathstring[startindex:])
retval['paths'].append(pathpoints)
# ZPAT
match = zpat.match(pathstring[startindex:])
startindex += match.span()[1]
# Then look for MPAT
# MPAT
match = mpat.match(pathstring[startindex:])
# If MPAT not there, work on color and transform.
if not match:
continue
startindex += match.span()[1]
        retval['mpatgroups'].append(match.groups())
fillpat = re.compile(FILLPAT)
match = fillpat.match(pathstring[startindex:])
startindex += match.span()[1]
print('adding fill . . .')
fill = match.groups()[0]
retval['fill'] = fill
transformpat = re.compile(TRANSFORMPAT)
match = transformpat.match(pathstring[startindex:])
transform = match.groups()
print('adding transform . . .')
retval['transform'] = transform
return retval
def parse_all_paths(svgfilepath):
"""
Finds and parses all svg paths
within an svg file (very format
specific - bezier curves only).
Returns list of dictionaries, one
for each path line of the svg file.
"""
# Do all paths in Hamilton logo.
# Make list of dictionaries.
patpath = re.compile(PATHPAT)
with open(svgfilepath, 'r') as f:
paths = []
# Line one.
next(f)
# Line two.
next(f)
for linex in f:
print(PATHPAT)
print(linex[:30])
match = patpath.match(linex)
if not match:
break
paths.append(parse_path(linex))
return paths
def work_paths(paths, fcn):
"""
Apply an operation to all coordinates in
the bezier curve paths represented in
paths.
Also covers translate and fill.
paths is a list of dictionaries. Each
dictionary represents one line of an
svg file with a path made up of
bezier curves.
"""
# Return value.
newpaths = []
# M - start of path segment.
for pthx in paths:
newmpatgroups = []
for coordsx in pthx['mpatgroups']:
newmpatgroups.append([fcn(x) for x in coordsx])
newpaths.append({'mpatgroups':newmpatgroups})
# to Z - end of path segment.
# List of path dictionaries (paths).
for pthx, newpath in zip(paths, newpaths):
newpath['paths'] = []
# Each M to Z path segment.
for path in pthx['paths']:
curvegroup = []
for curve in path:
newcurve = [fcn(x) for x in curve]
curvegroup.append(newcurve)
newpath['paths'].append(curvegroup)
# transform and fill.
for pthx, newpath in zip(paths, newpaths):
newpath['transform'] = [fcn(x) for x in pthx['transform']]
newpath['fill'] = pthx['fill']
return newpaths
def translate_paths(paths, translation):
"""
From a two tuple of x, y translation,
adjust dictionary values for x, y
translation in each path in path
list accordingly.
Returns new dictionary
"""
# TRANSLATE
# ['mpatgroups', 'paths', 'fill', 'translate']
translated_paths = copy.deepcopy(paths)
for pathx in translated_paths:
pathx['transform'][0] += translation[0]
pathx['transform'][1] += translation[1]
return translated_paths
def get_path_strings(paths):
"""
From a list of path dictionaries,
builds one line strings for insertion
into svg file.
Returns list
"""
pathdict_2 = {'fill':None,
'translatex':None,
'translatey':None}
path_segment_dict = {'mstartx':None,
'mstarty':None,
'path':None}
pathstrings = []
for pathx in paths:
# Copy and initialize fill/translate dictionary.
fill_translate = copy.deepcopy(pathdict_2)
fill_translate['fill'] = pathx['fill']
fill_translate['translatex'] = pathx['transform'][0]
fill_translate['translatey'] = pathx['transform'][1]
# Zip together M and path segments.
path_segs = zip(pathx['mpatgroups'], pathx['paths'])
# For each path segment.
path_strings = []
for M, path_seg in path_segs:
seg_dict = copy.deepcopy(path_segment_dict)
seg_dict['mstartx'] = M[0]
seg_dict['mstarty'] = M[1]
# Build path segment string.
path_seg_strings = [BEZFMT.format(*coords) for coords in path_seg]
path = ''.join(path_seg_strings)
seg_dict['path'] = path
# Make final segment string with M (PATHFMT_1)
path_with_M = PATHFMT_1.format(**seg_dict)
path_strings.append(path_with_M)
path_all_together = ''.join(path_strings)
# Tack on fill/translate at end and beginning with d path flag.
# Add to pathstrings.
pathstrings.append(PATHFMT_OPEN +
path_all_together +
PATHFMT_2.format(**fill_translate) +
PATHFMT_CLOSE)
return pathstrings
paths = parse_all_paths('hamilton_logo_large.svg')
print('len(paths) = {0:d}'.format(len(paths)))
pprint.pprint([x for x in paths[0]])
paths = work_paths(paths, float)
scale = 0.035
scaleit = lambda x: scale * x
paths = work_paths(paths, scaleit)
# pprint.pprint(paths[0]['paths'][0])
paths = translate_paths(paths, (125, 0))
pathstrings = get_path_strings(paths)
# with open('test_paths.txt', 'w') as f:
# for pathx in pathstrings:
# print(pathx, file=f)
# print('\n\n', file=f)
GAP_FIX = "<polygon points='265 90.25, 245 143.75, 268 144.1, 295 90.25' style='fill: black;' />"
GAP_FIX_PROPER_COLOR = "<polygon points='265 90.25, 245 143.75, 268 144.1, 295 90.25' style='fill: #C3368C;' />"
TEXT = '<p>{0:s}</p>'
CODE = """
<p> </p><pre style="background: rgb(238, 238, 238); border-bottom-color: initial; border-bottom-style: initial; border-image: initial; border-left-color: initial; border-left-style: initial; border-radius: 10px; border-right-color: initial; border-right-style: initial; border-top-color: rgb(221, 221, 221); border-top-style: solid; border-width: 5px 0px 0px; color: #444444; font-family: 'Courier New', Courier, monospace; font-stretch: inherit; font-variant-east-asian: inherit; font-variant-numeric: inherit; line-height: inherit; margin-bottom: 1.5em; margin-top: 0px; overflow-wrap: normal; overflow: auto; padding: 12px; vertical-align: baseline;"><span style="font-size: 13px;">{0:s}</span></pre>
"""
textlist = ['blah blah blah',
'blah blah blah again',
'blah blah blah a third time',
'blah blah blah a fourth time',
'blah blah blah a fifth time',
'blah blah blah a sixth time',
'blah blah blah a seventh time',
'fix divit',
'final product']
fixed_divit = copy.deepcopy(pathstrings)
fixed_divit.insert(1, GAP_FIX_PROPER_COLOR)
svglist = [pathstrings[0:1],
pathstrings[0:2],
pathstrings[0:3],
pathstrings[0:4],
pathstrings[0:5],
pathstrings[0:6],
pathstrings[0:],
pathstrings[0:] + [GAP_FIX],
fixed_divit]
with open('blogpost.html', 'w') as f:
for blah, svgels in zip(textlist, svglist):
print(TEXT.format(blah), file=f)
print(SVG_TAG_OPEN, file=f)
for svgelement in svgels:
print(svgelement, file=f)
print(SVG_TAG_CLOSE, file=f)
print(TEXT.format('More blah about code'), file=f)
print(CODE.format('>>> import this'), file=f)
print('Done')
Notes:
1) I had had good intentions of including a code box (the CODE string constant in the Python code) and I hit a bit of a wall. Not only is my code a mixed bag (this blog was always intended as a learning experience and a place for trying things out), it doesn't look good. We deal.
Which brings me to the point: you hear titles like front end developer, designer, website marketer etc. and think, "Well, it's kind of like art, kind of like coding . . . sort of creative." I now know, it's coding and it's thinking and it's grinding. All respect.
2) Steven Lott recently published a book that I bought (pdf). I have found it helpful. It's very pragmatic. I'm only about 15% of the way into it, but his treatment of regular expressions as just another tool not to be scared of made me less tentative in my REGEX use (even if my REGEXes are far from elegant). Group capture with parens was really helpful for this exercise.
3) Our chief geology database administrator commented that the colors of the DAG Hamilton logo are those of the Spanish shawl nudibranch. There is a resemblance.
Photo courtesy of iNaturalist.
Thanks for stopping by.