Feeds

This week in KDE: autoscrolling

Planet KDE - Fri, 2024-07-05 23:25
New Features

You can now turn on the “autoscrolling” feature of the Libinput driver, which lets you scroll on any scrollable view by holding down the middle button of your mouse and moving the whole mouse (Evgeniy Chesnokov, Plasma 6.2.0. Link)

UI Improvements

When zooming into or out of a document in Okular using Ctrl+Scroll, it now zooms into or out of the actual cursor position, not the center of the page (Alexis Murzeau, Okular 24.08.0. Link)

Okular now scales radio buttons and checkboxes to the size of the form fields they inhabit, which looks better for forms that have huge or tiny versions of these (Pratham Gandhi, Okular 24.08.0. Link)

Dolphin now supports the systemwide “disable smooth scrolling” setting (Nathan Misner, Dolphin 24.08.0. Link)

Opening and closing Elisa’s playlist panel is no longer somewhat choppy (Jack Hill, Elisa 24.08. Link)

When quick-tiling two adjacent windows and resizing one, the other will resize too. The location of the split between them is now reset to its default position after all adjacent quick-tiled windows are closed or un-tiled (Erwin Saumweber, Plasma 6.1.2. Link)

.Desktop files in sub-folders below your desktop are now shown as they are on the desktop itself (Alexander Wilms, Plasma 6.2.0. Link)

On System Settings’ Accessibility page, the Open dialog for choosing custom bell sounds now accepts .oga files, and also tells you what types of files it supports (me: Nate Graham, Plasma 6.2.0. Link)

On System Settings Desktop Effects page, “internal” effects are no longer listed at all (even in a hidden-by-default state), which makes it more difficult for people to break their systems by accident, and also fixes an odd interaction whereby clicking the “Defaults” button would reset the default settings of internal effects changed elsewhere. You can still see the internal effects in KWin’s debug console window if needed (Vlad Zahorodnii, Plasma 6.2.0. Link)

Made a bunch of small changes to System Settings pages to align them better with the new human interface guidelines (me: Nate Graham, Plasma 6.2.0. Link 1, link 2, link 3, and link 4)

Improved the legibility of the text in Kirigami.NavigationTabBar buttons, especially on low or medium DPI screens (me: Nate Graham, Frameworks 6.4. Link)

Bug Fixes

Fixed a recent regression that caused the Powerdevil power management daemon to sometimes crash randomly when the system has any monitors connected that support DDC-based brightness control (Jakob Petsovits, Plasma 6.1.2. Link)

On System Settings’ recently redone Keyboard page, columns in the layout table are once again resizable, and they now have more sensible default widths (Wind He, Plasma 6.1.2. Link)

Fixed one source of the recent issue with certain System Settings pages being sometimes broken when opened — this one being the issue where opening the Touchpad or Networks pages would break other ones opened afterwards. We’re still investigating the other issues, which frankly make no sense and shouldn’t be happening. Some of them may be Qt regressions. Investigation is ongoing (Marco Martin, Plasma 6.1.3. Link)

Icons in the new Edit Mode’s toolbar buttons are no longer slightly blurry (Akseli Lahtinen, Plasma 6.1.3. Link)

KWin’s “open new windows under pointer” feature now actually does that, ignoring the active screen when it differs from the screen the pointer is on (Xaver Hugl, Plasma 6.1.3. Link)

Fixed multiple recent regressions and longstanding issues with System Monitor widgets displayed on panels (Arjen Hiemstra, Plasma 6.2.0):

  • Text in small pie charts overflowing onto the next line awkwardly (link)
  • Adjacent pie charts overlapping at certain panel thicknesses (link)
  • Graphs not taking enough space on a thick panel (link)

With wide color gamut turned on or an ICC color profile in use, transparent windows are no longer too transparent (Xaver Hugl, Plasma 6.2.0. Link)

Showing and hiding titlebars and frames on a scaled display no longer causes XWayland windows to move diagonally by about 1px every time (Vlad Zahorodnii, Plasma 6.2.0. Link)

Fixed multiple issues and glitches affecting floating panels via a significant code refactor (Marco Martin, Plasma 6.2.0. Link 1, link 2, and link 3)

Fixed a recent Qt regression that caused Plasma to sometimes crash when screens were disconnected (David Edmundson, Qt 6.7.3. Link 1 and link 2)

Fixed a Qt regression that caused web pages rendered by QtWebEngine (most notably in KMail’s HTML message viewer window) to display blocky, blurry, or pixelated text and graphics (David Edmundson, Qt 6.8.0. Link)

Other bug information of note:

Performance & Technical

Made the pam_kwallet library able to build with libgcrypt 1.11, restoring its ability to unlock the system wallet automatically on login (Daniel Exner, Plasma 6.1.2. Link)

Automation & Systematization

Added some UI tests to KCalc, ensuring that the recent prominent regression in functionality can’t happen again (Gabriel Barrantes, link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

As I mentioned last week, if you have multiple systems or an adventurous personality, you can really help us out by installing beta versions of Plasma using your distro’s available repos and reporting bugs. Arch, Fedora, and openSUSE Tumbleweed are examples of great distros for this purpose. So please do try out Plasma beta versions. It truly does help us! Heck, if you’re very adventurous, live on the nightly repos. I’ve been doing this full-time for 5 years with my sole computer and it’s surprisingly stable.

Does that sound too scary? Consider donating today instead! That helps too.

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

Carl Trachte: Graphviz - Editing a DAG Hamilton Graph dot File

Planet Python - Fri, 2024-07-05 21:10

Last post featured the DAG Hamilton generated graphviz graph shown below. I'll be dressing this up a little and highlighting some functionality. For the toy example here, the script employed is a bit of overkill. For a bigger workflow, it may come in handy.





I'll start with the finished products:

1) A Hamilton logo and a would-be company logo get added (manually; the Data Inputs Highlighted subtitle is there for later processing when we highlight functionality).
2) through 4) are done programmatically (code is shown further down). I saw an example on the Hamilton web pages that used aquamarine as the highlight color; I liked that, so I stuck with it.

2) Data source and data source function highlighted.



3) Web scraping functions highlighted.



4) Output nodes highlighted.


A few observations and notes before we look at configuration and code: I've found the charts to be really helpful in presenting my workflow to users and leadership (full disclosure: my boss liked some initial charts I made; my dream of the PowerPoint that solves all scripter<->customer communication challenges is not yet reality, but for the first time in a long time, I have hope).

In the web scraping highlighted diagram, you can pretty clearly see that the data_with_company node feeds into the commodity_word_counts node. The domain-specific rationale from the last blog post is that I don't want to count every "Barrick Gold" company name occurrence as another mention of "Gold" or "gold."

Toy example notwithstanding, in real life, being able to show where something branches critically is a real help. Mismatches between what a script is assumed to be doing and what it is actually doing can be costly in terms of time and productivity for all parties. It's invaluable to be able to say and show things like, "What it's doing over here doesn't carry over to that other mission critical part you're really concerned with; it's only for purposes of the visualization, which lies over here on the diagram" or "This node up here representing <the real life thing> is your sole source of input for this script; it is not looking at <other real world thing> at all."

graphviz and diagrams like this have been around for decades - UML, database schema visualizations, etc. What makes this whole DAG Hamilton thing better for me is how easy and accessible it is. I've seen C++ UML diagrams over the years (all respect to the C++ people - it takes a lot of ability, discipline, and effort); my first thought is often, "Oh wow . . . I'm not sure I have what it takes to do that . . . and I'm not sure I'd want to . . ."

Enough rationalization and qualifying - on to the config and the code!

I added the title and logos manually. The assumption that the graphviz dot file output of DAG Hamilton will always be in the format shown would be premature and probably wrong. It's an implementation detail subject to change and not a feature. That said, I needed some features in my graph outputs and I achieved them this one time.

Towards the top of the dot file is where the title goes:

// Dependency Graph
digraph {
        labelloc="t"
        label=<<b>Toy Web Scraping Script Run Diagram<BR/>Data Inputs Highlighted</b>> fontsize="36" fontname=Helvetica

labelloc="t" puts the text at the top of the graph (t for top, I think).
// Dependency Graph
digraph {
        labelloc="t"
        label=<<b>Toy Web Scraping Script Run Diagram<BR/>Data Inputs Highlighted</b>> fontsize="36" fontname=Helvetica
        hamiltonlogo [label="" image="hamiltonlogolarge.png" shape="box", width=0.6, height=0.6, fixedsize=true]
        companylogo [label="" image="fauxcompanylogo.png" shape="box", width=5.10 height=0.6 fixedsize=true]

The DAG Hamilton logo listed first appears to end up in the upper left part of the diagram most of the time (this is an empirical observation on my part; I don't have a super great handle on the internals of graphviz yet).

Getting the company logo next to it requires a bit more effort. A StackOverflow exchange had a suggestion of connecting it invisibly to an initial node. In this case, that would be the data source. Inputs in DAG Hamilton don't get listed in the graphviz dot file by their names, but rather by the node or nodes they are connected to: _parsed_data_inputs instead of "datafile" like you might expect. I have a preference for listing my input nodes only once (deduplicate_inputs=True is the keyword argument to DAG Hamilton's driver object's display_all_functions method that makes the graph).
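
For reference, that display_all_functions call appears in the run.py shown in the last post; here is a minimal sketch of it pulled out on its own:

from hamilton import driver
import toyscriptiii as ts

dr = driver.Builder().with_modules(ts).build()
# deduplicate_inputs=True lists each input node only once in the dot output;
# keep_dot=True retains the dot source file for the manual edits described here.
dr.display_all_functions("ts.png", deduplicate_inputs=True, keep_dot=True, orient='BR')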

The change is about one third of the way down the dot file where the node connection edges start getting listed:

parsed_data -> data_with_wikipedia
        _parsed_data_inputs [label=<<table border="0"><tr><td>datafile</td><td>str</td></tr></table>> fontname=Helvetica margin=0.15 shape=rectangle style="filled,dashed" fillcolor="#ffffff"]
        companylogo -> _parsed_data_inputs [style=invis]

DAG Hamilton has a dashed box for script inputs. That's why there is all that extra description inside the square brackets for that node. I manually added the fillcolor="#ffffff" at the end. It's not necessary for the chart (I believe the default fill of white, #ffffff, was specified near the top of the file), but it is necessary for the code I wrote to replace the existing color with something else. Otherwise, it does not affect the output.
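
To make the color swap concrete, here is the replacement arithmetic my editing code (shown below) performs, as a standalone sketch on a made-up dot line:

line = 'style="filled,dashed" fillcolor="#ffffff"]'
idx = line.find('fillcolor')
FILLCOLORSTRLEN = 12                  # len('fillcolor="#')
AQUAMARINE = '7fffd4'
# Keep everything through 'fillcolor="#', swap in the six hex digits,
# then keep the rest of the line.
print(line[:idx + FILLCOLORSTRLEN] + AQUAMARINE +
      line[idx + FILLCOLORSTRLEN + len(AQUAMARINE):])
# style="filled,dashed" fillcolor="#7fffd4"]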

I think that's it for manual prep.

On to the code. Both DAG Hamilton and graphviz have APIs for customizing the graphviz dot file output. I've opted to approach this with brute force text processing. For my needs, this is the best option. YMMV. In general, text processing any code or configuration tends to be brittle. It worked this time.

# python 3.12
"""Try to edit properties of graphviz output."""

import sys
import re
import itertools

import graphviz

INPUT = 'ts_with_logos_and_colors'

FILLCOLORSTRLEN = 12
AQUAMARINE = '7fffd4'
COLORLEN = len(AQUAMARINE)

BOLDED = ' penwidth=5'
BOLDEDEDGE = ' [penwidth=5]'

NODESTOCOLOR = {'data_source':['_parsed_data_inputs',
                               'parsed_data'],
                'webscraping':['data_with_wikipedia',
                               'colloquial_company_word_counts',
                               'data_with_company',
                               'commodity_word_counts'],
                'output':['info_output',
                          'info_dict_merged',
                          'wikipedia_report']}

EDGEPAT = r'\b{0:s}\b[ ][-][>][ ]\b{1:s}\b'

TITLEPAT = r'Toy Web Scraping Script Run Diagram[<]BR[/][>]'
ENDTITLEPAT = r'</b>>'

# Two tuples as values for edges.
EDGENODESTOBOLD = {'data_source':[('_parsed_data_inputs', 'parsed_data')],
                   'webscraping':[('data_with_wikipedia', 'colloquial_company_word_counts'),
                                  ('data_with_wikipedia', 'data_with_company'),
                                  ('data_with_wikipedia', 'commodity_word_counts'),
                                  ('data_with_company', 'commodity_word_counts')],
                   'output':[('data_with_company', 'info_output'),
                             ('colloquial_company_word_counts', 'info_dict_merged'),
                             ('commodity_word_counts', 'info_dict_merged'),
                             ('info_dict_merged', 'wikipedia_report'),
                             ('data_with_company', 'info_dict_merged')]}

OUTPUTFILES = {'data_source':'data_source_highlighted',
               'webscraping':'web_scraping_functions_highlighted',
               'output':'output_functions_highlighted'}

TITLES = {'data_source':'Data Sources and Data Source Functions Highlighted',
          'webscraping':'Web Scraping Functions Highlighted',
          'output':'Output Functions Highlighted'}

def get_new_source_nodecolor(src, nodex):
    """
    Return new source string for graphviz
    with selected node colored aquamarine.

    src is the original graphviz text source
    from file.

    nodex is the node to have its color edited.
    """
    # Full word, exact match.
    wordmatchpat = r'\b' + nodex + r'\b'
    pat = re.compile(wordmatchpat)
    # Empty string to hold full output of edited source.
    src2 = ''
    match = re.search(pat, src)
    # nodeidx = src.find(nodex)
    nodeidx = match.span()[0]
    print('nodeidx = ', nodeidx)
    src2 += src[:nodeidx]
    idxcolor = src[nodeidx:].find('fillcolor')
    print('idxcolor = ', idxcolor)
    # fillcolor="#b4d8e4"
    # 012345678901234567
    src2 += src[nodeidx:nodeidx + idxcolor + FILLCOLORSTRLEN]
    src2 += AQUAMARINE
    currentposit = nodeidx + idxcolor + FILLCOLORSTRLEN + COLORLEN
    src2 += src[currentposit:]
    return src2

def get_new_title(src, title):
    """
    Return new source string for graphviz
    with new title part of header.

    src is the original graphviz text source
    from file.

    title is a string.
    """
    # Empty string to hold full output of edited source.
    src2 = ''
    match = re.search(TITLEPAT, src)
    titleidx = match.span()[1]
    print('titleidx = ', titleidx)
    src2 += src[:titleidx]
    idxendtitle = src[titleidx:].find(ENDTITLEPAT)
    print('idxendtitle = ', idxendtitle)
    src2 += title
    currentposit = titleidx + idxendtitle
    print('currentposit = ', currentposit)
    src2 += src[currentposit:]
    return src2

def get_new_source_penwidth_nodes(src, nodex):
    """
    Return new source string for graphviz
    with selected node having bolded border.

    src is the original graphviz text source
    from file.

    nodex is the node to have its box bolded.
    """
    # Full word, exact match.
    wordmatchpat = r'\b' + nodex + r'\b'
    pat = re.compile(wordmatchpat)
    # Empty string to hold full output of edited source.
    src2 = ''
    match = re.search(pat, src)
    nodeidx = match.span()[0]
    print('nodeidx = ', nodeidx)
    src2 += src[:nodeidx]
    idxbracket = src[nodeidx:].find(']')
    src2 += src[nodeidx:nodeidx + idxbracket]
    print('idxbracket = ', idxbracket)
    src2 += BOLDED
    src2 += src[nodeidx + idxbracket:]
    return src2

def get_new_source_penwidth_edges(src, nodepair):
    """
    Return new source string for graphviz
    with selected node pair having bolded edge.

    src is the original graphviz text source
    from file.

    nodepair is the two node tuple to have
    its edge bolded.
    """
    # Full word, exact match.
    edgepat = EDGEPAT.format(*nodepair)
    print(edgepat)
    pat = re.compile(edgepat)
    # Empty string to hold full output of edited source.
    src2 = ''
    match = re.search(pat, src)
    edgeidx = match.span()[1]
    print('edgeidx = ', edgeidx)
    src2 += src[:edgeidx]
    src2 += BOLDEDEDGE
    src2 += src[edgeidx:]
    return src2

def makehighlightedfuncgraphs():
    """
    Cycle through functionalities to make specific
    highlighted functional parts of the workflow
    output graphs.

    Returns dictionary of new filenames.
    """
    with open(INPUT, 'r') as f:
        src = f.read()

    retval = {}
    for functionality in TITLES:
        print(functionality)
        src2 = src
        retval[functionality] = {'dot':None,
                                 'svg':None,
                                 'png':None}
        src2 = get_new_title(src, TITLES[functionality])
        # list of nodes.
        to_process = (nodex for nodex in NODESTOCOLOR[functionality])
        countergenerator = itertools.count()
        count = next(countergenerator)
        print('\nDoing node colors\n')
        for nodex in to_process:
            print(nodex)
            src2 = get_new_source_nodecolor(src2, nodex)
            count = next(countergenerator)
        to_process = (nodex for nodex in NODESTOCOLOR[functionality])
        countergenerator = itertools.count()
        count = next(countergenerator)
        print('\nDoing node bolding\n')
        for nodex in to_process:
            print(nodex)
            src2 = get_new_source_penwidth_nodes(src2, nodex)
            count = next(countergenerator)
        print('Bolding edges . . .')
        to_process = (nodex for nodex in EDGENODESTOBOLD[functionality])
        countergenerator = itertools.count()
        count = next(countergenerator)
        for nodepair in to_process:
            print(nodepair)
            src2 = get_new_source_penwidth_edges(src2, nodepair)
            count = next(countergenerator)
        print('Writing output files . . .')
        outputfile = OUTPUTFILES[functionality]
        with open(outputfile, 'w') as f:
            f.write(src2)
        graphviz.render('dot', 'png', outputfile)
        graphviz.render('dot', 'svg', outputfile)

makehighlightedfuncgraphs()

Thanks for stopping by.

Categories: FLOSS Project Planets

TestDriven.io: Developing GraphQL APIs in Django with Strawberry

Planet Python - Fri, 2024-07-05 17:42
This tutorial details how to integrate GraphQL with Django using Strawberry.
Categories: FLOSS Project Planets

ImageX: The Gems of Drupal 10.3: Exploring What’s New in the Release

Planet Drupal - Fri, 2024-07-05 17:02

Authored by Nadiia Nykolaichuk.

Drupal is ceaselessly evolving, with the best Drupal minds nurturing brilliant ideas and implementing them in new releases. Six months ago, Drupal 10.2 rolled out with a set of exciting enhancements. Now it’s time to celebrate: Drupal 10.3 was officially released on June 20.

Categories: FLOSS Project Planets

FSF Blogs: Share free software with your friends and colleagues

GNU Planet! - Fri, 2024-07-05 16:54
Have you ever wondered how to get a friend or colleague or even a complete stranger hooked up with free software? Here's the ultimate guide.
Categories: FLOSS Project Planets


Wim Leers: XB week 6: diagrams & meta issues

Planet Drupal - Fri, 2024-07-05 14:07

A week and a half prior, Lee “larowlan” + Jesse fixed a bug in the undo/redo functionality and added the first unit test (#3452895), and added it to CI (where it now runs immediately, no need to wait for composer to run JS unit tests!). Except … the unit tests didn’t actually run — oops! Rectifying that revealed a whole range of new Cypress challenges, which Ben “bnjmnm” worked tirelessly to solve during the entire 5th week; it was merged on Wednesday of this week :)

Anybody who has contributed to the drupal.org GitLab CI templates knows how painful this can be!

Missed a prior week? See all posts tagged Experience Builder.

Goal: make it possible to follow high-level progress by reading ~5 minutes/week. I hope this empowers more people to contribute when their unique skills can best be put to use!

For more detail, join the #experience-builder Slack channel. Check out the pinned items at the top!

As alluded to in last week’s update, I’m shifting my focus to coordinating.

That, together with quite a few people being out or preparing for webinars or Drupal Dev Days Burgas, means this is an unusually short update — also because I left on vacation on Friday the 21st of June.

Before going on vacation, I wanted to ensure work could continue in my absence because we need to get to the point where Lauri’s vision is accessible in both UX wireframe form (Lauri’s working on that with Acquia UX) and technical diagram form (up to me to get that going). So, since last week:

  1. Lauri created #3454094: Milestone 0.1.0: Experience Builder Demo. I created #3455753: Milestone 0.2.0: Experience Builder-rendered nodes. Anything that is not necessary for either of those two should not be worked on at this time. Together they form “the early phase” — milestone 0.1.0 has a hard DrupalCon Barcelona 2024 deadline in September; 0.2.0 does not have a similarly firm date.
  2. As much of 0.2.0 as possible should already be in 0.1.0, which means ideally the back end is ahead of the front end. That’s what #3450586: [META] Early phase back-end work coordination and #3450592: [META] Early phase front-end work coordination are for. I did a big update to have the next ~10 or so back-end things to build spelled out in detail in concrete issues, with vaguer descriptions for the things that are further out and subject to change anyway.
  3. Initial diagram of the data model as currently partially implemented and the direction we’ve been going in … followed by a significant expansion of detail. You can see the diagrams on GitLab, in the docs/diagrams directory.
  4. The JSON-based data storage model is influenced significantly by some of the product requirements, and what those are and their exact purpose has not been very clear. To fix that, I created [later phase] [META] 7. Content type templates — aka “default layouts” — affects the tree+props data model (which updates the data model diagram!) — and Lauri recorded a video with his thinking around this, in which he walks through two diagrams: one for data sources + design system, one that breaks down a concrete content type.
  5. A lot of discussion happened among Lauri, catch and me on [META] Configuration management: define needed config entity types, which needs a lot more clarity before all config entity types can be implemented.

One pretty cool issue landed this week that drives home that second item: #3455898: Connect client & server, with zero changes to client (UI): rough working endpoints that mimic the UI’s mocks — thanks to that, the PoC UI is now optionally populated by the first article node, unless you enable development mode (see ui/README.md), in which case it uses dummy data not served by Drupal. A small but hacky change, but an important pragmatic step in the right direction :) And it unblocks Jesse on next steps on the UI!

Try it yourself locally if you like, but there’s not much you can do yet.
Install the 0.x branch — the “Experience Builder PoC” toolbar item takes you there!

Weeks 7 and 8 will be special editions: I will have been completely absent during week 7 (plus, it’ll be Drupal Dev Days!), and present only for the last day of week 8. I’ll catch up on what happened and do a write-up both for myself and for all of you!

Thanks to Lauri for reviewing this!

Categories: FLOSS Project Planets

Gábor Hojtsy: Continuous forward compatibility checking of extensions for Drupal 12, 13, etc

Planet Drupal - Fri, 2024-07-05 13:34

We still keep improving the ecosystem readiness and tooling with each new major Drupal core version. Drupal 11 is to be released in a few weeks, in the week of July 29 (so probably the first days of August), and almost half of the top 200 modules are already ready. But we need to keep thinking ahead.

The Project Update Bot (originally built by Ted Bowman at Acquia and since then very actively owned and improved by Björn Brala at SWIS) posted into more than 7600 project issue queues on Drupal.org with merge request suggestions to improve and in many cases solve compatibility with the upcoming major version. 

The bot is a clever combination of Upgrade Status and drupal-rector with some custom decision logic. So humans can also run those tools! But what if we automate it even more? What if we help pre-empt forward-incompatible code getting into modules in the first place?

Categories: FLOSS Project Planets

Sahil Dhiman: Atleast Not Written by an AI

Planet Debian - Fri, 2024-07-05 12:19

I keep going back and correcting a boatload of grammatical and other errors in my posts here. I feel somewhat embarrassed at how such mistakes slip through when I proofread. Everything looked fine back then, and suddenly a mistake crops up in the text, one everyone else might have already noticed by now. A thought stuck around after that: those mistakes signify that the text was written by a real human, and humans make mistakes. :)

PS - Even LanguageTool (non-premium) couldn’t identify those errors.

Categories: FLOSS Project Planets

The Drop Times: Drupal Usage in Government: A Data-Driven Study of CMS Adoption Patterns

Planet Drupal - Fri, 2024-07-05 12:14
This study published in The DropTimes by Veniz Maja Guzman, SEO Expert & Content Strategist at Promet Source, uncovers the growing trend of Drupal adoption in government websites, correlating with entity size. Highlighting Drupal's scalability and robust features, the research shows large entities have the highest adoption rates. However, Drupal's benefits, including cost-effectiveness, security, and customization, are valuable for all government levels. The study emphasizes the need for better education and marketing to increase Drupal adoption among smaller entities, demonstrating its flexibility and potential for long-term growth in the public sector. Discover the full insights and implications for government website modernization.
Categories: FLOSS Project Planets

Web Review, Week 2024-27

Planet KDE - Fri, 2024-07-05 11:12

Let’s go for my web review for the week 2024-27.

Online anonymity: study found ‘stable pseudonyms’ created a more civil environment than real user names 

Tags: tech, internet, anonymity, privacy

There’s clearly an interesting balance between full anonymity and no anonymity at all. This is a path to keep discussions genuine and civil.

https://theconversation.com/online-anonymity-study-found-stable-pseudonyms-created-a-more-civil-environment-than-real-user-names-171374


Telegram says it has ‘about 30 engineers’; security experts say that’s a red flag | TechCrunch

Tags: tech, telegram, security, criticism

This organization indeed doesn’t seem healthy, especially given the amount of user data they are responsible for.

https://techcrunch.com/2024/06/24/experts-say-telegrams-30-engineers-team-is-a-security-red-flag/?guccounter=1


ChatGPT is bullshit | Ethics and Information Technology

Tags: tech, philosophy, ai, machine-learning, gpt, ethics

Makes a strong case for why LLMs are better described as “bullshit machines”. In any case this is a good introduction to bullshit as a philosophical concept. I guess with our current relationship to truth these are products well suited to their context…

https://link.springer.com/article/10.1007/s10676-024-09775-5


I received an AI email - Tim Hårek

Tags: tech, ai, machine-learning, gpt, spam

A new era of spam is upon us… this is going to be annoying to filter out.

https://timharek.no/blog/i-received-an-ai-email


regreSSHion: RCE in OpenSSH’s server, on glibc-based Linux systems

Tags: tech, ssh, security

Make sure your OpenSSH server is up to date.

https://www.qualys.com/2024/07/01/cve-2024-6387/regresshion.txt


POSIX 2024 Changes

Tags: tech, unix, posix, system, standard

From the perspective of a given implementation. Still, this is a good list of what POSIX 2024 changes. I’m particularly interested to see that per-file-descriptor advisory locks finally made it to the standard. Still some progress to make in this department, but it’s a good step already.

https://sortix.org/blog/posix-2024/


Serving a billion web requests with boring code - llimllib notes

Tags: tech, architecture, services, complexity, go, postgresql, databases, react

Nice return on experience of using a simple stack to serve loads of web requests.

https://notes.billmill.org/blog/2024/06/Serving_a_billion_web_requests_with_boring_code.html


Trip report: Summer ISO C++ standards meeting (St Louis, MO, USA) – Sutter’s Mill

Tags: tech, c++, standard

Looks like C++26 is going to be a big deal. The reflection and generation features alone are going to be a game changer. Now if it also gets contracts it’d be really nice.

https://herbsutter.com/2024/07/02/trip-report-summer-iso-c-standards-meeting-st-louis-mo-usa/


Reasons to use your shell’s job control

Tags: tech, shell, processes

This is too often underestimated. This article shows nice uses of job control.

https://jvns.ca/blog/2024/07/03/reasons-to-use-job-control/


X-Ray vision for Linux systems | 0x.tools

Tags: tech, linux, profiling, debugging, tools

Nice suite of tools. The eBPF based ones look promising.

https://0x.tools/


Modern Good Practices for Python Development · Field Notes

Tags: tech, programming, python

Obviously very opinionated. Still probably a nice list to pick from when making your own project specific coding guidelines.

https://www.stuartellis.name/articles/python-modern-practices/


A Structured Approach to Custom Properties

Tags: tech, web, css, frontend, maintenance

Interesting approach to structure CSS custom properties. Should help a bit with maintainability.

https://keithjgrant.com/posts/2024/06/a-structured-approach-to-custom-properties/


Synchronous Core, Asynchronous Shell

Tags: tech, programming, asynchronous

Not really Rust specific, this might be an interesting way to structure your code once async gets introduced. Should avoid some of the usual traps.

https://blog.sulami.xyz/posts/sync-core-async-shell/


There’s plenty of room at the Top: What will drive computer performance after Moore’s law? | Science

Tags: tech, hardware, software, performance

As Moore’s law fades away this question is indeed essential. Looks like there will be more pressure on software and algorithms than before (at last one might say, we had decades of waste there). Streamlining hardware architectures will have a role too, we might see simpler cores in greater numbers.

https://www.science.org/doi/10.1126/science.aam9744


TDD is Not Hill Climbing - by Kent Beck

Tags: tech, tdd, tests

Starting from a wrong analogy to raise real thinking and questions about TDD.

https://tidyfirst.substack.com/p/tdd-is-not-hill-climbing


Code Reviews Do Find Bugs

Tags: tech, codereview

Good reasons to really make sure your organization practice code reviews.

https://two-wrongs.com/code-reviews-do-find-bugs.html


Quality and productivity are not necessarily mutually exclusive

Tags: tech, quality, productivity

Good reminder that those two aspects are not necessarily competing with each other. In the long run quality improves productivity. In the short term it might as well.

https://www.haskellforall.com/2024/07/quality-and-productivity-are-not.html?m=1


Planning fallacy - The Decision Lab

Tags: management, organization, cognition, planning, bias

Very good primer on a widespread and very hard to avoid bias. This is why it’s hard for projects to properly meet deadlines.

https://thedecisionlab.com/biases/planning-fallacy


The 4 keys to creating team accountability

Tags: tech, team, organization, leadership, management

Interesting tips and actions to help frame the conversation. The goal is to get the team better self-organized and directed.

https://newsletter.canopy.is/p/the-4-keys-to-creating-team-accountability


Bye for now!

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending July 5th, 2024

Planet Drupal - Fri, 2024-07-05 11:09

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

drunomics: A Journey Towards Sustainability and Team Building

Planet Drupal - Fri, 2024-07-05 09:15
A Journey Towards Sustainability and Team Building: drunomics Team Event 2024, Burgas, Bulgaria. The drunomics team gathered for an event full of excitement and inspiration in Burgas, Bulgaria. From bonding activities to insightful discussions and workshops, it was a day full of energy and enthusiasm. Want to know more? Our blog post covers all the key moments and takeaways from this fantastic day!

At drunomics GmbH, our dedication to sustainable digital practices is at the heart of everything we do. We see technology as a powerful force for positive change, and our recent team event in Burgas, Bulgaria, emphasized sustainability as the focal point of the event. Here is a glimpse into the vibrant discussions, collaborative learning, and memorable moments that made this event truly special.

Nurturing Sustainability

1. Green UX Design Insights

During our event, we immersed ourselves in the principles of Green UX design. Our discussions centered on creating digital experiences that are both user-friendly and environmentally conscious. Here are some of the key insights we shared:

  • Optimizing Code for Efficiency: Streamlining scripts and using efficient algorithms not only boosts performance but also cuts down on energy consumption, contributing to a greener web.
  • Data Transfer Reduction: Minimizing data transfer is crucial for sustainability. We explored methods like lazy loading, content compression, and efficient caching to create smoother user experiences while reducing our digital carbon footprint.
  • Balancing Aesthetics and Sustainability: Green UX is about finding harmony between visual appeal and resource efficiency. Thoughtful use of images, animations, and fonts can achieve this balance effectively.
2. User-Centric Sustainability

Our discussions also focused on empowering users to make eco-friendly choices through smart, intuitive design:

  • Empowering Users: We brainstormed ways to integrate sustainability into user interactions, such as through informative tool-tips, personalized recommendations, and eco-friendly badges.

  • Behavioral Nudges: Subtle prompts within the user experience can encourage eco-friendly behavior, like promoting energy-saving modes, suggesting public transportation options, or highlighting sustainable product choices.
3. Collaborative Learning

The event was an engaging forum for sharing ideas and fostering dynamic discussions. We challenged assumptions and explored new perspectives on integrating sustainability into our projects. This collaborative approach helped us discover innovative solutions and prepared us to infuse eco-consciousness into every stage of our work.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #211: Python Doesn't Round Numbers the Way You Might Think

Planet Python - Fri, 2024-07-05 08:00

Does Python round numbers the same way you learned back in math class? You might be surprised by the default method Python uses and the variety of ways to round numbers in Python. Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
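
For a taste of what they discuss: Python's built-in round() uses round-half-to-even ("banker's rounding"), so ties go to the nearest even integer rather than always rounding up:

>>> round(0.5)
0
>>> round(1.5)
2
>>> round(2.5)
2
>>> round(3.5)
4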


Categories: FLOSS Project Planets

mark.ie: A bash script to install different Drupal profiles the easy way

Planet Drupal - Fri, 2024-07-05 05:49

Over the past few weeks I've been sharing handy ways to set up Drupal for easier Drupal core development. Here's a bash script for installing Drupal and allowing you to choose what profile you want.

Categories: FLOSS Project Planets

The Python Show: Dashboards in Python with Streamlit

Planet Python - Thu, 2024-07-04 21:20

This week, I chatted with Channin Nantasenamat about Python and the Streamlit web framework.

Specifically, we chatted about the following topics:

  • Python packages

  • Streamlit

  • Teaching bioinformatics

  • Differences in data science disciplines

  • Being a YouTuber

  • and much more!

Links
Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 272 released

Planet Debian - Thu, 2024-07-04 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 272. This version includes the following changes:

[ Chris Lamb ]
* Move away from using DSA OpenSSH keys in tests; support has been
  removed in OpenSSH 9.8p1. (Closes: reproducible-builds/diffoscope#382)
* Move to assert_diff helper in test_openssh_pub_key.py
* Update copyright years.

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Carl Trachte: DAG Hamilton Workflow for Toy Text Processing Script

Planet Python - Thu, 2024-07-04 18:04

Hello. It's been a minute.

I was fortunate to attend PyCon US in Pittsburgh earlier this year. DAGWorks had a booth on the expo floor where I discovered Hamilton. The project grabbed my attention as something that could help organize and present my code workflow better. My reaction could be compared to browsing Walmart while picking up a hardware item and seeing the perfect storage medium for your clothes or crafts at a bargain price, but even better, having someone there to explain the whole thing to you. The folks at the booth were really helpful.




Below I take on a contrived web scraping (it's crude) script in my domain (metals mining) and create a Hamilton workflow from it.

Pictured below is the Hamilton flow in the graphviz output format the project uses for flowcharts (graphviz has been around for decades - an oldie but goodie as it were).





I start with a csv file that has some really basic data on three big American metal mines (I did have to research the Wikipedia addresses - for instance, I originally looked for the Goldstrike Mine under the name "Post-Betze." It goes by several different names and encompasses several mines - more on that anon):

mine,state,commodity,wikipedia page,colloquial association
Red Dog,Alaska,zinc,https://en.wikipedia.org/wiki/Red_Dog_mine,Teck
Goldstrike,Nevada,gold,https://en.wikipedia.org/wiki/Goldstrike_mine,Nevada Gold Mines
Bingham Canyon,Utah,copper,https://en.wikipedia.org/wiki/Bingham_Canyon_Mine,Kennecott

Basically, I am going to attempt to scrape Wikipedia for information on who owns the three mines. Then I will try to use heuristics to gather information on what I think I know about them and gauge how up to date the Wikipedia information is.

Hamilton uses a system whereby you name your functions in a noun-like fashion ("def stuff()" instead of "def getstuff()") and use those names as the parameter names of the other functions in the workflow. This is what allows the tool to check your workflow for inconsistencies (types, for instance) and build the graphviz chart shown above.
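
A minimal sketch of the convention (hypothetical function names, not from the toy script):

def raw_text() -> str:
    """Noun-style name: 'raw_text', not 'get_raw_text'."""
    return 'zinc copper gold'

def word_count(raw_text: str) -> int:
    """The parameter name raw_text tells Hamilton to feed this
    function the return value of raw_text() above."""
    return len(raw_text.split())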

You can use separate modules with functions and import them. I've done some of this on the bigger workflows I work with. Your Hamilton functions then end up being little one-liners that call the bigger functions in the modules. This is necessary if you have functions you use repeatedly in your workflow that take different values at different stages. For this toy project, I've kept the whole thing self-contained in one module, toyscriptiii.py (yes, the iii in the filename represents my multiple failed attempts at web scraping and text processing - it's harder than it looks).
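
Something like this, with hypothetical module and function names:

# bigmodule.py - reusable functions called repeatedly in the workflow.
import re

def count_pattern(text: str, pattern: str) -> int:
    """Count whole occurrences of pattern in text."""
    return len(re.findall(pattern, text))

# Hamilton-visible module - little one-liners that pick the values for each stage.
import bigmodule

def commodity_mentions(wikipediatext: str) -> int:
    return bigmodule.count_pattern(wikipediatext, r'\b[Gg]old\b')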

Below is the Hamilton main file run.py (I believe the "run.py" name is convention.) I have done my best to preserve the dictionary return values as "faux immutable" through use of the copy module in each function. This helps me in debugging and examining output, much of which can be done from the run.py file (all the return values are stored in a dictionary). I've worked with a dataset with about 600,000 rows that had about 10 nodes. My computer has 32GB of RAM (Windows 11); it handled memory fine (less than half). For really big data, keeping all these dictionaries in memory might be a problem.

# python 3.12
"""Hamilton demo."""
import sys
import pprint
from hamilton import driver
import toyscriptiii as ts
dr = driver.Builder().with_modules(ts).build()
dr.display_all_functions("ts.png", deduplicate_inputs=True, keep_dot=True, orient='BR')
results = dr.execute(['parsed_data',
                      'data_with_wikipedia',
                      'data_with_company',
                      'info_output',
                      'commodity_word_counts',
                      'colloquial_company_word_counts',
                      'info_dict_merged',
                      'wikipedia_report'],
                     inputs={'datafile':'data.csv'})

pprint.pprint(results['info_dict_merged'])
print(results['info_output'])
print(results['wikipedia_report'])

The main toy module with functions configured for the Hamilton graph:

# python 3.12
"""Toy script.

Takes some input from a csv file on big American
mines and looks at Wikipedia text for some extra
context."""

import copy
import pprint
import sys

from urllib import request
import re

from bs4 import BeautifulSoup

def parsed_data(datafile:str) -> dict:
    """
    Get csv data into a dictionary keyed on mine name.
    """
    retval = {}
    with open(datafile, 'r') as f:
        headers = [x.strip() for x in next(f).split(',')]
        for linex in f:
            vals = [x.strip() for x in linex.split(',')]
            retval[vals[0]] = {key:val for key, val in zip(headers, vals)}
    pprint.pprint(retval)
    return retval

def data_with_wikipedia(parsed_data:dict) -> dict:
    """
    Connect to wikipedia sites and fill in
    raw html data.

    Return dictionary.
    """
    retval = copy.deepcopy(parsed_data)
    for minex in retval:
        obj = request.urlopen(retval[minex]['wikipedia page'])
        html = obj.read()
        soup = BeautifulSoup(html, 'html.parser')
        print(soup.title)
        # Text from html and strip out newlines.
        newstring = soup.get_text().replace('\n', '')
        retval[minex]['wikipediatext'] = newstring
    return retval

def data_with_company(data_with_wikipedia:dict) -> dict:
    """
    Fetches company ownership for mine out of
    Wikipedia text dump.

    Returns a new dictionary with the company name
    without the big wikipedia text dump.
    """
    # Wikipedia setup for mine company name.
    COMPANYPAT = r'[a-z]Company'
    # Lower case followed by upper case heuristic.
    ENDCOMPANYPAT = '[a-z][A-Z]'
    retval = copy.deepcopy(data_with_wikipedia)
    companypat = re.compile(COMPANYPAT)
    endcompanypat = re.compile(ENDCOMPANYPAT)
    for minex in retval:
        print(minex)
        match = re.search(companypat, retval[minex]['wikipediatext'])
        if match:
            print('Company match span = ', match.span())
            companyidx = match.span()[1]
            match2 = re.search(endcompanypat, retval[minex]['wikipediatext'][companyidx:])
            print('End Company match span = ', match2.span())
            retval[minex]['company'] = retval[minex]['wikipediatext'][companyidx:companyidx + match2.span()[0] + 1]
        # Get rid of big text dump in return value.
        retval[minex].pop('wikipediatext')
    return retval

def info_output(data_with_company:dict) -> str:
    """
    Prints some output text to a file for each
    mine in the data_with_company dictionary.

    Returns string filename of output.
    """
    INFOLINEFMT = 'The {mine:s} mine is a big {commodity:s} mine in the State of {state:s} in the US.'
    COMPANYLINEFMT = '\n    {company:s} owns the mine.\n\n'
    retval = 'mine_info.txt'
    with open(retval, 'w') as f:
        for minex in data_with_company:
            print(INFOLINEFMT.format(**data_with_company[minex]), file=f)
            print(COMPANYLINEFMT.format(**data_with_company[minex]), file=f)
    return retval

def commodity_word_counts(data_with_wikipedia:dict, data_with_company:dict) -> dict:
    """
    Return dictionary keyed on mine with counts of
    commodity (e.g., zinc etc.) mentions on Wikipedia
    page (excluding ones in the company name).
    """
    retval = {}
    # This will probably miss some occurrences at mashed together
    # word boundaries. It is a rough estimate.
    # '\b[Gg]old\b'
    commoditypatfmt = r'\b[{0:s}{1:s}]{2:s}\b'
    for minex in data_with_wikipedia:
        print(minex)
        commodityuc = data_with_wikipedia[minex]['commodity'][0].upper()
        commoditypat = commoditypatfmt.format(commodityuc,
                                              data_with_wikipedia[minex]['commodity'][0],
                                              data_with_wikipedia[minex]['commodity'][1:])
        print(commoditypat)
        commoditymatches = re.findall(commoditypat, data_with_wikipedia[minex]['wikipediatext'])
        # pprint.pprint(commoditymatches)
        nummatchesraw = len(commoditymatches)
        print('Initial length of commoditymatches is {0:d}.'.format(nummatchesraw))
        companymatches = re.findall(data_with_company[minex]['company'],
                                    data_with_wikipedia[minex]['wikipediatext'])
        numcompanymatches = len(companymatches)
        print('Length of companymatches is {0:d}.'.format(numcompanymatches))
        # Is the commodity name part of the company name?
        print('commoditypat = ', commoditypat)
        print(data_with_company[minex]['company'])
        commoditymatchcompany = re.search(commoditypat, data_with_company[minex]['company'])
        if commoditymatchcompany:
            print('commoditymatchcompany.span() = ', commoditymatchcompany.span())
            nummatchesfinal = nummatchesraw - numcompanymatches
            retval[minex] = nummatchesfinal
        else:
            retval[minex] = nummatchesraw
    return retval

def colloquial_company_word_counts(data_with_wikipedia:dict) -> dict:
    """
    Find the number of times the company you associate with
    the property/mine (very subjective) is within the
    text of the mine's wikipedia article.
    """
    retval = {}
    for minex in data_with_wikipedia:
        colloquial_pat = data_with_wikipedia[minex]['colloquial association']
        print(minex)
        nummatches = len(re.findall(colloquial_pat, data_with_wikipedia[minex]['wikipediatext']))
        print('{0:d} matches for colloquial association {1:s}.'.format(nummatches, colloquial_pat))
        retval[minex] = nummatches
    return retval

def info_dict_merged(data_with_company:dict,
                     commodity_word_counts:dict,
                     colloquial_company_word_counts:dict) -> dict:
    """
    Get a dictionary with all the collected information
    in it minus the big Wikipedia text dump.
    """
    retval = copy.deepcopy(data_with_company)
    for minex in retval:
        retval[minex]['colloquial association count'] = colloquial_company_word_counts[minex]
        retval[minex]['commodity word count'] = commodity_word_counts[minex]
    return retval

def wikipedia_report(info_dict_merged:dict) -> str:
    """
    Writes out Wikipedia information (word counts)
    to file in prose; returns string filename.
    """
    retval = 'wikipedia_info.txt'
    colloqfmt = 'The {0:s} mine has {1:d} occurrences of colloquial association {2:s} in its Wikipedia article text.\n'
    commodfmt = 'The {0:s} mine has {1:d} occurrences of commodity name {2:s} in its Wikipedia article text.\n\n'
    with open(retval, 'w') as f:
        for minex in info_dict_merged:
            print(colloqfmt.format(info_dict_merged[minex]['mine'],
                                   info_dict_merged[minex]['colloquial association count'],
                                   info_dict_merged[minex]['colloquial association']), file=f)
            print(commodfmt.format(info_dict_merged[minex]['mine'],
                                   info_dict_merged[minex]['commodity word count'],
                                   info_dict_merged[minex]['commodity']), file=f)
    return retval

My regex abilities are somewhere between "I've heard the term regex and know regular expressions exist" and brute-forcing bracketed characters into each slot. That worked for this toy example. Each Wikipedia page features the word "Company" followed by the name of the owning corporate entity.
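
As a standalone sketch of that heuristic (the mashed-together sample string is made up, but the two patterns are the ones data_with_company uses above):

import re

# Wikipedia text with the newlines stripped runs the infobox label,
# 'Company', the owner name, and the next label together.
sample = 'TypeSubsidiaryCompanyNANA Regional CorporationLocationAlaska'
start = re.search(r'[a-z]Company', sample).span()[1]
end = re.search(r'[a-z][A-Z]', sample[start:]).span()[0]
print(sample[start:start + end + 1])  # NANA Regional Corporation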

Here are the two text outputs the script produces from the information provided (Wikipedia articles from July 2024):

The Red Dog mine is a big zinc mine in the State of Alaska in the US.
    NANA Regional Corporation owns the mine.

The Goldstrike mine is a big gold mine in the State of Nevada in the US.
    Barrick Gold owns the mine.

The Bingham Canyon mine is a big copper mine in the State of Utah in the US.
    Rio Tinto Group owns the mine.

The Red Dog mine has 21 occurrences of colloquial association Teck in its Wikipedia article text.
The Red Dog mine has 29 occurrences of commodity name zinc in its Wikipedia article text.

The Goldstrike mine has 0 occurrences of colloquial association Nevada Gold Mines in its Wikipedia article text.
The Goldstrike mine has 16 occurrences of commodity name gold in its Wikipedia article text.

The Bingham Canyon mine has 49 occurrences of colloquial association Kennecott in its Wikipedia article text.
The Bingham Canyon mine has 84 occurrences of commodity name copper in its Wikipedia article text.

Company names are relatively straightforward, although mining company and property acquisitions and mergers being what they are, it can get complicated. I unwittingly chose three properties that Wikipedia reports as having one owner. Other big mines like Morenci, Arizona (copper) and Cortez, Nevada (gold) show more than one owner; that case is for another programming day. The Goldstrike information might be out of date - no mention of Nevada Gold Mines or Newmont (one mention, but in a different context). The Cortez Wikipedia page is more current, although it still doesn't mention Nevada Gold Mines.

The inclusion of colloquial association in the input csv file was an afterthought based on a lot of the Wikipedia information not being completely in line with what I thought I knew. Teck is the operator of the Red Dog Mine in Alaska. That name does get mentioned frequently in the Wikipedia article.

Enough mining stuff - it is a programming blog after all. Next time (not written yet) I hope to cover dressing up and highlighting the graphviz output a bit.

Thank you for stopping by.


Categories: FLOSS Project Planets

Manual action needed to resolve boot failure for Fedora Atomic Desktops and Fedora IoT

Planet KDE - Thu, 2024-07-04 18:00

Since the 39.20240617.0 and 40.20240617.0 updates for Atomic Desktops and the 40.20240617.0 update for IoT, systems with Secure Boot enabled may fail to boot if they have been installed before Fedora Linux 40. You might see the following error:

error: ../../grub-core/kern/efi/sb.c:182:bad shim signature.
error: ../../grub-core/loader/i386/efi/linux.c:258:you need to load the kernel first.
Press any key to continue...

Note: You can also read this post on the Fedora Magazine.

Workaround

In order to resolve this issue, you must first boot into the previous version of your system. It should still be functional. In order to do this, reboot your system and select the previous boot entry in the selection menu displayed on boot. Its name should be something like:

Fedora Linux 39.20240610.0 (Silverblue) (ostree:1)

Once you have logged in, search for the terminal application for your desktop and open a new terminal window. On Fedora IoT, log in via SSH or on the console. Make sure that you are not running in a toolbox for all the commands listed on this page.

If you are running a Fedora Atomic Desktop based on Fedora 39 and have not yet updated to Fedora 40, you first need to update to the latest working Fedora 39 version with those commands:

$ sudo rpm-ostree cleanup --pending
$ sudo rpm-ostree deploy 39.20240616.0

If you are running Fedora IoT, then first update to the latest working version with this command:

$ sudo rpm-ostree cleanup --pending
$ sudo rpm-ostree deploy 40.20240614.0

Then reboot your system.

Once you are logged in again on the latest working version, proceed with the following commands:

$ sudo -i
$ cp -rp /usr/lib/ostree-boot/efi/EFI /boot/efi
$ sync

Once completed, reboot your system. You should now be able to update again, as normal, using the graphical interface or the command line:

$ sudo rpm-ostree update

Why did this happen?

On Fedora Atomic Desktops and Fedora IoT systems, the components that are part of the boot chain (Shim, GRUB) are not (yet) automatically updated alongside the rest of the system. Thus, a Fedora Atomic Desktop or Fedora IoT system installed before Fedora 40 still uses old versions of the Shim and bootloader binaries to boot.

When Secure Boot is enabled, the EFI firmware loads Shim first. Shim is signed by the Microsoft Third Party Certificate Authority so that it can be verified on most hardware out of the box. The Shim binary includes the Fedora certificates used to verify binaries signed by Fedora. Then Shim loads GRUB, which in turn loads the Linux kernel. Both are signed by Fedora.

Until recently, the kernel binaries were signed twice, with an older key and a newer one. With the 6.9 kernel update, the kernel is no longer signed with the old key. If GRUB or Shim is old enough and does not know about the new key, the signature verification fails.

See the initial report in the Fedora Silverblue issue tracker.

What are we doing to prevent it from happening again?

We have known for a while that not updating the bootloader was not a satisfying situation. We have been working on enabling bootupd for Fedora Atomic Desktops and Fedora IoT. bootupd is a small application that is responsible only for bootloader updates. While initially planned for Fedora Linux 38 (!), we had to delay enabling it due to various issues and missing functionality in bootupd itself and changes needed in Anaconda.

We are hoping to enable bootupd in Fedora Linux 41, hopefully by default, which should finally resolve this situation. See the Enable bootupd for Fedora Atomic Desktops and Fedora IoT Fedora Change page.

Note that the root issue also impacts Fedora CoreOS but steps have been put in place to force a bootloader update before the 6.9 kernel update. See the tracking issue for Fedora CoreOS.

Categories: FLOSS Project Planets
