Feeds
The Drop Times: VLSuite 1.1.0-rc4 Release Elevates Drupal Layout Builder Experience
Kushal Das: Documentation of Puppet code using Sphinx
Sphinx is the primary documentation tooling for most of my projects. I use it for the Linux command line book too. Last Friday, while chatting with Leif about documenting all of our Puppet codebase, I thought of mixing these two.
Now, Puppet already has a tool to generate documentation from its code, called puppet strings. We can use that to generate markdown output and then use the same in Sphinx for the final HTML output.
I am using https://github.com/simp/pupmod-simp-simplib as the example Puppet code, as it comes with a good amount of reference documentation.
Install puppet strings and the dependencies:
$ gem install yard puppet-strings
Then clone the example Puppet codebase:
$ git clone https://github.com/simp/pupmod-simp-simplib
Finally, generate the initial markdown output:
$ puppet strings generate --format markdown --out simplib.md
Files: 161
Modules: 3 (3 undocumented)
Classes: 0 (0 undocumented)
Constants: 0 (0 undocumented)
Attributes: 0 (0 undocumented)
Methods: 5 (0 undocumented)
Puppet Tasks: 0 (0 undocumented)
Puppet Types: 7 (0 undocumented)
Puppet Providers: 8 (0 undocumented)
Puppet Plans: 0 (0 undocumented)
Puppet Classes: 2 (0 undocumented)
Puppet Data Type Aliases: 73 (0 undocumented)
Puppet Defined Types: 1 (0 undocumented)
Puppet Data Types: 0 (0 undocumented)
Puppet Functions: 68 (0 undocumented)
98.20% documented

Sphinx setup

$ python3 -m venv .venv
$ source .venv/bin/activate
$ python3 -m pip install sphinx myst_parser

After that, create a standard Sphinx project (or use an existing one) and update conf.py with the following:
extensions = ["myst_parser"]
source_suffix = {
    '.rst': 'restructuredtext',
    '.txt': 'markdown',
    '.md': 'markdown',
}

Then copy over the generated markdown from the previous step and use a sed command to change the title of the document to something better:
$ sed -i '1 s/^.*$/SIMPLIB Documentation/' simplib.md
Don't forget to add the simplib.md file to your index.rst, and then build the HTML documentation:
$ make html
The markdown generated by the puppet strings command can still be improved; I have to figure out simpler ways to do that part.
eGenix.com: Python Meeting Düsseldorf - 2023-09-27
The following announces a regional user group meeting in Düsseldorf, Germany (original text in German).
The next Python Meeting Düsseldorf will take place on:
27.09.2023, 18:00
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf
- Moritz Damm: Introduction to 'Kedro - A framework for production-ready data science'
- Marc-André Lemburg: Parsing structured content with Python 3.10's new match-case
- Arkadius Schuchhardt: Repository Pattern in Python: Why and how?
- Jens Diemer: CLI Tools
Additional talks can still be registered. If interested, please contact info@pyddf.de.
Starting time and location
We meet at 18:00 at the Bürgerhaus in the Düsseldorfer Arcaden.
The Bürgerhaus shares its entrance with the swimming pool and is located
next to the underground parking entrance of the Düsseldorfer Arcaden.
Above the entrance hangs a large "Schwimm’ in Bilk" logo. Behind the door,
turn immediately left to the two elevators and ride up to the 2nd floor.
The entrance to Room 1 is directly to the left as you come out of the elevator.
>>> Entrance in Google Street View
The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.
Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, offers a good overview of the talks. The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:
The Python Meeting Düsseldorf uses a mix of (lightning) talks and open discussion.
Talks can be registered in advance or brought in spontaneously during the meeting. A projector with HDMI and Full HD resolution is available. To register a (lightning) talk, send an informal email to info@pyddf.de
Cost sharing
The Python Meeting Düsseldorf is organized by Python users for Python users.
Since the meeting room, projector, internet, and drinks incur costs, we ask participants for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.
We ask all participants to bring the amount in cash.
Registration
Since we can only accommodate 25 people in the rented room, we ask that you register in advance.
Please register for the meeting via Meetup.
Further information can be found on the meeting's website:
https://pyddf.de/
Have fun!
Marc-Andre Lemburg, eGenix.com
Metadrop: Sevilla Drupal Camp Behat workshop
For this year's Drupal Camp, a joint effort was made to offer three different workshops on testing with Drupal. One of them was a Behat workshop that provided a theoretical basis and proposed several practical exercises with Behat.
With this article we want to make the same Behat workshop available to anyone who is interested, so that they can access the presentation and the exercises. Unfortunately, the slides are only available in Spanish. However, the exercises are written in English, so even if you don't speak Spanish you can benefit from doing the practical part.
The workshop
The workshop uses two repositories: the first is a presentation used as a guide for the workshop, and the second is a pre-configured environment with Behat ready to run.
The presentation can be viewed online directly from the browser. It consists of three parts:
- A very basic introduction to Behat, supported by more…
Zephyr - My Breeze fork
I had a hankering for tinkering with the KDE application style. The default style by KDE, Breeze, is pretty nice as is, but there are small things I'd like to modify.
There's Klassy which is quite customizable and fun, but I don't really need all of the settings it has.
Then there's Kvantum, which uses SVG files to create a theme, but those themes don't follow KDE color schemes. And I dislike working with SVG files.
Both are brilliant for their use cases, but I wanted just Breeze with a few changes.
Fork time!
So, I did what one has to do: forked Breeze and renamed everything Breeze-related to Zephyr. I chose Zephyr because it was a synonym for Breeze in the thesaurus, lol. Also, the name makes sure it's last in the list of application styles, so people don't accidentally confuse it with Breeze.
Here's a link to the repository: https://codeberg.org/akselmo/Zephyr
Installation help is also there, but feel free to open issues and/or merge requests to add things like which packages one has to install for their distro.
Unfortunately, due to the massive size of the Breeze GitLab repo, I didn't want to flood Codeberg with the whole history, so some of the history got lost. I have mentioned this in the readme file, though.
After renaming all the things, the whole thing built and installed surprisingly easily.
I then implemented the following:
- Black outline setting, so the default outline has a black one around it.
- Why? Idk, it looks cool. No other real reason.
- Yes, it can be disabled.
- Traffic light color icons in window deco
- I am allergic to Apple but the traffic light concept just makes sense to me.
- Also can be enabled or disabled
- Customizable style frame and window deco outline colors
- You can completely change the frame colors.
- You can also make them invisible! No outlines, no frames! Fun!
- Slightly rounder windows and buttons
- At some point I will make a setting for these too, but now they're applied when the thing is built
- Fitting Plasma style if you use the defaults Zephyr offers (mostly black outlines)
- The plasma theme buttons do not match the application style in roundness, yet.
- I am lazy and avoid working with SVG files as long as I can
For fun! For learning! And I wanted to make something that is super close to Breeze (hell, it is Breeze, just with a few mods) but still has its own charm and matches how I like seeing my desktop.
It can also work as a great test bench for others who want to see if they can modify an application style.
Just rename anything Zephyr to YourForkNameHere and have fun. But it's probably better to fork the original Breeze project :)
Also, when making my own things for Breeze, it's nice to implement them in something similar but under a different name, so I can test the changes for a longer period of time. And if I like the changes, I can maybe show them to upstream.
In the future, I will make it work with Plasma 6 (unless I feel lazy). I'll probably have to fork Breeze again then and apply my changes. Hopefully it's not too big of a change.
Also, I will be working on the actual Breeze in the future too! I hope to implement separator colors for the Plasma color scheme, so basically you can change the color of all frames, outlines, and whatnot. This project kinda helped me figure out how that works as well!
All in all, good project, I keep tinkering with it and it helps me understand the Breeze styling and Qt in general more.
Revontuli and Zephyr
My colorscheme Revontuli works really well together with Zephyr. So, feel free to give them a go!
Thanks for reading as usual!
A bit on sponsorship and money
The topic of sponsored work comes up surprisingly often. Now, many KDE developers are already sponsored by businesses to work on KDE software, either on a full-time-work basis, or for specific areas of work. But what’s less common is for a specific person to sponsor another specific person to work on a specific bug or feature. I’m talking about short-term gigs paying most likely a few hundred euros or less. This can work well for getting persistent bugs in the yellow boxes fixed. It does happen, but it’s not as common as I think anyone would like! There’s a lot of untapped potential here, I think.
So today I’d like to announce the creation of a “Sponsored work” category in the KDE forum! This is a place for people to come together for the purpose of sponsoring work on individual bugs or features. If you’re willing to sponsor work for something, post about it here. If you’re open to these kinds of micro-sponsorship opportunities, look for them here!
Since we are a free software community, sometimes concerns about money and sponsored work arise. Therefore, let me bring up an additional option, originally thought up by Jakob Petsovits the last time someone offered to sponsor work: offer an option to donate the sponsorship money to KDE e.V.! This option can be more motivating for passionate KDE developers who don’t personally need the money, and might otherwise ignore such opportunities.
On the subject of donating to KDE e.V., we have a fancy new donation web page that makes it much easier to set up recurring donations! This being too hidden had been a very valid complaint in the recent past, so it's wonderful to see a better UX here. This work was done by Carl Schwan, Paul Brown, and others at this weekend's Promo Sprint, which is itself funded by KDE e.V.
And at this point, KDE e.V. is funding quite a lot of initiatives. Sprints have come roaring back, and we’re sponsoring people to represent KDE at more external events than ever before. We also have a whole bunch of employees and contractors doing meaningful technical work on core KDE software. It’s a lot!
Needless to say, this isn’t cheap. KDE e.V. has been funding this major expansion by deliberately spending down its reserves for a few years to avoid getting in trouble with the German tax authorities for having too much money (yes, really; this is actually a thing). But that can’t last forever! We’re going to need help to sustain this level of financial activity.
If you can, consider setting up a recurring donation today! It really does make a difference. Anything helps!
parallel @ Savannah: GNU Parallel 20230922 ('Derna') released [stable]
GNU Parallel 20230922 ('Derna') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
Parallel is so damn good! You’ve got to use it.
-- @ThePrimeTimeagen@youtube.com
New in this release:
- No new features. This is a candidate for a stable release.
- Bug fixes and man page updates.
News about GNU Parallel:
- This CLI Tool is AMAZING | Prime Reacts https://www.youtube.com/watch?v=ry49BZA-tgg
- New Data Engineering Stack - GNU parallel https://www.linkedin.com/feed/update/urn:li:activity:7100509073149743104?updateEntityUrn=urn%3Ali%3Afs_feedUpdate%3A%28V2%2Curn%3Ali%3Aactivity%3A7100509073149743104%29
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
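For instance, a sequential loop like the one below can usually be replaced one-for-one (a minimal sketch; gzip is chosen purely as an illustration):
# Sequential: compress each log file one at a time
for f in *.log; do gzip "$f"; done
# Same work with GNU Parallel: one gzip job per CPU core
parallel gzip ::: *.log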
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
- Give a demo at your local user group/team/colleagues
- Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
- Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
- Request or write a review for your favourite blog or magazine
- Request or build a package for your favourite distribution (if it is not already there)
- Invite me for your next conference
If you use programs that use GNU Parallel for research:
- Please cite GNU Parallel in your publications (use --citation)
If GNU Parallel saves you money:
- (Have your company) donate to FSF https://my.fsf.org/donate/
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
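For example (a hypothetical DBURL; substitute your own credentials and database type):
# Run a single query through the client identified by the DBURL
sql mysql://user:pass@host/mydb "SELECT * FROM users;"
# With no command given, you land in the database's interactive shell
sql mysql://user:pass@host/mydb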
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
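As a rough sketch of typical usage (option name assumed from the niceload documentation; verify against man niceload):
# Run make, but suspend it whenever the load average rises above 5
niceload -l 5 make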
health @ Savannah: Release of GNU Health HMIS 4.2.3 patchset
Dear community
The GNU Health 4.2.3 patchset has been released!
Priority: High
- About GNU Health Patchsets
- Updating your system with the GNU Health control Center
- Installation notes
- List of other issues related to this patchset
We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.
Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.
NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (ie, gnuhealth-4.2.3.tar.gz)
Starting with the GNU Health 3.x series, you can do automatic updates of the GNU Health HMIS kernel and modules using the GNU Health control center program.
Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )
The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.
You must apply previous patchsets before installing this patchset. If your patchset level is 4.2.2, then just follow the general instructions. You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)
In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you.
Pre-requisites for upgrade to 4.2.3: None
Now follow the general instructions at
https://en.wikibooks.org/wiki/GNU_Health/Control_Center
After applying the patches, make a full update of your GNU Health database as explained in the documentation.
When running "gnuhealth-control" for the first time, you will see the following message: "Please restart now the update with the new control center" Please do so. Restart the process and the update will continue.
- Restart the GNU Health server
- bug #64712: Bug after installing patchset 4.2.2
- bug #64706: Error saving party with photo due to PIL deprecation of ANTIALIAS
For detailed information about each issue, you can visit:
https://savannah.gnu.org/bugs/?group=health
For details about each task, you can visit:
https://savannah.gnu.org/task/?group=health
For detailed information you can read about Patches and Patchsets
FSF Events: Free Software Directory meeting on IRC: Friday, September 29, starting at 12:00 EDT (16:00 UTC)
Thomas Goirand: Searching for a Ryzen 9, 16 cores, small laptop
The new 7945HX CPU from AMD is currently the most powerful. I’d love to have one of them, to replace the now aging 6 core Xeon that I’ve been using for more than 5 years. So, I’ve been searching for a laptop with that CPU.
Absolutely all of the laptops I found with this CPU also embed a very powerful RTX 40×0 series GPU, which I have no use for: I don't play games, and I don't do AI. I just want something that builds Debian packages fast (like Ceph, which takes more than 1h to build for me…). The more cores I get, the faster all OpenStack unit tests run too (stestr does a moderately good job at spreading the tests across all cores). I could live with paying more for a GPU that I don't need, and with the annoyance of the NVidia driver, if only I could find something of the right size. But I can only find 16″ or bigger laptops, which won't fit in my scooter's back case (most of the time, these laptops have a 17-inch screen: that's way too big).
Currently, I found:
- Lenovo Legion Pro 5: screen is 16.8″
- Dell Alienware m6: super heavy, 16″
- Asus ROG Zephyrus Duo 16: 16″
- MSI alpha (16 and 17): also 16″
If one of the readers of this post finds a smaller laptop with a 7945HX CPU, please let me know! Even better if I can get rid of the expensive NVidia GPU.
Brussels without fosdem
This year I wasn’t able to attend FOSDEM due to family reasons. To remedy this, I visited the city with my girlfriend this weekend.
All I can say is that Brussels is nice outside of February as well. The vintage stores and markets were my greatest takeaway.
Matrix Community Summit and KDE Promo Sprint in Berlin
On Thursday and Friday evenings, I went to the Matrix Community Summit at C-Base in Berlin with Tobias. It was the occasion to meet a few other Matrix developers, particularly the Nheko developer, MTRNord, and a few other devs whom I only knew by nickname. It was great, even though I could only spend a few hours there. Tobias stayed longer and will be able to blog more about the event.
Photo of the C-Base showing a lot of electronic equipment
During the weekend, instead of going to the Matrix summit, I participated in the KDE Promo sprint with Paul, Aniqa, Niccolo, Volker, and Joseph. Aron also joined us via video call on Saturday. This event was also in Berlin, at the KDAB office; we are very thankful to KDAB for hosting us.
This sprint was the perfect occasion to move forward with many of our pending tasks. I mainly worked on web-related projects as I tried to work on a few items on my large todo list.
We now have an updated donation page, which includes the new Donorbox widget. Donorbox is now our preferred way to make recurring donations, and recurring donations are vital to the success of KDE. Check it out!
Screenshot of the website KDE.org/community/donations
With Paul, I also looked at the next 'KDE For' pages. Two of them are now done, and we will publish them in the coming weeks. There are plans for a few more, and if you want to get involved, this is the Phabricator task to follow.
I also updated the KDE For Kids page with the help of Aniqa. It now features the book Ada & Zangemann by Matthias Kirschner and Sandra Brandstätter, which introduces kids to Free Software. Let me know if you have other book suggestions for kids around Free Software and KDE that we can include on our websites.
This was only a short overview of all the things we did during this sprint; I will let the others blog about what they did. More blog posts will certainly pop up on planet.kde.org soon.
The sprint was only possible thanks to the generous donations from our users, so consider making a donation today! Your donation also helps pay for hosting conferences, for server infrastructure, and for maintaining KDE software.
Web Wash: Getting Started with Bootstrap 5 using Radix in Drupal
Radix is a Bootstrap base theme for Drupal that provides a solid foundation for building your website. It includes built-in Bootstrap 4 and 5 support, Sass, ES6, and BrowserSync. This makes it easy to create a website that looks great on all devices and is easy to maintain.
In this video, you’ll learn the following:
- Download and install Radix (a command sketch follows this list).
- Generate a Radix sub-theme.
- Integrate a Bootswatch theme in your site.
- Implement the Carousel component using blocks and paragraphs.
- Implement the Accordion component using paragraphs.
- Display articles in a Bootstrap grid using Views.
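As a rough sketch, the first two steps typically look like this on a Composer-based Drupal site (assuming Drush is available; the exact sub-theme generation command is documented by the Radix project):
# Add the Radix base theme to the site
$ composer require drupal/radix
# Enable it; a sub-theme generated per the Radix docs would then be set as the default theme
$ drush theme:enable radix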
The Nextcloud Conference was COOL
After joining the Hub 5 video recording event and making a video about it, I couldn't miss the Nextcloud Conference! So, let's talk a bit about what I saw and heard.
Of course, the very first thing was the announcement of the new Hub 6. This mainly focused on (a) ways to work in a healthy way, e.g. by turning off notifications outside work hours, postponing e-mails, and so on; and (b) ethical AI integration. In this context, "ethical" means that (1) the training data should be freely available and (2) the model should be run locally so that you don't have to send your data to e.g. OpenAI. They announced an LLM called Nextcloud Assistant, which would run locally and do stuff like summarizing emails. I'm quite excited to see how that goes.
The place was really cool as well. I spent most of my time in the kitchen area, which had a great view of the talk area and had some nice tables I could use for hacking. Which I did: I will do videos about this very soon, but I'm happy to report I managed to build a couple of KDE Plasma features while eating Nextcloud-provided cookies. Living the best life.
Soon enough, some even cooler talks came along.
The coolest one, in my humble opinion, was The fourth sector by Simon Phipps. He talked about representing what we do - Open Source™ - to large governmental institutions such as the EU. Of course, he covered some of the legislation that is coming soon, such as the CRA (I had just published a video about it, so it was particularly interesting :-). If you're intrigued by this, fear not: I will have a video about this talk specifically soon.
But of course, I wasn't the only KDE developer.
I only snapped a picture of Carl, but actually there were at least a couple more. And there are even blogposts about that, so check those out too:
Nextcloud Conference 2023: "Last weekend I attended this year’s Nextcloud conference in Berlin, together with a few other fellow KDE contributors." - Volker Krause
I also met a lot of super cool Nextcloud people, and we chatted quite a lot; it was super cool. Big thanks to Brent @ Linux Unplugged for telling me everything about how he does podcasting and for showing me the Framework laptop. I had to run away to the airport without being able to say "bye" to anyone, though, so please forgive me!
So, what's going to happen now?
Well, firstly, I will record and publish the video about Simon's talk. I will also start scripting a video specifically about Hub 6, though I haven't had the chance to test all of its functionalities yet, and I want the video to cover those as well. So, great news: lots of Nextcloud content is coming soon and it's going to be as cool as ever. Fun fact: I actually do use Nextcloud to organize everything in my Youtube channel!
Thanks everybody for tuning in, and see you soon with more content on my main page!
Sahil Dhiman: Abraham Raji
Man, you’re no longer with us, but I am touched by the number of people you have positively impacted. Almost every DebConf presentation by locals that I saw after you were gone carried how you were instrumental in bringing them there. How you were a dear friend and brother.
It’s a weird turn of events that you left us during the one thing we deeply cared about and worked towards making possible together for the past 3 years. Who would have known that “Sahil, I’m going back to my apartment tonight” and a casual bye would be the last conversation we ever had.
Things were terrible after I heard the news. I had a hard time convincing myself to come see you one last time during your funeral. That was the last time I was going to get to see you, and I kept on looking at you. You, there in front of me, all calm, gave me peace. I’ll carry that image all my life now. Your smile will always remain with me. Who’ll meet and receive me at the door at almost every Debian event now (just by sheer coincidence?)? Who’ll help me speak out loud about all the Debian shortcomings (and then discuss solutions, when sober :))?
It was a testament to the amount of time we had already spent together online that when we first met during MDC Palakkad, it didn’t feel like we were physically meeting for the first time. The conversations just continued. This song is now associated with you due to your speech during the MiniDebConf Palakkad dinner. Hearing it keeps reminding me of all the times we spent together chilling and talking community (which you cared deeply about). I guess now we can’t stop caring for the community, because your energy was contagious.
Now, I can’t directly dial your number to hear “Hey Sahil! What’s up?” from the other end, or “Tell me, tell me” at any mention of a problem. Nor will I be able to send you references to usage of your Debian packaging guide in the wild. You already know about that text of yours, and how many people that guide has helped get started with packaging. Did I ever tell you that I, too, got my first start with packaging from there? Hell, I started looking up to you from there, even before we met or talked. Now, I missed telling you that I was probably your biggest fan whenever you had the mic in hand and started speaking. You always surprised me with all the insights and ideas you brought, and you kept on impressing me, for someone who was just my age but way more mature.
Reading recent toots from Raju Dev made me realize how much I loved your writings. You wrote How the Future will remember Us, Doing what’s right, and many more. The level of depth in your thought was unparalleled. I loved reading those; that’s why I kept pestering you to write more, which you slowly stopped. Now I fully understand why, though: you were busy, really busy, helping people out or just working to make things better. You were doing Debian, upstream projects, web development, design, graphics, mentoring, and evangelism, all while being the go-to person for almost everyone around. Everyone depended on you, because you were too kind to turn anyone down.
Man, I still get your spelling wrong :) Did I ever tell you that? That was the reason I used to use AR instead.
You’ll be missed and will always be part of our conversations, because you have left a profound impact on me, our friends, Debian India, and everyone around. See you, the coolest man around!
In memory:
- Farewell Abraham video by Vysakh Premkumar
- Tails project dedicating 5.17.1 release
- Hopscotch team dedicating 2023.8.1
- Athul dedicating 0.2.5 release of waka-readme
- Debian project
PS - Just found out you even had a Youtube channel. You were one heck of a talented man.
Video: We visited Stockholm!
My wife made a video of us visiting Stockholm, go check it out here!
Sergio Talens-Oliag: GitLab CI/CD Tips: Using Rule Templates
This post describes how to define and use rule templates with semantic names using extends or !reference tags, how to define manual jobs using the same templates and how to use gitlab-ci inputs as macros to give names to regular expressions used by rules.
Basic rule templates
I keep my templates in a rules.yml file stored in a common repository used from different projects, as I mentioned in my previous post, but they can be defined anywhere; the important thing is that the files that need them include their definition somehow.
The first version of my rules.yml file was as follows:
.rules_common:
  # Common rules; we include them from others instead of forcing a workflow
  rules:
    # Disable branch pipelines while there is an open merge request from it
    - if: >-
        $CI_COMMIT_BRANCH &&
        $CI_OPEN_MERGE_REQUESTS &&
        $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never

.rules_default:
  # Default rules, we need to add the when: on_success to make things work
  rules:
    - !reference [.rules_common, rules]
    - when: on_success

The main idea is that .rules_common defines a rule section to disable jobs, as we could do on a workflow definition; in our case the common rules only contain if rules that apply to all jobs and are used to disable them. The example includes one that avoids creating duplicated jobs when we push to a branch that is the source of an open MR, as explained here.
To use the rules in a job we have two options: use the extends keyword (we do that when we want to use the rule as is) or declare a rules section and add a !reference to the template we want to use, as described here (we do that when we want to add additional rules to disable a job before evaluating the template conditions).
As an example, with the following definitions both jobs use the same rules:
job_1:
  extends:
    - .rules_default
  [...]

job_2:
  rules:
    - !reference [.rules_default, rules]
  [...]

Manual jobs and rule templates
To make jobs manual we have two options: create a version of the rule template that includes when: manual and defines whether we want it to be optional or not (allow_failure: true makes the job optional; if we don't add that to the rule the job is blocking), or add the when: manual and the allow_failure value to the job itself (at the job level the default value of allow_failure for when: manual is true, so the job is optional by default and we have to add an explicit allow_failure: false to make it blocking).
The following example shows how we define blocking or optional manual jobs using rules with when conditions:
.rules_default_manual_blocking:
  # Default rules for blocking manual jobs
  rules:
    - !reference [.rules_common, rules]
    - when: manual # allow_failure: false is implicit

.rules_default_manual_optional:
  # Default rules for optional manual jobs
  rules:
    - !reference [.rules_common, rules]
    - when: manual
      allow_failure: true

manual_blocking_job:
  extends:
    - .rules_default_manual_blocking
  [...]

manual_optional_job:
  extends:
    - .rules_default_manual_optional
  [...]

The problem here is that we have to create new versions of the same rule template to add the conditions, but we can avoid that by using the keywords at the job level together with the original rules; the following definitions create jobs equivalent to the ones defined earlier without creating additional templates:
manual_blocking_job:
  extends:
    - .rules_default
  when: manual
  allow_failure: false
  [...]

manual_optional_job:
  extends:
    - .rules_default
  when: manual # allow_failure: true is implicit
  [...]

As you can imagine, that is my preferred way of doing it, as it keeps the rules.yml file smaller and I can see that the job is manual in its definition without problem.
Rules with allow_failure, changes, exists, needs or variables
Unluckily for us, for now there is no way to avoid creating additional templates, as we did in the when: manual case, when a rule is similar to an existing one but adds changes, exists, needs or variables to it.
So, for now, if a rule needs to add any of those fields we have to copy the original rule and add the keyword section.
Some notes, though:
- we only need to add allow_failure if we want to change its value for a given condition; in other cases we can set the value at the job level.
- if we are adding changes to the rule, it is important to make sure that they are going to be evaluated, as explained here.
- when we add a needs value to a rule for a specific condition and it matches, it replaces the job's needs section; when using templates I would use two different job names with different conditions instead of adding a needs to a single job.
I started to use rule templates to avoid repetition when defining jobs that needed the same rules, and I soon noticed that by giving them names with semantic meaning they were easier to use and understand (we provide a name that tells us when we are going to execute the job, while the variable names or values used in the rules remain an implementation detail of the templates).
We are not going to define real jobs on this post, but as an example we are going to define a set of rules that can be useful if we plan to follow a scaled trunk based development workflow when developing, that is, we are going to put the releasable code on the main branch and use short-lived branches to test and complete changes before pushing things to main.
Using this approach we can define an initial set of rule templates with semantic names:
.rules_mr_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'

.rules_mr_or_push_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
    - if: >-
        $CI_COMMIT_BRANCH == 'main' &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'

.rules_push_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH == 'main' &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'

.rules_push_to_branch:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH != 'main' &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'

.rules_push_to_branch_or_mr_to_main:
  rules:
    - !reference [.rules_push_to_branch, rules]
    - if: >-
        $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME != 'main' &&
        $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'

.rules_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG =~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/

.rules_non_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG !~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/

With those names it is clear when a job is going to be executed, and when using the templates on real jobs we can add additional restrictions and make the execution manual if needed, as described earlier.
Using inputs as macros
In the previous rules we used a regular expression to identify the release tag format and assumed that regular branches are the ones with a name different from main; if we want to force a format for those branch names we can replace the != 'main' condition by a regex comparison (=~ if we look for matches, !~ if we want to define the valid branch names, excluding the invalid ones).
When testing the new gitlab-ci inputs my colleague Jorge noticed that if you keep their default value they basically work as macros.
The variables declared as inputs can't hold YAML values; the truth is that their value is always a string that is replaced by the value assigned to them when including the file (if given) or by their default value, if defined.
If you don't assign a value to an input variable when including the file that declares it, its occurrences are replaced by its default value, making inputs work basically as macros; this is useful for us when working with strings that can't be managed as variables, like the regular expressions used inside if conditions.
With those two ideas we can add the following prefix to the rules.yml file, defining inputs for both regular expressions, and replace the rules that can use them with the ones shown here:
spec:
  inputs:
    # Regular expression for branches; the prefix matches the type of changes
    # we plan to work on inside the branch (we use conventional commit types as
    # the branch prefix)
    branch_regex:
      default: '/^(build|ci|chore|docs|feat|fix|perf|refactor|style|test)\/.+$/'
    # Regular expression for tags
    release_tag_regex:
      default: '/^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/'
---
[...]

.rules_push_to_changes_branch:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH =~ $[[ inputs.branch_regex ]] &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'

.rules_push_to_branch_or_mr_to_main:
  rules:
    - !reference [.rules_push_to_branch, rules]
    - if: >-
        $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ $[[ inputs.branch_regex ]] &&
        $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'

.rules_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG =~ $[[ inputs.release_tag_regex ]]

.rules_non_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG !~ $[[ inputs.release_tag_regex ]]

Creating rules reusing existing ones
I'm going to finish this post with a comment about how I avoid defining extra rule templates in some common cases.
The idea is simple: we can use !reference tags to fine-tune rules when we need to add conditions to disable them, simply by adding conditions with when: never before referencing the template.
As an example, in some projects I'm using different job definitions depending on the DEPLOY_ENVIRONMENT value to make the job manual or automatic; as we just said, we can define different jobs referencing the same rule, adding a condition to check whether the environment is the one we are interested in:
deploy_job_auto:
  rules:
    # Only deploy automatically if the environment is 'dev' by skipping this
    # job for other values of the DEPLOY_ENVIRONMENT variable
    - if: $DEPLOY_ENVIRONMENT != "dev"
      when: never
    - !reference [.rules_release_tag, rules]
  [...]

deploy_job_manually:
  rules:
    # Disable this job if the environment is 'dev'
    - if: $DEPLOY_ENVIRONMENT == "dev"
      when: never
    - !reference [.rules_release_tag, rules]
  when: manual
  # Change this to `false` to make the deployment job blocking
  allow_failure: true
  [...]
The difference in that case is that we reference them at the beginning because we want those negative conditions on all jobs and that is also why we have a .rules_default condition with an when: on_success for the jobs that only need to respect the default workflow (we need the last condition to make sure that they are executed if the negative rules don’t match).
GNUnet News: GNUnet 0.20.0
We are pleased to announce the release of GNUnet 0.20.0.
GNUnet is an alternative network stack for building secure, decentralized and
privacy-preserving distributed applications.
Our goal is to replace the old insecure Internet protocol stack.
Starting from an application for secure publication of files, it has grown to
include all kinds of basic protocol components and applications towards the
creation of a GNU internet.
This is a new major release.
It breaks protocol compatibility with the 0.19.x versions.
Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.19.x GNUnet network, and interactions between old and new peers will result in issues. 0.19.x peers will be able to communicate with Git master or 0.20.x peers, but some services will not be compatible.
In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues, especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.20.0 release is still only suitable for early adopters with some reasonable pain tolerance.
- gnunet-0.20.0.tar.gz ( signature )
- gnunet-gtk-0.20.0.tar.gz ( signature )
- gnunet-fuse-0.20.0.tar.gz ( signature )
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
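A typical verification looks like this (assuming the signature files use the common .sig suffix and that your keyserver carries the key):
# Import the release signing key, then check the tarball against its signature
$ gpg --recv-keys 3D11063C10F98D14BD24D1470B0998EF86F59B6A
$ gpg --verify gnunet-0.20.0.tar.gz.sig gnunet-0.20.0.tar.gz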
Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/
Changes
A detailed list of changes can be found in the git log, the NEWS file, and the bug tracker.
Known Issues
- There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
- There are known moderate implementation limitations in CADET that negatively impact performance.
- There are known moderate design issues in FS that also impact usability and performance.
- There are minor implementation limitations in SET that create unnecessary attack surface for availability.
- The RPS subsystem remains experimental.
- Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.
In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.
Thanks
This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, t3sserakt, TheJackiMonster, Marshall Stone, Özgür Kesim and Martin Schanzenbach.
Jonathan Wiltshire: Debian Family
Last week tragedy struck, and I saw the very best of the Debian community at work.
I heard first hand testimony about how helpless so many people felt at being physically unable to help their friend. I heard about how they couldn’t bear to leave and had to be ushered away to make space for rescue services to do their work. I heard of those who continued the search with private divers, even after the official rescue was called off.
I saw the shock and grief which engulfed everybody who I saw that night and in the following days. I watched friends comfort each other when it became too much. I read the messages we wrote in memory and smiled at how they described the person I’d only just started to know.
When I felt angry, and helpless, and frustrated that I couldn’t do more, the people around me caught me, comforted me, and cared for me.
Debian, you are like family and nobody can claim otherwise. You bicker and argue about the silliest things and sometimes it feels like we’ll never get past them. But when it comes to simple human compassion for each other, you always surprise me with your ability to care.
Drupal.org blog: What’s new on Drupal.org - Q1 2023
Read our roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community. You can also review the Drupal project roadmap.
Editor's note from Tim Lehnen (hestenet): A Q1 update in September? What's the deal?
The Drupal Association has recently undertaken a variety of new initiatives to accelerate innovation in the Drupal project, and to begin expanding our capacity and capabilities by seeking grant funding. Unfortunately, this has meant my personal capacity has been degraded, and I'm afraid I just fell off the wagon of posting these regular updates. I want to thank my colleague Alex Moreno for stepping into the gap to get this news flowing out to all of you again! Now on to the update!
Drupal recognized as a Digital Public Good!
The Digital Public Good registry recognizes digital goods, including open source software, that advance the Sustainable Development Goals as defined by the United Nations. This recognition supports the Association's broader mission to support the Open Web, and should give users of Drupal in the public sector even greater confidence in their choice.
We want to thank the community volunteers who worked with the Drupal Association to make this happen!
Read the news: Drupal recognized as a Digital Public Good
Securing AutoUpdates: OSTIF.org Partnership
The Drupal.org secure signing infrastructure for Automatic Updates and Project Browser is now in testing with the leads on those projects.
The Drupal Association has reached out to the Open Source Technology Improvement Fund to identify a partner to audit PHP TUF and Rugged, the two key software libraries being used to secure the Drupal.org supply chain for Automatic Updates and the Project Browser.
For an initiative as critical as applying automatic updates, external validation from a security vendor is critical.
OSTIF.org was previously involved in arranging the security audits for Python TUF and several other implementations of the framework, making them an ideal partner.
GitLab CI templates
Any project that opts in to using GitLab CI can now take advantage of an off-the-shelf .gitlab-ci.yml template that configures testing to follow core development. This template uses include files maintained by the Drupal Association on an ongoing basis.
We expect to open GitLab CI to every project on Drupal.org at DrupalCon Pittsburgh.
Having a template that we could centrally maintain was essential to being able to enable GitLab CI for every project, as many Drupal.org project maintainers are not CI experts.
This makes the migration much easier.
GitLab Issue Credit
After scaffolding out our plan to manage contribution credit in GitLab, we have implemented a full development prototype, which allows a credit node to be created on modern Drupal, with a webhook from a bot-created comment on GitLab.
This allows us to create and store contribution credit information from an external source, and could even be expanded beyond GitLab.
This clears one of the final barriers to completing our tooling migration to GitLab.
With GitLab CI nearly ready for every project, and a solution for credit fully prototyped, our last major step is to solve the issue workflow (see the next topic).
GitLab Issue Migration
With a solution for credit in pre-production and GitLab CI about to roll out to all projects, the next phase is to move forward with migrating projects to GitLab issues.
The key problem to solve is shared access to issue forks. In Drupal, we're used to being able to collaborate with each other by default. In GitLab (and GitHub and most other tools) collaborators typically have to manually request access, or fork into a private workspace.
We are going to use an issue bot and webhooks to create simple tools for contributors to collaborate.
Moving to GitLab is part of our commitment to removing friction from the Drupal contribution process, and helps us to keep up to date in the latest innovations in code collaboration platforms.
However, we still lead these platforms in collaborative workflow. In fact, GitLab's own new 'Community Fork' experiment looks to implement ideas we pioneered, in a GitLab context. Bringing these two ideas together is the best of both worlds.
Events.Drupal.org on Drupal 9, pending Drupal 10 update
Events.drupal.org was the first drupal.org property to be updated to Drupal 9, and is being prepared for an upgrade to Drupal 10 when the final contributed module is ready.
Up next:
- api.drupal.org
- localize.drupal.org
- www.drupal.org project endpoints
- www.drupal.org marketing pages
Drupal.org is historically one of the last sites to upgrade to the latest version of Drupal. This is because we have a lot of unique project infrastructure that is not in use by other Drupal end users, and so it is likely the last functionality to be ported.
Upgrading the events site gave us a greater than 5x improvement in performance, with improved caching behavior and better editorial tools, and it is battle-testing our new k8s-based hosting cluster.
Improvements to anti-spam controls on Drupal.org registration
We've made changes to Drupal.org's anti-spam protection that detect bot-like behavior or repeated account creation.
These changes decouple the anti-spam/anti-bot behavior from the account registration process, an important step so that we can move to a new Single Sign On system for Drupal.org.
Every moment spent on cleaning up spam is a moment not spent on Drupal contribution, so this is a critically important, if often invisible, part of our work to protect and moderate Drupal.org.
We also thank the volunteer Drupal site moderators for their work to support this effort.
Multiple performance improvements (and bug fixes) to our GitLab installation
GitLab has feature releases and security releases every month, and keeping git.drupalcode.org up to date is an important part of the work we do.
In the first quarter of 2023, several of these updates introduced unexpected performance issues having to do with repository file sizes, replication, etc.
We were able to work with the upstream maintainers of GitLab itself to resolve these issues and improve the overall performance of git.drupalcode.org.
As always, we’d like to thank all the volunteers who work with us and the Drupal Association Supporters who help to fund our work. In particular, we want to thank:
- Acquia - Renewing Enterprise Supporting Partner
- Annertech - Renewing Signature Supporting Partner
- Elevated Third - Renewing Signature Supporting Partner
- FFW - Renewing Signature Supporting Partner
- Full Fat Things - Renewing Signature Supporting Partner
- Centarro - Renewing Premium Supporting Partner
- Dotsquares - *UPGRADE* Premium Supporting Partner
- Dropsolid - Renewing Premium Supporting Partner
- Pantheon - Renewing Premium Supporting Partner
- Promet Source - Renewing Premium Supporting Partner
- Vardot - Renewing Premium Supporting Partner
- Zyxware - Renewing Premium Supporting Partner
- Bear Group - Renewing Classic Supporting Partner
- Berger Schmidt - Renewing Classic Supporting Partner
- Factorial GmbH - Renewing Classic Supporting Partner
- JMA Consulting - *NEW* Classic Supporting Partner
- LN Webworks - Renewing Classic Supporting Partner
- Mobomo - Renewing Classic Supporting Partner
- Redfin Solutions - Renewing Classic Supporting Partner
- Spry Digital - Renewing Classic Supporting Partner
- Docomo Innovations - Renewing Community Supporting Partner
- Drunomics - Renewing Community Supporting Partner
- Highlight Technologies - Renewing Community Supporting Partner
- Icon Agency - *NEW* Community Supporting Partner
- JAVALI - *NEW* Community Supporting Partner
- LimoenGroen - Renewing Community Supporting Partner
- Magnétic - *NEW* Community Supporting Partner
- Metadrop - *NEW* Community Supporting Partner
- RatioWeb - Renewing Community Supporting Partner
- XIMA MEDIA - *NEW* Community Supporting Partner