Feeds
PythonAnywhere: Improving PythonAnywhere's File Storage System
PythonAnywhere has been around for over 10 years, and as our platform continues to grow with thousands of users, we’re committed to keeping it in top shape. Part of this involves upgrading some of the older parts of our infrastructure, with a special focus on our file storage servers—some of the oldest systems we have.
Ensuring Product Longevity With Qt Long-Term Support
As we continue to evolve and adapt the Qt Framework to the needs of our users and upcoming regulation changes, we are excited to announce some significant changes to our Long-Term Support (LTS) policy from Qt 6.8 onwards. The changes are designed to provide a more robust and predictable support strategy, ensuring your projects remain secure and stable over their entire lifecycle.
Julien Tayon: Tune your guitar with python
Long story short, I suck at tuning my instrument and just lost my tuner...
This will require the Python modules sounddevice and matplotlib.
So, in order to tune my guitar, I need a spectrogram that displays the frequencies captured in real time by an audio device, with an output readable enough that I can actually tell whether I am nearing a legitimate frequency, i.e. a note.
The frequencies for the notes are pretty arbitrary, and I chose to only show the frequencies for E, A, D, G, and B since I have a 5-string bass.
I chose to display frequencies between 100 and 2000 Hz, knowing that any frequency below will trigger harmonics and any frequency above will trigger resonance in the right frequency range.
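For reference, once you accept the conventional A4 = 440 Hz pitch, twelve-tone equal temperament fixes every other note: each semitone is a factor of 2^(1/12). Here is a small sketch, separate from the tuner script below, that recomputes the open-string fundamentals of a 5-string bass; the MIDI note numbers used here are my own illustration, not something from the original post.

    def note_freq(midi_note, a4=440.0):
        """Frequency in Hz of a MIDI note number (69 corresponds to A4)."""
        return a4 * 2 ** ((midi_note - 69) / 12)

    # Open strings of a 5-string bass (MIDI numbers assumed for illustration)
    for name, midi in [("B0", 23), ("E1", 28), ("A1", 33), ("D2", 38), ("G2", 43)]:
        print(f"{name}: {note_freq(midi):6.2f} Hz")

Running this prints roughly 30.87, 41.20, 55.00, 73.42 and 98.00 Hz, which line up with the tick values used in the script below.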
Plotting the spectrogram is done by tweaking matplotlib's eponymous specgram plotter with values chosen to fit my needs and show me a laser-thin beam around the right frequency.

    #!/usr/bin/env python3
    """Show a text-mode spectrogram using live microphone data."""
    import argparse
    import math
    import shutil

    import matplotlib.pyplot as plt
    from multiprocessing import Process, Queue
    import matplotlib.animation as animation
    import numpy as np
    import sounddevice as sd

    usage_line = ' press enter to quit,'


    def int_or_str(text):
        """Helper function for argument parsing."""
        try:
            return int(text)
        except ValueError:
            return text


    try:
        columns, _ = shutil.get_terminal_size()
    except AttributeError:
        columns = 80

    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument(
        '-l', '--list-devices', action='store_true',
        help='show list of audio devices and exit')
    args, remaining = parser.parse_known_args()
    if args.list_devices:
        print(sd.query_devices())
        parser.exit(0)
    parser = argparse.ArgumentParser(
        description=__doc__ + '\n\nSupported keys:' + usage_line,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[parser])
    parser.add_argument(
        '-b', '--block-duration', type=float, metavar='DURATION', default=50,
        help='block size (default %(default)s milliseconds)')
    parser.add_argument(
        '-d', '--device', type=int_or_str,
        help='input device (numeric ID or substring)')
    parser.add_argument(
        '-g', '--gain', type=float, default=10,
        help='initial gain factor (default %(default)s)')
    parser.add_argument(
        '-r', '--range', type=float, nargs=2,
        metavar=('LOW', 'HIGH'), default=[50, 4000],
        help='frequency range (default %(default)s Hz)')
    args = parser.parse_args(remaining)

    low, high = args.range
    if high <= low:
        parser.error('HIGH must be greater than LOW')

    q = Queue()

    try:
        samplerate = sd.query_devices(args.device, 'input')['default_samplerate']

        def plot(q):
            global samplerate
            fig, (ax, axs) = plt.subplots(nrows=2)
            plt.ioff()

            def animate(i, q):
                data = q.get()
                ax.clear()
                axs.clear()
                axs.plot(data)
                ax.set_yticks([
                    41.20, 82.41, 164.8, 329.6, 659.3,  # E
                    55.00, 110.0, 220.0, 440.0, 880.0,  # A
                    73.42, 146.8, 293.7, 587.3,         # D
                    49.00, 98.00, 196.0, 392.0, 784.0,  # G
                    61.74, 123.5, 246.9, 493.9, 987.8,  # B
                ])
                ax.specgram(data[:, -1], mode="magnitude", Fs=samplerate * 2,
                            scale="linear", NFFT=9002)
                ax.set_ylim(150, 1000)

            ani = animation.FuncAnimation(fig, animate, fargs=(q,), interval=500)
            plt.show()

        plotrt = Process(target=plot, args=(q,))
        plotrt.start()

        def callback(indata, frames, time, status):
            if any(indata):
                q.put(indata)
            else:
                print('no input')

        with sd.InputStream(device=args.device, channels=1, callback=callback,
                            blocksize=int(samplerate * args.block_duration / 50),
                            samplerate=samplerate) as sound:
            while True:
                response = input()
                if response in ('', 'q', 'Q'):
                    break
                for ch in response:
                    if ch == '+':
                        args.gain *= 2
                    elif ch == '-':
                        args.gain /= 2
                    else:
                        print('\x1b[31;40m',
                              usage_line.center(columns, '#'),
                              '\x1b[0m', sep='')
                        break
    except KeyboardInterrupt:
        parser.exit('Interrupted by user')
    except Exception as e:
        parser.exit(type(e).__name__ + ': ' + str(e))
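If you would rather see a number than eyeball the plot, a possible extension (not part of the original script) is to estimate the dominant frequency of each captured block with an FFT. A minimal sketch, assuming a block of mono samples like the ones the callback above pushes onto the queue:

    import numpy as np

    def dominant_frequency(block, samplerate, fmin=30.0, fmax=2000.0):
        """Return the strongest frequency (in Hz) found in a block of samples."""
        samples = np.asarray(block, dtype=float).ravel()
        window = np.hanning(len(samples))          # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(samples * window))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / samplerate)
        mask = (freqs >= fmin) & (freqs <= fmax)   # only look in the displayed range
        return freqs[mask][np.argmax(spectrum[mask])]

    # Quick self-check: a pure 55 Hz (open A string) sine sampled at 44100 Hz
    t = np.arange(44100) / 44100
    print(dominant_frequency(np.sin(2 * np.pi * 55 * t), 44100))

The Hann window is applied before picking the strongest bin so that a slightly out-of-tune string still shows up as one clear peak.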
Drupal Starshot blog: Drupal CMS base recipe update for initial release
Drupal CMS will come pre-installed with a set of modules and themes, using recipes, effectively replacing the "Standard" install profile. These recipes will provide the functionality that is considered must-have in modern CMSes, as well as what is deemed essential for our target persona, and will improve the overall user experience.
We have been calling this the base recipe, which adds functionality on its own (e.g. installing the necessary core and contrib modules) and also selects other recipes to be applied by default. A while back we ran a survey to ask the community what features they felt were essential for the out-of-the-box offering and this has informed the inclusions.
Along with the survey, we have done market research and benchmarking to see what our competitors include. But putting together a single proposal for the base recipe has proven challenging, because some features that we want are not yet available, or have some potential to conflict with Experience Builder or other upcoming initiatives. In some cases, contrib modules exist to provide a particular feature, but if it is not a high priority for our target user, we have left it out in order to focus our attention on what is.
So this plan is for the initial release of Drupal CMS, scheduled for 15 January 2025. New features will of course be added to future releases, and we plan to launch new work tracks with this in mind soon.
Current state of the base recipe
If you are not up for parsing the recipe.yml file linked above, here is a summary of what it currently does, and why:
- Installs a bunch of core modules and applies some core recipes. Why: we are no longer using install profiles, so we have to add the foundational stuff somehow.
- Adds a redirect on access denied to the login form, and then to the original destination (via ECA). Why: so users can easily reach their intended destination even if their session has expired.
- Adds support for logging in with email in addition to username (via Login Email or Username). Why: so users don't have to remember a separate username. There is also an issue for supporting this in core (#111317: Allow users to login using either their username OR their e-mail address), and when that lands we will no longer require a contrib module.
- Adds Gin as the admin theme. Why: Gin provides a more modern UI and, as a contrib theme, is able to innovate faster than Drupal core admin themes.
- Adds Navigation (with a left-side menu) instead of the traditional admin toolbar. Why: so the admin UI feels more modern and aligned with other similar systems. Navigation is an experimental module in core and has a roadmap outlining the path to stable (#3421969: [PLAN] New Navigation and Top Bar to replace Toolbar; Roadmap: Path to Stable).
- Adds a quick search for the admin menu (via Coffee). Why: so users can easily search for the admin page they are looking for.
- Adds Trash module. Why: so users can recover deleted content.
- Adds Linkit support to CKEditor. Why: so users can easily link to site content via search. Note there is an issue for adding a basic version of this in core (#3317769: Drastically improve the linking experience in CKEditor 5), and we would prefer to use that. If it lands before 11.1, we will replace Linkit in the initial release.
- Adds a site dashboard (via Dashboard). Why: so users see a dashboard with relevant content when they first install, and when they log in (replacing /user as the default login page).
- Adds focal point cropping to the image media type (via Focal point). Why: so users can select a focal point for their images to help them display nicely across aspect ratios.
- Adds Project Browser, Automatic updates, and Upgrade status. Why: so users can add modules and keep their sites up to date from the UI, with no developer tools required.
- Adds some media management helper tools (Media entity download and Media file delete). Why: so the default media management experience is more intuitive. This will be extended and updated as part of the Media management track work.
- Adds a Basic page content type. Why: so every site has at least one content type available by default. See the full content strategy for more information.
- Adds content cloning (via Quick node clone). Why: so users can duplicate content to easily create similar pages. This feature is a must-have, but the implementation is still up for discussion in #3474608: Evaluate cloning modules and #3477303: Create recipe to clone entities with ECA.
- Adds foundational SEO functionality: Pathauto and Redirect. Why: most sites require this functionality and the initial setup can be done generically.
Coming soon
Some things that it does not yet include, but most likely will be in the initial release:
- Better default site search. Why: Drupal core search is very limited and not what site owners would expect from a modern platform. Drupal CMS will provide a more robust search experience using Search API. This is being done in the Advanced search work track, with the recipe in progress in #3468271: Add recipe for search backend.
- Autosave on forms (via Autosave Form). Why: so users don't lose their work. This feature is a must-have, but we wanted to ensure the approach did not conflict with Experience Builder's approach to the same problem.
- HTML email sending. Why: so users can send nicely formatted emails without additional configuration. See #3480680: Handle sending email in Drupal CMS.
Coming... sometime?
Some things we would like to include, but have some blockers:
- Better select lists. Why: the default select list experience is suboptimal; however, there is not currently a viable non-jQuery solution for this. We would like to use the Accessible Autocomplete Element/Widget based on the Accessible Autocomplete library, but there are technical limitations around managing front-end dependencies.
- Sitewide alerts. Why: this is a common feature request, but we don't want to implement something that will conflict with Experience Builder when it comes out, leaving sites with a problem to solve. We also feel it is a nice-to-have for our target persona rather than a must-have.
What about [insert feature here]?
This summary covers the base functionality only. So if there is something extremely obvious that seems like it's missing, it is probably covered in one of the other work tracks! Many of them have not yet completed their work, so there are still lots of exciting things to come. Each of the metas links to their current proposal, if they have one. The final track proposals for the initial release are due by 1 November.
If you've scoured the track proposals and the Drupal CMS issue queue and still feel that we're missing a killer feature that is easily included, and high priority for the marketer types that we are focused on, let us know via Slack, in #starshot, or create an issue in the Drupal CMS project.
Dirk Eddelbuettel: drat 0.2.5 on CRAN: Small Updates
A new minor release of the drat package arrived on CRAN today, which is just over a year since the previous release. drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.
Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for over two-and-a-half decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works. Detailed information about drat is at its documentation site. That said, and ‘these days’, if you mainly care about github code then r-universe is there too, also offering binaries it makes and all that jazz. But sometimes you just want to, or need to, roll a local repository and drat can help you there.
This release contains a small PR (made by Arne Holmin just after the previous release) adding support for an ‘OSflavour’ variable (helpful for macOS). We also corrected an issue with one test file being insufficiently careful about using git2r only when installed, and as usual did a round of maintenance for the package concerning both continuous integration and documentation.
The NEWS file summarises the release as follows:
Changes in drat version 0.2.5 (2024-10-21)
- Function insertPackage has a new optional argument OSflavour (Arne Holmin in #142)
- A test file conditions correctly about git2r being present (Dirk)
- Several smaller packaging updates and enhancements to continuous integration and documentation have been added (Dirk)
Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page as well as at the documentation site.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
KDE and Google Summer of Code 2024
All but one of KDE's Google Summer of Code (GSoC) projects are complete. This post will summarize the completed project outcomes. GSoC is a program where people who are students or are new to Free and Open Source software make programming contributions to an open source project.
Projects
Arianna
Port Arianna to Foliate-js:
Ajay Chauhan worked on porting Arianna from epub.js to Foliate-js. The work will hopefully be merged soon.
Frameworks
Python bindings for KDE Frameworks:
Manuel Alcaraz Zambrano implemented Python bindings for KWidgetAddons, KUnitConversion, KCoreAddons, KGuiAddons, KI18n, KNotifications, and KXmlGUI. This was done using Shiboken. In addition, Manuel wrote a tutorial on how to generate Python bindings using Shiboken. The complicated set of merge requests is still being reviewed, and Manuel continues to interact with the KDE community.
KDE Connect
Update SSHD library in KDE Connect Android app
The main aim of ShellWen Chen's project was to update Apache Mina SSHD from 0.14.0 to 2.12.1. The older version has a few listed vulnerabilities. The newer library required additional code to enable it to work on older Android phones, down to Android API 21.
KDE Games
Implementing a computerized opponent for the Mancala variant Bohnenspiel:
João Gouveia created the Mankala engine, a library to enable easy creation of Mancala games. The engine contains implementations for two Mancala games, Bohnenspiel and Oware. Both games contain computerized opponents, and João also started on a QtQuick graphical user interface. The games are functional, but additional investigation into the computerized opponents may help improve their effectiveness.
Kdenlive
Improved subtitling support for Kdenlive:
Kdenlive has gotten improved subtitling support. Chengkun Chen added support for using the Advanced SubStation (ASS) file format and for converting SubRip files to ASS files. To support this format, Chengkun Chen also made subtitling editor improvements. The work has been merged in the main repository. Documentation has been written, and will hopefully be merged soon.
Krita
Creating Pixel Perfect Tool for Krita:
Ken Lo worked on implementing Pixel Perfect lines in Krita. As explained by Ricky Han, such algorithms remove corner pixels from L-shaped blocks and ensure the thinnest possible line is 1 pixel wide. Implementing such algorithms well is of use not only in Krita, but also in rendering web graphics where user screen resolutions can vary significantly. The algorithm was implemented to work in close to real time while lines are drawn, rather than as a post-processing step. Ken Lo's work has been merged into Krita.
Labplot
Improve Python Interoperability with LabPlot
Israel Galadima worked on improving Python support in LabPlot, using Shiboken. It is now possible to call some of LabPlot's functions from Python and integrate these into other applications.
Kuntal Bar added 3D graphing abilities to LabPlot, using QtGraphs. The work has yet to be merged, but there are many nice examples of 3D plots: bar charts, scatter plots, and surface plots.
Snaps
Improving Snap Ecosystem in KDE
Snaps are a self-contained Linux application packaging format. Soumyadeep Ghosh worked on improving the tooling necessary to make KDE applications easily available in the Snap Store. In addition, Soumyadeep improved the packaging of a number of KDE Snap packages, and packaged MarkNote. Finally, Soumyadeep created Snap KCM, a graphical user interface to manage the permissions that Snaps have when running.
Next Steps
The GSoC period is over, for all but one contributor, Pratham Gandhi. A follow up post will summarize contributions from the remaining project. Contributors have enjoyed participating in GSoC and we look forward to their continuing participation in free and open source software communities and in contributing to KDE.
KDE Plasma 6.2.2, Bugfix Release for October
Tuesday, 22 October 2024. Today KDE releases a bugfix update to KDE Plasma 6, versioned 6.2.2.
Plasma 6.2 was released in October 2024 with many feature refinements and new modules to complete the desktop experience.
This release adds a week's worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:
- KWin Backends/drm: leave all outputs disabled by default, including VR headsets. Commit. Fixes bug #493148
- KWin Set WAYLAND_DISPLAY before starting wayland server. Commit.
- Plasma Audio Volume Control: Fix text display for auto_null device. Commit. Fixes bug #494324
FSF News: FSF associate members to assist in review of current board members
parallel @ Savannah: GNU Parallel 20241022 ('Sinwar Nasrallah') released [stable]
GNU Parallel 20241022 ('Sinwar Nasrallah') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
GNU Parallel is one of the most helpful tools I've been using recently, and it's just something like: parallel -j4 'gzip {}' ::: folder/*.csv
-- Milton Pividori @miltondp@twitter
New in this release:
- No new features. This is a candidate for a stable release.
- Bug fixes and man page updates.
News about GNU Parallel:
- Separate arguments with a custom separator in GNU Parallel https://boxofcuriosities.co.uk/post/separate-arguments-with-a-custom-separator-in-gnu-parallel
- GNU parallel is underrated https://amontalenti.com/2021/11/10/parallel
- Unlocking the Power of Supercomputers: My HPC Adventure with 2800 Cores and GNU Parallel https://augalip.com/2024/03/10/unlocking-the-power-of-supercomputers-my-hpc-adventure-with-2800-cores-and-gnu-parallel/
- Converting WebP Images to PNG Using parallel and dwebp https://bytefreaks.net/gnulinux/bash/converting-webp-images-to-png-using-parallel-and-dwebp
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
- Give a demo at your local user group/team/colleagues
- Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
- Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
- Request or write a review for your favourite blog or magazine
- Request or build a package for your favourite distribution (if it is not already there)
- Invite me for your next conference
If you use programs that use GNU Parallel for research:
- Please cite GNU Parallel in your publications (use --citation)
If GNU Parallel saves you money:
- (Have your company) donate to FSF https://my.fsf.org/donate/
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
Talking Drupal: Talking Drupal #472 - Access Policy API
Today we are talking about Access Policy API, What it does, and How you can use it with guest Kristiaan Van den Eynde. We’ll also cover Visitors as our module of the week.
For show notes visit: https://www.talkingDrupal.com/472
Topics
- What is the Access Policy API
- Why does Drupal need the Access Policy API
- How did Drupal handle access before
- How does the Access Policy API interact with roles
- Does a module exist that shows a UI
- What is the difference between Policy Based Access Control (PBAC), Attribute Based Access Control (ABAC) and Role Based Access Control (RBAC)
- How does Access Policy API work with PBAC, ABAC and RBAC
- Can you apply an access policy via a recipe
- Is there a roadmap
- What was it like going through Pitch-burgh
- How can people get involved
- Access Policy API
- Access Policy
- Talking Drupal #226 Group
- Flexible Permissions
- External roles
- Test Super access policy
- Access policy talk at DrupalCon Barcelona
- D.o Issue about exception on security issue
Kristiaan Van den Eynde - kristiaanvandeneynde
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Aubrey Sambor - star-shaped.org starshaped
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted a Drupal-native solution for tracking website visitors and their behavior? There’s a module for that
- Module name/project name: Visitors
- Brief history
- How old: created in Mar 2009 by gashev, though recent releases are by Steven Ayers (bluegeek9)
- Versions available: 8.x-2.19, which works with Drupal 10 and 11
- Maintainership
- Actively maintained
- Security coverage
- Test coverage
- Documentation guide is available
- Number of open issues: 20 open issues, none of which are bugs against the 8.x branch
- Usage stats:
- Over 6,000 sites
- Module features and usage
- A benefit of using a Drupal-native solution is that you retain full ownership over your visitor data. Not sharing that data with third parties can be important for data protection regulations, as well as data privacy concerns.
- You also have a variety of reports you can access directly within the Drupal UI, including top pages, referrers, and more
- There is a submodule for geoip lookups using Maxmind, if you also want reporting on what region, country, or city your visitors hail from
- It provides drush commands to download a geoip database, and then update your data based on geoip lookups using that database
- It should be mentioned that the downside of using Drupal as your analytics solution is the potential performance impact and also a likely uptick in usage for hosts that charge based on the number of dynamic requests served
Sahil Dhiman: Free Software Mirrors in India
List of public mirrors in India. Locations were discovered on the basis of personal knowledge, traces, or GeoIP. Mirrors which aren’t accessible outside their own ASN are excluded.
North India
- Bharat Datacenter - mirror.bharatdatacenter.com (AS151704)
- CSE Department, IIT Kanpur - mirror.cse.iitk.ac.in (AS55479)
- Cyfuture - cyfuture.dl.sourceforge.net (AS55470)
- Extreme IX - repos.del.extreme-ix.org | repo.extreme-ix.org (AS135814)
- Garuda Linux - in-mirror.garudalinux.org (AS133661)
- Hopbox - mirrors.hopbox.net (AS10029)
- IIT Delhi - mirrors.iitd.ac.in (AS132780)
- NKN - debianmirror.nkn.in (AS4758)
- Nxtgen - mirrors.nxtgen.com (AS132717)
- Saswata Sarkar - mirrors.saswata.cc (AS132453)
- Shiv Nadar Institution of Eminence - ubuntu-mirror.snu.edu.in (AS132785)
- NISER Bhubaneshwar - mirror.niser.ac.in (AS141288)
- Cogan Ng - in.mirror.coganng.com (AS31898)
- CUSAT - foss.cusat.ac.in/mirror (AS55824)
- Excell Media - centos-stream.excellmedia.net | excellmedia.dl.sourceforge.net (AS17754)
- IIT Madras - ftp.iitm.ac.in (AS141340)
- NIT Calicut - mirror.nitc.ac.in (AS55824)
- NKN - mirrors-1.nkn.in (AS148003)
- Planet Unix - mirror.planetunix.net | ariel.in.ext.planetunix.net (AS14061)
- Shrirang Kahale - mirror.maa.albony.in (AS24560)
- Abhinav Krishna C K - mirrors.abhy.me (AS31898)
- Arun Mathai - mirrors.arunmathaisk.in (AS141995)
- Balvinder Singh Rawat - mirror.ubuntu.bsr.one (AS31898)
- ICTS - cran.icts.res.in (AS134322)
- Nilesh Patra - mirrors.nileshpatra.info (AS31898)
- PicoNets-WebWerks - mirrors.piconets.webwerks.in (AS133296)
- Ravi Dwivedi - mirrors.ravidwivedi.in (AS141995)
- Sahil Dhiman - mirrors.in.sahilister.net (AS141995)
- Shrirang Kahale - mirror.bom.albony.in | mirror.nag.albony.in (AS24560)
- Starburst Services - almalinux.in.ssimn.org | elrepo.in.ssimn.org | epel.in.ssimn.org | mariadb.in.ssimn.org (AS141995)
- Unknown - mirror.4v1.in (AS24560)
- Utkarsh Gupta - mirrors.utkarsh2102.org (AS31898)
- Amazon Cloudfront - cdn-aws.deb.debian.org (AS16509)
- Cicku - in.mirrors.cicku.me (AS13335)
- CIQ - rocky-linux-asia-south1.production.gcp.mirrors.ctrliq.cloud | rocky-linux-asia-south2.production.gcp.mirrors.ctrliq.cloud (GeoIP doubtful. Could be behind a CDN or single node) (AS396982)
- Cloudflare - cloudflare.cdn.openbsd.org | kali.download/ (AS13335)
- Edgecast - mirror.edgecast.com (AS15133)
- Fastly - cdn.openbsd.org | deb.debian.org | dlcdn.apache.org | dl-cdn.alpinelinux.org (sponsored?) | images-cdn.endlessm.com (sponsored?) | repo-fastly.voidlinux.org (AS54113)
- Naman Garg - in-mirror.chaotic.cx (AS13335)
- Microsoft - debian-archive.trafficmanager.net (AS8075)
- Niranjan Fartare - arch.niranjan.co | termux.niranjan.co (AS13335)
- Sahil Kokamkar - mirror.sahil.world (AS13335)
Let me know if I’m missing someone or something is amiss.
FSF Blogs: FSD meeting recap 2024-10-18
Real Python: Python's property(): Add Managed Attributes to Your Classes
With Python’s property(), you can create managed attributes in your classes. You can use managed attributes when you need to modify an attribute’s internal implementation and don’t want to change the class’s public API. Providing stable APIs will prevent you from breaking your users’ code when they rely on your code.
Properties are arguably the most popular way to create managed attributes quickly and in the purest Pythonic style.
In this tutorial, you’ll learn how to:
- Create managed attributes or properties in your classes
- Perform lazy attribute evaluation and provide computed attributes
- Make your classes Pythonic using properties instead of setter and getter methods
- Create read-only and read-write properties
- Create consistent and backward-compatible APIs for your classes
You’ll also write practical examples that use property() for validating input data, computing attribute values dynamically, logging your code, and more. To get the most out of this tutorial, you should know the basics of object-oriented programming, classes, and decorators in Python.
Get Your Code: Click here to download the free sample code that shows you how to use Python’s property() to add managed attributes to your classes.
Take the Quiz: Test your knowledge with our interactive “Python's property(): Add Managed Attributes to Your Classes” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python's property(): Add Managed Attributes to Your Classes
In this quiz, you'll test your understanding of Python's property(). With this knowledge, you'll be able to create managed attributes in your classes, perform lazy attribute evaluation, provide computed attributes, and more.
Managing Attributes in Your Classes
When you define a class in an object-oriented programming language, you’ll probably end up with some instance and class attributes. In other words, you’ll end up with variables that are accessible through the instance, class, or even both, depending on the language. Attributes represent and hold the internal state of a given object, which you’ll often need to access and mutate.
Typically, you have at least two ways to access and mutate an attribute. Either you can access and mutate the attribute directly or you can use methods. Methods are functions attached to a given class. They provide the behaviors and actions that an object can perform with its internal data and attributes.
If you expose attributes to the user, then they become part of the class’s public API. This means that your users will access and mutate them directly in their code. The problem comes when you need to change the internal implementation of a given attribute.
Say you’re working on a Circle class and add an attribute called .radius, making it public. You finish coding the class and ship it to your end users. They start using Circle in their code to create a lot of awesome projects and applications. Good job!
Now suppose that you have an important user that comes to you with a new requirement. They don’t want Circle to store the radius any longer. Instead, they want a public .diameter attribute.
At this point, removing .radius to start using .diameter could break the code of some of your other users. You need to manage this situation in a way other than removing .radius.
Programming languages such as Java and C++ encourage you to never expose your attributes to avoid this kind of problem. Instead, you should provide getter and setter methods, also known as accessors and mutators, respectively. These methods offer a way to change the internal implementation of your attributes without changing your public API.
Note: Getter and setter methods are often considered an anti-pattern and a signal of poor object-oriented design. The main argument behind this proposition is that these methods break encapsulation. They allow you to access and mutate the components of your objects from the outside.
These programming languages need getter and setter methods because they don’t have a suitable way to change an attribute’s internal implementation when a given requirement changes. Changing the internal implementation would require an API modification, which can break your end users’ code.
The Getter and Setter Approach in Python
Technically, there’s nothing that stops you from using getter and setter methods in Python. Here’s a quick example that shows how this approach would look:
point_v1.py

    class Point:
        def __init__(self, x, y):
            self._x = x
            self._y = y

        def get_x(self):
            return self._x

        def set_x(self, value):
            self._x = value

        def get_y(self):
            return self._y

        def set_y(self, value):
            self._y = value

In this example, you create a Point class with two non-public attributes ._x and ._y to hold the Cartesian coordinates of the point at hand.
Note: Python doesn’t have the notion of access modifiers, such as private, protected, and public, to restrict access to attributes and methods. In Python, the distinction is between public and non-public class members.
If you want to signal that a given attribute or method is non-public, then you have to use the well-known Python convention of prefixing the name with an underscore (_). That’s the reason behind the naming of the attributes ._x and ._y.
Note that this is just a convention. It doesn’t stop you and other programmers from accessing the attributes using dot notation, as in obj._attr. However, it’s bad practice to violate this convention.
To access and mutate the value of either ._x or ._y, you can use the corresponding getter and setter methods. Go ahead and save the above definition of Point in a Python module and import the class into an interactive session. Then run the following code:
    >>> from point_v1 import Point
    >>> point = Point(12, 5)
    >>> point.get_x()
    12
    >>> point.get_y()
    5
    >>> point.set_x(42)
    >>> point.get_x()
    42
    >>> # Non-public attributes are still accessible
    >>> point._x
    42
    >>> point._y
    5

With .get_x() and .get_y(), you can access the current values of ._x and ._y. You can use the setter method to store a new value in the corresponding managed attribute. From the two final examples, you can confirm that Python doesn’t restrict access to non-public attributes. Whether or not you access them directly is up to you.
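The rest of the tutorial (beyond this excerpt) shows how property() removes the need for these explicit getter and setter calls. As a rough preview, and not code taken from the article itself, the same Point class could expose x and y through the @property decorator while keeping the underscore-prefixed storage:

    class Point:
        def __init__(self, x, y):
            self._x = x
            self._y = y

        @property
        def x(self):
            """The x coordinate, routed through a getter under the hood."""
            return self._x

        @x.setter
        def x(self, value):
            self._x = value

        @property
        def y(self):
            """The y coordinate, routed through a getter under the hood."""
            return self._y

        @y.setter
        def y(self, value):
            self._y = value

    point = Point(12, 5)
    point.x = 42        # looks like plain attribute access...
    print(point.x)      # ...but goes through the setter and getter above

Callers keep writing plain attribute syntax, so the class remains free to change the internal implementation later without breaking its public API.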
Read the full article at https://realpython.com/python-property/ »
SystemSeed.com: Prestigious medical journal - The Lancet - features SystemSeed project
Elise West, Evgeniy Maslovskiy and Andrey Yurtaev receive co-author credits for their work on the World Health Organization's EQUIP project
Tamsin Fox-Davies Mon, 10/21/2024 - 13:50
Sven Hoexter: Terraform: Making Use of Precondition Checks
I'm in the unlucky position to have to deal with GitHub. Thus I've a terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets.
Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human-readable organization/repo format), we have to do some mapping in between. In my case it looks like this:
    # tfvars Input for Module
    org_secrets = {
      "SECRET_A" = {
        repos = [
          "infra-foo",
          "infra-baz",
          "deployment-foobar",
        ]
      }
      "SECRET_B" = {
        repos = [
          "job-abc",
          "job-xyz",
        ]
      }
    }

    # Module Code
    /*
    Limitation: The GH search API which is queried returns at most 1000
    results. Thus whenever we reach that limit this approach will no longer
    work. The query is also intentionally limited to internal repositories
    right now.
    */
    data "github_repositories" "repos" {
      query           = "org:myorg archived:false -is:public -is:private"
      include_repo_id = true
    }

    /*
    The properties of the github_repositories.repos data source queried above
    contain only lists. Thus we have to manually establish a mapping between
    the repository names we need as a lookup key later on, and the repository
    id we got in another list from the search query above.
    */
    locals {
      # Assemble the set of repository names we need repo_ids for
      repos = toset(flatten([for v in var.org_secrets : v.repos]))

      # Walk through all names in the query result list and check
      # if they're also in our repo set. If yes add the repo name -> id
      # mapping to our resulting map
      repos_and_ids = {
        for i, v in data.github_repositories.repos.names :
        v => data.github_repositories.repos.repo_ids[i]
        if contains(local.repos, v)
      }
    }

    resource "github_actions_organization_secret" "org_secrets" {
      for_each    = var.org_secrets
      secret_name = each.key
      visibility  = "selected"
      # the logic for how the secret value is sourced is omitted here
      plaintext_value = data.xxx
      selected_repository_ids = [
        for r in each.value.repos : local.repos_and_ids[r]
        if can(local.repos_and_ids[r])
      ]
    }

Now if we do something bad, delete a repository and forget to remove it from the configuration for the module, we receive some error message that a (numeric) repository ID could not be found. Pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but got deleted recently.
Luckily, since version 1.2 terraform supports precondition checks, which we can use in an output block to provide the information about which repository is missing. What we need is the set of missing repositories and the validation condition:
    locals {
      # Debug facility in combination with an output and precondition check
      # There we can report which repository we still have in our configuration
      # but no longer get as a result from the data provider query
      missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
    }

    # Debug facility - If we can not find every repository in our
    # search query result, report those repos as an error
    output "missing_repos" {
      value = local.missing_repos
      precondition {
        condition     = length(local.missing_repos) == 0
        error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
      }
    }

Now you only have to be aware that GitHub is GitHub and the TF provider has open bugs, but is not supported by GitHub, and you will encounter inconsistent results. But it works, even if your terraform apply failed that way.
Kdenlive 24.08.2 released
Kdenlive 24.08.2 is out with many fixes to a wide range of bugs and regressions.
- Fix title producer update on edit undo. Commit. Fixes bug #494142.
- Fix typo in dance.xml. Commit.
- Fix single item(s) move. Commit.
- Fix cycle effects playing timeline and sometimes broken after reopening project. Commit.
- Fix recent regression breaking all sort of things when opening projects. Commit.
- Fix crash when dragging clip and using mouse wheel. Commit.
- Don’t play when clicking monitor container if disabled in settings. Commit.
- Fix effect zones lost on project reopening. Commit.
- Various fixes for bin clip effects. Commit.
- Disable check for ghost effects that currently removes valid effects. Commit.
- Detect and fix track producers with incorrect effects. Commit.
- Fix bin effects sometimes not correctly removed from timeline instance. Commit.
- Don’t try to build clone effect if it does not apply to the target. Commit.
- Don’t unnecessarily check MLT tractors. Commit.
- Fix crash opening file with missing clips. Commit.
- Fix crash on project close. Commit.
- Fix compilation. Commit.
- Fix possible crash opening an interlaced project. Commit.
- Fix on monitor seek to next/previous keyframe buttons. Commit.
- Fix crash editing keyframes in a bin clip with grouped effects enabled. Commit.
- Don’t try to connect to dbus jobview on command line rendering. Commit.
- Fix Qt5 compilation. Commit.
- Fix looping through clips in project monitor effect scene. Commit.
- Fix loop selected clip. Commit.
The post Kdenlive 24.08.2 released appeared first on Kdenlive.
GNU Guix: Build User Takeover Vulnerability
A security issue has been identified in guix-daemon which allows a local user to gain the privileges of any of the build users and subsequently use this to manipulate the output of any build. You are strongly advised to upgrade your daemon now (see instructions below), especially on multi-user systems.
This exploit requires the ability to start a derivation build and the ability to run arbitrary code with access to the store in the root PID namespace on the machine the build occurs on. As such, this represents an increased risk primarily to multi-user systems and systems using dedicated privilege-separation users for various daemons: without special sandboxing measures, any process of theirs can take advantage of this vulnerability.
Vulnerability
For a very long time, guix-daemon has helpfully made the outputs of failed derivation builds available at the same location they were at in the build container. This has aided greatly especially in situations where test suites require the package to already be installed in order to run, as it allows one to re-run the test suite interactively outside of the container when built with --keep-failed. This transferral of store items from inside the chroot to the real store was implemented with a simple rename, and no modification of the store item or any files it may contain.
If an attacker starts a build of a derivation that creates a binary with the setuid and/or setgid bit in an output directory, then, when the build fails, that binary will be accessible unaltered for anybody on the system. The attacker or a cooperating user can then execute the binary, gain the privileges, and from there use a combination of signals and procfs to freeze a builder, open any file it has open via /proc/$PID/fd, and overwrite it with whatever it wants. This manipulation of builds can happen regardless of which user started the build, so it can work not only for producing compromised outputs for commonly-used programs before anybody else uses them, but also for compromising any builds another user happens to start.
A related vulnerability was also discovered concerning the outputs of successful builds. These were moved - also via rename() - outside of the container prior to having their permissions, ownership, and timestamps canonicalized. This means that there also exists a window of time for a successful build's outputs during which a setuid/setgid binary can be executed.
In general, any time that a build user running a build for some submitter can get a setuid/setgid binary to a place the submitter can execute it, it is possible for the submitter to use it to take over the build user. This situation always occurs when --disable-chroot is passed to guix-daemon. This holds even in the case where there are no dedicated build users, and builds happen under the same user the daemon runs as, as happens during make check in the guix repository. Consequently, if a permissive umask that allows execute permission for untrusted users on directories all the way to a user's guix checkout is used, an attacker can use that user's test-environment daemon to gain control over their user while make check is running.
Mitigation
This security issue has been fixed by two commits. Users should make sure they have updated to the second commit to be protected from this vulnerability. Upgrade instructions are in the following section. If there is a possibility that a failed build has left a setuid/setgid binary lying around in the store by accident, run guix gc to remove all failed build outputs.
The fix was accomplished by sanitizing the permissions of all files in a failed build output prior to moving it to the store, and also by waiting to move successful build outputs to the store until after their permissions had been canonicalized. The sanitizing was done in such a way as to preserve as many non-security-critical properties of failed build outputs as possible to aid in debugging. After applying these two commits, the guix package in Guix was updated so that guix-daemon deployed using it would use the fixed version.
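To make "sanitizing the permissions" concrete, here is a rough sketch in Python of the idea; the actual fix lives inside guix-daemon itself, so this is only an illustration of the concept, not the real code: walk a failed build output and clear the setuid/setgid bits before the files are moved somewhere other users can reach them.

    import os
    import stat

    def strip_setuid_setgid(root):
        """Illustrative only: clear setuid/setgid bits on everything below root."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                mode = os.lstat(path).st_mode
                if stat.S_ISLNK(mode):
                    continue  # symlink modes do not matter here
                cleaned = stat.S_IMODE(mode) & ~(stat.S_ISUID | stat.S_ISGID)
                if cleaned != stat.S_IMODE(mode):
                    os.chmod(path, cleaned)

Which bits the real daemon code touches, and how it preserves other file properties, is described only in outline in this post, so treat the sketch as a mental model rather than a reference.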
If you are using --disable-chroot, whether with dedicated build users or not, make sure that access to your daemon's socket is restricted to trusted users. This particularly affects anyone running make check and anyone running on GNU/Hurd. The former should either manually remove execute permission for untrusted users on their guix checkout or apply this patch, which restricts access to the test-environment daemon to the user running the tests. The latter should adjust the ownership and permissions of /var/guix/daemon-socket, which can be done for Guix System users using the new socket-directory-{perms,group,user} fields in this patch.
A proof of concept is available at the end of this post. One can run this code with:
    guix repl -- setuid-exposure-vuln-check.scm

This will output whether the current guix-daemon being used is vulnerable or not. If it is vulnerable, the last line will contain YOUR SYSTEM IS VULNERABLE; otherwise the last line will contain your system is not vulnerable.
Upgrading
Due to the severity of this security advisory, we strongly recommend all users to upgrade their guix-daemon immediately.
For Guix System, the procedure is to reconfigure the system after a guix pull, either restarting guix-daemon or rebooting. For example:
    guix pull
    sudo guix system reconfigure /run/current-system/configuration.scm
    sudo herd restart guix-daemon

where /run/current-system/configuration.scm is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.
For Guix running as a package manager on other distributions, one needs to guix pull with sudo, as the guix-daemon runs as root, and restart the guix-daemon service, as documented. For example, on a system using systemd to manage services, run:
    sudo --login guix pull
    sudo systemctl restart guix-daemon.service

Note that for users with their distro's package of Guix (as opposed to having used the install script) you may need to take other steps or upgrade the Guix package as per other packages on your distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.
Conclusion
Even with the sandboxing features of modern kernels, it can be quite challenging to synthesize a situation in which two users on the same system who are determined to cooperate nevertheless cannot. Guix has an especially difficult job because it needs to not only realize such a situation, but also maintain the ability to interact with both users itself, while not allowing them to cooperate through itself in unintended ways. Keeping failed build outputs around for debugging introduced a vulnerability, but finding that vulnerability because of it enabled the discovery of an additional vulnerability that would have existed anyway, and prompted the use of mechanisms for securing access to the guix daemon.
I would like to thank Ludovic Courtès for giving feedback on these vulnerabilities and their fixes — discussion of which led to discovering the vulnerable time window with successful build outputs — and also for helping me to discover that my email server was broken.
Proof of Concept
Below is code to check if your guix-daemon is vulnerable to this exploit. Save this file as setuid-exposure-vuln-check.scm and run following the instructions above, in "Mitigation."
    (use-modules (guix)
                 (srfi srfi-34))

    (define maybe-setuid-file
      ;; Attempt to create a setuid file in the store, with one of the build
      ;; users as its owner.
      (computed-file "maybe-setuid-file"
                     #~(begin
                         (call-with-output-file #$output (const #t))
                         (chmod #$output #o6000)
                         ;; Failing causes guix-daemon to copy the output from
                         ;; its temporary location back to the store.
                         (exit 1))))

    (with-store store
      (let* ((drv (run-with-store store
                    (lower-object maybe-setuid-file)))
             (out (derivation->output-path drv)))
        (guard (c (#t (if (zero? (logand #o6000 (stat:perms (stat out))))
                          (format #t "~a is not setuid: your system is not vulnerable.~%" out)
                          (format #t "~a is setuid: YOUR SYSTEM IS VULNERABLE. Run 'guix gc' to remove that file and upgrade.~%" out))))
          (build-things store
                        (list (derivation-file-name drv))))))

Python Bytes: #406 What's on Django TV tonight?
Akademy 2025 Call for Hosts
If you want to contribute to KDE in a significant way (beyond coding), here is your opportunity — help us organize Akademy 2025!
We are seeking hosts for Akademy 2025, which will occur in June, July, August, or September. This is your chance to bring KDE’s biggest event to your city! Download the Call for Hosts guide and submit your proposal to akademy-proposals@kde.org by December 1, 2024.
Feel free to reach out with any questions or concerns! We are here to help you organise a successful event and to offer any advice, guidance, or help you need. Let’s work together to make Akademy 2025 an event to remember.