Feeds

Kirigami Addons 1.1.0

Planet KDE - Mon, 2024-04-01 11:00

It’s again time for a new Kirigami Addons release. Kirigami Addons is a collection of helpful components for your QML and Kirigami applications.

FormCard

I added a new FormCard delegate: FormColorDelegate, which allows selecting a color, and a new delegate container: FormCardDialog, which is a new type of dialog.

FormCardDialog containing a FormColorDelegate in Marknote

Aside from these new components, Joshua fixed a newline bug in the AboutKDE component and I updated the code examples in the API documentation.

TableView

This new component is intended to provide a powerful table view on top of the bare-bones one provided by QtQuick, similar to the ones we have in our QtWidgets applications.

This was contributed by Evgeny Chesnokov. Thanks!

TableView with resizable and sortable columns

Other components

The default size of MessageDialog was decreased and is now more appropriate.

MessageDialog new default size

James Graham fixed the autoplay of the video delegate for the maximized album component.

Packager section

You can find the package on download.kde.org and it has been signed with my GPG key.

Categories: FLOSS Project Planets

The interpersonal side of the xz-utils compromise

Planet KDE - Mon, 2024-04-01 10:54

While everyone is busy analyzing the highly complex technical details of the recently discovered xz-utils compromise that is currently rocking the internet, it is worth looking at the underlying non-technical problems that make such a compromise possible. A very good write-up can be found on the blog of Rob Mensching...

"A Microcosm of the interactions in Open Source projects"

Categories: FLOSS Project Planets

Ben Hutchings: FOSS activity in March 2024

Planet Debian - Mon, 2024-04-01 10:51
Categories: FLOSS Project Planets

Drupal Association blog: Unveiling the Power of Drupal: Your Ultimate Choice for Web Development

Planet Drupal - Mon, 2024-04-01 09:54

Welcome to DrupalCon Portland 2024, where innovation, collaboration, and excellence converge! As the premier event for Drupal enthusiasts, developers, and businesses, it's the perfect occasion to explore why Drupal stands tall as the preferred choice for web development. In this article, we'll delve into the compelling reasons that make Drupal the ultimate solution for your web development needs.

Open Source Excellence

Drupal is renowned for being an open-source content management system (CMS), fostering a vibrant community of developers and contributors. The power of collaboration within the Drupal community results in continuous improvements, security updates, and a wealth of modules that cater to a wide range of functionalities. Choosing Drupal means embracing a platform that is constantly evolving and adapting to the ever-changing landscape of the digital world.

Flexibility and Scalability

Drupal's flexibility is one of its key strengths. Whether you're building a personal blog, a corporate website, or a complex e-commerce platform, Drupal adapts to your needs. Its modular architecture allows developers to create custom functionalities and integrate third-party tools seamlessly. As your business grows, Drupal scales with you, ensuring that your website remains robust, high-performing, and capable of handling increased traffic and data.

Exceptional Content Management

Content is at the heart of any successful website, and Drupal excels in providing an intuitive and powerful content management experience. The platform offers a sophisticated taxonomy system, making it easy to organize and categorize content. With a user-friendly interface, content creators can effortlessly publish, edit, and manage content, empowering organizations to maintain a dynamic and engaging online presence.

Security First

In the digital age, security is non-negotiable. Drupal takes a proactive approach to security, with a dedicated security team that monitors, identifies, and addresses vulnerabilities promptly. The platform's robust security features, frequent updates, and a vigilant community ensure that your website is well-protected against potential threats. By choosing Drupal, you're investing in a platform that prioritizes the security of your digital assets.

Mobile Responsiveness

With the increasing prevalence of mobile devices, it's crucial for websites to be responsive and accessible across various screen sizes. Drupal is designed with mobile responsiveness in mind, offering a seamless experience for users on smartphones, tablets, and other devices. This ensures that your website not only looks great but also performs optimally, regardless of the device your audience is using.

Community Support and Knowledge Sharing

Drupal's strength lies not only in its codebase but also in its vast and supportive community. DrupalCon is a testament to the spirit of collaboration and knowledge sharing within the community. Whether you're a seasoned developer or a newcomer, Drupal's community is there to offer support, guidance, and a wealth of resources to help you succeed. By choosing Drupal, you're not just adopting a technology but becoming part of a global network of passionate individuals.

As we gather at DrupalCon Portland 2024, the choice is clear – Drupal is the unparalleled solution for web development. Its open-source nature, flexibility, security features, exceptional content management capabilities, mobile responsiveness, and thriving community make it the go-to platform for building robust and scalable websites. Join the Drupal revolution and unlock the full potential of your digital presence!

Register now for DrupalCon Portland 2024!

Categories: FLOSS Project Planets

KDE - Kiosco De Empanadas

Planet KDE - Mon, 2024-04-01 09:47

You like tasty! At KDE we got tasty!

Categories: FLOSS Project Planets

DrupalEasy: DrupalEasy Podcast - A very special episode

Planet Drupal - Mon, 2024-04-01 09:38

A very special episode of the DrupalEasy Podcast - an episode two years in the making.

Categories: FLOSS Project Planets

Colin Watson: Free software activity in March 2024

Planet Debian - Mon, 2024-04-01 09:10

My Debian contributions this month were all sponsored by Freexian.

Categories: FLOSS Project Planets

LN Webworks: 7 Reasons Why Drupal is the Perfect Platform for Your Real Estate Website

Planet Drupal - Mon, 2024-04-01 07:06

The website of your business, whatever that business may be, is a complete brand narrative in itself. These days, whenever you hear a brand's name, you check out its social handles and then move on to its website. This holds true for e-commerce businesses, real estate, and so on.

Speaking of real estate specifically, creating a website that works in your favor, and not just for the sake of having one, can make or break your business. But out of so many CMS options in the market, which one should you pick? The answer is simple: the one that suits your business and all its specific needs. And what's better than Drupal? Well, to simplify it, let's have a look at some of the big top

Categories: FLOSS Project Planets

Simon Josefsson: Towards reproducible minimal source code tarballs? On *-src.tar.gz

GNU Planet! - Mon, 2024-04-01 06:28

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the entire software supply chain ecosystem. Some of those ideas are good, some are at best irrelevant and harmless, and some suggestions are plain bad. I’d like to attempt to formalize one idea (it remains to be seen which category it belongs in), which has been discussed before, but the context in which the idea can be appreciated has not been as clear as it is today.

  1. Reproducible source tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified, preferably as part of the upstream project’s continuous integration system (e.g., a GitHub action or GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters here, for example: what timestamps to use for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs contain pre-generated files which are important for bootstrapping on exotic systems that do not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However, it has become clear that this practice raises significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefer to re-build everything from source code. These pre-generated extra files introduce uncertainty to that process.

My strawman proposal to improve things is to define a new tarball format, *-src.tar.gz, with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarballs should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or autoconf archive macros? This is a bit more delicate: most distributions either just package one current version of gnulib or autoconf archive, not previous versions. While this could change, and distributions could package the gnulib git repository (up to some current version) and the autoconf archive git repository — and packages were set up to extract the version they need (gnulib’s ./bootstrap already supports this via the --gnulib-refdir parameter), this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodules, such as gnulib, and the necessary Autoconf archive M4 macros required by the project.
  6. Similar to how the GNU project specifies the ./configure interface, we need a documented interface for how to bootstrap the project. I suggest using the already well established idiom of running ./bootstrap to set up the package so that it can later be built via ./configure. Of course, some projects do not use the autotools ./configure interface and will not follow this aspect either, but just as most build systems that compete with autotools have instructions on how to build the project, they should document similar interfaces for bootstrapping the source tarball to allow building.

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to the make dist that generates today’s foo-1.2.3.tar.gz files.
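To make the reproducibility requirement concrete, here is a minimal sketch of producing a bit-for-bit deterministic tarball with GNU tar, pinning file order, timestamps, and ownership, and suppressing gzip's embedded timestamp. The project name and contents are made up for illustration:

```shell
# Sketch: build a reproducible source tarball with GNU tar (>= 1.28).
# "foo-1.2.3" is an illustrative project name, not a real release.
mkdir -p foo-1.2.3/src
printf 'int main(void) { return 0; }\n' > foo-1.2.3/src/main.c

# Pin everything that normally varies between runs:
#   --sort=name       deterministic member order
#   --mtime=...       fixed timestamps (e.g. the release tag's commit date)
#   --owner/--group   strip the builder's uid/gid
tar --sort=name --mtime='2024-04-01 00:00Z' \
    --owner=0 --group=0 --numeric-owner \
    -cf foo-1.2.3-src.tar foo-1.2.3
gzip -9 -n foo-1.2.3-src.tar   # -n omits the gzip header timestamp

sha256sum foo-1.2.3-src.tar.gz  # identical checksum on every run
```

In a real project one would feed `git archive` output, or a fresh checkout of the signed tag, into this step instead of a hand-made directory, so that a third party can regenerate the exact same bytes from the pointed-to revision.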

I think one common argument against this approach will be: why bother with all that, and not just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version controlled checkouts, referring to upstream releases as a git URL and commit tag or id. My counter-argument is that this optimizes for packagers’ benefits at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that.

What do you think?

Categories: FLOSS Project Planets

Zero to Mastery: Python Monthly Newsletter 💻🐍

Planet Python - Mon, 2024-04-01 06:00
52nd issue of Andrei Neagoie's must-read monthly Python Newsletter: Whitehouse Recommends Python, Memory Footprint, Let's Talk About Devin, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.
Categories: FLOSS Project Planets

Arturo Borrero González: Kubecon and CloudNativeCon 2024 Europe summary

Planet Debian - Mon, 2024-04-01 05:00

This blog post shares my thoughts on attending Kubecon and CloudNativeCon 2024 Europe in Paris. It was my third time at this conference, and it felt bigger than last year’s in Amsterdam. Apparently it had an impact on public transport. I missed part of the opening keynote because of the extremely busy rush hour tram in Paris.

On Artificial Intelligence, Machine Learning and GPUs

Talks about AI, ML, and GPUs were everywhere this year. While it wasn’t my main interest, I did learn about GPU resource sharing and power usage on Kubernetes. There were also ideas about offering Models-as-a-Service, which could be cool for Wikimedia Toolforge in the future.

See also:

On security, policy and authentication

This was probably the main interest for me in the event, given Wikimedia Toolforge was about to migrate away from Pod Security Policy, and we were currently evaluating different alternatives.

In contrast to my previous Kubecon attendances, where three policy agents had a presence in the program schedule (Kyverno, Kubewarden and OpenPolicyAgent (OPA)), this time only OPA had relevant sessions.

One surprising bit I got from one of the OPA sessions was that it can be used to authorize Linux PAM sessions. Could this be useful for Wikimedia Toolforge?

I attended several sessions related to authentication topics. I discovered the Keycloak software, which looks very promising. I also attended an OAuth2 session which I had a hard time following, because I clearly lacked some background knowledge about how OAuth2 works internally.

I also attended a couple of sessions that ended up being a vendor sales talk.

See also:

On container image builds, harbor registry, etc

This topic was also of interest to me because, again, it is a core part of Wikimedia Toolforge.

I attended a couple of sessions regarding container image builds, including topics like general best practices, image minimization, and buildpacks. I learned about kpack, which at first sight felt like a nice simplification of how the Toolforge build service was implemented.

I also attended a session by the Harbor project maintainers where they shared some valuable information on things happening soon or in the future, for example:

  • A new Harbor command line interface is coming soon, though only a first iteration.
  • A Harbor operator, to install and manage Harbor. It is looking for new maintainers, otherwise it is going to be archived.
  • The project is now experimenting with adding support for hosting more artifacts: Maven, npm, PyPI. I wonder if they will consider hosting Debian .deb packages.

On networking

I attended a couple of sessions regarding networking.

I paid special attention to one session regarding network policies, where they discussed new semantics being added to the Kubernetes API.

The different layers of abstraction being added to the API, the different hook points, and the override layers clearly resembled (to me at least) the network packet filtering stack of the Linux kernel (Netfilter), but without the 20-plus years of experience building the right semantics and user interfaces.

I very recently missed semantics for limiting the number of open connections per namespace (see Phabricator T356164: [toolforge] several tools get periods of connection refused (104) when connecting to wikis). This functionality should be available in the lower-level tooling, meaning Netfilter. I may submit a proposal upstream at some point, so they consider adding this to the Kubernetes API.

Final notes

In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of lack of daily exposure. I’m really happy that the cloud native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days. That being said, I felt less engaged with the content of the conference schedule compared to last year. I don’t know whether the schedule itself was less interesting, or whether I’m losing interest.

Finally, not an official track in the conference, but we met a bunch of folks from Wikimedia Deutschland. We had a really nice time talking about how wikibase.cloud uses Kubernetes, whether they could run in Wikimedia Cloud Services, and why structured data is so nice.

We in WMCS usually consider ourselves only one or two engineers short of offering the same level of services as Google cloud :-P

Categories: FLOSS Project Planets

Tryton News: Newsletter April 2024

Planet Python - Mon, 2024-04-01 02:00

During the last month we focused on fixing bugs, improving the behaviour of things, speeding-up performance issues and adding new features for you.

Changes for the User

Sales, Purchases and Projects

When processing an exception on an order, the user can ignore the exception, so that no more related lines/documents will be re-created. But in case of a mistake it was not possible to undo the ignore. Now we allow the Sale and Purchase administrator groups to edit the list of ignored lines, to be able to remove mistakes. After changing the list of ignored lines, the user needs to manually reprocess the order, using the Process button, to restore it to a coherent state.

Accounting, Invoicing and Payments

Account users are now allowed to delete draft account moves.

Stock, Production and Shipments

When creating a stock forecast the warehouse is now filled in automatically.

Now the scheduled task maintains a global order of assignations for shipments and productions. A global order is important because assignations are competing with each other to get the products first.

User Interface

We now hide the traceback from an error behind an expander widget, as it may scare some users and it is not helpful for most of them.

System Data and Configuration

Employees are now activated based on the start and end date of their employment.

New Modules

The new stock_product_location_place module allows a specific place to be defined where goods are stored in their location. You can refer to its documentation for more details.

New Documentation

We reworked parts of the Tryton documentation.

How to enter an opening balance.

We changed our documentation hub from readthedocs to self hosting.

New Releases

We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 6.8.

Security

Please update your systems to take care of a security-related bug we found last month.

Changes for the System Administrator

We now make cron and workers exit silently on a keyboard interrupt.

We also introduced a switch on trytond-admin to delay the creation of indexes, because index creation can take a long time to complete when updating modules on big databases. Using this switch, the database schema can be created quickly, but without the performance gains from the new indexes, which are not yet available. Another run at a more appropriate time, without the switch, can then be used to create the indexes.

For history records we now display the date time on access errors.

Changes for Implementers and Developers

We now use dot notation and binary operators when converting PYSON to a string when it is to be displayed to the user.

Authors: @dave @pokoli @udono

1 post - 1 participant

Read full topic

Categories: FLOSS Project Planets

Salsa Digital: Mastering Drupal migration: Guide to seamless website upgrades

Planet Drupal - Mon, 2024-04-01 00:44
Drupal as a CMS: Your go-to CMS for a seamless digital experience

In today's digital age, a robust and efficient content management system (CMS) is key for a seamless user experience. Drupal, known for its flexibility, scalability and customisation options, has emerged as one of the most popular CMS platforms for website development. As a CMS, Drupal allows you to create, organise and manage your website's content effortlessly. It also provides a user-friendly interface and a wide array of features that ensure a smooth and efficient content creation and management process. Read on for tips and tools for your Drupal migration, or reach out to us now for customised help with your migration.

Why choose Drupal CMS?
Categories: FLOSS Project Planets

Junichi Uekawa: Learning about xz and what is happening is fascinating.

Planet Debian - Sun, 2024-03-31 18:02
Learning about xz and what is happening is fascinating. The scope of the potential exploit is very large. The open source software space is filled with a lot of unmaintained and unreviewed software.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20240322 ('Sweden') released [stable]

GNU Planet! - Sun, 2024-03-31 17:11

GNU Parallel 20240322 ('Sweden') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

   GNU parallel ftw
    -- hostux.social/@rmpr @_paulmairo@twitter

New in this release:

  • Bug fixes and man page updates.


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

Russell Coker: Links March 2024

Planet Debian - Sun, 2024-03-31 08:51

Bruce Schneier wrote an interesting blog post about his workshop on reimagining democracy and the unusual way he structured it [1]. It would be fun to have a security conference run like that!

Matthias wrote an informative blog post about Wayland, “Wayland really breaks things… Just for now”, which links to a blog debate about the utility of Wayland [2]. Wayland seems pretty good to me.

Cory Doctorow wrote an insightful article about the AI bubble comparing it to previous bubbles [3].

Charles Stross wrote an insightful analysis of the implications if the UK brought back military conscription [4]. Looks like the era of large armies is over.

Charles Stross wrote an informative blog post about the Worldcon in China, covering issues of vote rigging for location, government censorship vs awards, and business opportunities [5].

The Paris Review has an interesting article about speaking to the CIA’s Creative Writing Group [6]. It doesn’t explain why they have a creative writing group that has some sort of semi-official sanction.

LongNow has an insightful article about the threats to biodiversity in food crops and the threat that poses to humans [7].

Bruce Schneier and Albert Fox Cahn wrote an interesting article about the impacts of chatbots on human discourse [8]. If it makes people speak more precisely then that would be great for all Autistic people!

Related posts:

  1. Links February 2024 In 2018 Charles Stross wrote an insightful blog post Dude...
  2. Links January 2024 Long Now has an insightful article about domestication that considers...
  3. Links March 2023 Interesting paper about a plan for eugenics in dogs with...
Categories: FLOSS Project Planets

Pages