Feeds

Security advisories: Drupal core - Less critical - Gadget chain - SA-CORE-2024-006

Planet Drupal - Wed, 2024-11-20 12:25
Project: Drupal core
Date: 2024-November-20
Security risk: Less critical 8 ∕ 25 AC:Complex/A:User/CI:None/II:Some/E:Theoretical/TD:Uncommon
Vulnerability: Gadget chain
Affected versions: >= 8.0.0 < 10.2.11 || >= 10.3.0 < 10.3.9 || >= 11.0.0 < 11.0.8
Description:

Drupal core contains a potential PHP Object Injection vulnerability that (if combined with another exploit) could lead to Arbitrary File Deletion. It is not directly exploitable.

This issue is mitigated by the fact that in order to be exploitable, a separate vulnerability must be present that allows an attacker to pass unsafe input to unserialize(). There are no such known exploits in Drupal core.

To help protect against this vulnerability, types have been added to properties in some of Drupal core's classes. If an application extends those classes, the same types may need to be specified on the subclass to avoid a TypeError.

Solution: 

Install the latest version:

  • If you are using Drupal 10.2, update to Drupal 10.2.11.
  • If you are using Drupal 10.3, update to Drupal 10.3.9.
  • If you are using Drupal 11.0, update to Drupal 11.0.8.

All versions of Drupal 10 prior to 10.2 are end-of-life and do not receive security coverage. (Drupal 8 and Drupal 9 have both reached end-of-life.)


Security advisories: Drupal core - Critical - Cross Site Scripting - SA-CORE-2024-005

Planet Drupal - Wed, 2024-11-20 12:24
Project: Drupal core
Date: 2024-November-20
Security risk: Critical 17 ∕ 25 AC:None/A:None/CI:Some/II:Some/E:Theoretical/TD:Default
Vulnerability: Cross Site Scripting
Description:

Drupal 7 core's Overlay module doesn't safely handle user input, leading to reflected cross-site scripting under certain circumstances.

Only sites with the Overlay module enabled are affected by this vulnerability.

Solution: 

Install the latest version:

  • If you are using Drupal 7, update to Drupal 7.102
  • Sites may also disable the Overlay module to avoid the issue.

Drupal 10 and Drupal 11 are not affected, as the Overlay module was removed from Drupal core in Drupal 8.


Security advisories: Drupal core - Moderately critical - Access bypass - SA-CORE-2024-004

Planet Drupal - Wed, 2024-11-20 12:21
Project: Drupal core
Date: 2024-November-20
Security risk: Moderately critical 10 ∕ 25 AC:Basic/A:User/CI:None/II:Some/E:Theoretical/TD:Default
Vulnerability: Access bypass
Affected versions: >= 8.0.0 < 10.2.11 || >= 10.3.0 < 10.3.9 || >= 11.0.0 < 11.0.8
Description:

Drupal's uniqueness checking for certain user fields is inconsistent depending on the database engine and its collation.

As a result, a user may be able to register with the same email address as another user.

This may lead to data integrity issues.
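For illustration only (this is not Drupal code, just a minimal sqlite3 sketch of how collation changes what "unique" means): under SQLite's default case-sensitive collation both inserts succeed, while a case-insensitive collation rejects the second.

import sqlite3

con = sqlite3.connect(":memory:")

# Default (case-sensitive) collation: both rows are accepted,
# so two accounts end up with the "same" email address.
con.execute("CREATE TABLE users_cs (mail TEXT UNIQUE)")
con.execute("INSERT INTO users_cs VALUES ('Alice@example.com')")
con.execute("INSERT INTO users_cs VALUES ('alice@example.com')")  # accepted!

# Case-insensitive collation: the duplicate is rejected.
con.execute("CREATE TABLE users_ci (mail TEXT UNIQUE COLLATE NOCASE)")
con.execute("INSERT INTO users_ci VALUES ('Alice@example.com')")
try:
    con.execute("INSERT INTO users_ci VALUES ('alice@example.com')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # UNIQUE constraint failed: users_ci.mail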

Solution: 

Install the latest version:

  • If you are using Drupal 10.2, update to Drupal 10.2.11.
  • If you are using Drupal 10.3, update to Drupal 10.3.9.
  • If you are using Drupal 11.0, update to Drupal 11.0.8.

All versions of Drupal 10 prior to 10.2 are end-of-life and do not receive security coverage. (Drupal 8 and Drupal 9 have both reached end-of-life.)

Updating Drupal will not solve potential issues with existing accounts affected by this bug. See Fixing emails that vary only by case for additional guidance.


Security advisories: Drupal core - Moderately critical - Cross Site Scripting - SA-CORE-2024-003

Planet Drupal - Wed, 2024-11-20 12:20
Project: Drupal core
Date: 2024-November-20
Security risk: Moderately critical 13 ∕ 25 AC:Basic/A:User/CI:Some/II:Some/E:Theoretical/TD:Default
Vulnerability: Cross Site Scripting
Affected versions: >= 8.8.0 < 10.2.11 || >= 10.3.0 < 10.3.9 || >= 11.0.0 < 11.0.8
Description:

Drupal uses JavaScript to render status messages in some cases and configurations. In certain situations, the status messages are not adequately sanitized.

Solution: 

Install the latest version:

  • If you are using Drupal 10.2, update to Drupal 10.2.11.
  • If you are using Drupal 10.3, update to Drupal 10.3.9.
  • If you are using Drupal 11.0, update to Drupal 11.0.8.

All versions of Drupal 10 prior to 10.2 are end-of-life and do not receive security coverage. (Drupal 8 and Drupal 9 have both reached end-of-life.)


Real Python: NumPy Practical Examples: Useful Techniques

Planet Python - Wed, 2024-11-20 09:00

The NumPy library is a Python library used for scientific computing. It provides you with a multidimensional array object for storing and analyzing data in a wide variety of ways. In this tutorial, you’ll see examples of some features NumPy provides that aren’t always highlighted in other tutorials. You’ll also get the chance to practice your new skills with various exercises.

In this tutorial, you’ll learn how to:

  • Create multidimensional arrays from data stored in files
  • Identify and remove duplicate data from a NumPy array
  • Use structured NumPy arrays to reconcile the differences between datasets
  • Analyze and chart specific parts of hierarchical data
  • Create vectorized versions of your own functions

If you’re new to NumPy, it’s a good idea to familiarize yourself with the basics of data science in Python before you start. Also, you’ll be using Matplotlib in this tutorial to create charts. While it’s not essential, getting acquainted with Matplotlib beforehand might be beneficial.

Get Your Code: Click here to download the free sample code that you’ll use to work through NumPy practical examples.

Take the Quiz: Test your knowledge with our interactive “NumPy Practical Examples: Useful Techniques” quiz. You’ll receive a score upon completion to help you track your learning progress:


This quiz will challenge your knowledge of working with NumPy arrays. You won't find all the answers in the tutorial, so you'll need to do some extra investigating. By finding all the answers, you're sure to learn some interesting things along the way.

Setting Up Your Working Environment

Before you can get started with this tutorial, you’ll need to do some initial setup. In addition to NumPy, you’ll need to install the Matplotlib library, which you’ll use to chart your data. You’ll also be using Python’s pathlib library to access your computer’s file system, but there’s no need to install pathlib because it’s part of Python’s standard library.

You might consider using a virtual environment to make sure your tutorial’s setup doesn’t interfere with anything in your existing Python environment.

Using a Jupyter Notebook within JupyterLab to run your code instead of a Python REPL is another useful option. It allows you to experiment and document your findings, as well as quickly view and edit files. The downloadable version of the code and exercise solutions are presented in Jupyter Notebook format.

The commands for setting things up on the common platforms are shown below:

On Windows, fire up a PowerShell (Admin) or Terminal (Admin) prompt, depending on the version of Windows that you’re using. Now type in the following commands:

Windows PowerShell

PS> python -m venv venv\
PS> venv\Scripts\activate
(venv) PS> python -m pip install numpy matplotlib jupyterlab
(venv) PS> jupyter lab

Here you create a virtual environment named venv\, which you then activate. If the activation is successful, then the virtual environment’s name will precede your PowerShell prompt. Next, you install numpy and matplotlib into this virtual environment, followed by the optional jupyterlab. Finally, you start JupyterLab.

Note: When you activate your virtual environment, you may receive an error stating that your system can’t run the script. Modern versions of Windows don’t allow you to run scripts downloaded from the Internet as a security feature.

To fix this, you need to type the command Set-ExecutionPolicy RemoteSigned, then answer Y to the question. Your computer will then run locally created scripts, as well as downloaded scripts that are signed by a trusted publisher. Once you’ve done this, the venv\Scripts\activate command should work.

On Linux or macOS, fire up a terminal and type in the following commands:

Shell

$ python -m venv venv/
$ source venv/bin/activate
(venv) $ python -m pip install numpy matplotlib jupyterlab
(venv) $ jupyter lab

Here you create a virtual environment named venv/, which you then activate. If the activation is successful, then the virtual environment’s name will precede your command prompt. Next, you install numpy and matplotlib into this virtual environment, followed by the optional jupyterlab. Finally, you start JupyterLab.

You’ll notice that your prompt is preceded by (venv). This means that anything you do from this point forward will stay in this environment and remain separate from other Python work you have elsewhere.

Now that you have everything set up, it’s time to begin the main part of your learning journey.

NumPy Example 1: Creating Multidimensional Arrays From Files

When you create a NumPy array, you create a highly-optimized data structure. One of the reasons for this is that a NumPy array stores all of its elements in a contiguous area of memory. This memory management technique means that the data is stored in the same memory region, making access times fast. This is, of course, highly desirable, but an issue occurs when you need to expand your array.

Suppose you need to import multiple files into a multidimensional array. You could read them into separate arrays and then combine them using np.concatenate(). However, this would create a copy of your original array before expanding the copy with the additional data. The copying is necessary to ensure the updated array will still exist contiguously in memory since the original array may have had non-related content adjacent to it.

Constantly copying arrays each time you add new data from a file can make processing slow and is wasteful of your system’s memory. The problem becomes worse the more data you add to your array. Although this copying process is built into NumPy, you can minimize its effects with these two steps:

  1. When setting up your initial array, determine how large it needs to be before populating it. You may even consider over-estimating its size to support any future data additions. Once you know these sizes, you can create your array upfront.

  2. The second step is to populate it with the source data. This data will be slotted into your existing array without any need for it to be expanded.
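As a minimal sketch of these two steps (the file names follow the tutorial’s sample data, but the three-files-of-2x3-numbers shape is an assumption for illustration):

import numpy as np

# Step 1: create the array upfront, sized for all the data you expect.
# Here we assume three CSV files, each holding a 2x3 block of numbers.
full_array = np.zeros((3, 2, 3))

# Step 2: slot each file's contents into the existing array.
# No copy of full_array is needed because its memory is already reserved.
for i, filename in enumerate(["file1.csv", "file2.csv", "file3.csv"]):
    full_array[i] = np.loadtxt(filename, delimiter=",")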

Next, you’ll explore how to populate a three-dimensional NumPy array.

Populating Arrays With File Data

In this first example, you’ll use the data from three files to populate a three-dimensional array. The content of each file is shown below, and you’ll also find these files in the downloadable materials:

The first file has two rows and three columns with the following content:

CSV file1.csv

1.1, 1.2, 1.3
1.4, 1.5, 1.6

Read the full article at https://realpython.com/numpy-example/ »


Ian Jackson: The Rust Foundation's 2nd bad draft trademark policy

Planet Debian - Wed, 2024-11-20 07:50

tl;dr: The Rust Foundation’s new trademark policy still forbids unapproved modifications: this would forbid both the Rust Community’s own development work(!) and normal Free Software distribution practices.

Background

In April 2023 I wrote about the Rust Foundation’s ham-fisted and misguided attempts to update the Rust trademark policy. This turned into drama.

The new draft

Recently, the Foundation published a new draft. It’s considerably less bad, but the most serious problem, which I identified last year, remains.

It prevents redistribution of modified versions of Rust, without pre-approval from the Rust Foundation. (Subject to some limited exceptions.) The people who wrote this evidently haven’t realised that distributing modified versions is how free software development works. I.e., the draft Rust trademark policy even forbids making a GitHub branch for an MR to contribute to Rust!

It’s also very likely unacceptable to Debian. Rust is still on track to repeat the Firefox/Iceweasel debacle.

Below is a copy of my formal response to the consultation. The consultation closes at 07:59:00 UTC tomorrow (21st November), i.e., at the end of today (Wednesday) US Pacific time, so if you want to reply, do so quickly.

My consultation response

Hi. My name is Ian Jackson. I write as a Rust contributor and as a Debian Developer with first-hand experience of Debian’s approach to trademarks. (But I am not a member of the Debian Rust Packaging Team.)

Your form invites me to state any blocking concerns. I’m afraid I have one:

PROBLEM

The policy on distributing modified versions of Rust (page 4, 8th bullet) is far too restrictive.

PROBLEM - ASPECT 1

On its face the policy forbids making a clone of the Rust repositories on a git forge, and pushing a modified branch there. That is publicly distributing a modified version of Rust.

I.e., the current policy forbids the Rust community’s own development workflow!

PROBLEM - ASPECT 2

The policy also does not meet the needs of Software-Freedom-respecting downstreams, including community Linux distributions such as Debian.

There are two scenarios (fuzzy, and overlapping) which provide a convenient framing to discuss this:

Firstly, in practical terms, Debian may need to backport bugfixes, or sometimes other changes. Sometimes Debian will want to pre-apply bugfixes or changes that have been contributed by users, and are intended eventually to go upstream, but are not included upstream in official Rust yet. This is a routine activity for a distribution. The policy, however, forbids it.

Secondly, Debian, as a point of principle, requires the ability to diverge from upstream if and when Debian decides that this is the right choice for Debian’s users. The freedom to modify is a key principle of Free Software. This includes making changes that the upstream project disapproves of. Some examples of this, where Debian has made changes, that upstream do not approve of, have included things like: removing user-tracking code, or disabling obsolescence “timebombs” that stop a particular version working after a certain date.

Overall, while alignment in values between Debian and Rust seems to be very good right now, modifiability is a matter of non-negotiable principle for Debian. The 8th bullet point on page 4 of the PDF does not give Debian (and Debian’s users) these freedoms.

POSSIBLE SOLUTIONS

Other formulations, or an additional permission, seem like they would be able to meet the needs of both Debian and Rust.

The first thing to recognise is that forbidding modified versions is probably not necessary to prevent language ecosystem fragmentation. Many other programming languages are distributed under fully Free Software licences without such restrictive trademark policies. (For example, Python; I’m sure a thorough survey would find many others.)

The scenario that would be most worrying for Rust would be “embrace, extend, extinguish”. In projects with a copyleft licence this is not a concern, but Rust is permissively licenced. However, one way to address this would be an additional permission allowing distribution of modified versions without pre-approval, provided that the modified source code is also made available under the original Rust licence.

I suggest therefore adding the following 2nd sub-bullet point to the 8th bullet on page 4:

  • changes which are shared, in source code form, with all recipients of the modified software, and publicly licenced under the same licence as the official materials.

This means that downstreams who fear copyleft have the option of taking Rust’s permissive copyright licence at face value, but are limited in the modifications they may make, unless they rename. Conversely downstreams such as Debian who wish to operate as part of the Free Software ecosystem can freely make modifications.

It also, obviously, covers the Rust Community’s own development work.

NON-SOLUTIONS

Some upstreams, faced with this problem, have offered Debian a special permission: i.e., said that it would be OK for Debian to make the modifications that Debian wants. But Debian will not accept any Debian-specific permissions.

Debian could of course rename their Rust compiler. Debian has chosen to rename in the past: infamously, a similar policy by Mozilla resulted in Debian distributing Firefox under the name Iceweasel for many years. This is a PR problem for everyone involved, and results in a good deal of technical inconvenience and makework.

“Debian could seek approval for changes, and the Rust Foundation would grant that approval quickly”. This is unworkable on a practical level - requests for permission do not fit into Debian’s workflow, and the resulting delays would be unacceptable. But, more fundamentally, Debian rightly insists that it must have the freedom to make changes that the Foundation do not approve of. (For example, if a future Rust shipped with telemetry features Debian objected to.)

“Debian and Rust could compromise”. However, Debian is an ideological as well as technological project. The principles I have set out are part of Debian’s Foundation Documents - they are core values for Debian. When Debian makes compromises, it does so very slowly and with great deliberation, using its slowest and most heavyweight constitutional governance processes. Debian is not likely to want to engage in such a process for the benefit of one programming language.

“Users will get Rust from upstream”. This is currently often the case. Right now, Rust is moving very quickly, and by Debian standards is very new. As Rust becomes more widely used, more stable, and more part of the infrastructure of the software world, it will need to become part of standard, stable, reliable, software distributions. That means Debian.

(The consultation was a Google Forms page with a single text field, so the formatting isn’t great. I have edited the formatting very lightly to avoid rendering bugs here on my blog.)




Real Python: Quiz: NumPy Practical Examples: Useful Techniques

Planet Python - Wed, 2024-11-20 07:00

In this quiz, you’ll test your understanding of the techniques covered in the tutorial NumPy Practical Examples: Useful Techniques.

By working through the questions, you’ll review your understanding of NumPy arrays and also expand on what you learned in the tutorial.

You’ll need to do some research outside of the tutorial to answer all the questions. Embrace this challenge and let it take you on a learning journey.


Russell Coker: Solving Spam and Phishing for Corporations

Planet Debian - Wed, 2024-11-20 00:22
Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I’m currently working in the IT department of a medium sized company. Because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) and I am involved in fixing all Linux compatibility issues on it, I can fix most problems in a small fraction of the time that it would take me on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it’s clear that a good deal of centralisation is appropriate.

Among people doing technical computer work such as programming, a large portion are computer hobbyists who like to fiddle with computers. But if the support system is run well, even they will appreciate having computers just work most of the time, and having someone immediately recognise a large portion of the failures, like the issues with NVidia drivers that I have documented so that first line support can implement workarounds without the need for a lengthy investigation.

A big problem with email in the modern Internet is the prevalence of Phishing scams. The current corporate approach to this is to send out test Phishing email to people and then force computer security training on everyone who clicks on them. One problem with this is that attackers only need to fool one person on one occasion and when you have hundreds of people doing something on rare occasions that’s not part of their core work they will periodically get it wrong. When every test Phishing run finds several people who need extra training it seems obvious to me that this isn’t a solution that’s working well. I will concede that the majority of people who click on the test Phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it’s still clear that this isn’t a great solution.

Let’s imagine for the sake of discussion that everyone in a company was 100% accurate at identifying Phishing email and other scam email, if that was the case would the problem be solved? I believe that even in that hypothetical case it would not be a solved problem due to the wasted time and concentration. People can spend minutes determining if a single email is legitimate. On many occasions I have had relatives and clients forward me email because they are unsure if it’s valid, it’s great that they seek expert advice when they are unsure about things but it would be better if they didn’t have to go to that effort. What we ideally want to do is centralise the anti-Phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or Phishing message is sent to 600 people in a company you don’t want 600 people to individually consider it, you want one person to recognise it and delete/block all 600. If 600 people each spend one minute considering the matter then that’s 10 work hours wasted!

The Rationale for Human Filtering

For personal email human filtering usually isn’t viable because people want privacy. But corporate email isn’t private, it’s expected that the company can read it under certain circumstances (in most jurisdictions) and having email open in public areas of the office where colleagues might see it is expected. You can visit gmail.com on your lunch break to read personal email but every company policy (and common sense) says to not have actually private correspondence on company systems.

The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company instead of 500 people each spending a couple of minutes working out whether it’s legit you have one person who’s good at recognising spam (because it’s their job) who clicks on a “remove mail from this sender from all mailboxes” button and 500 messages are deleted and the sender is blocked.

Delaying email would be a concern. It’s standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention. So human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company probably email to the senior people would never get noticeably delayed and while people like me would get their mail delayed on occasion people doing technical work generally don’t have notifications turned on for email because it’s a distraction and a fast response isn’t needed. There are a few senders where fast response is required, which is mostly corporations sending a “click this link within 10 minutes to confirm your password change” email. Setting up rules for all such senders that are relevant to work wouldn’t be difficult to do.

How to Solve This

Spam and Phishing became serious problems over 20 years ago and we have had 20 years of evolution of email filtering which still hasn’t solved the problem. The vast majority of email addresses in use are run by major managed service providers and they haven’t managed to filter out spam/phishing mail effectively, so I think we should assume that it’s not going to be solved by filtering. There is talk about what “AI” technology might do for filtering spam/phishing, but that same technology can produce better crafted hostile email to avoid filters.

An additional complication for corporate email filtering is that some criteria that are used to filter personal email don’t apply to corporate mail. If someone sends email to me personally about millions of dollars then it’s obviously not legit. If someone sends email to a company then it could be legit. Companies routinely have people emailing potential clients about how their products can save millions of dollars and make purchases over a million dollars. This is not a problem that’s impossible to solve, it’s just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure their mail server to not accept direct external mail for any employee’s address. Then people could email files to colleagues etc without any restriction, but spam and phishing wouldn’t be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department), and a team of people who would read those mailboxes and forward mail to the right people if it seemed legit. Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account, where it could be filtered by someone with a suitable security level without requiring any special configuration of the mail server. So the person who reads the it mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs).
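As a rough sketch of that routing idea (all names hypothetical, and the DMARC check reduced to a simple sender-domain comparison for brevity), the logic might look like this:

# Mail to it+russell.coker@example.com lands in the IT vetting account,
# in a folder per final recipient; allow-listed senders skip the human step.
# A real implementation would check DMARC results, not just the sender domain.
ALLOW_LIST = {("dell.com", "it+russell.coker@example.com")}

def route(sender: str, recipient: str) -> str:
    """Return the mailbox (or folder) a message should be delivered to."""
    local, _, domain = recipient.partition("@")
    if "+" not in local:
        return f"{local}@{domain}"  # internal mail: deliver directly
    dept, _, user = local.partition("+")
    sender_domain = sender.rsplit("@", 1)[-1]
    if (sender_domain, recipient) in ALLOW_LIST:
        return f"{user}@{domain}"  # known good sender: straight to the user
    return f"{dept}@{domain}/INBOX/{user}"  # otherwise: the vetting team's per-user folder

print(route("news@dell.com", "it+russell.coker@example.com"))        # russell.coker@example.com
print(route("spammer@evil.example", "it+russell.coker@example.com")) # it@example.com/INBOX/russell.coker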

For a FOSS implementation of such things the server side of it (including extracting account data from a directory to determine which department a user is in) would be about a day’s work and then an option would be to modify a webmail program to have extra functionality for approving senders and sending change requests to the server to automatically direct future mail from the same sender. As an aside I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email and adding certain automated messages to the allow-list.

The Change

One of the first things to do is to configure the system to add every recipient of an outbound message to the allow list, so that replies get through. Having a script go through the sent-mail folders of all accounts and add the recipients to the allow lists would be easy and would catch the common cases.
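A minimal sketch of that script, assuming each account’s sent mail is available as an mbox file at a hypothetical path (a real deployment would walk Maildir folders or an IMAP store instead):

import mailbox
from email.utils import getaddresses

def harvest_allow_list(sent_mbox_path):
    """Collect every address we have written to; mail from them is pre-approved."""
    allowed = set()
    for msg in mailbox.mbox(sent_mbox_path):
        # getaddresses() handles "Name <addr>" forms and comma-separated lists
        pairs = getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))
        allowed.update(addr.lower() for _, addr in pairs if addr)
    return allowed

print(harvest_allow_list("/var/mail/archive/russell.sent.mbox"))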

But even with the sent-mail folders processed, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain-wide additions of all the sites that send password confirmation messages. You would need rules to direct inbound mail from the old addresses to the new style, and then address a huge amount of mail that needs to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user, then that’s 100 hours of work, about 12.5 work days. If you had everyone from the IT department, reception, and executive assistants working on it that would be viable. After about a week there wouldn’t be much work involved in maintaining it. Then after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email then with 600 employees that’s 10 hours of wasted time per day. Effectively wasting one employee’s work! I’m sure that’s the low end of the range, 5 minutes average per day doesn’t seem unreasonable especially when people are unsure about phishing email and send it to Slack so multiple employees spend time analysing it. So you could have 5 employees being wasted by hostile email and avoiding that would take a fraction of the time of a few people adding up to less than an hour of total work per day.

Then there’s the training time for phishing mail. Instead of having every employee spend half an hour doing email security training every few months (that’s 300 hours or 7.5 working weeks every time you do it) you just train the few experts.

In addition to saving time there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.

Will They Do It?

They probably won’t do it any time soon. I don’t think it’s expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it’s probably regarded as too difficult to change anything and the costs aren’t obvious. I have been unsuccessful in suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time for 30 years.

Related posts:

  1. blocking spam There are two critical things that any anti-spam system must...
  2. Please Turn off Your Spam Protection Hi, I’d like to send an email from a small...
  3. A New Spam Trick One item on my todo list is to set up...

Julien Tayon: The advantages of HTML as a data model over basic declarative ORM approach

Planet Python - Tue, 2024-11-19 23:04
Very often, backend devs don't want to write code.

For this, we use one trick: derive the HTML widgets for presentation, the database access, and the REST endpoints from ONE SOURCE of truth, which we call the MODEL.

A tradition, and I insist it's a conservative tradition, is to use a declarative model where we make the truth of the model out of python classes.

By declaring a class we implicitly declare its SQL structure, the HTML input form for human-readable interaction, and the REST endpoint to access a graph of objects which are all mapped onto the database.

Since the arrival of pydantic, it makes all the more sense as a way to empower a strongly typed approach in python.

But is it the only one worthy ?

I speak here as a veteran of the trenches, whose job is to read a list of customer entries in an xls file from a project manager and change the faulty values, based on reverse-engineering an HTML form into whatever the freak the right value is supposed to be.

In this case your job is in fact to short circuit the web framework to which you don't have access to change values directly into the database.

More often than not in these real-life cases, you don't have access to the team who built the framework (too much bureaucracy to even get a question answered before the situation gets critical) ... So you look at the form.

And you guess the name of the table that is impacted by looking at the « network tab » in the developer GUI when you hit the submit button.

And you guess the names of the fields impacted in the table, which gives you the names of the columns.

And then you use your only magical tool which is a write access to the database to reflect the expected object with an automapper and change values.

You could do it in raw SQL, I agree, but sometimes you need to do a web query in the middle to change the value, because you have to ask a REST service what the new ID of the client is.

And you see, the more I have this experience of having to tweak real-life frameworks that often surprise users for the sake of the limitations of their source of truth, the more I want the HTML to be the source of truth.

The most stoic approach to the full stack framework: derive Everything from an HTML page.

The views, the controllers, the routes, the model, in such a true way that if you modify the HTML you modify, in real time, the database model, the routes, and the displayed form.



What are the advantages of HTML as a declarative language ?

Here, one tradition is to prefer human-readable languages such as YAML and JSON, or machine-readable XML, over HTML.

However, JSON and YAML are more limited in the expressiveness of their data structures than HTML (can you have a dict as a key in a dict in JSON? Me, I can.)
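A quick illustration in Python of that limit: a dict key only needs to be hashable, while JSON keys must be strings.

import json

data = {("a", "b"): "value"}  # a tuple as a key: fine in Python
try:
    json.dumps(data)
except TypeError as e:
    print(e)  # keys must be str, int, float, bool or None, not tuple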

And on the other hand XML is quite a pain to read and write without mistakes.

HTML is just XML

HTML is a lax and lenient grammarless XML. No parser will raise an exception because you wrote "<br>" instead of "<br/>" (or the opposite). You can add non-existent attributes to tags and the parser will understand them easily, without you having to redefine a full-fledged grammar.
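That leniency is easy to check with Python's built-in html.parser (the same module the proof of concept below relies on): made-up tags and attributes are reported, never rejected.

from html.parser import HTMLParser

class ShowEverything(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print(tag, dict(attrs))

# <br> vs <br/>, an invented tag, an invented attribute: no exception raised.
ShowEverything().feed(
    "<br><br/>"
    "<unique_constraint col=a,b name=u></unique_constraint>"
    "<input type=number nullable=false />"
)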

HTML is an XML YOU CAN SEE.

Some tags are related to a grammar of visual widgets with which non-computer people are familiar.

If you use a FORM as the mapping to a database table, and every input inside it has a column name, you already have your inputs drawn on your screen.



Modern « remote procedure calls » are web based

Call it RPC, call it SOAP, call it REST: nowadays web technologies dominate 99% of how computer systems exchange data with each other.

You buy something on the internet; at the end, you interact with a web form or a web call. Hence, we can assert with strong conviction that 100% of web technologies can serve web pages. Thus, if you use your HTML as a model and present it, you can deduce the data model from the form without needing a new pivot language.

Proof of concept

For the convenience of « fun » we are gonna imagine a backend for « agile by micro blogging » (à la former twitter).

We are gonna assume the platform is structured around micro blogging where agile shines the most: not when things are done, but to move things on.

Things that are done will be called statements. Like: « software is delivered. Here is a factoid (a git url for instance) ». We will call these nodes in a graph, and they are supposed to be immutable states that can't be contested.

Each statement answers another statement's factoid, like a delivery statement tends to follow a story point (or at least should), by the means of a transition.

Hence in this application we will micro-blog about the transitions ... like on a social network with the members of the concerned group.
The idea of the application is to replace scrum meetings with micro blogging.

Are you blocked ? Do you need anything ? These can be answered on the micro blogging platform, and every thread presented is archived and used for machine learning (about what you want to hear as good news) in a data form that is convenient for large language models.

As such we want to harvest texts long enough to express emotions, yet constrained to a laughably small number of characters so that finesse and ambiguity are hard to express. That's the heart of the application: harvesting comments tagged with associated emotions to ease the work of tagging for Artificial Intelligence.

Hear me out, this is just a stupid idea of mine to illustrate a graph-like structure described with HTML, not a real life idea. Me, I just love to represent State Machine Diagrams with everything that falls under my hands.

Here is the entity relationship diagram I have in mind :

Let's see what a table declaration might look like in HTML, for, let's say, transition:

<form action=/transition >
  <input type=number name=id />
  <input type=number name=user_group_id nullable=false reference=user_group.id />
  <textarea name=message rows=10 cols=50 nullable=false ></textarea>
  <input type=url name=factoid />
  <select name="emotion_for_group_triggered" value=neutral >
    <option value="">please select a value</option>
    <option value=positive >Positive</option>
    <option value=neutral >Neutral</option>
    <option value=negative >Negative</option>
  </select>
  <input type=number name=expected_fun_for_group />
  <input type=number name=previous_statement_id reference=statement.id nullable=false />
  <input type=number name=next_statement_id reference=statement.id />
  <unique_constraint col=next_statement_id,previous_statement_id name=unique_transition ></unique_constraint>
  <input type=checkbox name=is_exception />
</form>

Through the use of additional HTML tags and attributes, we can convey a lot of information usable for database construction and querying that stays silent at presentation time (like unique_constraint). And with a little bit of javascript and CSS, this HTML generates the following rendering (indicating the web service endpoint as the input type=submit):

Meaning that you can now serve a landing page that serves the purpose of human interaction, describes a « curl way » of automating interaction, and provides a full model of your database.

Most startups think the data model should be obfuscated to prevent it being copied; most free software projects think that sharing the non-valuable assets helps the technology get adopted.

And thanks to this, I can now create my own test suite that uses the HTML form to work on a doppelganger of the real database, by parsing the HTML served by the application service (pdca.py), and launch a perfectly functioning service out of it:

from requests import post
from html.parser import HTMLParser
import requests
import os
from dateutil import parser
from passlib.hash import scrypt as crypto_hash  # we can change the hash easily
from urllib.parse import parse_qsl, urlparse

# heavyweight
from requests import get
from sqlalchemy import *
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

DB = os.environ.get('DB', 'test.db')
DB_DRIVER = os.environ.get('DB_DRIVER', 'sqlite')
DSN = f"{DB_DRIVER}://{DB_DRIVER == 'sqlite' and not DB.startswith('/') and '/' or ''}{DB}"
ENDPOINT = "http://127.0.0.1:5000"

os.chdir("..")
os.system(f"rm {DB}")
os.system(f"DB={DB} DB_DRIVER={DB_DRIVER} python pdca.py & sleep 2")

url = lambda table: ENDPOINT + "/" + table
os.system(f"curl {url('group')}?_action=search")

form_to_db = transtype_input = lambda attrs: {
    k: (
        # handling of inputs having date/time in the name
        "date" in k or "time" in k and v and type(k) == str
    ) and parser.parse(v) or
    # handling of boolean mapping for inputs whose name begins with "is_"
    k.startswith("is_") and [False, True][v == "on"] or
    # password ?
    "password" in k and crypto_hash.hash(v) or
    v
    for k, v in attrs.items() if v and not k.startswith("_")
}

post(url("user"),
    params=dict(id=1, secret_password="toto", name="jul2", email="j@j.com", _action="create"),
    files=dict(pic_file=open("./assets/diag.png", "rb").read())
).status_code
#os.system(f"curl {ENDPOINT}/user?_action=search")
#os.system(f"sqlite3 {DB} .dump")

engine = create_engine(DSN)
metadata = MetaData()

transtype_true = lambda p: (p[0], [False, True][p[1] == "true"])

def dispatch(p):
    return dict(
        nullable=transtype_true,
        unique=transtype_true,
        default=lambda p: ("server_default", eval(p[1])),
    ).get(p[0], lambda *a: None)(p)

transtype_input = lambda attrs: dict(filter(lambda x: x, map(dispatch, attrs.items())))

class HTMLtoData(HTMLParser):
    def __init__(self):
        global engine, tables, metadata
        self.cols = []
        self.table = ""
        self.tables = []
        self.enum = []
        self.engine = engine
        self.meta = metadata
        super().__init__()

    def handle_starttag(self, tag, attrs):
        global tables
        attrs = dict(attrs)
        simple_mapping = {
            "email": UnicodeText, "url": UnicodeText, "phone": UnicodeText,
            "text": UnicodeText, "checkbox": Boolean, "date": Date, "time": Time,
            "datetime-local": DateTime, "file": Text, "password": Text,
            "uuid": Text,  # UUID is postgres specific
        }
        if tag in {"select", "textarea"}:
            self.enum = []
            self.current_col = attrs["name"]
            self.attrs = attrs
        if tag == "option":
            self.enum.append(attrs["value"])
        if tag == "unique_constraint":
            self.cols.append(UniqueConstraint(*attrs["col"].split(','), name=attrs["name"]))
        if tag in {"input"}:
            if attrs.get("name") == "id":
                self.cols.append(Column('id', Integer, **(dict(primary_key=True) | transtype_input(attrs))))
                return
            try:
                if attrs.get("name").endswith("_id"):
                    table = attrs.get("name").split("_")
                    self.cols.append(Column(attrs["name"], Integer, ForeignKey(attrs["reference"])))
                    return
            except Exception as e:
                print(e)  # inputs without a name end up here
            if attrs.get("type") in simple_mapping.keys() or tag in {"select"}:
                self.cols.append(Column(attrs["name"], simple_mapping[attrs["type"]], **transtype_input(attrs)))
            if attrs["type"] == "number":
                if attrs.get("step", "") == "any":
                    self.cols.append(Column(attrs["name"], Float))
                else:
                    self.cols.append(Column(attrs["name"], Integer))
        if tag == "form":
            self.table = urlparse(attrs["action"]).path[1:]

    def handle_endtag(self, tag):
        global tables
        if tag == "select":
            # self.cols.append( Column(self.current_col, Enum(*[(k, k) for k in self.enum]), **transtype_input(self.attrs)) )
            self.cols.append(Column(self.current_col, Text, **transtype_input(self.attrs)))
        if tag == "textarea":
            self.cols.append(Column(
                self.current_col,
                String(int(self.attrs["cols"]) * int(self.attrs["rows"])),
                **transtype_input(self.attrs)))
        if tag == "form":
            self.tables.append(Table(self.table, self.meta, *self.cols))
            #tables[self.table] = self.tables[-1]
            self.cols = []
            with engine.connect() as cnx:
                self.meta.create_all(engine)
                cnx.commit()

HTMLtoData().feed(get("http://127.0.0.1:5000/").text)
os.system("pkill -f pdca.py")
#metadata.reflect(bind=engine)
Base = automap_base(metadata=metadata)
Base.prepare()

with Session(engine) as session:
    for table, values in tuple([
        ("user", form_to_db(dict(name="him", email="j2@j.com", secret_password="toto"))),
        ("group", dict(id=1, name="trolol")),
        ("group", dict(id=2, name="serious")),
        ("user_group", dict(id=1, user_id=1, group_id=1, secret_token="secret")),
        ("user_group", dict(id=2, user_id=1, group_id=2, secret_token="")),
        ("user_group", dict(id=3, user_id=2, group_id=1, secret_token="")),
        ("statement", dict(id=1, user_group_id=1, message="usable agile workflow", category="story")),
        ("statement", dict(id=2, user_group_id=1, message="How do we code?", category="story_item")),
        ("statement", dict(id=3, user_group_id=1, message="which database?", category="question")),
        ("statement", dict(id=4, user_group_id=1, message="which web framework?", category="question")),
        ("statement", dict(id=5, user_group_id=1, message="preferably less", category="answer")),
        ("statement", dict(id=6, user_group_id=1, message="How do we test?", category="story_item")),
        ("statement", dict(id=7, user_group_id=1, message="QA framework here", category="delivery")),
        ("statement", dict(id=8, user_group_id=1, message="test plan", category="test")),
        ("statement", dict(id=9, user_group_id=1, message="OK", category="finish")),
        ("statement", dict(id=10, user_group_id=1, message="PoC delivered", category="delivery")),
        ("transition", dict(user_group_id=1, previous_statement_id=1, next_statement_id=2, message="something bugs me", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=4, message="standup meeting feedback", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=3, message="standup meeting feedback", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=6, message="change accepted", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=4, next_statement_id=5, message="arbitration", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=3, next_statement_id=5, message="arbitration", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=6, next_statement_id=7, message="R&D")),
        ("transition", dict(user_group_id=1, previous_statement_id=7, next_statement_id=8, message="Q&A")),
        ("transition", dict(user_group_id=1, previous_statement_id=8, next_statement_id=9, message="CI action")),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=10, message="situation unblocked")),
        ("transition", dict(user_group_id=1, previous_statement_id=9, next_statement_id=10, message="situation unblocked")),
    ]):
        session.add(getattr(Base.classes, table)(**values))
    session.commit()

os.system("python ./generate_state_diagram.py sqlite:///test.db > out.dot; dot -Tpng out.dot > diag2.png; xdot out.dot")

s = requests.session()
os.system(f"DB={DB} DB_DRIVER={DB_DRIVER} python pdca.py & sleep 1")
print(s.post(url("group"), params=dict(_action="delete", id=3, name=1)).status_code)
print(s.post(url("grant"), params=dict(secret_password="toto", email="j@j.com", group_id=1)).status_code)
print(s.post(url("grant"), params=dict(_redirect="/group", secret_password="toto", email="j@j.com", group_id=2)).status_code)
print(s.cookies["Token"])
print(s.post(url("user_group"), params=dict(_action="search", user_id=1)).text)
print(s.post(url("group"), params=dict(_action="create", id=3, name=2)).text)
print(s.post(url("group"), params=dict(_action="delete", id=3)).status_code)
print(s.post(url("group"), params=dict(_action="search")).text)
os.system("pkill -f pdca.py")

Which gives me a nice set of data to play with while I experiment on how to handle the business logic, where the core of the value is.

Julien Tayon: The advantages of HTML as a data model over basic declarative ORM approach

Planet Python - Tue, 2024-11-19 23:04
Very often, backend devs don't want to write code.

For this, we use one trick : derive HTML widget for presentation, database access, REST endpoints from ONE SOURCE of truth and we call it MODEL.

A tradition, and I insist it's a conservative tradition, is to use a declarative model where we mad the truth of the model from python classes.

By declaring a class we will implicitly declare it's SQL structure, the HTML input form for human readable interaction and the REST endpoint to access a graph of objects which are all mapped on the database.

Since the arrival of pydantic it makes all the more sense when it comes to empower a strongly type approach in python.

But is it the only one worthy ?

I speak here as a veteran of the trenchline which job is to read a list of entries of customer in an xls file from a project manager and change the faulty value based on the retro-engineering of an HTML formular into whatever the freak the right value is supposed to be.

In this case your job is in fact to short circuit the web framework to which you don't have access to change values directly into the database.

More often than never is these real life case you don't have access to the team who built the framework (to much bureaucracy to even get a question answered before the situation gets critical) ... So you look at the form.

And you guess the name of the table that is impacted by looking at the « network tab » in the developper GUI when you hit the submit button.

And you guess the name of the field impacted in the table to guess the name of the columns.

And then you use your only magical tool which is a write access to the database to reflect the expected object with an automapper and change values.

You could do it raw SQL I agree, but sometimes you need to do a web query in the middle to change the value because you have to ask a REST service what is the new ID of the client.

And you see the more this experience of having to tweak into real life frameworks that often surprise users for the sake of the limitation of the source of truth, the more I want the HTML to be the source of truth.

The most stoïcian approach to full stack framework approach : to derive Everything from an HTML page.

The views, the controllers, the route, the model in such a true way that if you modify the HTML you modify in real time the database model, the routes, the displayed form.



What are the advantages of HTML as a declarative language ?

Here, one of the tradition is to prefere the human readable languages such as YAML and JSON, or machine readable as XML over HTML.

However, JSON and YAML are more limited in expressiveness of data structure than HTML (you can have a dict as a key in a dict in json ? Me I can.)

And on the other hand XML is quite a pain to read and write without mistakes.

HTML is just XML

HTML is a lax and lenient grammarless XML. No parsers will raise an exception because you wrote "<br>" instead of "<br/>" (or the opposite). You can add non existent attributes to tags and the parser will understand this easily without you having to redefine a full fledge grammar.

HTML is an XML YOU CAN SEE.

There are some tags that are related to a grammar of visual widget to which non computer people are familiar with.

If you use a FORM as a mapping to a database table, and all input inside has A column name you have already input drawn on your screen.



Modern « remote procedure call » are web based

Call it RPC, call it soap, call it REST, nowadays the web technologies trust 99% of how computer systems exchange data between each others.

You buy something on the internet, at the end you interact with a web formular or a web call. Hence, we can assert with strong convictions that 100% of web technologies can serve web pages. Thus, if you use your html as a model and present it, therefore you can deduce the data model from the form without needing a new pivoting language.

Proof of concept

For the convenience of « fun » we are gonna imagine a backend for « agile by micro blogging » (à la former twitter).

We are gonna assume the platform is structured micro blogging around where agile shines the most : not when things are done, but to move things on.

Things that are done will be called statements. Like : « software is delivered. Here is a factoid (a git url for instance) ». We will call this nodes in a graph and are they will be supposed to immutable states that can't be contested.

Each statement answers another statement's factoid like a delivery statement tends to follow a story point (at least should lead by the mean of a transition.

Hence in this application we will mirco-blog about the transition ... like on a social network with members of concerned group.
The idea of the application is to replace scrum meetings with micro blogging.

Are you blocked ? Do you need anything ? Can be answered on the mirco blogging platform, and every threads that are presented archived, used for machine learning (about what you want to hear as a good news) in a data form that is convenient for large language model.

As such we want to harvest a text long enough to express emotions, constricted to a laughingly small amount of characters so that finesse and ambiguity are tough to raise. That's the heart of the application : harvesting comments tagged with associated emotions to ease the work of tagging for Artificial Intelligence.

Hear me out, this is just a stupid idea of mine to illustrate a graph like structure described with HTML, not a real life idea. Me I just love to represent State Machine Diagram with everything that fall under my hands.

Here is the entity relationship diagram I have in mind :

Let's see what a table declaration might look like in HTML, let's say transition : <form action=/transition > <input type=number name=id /> <input type=number name=user_group_id nullable=false reference=user_group.id /> <textarea name=message rows=10 cols=50 nullable=false ></textarea> <input type=url name=factoid /> <select name="emotion_for_group_triggered" value=neutral > <option value="">please select a value</option> <option value=positive >Positive</option> <option value=neutral >Neutral</option> <option value=negative >Negative</option> </select> <input type=number name=expected_fun_for_group /> <input type=number name=previous_statement_id reference=statement.id nullable=false /> <input type=number name=next_statement_id reference=statement.id /> <unique_constraint col=next_statement_id,previous_statement_id name=unique_transition ></unique_constraint> <input type=checkbox name=is_exception /> </form> Through the use of additionnal tags of html and attributes we can convey a lot of informations usable for database construction/querying that are gonna be silent at the presentation (like unique_constraint). And with a little bit of javascript and CSS this html generate the following rendering (indicating the webservices endpoint as input type=submit :

Meaning that you can now serve a landing page that serve the purpose of human interaction, describing a « curl way » of automating interaction and a full model of your database.

Most startup think data model should be obfuscated to prevent being copied, most free software project thinks that sharing the non valuable assets helps adopt the technology.

And thanks to this, I can now create my own test suite that uses the HTML forms to work on a doppelgänger of the real database: it parses the HTML served by the application service (pdca.py) and launches a perfectly functioning service out of it:

from requests import post, get
from html.parser import HTMLParser
import requests
import os
from dateutil import parser
from passlib.hash import scrypt as crypto_hash  # we can change the hash easily
from urllib.parse import parse_qsl, urlparse
# heavyweight
from sqlalchemy import *
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

DB = os.environ.get('DB', 'test.db')
DB_DRIVER = os.environ.get('DB_DRIVER', 'sqlite')
DSN = f"{DB_DRIVER}://{DB_DRIVER == 'sqlite' and not DB.startswith('/') and '/' or ''}{DB}"
ENDPOINT = "http://127.0.0.1:5000"

os.chdir("..")
os.system(f"rm {DB}")
os.system(f"DB={DB} DB_DRIVER={DB_DRIVER} python pdca.py & sleep 2")

url = lambda table: ENDPOINT + "/" + table
os.system(f"curl {url('group')}?_action=search")

form_to_db = transtype_input = lambda attrs: {
    k: (
        # handling of input having date/time in the name
        "date" in k or "time" in k and v and type(k) == str
    ) and parser.parse(v) or
    # handling of boolean mapping for inputs whose name begins with "is_"
    k.startswith("is_") and [False, True][v == "on"] or
    # password ?
    "password" in k and crypto_hash.hash(v) or
    v
    for k, v in attrs.items() if v and not k.startswith("_")
}

post(url("user"),
     params=dict(id=1, secret_password="toto", name="jul2", email="j@j.com", _action="create"),
     files=dict(pic_file=open("./assets/diag.png", "rb").read())).status_code
# os.system(f"curl {ENDPOINT}/user?_action=search")
# os.system(f"sqlite3 {DB} .dump")

engine = create_engine(DSN)
metadata = MetaData()

transtype_true = lambda p: (p[0], [False, True][p[1] == "true"])

def dispatch(p):
    return dict(
        nullable=transtype_true,
        unique=transtype_true,
        default=lambda p: ("server_default", eval(p[1])),
    ).get(p[0], lambda *a: None)(p)

transtype_input = lambda attrs: dict(filter(lambda x: x, map(dispatch, attrs.items())))

class HTMLtoData(HTMLParser):
    def __init__(self):
        global engine, tables, metadata
        self.cols = []
        self.table = ""
        self.tables = []
        self.enum = []
        self.engine = engine
        self.meta = metadata
        super().__init__()

    def handle_starttag(self, tag, attrs):
        global tables
        attrs = dict(attrs)
        simple_mapping = {
            "email": UnicodeText, "url": UnicodeText, "phone": UnicodeText,
            "text": UnicodeText, "checkbox": Boolean, "date": Date, "time": Time,
            "datetime-local": DateTime, "file": Text, "password": Text,
            "uuid": Text,  # UUID is postgres specific
        }
        if tag in {"select", "textarea"}:
            self.enum = []
            self.current_col = attrs["name"]
            self.attrs = attrs
        if tag == "option":
            self.enum.append(attrs["value"])
        if tag == "unique_constraint":
            self.cols.append(UniqueConstraint(*attrs["col"].split(','), name=attrs["name"]))
        if tag in {"input"}:
            if attrs.get("name") == "id":
                self.cols.append(Column('id', Integer, **(dict(primary_key=True) | transtype_input(attrs))))
                return
            try:
                if attrs.get("name").endswith("_id"):
                    table = attrs.get("name").split("_")
                    self.cols.append(Column(attrs["name"], Integer, ForeignKey(attrs["reference"])))
                    return
            except Exception as e:
                log(e, ln=line())  # log()/line() are helpers from the author's toolbox
            if attrs.get("type") in simple_mapping.keys() or tag in {"select", }:
                self.cols.append(
                    Column(attrs["name"], simple_mapping[attrs["type"]], **transtype_input(attrs))
                )
            if attrs["type"] == "number":
                if attrs.get("step", "") == "any":
                    self.cols.append(Column(attrs["name"], Float))
                else:
                    self.cols.append(Column(attrs["name"], Integer))
        if tag == "form":
            self.table = urlparse(attrs["action"]).path[1:]

    def handle_endtag(self, tag):
        global tables
        if tag == "select":
            # self.cols.append(Column(self.current_col, Enum(*[(k, k) for k in self.enum]), **transtype_input(self.attrs)))
            self.cols.append(Column(self.current_col, Text, **transtype_input(self.attrs)))
        if tag == "textarea":
            self.cols.append(
                Column(
                    self.current_col,
                    String(int(self.attrs["cols"]) * int(self.attrs["rows"])),
                    **transtype_input(self.attrs)
                )
            )
        if tag == "form":
            self.tables.append(Table(self.table, self.meta, *self.cols))
            # tables[self.table] = self.tables[-1]
            self.cols = []
            with engine.connect() as cnx:
                self.meta.create_all(engine)
                cnx.commit()

HTMLtoData().feed(get("http://127.0.0.1:5000/").text)
os.system("pkill -f pdca.py")

# metadata.reflect(bind=engine)
Base = automap_base(metadata=metadata)
Base.prepare()

with Session(engine) as session:
    for table, values in tuple([
        ("user", form_to_db(dict(name="him", email="j2@j.com", secret_password="toto"))),
        ("group", dict(id=1, name="trolol")),
        ("group", dict(id=2, name="serious")),
        ("user_group", dict(id=1, user_id=1, group_id=1, secret_token="secret")),
        ("user_group", dict(id=2, user_id=1, group_id=2, secret_token="")),
        ("user_group", dict(id=3, user_id=2, group_id=1, secret_token="")),
        ("statement", dict(id=1, user_group_id=1, message="usable agile workflow", category="story")),
        ("statement", dict(id=2, user_group_id=1, message="How do we code?", category="story_item")),
        ("statement", dict(id=3, user_group_id=1, message="which database?", category="question")),
        ("statement", dict(id=4, user_group_id=1, message="which web framework?", category="question")),
        ("statement", dict(id=5, user_group_id=1, message="preferably less", category="answer")),
        ("statement", dict(id=6, user_group_id=1, message="How do we test?", category="story_item")),
        ("statement", dict(id=7, user_group_id=1, message="QA framework here", category="delivery")),
        ("statement", dict(id=8, user_group_id=1, message="test plan", category="test")),
        ("statement", dict(id=9, user_group_id=1, message="OK", category="finish")),
        ("statement", dict(id=10, user_group_id=1, message="PoC delivered", category="delivery")),
        ("transition", dict(user_group_id=1, previous_statement_id=1, next_statement_id=2, message="something bugs me", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=4, message="standup meeting feedback", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=3, message="standup meeting feedback", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=6, message="change accepted", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=4, next_statement_id=5, message="arbitration", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=3, next_statement_id=5, message="arbitration", is_exception=True)),
        ("transition", dict(user_group_id=1, previous_statement_id=6, next_statement_id=7, message="R&D")),
        ("transition", dict(user_group_id=1, previous_statement_id=7, next_statement_id=8, message="Q&A")),
        ("transition", dict(user_group_id=1, previous_statement_id=8, next_statement_id=9, message="CI action")),
        ("transition", dict(user_group_id=1, previous_statement_id=2, next_statement_id=10, message="situation unblocked")),
        ("transition", dict(user_group_id=1, previous_statement_id=9, next_statement_id=10, message="situation unblocked")),
    ]):
        session.add(getattr(Base.classes, table)(**values))
    session.commit()

os.system("python ./generate_state_diagram.py sqlite:///test.db > out.dot; dot -Tpng out.dot > diag2.png; xdot out.dot")

s = requests.session()
os.system(f"DB={DB} DB_DRIVER={DB_DRIVER} python pdca.py & sleep 1")
print(s.post(url("group"), params=dict(_action="delete", id=3, name=1)).status_code)
print(s.post(url("grant"), params=dict(secret_password="toto", email="j@j.com", group_id=1)).status_code)
print(s.post(url("grant"), params=dict(_redirect="/group", secret_password="toto", email="j@j.com", group_id=2)).status_code)
print(s.cookies["Token"])
print(s.post(url("user_group"), params=dict(_action="search", user_id=1)).text)
print(s.post(url("group"), params=dict(_action="create", id=3, name=2)).text)
print(s.post(url("group"), params=dict(_action="delete", id=3)).status_code)
print(s.post(url("group"), params=dict(_action="search")).text)
os.system("pkill -f pdca.py")

That gives me a nice set of data to play with while I experiment on how to handle the business logic, which is where the core of the value is.
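To make the trick easier to see in isolation, here is a minimal sketch of the same idea (hypothetical form, crude two-type mapping; the real parser above handles many more cases):

from html.parser import HTMLParser
from sqlalchemy import Column, Integer, MetaData, Table, UnicodeText

class FormToTable(HTMLParser):
    """Build a SQLAlchemy Table from a single <form> whose action path names the table."""
    def __init__(self):
        super().__init__()
        self.meta = MetaData()
        self.cols = []
        self.name = None
        self.table = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.name = attrs["action"].lstrip("/")
        elif tag == "input":
            # crude mapping: numbers become Integer, everything else text
            kind = Integer if attrs.get("type") == "number" else UnicodeText
            self.cols.append(Column(attrs["name"], kind, primary_key=attrs["name"] == "id"))

    def handle_endtag(self, tag):
        if tag == "form":
            self.table = Table(self.name, self.meta, *self.cols)

p = FormToTable()
p.feed('<form action="/user"><input name="id" type="number"><input name="email" type="email"></form>')
print(p.table.c.keys())  # ['id', 'email']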
Categories: FLOSS Project Planets

Simon's Blog: Drupal Settings.php Snippets for Debug

Planet Drupal - Tue, 2024-11-19 19:00
Usage (TL;DR)

You can simply copy and paste (<CTRL-C> + <CTRL-V>) the following into your public_html/sites/default/settings.php, toggling on/off the lines that are relevant to your needs:

$config['system.logging']['error_level'] = 'verbose'; // [A] (For Drupal 8+) Turn on verbose debug message reporting
//$conf['error_level'] = 2; // [A] (For Drupal 7) Turn on verbose debug message reporting (equivalent to navigating to Administration → Configuration → Development → Logging and errors and selecting "All messages")
// [A] -------------------------------------------
error_reporting(E_ALL); // [A] Enable PHP error reporting (for local Drupal development, this and the two lines below help you debug and fix major runtime errors)
ini_set('display_errors', TRUE); // [A] Display PHP errors
ini_set('display_startup_errors', TRUE); // [A] Display PHP startup errors
$settings['twig_debug'] = TRUE; // [B] Twig Debug - Turn on twig debug mode
$settings['twig_auto_reload'] = TRUE; // [B] Twig Debug - Turn on twig template auto reload
$settings['twig_cache'] = FALSE; // [B] Twig Debug - Turn off twig cache
// [B] ------------------------------------------
$settings['cache']['bins']['render'] = 'cache.backend.null'; // [B] Disable Caching - Disable render caching.
$settings['cache']['bins']['page'] = 'cache.backend.null'; // [B] Disable Caching - Disable page cache.
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null'; // [B] Disable Caching - Disable dynamic page cache.
$settings['cache']['default'] = 'cache.backend.null'; // [B] Disable Caching - Disable backend cache.
$config['system.performance']['css']['preprocess'] = FALSE; // [C] Turn off aggregated CSS (see: https://www.drupal.org/docs/develop/development-tools/disabling-and-debugging-caching)
$config['system.performance']['js']['preprocess'] = FALSE; // [C] Turn off aggregated JS (see: https://www.drupal.org/docs/develop/development-tools/disabling-and-debugging-caching)
$settings['update_free_access'] = FALSE; // [D] Set to TRUE to allow access to /update.php without logging in
$settings['rebuild_access'] = TRUE; // [D] Enable access to /rebuild.php (allows Drupal's PHP and database cached storage to be cleared via the rebuild.php page; access can also be gained by generating a query string with rebuild_token_calculator.sh and using those parameters in a request to rebuild.php)
$settings['skip_permissions_hardening'] = TRUE; // [E] Skip file system permissions hardening (the System module periodically checks that the site directory is not writable by the website user; for sites managed with version control, this check can cause problems when files such as settings.php are updated, because the user pulling in changes won't have permission to modify files in that directory)
$settings['extension_discovery_scan_tests'] = TRUE; // [E] Allow test modules and themes to be installed (Drupal ignores test extensions by default for performance reasons; during development it can be useful to install them for debugging)
$settings['trusted_host_patterns'] = array(); // [E] Turn off trusted host patterns

(For instance, if you only need verbose debug output, comment out all lines except $config['system.logging']['error_level'] = 'verbose';.)

Categories: FLOSS Project Planets

Seth Michael Larson: SEGA Genesis & Mega Drive games and ROMs from Steam

Planet Python - Tue, 2024-11-19 19:00

Published 2024-11-20 by Seth Larson

TL;DR: SEGA is discontinuing the "SEGA Mega Drive and Genesis Classics" on December 6th. This is an affordable way to purchase these games and ROMs compared to the original cartridges. Buy games you are interested in while you still can.

In particular, Dr. Robotnik's Mean Bean Machine is one of my favorite games. I created copy-cat games when I was first learning how to program computers. I already own this game twice over, as a Genesis cartridge and in the Sonic Mega Collection for the GameCube, but with neither of those formats is it easy to get at the ROM itself to play it elsewhere.


So I heard you like beans.
That's where the SEGA Mega Drive and Genesis Classics comes in. This launcher provides uncompressed ROMs that are easily accessible after purchasing the game. For the below instructions, I am using Ubuntu 24.04 as my operating system. Here's what I did:

  • Download the Steam launcher for Linux.
  • Purchase Dr. Robotnik's Mean Bean Machine on Steam for $4.99 USD.
  • Download the "SEGA Mega Drive and Genesis Classics" launcher and the Dr. Robotnik's Mean Bean Machine "DLC". You don't have to launch the game through Steam.
  • Navigate to ~/.steam/steam/steamapps/common/Sega\ Classics/uncompressed\ ROMs.
  • ROM files can be found in this directory. Their file extension will be either .SGD or .68K. These can be changed to .bin to be recognized by emulators for Linux like Kega Fusion.
# How to mass-rename ROM extensions if you purchase multiple like I did:
$ for f in *.68K; do mv -- "$f" "${f%.68K}.bin"; done
$ for f in *.SGD; do mv -- "$f" "${f%.SGD}.bin"; done
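If you prefer Python for this kind of chore, a pathlib sketch along these lines should do the same thing (run it from inside the uncompressed ROMs directory):

from pathlib import Path

# Rename every .68K and .SGD ROM in the current directory to .bin
for ext in (".68K", ".SGD"):
    for rom in Path(".").glob(f"*{ext}"):
        rom.rename(rom.with_suffix(".bin"))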

From here, you should be able to load these ROMs into any emulator. Happy gaming!

Have thoughts or questions? Let's chat over email or social:

sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org


Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

Arnaud Rebillout: Installing an older Ansible version via pipx

Planet Debian - Tue, 2024-11-19 19:00
Latest Ansible requires Python 3.8 on the remote hosts

... and therefore, hosts running Debian Buster are now unsupported.

Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:

$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:

ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]

Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.

How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.

Pipx to the rescue

TL;DR:

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython  # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.

Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess of your system like pip does, and it doesn't require you to learn how to set up Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.

First thing to know: pipx install ansible won't cut it: it doesn't install the whole Ansible suite. Instead, we need to use the --include-deps flag to install all the Ansible commands.

The output should look something like this:

$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done! ✨ 🌟 ✨

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.

Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.

Now, let's open a new terminal and check if we're good:

$ which ansible
/home/me/.local/bin/ansible
$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.

What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.

Injecting Python dependencies needed by collections

Quickly enough, I realized something odd: apparently the community.general.dig plugin didn't work anymore. After some research, I found a one-liner to test it:

# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
    "msg": "An unhandled exception occurred while running the lookup plugin 'dig'. Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:

$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done! ✨ 🌟 ✨

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.
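To audit a venv for the modules your plugins need, you can ask its interpreter directly. Here is a rough sketch (the venv path and the module list are assumptions: dnspython imports as dns, and netaddr and jmespath are other dependencies some plugins want):

# Run with the pipx venv's interpreter, e.g.:
#   ~/.local/share/pipx/venvs/ansible/bin/python check_deps.py
# (the venvs directory may differ between pipx versions)
import importlib.util

for mod in ("dns", "netaddr", "jmespath"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'present' if found else 'missing -> pipx inject ansible <package>'}")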

Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there are more quirks and rough edges, drop me an email so that I can update this blog post.

Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/

Categories: FLOSS Project Planets

Aurelien Jarno: AI crawlers should be smarter

Planet Debian - Tue, 2024-11-19 17:31

It would be fantastic if all those AI companies dedicated some time to making their web crawlers smarter (what about using AI?). Nowadays most of them still stupidly follow every link on a Git frontend.

Hint: Changing the display options does not provide more training data!

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #656 (Nov. 19, 2024)

Planet Python - Tue, 2024-11-19 14:30

#656 – NOVEMBER 19, 2024
View in Browser »

How to Debug Your Textual Application

TUI applications require a full terminal, which most IDEs don't provide. To make matters more complicated, TUIs use the same calls that many command-line debuggers use, making breakpoints hard to work with. This article teaches you how to debug a Textual TUI program.
MIKE DRISCOLL

Dictionary Comprehensions: How and When to Use Them

In this tutorial, you’ll learn how to write dictionary comprehensions in Python. You’ll also explore the most common use cases for dictionary comprehensions and learn about some bad practices that you should avoid when using them in your code.
REAL PYTHON

What We Learned From Analyzing 20.2 Million CI Jobs

The Trunk Flaky Test public beta is open! You can now detect, quarantine, and eliminate flaky tests from your codebase. Discover insights from our analysis of 20.2 million CI jobs and see how Trunk can unblock pipelines and stop reruns. Access is free. Check out our getting started guide here →
TRUNK sponsor

Python Puzzles

A collection of Python puzzles. You are given a test file, and should write an implementation that passes the tests. All done in your browser.
GPTENGINEER.RUN

Announcing DjangoCon Europe 2025 in Dublin, Ireland

DJANGO SOFTWARE FOUNDATION

Flask 3.1 Released

PALLETSPROJECTS.COM

Quiz: Basic Input and Output in Python

REAL PYTHON

PEP 761: Deprecating PGP Signatures for CPython Artifacts (Approved)

PYTHON.ORG

Quiz: Using .__repr__() vs .__str__() in Python

REAL PYTHON

Discussions

Andrej Karpathy on Learning

Entertainment-based content may appear educational, but it is not effective for learning. To truly learn, one should seek out long-form, challenging content that requires effort and engagement. Educators should prioritize creating meaningful, in-depth content that fosters deep learning.
X.COM

Ideas: Turn shutil Into a Runnable Module

PYTHON.ORG

Articles & Tutorials

Maintaining the Foundations of Python & Cautionary Tales

How do you build a sustainable open-source project and community? What lessons can be learned from Python’s history and the current mess that the WordPress community is going through? This week on the show, we speak with Paul Everitt from JetBrains about navigating open-source funding and the start of the Python Software Foundation.
REAL PYTHON podcast

The Practical Guide to Scaling Django

Most Django scaling guides focus on theoretical maximums. But real scaling isn’t about handling hypothetical millions of users - it’s about systematically eliminating bottlenecks as you grow. Here’s how to do it right, based on patterns that work in production.
ANDREW

Build Your Own AI Assistant with Edge AI

Simplify workloads and elevate customer service. Build customized AI assistants that respond to voice prompts with powerful language and comprehension capabilities. Personalized AI assistance based on your unique needs with Intel’s OpenVINO toolkit.
INTEL CORPORATION sponsor

The Polars vs pandas Difference Nobody Is Talking About

When people compare pandas and Polars, they usually bring up topics such as lazy execution, Rust, null values, multithreading, and query optimisation. Yet there's one innovation that people often overlook: non-elementary group-by aggregations.
MARCO GORELLI • Shared by Marco Gorelli

PyPI Introduces Digital Attestations to Strengthen Security

PyPI now supports digital attestations. This feature lets Python package maintainers verify the authenticity and integrity of their uploads with cryptographically verifiable attestations, adding an extra layer of security and trust.
SARAH GOODING • Shared by Sarah Gooding

Django’s Technical Governance Challenges, and Opportunities

On October 29th, two DSF steering council members resigned, triggering an election earlier than planned. This note explains what that means and how you can get involved.
DJANGO SOFTWARE FOUNDATION

We’ve Moved to Hetzner

This post from Michael Kennedy talks about moving Talk Python’s hosting environment from Digital Ocean to Hetzner. It details everything involved in a move like this.
TALK PYTHON

Formatting Floats Inside Python F-Strings

In this video course, you’ll learn how to use Python format specifiers within an f-string to allow you to neatly format a float to your required precision.
REAL PYTHON course

Package Compatibility With Free-Threading and Subinterpreters

This tracker tests the compatibility of the 500 most popular packages with Python 3.13’s free-threading and subinterpreter features.
PYTHON.TIPS • Shared by Vita Midori

Projects & Code

chonkie: CHONK Your Texts With Chonkie

GITHUB.COM/BHAVNICKSM

seqlogic: Sequential Logic Simulator

GITHUB.COM/CJDRAKE

venvstacks: Virtual Environment Stacks for Python

GITHUB.COM/LMSTUDIO-AI

terminal-tree: Experimental Filesystem Navigator in Textual

GITHUB.COM/WILLMCGUGAN

chdb: An in-Process OLAP SQL Engine

GITHUB.COM/CHDB-IO

Events

Weekly Real Python Office Hours Q&A (Virtual)

November 20, 2024
REALPYTHON.COM

PyData Bristol Meetup

November 21, 2024
MEETUP.COM

PyLadies Dublin

November 21, 2024
PYLADIES.COM

PyConAU 2024

November 22 to November 27, 2024
PYCON.ORG.AU

Plone Conference 2024

November 25 to December 1, 2024
PLONECONF.ORG

Code, Configure and Deploy a Market Making Bot

November 25, 2024
MEETUP.COM

PyCon Wroclaw 2024

November 30 to December 1, 2024
PYCONWROCLAW.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #656.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Python Insider: Python 3.14.0 alpha 2 released

Planet Python - Tue, 2024-11-19 11:03

Alpha 2? But Alpha 1 only just came out!

https://www.python.org/downloads/release/python-3140a2/

This is an early developer preview of Python 3.14

Major new features of the 3.14 series, compared to 3.13

Python 3.14 is still in development. This release, 3.14.0a2, is the second of seven planned alpha releases.

Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.

During the alpha phase, features may be added up until the start of the beta phase (2025-05-06) and, if necessary, may be modified or deleted up until the release candidate phase (2025-07-22). Please keep in mind that this is a preview release and its use is not recommended for production environments.

Many new features for Python 3.14 are still being planned and written; see the release page for the major new features and changes added so far.

The next pre-release of Python 3.14 will be 3.14.0a3, currently scheduled for 2024-12-17.

More resources

Enjoy the new release

Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.

Regards from a chilly Helsinki with snow on the way,

Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa

Categories: FLOSS Project Planets

Drupal Association blog: Celebrating Success: DrupalCon Barcelona 2024 Event Impact Recap

Planet Drupal - Tue, 2024-11-19 11:00

Welcome to the Event Impact Recap of DrupalCon Barcelona 2024. This year’s conference not only showcased the vibrant spirit of our global network but also highlighted the achievements and successes that emerged from this remarkable gathering. As we look forward to upcoming events in Singapore and Atlanta, let's take a minute to celebrate what we accomplished together in Barcelona!

At every DrupalCon, we unite the global Drupal community—crafted by the community, for the community. Our mission is to foster an inclusive environment where Drupal Certified Partners, Agencies, Marketers, End Users, Developers, Site Builders, and Community Organizers come together to train, learn, network, see old friends and make new ones, and grow their careers. We strive to create a vibrant space that celebrates collaboration and innovation, providing opportunities for personal and professional development.

Through shared knowledge, diverse perspectives, and active engagement, DrupalCon serves as a beacon for Drupal enthusiasts, empowering them to contribute to the future of open-source software. Together, we will shape the next generation of digital experiences, ensuring that Drupal continues to thrive, grow and innovate worldwide.

Key Highlights from DrupalCon Barcelona 2024 Attendance and Engagement

With 1,087 registered attendees and an impressive 96% check-in rate, DrupalCon Barcelona brought together a passionate community of Drupal enthusiasts and professionals. Notably, 307 participants received complimentary registrations (that’s 31%!) for their roles as speakers, scholarship recipients, or planners, reinforcing our commitment to inclusivity and accessibility.

Among the attendees, 27% were first-time DrupalCon participants, while 33.8% had attended four or more times. An impressive 79.1% of attendees expressed their intention to recommend DrupalCon to friends or colleagues, highlighting the event’s value.

Global Representation

DrupalCon Barcelona truly exemplified our global reach, with attendees from 66 countries across six continents. This diversity enriched our discussions and collaborations, showcasing the power of Drupal as a unifying platform.

Registrations Per Country

  • United Kingdom: 122
  • Spain: 113
  • Germany: 111
  • Belgium: 102
  • United States: 97
  • France: 58
  • India: 33
  • Netherlands: 31
  • Norway: 30
  • Denmark: 27
  • Switzerland: 24
  • Austria: 23
  • Sweden: 22
  • Finland: 21
  • Bulgaria: 20
  • Poland: 16
  • Portugal: 15
  • Ireland: 15
  • Italy: 15
  • Greece: 13
  • Canada: 12
  • Czech Republic: 9
  • Georgia: 9
  • Romania: 8
  • Serbia: 7
  • Brazil: 7
  • Ukraine: 6
  • Australia: 5
  • Lithuania: 5
  • Belarus: 5
  • Hungary: 5
  • Mexico: 5
  • Slovakia: 4
  • Japan: 4
  • Slovenia: 3
  • Uruguay: 3
  • Iceland: 3
  • Estonia: 2
  • Czechia: 2
  • España: 2
  • Israel: 2
  • Armenia: 2
  • Croatia: 2
  • Ghana: 2
  • Schweiz: 1
  • Nicaragua: 1
  • Singapore: 1
  • Thailand: 1
  • Cyprus: 1
  • Turkey: 1
  • Åland Islands: 1
  • Luxembourg: 1
  • Algeria: 1
  • Magyarország: 1
  • Niger: 1
  • Antigua: 1
  • Bangladesh: 1
  • Saudi Arabia: 1
  • Tunisia: 1
  • Peru: 1
  • Argentina: 1
  • Philippines: 1
  • Colombia: 1
  • Burkina Faso: 1
  • Afghanistan: 1
  • Iran: 1

DriesNote and Starshot

A standout moment was the DriesNote, which attracted 810 attendees eager to learn about the future of Drupal CMS and the role of AI in expanding our marketplace. The insights shared during this session sparked lively discussions and innovative ideas.

The Starshot track and Makers and Takers tracks were immensely popular, with the top session, "Drupal AI: The Golden Era of the Web," drawing 520 attendees. These sessions not only highlighted cutting-edge topics but also fostered collaboration and knowledge sharing among participants.

Sponsorship Support

DrupalCon Barcelona 2024 was made possible by the generous support of our sponsors:

  • Diamond Sponsors: 4
  • Platinum Sponsors: 6
  • Gold Sponsors: 3
  • Silver Sponsors: 12
  • Module Sponsors: 11
  • Village Sponsors: 5
  • Media Sponsors: 3
  • Scholarship Sponsors: 3
  • Total Sponsors: 30

In total, we had 30 sponsors, whose commitment to the Drupal community was essential to the event and to the community's overall growth and success. Their support underscores the strength of our partnerships and shared goals.

Volunteer Contributions

The success of DrupalCon Barcelona was greatly aided by 208 dedicated volunteers, who contributed their time and talents across various roles—from session review committees and help desks to contribution monitors and photographers. Their hard work and enthusiasm were crucial in creating a welcoming and productive environment for all.

Looking Ahead

As we reflect on the achievements and connections fostered at DrupalCon Barcelona 2024, I feel optimistic about the future of Drupal. This event was not just a conference; it was a celebration of collaboration, knowledge sharing, and community spirit. 

I extend my heartfelt gratitude to everyone who contributed to this success—from attendees and volunteers to sponsors and organizers. Together, we can carry this momentum forward as we embark on the next chapter of Drupal's journey at DrupalCon Singapore, DrupalCon Atlanta, and beyond.

Here’s to continued growth, innovation, and the vibrant spirit of the Drupal community! I hope to see many of you in Singapore in December, where we will get a sneak peek of Drupal CMS ahead of its release in January 2025; tickets are available on the DrupalCon Singapore website.

Categories: FLOSS Project Planets

PyCharm: Code Faster with JetBrains AI in PyCharm

Planet Python - Tue, 2024-11-19 10:25

PyCharm 2024.3 comes with many improvements to JetBrains AI to help you code faster. I’m going to walk you through some of these updates in this blog post. 

Natural language inline AI prompt

You can now use JetBrains AI by typing straight into your editor in natural language without opening the AI Assistant tool window. If you use either IntelliJ IDEA or PyCharm, you might already be familiar with natural language AI prompts, but let me walk you through the process. 

If your caret is in the gutter, you can start typing your request straight into the editor and then press Tab. Here’s an example of one such request:

write a script to capture a date input from a user and print it out prefixed by a message stating that their birthday is on that date.

You can then iterate on the initial input by clicking on the purple block in the gutter or by pressing ⌘\ or Ctrl+\ and pressing Enter:

add error handling so that when a birthday is in the future, we dont accept it

You can use  ⌘\ or Ctrl+\ to keep iterating until you’re happy with the result. For example, we can use the prompt:

print out the day of the week as well as their birthday date

And then: 

change the format of day_of_week to short
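For reference, the final result of those iterations might look something like this (a hand-written sketch of plausible output, not AI Assistant's literal result):

from datetime import datetime

date_input = input("Enter your birthday (YYYY-MM-DD): ")
try:
    birthday = datetime.strptime(date_input, "%Y-%m-%d")
    if birthday > datetime.now():
        # error handling added in the second iteration
        print("A birthday in the future isn't accepted.")
    else:
        day_of_week = birthday.strftime("%a")  # short format, e.g. "Tue"
        print(f"Your birthday is on {day_of_week}, {birthday.date()}")
except ValueError:
    print("Please enter a valid date in YYYY-MM-DD format.")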

This feature is available for Python, JavaScript, TypeScript, JSON, and YAML files.

Let’s look at some more examples. We can get JetBrains AI Assistant to help us generate new code with a prompt like this:

Write code that lists the latest polls, shows poll details, handles voting, updates votes, and displays poll results, ensuring only published polls are accessible.

Or add some error handling to our code:

Add edge case handling to this code

Remember, context is everything. Where you start your natural language prompt is important, as PyCharm uses the placement of your caret to figure out the context. You don’t need to prefix your query with a ? or $ if you start typing in the gutter because the context is the file, but if your caret is indented, you’ll need to start your query with the ? or $ character so PyCharm knows you’re crafting a natural language query.

In this example, we want to refactor existing code, so we need to prefix our query with the ? character:

?create a dedicated function for printing the schedule and remove the code from here

Try JetBrains AI for free

Running code in the Python console

We know that JetBrains AI can generate code for you, but now you can run that code in the Python console without leaving the AI Assistant tool window by clicking the green run arrow.

For example, let’s say you have the following prompt:

Create a python script that asks for a birthday date in standard format yyyy-MM-dd then converts it and prints it back out in a written format such as 22nd January 1991

You can now click the green run arrow on the top-right of the code snippet to run it in your Python console:
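The script you get back will vary, but something along these lines would satisfy the prompt (a sketch; the ordinal-suffix logic is my own assumption):

from datetime import datetime

def ordinal(n: int) -> str:
    """Return n with its English ordinal suffix, e.g. 22 -> '22nd'."""
    suffix = "th" if 11 <= n % 100 <= 13 else {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

raw = input("Enter your birthday (yyyy-MM-dd): ")
date = datetime.strptime(raw, "%Y-%m-%d")
print(f"{ordinal(date.day)} {date.strftime('%B %Y')}")  # e.g. "22nd January 1991"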

Even more features

In addition to the natural language and code completion functionality highlighted above, there are several other improvements to JetBrains AI.

Faster code completion

We have introduced a new model for faster cloud-based completion with AI Assistant, and it is showing very promising results.

Faster documentation

If documentation isn’t your thing, you can now hand off writing your Python docstrings to JetBrains AI. If you type triple quotes (single or double) to start a docstring and then press Return, you’ll see a prompt that says Generate with AI Assistant. Click that prompt and let JetBrains AI generate the documentation for you:
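As an illustration, the generated docstring might look something like this (hypothetical function and wording; the actual output depends on your code and the model):

def convert_date(date_string: str) -> str:
    """Convert an ISO-format date string into a written date.

    Args:
        date_string: A date in yyyy-MM-dd format, e.g. "1991-01-22".

    Returns:
        The date in written form, e.g. "22 January 1991".
    """
    from datetime import datetime
    d = datetime.strptime(date_string, "%Y-%m-%d")
    return f"{d.day} {d.strftime('%B %Y')}"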

Help at your fingertips

We all need a little help now and again, and we can get JetBrains AI to help us here too. We’ve added a /docs prompt to the JetBrains AI tool window. This prompt will query the PyCharm documentation to save you from switching out of the context you’re working in!

Ability to choose your LLM

For AI Chat, you can now select a different LLM from the drop-down menu in the chat window itself. There are lots of options for you to choose from:

More context in Jupyter notebooks

We’ve also improved how JetBrains AI works for data scientists. JetBrains AI now recognizes DataFrames and variables in your notebook. You can prefix your DataFrame or variable with # so that JetBrains AI considers it as part of the context. 

Summary

JetBrains AI is available inside PyCharm, right where you need it. This release brings many improvements, from writing in natural language inside the editor and running AI-generated Python snippets in the console to generating documentation. 

Remember, if you’re in the gutter, you can start typing in natural language and then press Tab to get AI Assistant to generate the code. If you’re inside a method or function, you need to prefix your natural language query with either ? or $. You can then iterate on the generated code as many times as you like as you build out your new functionality and explore further.

Try JetBrains AI for free

Categories: FLOSS Project Planets
