FLOSS Project Planets

Petter Reinholdtsen: Plain text accounting file from your bitcoin transactions

Planet Debian - Thu, 2024-03-07 12:00

A while back I wrote a small script to extract the Bitcoin transactions in a wallet in the ledger plain text accounting format. Over the last few days I have spent some time getting it to handle more special cases. In case it can be useful for others, here is a copy:

#!/usr/bin/python3
# -*- coding: utf-8 -*-
# Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    u'amount' : 'Assets:BTC:main',
}

addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit = 100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print txs
    for tx in sorted(txs, key=lambda a: a['time']):
        # print tx['category']
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print("  ; to bitcoin address %s" % tx['address'])
        else:
            print("  ; missing address in transaction, txid=%s" % tx['txid'])
        print(f"  ; amount={tx['amount']}")
        if 'fee' in tx:
            print(f"  ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print("  %-20s  %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list fee used in several transactions only once.
            if 'fee' in tx and tx['txid'] in txidfee \
               and tx['fee'] == txidfee[tx['txid']]:
                True
            else:
                fee = tx['fee']
                print("  %-20s  %s BTC" % (accounts['amount'], format_float(fee)))
                print("  %-20s  %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']

        if 'address' in tx and tx['address'] in addresses:
            print("  %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print("  Income:BTC-mining")
            else:
                if amount < Decimal(0):
                    print(f"  Assets:unknown:sent:update-script-addr-{tx['address']}")
                else:
                    print(f"  Assets:unknown:received:update-script-addr-{tx['address']}")
        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

main()

It is more of a proof of concept, and I do not expect it to handle all edge cases, but it worked for me, and perhaps you can find it useful too.

To get a more interesting result, it is useful to map the addresses coins are sent to or received from to accounting accounts, using the addresses hash. As these will be very context dependent, I leave out my list to allow each user to fill out their own list of accounts. Out of the box, 'ledger reg BTC:main' should be able to show the amount of BTC present in the wallet at any given time in the past. For other and more valuable analysis, an account plan needs to be set up in the addresses hash. Here is an example transaction:

2024-03-07 17:00 Donated to good cause
  Assets:BTC:main                    -0.1 BTC
  Assets:BTC:main                    -0.00001 BTC
  Expences:BTC-fee                   0.00001 BTC
  Expences:donations                 0.1 BTC
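For illustration, a filled-in addresses hash could look something like this; the Bitcoin addresses below are made-up placeholders, only the account names follow the ones used above:

addresses = {
    # Made-up placeholder addresses, replace with your own.
    '1ExampleBankTransferAddressXXXXXX' : 'Assets:bankkonto',
    '1ExampleDonationAddressXXXXXXXXXX' : 'Expences:donations',
}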

It needs a running Bitcoin Core daemon, as it connects to it using bitcoin-cli listtransactions * 100000 to extract the transactions listed in the wallet.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

The Drop Times: A Peek into the Vibrant Tapestry of Nerd Summit 2024

Planet Drupal - Thu, 2024-03-07 10:19
Dive into the forefront of technological innovation and community engagement at NERD Summit 2024. Explore the depths of the 'Inventure Mindset' with keynote speaker Jessica Cobb and learn more about Rick Hood's NERD Summit 2023 experience.
Categories: FLOSS Project Planets

PyCon: Travel Grants Process for PyCon US 2024

Planet Python - Thu, 2024-03-07 10:13

Awarding travel grants so community members can attend PyCon US, and seeing how much they get out of and contribute to the event, is one of the most rewarding things we do at the PSF; every year, we wish we could award more.

PyCon US 2024 received 852 travel grant requests totaling almost $1.5 million. With estimated revenue of $1.8M and conference costs of $2.1M, we didn’t have the funds to support every applicant. In round one, we offered $360K, or about 24%, of the grant requests received. This is a record number of travel grants received and offered. We know many folks hoped for a grant and were disappointed not to receive one, so we wanted to share more about the process.

Travel Grant Funding

PyCon US travel grants are funded by various sources, including PyCon US ticket sales and sponsorships, the Python Core Devs, the PSF, and generous contributions from PyLadies.

In 2024, some funding sources have contracted. To date, we have not received any corporate travel grant sponsorships. Furthermore, the crunch in the tech sector’s economy is negatively affecting general conference sponsorship. Inflation in event costs and lower projected sponsorships have combined to create a significant loss for PyCon US this year. The tech sector crunch also means that corporations aren’t funding travel for both speakers and attendees, which leads to more travel grant requests, which leads to us awarding a lower percentage of travel grants.

Travel Grant Award Philosophy

PyCon US Travel Grants are intended to support our non-profit mission to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. Overall, we aim to have an event that reflects that diverse community and our diverse interests, that brings together people who will get a lot out of the event and bring benefits back to their home communities, and that is engaging and impactful from the first tutorial to the last sprint.

To those ends, a part of our budget went to funding our incredible lineup of speakers. We also considered factors such as getting representation from around the world, bringing people who have never attended PyCon US, people who are looking for work, as well as educators and conference and community organizers around the world. Even still, the bittersweet reality is there are so many wonderful people in our community that we could not award grants to everyone who applied (though we sorely wish we were able to).

Although we are a US-based conference, we strive to bring folks from around the world. In fact, 77% of our travel grant funds were offered to 54 countries outside of the US!


Since PyCon US 2022, the Travel Grant Team has used an individualized approach; we read every application more than once. Funds awarded are more likely to be used under this awards model; in fact, there were no travel grant leftovers from PyCon US 2023. Prior to 2020, the team used an algorithm-based approach that relied on randomization and sometimes left money on the table that could have been used by other applicants.

Exploring Alternatives

We decided to offer an in-person ticket to anyone within the US and an online ticket for anyone outside the US. This resulted in offers for 203 in-person tickets and 420 online tickets (which would otherwise cost $100-$400 each). We know it may be impossible to use the in-person ticket without accompanying travel funds, but we wanted to offer something even if we couldn’t award the full request. If you’d like to exchange an in-person ticket for an online ticket, or vice versa, please email your request to pycon-reg@python.org.

Some questions were raised about how multiple rounds of grant allocations work. Our award emails advise folks that they may request additional funds if the round one award is insufficient. Requesting additional funds removes awardees from the round one travel grant pool and puts them into the second round. Unless we receive new corporate sponsorships or other funds, round two funds come from the funds folks don’t accept in round one.

To clarify the language in the award emails, the complimentary registrations will not be removed from anyone in round 2. That is to say, if you have been offered a complimentary ticket, that offer stands regardless of whether you request additional funds or not. Even if you put yourself into the second round, you will NOT forfeit your complimentary registration.


Find more information on the PyCon US Travel Grants FAQ. If you have feedback on our process, please send it to pycon-aid@python.org.
 

PyCon US Staff

 PS - As a final note, we’d love to continue expanding our travel grant program in the future. If you’d like to help do that, a great way is to encourage your employer or other companies you are connected with to sponsor the PSF or to let them know you notice and appreciate it if they are already a sponsor.

Categories: FLOSS Project Planets

Guido Günther: Phosh Nightly Package Builds

Planet Debian - Thu, 2024-03-07 09:19
Tightening the feedback loop One thing we notice ever so often is that although Phosh’s source code is publicly available and upcoming changes are open for review, the feedback loop between changes being made to the development branch and users noticing the change can still be quite long. This can be problematic, as we ideally want to catch a regression or broken use case triggered by a change on the development branch (aka main) before the general availability of a new version.
Categories: FLOSS Project Planets

PyBites: Build a route planner for maps using Python

Planet Python - Thu, 2024-03-07 08:39

In this article I use the metric system, where 1 meter = 3.28 feet. I also talk about highways. Here I use the definition in British English, where highway refers to any path with a public right of access.

We are all familiar with route planners like Google Maps or OpenStreetMap (OSM) that allow us to get the shortest path between two points in space. With OSM and its read-only API Overpass we can get very detailed information about a chosen region, including streets, roads and ways, but also information like whether a street is illuminated, whether a bus stop has a bench, or which bars are in the region. This opens the door to calculating a route based not only on distance, but also on other criteria, like prioritizing illuminated streets or the estimated quietness, so I wrote a web application to do exactly this.

What does this app look like?

You can find the application deployed here.

To use it, we first have to scroll the map to the area we want our route in. Then we click the two points on the map where we want our start and end point. Under Path Priority we can select either the quietest path or the shortest one; the quietness option is the default.

After clicking on Submit a new tab opens with the calculated route.

How does this App work?

This seems very simple on the surface, but what is on the inside?

Getting the data for the desired area

To build all the functionality, first of all we use Leaflet and JavaScript to display the map and implement the functionality for the markers.

With the markers we can get the coordinates of the desired start and end points.

Downloading the map data

The first thing we need to find a route between the two points on the map is the data of the map! With Overpass API we can query the data we need from OSM:

[out:json];
way(south, west, north, east)[highway];
(._;>;);
out;

South, west, north and east stand for the coordinates limiting the area we want to download. How Overpass works would be a blog article in itself, but in this case we get all ways inside the area bounded by the given coordinates that contain data on highway. In addition we get a list of the nodes contained in those ways. In Python, we execute the query with the following (simplified) code:

import requests

OVERPASS_URL = 'https://overpass-api.de/api/interpreter'
OVERPASS_QUERY = '''
[out:json];
way({s},{w},{n},{e})[highway];
(._;>;);
out body;
'''

overpass_query = OVERPASS_QUERY.format(n=self.area_boundaries.north,
                                       s=self.area_boundaries.south,
                                       e=self.area_boundaries.east,
                                       w=self.area_boundaries.west)
response = requests.get(OVERPASS_URL, params={'data': overpass_query})

I decided to enlarge the boundaries of the downloaded area because the following scenario could happen. Imagine we want to find a path between points 1 and 2. If we didn’t add a “margin” to the boundary marked by the points, we would only get the data inside the box in figure 2. In that case the application wouldn’t find a path between the two points, despite one existing!

By adding a margin we get the data for the way between 1 and 2, as shown in figure 3.

Parsing the data and creating the graph

Once the data is downloaded, it has to be parsed. The data contains a list of node elements with the latitude and longitude coordinates where they are located, and a list of way elements containing the highway data and a list of the node IDs forming each way.

Node1: id1, (lat1, lon1)
Node2: id2, (lat2, lon2)
…
Way1: highway: type1; nodes: id1, id2, …
Way2: highway: type2; nodes: idi, idj, …
…

Representation of the downloaded data

With this information we know which nodes are directly connected. For the edges we need to calculate the weight. Depending on whether we want to get the shortest path or the (estimated) quietest one, we need different information in order to calculate the weight of the edges.
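To make that step concrete, here is a simplified sketch of how the Overpass response could be turned into a node lookup and an edge list; the names and structure are illustrative, not the application's actual code:

# Simplified sketch (illustrative names, not the app's actual code).
# 'response' is the requests.get(...) result from the snippet above.
elements = response.json()['elements']

nodes = {}         # node id -> (lat, lon)
edge_list = []     # pairs of directly connected node ids
edge_highway = {}  # (node id, node id) -> highway tag of the containing way

for element in elements:
    if element['type'] == 'node':
        nodes[element['id']] = (element['lat'], element['lon'])

for element in elements:
    if element['type'] == 'way':
        tag = element.get('tags', {}).get('highway')
        way_nodes = element['nodes']
        # consecutive nodes of a way are directly connected
        for node_id_0, node_id_1 in zip(way_nodes, way_nodes[1:]):
            edge_list.append((node_id_0, node_id_1))
            edge_highway[(node_id_0, node_id_1)] = tag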

Calculating the distance

In order to get the shortest path, we can simply take the distance between two connected nodes. Since we have the coordinates of each node, we can use the haversine library:

from haversine import Unit, haversine

# Some more code
coordinates_0 = (node_0.lat, node_0.lon)
coordinates_1 = (node_1.lat, node_1.lon)
haversine(coordinates_0, coordinates_1, unit=Unit.METERS)

This way we don’t need to implement the formula ourselves, which is quite complicated.

Estimating the quietness

To estimate the quietness, things become more complicated. The downloaded and parsed information includes the highway tag. The highway tag tells us which kind of road we have, for example whether it’s a residential street or a motorway. Most interestingly, OSM does not record the type a road has been officially classified as, but rather the type it really is.

Different types of paths will have a different amount of traffic, which results in more or less sound volume. For example, a track in the middle of a forest will be quieter than a road with 6 lanes crossing a big city. With this information we can guess that a way classified as track will be quieter than a way classified as trunk.

Using that approach alone is a good start, but it has a problem: a footpath located directly next to a road is estimated to be as quiet as one located far away from any road. As we can see in the picture below, the way starting at point 1 is of the same type as the one at point 2. However, point 1 is directly next to a secondary road, while point 2 is located in the middle of a park. If we don’t take the surroundings into account, we would classify the edges forming both ways as equally quiet, making the estimation very imprecise.

To solve this problem, we can take into account everything we have within a certain radius. For now I took 20 meters, which after some tests seems to allow a better estimate than 10 meters.
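As a rough illustration of what such a quietness weight could look like, here is a sketch; the penalty values and the precomputed nearby_highway_tags input are assumptions made up for the example, not the application's actual numbers:

from haversine import Unit, haversine

# Rough illustration only: penalty per highway type (lower = quieter).
# These values and the nearby_highway_tags input are assumptions for the sketch.
QUIETNESS_PENALTY = {
    'track': 1, 'path': 1, 'footway': 1, 'residential': 3,
    'secondary': 6, 'primary': 8, 'trunk': 10, 'motorway': 12,
}
NOISE_RADIUS_METERS = 20

def quietness_weight(node_0, node_1, highway_tag, nearby_highway_tags):
    # distance still matters, so start from the edge length in meters
    distance = haversine((node_0.lat, node_0.lon),
                         (node_1.lat, node_1.lon), unit=Unit.METERS)
    penalty = QUIETNESS_PENALTY.get(highway_tag, 5)
    # everything found within NOISE_RADIUS_METERS of the edge adds noise,
    # so a footpath right next to a secondary road gets penalised too
    penalty += sum(QUIETNESS_PENALTY.get(tag, 0) for tag in nearby_highway_tags)
    return distance * penalty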

Calculating the path and displaying it

Once we have the weights of the edges, we can construct the graph with the help of the dijkstra library.

from dijkstra import Graph

# More code
graph = Graph()
weight_calculator = WeightCalculator(self.path_way_priority,
                                     self.nodeId_to_nodeInfo_dict)
for node_id_0, node_id_1 in self.edge_list:
    weight = weight_calculator.get_weight(node_id_0, node_id_1)
    graph.add_edge(node_id_0, node_id_1, weight)
    graph.add_edge(node_id_1, node_id_0, weight)
return graph

The weight calculator allows us to get the weight of an edge depending on whether we need to take into account the distance or the estimated quietness. For each connected node pair, we calculate the weight of the connecting edge and add it to the graph.

Later we use the following code to get our path:

from dijkstra import DijkstraSPF

# More code
# Use graph object from above
dijkstra = DijkstraSPF(graph, init_point)
# Even more code
dijkstra.get_path(end_node_id)

Finally we want to display the route to the user using the folium library. Folium allows us to create HTML files that display a map using OSM and draw elements on said map. First we want to get the trail coordinates, which allow us to display the calculated route. dijkstra.get_path(end_node_id) returns a list of nodes, so we can use that list to get the list of coordinates (latitude, longitude) forming the trail.

import folium

map = folium.Map(location=start_node_coords, zoom_start=ZOOM_START)
# etc
# Returns a list of nodes forming the calculated path between the init and end node
nodes_forming_path = dijkstra.get_path(end_node_id)
# returns a list of tuples containing the coordinates of the nodes
trail_coordinates = self.get_trail_coordinates(nodes_forming_path)
folium.PolyLine(trail_coordinates).add_to(map)

With the following code we add the start and end marker:

folium.Marker(start_node_coords, popup='Start').add_to(map)
folium.Marker(end_node_coords, popup='End').add_to(map)

After all this we are ready to save the map and display it to the user.

Possible improvements and enhancements

It would be an improvement to weight the objects within the quietness radius by their distance. This means that objects located, for example, 15 meters away from the “starting node” would be less relevant than those located 2 meters away.
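A sketch of such a distance-weighted penalty, just to illustrate the idea (the linear fall-off is an arbitrary choice):

# Sketch: weight a nearby object's penalty by its distance to the node,
# so that something 2 meters away counts more than something 15 meters away.
def weighted_penalty(base_penalty, distance_meters, radius_meters=20):
    if distance_meters >= radius_meters:
        return 0
    return base_penalty * (1 - distance_meters / radius_meters)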

This application can potentially be enhanced to offer other criteria for calculating a path. Not only can it help people reduce the amount of noise they are exposed to, it could also add safety by prioritizing illuminated ways. Maybe it could also be used to get a friendlier path for people with mobility issues or suffering from fatigue. What cool enhancements can you think of?

Categories: FLOSS Project Planets

PyBites: Case Study: GitHub Copilot and the deceiving ladder

Planet Python - Thu, 2024-03-07 08:37

I recently gave a workshop at the Developer Open Space un-conference about GitHub Copilot and how developers can benefit from these AI assistants. I was fairly new to Copilot myself, having only tried it for a few months. But I was already impressed by its often spot-on suggestions, its understanding of context, and its rich feature set.

At the end of the course, a participant asked me an interesting question about how to approach a typical extreme value or optimization task from school mathematics. I had a gut feeling that this type of task would be challenging for an AI assistant, and it sparked a great discussion about the limitations of AI and problem-solving in general. I didn’t have time to think about it back then, but now I can dive deeper. Come along and see how it works out.

So let’s see if GitHub Copilot can be used to solve mathematical puzzles. We will see that we need to learn to recognise the signs of when AI assistants are overconfident.

The mathematical problem of finding the highest point a ladder can reach while leaning against a cube

The problem

You’ll find a good description of the problem here, along with several solutions, but don’t worry, you have to deliberately click to see the solution, so there’s no risk of accidentally discovering it. As with all good problems, the statement is very simple, but there is a lot to think about: There is a cube leaning against a wall, and a ladder leaning against the same wall, but at the same time touching the cube. The question is: how far up the wall can the ladder go when it has to touch the cube, so that we can get the ladder as close to the wall as the cube will allow. Ideally I would like to find a solution for every pair of ladder and cube.

In terms of variables, we have the length \( l \) of the ladder and the side length of the cube \( d \). What we want to know is the height \( h \) from the ground to the point where the ladder touches the wall. That shouldn’t be too hard, should it? Well, try to solve it yourself!

A first attempt

Well, GitHub Copilot or AI assistants to the rescue, then! Why bother with middle school mathematics? Long forgotten or never learned! Surely we’re not going to do the hard intellectual work! The whole point of AI is to assist us, isn’t it? So let’s ask GitHub Copilot to solve this problem for us.

GitHub Copilot can be used in many different ways, but the most satisfying and natural way is to simply write a comment asking for something and wait for Copilot’s suggestions. Alternatively, you can launch an inline chat prompt, which is also a nice way to go, and for most of my workflows it doesn’t feel too disruptive when I’m already in the editor window.

This article is not about how to get started with GitHub Copilot. There is plenty of material already available. You can follow the official instructions here and start with a test trial, or you can use similar services like plugins that use an OpenAI API developer key or other services like Codium. For the rest of this article, I will demonstrate my approach using GitHub Copilot in Visual Studio Code with the following extensions installed: GitHub Copilot and GitHub Copilot Chat. But the nice thing about the current landscape is that it doesn’t really matter which tool you use, as long as you start using it. There is a lot going on at the moment and different service providers are trying to dethrone each other every day, giving us developers a wide variety of providers and tools to choose from. In my personal opinion, GitHub Copilot offers one of the best integrations with Visual Studio Code and it has a very nice update cycle that regularly delivers new and helpful features (you should not miss the official YouTube channel, which is a great source of newly available features).

That said, let’s see what GitHub Copilot does if we ask it in a very simplistic way:

GitHub Copilot suggests the rest of the function definition for the given problem statement

As you can see in the screenshot, GitHub Copilot suggests a solution (hence the grey color) based on our input. The suggested solution is

def max_ladder_height(l, d):
    return (l**2 - d**2)**0.5

Not bad! But wait… this solution is so simple that even without having studied the problem in detail, we might start to get suspicious. And we would be right to do so, because this is a Pythagorean equation that assumes \( l^2 = h^2 + d^2 \), which basically says that the squared length of the ladder is equal to the sum of the squared height and the squared side length of the cube. But this is wrong, because the side length of the cube is not the whole distance between the ladder and the wall. This would only be the case if the ladder leaned directly against the cube, but then there would be no point of contact with the wall (and no triangle), the ladder would stand vertically like a candle! No, this equation, although stated with great confidence, is simply wrong.

If you are interested in why GitHub Copilot came up with this solution, you can try out the chat feature and give it the same prompt. Here is the first part of the answer I got back from it:

“To solve this problem, we need to use the Pythagorean theorem. The highest point the ladder can reach is when the ladder is perpendicular to the ground. This is because the ladder’s length forms the hypotenuse of a right triangle, and the height is one of the sides. The maximum height is achieved when the other side (the base of the triangle) is minimized. In this case, the base of the triangle is the edge of the cube, so the maximum height is when the ladder is directly above the cube’s edge.”

GitHub Copilot Chat

Alternatively, if you do not want to think too much about the suggested solution, you can ask Copilot to generate some outputs for you. This will allow you to do some basic plausibility checks yourself:

Using GitHub Copilot Completion Panel to retrieve several suggestions at once

Normally with GitHub Copilot and a prompt (comment) like this, you end up with suggestions coming in line by line. If you want Copilot to generate the whole block of code at once, there are better ways than relying on the inline suggestions. One alternative is the Completion Panel, which can be opened with Ctrl + Enter. This panel shows up to 10 different suggestions, offering a greater variety than the inline suggestions, and you can choose which suggestion you want to accept. In my case, I chose suggestion 3, which complied with my request that d be only one of two values. Funnily enough, Copilot directly lists the output of each function call as a comment in front of the call, so we don’t even need to run the code, right? Wrong, let’s see what happens in the console:

2.8284271247461903
2.23606797749979
3.872983346207417
3.4641016151377544
4.898979485566356
4.58257569495584

The output is different from the comments! What bad behavior, Copilot, shame on you! Giving me test cases with the wrong output is not a nice way to build trust between us. But honestly, I expected as much, so let’s move on.

To make the comparison easier, let’s put the predicted and the actual outputs of the function calls side by side:

Function call             Suggested output   Actual output
max_ladder_height(3,1)    \( 2.828 \)        \( 2.828 \)
max_ladder_height(3,2)    \( 2.646 \)        \( 2.236 \)

Note that in the second case, the suggested output of the function call is not the actual output of the function call, something you should always be aware of when working with Copilot.

Back to the question of whether this code is actually a correct solution (which we have already concluded it is not). In the first example, a ladder of length 3 (metres, if you like) leaning against a cube of side length 1 can reach a height of 2.83. This sounds wrong. Consider that a ladder leaning against a cube with a side length of 1 metre would most likely have to be tilted a bit until it actually touches the wall, which means that it is not plausible to reach a height of 2.83 metres in this scenario.

The second example is even more problematic: A ladder of length 3 and a cube of side length 2. Let’s imagine this for a moment. You can do some drawing, but it will turn out to be completely impractical for a simple reason: the ladder must be at least twice as long as the side length of the cube, otherwise it will never reach the wall if it has to lean against the cube.

The (geometric) reason for this is that the cube divides the ladder into two segments: the upper segment, which touches the wall and the cube, and the lower segment, which touches the cube and the ground. Both of these segments are part of a right triangle with the cube and either the wall or the floor. In both cases, the ladder segment is the hypotenuse, that is, the longest side of the triangle (this is easier to follow if you look at the picture below, where the problem is shown in 2D space). Since each ladder segment is part of a triangle where the side length of the cube is one of the two shorter sides, it follows that each segment must be at least as long as the side length of the cube. So, the whole ladder must be at least twice the side length of the cube.

So not only does Copilot generate code that does not solve the problem correctly, it also violates some important physical constraints, such as the ladder being at least twice as long as the cube’s side length. Of course, once we have derived this knowledge ourselves, it would be quite easy to make it part of our prompts (comments) and let Copilot fix the existing function by implementing some plausibility checks. But the main point of this case study is to show how much we can rely on Copilot without spending too much effort analysing the problem first and wasting time on prompt engineering.
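For illustration, such a plausibility check could be as simple as the following sketch (my code, not a Copilot suggestion):

# Sketch of a plausibility check we could ask Copilot to add to the function.
def check_ladder_and_cube(l, d):
    # A ladder leaning on the cube can only touch the wall if it is
    # at least twice as long as the cube's side length.
    if l < 2 * d:
        raise ValueError(
            f"A ladder of length {l} cannot reach the wall over a cube of side length {d}")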

Deriving a working solution

I’ve already included a link to the solution in the problem statement, but because I just love maths, and because I love self-contained articles where you can get the whole story and all the information in one go, we’re going to derive the solution by hand. However, if you don’t want to see the maths behind the problem, all you need to follow this article is the final equation we derive at the end of this section.

There are several ways to solve this problem, but one elegant way is to strip away unnecessary information and think of this as a 2D problem (see the next picture). Imagine a common Cartesian coordinate system with \( y \) and \( x \) axes. In our case, the wall represents the \( y \) axis and the floor represents the \( x \) axis. The cube is no longer a cube but a simple square with side length \( d \), and the ladder becomes a straight line that touches the square at a single point and intersects the axes. The point where the ladder touches the wall becomes the \( y \) intercept \( P(0, h) \). This is exactly the point we are interested in, because \( h \) gives us the height we want to know. The point where the ladder touches the floor becomes the \( x \) intercept \( P(a, 0) \). We do not yet know \( a \), but it represents the distance of the ladder from the wall, so we keep this unknown variable for now. There is also a third point of interest that will be important, and that is the point where the ladder touches the cube: \( P(d, d) \).

The original problem in 2D. The ladder can be understood as a straight line intersecting both the x- and y-axis.

If we look at the figure above, we can see that \( a \) is the base of a right triangle, \( h \) the height, and \( l \) is the hypotenuse. Thanks to Pythagoras, we know that \( l^2 = a^2 + h^2 \). But we don’t know \( a \) and we need another equation to get closer to a solution. We now do the “trick” (I should really avoid calling it a trick because people often fear that mathematics is all about tricks and that you can only be good at it if you know those tricks, but that is not true. It is more about trying out different perspectives and representations until the problem becomes solvable or simple enough to see a solution) and think of the ladder as a straight line intersecting the two axes. The slope of a straight line \( m \) is the ratio of how much you go up or down when you go left or right. Mathematically it is expressed as the quotient of the difference in \( y \) and the difference in \( x \), so \( m = \frac{\Delta y}{\Delta x}=\frac{y_2 - y_1}{x_2 - x_1} \). We only need to fill in two points on the line to get the slope! Lucky for us, we already have two points \( P(0, h) \) and \( P(a, 0) \):

$$ m = \frac{h - 0}{a - 0} = \frac{h}{a} $$

That’s splendid! But wait, what is \( a \)? We don’t know it, so maybe we can express it with variables that we already know? Yes we can, see the Pythagorean equation above! Thanks to

$$ a^2 = l^2 - h^2 \Rightarrow a = \sqrt{l^2-h^2} $$

we get the following final expression for the slope

$$ m = \frac{h}{a} = \frac{h}{\sqrt{l^2 - h^2}} $$

That is good because we know \( l \) and thus only have an equation with the one variable that we actually want to find out: the height \( h \). But we only have the slope of the straight line, how do we get a full equation?

Taking into consideration the full equation of a straight line \( y = mx + b \), with \( m \) being the slope and \( b \) being the y-axis intersection (which we already know, because it is exactly \( h \), so \( b=h \)), we cannot reuse the two points we already used to get the slope. Luckily, we don’t have to, because we have another point \( P(d,d) \) that we can simply plug into this equation.

$$ d = \frac{h}{\sqrt{l^2 - h^2}}d + h $$

Look at the beauty of that equation! The square root can look a bit scary, but it is actually quite straightforward to solve this equation for \( h \), which is the final step we have to take to solve our problem. What follows are some algebraic finger exercises: isolating the square root, squaring both sides and handling a few bracket expressions. Once we have done all that, we arrive at

$$ h^4 - 2dh^3 + h^2 (2d^2-l^2) + 2dl^2h - d^2l^2 = 0 $$
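(In case you want to check the algebra yourself: rearranging the previous equation gives

$$ d - h = \frac{hd}{\sqrt{l^2 - h^2}} $$

squaring both sides yields

$$ (d - h)^2 (l^2 - h^2) = h^2 d^2 $$

and expanding the brackets and collecting all terms on one side produces exactly the quartic above.)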

And that is a fourth degree polynomial equation, quite different from Copilot’s first attempt! This equation looks complex because it still contains \( l \) and \( d \) as two parameters that we can plug in, so to make this equation more tangible we could plug in \( l = 3 \) and \( d=1 \) to get the equation for the height when the ladder is 3 meters and the cube’s side length is 1 meter. This would give us

$$ h^4 - 2h^3 - 7h^2 + 18h - 9 = 0 $$

How to solve such a polynomial equation is beyond the scope of this article. In short, there is no simple closed-form solution for this particular equation, which means you have to use numerical methods to solve it. Good for us: that is something Copilot can finally do easily for us!

Bringing it all together

With our newly derived knowledge about the general solution to our problem for any pair of \( (l,d) \) values, let’s ask Copilot again to solve this problem for us:

# solve the equation h^4 - 2dh^3 + h^2*(2d^2 - l^2) + 2dl^2h - d^2l^2=0
# where d is the length of the cube and l is the length of the ladder
def get_highest_point(l, d):
    a = 1
    b = -2 * d
    c = 2 * d**2 - l**2
    e = 2 * d * l**2
    f = -(d**2 * l**2)
    roots = np.roots([a, b, c, e, f])
    max_h = max(roots)
    # return the roots, the number of roots, the maximum root and the distance of the ladder from the wall
    return roots, len(roots), max_h, (l**2 - max_h**2) ** 0.5

As you can see, I gave it the full equation as a prompt and explained the meaning of both variables. And in one go, with a single suggestion, Copilot completed the whole function for me! The only thing I entered was def and Copilot did the rest. You can see a clear structure in the code, defining first all coefficients of the polynomial.

Did you notice that Copilot started with a, b and so on, but after c continues with e and f? In its first attempt, Copilot wasn’t so clever and used d as the fourth coefficient, overwriting the function parameter d and thus getting a wrong value for the following coefficient, which is calculated from \( d^2 \cdot{} l^2 \), where \( d \) has to be the cube’s side length and not the fourth coefficient value. I used Copilot’s inline chat functionality to ask for a simple fix and it refactored the code into the version you can see above. But again, lesson learned: Copilot is not always clever enough to avoid simple mistakes like accidentally overwriting important function parameters.

Looking at the final solution, it is actually quite nice. Using NumPy’s np.roots (the documentation actually says there is a newer API we should use) is a great fit for this problem, as the roots of the polynomial are exactly the candidate solutions for \( h \). Knowing that it is a fourth degree polynomial equation, we already know that there can only be up to four different solutions (when you think about it, there are infinitely many ways to place the ladder in the real world, since we can put it at any angle we want, so the solutions to this equation are not the only possible heights the ladder can take; only the maximum is meaningful in this context), but under certain conditions there can be fewer than four. However, only the maximum possible height is interesting to us, so we use max(roots) to get the maximum of all possible heights.

You may notice the second comment inside the function saying “return the roots, the number of roots, the maximum root and the distance of the ladder from the wall”. Here I prompted Copilot to generate the desired output for me, and it did so flawlessly. This function returns four things: all found solutions for \( h \), the number of found solutions, the maximum height, and the distance of the ladder from the wall, so I could picture things more easily in my head!

Let’s do a quick run for one of our test cases from the beginning:

print(get_highest_point(3, 1))
>>> (array([-2.9062796 , 2.49206604, 1.67021162, 0.74400194]), 4, 2.4920660376475325, 1.670211622520849)

For a ladder of length 3 and a cube of side length 1, there are four solutions, one of them physically implausible. The remaining three heights are all valid solutions, of which 2.492 is the maximum height we were looking for. The ladder then stands 1.67 away from the wall. Copilot’s suggested solution for this case was 2.828, which we knew was wrong, but only now can we assess how wrong it actually was.
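If we wanted to automate that last sanity check, we could filter the returned roots ourselves; a small sketch (my addition, not part of the generated function):

import numpy as np

# Sketch: keep only real roots that are physically possible heights,
# i.e. strictly between 0 and the ladder length l.
def plausible_heights(roots, l):
    real_roots = np.asarray(roots)[np.isreal(roots)].real
    return sorted(h for h in real_roots if 0 < h < l)

For the (3, 1) case this drops the negative root and keeps the three heights listed above.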

Final words about Copilot

What have I learnt from this case study? Copilot is a powerful tool that works surprisingly well far more often than it fails. It increases my daily productivity by several orders of magnitude, especially with mundane, repetitive tasks such as writing tests and documentation, writing well-defined methods, fixing common bugs, and the like. It is also very helpful when doing things I have not done for a while, such as working with a lesser-known library, or testing a new library I am not yet familiar with. In both cases, it can save me a lot of time by not requiring me to read tutorials or API documentation. Instead, it can directly generate examples and code snippets that show me how to use them. Copilot also shines when it comes to very common tasks like implementing algorithms. Since problems like finding a prime number, sorting a list of numbers and so on have been solved literally hundreds of times and are available in public repositories, Copilot has seen enough examples of how to generate this code and how to generate it well. Finally, Copilot is also great for interactive workflows, where I write part of a function or refactor an existing function and let it help me finish or revise the existing code using the inline chat functionality. Because Copilot knows the context, the surrounding code, its suggestions can be spot on.

Yet there is a reason why GitHub Copilot always warns of the real possibility that the generated code might be wrong. Whatever wrong means. Wrong can mean many things, as we have seen. It can mean that the code does not work at all and produces some kind of runtime error. It can mean that the code does not do exactly what we want it to do. Of course, there is always the possibility that our description was ambiguous and that Copilot simply took advantage of the freedom given by the vagueness of our description. It can mean that the code does what we asked it to do, but not in the best possible way, or in a way that is unsafe or otherwise considered harmful. It can mean that the code does what we asked it to do, but the generated comments and documentation are wrong. And it is difficult to detect all these things if we are too trusting. We need to be aware of this kind of misbehaviour, and we need to develop a gut feeling about which tasks will prove easy for AI assistants, and which tasks will prove difficult. Copilot gets a lot of things right the first time, often flawlessly. And then there are times when it does not get it right, even after many attempts with different prompts and strategies, adding more and more information, details, examples and guidance.

Let’s summarise:

  • Copilot (and any other AI tool) can produce working code that seems correct and reasonable at first glance, but does not provide the correct solution to a given problem when tested thoroughly.
  • Copilot can create test cases for existing code, but those test cases may be wrong. Similarly, it may suggest tests for parameter combinations that are not plausible.
  • Copilot (and especially more generalised models) can be very persuasive and assertive in the way it presents its solutions, so you need to be aware that it can be wrong even when it argues that it is right.
  • In general, Copilot is not good at solving problems that don’t have a clear solution or a known algorithm. If you ask it to sort numbers, it will do it without breaking a sweat. But if you ask it to solve a mathematical puzzle for you, well…it tries its best. Our problem was particularly hard, when you think about it, because the solution could not be derived without finding a better representation of the problem. If you have a list of numbers, you can just start sorting them in different ways, there is no step you have to take first to understand the problem. But in the case of our ladder and cube problem, the facts about the length of the ladder and the side length of the cube are not helpful in finding the solution. We had to first transform this problem into a geometric problem and define some helper variables to be able to solve it (not to mention that we relied on several mathematical facts such as equations about rectangular triangles and straight lines).
  • When generating code, Copilot can introduce errors such as overwriting existing function parameters.
  • However, once we had an equation and wanted Copilot to solve it for us using any method it deemed appropriate, it could do it in a single pass on the first try. I doubt it had ever seen this exact equation before, but it drew parallels from the hundreds of similar polynomial equations found in other public code repositories.
Side note: And what about ChatGPT?

If you are familiar with Copilot, you may know that it is based on OpenAI’s Codex model. OpenAI states that “OpenAI Codex is most capable in Python, but it is also proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift and TypeScript, and even Shell.” Basically, it is a fine-tuned foundational Large Language Model (LLM) that excels at generating code. The astute reader may object that Copilot is simply the wrong model for such a task for precisely this reason. And they might be right about that. So I decided to ask ChatGPT the same question: can you tell me how to solve this maths problem (without the code generation)?

I will not include any more screenshots of my attempts here; you can see the chat history via this link (a very nice feature of ChatGPT!). As you can see, at no point was ChatGPT able to arrive at the same equation as we did. However, each solution it presented was well explained and well argued. So, basically, we can see some of the same problems we already noticed with GitHub Copilot.

There are other LLMs I could have tried, such as more powerful ones like GPT-4 or fine-tuned ones like MetaMath-Mistral-7B. It has become a sport to find the best fine-tuned model for the task at hand, and it makes sense to spend time fine-tuning for a specific problem. But the point of this article was to learn about the inherent limitations of models and AI assistants like GitHub Copilot, and when to be cautious about the answers you get and the code you generate. There’s always a better tool for the job, but the truth is that we don’t usually use all the tools available to us. Humans are creatures of habit and like to follow the path of least resistance. We try to do as much as we can with one tool, and are happy if that tool works well for almost any task we throw at it. In this respect, GitHub Copilot has many strengths, but we should never let our guard down and always remember that these tools are assistants, not experts that replace us humans.

Categories: FLOSS Project Planets

Golems GABB: Using Drupal for Multilingual Websites: Configuration and Best Practices

Planet Drupal - Thu, 2024-03-07 07:09
Using Drupal for Multilingual Websites: Configuration and Best Practices Editor Thu, 03/07/2024 - 15:57

In today's globalized digital space, multilingualism plays a huge role. Information resources are often popular among representatives of different nations and language groups. These people may have different beliefs and be followers of other religions, but they all have access to information on the Internet. Modern developers must always approach the issue of universal access responsibly. The more versatile their site is, the more people they can reach, and the math, numbers, and facts back this up: for example, 55% of pages on the Internet are in English, but only 19% of users are English-speaking.

Categories: FLOSS Project Planets

eGenix.com: PyDDF Python Spring Sprint 2024

Planet Python - Thu, 2024-03-07 04:00

The following is the announcement for a Python sprint in Düsseldorf, Germany; it was originally published in German.

Announcement

Python Meeting Spring Sprint 2024 in
Düsseldorf

Saturday, 2024-03-09, 10:00-18:00
Sunday, 2024-03-10, 10:00-18:00

Eviden / Atos Information Technology GmbH, Am Seestern 1, 40547 Düsseldorf

Information: The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend, kindly supported by Eviden Deutschland.

The sprint takes place on the weekend of 09/10 March 2024 at the Eviden / Atos office, Am Seestern 1, in Düsseldorf. The following topics have already been suggested as inspiration:
  • Building data apps with Taipy
  • XML parser bindings for Rust: QuickXML
  • Extraction & visualization of knowledge graphs from unstructured text using a local LLM
Of course, participants can propose and work on additional topics.
Registration, costs and further information

Everything else, including registration, can be found on the Meetup sprint page:

IMPORTANT: Without registration we cannot prepare building access, so spontaneous registration on the day of the sprint will most likely not work.

Participants should also register in the PyDDF Telegram group, since that is where we coordinate:
About the Python Meeting Düsseldorf

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel gives a good overview of the talks; we publish videos of the talks there after the meetings.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.

Have fun!

Marc-André Lemburg, eGenix.com

Categories: FLOSS Project Planets

Reducing Visual Studio Installations with Toolchains

Planet KDE - Thu, 2024-03-07 03:00

If you work on C++ projects on Windows that need to be built with multiple Visual Studio C++ compiler versions, you need some way to manage the installations of all these build environments. Either you have multiple IDEs installed, or you know about build tools (https://aka.ms/vs/17/release/vs_BuildTools.exe) and maybe keep only the latest full VS IDE plus older Build Tools.

However, it turns out that you can have just the latest IDE but with multiple toolchains installed for older compiler targets. You won’t even need the Build Tools.

To use these toolchains you need to install them in your chosen VS installation and then call vcvarsall.bat with an appropriate parameter.

You can even have no IDE installed if you don’t need it, but only the Build Tools with the required toolchains. That’s useful when you use a different IDE like JetBrains CLion or Visual Studio Code. Note, however, that to be license-compliant, you still need a valid Visual Studio subscription.

Installing the toolchain
  1. Go to the Visual Studio Installer and click “Modify” on your main VS version (2022 in my case).
  2. Go to “Individual components” and search for the appropriate toolchain. For example, to get the latest VS2019 C++ compiler in the VS 2022 installer, you need to look for the v14.29 toolset. How do you know that 14.29 corresponds to VS 2019? Well, you have to consult this table: https://en.wikipedia.org/wiki/Microsoft_Visual_C%2B%2B#Internal_version_numbering – look at the “runtime library version” column, as that is really the C++ compiler version.
  3. Finish the installation of the desired components.
Setting up the environment from cmd.exe

The only thing you need to do to start with a different toolchain is to pass an option to your vcvarsall.bat invocation:

C:\Users\mikom>"c:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Auxiliary\Build\vcvarsall.bat" -vcvars_ver=14.29

With such a call, I get a shell where cl.exe indeed uses the VS2019 compiler variant:

As you can see, I called vcvarsall.bat from VS2022, yet I got the VS2019 variant of the compiler. For more info about vcvarsall.bat see: https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line.

Setting up the environment for PowerShell

If you do your compilation in PowerShell, vcvarsall.bat is not very helpful. It will spawn an underlying cmd.exe, set the necessary env vars inside it, and close it without altering your PowerShell environment.

(You could try printing the environment in the child cmd.exe and adopting it in your PowerShell session, but that is just a hack.)

For setting up a development environment from PowerShell, Microsoft introduced a PowerShell module that does just that.

To get it, you first have to load the module:

Import-Module "C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"

And then call the Enter-VsDevShell cmdlet with appropriate parameters:

Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Professional\' -DevCmdArguments "-vcvars_ver=14.29" -Arch amd64 -SkipAutomaticLocation

This cmdlet internally passes arguments to vcvarsall.bat so you specify the toolchain version as above.

With this invocation, you get your desired cl.exe compiler.

For more info about the PowerShell DevShell module, see: https://learn.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell

Setting up Qt Creator Kit with a toolchain

Unfortunately, Qt Creator doesn’t detect the toolchains in a single Visual Studio installation as multiple kits. You have to configure the compiler and the kit yourself.

I am a script, how do I know where Visual Studio is installed?

If you want to query the system for the VS installation path programmatically (to find either vcvarsall.bat or the DevShell PowerShell module), you can use the vswhere tool (https://github.com/microsoft/vswhere).

It’s a small-ish (0.5MB) self-contained .exe, so you can just drop it in your repository and not care whether it’s already on the system. You can also install it with winget:

winget install vswhere

It queries the well-known registry entries and does some other magic to find out what Visual Studio installations your machine has.

By default it looks for the latest version of Visual Studio available and returns easily parseable key:value pairs with various information about the installation, most notably the installation path in installationPath.

It also has various querying capabilities, like showing only the VS installations that have the C++ workload installed.

For example, to get the installation path of the newest Visual Studio with C++ workload, you call:

vswhere -latest -products * -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64 -property installationPath
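If the script in question happens to be Python, a thin wrapper around that exact call could look like this (a sketch; it assumes vswhere.exe is on the PATH or next to the script):

import subprocess

# Sketch: ask vswhere for the newest VS installation with the C++ workload,
# using the same options as the command above.
def find_vs_installation_path():
    result = subprocess.run(
        ["vswhere", "-latest", "-products", "*",
         "-requires", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
         "-property", "installationPath"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()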

Caveats

I haven’t found an easy way to query vcvarsall for something along the lines of “give me the latest available toolchain in a given VS product line (2019, 2022, etc.)”. So if you request an explicit version (like 14.29) and a newer one appears, you will still be getting the older one. However:

  • When vcvarsall.bat is called without any toolchain parameter (vcvars_ver), it defaults to its own toolchain, so you may assume that it’s the latest one in this installation folder.
  • Microsoft seems to stop bumping the relevant part of the version of the Visual Studio C++ compiler once the next version of VS is out. So for example it seems that 14.29 will be a proper target for VS2019 C++ compiler until the end of time.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Reducing Visual Studio Installations with Toolchains appeared first on KDAB.

Categories: FLOSS Project Planets

Salsa Digital: Salsa at DrupalSouth 2024

Planet Drupal - Thu, 2024-03-07 00:54
Sponsorship Salsa has a long history of contributing at DrupalSouth and Drupal via sponsorship. This year, we’re Platinum sponsors.
Categories: FLOSS Project Planets

Gunnar Wolf: Constructed truths — truth and knowledge in a post-truth world

Planet Debian - Wed, 2024-03-06 20:08
This post is a review for Computing Reviews of Constructed truths — truth and knowledge in a post-truth world, a book published by Springer.

Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand’s value. However, with the advent of the Internet and social media, we are witnessing what has been termed the “post-truth” phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.

Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities?

The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.

The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the “netizens” and created a temporary information flow utopia. But soon afterwards, “algorithmic gatekeepers” started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as “fake news.” Fake news leads to “post-truth,” a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.

Toward the end of the first chapter, the tone of writing quickly turns away from describing developments in the spread of news and facts over the last decades and quickly goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can “truth” be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or in their honest opinions?

Zoglauer dives into epistemology, following various thinkers’ ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really “own” knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?

Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer’s writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.

Categories: FLOSS Project Planets

Matt Layman: Final Pre-live Features - Building SaaS with Python and Django #184

Planet Python - Wed, 2024-03-06 19:00
In this episode, we completed the final features needed to get the site open for others to sign up. This included some dynamic limiting of the number of people allowed to sign up. We also had to add the template styling for the login page.
Categories: FLOSS Project Planets

Valhalla's Things: Jeans, step two. And three. And four.

Planet Debian - Wed, 2024-03-06 19:00
Posted on March 7, 2024
Tags: madeof:atoms, FreeSoftWear

I was working on what looked like a good pattern for a pair of jeans-shaped trousers, and I knew I wasn’t happy with 200-ish g/m² cotton-linen for general use outside of deep summer, but I didn’t have a source for proper denim either (I had been low-key looking for it for a long time).

Then one day I looked at an article I had saved about fabric shops that sell technical fabric and while window-shopping on one I found that they had a decent selection of denim in a decent weight.

I decided it was a sign, and bought the two heaviest denims they had: a 100% cotton one at 355 g/m² and a 97% cotton, 3% elastane one at 385 g/m² 1; the latter was a bit of a compromise, as I shouldn’t really be buying fabric adulterated with the Scourge of Humanity, but it was heavier than the plain one, and I may be having a thing for tightly fitting jeans, so this may be one of the very few woven fabrics whose existence I’m not morally opposed to.

And, I’d like to add, I resisted buying any of the very nice wools they also seem to carry, other than just a couple of samples.

Since the shop only sold in 1 meter increments, and I needed about 1.5 meters for each pair of jeans, I decided to buy 3 meters per type, and have enough to make a total of four pairs of jeans. A bit more than I strictly needed, maybe, but I was completely out of wearable day-to-day trousers.

The shop sent everything very quickly, the courier took their time (oh, well) but eventually delivered my fabric on a sunny enough day that I could wash it and start as soon as possible on the first pair.

The pattern I did in linen was a bit too fitting, but I was afraid I had widened it a bit too much, so I did the first pair in the 100% cotton denim. Sewing them took me about a week of early mornings and late afternoons, excluding the weekend, and my worries proved false: they were mostly just fine.

The only bit that could have been a bit better is the waistband, which is a tiny bit too wide on the back: it’s designed to be so for comfort, but the next time I should pull the elastic a bit more, so that it stays closer to the body.

I wore those jeans daily for the rest of the week, and confirmed that they were indeed comfortable and the pattern was ok, so on the next Monday I started to cut the elastic denim.

I decided to cut and sew two pairs, assembly-line style, using the shaped waistband for one of them and the straight one for the other one.

I started working on them on a Monday, and on that week I had a couple of days when I just couldn’t, plus I completely skipped sewing on the weekend, but on Tuesday the next week one pair was ready and could be worn, and the other one only needed small finishes.

And I have to say, I’m really, really happy with the ones with a shaped waistband in elastic denim, as they fit even better than the ones with a straight waistband gathered with elastic. Cutting it requires more fabric, but I think it’s definitely worth it.

But it will be a problem for a later time: right now three pairs of jeans are a good number to keep in rotation, and I hope I won’t have to sew jeans for myself for quite some time.

I think that the leftovers of plain denim will be used for a skirt or something else, and as for the leftovers of elastic denim, well, there aren’t a lot left, but what else I did with them is the topic for another post.

Thanks to the fact that they are all slightly different, I’ve started to keep track of the times when I wash each pair, and hopefully I will be able to see whether the elastic denim is significantly less durable than the regular one, or whether the added weight compensates for it somewhat. I’m not sure I’ll manage to keep recording the data until they wear out, but if I do it will be interesting to know.

Oh, and I say I’ve finished working on jeans and everything, but I still haven’t sewn the belt loops to the third pair. And I’m currently wearing them. It’s a sewist tradition, or something. :D

  1. The links are to the shop for Italy; you can copy the “Codice prodotto” and look for it on one of the shop versions for other countries (where they apply the right VAT etc., but sadly they don’t allow mixing and matching those settings and the language).↩︎

Categories: FLOSS Project Planets

GNUnet News: libgnunetchat 0.3.0

GNU Planet! - Wed, 2024-03-06 18:00
libgnunetchat 0.3.0 released

We are pleased to announce the release of libgnunetchat 0.3.0.
This is a major new release bringing compatibility with the major changes in the Messenger service from the latest GNUnet release, 0.21.0, which adds new message kinds and adjusts message processing and key management. Because of that, this release also requires your GNUnet to be at least 0.21.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.3.0
  • This release requires the GNUnet Messenger Service 0.3!
  • It allows ticket management for tickets sent from contacts.
  • Deletions or other updates of messages result in separate event calls.
  • It is possible to tag messages or contacts.
  • Invitations can be rejected via tag messages.
  • Contacts can be blocked or unblocked which results in filtering messages.
  • Processing of messages is ensured by enforcing logical order of callbacks while querying old messages.
  • Private messages are readable to their sender.
  • Messages provide information about their recipient.
  • Logouts get processed on application level on exit.
  • Message callbacks are delayed depending on the message kind (deletion with a custom delay).
  • New debug tools are available to visualize the message graph.
  • A test case for message reception has been added.
  • Multiple issues are fixed.

A detailed list of changes can be found in the ChangeLog .

Categories: FLOSS Project Planets

Data School: Get started with conda environments 🤝

Planet Python - Wed, 2024-03-06 12:58

In a previous post, I explained the differences between conda, Anaconda, and Miniconda.

I said that you can use conda to manage virtual environments:

If you’re not familiar with virtual environments, they allow you to maintain isolated environments with different packages and versions of those packages.

In this post, I’m going to explain the benefits of virtual environments and how to use virtual environments in conda.

Let’s go! 👇

Why use virtual environments?

A virtual environment is like a “workspace” where you can install a set of packages with specific versions. Each environment is isolated from all other environments, and also isolated from the base environment. (The base environment is created when you install conda.)

So, why use virtual environments at all?

  • Different packages can have conflicting requirements for their dependencies, meaning installing one may cause the other to stop working.
  • If you put them in separate environments instead, you can switch between the environments as needed, and both will continue to work.

Thus, by using environments, you won’t break existing projects when you install, update, or remove packages, since each project can have its own environment.

You can also delete environments once you’re done with them, and if you run into problems with an environment, it’s easy to start a new one!

Six commands you need to know

conda environments have a lot of complexity, but there are actually only six commands you need to learn in order to get most of the benefits:

1️⃣ conda create -n myenv jupyter pandas matplotlib scikit-learn

This tells conda to:

  • Create an environment named myenv (or whatever name you choose)
  • Install jupyter, pandas, matplotlib, and scikit-learn in myenv (or whatever packages you need installed)
  • Install all of their dependencies in myenv
  • Choose the latest version of every one of those packages (as long as it works with every other package being installed)

That last point is a mouthful, but it basically means that conda will try to avoid any conflicts between package dependencies.

Note: conda stores all of your environments in one location on your computer, so it doesn’t matter what directory you are in when you create an environment.

2️⃣ conda activate myenv

This activates the myenv environment, such that you are now working in the myenv “workspace”.

In other words:

  • If you now type python or jupyter lab (for example), you’ll be running the Python or JupyterLab that is installed in myenv.
  • If you then type import pandas, you’ll be importing the pandas that’s installed in myenv.

Note: Activating an environment does not change your working directory.
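
If you ever want to double-check which environment your Python code is actually running in, here’s a minimal sketch (sys.executable and sys.prefix are standard library attributes; CONDA_DEFAULT_ENV is an environment variable that conda normally sets on activation, so treat that part as an assumption about your setup):

import os
import sys

# The interpreter path and prefix point inside the active environment's folder.
print("Python executable:", sys.executable)
print("Environment prefix:", sys.prefix)

# conda normally exports CONDA_DEFAULT_ENV when an environment is activated;
# if it's missing, you're probably in the base environment or outside conda.
print("Active conda env:", os.environ.get("CONDA_DEFAULT_ENV", "unknown"))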

3️⃣ conda list

This lists all of the packages that are installed in the active environment (along with their version numbers). If you followed my commands above, you’ll see python, jupyter, pandas, matplotlib, scikit-learn, and all of their dependencies.

4️⃣ conda env list

This lists all of the conda environments on your system, with an asterisk (*) next to the active environment.

5️⃣ conda deactivate

This exits the active environment, which will usually take you back to the “base” environment (which was created by Anaconda or Miniconda during installation).

6️⃣ conda env remove -n myenv

This permanently deletes the myenv environment. You can’t delete the active environment, so you have to deactivate myenv (or activate a different environment) first.

Going further

If you want to learn more about conda environments, check out this section of conda’s user guide:

🔗 Managing environments

If you want a broader view of conda and its capabilities, check out this section:

🔗 Common tasks

Or, share your question in the comments below and I’m happy to help! 👇

Categories: FLOSS Project Planets

Steinar H. Gunderson: Reverse Amdahl's Law

Planet Debian - Wed, 2024-03-06 11:39

Everybody working in performance knows Amdahl's law, and it is usually framed as a negative result; if you optimize (in most formulations, parallelize) a part of an operation, you get diminishing returns after a while. (When optimizing a given fraction p of the total time T by a speedup factor s, the new time taken is (1-p)T + pT/s.)

However, Amdahl's law also works beautifully in reverse! When you optimize something, there's usually some limit where a given optimization isn't worth it anymore; I usually put this around 1% or so, although of course it varies with the cost of the optimization and such. (Most people would count 1% as ridiculously low, but it's usually how mature systems go; you rarely find single 30% speedups, but you can often find ten smaller speedups and apply them sequentially. SQLite famously tripled their speed by chaining optimizations so tiny that they needed to run in a simulator to measure them.) And as your total runtime becomes smaller, things that used to not be worth it now pop over that threshold! If you have enough developer resources and no real upper limit for how much performance you would want, you can keep going forever.

A different way to look at it is that optimizations give you compound interest; if measuring in terms of throughput instead of latency (i.e., items per second instead of seconds per item), which I contend is the only reasonable way to express performance percentages, you can simply multiply the factors together.[1] So 1% and then 1% means 1.01 * 1.01 = 1.0201 = 2.01% speedup and not 2%. Thirty 1% optimizations compound to 34.8%, not 30%.
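
As a quick check of that arithmetic (my own illustration, not from the original post), both the Amdahl formula and the compounding effect are easy to compute directly:

# Amdahl's law: optimize a fraction p of total time T by a speedup factor s.
def amdahl_time(T, p, s):
    return (1 - p) * T + p * T / s

print(amdahl_time(100.0, 0.5, 2.0))  # 75.0 -- halving half of the work saves only 25%

# Throughput speedups compound multiplicatively rather than adding up.
def compound(factors):
    total = 1.0
    for f in factors:
        total *= f
    return total

print(compound([1.01, 1.01]) - 1)  # 0.0201 -> 2.01%, not 2%
print(compound([1.01] * 30) - 1)   # ~0.348 -> ~34.8%, not 30%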

So here's my formulation of Amdahl's law, in a more positive light: The more you speed up a given part of a system, the more impactful optimizations in the other parts will be. So go forth and fire up those profilers :-)

[1] Obviously throughput measurements are inappropriate if what you care about is e.g. 99p latency. It is still better to talk about a 50% speedup than removing 33% of the latency, though, especially as the speedup factor gets higher.

Categories: FLOSS Project Planets

Real Python: Build an LLM RAG Chatbot With LangChain

Planet Python - Wed, 2024-03-06 09:00

You’ve likely interacted with large language models (LLMs), like the ones behind OpenAI’s ChatGPT, and experienced their remarkable ability to answer questions, summarize documents, write code, and much more. While LLMs are remarkable by themselves, with a little programming knowledge, you can leverage libraries like LangChain to create your own LLM-powered chatbots that can do just about anything.

In an enterprise setting, one of the most popular ways to create an LLM-powered chatbot is through retrieval-augmented generation (RAG). When you design a RAG system, you use a retrieval model to retrieve relevant information, usually from a database or corpus, and provide this retrieved information to an LLM to generate contextually relevant responses.
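
To make the retrieve-then-generate idea concrete, here is a deliberately tiny sketch in plain Python. The keyword-overlap retriever and the prompt template are toy stand-ins of my own, not the LangChain or Neo4j components this tutorial builds:

def retrieve(query, documents, k=2):
    # Toy retriever: rank documents by naive keyword overlap with the query.
    words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, documents):
    # Augment the question with retrieved context before sending it to an LLM.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Each visit records a patient, a hospital, and a physician.",
    "Insurance payers cover a share of the billed amount for each visit.",
]
print(build_prompt("Which payer covers a visit?", docs))
# A real RAG system would now send this prompt to the LLM to generate the answer.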

In this tutorial, you’ll step into the shoes of an AI engineer working for a large hospital system. You’ll build a RAG chatbot in LangChain that uses Neo4j to retrieve data about the patients, patient experiences, hospital locations, visits, insurance payers, and physicians in your hospital system.

In this tutorial, you’ll learn how to:

  • Use LangChain to build custom chatbots
  • Design a chatbot using your understanding of the business requirements and hospital system data
  • Work with graph databases
  • Set up a Neo4j AuraDB instance
  • Build a RAG chatbot that retrieves both structured and unstructured data from Neo4j
  • Deploy your chatbot with FastAPI and Streamlit

Click the link below to download the complete source code and data for this project:

Get Your Code: Click here to download the free source code for your LangChain chatbot.

Demo: An LLM RAG Chatbot With LangChain and Neo4j

By the end of this tutorial, you’ll have a REST API that serves your LangChain chatbot. You’ll also have a Streamlit app that provides a nice chat interface to interact with your API:

Under the hood, the Streamlit app sends your messages to the chatbot API, and the chatbot generates and sends a response back to the Streamlit app, which displays it to the user.

You’ll get an in-depth overview of the data that your chatbot has access to later, but if you’re anxious to test it out, you can ask questions similar to the examples given in the sidebar:

Example questions can be found in the sidebar.

You’ll learn how to tackle each step, from understanding the business requirements and data to building the Streamlit app. There’s a lot to unpack in this tutorial, but don’t feel overwhelmed. You’ll get some background on each concept introduced, along with links to external sources that will deepen your understanding. Now, it’s time to dive in!

Prerequisites

This tutorial is best suited for intermediate Python developers who want to get hands-on experience creating custom chatbots. Aside from intermediate Python knowledge, you’ll benefit from having a high-level understanding of the following concepts and technologies:

Nothing listed above is a hard prerequisite, so don’t worry if you don’t feel knowledgeable in any of them. You’ll be introduced to each concept and technology along the way. Besides, there’s no better way to learn these prerequisites than to implement them yourself in this tutorial.

Next up, you’ll get a brief project overview and begin learning about LangChain.

Project Overview

Throughout this tutorial, you’ll create a few directories that make up your final chatbot. Here’s a breakdown of each directory:

  • langchain_intro/ will help you get familiar with LangChain and equip you with the tools that you need to build the chatbot you saw in the demo, and it won’t be included in your final chatbot. You’ll cover this in Step 1.

  • data/ has the raw hospital system data stored as CSV files. You’ll explore this data in Step 2. In Step 3, you’ll move this data into a Neo4j database that your chatbot will query to answer questions.

  • hospital_neo4j_etl/ contains a script that loads the raw data from data/ into your Neo4j database. You have to run this before building your chatbot, and you’ll learn everything you need to know about setting up a Neo4j instance in Step 3.

  • chatbot_api/ is your FastAPI app that serves your chatbot as a REST endpoint, and it’s the core deliverable of this project. The chatbot_api/src/agents/ and chatbot_api/src/chains/ subdirectories contain the LangChain objects that comprise your chatbot. You’ll learn what agents and chains are later, but for now, just know that your chatbot is actually a LangChain agent composed of chains and functions.

  • tests/ includes two scripts that test how fast your chatbot can answer a series of questions. This will give you a feel for how much time you save by making asynchronous requests to LLM providers like OpenAI (see the sketch after this list).

  • chatbot_frontend/ is your Streamlit app that interacts with the chatbot endpoint in chatbot_api/. This is the UI that you saw in the demo, and you’ll build this in Step 5.
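
To see why asynchronous requests matter here, consider a small self-contained sketch (my own illustration with a simulated one-second call, not code from the tutorial): five concurrent requests finish in roughly the time of one.

import asyncio
import time

async def fake_llm_call(question, latency=1.0):
    # Stand-in for a network round trip to an LLM provider; it just sleeps.
    await asyncio.sleep(latency)
    return f"Answer to: {question}"

async def ask_concurrently(questions):
    # Fire all requests at once and wait for every result.
    return await asyncio.gather(*(fake_llm_call(q) for q in questions))

questions = [f"Question {n}" for n in range(5)]
start = time.perf_counter()
answers = asyncio.run(ask_concurrently(questions))
print(f"{len(answers)} answers in {time.perf_counter() - start:.1f}s")  # ~1s, not ~5s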

All the environment variables needed to build and run your chatbot will be stored in a .env file. You’ll deploy the code in hospital_neo4j_etl/, chatbot_api, and chatbot_frontend as Docker containers that’ll be orchestrated with Docker Compose. If you want to experiment with the chatbot before going through the rest of this tutorial, then you can download the materials and follow the instructions in the README file to get things running:

Get Your Code: Click here to download the free source code for your LangChain chatbot.

With the project overview and prerequisites behind you, you’re ready to get started with the first step—getting familiar with LangChain.

Read the full article at https://realpython.com/build-llm-rag-chatbot-with-langchain/ »

Categories: FLOSS Project Planets

Python GUIs: Drag & Drop Widgets with PySide6 — Sort widgets visually with drag and drop in a container

Planet Python - Wed, 2024-03-06 08:00

I had an interesting question from a reader of my PySide6 book, about how to handle dragging and dropping of widgets in a container showing the dragged widget as it is moved.

I'm interested in managing movement of a QWidget with mouse in a container. I've implemented the application with drag & drop, exchanging the position of buttons, but I want to show the motion of QPushButton, like what you see in Qt Designer. Dragging a widget should show the widget itself, not just the mouse pointer.

First, we'll implement the simple case which drags widgets without showing anything extra. Then we can extend it to answer the question. By the end of this quick tutorial we'll have a generic drag drop implementation which looks like the following.

Drag & Drop Widgets

We'll start with a simple application which creates a window using QWidget and places a series of QPushButton widgets into it.

You can substitute QPushButton for any other widget you like, e.g. QLabel. Any widget can have drag behavior implemented on it, although some input widgets will not work well as we capture the mouse events for the drag.

python
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget


class Window(QWidget):
    def __init__(self):
        super().__init__()

        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = QPushButton(l)
            self.blayout.addWidget(btn)

        self.setLayout(self.blayout)


app = QApplication([])
w = Window()
w.show()
app.exec()

If you run this you should see something like this.

The series of QPushButton widgets in a horizontal layout.

Here we're creating a window, but the Window widget is subclassed from QWidget, meaning you can add this widget to any other layout. See later for an example of a generic object sorting widget.

QPushButton objects aren't usually draggable, so to handle the mouse movements and initiate a drag we need to implement a subclass. We can add the following to the top of the file.

python
from PySide6.QtCore import QMimeData, Qt
from PySide6.QtGui import QDrag
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget


class DragButton(QPushButton):
    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)
            drag.exec(Qt.DropAction.MoveAction)

We implement a mouseMoveEvent which accepts the single e parameter of the event. We check to see if the left mouse button is pressed on this event -- as it would be when dragging -- and then initiate a drag. To start a drag, we create a QDrag object, passing in self to give us access later to the widget that was dragged. We also must pass in mime data. This is used for including information about what is dragged, particularly for passing data between applications. However, as here, it is fine to leave this empty.

Finally, we initiate a drag by calling drag.exec(Qt.DropAction.MoveAction). As with dialogs, exec() starts a new event loop, blocking the main loop until the drag is complete. The parameter Qt.DropAction.MoveAction tells the drag handler what type of operation is happening, so it can show the appropriate icon tip to the user.

You can update the main window code to use our new DragButton class as follows.

python
class Window(QWidget):
    def __init__(self):
        super().__init__()

        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = DragButton(l)
            self.blayout.addWidget(btn)

        self.setLayout(self.blayout)

If you run the code now, you can drag the buttons, but you'll notice the drag is forbidden.

Dragging of the widget starts but is forbidden.

What's happening? The mouse movement is being detected by our DragButton object and the drag started, but the main window does not accept drag & drop.

To fix this we need to enable drops on the window and implement dragEnterEvent to actually accept them.

python
class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.setAcceptDrops(True)

        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = DragButton(l)
            self.blayout.addWidget(btn)

        self.setLayout(self.blayout)

    def dragEnterEvent(self, e):
        e.accept()

If you run this now, you'll see that the drag is now accepted and the move icon is shown. This indicates that the drag has started and been accepted by the window we're dragging onto. The icon shown is determined by the action we pass when calling drag.exec().

Dragging of the widget starts and is accepted, showing a move icon.

Releasing the mouse button during a drag drop operation triggers a dropEvent on the widget you're currently hovering the mouse over (if it is configured to accept drops). In our case that's the window. To handle the move we need to implement the code to do this in our dropEvent method.

The drop event contains the position the mouse was at when the button was released & the drop triggered. We can use this to determine where to move the widget to.

To determine where to place the widget, we iterate over all the widgets in the layout until we find one whose x position is greater than that of the mouse pointer. When we find one, we insert the dragged widget directly to the left of it and exit the loop.

If we get to the end of the loop without finding a match, we must be dropping past the end of the existing items, so we increment n one further (in the else: block below).

python
def dropEvent(self, e):
    pos = e.position()
    widget = e.source()
    self.blayout.removeWidget(widget)

    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()
        if pos.x() < w.x():
            # We didn't drag past this widget.
            # insert to the left of it.
            break
    else:
        # We aren't on the left hand side of any widget,
        # so we're at the end. Increment 1 to insert after.
        n += 1

    self.blayout.insertWidget(n, widget)
    e.accept()

The effect of this is that if you drag 1 pixel past the start of another widget the drop will happen to the right of it, which is a bit confusing. To fix this we can adjust the cut off to use the middle of the widget using if pos.x() < w.x() + w.size().width() // 2: -- that is x + half of the width.

python
def dropEvent(self, e):
    pos = e.position()
    widget = e.source()
    self.blayout.removeWidget(widget)

    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()
        if pos.x() < w.x() + w.size().width() // 2:
            # We didn't drag past this widget.
            # insert to the left of it.
            break
    else:
        # We aren't on the left hand side of any widget,
        # so we're at the end. Increment 1 to insert after.
        n += 1

    self.blayout.insertWidget(n, widget)
    e.accept()

The complete working drag-drop code is shown below.

python
from PySide6.QtCore import QMimeData, Qt
from PySide6.QtGui import QDrag
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget


class DragButton(QPushButton):
    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)
            drag.exec(Qt.DropAction.MoveAction)


class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.setAcceptDrops(True)

        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = DragButton(l)
            self.blayout.addWidget(btn)

        self.setLayout(self.blayout)

    def dragEnterEvent(self, e):
        e.accept()

    def dropEvent(self, e):
        pos = e.position()
        widget = e.source()
        self.blayout.removeWidget(widget)

        for n in range(self.blayout.count()):
            # Get the widget at each index in turn.
            w = self.blayout.itemAt(n).widget()
            if pos.x() < w.x() + w.size().width() // 2:
                # We didn't drag past this widget.
                # insert to the left of it.
                break
        else:
            # We aren't on the left hand side of any widget,
            # so we're at the end. Increment 1 to insert after.
            n += 1

        self.blayout.insertWidget(n, widget)
        e.accept()


app = QApplication([])
w = Window()
w.show()
app.exec()

Visual Drag & Drop

We now have a working drag & drop implementation. Next we'll move onto improving the UX by showing the drag visually. First we'll add support for showing the button being dragged next to the mouse point as it is dragged. That way the user knows exactly what it is they are dragging.

Qt's QDrag handler natively provides a mechanism for showing dragged objects which we can use. We can update our DragButton class to pass a pixmap image to QDrag and this will be displayed under the mouse pointer as the drag occurs. To show the widget, we just need to get a QPixmap of the widget we're dragging.

python
from PySide6.QtCore import QMimeData, Qt
from PySide6.QtGui import QDrag, QPixmap
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget


class DragButton(QPushButton):
    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            # Render the current widget to a pixmap, shown under the mouse
            # pointer while the drag is in progress.
            pixmap = QPixmap(self.size())
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec(Qt.DropAction.MoveAction)

To create the pixmap we create a QPixmap object passing in the size of the widget this event is fired on with self.size(). This creates an empty pixmap which we can then pass into self.render to render -- or draw -- the current widget onto it. That's it. Then we set the resulting pixmap on the drag object.

If you run the code with this modification you'll see something like the following --

Dragging of the widget showing the dragged widget.

Generic Drag & Drop Container

We now have a working drag and drop behavior implemented on our window. We can take this a step further and implement a generic drag drop widget which allows us to sort arbitrary objects. In the code below we've created a new widget DragWidget which can be added to any window.

You can add items -- instances of DragItem -- which you want to be sorted, as well as setting data on them. When items are re-ordered the new order is emitted as a signal orderChanged.

python from PySide6.QtCore import QMimeData, Qt, Signal from PySide6.QtGui import QDrag, QPixmap from PySide6.QtWidgets import ( QApplication, QHBoxLayout, QLabel, QMainWindow, QVBoxLayout, QWidget, ) class DragItem(QLabel): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.setContentsMargins(25, 5, 25, 5) self.setAlignment(Qt.AlignmentFlag.AlignCenter) self.setStyleSheet("border: 1px solid black;") # Store data separately from display label, but use label for default. self.data = self.text() def set_data(self, data): self.data = data def mouseMoveEvent(self, e): if e.buttons() == Qt.MouseButton.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) pixmap = QPixmap(self.size()) self.render(pixmap) drag.setPixmap(pixmap) drag.exec(Qt.DropAction.MoveAction) class DragWidget(QWidget): """ Generic list sorting handler. """ orderChanged = Signal(list) def __init__(self, *args, orientation=Qt.Orientation.Vertical, **kwargs): super().__init__() self.setAcceptDrops(True) # Store the orientation for drag checks later. self.orientation = orientation if self.orientation == Qt.Orientation.Vertical: self.blayout = QVBoxLayout() else: self.blayout = QHBoxLayout() self.setLayout(self.blayout) def dragEnterEvent(self, e): e.accept() def dropEvent(self, e): pos = e.position() widget = e.source() self.blayout.removeWidget(widget) for n in range(self.blayout.count()): # Get the widget at each index in turn. w = self.blayout.itemAt(n).widget() if self.orientation == Qt.Orientation.Vertical: # Drag drop vertically. drop_here = pos.y() < w.y() + w.size().height() // 2 else: # Drag drop horizontally. drop_here = pos.x() < w.x() + w.size().width() // 2 if drop_here: break else: # We aren't on the left hand/upper side of any widget, # so we're at the end. Increment 1 to insert after. n += 1 self.blayout.insertWidget(n, widget) self.orderChanged.emit(self.get_item_data()) e.accept() def add_item(self, item): self.blayout.addWidget(item) def get_item_data(self): data = [] for n in range(self.blayout.count()): # Get the widget at each index in turn. w = self.blayout.itemAt(n).widget() data.append(w.data) return data class MainWindow(QMainWindow): def __init__(self): super().__init__() self.drag = DragWidget(orientation=Qt.Orientation.Vertical) for n, l in enumerate(["A", "B", "C", "D"]): item = DragItem(l) item.set_data(n) # Store the data. self.drag.add_item(item) # Print out the changed order. self.drag.orderChanged.connect(print) container = QWidget() layout = QVBoxLayout() layout.addStretch(1) layout.addWidget(self.drag) layout.addStretch(1) container.setLayout(layout) self.setCentralWidget(container) app = QApplication([]) w = MainWindow() w.show() app.exec()

Generic drag-drop sorting in horizontal orientation.

You'll notice that when creating the item, you can set the label by passing it in as a parameter (just like for a normal QLabel which we've subclassed from). But you can also set a data value, which is the internal value of this item -- this is what will be emitted when the order changes, or if you call get_item_data yourself. This separates the visual representation from what is actually being sorted, meaning you can use this to sort anything not just strings.

In the example above we're passing in the enumerated index as the data, so dragging will output (via the print connected to orderChanged) something like:

python
[1, 0, 2, 3]
[1, 2, 0, 3]
[1, 0, 2, 3]
[1, 2, 0, 3]

If you remove the item.set_data(n) you'll see the labels emitted on changes.

python
['B', 'A', 'C', 'D']
['B', 'C', 'A', 'D']

We've also implemented orientation on the DragWidget using the built-in Qt flags Qt.Orientation.Vertical or Qt.Orientation.Horizontal. Setting this allows you to sort items either vertically or horizontally -- the calculations are handled for both directions.

Generic drag-drop sorting in vertical orientation.

Adding a Visual Drop Target

If you experiment with the drag-drop tool above you'll notice that it doesn't feel completely intuitive. When dragging you don't know where an item will be inserted until you drop it. If it ends up in the wrong place, you'll then need to pick it up and re-drop it again, using guesswork to get it right.

With a bit of practice you can get the hang of it, but it would be nicer to make the behavior immediately obvious for users. Many drag-drop interfaces solve this problem by showing a preview of where the item will be dropped while dragging -- either by showing the item in the place where it will be dropped, or showing some kind of placeholder.

In this final section we'll implement this type of drag and drop preview indicator.

The first step is to define our target indicator. This is just another label, which in our example is empty, with custom styles applied to make it have a solid "shadow" like background. This makes it obviously different to the items in the list, so it stands out as something distinct.

python
from PySide6.QtCore import QMimeData, Qt, Signal
from PySide6.QtGui import QDrag, QPixmap
from PySide6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QVBoxLayout,
    QWidget,
)


class DragTargetIndicator(QLabel):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setContentsMargins(25, 5, 25, 5)
        self.setStyleSheet(
            "QLabel { background-color: #ccc; border: 1px solid black; }"
        )

We've copied the contents margins from the items in the list. If you change your list items, remember to also update the indicator dimensions to match.

The drag item is unchanged, but we need to implement some additional behavior on our DragWidget to add the target and to control showing and moving it.

First we'll add the drag target indicator to the layout on our DragWidget. This is hidden to begin with, but will be shown during the drag.

python
class DragWidget(QWidget):
    """
    Generic list sorting handler.
    """

    orderChanged = Signal(list)

    def __init__(self, *args, orientation=Qt.Orientation.Vertical, **kwargs):
        super().__init__()
        self.setAcceptDrops(True)

        # Store the orientation for drag checks later.
        self.orientation = orientation

        if self.orientation == Qt.Orientation.Vertical:
            self.blayout = QVBoxLayout()
        else:
            self.blayout = QHBoxLayout()

        # Add the drag target indicator. This is invisible by default,
        # we show it and move it around while the drag is active.
        self._drag_target_indicator = DragTargetIndicator()
        self.blayout.addWidget(self._drag_target_indicator)
        self._drag_target_indicator.hide()

        self.setLayout(self.blayout)

Next we modify the DragWidget.dragMoveEvent to show the drag target indicator. We show it by inserting it into the layout and then calling .show() -- inserting a widget which is already in a layout will move it. We also hide the original item which is being dragged.

In the earlier examples we determined the position on drop by removing the widget being dragged, and then iterating over what is left. Because we now need to calculate the drop location before the drop, we take a different approach.

If we wanted to do it the same way, we'd need to remove the item on drag start, hold onto it, and implement re-inserting at its old position on drag failure. That's a lot of work.

Instead, the dragged item is left in place and hidden during move.

python
def dragMoveEvent(self, e):
    # Find the correct location of the drop target, so we can move it there.
    index = self._find_drop_location(e)
    if index is not None:
        # Inserting moves the item if it's already in the layout.
        self.blayout.insertWidget(index, self._drag_target_indicator)
        # Hide the item being dragged.
        e.source().hide()
        # Show the target.
        self._drag_target_indicator.show()
    e.accept()

The method self._find_drop_location finds the index where the drag target will be shown (or the item dropped when the mouse released). We'll implement that next.

The calculation of the drop location follows the same pattern as before. We iterate over the items in the layout and calculate whether our mouse drop location is to the left of each widget. If it isn't to the left of any widget, we drop on the far right.

python
def _find_drop_location(self, e):
    pos = e.position()
    spacing = self.blayout.spacing() / 2

    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()

        if self.orientation == Qt.Orientation.Vertical:
            # Drag drop vertically.
            drop_here = (
                pos.y() >= w.y() - spacing
                and pos.y() <= w.y() + w.size().height() + spacing
            )
        else:
            # Drag drop horizontally.
            drop_here = (
                pos.x() >= w.x() - spacing
                and pos.x() <= w.x() + w.size().width() + spacing
            )

        if drop_here:
            # Drop over this target.
            break

    return n

The drop location n is returned for use in the dragMoveEvent to place the drop target indicator.

Next we need to update the get_item_data handler to ignore the drop target indicator. To do this we check w against self._drag_target_indicator and skip it if it is the same. With this change the method will work as expected.

python
def get_item_data(self):
    data = []
    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()
        if w != self._drag_target_indicator:
            # The target indicator has no data.
            data.append(w.data)
    return data

If you run the code at this point the drag behavior will work as expected. But if you drag the widget outside of the window and drop it, you'll notice a problem: the target indicator will stay in place, but dropping the item won't place it in that position (the drop will be cancelled).

To fix that we need to implement a dragLeaveEvent which hides the indicator.

python
def dragLeaveEvent(self, e):
    self._drag_target_indicator.hide()
    e.accept()

With those changes, the drag-drop behavior should be working as intended. The complete code is shown below.

python from PySide6.QtCore import QMimeData, Qt, Signal from PySide6.QtGui import QDrag, QPixmap from PySide6.QtWidgets import ( QApplication, QHBoxLayout, QLabel, QMainWindow, QVBoxLayout, QWidget, ) class DragTargetIndicator(QLabel): def __init__(self, parent=None): super().__init__(parent) self.setContentsMargins(25, 5, 25, 5) self.setStyleSheet( "QLabel { background-color: #ccc; border: 1px solid black; }" ) class DragItem(QLabel): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.setContentsMargins(25, 5, 25, 5) self.setAlignment(Qt.AlignmentFlag.AlignCenter) self.setStyleSheet("border: 1px solid black;") # Store data separately from display label, but use label for default. self.data = self.text() def set_data(self, data): self.data = data def mouseMoveEvent(self, e): if e.buttons() == Qt.MouseButton.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) pixmap = QPixmap(self.size()) self.render(pixmap) drag.setPixmap(pixmap) drag.exec(Qt.DropAction.MoveAction) class DragWidget(QWidget): """ Generic list sorting handler. """ orderChanged = Signal(list) def __init__(self, *args, orientation=Qt.Orientation.Vertical, **kwargs): super().__init__() self.setAcceptDrops(True) # Store the orientation for drag checks later. self.orientation = orientation if self.orientation == Qt.Orientation.Vertical: self.blayout = QVBoxLayout() else: self.blayout = QHBoxLayout() # Add the drag target indicator. This is invisible by default, # we show it and move it around while the drag is active. self._drag_target_indicator = DragTargetIndicator() self.blayout.addWidget(self._drag_target_indicator) self._drag_target_indicator.hide() self.setLayout(self.blayout) def dragEnterEvent(self, e): e.accept() def dragLeaveEvent(self, e): self._drag_target_indicator.hide() e.accept() def dragMoveEvent(self, e): # Find the correct location of the drop target, so we can move it there. index = self._find_drop_location(e) if index is not None: # Inserting moves the item if its alreaady in the layout. self.blayout.insertWidget(index, self._drag_target_indicator) # Hide the item being dragged. e.source().hide() # Show the target. self._drag_target_indicator.show() e.accept() def dropEvent(self, e): widget = e.source() # Use drop target location for destination, then remove it. self._drag_target_indicator.hide() index = self.blayout.indexOf(self._drag_target_indicator) if index is not None: self.blayout.insertWidget(index, widget) self.orderChanged.emit(self.get_item_data()) widget.show() self.blayout.activate() e.accept() def _find_drop_location(self, e): pos = e.position() spacing = self.blayout.spacing() / 2 for n in range(self.blayout.count()): # Get the widget at each index in turn. w = self.blayout.itemAt(n).widget() if self.orientation == Qt.Orientation.Vertical: # Drag drop vertically. drop_here = ( pos.y() >= w.y() - spacing and pos.y() <= w.y() + w.size().height() + spacing ) else: # Drag drop horizontally. drop_here = ( pos.x() >= w.x() - spacing and pos.x() <= w.x() + w.size().width() + spacing ) if drop_here: # Drop over this target. break return n def add_item(self, item): self.blayout.addWidget(item) def get_item_data(self): data = [] for n in range(self.blayout.count()): # Get the widget at each index in turn. w = self.blayout.itemAt(n).widget() if w != self._drag_target_indicator: # The target indicator has no data. 
data.append(w.data) return data class MainWindow(QMainWindow): def __init__(self): super().__init__() self.drag = DragWidget(orientation=Qt.Orientation.Vertical) for n, l in enumerate(["A", "B", "C", "D"]): item = DragItem(l) item.set_data(n) # Store the data. self.drag.add_item(item) # Print out the changed order. self.drag.orderChanged.connect(print) container = QWidget() layout = QVBoxLayout() layout.addStretch(1) layout.addWidget(self.drag) layout.addStretch(1) container.setLayout(layout) self.setCentralWidget(container) app = QApplication([]) w = MainWindow() w.show() app.exec()

If you run this example on macOS you may notice that the widget drag preview (the QPixmap created on DragItem) is a bit blurry. On high-resolution screens you need to set the device pixel ratio and scale up the pixmap when you create it. Below is a modified DragItem class which does this.

Update DragItem to support high resolution screens.

python
class DragItem(QLabel):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setContentsMargins(25, 5, 25, 5)
        self.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.setStyleSheet("border: 1px solid black;")
        # Store data separately from display label, but use label for default.
        self.data = self.text()

    def set_data(self, data):
        self.data = data

    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            # Render at x2 pixel ratio to avoid blur on Retina screens.
            pixmap = QPixmap(self.size().width() * 2, self.size().height() * 2)
            pixmap.setDevicePixelRatio(2)
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec(Qt.DropAction.MoveAction)

That's it! We've created a generic drag-drop handler which can be added to any project where you need to be able to reposition items within a list. You should feel free to experiment with the styling of the drag items and targets, as this won't affect the behavior.

Categories: FLOSS Project Planets

The Drop Times: Sci-Fi to Software: James Shields' Evolution with Drupal

Planet Drupal - Wed, 2024-03-06 06:14
Discover the inspiring journey of James Shields in our latest interview, where he dives deep into his 15-year voyage within the Drupal ecosystem. Uncover how his passion for science fiction conventions led to significant contributions and the shaping of digital communities. Join us as James reflects on the transformative power of collaboration, continuous learning, and his commitment to fostering innovation in open-source development.
Categories: FLOSS Project Planets

Talking Drupal: Skills Upgrade - Episode #1

Planet Drupal - Tue, 2024-03-05 21:53

This is the first episode of Skills Upgrade, a Talking Drupal mini-series following the journey of a Drupal 7 developer learning Drupal 10.

Topics
  • Chad and Mike's first meeting
  • Chad's Background
  • Chad's goals
  • Tasks for the week
Resources

Chad's Drupal 10 Learning Curriclum & Journal Chad's Drupal 10 Learning Notes

Hosts

AmyJune Hineline - @volkswagenchick

Guests

Chad Hester - chadkhester.com @chadkhest Mike Anello - DrupalEasy.com @ultamike

Categories: FLOSS Project Planets
