FLOSS Project Planets

PyCharm: Join the Livestream: “Python, Django, PyCharm, and More”

Planet Python - Mon, 2024-01-15 05:44

Join us for the new PyCharm Livestream episode to learn about everything new in the world of Python on January 25 at 4:00 pm UTC.

We will be chatting with Helen Scott, Jodie Burchell, Sarah Boyce, Mukul Mantosh, and Paul Everitt. Among other things, we’ll be talking about Python 3.12, Django 5.0, and PyCharm 2023.3. We’ll highlight some of the features we’re most excited about, and what we’re tracking in the future.

It’s an excellent opportunity to put faces to names and share your thoughts. If you have something you’re excited about in the Python world or you want to share your latest data science project with us, we’re looking forward to hearing about it!

Join the livestream

Date: January 25, 2024

Time: 4:00 pm UTC (5:00 pm CET)

Categories: FLOSS Project Planets

Golems GABB: Creating Custom Drupal Blocks: Best Practices and Use Cases

Planet Drupal - Mon, 2024-01-15 05:14

Discover the power of custom Drupal blocks as we delve into website customization in this article. Custom Drupal blocks offer personalized functionality, design, and content that set your website apart. Investing time in creating these blocks unlocks many benefits for Drupal website development.
To inspire your creativity, we will present real-world use cases demonstrating how custom Drupal blocks can elevate user experiences and deliver tangible results.
Are you ready to unlock the full potential of your Drupal website? Join us as we uncover the secrets of creating custom Drupal blocks, share best practices, and explore compelling use cases.

Categories: FLOSS Project Planets

Zato Blog: Network packet brokers and automation in Python

Planet Python - Mon, 2024-01-15 03:00
2024-01-15, by Dariusz Suchojad

Packet brokers are crucial for network engineers, providing a clear, detailed view of network traffic, aiding in efficient issue identification and resolution.

But what is a network packet broker (NPB), really? Why are they needed? And how can you automate one in Python?

Read this article about network packet brokers and their automation in Python to find out more.

Next steps
  • Read more about using Python and Zato in telecommunications
  • Start the tutorial, which will guide you through designing and building Python API services for automation and integrations
Categories: FLOSS Project Planets

Python GUIs: Plotting With PyQtGraph — Create Custom Plots in PyQt with PyQtGraph

Planet Python - Mon, 2024-01-15 01:00

One of the major fields where Python shines is data science. For data exploration and cleaning, Python has many powerful tools, such as pandas and Polars. For visualization, Python has Matplotlib.

When you're building GUI applications with PyQt, you have access to all those tools directly from within your app. While it is possible to embed Matplotlib plots in PyQt, the experience doesn't feel entirely native. So, for highly integrated plots, you may want to consider using the PyQtGraph library instead.

PyQtGraph is built on top of Qt's native QGraphicsScene, so it gives better drawing performance, particularly for live data. It also provides interactivity and the ability to customize plots according to your needs.

In this tutorial, you'll learn the basics of creating plots with PyQtGraph. You'll also explore the different plot customization options, including background color, line colors, line type, axis labels, and more.

Installing PyQtGraph

To use PyQtGraph with PyQt, you first need to install the library in your Python environment. You can do this using pip as follows:

```sh
$ python -m pip install pyqtgraph
```

Once the installation is complete, you will be able to import the module into your Python code. So, now you are ready to start creating plots.

Creating a PlotWidget Instance

In PyQtGraph, all plots use the PlotWidget class. This widget provides a canvas on which we can add and configure many types of plots. Under the hood, PlotWidget uses Qt's QGraphicsScene class, meaning that it's fast, efficient, and well-integrated with the rest of your app.

The code below shows a basic GUI app with a single PlotWidget in a QMainWindow:

```python
import pyqtgraph as pg
from PyQt5 import QtWidgets


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        # Temperature vs time plot
        self.plot_graph = pg.PlotWidget()
        self.setCentralWidget(self.plot_graph)
        time = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        temperature = [30, 32, 34, 32, 33, 31, 29, 32, 35, 30]
        self.plot_graph.plot(time, temperature)


app = QtWidgets.QApplication([])
main = MainWindow()
main.show()
app.exec()
```

In this short example, you create a PyQt app with a PlotWidget as its central widget. Then you create two lists of sample data for time and temperature. The final step to create the plot is to call the plot() method with the data you want to visualize.

The first argument to plot() will be your x coordinate, while the second argument will be the y coordinate.

In all the examples in this tutorial, we import PyQtGraph using import pyqtgraph as pg. This is a common practice in PyQtGraph examples to keep things tidy and reduce typing.

If you run the above application, then you'll get the following window on your screen:

Basic PyQtGraph plot: Temperature vs time.

PyQtGraph's default plot style is quite basic — a black background with a thin (barely visible) white line. Fortunately, the library provides several options that will allow us to deeply customize our plots.

In the examples in this tutorial, we'll create the PyQtGraph widget in code. To learn how to embed PyQtGraph plots when using Qt Designer, check out Embedding custom widgets from Qt Designer.

In the following section, we'll learn about the options we have available in PyQtGraph to improve the appearance and usability of our plots.

Customizing PyQtGraph Plots

Because PyQtGraph uses Qt's QGraphicsScene to render the graphs, we have access to all the standard Qt line and shape styling options for use in plots. PyQtGraph provides an API for using these options to draw plots and manage the plot canvas.

Below, we'll explore the most common styling features that you'll need to create and customize your own plots with PyQtGraph.

Background Color

Beginning with the app skeleton above, we can change the background color by calling setBackground() on our PlotWidget instance, self.plot_graph. The code below sets the background to white by passing in the string "w":

```python
import pyqtgraph as pg
from PyQt5 import QtWidgets


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        # Temperature vs time plot
        self.plot_graph = pg.PlotWidget()
        self.setCentralWidget(self.plot_graph)
        self.plot_graph.setBackground("w")
        time = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        temperature = [30, 32, 34, 32, 33, 31, 29, 32, 35, 45]
        self.plot_graph.plot(time, temperature)


app = QtWidgets.QApplication([])
main = MainWindow()
main.show()
app.exec()
```

Calling setBackground() with "w" as an argument changes the background of your plot to white, as you can see in the following window:

PyQtGraph plot with a white background.

There are a number of colors available using single letters, as we did in the example above. They're based on the standard colors used in Matplotlib. Here are the most common codes:

| Letter code | Color |
|---|---|
| "b" | Blue |
| "c" | Cyan |
| "d" | Grey |
| "g" | Green |
| "k" | Black |
| "m" | Magenta |
| "r" | Red |
| "w" | White |
| "y" | Yellow |

In addition to these single-letter codes, we can create custom colors using the hexadecimal notation as a string:

```python
self.plot_graph.setBackground("#bbccaa")  # Hex
```

We can also use RGB and RGBA values passed in as 3-value and 4-value tuples, respectively. We must use values in the range from 0 to 255:

```python
self.plot_graph.setBackground((100, 50, 255))  # RGB each 0-255
self.plot_graph.setBackground((100, 50, 255, 25))  # RGBA (A = alpha opacity)
```

The first call to setBackground() takes a tuple representing an RGB color, while the second call takes a tuple representing an RGBA color.
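The hex and tuple notations describe the same colors. As a quick illustration, here's a small helper (hypothetical, not part of PyQtGraph) that converts a 0-255 RGB tuple into the equivalent hex string; either form can be passed to setBackground():

```python
# Hypothetical helper (not part of PyQtGraph): convert an RGB tuple with
# components in the 0-255 range into the equivalent "#rrggbb" hex string.
def rgb_to_hex(rgb):
    r, g, b = rgb
    return "#{:02x}{:02x}{:02x}".format(r, g, b)


print(rgb_to_hex((100, 50, 255)))  # → #6432ff
```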

We can also specify colors using Qt's QColor class if we prefer it:

```python
from PyQt5 import QtGui

# ...
self.plot_graph.setBackground(QtGui.QColor(100, 50, 254, 25))
```

Using QColor can be useful when you're using specific QColor objects elsewhere in your application and want to reuse them in your plots. For example, say that your app has a custom window background color, and you want to use it in the plots as well. Then you can do something like the following:

```python
color = self.palette().color(QtGui.QPalette.Window)

# ...
self.plot_graph.setBackground(color)
```

In the first line, you get the GUI's background color, while in the second line, you use that color for your plots.

Line Color, Width, and Style

Plot lines in PyQtGraph are drawn using the Qt QPen class. This gives us full control over line drawing, as we would have in any other QGraphicsScene drawing. To use a custom pen, you need to create a new QPen instance and pass it into the plot() method.

In the app below, we use a custom QPen object to change the line color to red:

```python
from PyQt5 import QtWidgets
import pyqtgraph as pg


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        # Temperature vs time plot
        self.plot_graph = pg.PlotWidget()
        self.setCentralWidget(self.plot_graph)
        self.plot_graph.setBackground("w")
        pen = pg.mkPen(color=(255, 0, 0))
        time = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        temperature = [30, 32, 34, 32, 33, 31, 29, 32, 35, 45]
        self.plot_graph.plot(time, temperature, pen=pen)


app = QtWidgets.QApplication([])
main = MainWindow()
main.show()
app.exec()
```

Here, we create a QPen object, passing in a 3-value tuple that defines an RGB red color. We could also define this color with the "r" code or with a QColor object. Then, we pass the pen to plot() with the pen argument.

PyQtGraph plot with a red plot line.

By tweaking the QPen object, we can change the appearance of the line. For example, you can change the line width in pixels and the style (dashed, dotted, etc.), using Qt's line styles.

Update the following lines of code in your app to create a red, dashed line with 5 pixels of width:

```python
from PyQt5 import QtCore, QtWidgets

# ...
pen = pg.mkPen(color=(255, 0, 0), width=5, style=QtCore.Qt.DashLine)
```

The result of this code is shown below, giving a 5-pixel, dashed, red line:

PyQtGraph plot with a red, dashed, and 5-pixel line

You can use any of Qt's other line styles, including Qt.SolidLine, Qt.DotLine, Qt.DashDotLine, and Qt.DashDotDotLine. Examples of each of these styles are shown in the image below:

Qt's line styles.

To learn more about Qt's line styles, check the documentation about pen styles. There, you'll find all you need to deeply customize the lines in your PyQtGraph plots.

Line Markers

For many plots, it can be helpful to use point markers in addition to, or instead of, lines. To draw a marker on your plot, pass the symbol you want to use as a marker when calling plot(). The following example uses a plus sign as the marker:

```python
self.plot_graph.plot(time, temperature, symbol="+")
```

In this line of code, you pass a plus sign to the symbol argument. This tells PyQtGraph to use that symbol as a marker for the points in your plot.

If you use a custom symbol, then you can also use the symbolSize, symbolBrush, and symbolPen arguments to further customize the marker.

The value passed as symbolBrush can be any color, or QBrush instance, while symbolPen can be any color or a QPen instance. The pen is used to draw the shape, while the brush is used for the fill.

Go ahead and update your app's code to use a blue marker of size 15, on a red line:

```python
pen = pg.mkPen(color=(255, 0, 0))
self.plot_graph.plot(
    time,
    temperature,
    pen=pen,
    symbol="+",
    symbolSize=15,
    symbolBrush="b",
)
```

In this code, you pass a plus sign to the symbol argument. You also customize the marker size and color. The resulting plot looks something like this:

PyQtGraph plot with a plus sign as a point marker.

In addition to the + plot marker, PyQtGraph supports the markers shown in the table below:

| Character | Marker shape |
|---|---|
| "o" | Circle |
| "s" | Square |
| "t" | Triangle |
| "d" | Diamond |
| "+" | Plus |
| "t1" | Triangle pointing upwards |
| "t2" | Triangle pointing right |
| "t3" | Triangle pointing left |
| "p" | Pentagon |
| "h" | Hexagon |
| "star" | Star |
| "x" | Cross |
| "arrow_up" | Arrow up |
| "arrow_right" | Arrow right |
| "arrow_down" | Arrow down |
| "arrow_left" | Arrow left |
| "crosshair" | Crosshair |

You can use any of these symbols as markers for your data points. If you have more specific marker requirements, then you can also use a QPainterPath object, which allows you to draw completely custom marker shapes.
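As a sketch of that approach (the shape and its coordinates here are made up for illustration), you can build a QPainterPath in a roughly unit-sized box, so it scales with symbolSize, and pass it as the symbol argument:

```python
# A sketch of a custom marker shape using QPainterPath. Coordinates are kept
# within roughly -0.5..0.5 so the marker scales with symbolSize.
from PyQt5 import QtGui

# Build a simple "flag" shape: a vertical pole with a triangle at the top.
flag = QtGui.QPainterPath()
flag.moveTo(0, 0.5)      # bottom of the pole
flag.lineTo(0, -0.5)     # up to the top
flag.lineTo(0.4, -0.35)  # out to the flag's tip
flag.lineTo(0, -0.2)     # back to the pole
flag.closeSubpath()

# Hypothetical usage, following the earlier examples:
# self.plot_graph.plot(time, temperature, symbol=flag, symbolSize=30)
```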

Plot Titles

Plot titles are important to provide context around what is shown on a given chart. In PyQtGraph, you can add a main plot title using the setTitle() method on the PlotWidget object. Your title can be a regular Python string:

```python
self.plot_graph.setTitle("Temperature vs Time")
```

You can style your titles and change their font color and size by passing additional arguments to setTitle(). The code below sets the color to blue and the font size to 20 points:

```python
self.plot_graph.setTitle("Temperature vs Time", color="b", size="20pt")
```

In this line of code, you set the title's font color to blue and the size to 20 points using the color and size arguments of setTitle().

You can even use CSS styles and basic HTML tag syntax if you prefer, although it's less readable:

```python
self.plot_graph.setTitle(
    '<span style="color: blue; font-size: 20pt">Temperature vs Time</span>'
)
```

In this case, you use a span HTML tag to wrap the title and apply some CSS styles on top of it. The final result is the same as using the color and size arguments. Your plot will look like this:

PyQtGraph plot with title.

Your plot looks way better now. You can continue customizing it by adding informative labels to both axes.

Axis Labels

When it comes to axis labels, we can use the setLabel() method to create them. This method requires two arguments, position and text.

```python
self.plot_graph.setLabel("left", "Temperature (°C)")
self.plot_graph.setLabel("bottom", "Time (min)")
```

The position argument can be any one of "left", "right", "top", or "bottom". It defines the edge of the plot on which the text is placed. The second argument, text, is the text you want to use for the label.

You can pass an optional style argument into the setLabel() method. In this case, you need to use valid CSS name-value pairs. To provide these CSS pairs, you can use a dictionary:

```python
styles = {"color": "red", "font-size": "18px"}
self.plot_graph.setLabel("left", "Temperature (°C)", **styles)
self.plot_graph.setLabel("bottom", "Time (min)", **styles)
```

Here, you first create a dictionary containing CSS pairs. Then you pass this dictionary as an argument to the setLabel() method. Note that you need to use the dictionary unpacking operator to unpack the styles in the method call.

Again, you can use basic HTML syntax and CSS for the labels if you prefer:

```python
self.plot_graph.setLabel(
    "left",
    '<span style="color: red; font-size: 18px">Temperature (°C)</span>',
)
self.plot_graph.setLabel(
    "bottom",
    '<span style="color: red; font-size: 18px">Time (min)</span>',
)
```

This time, you've passed the styles in a span HTML tag with appropriate CSS styles. In either case, your plot will look something like this:

PyQtGraph plot with axis labels.

Axis labels greatly improve the readability of your plots, as you can see in the example above. So, it's a good practice to keep in mind when creating your plots.

Plot Legends

In addition to the axis labels and the plot title, you will often want to show a legend identifying what a given line represents. This feature is particularly important when you start adding multiple lines to a plot.

You can add a legend to a plot by calling the addLegend() method on the PlotWidget object. However, for this method to work, you need to provide a name for each line when calling plot().

The example below passes the name "Temperature Sensor" to the plot() method. This name will be used to identify the line in the legend:

```python
self.plot_graph.addLegend()

# ...
self.plot_graph.plot(
    time,
    temperature,
    name="Temperature Sensor",
    pen=pen,
    symbol="+",
    symbolSize=15,
    symbolBrush="b",
)
```

Note that you must call addLegend() before you call plot() for the legend to show up. Otherwise, the plot won't show the legend at all. Now your plot will look like the following:

PyQtGraph plot with legend.

The legend appears in the top-left corner by default. If you'd like to move it, you can drag and drop it elsewhere. You can also set a default position by passing a 2-value tuple to the offset parameter when calling addLegend().

Background Grid

Adding a background grid can make your plots easier to read, particularly when you're trying to compare relative values against each other. You can turn on the background grid for your plot by calling the showGrid() method on your PlotWidget instance. The method takes two Boolean arguments, x and y:

```python
self.plot_graph.showGrid(x=True, y=True)
```

In this call to the showGrid() method, you enable the grid lines in both dimensions x and y. Here's how the plot looks now:

PyQtGraph plot with grid.

You can toggle the x and y arguments independently, according to the dimension on which you want to enable the grid lines.

Axis Range

Sometimes, it can be useful to predefine the range of values that is visible on the plot or to lock the axis to a consistent range regardless of the data input. In PyQtGraph, you can do this using the setXRange() and setYRange() methods. They force the plot to only show data within the specified ranges.

Below, we set two ranges, one on each axis. The first argument is the minimum value, and the second is the maximum:

```python
self.plot_graph.setXRange(1, 10)
self.plot_graph.setYRange(20, 40)
```

The first line of code sets the x-axis to show values between 1 and 10. The second line sets the y-axis to display values between 20 and 40. Here's how this changes the plot:

PyQtGraph plot with axis ranges

Now your plot looks more consistent. The axes have fixed scales that are specifically set for the expected range of input data.
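If you don't know suitable limits up front, you can derive them from the data. Below is a sketch of a small helper (hypothetical, not part of PyQtGraph) that computes a padded (min, max) range you could then pass to setXRange() or setYRange():

```python
# Hypothetical helper: compute a (min, max) range padded by a fraction of the
# data span, suitable for passing to setXRange() or setYRange().
def padded_range(values, padding=0.1):
    low, high = min(values), max(values)
    span = (high - low) or 1  # avoid a zero-width range for constant data
    return low - span * padding, high + span * padding


temperature = [30, 32, 34, 32, 33, 31, 29, 32, 35, 30]
low, high = padded_range(temperature)
# In the app, you would then call:
# self.plot_graph.setYRange(low, high)
```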

Multiple Plot Lines

It is common to have plots that involve more than one dependent variable. In PyQtGraph, you can plot multiple variables in a single chart by calling .plot() multiple times on the same PlotWidget instance.

In the following example, we plot temperature values from two different sensors. We use the same line style but change the line color. To avoid code repetition, we define a new plot_line() method on our window:

```python
from PyQt5 import QtWidgets
import pyqtgraph as pg


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        # Temperature vs time plot
        self.plot_graph = pg.PlotWidget()
        self.setCentralWidget(self.plot_graph)
        self.plot_graph.setBackground("w")
        self.plot_graph.setTitle("Temperature vs Time", color="b", size="20pt")
        styles = {"color": "red", "font-size": "18px"}
        self.plot_graph.setLabel("left", "Temperature (°C)", **styles)
        self.plot_graph.setLabel("bottom", "Time (min)", **styles)
        self.plot_graph.addLegend()
        self.plot_graph.showGrid(x=True, y=True)
        self.plot_graph.setXRange(1, 10)
        self.plot_graph.setYRange(20, 40)
        time = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        temperature_1 = [30, 32, 34, 32, 33, 31, 29, 32, 35, 30]
        temperature_2 = [32, 35, 40, 22, 38, 32, 27, 38, 32, 38]
        pen = pg.mkPen(color=(255, 0, 0))
        self.plot_line("Temperature Sensor 1", time, temperature_1, pen, "b")
        pen = pg.mkPen(color=(0, 0, 255))
        self.plot_line("Temperature Sensor 2", time, temperature_2, pen, "r")

    def plot_line(self, name, time, temperature, pen, brush):
        self.plot_graph.plot(
            time,
            temperature,
            name=name,
            pen=pen,
            symbol="+",
            symbolSize=15,
            symbolBrush=brush,
        )


app = QtWidgets.QApplication([])
main = MainWindow()
main.show()
app.exec()
```

The custom plot_line() method on the main window does the hard work. It accepts a name, which sets the line's label in the plot legend, followed by the time and temperature data. The pen and brush arguments let you tweak other features of the lines.

To plot a separate set of temperature values, we create a new list called temperature_2 and populate it with values similar to those in our old temperature list, which is now temperature_1. Here's how the plot looks now:

PyQtGraph plot with two lines.

You can play around with the plot_line() method, customizing the markers, line widths, colors, and other parameters.

Creating Dynamic Plots

You can also create dynamic plots with PyQtGraph. The PlotWidget can take new data and update the plot in real time without affecting other elements. To update a plot dynamically, we need a reference to the line object that the plot() method returns.

Once we have the reference to the plot line, we can call the setData() method on the line object to apply the new data. In the example below, we've adapted our temperature vs time plot to accept new temperature measures every minute. Note that we've set the timer to 300 milliseconds so that we don't have to wait an entire minute to see the updates:

```python
from random import randint

import pyqtgraph as pg
from PyQt5 import QtCore, QtWidgets


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        # Temperature vs time dynamic plot
        self.plot_graph = pg.PlotWidget()
        self.setCentralWidget(self.plot_graph)
        self.plot_graph.setBackground("w")
        pen = pg.mkPen(color=(255, 0, 0))
        self.plot_graph.setTitle("Temperature vs Time", color="b", size="20pt")
        styles = {"color": "red", "font-size": "18px"}
        self.plot_graph.setLabel("left", "Temperature (°C)", **styles)
        self.plot_graph.setLabel("bottom", "Time (min)", **styles)
        self.plot_graph.addLegend()
        self.plot_graph.showGrid(x=True, y=True)
        self.plot_graph.setYRange(20, 40)
        self.time = list(range(10))
        self.temperature = [randint(20, 40) for _ in range(10)]
        # Get a line reference
        self.line = self.plot_graph.plot(
            self.time,
            self.temperature,
            name="Temperature Sensor",
            pen=pen,
            symbol="+",
            symbolSize=15,
            symbolBrush="b",
        )
        # Add a timer to simulate new temperature measurements
        self.timer = QtCore.QTimer()
        self.timer.setInterval(300)
        self.timer.timeout.connect(self.update_plot)
        self.timer.start()

    def update_plot(self):
        self.time = self.time[1:]
        self.time.append(self.time[-1] + 1)
        self.temperature = self.temperature[1:]
        self.temperature.append(randint(20, 40))
        self.line.setData(self.time, self.temperature)


app = QtWidgets.QApplication([])
main = MainWindow()
main.show()
app.exec()
```

The first step to creating a dynamic plot is to get a reference to the plot line. In this example, we've used a QTimer object to set the measuring interval. We've connected the update_plot() method with the timer's timeout signal.

The update_plot() method does the work of updating the data at every interval. If you run the app, then you will see a plot with random data scrolling to the left:

The time scale on the x-axis changes as the stream of data provides new values. You can replace the random data with real data from a live sensor readout, an API, or any other data stream. PyQtGraph is performant enough to support multiple simultaneous dynamic plots using this technique.
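The scrolling effect comes entirely from the rolling-window bookkeeping in update_plot(): drop the oldest sample, append a new one. The same logic can be sketched with collections.deque, independent of any GUI code, since a fixed-size deque discards its oldest item automatically:

```python
from collections import deque
from random import randint

# Fixed-size buffers: appending to a full deque automatically discards the
# oldest item, which mirrors the scrolling-window behavior of update_plot().
window = 10
time = deque(range(window), maxlen=window)
temperature = deque((randint(20, 40) for _ in range(window)), maxlen=window)


def add_measurement(value):
    time.append(time[-1] + 1)
    temperature.append(value)
    # In the GUI version you would now call:
    # self.line.setData(list(time), list(temperature))


add_measurement(25)
print(len(time), time[0], time[-1])  # → 10 1 10 (still 10 samples, shifted by one)
```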

Conclusion

In this tutorial, you've learned how to draw basic plots with PyQtGraph and customize plot components, such as lines, markers, titles, axis labels, and more. For a complete overview of PyQtGraph methods and capabilities, see the PyQtGraph documentation. The PyQtGraph repository on GitHub also has a complete set of plot examples.

Categories: FLOSS Project Planets

Russ Allbery: Review: The Library of Broken Worlds

Planet Debian - Sun, 2024-01-14 23:42

Review: The Library of Broken Worlds, by Alaya Dawn Johnson

Publisher: Scholastic Press
Copyright: June 2023
ISBN: 1-338-29064-9
Format: Kindle
Pages: 446

The Library of Broken Worlds is a young-adult far-future science fantasy. So far as I can tell, it's stand-alone, although more on that later in the review.

Freida is the adopted daughter of Nadi, the Head Librarian, and her greatest wish is to become a librarian herself. When the book opens, she's a teenager in highly competitive training. Freida is low-wetware, without the advanced and expensive enhancements of many of the other students competing for rare and prized librarian positions, which she makes up for by being the most audacious. She doesn't need wetware to commune with the library material gods. If one ventures deep into their tunnels and consumes their crystals, direct physical communion is possible.

The library tunnels are Freida's second home, in part because that's where she was born. She was created by the Library, and specifically by Iemaja, the youngest of the material gods. Precisely why is a mystery. To Nadi, Freida is her daughter. To Quinn, Nadi's main political rival within the library, Freida is a thing, a piece of the library, a secondary and possibly rogue AI. A disruptive annoyance.

The Library of Broken Worlds is the sort of science fiction where figuring out what is going on is an integral part of the reading experience. It opens with a frame story of an unnamed girl (clearly Freida) waking the god Nameren and identifying herself as designed for deicide. She provokes Nameren's curiosity and offers an Arabian Nights bargain: if he wants to hear her story, he has to refrain from killing her for long enough for her to tell it. As one might expect, the main narrative doesn't catch up to the frame story until the very end of the book.

The Library is indeed some type of library that librarians can search for knowledge that isn't available from more mundane sources, but Freida's personal experience of it is almost wholly religious and oracular. The library's material gods are identified as AIs, but good luck making sense of the story through a science fiction frame, even with a healthy allowance for sufficiently advanced technology being indistinguishable from magic. The symbolism and tone are entirely fantasy, and late in the book it becomes clear that whatever the material gods are, they're not simple technological AIs in the vein of, say, Banks's Ship Minds.

Also, the Library is not solely a repository of knowledge. It is the keeper of an interstellar peace. The Library was founded after the Great War, to prevent a recurrence. It functions as a sort of legal system and grand tribunal in ways that are never fully explained.

As you might expect, that peace is based more on stability than fairness. Five of the players in this far future of humanity are the Awilu, the most advanced society and the first to leave Earth (or Tierra as it's called here); the Mahām, who possess the material war god Nameren of the frame story; the Lunars and Martians, who dominate the Sol system; and the surviving Tierrans, residents of a polluted and struggling planet that is ruthlessly exploited by the Lunars. The problem facing Freida and her friends at the start of the book is a petition brought by a young Tierran against Lunar exploitation of his homeland. His name is Joshua, and Freida is more than half in love with him. Joshua's legal argument involves interpretation of the freedom node of the treaty that ended the Great War, a node that precedent says gives the Lunars the freedom to exploit Tierra, but which Joshua claims has a still-valid originalist meaning granting Tierrans freedom from exploitation.

There is, in short, a lot going on in this book, and "never fully explained" is something of a theme. Freida is telling a story to Nameren and only explains things Nameren may not already know. The reader has to puzzle out the rest from the occasional hint. This is made more difficult by the tendency of the material gods to communicate only in visions or guided hallucinations, full of symbolism that the characters only partly explain to the reader.

Nonetheless, this did mostly work, at least for me. I started this book very confused, but by about the midpoint it felt like the background was coming together. I'm still not sure I understand the aurochs, baobab, and cicada symbolism that's so central to the framing story, but it's the pleasant sort of stretchy confusion that gives my brain a good workout. I wish Johnson had explained a few more things plainly, particularly near the end of the book, but my remaining level of confusion was within my tolerances.

Unfortunately, the ending did not work for me. The first time I read it, I had no idea what it meant. Lots of baffling, symbolic things happened and then the book just stopped. After re-reading the last 10%, I think all the pieces of an ending and a bit of an explanation are there, but it's absurdly abbreviated. This is another book where the author appears to have been finished with the story before I was.

This keeps happening to me, so this probably says something more about me than it says about books, but I want books to have an ending. If the characters have fought and suffered through the plot, I want them to have some space to be happy and to see how their sacrifices play out, with more detail than just a few vague promises. If much of the book has been puzzling out the nature of the world, I would like some concrete confirmation of at least some of my guesswork. And if you're going to end the book on radical transformation, I want to see the results of that transformation. Johnson does an excellent job showing how brutal the peace of the powerful can be, and is willing to light more things on fire over the course of this book than most authors would, but then doesn't offer the reader much in the way of payoff.

For once, I wish this stand-alone turned out to be a series. I think an additional book could be written in the aftermath of this ending, and I would definitely read that novel. Johnson has me caring deeply about these characters and fascinated by the world background, and I'd happily spend another 450 pages finding out what happens next. But, frustratingly, I think this ending was indeed intended to wrap up the story.

I think this book may fall between a few stools. Science fiction readers who want mysterious future worlds to be explained by the end of the book are going to be frustrated by the amount of symbolism, allusion, and poetic description. Literary fantasy readers, who have a higher tolerance for that style, are going to wish for more focused and polished writing. A lot of the story is firmly YA: trying and failing to fit in, developing one's identity, coming into power, relationship drama, great betrayals and regrets, overcoming trauma and abuse, and unraveling lies that adults tell you. But this is definitely not a straight-forward YA plot or world background. It demands a lot from the reader, and while I am confident many teenage readers would rise to that challenge, it seems like an awkward fit for the YA marketing category.

About 75% of the way in, I would have told you this book was great and you should read it. The ending was a let-down and I'm still grumpy about it. I still think it's worth your attention if you're in the mood for a sink-or-swim type of reading experience. Just be warned that when the ride ends, I felt unceremoniously dumped on the pavement.

Content warnings: Rape, torture, genocide.

Rating: 7 out of 10

Categories: FLOSS Project Planets

Season Of KDE 2024 Projects

Planet KDE - Sun, 2024-01-14 19:00

It is a really impressive Season of KDE this year -- we have selected a total of 18 projects!

To all of our new contributors, thank you very much for your drive to gain experience and improve FOSS at the same time. The mentors and mentorship team, as well as the broader KDE community, are sure you'll create something fantastic over the next 10 weeks.

Group photo of several members of the KDE community at Akademy 2022 in Barcelona, Spain. The KDE community enthusiastically welcomes all new contributors! (Image: CC BY 4.0)

Translation Projects

We have 2 projects to translate multiple apps into Hindi:

They will be mentored by Benson Muite and Raghavendra Kamath.

Kdenlive

We have 2 projects:

They will be mentored by Julius Künzel and Jean-Baptiste Mardelle.

KDE Eco / Accessibility

We have 5 projects:

They will be mentored by Karanjot Singh, Emmanuel Charruau, Nitin Tejuja, Rishi Kumar, and Joseph P. De Veaugh-Geiss.

Cantor / LabPlot

8 projects have been accepted for these two applications:

They will be mentored by Alexander Semke.

KWin

1 project has been selected:

They will be mentored by Xaver Hugl.

The 18 contributors will start developing on 17 January under the guidance of their mentors. SOK will end on 31 March. Check out the full SOK 2024 timeline.

Let's warmly welcome all of them and make sure the beginning of their journey with KDE is successful!

You will be able to follow the progress via blog posts published on KDE's planet.

Categories: FLOSS Project Planets

GNU Taler news: NGI Taler project launched

GNU Planet! - Sun, 2024-01-14 18:00
We are excited to announce the creation of an EU-funded consortium with the central objective to launch GNU Taler as a privacy-preserving payment system across Europe. You can find more information on the consortium page.
Categories: FLOSS Project Planets

Chris Moffitt: Introduction to Polars

Planet Python - Sun, 2024-01-14 17:25
Introduction

It’s been a while since I’ve posted anything on the blog. One of the primary reasons for the hiatus is that I have been using python and pandas but not to do anything very new or different.

In order to shake things up and hopefully get back into the blog a bit, I’m going to write about polars. This article assumes you know how to use pandas and are interested in determining if polars can fit into your workflow. I will cover some basic polars concepts that should get you started on your journey.

Along the way I will point out some of the things I liked and some of the differences that might limit your use of polars if you're coming from pandas.

Ultimately, I do like polars and what it is trying to do. I’m not ready to throw out all my pandas code and move over to polars. However, I can see where polars could fit into my toolkit and provide some performance and capability that is missing from pandas.

As you evaluate the choice for yourself, it is important to try other frameworks and tools and evaluate them on their merits as they apply to your needs. Even if you decide polars doesn’t meet your needs it is good to evaluate options and learn along the way. Hopefully this article will get you started down that path.

Polars

As mentioned above, pandas has been the data analysis tool for python for the past few years. Wes McKinney started the initial work on pandas in 2008 and the 1.0 release was in January 2020. Pandas has been around a long time and will continue to be.

While pandas is great, it has its warts. Wes McKinney wrote about several of these challenges. There are many other criticisms online but most boil down to two items: performance and an awkward/complex API.

Polars was initially developed by Richie Vink to solve these issues. His 2021 blog post does a thorough job of laying out metrics to back up his claims about the performance improvements and the underlying design that leads to these benefits in polars.

The user guide concisely lays out the polars philosophy:

The goal of Polars is to provide a lightning fast DataFrame library that:

  • Utilizes all available cores on your machine.
  • Optimizes queries to reduce unneeded work/memory allocations.
  • Handles datasets much larger than your available RAM.
  • Has an API that is consistent and predictable.
  • Has a strict schema (data-types should be known before running the query).

Polars is written in Rust which gives it C/C++ performance and allows it to fully control performance critical parts in a query engine.

As such Polars goes to great lengths to:

  • Reduce redundant copies.
  • Traverse memory cache efficiently.
  • Minimize contention in parallelism.
  • Process data in chunks.
  • Reuse memory allocations.

Clearly performance is an important goal in the development of polars and a key reason why you might consider using it.

This article won’t discuss performance but will focus on the polars API. The main reason is that for the type of work I do, the data easily fits in RAM on a business-class laptop. The data will fit in Excel but it is slow and inefficient on a standard computer. I rarely find myself waiting on pandas once I have read in the data and have done basic data pre-processing.

Of course performance matters but it’s not everything. If you’re trying to make a choice between pandas, polars or other tools don’t make a choice based on general notions of “performance improvement” but based on what works for your specific needs.

Getting started

For this article, I’ll be using data from an earlier post which you can find on github.

I would recommend following the latest polars installation instructions in the user guide.

I chose to install polars with all of the dependencies:

python -m pip install polars[all]

Once installed, reading the downloaded Excel file is straightforward:

import polars as pl

df = pl.read_excel(
    source="2018_Sales_Total_v2.xlsx",
    schema_overrides={"date": pl.Datetime}
)

When I read this specific file, I found that the date column did not come through as a DateTime type, so I used the schema_overrides argument to make sure the data was properly typed.

Since data typing is so important, here’s one quick way to check on it:

df.schema

OrderedDict([('account number', Int64),
             ('name', Utf8),
             ('sku', Utf8),
             ('quantity', Int64),
             ('unit price', Float64),
             ('ext price', Float64),
             ('date', Datetime(time_unit='us', time_zone=None))])

A lot of the standard pandas commands such as head, tail, and describe work as expected, with a little extra output sprinkled in:

df.head()
df.describe()

The polars output has a couple of notable features:

  • The shape is included which is useful to make sure you’re not dropping rows or columns inadvertently
  • Underneath each column name is a data type which is another useful reminder
  • There are no index numbers
  • The string columns include quotation marks around the values

Overall, I like this output and do find it useful for analyzing the data and making sure the data is stored in the way I expect.

Basic concepts - selecting and filtering rows and columns

Polars introduces the concept of Expressions to help you work with your data. There are four main expressions you need to understand when working with data in polars:

  • select to choose the subset of columns you want to work with
  • filter to choose the subset of rows you want to work with
  • with_columns to create new columns
  • group_by to group data together

Choosing or reordering columns is straightforward with select()

df.select(pl.col("name", "quantity", "sku"))

The pl.col() code is used to create column expressions. You will want to use this any time you want to specify one or more columns for an action. There are shortcuts where you can use data without specifying pl.col() but I’m choosing to show the recommended way.

Filtering is a similar process (note the use of pl.col() again):

df.filter(pl.col("quantity") > 50)

Coming from pandas, I found selecting columns and filtering rows to be intuitive.

Basic concepts - adding columns

The next expression, with_columns, takes a little more getting used to. The easiest way to think about it is that any time you want to add a new column to your data, you need to use with_columns.

To illustrate, I will add a month name column which will also show how to work with date and strings.

df.with_columns((pl.col("date").dt.strftime("%b").alias("month_name")))

This command does a couple of things to create a new column:

  • Select the date column
  • Access the underlying date with dt and convert it to the 3 character month name using strftime
  • Name the newly created column month_name using the alias function

As a brief aside, I like using alias to rename columns. As I played with polars, this made a lot of sense to me.

Here’s another example to drive the point home.

Let’s say we want to understand how much any one product order contributes to the total percentage unit volume for the year:

df.with_columns(
    (pl.col("quantity") / pl.col("quantity").sum()).alias("pct_total")
)

In this example we divide the line item quantity by the total quantity pl.col("quantity").sum() and label it as pct_total .

You may have noticed that the previous month_name column is not there. That’s because none of the operations we have done are in-place. If we want to persist a new column, we need to assign it to a new variable. I will do so in a moment.

I briefly mentioned working with strings but here’s another example.

Let’s say that any of the sku data with an “S” at the front is a special product and we want to indicate that for each item. We use str in a way very similar to the pandas str accessor.

df.with_columns(pl.col("sku").str.starts_with("S").alias("special"))

Polars has a useful when / then / otherwise construct which can replace pandas mask or np.where

Let’s say we want to create a column that indicates a special or includes the original sku if it’s not a special product.

df.with_columns(
    pl.when(pl.col("sku").str.starts_with("S"))
    .then(pl.lit("Special"))
    .otherwise(pl.col("sku"))
    .alias("sales_status")
)

Which yields:

This is somewhat analogous to an if-then-else statement in python. I personally like this syntax because I always struggle to use the pandas equivalents.

This example also introduces pl.lit() which we use to assign a literal value to the columns.
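For comparison, a rough pandas equivalent of that expression using np.where might look like the sketch below (the sku values are made up for illustration):

```python
import numpy as np
import pandas as pd

df_pd = pd.DataFrame({"sku": ["S1-100", "B1-200"]})

# np.where(condition, value_if_true, value_if_false)
df_pd["sales_status"] = np.where(
    df_pd["sku"].str.startswith("S"), "Special", df_pd["sku"]
)
```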

Basic concepts - grouping data

The pandas groupby and polars group_by work similarly, but the key difference is that polars does not have the concept of an index or multi-index.

There are pros and cons to this approach which I will briefly touch on later in this article.

Here’s a simple polars group_by example to total the unit amount by sku by customer.

df.group_by("name", "sku").agg(pl.col("quantity").sum().alias("qty-total"))

The syntax is similar to the pandas groupby with the agg dictionary approach I have mentioned before. You will notice that we continue to use pl.col() to reference our column of data and then alias() to assign a custom name.

The other big change here is that the data does not have a multi-index; the result is roughly the same as using as_index=False with a pandas groupby. The benefit of this approach is that it is easy to work with this data without flattening or resetting it.

The downside is that you cannot use unstack and stack to make the data wider or narrower as needed.

When working with date/time data, you can group data similarly to the pandas Grouper function by using group_by_dynamic :

df.sort(by="date").group_by_dynamic("date", every="1mo").agg(
    pl.col("quantity").sum().alias("qty-total-month")
)

There are a couple items to note:

  • Polars asks that you sort the data by the date column before doing the group_by_dynamic
  • The every argument allows you to specify what date/time level to aggregate to

To expand on this example, what if we wanted to show the month name and year, instead of the date time? We can chain together the group_by_dynamic and add a new column by using with_columns

df.sort(by="date").group_by_dynamic("date", every="1mo").agg(
    pl.col("quantity").sum().alias("qty-total-month")
).with_columns(
    pl.col("date").dt.strftime("%b-%Y").alias("month_name")
).select(
    pl.col("month_name", "qty-total-month")
)

This example starts to show the API expressiveness of polars. Once you understand the basic concepts, you can chain them together in a way that is generally more straightforward than doing so with pandas.

To summarize this example:

  • Grouped the data by month
  • Totaled the quantity and assigned the column name to qty-total-month
  • Changed the date label to be more readable and assigned the name month_name
  • Then down-selected to show the two columns I wanted to focus on
Chaining expressions

We have touched on chaining expressions but I wanted to give one full example below to act as a reference.

Combining multiple expressions is available in pandas but it’s not required. This post from Tom Augspurger shows a nice example of how to use different pandas functions to chain operations together. This is also a common topic that Matt Harrison (@__mharrison__) discusses.

Chaining expressions together is a first class citizen in polars so it is intuitive and an essential part of working with polars.

Here is an example combining several concepts we showed earlier in the article:

df_month = df.with_columns(
    (pl.col("date").dt.month().alias("month")),
    (pl.col("date").dt.strftime("%b").alias("month_name")),
    (pl.col("quantity") / pl.col("quantity").sum()).alias("pct_total"),
    (
        pl.when(pl.col("sku").str.starts_with("S"))
        .then(pl.lit("Special"))
        .otherwise(pl.col("sku"))
        .alias("sales_status")
    ),
).select(
    pl.col(
        "name", "quantity", "sku", "month", "month_name", "sales_status", "pct_total"
    )
)
df_month

I made this graphic to show how the pieces of code interact with each other:

The image is small on the blog but if you open it in a new window, it should be more legible.

It may take a little time to wrap your head around this approach to programming. But the results should pay off in more maintainable and performant code.

Additional notes

As you work with pandas and polars there are convenience functions for moving back and forth between the two. Here’s an example of creating a pandas dataframe from polars:

df.with_columns(
    pl.when(pl.col("sku").str.starts_with("S"))
    .then(pl.lit("Special"))
    .otherwise(pl.lit("Standard"))
    .alias("sales_status")
).to_pandas()

Having this capability means you can gradually start to use polars and go back to pandas if there are activities you need in polars that don’t quite work as expected.

If you need to work the other way, you can convert a pandas dataframe to a polars one using from_pandas()

Finally, one other item I noticed when working with polars is that there are some nice convenience features when saving your polars dataframe to Excel. By default the dataframe is stored in a table and you can make a lot of changes to the output by tweaking the parameters of write_excel(). I recommend reviewing the official API docs for the details.

To give you a quick flavor, here is an example of some simple configuration:

df.group_by("name", "sku").agg(
    pl.col("quantity").sum().alias("qty-total")
).write_excel(
    "sample.xlsx",
    table_style={
        "style": "Table Style Medium 2",
    },
    autofit=True,
    sheet_zoom=150,
)

There are a lot of configuration options available but I generally find this default output easier to work with than the pandas output.

Additional resources

I have only touched on the bare minimum of capabilities in polars. If there is interest, I’ll write some more. In the meantime, I recommend you check out the following resources:

The Modern Polars resource goes into a much more detailed look at how to work with pandas and polars with code examples side by side. It’s a top notch resource. You should definitely check it out.

Conclusion

Pandas has been the go-to data analysis tool in the python ecosystem for over a decade. Over that time it has grown and evolved and the surrounding ecosystem has changed. As a result some of the core parts of pandas might be showing their age.

Polars brings a new approach to working with data. It is still in the early phases of its development but I am impressed with how far it has come in the first few years. As of this writing, polars is moving to a 1.0 release. This milestone means that there will be fewer breaking changes going forward and the API will stabilize. It’s a good time to jump in and learn more for yourself.

I’ve only spent a few hours with polars so I’m still developing my long-term view on where it fits. Here are a few of my initial observations:

Polars pros:

  • Performant design from the ground up which maximizes modern hardware and minimizes memory usage
  • Clean, consistent and expressive API for chaining methods
  • Not having indices simplifies many cases
  • Useful improvement in displaying output, saving excel files, etc.
  • Good API and user documentation
  • No built-in plotting library.

Regarding the plotting functionality, I think it’s better to use the available ones than try to include in polars. There is a plot namespace in polars but it defers to other libraries to do the plotting.

Polars cons:

  • Still newer code base with breaking API changes
  • Not as much third party documentation
  • Not as seamlessly integrated with other libraries (although it is improving)
  • Some pandas functions like stacking and unstacking are not as mature in polars

Pandas pros:

  • Tried and tested code base that has been improved significantly over the years
  • The multi-index support provides helpful shortcuts for re-shaping data
  • Strong integrations with the rest of the python data ecosystem
  • Good official documentation as well as lots of 3rd party sources for tips and tricks

Pandas cons:

  • Some cruft in the API design. There’s more than one way to do things in many cases.
  • Performance for large data sets can get bogged down

This is not necessarily exhaustive but I think it hits the highlights. At the end of the day, diversity in tools and approaches is helpful. I intend to continue evaluating the integration of polars into my analysis - especially in cases where performance becomes an issue or the pandas code gets to be too messy. However, I don’t think pandas is going away any time soon and I continue to be excited about the evolution of pandas.

I hope this article helps you get started. As always, if you have experiences, thoughts or comments on the article, let me know below.

Categories: FLOSS Project Planets

Ned Batchelder: Randomly sub-setting test suites

Planet Python - Sun, 2024-01-14 09:39

I needed to run random subsets of my test suite to narrow down the cause of some mysterious behavior. I didn’t find an existing tool that worked the way I wanted to, so I cobbled something together.

I wanted to run 10 random tests (out of 1368), and keep choosing randomly until I saw the bad behavior. Once I had a selection of 10, I wanted to be able to whittle it down to try to reduce it further.

I tried a few different approaches, and here’s what I came up with, two tools in the coverage.py repo that combine to do what I want:

  • A pytest plugin (select_plugin.py) that lets me run a command to output the names of the exact tests I want to run,
  • A command-line tool (pick.py) to select random lines of text from a file. For convenience, blank or commented-out lines are ignored.
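The core of that random selection is simple. Here is an illustrative stand-alone sketch of picking n usable lines — not the actual pick.py, just the idea:

```python
import random

def sample_lines(text, n, seed=None):
    """Pick n random lines, skipping blanks and comments, reproducibly by seed."""
    lines = [ln for ln in text.splitlines()
             if ln.strip() and not ln.strip().startswith("#")]
    return random.Random(seed).sample(lines, n)
```

Seeding a private random.Random instance keeps runs reproducible without disturbing the global random state.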

More details are in the comment at the top of pick.py, but here’s a quick example:

  1. Get all the test names in tests.txt. These are pytest “node” specifications:

     pytest --collect-only | grep :: > tests.txt

  2. Now tests.txt has a line per test node. Some are straightforward:

     tests/test_cmdline.py::CmdLineStdoutTest::test_version
     tests/test_html.py::HtmlDeltaTest::test_file_becomes_100
     tests/test_report_common.py::ReportMapsPathsTest::test_map_paths_during_html_report

     but with parameterization they can be complicated:

     tests/test_files.py::test_invalid_globs[bar/***/foo.py-***]
     tests/test_files.py::FilesTest::test_source_exists[a/b/c/foo.py-a/b/c/bar.py-False]
     tests/test_config.py::ConfigTest::test_toml_parse_errors[[tool.coverage.run]\nconcurrency="foo"-not a list]

  3. Run a random bunch of 10 tests:

     pytest --select-cmd="python pick.py sample 10 < tests.txt"

     We’re using --select-cmd to specify the shell command that will output the names of tests. Our command uses pick.py to select 10 random lines from tests.txt.

  4. Run many random bunches of 10, announcing the seed each time:

     for seed in $(seq 1 100); do
         echo seed=$seed
         pytest --select-cmd="python pick.py sample 10 $seed < tests.txt"
     done

  5. Once you find a seed that produces the small batch you want, save that batch:

     python pick.py sample 10 17 < tests.txt > bad.txt

  6. Now you can run that bad batch repeatedly:

     pytest --select-cmd="cat bad.txt"

  7. To reduce the bad batch, comment out lines in bad.txt with a hash character, and the tests will be excluded. Keep editing until you find the small set of tests you want.

I like that this works and I understand it. I like that it’s based on the bedrock of text files and shell commands. I like that there’s room for different behavior in the future by adding to how pick.py works. For example, it doesn’t do any bisecting now, but it could be adapted to it.
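For what it's worth, such bisection could be grafted on with a greedy delta-debugging-style loop. This is purely a hypothetical sketch — not something pick.py does — where is_bad(subset) stands for "running this subset reproduces the failure":

```python
def shrink(tests, is_bad):
    """Greedily drop chunks of the test list as long as the failure persists."""
    step = len(tests) // 2
    while step >= 1:
        i = 0
        while i < len(tests):
            candidate = tests[:i] + tests[i + step:]  # try without this chunk
            if candidate and is_bad(candidate):
                tests = candidate   # failure still reproduces; keep the removal
            else:
                i += step           # chunk was needed; move on
        step //= 2
    return tests
```

With an is_bad that checks a real pytest run's exit code, this would converge on a small failure-reproducing subset.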

As usual, there might be a better way to do this, but this works for me.

Categories: FLOSS Project Planets

TechBeamers Python: Understanding LangChain: A Guide for Beginners

Planet Python - Sun, 2024-01-14 09:21

LangChain is a toolkit for building apps powered by large language models like GPT-3. Think of it as Legos for AI apps – it simplifies connecting these powerful models to build things like text generators, chatbots, and question answerers. It was created by an open-source community and lets developers quickly prototype and deploy AI-powered apps. […]

The post Understanding LangChain: A Guide for Beginners appeared first on TechBeamers.

Categories: FLOSS Project Planets

KDE e.V. is looking for a project lead and event manager for environmental sustainability project

Planet KDE - Sun, 2024-01-14 08:00

KDE e.V. is looking for you as an employee for a new project to promote environmentally-sustainable software and long-term hardware use. You will be the person who makes sure the administration of the project and the organization of the campaigns and workshops are on track and successful. Please see the full job listing for more details about this opportunity.

Applications will be accepted until mid-february, and we look forward to your application.

Categories: FLOSS Project Planets

cpio @ Savannah: GNU cpio version 2.15

GNU Planet! - Sun, 2024-01-14 07:21

GNU cpio version 2.15 is available for download. This is a bug-fixing release.  Short summary of changes:

  • Fix the operation of --no-absolute-filenames --make-directories.
  • Restore access and modification times of symlinks in copy-in and copy-pass modes.
Categories: FLOSS Project Planets

GNU Guix: Building packages targeting psABIs

GNU Planet! - Sun, 2024-01-14 07:00

Starting with version 2.33, the GNU C library (glibc) grew the capability to search for shared libraries using additional paths, based on the hardware capabilities of the machine running the code. This was a great boon for x86_64, which was first released in 2003, and has seen many changes in the capabilities of the hardware since then. While it is extremely common for Linux distributions to compile for a baseline which encompasses all of an architecture, there is performance being left on the table by targeting such an old specification and not one of the newer revisions.

One option used internally in glibc and in some other performance-critical libraries is indirect functions, or IFUNCs (see also here). The loader, ld.so, uses them to pick function implementations optimized for the available CPU at load time. GCC's functional multi-versioning (FMV) (https://gcc.gnu.org/wiki/FunctionMultiVersioning) generates several optimized versions of functions, using the IFUNC mechanism so the appropriate one is selected at load time. These are strategies which most performance-sensitive libraries use, but not all of them.

With the --tune package transformation option, Guix implements so-called package multi-versioning, which creates package variants using compiler flags set to use optimizations targeted for a specific CPU.

Finally - and we're getting to the central topic of this post! - glibc since version 2.33 supports another approach: ld.so would search not just the /lib folder, but also the glibc-hwcaps folders, which for x86_64 included /lib/glibc-hwcaps/x86-64-v2, /lib/glibc-hwcaps/x86-64-v3 and /lib/glibc-hwcaps/x86-64-v4, corresponding to the psABI micro-architectures of the x86_64 architecture. This means that if a library was compiled against the baseline of the architecture then it should be installed in /lib, but if it were compiled a second time, this time using (depending on the build instructions) -march=x86-64-v2, then the libraries could be installed in /lib/glibc-hwcaps/x86-64-v2 and then glibc, using ld.so, would choose the correct library at runtime.

These micro-architectures aren't a perfect match for the different hardware available, it is often the case that a particular CPU would satisfy the requirements of one tier and part of the next but would therefore only be able to use the optimizations provided by the first tier and not by the added features that the CPU also supports.

This of course shouldn't be a problem in Guix; it's possible, and even encouraged, to adjust packages to be more useful for one's needs. The problem comes from the search paths: ld.so will only search for the glibc-hwcaps directory if it has already found the base library in the preceding /lib directory. This isn't a problem for distributions following the Filesystem Hierarchy Standard (FHS), but for Guix we will need to ensure that all the different versions of the library will be in the same output.

With a little bit of planning this turns out to not be as hard as it sounds. Let's take, for example, the GNU Scientific Library, gsl, a math library which helps with all sorts of numerical analysis. First we create a procedure to generate our 3 additional packages, corresponding to the psABIs that are searched for in the glibc-hwcaps directory.

(define (gsl-hwabi psabi)
  (package/inherit gsl
    (name (string-append "gsl-" psabi))
    (arguments
     (substitute-keyword-arguments (package-arguments gsl)
       ((#:make-flags flags #~'())
        #~(append (list (string-append "CFLAGS=-march=" #$psabi)
                        (string-append "CXXFLAGS=-march=" #$psabi))
                  #$flags))
       ((#:configure-flags flags #~'())
        #~(append (list (string-append "--libdir=" #$output
                                       "/lib/glibc-hwcaps/" #$psabi))
                  #$flags))
       ;; The building machine can't necessarily run the code produced.
       ((#:tests? _ #t) #f)
       ((#:phases phases #~%standard-phases)
        #~(modify-phases #$phases
            (add-after 'install 'remove-extra-files
              (lambda _
                (for-each (lambda (dir)
                            (delete-file-recursively
                             (string-append #$output dir)))
                          (list (string-append "/lib/glibc-hwcaps/"
                                               #$psabi "/pkgconfig")
                                "/bin" "/include" "/share"))))))))
    (supported-systems '("x86_64-linux" "powerpc64le-linux"))
    (properties `((hidden? . #t)
                  (tunable? . #f)))))

We remove some directories and any binaries since we only want the libraries produced from the package; we want to use the headers and any other bits from the main package. We then combine all of the pieces together to produce a package which can take advantage of the hardware on which it is run:

(define-public gsl-hwcaps
  (package/inherit gsl
    (name "gsl-hwcaps")
    (arguments
     (substitute-keyword-arguments (package-arguments gsl)
       ((#:phases phases #~%standard-phases)
        #~(modify-phases #$phases
            (add-after 'install 'install-optimized-libraries
              (lambda* (#:key inputs outputs #:allow-other-keys)
                (let ((hwcaps "/lib/glibc-hwcaps/"))
                  (for-each
                   (lambda (psabi)
                     (copy-recursively
                      (string-append (assoc-ref inputs
                                                (string-append "gsl-" psabi))
                                     hwcaps psabi)
                      (string-append #$output hwcaps psabi)))
                   '("x86-64-v2" "x86-64-v3" "x86-64-v4")))))))))
    (native-inputs
     (modify-inputs (package-native-inputs gsl)
       (append (gsl-hwabi "x86-64-v2")
               (gsl-hwabi "x86-64-v3")
               (gsl-hwabi "x86-64-v4"))))
    (supported-systems '("x86_64-linux"))
    (properties `((tunable? . #f)))))

In this case the size of the final package is increased by about 13 MiB, from 5.5 MiB to 18 MiB. It is up to you if the speed-up from providing an optimized library is worth the size trade-off.

To use this package as a replacement build input in a package, package-input-rewriting/spec is a handy tool:

(define use-glibc-hwcaps
  (package-input-rewriting/spec
   ;; Replace some packages with ones built targeting custom packages build
   ;; with glibc-hwcaps support.
   `(("gsl" . ,(const gsl-hwcaps)))))

(define-public inkscape-with-hwcaps
  (package
    (inherit (use-glibc-hwcaps inkscape))
    (name "inkscape-with-hwcaps")))

Of the Guix supported architectures, x86_64-linux and powerpc64le-linux can both benefit from this new capability.

Through the magic of newer versions of GCC and LLVM it is safe to use these libraries in place of the standard libraries while compiling packages; these compilers know about the glibc-hwcap directories and will purposefully link against the base library during build time, with glibc's ld.so choosing the optimized library at runtime.

One possible use case for these libraries is creating guix packs of packages to run on other systems. By substituting these libraries it becomes possible to create a guix pack which will have better performance than a standard package used in a guix pack. This works even when the included libraries don't make use of the IFUNCs from glibc or functional multi-versioning from GCC. Providing optimized yet portable pre-compiled binaries is a great way to take advantage of this feature.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Debian Brasil: MiniDebConf BH 2024 - registration and call for activities now open

Planet Debian - Sun, 2024-01-14 06:00

Registration for participants and the call for activities are now open for MiniDebConf Belo Horizonte 2024 and for FLISOL - the Latin American Free Software Installation Festival.

See below for some important information:

Date and venue of MiniDebConf and FLISOL

MiniDebConf will take place from April 27 to 30 at the Pampulha Campus of UFMG - Universidade Federal de Minas Gerais.

On the 27th (Saturday) we will also hold an edition of FLISOL - Latin American Free Software Installation Festival, an event that takes place on the same day in several cities across Latin America.

While MiniDebConf will have activities focused on Debian, FLISOL will have general activities about Free Software and related topics such as programming languages, CMS, network and systems administration, philosophy, freedom, licenses, etc.

Free registration and travel grants

You can already register for free for MiniDebConf Belo Horizonte 2024.

MiniDebConf is an event open to everyone, regardless of your level of knowledge about Debian. What matters most is bringing the community together to celebrate one of the largest Free Software projects in the world, so we want to welcome everyone from inexperienced users making their first contact with Debian to official Developers of the project. In other words, everyone is invited!

This year we are offering accommodation and travel grants to make it possible for people from other cities who contribute to the Debian Project to attend. Non-official contributors, DMs and DDs can request the grants using the registration form.

We are also offering food grants for all participants, including non-contributors, and people who live in the BH region.

Financial resources are quite limited, but we will try to fulfill as many requests as possible.

If you intend to request one of these grants, visit this link and read more information before registering:

Registration (without grants) can be done up until the date of the event, but there is a deadline for accommodation and travel grant requests, so pay attention to the final deadline: February 18.

Since we are using the same form for both events, your registration will be valid for both MiniDebConf and FLISOL.

To register, go to the website and click on Create account. Create your account (preferably using Salsa) and access your profile. There you will see the Register button.

https://bh.mini.debconf.org

Call for activities

The call for activities is also open for both MiniDebConf and FLISOL.

For more information, visit this link.

Pay attention to the final deadline to submit your activity proposal: February 18.

Contact

If you have any questions, send an email to contato@debianbrasil.org.br

Organization

Categories: FLOSS Project Planets

Programiz: Python List

Planet Python - Sat, 2024-01-13 23:11
In this tutorial, we will learn about Python lists (creating lists, changing list items, removing items and other list operations) with the help of examples.
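The basic operations the tutorial covers, creating a list, changing items, and removing them, look like this:

```python
# Create a list of numbers (with one deliberate mistake)
primes = [2, 3, 5, 8]

# Change an item by index to fix the wrong value
primes[3] = 7

# Add an item to the end
primes.append(11)

# Remove an item by value
primes.remove(2)

print(primes)  # [3, 5, 7, 11]
```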
Categories: FLOSS Project Planets

New countries in KDE Itinerary

Planet KDE - Sat, 2024-01-13 19:12
Lithuania and Latvia

Prompted by a small discussion about how difficult it is to get from Berlin to Riga by train, and by a resulting quick look at how the official app for trains in Latvia finds its connections, I added support for it in KDE Itinerary. KDE Itinerary is KDE’s travel planning app.

After I understood how it works, adding support for new data sources seemed pretty doable, so I directly moved on to do the same for trains in Lithuania as well.

As a result of this, it is now possible to travel from Berlin to Riga with Itinerary and continue further with the local trains there:

The connection is still far from good, but I fear I can’t fix that in software.

What still does not work is directly searching from Berlin to Riga, as that depends on having a single data source with data for the entire route. So it is necessary to split the route and search for the parts yourself.

Why you can’t always find a route even though there is one

The main data source for Itinerary in Europe is the API of the “Deutsche Bahn”, the main railway operator in Germany. Its API also has data for neighbouring countries, and even beyond that. According to Jon Worth, their data comes from UIC Merits, a common system that railway operators can submit their routes to. However, that probably comes with high costs, so many smaller operators, like the ones in Latvia and Lithuania, don’t do that. For that reason there is no single data source that can route, for example, from Berlin to Riga.

What most of the operators in Europe do, however, is publish schedule data in a common format (GTFS). What is missing so far is a single service that can route on all of the available data and has an API that we can use. Setting something like this up would require a bit of community and hosting resources, but I am hopeful that we can have something like this in the future.

In the meantime, it already helps to fill in the missing countries one by one, so at least local users can already find their routes in Itinerary, and for Interrail and other cross border travel, people can at least patch routes together.

More countries

The next country I worked on was Montenegro. The reason for that is that it is close to the area that the DB API can still give results for, and also still has useful train services. Getting their API to work well was a bit more difficult though, as it doesn’t provide some of the information that Itinerary usually depends on. For example coordinates for stations. Those are needed to select where to search for trains going from a station. Luckily, exporting the list of stations and their coordinates from OpenStreetMap was relatively easy and provided me with all the data I needed.
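As an illustration of that kind of export, here is a small sketch that turns an Overpass-style JSON result into a station-name-to-coordinates lookup. The response below is a hard-coded sample in the shape of an Overpass API answer; in practice the data would come from a real Overpass query, and the coordinates here are only approximate.

```python
import json

# A tiny hand-written sample in the shape of an Overpass API JSON
# response for railway stations (coordinates are approximate).
sample_response = """
{
  "elements": [
    {"type": "node", "lat": 42.44, "lon": 19.26,
     "tags": {"railway": "station", "name": "Podgorica"}},
    {"type": "node", "lat": 42.10, "lon": 19.10,
     "tags": {"railway": "station", "name": "Bar"}}
  ]
}
"""

def stations_from_overpass(raw: str) -> dict:
    """Map each station name to its (lat, lon) pair."""
    data = json.loads(raw)
    return {
        el["tags"]["name"]: (el["lat"], el["lon"])
        for el in data["elements"]
        if el["tags"].get("railway") == "station"
    }

stations = stations_from_overpass(sample_response)
print(stations["Podgorica"])  # (42.44, 19.26)
```

A lookup table like this is enough for an app to decide where a station is and which nearby stations to search from.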

Thanks to that Itinerary can now even show the route on a map properly.

Now only the API for Serbia is missing to actually connect to the part of the network DB knows about.

The new backends are not yet included in any release, but you can already find them in the nightly builds. Be aware that the nightly builds have switched to Qt6 and KF6 fairly recently, which means there are still a few rough edges and small bugs in the UI.

On Linux, you can use the nightly flatpak:

flatpak install https://cdn.kde.org/flatpak/itinerary-nightly/org.kde.itinerary.flatpakref

On Android, the usual KDE Nightly F-Droid repository has up to date builds.

Categories: FLOSS Project Planets

Python People: Julian Sequeira - Pybites, Australia, Mindset, and Teaching New Programmers

Planet Python - Sat, 2024-01-13 16:55

Julian Sequeira is a cofounder of Pybites. 
He's a Python coach, a podcaster, a career mindset advocate, and is learning guitar.

Topics include:

  • Learning guitar
  • Vacationing in Canada
  • Pybites
  • Splitting finances with Bob
  • Building a community and a team
  • Coaching
  • Conscious positivity
  • Australia is full of animals that want to kill you. Except kangaroos.
  • Teaching Python to non-technical people

The Complete pytest Course

  • Level up your testing skills and save time during coding and maintenance.
  • Check out courses.pythontest.com

★ Support this podcast on Patreon ★
Categories: FLOSS Project Planets

KDE’s 6th Megarelease – RC1 on Fedora Rawhide!

Planet KDE - Sat, 2024-01-13 16:23

After a few days of work the Fedora KDE SIG is proud to announce the availability of KDE 6th Megarelease Release Candidate 1 on Fedora Rawhide!

For those who like bleeding edge, feel free to try it!

We are very excited and looking forward to Fedora 40 + KDE 6 + Wayland only

Note: right now the update is sitting in testing. If you don’t want to wait a few hours until it reaches stable, you can access it via a dnf repository like:

[main]
cachedir=/var/cache/yum
debuglevel=1
logfile=/var/log/yum.log
reposdir=/dev/null
retries=20
obsoletes=1
gpgcheck=0
assumeyes=1
keepcache=1
install_weak_deps=0
strict=1

# repos
[build]
name=build
baseurl=https://kojipkgs.fedoraproject.org/repos/f40-build-side-81132/5738561/x86_64
Categories: FLOSS Project Planets

Test and Code: 212: Canon TDD - by Kent Beck

Planet Python - Sat, 2024-01-13 14:18

In 2002, Kent Beck released a book called  "Test Driven Development by Example".
In December of 2023, Kent wrote an article called "Canon TDD".
With Kent's permission, this episode contains the full content of the article.

Brian's commentary is saved for a followup episode.
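The core loop Canon TDD describes, write one failing test, then write just enough code to make it pass, can be sketched with plain assertions. The `fizzbuzz` example below is our own illustration, not taken from Kent Beck's article:

```python
# Step 1: write the test first. Running this before fizzbuzz exists
# would fail with a NameError -- that failing test is the point.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2: write just enough implementation to make the test pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()  # passes silently once the implementation is in place
```

From here the loop repeats: add the next failing test, make it pass, and refactor with the tests as a safety net.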

Links:

  • Canon TDD: https://tidyfirst.substack.com/p/canon-tdd
  • Test Driven Development by Example: https://bookshop.org/p/books/test-driven-development-by-example-kent-beck/115205
The Complete pytest Course

  • Level up your testing skills and save time during coding and maintenance.
  • Check out courses.pythontest.com
Categories: FLOSS Project Planets

TechBeamers Python: Top 50 Python Data Structure Exercises (List, Set, Dictionary, and Tuple)

Planet Python - Sat, 2024-01-13 03:24

Here are 50 Python Data Structure exercises covering List, Set, Dictionary, and Tuple operations. These are excellent exercises for any beginner learning Python. In the following exercises, you’ll engage in a series of hands-on challenges that traverse the landscape of Lists, Sets, Dictionaries, and Tuples, as well as more advanced concepts like list comprehension. Each […]
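To give a flavor of the kind of tasks involved, here is one tiny exercise per structure. These particular tasks are our own examples, not taken from the article's exercise list:

```python
# List: reverse a list with slicing
nums = [1, 2, 3, 4]
rev = nums[::-1]

# Set: find items present in both collections
common = {1, 2, 3} & {2, 3, 4}

# Dictionary: invert keys and values with a comprehension
inverted = {v: k for k, v in {"a": 1, "b": 2}.items()}

# Tuple: unpack a pair into variables
x, y = (10, 20)

print(rev, common, inverted, x + y)
```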

The post Top 50 Python Data Structure Exercises (List, Set, Dictionary, and Tuple) appeared first on TechBeamers.

Categories: FLOSS Project Planets

Pages