Feeds
PyCharm: 7 Ways To Use Jupyter Notebooks inside PyCharm
Jupyter notebooks allow you to tell stories by creating and sharing data, equations, and visualizations sequentially, with a supporting narrative as you go through the notebook.
Jupyter notebooks in PyCharm Professional provide functionality above and beyond that of browser-based Jupyter notebooks, such as code completion, dynamic plots, and quick statistics, to help you explore and work with your data quickly and effectively.
Let’s take a look at 7 ways you can use Jupyter notebooks in PyCharm to achieve your goals. They are:
- Creating or connecting to an existing notebook
- Importing your data
- Getting acquainted with your data
- Using JetBrains AI Assistant
- Exploring your code with PyCharm
- Getting insights from your code
- Sharing your insights and charts
The Jupyter notebook that we used in this demo is available on GitHub.
1. Creating or connecting to an existing notebook

You can create and work on your Jupyter notebooks locally or connect to one remotely with PyCharm. Let’s take a look at both options so you can decide for yourself.
Creating a new Jupyter notebook

To work with a Jupyter notebook locally, you need to go to the Project tool window inside PyCharm, navigate to the location where you want to add the notebook, and invoke a new file. You can do this by using either your keyboard shortcuts ⌘N (macOS) / Alt+Ins (Windows/Linux) or by right-clicking and selecting New | Jupyter Notebook.
Give your new notebook a name, and PyCharm will open it ready for you to start work. You can also drag local Jupyter notebooks into PyCharm, and the IDE will automatically recognise them for you.
Connecting to a remote Jupyter notebook

Alternatively, you can connect to a remote Jupyter notebook by selecting Tools | Add Jupyter Connection. You can then choose to start a local Jupyter server, connect to an existing running local Jupyter server, or connect to a Jupyter server using a URL – all of these options are supported.
Now that you have your Jupyter notebook, you need some data!
2. Importing your data

Data generally comes from one of two places: a CSV file or a database. Let’s look at importing data from a CSV file first.
Importing from a CSV file

Polars and pandas are the two most commonly used libraries for importing data into Jupyter notebooks. I’ll give you code for both in this section, and you can check out the documentation for both Polars and pandas and learn how Polars is different to pandas.
You need to ensure your CSV is somewhere in your PyCharm project, perhaps in a folder called `data`. Then, you can import pandas and use it to read the data in:
```python
import pandas as pd

df = pd.read_csv("../data/airlines.csv")
```

In this example, `airlines.csv` is the file containing the data we want to manipulate. To run this and any code cell in PyCharm, use ⇧⏎ (macOS) / Shift+Enter (Windows/Linux). You can also use the green run arrows on the toolbar at the top.
If you prefer to use Polars, you can use this code:
```python
import polars as pl

df = pl.read_csv("../data/airlines.csv")
```

Importing from a database

If your data is in a database, as is often the case for internal projects, importing it into a Jupyter notebook will require just a few more lines of code. First, you need to set up your database connection. In this example, we’re using PostgreSQL.
For pandas, you need to use this code to read the data in:
```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://jetbrains:jetbrains@localhost/demo")
df = pd.read_sql(sql=text("SELECT * FROM airlines"), con=engine.connect())
```

And for Polars, it’s this code:
```python
import polars as pl
from sqlalchemy import create_engine

engine = create_engine("postgresql://jetbrains:jetbrains@localhost/demo")
connection = engine.connect()

query = "SELECT * FROM airlines"
df = pl.read_database(query, connection)
```

3. Getting acquainted with your data

Now that we’ve read our data in, we can take a look at the DataFrame, or `df` as we will refer to it in our code. To print out the DataFrame, you only need a single line of code, regardless of which method you used to read the data in:
```python
df
```

DataFrames

PyCharm displays your DataFrame as a table first so you can explore it. You can scroll horizontally through the DataFrame and click on any column header to order the data by that column. You can click on the Show Column Statistics icon on the right-hand side and select Compact or Detailed to get some helpful statistics on each column of data.
Dynamic charts

You can use PyCharm to get a dynamic chart of your DataFrame by clicking on the Chart View icon on the left-hand side. We’re using pandas in this example, but Polars DataFrames also have the same option.
Click on the Show Series Settings icon (a cog) on the right-hand side to configure your plot to meet your needs:
In this view, you can hover your mouse over your data to learn more about it and easily spot outliers:
You can do all of this with Polars, too.
4. Using JetBrains AI Assistant

JetBrains AI Assistant has several offerings that can make you more productive when you’re working with Jupyter notebooks inside PyCharm. Let’s take a closer look at how you can use JetBrains AI Assistant to explain a DataFrame, write code, and even explain errors.
Explaining DataFrames

If you’ve got a DataFrame but are unsure where to start, you can click the purple AI icon on the right-hand side of the DataFrame and select Explain DataFrame. JetBrains AI Assistant will use its context to give you an overview of the DataFrame:
You can use the generated explanation to aid your understanding.
Writing Code

You can also get JetBrains AI Assistant to help you write code. Perhaps you know what kind of plot you want, but you’re not 100% sure what the code should look like. Well, now you can use JetBrains AI Assistant to help you. Let’s say you want to use matplotlib to create a chart that finds the relationship between ‘TimeMonthName’ and ‘MinutesDelayedWeather’. By specifying the column names, we’re giving more context to the request, which improves the reliability of the generated code. Try it with the following prompt:
Give me code using matplotlib to create a chart which finds the relationship between ‘TimeMonthName’ and ‘MinutesDelayedWeather’ for my dataframe df
If you like the resulting code, you can use the Insert Snippet at Caret button to insert the code and then run it:
```python
import matplotlib.pyplot as plt

# Assuming your data is in a DataFrame named 'df'
# Replace 'df' with the actual name of your DataFrame if different

# Plotting
plt.figure(figsize=(10, 6))
plt.bar(df['TimeMonthName'], df['MinutesDelayedWeather'], color='skyblue')
plt.xlabel('Month')
plt.ylabel('Minutes Delayed due to Weather')
plt.title('Relationship between TimeMonthName and MinutesDelayedWeather')
plt.xticks(rotation=45)
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.tight_layout()
plt.show()
```

If you don’t want to open the AI Assistant tool window, you can use the AI cell prompt to ask your questions. For example, we can ask the same question here and get the code we need:
Explaining errors
You can also get JetBrains AI Assistant to explain errors for you. When you get an error, click Explain with AI:
You can use the resulting output to further your understanding of the problem and perhaps even get some code to fix it!
5. Exploring your code

PyCharm can help you get an overview of your Jupyter notebook, complete parts of your code to save your fingers, refactor it as required, debug it, and even add integrations to help you take it to the next level.
Tips for navigating and optimizing your code

Our Jupyter notebooks can grow large quite quickly, but thankfully you can use PyCharm’s Structure view to see all your notebook’s headings by pressing ⌘7 (macOS) / Alt+7 (Windows/Linux).
Code completion

Another helpful feature that you can take advantage of when using Jupyter notebooks inside PyCharm is code completion. You get both basic and type-based code completion out of the box with PyCharm, but you can also enable Full Line Code Completion in PyCharm Professional, which uses a local AI model to provide suggestions. Lastly, JetBrains AI Assistant can also help you write code and discover new libraries and frameworks.
Refactoring

Sometimes you need to refactor your code, and in that case, you only need to know one keyboard shortcut: ⌃T (macOS) / Shift+Ctrl+Alt+T (Windows/Linux). Then you can choose the refactoring you want to invoke. Pick from popular options such as Rename, Change Signature, and Introduce Variable, or lesser-known options such as Extract Method, to change your code without changing the semantics:
As your Jupyter notebook grows, it’s likely that your import statements will also grow. Sometimes you might import packages such as polars and numpy, but forget that numpy is a transitive dependency of the Polars library and, as such, doesn’t need to be imported separately.
To catch these cases and keep your code tidy, you can invoke Optimize Imports ⌃⌥O (macOS) / Ctrl+Alt+O (Windows/Linux) and PyCharm will remove the ones you don’t need.
Debugging your code

You might not have used the debugger in PyCharm yet, and that’s okay. Just know that it’s there and ready to support you when you need to better understand some behavior in your Jupyter notebook.
Place a breakpoint on the line you’re interested in by clicking in the gutter or by using ⌘F8 (macOS) / Ctrl+F8 (Windows/Linux), and then run your code with the debugger attached with the debug icon on the top toolbar:
You can also invoke PyCharm’s debugger in your Jupyter notebook with ⌥⇧⏎ (macOS) / Shift+Alt+Enter (Windows/Linux). There are some restrictions when it comes to debugging your code in a Jupyter notebook, but please try this out for yourself and share your feedback with us.
Adding integrations into PyCharm

IDEs wouldn’t be complete without the integrations you need. PyCharm Professional 2024.2 brings two new integrations to your workflow: Databricks and Hugging Face.
You can enable the integrations with both Databricks and Hugging Face by going to your Settings ⌘, (macOS) / Ctrl+Alt+S (Windows/Linux), selecting Plugins, and searching for the plugin with the corresponding name on the Marketplace tab.
6. Getting insights from your code

When analyzing your data, there’s a difference between categorical and continuous variables. Categorical data has a finite number of discrete groups or categories, whereas continuous data is one continuous measurement. Let’s look at how we can extract different insights from both the categorical and continuous variables in our airlines dataset.
Continuous variables

We can get a sense of how continuous data is distributed by looking at measures of the average value in that data and the spread of the data around the average. In normally distributed data, we can use the mean to measure the average and the standard deviation to measure the spread. However, when data is not distributed normally, we can get more accurate information using the median and the interquartile range (the difference between the seventy-fifth and twenty-fifth percentiles). Let’s look at one of our continuous variables to understand the difference between these measurements.
In our dataset, we have lots of continuous variables, but we’ll work with `NumDelaysLateAircraft` to see what we can learn. Let’s use the following code to get some summary statistics for just that column:
```python
df['NumDelaysLateAircraft'].describe()
```

Looking at this data, we can see that there is a big difference between the `mean` of ~789 and the `median` (our fiftieth percentile, indicated by “50%” in the table below) of ~618.
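If you want the robust measures directly, here is a minimal sketch (assuming the pandas DataFrame `df` loaded earlier) that computes the median and the interquartile range for this column:

```python
# A minimal sketch, assuming the pandas DataFrame `df` loaded earlier.
median = df['NumDelaysLateAircraft'].median()
q25, q75 = df['NumDelaysLateAircraft'].quantile([0.25, 0.75])
iqr = q75 - q25  # interquartile range: the spread of the middle 50% of values
print(f"median={median:.0f}, IQR={iqr:.0f}")
```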
Such a gap between the mean and the median indicates a skew in our variable’s distribution, so let’s use PyCharm to explore it further. Click on the Chart View icon at the top left. Once the chart has been rendered, we’ll change the series settings represented by the cog on the right-hand side of the screen. Change your x-axis to `NumDelaysLateAircraft` and your y-axis to `NumDelaysLateAircraft`.
Now drop down the y-axis using the little arrow and select `count`. The final step is to change the chart type to Histogram using the icons in the top-right corner:
Now that we can see the skew laid out visually, we can see that most of the time, the delays are not too excessive. However, we have a number of more extreme delays – one aircraft is an outlier on the right and it was delayed by 4,509 minutes, which is just over three days!
In statistics, the mean is very sensitive to outliers because it’s an arithmetic average, unlike the median, which, if you ordered all observations in your variable, would sit exactly in the middle of those values. When the mean is higher than the median, it’s because you have outliers on the right-hand side of the data, the higher side, as we had here. In such cases, the median is a better indicator of the true average delay, as you can see if you look at the histogram.
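To see that sensitivity in isolation, here is a tiny sketch with made-up delay values (not from our dataset) where a single outlier drags the mean far above the median:

```python
import statistics

# Made-up delay values (not from our dataset); the last one is an outlier.
delays = [600, 610, 620, 630, 4509]

print(statistics.mean(delays))    # 1393.8 -- pulled up by the single outlier
print(statistics.median(delays))  # 620 -- unaffected by the outlier's size
```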
Categorical variables

Let’s take a look at how we can use code to get some insights from our categorical variables. In order to get something that’s a little more interesting than just `AirportCode`, we’ll analyze how many aircraft were delayed by weather, `NumDelaysWeather`, in the different months of the year, `TimeMonthName`.
Use this code to group `NumDelaysWeather` with `TimeMonthName`:
```python
result = df[['TimeMonthName', 'NumDelaysWeather']].groupby('TimeMonthName').sum()
result
```

This gives us the DataFrame again in table format, but click the Chart View icon on the left-hand side of the PyCharm UI to see what we can learn:
This is okay, but it would be helpful to have the months ordered according to the Gregorian calendar. Let’s first create a variable for the months that we expect:
```python
month_order = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]
```

Now we can ask pandas to use the order that we’ve just defined in `month_order`:
```python
# Convert the 'TimeMonthName' column to a categorical type with the specified order
df["TimeMonthName"] = pd.Categorical(df["TimeMonthName"], categories=month_order, ordered=True)

# Now you can group by 'TimeMonthName' and perform the sum operation, specifying observed=False
result = df[['TimeMonthName', 'NumDelaysWeather']].groupby('TimeMonthName', observed=False).sum()
result
```

We then click on the Chart View icon once more, but something’s wrong!
Are we really saying that there were no flights delayed in February? That can’t be right. Let’s check our assumption with some more code:
```python
df['TimeMonthName'].value_counts()
```

Aha! Now we can see that `Febuary` has been misspelt in our dataset, so it doesn’t match the correct spelling in our `month_order` variable. Let’s update the spelling in our dataset with this code:
df["TimeMonthName"] = df["TimeMonthName"].replace("Febuary", "February") df['TimeMonthName'].value_counts()Great, that looks right. Now we should be able to re-run our earlier code and get a chart view that we can interpret:
From this view, we can see that there is a higher number of delays during the months of December, January, and February, and then again in June, July, and August. However, we have not standardized this data against the total number of flights, so there may simply be more flights in those summer and winter months, which would explain the higher number of delays.
7. Sharing your insights and charts

When your masterpiece is complete, you’ll probably want to export data, and you can do that in various ways with Jupyter notebooks in PyCharm.
Exporting a DataFrame

You can export a DataFrame by clicking on the down arrow on the right-hand side:
You have lots of helpful formats to choose from, including SQL, CSV, and JSON:
Exporting charts

If you prefer to export the interactive plot, you can do that too by clicking on the Export to PNG icon on the right-hand side:
Viewing your notebook in a browser

You can view your whole Jupyter notebook at any time in a browser by clicking the icon in the top-right corner of your notebook:
Finally, if you want to export your Jupyter notebook to a Python file, 2024.2 lets you do that too! Right-click on your Jupyter notebook in the Project tool window and select Convert to Python File. Follow the instructions, and you’re done!
Summary

Using Jupyter notebooks inside PyCharm Professional provides extensive functionality, enabling you to create code faster, explore data easily, and export your projects in the formats that matter to you.
Download PyCharm Professional to try it out for yourself! Get an extended trial today and experience the difference PyCharm Professional can make in your data science endeavors.
Use the promo code “PyCharmNotebooks” at checkout to activate your free 60-day subscription to PyCharm Professional. The free subscription is available for individual users only.
Activate your 60-day trial

Zato Blog: Smart IoT integrations with Akenza and Python
The Akenza IoT platform, on its own, excels in collecting and managing data from a myriad of IoT devices. However, it is integrations with other systems, such as enterprise resource planning (ERP), customer relationship management (CRM) platforms, workflow management or environmental monitoring tools that enable a complete view of the entire organizational landscape.
Complementing Akenza's capabilities, and enabling the smooth integrations, is the versatility of Python programming. Given how flexible Python is, the language is a natural choice when looking for a bridge between Akenza and the unique requirements of an organization looking to connect its intelligent infrastructure.
This article is about combining the two, Akenza and Python. At the end of it, you will have:
- A bi-directional connection to Akenza using Python and WebSockets
- A Python service subscribed to and receiving events from IoT devices through Akenza
- A Python service that will be sending data to IoT devices through Akenza
Since WebSocket connections are persistent, their usage enhances the responsiveness of IoT applications, which in turn lets data exchange occur in real time, fostering a dynamic and agile integrated ecosystem.
Python and Akenza WebSocket connections

First, let's have a look at the full Python code - to be discussed later.
```python
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import WSXAdapter

# ###############################################################################################

if 0:
    from zato.server.generic.api.outconn.wsx.common import OnClosed, \
         OnConnected, OnMessageReceived

# ###############################################################################################

class DemoAkenza(WSXAdapter):

    # Our name
    name = 'demo.akenza'

    def on_connected(self, ctx:'OnConnected') -> 'None':
        self.logger.info('Akenza OnConnected -> %s', ctx)

# ###############################################################################################

    def on_message_received(self, ctx:'OnMessageReceived') -> 'None':

        # Confirm what we received
        self.logger.info('Akenza OnMessageReceived -> %s', ctx.data)

        # This is an indication that we are connected ..
        if ctx.data['type'] == 'connected':

            # .. for testing purposes, use a fixed asset ID ..
            asset_id:'str' = 'abc123'

            # .. build our subscription message ..
            data = {'type': 'subscribe', 'subscriptions': [{'assetId': asset_id, 'topic': '*'}]}

            ctx.conn.send(data)

        else:
            # .. if we are here, it means that we received a message other than type "connected".
            self.logger.info('Akenza message (other than "connected") -> %s', ctx.data)

# ###############################################################################################

    def on_closed(self, ctx:'OnClosed') -> 'None':
        self.logger.info('Akenza OnClosed -> %s', ctx)

# ###############################################################################################
```

Now, deploy the code to Zato and create a new outgoing WebSocket connection. Replace the API key with your own and make sure to set the data format to JSON.
Receiving messages from WebSockets

The WebSocket Python services that you author have three methods of interest, each reacting to specific events:
- on_connected - Invoked as soon as a WebSocket connection has been opened. Note that this is a low-level event and, in the case of Akenza, it does not mean yet that you are able to send or receive messages from it.

- on_message_received - The main method that you will be spending most time with. Invoked each time a remote WebSocket sends, or pushes, an event to your service. With Akenza, this method will be invoked each time Akenza has something to inform you about, e.g. that you subscribed to messages or that a device has sent you new data.

- on_closed - Invoked when a WebSocket has been closed. It is no longer possible to use a WebSocket once it has been closed.
Let's focus on on_message_received, which is where the majority of the action takes place. It receives a single parameter of type OnMessageReceived which describes the context of the received message. That is, it is in the "ctx" that you will find both the current request as well as a handle to the WebSocket connection through which you can reply to the message.
The two important attributes of the context object are:
- ctx.data - A dictionary of data that Akenza sent to you

- ctx.conn - The underlying WebSocket connection through which the data was sent and through which you can send a response
Now, the logic in the on_message_received method is clear:
- First, we check if Akenza confirmed that we are connected (type=='connected'). You need to check the type of a message each time Akenza sends something to you and react to it accordingly.

- Next, because we know that we are already connected (e.g. our API key was valid) we can subscribe to events from a given IoT asset. For testing purposes, the asset ID is given directly in the source code but, in practice, this information would be read from a configuration file or database.

- Finally, for messages of any other type we simply log their details. Naturally, a full integration would handle them per what is required in given circumstances, e.g. by transforming and pushing them to other applications or management systems.
A sample message from Akenza will look like this:
```
INFO - WebSocketClient - Akenza message (other than "connected") -> {'type': 'subscribed', 'replyTo': None, 'timeStamp': '2023-11-20T13:32:50.028Z', 'subscriptions': [{'assetId': 'abc123', 'topic': '*', 'tagId': None, 'valid': True}], 'message': None}
```

How to send messages to WebSockets

An aspect not to be overlooked is communication in the other direction, that is, the sending of messages to WebSockets. For instance, you may have services invoked through REST APIs, or perhaps from a scheduler, and their job will be to transform such calls into configuration commands for IoT devices.
Here is the core part of such a service, reusing the same Akenza WebSocket connection:
```python
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

# ##############################################################################################

class DemoAkenzaSend(Service):

    # Our name
    name = 'demo.akenza.send'

    def handle(self) -> 'None':

        # The connection to use
        conn_name = 'Akenza'

        # Get a connection ..
        with self.out.wsx[conn_name].conn.client() as client:

            # .. and send data through it.
            client.send('Hello')

# ##############################################################################################
```

Note that responses to the messages sent to Akenza will be received using your first service's on_message_received method - WebSockets-based messaging is inherently asynchronous and the channels are independent.
Now, we have a complete picture of real-time, IoT connectivity with Akenza and WebSockets. We are able to establish persistent, responsive connections to assets, we can subscribe to and send messages to devices, and that lets us build intelligent automation and integration architectures that make use of powerful, emerging technologies.
More resources

➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
Django Weblog: Nominate a Djangonaut for the 2024 Malcolm Tredinnick Memorial Prize
Hello Everyone 👋 It is that time of year again when we recognize someone from our community in memory of our friend Malcolm.
Malcolm was an early core contributor to Django and had both a huge influence and impact on Django as we know it today. Besides being knowledgeable he was also especially friendly to new users and contributors. He exemplified what it means to be an amazing Open Source contributor. We still miss him to this day.
The prize

The Django Software Foundation Prizes page summarizes it nicely:
The Malcolm Tredinnick Memorial Prize is a monetary prize, awarded annually, to the person who best exemplifies the spirit of Malcolm’s work - someone who welcomes, supports, and nurtures newcomers; freely gives feedback and assistance to others, and helps to grow the community. The hope is that the recipient of the award will use the award stipend as a contribution to travel to a community event -- a DjangoCon, a PyCon, a sprint -- and continue in Malcolm’s footsteps.

Please make your nominations using our form: 2024 Malcolm Tredinnick Memorial Prize.
We will take nominations until Monday, September 30th, 2024, Anywhere on Earth, and will announce the winner(s) soon after the next DSF Board meeting in October. If you have any questions please reach out to the DSF Board at foundation@djangoproject.com.
Russ Allbery: Review: The Wings Upon Her Back
Review: The Wings Upon Her Back, by Samantha Mills
Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394

The Wings Upon Her Back is a political steampunk science fantasy novel. If the author's name sounds familiar, it may be because Samantha Mills's short story "Rabbit Test" won Nebula, Locus, Hugo, and Sturgeon awards. This is her first novel.
Winged Zemolai is a soldier of the mecha god and the protege of Mecha Vodaya, the Voice. She has served the city-state of Radezhda by defending it against all enemies, foreign and domestic, for twenty-six years. Despite that, it takes only a moment of errant mercy for her entire life to come crashing down. On a whim, she spares a kitchen worker who was concealing a statue of the scholar god, meaning that he was only pretending to worship the worker god like all workers should. Vodaya is unforgiving and uncompromising, as is the sleeping mecha god. Zemolai's wings are ripped from her back and crushed in the hand of the god, and she's left on the ground to die of mechalin withdrawal.
The Wings Upon Her Back is told in two alternating timelines. The main one follows Zemolai after her exile as she is rescued by a young group of revolutionaries who think she may be useful in their plans. The other thread starts with Zemolai's childhood and shows the reader how she became Winged Zemolai: her scholar family, her obsession with flying, her true devotion to the mecha god, and the critical early years when she became Vodaya's protege. Mills maintains the separate timelines through the book and wraps them up in a rather neat piece of symbolic parallelism in the epilogue.
I picked up this book on a recommendation from C.L. Clark, and yes, indeed, I can see why she liked this book. It's a story about a political awakening, in which Zemolai slowly realizes that she has been manipulated and lied to and that she may, in fact, be one of the baddies. The Wings Upon Her Back is more personal than some other books with that theme, since Zemolai was specifically (and abusively) groomed for her role by Vodaya. Much of the book is Zemolai trying to pull out the hooks that Vodaya put in her or, in the flashback timeline, the reader watching Vodaya install those hooks.
The flashback timeline is difficult reading. I don't think Mills could have left it out, but she says in the afterword that it was the hardest part of the book to write and it was also the hardest part of the book to read. It fills in some interesting bits of world-building and backstory, and Mills does a great job pacing the story revelations so that both threads contribute equally, but mostly it's a story of manipulative abuse. We know from the main storyline that Vodaya's tactics work, which gives those scenes the feel of a slow-motion train wreck. You know what's going to happen, you know it will be bad, and yet you can't look away.
It occurred to me while reading this that Emily Tesh's Some Desperate Glory told a similar type of story without the flashback structure, which eliminates the stifling feeling of inevitability. I don't think that would have worked for this story. If you simply rearranged the chapters of The Wings Upon Her Back into a linear narrative, I would have bailed on the book. Watching Zemolai being manipulated would have been too depressing and awful for me to make it to the payoff without the forward-looking hope of the main timeline. It gave me new appreciation for the difficulty of what Tesh pulled off.
Mills uses this interwoven structure well, though. At about 90% through this book I had no idea how it could end in the space remaining, but it reaches a surprising and satisfying conclusion. Mills uses a type of ending that normally bothers me, but she does it by handling the psychological impact so well that I couldn't help but admire it. I'm avoiding specifics because I think it worked better when I wasn't expecting it, but it ties beautifully into the thematic point of the book.
I do have one structural objection, though. It's one of those problems I didn't notice while reading, but that started bothering me when I thought back through the story from a political lens. The Wings Upon Her Back is Zemolai's story, her redemption arc, and that means she drives the plot. The band of revolutionaries are great characters (particularly Galiana), but they're supporting characters. Zemolai is older, more experienced, and knows critical information they don't have, and she uses it to effectively take over. As setup for her character arc, I see why Mills did this. As political praxis, I have issues.
There is a tendency in politics to believe that political skill is portable and repurposable. Converting opposing operatives to the cause is welcomed not only because they indicate added support, but also because they can use their political skill to help you win instead. To an extent this is not wrong, and is probably the most true of combat skills (which Zemolai has in abundance). But there's an underlying assumption that politics is symmetric, and a critical reason why I hold many of the political positions that I do hold is that I don't think politics is symmetric.
If someone has been successfully stoking resentment and xenophobia in support of authoritarians, converts to an anti-authoritarian cause, and then produces propaganda stoking resentment and xenophobia against authoritarians, this is in some sense an improvement. But if one believes that resentment and xenophobia are inherently wrong, if one's politics are aimed at reducing the resentment and xenophobia in the world, then in a way this person has not truly converted. Worse, because this is an effective manipulation tactic, there is a strong tendency to put this type of political convert into a leadership position, where they will, intentionally or not, start turning the anti-authoritarian movement into a copy of the authoritarian movement they left. They haven't actually changed their politics because they haven't understood (or simply don't believe in) the fundamental asymmetry in the positions. It's the same criticism that I have of realpolitik: the ends do not justify the means because the means corrupt the ends.
Nothing that happens in this book is as egregious as my example, but the more I thought about the plot structure, the more it bothered me that Zemolai never listens to the revolutionaries she joins long enough to wrestle with why she became an agent of an authoritarian state and they didn't. They got something fundamentally right that she got wrong, and perhaps that should have been reflected in who got to make future decisions. Zemolai made very poor choices and yet continues to be the sole main character of the story, the one whose decisions and actions truly matter. Maybe being wrong about everything should be disqualifying for being the main character, at least for a while, even if you think you've understood why you were wrong.
That problem aside, I enjoyed this. Both timelines were compelling and quite difficult to put down, even when they got rather dark. I could have done with less body horror and a few fewer fight scenes, but I'm glad I read it.
Science fiction readers should be warned that the world-building, despite having an intricate and fascinating surface, is mostly vibes. I started the book wondering how people with giant metal wings on their back can literally fly, and thought the mentions of neural ports, high-tech materials, and immune-suppressing drugs might mean that we'd get some sort of explanation. We do not: heavier-than-air flight works because it looks really cool and serves some thematic purposes. There are enough hints of technology indistinguishable from magic that you could make up your own explanations if you wanted to, but that's not something this book is interested in. There's not a thing wrong with that, but don't get caught by surprise if you were in the mood for a neat scientific explanation of apparent magic.
Recommended if you like somewhat-harrowing character development with a heavy political lens and steampunk vibes, although it's not the sort of book that I'd press into the hands of everyone I know. The Wings Upon Her Back is a complete story in a single novel.
Content warning: the main character is a victim of physical and emotional abuse, so some of that is a lot. Also surgical gore, some torture, and genocide.
Rating: 7 out of 10
Oliver Davies' daily list: Experimenting with the Default Content module
I recently sent a database to a client whose new Drupal website I'm building.
I'd populated it with some default users, nodes and menu links that they'd be able to review after they import the database into their hosting.
That worked well, but I've also recently been using the Default Content module which exports entities into YAML and saves them as code alongside the configuration.
Now I can install the website from scratch using the exported configuration to re-add the content types, block types, etc, and by enabling a custom module, all the default content will also be recreated.
I can tear the site down now and rebuild it as often as I like and avoid contaminating my environment with any rogue configuration or content changes.
Everything is reproducible.
I also wouldn't have needed to send the database to the client. They could have installed Drupal and followed the same steps I would do locally and got exactly the same result.
I like this approach and can see me using it more on future projects.
Python⇒Speed: Let's build and optimize a Rust extension for Python
If your Python code isn’t fast enough, you have many options for compiled languages to write a faster extension. In this article we’ll focus on Rust, which benefits from:
- Modern tooling, including a package repository called crates.io and a built-in build tool (cargo).
- Excellent Python integration and tooling. The Rust package (packages are known as “crates” in Rust) for Python support is PyO3. For packaging, you can use setuptools-rust to integrate with existing setuptools projects, or Maturin for standalone extensions; a brief usage sketch follows this list.
- Memory safety and thread safety, so it’s much less prone to crashes or memory corruption than C and C++.
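For a sense of the end result, here is a hedged sketch of the Python side only; the module name `myext` and the function `sum_as_string` are illustrative assumptions (the kind of function the PyO3 getting-started template generates), built into your environment with `maturin develop`:

```python
# Hypothetical usage sketch: `myext` and `sum_as_string` are illustrative
# names for a PyO3-based extension built with `maturin develop`.
import myext

# The compiled Rust function is called like any other Python function.
print(myext.sum_as_string(2, 3))  # -> "5"
```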
In particular, we’ll:
- Implement a small algorithm in Python.
- Re-implement it as a Rust extension.
- Optimize the Rust version so it runs faster.
This Week in KDE Apps
Welcome to the first post in our "This Week in KDE Apps" series! You may have noticed that Nate's "This Week in KDE" blog posts no longer cover updates about KDE applications. KDE has grown significantly over the years, making it increasingly difficult for just one person to keep track of all the changes that happen each week in Plasma, and to cover the rest of KDE as well.
After discussing this at Akademy, we decided to create a parallel blog series specifically focused on KDE applications, supported by a small team of editors. This team is initially constituted by Tobias Fella, Joshua Goins and Carl Schwan.
Our goal is to cover as much as possible of what's happening in the KDE world, but we also encourage KDE app developers to collaborate with us to ensure we don't miss anything important. This collaboration will take place on Invent and on Matrix #this-week-kde-apps:kde.org.
We plan to publish a new blog post every Sunday, bringing you a summary of the previous week's developments.
This week we look at news regarding NeoChat, KDE's Matrix chat client; Itinerary, the travel assistant that lets you plan all your trips; the Gwenview image viewer; our sleek music player Elisa; KleverNotes, KDE's new note-taking application; the KStars astronomy software; and Konsole, the classic KDE terminal emulator loaded with features and utilities.
We also look at how Android support has been subtly improved, and the effort to clean up our software catalogue, retiring unmaintained programs and getting rid of cruft.
Let's get started!
NeoChat

Emojis in NeoChat are now all correctly detected by using ICU instead of a simple regex. (Claire, NeoChat 24.08.2, Link)
On mobile, NeoChat doesn't open any room by default any more, offering instead a list of rooms and users. (Bart Ribbers, NeoChat 24.08.02, Link)
Filtering the list of users is back! (Tobias Fella, NeoChat 24.08.02, Link)
Itinerary

The seat information on public transport is now displayed in a more compact layout. (Carl Schwan, Itinerary 24.12.0, Link)
Gwenview
Rendering previews for RAW images is now much faster as long as KDcraw is installed and available. (Fabian Vogt, Gwenview 24.12.0, Link)
Elisa

We fixed playing tracks without metadata. (Pedro Nishiyama, Elisa 24.08.2, Link)
KleverNotes

The KleverNotes editor now comes with a powerful highlighter. (Louis Schul, KleverNotes 1.1.0, Link)
KStars

The scheduler will now show a small popup window graphing the altitude of the target for that night. (Hy Murveit, KStars 3.7.0, Link)
Konsole

You can set the cursor's color in Konsole using the OSC 12 escape sequence (e.g., printf '\e]12;red\a'). (Matan Ziv-Av, Konsole 24.12.0, Link)
Android Support

The status bars on Android apps now follow the colors of the Kirigami applications. (Volker Krause, Craft backports, Link)
Cleaning Up

We have archived multiple old applications with no dedicated maintainers and no activity. This applies to Kuickshow, Kopete and Trojita, among others. Link
...And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out Nate's blog and KDE's Planet, where you can find more news from other KDE contributors.
Get Involved

The KDE organization has become important in the world, and your time and contributions have helped achieve that status. As we grow, it’s going to be equally important that your support become sustainable.
We need you for this to happen. You can help KDE by becoming an active community member and getting involved. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to be a programmer, either. There are many things you can do: you can help hunt and confirm bugs, even maybe solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.
You can also help us by donating. Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general help KDE continue bringing Free Software to the world.
Dirk Eddelbuettel: RcppFastAD 0.0.3 on CRAN: Updated
A new release 0.0.3 of the RcppFastAD package by James Yang and myself is now on CRAN.
RcppFastAD wraps the FastAD header-only C++ library by James, which provides a C++ implementation of both forward and reverse mode automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release turns compilation to the C++20 standard, as newer clang++ versions complained about a particular statement (which they took to be C++20) when compiling under C++17. So we obliged.
The NEWS file for this release follows.
Changes in version 0.0.3 (2024-09-15)

- The package now compiles under the C++20 standard to avoid a warning under clang++-18 (Dirk addressing #9)

- Minor updates to continuous integration and badges have been made as well
Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
#! code: Drupal 11: Using The Finished State In Batch Processing
This is the third article in a series of articles about the Batch API in Drupal. The Batch API is a system in Drupal that allows data to be processed in small chunks in order to prevent timeout errors or memory problems.
So far in this series we have looked at creating a batch process using a form and then creating a batch class so that batches can be run through Drush. Both of these examples used the Batch API to run a set number of items through a set number of process function callbacks. When setting up the batch run we created a list of items that we wanted to process and then split this list up into chunks, each chunk being sent to a batch process callback.
There is another way to set up the Batch API that will run the same number of operations without defining how many times we want to run them first. This is possible by using the "finished" setting in the batch context.
Let's create a batch process that we can run and control using the finished setting.
Setting Up

First we need to create a batch process that will accept the array we want to process. This is the same array as we have processed in the last two articles, but in this case we are passing the entire array to a single callback via the addOperation() method of the BatchBuilder class.
Raju Devidas: Setting a local test deployment of moinmoin wiki
Kdenlive 24.08.1 released
Kdenlive 24.08.1 is out and we urge all to upgrade. This version fixes recent playback and render regressions while fixing a wide range of bugs.
Full changelog:
- Fix reassigning timecode to project clip. Commit. Fixes bug #492697.
- Fix possible crash on undo/redo single selection move. Commit.
- Fix dragging transitions to a clip cut to create a mix. Commit.
- Fix multiple selection broken. Commit.
- Fix clip offset not appearing on selection in timeline. Commit.
- Ensure bin clips with effects disabled keep their effects disabled when added to a new sequence. Commit.
- Fix keyframe at last frame prevents resizing clip on high zoom. Commit.
- Fix effects/compositions list size. Commit. Fixes bug #492586.
- Fix compositions cannot be easily selected in timeline. Commit.
- Replace : and ? chars in guides names for rendering. Commit. See bug #492595.
- Don’t trigger timeline scroll when mouse exits timeline on a clip drag, it caused incorrect droppings and ghost clips. Commit. See bug #492720.
- Fix scrolling timeline with rubberband or when dragging from file manager can move last selected clip in timeline. Commit. Fixes bug #492635.
- Fix adding marker from project notes always adds it at 00:00. Commit. Fixes bug #492697.
- Fix blurry widgets on high DPI displays. Commit.
- Fix keyframe param not correctly enabled on first keyframe click. Commit.
- Fix curveeditor crash on empty track. Commit.
- Ensure rendering with separate file for each audio track keeps the correct audio tag in the file name. Commit.
- Fix render project folder sometimes lost, add proper enums instead of unreadable ints. Commit. See bug #492476.
- Fix MLT lumas not correctly recognized by archive feature. Commit. Fixes bug #492435.
- Fix configure toolbars messing UI layout. Commit.
- Effects List: ensure deprecated category is always listed last. Commit.
- Fix tabulations in Titler (requires latest MLT git). Commit.
- Titler: ensure only plain text can be pasted, prepare support for tabulations (needs MLT patch). Commit.
- Don’t accept empty whisper device. Commit.
- Fix ffmpeg path for Whisper on Mac. Commit.
- Fix archive doesn’t save the video assets when run multiple times. Commit.
- Fix document notes timecode links may be broken after project reload. Commit. See bug #443597.
- Fix broken qml font on AppImage. Commit.
- Remove incorrect taskmanager unlock. Commit.
The post Kdenlive 24.08.1 released appeared first on Kdenlive.
Russell Coker: Kogan AX1800 Wifi6 Mesh
I previously blogged about the difficulties in getting a good Wifi mesh network setup [1].
I bought the Kogan AX1800 Wifi6 Mesh with 3 nodes for $140, the price has now dropped to $130. It’s only Wifi 6 (not 6E which has the extra 6GHz frequency) because all the 6E ones were more expensive than I felt like paying.
I’ve got it running and it’s working really well. One of my laptops has a damaged wire connecting to its Wifi device which decreased the signal to a degree that I could usually only connect to wifi when in the computer room (and then walk with it to another room once connected). Now I can connect that laptop to wifi in any part of my home. I can now get decent wifi access in my car in front of my home, which covers the important corner case of walking to my car and then immediately asking Google Maps for directions. Previously my phone would be deciding whether to switch away from wifi due to poor signal and that would delay getting directions; now I get directions quickly on Google Maps.
I’ve done tests with the Speedtest.net Android app and now get speeds of about 52Mbit/17Mbit in all parts of my home which is limited only by the speed of my NBN connection (one of the many reasons for hating conservatives is giving us expensive slow Internet). As my main reason for buying the devices is for Internet access they have clearly met my reason for purchase and probably meet the requirements for most people as well. Getting that speed is not trivial, my neighbours have lots of Wifi APs and bandwidth is congested. My Kogan 4K Android TV now plays 4K Netflix without pausing even though it only supports 2.4GHz wifi, so having a wifi mesh node next to the TV seems to help it.
I did some tests with the Olive Tree FTP server on a Galaxy Note 9 phone running the stock Samsung Android and got over 10MByte (80Mbit) upload and 8Mbyte (64Mbit) download speeds. This might be limited by the Android app or might be limited by the older version of Android. But it still gives higher speeds than my home Internet connection and much higher speeds than I need from an Android device.
Running iperf on Linux laptops talking to a Linux workstation that’s wired to the main mesh node I get speeds of 27.5Mbit from an old laptop on 2.4GHz wifi, 398Mbit from a new Wifi5 laptop when near the main mesh node, and 91Mbit from the same laptop when at the far end of my home. So not as fast as I’d like but still acceptable speeds.
The claims about Wifi 6 vs Wifi 5 speeds are that 6 will be about 3x faster. Three times the ~400Mbit I measured over Wifi 5 is about 1.2Gbit, which would be 20% faster than the Gigabit ethernet ports on the wifi nodes. So while 2.5Gbit ethernet on Wifi 6 APs would be a good feature to have, it seems that it might provide a 20% benefit at some future time when I have laptops with Wifi 6. At this time all the devices with 2.5Gbit ethernet cost more than I wanted to pay, so I’m happy with this. It will probably be quite a while before laptops with Wifi 6 are in the price range I feel like paying.
For Wifi 6E it seems that anything less than 2.5Gbit ethernet will be a significant bottleneck. But I expect that by the time I buy a Wifi 6E mesh they will all have 2.5Gbit ethernet as standard.
The configuration of this device was quite easy via the built in web pages, everything worked pretty much as I expected and I hardly had to look at the manual. The mesh nodes are supposed to connect to each other when you press hardware buttons but that didn’t work for me so I used the web admin page to tell them to connect which worked perfectly. The admin of this seemed to be about as good as it gets.
Conclusion

The performance of this mesh hardware is quite decent. I can’t know for sure if it’s good or bad because performance really depends on what interference there is. But using this means that for me the Internet connection is now the main bottleneck for all parts of my home and I think it’s quite likely that most people in Australia who buy it will find the same result.
So for everyone in Australia who doesn’t have fiber to their home this seems like an ideal set of mesh hardware. It’s cheap, easy to setup, has no cloud stuff to break your configuration, gives quite adequate speed, and generally just does the job.
CodeLift: Introduction to Diffy for Visual Regression Testing
Oliver Davies' daily list: Looking for alpha testers
As someone who works on multiple Drupal applications, I know it can be tricky to keep on top of all the available updates.
So, I'm building a SaaS project to display all your available updates in one place.
If you're a freelancer or work for an agency or any team that works on multiple Drupal applications, this could be useful for you.
If this is you, I'm looking for alpha testers to help me test it.
If you're interested, reply and let me know.
Python Morsels: Boolean operators
Python's Boolean operators are used for combining Boolean expressions and negating Boolean expressions.
Table of contents
- Combining two if statements using and
- Combining expressions with Boolean operators
- Using or instead of and
- Negating expressions
- Embrace and, or, and not in your Boolean expressions
Here we have a program called word_count.py:
```python
words_written_today = int(input("How many words did you write today? "))

if words_written_today < 50_000/30:
    print("Yay! But you need to write more still.")
else:
    print("Congratulations!")
```

This program has an if statement that checks whether we've written enough words each day, with the assumption that we need to write 50,000 words every 30 days.
If our word count is under 1,666 words (50,000 / 30) it will say we need to write more:
```
$ python3 word_count.py
How many words did you write today? 500
Yay! But you need to write more still.
```

We'd like to modify our if condition to also make sure that we only require this if today's date is in the month of November.
We could do that using Python's datetime module:
>>> from datetime import date >>> is_november = date.today().month == 11That is_november variable will be True if it's November and False otherwise:
>>> is_november FalseIf we combine this with the code we had before, we could use two if statements:
```python
from datetime import date

words_written_today = int(input("How many words did you write today? "))
is_november = date.today().month == 11

if words_written_today < 50_000/30:
    if is_november:
        print("Yay! But you need to write more still.")
    else:
        print("Congratulations!")
else:
    print("Congratulations!")
```

One of our if statements checks whether we're under our word limit. The other if statement checks whether it's the month of November. If both are true then we end up printing out that we still need to write more words. Otherwise we print a success message:
```
$ python3 word_count.py
How many words did you write today? 500
Congratulations!
```

This works, but there is a better way to write this code.
We could instead use Python's and operator to combine these two conditions into one:
```python
from datetime import date

words_written_today = int(input("How many words did you write today? "))
is_november = date.today().month == 11

if is_november and words_written_today < 50_000/30:
    print("Yay! But you need to write more still.")
else:
    print("Congratulations!")
```

We're using a single if statement to ask whether it's November and whether our word count is less than we expect.
Combining expressions with Boolean operators

Python's and operator is a …
Read the full article: https://www.pythonmorsels.com/boolean-operators/

Evgeni Golov: Fixing the volume control in an Alesis M1Active 330 USB Speaker System
I've a set of Alesis M1Active 330 USB on my desk to listen to music. They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.
They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound. Well, almost any. If you'd move the volume knob long enough you might have found a position where the left speaker would work a bit, but it'd be quieter than the right one and stop working again after some time. Pretty unacceptable when you want to listen to music.
Given the right speaker was working just fine and the left would work a bit when the volume knob is moved, I was quite certain which part was to blame: the potentiometer.
So just open the right speaker (it contains all the logic boards, power supply, etc), take out the broken potentiometer, buy a new one, replace, done. Sounds easy?
Well, to open the speaker you gotta loosen 8 (!) screws on the back. At least it's not glued, right? Once the screws are removed you can pull out the back plate, which will bring the power supply, USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver, one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today", one with a 6 pin plug, one with a 5 pin one.
Unplug all of these! Yes, they are plugged, nice. Nope, still no friggin' idea how to get to the potentiometer. If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure. So we have to pull the thing from the front?
Okay, let's remove the plastic part of the knob. Right, this looks like a potentiometer. Unscrew it. No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).
Still, no movement. Let's look again from the inside! Oh ffs, there are six more screws inside, holding the front. Away with them! Just need a very long PH1 screwdriver.
Now you can slowly remove the part of the front where the potentiometer is. Be careful, the top tweeter is mounted to the front, not the main case and so is the headphone jack, without an obvious way to detach it. But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.
Great, this was the easy part!
The only thing printed on the potentiometer is "A10K". 10K is easy -- 10kOhm. A?! Wikipedia says "A" means "logarithmic", but only if made in the US or Asia. In Europe that'd be "linear". "B" in US/Asia means "linear", in Europe "logarithmic". Do I need to tap the sign again? (The sign is a print of XKCD#927.) My multimeter says in this case it's something like logarithmic. On the right channel anyway, the left one is more like a chopping board. And what's this green box at the end? Oh right, this thing also turns the power on and off. So it's a power switch.
Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch? And then in the exact right size too?!
Of course not at any of the big German electronics pharmacies. But AliExpress saves the day, again. It's even the same color!
Soldering without pulling out the cable out of the case was a bit challenging, but I've managed it and now have stereo sound again. Yay!
PS: Don't operate this thing open to try it out. 230V are dangerous!
Ned Batchelder: Cogged GitHub profile
Cog is my tool for using bits of Python to generate content inside an otherwise static file. I used it in extreme ways to generate my GitHub profile page.
If you haven’t seen it before, you can customize your GitHub profile by creating a README.md in a repo named the same as your username. So my profile is rendered from nedbat/nedbat/README.md.
My profile has a bit of static text, but much of it is badges, blog posts, links to PyPI projects, and so on. The README.md is literally a Markdown file that can be displayed by GitHub, but it’s full of HTML comments containing Python code that generates the content. The generation happens once a day in a GitHub action.
There are three kinds of lines in a file run through cog: static content, code that will generate content, and generated content. My README.md is lop-sided: it has 225 lines of code, 38 of static content, and 43 of generated content.
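If you haven’t seen a cog block before, here’s a minimal sketch (not taken from my actual profile) of what one looks like in a Markdown file:
<!-- [[[cog
# The Python between the [[[cog and ]]] markers runs when cog processes the file.
print("Hello from cog!")
]]] -->
Hello from cog!
<!-- [[[end]]] -->
Running cog -r README.md replaces everything between ]]] and [[[end]]] with whatever the code printed, and leaves the static lines alone.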
The badges are made with shields.io image URLs. To make this easier, there are Python functions for Markdown image syntax, for building shields.io badge URLs, and so on.
I can’t walk through all of the code, but I can show a few simplified versions to convey the idea. Read the file itself if you are interested in the full details.
This makes a shields.io URL:
from urllib.parse import quote, urlencode

def shields_url(
    label=None,
    message=None,
    color=None,
    label_color=None,
    logo=None,
    logo_color=None,  # accepted so the badge example below works
):
    params = {"style": "flat"}
    url = "".join([
        "/badge/",
        quote(label or ""),
        "-",
        quote(message),
        "-",
        color,
    ])
    url = "https://img.shields.io" + url
    if label_color:
        params["labelColor"] = label_color
    if logo:
        params["logo"] = logo
    if logo_color:
        params["logoColor"] = logo_color
    return url + "?" + urlencode(params)
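For example, the simplified version above produces a URL along these lines:
print(shields_url(message="Discord", color="ffe97c",
                  label_color="7289da", logo="discord"))
# https://img.shields.io/badge/-Discord-ffe97c?style=flat&labelColor=7289da&logo=discord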
This makes a Markdown image:
def md_image(image_url, text, link):
    return f'[![{text}]({image_url} "{text}")]({link})'
Now we can make a Markdown badge:
def badge(text=None, link=None, **kwargs):
    return md_image(image_url=shields_url(**kwargs), text=text, link=link)
Anything print’ed will become part of the generated portions of the file. We can add a badge to the page with:
print(badge(
    logo="discord", logo_color="white", label_color="7289da",
    message="Discord", color="ffe97c",
    text="Python Discord", link="https://discord.gg/python",
))
There are other functions built on top of these to make Mastodon badges, Stack Overflow badges, a row of badges for a PyPI project, and so on.
Building the page ends up pulling data from 10 URLs, including a JSON summary of my blog used to list recent posts. It’s satisfying to have this update automatically instead of having to copy data around.
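As a rough sketch of that pattern (the feed URL parameter and JSON field names here are invented for illustration, they’re not my real feed format):
import json
from urllib.request import urlopen

def blog_posts(feed_url, count=8):
    # Fetch a JSON summary of blog posts and print Markdown list items
    # for cog to write into the README.
    with urlopen(feed_url) as resp:
        posts = json.load(resp)
    for post in posts[:count]:
        print(f"- [{post['title']}]({post['url']})")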
The result is a convenient mix of static and generated, and it was a fun exercise in light-touch automation.
Akademy went to me
This year’s Akademy was a special one for me in many ways.
First of all, instead of me travelling to Akademy it took place in my hometown of Würzburg, Germany. While I did have a hand in organizing it, most of the credit for it goes to Tobias and David. I had a lot of fun introducing people to my area and the concept of drinking wine on a bridge.
Qt Contributor Summit
Right before Akademy there was the Qt Contributor Summit, also in Würzburg (what a coincidence!). It was great to meet old and new Qt faces and talk about topics that are relevant to KDE, like the upcoming migration of KDE API documentation to qdoc.
Akademy Talks
This Akademy I gave two talks: a long one looking back at the Qt6/KF6 transition, what went well and what didn’t, and towards what’s next for our software platform; and a lightning talk about the role of maintainers in open-source projects, why KDE doesn’t have traditional maintainers, and why that’s a good thing.
Besides that there were also a lot of interesting talks from other people, too many to mention right now. Speaking as a member of the program committee, I can say we had some tough decisions to make about what to include.
Goals
During the conference we announced the new set of Goals that were recently elected. I’m excited that my own proposal, “Streamlined Application Development Experience”, got selected, and I’m looking forward to working on it with you. Besides that, I also want to see how I can help out with the other elected goals: “We care about your input” and “KDE needs you 🫵”.
Akademy Awards
Another way this Akademy was special for me is that I was awarded an Akademy Award for my work on KDE Frameworks and Plasma. It feels great to get recognition for all the work I’ve been doing over the last seven years.
BoFs
During the week we had lots of smaller meetings and workshops (a.k.a. BoFs, the world’s most terrible acronym). I led two of them: one about my newly elected goal, where I presented my proposal in more detail, and one about my ongoing work to migrate our API documentation to qdoc. Thanks to our sysadmin Ben, we now have a website where the current (still very much WIP) state of the new API documentation page can be seen.
Other Things
What’s great about Akademy isn’t just the talks and BoFs, it’s meeting the people you only see online all year: talking to them in person, getting your code reviewed while staring at a screen together, chatting about random visions, complaining about things, laughing and enjoying things together, and wrapping up the day with a nice beer in your hand.
I’m already looking forward to next year’s Akademy, wherever that will be. Maybe it will be your place; organizing it is a lot less scary than you’d think ;).
Web Review, Week 2024-37
Alright… this is published a bit later than usual due to travels and lack of energy. Anyway, let’s go for my web review for the week 2024-37.
Fediverse Discovery Providers
Tags: tech, fediverse, search
Nice to see such a project be funded. Let’s see how far this will go.
Tags: tech, html, quality
This is clearly not a great outcome. The browser monoculture probably doesn’t help.
https://meiert.com/en/blog/html-conformance-2024/
Tags: tech, ai, machine-learning, gpt, law
This is bad. There was no way to know the book was AI-generated, and it clearly contained errors and lies.
Tags: tech, gpt, security
Looks like an interesting avenue for attacking systems which use LLMs.
https://conspirator0.substack.com/p/baiting-the-bot
Tags: tech, web, browser, servo
It’s good to see servo getting closer to being usable in a browser. Makes me dream of Falkon or Konqueror being resurrected with Servo as the engine.
https://servo.org/blog/2024/09/11/building-browser/
Tags: tech, windows, unix, design, system, architecture
Interesting exploration of the NT design compared to Unix. There was less legacy to carry around, which explains some of the design choices that could be made. In practice, similarities abound.
https://blogsystem5.substack.com/p/windows-nt-vs-unix-design
Tags: tech, debian, redhat, security
Interesting comparison of the different approaches Red Hat and Debian take to default system hardening.
https://unix.foo/posts/insecurity-of-debian/
Tags: tech, linux, kernel, power
Ever wondered what happens when you suspend or hibernate on Linux? Here is a very deep exploration of the process from the kernel perspective.
https://tookmund.com/2024/09/hibernation-preparation
Tags: tech, multithreading, system, kernel
Good reminder of what OS threads entail and why they can’t be optimized much further. There’s only so much you can do properly in userland.
https://utcc.utoronto.ca/~cks/space/blog/tech/OSThreadsAlwaysExpensive
Tags: tech, networking, performance, quic
Looks like there is still some work required on QUIC. There is a path forward though.
https://dl.acm.org/doi/10.1145/3589334.3645323
Tags: tech, json, tools
Looks like a very nice tool to deal with JSON files.
https://github.com/josephburnett/jd
Tags: tech, linux, profiling, tools, processes
Looks like an interesting little profiling tool. The article explains quite well how it was built. It can be a nice blueprint for making other such tools.
https://tinkering.xyz/proctrace/
Tags: tech, python, packaging
It feels more and more like uv might turn out to be a game changer for the Python ecosystem.
https://mkennedy.codes/posts/python-docker-images-using-uv-s-new-python-features/
Tags: tech, python, foss, community, business
There is a sane conversation going on around uv in the Python community. Here is a good summary.
https://simonwillison.net/2024/Sep/8/uv-under-discussion-on-mastodon/
Tags: tech, c++
Clearly nice examples of the quality-of-life improvements coming with C++26.
https://mariusbancila.ro/blog/2024/09/06/whats-new-in-c26-part-1/
Tags: tech, c++, performance, memory
Good reminder that packing your data is generally the right move when squeezing for performance.
https://lemire.me/blog/2024/09/09/replace-stdstring-by-stdstring_view-when-you-can/
Tags: tech, failure, exceptions
There are a couple of flaws in this article, I think. For instance, the benchmark part looks fishy to me. It’s also a bit opinionated and goes too far in advocating exceptions at the expense of error values. Still, I think it shows quite well that we can’t do without exceptions entirely, even when error values are available. In my opinion, we’re still learning how both can be cleverly used in a code base.
https://cedardb.com/blog/exceptions_vs_errors/
Tags: tech, version-control, git
A bit too much of a rant for my taste (even though I agree about the GitHub flaws). That said, it nicely illustrates a use of git range-diff, which is often overlooked.
https://gist.github.com/thoughtpolice/9c45287550a56b2047c6311fbadebed2
Tags: tech, quality, agile, project-management, product-management
He is spot on again. Scope is what allows you to create flexibility in a fixed-price project. This is what makes working incrementally a necessity.
https://tidyfirst.substack.com/p/scope-management-101
Tags: tech, engineering, career, learning
Interesting musing about what it takes for engineers to grow. Clearly there are a few paradoxes in there… it does give you ideas for managing your career, though.
https://tidyfirst.substack.com/p/the-impossibility-of-making-an-elite
Bye for now!
Akademy 2024
This week I attended the 2024 edition of KDE Akademy in Würzburg, Germany.
Akademy (CC-BY-SA 4.0 by Andy Betts)
Akademy is the people. Just a bit over 100km away from Würzburg I attended my very first Akademy in 2004. Twenty years later I still meet some of the same people, as well as some I had never met in person before. Some people I had already met in several countries this year alone, some I hadn’t seen since before the pandemic. It’s a week of hanging out with friends.
I got back physically exhausted but refreshed with many ideas and a huge motivational boost, and I can’t wait to see what will come out of all the things discussed and started there.
A big thank you to everyone who helped to make Akademy happen, and to those of you who enabled people to attend with your donations!
Topics
I’ll try to list some of the topics I ended up discussing, in talks, BoFs or elsewhere, but that’s bound to only scratch the surface. Also check out Planet KDE for more reports.
CI/CD and Craft
- Whether and how we could give tooling the ability to create MRs (e.g. for release automation).
- How can we get CI coverage for Craft and Craft Blueprint changes? At least for the latter there are some ideas.
- Possible branching strategies for Craft Blueprints, to address the problem of all changes hitting the stable package builds immediately.
- Ways to work around or remove assumptions in our CI/CD infrastructure about the number of parallel branches. Usually we have a development and a release branch, but there are cases of multiple still-active release branches (e.g. Plasma LTS, or overlaps during the Gear release cycle).
- Removing the strong version locking between the Android CI image and the target Qt version.
- Using Qt’s upcoming SBOM tooling to generate package manifests, to automate collecting and maintaining information about 3rd party dependencies we ship in application packages (for FOSS license compliance).
See also the CI/CD BoF notes and Ben’s, Hannah’s, Julius’ and my talk.
KWallet successor
How to evolve our password and credential store was also a topic, following previous discussions at GPN22 and FrOSCon.
There generally seem to be two different types of data that need their own handling and consumer-facing APIs:
- Usernames and passwords, passkeys or 2FA secrets that you might want to sync between different devices.
- Device-bound secrets that are not shareable, like OAuth tokens or XDG portal secrets.
Building blocks for parts of this exist, but even when putting everything together there are still gaps.
Migration from the status quo will also be challenging, as many different things need to happen in the right order, not all of which are under our control.
Localization
- Qt 6.6 added QQmlEngine::markCurrentFunctionAsTranslationBinding(), which should allow us to make our i18n QML functions automatically reevaluate on language changes. That would be an enormous step forward for making runtime language changes work in QML applications, but it still requires a creative solution for the dependency issues its use would cause in KF::I18n.
- Debugged various cases of our Android apps mixing up translations from different languages. All of that seems to trace back to wrong fallback handling of non-US English-language locales (we should prefer en_US as a fallback in that case, but end up using secondary languages first instead). And newer Android versions seem to have separated the region from the language settings, making it easier to hit this issue.
Being able to build our libraries and applications statically has been on the wishlist for a long time. Work has happened in that direction, but we haven’t yet gotten to the point of putting it all together.
There’s now a stronger need for this though, with the first bits of iOS support landing in Craft, as Qt on iOS can only be linked statically.
Emergency alerts
Thursday morning the Plasma Mobile BoF coincided with the yearly test of Germany’s emergency alert systems. And while we didn’t manage to capture the cell broadcast with ModemManager, the push-notification-based system worked.
Public emergency alert notification.
I also got a data feed for New Zealand earthquake warnings, and we discussed ways to make push notifications work on Linux mobile devices even in power save mode, something that will benefit more than just the emergency alerts.
Android
- There’s a new Qt JNI array API coming, similar to something we already have in the KAndroidExtras code. More of that in Qt should help reduce the dependencies of the Android platform calendar integration, making it easier to move it to KF::CalendarCore.
- All pieces of the window insets color API have been merged, so the Android status and navigation bars now follow the Breeze style color for KDE apps.
- There’s agreement on retiring the KDE Frameworks 5 Android CI coverage, which would remove quite some maintenance burden. We don’t use this anywhere anymore, and external users of KF5 on Android are exceedingly unlikely, as Qt5 will likely no longer produce APKs that comply with Google Play Store guidelines.
- We discussed ideas for a cross-platform alarm/wakeup API, to be added to KIdleTime. That is, timers that also work while the application isn’t running, or even when the device is in sleep mode.
Kongress generally worked, and given the incoming wishes for additional features it seems it was actually used.
We did learn though that rolling out updates to event-specific map content needs to be possible fairly quickly; this tended to require manual CDN flushes too often.
I also got a chance to try the indoor localization solution from the team we met at 37C3 in the Akademy venue. It’s unfortunately not Free Software, but it’s nevertheless interesting to see what performance/precision can be achieved without special infrastructure in the building, with just the existing radio beacons, inertial sensors and a building map. Still a bit out of reach for us, but if the past is any indication we’ll eventually get there as well I guess.
See also my talk on OSM indoor venue maps in Kongress.
Itinerary
Conference travel of course also results in work around KDE Itinerary:
- Nobody got lost on the way to Akademy due to Itinerary issues it seems. That’s a big relief.
- As this was my first chance to field-test the new two-level timeline view, a bunch of fixes and improvements followed from that.
- Identified why opening the bus stop map showed the full city map instead in Würzburg (it’s the fault of the “Ringpark”…).
- Improved stop point/quay display for large bus stations on the map.
- Andy Betts designed new public transport icons, replacing the current incoherent mix of different styles.
- As one attendee got Frankfurt Hahn’ed, we are now looking into having Itinerary warn about airports with SEO names.
Looking forward to the next opportunity to meet all of you again! At least for some I don’t have to wait very long, considering the Nextcloud Community Conference 2024 today and the Matrix conference next week in Berlin.