FLOSS Project Planets

Nextcloud Conference 2023

Planet KDE - Sat, 2023-09-23 02:00

Last weekend I attended this year’s Nextcloud conference in Berlin, together with a few other fellow KDE contributors.

Nextcloud Itinerary integration

My main involvement with Nextcloud so far has been the integration with KDE Itinerary. While this generally works fine, it’s not ideal yet when it comes to continuously getting updates of the travel document extractor to Nextcloud users.

Nextcloud uses this in the form of a statically built executable, going back to the initial prototype still using ad hoc builds of that executable. Meanwhile we have this implemented on KDE’s Gitlab CI, which should give us more reliable and reproducible results. What’s still missing though is feeding this into the corresponding Nextcloud package more or less automatically for every release, either from our Gitlab job, or by running the same job in Nextcloud’s Github setup.

Virtual file system API for cloud syncing clients

There is demand for Linux getting an equivalent to the virtual file system APIs other platforms offer for cloud syncing clients, so that all file dialogs or file managers can see the remote file trees even if not fully synchronized locally and request a file of interest to be synchronized on demand.

It came up for the Nextcloud Linux desktop client here, I have heard the same from ownCloud before, KDE wants this and I’m sure so does GNOME. It might be possible to implement this using environment-specific APIs such as KIO or GVfs, but this doesn’t scale.

This looks like something where a FreeDesktop/XDG standardized API would make sense, so that each syncing client only has to implement that one API to support all Linux environments (ideally without much extra code, given similar logic is needed for other platforms already), and where KDE/GNOME/etc would gain support for all kinds of cloud file storage systems by supporting that API.

Push notifications for CalDAV/CardDAV

Probably the biggest surprise for me was the DAVx⁵ team presenting their work on adding push support to the CalDAV/CardDAV protocols. This would allow e.g. changes to calendar events to become instantly available on all your devices, while saving energy through far less polling. This might also enable things like synchronized reminder states across multiple devices.

Technically this nicely ties in with our existing CalDAV/CardDAV connectors in KDE PIM and the work on integrating push notifications using UnifiedPush, and thus should hopefully be easy for us to add once this becomes available in servers.

For more details see the WebDAV Push Matrix room and the corresponding Github project.

Too much “AI” hype

The “AI” topic was a bit over-hyped for my taste, like in many other places as well. While things like the ethical AI rating or the focus on getting things to run locally address important problems, the features as such and their consequences weren’t questioned much. And how people can consider “AI” a tool to solve the climate crisis is beyond me.

An interesting point raised in the discussions was how to mark machine generated content. This might not even always be obvious to the direct user, and people the output is shared with have even less of a chance to assess this. And this already matters for relatively uncontroversial uses like machine translations.

As an example, some time ago I was confronted with some questionable statements I had supposedly made on the Internet (in German). Being relatively sure I hadn’t said that, I was pointed to my own website as the source. With all content here being in English, though, it eventually turned out that the person was using some automatic translation feature in their browser without even realizing it. It worked mostly fine, so this went unnoticed for quite some time, up to the point where it failed and the failure was then attributed to me instead.

Responsible use of such technology requires transparency.

Political work

Two particular highlights for me were Max Schrems and Simon Phipps presenting their respective work on fixing EU regulations. It’s quite encouraging to see that this isn’t out of reach for “us” (ie. the wider community that shares our ideas and values around free and open source software and privacy), and that the way of influencing this has become much more professional and effective in recent years, with many organizations coordinating, building relations and sharing the work on lobbying (“political communication”).

I’m happy that KDE is associated with two organizations doing good work there, FSFE and OSI, but we could probably still do more, on the EU level, on the regional/national level within the EU, and of course outside of the EU as well, be it by throwing our weight behind allies or by being present in stakeholder hearings ourselves.

Categories: FLOSS Project Planets

This week in KDE: an unfrozen panel for NVIDIA Wayland users

Planet KDE - Sat, 2023-09-23 01:57

Though the number of total Plasma 6 known issues rose this week, we managed to fix some major and longstanding ones from Plasma 5! You might recognize a few in the text below. Ultimately, these were deemed more pressing than the comparatively minor new ones. We’ll be continuing to hammer those bugs, but we do need help–your help! Discovering bugs is important, but so is fixing them, and we need help to get it done.

Plasma 6

General info

Open issues: 90

Fixed the infamous issue of Panels visually freezing in the Plasma Wayland session when using a non-Intel GPU in conjunction with the Basic QtQuick render loop and Task Manager previews turned on (David Edmundson et al, link)

Searching for apps, System Settings pages, and other things classified internally as “services” in KRunner and other KRunner-powered search tools (such as Kickoff) now matches English text as well when using the system in a language that’s not English (Alexander Lohnau, link)

When using a single-row Icons-Only Task Manager, the last opened task on the row is no longer sometimes missing under certain circumstances (Marco Martin, Link)

Fixed a bug that could cause auto-started apps with System Tray icons to sometimes not show their System Tray icons as expected until manually quit and re-launched (David Edmundson, link)

Fixed various cursor glitches and brokenness when using rotated screens in the Plasma Wayland session on a GPU that supports hardware cursors (Xaver Hugl, link)

The “Manually block sleep and screen locking” switches of multiple Battery and Brightness widgets are now synchronized; when one is toggled, all of them will change as well (Natalie Clarius, link)

Repeated messages on the lock screen now do a little bounce, rather than piling up and repeating themselves (me: Nate Graham, link):

https://i.imgur.com/DhyKaa4.mp4

Reduced resource usage in QtQuick apps that have mnemonics–those little underlines below letters when you hold down the Alt key (Kai Uwe Broulik, link 1 and link 2)

The menu item that says “Enter Edit Mode” now changes to “Exit Edit Mode” when you’re already in Edit Mode (me: Nate Graham, link)

The Kirigami.BasicListItem component has been deprecated with a planned removal in KF6, because it was too slow, heavy, and inflexible, worsening performance in QtQuick apps that used a lot of them. In its place are a new set of lightweight components that are thin wrappers around the standard Qt ItemDelegate, CheckDelegate etc. components, plus some more basic building blocks for making custom list items. This provides most of the convenience of BasicListItem, without the performance overhead (Arjen Hiemstra, link)

Other Significant Bugfixes

(This is a curated list of e.g. HI and VHI priority bugs, Wayland showstoppers, major regressions, etc.)

Gwenview now displays images more correctly when using a fractional scale factor in the Plasma Wayland session (Kai Uwe Broulik, Gwenview 24.02. link)

Fixed multiple bugs in Elisa that could cause odd and incorrect behavior when you re-arrange the contents of the playlist around the currently-playing song (Jack Hill, Elisa 24.02. Link)

Filelight once again respects the settings regarding folders exclusions and filesystem boundaries (Yifan Zhu, Filelight 23.08.2. Link)

When closing a document in Kate and KWrite that has unsaved changes, you’ll no longer see two dialogs asking you if you want to save them (Kai Uwe Broulik, Kate and KWrite 23.08.2. Link)

Widgets using the standard Plasma Calendar integration will no longer sometimes display holidays from the default region, rather than the selected one (Eugene Popov, Plasma 5.27.9. Link)

Other bug-related information of interest:

Automation & Systematization

Added some autotests for MPRIS media playback global shortcuts (Fushan Wen, link 1 and link 2)

…And everything else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

If you’re a developer, work on Qt6/KF6/Plasma 6 issues! Plasma 6 is usable for daily driving now, but still in need of bugfixing and polishing to get it into a releasable state by the end of the year.

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

And finally, KDE can’t work without financial support, so consider making a donation today! This stuff ain’t cheap and KDE e.V. has ambitious hiring goals. We can’t meet them without your generous donations!


Stack Abuse: How to Check for NaN Values in Python

Planet Python - Fri, 2023-09-22 16:12
Introduction

Today we're going to explore how to check for NaN (Not a Number) values in Python. NaN values can be quite a nuisance when processing data, and knowing how to identify them can save you from a lot of potential headaches down the road.

Why Checking for NaN Values is Important

NaN values can be a real pain, especially when you're dealing with numerical computations or data analysis. They can skew your results, cause errors, and generally make your life as a developer more difficult. For instance, if you're calculating the average of a list of numbers and a NaN value sneaks in, your result will also be NaN, regardless of the other numbers. It's almost as if it "poisons" the result - a single NaN can throw everything off.

Note: NaN stands for 'Not a Number'. It is a special floating-point value; there is no NaN for other numeric types, so a NaN value is always of type float.

NaN Values in Mathematical Operations

When performing mathematical operations, NaN values can cause lots of issues. They can lead to unexpected results or even errors. Python's math and numpy libraries typically propagate NaN values in mathematical operations, which can lead to entire computations being invalidated.

For example, in numpy, any arithmetic operation involving a NaN value will result in NaN:

import numpy as np

a = np.array([1, 2, np.nan])
print(a.sum())

Output:

nan

In such cases, you might want to consider using functions that can handle NaN values appropriately. Numpy provides nansum(), nanmean(), and others, which ignore NaN values:

print(np.nansum(a))

Output:

3.0

Pandas, on the other hand, generally excludes NaN values in its mathematical operations by default.
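A minimal illustration of that default behavior (this sketch assumes pandas and numpy are installed; skipna=True is the default for pandas reductions):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan])
print(s.sum())              # 3.0: the NaN is skipped by default (skipna=True)
print(s.sum(skipna=False))  # nan: opt back into NaN propagation
```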

How to Check for NaN Values in Python

There are many ways to check for NaN values in Python, and we'll cover some of the most common methods used in different libraries. Let's start with the built-in math library.

Using the math.isnan() Function

The math.isnan() function is an easy way to check if a value is NaN. This function returns True if the value is NaN and False otherwise. Here's a simple example:

import math

value = float('nan')
print(math.isnan(value))  # True

value = 5
print(math.isnan(value))  # False

As you can see, when we pass a NaN value to the math.isnan() function, it returns True. When we pass a non-NaN value, it returns False.

The benefit of using this particular function is that the math module is built-in to Python, so no third party packages need to be installed.
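A related trick worth knowing: IEEE 754 defines NaN as the only value that is not equal to itself, so you can also check for NaN with no imports at all (is_nan here is just an illustrative helper name, not a standard function):

```python
def is_nan(value):
    # NaN is the only value for which x != x is True (IEEE 754)
    return value != value

print(is_nan(float('nan')))  # True
print(is_nan(5))             # False
```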

Using the numpy.isnan() Function

If you're working with arrays or matrices, the numpy.isnan() function can be a nice tool as well. It operates element-wise on an array and returns a Boolean array of the same shape. Here's an example:

import numpy as np

array = np.array([1, np.nan, 3, np.nan])
print(np.isnan(array))  # array([False,  True, False,  True])

In this example, we have an array with two NaN values. When we use numpy.isnan(), it returns a Boolean array where True corresponds to the positions of NaN values in the original array.

You'd want to use this method when you're already using NumPy in your code and need a function that works well with other NumPy structures, like np.array.

Using the pandas.isnull() Function

Pandas provides an easy-to-use function, isnull(), to check for NaN values in the DataFrame or Series. Let's take a look at an example:

import numpy as np
import pandas as pd

# Create a DataFrame with NaN values
df = pd.DataFrame({'A': [1, 2, np.nan],
                   'B': [5, np.nan, np.nan],
                   'C': [1, 2, 3]})
print(df.isnull())

The output will be a DataFrame that mirrors the original, but with True for NaN values and False for non-NaN values:

       A      B      C
0  False  False  False
1  False   True  False
2   True   True  False

One thing you'll notice if you test this method out is that it also returns True for None values, hence why it refers to null in the method name. It will return True for both NaN and None.

Comparing the Different Methods

Each method we've discussed — math.isnan(), numpy.isnan(), and pandas.isnull() — has its own strengths and use-cases. The math.isnan() function is a straightforward way to check if a number is NaN, but it only works on individual numbers.

On the other hand, numpy.isnan() operates element-wise on arrays, making it a good choice for checking NaN values in numpy arrays.

Finally, pandas.isnull() is perfect for checking NaN values in pandas Series or DataFrame objects. It's worth mentioning that pandas.isnull() also considers None as NaN, which can be very useful when dealing with real-world data.

Conclusion

Checking for NaN values is an important step in data preprocessing. We've explored three methods — math.isnan(), numpy.isnan(), and pandas.isnull() — each with its own strengths, depending on the type of data you're working with.

We've also discussed the impact of NaN values on mathematical operations and how to handle them using numpy and pandas functions.


Stack Abuse: How to Position Legend Outside the Plot in Matplotlib

Planet Python - Fri, 2023-09-22 15:14
Introduction

In data visualization, we often create complex graphs that need legends so the reader can interpret them. But what if those legends get in the way of the actual data that they need to see? In this Byte, we'll see how you can move the legend so that it's outside of the plot in Matplotlib.

Legends in Matplotlib

In Matplotlib, legends provide a mapping of labels to the elements of the plot. These can be very important to help the reader understand the visualization they're looking at. Without the legend, you might not know which line represented which data! Here's a basic example of how legends work in Matplotlib:

import matplotlib.pyplot as plt

# Create a simple line plot
plt.plot([1, 2, 3, 4], [1, 4, 9, 16], label='Sample Data')

# Add a legend
plt.legend()

# Display the plot
plt.show()

This will produce a plot with a legend located in the upper-left corner inside the plot. The legend contains the label 'Sample Data' that we specified in the plt.plot() function.

Why Position the Legend Outside the Plot?

While having the legend inside the plot is the default setting in Matplotlib, it's not always the best choice. Legends can obscure important details of the plot, especially when dealing with complex data visualizations. By positioning the legend outside the plot, we can be sure that all data points are clearly visible, making our plots easier to interpret.

How to Position the Legend Outside the Plot in Matplotlib

Positioning the legend outside the plot in Matplotlib is fairly easy to do. We simply need to use the bbox_to_anchor and loc parameters of the legend() function. Here's how to do it:

import matplotlib.pyplot as plt

# Create a simple line plot
plt.plot([1, 2, 3, 4], [1, 4, 9, 16], label='Sample Data')

# Add a legend outside the plot
plt.legend(bbox_to_anchor=(1, 1.10), loc='upper right')

# Display the plot
plt.show()

In this example, bbox_to_anchor is a tuple specifying the coordinates of the legend's anchor point, and loc indicates the location of the anchor point with respect to the legend's bounding box. The coordinates are in axes fraction (i.e., from 0 to 1) relative to the size of the plot. So, (1, 1.10) positions the anchor point just outside the top right corner of the plot.

Positioning this legend is a bit more of an art than a science, so you may need to play around with the values a bit to see what works.
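If you'd rather have the legend sit entirely to the right of the axes instead of hovering over the corner, one common arrangement (a sketch; the Agg backend and the exact coordinates are just illustrative choices) anchors the legend's upper-left corner just past the axes' right edge:

```python
import io

import matplotlib
matplotlib.use('Agg')  # non-interactive backend; no window needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 9, 16], label='Sample Data')

# (1.04, 1) is just past the axes' right edge; loc='upper left' means that
# corner of the legend box is the one pinned to the anchor point
ax.legend(bbox_to_anchor=(1.04, 1), loc='upper left')

# bbox_inches='tight' grows the saved image to include the legend
buf = io.BytesIO()
fig.savefig(buf, format='png', bbox_inches='tight')
```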

Common Issues and Solutions

One common issue is the legend getting cut off when you save the figure using plt.savefig(). This happens because plt.savefig() doesn't automatically adjust the figure size to accommodate the legend. To fix this, you can use the bbox_inches parameter and set it to 'tight' like so:

plt.savefig('my_plot.png', bbox_inches='tight')

Another common issue is the legend overlapping with the plot when positioned outside. This can be fixed by adjusting the plot size or the legend size to ensure they fit together nicely. Again, this is something you'll likely have to test with many different values to find the right configuration and positioning.

Note: Adjusting the plot size can be done using plt.subplots_adjust(), while the legend size can be adjusted using legend.get_frame().
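As a sketch of the plt.subplots_adjust() approach, you can shrink the axes to reserve a strip of figure space for the legend (the 0.75 here is an arbitrary value you would tune for your own plot):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; no window needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 9, 16], label='Sample Data')
ax.legend(bbox_to_anchor=(1.02, 1), loc='upper left')

# Keep the axes within the left 75% of the figure, leaving the
# right 25% free for the legend
fig.subplots_adjust(right=0.75)
```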

Conclusion

And there you have it! In this Byte, we showed how you can position the legend outside the plot in Matplotlib and explained some common issues. We've also talked a bit about some use-cases where you'll need to position the legend outside the plot.


Ravi Dwivedi: Debconf23

Planet Debian - Fri, 2023-09-22 14:19
Official logo of DebConf23

Introduction

DebConf23, the 24th annual Debian Conference, was held in India in the city of Kochi, Kerala, from 3 September to 17 September 2023. I had been excited to attend DebConf in my home country ever since I got to know about it (more than a year ago). This was my second DebConf, as I attended one last year in Kosovo. I was very happy that I didn’t need to apply for a visa to attend the conference. This time I submitted two talks - one on Debian packaging for beginners and the other on ideas for sustainable solutions for self-hosting. I got a full bursary to attend the event (thanks a lot to Debian for that!), which is always helpful in covering the expenses, especially if the venue is a five star hotel :)

My friend Suresh, who is enthusiastic about Debian and free software, also wanted to attend DebConf. When the registration started, I reminded him to apply. We landed in Kochi on 28 August 2023, during the Onam festival. We celebrated Onam in Kochi, took a trip to Wayanad and returned to Kochi. On the evening of 3 September, we reached the venue - Four Points Hotel by Sheraton, Infopark Kochi, Ernakulam, Kerala, India.

Hotel overview

The hotel had 14 floors, and featured a swimming pool and gym - these were included in our package. The hotel gave us elevator access only to our floor and public spaces like the reception, gym, swimming pool and meal areas. The temperature inside the hotel was pretty cold and I had to buy a jacket to survive. Perhaps the hotel had a tie-up with warm clothing sellers :)

Meals

On the first day, Suresh and I went to dinner, which was at the eatery on the third floor. At the entrance, a staff member asked us how many people we wanted the table for. I told her that it was just the two of us at the moment, but since we were attending a conference, more people might join us. Even so, they gave us a table for two. Within a few minutes, Alper (from Turkey) and urbec (from Germany) showed up and joined us. So we shifted to a larger table, and as even more people joined, we were busy adding more chairs to our table. urbec had already been in Kerala for 5-6 days and was very happy with the quality and taste of the bananas in Kerala, and also afraid of the spicy food :)

Two days later, the lunch and dinner were shifted to the All Spice Restaurant on the 14th floor, but breakfast was still at the eatery. Since the eatery (on the 3rd floor) had many more options than the other venue, this move made breakfast the best meal for me and many others. Many attendees from outside India were not accustomed to the spicy and hot food. It is difficult for locals to help, because what they find non-spicy can be spicy for non-Indians. It is not easy to satisfy everyone at the dining table, but I think the organizing team did a very good job in the food department. Well, it didn’t matter for me after a point, and you will know why. The pappadam were really good, and I liked the rice labelled “Kerala rice”. I actually brought that exact rice and pappadam home during my last trip to Kochi, and everyone at my home liked it (thanks to Abhijit PA). I also wished to eat all types of payasams from Kerala, and this really happened (thanks to Sruthi, who designed the menu). Every meal had a different variety of payasam and it was awesome, although I didn’t like some of them, mostly because they were very sweet. Meals were later shifted to the ground floor (taking away the best breakfast option, which was the eatery).

Swag bag was excellent

The debconf registration desk was on the second floor. We got a very nice swag bag. The swag bags were available in multiple colors - grey, green, blue and red. The bag included an umbrella, a steel mug, a multiboot USB drive by Mostly Harmless, a thermal flask, a mug by Canonical, a [paper] coaster and stickers. It rained almost every day in Kochi while we were there, so handing out an umbrella to every attendee was a good idea.

Picture of the awesome swag bag we got at debconf23.

Nattie got a gift

One day during the breakfast, Nattie said she wants to buy a coffee filter. Next time when I went to the market, I bought a coffee filter for her as a gift. She seemed happy with the gift and was flattered to receive a gift from a young man :)

Mentoring by me

There were many newbies, and they were eager to learn and contribute to Debian. So I mentored whoever came to me and was interested in learning. I ran a packaging workshop in the bootcamp, but could only cover how to set up a Debian unstable environment, not how to package (though I covered that in my talk). Carlos (Brazil) gave a keysigning session in the bootcamp. Praveen was also mentoring in the bootcamp. I helped people understand why we sign GPG keys and how to sign them. I planned to run a workshop on it but cancelled it later.

My talk

My Debian packaging talk was on 10 September 2023. I had not prepared slides for it in advance; I thought I could do that during the trip, but I didn’t get time for it. So I prepared them within a day before the talk. Since it was mostly a tutorial, the slides didn’t need much preparation. It was possible to do in a hurry because Suresh helped me with the slides. Thanks to him.

My talk was well received by the audience as implied by their comments. I am glad that I could give an interesting presentation.

My presentation photo. Credits: Valessio

A visit to saree shop

After my talk, I went with Anisa and Kristi (both from Albania), whose fascination with Indian culture is never ending :), (along with Suresh and Alper) as they wanted to buy sarees for themselves. We took autos to Kakkanad market and found a shop with a large variety of sarees. Obviously, I had become a little familiar with the area surrounding the hotel, as I had been there for a week. Indian women usually don’t try sarees on while buying; they only select the design. But Anisa wanted to put one on, along with a photoshoot. The shop staff weren’t ready with a trial saree for this, so they took a saree from a mannequin. It took about an hour for the lady at the shop to get that saree on her. Anyone could tell that she felt in heaven wearing that saree, and she immediately bought it :) Alper also bought a saree to take back to Turkey for his mother. Suresh and I wanted to buy kurtas to go along with the mundu we already had, but we didn’t find anything that we liked.

Selfie with Anisa and Kristi.

Cheese and Wine Party

11 September was the Cheese and Wine Party, a tradition at every debconf. I brought kaju samosa and nankhatai from home. Many attendees told me they liked the samosa. During the party, I was with Abhas and had a lot of fun. Abhas brought paan packets and put them out for the Cheese and Wine Party. We discussed interesting things and ate burgers. But due to the restrictive alcohol laws in the state, it was not the same as at previous debconfs, because you can only drink alcohol served by the hotel in public places. If you buy your own alcohol, you can only drink it in private places like your room or a friend’s room, but not in public places.

Me helping with the Cheese and Wine Party

Party at my room

Last year, Joenio (Brazilian) brought pastis from France, which I liked. He brought the same alcoholic drink this year too. So I invited him to my room after the Cheese and Wine Party to have pastis. My idea was to have it with my roommate Suresh and Joenio. But then we permitted Joenio to bring as many people as he wanted. He brought some ten people, I think, and suddenly it was crowded. I was having a good time at the party, serving them snacks that Abhas gave me. The news of an alcohol party in my room spread like wildfire. Soon there were so many people that the AC was not cooling anymore and I was sweating. I left the room and roamed around the hotel for some fresh air. I came back after 1.5 hours, after sitting mostly on the ground floor with someone whose name I can’t remember. And then I met Abraham near the gym (which was my last meeting with him). I came back to my room at around 02:30 AM and nobody seemed to have realized that I was gone. They were thanking me for hosting such a good party. A lot of people left at that point, and the remaining people were playing songs and dancing (everyone was dancing all along!). I had no energy left to dance and join them. They left around 03:00 AM. But I am glad that people enjoyed partying in my room.

This picture was taken when there were few people in my room for the party.

Sadhya Thali

On 12 September, we got sadhya thali for lunch. It is a vegetarian thali served on a banana leaf on the eve of Thiruvonam. That day was not Thiruvonam, but we got a special filling lunch anyway. The rasam and payasam were especially yummy.

Sadhya Thali: A vegetarian meal served on banana leaf. Payasam and rasam were especially yummy!

Day trip

13 September was the daytrip. I chose the houseboat daytrip in Alleppey. Suresh chose the same one, and we registered for it as soon as registration opened. This was the daytrip most preferred by debconf attendees (80 people registered for it). Our bus was set to leave at 9 AM on 13 September. Suresh and I woke up at 08:40 and hurried to catch the bus in time. It took two hours to reach the place where we got the houseboat.

The houseboat experience was good. The trip featured some good scenery, and I got to experience the renowned Kerala backwaters. We were served food on the boat. We also stopped at a place and had coconut water. We came back to the boarding point by evening.

Group photo of our daytrip. Credits: Radhika Jhalani

Lost a good friend during daytrip

When we came back from the daytrip, we got the news that Abraham Raji had died by drowning. He had gone on the kayaking daytrip. I am not sure what exactly happened, but the story goes that he jumped into the water to swim and drowned.

Abraham Raji was a very good friend of mine. On my trip to Albania - Kosovo - Dubai last year, he was my roommate in the apartment in Tirana, we had a lot of discussions during DebConf22 Kosovo, and I roamed around Dubai with him. In fact, the photo of me on the homepage of this blogpost was taken by Abraham. Then I met him at MiniDebConf22 Palakkad and MiniDebConf23 Tamil Nadu. I also went to his flat in Kochi this year in June. Plus we had many projects in common. He was also a Free Software activist and was the designer of the DebConf23 logo. He also designed logos for other Debian events in India.

A selfie in memory of Abraham.

We were all pretty shocked by the news. As far as I am concerned, I have still not recovered (9 days after the incident) and still cannot believe it happened. Food doesn’t taste of anything and sleep is hard to come by. That night, Anisa and Kristi cheered me up and gave me company. Thanks a lot to them. The next day, Joenio also tried to console me. I thank him for doing a great job. I thank everyone who helped me cope with the difficult situation.

The next day (14 September), Debian project leader Jonathan Carter addressed and announced the news officially. The Debian project also published it on their website. In fact, Abraham was supposed to give a talk at that time. All the talks were cancelled that day. The conference dinner was also cancelled. I was totally devastated!

A visit to Abraham’s house

On 15 September, the conference ran two buses from the hotel to Abraham’s house in Kottayam (a 2-hour ride). I hopped on the first bus, and my mood was not very good. Evangelos (Germany) was sitting in front of me and started a conversation with me. The distraction helped, and I was back to normal for a while. Thanks to Evangelos, as he supported me a lot on that trip. He was also very impressed by my use of the StreetComplete app, which I was using to edit OpenStreetMap.

In two hours, we reached Abraham’s house. Obviously, I burst into tears and couldn’t control myself. Then I went to see the body. I met his family (mother, father and sister). I had nothing to say and felt helpless. I had no energy left, mainly due to lack of sleep over the last few days and my shrinking appetite, so I didn’t think it was a good idea for me to stay there. I went back by bus after one hour and had lunch at the hotel. I withdrew my talk scheduled for 16 September.

A Japanese gift

I got a nice Japanese gift from Niibe Yutaka (Japan) - a folder to keep papers, with ancient Japanese manga characters on it. He said he felt guilty, as he had swapped his talk slot with mine, so my talk got rescheduled from 12 September to 16 September, which I later withdrew.

Niibe Yutaka (the person on my right) from Japan (FSIJ) gave me a wonderful Japanese gift during debconf23: a folder to keep pages, with ancient Japanese manga characters printed on it. I realized I immediately needed that :) This is the Japanese gift I received.

Group photo

On 16 September, we had a group photo, and I am glad that this year I was more visible in the picture than at debconf22.

Click to enlarge

Volunteer work and talks attended

I went to training for video team and I worked as a camera operator. The Bits from DPL was nice. I enjoyed Abhas’ presentation on home automation. He basically demonstrated how he liberated home devices which work with internet. I also liked Kristi’s presentation on ways to engage with the GNOME community.

Kristi on GNOME community.

Abhas’ talk on home automation

I also attended the lightning talks on the last day. Badri, Wouter and I gave a demo on how to register on the prav app. The prav app also got its fair share of advertising during the last few days.

17 September night

On 17 September night, Suresh left the hotel and Badri joined me in my room. That night I wore a mundu (thanks to Abhijit PA, Kiran and Ananthu).

Me in mundu. Picture credits: Abhijith PA

Then I joined Kalyani, Mangesh, Ruchika, Anisa, Ananthu and Kiran. We took pictures and this marked the last night of debconf.

Departure Day

18 September was departure day. Badri slept in my room and left early in the morning (06:30 AM). I dropped him off at the hotel gate. Breakfast was at the eatery (3rd floor) again and it was good.

Sahil, Saswata, Nilesh, and I hung out for some time on the ground floor.

From left: Nilesh, Saswata, me, Sahil

I had an 8 PM flight from Kochi to Delhi, so I took a cab with Rhonda (Austria), Michael (Nigeria) and Yash (India). Other DebConf attendees joined us at the Kochi airport, where we took another selfie:

Ruchika (taking the selfie) and from left to right: Yash, Joost (Netherlands), me, Rhonda

Joost was on the same flight as me and we sat next to each other. He then took a connecting flight from Delhi to the Netherlands, while I went with Yash to New Delhi station, where we took our respective trains. I reached home on the morning of 19 September 2023.

Joost and me going to Delhi

Big thanks to the organizers

DebConf23 was hard to organize - strict alcohol laws, weird hotel rules, the death of a close friend (almost a family member) and a scary notice from the immigration bureau. People from the team are my close friends and I am proud that they organized such a good event. None of this would have been possible without the organizers, who put in more than a year of voluntary effort to produce this. Meanwhile, many of them also organized local events in the run-up to DebConf.

Shoutout to them.

The organizers also tried their best to get clearance for attendees from countries the ministry didn’t approve. I am sad that people from China, Kosovo and Iran could not join. I feel especially bad for the people from Kosovo who wanted to attend but could not (India does not consider their passport a valid travel document), as we Indians were so well received in their country last year.

Note about myself

I am writing this on 22 September 2023 and it took three days to put up this post. This was one of the most tragic and hardest posts for me to write. I literally had to force myself to write it. I have still not recovered from the loss of my friend. Thanks a lot to all those who helped me.

PS: Credits to contrapuntus for correcting grammatical mistakes.

Categories: FLOSS Project Planets

Scarlett Gately Moore: KDE: KDE Neon updates! Qt6 transition moving along.

Planet KDE - Fri, 2023-09-22 14:10

With user edition out the door last week, this week was spent stabilizing unstable!

Spent some time sorting out our Calamares installer being quite grumpy, which is now fixed by reverting an upstream change. Unstable and developer ISOs rebuilt and installable. Also spent some time sorting out some issues with using an unreleased AppStream (thanks ximion for the help with PackageKit!).

KDE applications are starting to switch to Qt6 in master this week, the big one being KDE PIM! This entails an enormous amount of re-packaging work. I have made a dent, sorta. To be continued next week. I also fixed our signond / kaccounts line for Qt6, which entailed some work on upstream code that uses QStringList.toSet, which was removed in Qt6! Always learning new things!

I have spent some time working on the KF6 content snap, working with Jarred to make sure his Qt6 content snap will work for us. Unfortunately, I do not have much time for this as I must make money to survive; donations help free up time for it. Our new proposal with Kevin’s super awesome management company has been submitted, and we will hopefully hear back next week.

Thanks for stopping by! Till next week.

If you can spare some change, consider a donation

Thank you!

https://gofund.me/b8b69e54

Categories: FLOSS Project Planets


ImageX: Drupal 10.1's Front-End Transformation: A Review of Impressive Updates

Planet Drupal - Fri, 2023-09-22 13:08

Front-end developers are committed to crafting website interfaces that are not just visually captivating but also a joy for users to interact with. It’s great to know that Drupal helps them along the way.

Categories: FLOSS Project Planets

Stack Abuse: Importing Python Modules

Planet Python - Fri, 2023-09-22 11:40
Introduction

Python allows us to create just about anything, from simple scripts to complex machine learning models. But to work on any complex project, you'll likely need to use or create modules. These are the building blocks of complex projects. In this article, we'll explore Python modules, why we need them, and how we can import them in our Python files.

Understanding Python Modules

In Python, a module is a file containing Python definitions and statements. The file name is the module name with the suffix .py added. Imagine you're working on a Python project, and you've written a function to calculate the Fibonacci series. Now, you need to use this function in multiple files. Instead of rewriting the function in each file, you can write it once in a Python file (module) and import it wherever needed.

Here's a simple example. Let's say we have a file math_operations.py with a function to add two numbers:

# math_operations.py
def add_numbers(num1, num2):
    return num1 + num2

We can import this math_operations module in another Python file and use the add_numbers function:

# main.py
import math_operations

print(math_operations.add_numbers(5, 10))  # Output: 15

In the above example, we've imported the math_operations module using the import statement and used the add_numbers function defined in the module.

Note: Python looks for module files in the directories defined in sys.path. It includes the directory containing the input script (or the current directory), the PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH), and the installation-dependent default directory. You can check the sys.path using import sys; print(sys.path).
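Beyond editing sys.path, the standard library's importlib can also load a module straight from a file path, skipping the search path entirely. Here's a self-contained sketch of that idea, reusing the math_operations example from above (the temporary-directory setup is just to make the snippet runnable on its own):

```python
import importlib.util
import tempfile
from pathlib import Path

# Write a throwaway math_operations module into a temporary directory
tmp_dir = Path(tempfile.mkdtemp())
module_path = tmp_dir / "math_operations.py"
module_path.write_text("def add_numbers(num1, num2):\n    return num1 + num2\n")

# Load the module directly from its file path, bypassing sys.path entirely
spec = importlib.util.spec_from_file_location("math_operations", module_path)
math_operations = importlib.util.module_from_spec(spec)
spec.loader.exec_module(math_operations)

print(math_operations.add_numbers(5, 10))  # 15
```

This is handy for plugin-style loading, but for ordinary project code the plain import statement is the idiomatic choice.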

But why do we need to import Python files? Why can't we just write all our code in one file? Let's find out in the next section.

Why Import Python Files?

The concept of importing files in Python is comparable to using a library or a toolbox. Imagine you're working on a project and need a specific tool. Instead of creating that tool from scratch every time you need it, you would look in your toolbox for it, right? The same goes for programming in Python. If you need a specific function or class, instead of writing it from scratch, you can import it from a Python file that already contains it.

This not only saves us from continuously rewriting code we've already written, but it also makes our code cleaner, more efficient, and easier to manage. It promotes a modular programming approach where the code is broken down into separate parts or modules, each performing a specific function. This modularity makes debugging and understanding the code much easier.

Here's a simple example of importing a Python standard library module:

import math

# Using the math library to calculate the square root
print(math.sqrt(16))

Output:

4.0

We import the math module and use its sqrt function to calculate the square root of 16.

Different Ways to Import Python Files

Python provides several ways to import modules, each with its own use cases. Let's look at the three most common methods.

Using 'import' Statement

The import statement is the simplest way to import a module. It simply imports the module, and you can use its functions or classes by referencing them with the module name.

import math

print(math.pi)

Output:

3.141592653589793

In this example, we import the math module and print the value of pi.

Using 'from...import' Statement

The from...import statement allows you to import specific functions, classes, or variables from a module. This way, you don't have to reference them with the module name every time you use them.

from math import pi

print(pi)

Output:

3.141592653589793

Here, we import only the pi variable from the math module and print it.

Using 'import...as' Statement

The import...as statement is used when you want to give a module a different name in your script. This is particularly useful when the module name is long and you want to use a shorter alias for convenience.

import math as m

print(m.pi)

Output:

3.141592653589793

Here, we import the math module as m and then use this alias to print the value of pi.

Importing Modules from a Package

Packages in Python are a way of organizing related modules into a directory hierarchy. Think of a package as a folder that contains multiple Python modules, along with a special __init__.py file that tells Python that the directory should be treated as a package.

But how do you import a module that's inside a package? Well, Python provides a straightforward way to do this.

Suppose you have a package named shapes and inside this package, you have two modules, circle.py and square.py. You can import the circle module like this:

from shapes import circle

Now, you can access all the functions and classes defined in the circle module. For instance, if the circle module has a function area(), you can use it as follows:

circle_area = circle.area(5)
print(circle_area)

This will print the area of a circle with a radius of 5.

Note: If you want to import a specific function or class from a module within a package, you can use the from...import statement, as we showed earlier.

But what if your package hierarchy is deeper? What if the circle module is inside a subpackage called two_d inside the shapes package? (Note that package names must be valid Python identifiers, so a name like 2d, which starts with a digit, would not be importable.) Python has got you covered. You can import the circle module like this:

from shapes.two_d import circle

Python's import system is quite flexible and powerful. It allows you to organize your code in a way that makes sense to you, while still providing easy access to your functions, classes, and modules.
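To see this end to end, here's a self-contained sketch that builds the shapes package from the example above in a temporary directory and then imports its circle module. The body of area() is an assumption for illustration; the article only names the function:

```python
import sys
import tempfile
from pathlib import Path

# Build the shapes package in a temporary directory
pkg_root = Path(tempfile.mkdtemp())
shapes_dir = pkg_root / "shapes"
shapes_dir.mkdir()
(shapes_dir / "__init__.py").write_text("")
(shapes_dir / "circle.py").write_text(
    "import math\n"
    "\n"
    "def area(radius):\n"
    "    return math.pi * radius ** 2\n"
)

# Make the temporary directory importable, then import the package module
sys.path.insert(0, str(pkg_root))
from shapes import circle

print(round(circle.area(5), 2))  # 78.54
```

In a real project the package would live in your source tree, and the sys.path manipulation would be unnecessary.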

Common Issues Importing Python Files

As you work with Python, you may come across several errors while importing modules. These errors could stem from a variety of issues, including incorrect file paths, syntax errors, or even circular imports. Let's see some of these common errors.

Fixing 'ModuleNotFoundError'

The ModuleNotFoundError is a subtype of ImportError. It's raised when you try to import a module that Python cannot find. It's one of the most common issues developers face while importing Python files.

import missing_module

This will raise a ModuleNotFoundError: No module named 'missing_module'.

There are several ways you can fix this error:

  1. Check the Module's Name: Ensure that the module's name is spelled correctly. Python is case-sensitive, which means module and Module are treated as two different modules.

  2. Install the Module: If the module is not a built-in module and you have not created it yourself, you may need to install it using pip. For example:

$ pip install missing_module

  3. Check Your File Paths: Python searches for modules in the directories defined in sys.path. If your module is not in one of these directories, Python won't be able to find it. You can add your module's directory to sys.path using the following code:

import sys
sys.path.insert(0, '/path/to/your/module')

  4. Use a Try/Except Block: If the module you're trying to import is not crucial to your program, you can use a try/except block to catch the ModuleNotFoundError and continue running your program. For example:

try:
    import missing_module
except ModuleNotFoundError:
    print("Module not found. Continuing without it.")

Avoiding Circular Imports

In Python, circular imports can be quite a headache. They occur when two or more modules depend on each other, either directly or indirectly. This leads to an infinite loop, causing the program to crash. So, how do we avoid this common pitfall?

The best way to avoid circular imports is by structuring your code in a way that eliminates the need for them. This could mean breaking up large modules into smaller, more manageable ones, or rethinking your design to remove unnecessary dependencies.

For instance, consider two modules A and B. If A imports B and B imports A, a circular import occurs. Here's a simplified example:

# A.py
import B

def function_from_A():
    print("This is a function in module A.")
    B.function_from_B()

# B.py
import A

def function_from_B():
    print("This is a function in module B.")
    A.function_from_A()

Calling either function will result in a RecursionError, since each one ends by invoking the other. To avoid this, you could refactor your code so that the dependency runs in only one direction, importing only where it's actually needed.

# A.py
def function_from_A():
    print("This is a function in module A.")

# B.py
import A

def function_from_B():
    print("This is a function in module B.")
    A.function_from_A()

Note: It's important to remember that Python imports are case-sensitive. This means that import module and import Module would refer to two different modules and could potentially lead to a ModuleNotFoundError if not handled correctly.
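Another common way to break a cycle, when restructuring isn't practical, is to defer one of the imports to function level so it only runs at call time, after both modules have finished initializing. A self-contained sketch (the two modules are written to a temporary directory just so the snippet runs on its own; the lowercase a/b names mirror the A/B example above):

```python
import sys
import tempfile
from pathlib import Path

pkg_dir = Path(tempfile.mkdtemp())
sys.path.insert(0, str(pkg_dir))

# a.py imports b at module level; b.py defers its import of a to call time,
# so the cycle is never triggered while the modules are still initializing
(pkg_dir / "a.py").write_text(
    "import b\n"
    "\n"
    "def function_from_a():\n"
    "    return 'A -> ' + b.function_from_b()\n"
)
(pkg_dir / "b.py").write_text(
    "def function_from_b():\n"
    "    import a  # deferred import breaks the circular dependency\n"
    "    return 'B'\n"
)

import a
print(a.function_from_a())  # A -> B
```

Deferred imports are a pragmatic workaround, but restructuring so that modules don't depend on each other at all remains the cleaner fix.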

Using __init__.py in Python Packages

In our journey through learning about Python imports, we've reached an interesting stop — the __init__.py file. This special file serves as an initializer for Python packages. But what does it do, exactly?

In the simplest terms, __init__.py allows Python to recognize a directory as a package so that it can be imported just like a module. Previously, an empty __init__.py file was enough to do this. However, from Python 3.3 onwards, thanks to the introduction of PEP 420, __init__.py is no longer strictly necessary for a directory to be considered a package. But it still holds relevance, and here's why.

Note: The __init__.py file is executed when the package is imported, and it can contain any Python code. This makes it a useful place for initialization logic for the package.

Consider a package named animals with two modules, mammals and birds. Here's how you can use __init__.py to import these modules.

# __init__.py
from . import mammals
from . import birds

Now, when you import the animals package, mammals and birds are also imported.

# main.py
import animals

animals.mammals.list_all()  # Use functions from the mammals module
animals.birds.list_all()    # Use functions from the birds module

By using __init__.py, you've made the package's interface cleaner and simpler to use.
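Here's a runnable version of the animals example; the list_all() bodies are stand-ins, since the article doesn't define them:

```python
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
animals_dir = root / "animals"
animals_dir.mkdir()
(animals_dir / "mammals.py").write_text("def list_all():\n    return ['cat', 'dog']\n")
(animals_dir / "birds.py").write_text("def list_all():\n    return ['owl']\n")

# The __init__.py pulls both submodules in when the package itself is imported
(animals_dir / "__init__.py").write_text("from . import mammals\nfrom . import birds\n")

sys.path.insert(0, str(root))
import animals

print(animals.mammals.list_all())  # ['cat', 'dog']
print(animals.birds.list_all())    # ['owl']
```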

Organizing Imports: PEP8 Guidelines

When working with Python, or any programming language really, it's important to keep your code clean and readable. This not only makes your life easier, but also the lives of others who may need to read or maintain your code. One way to do this is by following the PEP8 guidelines for organizing imports.

According to PEP8, your imports should be grouped in the following order:

  1. Standard library imports
  2. Related third party imports
  3. Local application/library specific imports

Each group should be separated by a blank line. Here's an example:

# Standard library imports
import os
import sys

# Related third party imports
import requests

# Local application/library specific imports
from my_library import my_module

In addition, PEP8 also recommends that imports should be on separate lines, and that they should be ordered alphabetically within each group.

Note: While these guidelines are not mandatory, following them can greatly improve the readability of your code and make it more Pythonic.

To make your life even easier, many modern IDEs, like PyCharm, have built-in tools to automatically organize your imports according to PEP8.

With proper organization and understanding of Python imports, you can avoid common errors and improve the readability of your code. So, the next time you're writing a Python program, give these guidelines a try. You might be surprised at how much cleaner and more manageable your code becomes.

Conclusion

And there you have it! We've taken a deep dive into the world of Python imports, exploring why and how we import Python files, the different ways to do so, common errors and their fixes, and the role of __init__.py in Python packages. We've also touched on the importance of organizing imports according to PEP8 guidelines.

Remember, the way you handle imports can greatly impact the readability and maintainability of your code. So, understanding these concepts is not just a matter of knowing Python's syntax—it's about writing better, more efficient code.

Categories: FLOSS Project Planets

Web Review, Week 2023-38

Planet KDE - Fri, 2023-09-22 10:32

Let’s go for my web review for the week 2023-38.

Unity’s New Pricing: A Wake-up Call on the Importance of Open Source in Gaming – Ramatak Inc.

Tags: tech, 3d, foss, gaming, business, licensing

Unsurprisingly after people massively converged to two main closed source engines for their games, they start to be massively screwed over. Maybe it’s time for them to finally turn to Free Software alternatives?

https://ramatak.com/2023/09/15/unitys-new-pricing-a-wake-up-call-on-the-importance-of-open-source-in-gaming/


Your New Apple Watch Series 9 Won’t Be Carbon Neutral | WIRED

Tags: tech, apple, ecology

Sure they’re pulling some effort on the way their hardware is produced and cheap. But don’t be fooled by the grand claims, this can’t be carbon neutral.

https://www.wired.com/story/new-apple-watch-series-9-wont-be-carbon-neutral/


We Are Retroactively Dropping the iPhone’s Repairability Score | iFixit News

Tags: tech, apple, ecology, repair

What are the hardware improvements good for if it’s all locked down through software? This is wasted.

https://www.ifixit.com/News/82493/we-are-retroactively-dropping-the-iphones-repairability-score-en


Organic Maps: An Open-Source Maps App That Doesn’t Suck

Tags: tech, map, foss

Interesting review, this seems mostly aligned with my own experience. That said I got less mileage since I use it mostly when walking around places I don’t know well.

https://hardfault.life/p/organic-maps-review


Long-term support for Linux kernel to be cut as maintainence remains under strain

Tags: tech, linux, kernel, community

Interesting, the situation for kernel maintainers is actually harder than I thought. You’d expect more of them could do the maintainer job on work time…

https://www.zdnet.com/article/long-term-support-for-linux-kernel-to-be-cut-as-maintainence-remains-under-strain/


Matrix.org - Matrix 2.0: The Future of Matrix

Tags: tech, messaging, matrix

Lots of progress, they’re finally delivering on past announcements at FOSDEM it seems. Let’s hope the spec effort catches up though.

https://matrix.org/blog/2023/09/matrix-2-0/


This month in Servo: upcoming events, new browser UI, and more! - Servo, the embeddable, independent, memory-safe, modular, parallel web rendering engine

Tags: tech, web, servo, rust

Nice to see this effort keeps bearing fruits. This is a very needed engine to avoid a monoculture.

https://servo.org/blog/2023/09/15/upcoming-events-and-new-browser-ui/


Should I Rust or should I go?

Tags: tech, rust

Keep the downsides in mind. Rust has an ecological niche, but it’s maybe not that big.

https://kerkour.com/should-i-rust-or-should-i-go


Supply Chain Issues in PyPI - by Stian Kristoffersen

Tags: tech, python, supply-chain

There’s still some work to secure the Python supply chain. It’s clearly suffering from fragmentation and ambiguous data.

https://stiankri.substack.com/p/supply-chain-issues-in-pypi


Java 21 makes me actually like Java again - WSCP’s blog

Tags: tech, java, type-systems

This is definitely a big deal for the Java type system and its ergonomics.

https://wscp.dev/posts/tech/java-pattern-matching/


Java 21 Is Available Today, And It’s Quite The Update | Foojay.io Today

Tags: tech, programming, java

This is definitely a big release! Lots of changes and improvements in this language.

https://foojay.io/today/java-21-is-available-today-and-its-quite-the-update/


Monkey-patching in Java

Tags: tech, java, metaprogramming

Lots of possibilities in the JVM to monkey-patch some behavior. Most of them are a bit involved though.

https://blog.frankel.ch/monkeypatching-java/


SQL join flavors

Tags: tech, sql, databases

Everything you wanted to know about SQL joins, and more.

https://antonz.org/sql-join/


B612 – The font family

Tags: tech, fonts

Interesting free font. Made for aeronautics, but brings interesting properties which might be useful in other contexts as well. Created around Toulouse and my old University too!

https://b612-font.com/


The Frustration Loop | ᕕ( ᐛ )ᕗ Herman’s blog

Tags: tech, spam, blog, funny

This is a funny spammer deterrent. I like the idea.

https://herman.bearblog.dev/the-frustration-loop/


1 Trick to Finish Your Next Talk in Style - David Nihill

Tags: communication, talk, presentation

Interesting trick indeed. I’ll try this when I get the chance. Clearly this avoids the underwhelming atmosphere closing after most Q&A session.

https://davidnihill.com/1-trick-to-finish-your-next-talk-in-style/


Bye for now!

Categories: FLOSS Project Planets

Stack Abuse: Fix the "AttributeError: module object has no attribute 'Serial'" Error in Python

Planet Python - Fri, 2023-09-22 08:51
Introduction

Even if you're a seasoned Python developer, you'll occasionally encounter errors that can be pretty confusing. One such error is the AttributeError: module object has no attribute 'Serial'. This Byte will help you understand and resolve this issue.

Understanding the AttributeError

The AttributeError in Python is raised when you try to access or call an attribute that a module, class, or instance does not have. Specifically, the error AttributeError: module object has no attribute 'Serial' suggests that you're trying to access the Serial attribute from a module that doesn't possess it.

For instance, if you have a module named serial and you're trying to use the Serial attribute from it, you might encounter this error. Here's an example:

import serial

ser = serial.Serial('/dev/ttyUSB0')  # This line causes the error

In this case, the serial module you're importing doesn't have the Serial attribute, hence the AttributeError.
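You can reproduce this failure without any serial hardware or third-party package by building an empty stand-in module object; this is a contrived sketch, purely to show the failure mode:

```python
import types

# An empty module object named 'serial', like a wrong/incomplete package would provide
fake_serial = types.ModuleType("serial")

try:
    fake_serial.Serial('/dev/ttyUSB0')
except AttributeError as exc:
    print(exc)  # module 'serial' has no attribute 'Serial'
```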

Note: It's important to understand that Python is case-sensitive. Serial and serial are not the same. If the attribute exists but you're using the wrong case, you'll also encounter an AttributeError.

Fixes for the Error

The good news is that this error is usually a pretty easy fix, even if it seems very confusing at first. Let's explore some of the solutions.

Install the Correct pyserial Module

One of the most common reasons for this error is the incorrect installation of the pyserial module. The Serial attribute is a part of the pyserial module, which is used for serial communication in Python.

You might have installed a module named serial instead of pyserial (this is more common than you think!). To fix this, you need to uninstall the incorrect module and install the correct one.

$ pip uninstall serial
$ pip install pyserial

After running these commands, your issue may be resolved. Note that even though the package is named pyserial, the module it installs is still imported as serial. Now you can import Serial from it and use it in your code:

from serial import Serial

ser = Serial('/dev/ttyUSB0')  # This line no longer causes an error

If this didn't fix the error, keep reading.

Rename your serial.py File

For how much Python I've written in my career, you'd think that I wouldn't make this simple mistake as much as I do...

Another possibility is that the Python interpreter gets confused when there's a file in your project directory with the same name as a module you're trying to import. This is another common source of the AttributeError error.

Let's say, for instance, you have a file named serial.py in your project directory (or maybe your script itself is named serial.py). When you try to import serial, Python might import your serial.py file instead of the pyserial module, leading to the AttributeError.
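To see the shadowing in action, here is a deliberately broken sketch that puts a local serial.py at the front of sys.path (don't do this in a real project):

```python
import sys
import tempfile
from pathlib import Path

shadow_dir = Path(tempfile.mkdtemp())

# A local file named serial.py that defines no Serial class
(shadow_dir / "serial.py").write_text("BAUD_RATE = 9600\n")
sys.path.insert(0, str(shadow_dir))

import serial  # picks up our local serial.py, not pyserial

print(hasattr(serial, "Serial"))  # False: the local file shadows the real module
```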

The solution here is simple - rename your serial.py file to something else.

$ mv serial.py my_serial.py

Conclusion

In this Byte, we've explored two common causes of the AttributeError: module object has no attribute 'Serial' error in Python: installing the wrong pyserial module, and having a file in your project directory that shares a name with a module you're trying to import. By installing the correct module or renaming conflicting files, you should be able to eliminate this error.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #173: Getting Involved in Open Source & Generating QR Codes With Python

Planet Python - Fri, 2023-09-22 08:00

Have you thought about contributing to an open-source Python project? What are possible entry points for intermediate developers? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.


Categories: FLOSS Project Planets

Salsa Digital: Drupal security — a complete Drupal self-help guide to ensuring your website’s security

Planet Drupal - Fri, 2023-09-22 01:39
Enhancing Drupal security for a safer online experience

Drupal is a powerful and versatile open-source content management system (CMS) that offers extensive functionality and customisation possibilities for creating and managing dynamic websites. As more businesses and organisations choose Drupal for their web presence, ensuring the security and privacy of their data and user information has become increasingly important.
Categories: FLOSS Project Planets

unifont @ Savannah: Unifont 15.1.02 Released

GNU Planet! - Fri, 2023-09-22 00:05

21 September 2023 Unifont 15.1.02 is now available.
This is a minor release.  It adjusts 46 glyphs in the Wen Quan Yi range, and adds Plane 3 ideographs.  This release no longer builds TrueType fonts by default, as announced over the past year.  They have been replaced with their OpenType equivalents.  TrueType fonts can still be built manually by typing "make truetype" in the font directory.

This release also includes a new Hangul Syllables Johab 6/3/1 encoding proposed by Ho-Seok Ee.  New Hangul supporting software for this encoding allows formation of all double-width Hangul syllables, including those with ancient letters that are outside the Unicode Hangul Syllables range.  Details are in the ChangeLog file.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.1.02/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.1.02/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.1.02/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.1.02/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.1.02/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

Categories: FLOSS Project Planets

Salsa Digital: Cybersecurity, the National Institute of Standards and Technology (NIST) and Drupal

Planet Drupal - Thu, 2023-09-21 22:34
About the National Institute of Standards and Technology (NIST)

NIST is a US-based agency that provides critical measurement solutions to promote equitable standards such as the NIST Cybersecurity Framework (NIST CSF). NIST CSF is recognised globally as one of the leading standards for organisational cybersecurity management. The CSF is based on existing standards, guidelines and practices for organisations to better manage and reduce cybersecurity risk. In addition, it was designed to foster risk and cybersecurity management communications among both internal and external organisational stakeholders. The NIST CSF covers the following five domains:

Identify: Activities to understand and manage cybersecurity risk by identifying assets, vulnerabilities and threats.
Categories: FLOSS Project Planets

Wesley Chun: Managing Shared (formerly Team) Drives with Python and the Google Drive API

Planet Python - Thu, 2023-09-21 18:31
2023 UPDATE: We are working to put updated versions of all the code into GitHub... stay tuned. The link will provided in all posts once the code sample(s) is(are) available.
2019 UPDATE: "G Suite" is now called "Google Workspace", "Team Drives" is now known as "Shared Drives", and the corresponding supportsTeamDrives flag has been renamed to supportsAllDrives. Please take note of these changes regarding the post below.
NOTE 1: Team Drives is only available for G Suite Business Standard users or higher. If you're developing an application for Team Drives, you'll need similar access.
NOTE 2: The code featured here is also available as a video + overview post as part of this series.

Introduction

Team Drives is a relatively new feature from the Google Drive team, created to solve some of the issues of a user-centric system in larger organizations. Team Drives are owned by an organization rather than a user and with its use, locations of files and folders won't be a mystery any more. While your users do have to be a G Suite Business (or higher) customer to use Team Drives, the good news for developers is that you won't have to write new apps from scratch or learn a completely different API.

Instead, Team Drives features are accessible through the same Google Drive API you've come to know so well with Python. In this post, we'll demonstrate a sample Python app that performs core features that all developers should be familiar with. By the time you've finished reading this post and the sample app, you should know how to:
  • Create Team Drives
  • Add members to Team Drives
  • Create a folder in Team Drives
  • Import/upload files to Team Drives folders

Using the Google Drive API

The demo script requires creating files and folders, so you do need full read-write access to Google Drive. The scope you need for that is:
  • 'https://www.googleapis.com/auth/drive' — Full (read-write) access to Google Drive
If you're new to using Google APIs, we recommend reviewing earlier posts & videos covering project setup and the authorization boilerplate so that we can focus on the main app. Once we've authorized our app, assume you have a service endpoint to the API and have assigned it to the DRIVE variable.

Create Team Drives

New Team Drives can be created with DRIVE.teamdrives().create(). Two things are required to create a Team Drive: 1) a name for the new Team Drive, and 2) a unique request ID, which makes the create process idempotent so that any number of identical calls will still only result in a single Team Drive being created. It's recommended that developers use a language-specific UUID library. For Python developers, that's the uuid module. From the API response, we return the new Team Drive's ID. Check it out:
def create_td(td_name):
    request_id = str(uuid.uuid4())
    body = {'name': td_name}
    return DRIVE.teamdrives().create(body=body,
            requestId=request_id, fields='id').execute().get('id')

Add members to Team Drives

To add members/users to Team Drives, you only need to create a new permission, which can be done with DRIVE.permissions().create(), similar to how you would share a file in regular Drive with another user. The pieces of information you need for this request are the ID of the Team Drive, the new member's email address, and the desired role... choose from: "organizer", "owner", "writer", "commenter", "reader". Here's the code:
def add_user(td_id, user, role='commenter'):
    body = {'type': 'user', 'role': role, 'emailAddress': user}
    return DRIVE.permissions().create(body=body, fileId=td_id,
            supportsTeamDrives=True, fields='id').execute().get('id')

Some additional notes on permissions: a user can only be bestowed permissions equal to or less than those of the person/admin running the script... IOW, they cannot grant someone else greater permission than what they have. Also, if a user has a certain role in a Team Drive, they can be granted greater access to individual elements in the Team Drive. Users who are not members of a Team Drive can still be granted access to Team Drive contents on a per-file basis.

Create a folder in Team Drives

Nothing to see here! Yep, creating a folder in Team Drives is identical to creating a folder in regular Drive, with DRIVE.files().create(). The only difference is that you pass in a Team Drive ID rather than a regular Drive folder ID. Of course, you also need a folder name. Here's the code:
def create_td_folder(td_id, folder):
    body = {'name': folder, 'mimeType': FOLDER_MIME, 'parents': [td_id]}
    return DRIVE.files().create(body=body,
            supportsTeamDrives=True, fields='id').execute().get('id')

Import/upload files to Team Drives folders

Uploading files to a Team Drives folder is also identical to uploading to a normal Drive folder, and is also done with DRIVE.files().create(). Importing is slightly different from uploading because you're uploading a file and converting it to a G Suite/Google Apps document format, i.e., uploading CSV as a Google Sheet, or plain text or a Microsoft Word® file as Google Docs. In the sample app, we tackle the former:
def import_csv_to_td_folder(folder_id, fn, mimeType):
    body = {'name': fn, 'mimeType': mimeType, 'parents': [folder_id]}
    return DRIVE.files().create(body=body, media_body=fn+'.csv',
            supportsTeamDrives=True, fields='id').execute().get('id')

The secret to importing is the MIMEtype. That tells Drive whether you want conversion to a G Suite/Google Apps format (or not). The same is true for exporting. The import and export MIMEtypes supported by the Google Drive API can be found in my SO answer here.
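The sample app doesn't cover the export direction, but for completeness, here is a minimal sketch of the reverse call, DRIVE.files().export(), assuming the same global DRIVE service endpoint; the function name and parameters below are my own, not part of the original app:

```python
def export_sheet_as_csv(file_id, fn):
    # files().export() converts a G Suite document back to a
    # conventional format; here, a Google Sheet back to CSV bytes
    data = DRIVE.files().export(fileId=file_id,
            mimeType='text/csv').execute()
    with open(fn + '.csv', 'wb') as f:
        f.write(data)
```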

Driver app

All these functions are great but kind-of useless without being called by a main application, so here we are:
FOLDER_MIME = 'application/vnd.google-apps.folder'
SOURCE_FILE = 'inventory'  # on disk as 'inventory.csv'
SHEETS_MIME = 'application/vnd.google-apps.spreadsheet'

td_id = create_td('Corporate shared TD')
print('** Team Drive created')
perm_id = add_user(td_id, 'email@example.com')
print('** User added to Team Drive')
folder_id = create_td_folder(td_id, 'Manufacturing data')
print('** Folder created in Team Drive')
file_id = import_csv_to_td_folder(folder_id, SOURCE_FILE, SHEETS_MIME)
print('** CSV file imported as Google Sheets in Team Drives folder')

The first set of variables represent some MIMEtypes we need to use as well as the CSV file we're uploading to Drive and requesting it be converted to Google Sheets format. Below those definitions are calls to all four functions described above.

Conclusion

If you run the script, you should get output that looks something like this, with each print() representing each API call:
$ python3 td_demo.py
** Team Drive created
** User added to Team Drive
** Folder created in Team Drive
** CSV file imported as Google Sheets in Team Drives folder

When the script has completed, you should have a new Team Drive called "Corporate shared TD", and within it, a folder named "Manufacturing data" which contains a Google Sheets file called "inventory".

Below is the entire script for your convenience, which runs on both Python 2 and Python 3 (unmodified!)—by using, copying, and/or modifying this code or any other piece of source from this blog, you implicitly agree to its Apache2 license:
from __future__ import print_function

import uuid

from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools

SCOPES = 'https://www.googleapis.com/auth/drive'
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http()))

def create_td(td_name):
    request_id = str(uuid.uuid4())  # random unique UUID string
    body = {'name': td_name}
    return DRIVE.teamdrives().create(body=body,
            requestId=request_id, fields='id').execute().get('id')

def add_user(td_id, user, role='commenter'):
    body = {'type': 'user', 'role': role, 'emailAddress': user}
    return DRIVE.permissions().create(body=body, fileId=td_id,
            supportsTeamDrives=True, fields='id').execute().get('id')

def create_td_folder(td_id, folder):
    body = {'name': folder, 'mimeType': FOLDER_MIME, 'parents': [td_id]}
    return DRIVE.files().create(body=body,
            supportsTeamDrives=True, fields='id').execute().get('id')

def import_csv_to_td_folder(folder_id, fn, mimeType):
    body = {'name': fn, 'mimeType': mimeType, 'parents': [folder_id]}
    return DRIVE.files().create(body=body, media_body=fn+'.csv',
            supportsTeamDrives=True, fields='id').execute().get('id')

FOLDER_MIME = 'application/vnd.google-apps.folder'
SOURCE_FILE = 'inventory'  # on disk as 'inventory.csv'... CHANGE!
SHEETS_MIME = 'application/vnd.google-apps.spreadsheet'

td_id = create_td('Corporate shared TD')
print('** Team Drive created')
perm_id = add_user(td_id, 'email@example.com')  # CHANGE!
print('** User added to Team Drive')
folder_id = create_td_folder(td_id, 'Manufacturing data')
print('** Folder created in Team Drive')
file_id = import_csv_to_td_folder(folder_id, SOURCE_FILE, SHEETS_MIME)
print('** CSV file imported as Google Sheets in Team Drives folder')

As with our other code samples, you can now customize it to learn more about the API, integrate it into other apps for your own needs, for a mobile frontend, sysadmin script, or a server-side backend!

Code challenge

Write a simple application that moves folders (and their files and subfolders) from regular Drive to Team Drives. Each folder you move should become a corresponding folder in Team Drives. Remember that files in Team Drives can only have one parent, and the same goes for folders.
Categories: FLOSS Project Planets

Jonathan Carter: DebConf23

Planet Debian - Thu, 2023-09-21 16:36

I very, very nearly didn’t make it to DebConf this year, I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel.

This is just everything in chronological order, more or less, it’s the only way I could write it.

DebCamp

I planned to spend DebCamp working on various issues. Very few of them actually got done: I spent the first few days in bed further recovering. I took a covid-19 test when I arrived and another after I felt better, and both were negative, so I'm not sure what exactly was wrong with me. Between that and catching up with other Debian duties, I couldn't make any progress on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month:

Calamares / Debian Live stuff:

  • #980209 – installation fails at the “install boot loader” phase
  • #1021156 – calamares-settings-debian: Confusing/generic program names
  • #1037299 – “Install Debian” -> “Untrusted application launcher”
  • #1037123 – “Minimal HD space required” too small for some live images
  • #971003 – Console auto-login doesn’t work with sysvinit

At least Calamares has been trixiefied in testing, so there’s that!

Desktop stuff:

  • #1038660 – please set a placeholder theme during development, different from any release
  • #1021816 – breeze: Background image not shown any more
  • #956102 – desktop-base: unwanted metadata within images
  • #605915 – please make it a non-native package
  • #681025 – Put old themes in a new package named desktop-base-extra
  • #941642 – desktop-base: split theme data files and desktop integrations in separate packages

The “Egg” theme that I want to develop for testing/unstable is based on Juliette Taka’s Homeworld theme that was used for Bullseye. Egg, as in, something that hasn’t quite hatched yet. Get it? (for #1038660)

Debian Social:

  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance, we should have more space for growth and it would help us fix the streaming feature on our platform.

Loopy:

I intended to get the loop for DebConf in good shape before I left, so that we can spend some time during DebCamp making some really nice content, unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn’t too horrible. There’s always another DebConf to try again, right?

So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf

Bits From the DPL

I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page).

I mostly covered:

  • A very quick introduction of myself (I’ve done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive). How we include firmware by default now, and that we’re saying goodbye to both the GNU/kFreeBSD and mipsel architectures.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we’re gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, and hopefully largely by the time the Trixie release comes by.
  • I made some comments about “Enterprise Linux” as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like cPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.

Job Fair

I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It’s not always easy to get this right, but this year it was very active and energetic, I hope lots of people made some connections!

Cheese & Wine

Due to state laws and alcohol licenses, we couldn’t consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn’t quite as big or as fun as our usual C&W parties since we couldn’t share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright.

Day Trip

I opted for the forest / waterfalls daytrip. It was really, really long with lots of time in the bus. I think our trip’s organiser underestimated how long it would take between the points on the route (all in all it wasn’t that far, but on a bus on a winding mountain road, it takes long). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery, animals and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as government invests in new developments like dams and hydro power.

Photos available in the DebConf23 public git repository.

Losing a beloved Debian Developer during DebConf

To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught in a whirlpool from a drainage system.

Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family is informed by the police first before making anything public.

We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I’ve ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf.

A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.

Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I’m glad that everyone pushed forward. While we were all heart broken, it was also heart warming to see people care for each other in all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see.

Abraham, or Abru as he was called by some people (which I like because “bru” in Afrikaans is like “bro” in English, not sure if that’s what it implied locally too) enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious, a few of the people who he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can’t even remember how I reacted to that, my brain was already so worn out and stitching that together with the tragedy of what happened while at DebConf was just too much for me.

I first met him in person last year in Kosovo; I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he’d achieve in the future. Unfortunately, he was taken away from us too soon.

Poetry Evening

Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keeley). The first time I heard about this poem was in an interview with Julian Assange’s wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song “Return to Ithaka” and always wondered what it was about, so needless to say, that was another rabbit hole at some point.

Group Photo

Our DebConf photographer organised another group photo for this event, links to high-res versions available on Aigar’s website.

BoFs

I didn’t attend nearly as many talks this DebConf as I would’ve liked (fortunately I can catch up on video, should be released soon), but I did make it to a few BoFs.

In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about the types of events a local group could do (BSPs, Mini DebConfs, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There’s a mailing list for co-ordination of local groups, and the irc channel is -localgroups on oftc.

If you got one of these Cheese & Wine bags from DebConf, that’s from the South African local group!

In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services, whether they’re still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit more clear in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this.

In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma; this will need some more homework and checking whether it’s even feasible. Some services haven’t really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolnir seems to do a fine job at spam blocking; we haven’t had any notable incidents yet. WordPress now has improved fediverse support; it’s unclear whether it works on a multi-site instance yet, I’ll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio.

More Information Overload

There’s so much that happens at DebConf, it’s tough to take it all in, and also to find time to write about all of it, but I’ll mention a few more things that are certainly worthy of note.

During DebConf, we had some people from the Kite Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux. They decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the Kite Linux team had. It was great seeing all the energy and enthusiasm behind this effort, I hope someone will properly blog about this!

I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian.

I came across the booth for Mostly Harmless, they liberate old hardware by installing free firmware on it. It was nice seeing all the devices out there that could be liberated, and how it can breathe new life into old hardware.

Some hopefully harmless soldering.

Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better.

Food

Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and also learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!), it was a… fruitful experience? This might catch on at home too… less dishes to take care of!

Special thanks to the DebConf23 Team

I think this may have been one of the toughest DebConfs to organise yet, and I don’t think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long and tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job and I did make a point of telling them on the last day that everyone appreciated all the work that they did.

Back to my nest

I brought Dax a ball back from India, he seems to have forgiven me for not taking him along.

I’ll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

Categories: FLOSS Project Planets

Stack Abuse: How to Pass Multiple Arguments to the map() Function in Python

Planet Python - Thu, 2023-09-21 16:22
Introduction

The goal of Python, with its rich set of built-in functions, is to allow developers to accomplish complex tasks with relative ease. One such powerful, yet often overlooked, function is the map() function. The map() function will execute a given function over a set of items, but how do we pass additional arguments to the provided function?

In this Byte, we'll be exploring the map() function and how to effectively pass multiple arguments to it.

The map() Function in Python

The map() function in Python is a built-in function that applies a given function to every item of an iterable (like a list or tuple) and returns an iterator of the results (which we convert to a list below).

def square(number):
    return number ** 2

numbers = [1, 2, 3, 4, 5]
squared = map(square, numbers)
print(list(squared))  # Output: [1, 4, 9, 16, 25]

In this snippet, we've defined a function square() that takes a number and returns its square. We then use the map() function to apply this square() function to each item in the numbers list.

Why Pass Multiple Arguments to map()?

You might be wondering, "Why would I need to pass multiple arguments to map()?" Well, there are scenarios where you might have a function that takes more than one argument, and you want to apply this function to multiple sets of data simultaneously.

Not every function we provide to map() will take only one argument. What if, instead of a squared function, we have a more generic math.pow function, and one of the arguments is what number to raise the item to. How do we handle a case like this?

Or maybe you have two lists of numbers, and you want to find the product of corresponding numbers from these lists. This is another case where passing multiple arguments to map() can be helpful.

How to Pass Multiple Arguments to map()

There are a few different types of cases in which you'd want to pass multiple arguments to map(), two of which we mentioned above. We'll walk through both of those cases here.

Multiple Iterables

Passing multiple arguments to the map() function is simple once you understand how to do it. You simply pass additional iterables after the function argument, and map() will take items from each iterable and pass them as separate arguments to the function.

Here's an example:

def multiply(x, y):
    return x * y

numbers1 = [1, 2, 3, 4, 5]
numbers2 = [6, 7, 8, 9, 10]

result = map(multiply, numbers1, numbers2)
print(list(result))  # Output: [6, 14, 24, 36, 50]

Note: Make sure that the number of arguments in the function should match the number of iterables passed to map()!

In the example above, we've defined a function multiply() that takes two arguments and returns their product. We then pass this function, along with two lists, to the map() function. The map() function applies multiply() to each pair of corresponding items from the two lists, and returns a new list with the results.

Multiple Arguments, One Iterable

Continuing with our math.pow example, let's see how we can still use map() to run this function on all items of an array.

The first, and probably simplest, way is to not use map() at all, but to use something like list comprehension instead.

import math

numbers = [1, 2, 3, 4, 5]
res = [math.pow(n, 3) for n in numbers]
print(res)  # Output: [1.0, 8.0, 27.0, 64.0, 125.0]

This is essentially all map() really does, but it's not as compact and neat as using a convenience function like map().

Now, let's see how we can actually use map() with a function that requires multiple arguments:

import math
import itertools

numbers = [1, 2, 3, 4, 5]
res = map(math.pow, numbers, itertools.repeat(3, len(numbers)))
print(list(res))  # Output: [1.0, 8.0, 27.0, 64.0, 125.0]

This may seem a bit more complicated at first, but it's actually very simple. We use a helper function, itertools.repeat, to create an iterable the same length as numbers that yields only the value 3.

So the output of itertools.repeat(3, len(numbers)), when converted to a list, is just [3, 3, 3, 3, 3]. This works because we're now passing two iterables of the same length to map(), which it happily accepts.
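Another option, not used in the article, works when the argument you want to fix happens to come first: functools.partial pre-fills arguments of a function and returns a new one-argument callable that map() can use directly.

```python
from functools import partial
from operator import mul

numbers = [1, 2, 3, 4, 5]

# partial() pre-fills mul()'s first argument, so map() only needs
# to supply the second argument from the iterable
triple = partial(mul, 3)
result = map(triple, numbers)
print(list(result))  # Output: [3, 6, 9, 12, 15]
```

Since partial() binds positional arguments left to right, it doesn't directly help with math.pow's second (exponent) argument; for that case, the itertools.repeat approach or a small wrapper function remains the way to go.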

Conclusion

The map() function is particularly useful when working with multiple iterables, as it can apply a function to the elements of these iterables in pairs, triples, or more. In this Byte, we've covered how to pass multiple arguments to the map() function and how to work with multiple iterables.

Categories: FLOSS Project Planets

William Minchin: minchin.jrnl v7 “Phoenix” released

Planet Python - Thu, 2023-09-21 15:22

Today, I do something that I should have done 5 years ago, and something that I’ve been putting off for the last 2 years: I’m releasing a personal fork of jrnl! I’ve given this release the codename Phoenix, after the mythical bird of rebirth, that springs forth renewed from the ashes of the past.

You can install it today:

pip install minchin.jrnl

And then to run it from the command line:

minchin.jrnl

(or

jrnl

)

Features Today

Today, the codebase is that of jrnl v2.6 in a new namespace. In particular, that gives us a working yaml exporter; now you can build your Pelican sites again (or maybe only I was doing that…).

The version number (7) was picked to be larger than the current jrnl-org/jrnl release (currently at 4.0.1). (Plus I thought it would look cool!)

I’ve moved the configuration to a new location on disk, so as not to stomp on your existing jrnl (i.e. jrnl-org/jrnl or “legacy”) installation.

Limited documentation, to match the current codebase, has been uploaded to my personal site at minchin.ca/minchin.jrnl. (Although it remains incomplete and very much a work in progress.)

And finally, I extend an invitation to all those current or former users of jrnl to move here. I welcome your contributions and support. If there are features missing, please feel free to let me know.

Short Term Update Plans

I’ve pushed this release out with very few changes from the base codebase in an effort to get it live. But I have some updates that I’m planning to do very shortly. These updates will maintain the Phoenix codename, even if the major version number increments.

The biggest of these is to launch my much anticipated plugin system. The code has already been written (for several years now, actually); it just needs to be double-checked that it still works as expected.

The other immediate update is to make sure the code works with Python 3.11 (the current version of Python), which seems to already be the case.

Medium to Long Term Project Goals, or My Vision

These are features I’d like to add, although I realize this will take more than tonight. Also, this section lays out my vision for the project and some anti-features I want to avoid.

The Plugin System

The plugin system, I think, will be a huge step forward in making minchin.jrnl more useful. In particular, it allows minchin.jrnl to import and export to and from new formats, including allowing you to write one-off export formats (which I intend to use personally right away!). Displaying the journal entries on the command line is also handled by exporters, so you’d be able to tweak that output as well. I also intend to extend the plugin system to the storage backends.

My hope is that this will futureproof minchin.jrnl, allowing new formats to be added quickly and easily, retiring deprecated formats to external plugins, and making it possible to quickly test and integrate new formats by seamlessly bringing external plugins “in-house”.

In particular, I’m planning to have separate plugins for “true” yaml + markdown exports and Pelican-formatted markdown, to add an export format for Obsidian-formatted markdown, and to add backend format plugins to support jrnl v1 and whatever format they’re dreaming up for jrnl v4.

In short, I hope plugins will allow you to make minchin.jrnl more useful, without me being the bottleneck.
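The plugin API hasn't been published yet, so purely as an illustration of the registration pattern such systems often use (every name here is hypothetical, not the actual minchin.jrnl API):

```python
# Hypothetical sketch: a registry mapping format names to exporter
# classes, standing in for whatever entry-point mechanism the real
# plugin system ends up using.
EXPORTERS = {}

def register_exporter(name):
    """Class decorator that records an exporter under a format name."""
    def decorator(cls):
        EXPORTERS[name] = cls
        return cls
    return decorator

@register_exporter('plain')
class PlainExporter:
    """Render each entry as 'date title' followed by its body."""
    @staticmethod
    def export(entries):
        return '\n\n'.join(
            '{} {}\n{}'.format(e['date'], e['title'], e['body'])
            for e in entries)

# The core tool would then look exporters up by name at runtime,
# so adding a format never requires touching the core:
entries = [{'date': '2023-09-23', 'title': 'Release day', 'body': 'v7 is out.'}]
print(EXPORTERS['plain'].export(entries))
```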

Plain Text Forever

One of the things that drew me to the original jrnl implementation was that it was based on plain text, using plain text to store journal entries. Plain text has a number of advantages, but near-universal backwards and forwards compatibility is high on that list. Yes, plain text has its limitations, but I think the advantages far outweigh the disadvantages, particularly when it comes to a journal that you might hope will be readable years or generations from now. Also, plain text just makes it so much easier to develop minchin.jrnl.

The included, default option for minchin.jrnl will always be plain text.

If you’re looking to upgrade your plain text, you might consider Markdown or ReStructured Text (ReST).

If you’re looking for either a database backend or more multimedia functionality (or both), you’re welcome to write something as a backend plugin for minchin.jrnl; that ability is a featured reason for providing the (to be added shortly!) plugin system in the first place!

MIT Licensed

The original jrnl was initially released under the MIT license, and that only changed with the v2.4 release to GPLv3. My hope and goal is to remove all GPL-licensed code and release future versions of minchin.jrnl under the MIT license.

My opposition to the change was because I’ve come to feel that Open Source work is both given and received as a gift, and I feel the GPL license moves away from that ethos.

I suspect the least fun part of this partial re-write will be getting the testing system up and running again, as the testing library the original jrnl v1 had been using has gone many years without updates.

To this end, I’m requesting that any code contributions to the project be dual-licensed under both MIT and GPLv3.

Documentation in Sphinx

Documentation will eventually be moved over to Sphinx (from the current MkDocs), a process I’ve begun but not finished. Driving this is the expectation that I’ll have more technical documentation (than is included currently) as I lay out how to work with the plugin API, and Sphinx makes it easy to keep code and documentation side by side in the same (code) file.

Furthermore, I want to document how to use minchin.jrnl as a Python API generally; this would allow you to interact with your journal from other Python programs.
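
As a sketch of what that side-by-side documentation might look like, Sphinx’s autodoc extension can render docstrings like the one below directly into the manual. (The function name and its fields here are hypothetical, not part of any actual minchin.jrnl API.)

```python
def export_entries(entries, fmt="markdown"):
    """Export journal entries to the given format.

    :param list entries: entry bodies, as plain-text strings
    :param str fmt: name of a registered exporter plugin
    :returns: the exported text
    :rtype: str
    """
    # Sketch only: a real exporter would dispatch on ``fmt``.
    return "\n\n".join(entries)
```

With sphinx.ext.autodoc enabled, the :param: and :returns: fields above show up in the rendered documentation right alongside the code they describe.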

Easy to Push Releases

Knowing my own schedule, I want to be able to sit down for an evening, make (several) small improvements, and then push out a new release before I go to bed. To that end, I want to streamline the process of pushing out new releases. Expect lots of small releases. :)

Drop poetry

poetry is a powerful Python project management tool, but one I’ve never really liked14. Particular issues include a lack of first class Windows support15 and very conservative upper bounds for dependencies and supported Python versions. Plus I have refined a system elsewhere, using pip-tools16 and setup.py, that manages these same issues and that I find works very well for me.

This has been accomplished with the current release!

Windows Support

jrnl, to date, has always had decent Windows support. As I personally work on Windows, Windows will continue to have first class support.

Where this may show is that tools beyond Python will need to be readily available on Windows before they’re used33, and that the Windows Terminal is fairly limited in what it can do, at least compared with some Linux terminals.

Replace the Current Code of Conduct

I don’t much care for the current Code of Conduct17: it seems to be overly focused on the horrible things people might do to each other, and I’m not sure I want that to be the introduction people get to the project. I hope to find a Code of Conduct that focuses more on the positive things I hope people will do as they interact with each other and the project.

My replaced/current Code of Conduct is here (although this may be updated again in the future).

Open Stewardship Discussion

If the time comes when someone else is assuming stewardship of the project, I intend for those discussions to be held publicly18.

My History with the Project, and Why the Fork

This section is different: it is much less about the codebase and more focused on myself and my relationship to it. I warn you it is likely to be somewhat long and winding.

My Introduction to jrnl

Many years ago now, I was new to Python. At that time34, when I came across a problem that I thought programming might solve, I first went looking for a Python program that might solve it.

In looking for a way to manage the regular text notes I was taking at work, I found jrnl, which I eventually augmented with DayOne (Classic) for in-field note entry (on a work-supplied iPad) and Pelican for display.

Jrnl was more, though: it was the object of my first Pull Request35, my first contribution to Open Source. My meagre help was appreciated and welcomed warmly, and so I returned often. I found jrnl to be incredibly useful in learning about Python best practices and code organization; here was a program that was more than a toy but simple enough (or logically separated enough) that I could attempt to understand it, to grok it, as a whole. I contributed in many places, but particularly around Windows support, DayOne backend support, and exports to Markdown (to be fed to Pelican).

In short jrnl became part of the code I used everyday.

jrnl Goes Into Hibernation

I have heard it said that software rewrites are a great way to kill a project. The reasons for this are multiple, but in short: a rewrite (typically) saps the energy to update the legacy version even as bugs pile up, the new thing can’t go into everyday use until it is feature-compatible with the legacy version, and the rewrite always takes far more effort than the initial estimates.

For reasons now forgotten36, a “v2” semi-rewrite was undertaken. And then it stalled. And then the project maintainer got a life19 and the re-write stalled even more so.

The (Near) Endless Beta of v2, or the First Time I Should Have Forked

For me, initially, this wasn’t a big deal: I was often running a development build locally as I tinkered away with jrnl, so I just kept doing that. Also, I had just started working on my plugin system (for exporters first, but expecting it could easily be extended to importers and backends).

As the months of inactivity on the part of the maintainer stretched on, and pull requests grew staler, at some point I should have forked the project and “officially” released my development version. But I never did, because it seemed like a scary new thing to do20.

Invitation to Become a Maintainer

And then21 one day out of the blue, I got an email from the maintainer asking if I wanted to be a co-maintainer for jrnl! I was delighted, and promptly said yes. I was given commit access to the repo on GitHub (but, as far as I knew, no ability to push releases to PyPI), and then…not much happened. I reached out to the maintainer to suggest some plans, as it still felt like “his” project, but I never heard much back. And I was too timid to move forward without at least something from him. And I was busy with the rest of life too. After a few months, I realized my first plan wasn’t working and started thinking about how to try again to move the project forward, more on my own. In front of me was the push to v2.0, and a question of how to integrate my in-development plug-in system.

The Coup

And then, on another day, again out of the blue, I got an unexpected email that I no longer had commit access to the jrnl repo. I searched the project for updates, including the issue tracker, and came up with #591, where a transition to new maintainers was brought up; I don’t know why I wasn’t pinged on it. At the time22, I said I was happy to see new life in the project and to see it move forward. But it was unsettling that I’d missed the early discussions.

It also seemed odd to me that the two maintainers who stepped forward hadn’t seemed to be involved with the project at all before that point.

For a while, things were ok: a “version 2” was released that was very close to the v2 branch I was using at home, bugs started getting fixed regularly, and new releases continued to come out. But my plugins never made it into a release.

Things Fall Apart (aka I Get Frustrated)

But things under new management didn’t stay rosy.

One of the things they did was completely rewrite the Git history, and thus change all the commit hashes. This was a small but continuing annoyance (until I got a new computer), because every time I would go to push changes to GitHub, my Git client would complain about the “new” (old) tags it was trying to push, because it couldn’t find a commit with the matching hash.

But my two big annoyances were a re-write of the YAML exporter and the continual roadblocks to getting my plugin system merged in.

My plugin system has the longest history, having been started before the change in stewardship. Many times (after the change in stewardship), I created a pull request24 and the response would be to make various changes or to split it into smaller pieces; I would make the changes, and the cycle would continue. But there was never a plan presented that I felt I could successfully complete, nor was I ever told the plugin system was unaligned with the vision they had for the project. I lost considerable enthusiasm for trying to get the plugins merged after rewriting the tests for the third time (as the underlying testing framework was changed).

The YAML exporter changes are what ultimately left me feeling frozen out of the project. Without much fanfare, the YAML exporter was changed, because someone25 felt that the resulting output wasn’t “true” or “pure” YAML. This is true, in a sense, because when I had originally written the exporter, it was designed to output files for Pelican with an assumed Markdown body and YAML front matter for metadata. At the request of the (then) maintainer, I called it the “YAML exporter”, partly because there was already a “Markdown exporter”. I didn’t realize it had been broken until I went to upgrade jrnl and render my Pelican site (which I use to search and read my notes) and it had just stopped working26. The change wasn’t noted (in a meaningful way) in the release notes, and the version numbers didn’t give an indication of when this change had happened30. I eventually figured out where the change had happened, explained the history of the exporter (which, again, I had written years earlier) and proposed three solutions, each with a corresponding Pull Request: 1) return the YAML exporter to its previous output27, 2) duplicate the old exporter under a new name28, or 3) merge my plugin system, which would allow me to write my own plugin and solve the problem myself. I was denied on all three, and told that I ‘didn’t understand the purpose or function of the YAML exporter’31 (yes, of the plugin I’d written37). The best I got was that they would reconsider what rose to the level of a “breaking change” when dealing with versioning32.

I Walk Away

The combined experience left me feeling very frustrated: jrnl was broken (to me) and all my efforts to fix it were forcibly rebuffed.

When I tried to express my frustration at my inability to get a “working” version of jrnl, I was encouraged to take a mental health break. While I appreciate the awareness of mental health, stepping away wouldn’t have helped in this particular case, because nothing would happen to fix my broken tooling (the cause of my frustration). It felt like the “right words”™ someone had picked up at a workshop, with the assumption that the right words alone would solve everything without requiring a deeper look or deeper changes.

So I took my leave. I’ve been running an outdated version (v2.6) ever since, and because of the strictness of the Poetry metadata29, I can’t run it on anything newer than Python 3.9 (even as I’ve upgraded my “daily” Python to 3.11).

I Return (Sort Of); The Future and My Fears

So this marks my return. My “mental health break” is over. As I realize I can only change myself (and not others), I will do the work to fix the deeper issues (e.g. broken Pelican exports, lack of “modern” Python support) by managing my own fork. And so that is the work I’ll do.

Looking forward, if I’m the only one that uses my fork, that would be somewhat disappointing, but also fine. After all, I write software, first and foremost, for my own use case and offer it to others as a gift. On the other hand, if a large part of the community moves here, I worry about being able to shepherd that community any better than the one I am leaving.

I worry too that, either because there was conflict at all or because all of my writing is publicly displayed, others will think less of my work or of me because of the failings they see there. It is indeed very hard to get through a disagreement like this without failing to some degree.

But it seems better to act, than to suffer in silence.

A Thank You

Thank you to all those who have worked to make jrnl as successful as it has been to date.

If you’ve gotten this far, thank you for reading this all. I hope you will join me, and I hope your experiences with minchin.jrnl are positive!

The header image was generated locally by Stable Diffusion XL.

  1. October 18, 2021 todo item: “fork jrnl” 

  2. main landing page at jrnl.sh, code at jrnl-org/jrnl on GitHub, and jrnl on PyPI 

  3. https://github.com/jrnl-org/jrnl/tree/v2.6 

  4. this varies by OS, so run jrnl --list to see where yours is stored. 

  5. Pull Request #1115 

  6. I’ve started using Obsidian to take notes on my workstation and on my phone, and find it incredible. The backend format remains Markdown with basically YAML front matter, but the format is slightly different from Pelican’s, and the exported file layout differs. 

  7. The initial draft of this post was written before the v4 release, when there was talk of changing how the journal files were kept. v4 has since been released, and I’m unclear if that change ever happened, or what “breaking change” occurred that bumped the version number from 3 to 4 generally. In any case, if they change their format, with the plugin system it becomes fairly trivial to add a format-specific importer. 

  8. also: tiny file size, easy to put under version control, no proprietary format or data lock-in, portability across computing platforms, and generally are human readable 

  9. these include limitations on embedded text formatting; on storing pictures, videos, or sound recordings; and the lack of standardized internal metadata 

  10. Markdown has several variants and many extensions. If you’re starting out, I recommend looking at the CommonMark specification. Note, however, that Markdown was originally designed as a shortcut for creating HTML documents, and so has no built-in features for managing groups of Markdown documents. It is also deliberately limited in the formatting options available, while officially supporting raw HTML as a fallback for anything missing. 

  11. ReST is older than Markdown and has always had a full specification. It was originally designed for the Python language documentation, and so was designed from the beginning to deal with the interplay between several text documents. Sadly, it doesn’t seem to have been adopted much outside of the Python ecosystem. 

  12. version 2.3.1 license (MIT); version 2.4 license (GPLv3), released April 18, 2020. 

  13. as I detailed at the time. But the issue (#918) announcing the change was merged within 20 minutes of being opened, so I’m not sure anything I could have said would have changed their minds. 

  14. this can and should be fleshed out into a full blog post. But another day. 

  15. and I work on Windows. And I work with Python because Python has had good Windows support. 

  16. https://pip-tools.readthedocs.io/en/latest/ 

  17. jrnl-org/jrnl’s Code of Conduct: the Contributor Code of Conduct

  18. I imagine in the issue tracker for the project. 

  19. I think he got a job with or founded a startup, and I suspect he probably moved continents. 

  20. In the intervening time, I ended up releasing personal forks of several Pelican plugins. The process is no longer new or scary, but still can be a fair bit of work. And that experience has given me the confidence to go forward with this fork. 

  21. February 16, 2018 

  22. July 5, 2019; my comment, at the time 

  23. my (pending) codename for these releases is ⚜ Fleur-de-lis. The reference is to the lily, a flower that is a symbol of purity and rebirth. 

  24. see Pull Request #1216 and Discussion #1006 

  25. Issue #1065 

  26. in particular, Pelican could no longer find the metadata block and instead rendered the text of each entry as if it was a code block. 

  27. I’m sure I wrote the code to do this, but can’t find the Pull Request at the moment. Maybe I figured the suggestion wouldn’t go anywhere. 

  28. Pull Request #1337 

  29. https://github.com/jrnl-org/jrnl/blob/v2.6/pyproject.toml#L25 

  30. perhaps because I was looking for a breaking change rather than a bug fix

  31. this comment and this one, in particular. I can’t find those exact quoted words, but that was the sentiment I was left with. 

  32. this comment 

  33. So no make. But Invoke, written in Python, works well for many of make’s use cases. 

  34. and still today 

  35. Pull Request #110, dated November 27, 2013 

  36. but likely recorded in the issue tracker 

  37. Pull Request #258, opened July 30, 2014. 

Categories: FLOSS Project Planets

Stack Abuse: How to Flatten Specific Dimensions of NumPy Array

Planet Python - Thu, 2023-09-21 14:41
Introduction

In data manipulation and scientific computing, NumPy stands as one of the most widely used libraries, providing a broad range of functionality. One such operation, "flattening," transforms multi-dimensional arrays into a one-dimensional sequence. While flattening an entire array is pretty straightforward, there are times when you might want to selectively flatten specific dimensions to suit the requirements of your data pipeline or algorithm. In this Byte, we'll see various techniques to achieve this more nuanced form of flattening.

NumPy Arrays

NumPy, short for Numerical Python, is a library in Python that provides support for large, multi-dimensional arrays and matrices, along with a collection of mathematical functions to operate on these arrays. NumPy arrays are a key ingredient in scientific computing with Python. They are more efficient and faster compared to Python's built-in list data type, especially when it comes to mathematical operations.

This code shows what a NumPy array can look like:

```python
import numpy as np

# Creating a 2D NumPy array
array_2D = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(array_2D)
```

Output:

```
[[1 2 3]
 [4 5 6]
 [7 8 9]]
```

Why Flatten Some Dimensions of a NumPy Array?

Flattening an array means converting a multidimensional array into a 1D array. But why would you want to flatten just some dimensions of a NumPy array?

Well, there are many scenarios where you might need to do this. For example, in machine learning, often we need to flatten our input data before feeding it into a model. This is because many machine learning algorithms expect input data in a specific format, usually as a 1D array.

But sometimes, you might not want to flatten the entire array. Instead, you might want to flatten specific dimensions of the array while keeping the other dimensions intact. This can be useful in scenarios where you want to maintain some level of the original structure of the data.

How to Flatten a NumPy Array

Flattening a NumPy array is fairly easy to do. You can use the flatten() method provided by NumPy to flatten an array:

```python
import numpy as np

# Creating a 2D NumPy array
array_2D = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Flattening the 2D array
flattened_array = array_2D.flatten()
print(flattened_array)
```

Output:

```
[1 2 3 4 5 6 7 8 9]
```

As you can see, the flatten() method has transformed our 2D array into a 1D array.

But what if we want to flatten only a specific dimension of the array and not the entire array? We'll explore this in the next sections.

Flattening Specific Dimensions of a NumPy Array

Flattening a NumPy array is quite straightforward. But, what if you need to flatten only specific dimensions of an array? This is where the reshape function comes into play.

Let's say we have a 3D array and we want to flatten the last two dimensions, keeping the first dimension as it is. The reshape function can be used to achieve this. Here's a simple example:

```python
import numpy as np

# Create a 3D array
array_3d = np.array([[[1, 2, 3], [4, 5, 6]],
                     [[7, 8, 9], [10, 11, 12]]])

# Flatten the last two dimensions, keeping the first one intact
flattened_array = array_3d.reshape(array_3d.shape[0], -1)
print(flattened_array)
```

Output:

```
[[ 1  2  3  4  5  6]
 [ 7  8  9 10 11 12]]
```

In the above code, the -1 passed to reshape tells NumPy to calculate the size of that dimension automatically, based on the total number of elements and the sizes of the other dimensions.

Note: The reshape function does not modify the original array. Instead, it returns a new array that has the specified shape.
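
The automatic calculation works in the other direction too: to merge the leading dimensions while keeping the last one, pass -1 as the first argument instead. A quick sketch:

```python
import numpy as np

array_3d = np.arange(12).reshape(2, 2, 3)

# Merge the first two dimensions, keeping the last; NumPy computes
# the -1 dimension as 12 / 3 = 4 automatically.
merged = array_3d.reshape(-1, array_3d.shape[-1])
print(merged.shape)  # (4, 3)
```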

Similar Solutions and Use-Cases

Flattening specific dimensions of a NumPy array isn't the only way to manipulate your data. There are other similar solutions you might find useful. For example, the ravel function can also be used to flatten an array. However, unlike reshape, which can produce any compatible shape, ravel always returns a fully flattened 1D array.
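
One practical difference worth knowing: flatten() always copies the data, while ravel() returns a view of the original array whenever it can. A quick sketch of the distinction:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

r = a.ravel()    # a view when possible -- shares memory with `a`
f = a.flatten()  # always a fresh copy

a[0, 0] = 99
print(r)  # [99  2  3  4] -- the view sees the change
print(f)  # [1 2 3 4]     -- the copy does not
```

This makes ravel the cheaper choice when you only need to read the flattened data, and flatten the safer one when you intend to modify it independently.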

Additionally, you can use the transpose function to change the order of the array dimensions. This can be useful in cases where you need to rearrange your data for specific operations or visualizations.
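
transpose is especially handy when the dimensions you want to merge aren't adjacent. For example, to flatten the first and last dimensions of a 3D array together while keeping the middle one, you can move the middle axis to the front before reshaping:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Bring axis 1 to the front, then merge the remaining two axes.
b = a.transpose(1, 0, 2).reshape(a.shape[1], -1)
print(b.shape)  # (3, 8)
```

Note that because the transposed array is non-contiguous, the reshape here produces a copy rather than a view.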

These techniques can be particularly useful in data preprocessing for machine learning. For instance, you might need to flatten the input data for a neural network. Or, you might need to transpose your data to ensure that it's in the correct format for a particular library or mathematical function.

Conclusion

In this Byte, we've explored how to flatten specific dimensions of a NumPy array using the reshape function. We've also looked at similar solutions such as ravel and transpose and discussed some use-cases where these techniques can be particularly useful.

While these techniques are powerful tools for data manipulation, they are just the tip of the iceberg when it comes to what you can do with NumPy, so I'd suggest taking a deeper look at the NumPy documentation to see what other interesting features you can discover.

Categories: FLOSS Project Planets

Pages