FLOSS Project Planets

Python for Beginners: Create Index in a Pandas Series

Planet Python - Mon, 2022-12-05 09:00

A pandas Series stores data that we may need to access both by position and by label. In this article, we will discuss different ways to create an index in a pandas series.

Table of Contents
  1. Create Index in a Pandas Series Using the Index Parameter
  2. Create Index in a Pandas Series Using the Index Attribute
  3. Create an Index in a Pandas Series Using the set_axis() Method
    1. The set_axis() Method
  4. Create Index Inplace in a Pandas Series
  5. Conclusion
Create Index in a Pandas Series Using the Index Parameter

When we create a pandas series, it has a default index ranging from 0 to the length of the series minus 1. For instance, consider the following example.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
series=pd.Series(letters)
print("The series is:")
print(series)

Output:

The series is:
0       a
1       b
2       c
3      ab
4     abc
5    abcd
6      bc
7       d
dtype: object

In the above example, we have created a series of 8 elements. You can observe that the indices of the elements in the series are numbered from 0 to 7. These are the default indices.

If you want to assign a custom index to the series, you can use the index parameter of the Series() constructor. The index parameter takes a list with the same number of elements as the series and uses it as the custom index, as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters,index=numbers)
print("The series is:")
print(series)

Output:

The series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In the above example, we have passed the Python list [3, 23, 11, 14, 16, 2, 45, 65] to the index parameter of the Series() constructor. After the Series() constructor executes, the elements of this list are used as the indices of the elements in the series.
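Once a custom index is in place, elements can be looked up either by label or by position. Here is a minimal sketch using the series created above:

# Label-based access uses the custom index values
print(series.loc[23])    # prints "b"

# Position-based access is still available through iloc
print(series.iloc[1])    # also prints "b"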

Create Index in a Pandas Series Using the Index Attribute

You can also create a new index for a series after creating the series. For instance, if you want to assign other values as indices in the series, you can use the index attribute of the series object. To create a new custom index, you can assign a list of values to the index attribute as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The series is:")
print(series)

Output:

The series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In this example, we have assigned the list [3, 23, 11, 14, 16, 2, 45, 65] to the index attribute of the series after creating the series. Hence, the elements of this list are assigned as the indices of the elements in the series.

Here, the list passed to the index attribute must have a length equal to the number of elements in the series. Otherwise, the program will run into a ValueError exception. You can observe this in the following example.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65,117]
series=pd.Series(letters)
series.index=numbers
print("The series is:")
print(series)

Output:

ValueError: Length mismatch: Expected axis has 8 elements, new values have 9 elements

In the above example, you can observe that the list "letters" has only 8 elements. As a result, the series contains only 8 elements. On the other hand, the list "numbers" has 9 elements. Hence, when we assign the "numbers" list to the index attribute of the series, the program runs into a ValueError exception.

Suggested Reading: If you are into machine learning, you can read this MLFlow tutorial with code examples. You might also like this article on clustering mixed data types in Python.

Create an Index in a Pandas Series Using the set_axis() Method

Instead of using the index attribute, we can use the set_axis() method to create an index in a pandas series. 

The set_axis() Method

The set_axis() method has the following syntax.

Series.set_axis(labels, *, axis=0, inplace=_NoDefault.no_default, copy=_NoDefault.no_default)

Here, 

  • The labels parameter takes a list-like object containing the index values. You can also pass an Index object to the labels parameter. Whatever object is passed to the labels parameter must have the same number of elements as the series on which the set_axis() method is invoked.
  • The axis parameter decides whether we want to create the index for rows or columns. As a Series has only one axis, the axis parameter is unused and defaults to 0.
  • After creating a new index, the set_axis() method returns a new Series object. If you want to modify the original Series object, you can set the inplace parameter to True. 
  • The copy parameter is used to decide whether to make a copy of the underlying data instead of modifying the original series. By default, it is True. 

To create an index using the set_axis() method, we will invoke this method on the original series object. We will pass a list containing the new index values to the set_axis() method as an input argument. After execution, the set_axis() method will return a new series having a modified index. You can observe this in the following example.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series=series.set_axis(labels=numbers)
print("The series is:")
print(series)

Output:

The series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In this example, we have first created a series containing 8 elements. Then, we used the set_axis() method to assign new indices to the elements in the series. You can observe that the set_axis() method returns a new series. Hence, the original series isn’t modified. To assign new indices to the original series instead of creating a new one, you can create the index in place.

Create Index Inplace in a Pandas Series

To create an index inplace in a pandas series, you can assign the new index to the index attribute of the series object as shown in the following example.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.index=numbers
print("The series is:")
print(series)

Output:

The series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

You can also use the set_axis() method to create an index inplace in a series. For this, you can pass the list containing the new index values to the set_axis() method and set the inplace parameter to True while invoking the set_axis() method on the original series object. After execution, you will get the modified series object as shown below.

import pandas as pd
import numpy as np
letters=["a","b","c","ab","abc","abcd","bc","d"]
numbers=[3,23,11,14,16,2,45,65]
series=pd.Series(letters)
series.set_axis(labels=numbers,inplace=True)
print("The series is:")
print(series)

Output:

The series is:
3        a
23       b
11       c
14      ab
16     abc
2     abcd
45      bc
65       d
dtype: object

In this example, we used the set_axis() method to assign new indices to the elements in the series. You can observe that we have set the inplace parameter to True in the set_axis() method. Hence, the new indices are assigned in the original series object itself.

While using the inplace parameter you will get a FutureWarning stating "FutureWarning: Series.set_axis 'inplace' keyword is deprecated and will be removed in a future version. Use obj = obj.set_axis(..., copy=False) instead". It means that the inplace parameter has been deprecated. Hence, if the same code is used in future versions of pandas, the program may run into an error. To avoid this, you can use the copy parameter.

By default, the copy parameter is set to True, so the set_axis() method works on a copy of the underlying data. Following the advice in the warning, you can instead reassign the result of set_axis() and pass copy=False, which relabels the series without copying its data.
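As a minimal sketch of the replacement suggested by the warning (assuming pandas 1.5 or later, where set_axis() accepts the copy parameter):

import pandas as pd

letters = ["a", "b", "c", "ab", "abc", "abcd", "bc", "d"]
numbers = [3, 23, 11, 14, 16, 2, 45, 65]
series = pd.Series(letters)

# Reassign the result; copy=False relabels without copying the underlying data
series = series.set_axis(numbers, copy=False)
print(series)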

Conclusion

In this article, we discussed different ways to create the index in a pandas series in Python. To know more about the pandas module, you can read this article on how to sort a pandas dataframe. You might also like this article on how to drop columns from a pandas dataframe.

The post Create Index in a Pandas Series appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Miroslav Šedivý

Planet Python - Mon, 2022-12-05 08:23

This week we welcome Miroslav Šedivý as our PyDev of the Week! Miro has been a speaker at several different Python conferences. You can also catch up with Miro on his website or by checking out Miro's GitHub Profile.

Let's take some time to get to know Miro better!

Can you tell us a little about yourself (hobbies, education, etc):

My name is Miroslav Šedivý, but most people call me Miro, which allows them to avoid typing some letters they do not have on their keyboard. I was born in Czechoslovakia (yes, I am over 30 now), studied computer sciences in France and Germany, and now I am living in Austria.

Computers and programming were the hobbies that became my occupation. Another hobby, which is difficult to combine professionally with the first one, is human languages. As a Central European fascinated by travel, I speak a bunch of languages and enjoy using them every day.

Apart from this, I love spending time with my family, hiking, biking, fixing OpenStreetMap, camping, and woodworking.

You can find me at https://mas.to/@eumiro and occasionally at some Python events.

Why did you start using Python?

After I had developed a quite complex power forecasting system for wind turbines in Perl using PDL (the Perl equivalent of NumPy), my colleagues started using Python for new projects. It was around 2008, in the age of Python 2.5 with some rumors of Python 3. I got myself a book called “Perl to Python Migration” and tried to wrap my head around the new concepts.

The worst thing I remember from my beginnings was the necessity to "import re" each time I wanted to work with regular expressions. In Perl it was second nature to do almost anything with regular expressions. Not in Python. And this is beautiful about different languages: the way of thinking in them. Just like in French one does not simply count to seventy, nor in Czech say “one beer” (“one” is actually a synonym for “beer”), in Python one can use other tools to dissect strings.

What other programming languages do you know and which is your favorite?

There were some BASIC and Pascal somewhere in the past century, but now my number one is Python, accompanied by Shell. I remember some Perl, Java, C, and PHP from more or less serious projects in the past, and I have played with Go and Rust.

The choice of language usually depends on the task. I also do not use English to speak about food and cuisine, or Polish to comment on a Python code review. I am using Python only for stuff it fits well. Luckily, it fits most of the stuff I am working on.

What projects are you working on now?

Apart from my work, I have some sleeping projects at https://github.com/eumiro, which are currently waiting for another burst of enthusiasm or external input. A day has only 24±1 hours, but I wish it had more.

Which Python libraries are your favorite (core or 3rd party)?

From the standard lib, I am always happy if I can reach into the itertools module and simplify my code. The migration to Python 3 brought pathlib, which is a complete game changer when working with the file system. I am also happy to see some gaps closing between the different modules, so you do not have to import anything else to obtain the Unix timestamp of a datetime object, for instance.

There are plenty of wonderful third-party libraries I have worked with. The most important and complex one is probably pandas.

How did you end up contributing to open source projects?

At my first Python conference, EuroPython 2015 in Bilbao, I met Francesc Alted from the PyTables team, and as an active user of the library I had a long discussion with him. In the end we even did a sprint, and with his help I removed one obsolete exception-raiser from the codebase that had been bugging me for some time.

Later I occasionally contributed to some projects I was using, but then around Christmas 2020 I had quite a lot of free time and I discovered outdated CI/CD configurations in dozens of Python projects, so I started systematically helping them to fix the pipelines and also to modernize their code. This is where one of my talks, “There Are Python 2 Relics In Your Code!”, was born. Python 2 hacks like `int(math.floor(x))` work in Python 3 but they do not make sense in modern code and should be refactored.

What are the top three things you've learned as a contributor?

Make small steps. Even smaller. If you think you've done a great bunch of work and submit a huge PR, the maintainers will have a very difficult time reviewing it and any rebase in an active project will probably stop their interest in your contribution.

Maintainers are humans with jobs paying their bills. Respect their time and energy and do not expect they're here only for you.

Comment on the project. Show how you're using it in your own work to make the maintainers feel their endeavor matters.

Is there anything else you’d like to say?

Please do not use backslashes to deliberately break lines. If there's some line length limit (whether it is 78, 80, 88, 100, or 120), it makes your code more readable. There's virtually always a possibility to reformat your code to respect this limit and avoid horizontal scrolling in editor. Every such formatting allows you to break the line at a suitable position. Breaking lines with backslashes tells the reader “you have to join these lines in your head into a very long one to understand it.”

Finally, I'd like to thank Mike for his engagement and the whole Python community for collaborating on such a marvelous set of products and the whole ecosystem around it. So happy to be a part of it!

Thanks for doing the interview, Miro!

The post PyDev of the Week: Miroslav Šedivý appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Zato Blog: LDAP and Active Directory as Python API Services

Planet Python - Mon, 2022-12-05 05:41

LDAP and Active Directory often play a key role in the management of a company’s network resources, yet it is not always very convenient to query a directory directly using the LDAP syntax and protocol that few people truly specialize in. This is why in this article we are using Zato to offer a REST API on top of directory services, so that API clients can use REST and JSON instead.

Installing Zato

Start off by installing Zato - if you are not sure what to choose, pick the Docker Quickstart option and this will set up a working environment in a few minutes.

Creating connections

Once Zato is running, connections can be easily created in its Dashboard (by default, http://127.0.0.1:8183). Navigate to Connections -> Outgoing -> LDAP ..

.. and then click Create a new connection which will open a form as below:

The same form works for both regular LDAP and Active Directory - in the latter case, make sure that Auth type is set to NTLM.

The most important information is:

  • User credentials
  • Authentication type
  • Server or servers to connect to

Note that if authentication type is not NTLM, user credentials can be provided using the LDAP syntax, e.g. uid=MyUser,ou=users,o=MyOrganization,dc=example,dc=com.

Right after creating a connection be sure to set its password too - the password assigned by default is a randomly generated one.

Pinging

It is always prudent to ping a newly created connection to ensure that all the information entered was correct.

Note that if you have more than one server in a pool then the first available one of them will be pinged - it is the whole pool that is pinged, not a particular part of it.

Active Directory as a REST service

As the first usage example, let’s create a service that will translate JSON queries into LDAP lookups - given a username or email, the service will return basic information about the person’s account, such as first and last name.

Note that the conn object returned by client.get() below is capable of running any commands that its underlying Python library offers - in this case we are only using searches but any other operation can also be used, e.g. add or modify as well.

# -*- coding: utf-8 -*-

# stdlib
from json import loads

# Bunch
from bunch import bunchify

# Zato
from zato.server.service import Service

# Where in the directory we expect to find the user
search_base = 'cn=users, dc=example, dc=com'

# On input, we are looking users up by either username or email
search_filter = '(&(|(uid={user_info})(mail={user_info})))'

# On output, we are interested in username, first name, last name and the person's email
query_attributes = ['uid', 'givenName', 'sn', 'mail']

class ADService(Service):
    """ Looks up users in AD by their username or email.
    """
    class SimpleIO:
        input_required = 'user_info'
        output_optional = 'message', 'username', 'first_name', 'last_name', 'email'
        response_elem = None
        skip_empty_keys = True

    def handle(self):

        # Connection name to use
        conn_name = 'My AD Connection'

        # Get a handle to the connection pool
        with self.out.ldap[conn_name].conn.client() as client:

            # Get a handle to a particular connection
            with client.get() as conn:

                # Build a filter to find a user by
                user_info = self.request.input['user_info']
                user_filter = search_filter.format(user_info=user_info)

                # Returns True if query succeeds and has any information on output
                if conn.search(search_base, user_filter, attributes=query_attributes):

                    # This is where the actual response can be found
                    response = conn.entries

                    # In this case, we expect at most one user matching input criteria
                    entry = response[0]

                    # Convert it to JSON for easier handling ..
                    entry = entry.entry_to_json()

                    # .. and load it from JSON to a Python dict
                    entry = loads(entry)

                    # Convert to a Bunch instance to get dot access to dictionary keys
                    entry = bunchify(entry['attributes'])

                    # Now, actually produce a JSON response. For simplicity's sake,
                    # assume that users have only one of email or other attributes.
                    self.response.payload.message = 'User found'
                    self.response.payload.username = entry.uid[0]
                    self.response.payload.first_name = entry.givenName[0]
                    self.response.payload.last_name = entry.sn[0]
                    self.response.payload.email = entry.mail[0]

                else:
                    # No business response = no such user found
                    self.response.payload.message = 'No such user'
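As a side note, write operations go through the same conn handle. The following is only a minimal, untested sketch based on the underlying ldap3 API - the DN and attribute values are made up purely for illustration:

from ldap3 import MODIFY_REPLACE

# Inside the same "with client.get() as conn:" block as above ..

# .. add a new entry (hypothetical DN and attributes)
conn.add('uid=new.user,cn=users,dc=example,dc=com',
    attributes={'objectClass': ['inetOrgPerson'], 'uid': 'new.user',
                'givenName': 'New', 'sn': 'User', 'mail': 'new.user@example.com'})

# .. or modify an existing one
conn.modify('uid=new.user,cn=users,dc=example,dc=com',
    {'mail': [(MODIFY_REPLACE, ['other.address@example.com'])]})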

After creating a REST channel, we can invoke the service from command line, thus confirming that we can offer the directory as a REST service:

$ curl "localhost:11223/api/get-user?user_info=MyOrganization\\MyUser" ; echo { "message": "User found", "username": "MyOrganization\\MyUser", "first_name": "First", "last_name": "Last", "email": "address@example.com" } $ Next steps
  • Start the tutorial to learn more technical details about Zato, including its architecture, installation and usage. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single and consistent API to its callers.

  • Check more resources for developers and screenshots.

  • Para aprender más sobre las integraciones de Zato y API en español, haga clic aquí

Categories: FLOSS Project Planets

Salsa Digital Drupal-Related Articles: GovCMS Mega Meetup wrap up

Planet Drupal - Mon, 2022-12-05 02:59
December 2022 GovCMS Mega Meetup
It was fantastic to be part of the December 2022 GovCMS Mega Meetup, the first in-person GovCMS event in 3 years. It was a great community turnout, and attendees were clearly very engaged. John Sheridan kicked the day off by announcing a major milestone: over a billion page views since GovCMS was launched in 2015. Next, Sharyn Clarkson took to the stage and presented on the ‘great spike’, showing attendees stats on some of the traffic GovCMS sites had during the pandemic. She focused most of her presentation on the GovCMS Roadmap. Two of the major points on the roadmap are: Rules as Code (RaC) and CivicTheme for GovCMS. RaC is a space we’ve actively been working in over the past year or so. We were particularly thrilled to have CivicTheme highlighted.
Categories: FLOSS Project Planets

Podcast.__init__: Declarative Machine Learning For High Performance Deep Learning Models With Predibase

Planet Python - Sun, 2022-12-04 19:42
Preamble

This is a cross-over episode from our new show The Machine Learning Podcast, the show about going from idea to production with machine learning.

Summary

Deep learning is a revolutionary category of machine learning that accelerates our ability to build powerful inference models. Along with that power comes a great deal of complexity in determining what neural architectures are best suited to a given task, engineering features, scaling computation, etc. Predibase is building on the successes of the Ludwig framework for declarative deep learning and Horovod for horizontally distributing model training. In this episode CTO and co-founder of Predibase, Travis Addair, explains how they are reducing the burden of model development even further with their managed service for declarative and low-code ML and how they are integrating with the growing ecosystem of solutions for the full ML lifecycle.

Announcements
  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host is Tobias Macey and today I’m interviewing Travis Addair about Predibase, a low-code platform for building ML models in a declarative format
Interview
  • Introduction
  • How did you get involved in machine learning?
  • Can you describe what Predibase is and the story behind it?
  • Who is your target audience and how does that focus influence your user experience and feature development priorities?
  • How would you describe the semantic differences between your chosen terminology of "declarative ML" and the "autoML" nomenclature that many projects and products have adopted?
    • Another platform that launched recently with a promise of "declarative ML" is Continual. How would you characterize your relative strengths?
  • Can you describe how the Predibase platform is implemented?
    • How have the design and goals of the product changed as you worked through the initial implementation and started working with early customers?
    • The operational aspects of the ML lifecycle are still fairly nascent. How have you thought about the boundaries for your product to avoid getting drawn into scope creep while providing a happy path to delivery?
  • Ludwig is a core element of your platform. What are the other capabilities that you are layering around and on top of it to build a differentiated product?
  • In addition to the existing interfaces for Ludwig you created a new language in the form of PQL. What was the motivation for that decision?
    • How did you approach the semantic and syntactic design of the dialect?
    • What is your vision for PQL in the space of "declarative ML" that you are working to define?
  • Can you describe the available workflows for an individual or team that is using Predibase for prototyping and validating an ML model?
    • Once a model has been deemed satisfactory, what is the path to production?
  • How are you approaching governance and sustainability of Ludwig and Horovod while balancing your reliance on them in Predibase?
  • What are some of the notable investments/improvements that you have made in Ludwig during your work of building Predibase?
  • What are the most interesting, innovative, or unexpected ways that you have seen Predibase used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Predibase?
  • When is Predibase the wrong choice?
  • What do you have planned for the future of Predibase?
Contact Info
Parting Question
  • From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
  • Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links

The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0

Categories: FLOSS Project Planets

digiKam 7.9.0 is released

Planet KDE - Sun, 2022-12-04 19:00
Dear digiKam fans and users, after four months of active maintenance and another bug triage, the digiKam team is proud to present version 7.9.0 of its open source digital photo manager. See below the list of the most important features coming with this release.
Bundles Internal Component Updates
As with the previous releases, we take care of upgrading the internal components of the Bundles. The Microsoft Windows installer, Apple macOS package, and Linux AppImage binaries now host:
Categories: FLOSS Project Planets

Guest Post: OpenUK Awards 2022 Sustainability

Planet KDE - Sun, 2022-12-04 19:00

This is a guest post by Jonathan Esk-Riddell for the KDE Eco blog about the OpenUK Awards.

OpenUK is an advocacy organisation for open tech (software, hardware and data) in the UK. We run various activities and I have had the privilege of hosting the award ceremony for the last few years.

Last year at COP26 in Glasgow I announced KDE Eco, the KDE project to measure and certify apps as energy efficient. For those reading this who aren't familiar, KDE is an open source community making apps for Linux and other platforms. KDE Eco has two parts: the FOSS Energy Efficiency Project, developing tools to improve energy efficiency in free and open source software development, and Blauer Engel For FOSS, working with the German Environment Agency to create eco-certification for desktop software with the Blauer Engel label.

This year our ceremony was at the House of Lords in the UK parliament. The host was Francis Maude, a member of the House of Lords who, as a minister a decade ago, created gov.uk, a single website for many government services with policies for open data and open formats.

At the House of Lords I gave an update on KDE Eco on St Andrews day. I was pleased to talk about how Okular, our PDF and docs reader, had become the first software product to receive the Blue Angel eco-label.

The connection is that one of the awards we present at the OpenUK Awards is for sustainability.

The nominations on the shortlist for sustainability award were:

Carbon Aware SDK, Szymon Duchniewicz, an SDK to enable the creation of carbon aware applications, applications that do more when the electricity is clean and do less when the electricity is dirty, to help organisations achieve Net Zero for carbon emissions.

Devtank, a company focused on sustainability and reducing our customers' carbon footprint to Net Zero using Open Source licensed solutions. We are delighted to be delivering energy management and control systems to businesses and local authorities, nationwide. If a potential customer is looking to decarbonise their business and monitor environmental performance, then our Open Smart Monitor ENV01 is the recommended product.

Fergus Kidd, Carbon CI Pipeline Tooling, provides a feasible way to measure carbon generated by cloud infrastructure as part of the software development lifecycle.

Scores from our judges were high for all of these but the final trophy went to Carbon Aware SDK by Szymon Duchniewicz. Congratulations to Szymon and all the nominees.

The OpenUK Awards 2022 at the House of Lords

The work of KDE is made possible thanks to the contributions from KDE Community members, donors and corporations that support us. Every individual counts, and every commitment, large or small, is a commitment to Free Software. Head to the KDE's End of Year fundraiser page and donate now.

Categories: FLOSS Project Planets

Brian Okken: Testing with Python 3.12

Planet Python - Sun, 2022-12-04 19:00
Python 3.12.0a2 is out. So now may be a great time to get your projects to start testing against 3.12.
Note about alpha releases of Python
This is from the same link as above: “During the alpha phase, features may be added up until the start of the beta phase (2023-05-08) and, if necessary, may be modified or deleted up until the release candidate phase (2023-07-31). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Categories: FLOSS Project Planets

Community maintained images for toolbox (and distrobox)

Planet KDE - Sun, 2022-12-04 18:00

In this post I will discuss how we made community maintained container images for common Linux distributions available for use with toolbox (and distrobox) and why we can not call them “official”.

What is toolbox (or toolbx)?

But first, let’s start with a bit of context. On image based Linux distributions (such as Fedora Silverblue, Fedora Kinoite, Fedora CoreOS, etc.), it is not practical to install random packages the way you may be used to doing on classic package based Linux distributions. You are expected to run applications in containers, either via Flatpak for graphical applications, or via podman for command line ones.

While you can directly manage your own custom container images and environment configurations, it is not useful to have everyone rediscover what to do thus a new tool has been created to make that easier: toolbox (or toolbx) (containers/toolbox on GitHub).

Toolbox lets you easily create mutable and persistent environments inside containers that are well integrated with your host system.

Why do we need other images?

Toolbox needs a few things to be available in the container image to be able to provide a good user experience and integration with the host system (see details in the Distro support page).

The current version of toolbox primarily supports a Fedora based environment via the fedora-toolbox container image that includes all the required tools. There is also a RHEL 8 image based on UBI available.

If you wanted to use toolbox with another Linux distribution, you had to make your own custom container image and to make sure to include all the required tools.

Introducing toolbx-images

Together with some other folks from the community, we have setup a community maintained repository so that we can share the maintenance of container images designed to be used with toolbox.

The toolbx-images repository is hosted on GitHub and the container images are hosted in the quay.io/toolbx-images org on Quay.io. The full instructions on how to use them are available in the README. The images are rebuilt and updated weekly (at minimum). Everything is public and open on GitHub: the image builds happen via GitHub Action runs.

We now have images for AlmaLinux, Alpine Linux, Arch Linux, CentOS Stream, Debian, openSUSE, RHEL, Rocky Linux and Ubuntu. It’s also really easy to add more.

See also toolbox#1019 for historical details.

What about distrobox?

Distrobox is another tool very similar to toolbox. One of its advantages is that it can directly use any Linux distribution container image as a base. But in order to do that, it needs to set up the environment in the container the first time it is created.

Distrobox is not included in Fedora Silverblue and Fedora Kinoite by default but you can easily install it either by overlaying the RPM package with rpm-ostree (rpm-ostree install distrobox) or by installing it manually in your home directory via the official instructions.

You should be able to directly use the same container images that we are making for toolbox with distrobox to reduce the setup time for each newly created container. I’ve started a discussion about that in distrobox#544.

Why are those images not official?

To answer that question, we have to answer another one: What makes a container image official?

In my opinion, a container image is official if it is provided directly by the Linux distribution it is based on, maintained by developers or users of that Linux distribution and hosted on infrastructure validated by that Linux distribution.

Right now, as far as I know, only Fedora is building, maintaining and distributing a container image purposely made for toolbox so there is only one official image.

If you want to have an official image for toolbox for your Linux distribution, then please reach out to your maintainers or developers and suggest or contribute the necessary work.

Conclusion

In the meantime, feel free to join us and help us provide as many community maintained images as possible.

Categories: FLOSS Project Planets

Status of the 15-Minute Bug Initiative

Planet KDE - Sun, 2022-12-04 17:36

It’s been almost a year since I announced the 15-Minute Bug Initiative for Plasma. In a nutshell, this initiative proposed to identify and prioritize fixing bugs you can find “within the first 15-minutes of using the system” that make Plasma look bad and feel fundamentally unstable and broken.

This initiative has been a huge success so far! We started out with 100 bugs, and 11 months later we’re down to 47! But it’s even better than that; more bugs were added to the list over time as new issues were discovered (or created as a result of regressions), so the fact that we’re at 47 today means that a lot more than 53 bugs have been fixed. How many more? Well, the total list of 15-minute bugs fixed stands at 95 today!

This means that in total, there have been 142 15-minute bugs, and we’ve fixed 95 of them, for a fix rate of 67%. That’s not too shabby!

There’s more to do, of course. The remaining 47 bugs are some of the more challenging ones, and many are quite egregiously bad. I expect the fix rate to slow as the list is reduced mostly to issues beyond the capabilities or time budgets of volunteers. That’s one of the reasons why the KDE e.V. is looking to hire a Software Platform Engineer; in addition to other responsibilities, the person we select will be working on some of these bugs. Hiring someone technically skilled enough to consistently fix these complex bugs won’t be cheap, and if you’d like to help KDE sustainably afford that cost, please consider donating to our end-of-year fundraiser! It really does help. Thanks for being awesome!

Categories: FLOSS Project Planets

Mike Herchel's Blog: Using ECA to Send Emails When Creating Nodes

Planet Drupal - Sun, 2022-12-04 10:00
Using ECA to Send Emails When Creating Nodes mherchel Sun, 12/04/2022 - 10:00
Categories: FLOSS Project Planets

John Ludhi/nbshare.io: ERROR Could not find a version that satisfies the requirement numpy==1 22 3

Planet Python - Sun, 2022-12-04 02:39
ERROR: Could not find a version that satisfies the requirement numpy==1.22.3

You might run into the following error if you try to install numpy version 1.22+.
Most often the reason is a too-old Python version, as shown below.

In [2]: pip --version
pip 22.2.2 from /home/anaconda3/envs/py373/lib/python3.7/site-packages/pip (python 3.7)
Note: you may need to restart the kernel to use updated packages.

In [7]: !python --version
Python 3.7.3

In [8]: !pip install numpy==1.22.3
ERROR: Ignored the following versions that require a different python version: 1.22.0 Requires-Python >=3.8; 1.22.0rc1 Requires-Python >=3.8; 1.22.0rc2 Requires-Python >=3.8; 1.22.0rc3 Requires-Python >=3.8; 1.22.1 Requires-Python >=3.8; 1.22.2 Requires-Python >=3.8; 1.22.3 Requires-Python >=3.8; 1.22.4 Requires-Python >=3.8; 1.23.0 Requires-Python >=3.8; 1.23.0rc1 Requires-Python >=3.8; 1.23.0rc2 Requires-Python >=3.8; 1.23.0rc3 Requires-Python >=3.8; 1.23.1 Requires-Python >=3.8; 1.23.2 Requires-Python >=3.8; 1.23.3 Requires-Python >=3.8; 1.23.4 Requires-Python >=3.8; 1.23.5 Requires-Python >=3.8; 1.24.0rc1 Requires-Python >=3.8
ERROR: Could not find a version that satisfies the requirement numpy==1.22.3 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0rc1, 1.20.0rc2, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0rc1, 1.21.0rc2, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.21.6)
ERROR: No matching distribution found for numpy==1.22.3

Install Python 3.8 or newer.

In [1]: !python --version
Python 3.8.0

In [14]: !pip install numpy==1.22.3
Collecting numpy==1.22.3
  Using cached numpy-1.22.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
Installing collected packages: numpy
Successfully installed numpy-1.22.3
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Categories: FLOSS Project Planets

John Ludhi/nbshare.io: Understand Python Slicing

Planet Python - Sat, 2022-12-03 23:39
Understand Python Slicing

Python slicing is used to slice lists or tuples. Let us learn Python slicing through examples.

Let us declare a Python list.

In [1]: x = [1,2,3,4,5,6,7,8,9]

Python slicing can be used in two ways...
x[start:stop:step] -- Start through stop-1
x[slice(start,stop,step)] -- Start through stop-1

In [2]: x[0:1]
Out[2]: [1]

In [3]: x[0:1:1]
Out[3]: [1]

In [4]: x[slice(0,1)]
Out[4]: [1]

In [5]: x[slice(0,1,1)]
Out[5]: [1]

Note step parameter is optional, by default it is 1.

Python slice all but first element
In [6]: x[1:]
Out[6]: [2, 3, 4, 5, 6, 7, 8, 9]
Python slice all but last element
In [7]: x[:8]
Out[7]: [1, 2, 3, 4, 5, 6, 7, 8]
Python slice and copy the whole list
In [8]: x[:]
Out[8]: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Python slice every other element in the list

The below command means start from index 0 and extract every 2nd element, i.e. indices 0, 0+2=2, 2+2=4, 4+2=6 and 6+2=8.

Note the two colons in the below syntax. There is nothing between the 1st and 2nd colon, which means everything from the start to the end of the list.

In [9]: x[0::2]
Out[9]: [1, 3, 5, 7, 9]
Python Slicing - Negative and Positive step values

Negative step of -1 means go from right to left
Positive step of 1 means go from left to right

Python slicing using negative indices

A negative index of -1 means the last element, -2 means the second last element, and so on.

In [10]: x[-1] Out[10]: 9

The below syntax will print nothing: the slice starts at index -1, which is the last element in the list, and with the default step of 1 it moves to the right, so it can never reach the "stop" position of -3. To fix this we will have to give a step of -1.

In [11]: x[-1:-3] Out[11]: []

As we can see, the index starts at -1, the last element, then goes to -2 because the step is -1, and stops before -3 because the stop index is excluded.

In [12]: x[-1:-3:-1]
Out[12]: [9, 8]
Python Reverse all the items in the array using Slicing
In [13]: x[::-1]
Out[13]: [9, 8, 7, 6, 5, 4, 3, 2, 1]

Note that in the above, since the step is -1, Python uses default start and stop values equivalent to -1 and -10, so we can re-write the above as shown below.

In [14]: x[-1:-10:-1] Out[14]: [9, 8, 7, 6, 5, 4, 3, 2, 1]

Print the last 3 elements. The below syntax means from index -3 to the end of the list, with the default step of 1 (left to right).

In [15]: x[-3:] Out[15]: [7, 8, 9]

The below syntax means start at index -3, then go to -5 (-3-2), and so on all the way to the start of the list (right to left), because the step is -2.

In [16]: x[-3::-2]
Out[16]: [7, 5, 3, 1]
Python Slicing - Reverse first two items
In [17]: x[1::-1]
Out[17]: [2, 1]

Let us see how the above slicing syntax works.
The index will start at...
index 1, then index 0 (1-1).
Note the step of -1 tells it to go from right to left.

In [18]: x[1::-1]
Out[18]: [2, 1]
Python Everything Except the last two items - Reversed
In [19]: x[-3::-1]
Out[19]: [7, 6, 5, 4, 3, 2, 1]

Note that all of the above can also be done with the slice() method. The difference is that instead of ":" (colon) you use None to indicate the start or end of the list index.

In [20]: x[slice(-3,None,-1)] Out[20]: [7, 6, 5, 4, 3, 2, 1]
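The same slicing syntax works on other sequence types as well, not just lists. A quick sketch with a tuple and a string:

t = (1, 2, 3, 4, 5)
t[::-1]      # (5, 4, 3, 2, 1)

s = "python"
s[2:4]       # 'th'
s[::-1]      # 'nohtyp'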
Categories: FLOSS Project Planets

Peter Hoffmann: beautiful leaflet markers with folium and fontawesome

Planet Python - Sat, 2022-12-03 19:00

Folium is a Python library that allows users to create and display interactive maps. The library uses the Leaflet.js library and is capable of creating powerful and visually appealing maps. Folium can be used to visualize geographical data by adding markers, polygons, heatmaps, and other geographical elements onto a map. The library is easy to use and offers a range of options for customizing the maps and the elements that are displayed on them.

Minimal marker

The minimal example just adds a marker at a specific location:

import folium

loc = [45.957916666667, 7.8123888888889]
m = folium.Map(location=loc, zoom_start=13)
folium.Marker(
    location=loc
).add_to(m)
m

Marker with a bootstrap icon

Markers can be customized through providing a Icon instance. As a default you can use bootstrap glyphicons that provide over 250 glyphs for free:

In addition you can colorize the marker. Available color names are red, blue, green, purple, orange, darkred, lightred, beige, darkblue, darkgreen, cadetblue, darkpurple, white, pink, lightblue, lightgreen, gray, black and lightgray.

m = folium.Map(location=loc)
folium.Marker(
    location=loc,
    icon=folium.Icon(icon="home", color="purple", icon_color="blue")
).add_to(m)

Marker with a fonteawesome icon

Font Awesome is a collection of scalable vector icons that can be customized and used in a variety of ways, such as in graphic design projects, websites, and applications. The icons are available in different styles, including Solid, Regular, and Brands, and can be easily integrated by adding the fa prefix.

m = folium.Map(location=loc)
folium.Marker(
    location=loc,
    icon=folium.Icon(icon="tents", prefix='fa')
).add_to(m)

Extended Marker Customization with BeautifulIcons

The Leaflet Beautiful Icons plugin is a lightweight plugin that adds colorful iconic markers without images for Leaflet, giving full control of style to the end user (i.e. unlimited colors and many more...).

It is exposed to folium via the BeautifyIcon plugin.

Supported icon shapes are circle, circle-dot, doughnut, rectangle, rectangle-dot and marker, and the color can be either one of the predefined color names or any valid hex code.

import folium.plugins as plugins

folium.Marker(
    location=loc,
    icon=plugins.BeautifyIcon(
        icon="tent",
        icon_shape="circle",
        border_color='purple',
        text_color="#007799",
        background_color='yellow'
    )
).add_to(m)
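To view any of these maps outside a notebook, the Map object can be written to a standalone HTML file (the file name here is just an example):

# Render the map, including all added markers, to a self-contained HTML file
m.save("marker_map.html")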

Categories: FLOSS Project Planets

Ben Hutchings: Debian LTS work, November 2022

Planet Debian - Sat, 2022-12-03 17:57

In November I was assigned 24 hours by Freexian's Debian LTS initiative. I worked 9 of those hours and will carry over the remainder.

I updated the linux (4.19) package to the latest stable update, but didn't upload it. I attended the monthly LTS team meeting.

Categories: FLOSS Project Planets

Vincent Bernat: Broken commit diff on Cisco IOS XR

Planet Debian - Sat, 2022-12-03 10:40

TL;DR

Never trust show commit changes diff on Cisco IOS XR.

Cisco IOS XR is the operating system running on the Cisco ASR, NCS, and 8000 routers. Compared to Cisco IOS, it features a candidate configuration and a running configuration. In configuration mode, you can modify the first one and issue the commit command to apply it to the running configuration.1 This is a common concept for many NOS.

Before committing the candidate configuration to the running configuration, you may want to check the changes that have accumulated until now. That’s where the show commit changes diff command2 comes up. Its goal is to show the difference between the running configuration (show running-configuration) and the candidate configuration (show configuration merge). How hard can it be?

Let’s put an interface down on IOS XR 7.6.2 (released in August 2022):

RP/0/RP0/CPU0:router(config)#int Hu0/1/0/1 shut
RP/0/RP0/CPU0:router(config)#show commit changes diff
Wed Nov 23 11:08:30.275 CET
Building configuration...
!! IOS XR Configuration 7.6.2
+ interface HundredGigE0/1/0/1
+  shutdown
 !
end

The + sign before interface HundredGigE0/1/0/1 makes it look like you did create a new interface. Maybe there was a typo? No, the diff is just broken. If you look at the candidate configuration, everything is like you expect:

RP/0/RP0/CPU0:router(config)#show configuration merge int Hu0/1/0/1
Wed Nov 23 11:08:43.360 CET
interface HundredGigE0/1/0/1
 description PNI: (some description)
 bundle id 4000 mode active
 lldp
  receive disable
  transmit disable
 !
 shutdown
 load-interval 30

Here is a more problematic example on IOS XR 7.2.2 (released in January 2021). We want to unconfigure three interfaces:

RP/0/RP0/CPU0:router(config)#no int GigabitEthernet 0/0/0/5
RP/0/RP0/CPU0:router(config)#int TenGigE 0/0/0/5 shut
RP/0/RP0/CPU0:router(config)#no int TenGigE 0/0/0/28
RP/0/RP0/CPU0:router(config)#int TenGigE 0/0/0/28 shut
RP/0/RP0/CPU0:router(config)#no int TenGigE 0/0/0/29
RP/0/RP0/CPU0:router(config)#int TenGigE 0/0/0/29 shut
RP/0/RP0/CPU0:router(config)#show commit changes diff
Mon Nov 7 15:07:22.990 CET
Building configuration...
!! IOS XR Configuration 7.2.2
- interface GigabitEthernet0/0/0/5
-  shutdown
 !
+ interface TenGigE0/0/0/5
+  shutdown
 !
  interface TenGigE0/0/0/28
-  description Trunk vers ahp2a-1.nra
-  bundle id 2 mode active
 !
end

The two first commands are correctly represented by the first two chunks of the diff: we remove GigabitEthernet0/0/0/5 and create TenGigE0/0/0/5. The two next commands are also correctly represented by the last chunk of the diff. TenGigE0/0/0/28 was already shut down, so it is expected that only description and bundle id are removed. However, the diff command forgets about the modifications for TenGigE0/0/0/29. The diff should include a chunk similar to the last one.

RP/0/RP0/CPU0:router(config)#show run int TenGigE 0/0/0/29
Mon Nov 7 15:07:43.571 CET
interface TenGigE0/0/0/29
 description Trunk to other router
 bundle id 2 mode active
 shutdown
!
RP/0/RP0/CPU0:router(config)#show configuration merge int TenGigE 0/0/0/29
Mon Nov 7 15:07:53.584 CET
interface TenGigE0/0/0/29
 shutdown
!

How can the diff be correct for TenGigE0/0/0/28 but incorrect for TenGigE0/0/0/29 while they have the same configuration? How can you trust the diff command if it forgets part of the configuration?

Do you remember the last time you ran an Ansible playbook and discovered the whole router ospf block disappeared without a warning? If you use automation tools, you should check how the diff is assembled. Automation tools should build it from the result of show running-config and show configuration merge. This is what NAPALM does. This is not what the cisco.iosxr collection for Ansible does.
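As an illustration of that approach, here is a minimal sketch of a NAPALM-based diff - the hostname and credentials are placeholders, and the exact behaviour should be verified against your IOS XR version:

from napalm import get_network_driver

# Placeholders - replace with real connection details
driver = get_network_driver("iosxr")
device = driver(hostname="router.example.net", username="admin", password="secret")
device.open()

# Load a candidate change and let NAPALM assemble the diff,
# based on the running and merged candidate configurations as noted above
device.load_merge_candidate(config="interface HundredGigE0/1/0/1\n shutdown")
print(device.compare_config())

device.discard_config()
device.close()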

The problem is not limited to the interface directives. You can get similar issues for other parts of the configuration. For example, here is what we get when removing inactive BGP neighbors on IOS XR 7.2.2:

RP/0/RP0/CPU0:router(config)#router bgp 65400
RP/0/RP0/CPU0:router(config-bgp)#vrf public
RP/0/RP0/CPU0:router(config-bgp-vrf)#no neighbor 217.29.66.1
RP/0/RP0/CPU0:router(config-bgp-vrf)#no neighbor 217.29.66.75
RP/0/RP0/CPU0:router(config-bgp-vrf)#no neighbor 217.29.66.110
RP/0/RP0/CPU0:router(config-bgp-vrf)#no neighbor 217.29.66.112
RP/0/RP0/CPU0:router(config-bgp-vrf)#no neighbor 217.29.66.158
RP/0/RP0/CPU0:router(config-bgp-vrf)#show commit changes diff
Tue Aug 2 13:58:02.536 CEST
Building configuration...
!! IOS XR Configuration 7.2.2
  router bgp 65400
   vrf public
-   neighbor 217.29.66.1
-    remote-as 16004
-    use neighbor-group MIX_IPV4_PUBLIC
-    description MIX: MIX-IT
   !
-   neighbor 217.29.66.75
-    remote-as 49367
-    use neighbor-group MIX_IPV4_PUBLIC
   !
-   neighbor 217.29.67.10
-    remote-as 19679
   !
-   neighbor 217.29.67.15
-   neighbor 217.29.66.112
-    remote-as 8075
-    use neighbor-group MIX_IPV4_PUBLIC
-    description MIX: Microsoft
-    address-family ipv4 unicast
-     maximum-prefix 1500 95 restart 5
    !
   !
-   neighbor 217.29.66.158
-    remote-as 24482
-    use neighbor-group MIX_IPV4_PUBLIC
-    description MIX: SG.GS
-    address-family ipv4 unicast
    !
   !
  !
 !
end

The only correct chunk is for neighbor 217.29.66.112. All the others are missing some of the removed lines. 217.29.67.15 is even missing all of them. How bad is the code providing such a diff?

I could go all day with examples such as these. Cisco TAC is happy to open a case in DDTS, their bug tracker, to fix specific occurrences of this bug.3 However, I fail to understand why the XR team is not just providing the diff between show run and show configuration merge. The output would always be correct! 🙄

  1. IOS XR has several limitations. The most inconvenient one is the inability to change the AS number in the router bgp directive. Such a limitation is a great pain for both operations and automation. ↩︎

  2. This command could have been just show commit, as show commit changes diff is the only valid command you can execute from this point. Starting from IOS XR 7.5.1, show commit changes diff precise is also a valid command. However, I have failed to find any documentation about it and it seems to provide the same output as show commit changes diff. That’s how clunky IOS XR can be. ↩︎

  3. See CSCwa26251 as an example of a fix for something I reported earlier this year. You need a valid Cisco support contract to be able to see its content. ↩︎

Categories: FLOSS Project Planets

KDE Dev-Vlog 5: Dolphin's New Selection Mode

Planet KDE - Sat, 2022-12-03 06:51

Dolphin 22.12 is going to be released in a few days, so it is high time that I report on its big new feature which I have implemented: the selection mode. In this light-hearted video I will present it alongside problems whose solutions have not been implemented yet.

The video has English subtitles.

Categories: FLOSS Project Planets

October/November in KDE Itinerary

Planet KDE - Sat, 2022-12-03 05:15

Since the last update two months ago KDE Itinerary got a UI refresh, improved station maps and support for a new European train ticket standard, and there’s a new Nextcloud itinerary workflow app.

New Features Nextcloud Workflow

There’s a new Nextcloud workflow app making use of our travel document extractor engine. This allows configuring ways to automatically extract travel documents given certain criteria, add the resulting information to your calendar and get notifications about that.

Nextcloud itinerary workflow setup.

Also checkout the screencasts showing this in action.

UI Refresh

Like many other Plasma Mobile apps, KDE Itinerary has been ported to the new Kirigami “Mobile Forms” component for the reservation details and settings pages.

As a result of this a lot of functionality that so far was found in the context menu of the details pages or on separate sub-pages is now shown inline at the bottom of the details pages. That helps with discoverability, and makes things like attached documents much easier to reach.

Train ticket page with inline context actions and attached documents.

This change landed shortly after branching for the 22.12 release.

Staircase Navigation

The indoor map we use for train stations so far could either navigate between floor levels by clicking on individual stairs or selecting the floor for an elevator. In some buildings stairs aren’t mapped as individual ways though, but as a multi-level stairwell area. We can now handle this as well, offering the same floor level selector as for elevators.

Floor selector when clicking on a staircase area.

To make this discoverable, staircase areas now also have a corresponding icon.

Working out map modelling details like this has been helped a lot by collaboration with others working on OSM indoor mapping.

Infrastructure Work ERA FCB Support

The first uses of the ERA FCB ticket format have been observed in the wild, so we finally could implement support for those.

The “Flexible Content Barcode” (FCB) of the European Union Agency for Railways (ERA) as defined in TAP TSI Technical Document Annex B.12 is the designated successor for existing international railway tickets in the EU. And as that lengthy name might already suggest, this is by far the most complicated ticket barcode format we have encountered so far. Fortunately ERA published the corresponding ASN.1 specification on our request some time ago; that’s 2000+ lines of code defining hundreds of possible data fields.

While there is plenty of libraries and tools for dealing with ASN.1 data formats, there is little support in those for the “unaligned Packed Encoding Representation (uPER)” variant used here, let alone one that would then also work in combination with the complex FCB structures. So a lot of ground work on this was required as well.

Conceptually FCB is very interesting for us, it’s fully machine readable (unlike its ASCII-art like predecessor RCT2) and the use case of a 3rd party providing additional assistance features for the traveler based on this (ie. the thing we do) was considered as part of the design. On the other hand it looks like it has all possible ticket and tariff variants used anywhere crammed together, in mostly optional data fields.

We still have to see how useful this turns out in reality, that is if the information relevant for us actually gets populated. At least we now have the ability to completely dump the content of FCB tickets, which is also important from a privacy point of view.

ERA FCB tickets like their predecessors occur as payload in the UIC 918.3 container format, our data extractors for that have been extended to at least handle the ERA FCB variants we have seen so far.

OSM Tile Server Upgrade

The server providing OSM raw data tiles for Marble and also Itinerary’s station maps has been migrated to new hardware. The ever-growing OSM database has gotten close to the available SSD storage space on the old system, reaching 895GB.

On the new system we now have 2 TB of NVMe storage, which should hopefully last for a bit.

Fixes & Improvements Travel document extractor
  • Improved HTML to text conversion for extractor scripts using text-based extraction of HTML content.
  • Barcodes encoded as PDF image masks are now decoded as well.
  • Added support for RCT2 “Rail Pass Tickets”, such as Interrail passes.
  • New extractors for Aegean Airlines, Bateliers Arcachon, České dráhy, Italo, Ouigo Spain and PKP.
  • Improved extractor scripts for booking.com, FlixBus, SNCF, Thalys and Vueling tickets.
Indoor map
  • Improved contrast of buildings in the Breeze light style.
  • Show more accessibility related element properties, such as wheelchair lift availability, and availability of information in tactile writing or speech output.
  • Correctly compute hit boxes for labels with a fixed (maximum) text width. This fixes nearby elements sometimes being selected by mistake.
  • Show information about gender neutral/gender segregated restrooms.
  • Show room numbers if no room name is available. This is particularly useful when looking at university or office buildings.
Indoor map of a floor in a university building.

Itinerary app
  • Railjet coach layouts are now also retrieved for stops in Germany.
  • Attached documents are no longer lost or duplicated when merging trip data from multiple sources.
  • Ticket numbers (as opposed to booking references) for train tickets are shown when available. This is for example necessary for connecting to the Renfe onboard WiFi.
  • Fixed date/time input for manually added trips being sometimes off by one day (bug 461963).
  • Fixed barcode scanner not closing after detecting a health certificate.
  • Fixed link color styling in Apple Wallet pass rendering.
  • Fixed driving side information being wrong when living in a country that drives on the left (bug 461438).
  • Improved window layout and size when running on the desktop.
How you can help

More than ever this has been a team effort, and you can be part of this!

Feedback and travel document samples are very much welcome, and there are plenty of other things that can be done without traveling as well. The KDE Itinerary workboard or the more specialized indoor map workboard show what’s on the todo list, and are a good place for collecting new ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Matrix.

Categories: FLOSS Project Planets

This week in KDE: custom tiling

Planet KDE - Fri, 2022-12-02 23:53

KWin got a very cool new feature this week: a built-in advanced tiling system that you can use to set up custom tile layouts and resize multiple adjacent windows at a time by dragging on the gaps between them!

  • Custom tiling!
  • Tile setup and configuration!
  • Pre-made tiling layouts!

This feature is still in its infancy and not designed to completely replicate the workflow of a tiling window manager. But we expect it to grow and advance over time, and the new APIs added for it should also benefit 3rd-party tiling scripts that do want to let you turn KWin into a tiling window manager. Thanks very much to Marco Martin for contributing this work, which will be released in Plasma 5.27!

But there’s much, much more as well!

Other New Features

You can now browse Apple iOS devices using their native afc:// protocol in Dolphin, file dialogs, and other file management tools (Kai Uwe Broulik, kio-extras 23.04. Link):

Konsole has now adopted KHamburgerMenu (Me: Nate Graham, Felix Ernst, and Andrey Butirsky, Konsole 23.04. Link):

As always, if you hate hamburger menus, you’re welcome to use the traditional in-window menubar, which you can bring back with Ctrl+M; it won’t be going anywhere.

By default, Konsole’s tab bar is now located toward the top of the window like in most other apps, rather than at the bottom (me: Nate Graham, Konsole 23.04. Link)

You can now drag an image onto the Color Picker widget to make it calculate the average color for that image and store it in its list of stored colors (Fushan Wen, Plasma 5.27. Link):
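
For the curious, “average color” here is conceptually just the per-channel mean of all pixels; here is a tiny Python illustration of the idea (not the actual widget code, which lives in the Plasma applet itself):

# Toy illustration of averaging an image's color per channel.
# Not the actual Color Picker widget implementation.
from PIL import Image, ImageStat

def average_color(path):
    with Image.open(path) as img:
        r, g, b = ImageStat.Stat(img.convert("RGB")).mean  # per-channel means
    return round(r), round(g), round(b)

# average_color("some_image.png") returns an (R, G, B) tuple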

When a KRunner search matches nothing, you’ll now be given the opportunity to do a web search for the search term (Alexander Lohnau, Plasma 5.27. Link)

Plasma gained support for the Global Shortcuts portal, which allows Flatpak and other sandboxed apps using the portal system to offer a standardized user interface for setting and editing global shortcuts (Aleix Pol Gonzalez, Plasma 5.27. Link)

User Interface Improvements

When you delete the current folder in Dolphin, it now automatically navigates back to the parent folder (Vova Kulik and Méven Car, Dolphin 23.04. Link)

When you launch Discover from the “Uninstall or Manage Add-Ons…” menu item in Kickoff for an installed app, and that app is available in Discover from multiple backends, Discover now always opens showing you the app from the backend it’s actually installed from, so you can immediately click a “Remove” button if your goal in opening Discover was to uninstall the app (Aleix Pol Gonzalez, Plasma 5.26.4. Link)

Speaking of the context menu that contains that action, the first time you right-click on an app in Kickoff to show it, the menu now appears immediately instead of being delayed by a few seconds (David Redondo, Plasma 5.27. Link)

KWin’s “Cascaded” window placement mode has been removed, because now every other window placement mode where it makes sense includes cascading behavior itself! (Natalie Clarius, Plasma 5.27. Link):

The screen chooser dialog you’ll see for Flatpak and Snap apps using the XDG portal system now includes preview thumbnails for each screen or window that you can share (Aleix Pol Gonzalez, Plasma 5.27. Link):

Plasma panels now automatically become thicker as needed when you switch to a Plasma theme whose graphics don’t work in thin panels (Niccolò Venerandi, Plasma 5.27. Link)

Plasma no longer somewhat strangely remembers different thicknesses for each panel in horizontal vs vertical setups; now each panel has one thickness and it keeps that thickness when you change from horizontal to vertical and vice versa (Fushan Wen, Plasma 5.27. Link)

When you manually add your home timezone to the Digital Clock’s timezones list, so that you can change the displayed timezone when traveling and still have your home timezone appear automatically, the extra entry now disappears when you’re actually in your home timezone, where displaying it would be redundant (me: Nate Graham, Plasma 5.27, Link):

The Battery & Brightness widget now considers a battery that’s been charged to its configured charge limit to be fully charged (me: Nate Graham, Plasma 5.27. Link)

The questionably useful “Search For” section in the Places panel has been removed by default to reduce visual clutter. The functionality is still available, and you can of course re-add these items if you like and use them (me: Nate Graham, Frameworks 5.101. Link):

The way the Places Panel looks by default now, with greater relevance.

Significant Bugfixes

(This is a curated list of e.g. HI and VHI priority bugs, Wayland showstoppers, major regressions, etc.)

Plasma no longer crashes in a loop on launch when one of the Qt image reader plugins installed on your system (even one not in use to display the wallpaper) is buggy and crash-prone (Fushan Wen, Plasma 5.26.4. Link)

Scrolling on the language list sheet on the System Settings Region and Language page is no longer almost unusably choppy (me: Nate Graham, Plasma 5.26.5. Link)

When your 3rd-party lock screen theme is broken but the kscreenlocker_greet background process has not crashed, you’ll once again see the fallback lock screen rather than the dreaded “your screen locker is broken” screen (David Redondo, Plasma 5.27. Link)

The Weather widget no longer escapes from its space in the System Tray and overlaps other icons at various icon and panel sizes (Ismael Asensio, Plasma 5.27. Link)

When Night Color is active and the system or KWin is restarted, it now turns on again as expected (Vlad Zahorodnii, Plasma 5.27. Link)

Notifications can now be read using a screen reader (Fushan Wen, Plasma 5.27. Link)

Did a bunch of performance work to speed up the process of drawing UI elements in Plasma and QtQuick-based apps, which should result in faster speed and lower power usage (Arjen Hiemstra, Frameworks 5.101. Link 1 and link 2)

In the Plasma Wayland session, when you drag a window containing QtQuick-based user interface elements to another screen that’s using a different scale factor, the window instantly adjusts itself to display properly according to that screen’s scale factor, with no blurriness or pixelation. It even works when a window is partially on one screen and partially on another! (David Edmundson, Frameworks 5.101. Link 1 and link 2)

Other bug-related information of interest:

Automation & Systematization

Until this point, Plasma Mobile-focused apps have been released using a release schedule called “Plasma Mobile Gear.” Going forward, these apps will be moving to the normal “KDE Gear” release schedule, with “Plasma Mobile Gear” being discontinued to simplify and unify packaging (Link)

Added an autotest for local file size calculation in Filelight (Harald Sitter, Link)

Set an appropriate image for the Automation goal group, which was clearly the most important thing to do (Justin Zobel and me: Nate Graham)

Changes not in KDE that affect KDE

A new Wayland protocol for fractional scaling was merged, which opens the door for Qt and KWin to support it, bringing better fractional scaling visuals and performance to Qt and KDE apps! This work on the Qt and KDE sides is in progress, but not merged yet. Once it is, I’ll be sure to announce it! (Kenny Levinsen, wayland-protocols 1.31. Link).

…And everything else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

KDE’s end-of-year fundraiser is in full swing, so please consider making a donation!

Otherwise, if you’re a developer, check out our 15-Minute Bug Initiative. Working on these issues makes a big difference quickly! And you can have a look at https://community.kde.org/Get_Involved to discover lots of ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

Declassed Art: Declassed Plausible Deniability Toolkit

Planet Python - Fri, 2022-12-02 19:00
How to make your computer look like an innocent toy, used only to play Tux Racer and watch cats on YouTube. Forensics would find data with abnormally high entropy in unused sectors and a few suspicious tweaks in your system, but no explicit evidence of encryption. This may help to avoid rubber-hose cryptanalysis, which is quite likely if you used LUKS, Tomb, Shufflecake, or simply encrypted your files.
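
As a quick illustration of why “abnormally high entropy” stands out: encrypted or random data scores close to the maximum of 8 bits per byte, while zeroed sectors or typical file contents score much lower. A small Python sketch of that measurement (not part of the toolkit itself):

# Shannon entropy in bits per byte: ~8.0 for encrypted/random data,
# much lower for zeroed sectors or ordinary file contents.
import math
import os
from collections import Counter

def shannon_entropy(block):
    if not block:
        return 0.0
    total = len(block)
    counts = Counter(block)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(shannon_entropy(os.urandom(4096)))  # close to 8.0
print(shannon_entropy(b"\x00" * 4096))    # 0.0
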
Categories: FLOSS Project Planets
