Feeds

First Week of Work and School

Planet KDE - Fri, 2023-08-25 04:52

As the first week of work and school comes to an end, I realized that this 100 Days to Offload challenge is harder than predicted. I partly blame the traditional going-back-to-work cold, but I guess I also have less time to spend on fun stuff like writing.

This week has been about cleaning up.

I’ve started to clean up my backlog of foss-north video recordings. I’ve got some 12GB of videos rendered, and I’m not even halfway. For next year we really need to do something about the audio recording situation, but it is what it is and it will have to do.

I’ve also contacted a lawyer to help me clean up some personal stuff that I need to complete, given my new family situation. It is not hard, I just find myself procrastinating instead of doing the paperwork. By paying someone (a lot) I guess I will be more focused on completing the task.

Finally, I’m cleaning out stuff from my office and garage. The office simply needed cleaning. If I intend to keep a collection of keyboards, I probably need to make sure they fit somewhere. But it is getting there. I need to rearrange the stuff on my shelves to make for a nicer video call background though :-)

I’m cleaning the garage to make room for the electrician who will come and install a charging box for my new, all electric, car. More about this in another post – but it will be nice to be able to do three phase charging at home instead of just 2kW.

All in all – a productive week!

Categories: FLOSS Project Planets

Ian Jackson: I cycled to all the villages in alphabetical order

Planet Debian - Thu, 2023-08-24 20:33

This last weekend I completed a bike rides project I started during the first Covid lockdown in 2020:

I’ve cycled to every settlement (and radio observatory) within 20km of my house, in alphabetical order.

Stir crazy

In early 2020, during the first lockdown, I was going a bit stir crazy. Clare said “you’re going very strange, you have to go out and get some exercise”. After a bit of discussion, we came up with this plan: I’d visit all the local villages, in alphabetical order.

Choosing the radius

I decided that I would pick a round number of kilometers, as the crow flies, from my house. 20km seemed about right. 25km would have included Ely, which would have been nice, but it would have added a great many places, all of them quite distant.

Software

I wrote a short Rust program to process OSM data into a list of places to visit, and their distances and bearings.

You can download a tarball of the alphabetical villages scanner. (I haven’t published the git history because it has my house’s GPS coordinates in it, and because I committed the output files from which that location can be derived.)
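The Rust source itself isn't reproduced here, so as a rough, hypothetical sketch (not the author's code), the core distance-and-bearing step over coordinate pairs might look like this in Python, using the standard haversine and forward-azimuth formulas:

```python
import math

def dist_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees) between two points."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)

    # Haversine formula for the as-the-crow-flies distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))

    # Forward azimuth, normalised to 0-360 degrees
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return distance, bearing

# e.g. Cambridge to Ely (approximate coordinates)
print(dist_and_bearing(52.2053, 0.1218, 52.3995, 0.2623))
```

In the real project the coordinates would come from OSM place nodes, ways and relations rather than being hard-coded, with the list then filtered to the 20km radius and sorted alphabetically.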

The Rides

I set off on my first ride, to Aldreth, on Sunday the 31st of May 2020. The final ride collected Yelling, on Saturday the 19th of August 2023.

I did quite a few rides in June and July 2020 - more than one a week. (I’d read the lockdown rules, and although some of the government messaging said you should stay near your house, that wasn’t in the legislation. Of course I didn’t go into any buildings or anything.)

I’m not much of a morning person, so I often set off after lunch. For the longer rides I would usually pack a picnic. Almost all of the rides I did just by myself. There were a handful where I had friends along:

Dry Drayton, which I collected with Clare, at night. I held my bike up so the light shone at the village sign, so we could take a photo of it.

Madingley, Melbourn and Meldreth, which was quite an expedition with my friend Ben. We went out as far as Royston and nearby Barley (both outside my radius and not on my list) mostly just so that my project would have visited Hertfordshire.

The Hemingfords, where I had my friend Matthew along, and we had a very nice pub lunch.

Girton and Wilburton, where I visited friends. Indeed, I stopped off in Wilburton on one or two other occasions.

And, of course, Yelling, for which there were four of us, again with a nice lunch (in Eltisley).

I had relatively little mechanical trouble. My worst ride for this was Exning: I got three punctures that day. Luckily the last one was close to home.

I often would stop to take lots of photos en route. My mum in particular appreciated all the pretty pictures.

Rules

I decided on these rules:

I would cycle to each destination, in order, and it would count as collected if I rode both there and back. I allowed collecting multiple villages in the same outing, provided I did them in the right order. (And obviously I was allowed to pass through places out of order, without counting them.)

I tried to get a picture of the village sign, where there was one. Failing that, I got a picture of something in the village with the village’s name on it. I think the only one I didn’t manage this for was Westley Bottom; I had to make do with the word “Westley” on some railway level crossing equipment. In Barway I had to make do with a planning application, stuck to a pole.

I tried not to enter and leave a village by the same road, if possible.

Edge cases

I had to make some decisions:

I decided that I would consider the project complete if I visited everywhere whose centre was within my radius. But the centre of a settlement is rather hard to define. I needed a hard criterion for my OpenStreetMap data mining: a place counted if there was any node, way or relation, with the relevant place tag, any part of which was within my ambit. That included some places that probably oughtn’t to have counted, but, fine.

I also decided that I wouldn’t visit suburbs of Cambridge, separately from Cambridge itself. I don’t consider them separate settlements, at least, not if they’re conurbated with Cambridge. So that excluded Trumpington, for example. But I decided that Girton and Fen Ditton were (just) separable. Although the place where I consider Girton and Cambridge to nearly touch, is administratively well inside Girton, I chose to look at land use (on the ground, and in OSM data), rather than administrative boundaries.

But I did visit both Histon and Impington, and each of the Shelfords, and Stapleford, as separate entries in my list. Mostly because otherwise I’d have to decide whether to skip (say) Impington, or Histon. Whereas skipping suburbs of Cambridge in favour of Cambridge itself was an easy decision, and it also got rid of a bunch of what would have been quite short, boring, urban expeditions.

I sorted all the Greats and Littles under G and L, rather than (say) “Shelford, Great”, which seemed like it would be cheating because then I would be able to do “Shelford, Great” and “Shelford, Little” in one go.

Northstowe turned from mostly a building site into something that was arguably a settlement, during my project. It wasn’t included in the output of my original data mining. Of course it’s conurbated with Oakington - but happily, Northstowe inserts right before Oakington in the alphabetical list, so I decided to add it, visiting both the old and new in the same day.

There are a bunch of other minor edge cases. Some villages have an outlying hamlet. Mostly I included these. There are some individual farms, which I generally didn’t count.

Some stats

I visited 150 villages plus the Lords Bridge radio observatory. The project took 3 years and 3 months to complete.

There were 96 rides, totalling about 4900km. So my mean distance was around 51km. The median distance per ride was a little higher, at around 52km, and the median duration (including stoppages) was about 2h40. The total duration, if you add them all up, including stoppages, was about 275h, giving a mean speed including photo stops, lunches and all, of 18kph.
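Those figures are easy to sanity-check:

```python
rides = 96
total_km = 4900
total_hours = 275  # including stoppages

mean_distance = total_km / rides     # about 51 km per ride
mean_speed = total_km / total_hours  # about 18 kph overall

print(round(mean_distance), round(mean_speed))
```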

The longest ride was 89.8km, collecting Scotland Farm, Shepreth, and Six Mile Bottom, so riding across the Cam valley. The shortest ride was 7.9km, collecting Cambridge (obviously); and I think that’s the only one I did on my Brompton. The rest were all on my trusty Thorn Audax.

My fastest ride (ranking by distance divided by time spent in motion) was to collect Haddenham, where I covered 46.3km in 1h39, giving an average speed in motion of 28.0kph.

The most I collected in one day was 5 places: West Wickham, West Wratting, Westley Bottom, Westley Waterless, and Weston Colville. That was the day of the Wests. (There’s only one East: East Hatley.)

Map

Here is a pretty picture of all of my tracklogs:

Edited 2023-08-25 01:32 BST to correct a slip.

Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 248 released

Planet Debian - Thu, 2023-08-24 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 248. This version includes the following changes:

[ Greg Chabala ]

* Merge Docker "RUN" commands into single layer.

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Debian Brasil: Debian Day 30 years in Belo Horizonte - Brazil

Planet Debian - Thu, 2023-08-24 19:00

For the first time, the city of Belo Horizonte held a Debian Day to celebrate the anniversary of the Debian Project.

The communities Debian Minas Gerais and Free Software Belo Horizonte and Region felt motivated to celebrate this special date due to the 30th anniversary of the Debian Project in 2023, and they organized a meeting on August 12th at the UFMG Knowledge Space.

The Debian Day organization in Belo Horizonte received important support from the UFMG Computer Science Department to book the room used for the event.

Three activities were scheduled:

  • Talk: The Debian project wants you! - Paulo Henrique de Lima Santana
  • Talk: Customizing Debian for use in PBH schools: the history of Libertas - Fred Guimarães
  • Discussion: the next steps to grow a Free Software community in BH - Bruno Braga Fonseca

In total, 11 people were present and we took a photo with those who stayed until the end.

Categories: FLOSS Project Planets


TestDriven.io: Customizing the Django Admin

Planet Python - Thu, 2023-08-24 18:28
In this article, we'll look at how to customize Django's admin site.
Categories: FLOSS Project Planets

Debian Brasil: Debian Day 30 years in Curitiba - Brazil

Planet Debian - Thu, 2023-08-24 16:00

As we all know, this year is a very special year for the Debian project, the project turns 30!
The Brazilian community joined in, and during the anniversary week organized some online activities through the Debian Brasil YouTube channel.
Information about the talks can be seen on the commemoration website.
Talks have also been published individually on the Debian Social PeerTube and YouTube.

After this week of celebration, the Debian community in Curitiba decided to get together for a celebratory lunch among some local members.
The lunch took place at The Barbers restaurant. The menu was a traditional feijoada (rice, beans with pork, fried banana, cabbage, orange, and farofa). The meeting was full of laughs, conversations, fun, caipirinha, and draft beer!

We can only thank the Debian Project for providing great moments!

A small photographic record of the people present!

Categories: FLOSS Project Planets

Drupal Association blog: Digital service providers share market insights in the 2023 Drupal Business Survey

Planet Drupal - Thu, 2023-08-24 15:00
The Drupal Business Survey invites business leaders worldwide to share their insights and metrics on the Drupal business ecosystem. Relevant topics for many digital service providers include business development, community, marketing, and human capital. The results of this survey are analyzed and aggregated into a comprehensive report and presented at DrupalCon Europe, the yearly Drupal summit. Participants can take part anonymously.

The Drupal Business Survey is about sharing business insights between Drupal service providers worldwide. Drupal’s open source ecosystem thrives thanks to a collaborative community of tens of thousands of professionals worldwide, working together on the popular digital experience platform. Drupal has an Open Web objective, allowing anyone to work with the web or contribute to it. The Drupal Business Survey gives Drupal experts meaningful data for making business decisions for the coming years.

The survey allows Drupal service providers and independent professionals to share insights on various topics. Its comprehensive results on business outlook and customer engagement have been a valuable guide for digital service providers, even those working with technologies other than Drupal. Data comes in from all continents, with most of the companies having been in business for 10 years or more.

Drupal business booming

In 2022, over 15% of the respondents indicated seeing a significant impact on their digital business from the pandemic. Covid-19 clearly was a catalyst for investments in the digital market. At the same time, the size of deals made grew by almost a third. Non-Profit and Government were the leading markets for Drupal last year, with Education at the top, with over 60% of agencies working with these sectors. However, comparing all the industries over the past 5 years, the number of Drupal projects in Non-Profit shows a significant decrease. Acquiring new talent is constantly challenging, and business leaders need to find new ways to get the right people in.

The survey revealed that open source software, like Drupal, has maintained its popularity, which will not come as a surprise to those following the industry. Janne Kalliola: “One of the most significant advantages of open source compared to similar SaaS solutions is the flexibility and manageability of pricing. Organizations can easily adapt their use of Drupal to changing circumstances, such as the pandemic or the energy crisis, and they can use Drupal without large license fees.”

CEO Dinner

The Drupal community is looking forward to learning how digital service providers are using Drupal and how the Drupal market has evolved this year. The results of the 2023 Drupal Business Survey will be presented at the CEO Dinner at DrupalCon Europe. DrupalCon Europe is the yearly Drupal summit where over 1500 Drupal users and professionals meet to exchange ideas and further evolve Drupal. DrupalCon Europe runs from 17-20 October in Lille, with the CEO Dinner on Tuesday, October 17th.

About the Business Survey

The Drupal Business Survey supports Drupal businesses worldwide and is organized by a team of industry experts Imre Gmelig Meijling (React Online), Janne Kalliola (Exove) and Michel van Velde (independent) in collaboration with the Drupal Association. The 2022 results can be viewed here.

Drupal is the open source Digital Experience Platform used by organisations including Amnesty International, Tesla, and DHL.

Participate and share your insights

Drupal experts are invited to share their Drupal business insights through the Business Survey anonymously and come to DrupalCon Europe to review the results together.

You can participate in the Drupal Business Survey anonymously here.
The survey closes on 4 September.

Categories: FLOSS Project Planets

Python Insider: Python 3.11.5, 3.10.13, 3.9.18, and 3.8.18 are now available

Planet Python - Thu, 2023-08-24 13:02

There’s security content in the releases, let’s dive right in.

  • gh-108310: Fixed an issue where instances of ssl.SSLSocket were vulnerable to a bypass of the TLS handshake, bypassing included protections (like certificate verification) and treating unencrypted data sent before the handshake as if it were post-handshake TLS encrypted data. Security issue reported as CVE-2023-40217 by Aapo Oksman. Patch by Gregory P. Smith.

Upgrading is highly recommended to all users of affected versions.

Python 3.11.5

Get it here: https://www.python.org/downloads/release/python-3115/

This release was held up somewhat by the resolution of this CVE, which is why it includes a whopping 328 new commits since 3.11.4 (compared to 238 commits between 3.10.4 and 3.10.5). Among those, there is a fix for CVE-2023-41105 which affected Python 3.11.0 - 3.11.4. See gh-106242 for details.

There are also some fixes for crashes; check out the changelog for all the details.

Most importantly, the release notes on the downloads page include a description of the Larmor precession. I understood some of the words there!

Python 3.10.13

Get it here: https://www.python.org/downloads/release/python-31013/

16 commits.

Python 3.9.18

Get it here: https://www.python.org/downloads/release/python-3918/

11 commits.

Python 3.8.18

Get it here: https://www.python.org/downloads/release/python-3818/

9 commits.

Stay safe and upgrade!

Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.


Łukasz Langa @ambv
on behalf of your friendly release team,

Ned Deily @nad
Steve Dower @steve.dower
Pablo Galindo Salgado @pablogsal
Łukasz Langa @ambv
Thomas Wouters @thomas

Categories: FLOSS Project Planets

PyCharm: PyCharm 2023.2.1 Is Out!

Planet Python - Thu, 2023-08-24 12:52

PyCharm 2023.2.1, the first bug-fix update for PyCharm 2023.2, is now available!

You can update to v2023.2.1 by using the Toolbox App, installing it right from the IDE, or downloading it from our website.

Download PyCharm 2023.2.1

Here are the most notable fixes available in this version:

Updates to profiler support and code coverage

Profiler and code coverage functionality is now available for projects using remote interpreters, like those on SSH, WSL, Docker, and Docker Compose. You can now use cProfile, yappi, and vmprof. Additionally, you can now use profilers for projects that use Python 3.12.
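As an aside, cProfile (one of the profilers mentioned) ships with Python's standard library and can also be driven outside the IDE; a minimal sketch, with a stand-in workload function:

```python
import cProfile
import io
import pstats

def work():
    # A deliberately cheap stand-in for real application code
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the five most expensive entries by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```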

Django Inherited HTTP methods in the Endpoints tool window

In PyCharm 2023.2, we added support for Django in the Endpoints tool window to help you work with the Django Rest Framework more easily. Starting from this update, you will also be able to work with inherited HTTP methods for your Django views in the Endpoints tool window. [PY-61405]

We also fixed the bug that prevented generating HTTP requests in the Endpoints tool window for lowercase method names. [PY-62033]

PyCharm will now show the correct results for the Go To Declaration action for routes in the HTTP Client when working with the Django Rest Framework. This also works for FastAPI and Flask.

Run manage.py Task from the main menu with remote interpreters

When working with a project with a remote interpreter on Docker, Docker Compose, SSH, or WSL, you can now run the manage.py task from the main menu (Tools | Run manage.py Task). [PY-52610]

Python Run/Debug Configuration Updates to the Parameters field

In the updated Python Run / Debug configuration, the Parameters field is available by default. We increased the minimum width for the field and restored the ability to add macros to the Parameters field. [PY-61917], [PY-59738]

We also fixed a bug that made it impossible to delete an option to add content roots or source roots to the PYTHONPATH. [PY-61902]

Black formatter: an option to suppress warnings about non-formatted files 

In PyCharm 2023.2, we added built-in support for the Black formatter. If you have Black configured in PyCharm, the IDE will check whether each file you are working with is formatted properly. When your code is not formatted with Black, PyCharm will notify you. If you don’t want to use the Black formatter for a particular file or the whole project, you can now suppress warnings about non-formatted files.

You can set this up in Settings | Appearance & Behavior | Notifications.

Updates to frontend development support

We’ve added support for:

  • CSS system colors. [WEB-59994]
  • CSS trigonometric and exponential functions. [WEB-61934]
  • .mjs and .cjs config files in Prettier. [WEB-61966]
General fixes
  • You can again run multiprocessing scripts in the Python console. [PY-50116]
  • Changing themes on Linux now works as expected. [IDEA-283945]
  • The IDE no longer enters full screen mode unexpectedly on a secondary monitor when the Linux native header is switched off. [IDEA-326021]
  • Updating bundled plugins no longer removes plugin files from the IDE’s installation folder. [IDEA-326800]
  • We fixed the behavior of the Go To Implementation and Go To Declaration actions when Python stubs are involved. PyCharm now shows the implementation instead of .pyi stubs. [PY-54905], [PY-54620], [PY-61740]

For the full list of issues addressed in PyCharm 2023.2.1, please see the release notes. Feel free to share your feedback with us or report any bugs you encounter using our issue tracker.

Categories: FLOSS Project Planets


EuroPython Society: EPS 2023 General Assembly - Call for Board Candidates

Planet Python - Thu, 2023-08-24 12:50

It feels like yesterday that many of us were together in Prague or online for EuroPython 2023. Each year, the current board of the EuroPython Society (EPS) holds a General Assembly (GA). It is a precious opportunity for all our members to get together annually and reflect on the learnings of the past and the direction the Society will take in the future.

This year’s GA will be held online again to allow as many members as possible to engage with us. We have tentatively reserved the date 1 October for the GA. But official confirmation will be sent out as soon as we receive the go-ahead from our auditor on the finance side.

As an EPS member, you are welcome and encouraged to join us to discuss Society matters and vote at the meeting, including on the next Society Board. A Zoom meeting link will be sent out with the formal General Assembly invitation.

Calling for Board Candidates

Every year at the GA, we call for and vote in a new EPS Board of Directors. This is also the main theme of this post: we are calling for the next Board candidates.
This year, we have at least 4 members of the current board standing down, including myself; I will be stepping down as chair and leaving the board. While transition always poses challenges, it is a chance to take in new experience, fresh perspectives, and more diversity. With most, if not all, female board members stepping down, we are especially worried about the diversity of our next board, and we welcome all suggestions and nominations from our members to help keep it diverse.

If you are interested in stepping up, or if you know someone who might be, please get in touch with us! You can reach the current board at board@europython.eu. We also have set up a private discord thread for you to get to know all interested candidates and ask any questions you might have. Get in touch with us if you would like an invite!

What does the EPS Board do?

As per our bylaws, the EPS board is made up of up to 9 directors (including 1 chair and 1 vice chair). The duties and responsibilities of the board are substantial: the board collectively takes up the fiscal and legal responsibility of the Society. At the moment, running the annual EuroPython conference is a major task for the EPS. As such, the board members are expected to invest significant time and effort towards overseeing the smooth execution of the conference, ranging from venue selection, contract negotiations, and budgeting, to volunteer management. Every board member has the duty to support one or more EuroPython teams to facilitate decision-making and knowledge transfer. In addition, the Society prioritises building a close relationship with local Python communities in Europe. Board members should be passionate about the Python community, and ideally also have a high-level vision and plan for how the EPS could best serve the community.

Time commitment for the board: as the Society currently comprises entirely volunteers, serving on the board does come with a significant time commitment. This is particularly important to keep in mind, due to the changes EPS will undergo this year. However, everyone has been very understanding of differing schedules. Other than the 1.5-hour board call we expect all board members to attend every two weeks, we have managed to work primarily async.

The Nomination Process

All EPS members are eligible to stand for election to the Board of Directors. Everyone who wishes to stand or nominate others needs to send in a nomination notice, along with a short biography of the candidate.

Though the formal deadline for sending in your nomination is at the time of the GA, we would appreciate it if you could return it to us by emailing board@europython.eu by Friday 15 September 2023. We will publish all the candidates and their nomination statements on a separate blog post for our members to read in advance.

Then at the General Assembly, each candidate will usually be given a minute to introduce themselves before the members cast their anonymous votes. You can refer to our previous GAs if you want more details: https://www.europython-society.org/records/

If you have any questions or concerns, you are also very welcome to reach out to me directly at raquel@europython.eu.

Raquel Dou

Categories: FLOSS Project Planets

Stack Abuse: Creating a Directory and its Parent Directories in Python

Planet Python - Thu, 2023-08-24 12:33
Introduction

In Python, we often need to interact with the file system, whether it's reading files, writing to them, or creating directories. This Byte will focus on how to create directories in Python, and more specifically, how to create a directory and any missing parent directories. We'll be exploring the os.mkdir and os.makedirs functions for this purpose.

Why do we need to create the parent directories?

When working with file systems, which is common for system utilities or tools, you'll likely need to create a directory at a certain path. If the parent directories of that path don't exist, you'll encounter an error.

To avoid this, you'll need to create all necessary parent directories since the OS/filesystem doesn't handle this for you. By ensuring all the necessary parent directories exist before creating the target directory, you can avoid these errors and have a more reliable codebase.

Creating a Directory Using os.mkdir

The os.mkdir function in Python is used to create a directory. This function takes the path of the new directory as an argument. Here's a simple example:

import os

os.mkdir('my_dir')

This will create a new directory named my_dir in the current working directory. However, os.mkdir has a limitation - it can only create the final directory in the specified path, and assumes that the parent directories already exist. If they don't, you'll get a FileNotFoundError.

import os

os.mkdir('parent_dir/my_dir')

If parent_dir doesn't exist, this code will raise a FileNotFoundError.

Note: The os.mkdir function will also raise a FileExistsError if the directory you're trying to create already exists, so it's good practice to check whether a directory exists before trying to create it. os.mkdir itself has no way to suppress this error, but os.makedirs (covered below) accepts an exist_ok=True argument, like this: os.makedirs(path, exist_ok=True). This makes the function do nothing if the directory already exists.
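As a quick sketch of that argument in action (the temp-directory path here is just for illustration, so the example runs anywhere):

```python
import os
import tempfile

# Build a hypothetical nested path inside a throwaway temp directory
base = tempfile.mkdtemp()
target = os.path.join(base, "parent_dir", "my_dir")

os.makedirs(target, exist_ok=True)  # creates parent_dir and my_dir
os.makedirs(target, exist_ok=True)  # second call is a no-op, no FileExistsError

print(os.path.isdir(target))  # → True
```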

One way to work around the limitation of os.mkdir is to manually check and create each parent directory leading up to the target directory. The easiest way to approach this problem is to split our path by the slashes and check each one. Here's an example of how you can do that:

import os

path = 'parent_dir/sub_dir/my_dir'

# Split the path into parts
parts = path.split('/')

# Start with an empty directory path
dir_path = ''

# Iterate through the parts, creating each directory if it doesn't exist
for part in parts:
    dir_path = os.path.join(dir_path, part)
    if not os.path.exists(dir_path):
        os.mkdir(dir_path)

This code will create parent_dir, sub_dir, and my_dir if they don't already exist, ensuring that the parent directories are created before the target directory.

However, there's a more concise way to achieve the same goal by using the os.makedirs function, which we'll see in the next section.

Creating Parent Directories Using os.makedirs

To overcome the limitation of os.mkdir, Python provides another function - os.makedirs. This function creates all the intermediate level directories needed to create the final directory. Here's how you can use it:

import os

os.makedirs('parent_dir/my_dir')

In this case, even if parent_dir doesn't exist, os.makedirs will create it along with my_dir. If parent_dir already exists, os.makedirs will simply create my_dir within it.

Note: Like os.mkdir, os.makedirs will also raise a FileExistsError if the final directory you're trying to create already exists. However, it won't raise an error if the intermediate directories already exist.

Using pathlib to Create Directories

The pathlib module in Python 3.4 and above provides an object-oriented approach to handle filesystem paths. It's more intuitive and easier to read than using os.mkdir or os.makedirs. To create a new directory with pathlib, you can use the Path.mkdir() method.

Here is an example:

from pathlib import Path

# Define the path
path = Path('/path/to/directory')

# Create the directory
path.mkdir(parents=True, exist_ok=True)

In this code, the parents=True argument tells Python to create any necessary parent directories, and exist_ok=True allows the operation to proceed without raising an exception if the directory already exists.

Handling Exceptions when Creating Directories

When working with filesystems, it's always a good idea to handle exceptions. This could be due to permissions, the directory already existing, or a number of other unforeseen issues. Here's one way to handle exceptions when creating your directories:

from pathlib import Path

# Define the path
path = Path('/path/to/directory')

try:
    # Create the directory
    path.mkdir(parents=True, exist_ok=False)
except FileExistsError:
    print("Directory already exists.")
except PermissionError:
    print("You don't have permissions to create this directory.")
except Exception as e:
    print(f"An error occurred: {e}")

In this code, we've set exist_ok=False to raise a FileExistsError if the directory already exists. We then catch this exception, along with PermissionError and any other exceptions, and print a relevant message. This gives us more fine-grained control over what we do when certain situations arise, although it's less concise and hurts readability.

When to use os or pathlib for Creating Directories

Choosing between os and pathlib for creating directories largely depends on your specific use case and personal preference.

The os module has been around for a while and is widely used for interacting with the operating system. It's a good choice if you're working with older versions of Python or if you need to use other os functions in your code.

On the other hand, pathlib is a newer module that provides a more intuitive, object-oriented approach to handling filesystem paths. It's a good choice if you're using Python 3.4 or above and prefer a more modern, readable syntax.

Luckily, both os and pathlib are part of the standard Python library, so you won't need to install any additional packages to use them.

Conclusion

In this Byte, we've explored how to create directories and handle exceptions using the os and pathlib modules in Python. Remember that choosing between these two options depends on your specific needs and personal preferences. Always be sure to handle exceptions when working with filesystems to make your code more robust and reliable. This is important as it's easy to make mistakes when working with filesystems and end up with an error.

Categories: FLOSS Project Planets

Stack Abuse: Get All Object Attributes in Python

Planet Python - Thu, 2023-08-24 11:00
Introduction

In Python, everything is an object - from integers and strings to classes and functions. This may seem odd, especially for primitive types like numbers, but even those have attributes, like real and imag. Each object has its own attributes, which are basically just properties or characteristics that help define the object.

In this Byte, we will explore different ways to get all attributes of an object in Python, and how to display and manipulate them effectively.

Viewing Object Attributes

To start with, let's look at how we can view the attributes of an object in Python. Python provides a built-in function, dir(), which returns a list of all attributes and methods of an object, which also includes those inherited from its class or parent classes.

Consider a simple class, Company, with a few attributes:

class Company:
    def __init__(self, name, industry, num_employees):
        self.name = name
        self.industry = industry
        self.num_employees = num_employees

Now, let's create an instance of Company and use dir() to get its attributes:

c = Company('Dunder Mifflin', 'paper', 15)
print(dir(c))

This will output:

['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'industry', 'name', 'num_employees']

As you can see, dir() returns not only the attributes we defined (i.e. name, industry, num_employees), but also a list of special methods (also known as dunder methods) inherent to all Python objects.
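If you only care about the attributes you defined yourself, one common approach is to filter out the underscore-prefixed names. A small sketch reusing the Company class from above:

```python
class Company:
    def __init__(self, name, industry, num_employees):
        self.name = name
        self.industry = industry
        self.num_employees = num_employees

c = Company('Dunder Mifflin', 'paper', 15)

# Keep only the names that don't start with an underscore
public_attrs = [a for a in dir(c) if not a.startswith('_')]
print(public_attrs)  # ['industry', 'name', 'num_employees']
```

Since dir() returns names in sorted order, the result is alphabetical.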

Getting their Values

Now that we know how to get the attributes of an object, let's see how to also extract their values. Python provides a built-in function, getattr(), which allows us to get the value of a specific attribute.

Here's how you can use getattr():

name = getattr(c, 'name')
print(name)

This will output:

Dunder Mifflin

In this example, getattr() returns the value of the name attribute of the Company instance c. If the attribute does not exist, getattr() will raise an AttributeError. However, you can provide a third argument to getattr(), which will be returned if the attribute is not found, thus avoiding the error:

location = getattr(c, 'location', 'Not available')
print(location)

This will output:

Not available

In this case, since location is not an attribute of c, getattr() returns the provided default value, 'Not available'.

Using __dict__ to get Properties and Values

In Python, every object is equipped with a __dict__ attribute. This built-in attribute is a dictionary that maps the object's attributes to their respective values. This can be very handy when we want to extract all properties and values of an object. Let's see how it works.

class TestClass:
    def __init__(self):
        self.attr1 = 'Hello'
        self.attr2 = 'World'

instance = TestClass()
print(instance.__dict__)

When you run the above code, it will output:

{'attr1': 'Hello', 'attr2': 'World'}

Note: __dict__ does not return methods of an object, only the properties and their values.
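A quick way to see this: a method defined on the class shows up in dir() but not in the instance's __dict__. A minimal sketch:

```python
class Greeter:
    def __init__(self):
        self.name = 'world'

    def greet(self):
        return f'hello {self.name}'

g = Greeter()
print(g.__dict__)         # {'name': 'world'} -- no sign of greet()
print('greet' in dir(g))  # True -- dir() does list methods
```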

Formatting Object Attributes into Strings

Sometimes you may want to format the attributes of an object into a readable string for display or logging purposes. You can do this by overriding the __str__ method in your class, which is what Python's built-in str() function calls. Here's how you can do it:

class TestClass:
    def __init__(self):
        self.attr1 = 'Hello'
        self.attr2 = 'World'

    def __str__(self):
        return str(self.__dict__)

instance = TestClass()
print(str(instance))

When you run the above code, it will output:

{'attr1': 'Hello', 'attr2': 'World'}

Employing vars() for Attribute Extraction

Another way to extract attributes from an object in Python is by using the built-in vars() function. This function behaves very similar to the __dict__ attribute and returns the __dict__ attribute of an object. Here's an example:

class TestClass:
    def __init__(self):
        self.attr1 = 'Hello'
        self.attr2 = 'World'

instance = TestClass()
print(vars(instance))

When you run the above code, it will output:

{'attr1': 'Hello', 'attr2': 'World'}

Note: Like __dict__, vars() also does not return methods of an object, only the properties and their values.
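In fact, vars() returns the very same dictionary object as __dict__, not a copy, which you can verify with an identity check:

```python
class TestClass:
    def __init__(self):
        self.attr1 = 'Hello'

instance = TestClass()
print(vars(instance) is instance.__dict__)  # True -- the same object
```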

Conclusion

Getting all of the attributes of an object in Python can be achieved in several ways. Whether you're using dir(), the __dict__ attribute, overriding the __str__ method, or using the vars() function, Python provides a variety of tools to extract and manipulate object attributes.


Droptica: Drupal BigPipe Module. Using Lazy Builders 2023

Planet Drupal - Thu, 2023-08-24 09:51

Site speed is crucial, particularly nowadays, when modern websites are more dynamic and interactive. The traditional approach of serving pages is notably inefficient in this context. Numerous techniques exist to achieve optimal performance, and one such method is the BigPipe technique, originally developed at Facebook. The good news is that the BigPipe module, which incorporates the same functionality, has been integrated into Drupal 8 core since version 8.1.

How does Drupal BigPipe work?

The general idea of the BigPipe technique is to decompose web pages into small chunks called pagelets and pipeline them through several execution stages inside web servers and browsers. 

At a high level, BigPipe sends an HTML response in chunks:

  1. One chunk: everything until just before </body> - this contains BigPipe placeholders for the personalized parts of the page. Hence this sends the non-personalized parts of the page. Let's call it The Skeleton.
  2. N chunks: a <script> tag per BigPipe placeholder in The Skeleton.
  3. One chunk: </body> and everything after it.

This is conceptually identical to Facebook's BigPipe (hence the name).

How does Drupal BigPipe differ from Facebook's technique?

Drupal module differs significantly from Facebook's implementation (and others) in its ability to automatically figure out which parts of the web page can benefit from BigPipe-style delivery. 

Drupal's render system has the concept of “auto-placeholdering.” What does it mean? The content that is too dynamic is replaced with a placeholder that can be rendered later. 

On top of that, it also has the concept of “placeholder strategies.” By default, placeholders are replaced on the server side, and the response is blocked on all of them being replaced. But it's possible to add additional placeholder strategies. BigPipe is just another one. Others could be ESI, AJAX, etc. 

BigPipe as implemented by Facebook can only work if JavaScript is enabled. The Drupal BigPipe module, by contrast, also makes it possible to replace placeholders without JavaScript ("no-JS BigPipe"). This isn't technically BigPipe at all - it's simply the use of multiple flushes.

This allows us to use both no-JS BigPipe and “classic” BigPipe in the same response to maximize the amount of content we can send as early as possible.

So basically, that is happening during the page rendering process:

  • The personalized parts are turned into placeholders: a <span> element whose data-big-pipe-placeholder-id attribute encodes which callback to call, the arguments to pass to it, and a security token (e.g. ...&token=6NHeAQvXLYdzXuoWp2TRCvedTO2WAoVKnpW-5_pV9gk). The renderer then continues traversing the array and converting it to HTML. The resulting HTML, including placeholders, is cached. Then, depending on the rendering strategy being used, the placeholders are each replaced with their dynamic content.
  • The replacement of the placeholders is done using JavaScript. The callback starts looking for replacement script elements once a special element is printed and found.
  • At the very last moment, it’s replaced with the actual content. This new strategy allows us to flush the initial web page first and then stream the replacements for the placeholders.
When to use a lazy builder?

As a rule of thumb, you should consider using a lazy builder whenever the content you're adding to a render array is one of the following types.

  • Content that would have a high cardinality if cached. For example, a block that displays the user's name. It can be cached, but because it varies by user, it's also likely to result in cached objects with a low hit rate.
  • Content that cannot be cached or has a very high invalidation rate. For example, displaying the current date/time, or statistics that must always be as up-to-date as possible.
  • Content that requires a lengthy and potentially slow assembly process. For example, a block displaying content from a third-party API where requesting content from the API incurs overhead.
Using lazy builders in practice

To provide a comprehensive example of implementing lazy builders, let's consider a scenario with a custom profile widget block placed on a web page. The block contains the user's picture, full name, and profile menu.

In order to ensure that every user sees their personalized information, we can implement specific strategies, such as setting the “max-age” to 0 or utilizing user contexts and tags. However, it's important to note that setting “max-age” to 0 will lead to the rest of the web page being uncached.

Thanks to the concept of “auto-placeholdering,” Drupal considers this block as a personalized part and turns it into a placeholder.

The only problem here is that we have the entire block replaced by a placeholder afterward:

...=profilewidget&args%5B1%5D=full&args%5B2%5D&token=QzMTPnxwihEGOitjJB_tahJj8V-L-KopAVnEjVEMSsk">

Nonetheless, it's worth noting that specific data within the block might remain static or consistent for all users, like this:

To make our block more granular, we can transform dynamic parts into placeholders while the other block content remains cacheable and loads during the initial page load.

Step 1. Creating lazy builders

Lazy builders are implemented using the render array of the #lazy_builder type, just like other elements. The render array must contain a callback as the first element and an array of arguments to that callback as the second element.

Lazy builder render elements should only contain the #cache, #weight, and #create_placeholder properties. 

public function build() {
  $build['user_data'] = [
    '#lazy_builder' => [
      'd_profiles.builder:build',
      [],
    ],
    '#create_placeholder' => TRUE,
  ];
  $build['user_menu'] = $this->buildMenu();
  $build['user_img'] = [
    '#lazy_builder' => [
      'd_profiles.builder:build',
      ['profile_widget_mini'],
    ],
    '#create_placeholder' => TRUE,
  ];
  return $build;
}
Step 2. Implementing TrustedCallbackInterface

Before we go any further, we need to ensure that our lazy builder implementation can call the lazyBuilder() method. To do this, we need to implement the TrustedCallbackInterface to tell Drupal that our lazy builder callback is allowed to be called.

When implementing this interface, we need to add a method called trustedCallbacks(), which will be called automatically by Drupal through the detection of the interface. The return value of this method must be any methods within this class that can be used as callbacks.

Here is the basic implementation of this for our block:

/**
 * Provides a lazy builder for the profile block.
 */
class ProfileBuilder implements TrustedCallbackInterface {

  /**
   * {@inheritdoc}
   */
  public static function trustedCallbacks() {
    return ['build'];
  }

  /**
   * Build profile details.
   */
  public function build($view_mode = 'navbar_widget') {
    return $this->entityBuilder->build($this->loadProfile(), $view_mode);
  }

}

As a result, the cached block will look like:

Profile

Normally the lazy builder callback will be executed on every page load, which is the intended behavior. But in certain cases, it may also be necessary to cache placeholders. To achieve this, we need to include cache keys along with the cache context, as in the example below:

$build['user_data'] = [
  '#lazy_builder' => [
    'em_profiles.builder:build',
    [],
  ],
  '#create_placeholder' => TRUE,
  '#cache' => [
    'contexts' => ['user'],
    'keys' => [
      'entity_view',
      'user',
      'profile',
      'navbar_widget',
    ],
  ],
];
Step 3. Ensuring a smooth visual page load experience

Because Drupal BigPipe lazily loads certain parts of the page, it could result in a jarring page load experience. It depends on our theme and the location of the lazily loaded content.

The simplest solution is to have the lazily loaded content appear in a space reserved for them that avoids reflowing content. Alternatively, we can apply a “loading” animation to all BigPipe placeholders in our theme with some CSS.

The last possible option is to define an interface preview that will be populated by BigPipe, using a Twig template.

Let’s compare the final result of the custom lazy-builder strategy (1) vs. “auto-placeholdering” strategy (2).

Custom lazy-builder strategy (1)
 

Drupal "auto-placeholdering" strategy (2)

Both strategies work just fine, but you can see the downside of auto-placeholdering: a jarring page load experience (a drastic layout shift).

More examples:

1. Statistics block with static part loaded immediately and dynamic content loaded later:


2. Views with skeleton:


Troubleshooting lazy builders

If you’ve implemented a lazy builder and it isn't speeding up your Drupal page load or just isn't working as expected, then there are some things you can try:

  • Be sure the Drupal BigPipe module is active. 
  • Check the cache settings of your lazy builder callback method. By default, Drupal will make some assumptions about how it should be cached, which isn't always right for your use case. Instead, you can explicitly set the cache settings.
  • An upstream CDN or Varnish layer might be caching your entire web page, so all of the output of the BigPipe rendering process will be served simultaneously. You'll need to find another mechanism to work around this.
Drupal BigPipe - summary 

In this article, we’ve explored an alternative rendering strategy that allows us to defer the rendering of highly dynamic content only after the static parts of the web page have already been loaded from the cache.

BigPipe, the Drupal module, can automatically enhance our website performance thanks to improved render pipeline and render API, particularly the cacheability metadata and auto-placeholdering.

However, it’s essential to note that using Drupal BigPipe doesn’t substitute for addressing underlying performance issues. Implementing lazy builders to mitigate the impact of slow code on your web page will only mask the problem rather than resolve it entirely. By implementing these techniques effectively, you can optimize performance and enrich the overall user experience on your Drupal-powered websites.


Lukas Märdian: Netplan v0.107 is now available

Planet Debian - Thu, 2023-08-24 08:59

I’m happy to announce that Netplan version 0.107 is now available on GitHub and is soon to be deployed into a Linux installation near you! Six months and more than 200 commits after the previous version (including a .1 stable release), this release is brought to you by 8 free software contributors from around the globe.

Highlights

Highlights of this release include the new configuration types for veth and dummy interfaces:

network:
  version: 2
  virtual-ethernets:
    veth0:
      peer: veth1
    veth1:
      peer: veth0
  dummy-devices:
    dm0:
      addresses:
        - 192.168.0.123/24
...

Furthermore, we implemented CFFI based Python bindings on top of libnetplan’s API, that can easily be consumed by 3rd party applications (see full cffi-bindings.py example):

from netplan import Parser, State, NetDefinition
from netplan import NetplanException, NetplanParserException

parser = Parser()

# Parse the full, existing YAML config hierarchy
parser.load_yaml_hierarchy(rootdir='/')

# Validate the final parser state
state = State()
try:
    # validation of current state + new settings
    state.import_parser_results(parser)
except NetplanParserException as e:
    print('Error in', e.filename, 'Row/Col', e.line, e.column, '->', e.message)
except NetplanException as e:
    print('Error:', e.message)

# Walk through ethernet NetdefIDs in the state and print their backend
# renderer, to demonstrate working with NetDefinitionIterator &
# NetDefinition
for netdef in state.ethernets.values():
    print('Netdef', netdef.id, 'is managed by:', netdef.backend)
    print('Is it configured to use DHCP?', netdef.dhcp4 or netdef.dhcp6)

Changelog:

Bug fixes:

Debian Brasil: Debian Day 30 years at IF Sul de Minas, Pouso Alegre - Brazil

Planet Debian - Thu, 2023-08-24 06:00

by Thiago Pezzo, Debian contributor, pt_BR localization team

This year's Debian Day was a pretty special one: we are celebrating 30 years! Given the importance of this event, the Brazilian community planned a very special week. Instead of only local gatherings, we had a week of online talks streamed via Debian Brazil's YouTube channel (soon the recordings will be uploaded to our team's PeerTube instance).

Nonetheless, the local celebrations happened around the country, and one was organized in Pouso Alegre, MG, Brazil, at the Instituto Federal de Educação, Ciência e Tecnologia do Sul de Minas Gerais (IFSULDEMINAS - Federal Institute of Education, Science and Technology of the South of Minas Gerais). Like many of its counterparts in Brazil, the Institute specializes in professional and technological education at high school and undergraduate levels. All public, free, and quality education!

The event happened on the afternoon of August 16th at the Pouso Alegre campus. Some 30 students from the High School Computer Technician class attended the presentation about the Debian Project and the Free Software movement in general. Everyone had a great time! And afterwards we had some spare time to chat.

I would like to thank all people who helped us:

  • Professors Michelle Nery and Ismael David Muro (IFSULDEMINAS Pouso Alegre)
  • Virginia Cardoso and Melissa de Abreu (IFSULDEMINAS Rector's Office)
  • Giovani Ferreira (a DD living in Minas Gerais state)
  • Felipe Maia (from Debian São Paulo - thanks for the posters!)
  • Gustavo (a clever young student who made important comments about accessibility - thanks!)
  • And to all students who attended the presentation, hope to see you again!

Here goes our group photo:


Stack Abuse: Incompatible Type Comparisons in Python

Planet Python - Thu, 2023-08-24 06:00
Introduction

In Python, we often encounter a variety of errors and exceptions while writing or executing a script. A very common error, especially for beginners, is TypeError: '<' not supported between instances of str and int, or some variant. This error occurs when we try to perform an operation between two incompatible types.

In this article, we'll delve into what this error means, why it happens, and how to resolve it.

Note: There are quite a few different permutations of this error as it can occur between many different data types. I'd suggest looking at the Table of Contents on the right side to more easily find your specific scenario.

Incompatible Type Comparisons

Python is a dynamically typed language, which means the interpreter determines the type of an object at runtime. This flexibility allows us to write code quickly, but it can also lead to certain types of errors if we're not careful.

One of those errors is TypeError: '<' not supported between instances of str and int. This happens when we try to compare a string and an integer using the less than (<) operator. Python doesn't know how to compare these two different types of objects, so it raises a TypeError.

Note: The error could involve any comparison operator, not just the less than (<) operator. For example, you might see a similar error with the > (greater than) operator.

If you're coming to Python from a language like JavaScript, this may take some getting used to. JS will do the conversion for you, without the need for explicit type casting (e.g. converting "2" to the integer 2). It'll even happily compare different types that don't make sense (e.g. "StackAbuse" > 42). So in Python you'll need to remember to convert your data types.

Comparing a String and an Integer

To illustrate this error, let's try to compare a string and an integer:

print("3" < 2)

When you run this code, Python will throw an error:

TypeError: '<' not supported between instances of 'str' and 'int'

This error is stating that Python doesn't know how to compare a string ("3") and an integer (2). These are fundamentally different types of objects, and Python doesn't have a built-in way to determine which one is "less than" the other.

Fixing the TypeError with String to Integer Conversion

One way to resolve this error is by ensuring that both objects being compared are of the same type. If we're comparing a string and an integer, we can convert the string to an integer using the int() function:

print(int("3") < 2)

Now, when you run this code, Python will output:

False

By converting the string "3" to an integer, we've made it possible for Python to compare the two objects. Since 3 is not less than 2, Python correctly outputs False.

The Input Function and its String Return Type

In Python, the input() function is used to capture user input. The data entered by the user is always returned as a string, even if the user enters a number. Let's see an example:

user_input = input("Enter a number: ")
print(type(user_input))

If you run this code and enter 123, the output will be:

<class 'str'>

This shows that the input() function returns the user input as a string, not an integer. This can lead to TypeError if you try to use the input in a comparison operation with an integer.
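To use such input in a numeric comparison, convert it first - and since users can type anything, it's safest to wrap the conversion in a try/except. In this sketch the input() call is replaced with a literal string so the example is self-contained:

```python
user_input = "123"  # stands in for input("Enter a number: ")

try:
    number = int(user_input)
    print(number > 100)  # True
except ValueError:
    print("That wasn't a number.")
```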

Comparing Integers and Strings with Min() and Max() Functions

The min() and max() functions in Python are used to find the smallest and largest elements in a collection, respectively. If you try to use these functions on a collection that contains both strings and integers, you'll encounter a TypeError. This is because Python cannot compare these two different types of data.

Here's an example:

values = [10, '20', 30]
print(min(values))

Again, this will raise the TypeError: '<' not supported between instances of 'str' and 'int' because Python doesn't know how to compare a string to an integer.
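One fix is to normalize the values to a single type before calling min() or max():

```python
values = [10, '20', 30]

# Convert everything to int so the comparison is well-defined
print(min(int(v) for v in values))  # 10
print(max(int(v) for v in values))  # 30
```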

Identifying Stored Variable Types

To avoid TypeError issues, it's crucial to understand the type of data stored in your variables. You can use the type() function to identify the data type of a variable. Here's an example:

value = '10'
print(type(value))

Running this code will output:

<class 'str'>

This shows that the variable value contains a string. Knowing the data type of your variables can help you avoid TypeError issues when performing operations that require specific data types.

Comparing a List and an Integer

If you try to compare a list and an integer directly, Python will raise a TypeError. Python cannot compare these two different types of data. For example, the following code will raise an error:

numbers = [1, 2, 3]

if numbers > 2:
    print("The list is greater than 2.")

When you run this code, you'll get TypeError: '>' not supported between instances of 'list' and 'int'. To compare an integer with the elements in a list, you need to iterate over the list and compare each element individually.
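For example, to see which elements of the list exceed the integer, compare element by element:

```python
numbers = [1, 2, 3]

# Collect the elements that pass the comparison
greater_than_two = [n for n in numbers if n > 2]
print(greater_than_two)  # [3]

# Or ask whether any/all elements pass it
print(any(n > 2 for n in numbers))  # True
print(all(n > 2 for n in numbers))  # False
```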

Accessing List Values for Comparison

In Python, we often need to access individual elements in a list for comparison. This is done by using indices. The index of a list starts from 0 for the first element and increases by one for each subsequent element. Here's an example:

my_list = ['apple', 2, 'orange', 4, 'grape', 6]
print(my_list[1])  # Outputs: 2

In this case, we are accessing the second element in the list, which is an integer. If we were to compare this with another integer, we would not encounter an error.

Ensuring Value Compatibility for Comparison

It's essential to ensure that the values you're comparing are compatible. In Python, you cannot directly compare a string with an integer. Doing so will raise a TypeError. If you're unsure of the types of values you're dealing with, it's a good practice to convert them to a common type before comparison. For instance, to compare a string and an integer, you could convert the integer to a string:

str_num = '5'
int_num = 10

comparison = str_num < str(int_num)  # Converts int_num to string for comparison
print(comparison)  # Outputs: False

Note: Be cautious when converting types for comparison. Converting an integer to a string for comparison could lead to unexpected results. For instance, '10' is considered less than '2' in string comparison because the comparison is based on ASCII value, not numerical value.
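You can see the difference directly:

```python
print('10' < '2')  # True -- compared character by character, and '1' < '2'
print(10 < 2)      # False -- compared numerically
```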

Filtering Integers in a List for Comparison

In a list with mixed types, you might want to filter out the integers for comparison. You can do this using list comprehension and the isinstance() function, which checks if a value is an instance of a particular type:

my_list = ['apple', 2, 'orange', 4, 'grape', 6]

integers = [i for i in my_list if isinstance(i, int)]
print(integers)  # Outputs: [2, 4, 6]

Now, you can safely compare the integers in the list without worrying about getting an error!

Comparing List Length with an Integer

Another common operation in Python is comparing the length of a list with an integer. This is done using the len() function, which returns the number of items in a list. Here's an example:

my_list = ['apple', 2, 'orange', 4, 'grape', 6]

list_length = len(my_list)
print(list_length > 5)  # Outputs: True

In this case, we're comparing the length of the list (6) with the integer 5. Since 6 is greater than 5, the output is True. No TypeError is raised here because we're comparing two integers.

Comparing List Item Sum with an Integer

In Python, you can sum the items in a list using the built-in sum() function. This function returns the sum of all items if they are integers or floats. If you then want to compare this sum with an integer, you can do so without any issues. Here's an example:

list_numbers = [1, 2, 3, 4, 5]

sum_of_list = sum(list_numbers)
print(sum_of_list > 10)  # Output: True

In this example, sum_of_list is the sum of all items in list_numbers. We then compare this sum with the integer 10.

Comparing a Float and a String

When you try to compare a float and a string in Python, you'll encounter the error. This is because Python doesn't know how to compare these two different types. Here's an example:

print(3.14 < "5")
# Raises: TypeError: '<' not supported between instances of 'float' and 'str'

In this example, Python throws a TypeError because it doesn't know how to compare a float (3.14) with a string ("5").

Resolving TypeError with String to Float Conversion

To resolve this issue, you can convert the string to a float using the float() function. This function takes a string or a number and returns a floating point number. Here's how you can use it:

print(3.14 < float("5")) # Output: True

In this example, we convert the string "5" to a float using the float() function. We then compare this float with the float 3.14. Since Python now knows how to compare these two floats, it doesn't throw a TypeError.

Note: The float() function can only convert strings that represent a number. If you try to convert a string that doesn't represent a number (like "hello"), Python will throw a ValueError.
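A small helper (the name to_float is ours, purely for illustration) that falls back to None for non-numeric strings instead of raising:

```python
def to_float(value):
    """Convert value to a float, or return None if it isn't numeric."""
    try:
        return float(value)
    except ValueError:
        return None

print(to_float("5"))      # 5.0
print(to_float("hello"))  # None
```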

Handling TypeError in Pandas

Pandas is a powerful data analysis library in Python that provides flexible data structures. However, you might encounter a TypeError when you try to compare different types in a Pandas DataFrame.

To handle this error, you can use the apply() function to apply a function to each element in a DataFrame column. This function can be used to convert the elements to the correct type. Here's an example:

import pandas as pd

df = pd.DataFrame({
    'A': [1, 2, 3],
    'B': ['4', '5', '6']
})
df['B'] = df['B'].apply(float)
print(df['A'] < df['B'])
# Output:
# 0    True
# 1    True
# 2    True
# dtype: bool

In this example, we use the apply() function to convert the elements in column 'B' to floats. We then compare the elements in column 'A' with the elements in column 'B'. Since all elements are now floats, Python doesn't throw a TypeError.
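As an alternative to apply(float), pandas also provides pd.to_numeric(), which can coerce unparseable values to NaN instead of raising. A sketch, assuming the column may contain bad data:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': ['4', '5', 'x']})
# errors="coerce" turns values that cannot be parsed (like 'x') into NaN
df['B'] = pd.to_numeric(df['B'], errors='coerce')
print(df['A'] < df['B'])
```

Note that any comparison involving NaN evaluates to False, so rows with bad data simply drop out of the result.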

Comparing Floats and Strings with Min() and Max() Functions

In Python, the min() and max() functions are used to find the smallest and largest elements in an iterable, respectively. However, these functions can throw a TypeError if you try to compare a float and a string.

Here's an example:

print(min(3.14, 'pi'))

This code will cause a TypeError, because Python cannot compare a float and a string. The error message will be: TypeError: '<' not supported between instances of 'str' and 'float'.

To resolve this, you can convert the float to a string before comparing:

print(min(str(3.14), 'pi'))

This will output '3.14', as it's the "smallest" in alphabetical order.
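If the strings actually represent numbers and you want a numeric comparison, another option is the key argument of min() and max(), which compares by a derived value while still returning the original item. A small sketch:

```python
values = ['3.14', '5', '0.5']
# Compare numerically via float(), but keep the original strings
smallest = min(values, key=float)
largest = max(values, key=float)
print(smallest)  # '0.5'
print(largest)   # '5'
```

This avoids the alphabetical-order surprise while leaving the items themselves untouched.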

Comparing a Tuple and an Integer

A tuple is an immutable sequence of Python objects. If you try to compare a tuple with an integer, Python will throw a TypeError. Here's an example:

print((1, 2, 3) < 4)

This code will cause a TypeError with the message: TypeError: '<' not supported between instances of 'tuple' and 'int' since it doesn't know how to compare one number to a collection of numbers.
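Note that comparing two tuples is perfectly legal: Python compares them element by element (lexicographically), as long as the corresponding elements are themselves comparable:

```python
print((1, 2, 3) < (1, 2, 4))  # True: first difference is 3 < 4
print((1, 2) < (1, 2, 0))     # True: a shorter tuple that is a prefix sorts first
```

The TypeError only appears when the two sides are of different, incomparable types, such as a tuple and an int.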

Accessing Tuple Values for Comparison

To compare an integer with a value inside a tuple, you need to access the tuple value first. You can do this by indexing the tuple. Here's how:

my_tuple = (1, 2, 3)
print(my_tuple[0] < 4)

This will output True, as the first element of the tuple (1) is less than 4.

Filtering a Tuple for Comparison

If you want to compare all values in a tuple with an integer, you can loop through the tuple and compare each value individually. Here's an example:

my_tuple = (1, 2, 3)
for i in my_tuple:
    print(i < 4)

This will output True three times, as all elements in the tuple are less than 4.

Note: Python's filter() function can also be used to filter a tuple based on a comparison with an integer. This function constructs an iterator from elements of the tuple for which the function returns true.

Here's an example of how to use the filter() function to filter a tuple:

my_tuple = (1, 2, 3)
filtered_tuple = filter(lambda i: i < 4, my_tuple)
print(tuple(filtered_tuple))

This will output (1, 2, 3), as all elements in the tuple are less than 4.
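If you want to collapse the per-item comparisons into a single answer, the built-in all() and any() functions pair well with a generator expression:

```python
my_tuple = (1, 2, 3)
print(all(i < 4 for i in my_tuple))  # True: every item is less than 4
print(any(i > 2 for i in my_tuple))  # True: at least one item is greater than 2
```

all() answers "do all items satisfy the comparison?", while any() answers "does at least one?".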

Comparing Tuple Length with an Integer

In Python, we often need to compare the length of a tuple with an integer. This is straightforward and can be done using the len() function, which returns the number of items in an object. Here's how you can do it:

my_tuple = ('apple', 'banana', 'cherry', 'dates')
length = len(my_tuple)
if length < 5:
    print('The tuple has less than 5 items.')
else:
    print('The tuple has 5 or more items.')

In the above example, the length of my_tuple is 4, so the output will be 'The tuple has less than 5 items.'

Understanding Tuple Construction in Python

Tuples are one of Python's built-in data types. They are used to store multiple items in a single variable. Tuples are similar to lists, but unlike lists, tuples are immutable. This means that once a tuple is created, you cannot change its items.

You can create a tuple by placing a comma-separated sequence of items inside parentheses (). Here's an example:

my_tuple = ('apple', 'banana', 'cherry', 'dates')
print(my_tuple)

In the above example, my_tuple is a tuple containing four items.

Note: A tuple with only one item is called a singleton tuple. You need to include a trailing comma after the item to define a singleton tuple. For example, my_tuple = ('apple',) is a singleton tuple.
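The trailing comma really does matter: without it, the parentheses are just grouping and no tuple is created. A quick check:

```python
not_a_tuple = ('apple')   # just the string 'apple' in parentheses
a_tuple = ('apple',)      # a singleton tuple

print(type(not_a_tuple).__name__)  # str
print(type(a_tuple).__name__)      # tuple
print(len(a_tuple))                # 1
```

Forgetting the comma is a common source of confusing type errors, since the variable ends up holding the bare item rather than a tuple.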

Comparing Tuple Item Sum with an Integer

If your tuple contains numeric data, you might want to compare the sum of its items with an integer. You can do this using the sum() function, which returns the sum of all items in an iterable.

Here's an example:

my_tuple = (1, 2, 3, 4)
total = sum(my_tuple)
if total < 10:
    print('The sum of tuple items is less than 10.')
else:
    print('The sum of tuple items is 10 or more.')

In the above example, the sum of my_tuple items is 10, so the output will be 'The sum of tuple items is 10 or more.'

Comparing a Method and an Integer

In Python, a method is a function that is associated with an object. Methods perform specific actions on an object and can also return a value. However, you cannot directly compare a method with an integer. You need to call the method and use its return value for comparison.

Here's an example:

class MyClass:
    def my_method(self):
        return 5

my_object = MyClass()
if my_object.my_method() < 10:
    print('The return value of the method is less than 10.')
else:
    print('The return value of the method is 10 or more.')

In the above example, the my_method() method of my_object returns 5, so the output will be 'The return value of the method is less than 10.'

Resolving TypeError by Calling the Method

In Python, functions and methods are objects too. If you compare a method object itself with an integer, you'll get a TypeError, because Python has no way to order these two different types: comparing a method to an integer simply doesn't make sense. Let's take a look at an example:

def my_method():
    return 5

print(my_method < 10)

Output:

TypeError: '<' not supported between instances of 'function' and 'int'

To resolve this, we need to call the method instead of comparing the method itself to an integer. Remember, a method needs to be called to execute its function and return a value. We can do this by adding parentheses () after the method name:

def my_method():
    return 5

print(my_method() < 10)

Output:

True

Note: The parentheses () are used to call a method in Python. Without them, you are referencing the method object itself, not the value it returns.
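One way to catch this mistake at runtime is to check whether the object you are about to compare is still callable. A small, hypothetical guard (callable() is a built-in):

```python
def my_method():
    return 5

value = my_method  # oops: forgot the parentheses
if callable(value):
    # We are holding the function object itself, not its return value, so call it
    value = value()
print(value < 10)  # True
```

In practice you would simply fix the missing parentheses, but a callable() check can make the bug visible instead of raising a confusing TypeError later.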

Conclusion

In Python, the error message "TypeError: '<' not supported between instances of 'str' and 'int'" is a common error that occurs when you try to compare incompatible types.

This article has walked you through various scenarios where this error may occur and how to resolve it. Understanding these concepts will help you write more robust and error-free code. Remember, when you encounter a TypeError, the key is to identify the types of the objects you are comparing and ensure they are compatible. In some cases, you may need to convert one type to another or access specific values from a complex object before comparison.

Categories: FLOSS Project Planets

PyBites: Make Each Line Count, Keeping Things Simple in Python

Planet Python - Thu, 2023-08-24 04:09

A challenge in software development is to keep things simple

For your code to not grow overly complex over time

Simple is better than complex.
Complex is better than complicated.

Zen of Python

Simplicity in your code means fewer possibilities for bugs to hide and easier debugging when they do arise

It also makes your code more understandable and maintainable, which is crucial in a team setting or when returning to your code after a period of time.

A good example is (not) using Python built-ins.

Photo by Pablo Arroyo on Unsplash

Say you go from checking whether one number is divisible:

def is_divisible(num, divisor):
    return num % divisor == 0

To multiple numbers:

def is_divisible_by_all(num, divisors):
    for divisor in divisors:
        if num % divisor != 0:
            return False
    return True

This is valid and works, but you can write it in a simpler manner using the all() built-in function:

def is_divisible_by_all(num, divisors):
    return all(num % divisor == 0 for divisor in divisors)

Very clean / easy to read

Another example is doing dictionary lookups, checking if the key is in the dictionary:

Complex (unnecessary):

def get_value(dictionary, key):
    if key in dictionary:
        return dictionary[key]
    else:
        return "Key not found"

Better: leverage the fact that Python dicts have a get() method:

dictionary.get(key, "Key not found")
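The same get() default also shines when building a dictionary up, for example when counting occurrences without any membership check; a small sketch:

```python
words = ['a', 'b', 'a']
counts = {}
for word in words:
    # get() returns 0 when the key is missing, so no 'if word in counts' is needed
    counts[word] = counts.get(word, 0) + 1
print(counts)  # {'a': 2, 'b': 1}
```

(For this particular task the standard library goes one step further: collections.Counter(words) does it in a single call.)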

Remember, simple code isn’t just about having fewer lines; it’s about being concise and making each line count

This usually means heavily using the Python built-ins and Standard Library

The more you can do with less, the easier your code is to understand and maintain

Code more idiomatically with our collection of 400 Python exercises on our platform

Categories: FLOSS Project Planets

Optimizing and Sharing Shader Structures

Planet KDE - Thu, 2023-08-24 04:00

When writing large graphics applications in Vulkan or OpenGL, there’s many data structures that need to be passed from the CPU to the GPU and vice versa. There are subtle differences in alignment, padding and so on between C++ and GLSL to keep track of as well. I’m going to cover a tool I wrote that generates safe and optimal code. This helps not only the GPU but the programmer writing shaders too. Here’s a rundown of the problems I’m trying to solve and how you can implement a similar system in your own programs.

This tool specifically targets and references Vulkan rules, but similar rules exist in OpenGL.

Reasoning

Here’s an example of real code, exposing options to a post-processing stage.

layout(push_constant) uniform PushConstant {
    vec4 viewport;
    vec4 options;
    vec4 transform_ops;
    vec4 ao_options;
    vec4 ao_options2;
    vec4 proj_info;
    mat4 cameraProj;
    mat4 invProj;
};

Even for the person who wrote this code, it’s hard to tell what each option does from a glance. This is a great way to create bugs, since it’s extremely easy to mix up accessors like ao_options.x and ao_options.y. Ideally, we want these options to be separated but there’s a reason why they’re packed in the first place.

Alignment rules

Say you’re beginning to explore Phong shading, and you want to expose a position and a color property so you can change them while the program is running. In a 3D environment, there are three axes (X, Y and Z) so naturally the position must be a vec3. Light color also makes sense as a vec3: when emitted from a light, its color can’t really be “transparent”, so we don’t need the alpha channel. The GLSL code so far looks like this:

#version 430

out vec4 finalColor;

layout(binding = 0) buffer block {
    vec3 position;
    vec3 color;
} light;

void main() {
    const vec3 dummy = vec3(1) - light.position;
    finalColor = vec4(vec3(1.0, 1.0, 1.0) * light.color, 1.0);
}

(There’s no Phong formula here, we want to make sure the GLSL compiler doesn’t optimize anything out.)

When writing the structure on the C++ side, you might write something like this:

struct Light {
    glm::vec3 position;
    glm::vec3 color;
} light;

light.position = {1, 5, 0};
light.color = {3, 2, -1};

For this example I used the debug printf system, which is part of the Vulkan SDK so we can confirm the exact values. The output is as follows:

Position = (1.000000, 5.000000, 0.000000)
Color = (2.000000, -1.000000, 0.000000)

As you can see, the first value of color is getting chopped off when reading it in the shader. The usual solution to the problem is to use a vec4 instead:

struct Light {
    glm::vec4 position;
    glm::vec4 color;
};

And to confirm, this does indeed fix the issue:

Position = (1.000000, 5.000000, 0.000000)
Color = (3.000000, 2.000000, -1.000000)

But why does it work when we change to it a vec4? This section from the Vulkan specification spells it out for us:

  • The base alignment of the type of an OpTypeStruct member is defined recursively as follows:
      • A scalar has a base alignment equal to its scalar alignment.
      • A two-component vector has a base alignment equal to twice its scalar alignment.
      • A three- or four-component vector has a base alignment equal to four times its scalar alignment.

The third bullet point hits it right on the head, vec4 and vec3 have the same alignment! An alternative solution could be to use alignas:

struct Light {
    glm::vec3 color;
    alignas(16) glm::vec3 position;
};

There are a bunch more nitty-gritty alignment issues that stem from differences between C++ and GLSL, and this is one of those cases. In my opinion, this shouldn’t be necessary for the programmer to handle themselves.

Passing booleans

Another example of esoteric shader rules is when you try passing booleans. Take a look at this C++ structure, which seems okay at first glance:

struct TestBuffer {
    bool a = false;
    bool b = true;
    bool c = false;
    bool d = true;
};

And this is how it’s defined in GLSL:

layout(binding = 0) buffer readonly TestBuffer {
    bool a, b, c, d;
};

When sent to the shader, the values of the structure end up like this:

a = 1, b = 0, c = 0, d = 0

This is because SPIR-V doesn’t seem to define a physical size for bool, so it could be represented as anything (like an unsigned integer). In this case, you actually want to define them as integers:

layout(binding = 0) buffer readonly TestBuffer {
    int a, b, c, d;
};

This is a little disappointing, because the semantic meaning of a boolean option is lost when you declare them as integers. You can also pack a lot of booleans into the space of one 32-bit integer, which could be a possible space-saving optimization in the future.
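The packing idea itself is language-agnostic bit manipulation. As an illustration only, here is a sketch in Python of folding four flags into a single integer with shifts and masks (the bit layout here is my own assumption, not what any particular shader or API expects):

```python
def pack_flags(a, b, c, d):
    # Each boolean occupies one bit of a single 32-bit-style integer
    return (int(a) << 0) | (int(b) << 1) | (int(c) << 2) | (int(d) << 3)

def unpack_flag(packed, index):
    # Shift the desired bit down to position 0 and mask everything else off
    return bool((packed >> index) & 1)

packed = pack_flags(False, True, False, True)
print(packed)                  # 10, i.e. binary 1010
print(unpack_flag(packed, 1))  # True
```

The same shifting and masking works in C++ and GLSL, which is why packed flags can save space on both sides of the CPU/GPU boundary.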

Sharing structures

The last problem is keeping the structures in sync. There’s usually one instance of the structure written in C++ and many copies in GLSL shaders. This is problematic because if the member order changes in one copy, parts of the structure silently fall out of sync, and the mismatch can easily escape notice. Having one definition for all shaders and C++ would be a huge improvement!

Struct compiler

What I ended up with is a new pre-processing step, which I called the “struct compiler”. I tried searching on the Internet to see if someone has already made a tool like this, but couldn’t find much – maybe shader reflection is more popular. I did learn a lot from making this tool anyway. Its main goals are:

  • Define the shader structures in one, centralized file.
  • Structures should be able to be written on a higher-level, allowing us to decouple the actual member order, alignment and packing from the logic. This enables the compiler to optimize the structure in the future, maybe beyond what we can reasonably hand-write.
  • The structure is usable in GLSL and C++.

First you write a .struct file, describing the required members and their types. Here’s the same post-processing structure showcased in the beginning, but now written in the compiler’s custom syntax:

primary PostPushConstant {
    viewport: vec4
    camera_proj: mat4
    inv_proj: mat4
    inv_view: mat4
    enable_aa: bool
    enable_dof: bool
    exposure: float
    display_color_space: int
    tonemapping: int
    ao_radius: float
    ao_r2: float
    ao_rneginvr2: float
    ao_rdotvbias: float
    ao_intensity: float
    ao_bias: float
}

This looks much better, doesn’t it? Even without knowing anything else about the actual shader, you can guess which options do what with some accuracy. Here’s what it might look like, compiled to C++:

struct PostPushConstant {
    glm::mat4 camera_proj;
    glm::mat4 inv_proj;
    glm::mat4 inv_view;
    glm::vec4 viewport;
    glm::ivec4 enable_aa_enable_dof_display_color_space_tonemapping_;
    glm::vec4 exposure_ao_radius_ao_r2_ao_rneginvr2_;
    glm::vec4 ao_rdotvbias_ao_intensity_ao_bias_;
    ...
};

(Setters like set_exposure() and matching getters like exposure() are used instead of accessing the glm::vec4 manually.)

I hook the generation step into my build system so it runs automatically; all you need to do is include the auto-generated header. To use the structure in GLSL, I created a new directive. The same system that generates the C++ headers also generates a GLSL version of the structure, which is inserted wherever this directive is found:

#use_struct(push_constant, post, post_push_constant)

(The syntax could use some work, but the first argument is the usage, and the second argument is the name of the struct. The third argument is a unique name for the instance.)

Since the member order and names are undefined, you must access the members by a setter/getter in GLSL and C++. I think this is a worthwhile trade-off for readable code.

vec3 ao_result = pow(ao, ao_intensity())

This tool runs as a pre-processing step offline, before shader compilation begins. The tool’s source code is available here, which is taken from one of my personal projects. It’s quickly written and I don’t recommend using it directly, but I’m confident that this idea is worth pursuing.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Optimizing and Sharing Shader Structures appeared first on KDAB.

Categories: FLOSS Project Planets
