Feeds

PyBites: Blaise Pabon on his developer journey, open source and why Python is great

Planet Python - Mon, 2023-03-13 14:00


Welcome back to the Pybites podcast. This week we have a very special guest: Blaise Pabon.

We talk about his background in software development, how he started with Python and his journey with us in PDM.

We also pick his brains about why Python is such a great language, the importance of open source and his active role in it, including a myriad of developer communities he actively takes part in.

Lastly, we talk about the books we’re currently reading.

Links:
– Vitrina: a portfolio development kit for DevOps
– Boston Python hosts several online meetings a week
– The mother of all (web) demo apps
– Cucumberbdd New contributors ensemble programming sessions
– Reach out to Blaise: blaise at gmail dot com | Slack

Books:
– Antifragile
– The Tombs of Atuan (Earthsea series)
– How to write

Categories: FLOSS Project Planets

Real Python: Python News: What's New From February 2023

Planet Python - Mon, 2023-03-13 10:00

February is the shortest month, but it brought no shortage of activity in the Python world! Exciting developments include a new company aiming to improve cloud services for developers, publication of the PyCon US 2023 schedule, and the first release candidate for pandas 2.0.0.

In the world of artificial intelligence, OpenAI has continued to make strides. But while the Big Fix has worked to reduce vulnerabilities for programmers, more malicious programs showed up on PyPI.

Read on to dive into the biggest Python news from the last month.


Pydantic Launches Commercial Venture

With over 40 million downloads per month, pydantic is the most-used data validation library in Python. So when its founder, Samuel Colvin, announces successful seed funding, there’s plenty of reason to believe there’s a game changer in the works.
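If you haven't used it, here is a minimal sketch of the kind of validation pydantic provides; the Event model and its fields are invented for illustration:

from pydantic import BaseModel, ValidationError

class Event(BaseModel):
    name: str
    attendees: int

# Well-formed input is validated and coerced: "42" becomes the int 42.
event = Event(name="PyCon US", attendees="42")
print(event.attendees)  # 42

# Malformed input raises a structured ValidationError.
try:
    Event(name="PyCon US", attendees="lots")
except ValidationError as exc:
    print(exc)

This combination of type coercion and structured errors is what the library is best known for.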

In his announcement, Colvin compares the current state of cloud services to that of a tractor fifteen years after its invention. In both cases, the technology gets the job done, but without much consideration for the person in the driver’s seat.

Colvin’s new company builds on what pydantic has learned about putting developer experience and expertise first. The exact details of this new venture are still under wraps, but it sets out to answer these questions:

What if we could build a platform with the best of all worlds? Taking the final step in reducing the boilerplate and busy-work of building web applications — allowing developers to write nothing more than the core logic which makes their application unique and valuable? (Source)

This doesn’t mean that the open-source project is going anywhere. In fact, pydantic V2 is on its way and will be around seventeen times faster than V1, thanks to being rewritten in Rust.

To stay up to date on what’s happening with pydantic’s new venture, subscribe to the GitHub issue.

Read the full article at https://realpython.com/python-news-february-2023/ »


Categories: FLOSS Project Planets

Python for Beginners: Overwrite a File in Python

Planet Python - Mon, 2023-03-13 09:00

File handling is one of the first tasks we take on while working with data in Python. Sometimes, we need to change the contents of the original file or completely overwrite it. This article discusses different ways to overwrite a file in Python.

Table of Contents
  1. Overwrite a File Using open() Function in Write Mode
  2. Using Read Mode and Write Mode Subsequently
  3. Overwrite a File Using seek() and truncate() Method in Python
  4. Conclusion
Overwrite a File Using open() Function in Write Mode

To overwrite a file in Python, you can directly open the file in write mode. For this, you can use the open() function. The open() function takes a string containing the file name as its first input argument and the string literal “w” as its second input argument.

If a file with the given name doesn’t exist in the file system, the open() function creates a new file and returns the file pointer. If the file is already present in the system, the open() function deletes all the file contents and returns the file pointer.

Once we get the file pointer, we can write the new data to the file using the file pointer and the write() method. The write() method, when invoked on the file pointer, takes the file content as its input argument and writes it to the file. Next, we will close the file using the close() method. After this, we will get the new file with overwritten content.

For instance, suppose that we want to overwrite the following text file in Python.

Sample text file

To overwrite the above file, we will use the following code.

file=open("example.txt","w") newText="I am the updated text from python program." file.write(newText) file.close()

The text file after execution of the above code looks as follows.

In this example, we have directly opened the text file in write mode. Due to this, the original content of the file gets erased. After this, we have overwritten the file using new text and closed the file using the close() method.
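As a side note, the same overwrite is often written with a with statement, which closes the file automatically even if write() raises an exception. This is an equivalent sketch of the code above:

with open("example.txt", "w") as file:
    file.write("I am the updated text from python program.")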

Using Read Mode and Write Mode Subsequently

In the above example, we cannot read the original file contents. If you want to read the original file contents, modify them, and then overwrite the original file, you can use the following steps.

  • First, we will open the file in read mode using the open() function. The open() function will return a file pointer after execution. 
  • Then, we will read the file contents using the read() method. The read() method, when invoked on the file pointer, returns the file contents as a string. 
  • After reading the file contents, we will close the file using the close() method. 
  • Now, we will again open the file, but in write mode. 
  • After opening the file in write mode, we will overwrite the file contents using the write() method. 
  • Finally, we will close the file using the close() method.

After executing the above steps, we can overwrite a file in Python. You can observe this in the following example.

file=open("example.txt","r") fileContent=file.read() print("The file contents are:") print(fileContent) file.close() file=open("example.txt","w") newText="I am the updated text from python program." file.write(newText) file.close()

Output:

The file contents are:
This a sample text file created for pythonforbeginners.

In this example, we first opened the file in read mode and read the contents as shown above. Then, we overwrote the file contents by opening it again in write mode. After execution of the entire code, the text file looks as shown below.

Modified text file

Although this approach works for us, it is just a workaround. We aren’t actually overwriting the file in place; we are reading from it and then writing to it after opening it twice, separately. Instead of this approach, we can use the seek() and truncate() methods to overwrite a file in Python.

Overwrite a File Using seek() and truncate() Method in Python

To overwrite a file in a single step after reading its contents, we can use the read(), seek(), truncate(), and write() methods.

  • When we read the contents of a file using the read() method, the file pointer is moved to the end of the file. We can use the seek() method to move back to the start of the file. The seek() method, when invoked on the file pointer, takes the desired position of the file pointer as its input argument. After execution, it moves the file pointer to the desired location. 
  • To remove the contents of the file after reading it, we can use the truncate() method. The truncate() method, when invoked on a file pointer, truncates the file. In other words, it removes all the file contents after the current position of the file pointer, as the short demonstration below shows.
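Here is that demonstration; the file name demo.txt is just an example:

# Create a ten-character file.
with open("demo.txt", "w") as file:
    file.write("0123456789")

# Reopen it, move the pointer, and truncate at that position.
with open("demo.txt", "r+") as file:
    file.seek(4)     # move the file pointer to index 4
    file.truncate()  # remove everything from index 4 onwards

with open("demo.txt") as file:
    print(file.read())  # prints: 0123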

To overwrite a file in Python using the seek() and truncate() methods, we can use the following steps.

  • First, we will open the file in read-and-write mode ("r+") using the open() function. 
  • Next, we will read the file contents using the read() method. 
  • After reading the file contents, we will move the file pointer to the start of the file using the seek() method. For this, we will invoke the seek() method on the file pointer with 0 as its input. The file pointer will be moved to the first character in the file.
  • Next, we will use the truncate() method to delete all the contents of the file.
  • After truncating, we will overwrite the file with new file contents using the write() method.
  • Finally, we will close the file using the close() method.

After executing the above steps, we can easily overwrite a file in Python. You can observe this in the following example.

file=open("example.txt","r+") fileContent=file.read() print("The file contents are:") print(fileContent) newText="I am the updated text from python program." file.seek(0) file.truncate() file.write(newText) file.close()

Output:

The file contents are:
This a sample text file created for pythonforbeginners.

After execution of the above code, the text file looks as follows.
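As an aside, when you do not need explicit control over the file pointer, the standard library’s pathlib module offers a compact equivalent of the read-then-overwrite pattern. This sketch works on the same example file:

from pathlib import Path

path = Path("example.txt")
print("The file contents are:")
print(path.read_text())
# write_text() replaces the entire file contents in one call.
path.write_text("I am the updated text from python program.")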

Conclusion

In this article, we discussed different ways to overwrite a file in Python. To learn more about Python programming, you can read this article on how to convert a pandas series into a dataframe. You might also like this article on how to read a file line by line in Python.

I hope you enjoyed reading this article. Stay tuned for more informative articles.

Happy Learning!

The post Overwrite a File in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Logan Thomas

Planet Python - Mon, 2023-03-13 08:30

This week we welcome Logan Thomas as our PyDev of the Week! If you’d like to see some of the things that Logan has been up to, you can do so on Logan’s GitHub profile or by visiting his personal website. Logan works at Enthought as a teacher. If you haven’t done so, you should check out their list of courses.

Now let’s spend some time getting to know Logan better!

Can you tell us a little about yourself (hobbies, education, etc):

I currently work for Enthought as a Scientific Software Developer and Technical Trainer. Previously, I worked for a protective design firm as a machine learning engineer and I’ve also been employed as a data scientist in the digital media industry.

My educational background consists of mathematics, statistics, and mechanical engineering. During my final semester of graduate school at the University of Florida, I took a data mining class from the Department of Information Science and was hooked. I’ve worked in data science ever since.

I’m passionate about software and love to (humbly) share my knowledge with others. Working for Enthought gives me the opportunity to help students, and larger organizations, accelerate their journey toward greater digital maturity. We focus on digital transformation – transforming an organization’s business model to take full advantage of the possibilities that new technologies can provide. It’s so fulfilling being able to help scientists and engineers develop the digital skills and tools they need to be able to automate away the trivial tasks and free their minds of this cognitive load to focus on the things they love – the science and engineering behind the problems they face.

Outside of work, I love to stay active by running, enjoy brewing a good cup of coffee, and have recently started smoking brisket and pork (a must if you live in Texas like me).

Why did you start using Python?

I was originally hesitant to get into software development as both my parents are software engineers and I wanted my own career path. But, with their encouragement, I took an Introduction to Programming course during my undergraduate degree. The class was taught in Visual Basic and given it was my first true taste of any type of software development, I didn’t have the best experience. Thus, I decided to stay away from computer science and ended up going to graduate school for mechanical engineering. I’m glad the story didn’t end there.

In one of my mechanical design courses, we used a CAD (computer-aided design) program that supported Python scripting (if you knew how to use it!). I remember spending an entire weekend running simulations by hand with Microsoft Excel. When I asked my friend how he managed to get his testing done so quickly, he showed me roughly 10 lines of Python code he wrote to automate the simulation loop. After that exchange, I made it my goal to give computer science another try and take advantage of the superpower a general-purpose programming language like Python could provide.

What other programming languages do you know and which is your favorite?

Since graduate school, I’ve worked in the data science space for about 8 years – all of which my primary programming language has been Python. I love Python for its ease of use and ability to stretch across domain areas like scripting / automation, data analysis / machine learning, and even web design. For me, Python is the clear favorite.

Besides Python, I have the most experience with MATLAB, R, SQL, and Spark. I’ve dabbled in a few other languages as well: Scala, Ruby, C++, HTML / CSS.

What projects are you working on now?

Currently, I am working on building a curriculum for Enthought Academy – a scientific Python catalog of courses for R&D professionals. We have some really great courses in the queue for 2023! I’m putting final touches on our new Data Analysis course that covers Python libraries like NumPy, pandas, Xarray, and Awkward Array.

Personally, I’m working on a new project called Coffee & Code. The idea is to share small code snippets and video shorts for exploring the Python programming language — all during a morning coffee break. My first major project will be trying to teach my brother, who has 0 programming experience, the Python programming language. I plan to film it live and release videos as we walk through installation, environment setup, and language syntax together.

I’m also on the planning committee for two major Python conferences: PyTexas and SciPy Conference. I love both of these events and can’t wait for their 2023 appearance.

Which Python libraries are your favorite (core or 3rd party)?

There are so many to choose from! From the standard library, I have to give a shout-out to the collections module and itertools. The namedtuple, double-ended queue (deque), Counter, and defaultdict are game changers. The itertools recipes in the official Python documentation are a goldmine.

My favorite third-party libraries from the scientific Python stack are NumPy and scikit-learn. I use PyTorch and TensorFlow with Keras (in that order) for deep learning. I also have to give credit to IPython. It is my default choice for any type of interactive computing and is quite honestly my development environment of choice (when partnered with Vim). One lesser-known library that I have come to enjoy is DEAP (Distributed Evolutionary Algorithms in Python). I worked with this library extensively for a genetic programming project in the past.

I see you are an instructor at Enthought. What sorts of classes do you teach?

We teach all sorts of classes at Enthought. Given that all of our instructors started out in industry first, we can boast that our curriculum is “for scientists and engineers by scientists and engineers”. We have introduction to Python and programming courses as well as more advanced machine learning and deep learning courses. We also have a software engineering course, data analysis course, data management course, and desktop application development course. All of these live in what we call a learning path or “track”. We have an Enthought Academy Curriculum Map that displays the various tracks we offer: Data Analysis, Machine Learning, Tool Maker, and Manager.

What are some common issues that you see new students struggling with?

I love Python but can admit that there are some rough edges when first getting started. Most students struggle with understanding virtual environments and why they need one if they’ve never encountered dependency management before. Other students are surprised at how many third-party libraries exist in the ecosystem. I often get questions on which ones to use, how to know which ones are safe, and where to look for them.

Some students struggle with IDEs and which development environment to use. I’ve gotten questions like “which is the single best tool: JupyterLab, IPython, VSCode, PyCharm, Spyder, …”. For new students, I try to help them understand that these are tools and there may not be a single correct answer. Sometimes I reach for VSCode when needing to navigate a large code base. If I’m sharing an analysis with a colleague or onboarding a new analyst, I may reach for a Jupyter Notebook. If I’m prototyping a new algorithm, I’ll probably stay in IPython. But, these are my preferences and may not be the same for all students. When looking at these as tools in a toolbelt, rather than a single permanent choice, it becomes easier to pick the right one for the task at hand.

Is there anything else you’d like to say?

Thank you for the opportunity! I love to meet new people in the Python space and appreciate you reaching out!

The post PyDev of the Week: Logan Thomas appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

ComputerMinds.co.uk: Migrating cropped images

Planet Drupal - Mon, 2023-03-13 08:15

One of our big Drupal 7 to Drupal 9 migration projects included bringing across image cropping functionality and data on a longstanding client's website. This site had used the Imagefield Crop module, but that was only for Drupal 7. We set up Image Widget Crop for the new site, which is better in a few ways but is also fundamentally different. The old site referenced the cropped images from content, only bringing in the originally-uploaded images for edit pages, to allow editors to adjust the cropping, which was then used wherever that image appeared on the frontend. The new Image Widget Crop module, however, allows configuring different crops for different situations. For example, the same image could be cropped one way for use as a widescreen banner, but another way for when it appears in a grid of squares (as in the following screenshot). 

The real challenge was in migrating the data! But we call ourselves Drupal experts, so of course we dug in to find solutions. What we came up with might not work for everyone, but hopefully sharing it will help someone else whose situation is close enough to ours. We found the following steps were necessary...

1. Set up the new configuration

Configure the crop types, cropping settings, fields and widgets etc for the Drupal 9 site to use, including an image media type. I won't go into detail here as there are already guides about how to do this - e.g. from the Drupal Media Team and OSTraining. What I'm focussed on is how to migrate the cropped images and their original files, and the references to use them in the right places.

2. Migrate files, including media entities, and references to them

The old site's files can be migrated into the new site easily enough. I then use the Media Migration module to create media image entities that reference those files. In my situation, it was fine to migrate the file IDs over and to use matching media IDs too. I expect this won't be an option on some projects but it made things much easier for me.

The Media Migration module uses 'dealer' plugin classes to cover different types of media, but its 'image' dealer plugin ignores images handled by Drupal 7's Imagefield Crop module. So I had to replace that with a custom plugin.

In general, I aimed to migrate the originally-uploaded image files (i.e. from before cropping was applied), and reference those from host content. That's how the new Image Widget Crop module usually references the images, whereas the Imagefield Crop module referenced the cropped image files. The Image Widget Crop module usually maps to the cropped images via the chosen 'crop type' referenced in image styles so that different crops can be used for the same field when output in different places. Therefore, any migrations for it will have to translate back from the IDs of cropped image files to the IDs for 'uncropped' ones.

A custom module's hook_migrate_prepare_row() did that file ID translation, and also skipped migrating cropped images as media entities. Since the cropped images won't be referenced from content, they would just clog up the media library as duplicates of the original uncropped images. Detecting which files were only cropped images that wouldn't be referenced from elsewhere was a bit tricky, and one of the slowest parts of my migrations. So I allowed file entities to be created for these cropped images, as I figured that didn't matter so much. I imagine these two bits could have been done better with specific plugin classes rather than this hook.

For migrating the right data into the media field on every node/entity that referenced croppable images, I made a custom process plugin to make use of that mapping of cropped-to-uncropped file IDs. So my node migration YAML files declared this plugin should be used to get the uncropped file ID on the destination, that corresponded to the cropped images' IDs on the source end, like this:

field_image:
  plugin: MYMODULE_precropped_image
  source: field_image

That plugin basically just looks up the ID of the cropped file from the translation map that was set by the hook_migrate_prepare_row() and returns the ID of the uncropped image file.

3. Migrate cropping data

I needed additional migrations for the data in every Imagefield Crop field about the dimensions and positioning of the crops themselves. These created 'crop' entities, using a custom source plugin for a database table that extended Drupal\migrate\Plugin\migrate\source\SqlBase. This allowed me to use a bundle filter in my migrations, so different crop types could be used in Drupal 9 for images on different content types that happened to use the same source field storage in Drupal 7. The YAML for these migrations is simple enough to share:

langcode: en
status: true
dependencies:
  module:
    - crop
    - MYMODULE
id: d7_crop_field_image
class: Drupal\migrate\Plugin\Migration
migration_tags:
  - Content
migration_group: migrate_drupal_7
label: 'Image crops from field_image, except for galleries'
source:
  plugin: MYMODULE_imagefield_crop_table
  field_name: field_image
  bundle_filter:
    - article
    - blog_post
  constants:
    crop_type: 4_3_landscape
    target_entity_type: file
process:
  type: constants/crop_type
  # Our source plugin sets precropped_fid.
  entity_id: precropped_fid
  entity_type: constants/target_entity_type
  uri: uri
  height: field_image_cropbox_height
  width: field_image_cropbox_width
  x:
    plugin: callback
    # The D7 module recorded the co-ordinate of the top left of the crop box,
    # whereas we want the co-ordinate of the very centre of the crop box.
    callable: MYMODULE_translate_crop_coordinate
    unpack_source: true
    source:
      - field_image_cropbox_x
      - field_image_cropbox_width
  'y':
    plugin: callback
    callable: MYMODULE_translate_crop_coordinate
    unpack_source: true
    source:
      - field_image_cropbox_y
      - field_image_cropbox_height
destination:
  plugin: 'entity:crop'

4. Mitigate missing features in the new site

The old and new modules have their own different approaches, which means they don't quite have feature parity. I actually think the new Image Widget Crop module has a better overall approach but our old site did make use of some settings unique to Imagefield Crop which we wanted to bring across. Most interestingly, as many of the old site's image fields were configured to use the same dimensions or aspect ratios, we could set up crops on the new site to be shared across those fields. However, there were a few fields that had slightly different constraints on 'input' despite appearing the same on 'output'. So we had to alter Image Widget Crop's widget (via two levels of #process Form API callbacks!) to apply different maximum post-cropping dimensions for certain contexts. Editors could bypass this fairly easily, but it covers the most common journeys that they would normally follow.

Mission accomplished!

After all that, you can probably see it was quite a complex challenge! A lot of code went into this, of which I have shared barely any here, but if you find yourself in a similar scenario and need help, get in touch or leave a comment here. If anyone actually needs the PHP or YAML code, maybe I can look at packaging it up to share. But I know it's probably not generic enough to cover everyone's situations - I'd love to contribute it back to the community otherwise.

Categories: FLOSS Project Planets

Russell Coker: Firebuild

Planet Debian - Mon, 2023-03-13 08:07

After reading Bálint’s blog post about Firebuild (a compile cache) [1] I decided to give it a go. It’s non-free; the project web site [2] says that it’s free for non-commercial use or commercial trials.

My first attempt at building a Debian package failed due to man-recode using a seccomp() sandbox, I filed Debian bug #1032619 [3] about this (thanks for the quick response Bálint). The solution for me was to edit /etc/firebuild.conf and add man-recode to the dont_intercept list. The new version that’s just been uploaded to Debian fixes it by disabling seccomp() and will presumably allow slightly better performance.

Here are the results of building the refpolicy package with Firebuild, a regular build, the first build with Firebuild (30% slower) and a rebuild with Firebuild that reduced the time by almost 42%.

real    1m32.026s
user    4m20.200s
sys     2m33.324s

real    2m4.111s
user    6m31.769s
sys     3m53.681s

real    0m53.632s
user    1m41.334s
sys     3m36.227s

Next I did a test of building a Linux 6.1.10 kernel with “make bzImage -j18“, here are the results from a normal build, first build with firebuild, and second build. The real time is worse with firebuild for this on my machine. I think that the relative speeds of my CPU (reasonably fast 18 core) and storage (two of the slower NVMe devices in a BTRFS RAID-1) are the cause of the first build being relatively so much slower for “make bzImage” than for building the refpolicy, as the kernel build process involves a lot more data. For the final build I moved ~/.cache/firebuild to a tmpfs (I have 128G of RAM and not much running on my machine at the time of the tests); even then building with firebuild was slightly slower in real time but took significantly less CPU time (user+sys being 20mins instead of 36m). I also ran several tests with the kernel source tree on a tmpfs but for unknown reasons those tests each took about 6 minutes. Does firebuild or the Linux kernel build process dislike tmpfs for some reason?

real    2m43.020s
user    31m30.551s
sys     5m15.279s

real    8m49.675s
user    64m11.258s
sys     19m39.016s

real    3m6.858s
user    7m47.556s
sys     9m22.513s

real    2m51.910s
user    10m53.870s
sys     9m21.307s

One thing I noticed from the kernel build tests is that the total CPU time taken by the firebuild process (as reported by ps) was more than 2/3 of the run time and top usually reported it as taking around 75% of a CPU core. It seems to me that the firebuild process itself is a bottleneck on build speed. Building refpolicy without firebuild has an average of 4.5 cores in use while building the kernel has 13.5. Unless they make a multi-threaded version of firebuild it seems that it won’t give the performance one would hope for from a CPU with 18+ cores. I presume that if I had been running with hyper-threading enabled then firebuild would have been even worse for kernel builds as it would sometimes get on the second thread of a core. It looks like firebuild would perform better on AMD CPUs as they tend to have fewer CPU cores with greater average performance per core, so a single CPU core for firebuild will be less limiting. I presume that the firebuild developers will make it perform better with large numbers of cores in future; the latest Intel laptop CPUs have 16+ cores and servers with 2*40core CPUs are common.
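For reference, the “average cores in use” figures quoted here can be computed directly from the time(1) output above; a quick Python sketch using those numbers:

def avg_cores(real: float, user: float, system: float) -> float:
    # Average number of CPU cores kept busy over the wall-clock duration.
    return (user + system) / real

# refpolicy regular build: real 1m32.026s, user 4m20.200s, sys 2m33.324s
print(round(avg_cores(92.026, 260.200, 153.324), 1))    # ~4.5

# kernel regular build: real 2m43.020s, user 31m30.551s, sys 5m15.279s
print(round(avg_cores(163.020, 1890.551, 315.279), 1))  # ~13.5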

The performance improvement for refpolicy is significant as a portion of build time, but insignificant in terms of real time. A full build of refpolicy doesn’t take enough time to go get a Coke, so reducing it doesn’t offer a huge benefit. If Firebuild had been available in past years, when refpolicy took 20 minutes to build (when DDR2 was the best RAM available), then it would be a different story.

There is some potential to optimise the build of refpolicy for the non-firebuild case. Getting it to average more than 4.5 cores in use when there are 18 available should be possible; there are a number of shell for loops in the main Makefile and maybe some of them can be replaced by make constructs to allow running in parallel. If it averaged 7 cores then it would be faster in a regular build than it currently is with firebuild and a hot cache. Any advice from make experts would be appreciated.

Categories: FLOSS Project Planets

Kay Hayen: Python 3.11 and Nuitka experimental support

Planet Python - Mon, 2023-03-13 08:05

In my all in with Nuitka post, my first post Python 3.11 and Nuitka, and then the progress post Python 3.11 and Nuitka Progress, I promised to give you more updates on Python 3.11 and in general.

So this is where 3.11 is at, and the TL;DR is: experimental support has arrived with the Nuitka 1.5 release, follow the develop branch for best support, and 1.6 is expected to support new 3.11 features.

What is now

The 1.5 release passes the CPython3.10 test suite practically as well with Python 3.11 as with Python 3.10, with only a handful of tests failing, and those do not seem significant; they are expected to be resolved later when making the CPython3.11 test suite work.

The 1.5 release now gives this kind of output.

Nuitka:WARNING: The Python version '3.11' is not officially supported by Nuitka '1.5', but an
Nuitka:WARNING: upcoming release will change that. In the mean time use Python version '3.10'
Nuitka:WARNING: instead or newer Nuitka.

Using develop should always be relatively safe; it doesn’t often have regressions, but Python 3.11 improvements will accumulate there until the 1.6 release happens. Follow it there if you want. However, checking those standalone cases that can be checked (as many packages are not available for 3.11 yet), I have not found a single issue.

What you can do?

Try your software with Nuitka and Python3.11 now. Very likely your code base is not using 3.11 specific features, or is it? If it is, of course you may have to wait until develop catches up with new features and changes in behavior.

In case you are wondering how I can invest this much time into doing all of what I do, consider becoming a subscriber of Nuitka commercial, even if you do not need the IP protection features it mostly provides. All commonly essential packaging and performance features are entirely free; I have put incredible amounts of work into this, and I now need to make a living off it, while I do not plan to make Nuitka annoying or unusable for non-commercial non-subscribers at all.

What was done?

Getting all of the test suite to work is a big thing already. Also, a bunch of performance degradations have been addressed. However, right now attribute lookups and updates, for example, are not as well optimized, and that is despite, and of course because, Python 3.11 changed the core a lot in this area.

The Process

This was largely explained in my previous posts. I will just put where we are now and skip completed steps and avoid repeating it too much.

In the next phase, during 1.6 development, the 3.11 test suite will be used in the same way as the 3.10 test suite. Then we will get to support new features, new behaviors, and newly allowed things, and achieve super compatibility with 3.11, as we always do for every CPython release. All the while, the CPython3.10 test suite will be executed with 3.11 by my internal CI, immediately reporting when things change for the worse.

This phase is starting today actually.

When

It is very hard to predict what will be encountered in the test suite. It didn’t look like many things are left, but e.g. exception groups might be an invasive feature; otherwise I am not aware of too many things at this point. It sure feels close now.

These new features will be relatively unimportant to the masses of users who didn’t immediately change their code to use 3.11 only features.

The worst thing with debugging is that I just never know how much time it will take. Often things are very quick to add to Nuitka, and sometimes they hurt a lot or cause regressions for other Python versions by mistake.

Benefits for older Python too

I mentioned stuff before that I will not repeat; only new stuff here.

Most likely, attribute lookups will lead to adding the same JIT approach that Python 3.11 allows for now, and maybe it will be possible to backport that to older Python as well. Not sure yet. For now, they are actually worse than with 3.10, while CPython made them faster.

Expected results

Not quite good for benchmarking at this time. From the comparisons I did, the compiled code of 3.10 and 3.11 seemed equally fast, allowing CPython to catch up. When Nuitka takes advantage of the core changes to dict and attributes more closely, the hope is that this will change.

So in a sense, using 3.11 with Nuitka over 3.10 actually doesn’t have much of a point yet.

I need to repeat this: people tend to expect that gains from Nuitka and enhancements of CPython stack up. The truth of the matter is, no, they do not. CPython is now applying some tricks that Nuitka already did, some a decade ago. Not using its bytecode will then become less of a benefit, but that’s OK; this is not what Nuitka is about.

We need to get somewhere else entirely anyway, in terms of speed up. I will be talking about PGO and C types a lot in the coming year, that is at least the hope. The boost of 1.4 and 1.5 was only the start. Once 3.11 support is sorted out, int will be getting dedicated code too; that’s where things will become interesting.

Final Words

So, this post is kind of late. My excuse is that due to having Corona, I kind of shut down on some things, and actually started to do optimization that will lead towards more scalable class code. That is also already in 1.5, and it is important.

But now that I feel better, I actually forced myself to post this. I am getting better at this. Now, back to starting on the CPython3.11 test suite; I will get back to you once I have some progress with that. Unfortunately, adding the suite is probably also a couple of days of work, just technically, before I encounter interesting stuff.

Categories: FLOSS Project Planets

Python Bytes: #327 Untangling XML with Pydantic

Planet Python - Mon, 2023-03-13 04:00
Watch on YouTube: https://www.youtube.com/watch?v=rduFjpb-fAw

About the show

Sponsored by Compiler Podcast from Red Hat (https://pythonbytes.fm/compiler).

Connect with the hosts
  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Michael #1: pydantic-xml extension (https://pydantic-xml.readthedocs.io/en/latest/)
  • via Ilan
  • Recall untangle. How about some pydantic in the mix?
  • pydantic-xml is a pydantic extension providing model fields XML binding and XML serialization / deserialization. It is closely integrated with pydantic, which means it supports most of its features.

Brian #2: How virtual environments work (https://snarky.ca/how-virtual-environments-work)
  • Brett Cannon
  • This should be required reading for anyone learning Python. Maybe right after “Hello World” and right before “My first pytest test”, approximately.
  • Some history of environments: back in the day, there was global and your directory.
  • How environments work: the structure (bin, include, and lib) and the pyvenv.cfg configuration file (see the sketch after these show notes).
  • How Python uses virtual environments.
  • What activation does, and that it’s optional. Yes, activation is optional.
  • A new project called microvenv that helps VS Code, mostly to fix the “Debian doesn’t ship python3 with venv” problem. It doesn’t include script activation stuff, and it’s super small: less than 100 lines of code, in one file.

Michael #3: DbDeclare (https://github.com/raaidarshad/dbdeclare)
  • Declarative layer for your database: https://raaidarshad.github.io/dbdeclare/guide/controller/#example
  • Sent in by creator raaid
  • DbDeclare is a Python package that helps you create and manage entities in your database cluster, like databases, roles, access control, and (eventually) more.
  • It aims to fill the gap between SQLAlchemy (SQLA) and infrastructure as code (IaC).
  • You can declare desired state in Python, avoid maintaining raw SQL, and tightly integrate your databases, roles, access control, and more with your tables.
  • Migrations like alembic coming too.

Brian #4: Testing multiple Python versions with nox and pyenv (https://sethmlarson.dev/nox-pyenv-all-python-versions)
  • Seth Michael Larson
  • This is a cool “what to do first” with nox: specifically, how to use it to run pytest against your project on multiple versions of Python.
  • The example noxfile.py is super small:

import nox

@nox.session(python=["3.8", "3.9", "3.10", "3.11", "3.12", "pypy3"])
def test(session):
    session.install(".")
    session.install("-rdev-requirements.txt")
    session.run("pytest", "tests/")

  • How to run everything: nox or nox -s test.
  • How to run single sessions: nox -s test-311 for just Python 3.11.
  • Also how to get this to work with pyenv: pyenv global 3.8 3.9 3.10 3.11 3.12-dev
  • This reminds me that I keep meaning to write a workflow comparison post about nox and tox.

Extras

Michael:
  • GitHub makes 2FA mandatory next week for active developers
  • New adventure bike [image 1, image 2]. Who’s got good ideas for where to ride in the PNW? Wondering why I got it, here’s a fun video.

Joke: Case of the Mondays
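A quick illustration of Brian’s second topic: this minimal Python sketch (the demo-venv directory name is arbitrary) creates a virtual environment with the standard library and prints the pyvenv.cfg file that marks the directory as an environment:

import venv
from pathlib import Path

# Create a virtual environment in ./demo-venv (no pip, to keep it fast).
venv.create("demo-venv", with_pip=False)

# pyvenv.cfg at the environment root records the base interpreter;
# its presence is what makes Python treat the directory as a venv.
print(Path("demo-venv", "pyvenv.cfg").read_text())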
Categories: FLOSS Project Planets

Russell Coker: Xmpp Tools

Planet Debian - Mon, 2023-03-13 03:13

For a while I’ve had my monitoring systems alert me via XMPP (Jabber). To do that I used the sendxmpp command-line program, which worked well for its basic tasks. I recently noticed that my laptop and workstation, which I had upgraded to Debian/Testing, weren’t sending messages. I’m not sure when it started, as my main monitoring of such machines is to touch a key and see if there’s a response – if I’m not at the keyboard then a failure doesn’t bother me too much.

I’ve filed Debian bug #1032868 [1] about this. As sendxmpp is apparently not supported upstream and we are preparing for a release it could be that the next version of Debian is released without this working (if it’s specific to talking to Prosody) or without sendxmpp (if it fails on all Jabber servers).

I next tested xmppc which doesn’t send messages (gives no error when I have apparently correct parameters and just doesn’t send anything) and doesn’t display any text output for info related commands while not giving error messages or an error return code. I filed Debian bug #1032869 [2] about this.

Currently the only success I’ve found with Debian/Testing for this is with go-sendxmpp. To configure that you setup a file named ~/.config/go-sendxmpp/config with the following contents:

username: JABBER-ID
password: PASSWORD

Go-sendxmpp can take a username and password on the command-line, but that’s bad for security, as in the absence of SE Linux or other advanced security systems the password can be seen by any user on the same system who runs ps. To send a message, run “echo $MESSAGE | go-sendxmpp $ADDR” to send $MESSAGE to $ADDR. It also has the option “go-sendxmpp -l” to listen for incoming messages. I don’t have an immediate need to receive messages from the command-line, but it’s handy to have the option.
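If you send alerts from Python rather than a shell script, the same pipe can be expressed with subprocess; a minimal sketch, assuming go-sendxmpp is on the PATH and configured as above (the address and message are placeholders):

import subprocess

def send_xmpp(addr: str, message: str) -> None:
    # Equivalent of: echo $MESSAGE | go-sendxmpp $ADDR
    subprocess.run(["go-sendxmpp", addr], input=message.encode(), check=True)

send_xmpp("admin@example.org", "disk usage above 90%")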

I probably won’t be able to get a new version of etbemon in Debian for the Bookworm release. So to get go-sendxmpp to work with etbemon you need to edit /usr/lib/mon/alert.d/mailxmpp.alert and change this sendxmpp line to this go-sendxmpp line:

open (XMPP, "| /usr/bin/sendxmpp -a /etc/ssl/certs -t @xmpprec -r $host") || open (XMPP, "| /usr/bin/go-sendxmpp @xmpprec") ||

Categories: FLOSS Project Planets

Tryton News: Tryton Unconference Berlin Call for Sponsors and Presentations

Planet Python - Mon, 2023-03-13 03:00

The next Tryton Unconference is scheduled to begin in just over two months. As you can see in the linked topic, the community is already booking their flights and accommodation; if you’ve already done this, then please share your booking details so we can all meet up and spend some time together.

As is the case for all unconferences, the content is provided by its participants, so if you have a topic that you want to present, please submit details of your talk by email to tub2023@tryton.org. We accept long presentations (30 minutes) and lightning talks (of just 5-10 minutes), so please include an estimate of the duration of your talk when submitting it.

If you don’t feel confident enough to present a topic all by yourself, or there is something you don’t fully understand and want explained, then please create a topic on the forum to see if anyone else is interested in joining forces, or doing a talk to try and make things clearer for you.

I would like to announce that the Foundation board has agreed to accept sponsors for this event. Sponsorship starts from 500€ and grants free entry to the unconference and the inclusion of your logo on the event’s page. If you are interested in sponsoring the event, please send us a message.

We will do our best to live stream the talks and make them available on Youtube. If you can help us with this, then please also raise your voice.

I hope to see you all in Berlin!


Categories: FLOSS Project Planets

The Drop Times: A Stitch in Time Saves Nine

Planet Drupal - Mon, 2023-03-13 02:57

Today, a Telugu-language movie got the Academy Award for best original song at the Oscars. While accepting the award, music composer M. M. Keeravani mentioned that he grew up listening to the Carpenters. Although he meant Karen and Richard Carpenter, the American music sensation of the '70s, three major media houses in Malayalam, another south Indian language, translated it as woodworkers.

It should be a classic example of shoddy journalism. But such mistakes are not so uncommon in vernacular media. The phrase 'prima facie' was once misconstrued as a lady's name. In one old story, one hundred eighty-six people 'sleeping in the railway station' had been washed off in a flash flood, when in reality it was the sleepers on which the rails were laid. The word 'magazines' got mistranslated as the literal monthly magazine in a story about the seizure of arms from the Sri Lankan Tamil militia; however, the editor saved face by catching it before printing. While reporting a death after a 'hot dog' eating competition, a newspaper thought the man had eaten raging canines. If this is how journalists write, a techie quipped, he would be in danger if he said Python was his bread and butter.

Now excuse me. It is the new normal. Our media houses have lost editorial prowess. Speed before accuracy is the new-age motto. In such a speed-crazy world, having your editorial arm halved would be a significant loss.

We at TDT have witnessed such a loss. As mentioned in the last newsletter, NERD Summit and DrupalCamp NJ will happen this week. As media partners for the two camps, we had many plans to execute, and a significant part of those plans revolved around a young journalist we had just hired, S. Jayesh.

S. Jayesh is a name heard in both Malayalam and Tamil literary circles. He is a poet and short story writer who has translated a few novels from Tamil to Malayalam. I knew him from his previous stints, where he was a workaholic and punctual, more productive than most, but would never do overtime, as is the common practice in this part of the world. A polyglot with years of experience in online media, he was hired by the end of December.

On February 13, he fell on his back, wounding his head. He was rushed to the hospital, had to undergo two neurosurgeries for blood clots in his head, and was in a coma for more than two weeks. Fortunately, he has regained consciousness, but he must remain in the hospital. As he lacks medical insurance, his mother has taken to seeking donations to fund his hospitalization expenses. She is seeking around $18,300 USD (₹1,500,000 INR). Until now, she has been able to collect only 32% of that amount. Even if he gets discharged, it will probably take months for him to rejoin work. So we urge the Drupal community to pitch in, even in small amounts, to help him in his time of need.

The crowdfunding request is placed on Milaap.org, a fundraising platform for medical emergencies and social causes. The platform charges no intermediary fees, and every penny donated to Jayesh will go into his mother's account for the treatment of her son.

Coming back to the past week's stories. On March 08, Wednesday, we published an Interview with Rick Hood as a primer to the NERD Summit 2023. In this exciting interview, he not only discusses Drupal but also goes into his music production interests and his past boat business.

Evolving Web has announced a training on Drupal Site Building in April. On March 15, Acquia will host a webinar on Securing The Modern Digital Landscape, and on March 16, another webinar on CDP. Tomorrow, Design4Drupal Boston will host AmyJune Hineline for an accessibility webinar.

All but three sessions of DrupalCamp Florida are online on their YouTube channel. MidCamp 2023 has announced its sessions and speakers. DrupalCamp Finland started accepting papers. NERD Summit was still accepting training session submissions as a backup. They have also pushed out a call for volunteers. DrupalCamp Poland has put early bird tickets on sale. DrupalCon Pittsburgh is seeking sponsors to support Women In Tech. The last day to apply for a volunteering opportunity in DrupalCon Lille is tomorrow.

Project Browser Initiative collects feedback via Google Forms about what information is most valuable to you when "browsing" for modules on drupal.org. In celebration of Women's History Month, the Drupal Association highlighted the work of Nichole Addeo, the Managing Director and Co-founder of Mythic Digital. ICFOSS and Zyxware Technologies joined hands to impart Drupal training for women as part of the 'Back to Work for Women' campaign.

On blogs and training materials, visit Kevin Funk's article in Acquia Developer Portal about utilizing developer workspaces with Acquia Code Studio. Alejandro Moreno Lopez, the Developer Advocate at Pantheon Platform, shared an educational video about the benefits of using Drupal for a Decoupled project.

That is all for the week. Thank you,

Sincerely,
Sebin A. Jacob
Editor-in-Chief

Categories: FLOSS Project Planets

Antoine Beaupré: Framework 12th gen laptop review

Planet Debian - Sun, 2023-03-12 22:01

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, scoring an "exceedingly rare" 10/10 score from ifixit.com.

There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets.

I received my Framework 12th generation "DIY" device in late September 2022 and will update this page as I go along in the process of ordering, burning in, setting up and using the device over the years.

Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to survive me many years as well.

This is also one of the most powerful devices I ever lay my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine.

On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and hacking around a bunch of things to get resolution and keybindings working right.

Like others, I at first found significant power management issues, but many issues can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery, while those are plugged in.

Finally, the expansion ports are nice, but there's only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock.

Read on for the detailed review. For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well.

This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework. Jump around the table of contents as you see fit after you buy the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers

Those are things I wish I would have known before buying:

  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards and you do want to fill those expansion slots otherwise they snag around and feel insecure

  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports

  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings

  4. in particular, beware that the HDMI, DisplayPort and particularly the SSD and MicroSD cards take a significant amount power, even when sleeping, up to 2-6W for the latter two

  5. beware that the MicroSD card is what it says: Micro. Normal SD cards won't fit, and while there might be a full-sized one eventually, it's currently only at the prototyping stage

  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you

Current status

I have the framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in.

I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure.

Specifications

Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.

  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spludger
  • 1 year warranty
  • base price: 1000$CAD, but doesn't give you much, typical builds around 1500-2000$CAD
Actual build

This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration
  • CPU: Intel® Core™ i5-1240P, 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$
Customization
  • Keyboard: US English, included
Expansion Cards
  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage – 1TB $199
  • Sub-total: 384$
Accessories
  • Power Adapter - US/Canada $64.00
Total
  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping
Quick evaluation

This is basically the TL;DR: here, just focusing on broad pros/cons of the laptop.

Cons
  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)

  • the 12th gen has compatibility issues with Debian (follow-up on the DebianOn page), but basically: brightness hotkeys, power management, wifi; the webcam is okay even though the chipset is the infamous Alder Lake, because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working

  • 12th gen might have issues with thunderbolt docks

  • they used to have some difficulty keeping up with the orders: first two batches shipped, third batch sold out, fourth batch should have shipped in October 2021; they generally seem to keep up with shipping. Update (August 2022): they rolled out a second line of laptops (12th gen); first batch shipped, second batch shipped late, the September 2022 batch was generally on time, see this spreadsheet for a crowdsourced effort to track those. Supply chain issues seem to be under control as of early 2023. I got the Ethernet expansion card shipped within a week.

  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm) but the design of the Purism laptop (tapered edges) makes it feel thinner

  • no space for a 2.5" drive

  • rather bright LED around the power button; it can be dimmed in the BIOS (though not low enough for my taste), but I got used to it

  • fan quiet when idle, but can be noisy when running, for example if you max a CPU for a while

  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)

  • no RJ-45 port, and attempts at designing one were failing because the modular plugs are too thin to fit (according to Linux After Dark), so it seemed unlikely to have one in the future. Update: they cracked that nut and ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob

  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)

  • 12th gen Intel has glitchy graphics, seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup

A breeze.

Accessing the board

The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them.

The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off to get access to the internals. Kind of scary.

I also actually unplugged a connector while lifting the assembly, because I lifted it towards the monitor while you actually need to lift it to the right. Thankfully, the connector didn't break: it just snapped off and I could plug it back in, no harm done.

Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes

Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and the CPU slot) did not actually work, so I wonder how useful those actually are.

After all, they need to point to something and that means a URL, a running website that will answer those requests forever. I bet those will break sooner than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT reform here which designed (with the 100 rabbits folks) an actual paper handbook (PDF).

The first QR code that's immediately visible from the back of the laptop, in an expansion card slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404.

I was expecting that bar code to lead me to an introduction page, something like "how to set up your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:

https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57

... which Firefox immediately transforms into:

https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57

I'm puzzled as to why they would send the URL that way, the proper URL is of course:

https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57

(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory

The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual.

Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed.

Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review

Keyboard and touch pad

The keyboard feels nice, for a laptop. I'm used to mechanical keyboards and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented.

At first, I felt the keyboard as being more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a composition manager, everything feels snappy.

The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works.

Taps don't work, out of the box: that needs to be enabled in Xorg, with something like this:

cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
  Identifier "libinput touch pad catchall"
  MatchIsTouchpad "on"
  MatchDevicePath "/dev/input/event*"
  Driver "libinput"
  Option "Tapping" "on"
  Option "TappingButtonMap" "lmr"
EndSection
EOF

But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button

The power button is a little dangerous. It's quite easy to hit, as it's right next to one expansion card where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button.

So obviously, don't do that. But that's not very helpful.

An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:

[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff

You might have to create the directory first:

mkdir /etc/systemd/logind.conf.d/

Then restart logind:

systemctl restart systemd-logind

And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends...

Note that there's probably half a dozen other ways of doing this, see this, this, or that.

Special keybindings

There is a series of "hidden" (as in: not labeled on the keys) keybindings related to the fn key that I actually find quite useful.

Key  Equivalent  Effect                  Command
p    Pause       lock screen             xset s activate
b    Break       ?                       ?
k    ScrLk       switch keyboard layout  N/A

It looks like those are defined in the microcontroller so it would be possible to add some. For example, the SysRq key is almost bound to fn s in there.

Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12 that only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player".

The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark.
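If you want finer control than that three-level cycle, the embedded controller can apparently be scripted with the same fw-ectool mentioned in the battery wear protection section below; a sketch, assuming your ectool build exposes the pwmsetkblight subcommand:

sudo ectool pwmsetkblight 40   # set keyboard backlight to 40%; 0 turns it off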

A skinny elephant would be performed with alt PrtScr (above F11) plus a letter key, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold fn only if "function lock" is on, but that's actually the opposite of my experience.

Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all.

This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but so far no progress since July 2022.

In the meantime, the fancy functionality can be supposedly disabled with:

echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf

... and a reboot. This solution is also documented in the upstream guide.

Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.

Kill switches

The Framework has two "kill switches": one for the camera and the other for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted, it just stops feeding the sound.

Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor

The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright.

I have yet to use it in full sunlight.

The dimmed light is very dim, which I like.

Screen backlight

I bind brightness keys to xbacklight in i3, but out of the box I get this error:

sep 29 22:09:14 angela i3[5661]: No outputs have backlight property

It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:

Section "Device" Identifier "Card0" Driver "intel" Option "Backlight" "intel_backlight" EndSection

This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.
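For reference, the i3 side of that binding looks something like this (a sketch; the 10% step size is my own choice):

# in ~/.config/i3/config: map the XF86 brightness keys to xbacklight
bindsym XF86MonBrightnessUp exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10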

Multiple monitor support

I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great.

I actually ran out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:

  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card

Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test).

But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports.

But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs.

My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:

autorandr -l framework-external-dual-lg-acer

The magic:

autorandr -l horizontal

... also works well.

The worst problem with those monitors right now is that they have a radically smaller resolution than the laptop's main screen, so I need to reset the font scaling to normal every time I switch back and forth, which means actually running:

autorandr -l horizontal && echo Xft.dpi: 96 | xrdb -merge && systemctl restart terminal xcolortaillog background-image emacs && i3-msg restart

Kind of disruptive.
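One way to make this less disruptive is to fold the whole dance into a little script; a sketch, where the default profile name and the units to restart are assumptions from my own setup:

#!/bin/sh
# switch-monitors: load an autorandr profile and reset the font scaling
set -e
profile="${1:-horizontal}"
autorandr -l "$profile"
echo "Xft.dpi: 96" | xrdb -merge
systemctl --user restart emacs   # adjust to whatever caches the DPI for you
i3-msg restart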

Expansion ports

I ordered a total of 10 expansion ports.

I did manage to initialize the 1TB drive as an encrypted storage, mostly to keep photos as this is something that takes a massive amount of space (500GB and counting) and that I (unfortunately) don't work on very often (but still carry around).
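For the record, initializing such an encrypted card is just the standard LUKS dance; a sketch, where /dev/sda and the "photos" mapping name are assumptions (check lsblk for the actual device first):

sudo cryptsetup luksFormat /dev/sda    # one-time: encrypt the card
sudo cryptsetup open /dev/sda photos   # unlock it as /dev/mapper/photos
sudo mkfs.ext4 /dev/mapper/photos      # one-time: create a file system
sudo mount /dev/mapper/photos /mnt     # then mount it as usual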

The expansion ports are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to crimp your fingernails on there and pull hard to take them out. There's a little button next to them to release, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and it's one of those things you can do without looking eventually.

There's only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere to plug my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for this.

I can't wait to see more expansion ports produced. There's an Ethernet expansion card which quickly went out of stock basically the day it was announced, but was eventually restocked.

I would like to see a proper SD-card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can have a MicroSD to SD card adapter, but I have never heard of the reverse). Someone actually found a SD card reader that fits and then someone else managed to cram it in a 3D printed case, which is kind of amazing.

Still, I really like that idea that I can carry all those little adapters in a pouch when I travel and can basically do anything I want. It does mean I need to shuffle through them to find the right one which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one. But that quickly gets undone and instead I have a pouch full of expansion cards.

Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't).

The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests for details.

USB-C charging

One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it in my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least.

(As I wrote this, I had the laptop plugged in the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.)

I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!)

This is awesome. And it works from any of the expansion ports, of course. There's a little LED next to the expansion ports as well, which indicates the charge status:

  • red/amber: charging
  • white: charged
  • off: unplugged

I couldn't find documentation about this, but the forum answered.

This is something of a recurring theme with the Framework: while it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner's manual" that shows you the different parts of the laptop and what they do. Again, something the MNT reform did well.

Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed.

Finally, I wondered what happens when you plug in multiple power sources and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources, but that seems like a bit much to ask.

Multimedia and other devices

Those things also work:

  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below

There's also a light sensor, but it conflicts with the keyboard brightness controls (see above).

There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests

The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere.

It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often pick up lots of noise from the board.

With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking.

On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40dB range, close to the Framework.

Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises.

In other words, the hiss of the sound card built in the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop.
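If you want to reproduce those measurements, the procedure is roughly this; a sketch, assuming the default ALSA capture device and sox installed:

arecord -d 10 -f cd /tmp/silence.wav   # record 10 seconds of "silence"
sox /tmp/silence.wav -n stats          # print RMS/peak levels in dB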

See also my audio page.

Performance tests

Compiling Linux 5.19.11

On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:

5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k 10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps

This was using 16 watts of power, with full screen brightness.

With all 16 cores (make -j16), it takes less than 25 minutes:

19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k 8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps

I had to plug in the normal power supply after a few minutes because the battery would actually run out using my desk's power grommet (34 watts).

During compilation, fans were spinning really hard, quite noisy, but not painfully so.

The laptop was sucking 55 watts of power, steadily:

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  87.9   0.0  10.7   1.4   0.1 17.8 6583.6 5054.3 233.0 223.9 233.1  55.96
 GeoMean  87.9   0.0  10.6   1.2   0.0 17.6 6427.8 5048.1 227.6 218.7 227.7  55.96
  StdDev   1.4   0.0   1.2   0.6   0.2  3.0 1436.8  255.5  50.0  47.5  49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  85.0   0.0   7.8   0.5   0.0 13.0 3594.0 4638.0 117.0 111.0 120.0  55.52
 Maximum  90.8   0.0  12.9   3.5   0.8 38.0 10174.0 5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU: 55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.

memtest86+

I ran Memtest86+ v6.00b3. It shows something like this:

Memtest86+ v6.00b3           | 12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz    78/78°C | Pass   2% #
L1 Cache:   48KB    414 GB/s | Test  46% ##################
L2 Cache: 1.25MB    118 GB/s | Test #3 [Moving inversions, 1s & 0s]
L3 Cache:   12MB     43 GB/s | Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  : 15.7GB   14.9 GB/s | Pattern:
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR) | Time: 0:27:23  Status: Pass  \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51 | Pass: 1       Errors: 0
--------------------------------------------------------------------------------
Memory SPD Information
----------------------
- Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)

                         Framework FRANMACP04
<ESC> Exit  <F1> Configuration  <Space> Scroll Lock            6.00.unknown.x64

So about 30 minutes for a full 16GB memory test.

Software setup

Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work.

I don't understand why I am so naïve sometimes. It's mind-boggling.

Obviously, it didn't happen that way at all, and I spent the best of the three following days tinkering with the laptop.

Secure boot and EFI

First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS.

At first, I thought it was a problem with my NVMe drive, because it's not listed in the compatible SSD drives from upstream. But I figured out how to enter BIOS (press F2 manically, of course), which showed the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI.

So from there, I disabled secure boot, and booted a grml image to try to recover. And by "boot" I mean, I managed to get to the grml boot loader which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:

Unable to find medium containing a live file system

This, it turns out, was fixed in Debian lately, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix.

I did manage to boot the development version of the Debian installer, which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition.

At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted in the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations

Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion.

I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive.

Oops.

It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how I would recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.

Bootstrapping Puppet

Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:

/etc/ssh
/var/lib/puppet

So that I would keep the previous machine's identity. That way I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install and then I had a good base setup, although not exactly as it was before. 1700 packages were installed manually on angela before the reinstall, and not in Puppet.

I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.

LVFS support

It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:

apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update

Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:

fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf
fwupdmgr update

Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks

The Framework laptop's resolution (2256px X 1504px) is high enough that fonts end up pretty small by default, so welcome to the marvelous world of "scaling".

The Debian wiki page has a few tricks for this.

Console

This will make the console and grub fonts more readable:

cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg

Adding this to your .Xresources will make everything look much bigger:

! 1.5*96
Xft.dpi: 144

Apparently, some of this can also help:

! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter: lcddefault
Xft.hintstyle: hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb

In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me.

The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness".

TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration

The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggers a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive.

The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update.

Note that the following keybindings are active in the BIOS POST check:

Key     Meaning
F2      Enter BIOS setup menu
F12     Enter BIOS boot manager
Delete  Enter BIOS setup menu

WiFi compatibility issues

I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon) so the WiFi card didn't work out of the box. But even after copying the firmware through a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.)

Thankfully, I had a little USB-C dongle with a RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network.

Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the command line" than anything else. NetworkManager worked fine and got WiFi working correctly.

Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.

Battery life

I was getting about 7 hours of battery on the Purism Librem 13v4, and that's after a year or two of battery wear. Now, I still have about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!) but really, it's not that good for a new-generation laptop. The 12th generation Intel chipset probably improved things compared to the previous Framework laptop, but I don't have an 11th gen Framework to compare with.

(Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.)

The battery life doesn't seem up to par with, say, a Dell XPS 13 or a ThinkPad X1, and of course not the Apple M1, where I would expect 10+ hours of battery life out of the box.

That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of battery life estimated (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected.

Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6  39.1 128.0   7.57
 GeoMean   1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6  27.9 113.3   7.54
  StdDev   1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3  82.1  44.0  80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2  82.0  17.0  82.0   6.36
 Maximum   4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71

Expansion cards matter a lot in the battery life (see below for a thorough discussion), my normal setup is 2xUSB-C and 1xUSB-A (yes, with an empty slot, and yes, to save power).

Interestingly, playing a (720p) video in a window takes up more power (10.5W) than in full screen (9.5W), but I blame that on my desktop setup (i3 + compton)... Not sure if mpv hits the VA-API, maybe not in windowed mode. Similar results with 1080p, interestingly, except the window struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me.

Fooling around the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around Discourse, so I suspect the website is a pretty big battery drain, actually. I see about 10-12 W, while I was probably at half that (6-8W) just playing music with mpv in the background...

In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross.

(When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management

Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would take 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare those, because voltage is not necessarily the same between the two platforms.
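One way around the unit problem is to compute watts directly, since the kernel exposes both voltage and current in sysfs; a sketch (the BAT1 path is from this machine; adjust to yours):

# voltage_now is in µV and current_now in µA, so the product over 10^12 is watts
paste /sys/class/power_supply/BAT1/voltage_now \
      /sys/class/power_supply/BAT1/current_now \
    | awk '{printf "%.2f W\n", $1 * $2 / 1e12}'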

  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1

TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it. Try the following (a combined sketch follows the list):

  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)
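Put together, the one-time setup could look like this; a sketch, assuming Debian with GRUB (the power.cfg drop-in file name is mine):

# persistent power tweaks: tlp for runtime, kernel parameters for suspend
sudo apt install tlp powertop
sudo systemctl enable --now tlp
# one-shot tuning (re-run after each boot, or wrap in a systemd unit)
sudo powertop --auto-tune
# kernel command line additions for NVMe and deep sleep
echo 'GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX nvme.noacpi=1 mem_sleep_default=deep"' \
    | sudo tee /etc/default/grub.d/power.cfg
sudo update-grub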
Background on CPU architecture

There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new.

The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16, but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved the power saving, at the cost of some performance penalty. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers conference, but they stopped short of talking more about the specific problems Linux is facing in Alder Lake:

Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.

All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores. And Linux has had the scheduler changes to make use of this (but maybe is still having trouble). In any case, this might not be the source of the power management problems on my laptop, quite the opposite.

Also note that the firmware updates for various chipsets are supposed to improve things eventually.

On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage

I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat).

Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there are only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6, but the next column over does show many CPUs in C10 states.)
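To see which idle states the kernel itself advertises (as opposed to powertop's interpretation), you can poke at cpuidle in sysfs; a minimal sketch:

# list the cpuidle states exposed for CPU 0, with their descriptions
for d in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
    printf '%s: %s\n' "$(cat "$d/name")" "$(cat "$d/desc")"
done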

As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop.

Expansion cards actually do take up power, and so does the screen, obviously. The fully-lit screen takes a solid 2-3W of power compared to the fully dimmed screen. When removing all expansion cards and making the laptop idle, I can spin it down to 4 watts of power usage at the moment, and an amazing 2 watts with the screen turned off.

Caveats

Abusive (10W+) power usage that I initially found could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running everything: it could very well be that some things (I'm looking at you, Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W per Electron app, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there.

Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests

Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:

Device        Minimum  Average  Max    Stdev   Note
Screen, 100%  2.4W     2.6W     2.8W   N/A
Screen, 1%    30mW     140mW    250mW  N/A
Backlight 1   290mW    ?        ?      ?       fairly small, all things considered
Backlight 2   890mW    1.2W     3W?    460mW?  geometric progression
Backlight 3   1.69W    1.5W     1.8W?  390mW?  significant power use
Radios        100mW    250mW    N/A    N/A
USB-C         N/A      N/A      N/A    N/A     negligible power drain
USB-A         10mW     10mW     ?      10mW    almost negligible
DisplayPort   300mW    390mW    600mW  N/A     not passive
HDMI          380mW    440mW    1W?    20mW    not passive
1TB SSD       1.65W    1.79W    2W     12mW    significant, probably higher when busy
MicroSD       1.6W     3W       6W     1.93W   highest power usage, possibly even higher when busy
Ethernet      1.69W    1.64W    1.76W  N/A     comparable to the SSD card

So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power even when idle. The USB-A cards are the least concern, sucking out 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI cards do take a few hundred milliwatts. It looks like USB-A connectors have this fundamental flaw that they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:

It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.

Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.)

The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts of power, both just sitting there doing nothing. This confirms claims of 1.4W for the SSD (but not 5W) power usage found elsewhere. The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning.
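The usual way of disabling such a device "in software" is to unbind it from its USB driver; a sketch, where the 2-1 port number is purely hypothetical (find the real one with lsusb -t):

echo '2-1' | sudo tee /sys/bus/usb/drivers/usb/unbind   # power the card down
echo '2-1' | sudo tee /sys/bus/usb/drivers/usb/bind     # bring it back later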

The Framework team has a beta update for the DisplayPort adapter but currently only for Windows (LVFS technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely at least some of those power management issues will eventually be fixed.

Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W.

The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland

Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR: is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS

I redid the idle tests after the 3.06 beta BIOS update and ended up with these results:

Device            Minimum  Average  Max    Stdev  Note
Baseline          1.96W    2.01W    2.11W  30mW   1 USB-C, screen off, backlight off, no radios
2 USB-C           1.95W    2.16W    3.69W  430mW  USB-C confirmed as mostly passive...
3 USB-C           1.95W    2.16W    3.69W  430mW  ... although with extra stdev
1TB SSD           3.72W    3.85W    4.62W  200mW  unchanged from before upgrade
1 USB-A           1.97W    2.18W    4.02W  530mW  unchanged
2 USB-A           1.97W    2.00W    2.08W  30mW   unchanged
3 USB-A           1.94W    1.99W    2.03W  20mW   unchanged
MicroSD w/o card  3.54W    3.58W    3.71W  40mW   significant improvement! 2-3W power saving!
MicroSD w/ card   3.53W    3.72W    5.23W  370mW  new measurement! increased deviation
DisplayPort       2.28W    2.31W    2.37W  20mW   unchanged
1 HDMI            2.43W    2.69W    4.53W  460mW  unchanged
2 HDMI            2.53W    2.59W    2.67W  30mW   unchanged
External USB      3.85W    3.89W    3.94W  30mW   new result
Ethernet          3.60W    3.70W    4.91W  230mW  unchanged

Note that the table summary is different than the previous table: here we show the absolute numbers while the previous table was doing a confusing attempt at showing relative (to the baseline) numbers.

Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats except for the MicroSD card which has significantly improved.

The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage

I wrote some quick hack to evaluate how much power is used during sleep. Apparently, this is one of the areas that should have improved since the first Framework model, let's find out.

My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:

sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now = 6045 [mAh]

... to this:

sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now = 6037 [mAh]

That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad!

In comparison, here is my really old x220, before:

sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now = 5070 [mWh]

... after:

sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now = 4980 [mWh]

... which is 90mWh in 10 minutes, or a whopping 540mW (note that the x220 reports energy in mWh, not charge), which was possibly okay when this battery was new (62,000mWh, so about 100 hours, or about 5 days), but this battery is almost dead and has only 5210mWh when full, so only 10 hours standby.

And here is the Framework performing a similar test, before:

sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full = 3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now = 2861 [mAh]

... after:

sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now = 2812 [mAh]

... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA, much more than the Purism (the X220 figures above are in milliwatts, so not directly comparable). At this rate, the battery would last only 12 hours on standby!! That is pretty bad.

Note that this was done with the following expansion cards:

  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN

Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA).

This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA).

Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:

sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now = 2544 [mAh]

... after:

sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now = 2542 [mAh]

... which is 2mAh in 74 seconds, or 97mA, which brings us to a more reasonable 36 hours, or a day and a half. It's still above the x220 power usage, and more than an order of magnitude more than the Purism laptop. It's also far from the 0.4% per hour promised by upstream, which would be 14mA for the 3500mAh battery.

It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see by the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life.

You can confirm the current sleep mode with:

# cat /sys/power/mem_sleep s2idle [deep]

In the above, deep is selected. You can change it on the fly with:

printf s2idle > /sys/power/mem_sleep

Here's another test:

sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now = 1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now = 1613 [mAh]

... better! 6 mAh in about 6 minutes, works out to 63.5mA, so more than two days standby.

A longer test:

oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now = 3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now = 3147 [mAh]

That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days.

I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, expansion card power tests show that they do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests — running them with whatever card I had plugged in without paying attention — so it's likely the cause of the high power usage and fluctuations.

It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post.
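One possible implementation would be a systemd system-sleep hook; a completely untested sketch, with a hypothetical device list:

cat > /usr/lib/systemd/system-sleep/usb-unbind <<'EOF'
#!/bin/sh
# unbind power-hungry USB devices before suspend, rebind them on resume
# (the 2-1 port number is a placeholder, see lsusb -t)
case "$1" in
    pre)  echo '2-1' > /sys/bus/usb/drivers/usb/unbind ;;
    post) echo '2-1' > /sys/bus/usb/drivers/usb/bind ;;
esac
EOF
chmod +x /usr/lib/systemd/system-sleep/usb-unbind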

In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:

oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now = 3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now = 3145 [mAh]

Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means a 10-day suspend time. This is actually pretty good, and far beyond what I was expecting when starting down this journey.

So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below.

There is also some hope that the Chromebook edition — specifically designed with a specification of 14 days standby time — could bring some firmware improvements back down to the normal line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.

TODO: one final solution here is suspend-then-hibernate, which Windows uses for this (a sketch follows the TODO list below)

TODO: consider implementing the S0ix sleep states , see also troubleshooting

TODO: consider https://github.com/intel/pm-graph
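For the suspend-then-hibernate idea above, the systemd side is at least easy to experiment with; an untested sketch (it requires a swap area large enough to hibernate into):

# suspend on lid close, then hibernate after two hours asleep
mkdir -p /etc/systemd/logind.conf.d
cat > /etc/systemd/logind.conf.d/hibernate.conf <<EOF
[Login]
HandleLidSwitch=suspend-then-hibernate
EOF
printf '[Sleep]\nHibernateDelaySec=7200\n' >> /etc/systemd/sleep.conf
systemctl restart systemd-logind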

Standby expansion cards test results

This table is a summary of the more extensive standby-tests I have performed:

Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     sleep=deep nvme.noacpi=1
s2idle       0.29W    18.9mA    ~7    sleep=s2idle nvme.noacpi=1
normal nvme  0.31W    20mA      ~7    sleep=s2idle without nvme.noacpi=1
1 USB-C      0.23W    15mA      ~10
2 USB-C      0.23W    14.9mA          same as above
1 USB-A      0.75W    48.7mA    3     +500mW (!!) for the first USB-A card!
2 USB-A      1.11W    72mA      2     +360mW
3 USB-A      1.48W    96mA      <2    +370mW
1TB SSD      0.49W    32mA      <5    +260mW
MicroSD      0.52W    34mA      ~4    +290mW
DisplayPort  0.85W    55mA      <3    +620mW (!!)
1 HDMI       0.58W    38mA      ~4    +250mW
2 HDMI       0.65W    42mA      <4    +70mW

Conclusions:

  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required

  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)

  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)

  • DisplayPort takes a surprisingly large amount of power (620mW), almost double its average runtime usage (390mW)

  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)

  • and oddly, a second card adds less power usage (70mW?!) than the first, maybe a circuit is used by both?

A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS

Framework recently (2022-11-07) announced that they will publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above result, improving both standby and runtime power usage.

The update came out in December 2022 and I redid my analysis with the following results:

Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     no cards, same as before upgrade
1 USB-C      0.25W    16mA      9     same as before
2 USB-C      0.25W    16mA      9     same
1 USB-A      0.80W    62mA      3     +550mW!! worse than before
2 USB-A      1.12W    73mA      <2    +320mW, on top of the above, bad!
Ethernet     0.62W    40mA      3-4   new result, decent
1TB SSD      0.52W    34mA      4     a bit worse than before (+2mA)
MicroSD      0.51W    22mA      4     same
DisplayPort  0.52W    34mA      4+    upgrade improved by 300mW
1 HDMI       ?        38mA      ?     same
2 HDMI       ?        45mA      ?     a bit worse than before (+3mA)
Normal       1.08W    70mA      ~2    Ethernet, 2 USB-C, USB-A

Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports which is a big problem for my use case. There is a notable improvement on the DisplayPort power consumption which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either.

Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection

The BIOS has an option to limit charge to 80% to mitigate battery wear. There's a way to control the embedded controller from runtime with fw-ectool, partly documented here. The command would be:

sudo ectool fwchargelimit 80

I looked at building this myself but failed to run it. I opened an RFP in Debian so that we can ship this in Debian, and also documented my work there.

Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:

root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------

+++ Battery Care
Plugin: generic
Supported features: none available

+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging

/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)

Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]

One thing that is still missing is the charge threshold data (the (not available) above). There's been some work to make that accessible in August; stay tuned? This would also make it possible to implement hysteresis support.

Ethernet expansion card

The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop"....

The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor when compared to the thin Framework laptop.

I had a serious issue when first trying it: the link LEDs just wouldn't come up. I made a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up while the laptop is running the BIOS POST check and even when the Linux kernel boots.

I first thought that the problem was likely related to the powertop service which I run at boot time to tweak some power saving settings.

It seems like this:

echo 'on' > '/sys/bus/usb/devices/4-2/power/control'

... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:

echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'
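To avoid retyping that workaround after every boot, one could automate it with a udev rule keyed on the Realtek chip; an untested sketch (the tlp and kernel-quirk fix described below is what I actually settled on):

# /etc/udev/rules.d/50-framework-ethernet.rules
# keep the RTL8156-based expansion card out of USB autosuspend
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0bda", ATTR{idProduct}=="8156", ATTR{power/control}="on"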

Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker that shows how the USB_AUTOSUSPEND setting enables the power saving even if the driver doesn't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:

By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.

So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:

USB_AUTOSUSPEND=0

... but that disables auto-suspend on all USB devices, which may hurt other power usage performance. I have found that a combination of:

USB_AUTOSUSPEND=1 USB_DENYLIST="0bda:8156"

and this on the kernel commandline:

usbcore.quirks=0bda:8156:k

... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:

# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirks is a workaround for the ethernet card suspend bug:
# https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"
# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768

Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is strangely satisfying. But running against my Turris Omnia router, I could pretty much max a gigabit fairly easily:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver
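(For reference, that output is from iperf3; the test is just something like the following, with the server side running on the other machine. The address is a placeholder:)

iperf3 -s               # on the other end (the router, here)
iperf3 -c 192.168.1.1   # on the laptop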

The card doesn't require any proprietary firmware blobs which is surprising. Other than the power saving issues, it just works.

In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power idle, without link, in the above "quirky" configuration where the card is functional but without autosuspend.

Proprietary firmware blobs

The framework does need proprietary firmware to operate. Specifically:

  • the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree but currently missing from the package

Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the accessible firmware results in noticeable — I estimate 200-500ms — lag between the keyboard (not the mouse!) and the display. Symptoms also include tearing and shearing of windows, it's pretty nasty.

One workaround is to delete the two affected firmware files:

cd /lib/firmware && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u

You will get the following warning during build, which is good as it means the problematic firmware is disabled:

W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915

But then it also means that critical firmware isn't loaded, which means, among other things, a higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W in full blast.

The proper fix is to use some compositing manager. I ended up using compton with the following systemd unit:

[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela

[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure

[Install]
RequiredBy=graphical-session.target

compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (1-2W extra). I also tried compiz but it would just crash with:

anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0

When running from the base session, I would get this instead:

compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'

Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum.

Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.

Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get those errors in dmesg:

[ 19.534429] Intel(R) Wireless WiFi driver for Linux
[ 19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[ 19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[ 19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[ 19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[ 19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[ 19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[ 19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[ 19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[ 19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[ 19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[ 19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[ 19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[ 19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[ 19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[ 19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[ 19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[ 19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[ 19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[ 19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[ 19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[ 19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[ 19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[ 19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[ 19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[ 19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm

Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what difference those files make, as the WiFi seems to work well without them.

I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them.

There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could improve battery usage as well, in theory. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
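If you want to check what your system is currently using, here's a small sketch that dumps those knobs from sysfs. It assumes the modules are loaded and expose their parameters under /sys/module/<name>/parameters/, as kernel modules conventionally do:

#!/usr/bin/python3
# Dump the current iwlwifi/iwlmvm driver parameters from sysfs.
from pathlib import Path

for module in ("iwlwifi", "iwlmvm"):
    for param in sorted(Path("/sys/module", module, "parameters").glob("*")):
        try:
            value = param.read_text().strip()
        except OSError:
            value = "(unreadable)"
        print(f"{module}.{param.name} = {value}")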

Graphics acceleration

After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears.

Running in a window gives me odd results, as the gears basically grind to a halt:

Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds = 5.022 FPS

Ouch. 5FPS!

But interestingly, once the window is in full screen, it does hit the monitor refresh rate:

300 frames in 5.0 seconds = 60.000 FPS

I'm not really a gamer and I'm not normally using any of that fancy graphics acceleration stuff (except maybe my browser does?).

I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when doing those simulations. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package).

Following this post, I also made sure to have those settings in my about:config in Firefox or, equivalently, in user.js:

user_pref("media.ffmpeg.vaapi.enabled", true);

Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird.

The guide also mentions installing the intel-media-driver package, but I could not find that in Debian.

The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.

Chromium / Signal desktop bugs

It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:

export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"

It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!).

Signal Desktop is a similar problem, and doesn't reuse those flags (because of course it doesn't). Instead I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this instead:

exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"

This was mostly done in this Puppet commit.

I haven't figured out the root of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to Wayland.

Graphics card hangs

I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming.

The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29).

It could also be that bug because the error message I get is actually:

Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915])
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled

It's a solid 30-second graphical hang; the keyboard and everything else seem to keep working. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch to add an extra kernel parameter that supposedly fixes that too. There's all sorts of other workarounds in there, for example this:

echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0" | sudo tee /etc/modprobe.d/i915.conf

from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches

I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches.

The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

Shipping details

I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected because the August batch was late.

People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand.

As of early 2023, their website announces that laptops ship "within 5 days". I have myself ordered a few expansion cards in November 2022, and they shipped on the same day, arriving 3-4 days later.

The supply pipeline

There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied by an email notification:

  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)

This comes from the crowdsourced spreadsheet, which should be updated when the status changes here.

I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my door step on September 27th, about 33 days after ordering.

It seems current orders are not processed in "batches", but in real time, see this blog post for details on shipping.

Shipping trivia

I don't know about the others, but my laptop shipped through no less than four different airplane flights. Here are the hops it took:

I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough, but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of FedEx shipping hubs.

There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to enter on an endless, two week long road trip across the continental US.

Other resources
Categories: FLOSS Project Planets

digiKam 7.10.0 is released

Planet KDE - Sun, 2023-03-12 20:00
Dear digiKam fans and users,

After three months of active maintenance and bug triage, the digiKam team is proud to present version 7.10.0 of its open source digital photo manager. See below the list of the most important features coming with this release.

Bundles Internal Component Updates

As with previous releases, we take care of upgrading the internal components of the bundles. The Microsoft Windows installer, Apple macOS package, and Linux AppImage binaries now host:
Categories: FLOSS Project Planets

Mike Herchel's Blog: Creating Your First Single Directory Component within Drupal

Planet Drupal - Sun, 2023-03-12 19:09
Categories: FLOSS Project Planets

The Python Coding Blog: Anatomy of a 2D Game using Python’s turtle and Object-Oriented Programming

Planet Python - Sun, 2023-03-12 14:21

When I was young, we played arcade games in their original form on tall rectangular coin-operated machines with buttons and joysticks. These games had a resurgence as smartphone apps in recent years, useful to keep one occupied during a long commute. In this article, I’ll resurrect one as a 2D Python game and use it to show the “anatomy” of such a game.

You can follow this step-by-step tutorial even if you're not familiar with all the topics. In particular, the tutorial relies on Python's turtle module and uses the object-oriented programming (OOP) paradigm. You don't need expertise in either topic, as I'll explain the key concepts you'll need. And if you're already familiar with OOP, you can easily skip the clearly-marked OOP sections.

This is the game you’ll write:

The rules of the game are simple. Click on a ball to bat it up. How long can you last before you lose ten balls?

In addition to getting acquainted with OOP principles, this tutorial will show you how such games are built step-by-step.

Note about content in this article: If you’re already familiar with the key concepts in object-oriented programming, you can skip blocks like this one. If you’re new or relatively new to the topic, I recommend you read these sections as well.

The Anatomy of a 2D Python Game | Summary

I’ll break this game down into eight key steps:

  1. Create a class named Ball and set up what should happen when you create a ball
  2. Make the ball move forward
  3. Add gravity to pull the ball downwards
  4. Add the ability to bat the ball upwards
  5. Create more balls, with each ball created after a certain time interval
  6. Control the speed of the game by setting a frame rate
  7. Add a timer and an end to the game
  8. Add finishing touches to the game

Are you ready to start coding?

A visual summary of the anatomy of a 2D Python game

Here’s another summary of the steps you’ll take to create this 2D Python game. This is a visual summary:

1. Create a Class Named Ball

You’ll work on two separate scripts to create this game:

  • juggling_ball.py
  • juggling_balls_game.py

You’ll use the first one, juggling_ball.py, to create a class called Ball which will act as a template for all the balls you’ll use in the game. In the second script, juggling_balls_game.py, you’ll use this class to write the game.

It’s helpful to separate the class definitions into a standalone module to provide a clear structure and to make it easier to reuse the class in other scripts.

Let’s start working on the class in juggling_ball.py. If you want to read about object-oriented programming in Python in more detail before you dive into this project, you can read Chapter 7 | Object-Oriented Programming. However, I’ll also summarise the key points in this article in separate blocks. And remember that if you’re already familiar with the key concepts in OOP, you can skip these additional note blocks, like the one below.

A class is like a template for creating many objects using that template. When you define a class, such as the class Ball in this project, you’re not creating a ball. Instead, you’re defining the instructions needed to create a ball.

The __init__() special method is normally the first method you define in a class and includes all the steps you want to execute each time you create an object using this class.

Create the class

Let’s start this 2D Python game. You can define the class and its __init__() special method:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )

The class Ball inherits from turtle.Turtle which means that an object which is a Ball is also a Turtle. You also call the initialisation method for the Turtle class when you call super().__init__().

The __init__() method creates two data attributes, .width and .height. These attributes set the size in pixels of the area in which the program will create the ball.

self is the name you use as a placeholder to refer to the object that you’ll create. Recall that when you define a class, you’re not creating any objects. At this definition stage, you’re creating the template. Therefore, you need a placeholder name to refer to the objects you’ll create in the future. The convention is to use self for this placeholder name.

You create two new data attributes when you write:
self.width = width
self.height = height

An attribute belongs to an object in a class. There are two types of attributes:
– Data attributes
– Methods

You can think of data attributes as variables attached to an object. Therefore, an object “carries” its data with it. The data attributes you create are self.width and self.height.

The other type of attribute is a method. You’ll read more about methods shortly.

The rest of the __init__() method calls Turtle methods to set the initial state of each ball. Here’s a summary of the four turtle.Turtle methods used:

  • .shape() changes the shape of the turtle
  • .color() sets the colour of the turtle (and anything it draws)
  • .penup() makes sure the turtle will not draw a line when it moves
  • .setposition() places the turtle at a specific xy-coordinate on the screen. The centre is (0, 0).

You set the shape of the turtle to be a circle (it’s actually a disc, but the name of the shape in turtle is “circle”). You use a random value between 0 and 1 for each of the red, green, and blue colour values when you call self.color(). And you set the turtle’s position to a random integer within the bounds of the region defined by the arguments of the __init__() method. You use the floor division operator // to ensure you get an integer value when dividing the width and height by 2.
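As a quick illustration of why floor division matters here, plain division always returns a float, which random.randint() won't accept:

print(601 / 2)    # 300.5 - a float, which random.randint() rejects
print(601 // 2)   # 300 - floor division returns an integer
print(-601 // 2)  # -301 - floors towards negative infinity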

It’s a good idea to define the .__repr__() special method for a class. As you won’t use this explicitly in this project, I won’t add it to the code in the main article. However, it’s included in the final code in the appendix.

Test the class

You can test the class you just created in the second script you’ll work on as you progress through this project, juggling_balls_game.py:

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)

Ball(WIDTH, HEIGHT)
Ball(WIDTH, HEIGHT)

turtle.done()

You create a screen and set its size to 600 x 600 pixels. Next, you create two instances of the Ball class. Even though you define Ball in another script, you import it into the scope of your game program using from juggling_ball import Ball.

Here’s the output from this script:

You call Ball() twice in the script. Therefore, you create two instances of the class Ball. Each one has a random colour and moves to a random position on the screen as soon as it’s created.

An instance of a class is each object you create using that class. The class definition is the template for creating objects. You only have one class definition, but you can have several instances of that class.

Data attributes, which we discussed earlier, are sometimes also referred to as instance variables since they are variables attached to an instance. You’ll also read about instance methods soon.

You may have noticed that when you create the two balls, you can see them moving from the centre of the screen where they're created to their 'starting' position. You want more control over when to display the objects on the screen.

To achieve this, you can set window.tracer(0) as soon as you create the screen and then use window.update() when you want to display the turtles on the screen. Any changes that happen to the position and orientation of Turtle objects (and Ball objects, too) will occur “behind the scenes” until you call window.update():

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

Ball(WIDTH, HEIGHT)
Ball(WIDTH, HEIGHT)

window.update()

turtle.done()

When you run the script now, the balls will appear instantly in the correct starting positions. The final call to turtle.done() runs the main loop of a turtle graphics program and is needed at the end of the script to keep the program running once the final line is reached.

You’re now ready to create a method in the Ball class to make the ball move forward.

2. Make Ball Move Forward

Let’s shift back to juggling_ball.py where you define the class. You’ll start by making the ball move upwards at a constant speed.

You can set the maximum velocity that a ball can have as a class attribute .max_velocity and then create a data attribute .velocity which will be different for each instance. The value of .velocity is a random number that’s limited by the maximum value defined at the class level:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

You also change the ball’s heading using the Turtle method .setheading() so that the object is pointing upwards.

A class attribute is defined for the class overall, not for each instance. This can be used when the same value is needed for every instance you’ll create of the class. You can access a class attribute like you access instance attributes. For example, you use self.max_velocity in the example above.

Next, you can create the method move() which moves the ball forward by the value stored in self.velocity.

A method is a function that’s part of a class. You’ll only consider instance methods in this project. You can think of an instance method as a function that’s attached to an instance of the class.

In Python, you access these using the dot notation. For example, if you have a list called numbers, you can call numbers.append() or numbers.pop(). Both .append() and .pop() are methods of the class list, and every instance of a list carries these methods with it.
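For instance, a quick sketch of how every list instance carries its methods with it:

numbers = [1, 2, 3]
numbers.append(4)     # .append() is an instance method of the list class
last = numbers.pop()  # .pop() removes and returns the last element
print(numbers, last)  # [1, 2, 3] 4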

The method .move() is an instance method, which means the object itself is passed to the method as its first argument. This is why the parameter self is within the parentheses when you define .move().

We’re not using real world units such as metres in this project. For now, you can think of this velocity as a value measured in pixels per frame instead of metres per second. The duration of each frame is equal to the time it takes for one iteration of the while loop to complete.

Therefore, if you call .move() once per frame in the game, you want the ball to move by the number of pixels in .velocity during that frame. Let’s add the .move() method:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)

You can test the new additions to the class Ball in juggling_balls_game.py:

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)

ball = Ball(WIDTH, HEIGHT)

while True:
    ball.move()
    window.update()

turtle.done()

You test your code using a single ball for now and you call ball.move() within a while True loop.

Recall that .move() is a method of the class Ball. In the class definition in juggling_ball.py, you use the placeholder name self to refer to any future instance of the class you create. However, now you’re creating an instance of the class Ball, and you name it ball. Therefore, you can access all the attributes using this instance’s name, for example by calling ball.move().

Here’s the output from this script so far:

Note: the speed at which your while loop will run depends on your setup and operating system. We’ll deal with this later in this project. However, if your ball is moving too fast, you can slow it down by dividing its velocity by 10 or 100, say, when you define self.velocity in the __init__() method. If you can’t see any ball when you run this script, the ball may be moving so quickly out of the screen that you need to slow it down significantly.

You can create a ball that moves upwards with a constant speed. The next step in this 2D Python game is to account for gravity to pull the ball down.

3. Add Gravity to Pull Ball Downwards

Last time I checked, when you toss a ball upwards, it slows down, reaches a point where it's stationary in the air for the briefest of moments, and then starts falling back towards the ground.

Let’s add the effect of gravity to the game. Gravity is a force that accelerates an object. This acceleration is given in metres per second squared (m/s^2). The acceleration due to the Earth’s gravity is 9.8m/s^2, and this is always a downward acceleration. Therefore, when an object is moving upwards, gravity will decelerate the object until its velocity is zero. Then it will start accelerating downwards.

In this project, we’re not using real-world units. So you can think of the acceleration due to gravity as a value in pixels per frame squared. “Frame” is a unit of time in this context, as it refers to the duration of the frame.

Acceleration is the rate of change of velocity. In the real world, we use the change of velocity per second. However, in the game you’re using change of velocity per frame. Later in this article, you’ll consider the time it takes for the while loop to run so you can set the frame time.

You can add a class attribute to define the acceleration due to gravity and define a method called .fall():

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

The method .fall() changes the value of the data attribute .velocity by subtracting the value stored in the class attribute .gravity from the current .velocity. You also call self.fall() within .move() so that each time the ball moves in a frame, it’s also pulled back by gravity. In this example, the value of .gravity is 0.07. Recall that you’re measuring velocity in pixels per frame. Therefore, gravity reduces the velocity by 0.07 pixels per frame in each frame.

You could merge the code in .fall() within .move(). However, creating separate methods gives you more flexibility in the future. Let’s assume you want a version of the game in which something that happens in the game suspends gravity. Having separate methods will make future modifications easier.

You could also choose not to call self.fall() within .move() and call the method directly within the game loop in juggling_balls_game.py.

You need to consider another issue now that the balls will be pulled down towards the ground. At some point, the balls will leave the screen from the bottom edge. Once this happens, you want the program to detect it so you can deal with it. You can create another method, .is_below_lower_edge():

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

The method .is_below_lower_edge() is an instance method which returns a Boolean value. The method hides the turtle object and returns True if the ball has dipped below the lower edge of the screen. Otherwise, it returns False.

Methods are functions. Therefore, like all functions, they always return a value. You’ll often find methods such as .move() and .fall() that don’t have an explicit return statement. These methods change the state of one or more of the object’s attributes. These methods still return a value. As with all functions that don’t have a return statement, these methods return None.

The purpose of .is_below_lower_edge() is different. Although it’s also changing the object’s state when it calls self.hideturtle(), its main purpose is to return True or False to indicate whether the ball has dropped below the lower edge of the screen.

It’s time to check whether gravity works. You don’t need to change juggling_balls_game.py since the call to ball.move() in the while loop remains the same. Here’s the result of running the script now:

You can see the ball is tossed up in the air. It slows down. Then it accelerates downwards until it leaves the screen. You can also temporarily add the following line in the while loop to check that .is_below_lower_edge() works:

print(ball.is_below_lower_edge())

Since this method returns True or False, you’ll see its decision printed out each time the loop iterates.

This 2D Python game is shaping up nicely. Next, you need to add the option to bat the ball upwards.

4. Add the Ability to Bat Ball Upwards

There are two steps you’ll need to code to bat a ball upwards:

  • Create a method in Ball to bat the ball upwards by adding positive (upward) velocity
  • Call this method each time the player clicks somewhere close to the ball

You can start adding another class attribute .bat_velocity_change and the method .bat_up() in Ball:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07
    bat_velocity_change = 8

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(-self.width // 2, self.width // 2),
            random.randint(-self.height // 2, self.height // 2),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

    def bat_up(self):
        self.velocity += self.bat_velocity_change

Each time you call the method .bat_up(), the velocity of the ball increases by the value in the class attribute .bat_velocity_change.

If the ball’s velocity is, say, -10, then “batting it up” will increase the velocity to -2 since .bat_velocity_change is 8. This means the ball will keep falling but at a lower speed.

Suppose the ball’s velocity is -3, then batting up changes this to 5 so the ball will start moving upwards. And if the ball is already moving upwards when you bat it, its upward speed will increase.

You can now shift your attention to juggling_balls_game.py. You need to make no further significant changes to the class Ball itself.

In the game, you need to call the ball’s .bat_up() method when the player clicks within a certain distance of the ball. You can use .onclick() from the turtle module for this:

# juggling_balls_game.py

import turtle

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
batting_tolerance = 40

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)


def click_ball(x, y):
    if ball.distance(x, y) < batting_tolerance:
        ball.bat_up()


window.onclick(click_ball)

ball = Ball(WIDTH, HEIGHT)

while True:
    ball.move()
    window.update()

turtle.done()

The variable batting_tolerance determines how close you need to be to the centre of the ball for the batting to take effect.

You define the function click_ball(x, y) with two parameters representing xy-coordinates. If the location of these coordinates is within the batting tolerance, then the ball’s .bat_up() method is called.

The call to window.onclick(click_ball) registers click_ball() so that the turtle module calls it with the xy-coordinates of each click on the screen.

When you run this script, you’ll get the following output. You can test the code by clicking close to the ball:

Now, juggling one ball is nice and easy. How about juggling many balls?

5. Create More Instances of Ball Using a Timer

You can make a few changes to juggling_balls_game.py to have balls appear every few seconds. To achieve this, you’ll need to:

  1. Create a list to store all the Ball instances
  2. Create a new Ball instance every few seconds and add it to the list
  3. Move each of the Ball instances in the while loop
  4. Check all the Ball instances within click_ball() to see if the player clicked the ball

Start by tackling the first two of these steps. You define a tuple named spawn_interval_range. The program will create a new Ball every few seconds and add it to the list balls. The code will choose a new time interval from the range set in spawn_interval_range.

Since all the Ball instances are stored in the list balls, you’ll need to:

  • Add a for loop in the game loop so that all Ball instances move in each frame
  • Add a for loop in click_ball() to check all Ball instances for proximity to the click coordinates

You can update the code with these changes:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)


def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

spawn_timer = time.time()
spawn_interval = 0
while True:
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()
    window.update()

turtle.done()

You start with a spawn_interval of 0 so that a Ball is created in the first iteration of the while loop. The instance of Ball is created and placed directly in the list balls. Each time a ball is created, the code generates a new spawn_interval from the range set at the top of the script.

You then loop through all the instances of Ball in balls to call their .move() method. You use a similar loop in click_ball().

This is where we can see the benefit of OOP and classes for this type of program. You create each ball using the same template: the class Ball. This means all Ball instances have the same data attributes and can access the same methods. However, the values of their data attributes can be different.

Each ball starts at a different location and has a different colour. The .velocity data attribute for each Ball has a different value, too. And therefore, each ball will move independently of the others.

By creating all balls from the same template, you ensure they’re all similar. But they’re not identical as they have different values for their data attributes.

You can run the script to see a basic version of the game which you can test:

The program creates balls every few seconds, and you can click any of them to bat them up. However, if you play this version long enough, you may notice some odd behaviour. In the next section, you’ll see why this happens and how to fix it.

6. Add a Frame Rate to Control the Game Speed

Have a look at this video of balls created by the current script. In particular, look at what happens to those balls that rise above the top edge of the screen. Does their movement look realistic?

Did you notice how, when the ball leaves the top of the screen, it immediately reappears at the same spot falling at high speed? It’s as though the ball is bouncing off the top of the screen. However, it’s not meant to do this. This is different from what you’d expect if the ball was still rising and falling normally while it was out of sight.

Let’s first see why this happens. Then you’ll fix this problem.

How long does one iteration of the while loop take?

Earlier, we discussed how the ball’s velocity is currently a value in pixels per frame. This means the ball will move by a certain number of pixels in each frame. You’re using the time it takes for each frame of the game as the unit of time.

However, there’s a problem with this logic. There are no guarantees that each frame takes the same amount of time. At the moment, the length of each frame is determined by how long it takes for the program to run one iteration of the while loop.

Look at the lines of code in the while loop. Which one do you think is the bottleneck that’s taking up most of the time?

Let’s try a small experiment. To make this a fair test, you’ll initialise the random number generator using a fixed seed so that the same “random” values are picked each time you run the program.

Let’s time 500 iterations of the while loop:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

random.seed(0)

WIDTH = 600
HEIGHT = 600
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)


def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

spawn_timer = time.time()
spawn_interval = 0

start = time.time()
count = 0
while count < 500:
    count += 1
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()
    window.update()

print(time.time() - start)

turtle.done()

You run the while loop until count reaches 500 and print out the number of seconds it takes. When I run this on my system, the output is:

8.363317966461182

It took just over eight seconds to run 500 iterations of the loop.

Now, you can comment out the line with window.update() at the end of the while loop. This will prevent the turtles from being displayed on the screen. All the remaining operations are still executed. The code now outputs the following:

0.004825115203857422

The same 500 iterations of the while loop now take around 5ms to run. That’s almost 2,000 times faster!

Updating the display on the screen is by far the slowest part of all the steps which occur in the while loop. Therefore, if there’s only one ball on the screen and it leaves the screen, the program no longer needs to display it. The loop speeds up significantly. This is why using pixels per frame as the ball’s velocity is flawed. The same pixels per frame value results in a much faster speed when the frame time is a lot shorter!

And even if this extreme change in frame time wasn’t an issue, for example, if you’re guaranteed to always have at least one ball on the screen at any one time, you still have no control over how fast the game runs or whether the frame rate will be constant as the game progresses.

How long do you want one iteration of the while loop to take?

To fix this problem and have more control over how long each frame takes, you can set your desired frame rate and then make sure each iteration of the while loop lasts for as long as needed. This is the fixed frame rate approach for running a game and works fine as long as an iteration in the while loop performs all its operations quicker than the frame time.

You can set the frame rate to 30 frames per second (fps), which is fine on most computers. However, you can choose a slower frame rate if needed. A frame rate of 30fps means that each frame takes 1/30 seconds—that’s 0.0333 seconds per frame.

Now that you’ve set the time each frame of the game should take, you can time how long the operations in the while loop take and add a delay at the end of the loop for the remaining time. This ensures each frame lasts 1/30 seconds.

You can implement the fixed frame rate in juggling_balls_game.py:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)


def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

spawn_timer = time.time()
spawn_interval = 0
while True:
    frame_start_time = time.time()

    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()
    window.update()

    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

turtle.done()

You start the frame timer at the beginning of the while loop. Once all frame operations are complete, you assign the amount of time taken to the variable loop_time. If this is less than the required frame time, you add a delay for the remaining time.

When you run the script now, the game will run more smoothly as you have a fixed frame rate. The velocity of the balls, measured in pixels per frame, is now a consistent value since the frame time is fixed.

You’ve completed the main aspects of the game. However, you need to have a challenge to turn this into a proper game. In the next section, you’ll add a timer and an aim in the game.

7. Add a Timer and an End to the Game

To turn this into a game, you need to:

  • Add a timer
  • Keep track of how many balls are lost

You can start by adding a timer and displaying it in the title bar:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)


def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
while True:
    frame_start_time = time.time()

    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()

    window.title(f"Time: {time.time() - game_timer:3.1f}")
    window.update()

    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

turtle.done()

You start the game timer just before the game loop, and you display the time elapsed in the title bar in each frame of the game. The format specifier :3.1f in the f-string sets the minimum width of the displayed float to three characters and shows one digit after the decimal point.
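Here are a couple of quick examples of that format specifier in action:

elapsed = 7.4567
print(f"Time: {elapsed:3.1f}")  # Time: 7.5

elapsed = 102.345
print(f"Time: {elapsed:3.1f}")  # Time: 102.3 (the width grows as needed)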

Next, you can set the limit of balls you can lose before it’s ‘Game Over’! You must check whether a ball has left the screen through the bottom edge. You’ll recall you wrote the method .is_below_lower_edge() for the class Ball. This method returns a Boolean. Therefore, you can use it directly within an if statement:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)
balls_lost_limit = 10

window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.tracer(0)


def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
balls_lost = 0
while balls_lost < balls_lost_limit:
    frame_start_time = time.time()

    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    for ball in balls:
        ball.move()
        if ball.is_below_lower_edge():
            window.update()
            balls.remove(ball)
            turtle.turtles().remove(ball)
            balls_lost += 1

    window.title(
        f"Time: {time.time() - game_timer:3.1f} | Balls lost: {balls_lost}"
    )
    window.update()

    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

turtle.done()

You check whether each instance of Ball has left the screen through the bottom edge as soon as you move the ball. If the ball fell through the bottom of the screen:

  • You update the screen so that the ball is no longer displayed. Otherwise, you may still see the top part of the ball at the bottom of the screen
  • You remove the ball from the list of all balls
  • You also need to remove the ball from the list of turtles kept by the turtle module to ensure the objects you no longer need don’t stay in memory
  • You add 1 to the number of balls you’ve lost

You also show the number of balls lost by adding this value to the title bar along with the amount of time elapsed. The while loop will stop iterating once you’ve lost ten balls, which is the value of balls_lost_limit.

You now have a functioning game. But you can add some finishing touches to make it better.

8. Complete the Game With Finishing Touches

When writing these types of games, the "finishing touches" can take as little or as much time as you want. You can always do more refining and further tweaks to make the game look and feel better.

You’ll only make a few finishing touches in this article, but you can refine your game further if you wish:

  • Change the background colour to dark grey
  • Add a final screen to show the time taken in the game
  • Ensure the balls are not created too close to the sides of the screen

You can change the background colour using .bgcolor(), which is one of the methods in the turtle module.

To add a final message on the screen, you can update the screen after the while loop and use .write(), which is a method of the Turtle class:

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

# Game parameters
WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)
balls_lost_limit = 10

# Setup window
window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.bgcolor(0.15, 0.15, 0.15)
window.tracer(0)


# Batting function
def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

# Game loop
game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
balls_lost = 0
while balls_lost < balls_lost_limit:
    frame_start_time = time.time()

    # Spawn new ball
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    # Move balls
    for ball in balls:
        ball.move()
        if ball.is_below_lower_edge():
            window.update()
            balls.remove(ball)
            turtle.turtles().remove(ball)
            balls_lost += 1

    # Update window title
    window.title(
        f"Time: {time.time() - game_timer:3.1f} | Balls lost: {balls_lost}"
    )

    # Refresh screen
    window.update()

    # Control frame rate
    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

# Game over
final_time = time.time() - game_timer

# Hide balls
for ball in balls:
    ball.hideturtle()
window.update()

# Show game over text
text = turtle.Turtle()
text.hideturtle()
text.color("white")
text.write(
    f"Game Over | Time taken: {final_time:2.1f}",
    align="center",
    font=("Courier", 20, "normal")
)

turtle.done()

After the while loop, when the game ends, you stop the game timer and clear all the remaining balls from the screen. You create a Turtle object to write the final message on the screen.

To add a border at the edge of the screen and make sure no balls are created too close to the edge, you can go back to juggling_ball.py and modify the region where a ball can be created:

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    max_velocity = 5
    gravity = 0.07
    bat_velocity_change = 8

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(
                (-self.width // 2) + 20, (self.width // 2) - 20
            ),
            random.randint(
                (-self.height // 2) + 20, (self.height // 2) - 20
            ),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def move(self):
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

    def bat_up(self):
        self.velocity += self.bat_velocity_change

And this completes the game! Unless you want to keep tweaking and adding more features. Here’s what the game looks like now:

Final Words

In this project, you created a 2D Python game. You started by creating a class called Ball, which inherits from turtle.Turtle. This means that you didn’t have to start from scratch to create Ball. Instead, you built on top of an existing class.

Here’s a summary of the key stages when writing this game:

  • Create a class named Ball and set up what should happen when you create a ball
  • Make the ball move forward
  • Add gravity to pull the ball downwards
  • Add the ability to bat the ball upwards
  • Create more balls, with each ball created after a certain time interval
  • Control the speed of the game by setting a frame rate
  • Add a timer and an end to the game
  • Add finishing touches to the game

You first create the class Ball and add data attributes and methods. When you create an instance of Ball, the object you create already has all the properties and functionality you need a ball to have.

Once you defined the class Ball, writing the game is simpler because each Ball instance carries everything it needs with it.

And now, can you beat your high score in the game?


Appendix

Here are the final versions of juggling_ball.py and juggling_balls_game.py:

juggling_ball.py

# juggling_ball.py

import random
import turtle


class Ball(turtle.Turtle):
    """Create balls to use for juggling"""

    max_velocity = 5
    gravity = 0.07
    bat_velocity_change = 8

    def __init__(self, width, height):
        super().__init__()
        self.width = width
        self.height = height
        self.shape("circle")
        self.color(
            random.random(),
            random.random(),
            random.random(),
        )
        self.penup()
        self.setposition(
            random.randint(
                (-self.width // 2) + 20, (self.width // 2) - 20
            ),
            random.randint(
                (-self.height // 2) + 20, (self.height // 2) - 20
            ),
        )
        self.setheading(90)
        self.velocity = random.randint(1, self.max_velocity)

    def __repr__(self):
        return f"{type(self).__name__}({self.width}, {self.height})"

    def move(self):
        """Move the ball forward by the amount required in a frame"""
        self.forward(self.velocity)
        self.fall()

    def fall(self):
        """Take the effect of gravity into account"""
        self.velocity -= self.gravity

    def is_below_lower_edge(self):
        """
        Check if the object fell through the bottom

        :return: True if object fell through the bottom.
            False if object is still above the bottom edge
        """
        if self.ycor() < -self.height // 2:
            self.hideturtle()
            return True
        return False

    def bat_up(self):
        """Bat the ball upwards by increasing its velocity"""
        self.velocity += self.bat_velocity_change

juggling_balls_game.py

# juggling_balls_game.py

import turtle
import time
import random

from juggling_ball import Ball

# Game parameters
WIDTH = 600
HEIGHT = 600
frame_rate = 30
batting_tolerance = 40
spawn_interval_range = (1, 5)
balls_lost_limit = 10

# Setup window
window = turtle.Screen()
window.setup(WIDTH, HEIGHT)
window.bgcolor(0.15, 0.15, 0.15)
window.tracer(0)


# Batting function
def click_ball(x, y):
    for ball in balls:
        if ball.distance(x, y) < batting_tolerance:
            ball.bat_up()


window.onclick(click_ball)

balls = []

# Game loop
game_timer = time.time()
spawn_timer = time.time()
spawn_interval = 0
balls_lost = 0
while balls_lost < balls_lost_limit:
    frame_start_time = time.time()

    # Spawn new ball
    if time.time() - spawn_timer > spawn_interval:
        balls.append(Ball(WIDTH, HEIGHT))
        spawn_interval = random.randint(*spawn_interval_range)
        spawn_timer = time.time()

    # Move balls
    for ball in balls:
        ball.move()
        if ball.is_below_lower_edge():
            window.update()
            balls.remove(ball)
            turtle.turtles().remove(ball)
            balls_lost += 1

    # Update window title
    window.title(
        f"Time: {time.time() - game_timer:3.1f} | Balls lost: {balls_lost}"
    )

    # Refresh screen
    window.update()

    # Control frame rate
    loop_time = time.time() - frame_start_time
    if loop_time < 1 / frame_rate:
        time.sleep(1 / frame_rate - loop_time)

# Game over
final_time = time.time() - game_timer

# Hide balls
for ball in balls:
    ball.hideturtle()
window.update()

# Show game over text
text = turtle.Turtle()
text.hideturtle()
text.color("white")
text.write(
    f"Game Over | Time taken: {final_time:2.1f}",
    align="center",
    font=("Courier", 20, "normal")
)

turtle.done()

The post Anatomy of a 2D Game using Python’s turtle and Object-Oriented Programming appeared first on The Python Coding Book.

Categories: FLOSS Project Planets

Brett Cannon: How virtual environments work

Planet Python - Sun, 2023-03-12 13:18

After needing to do a deep dive on the venv module (for reasons I will explain later in this blog post), I thought I would explain how virtual environments work to help demystify them.

Why do virtual environments exist?

Back in my day, there was no concept of environments in Python: all you had was your Python installation and the current directory. That meant when you installed something you either installed it globally into your Python interpreter or you just dumped it into the current directory. Both of these approaches had their drawbacks.

Installing globally meant you didn't have any isolation between your projects. This led to issues like version conflicts between what one of your projects might need compared to another one. It also meant you had no idea what requirements your project actually had since you had no way of actually testing your assumptions of what you needed. This was an issue if you needed to share your code with someone else as you didn't have a way to test that you weren't accidentally wrong about what your dependencies were.

Installing into your local directory didn't isolate your installs based on Python version or interpreter version (or even interpreter build type, back when you had to compile your extension modules differently for debug and release builds of Python). So while you could install everything into the same directory as your own code (which you did, and thus didn't use src directory layouts for simplicity), there wasn't a way to install different wheels for each Python interpreter you had on your machine so you could have multiple environments per project (I'm glossing over the fact that back in my day you also didn't have wheels or editable installs).

Enter virtual environments. Suddenly you had a way to install projects as a group that was tied to a specific Python interpreter. That got us the isolation/separation of only installing things you depend on (and being able to verify that through your testing), as well as having as many environments as you want to go with your projects (e.g. an environment for each version of Python that you support). So all sorts of wins! It's an important feature to have while doing development (which is why it can be rather frustrating for users when Python distributors leave venv out).

How do virtual environments work?

💡 Virtual environments are different than conda environments (in my opinion; some people disagree with me on this view). The key difference is that conda environments allow projects to install arbitrary shell scripts which are run when you activate a conda environment (which is done implicitly when you use conda run). This is why you are always expected to activate a conda environment, as some conda packages require that those shell scripts run. I won't be covering conda environments in this post.

Their structure

There are two parts to virtual environments: their directories and their configuration file. As a running example, I'm going to assume you ran the command py -m venv --without-pip .venv in some directory on a Unix-based OS (you can substitute py with whatever Python interpreter you want, including the Python Launcher for Unix).
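For reference, the stdlib exposes the same operation as a function, so the command above can also be run from Python itself:

import venv

# Equivalent of `py -m venv --without-pip .venv`.
venv.create(".venv", with_pip=False)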

❗ For simplicity I'm going to focus on the Unix case and not cover Windows in depth.

A virtual environment has 3 directories and potentially a symlink in the virtual environment directory (i.e. within .venv):

  1. bin (Scripts on Windows)
  2. include (Include on Windows)
  3. lib/pythonX.Y/site-packages where X.Y is the Python version (Lib/site-packages on Windows)
  4. lib64 symlinked to lib if you're using a 64-bit build of Python that's on a POSIX-based OS that's not macOS

The Python executable for the virtual environment ends up in bin as various symlinks back to the original interpreter (e.g. .venv/bin/python is a symlink; Windows has a different story). The site-packages directory is where projects get installed into the virtual environment (including pip if you choose to have it installed into the virtual environment). The include directory is for any header files that might get installed for some reason from a project. The lib64 symlink is for consistency on those Unix OSs where they have such directories.

The configuration file is pyvenv.cfg and it lives at the top of your virtual environment directory (e.g. .venv/pyvenv.cfg). As of Python 3.11, it contains a few entries:

  1. home (the directory where the executable used to create the virtual environment lives; os.path.dirname(sys._base_executable))
  2. include-system-site-packages (should the global site-packages be included, effectively turning off isolation?)
  3. version (the Python version down to the micro version, but not with the release level, e.g. 3.12.0, but not 3.12.0a6)
  4. executable (the executable used to create the virtual environment; os.path.realpath(sys._base_executable))
  5. command (the CLI command that could have recreated the virtual environment)

On my machine, the pyvenv.cfg contents are:

home = /home/linuxbrew/.linuxbrew/opt/python@3.11/bin
include-system-site-packages = false
version = 3.11.2
executable = /home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.2_1/bin/python3.11
command = /home/linuxbrew/.linuxbrew/opt/python@3.11/bin/python3.11 -m venv --without-pip /tmp/.venv

Example pyvenv.cfg

One interesting thing to note is that pyvenv.cfg is not a valid INI file according to the configparser module, due to lacking any sections. To read fields in the file you are expected to use line.partition("=") and to strip the resulting key and value.
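Here is a minimal sketch of a reader following that convention (the helper name is mine, not from the stdlib):

# Minimal pyvenv.cfg reader; configparser would reject the file
# because it has no [section] headers.
def read_pyvenv_cfg(path):
    config = {}
    with open(path, encoding="utf-8") as file:
        for line in file:
            key, sep, value = line.partition("=")
            if sep:  # ignore lines without a key = value pair
                config[key.strip()] = value.strip()
    return config

print(read_pyvenv_cfg(".venv/pyvenv.cfg")["home"])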

And that's all there is to a virtual environment! When you don't install pip they are extremely fast to create: 3 directories, a symlink, and a single file. And they are simple enough that you can probably create one manually.
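To make that concrete, here is a sketch of doing exactly that on Unix; it only covers the happy path (run from a non-virtual-environment interpreter), and real code should prefer the venv module:

# A minimal, manual virtual environment; illustration only.
import os
import sys

def make_venv(env_dir):
    major, minor, micro = sys.version_info[:3]
    os.makedirs(os.path.join(env_dir, "bin"), exist_ok=True)
    os.makedirs(os.path.join(env_dir, "include"), exist_ok=True)
    os.makedirs(
        os.path.join(env_dir, "lib", f"python{major}.{minor}", "site-packages"),
        exist_ok=True,
    )
    # The interpreter is just a symlink back to the original.
    os.symlink(sys.executable, os.path.join(env_dir, "bin", "python"))
    home = os.path.dirname(sys.executable)
    with open(os.path.join(env_dir, "pyvenv.cfg"), "w", encoding="utf-8") as f:
        f.write(f"home = {home}\n")
        f.write("include-system-site-packages = false\n")
        f.write(f"version = {major}.{minor}.{micro}\n")

make_venv(".venv")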

One point I would like to make is how virtual environments are designed to be disposable and not relocatable. Because of their simplicity, virtual environments are viewed as something you can throw away and recreate quickly (if it takes your OS a long time to create 3 directories, a symlink, and a file consisting of 292 bytes like on my machine, you have bigger problems to worry about than virtual environment relocation 😉). Unfortunately, people tend to conflate environment creation with package installation, when they are in fact two separate things. What projects you choose to install with which installer is actually separate from environment creation and probably influences your "getting started" time the most.

How Python uses a virtual environment

During start-up, Python automatically calls the site.main() function (unless you specify the -S flag). That function calls site.venv() which handles setting up your Python executable to use the virtual environment appropriately. Specifically, the site module does the following (a rough sketch in code follows the list):

  1. Looks for pyvenv.cfg in either the same or parent directory as the running executable (which is not resolved, so the location of the symlink is used)
  2. Looks for include-system-site-packages in pyvenv.cfg to decide whether the system site-packages ends up on sys.path
  3. Sets sys._home if home is found in pyvenv.cfg (sys._home is used by sysconfig)
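A sketch of that lookup (not CPython's actual code, which lives in site.py):

import os
import sys

def find_pyvenv_cfg():
    # The executable's symlink is deliberately not resolved.
    exe_dir = os.path.dirname(sys.executable)
    for directory in (exe_dir, os.path.dirname(exe_dir)):
        candidate = os.path.join(directory, "pyvenv.cfg")
        if os.path.isfile(candidate):
            return candidate
    return None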

That's it! It's a surprisingly simple mechanism for what it accomplishes.

One thing to notice here about how all of this works is that virtual environment activation is optional. Because the site module works off of the symlink to the executable in the virtual environment to resolve everything, activation is just a convenience. Honestly, all the activation scripts do is the following (see the sketch after this list):

  1. Put the bin/ (or Scripts/) directory at the front of your PATH environment variable
  2. Set VIRTUAL_ENV to the directory containing your virtual environment
  3. Tweak your shell prompt to let you know your PATH has been changed
  4. Register a deactivate shell function which undoes the other steps
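Because those steps only touch environment variables and the prompt, you can get the same effect from Python when launching a subprocess; a sketch (Unix paths; Windows would use Scripts/):

import os
import subprocess

def run_in_venv(venv_dir, command):
    # "Activate" by editing environment variables; no shell script involved.
    env = os.environ.copy()
    env["VIRTUAL_ENV"] = os.path.abspath(venv_dir)
    env["PATH"] = os.path.join(env["VIRTUAL_ENV"], "bin") + os.pathsep + env["PATH"]
    return subprocess.run(command, env=env)

run_in_venv(".venv", ["python", "-c", "import sys; print(sys.executable)"])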

In the end, whether you type python after activation or .venv/bin/python makes no difference to Python. Some tooling like the Python extension for VS Code or the Python Launcher for Unix may check for VIRTUAL_ENV to pick up on your intent to use a virtual environment, but it doesn't influence Python itself.

Introducing microvenv

In the Python extension for VS Code, we have an issue where Python beginners end up on Debian or a Debian-based distro like Ubuntu and want to create a virtual environment. Due to Debian removing venv from the default Python install and beginners not realizing there was more to install than python3, they often end up failing at creating a virtual environment (at least initially, as you can install python3-venv separately; in the next version of Debian there will be a python3-full package you can install which will include venv and pip, but it will probably take a while for all the instructions online to be updated to suggest that over python3). We believe the lack of venv is a problem as beginners should be using environments, but asking them to install yet more software can be a barrier to getting started (I'm also ignoring the fact that pip isn't installed by default on Debian either, which also complicates the getting started experience for beginners).

But venv is not shipped as a separate part of Python's stdlib, so we can't simply install it from PyPI somehow or easily ship it as part of the Python extension to work around this. Since venv is in the stdlib, it's developed along with the version of Python it ships with, so there's no single copy which is fully compatible with all maintained versions of Python (e.g. Python 3.11 added support to use sysconfig to get the directories to create for a virtual environment, various fields in pyvenv.cfg have been added over time, newer language features may be used, etc.). While we could ship a copy of venv for every maintained version of Python, we potentially would have to ship for every micro release to guarantee we always had a working copy, and that's a lot of upstream tracking to do. And even if we only shipped copies from minor releases of Python, we would still have to track every micro release in case a bug in venv was fixed.

Hence I have created microvenv. It is a project which provides a single .py file which you use to create a minimal virtual environment. You can either execute it as a script or call its create() function that is analogous to venv.create(). It's also compatible with all maintained versions of Python. As I (hopefully) showed above, creating a virtual environment is actually straightforward, so I was able to replicate the necessary bits in less than 100 lines of Python code (specifically 87 lines in the 2023.1.1 release). That actually makes it small enough to pass in via python -c, which means it could be embedded in a binary as a string constant and passed as an argument when executing a Python executable as a subprocess. Hopefully that means a tool could guarantee it can always construct a virtual environment somehow.
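Based on that description, usage looks something like this sketch (check the project itself for the exact interface; the directory name here is arbitrary):

import microvenv

# create() is described as analogous to venv.create().
microvenv.create(".venv")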

To keep microvenv simple, small, and maintainable, it does not contain any activation scripts. I personally don't want to be a shell script expert for multiple shells, nor do I want to track the upstream activation scripts (and they do change, in case you were thinking "it shouldn't be that hard to track"). Also, in VS Code we are actually working towards implicitly activating virtual environments by updating your environment variables directly instead of executing any activation shell scripts, so the shell scripts aren't needed for our use case (we are actively moving away from using any activation scripts where we can as we have run into race condition problems with them when sending the command to the shell; thank goodness for conda run, but we also know people still want an activated terminal).

I'm also skipping Windows support because we have found the lack of venv to be a unique problem for Linux in general, and Debian-based distros specifically.

I honestly don't expect anyone except tool providers to use microvenv, but since it could be useful to others beyond VS Code, I decided it was worth releasing on its own. I also expect anyone using the project to only use it as a fallback when venv is not available (which you can deduce by running py -c "from importlib.util import find_spec; print(find_spec('venv') is not None)"). And before anyone asks why we don't just use virtualenv, its wheel is 8.7MB compared to microvenv at 3.9KB; 0.05% the size, or 2175x smaller. Granted, a good chunk of what makes up virtualenv's wheel is probably from shipping pip and setuptools in the wheel for fast installation of those projects after virtual environment creation, but we also acknowledge our need for a small, portable, single-file virtual environment creator is rather niche and something virtualenv currently doesn't support (for good reason).
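Putting the detection and the fallback together, a tool could do something like this sketch (microvenv's API is assumed from the description above):

from importlib.util import find_spec

def create_environment(env_dir=".venv"):
    # Prefer the stdlib venv module when the distro ships it.
    if find_spec("venv") is not None:
        import venv
        venv.create(env_dir, with_pip=False)
    else:
        import microvenv  # fallback for distros that strip out venv
        microvenv.create(env_dir)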

Our plan for the Python extension for VS Code is to use microvenv as a fallback mechanism for our Python: Create Environment command (FYI we also plan to bootstrap pip via its pip.pyz file from bootstrap.pypa.io by downloading it on-demand, which is luckily less than 2MB). That way we can start suggesting to users in various UX flows to create and use an environment when one isn't already being used (as appropriate, of course). We want beginners to learn about environments if they don't already know about them and also remind experienced users when they may have accidentally forgotten to create an environment for their workspace. That way people get the benefit of (virtual) environments with as little friction as possible.

Categories: FLOSS Project Planets

a2ps @ Savannah: a2ps 4.15.1 released [stable]

GNU Planet! - Sun, 2023-03-12 11:06


GNU a2ps is a filter which generates PostScript from various formats,
with pretty-printing features, strong support for many alphabets, and
customizable layout.

See https://www.gnu.org/software/a2ps/ for more information.

This is a bug-fix release. Users of 4.15 should upgrade. See below for more
details.


Here are the compressed sources and a GPG detached signature:
  https://ftpmirror.gnu.org/a2ps/a2ps-4.15.1.tar.gz
  https://ftpmirror.gnu.org/a2ps/a2ps-4.15.1.tar.gz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

8674b90626d6d1505af8b2ae392f2495b589a052  a2ps-4.15.1.tar.gz
l5dwi6AoBa/DtbkeBsuOrJe4WEOpDmbP3mp8Y8oEKyo  a2ps-4.15.1.tar.gz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify a2ps-4.15.1.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa2048 2013-12-11 [SC]
        2409 3F01 6FFE 8602 EF44  9BB8 4C8E F3DA 3FD3 7230
  uid   Reuben Thomas <rrt@sc3d.org>
  uid   keybase.io/rrt <rrt@keybase.io>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key rrt@sc3d.org

  gpg --recv-keys 4C8EF3DA3FD37230

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=a2ps&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify a2ps-4.15.1.tar.gz.sig


This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16.5
  Gnulib v0.1-5853-ge0aefd96b6

NEWS

* Noteworthy changes in release 4.15.1 (2023-03-12) [stable]
 * Bug fixes:
   - Use “grep -F” rather than obsolete fgrep.
   - Fix broken a2ps-lpr-wrapper script, and translate to sh for
     portability.


Categories: FLOSS Project Planets

Kushal Das: Everything Curl from Daniel

Planet Python - Sun, 2023-03-12 09:07

Everything Curl is a book about everything related to Curl. I read the book before online. But, now I am proud to have a physical copy signed by the author :)

It was a good evening, along with some amazing fish in dinner and wine and lots of chat about life in general.

Categories: FLOSS Project Planets

Talk Python to Me: #406: Reimagining Python's Packaging Workflows

Planet Python - Sun, 2023-03-12 04:00
The great power of Python is its over 400,000 packages on PyPI to serve as building blocks for your app. How do you get those needed packages on to your dev machine and managed within your project? What about production and QA servers? I don't even know where to start if you're shipping built software to non-dev end users. There are many variations on how this works today. And where we should go from here has become a hot topic of discussion. So today, that's the topic for Talk Python. I have a great panel of guests: Steve Dower, Pradyun Gedam, Ofek Lev, and Paul Moore.<br/> <br/> <strong>Links from the show</strong><br/> <br/> <div><b>Python Packaging Strategy Discussion - Part 1</b>: <a href="https://discuss.python.org/t/python-packaging-strategy-discussion-part-1/22420" target="_blank" rel="noopener">discuss.python.org</a><br/> <b>Thoughts on the Python packaging ecosystem</b>: <a href="https://pradyunsg.me/blog/2023/01/21/thoughts-on-python-packaging/" target="_blank" rel="noopener">pradyunsg.me</a><br/> <b>Python Packaging Authority</b>: <a href="https://www.pypa.io/en/latest/" target="_blank" rel="noopener">pypa.io</a><br/> <b>Hatch</b>: <a href="https://hatch.pypa.io/latest/" target="_blank" rel="noopener">hatch.pypa.io</a><br/> <b>Pyscript</b>: <a href="https://pyscript.net" target="_blank" rel="noopener">pyscript.net</a><br/> <b>Dark Matter Developers: The Unseen 99%</b>: <a href="https://www.hanselman.com/blog/dark-matter-developers-the-unseen-99" target="_blank" rel="noopener">hanselman.com</a><br/> <b>Watch this episode on YouTube</b>: <a href="https://www.youtube.com/watch?v=z50B6AmQwLw" target="_blank" rel="noopener">youtube.com</a><br/> <b>Episode transcripts</b>: <a href="https://talkpython.fm/episodes/transcript/406/reimagining-pythons-packaging-workflows" target="_blank" rel="noopener">talkpython.fm</a><br/> <br/> <b>--- Stay in touch with us ---</b><br/> <b>Subscribe to us on YouTube</b>: <a href="https://talkpython.fm/youtube" target="_blank" rel="noopener">youtube.com</a><br/> <b>Follow Talk Python on Mastodon</b>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" rel="noopener"><i class="fa-brands fa-mastodon"></i>talkpython</a><br/> <b>Follow Michael on Mastodon</b>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" rel="noopener"><i class="fa-brands fa-mastodon"></i>mkennedy</a><br/></div><br/> <strong>Sponsors</strong><br/> <a href='https://talkpython.fm/cox'>Cox Automotive</a><br> <a href='https://talkpython.fm/sentry'>Sentry Error Monitoring, Code TALKPYTHON</a><br> <a href='https://talkpython.fm/training'>Talk Python Training</a>
Categories: FLOSS Project Planets

ListenData: Complete Guide to Visual ChatGPT

Planet Python - Sat, 2023-03-11 23:26

In this post, we will talk about how to run Visual ChatGPT in Python with Google Colab. ChatGPT has garnered huge popularity recently due to its capability of producing human-style responses. As of now, it only provides responses in text format, which means it cannot process, generate or edit images. Microsoft recently released a solution that handles images as well. Now you can ask ChatGPT to generate or edit an image for you.

Demo of Visual ChatGPT

In the image below, you can see the final output of Visual ChatGPT and what it looks like.

READ MORE »
Categories: FLOSS Project Planets
