Feeds

mark.ie: My LocalGov Drupal contributions for week-ending November 15th, 2024

Planet Drupal - Thu, 2024-11-14 19:09

LocalGov Drupal week + code contributions + getting elected on to the board of Open Digital Cooperative. It's been a busy week.

Categories: FLOSS Project Planets

Matt Layman: Heroku To DigitalOcean - Building SaaS #206

Planet Python - Thu, 2024-11-14 19:00
In this episode, I began a migration of my JourneyInbox app from Heroku to DigitalOcean. The first step to this move, since I’m going to use Kamal, is to put the app into a Docker image. We got the whole app into the Docker image, then cleaned up local development and the CI system after making changes that broke those configurations.
Categories: FLOSS Project Planets

Conclusion of KDE and Google Summer of Code 2024

Planet KDE - Thu, 2024-11-14 19:00

All of KDE's Google Summer of Code (GSoC) projects are complete.

GSoC is a program where students or people who are new to Free and Open Source software make programming contributions to an open source project.

This post summarizes the outcomes of the KDE projects that participated in GSoC 2024.

Projects

Arianna

Ajay Chauhan worked on porting Arianna from epub.js to use Foliate-js. The work will hopefully be merged soon.

A screenshot of Arianna using Foliate-js to render a table of contents
(Courtesy of Ajay Chauhan, CC BY-NC-SA 4.0)

Frameworks

Python bindings for KDE Frameworks:

Manuel Alcaraz Zambrano implemented Python bindings for KWidgetAddons, KUnitConversion, KCoreAddons, KGuiAddons, KI18n, KNotifications, and KXmlGUI. This was done using Shiboken. In addition, Manuel wrote a tutorial on how to generate Python bindings using Shiboken. The complicated set of merge requests is still being reviewed, and Manuel continues to interact with the KDE community.
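
As a rough illustration of what code built on these bindings might look like, here is a hypothetical sketch; the module, class, and constant names below simply assume the generated bindings mirror the C++ KUnitConversion API and are not taken from the actual merge requests:

# Hypothetical sketch: assumes the Shiboken-generated module mirrors the
# C++ KUnitConversion API; the exact names may differ in the real bindings.
from KUnitConversion import Converter, Value, Kilogram, Pound

converter = Converter()
weight = Value(2.5, Kilogram)              # a quantity tagged with a unit
in_pounds = converter.convert(weight, Pound)
print(in_pounds.number())                  # numeric value expressed in pounds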

Unit conversion example created using Python and KUnitConversion
(Courtesy of Manuel Alcaraz Zambrano, CC BY-NC-SA 4.0)

KDE Connect

Update SSHD library in KDE Connect Android app

The main aim of ShellWen Chen's project was to update Apache Mina SSHD from 0.14.0 to 2.12.1, as the older version has a few listed vulnerabilities. The newer library required additional code to enable it to work on older Android phones, down to Android API level 21.

KDE Games

Implementing a computerized opponent for the Mancala variant Bohnenspiel:

João Gouveia created the Mankala engine, a library that makes it easy to create Mancala games. The engine contains implementations of two Mancala games, Bohnenspiel and Oware, both with computerized opponents, and João also started on a QtQuick graphical user interface. The games are functional, but further investigation into the computerized opponents may help improve their effectiveness.

Image of text user interface for Bohnenspiel
(Courtesy of João Gouveia, CC BY-SA 4.0)

Kdenlive

Improved subtitling support for Kdenlive:

Subtitling support has been improved for Kdenlive. Chengkun Chen added support for using the Advanced SubStation (ASS) file format and for converting SubRip files to ASS files. To support this format, Chengkun Chen also made subtitling editor improvements. The work has been merged in the main repository. Documentation has been written, and will hopefully be merged soon.

The new Style Editor Widget
(Courtesy of Chengkun Chen, CC BY-SA 4.0)

Krita

Creating Pixel Perfect Tool for Krita:

Ken Lo worked on implementing Pixel Perfect lines in Krita. As explained by Ricky Han, such algorithms remove corner pixels from L-shaped blocks and ensure the thinnest possible line is 1 pixel wide. Implementing such algorithms well is useful not only in Krita, but also in rendering web graphics, where user screen resolutions can vary significantly. The algorithm was implemented to work in close to real time while lines are drawn, rather than as a post-processing step. Ken Lo's work has been merged into Krita.
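
To make the idea concrete, here is a minimal Python sketch of the corner-removal step described above; it is an illustrative reimplementation of the general technique, not Ken Lo's actual Krita code:

def pixel_perfect(points):
    """Drop the middle pixel of every L-shaped triple so a freehand
    stroke stays one pixel wide. `points` is a list of (x, y) tuples."""
    result = list(points)
    i = 1
    while i < len(result) - 1:
        (px, py), (cx, cy), (nx, ny) = result[i - 1], result[i], result[i + 1]
        # The current pixel is a corner if its neighbours touch it along
        # one axis each and sit diagonally from one another.
        if ((px == cx or py == cy) and (nx == cx or ny == cy)
                and px != nx and py != ny):
            del result[i]       # remove the corner pixel
        else:
            i += 1
    return result

# Example: an L-shaped step becomes a clean diagonal.
print(pixel_perfect([(0, 0), (1, 0), (1, 1)]))  # [(0, 0), (1, 1)]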

An image showing that pixel perfect lines are obtained most of the time
(Courtesy of Ken Lo, CC BY 4.0)

LabPlot

Improve Python Interoperability with LabPlot

Israel Galadima worked on improving Python support in LabPlot. Shiboken was used for this, and it is now possible to call some of LabPlot's functions from Python and integrate them into other applications.

An image of a plot produced using Python bindings to Labplot
(Courtesy of Israel Galadima, CC BY-SA 4.0)

3D Visualization for LabPlot:

Kuntal Bar added 3D graphing capabilities to LabPlot using QtGraphs. The work has yet to be merged, but there are many nice examples of 3D bar charts, scatter plots, and surface plots.

A 3D bar chart
(Courtesy of Kuntal Bar, MIT license)

Okular

Forms/Javascript support improvement for Okular:

Pratham Gandhi worked on improving the forms/JavaScript support in Okular. Around 25 merge requests have been merged to improve various features, some in the backend and some directly visible, such as fixing the size of the radio buttons and checkboxes, or the one pictured below, which improves the handling of floating-point numbers in different locales.

An image showing an incorrect total sum calculation fixed during GSoC
(Courtesy of Pratham Gandhi, CC BY-SA 4.0)

Snaps

Improving Snap Ecosystem in KDE:

Snaps are a self-contained Linux application packaging format. Soumyadeep Ghosh worked on improving the tooling necessary to make KDE applications easily available in the Snap Store. In addition, Soumyadeep improved the packaging of a number of KDE Snap packages and packaged MarkNote. Finally, Soumyadeep created Snap KCM, a graphical user interface for managing the permissions Snaps have when running.

Snap KCM
(Courtesy of Soumyadeep Ghosh, CC BY-NC-SA 4.0)

Next Steps

The 2024 GSoC period is finally over for KDE. A big thank you to all the mentors and contributors who have participated in GSoC! We look forward to your continuing participation in free and open source software communities and in contributing to KDE.

Categories: FLOSS Project Planets

Python Morsels: Inspecting objects in Python

Planet Python - Thu, 2024-11-14 16:49

I rely on 4 functions for inspecting Python objects: type, help, dir, and vars.

Table of contents

  1. Inspecting an object's structure and data
  2. How to see an object's class
  3. Looking up documentation with help
  4. Getting the methods and attributes for an object
  5. Inspecting direct attributes of an object
  6. Base classes, module paths, and more
  7. Inspect Python objects with type, help, dir, and vars

Inspecting an object's structure and data

The scenario is, we're either in the Python REPL or we've used the built-in breakpoint function to drop into the Python debugger within our code. So we're within some sort of interactive Python environment.

For example, we might be running this file, which we've put a breakpoint call in to drop into a Python debugger:

from argparse import ArgumentParser
from collections import Counter
from pathlib import Path
import re

def count_letters(text):
    return Counter(
        char
        for char in text.casefold()
        if char.isalpha()
    )

def main():
    parser = ArgumentParser()
    parser.add_argument("file", type=Path)
    args = parser.parse_args()
    letter_counts = count_letters(args.file.read_text())
    breakpoint()
    for letter, count in letter_counts.most_common():
        print(count, letter.upper())

if __name__ == "__main__":
    main()

And we've used the PDB interact command to start a Python REPL:

~ $ python3 letter_counter.py frankenstein.txt
> /home/trey/letter_counter.py(18)main()
-> breakpoint()
(Pdb) interact
*pdb interact start*
>>>

We have a letter_counts variable that refers to some sort of object. We want to know what this object is all about.

What questions could we ask of this object?

Well, to start with, we could simply refer to the object, and then hit Enter:

>>> letter_counts
Counter({'e': 46043, 't': 30365, 'a': 26743, 'o': 25225, 'i': 24613, 'n': 24367, 's': 21155, 'r': 20818, 'h': 19725, 'd': 16863, 'l': 12739, 'm': 10604, 'u': 10407, 'c': 9243, 'f': 8731, 'y': 7914, 'w': 7638, 'p': 6121, 'g': 5974, 'b': 5026, 'v': 3833, 'k': 1755, 'x': 677, 'j': 504, 'q': 324, 'z': 243})

We've typed the name of a variable that points to an object, and now we see the programmer-readable representation for that object.

How to see an object's class

Often, the string representation tells …
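
The excerpt stops here, but the question this section answers is easy to demonstrate in the same session; asking type() about the object from above reports its class:

>>> type(letter_counts)
<class 'collections.Counter'>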

Read the full article: https://www.pythonmorsels.com/inspecting-python-objects/
Categories: FLOSS Project Planets

Five Jars: Enhancing Code Reliability with Effective Vue.js Testing

Planet Drupal - Thu, 2024-11-14 15:47
Writing tests can improve both your coding skills and product reliability, helping you become a better developer by encouraging structured, concise, well-organized, and well-documented code.
Categories: FLOSS Project Planets

Django Weblog: Django’s technical governance challenges, and opportunities

Planet Python - Thu, 2024-11-14 12:00

As of October 29th, two of the four members of the Django Software Foundation Steering Council have resigned from their roles, with the intention of triggering an election of the Steering Council earlier than otherwise scheduled, per our established governance processes.

To our departing members, Simon and Adam, thank you for your contributions to Django and its governance ❤️. The framework and our community owes a lot to your dedication, and we’re confident our community will join us in celebrating your past contributions – and look forward to learning about your future endeavors in the Django ecosystem. And thanks to the remaining members, James and Andrew, for their service over the years.

Our governance challenges

Governance in open source is hard, and community-driven open source even more so. We’re proud that Django’s original two Benevolent Dictators For Life (BDFLs) both retired from the role and turned things over to community governance ten years ago now. The BDFL model can provide excellent technical governance, but it also has its flaws. So the mantle of technical governance passed to the Core Developers, and the Technical Board (since renamed the Steering Council) was introduced.

However, time has revealed flaws in the Steering Council’s governance model and operations. The Steering Council was able to provide decision-making – tiebreaking when the developer community couldn’t reach consensus – but didn’t provide more forward-looking leadership or vision. Disagreements over how – or if – the Steering Council should approach this part of leadership led us to the current situation, with no functioning technical governance as of a few weeks ago. Even before those recent events, those flaws were a common source of frustration for our contributors, and a source of concern for Django users who (rightly or not) might have expectations of Django’s direction – such as the publication of a “roadmap” for Django development.

The Django Software Foundation Board of Directors is and was aware of those issues, and recently made attempts to have the Steering Council rectify them, in coordination with other established community members. The DSF Board has tried to be hands-off when it comes to technical leadership, but in retrospect we should have been getting involved sooner, or more decisively. The lack of technical leadership is an existential threat to Django – a slow moving one, but a threat nonetheless. It’s our responsibility to address this threat.

Where we’re heading

We now need new Steering Council members. But we also need governance reform. There’s a lot about the Steering Council that is good and might only need minimal changes. However, the overall question of the Steering Council’s remit, and how it approaches technical leadership for the Django community, needs to be resolved.

We’re going to hold early elections of the Steering Council, as soon as we’ve completed the ongoing 2025 DSF Board elections. Those elections will follow existing processes, and we will want a Steering Council that strives to meet the group’s intended goals:

  1. To safeguard big decisions that affect Django projects at a fundamental level.
  2. To help shepherd the project’s future direction.

We expect the new Steering Council will take on those known challenges, resolve those questions of technical leadership, and update Django’s technical governance. They will have the full support of the Board of Directors to address this threat to Django’s future. And the Board will also be more decisive in intervening, should similar issues keep arising.

How you can help

We need contributors willing to take on those challenges and help our community come out ahead. It’s a big role, impactful but demanding. And there are strict, often annoying eligibility rules for the Steering Council.

To help you help us, we’ve set up a form: Django 6.x Steering Council elections - Expression of interest.

If you’re interested in stepping up to shepherd Django’s technical direction, fill in our expression of interest form. We’ll let you know whether or not you meet those eligibility rules and take the guesswork out of it, so you can focus on your motivation for taking on this kind of high-purpose, high-reward governance role.

Django 6.x Steering Council elections - Expression of interest

How everyone can help

Those elections will be crucial for the future of Django, and will be decided thanks to the vote of our Django Software Foundation Individual Members. If you know people who contribute to the DSF’s mission but aren’t Individual Members already -- use our form to nominate them as Individual Members, so they’re eligible to vote. If you’re that person, do nominate yourself. We consider all contributions towards our mission: advancing and promoting Django, protecting the framework’s long-term viability, and advancing the state of the art in web development.

Any questions? Reach out via email to foundation@djangoproject.com.

Categories: FLOSS Project Planets

ImageX: Boost Your Drupal Site with Flavorful Modules Named After Food

Planet Drupal - Thu, 2024-11-14 10:54

Authored by Nadiia Nykolaichuk.

Drupal modules often come with creative, inspiring names, and some even sound downright delicious. Join us on a culinary adventure through modules inspired by foods, and discover the rich features they can bring to your site! Each tool in this collection is powerful enough to supercharge your website’s capabilities, adding its own unique blend of flavors, nutrients, and zest.

Categories: FLOSS Project Planets

MinGW and Side-by-Side Manifests

Planet KDE - Thu, 2024-11-14 10:44

Qt Creator 14 has removed support for its Python 2 pretty printers.

Categories: FLOSS Project Planets

Presenting privact at KDE Akademy

Planet KDE - Thu, 2024-11-14 10:25

Earlier this year I had the pleasure of visiting the KDE Akademy 2024 in Würzburg. It had been a few years since my last visit to Akademy and it was great to see old friends and meet new ones. Besides socializing, my main task was to talk to as many KDE people as possible about the privact project and its integration into KDE. Knowing the KDE community, not surprisingly this resulted in lots of interesting discussions.

Most importantly, I gave a talk about the current state of privact’s integration with KUserFeedback. If you missed it, here is the recording:

As a follow-up, we had 2 BoFs on Monday to discuss the next steps. Felix was kind enough to join me to provide more technical developer insights than I can give.

As a first teaser for you: In the short term, the privact approach will allow KDE to do proper user research, thereby enabling us to do data-driven UX without compromising user privacy. In the longer term, privact aims to restore digital privacy for everyone, even outside of KDE, even outside of FLOSS. You can learn more in upcoming posts or on the privact homepage.

The individual feedback on the privact approach during Akademy was very good, which is why we now want to start communicating with the larger KDE community. So this post is not only to report about my attendance at the Akademy, but also to start blogging again on Planet KDE and to check if the aggregation works.

Hello World Planet!

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds mourns the passing of Lunar

Planet Debian - Thu, 2024-11-14 10:00

The Reproducible Builds community sadly announces it has lost its founding member.

Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. Many of our earliest status reports were written by him and many of our key tools in use today are based on his design.

Lunar was a resolute opponent of surveillance and censorship, and he possessed an unwavering energy that fueled his work on Reproducible Builds and Tor. Without Lunar’s far-sightedness, drive and commitment to enabling teams around him, Reproducible Builds and free software security would not be in the position it is in today. His contributions will not be forgotten, and his high standards and drive will continue to serve as an inspiration to us as well as for the other high-impact projects he was involved in.

Lunar’s creativity, insight and kindness were often noted. He will be greatly missed.


More information and tributes to Lunar are available [FR], as is a broader history of the Reproducible Builds project.

Categories: FLOSS Project Planets

Drupal Association blog: Why HeroDevs is Raising the Bar for Drupal 7 Security and Support

Planet Drupal - Thu, 2024-11-14 10:00

The Drupal Association has published this guest blog on behalf of HeroDevs.

At HeroDevs, we’re no strangers to the importance of security—especially when it comes to open-source software. As the pioneers of securing deprecated open source software across various communities like AngularJS, Vue, and Spring, we’re excited to bring our expertise to the Drupal 7 ecosystem. We understand the challenges and vulnerabilities that come with maintaining legacy software, and our goal is to ensure your Drupal 7 websites remain secure, compliant, and fully functional for the long term.

Guaranteed SLA for Security and Compliance

When it comes to security vulnerabilities, having a guaranteed response is crucial for your business. HeroDevs offers a dedicated SLA that ensures your systems receive timely attention and resolution. Our service helps you stay compliant with important regulations such as FedRAMP, PCI, HIPAA, and SOC II. With HeroDevs, your business is backed by proactive security measures, so you never have to worry about delayed responses to critical security needs.

Reliable Terms & Conditions Throughout Your Subscription

We know how important stability and reliability are for businesses managing content management systems such as Drupal 7. That’s why our terms and conditions are mutually agreed upon and remain unchanged throughout your Subscription Term. With HeroDevs, you can rely on consistent, dependable support without the worry of unexpected changes to your agreement.

Guaranteed Subscription Term: No Termination for Convenience

Another aspect that sets HeroDevs apart is our Guaranteed Subscription Term. Unlike other providers, HeroDevs cannot terminate your subscription for convenience. This ensures that you receive full, uninterrupted service for the entire duration of your agreement, so you can have peace of mind knowing your Drupal systems are in safe hands for as long as you need them to be.

Warranties and Indemnification: Protecting Your Business

At HeroDevs, we stand behind the services we provide. Our subscription includes warranties and indemnification to ensure that the security services you receive are up to standard. Should anything go wrong, you’re covered—not just with fixes, but with assurances that keep your business protected.

Why Partner with HeroDevs for Drupal Support?

By choosing HeroDevs, you’re partnering with a team of security professionals with a proven track record across various open-source communities. We’re committed to helping your business meet compliance standards, avoid costly security incidents, and maintain seamless functionality—all with the added benefit of faster support and more secure systems.

Contact us to learn more about Drupal 7 NES.

Categories: FLOSS Project Planets

Stefano Zacchiroli: In memory of Lunar

Planet Debian - Thu, 2024-11-14 08:56
In memory of Lunar

I've had the incredible fortune to share the geek path of Lunar through life on multiple occasions. First, in Debian, beginning some 15+ years ago, where we were fellow developers and participated in many DebConf editions together.

Then, on the deontology committee of Nos Oignons, a non-profit organization initiated by Lunar to operate Tor relays in France. This was with the goal of diversifying relay operators and increasing access to censorship-resistance technology for everyone in the world. It was something truly innovative and unheard of at the time in France.

Later, as a member of the steering committee of Reproducible Builds, a project that Lunar brought to widespread geek popularity with a seminal "Birds of a Feather" session at DebConf13 (and then many other talks with fellow members of the project in the years to come). A decade later, Reproducible Builds is having a major impact throughout the software industry, primarily due to growing fears about the security of the software supply chain.

Finally, we had the opportunity to recruit Lunar a couple of years ago at Software Heritage, where he insisted on working until he was able to, as part of a team he loved, and that loved him back. In addition to his numerous technical contributions to the initiative, he also facilitated our first ever multi-day team seminar. The event was so successful that it has been confirmed as a long-awaited yearly recurrence by all team members.

I fondly remember one of the last conversations I had with Lunar, a few months ago, when he told me how proud he was not only of having started Nos Oignons and contributed to the ignition of Reproducible Builds, but specifically about the fact that both initiatives were now thriving without being dependent on him. He was likely thinking about a future world without him, but also realizing how impactful his activism had been on the past and present world.

Lunar changed the world for the better and left behind a trail of love and fond memories.

Che la terra ti sia lieve, compagno.

--- Zack

Categories: FLOSS Project Planets

PyCharm: Inline AI Prompting, Coding Assistance for the dataclass_transform Decorator (PEP 681), and More in PyCharm 2024.3!

Planet Python - Thu, 2024-11-14 08:42

Code smarter, optimize performance, and stay focused on what matters most with the latest updates in PyCharm 2024.3. From enhanced support for AI Assistant and Jupyter notebooks to new features like no-code data filtering, there’s so much to explore. 

Learn about all the updates on our What’s New page, download the latest version from our website, or update your current version through our free Toolbox App.

Download PyCharm 2024.3

Key features of PyCharm 2024.3

AI Assistant

Inline AI prompting

Get help with code, generate documentation, or write tests by prompting AI directly in PyCharm’s editor. Just type your request on a new line and hit Enter.

Edits made by AI are marked in purple in the gutter, so changes are easy to spot. Need a fresh suggestion? Press Tab, Ctrl+/ ( ⌘/ on macOS), or manually edit the purple input text yourself. This feature is available for Python, JavaScript, TypeScript, JSON, YAML, and Jupyter notebooks.

For a personalized AI chat experience, you can now also choose from Google Gemini, OpenAI, or your own local models. Moreover, enhanced context management now lets you control what AI Assistant takes into consideration. The brand-new UI auto-includes open files and selected code and comes with options to add or remove files and attach project-wide instructions to guide responses across your codebase.

Ability to convert for loops into list comprehensions

Refactor your code faster with AI Assistant, which can now help you change massive for loops into list comprehensions. This feature works for all types of loops, including nested for loops and while loops.
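
As an illustration of the kind of transformation involved (a generic before-and-after example, not output captured from AI Assistant):

# Before: build a list with an explicit for loop
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# After: the equivalent list comprehension
squares = [n * n for n in range(10) if n % 2 == 0]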

Local multiline AI code completion (PyCharm Professional)

PyCharm Professional now provides local multiline AI code completion suggestions based on the proprietary JetBrains ML model used for Full Line Code Completion. Note that we don’t use your data to train the model.

Local multiline code completion typically generates 2–4 lines of code in scenarios where it can predict the next sequence of logical steps, such as within loops, when handling conditions, or when completing common code patterns and boilerplate sections.

Coding assistance for the dataclass_transform decorator (PEP 681)

PyCharm now supports intelligent coding assistance for custom data classes created with libraries using the dataclass_transform decorator. Enjoy the same support as for standard data classes, including attribute code completion and type inference for constructor signatures.
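
For context, PEP 681's dataclass_transform marks a decorator (or base class or metaclass) as producing dataclass-like classes, which is what makes this kind of IDE assistance possible. Here is a minimal sketch of a library using it; the model decorator is hypothetical:

import dataclasses
from typing import dataclass_transform  # Python 3.11+; use typing_extensions on older versions

@dataclass_transform()
def model(cls):
    # A hypothetical library decorator that synthesizes __init__ from the
    # class annotations, just like dataclasses.dataclass does.
    return dataclasses.dataclass(cls)

@model
class Point:
    x: int
    y: int

p = Point(x=1, y=2)  # the IDE can now complete and type-check x and y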

Download PyCharm 2024.3

Jupyter Notebook (PyCharm Professional)

Auto-installation for multiple packages

PyCharm 2024.3 makes it easier to install packages that are imported in your code. A new quick-fix is available for bulk auto-installations, allowing you to download and install several packages in one click.

Ability to open Jupyter table outputs in the Data View window

View Jupyter table outputs in the Data View tool window to access powerful features like heatmaps, formatting, slicing, and AI functions for enhanced dataframe analysis. Just click on the Open in Data View icon to get started. 

No-code data filtering 

Effortlessly filter data in the Data View tool window or within dataframes without writing any code. Just click the Filter icon in the upper-right corner, choose your filter options and see results in the same window. This functionality works with all supported Python frameworks, including pandas, Polars, NumPy, PyTorch, TensorFlow, and Hugging Face Datasets.

Debug port specification (PyCharm Professional)

PyCharm now allows you to specify a single debugger port for all communications, simplifying debugging in restricted environments like Docker or WSL. After you set the port in the debugger settings, the debugger runs as a server and all communication between it and the IDE flows through the specified port.

Download PyCharm 2024.3

Visit our What’s New page or check out the full release notes for more features and additional details about the features mentioned here. Please report any bugs on our issue tracker so we can address them promptly.

Connect with us on X (formerly Twitter) to share your thoughts on PyCharm 2024.3. We look forward to hearing from you!

Categories: FLOSS Project Planets

Qt Creator 15 RC released

Planet KDE - Thu, 2024-11-14 07:45

We are happy to announce the release of Qt Creator 15 RC!

Categories: FLOSS Project Planets

Metafont, MetaPost and Malayalam font

Planet KDE - Thu, 2024-11-14 07:21

At the International TeX Users Group Conference 2023 (TUG23) in Bonn, Germany, I presented a talk about using Metafont (and its extension Metapost) to develop traditional orthography Malayalam fonts, on behalf of C.V. Radhakrishnan and K.H. Hussain, who were the co-developers and authors. And I forgot to post about it afterwards — as always, life gets in between.

In early 2022, CVR started toying with Metafont to create a few complicated letters of the Malayalam script, and he showed us a wonderful demonstration that piqued the interest of many of us. With the same code base, by adjusting the parameters, different variations of the glyphs can be generated, as seen in a screenshot of that demonstration: 16 variations of the same character ഴ generated from the same Metafont source.

Hussain, quickly realizing that the characters could be programmatically assembled from a set of base/repeating components, collated an excellent list of basic shapes for Malayalam script.

Excerpts from the Malayalam character basic shape components documented by K.H. Hussain.

I bought a copy of ‘The Metafontbook’ and started learning and experimenting. We soon found that Metafont, developed by Prof. Knuth in the late 1970s, generates bitmap/raster output; its extension MetaPost, developed by his Ph.D. student John Hobby, generates vector output (PostScript), which is required for OpenType fonts. We also found that ‘Metatype1’, developed by Bogusław Jackowski et al., has very useful macros and ideas.

We had a lot of fun programmatically generating the character components and assembling them, splicing them, sometimes cutting them short, and transforming them in all sorts of useful ways. I developed a new set of tools to generate the font from the vector output (SVG files) produced by MetaPost, which is also used in later projects like the Chingam font.

At the annual TUG conference 2023 in Bonn, Germany, I presented our work, and we received good feedback. There were three presentations about Metafont itself at the conference. Among others, I also had the pleasure of meeting Linus Romer, who shared some ideas about designing variable-width reph-shapes for Malayalam characters.

The video of the presentation is available on YouTube.

The article was published in the TUGboat conference proceedings (volume 44): https://www.tug.org/TUGboat/tb44-2/tb137radhakrishnan-malayalam.pdf

Postscript (no pun intended): after the conference, I visited some of my good friends in Belgium and Netherlands. En route, my backpack with passport, identity cards, laptop, a phone and money etc. was stolen at Liège. I can’t thank enough my friends at Belgium and back at home for all their care and help, in the face of a terrible experience. On the day before my return, the stolen backpack with everything except the money was found by the railway authorities and I was able to claim it just in time.

I made yet another visit to the magnificent Plantin–Moretus Museum (it holds the original Garamond types!), where I could ink and print a metal typeset block of a sonnet set by Christoffel Plantijn in 1575; that print now hangs in the office of a good friend.

Categories: FLOSS Project Planets

Real Python: Quiz: Namespaces and Scope in Python

Planet Python - Thu, 2024-11-14 07:00

In this quiz, you’ll test your understanding of Python Namespaces and Scope.

You’ll revisit how Python organizes symbolic names and objects in namespaces, when Python creates a new namespace, how namespaces are implemented, and how variable scope determines symbolic name visibility.


Categories: FLOSS Project Planets

The Open Source Initiative and the Eclipse Foundation to Collaborate on Shaping Open Source AI (OSAI) Public Policy

Open Source Initiative - Thu, 2024-11-14 06:00

BRUSSELS and WEST HOLLYWOOD, Calif.  – 14 November 2024 – The Eclipse Foundation, one of the world’s largest open source foundations, and the Open Source Initiative (OSI), the global non-profit educating about and advocating for the benefits of open source and steward of the Open Source Definition, have signed a Memorandum of Understanding (MOU) to collaborate on promoting the interest of the open source community in the implementation of regulatory initiatives on Open Source Artificial Intelligence (OSAI). This agreement underscores the two organisations’ shared commitment to ensuring that emerging AI regulations align with widely recognised OSI open source definitions and open source values and principles.

“AI is arguably the most transformative technology of our generation,” said Stefano Maffulli, executive director, Open Source Initiative. “The challenge now is to craft policies that not only foster growth of AI but ensure that Open Source AI thrives within this evolving landscape. Partnering with the Eclipse Foundation and its expertise, with its experience in European open source development and regulatory compliance, is important to shape the future of Open Source AI.”

“For decades, OSI has been the ‘gold standard’ the open source community has turned to for building consensus around important issues,”  said Mike Milinkovich, executive director of the Eclipse Foundation. “As AI reshapes industries and societies, there is no more pressing issue for the open source community than the regulatory recognition of open source AI systems. Our combined expertise – OSI’s global leadership in open standards and open source licences and our extensive work with open source regulatory compliance – makes this partnership a  powerful advocate for the design and implementation of sound AI policies worldwide.”

Addressing the Global Challenges of AI Regulation

With AI regulation on the horizon in multiple regions, including the EU, both organisations recognise the urgency of helping policymakers understand the unique challenges and opportunities of OSAI technologies. The rapid evolution of AI technologies, together with new and complex upcoming regulatory landscapes, demands clear, consistent, and aligned guidance rooted in open source principles.

Through this partnership, the Eclipse Foundation and OSI will endeavour to bring clarity in language and terms that industry, community, civil society, and policymakers can rely upon as public policy is drafted and enforced. The organisations will collaborate by leveraging their respective public platforms and events to raise awareness and advocate on the topic.  Additionally, they will work together on joint publications, presentations, and other promotional activities, while also assisting one another in educating government officials on policy considerations for OSAI and General Purpose AI (GPAI). Through this partnership, they aim to provide clear, consistent guidance that aligns with open source principles.

Key Areas of Collaboration

The MoU outlines several areas of cooperation, including:

  • Information Exchange: OSI and the Eclipse Foundation will share relevant insights and information related to public policy-making and regulatory activities on artificial intelligence.
  • Representation to Policymakers: OSI and the Eclipse Foundation will cooperate in representing the principles and values of open source licences to policymakers and civil society organisations.
  • Promotion of Open Source Principles: Joint efforts will be made to raise awareness of the role of open source in AI, emphasising how it can foster innovation while mitigating risks. 

A Partnership for the Future

As AI continues to revolutionise industries worldwide, the need for thoughtful, balanced regulation is critical. The OSI and Eclipse Foundation are committed to providing the open source community, industry leaders, and policymakers with the tools and knowledge they need to navigate this rapidly evolving field.

This MoU marks the very beginning of a long-term collaboration, with joint initiatives and activities to be announced throughout the remainder of 2024 and into 2025. 

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, and over 420 open source projects, including runtimes, tools, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 385 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.

About the Open Source Initiative

Founded in 1998, the Open Source Initiative (OSI) is a non-profit corporation with global scope formed to educate about and advocate for the benefits of Open Source and to build bridges among different constituencies in the Open Source community. It is the steward of the Open Source Definition, setting the foundation for the global Open Source ecosystem. Join and support the OSI mission today at https://opensource.org/join

Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts:

Schwartz Public Relations (Germany)
Gloria Huppert/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 -70/ -62

514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
benoit@514-media.com
M: +44 (0) 7891 920 370

Nichols Communications (Global Press Contact)
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551

Categories: FLOSS Research

My first in-person Akademy: Thessaloniki 2023

Planet KDE - Thu, 2024-11-14 05:00
My first in-person Akademy: Thessaloniki 2023

This year, I was finally able to participate in-person at Akademy. Apart from meeting some familiar faces from the Plasma Spring in May this year, I also met lots of new people.

While waiting for the plane in Frankfurt, a group of KDE people formed. Meaning, we had a get-together even before Akademy had started ;). On the plane to Thessaloniki, I made a merge request to fix a Kickoff crash due to a KRunner change. Once that was done, everything was in place for the talks!

On Saturday, I talked once again with Nico and also Volker about KF6. This included topics like the remaining challenges, the estimated timeline for KF6, and some practical porting advice. I also gave a talk about KRunner. This was the one conference talk I gave alone, meaning I was a bit nervous 😅. The title was “KRunner: Past, Present, and Future” and it focused on porting, new features, and future plans for KF6. Thanks to everyone who listened to the talk, both in person and online! Some things, like the multithreading refactoring, are worth their own blog post, which I will write in the next weeks.

The talks from other community members were also quite interesting. Sometimes it was hard to decide which talk to go to :). Multiple talks and BoFs were about energy efficiency and how to measure it. This perfectly aligned with my benchmarking of KRunner and the KCoreAddons plugin infrastructure.

The view of the city from the hotel balcony was also quite nice

Our KF6 and Qt6 porting BoFs were also quite productive. On Tuesday, we had our traditional KF6 weekly. Having this in person was definitely a nice refreshment! Apart from some general questions about documentation and KF6 Phabricator tasks, we discussed the release schedule. The main takeaway is that we want to improve the release automation and have created a small team to handle the KDE Frameworks releases. This includes Harald Sitter, Nicolas Fella, David Edmundson and me. Feel free to join the weeklies in our Big Blue Button room https://meet.kde.org/b/ada-mi8-aem at 17:00 CEST each Tuesday.

Since we had so many talented KDE people in one place, I decided to hold a KRunner BoF on Tuesday morning. Subjects of discussion were, for example, the sorting of KRunner, how to better organize the categories, and the revival of the so-called “single runner mode”. This mode allows you to query only one specific plugin, instead of all available ones. It was previously only available from the D-Bus interface, but I have added a command line option to KRunner. To better visualize that this special mode is in use, a tool-button was added as an indicator. This can also be used to go back to querying all runners. Kai will implement clickable categories that allow you to enable this mode without any command line options being necessary!

Finally, I would like to thank everyone who made this awesome experience possible! I am already looking forward to the next Akademy and the next sprints.

Categories: FLOSS Project Planets

Unifying the KRunner sorting mechanisms for Plasma6 & further plans

Planet KDE - Thu, 2024-11-14 05:00
Unifying the KRunner sorting mechanisms for Plasma6 & further plans

In Plasma5, we had different sorting implementations for KRunner and Kicker. This had historical reasons, because Kicker only used a subset of the available KRunner plugins. Due to the increased reliability, we decided to allow all available plugins to be loaded. However, the model still hard-coded the order in which the categories are displayed. This was reported in this bug which received numerous duplicates.

To address this concern, I focused on refactoring and cleaning up KRunner as part of KDE Frameworks 6. Among the significant architectural changes was the integration of KRunner’s model responsible for sorting into the KRunner framework itself. This integration enabled easier code sharing and simplified code maintenance. Consequently, the custom sorting logic previously present in Kicker could be removed.

Further plans

Now you know some of the improvements that have been made, but the future plans might be even more interesting! While the sorting in KRunner was in many regards better than the one from Kicker, it still has some flaws. For instance, tweaking the order of results from a plugin developer’s perspective proved challenging, since rearranging categories could occur unintentionally. Also, KRunner implements logic to prioritize frequently launched results. In practice, this did not work very well, because it only changed one of two sorting factors that are basically the same (sounds messy, I know :D).

The plan is to have two separate sorting values: one for the categories and one for the results within a category. This allows KRunner to more intelligently learn which categories you use the most and prioritize them for future queries.

Another feature request is the ability to configure the sorting of plugins. With the described change, this is far easier to implement. Some of the visuals were already discussed at the Plasma sprint last month.

Stay tuned for updates!

Categories: FLOSS Project Planets

PyPy: Guest Post: Final Encoding in RPython Interpreters

Planet Python - Thu, 2024-11-14 03:42
Introduction

This post started as a quick note summarizing a recent experiment I carried out upon a small RPython interpreter by rewriting it in an uncommon style. It is written for folks who have already written some RPython and want to take a deeper look at interpreter architecture.

Some experiments are about finding solutions to problems. This experiment is about taking a solution which is already well-understood and applying it in the context of RPython to find a new approach. As we will see, there is no real change in functionality or the number of clauses in the interpreter; it's more like a comparison between endo- and exoskeletons, a different arrangement of equivalent bones and plates.

Overview

An RPython interpreter for a programming language generally does three or four things, in order:

  1. Read and parse input programs
  2. Encode concrete syntax as abstract syntax
  3. Optionally, optimize or reduce the abstract syntax
  4. Evaluate the abstract syntax: read input data, compute, print output data, etc.

Today we'll look at abstract syntax. Most programming languages admit a concrete parse tree which is readily abstracted to provide an abstract syntax tree (AST). The AST is usually encoded with the initial style of encoding. An initial encoding can be transformed into any other encoding for the same AST, looks like a hierarchy of classes, and is implemented as a static structure on the heap.

In contrast, there is also a final encoding. A final encoding can be transformed into by any other encoding, looks like an interface for the actions of the interpreter, and is implemented as an unwinding structure on the stack. From the RPython perspective, Python builtin modules like os or sys are final encodings for features of the operating system; the underlying implementation is different when translated or untranslated, but the interface used to access those features does not change.

In RPython, an initial encoding is built from a hierarchy of classes. Each class represents a type of tree nodes, corresponding to a parser production in the concrete parse tree. Each class instance therefore represents an individual tree node. The fields of a class, particularly those filled during .__init__(), store pre-computed properties of each node; methods can be used to compute node properties on demand. This seems like an obvious and simple approach; what other approaches could there be? We need an example.

Final Encoding of Brainfuck

We will consider Brainfuck, a simple Turing-complete programming language. An example Brainfuck program might be:

[-]

This program is built from a loop and a decrement, and sets a cell to zero. In an initial encoding which follows the algebraic semantics of Brainfuck, the program could be expressed by applying class constructors to build a structure on the heap:

Loop(Plus(-1))

A final encoding is similar, except that class constructors are replaced by methods, the structure is built on the stack, and we are parameterized over the choice of class:

lambda cls: cls.loop(cls.plus(-1))

In ordinary Python, transforming between these would be trivial, and mostly is a matter of passing around the appropriate class. Indeed, initial and final encodings are equivalent; we'll return to that fact later. However, in RPython, all of the types must line up, and classes must be determined before translation. We'll need to monomorphize our final encodings, using some RPython tricks later on. Before that, let's see what an actual Brainfuck interface looks like, so that we can cover all of the difficulties with final encoding.
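
As a brief aside, here is a plain-Python sketch (not part of the RPython interpreter) of what "passing around the appropriate class" means; the same final-encoded program can be interpreted into an initial encoding or straight into a string:

# The final-encoded program from above, parameterized over the domain.
program = lambda cls: cls.loop(cls.plus(-1))

# One possible domain: constructors that rebuild the initial encoding.
class Initial(object):
    @staticmethod
    def plus(i): return ("Plus", i)
    @staticmethod
    def loop(body): return ("Loop", body)

# Another domain: pretty-printing directly to a string.
class Pretty(object):
    @staticmethod
    def plus(i): return "+" * i if i > 0 else "-" * -i
    @staticmethod
    def loop(body): return "[" + body + "]"

print(program(Initial))  # ('Loop', ('Plus', -1))
print(program(Pretty))   # [-]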

Before we embark, please keep in mind that local code doesn't know what cls is. There's no type-safe way to inspect an arbitrary semantic domain. In the initial-encoded version, we can ask isinstance(bf, Loop) to see whether an AST node is a loop, but there simply isn't an equivalent for final-encoded ASTs. So, there is an implicit challenge to think about: how do we evaluate a program in an arbitrary semantic domain? For bonus points, how do we optimize a program without inspecting the types of its AST nodes?

What follows is a dissection of this module at the given revision. Readers may find it satisfying to read the entire interpreter top to bottom first; it is less than 300 lines.

Core Functionality

Final encoding is given as methods on an interface. These five methods correspond precisely to the summands of the algebra of Brainfuck.

class BF(object):
    # Other methods elided
    def plus(self, i): pass
    def right(self, i): pass
    def input(self): pass
    def output(self): pass
    def loop(self, bfs): pass

Note that the .loop() method takes another program as an argument. Initial-encoded ASTs have other initial-encoded ASTs as fields on class instances; final-encoded ASTs have other final-encoded ASTs as parameters to interface methods. RPython infers all of the types, so the reader has to know that i is usually an integer while bfs is a sequence of Brainfuck operations.

We're using a class to implement this functionality. Later, we'll treat it as a mixin, rather than a superclass, to avoid typing problems.

Monoid

In order to optimize input programs, we'll need to represent the underlying monoid of Brainfuck programs. To do this, we add the signature for a monoid:

class BF(object):
    # Other methods elided
    def unit(self): pass
    def join(self, l, r): pass

This is technically a unital magma, since RPython doesn't support algebraic laws, but we will enforce the algebraic laws later on during optimization. We also want to make use of the folklore that free monoids are lists, allowing callers to pass a list of actions which we'll reduce with recursion:

class BF(object):
    # Other methods elided
    def joinList(self, bfs):
        if not bfs: return self.unit()
        elif len(bfs) == 1: return bfs[0]
        elif len(bfs) == 2: return self.join(bfs[0], bfs[1])
        else:
            i = len(bfs) >> 1
            return self.join(self.joinList(bfs[:i]),
                             self.joinList(bfs[i:]))

.joinList() is a little bulky to implement, but Wirth's principle applies: the interpreter is shorter with it than without it.

Idioms

Finally, our interface includes a few high-level idioms, like the zero program shown earlier, which are defined in terms of low-level behaviors. In an initial encoding, these could be defined as module-level functions; here, we define them on the mixin class BF.

class BF(object):
    # Other methods elided
    def zero(self): return self.loop(self.plus(-1))
    def move(self, i): return self.scalemove(i, 1)
    def move2(self, i, j): return self.scalemove2(i, 1, j, 1)
    def scalemove(self, i, s):
        return self.loop(self.joinList([
            self.plus(-1), self.right(i), self.plus(s), self.right(-i)]))
    def scalemove2(self, i, s, j, t):
        return self.loop(self.joinList([
            self.plus(-1), self.right(i), self.plus(s), self.right(j - i),
            self.plus(t), self.right(-j)]))

Interface-oriented Architecture

Applying Interfaces

Now, we hack at RPython's object model until everything translates. First, consider the task of pretty-printing. For Brainfuck, we'll simply regurgitate the input program as a Python string:

class AsStr(object):
    import_from_mixin(BF)
    def unit(self): return ""
    def join(self, l, r): return l + r
    def plus(self, i): return '+' * i if i > 0 else '-' * -i
    def right(self, i): return '>' * i if i > 0 else '<' * -i
    def loop(self, bfs): return '[' + bfs + ']'
    def input(self): return ','
    def output(self): return '.'

Via rlib.objectmodel.import_from_mixin, no stressing with covariance of return types is required. Instead, we shift from a Java-esque view of classes and objects, to an OCaml-ish view of prebuilt classes and constructors. AsStr is monomorphic, and any caller of it will have to create their own covariance somehow. For example, here are the first few lines of the parsing function:

@specialize.argtype(1)
def parse(s, domain):
    ops = [domain.unit()]
    # Parser elided to preserve the reader's attention

By invoking rlib.objectmodel.specialize.argtype, we make copies of the parsing function, up to one per call site, based on our choice of semantic domain. Oleg calls these "symantics" but I prefer "domain" in code. Also, note how the parsing stack starts with the unit of the monoid, which corresponds to the empty input string; the parser will repeatedly use the monoidal join to build up a parsed expression without inspecting it. Here's a small taste of that:

while i < len(s):
    char = s[i]
    if char == '+': ops[-1] = domain.join(ops[-1], domain.plus(1))
    elif char == '-': ops[-1] = domain.join(ops[-1], domain.plus(-1))
    # and so on

The reader may feel justifiably mystified; what breaks if we don't add these magic annotations? Well, the translator will throw UnionError because the low-level types don't match. RPython only wants to make one copy of functions like parse() in its low-level representation, and each copy of parse() will be compiled to monomorphic machine code. In this interpreter, in order to support parsing to an optimized string and also parsing to an evaluator, we need two copies of parse(). It is okay to not fully understand this at first.

Composing Interfaces

Earlier, we noted that an interpreter can optionally optimize input programs after parsing. To support this, we'll precompose a peephole optimizer onto an arbitrary domain. We could also postcompose with a parser instead, but that sounds more difficult. Here are the relevant parts:

def makePeephole(cls):
    domain = cls()
    def stripDomain(bfs): return domain.joinList([t[0] for t in bfs])
    class Peephole(object):
        import_from_mixin(BF)
        def unit(self): return []
        def join(self, l, r): return l + r
        # Actual definition elided... for now...
    return Peephole, stripDomain

Don't worry about the actual optimization yet. What's important here is the pattern of initialization of semantic domains. makePeephole is an SML-style functor on semantic domains: given a final encoding of Brainfuck, it produces another final encoding of Brainfuck which incorporates optimizations. The helper stripDomain is a finalizer which performs the extraction from the optimizer's domain to the underlying cls that was passed in at translation time. For example, let's optimize pretty-printing:

AsStr, finishStr = makePeephole(AsStr)

Now, it only takes one line to parse and print an optimized AST without ever building it on the heap. To be pedantic, fragments of the output string will be heap-allocated, but the AST's node structure will only ever be stack-allocated. Further, to be shallow, the parser is written to prevent malicious input from causing a stack overflow, and this forces it to maintain a heap-allocated RPython list of intermediate operations inside loops.

print finishStr(parse(text, AsStr()))

Performance

But is it fast? Yes. It's faster than the prior version, which was initial-encoded, and also faster than Andrew Brown's classic version (part 1, part 2). Since Brown's interpreter does not perform much optimization, we will focus on how final encoding can outperform initial encoding.

JIT

First, why is it faster than the same interpreter with initial encoding? Well, it still has initial encoding from the JIT's perspective! There is an Op class with a hierarchy of subclasses implementing individual behaviors. A sincere tagless-final student, or those who remember Stop Writing Classes (2012, Pycon US), will recognize that the following classes could be plain functions, and should think of the classes as a concession to RPython's lack of support for lambdas with closures rather than an initial encoding. We aren't ever going to directly typecheck any Op, but the JIT will generate typechecking guards anyway, so we effectively get a fully-promoted AST inlined into each JIT trace. First, some simple behaviors:

class Op(object):
    _immutable_ = True

class _Input(Op):
    _immutable_ = True
    def runOn(self, tape, position):
        tape[position] = ord(os.read(0, 1)[0])
        return position
Input = _Input()

class _Output(Op):
    _immutable_ = True
    def runOn(self, tape, position):
        os.write(1, chr(tape[position]))
        return position
Output = _Output()

class Add(Op):
    _immutable_ = True
    _immutable_fields_ = "imm",
    def __init__(self, imm): self.imm = imm
    def runOn(self, tape, position):
        tape[position] += self.imm
        return position

The JIT does technically have less information than before; it no longer knows that a sequence of immutable operations is immutable enough to be worth unrolling, but a bit of rlib.jit.unroll_safe fixes that:

class Seq(Op):
    _immutable_ = True
    _immutable_fields_ = "ops[*]",
    def __init__(self, ops): self.ops = ops
    @unroll_safe
    def runOn(self, tape, position):
        for op in self.ops:
            position = op.runOn(tape, position)
        return position

Finally, the JIT entry point is at the head of each loop, just like with prior interpreters. Since Brainfuck doesn't support mid-loop jumps, there's no penalty for only allowing merge points at the head of the loop.

class Loop(Op):
    _immutable_ = True
    _immutable_fields_ = "op",
    def __init__(self, op): self.op = op
    def runOn(self, tape, position):
        op = self.op
        while tape[position]:
            jitdriver.jit_merge_point(op=op, position=position, tape=tape)
            position = op.runOn(tape, position)
        return position

That's the end of the implicit challenge. There's no secret to it; just evaluate the AST. Here's part of the semantic domain for evaluation, as well as the "functor" to optimize it. In AsOps.join() are the only isinstance() calls in the entire interpreter! This is acceptable because Seq is effectively a type wrapper for an RPython list, so that a list of operations is also an operation; its list is initial-encoded and available for inspection.

class AsOps(object):
    import_from_mixin(BF)
    def unit(self): return Shift(0)
    def join(self, l, r):
        if isinstance(l, Seq) and isinstance(r, Seq):
            return Seq(l.ops + r.ops)
        elif isinstance(l, Seq): return Seq(l.ops + [r])
        elif isinstance(r, Seq): return Seq([l] + r.ops)
        return Seq([l, r])
    # Other methods elided!

AsOps, finishOps = makePeephole(AsOps)

And finally here is the actual top-level code to evaluate the input program. As before, once everything is composed, the actual invocation only takes one line.

tape = bytearray("\x00" * cells)
finishOps(parse(text, AsOps())).runOn(tape, 0)

Peephole Optimization

Our peephole optimizer is an abstract interpreter with one instruction of lookahead/rewrite buffer. It implements the aforementioned algebraic laws of the Brainfuck monoid. It also implements idiom recognition for loops. First, the abstract interpreter. The abstract domain has six elements:

class AbstractDomain(object): pass
meh, aLoop, aZero, theIdentity, anAdd, aRight = [AbstractDomain() for _ in range(6)]

We'll also tag everything with an integer, so that anAdd or aRight can be exact annotations. This is the actual Peephole.join() method:

def join(self, l, r):
    if not l:
        return r
    rv = l[:]
    bfHead, adHead, immHead = rv.pop()
    for bf, ad, imm in r:
        if ad is theIdentity:
            continue
        elif adHead is aLoop and ad is aLoop:
            continue
        elif adHead is theIdentity:
            bfHead, adHead, immHead = bf, ad, imm
        elif adHead is anAdd and ad is aZero:
            bfHead, adHead, immHead = bf, ad, imm
        elif adHead is anAdd and ad is anAdd:
            immHead += imm
            if immHead:
                bfHead = domain.plus(immHead)
            elif rv:
                bfHead, adHead, immHead = rv.pop()
            else:
                bfHead = domain.unit()
                adHead = theIdentity
        elif adHead is aRight and ad is aRight:
            immHead += imm
            if immHead:
                bfHead = domain.right(immHead)
            elif rv:
                bfHead, adHead, immHead = rv.pop()
            else:
                bfHead = domain.unit()
                adHead = theIdentity
        else:
            rv.append((bfHead, adHead, immHead))
            bfHead, adHead, immHead = bf, ad, imm
    rv.append((bfHead, adHead, immHead))
    return rv

If this were to get much longer, then implementing a DSL would be worth it, but this is a short-enough method to inline. The abstract interpretation is assumed by induction for the left-hand side of the join, save for the final instruction, which is loaded into a rewrite register. Each instruction on the right-hand side is inspected exactly once. The logic for anAdd followed by anAdd is exactly the same as for aRight followed by aRight because they both have underlying Abelian groups given by the integers. The rewrite register is carefully pushed onto and popped off from the left-hand side in order to cancel out theIdentity, which itself is merely a unifier for anAdd or aRight of 0.
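As a minimal standalone sketch of that cancellation, here is a simplified combine step that tracks only the abstract tag and the integer annotation, leaving out the concrete domain value that the real join also rebuilds:

# Simplified illustration only: the real join also recomputes the concrete
# domain value (domain.plus, domain.right, domain.unit) alongside the tag.
def combine(head, nxt):
    (adHead, immHead), (ad, imm) = head, nxt
    if adHead is ad and (ad is anAdd or ad is aRight):
        total = immHead + imm
        if total:
            return (ad, total)
        return (theIdentity, 0)
    return None  # not combinable; the real join pushes head and moves on

# "+++" followed by "--" folds into a single add of 1; "+" then "-" cancels.
assert combine((anAdd, 3), (anAdd, -2)) == (anAdd, 1)
assert combine((anAdd, 1), (anAdd, -1)) == (theIdentity, 0)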

Note that we generate a lot of garbage. For example, parsing a string of n '+' characters will cause the peephole optimizer to allocate n instances of the underlying domain.plus() action, from domain.plus(1) up to domain.plus(n). An older initial-encoded version of this interpreter used hash consing to avoid ever building an op more than once, even loops. It appears more efficient to generate lots of immutable garbage than to repeatedly hash inputs and search mutable hash tables, at least for optimizing Brainfuck incrementally during parsing.
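For readers unfamiliar with the technique, a hash-consing cache is just a table keyed on an op's constructor arguments; a minimal sketch, with the cache name and helper invented here for illustration, might look like this:

# Illustrative only: the older interpreter's actual cache is not shown here.
_add_cache = {}

def cached_add(imm):
    # Reuse a previously-built Add(imm) instead of allocating a fresh one.
    op = _add_cache.get(imm)
    if op is None:
        op = _add_cache[imm] = Add(imm)
    return op

assert cached_add(3) is cached_add(3)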

Finally, let's look at idiom recognition. RPython lists are initial-encoded, so we can dispatch on the length of the list and then inspect the abstract-domain tag of each action.

def isConstAdd(bf, i):
    return bf[1] is anAdd and bf[2] == i

def oppositeShifts(bf1, bf2):
    return bf1[1] is bf2[1] is aRight and bf1[2] == -bf2[2]

def oppositeShifts2(bf1, bf2, bf3):
    return (bf1[1] is bf2[1] is bf3[1] is aRight and
            bf1[2] + bf2[2] + bf3[2] == 0)

def loop(self, bfs):
    if len(bfs) == 1:
        bf, ad, imm = bfs[0]
        if ad is anAdd and imm in (1, -1):
            return [(domain.zero(), aZero, 0)]
    elif len(bfs) == 4:
        if (isConstAdd(bfs[0], -1) and
            bfs[2][1] is anAdd and
            oppositeShifts(bfs[1], bfs[3])):
            return [(domain.scalemove(bfs[1][2], bfs[2][2]), aLoop, 0)]
        if (isConstAdd(bfs[3], -1) and
            bfs[1][1] is anAdd and
            oppositeShifts(bfs[0], bfs[2])):
            return [(domain.scalemove(bfs[0][2], bfs[1][2]), aLoop, 0)]
    elif len(bfs) == 6:
        if (isConstAdd(bfs[0], -1) and
            bfs[2][1] is bfs[4][1] is anAdd and
            oppositeShifts2(bfs[1], bfs[3], bfs[5])):
            return [(domain.scalemove2(bfs[1][2], bfs[2][2],
                                       bfs[1][2] + bfs[3][2], bfs[4][2]),
                     aLoop, 0)]
        if (isConstAdd(bfs[5], -1) and
            bfs[1][1] is bfs[3][1] is anAdd and
            oppositeShifts2(bfs[0], bfs[2], bfs[4])):
            return [(domain.scalemove2(bfs[0][2], bfs[1][2],
                                       bfs[0][2] + bfs[2][2], bfs[3][2]),
                     aLoop, 0)]
    return [(domain.loop(stripDomain(bfs)), aLoop, 0)]
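To make the recognized idioms concrete: [-] zeroes the current cell, [->+<] moves it one cell to the right, and [->++<] doubles it while moving. A small reference sketch of the tape effect that domain.scalemove(offset, factor) presumably stands for; the function name is invented here, and the (offset, factor) argument order is an assumption read off the recognizer above:

def scalemove_reference(tape, position, offset, factor):
    # [->++<] with offset=1, factor=2: add factor * current cell into the
    # cell at position + offset, then clear the current cell, exactly as
    # running the Brainfuck loop would.
    tape[position + offset] += factor * tape[position]
    tape[position] = 0
    return position

tape = bytearray(b"\x03\x00")
scalemove_reference(tape, 0, 1, 2)
assert bytes(tape) == b"\x00\x06"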

This ends the bonus question. How do we optimize an unknown semantic domain? We must maintain an abstract context which describes elements of the domain. In initial encoding, we ask an AST about itself. In final encoding, we already know everything relevant about the AST.

The careful reader will see that I didn't really answer that opening question in the JIT section. Because the JIT still ranges over the same operations as before, it can't really be slower; but why is it now faster? Because the optimizer is now slightly better in a few edge cases. It performs the same optimizations as before, but the rigor of abstract interpretation causes it to emit slightly better operations to the JIT backend.

Concretely, improving the optimizer can shorten pretty-printed programs. The Busy Beaver Gauge measures the length of programs which search for solutions to mathematical problems. After implementing and debugging the final-encoded interpreter, I found that two of my entries on the Busy Beaver Gauge for Brainfuck had become shorter by about 2%. (Most other entries are already hand-optimized according to the standard algebra and have no optimization opportunities.)

Discussion

Given that initial and final encodings are equivalent, and noting that RPython's toolchain is written to prefer initial encodings, what did we actually gain? Did we gain anything?

One obvious downside to final encoding in RPython is interpreter size. The example interpreter shown here is a rewrite of an initial-encoded interpreter which can be seen here for comparison. Final encoding adds about 20% more code in this case.

Final encoding is not necessarily more code than initial encoding, though. All AST encodings in interpreters are subject to the Expression Problem, which states that there is generally a quadratic amount of code required to implement multiple behaviors for an AST with multiple types of nodes; specifically, n behaviors for m types of nodes require n × m methods. Initial encodings improve the cost of adding new types of nodes; final encodings improve the cost of adding new behaviors. Final encoding may tend to win in large codebases for mature languages, where the language does not change often but new behaviors are added frequently and maintained for long periods.
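A toy sketch of that trade-off, in plain Python rather than RPython and with all names invented for the example: the initial encoding pays per behavior, the final encoding pays per node type.

# Initial encoding: one class per node type. Each new behavior (evaluate,
# pretty-print, ...) must handle every node type.
class Lit(object):
    def __init__(self, n): self.n = n

class Plus(object):
    def __init__(self, l, r): self.l, self.r = l, r

def evaluate(node):
    if isinstance(node, Lit): return node.n
    return evaluate(node.l) + evaluate(node.r)

def pretty(node):  # adding this behavior touches every node type
    if isinstance(node, Lit): return str(node.n)
    return "(%s + %s)" % (pretty(node.l), pretty(node.r))

# Final encoding: one class per behavior. Each new node type would need a
# method on every behavior class, but a new behavior is just one class.
class Evaluate(object):
    def lit(self, n): return n
    def plus(self, l, r): return l + r

class Pretty(object):
    def lit(self, n): return str(n)
    def plus(self, l, r): return "(%s + %s)" % (l, r)

def program(alg):  # the "AST" is only ever seen through the behavior object
    return alg.plus(alg.lit(1), alg.lit(2))

assert evaluate(Plus(Lit(1), Lit(2))) == program(Evaluate()) == 3
assert pretty(Plus(Lit(1), Lit(2))) == program(Pretty()) == "(1 + 2)"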

Optimizations in final encoding require a bit of planning. The abstract-interpretation approach is solid but relies upon the monoid and its algebraic laws. In the worst case, an entire class hierarchy could be required to encode the abstraction.

It is remarkable to find a 2% improvement in residual program size merely by reimplementing an optimizer as an abstract interpreter respecting the algebraic laws. This could be the most important lesson for compiler engineers, if it happens to generalize.

Final encoding was popularized via the tagless-final movement in OCaml and Scala, including famously in a series of tutorials by Kiselyov et al. A "tag", in this jargon, is a runtime identifier for an object's type or class; a tagless encoding effectively doesn't allow isinstance() at all. In the above presentation, tags could be hacked in, but were not materially relevant to most steps. Tags were required for the final evaluation step, though, and the tagless-final insight is that certain type systems can express type-safe evaluation without those tags. We won't go further in this direction because tags also communicate valuable information to the JIT.

Summarizing Table

Initial Encoding                                               | Final Encoding
---------------------------------------------------------------|---------------------------------------------------------------
hierarchy of classes                                           | signature of interfaces
class constructors                                             | method calls
built on the heap                                              | built on the stack
traversals allocate stack                                      | traversals allocate heap
tags are available with isinstance()                           | tags are only available through hacks
cost of adding a new AST node: one class                       | cost of adding a new AST node: one method on every other class
cost of adding a new behavior: one method on every other class | cost of adding a new behavior: one class

Credits

Thanks to folks in #pypy on Libera Chat: arigato for the idea, larstiq for pushing me to write it up, and cfbolz and mattip for reviewing and finding mistakes. The original IRC discussion leading to this blog post is available here.

This interpreter is part of the rpypkgs suite, a Nix flake for RPython interpreters. Readers with Nix installed can run this interpreter directly from the flake:

$ nix-prefetch-url https://github.com/MG-K/pypy-tutorial-ko/raw/refs/heads/master/mandel.b
$ nix run github:rpypkgs/rpypkgs#bf -- /nix/store/ngnphbap9ncvz41d0fkvdh61n7j2bg21-mandel.b
Categories: FLOSS Project Planets
