Feeds
The Drop Times: Vincenzo Gambino: A Drupal Architect from Palermo
Python Engineering at Microsoft: Python in Visual Studio Code – September 2024 Release
We’re excited to announce the September 2024 release of the Python and Jupyter extensions for Visual Studio Code!
This release includes the following announcements:
- Django unit test support
- Go to definition from inlay hints with Pylance
If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.
Django unit test support
We are excited to announce support for one of our most requested features: you can now discover and run Django unit tests through the Test Explorer!
In order to enable this feature, you will need to add a MANAGE_PY_PATH environment variable, pointing to your Django application’s manage.py file. To do so, you can follow these steps:
- Set "python.testing.unittestEnabled": true, in your settings.json file.
- Add MANAGE_PY_PATH as an environment variable:
- Create a .env file at the root of your project.
- Add MANAGE_PY_PATH='<path-to-manage.py>' to the .env file, replacing <path-to-manage.py> with the path to your application’s manage.py file.
Tip: You can copy the path by right clicking on the file in the Explorer view and selecting Copy Path.
- Add Django test arguments to "python.testing.unittestArgs": [] in the settings.json file as needed, and remove any arguments that are not compatible with Django.
Note: By default, the Python extension looks for and loads .env files at the project root. If your .env file is not at the project root or you are using VS Code variable substitution, add "python.envFile": "${workspaceFolder}/<path-to-.env>" to your settings.json file, so the Python extension can load the environment variables in this file when running and discovering tests. See our Python environment variables docs for more information on environment variables.
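As a concrete illustration, a minimal configuration for the steps above might look like the following. Treat both snippets as a sketch: the manage.py path is a placeholder for your own project layout, and any extra unittest arguments should be ones your Django test runner actually accepts.

```
# .env at the root of your project
MANAGE_PY_PATH='./manage.py'
```

```jsonc
// settings.json
{
    "python.testing.unittestEnabled": true,
    // Add Django-compatible test arguments here as needed; remove any
    // defaults that manage.py test does not understand.
    "python.testing.unittestArgs": []
}
```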
Navigate to the Testing view, and select the Refresh Tests button to have your Django tests displayed!
For troubleshooting tips, please see our Django testing docs. As you explore this newly added feature, please provide feedback and report any issues in our vscode-python repo or by using the Python: Report Issue command.
Go to definition from inlay hints with Pylance
When enabling inlay hints with Pylance, you can now more conveniently navigate to a type’s definition through Ctrl+Click or Cmd+Click when hovering over it.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:
- You can now access the VS Code Native REPL for Python from the Command Palette (Ctrl/Cmd + Shift + P) using Python: Start Native REPL (@vscode-python#23727)
- VS Code Native REPL for Python now starts at the project folder (@vscode-python#23821)
- Strings are now normalized when sending commands to the VS Code Native REPL (@vscode-python#23743)
- You can now restart the debugger when debugging tests through the debug control widget (@vscode-python#23752)
As we are planning and prioritizing future work, we value your feedback! Below are a few issues we would love feedback on:
- Design proposal for test coverage (@vscode-python#22827)
Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – September 2024 Release appeared first on Python.
Real Python: The Real Python Podcast – Episode #219: Astrophysics and Astronomy With Python & PyCon Africa 2024
Are you interested in practicing your Python skills while learning how to solve astrophysics and astronomy problems? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
mark.ie: The Confident: Mark Conroy's new Drupal agency
I've got some big news.
Web Review, Week 2024-36
On my way to Akademy, looking forward to meeting people there. Even though I’m traveling with spotty Internet access for now, let’s not lose good habits. Here is my web review for the week 2024-36.
The Internet Archive just lost its appeal over ebook lending - The Verge
Tags: tech, copyright, law, library
This is really bad news… Clearly the publishers cartel would try to outlaw libraries if they were invented today.
https://www.theverge.com/2024/9/4/24235958/internet-archive-loses-appeal-ebook-lending
Tags: tech, privacy, surveillance, advertisement
There, now this seems like a real thing… your phone recording you without your awareness, for advertising purposes. Nice surveillance apparatus. Thanks but no thanks.
https://futurism.com/the-byte/facebook-partner-phones-listening-microphone
Tags: tech, ai, machine-learning, gpt, art, learning, cognition
An excellent essay about generative AI and art. It goes deep into the topic and explains very well how you can hardly make art with those tools; it’s just too remote from how they work. I also particularly like the distinction between skill and intelligence. Indeed, we can make highly skilled but not intelligent systems using this technology.
https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
Tags: tech, ai, machine-learning, gpt, criticism
Does a good job listing the main myths the marketing around generative AI is built on. Don’t fall for the marketing; exercise critical thinking and rely on the real properties of those systems.
https://www.techpolicy.press/challenging-the-myths-of-generative-ai/
Tags: tech, linux, rust, kernel, foss, politics
Interesting analysis. For sure the Rust for Linux drama tells something about the Linux kernel community and its complicated social norms.
https://sporks.space/2024/09/05/is-linux-collapsing-under-its-own-weight-on-rust-for-linux/
Tags: tech, linux, rust, kernel, politics, foss
Politics in the Linux kernel can indeed be tough. The alternative path proposed to the Rust-for-Linux team is an interesting one; it could bear results quickly.
https://drewdevault.com/2024/08/30/2024-08-30-Rust-in-Linux-revisited.html
Tags: tech, semantic, programming, protocols, language
Interesting view about the LSP specification, where it shines, and where it falls short.
https://www.michaelpj.com/blog/2024/09/03/lsp-good-bad-ugly.html
Tags: tech, python, community, data-visualization
Indeed this is a much better visualization. It shows quite well how the Python programmers pool is growing.
https://two-wrongs.com/python-programmers-experience
Tags: tech, oauth, security
Nice post explaining the basics of OAuth. If you wonder why the flow seems so convoluted, this article is for you.
https://stack-auth.com/blog/oauth-from-first-principles
Tags: tech, databases
Lots of things to keep in mind when dealing with databases. This is a nice list of “must knows” for developers; false assumptions are widespread (and I fall into some of those traps myself from time to time).
https://rakyll.medium.com/things-i-wished-more-developers-knew-about-databases-2d0178464f78
Tags: tech, performance, latency
An old article but a good reminder: you have to choose between latency and throughput; you can’t have both in the same system.
Tags: tech, security, safety, memory, sandbox
Interesting point. As the memory safety of our APIs increases, can we reduce the amount of sandboxing we need? This will never completely remove the need, if only because of logic bugs, but surely we could become more strategic about it.
https://alexgaynor.net/2024/aug/30/impact-of-memory-safety-on-sandboxing/
Tags: tech, tests, storage
That sounds like a very interesting tool to simulate and test potential data loss scenarios. This is generally a bit difficult to do; this tool should make it easier.
https://github.com/dsrhaslab/lazyfs
Tags: tech, programming, craftsmanship
Good advice on naming variables, types, etc. Indeed, this makes things easier to find in code bases.
https://morizbuesing.com/blog/greppability-code-metric/
Tags: tech, architecture, design, documentation, communication
Very good article. I wish I saw more organisations writing such design documents. They help a lot and give you a way to track changes in the design. To me, they’re part of the minimal set of documentation you’d want on any non-trivial project.
https://ntietz.com/blog/reasons-to-write-design-docs/
Tags: tech, values, organization, team, management
Aligning people with differing core values in a team is indeed necessary but difficult. In small teams it can kill your project; in larger teams you will likely need to design your organization with the misalignment in mind.
https://rtpg.co/2024/08/31/cost-of-a-values-gap/
Tags: tech, hiring, interviews
Good advice. I wish more people applying for a job would follow it.
https://vurt.org/articles/twelve-rules/
Tags: colors, cognition, funny
One of those essential questions in life now has some form of answer. Where is the blue/green boundary for you?
Bye for now!
Matt Layman: Kamal On A Droplet - Building SaaS #201
DrupalEasy: How to step down successfully as a Drupal leader
In my 15+ years in the Drupal community, I've been fortunate to have been able to lead a few Drupal-related groups and I sometimes find myself in the position of encouraging other leaders - who are experiencing burnout - on how to gracefully step down from leadership positions after multiple years of service.
When I say "groups," I'm talking about things like:
- Drupal event organizers
- Drupal module/theme/project maintainers
- Drupal initiative leaders
- Drupal working group leaders
It seems counter-intuitive to encourage folks to step away from things they have successfully led, but I'm very fond of the concept that the true sign of a healthy organization is a successful change in leadership to make way for new perspectives, insights and ultimately fresh ideas.
In this article, I'll share some of my thoughts on my experiences in doing this exact thing with two prominent Drupal groups: the Florida DrupalCamp organizing team and the Drupal Community Working Group.
Being a leader in the Drupal community comes with responsibilities, but it also comes with prestige. Leaders tend to be more visible and therefore able to promote themselves or their organizations to their advantage.
Background
My leadership positions were gratifying, and I was still committed to them, but from my perspective, I had remained in them longer than was good for the organization or for me. But, I had an incredibly strong drive to ensure that I left the group in better shape than when I joined.
I was one of the original organizers of Florida DrupalCamp and ended up being the leader of the team by attrition. The other original organizers became less involved as the years went on, and I ended up taking on more and more duties. There wasn't a breaking point, but I realized that things weren't heading in the right direction.
For the Drupal Community Working Group, I was added to an incredibly strong team dealing with really difficult issues, but without a structured plan for length of terms or any other way to protect the mental health of its members.
In both cases, I was incredibly proud of the work we were doing, but didn't see a clear path to hand over leadership and leave either team in a healthy manner.
The good news
From my perspective, there are two things people need to do in order to successfully step down from leadership positions:
- Train your replacement(s).
- Codify roles and responsibilities.
Neither of these two steps can be done overnight.
The details
Train your replacement(s)
You (yes, you) need to make a concerted effort to identify, approach, and ask someone (or in many cases, "someones") to fill your role when you leave. Once you find these magical people, then it is (again) up to you to train them in what you do. It is important that you communicate not only the work involved in being a leader, but also the advantages that come with the role.
For Florida DrupalCamp, I made it known well in advance that I was looking to step down as its leader (but was willing to stay on in a lesser capacity). I knew it would be good for the event and community if there was new leadership. I told the other organizers and also mentioned it during the event's opening and closing sessions. Most importantly, I did it early and spoke about it often. This directly led to several people stepping up.
This will likely be a time-consuming process, but it will make the team stronger. It will force you to document and organize what you do, and just the act of explaining it to someone else will allow you and your replacement to identify things that need to be documented as well as possible opportunities for efficiency gains.
Assume that you'll need to be training your replacement for at least a few months, but the timeframe really depends on the cadence of your team's primary tasks.
Codify roles and responsibilities
This was especially important for the Drupal Community Working Group, as prior to my joining the group, there weren't any guidelines for length of term, how the leader was selected, and how to step away gracefully. Under the leadership of George DeMet, our team implemented all of these, and more. Both George and I led the team for more years than was probably healthy for either of us, but by the time I stepped away, there were clear guidelines for all of these things (with a significant focus on the mentally draining Conflict Resolution Team).
For less formal teams, this could be as simple as a wiki page or an issue in the project's queue with what you and the other leaders do, what your boundaries are, and what your plans for the future are. This can be especially effective when someone makes a request of you that you feel is above-and-beyond - it is nice to have a document that you could point to where roles and responsibilities are detailed.
I'll admit that I skipped this step when stepping down as leader of the Florida DrupalCamp organizing team, as I wasn't leaving the team completely - I just stepped down into a lesser role but was always available to the new leaders for questions and advice.
Getting started
There are many Drupal groups that have informal leadership roles, with many leaders who definitely feel that if they leave, then the group will fall. Clearly, this is not a healthy situation.
In this case, my advice is this: start by writing up a document/drupal.org page that describes what you do as leader and share it with the rest of the group. Then, be proactive and find a potential replacement and start the training process using the document as a guide.
No replacement
It should be obvious that the "finding your replacement" step requires a human being other than yourself being involved. But what happens if you can't find someone…
This situation can be stressful and heartbreaking at the same time, but I have a strong opinion on this - if you find yourself in this situation, then maybe it is time for the team to be disbanded or go dormant. If there's not enough interest in the community to keep the group alive, it's not your responsibility to sacrifice your time/money/mental-health. My advice is to write up your thoughts, announce your intentions (and time frame) and post it to all members of the group. This can be done in a way that sets up a future leader to use the codified roles and responsibilities as a framework to get things moving again. In a way, you're still training your replacement - just not in realtime.
Will there be people who are disappointed and/or angry with you for "abandoning" the group? Perhaps, but you'll need to do your best to ignore those folks and focus on setting up the next leader for success.
I would suggest that you keep things simple and focus on the main goal of always leaving the group in a positive manner, setting up future leaders for success.
Thanks to AmyJune Hineline, Adam Varn, Mike Herchel, George DeMet, and Gwendolyn Anello (who reviews pretty much everything I write) for reviewing this post prior to publication.
Kanopi Studios: A Handy Visual Guide to Drupal Versions, from 7 to Modern Drupal
If yours is one of the 42% of Drupal sites that are still using Drupal 7, we’re writing this post specifically with you in mind. After all, you’ve probably heard the news by now; as of January 5th, 2025, everyone’s beloved, trusted Drupal version 7 will reach its end-of-life. If you haven’t done so already, […]
The post A Handy Visual Guide to Drupal Versions, from 7 to Modern Drupal appeared first on Kanopi Studios.
mark.ie: My LocalGov Drupal contributions for week-ending September 6th, 2024
One of those weeks where we got lots and lots of smaller issues cleared up, and a new module released, and a very quirky bug discovered.
Horizontal Digital Blog: Why we migrated our blog from Wordpress to Drupal
Horizontal Digital Blog: Drupal's bundle classes offer granular control over node URLs
Tag1 Consulting: Migrating Your Data from D7 to D10: Migrating field widget settings
Today, we continue with the next field-related migration: widgets. While doing so, we will find out that new migrations might uncover issues or misconfigurations with already executed migrations.
Python Software Foundation: Pallets projects added to scope of PSF CVE Numbering Authority
Today, the PSF is expanding our CNA scope to also include Pallets projects, such as Flask, Jinja, Click, and Quart. For a complete list, see the Pallets organization on GitHub. Please report any security vulnerabilities for these projects following the Pallets security policy.
This work is being done to learn how the PSF can better serve Python's large ecosystem of projects in the context of the CVE ecosystem. The PSF previously published a guide on how open source projects can become their own CVE Numbering Authorities. You can learn more about the CVE CNA program on the CVE website.
Pallets is a fiscal sponsoree of the Python Software Foundation. Fiscal sponsorship is a key plank of the PSF’s mission in supporting the Python community. The PSF supports 20 fiscal sponsorees including regional PyCons, Python Meetup and User Groups, and Python projects. Learn more about our Fiscal Sponsorees on our website and consider supporting the groups with a US-tax deductible donation.
Sandro Tosi: TL;DR belongs at the top of an article
TL;DR
- if you are writing an article and plan to add a TL;DR section, then put it at the very top, right after the title.
- that's it, no excuses, end of discussion.
If the reason for "Too Long; Didn't Read" to exist is to spare the reader from going through the whole article to get its main points, then the natural place to present it is at the very top of said article.
So if you're planning on writing something and adding a TL;DR section (you don't have to, of course, but if you do put in that work), then please position it at the very beginning of your work.
Python GUIs: Build a Translation Application Using Tkinter and OpenAI — Use ChatGPT to Translate Your Text from Python
Translation tools have existed for many years and are incredibly useful if you're learning a new language or wanting to read foreign websites. One of the most popular tools is Google Translate, but there is now another alternative: using OpenAI's ChatGPT tool to translate text.
In this tutorial, we'll build a desktop translator application to translate natural language using ChatGPT APIs. We'll be building the UI using the Tkinter library from the Python standard library:
Example translation of text via OpenAI
Table of Contents
- Installing the Required Packages
- Building the Window
- Creating the GUI for the Translator App
- Getting a List of Languages
- Building the Input UI
- Getting an OpenAI API Key
- Implementing the Translation Functionality
- Conclusion
Our Translator uses the openai library to perform the actual translation via OpenAI's ChatGPT tool. Tkinter is already available in the standard library.
The first task will be to set up a Python virtual environment. Open the terminal, and run the following commands:
- Windows
- macOS/Linux
Working through these instructions, first we create a root directory for the Translator app. Next we create and activate a Python virtual environment for the project. Finally, we install the openai package.
Next, create a file named translator.py in the root of your project. Also add a folder called images/ where you'll store the icons for the application. The folder structure should look like this:
```
translator/
├── images/
│   ├── arrow.png
│   └── logo.png
└── translator.py
```
The images for this project can be downloaded here.
The images/ folder contains the two icons that you'll use for the application. The translator.py is the app's source file.
Building the Window
Open the translator.py file with your favorite Python code editor. We'll start by creating our main window:
```python
import tkinter as tk


class TranslatorApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("Language Translator")
        self.resizable(width=False, height=False)


if __name__ == "__main__":
    app = TranslatorApp()
    app.mainloop()
```
This code imports Tkinter and then defines the application's main class, which we have called TranslatorApp. This class will hold the application's main window and allow us to run the main loop.
Importing tkinter under the alias tk is a common convention in Tkinter code.
Inside the class we define the __init__() method, which handles initialization of the class. In this method, we first call the initializer __init__() of the parent class, tk.Tk, to initialize the app's window. Then, we set the window's title using the title() method. To make the window unresizable, we use the resizable() method with width and height set to False.
At the bottom of the code, we have the if __name__ == "__main__" idiom to check whether the file is being run directly as an executable program. Inside the condition block we first create an instance of TranslatorApp and then run the application's main loop or event loop.
If you run this code, you'll get an empty Tkinter window on your desktop:
```
$ python translator.py
```
The empty Tkinter window
Creating the GUI for the Translator App
Now that the main window is set up, let's start adding widgets to build the GUI. To do this, we'll create a method called setup_ui(), as shown below:
```python
import tkinter as tk


class TranslatorApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("Language Translator")
        self.resizable(width=False, height=False)
        self.setup_ui()

    def setup_ui(self):
        frame = tk.Frame(self)
        frame.pack(padx=10, pady=10)


if __name__ == "__main__":
    app = TranslatorApp()
    app.mainloop()
```
The setup_ui() method will define the application's GUI. In this method, we first create a frame widget using the tk.Frame class whose master argument is set to self (the application's main window). Next, we position the frame inside the main window using the pack() geometry manager, using padx and pady arguments to set some padding around the frame.
Finally, we add the call to self.setup_ui() to the __init__() method.
We'll continue to develop the UI by adding code to the setup_ui() method.
Next, we'll add the app's logo. In the setup_ui() method, add the following code below the frame definition:
```python
import tkinter as tk


class TranslatorApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("Language Translator")
        self.resizable(width=False, height=False)
        self.setup_ui()

    def setup_ui(self):
        frame = tk.Frame(self)
        frame.pack(padx=10, pady=10)

        self.logo = tk.PhotoImage(file="images/logo.png")
        tk.Label(frame, image=self.logo).grid(row=0, column=0, sticky="w")


if __name__ == "__main__":
    app = TranslatorApp()
    app.mainloop()
```
This code loads the logo using the tk.PhotoImage class. To resize it, we use the subsample() method. Then, we add the logo to the frame using a tk.Label widget. The label takes the frame and the logo as arguments. Finally, to position the logo, we use the grid() geometry manager with appropriate values for the row, column, and sticky arguments.
The sticky argument determines which side of a cell the widget should align to -- North (top), South (bottom), East (right) or West (left). Here we're aligning it on the West, or left, of the cell with "w":
Tkinter window with the OpenAI logo in it
Getting a List of Languages
We need a list of languages to show in the dropdown. There are various lists available online, but since we're using OpenAI for the translations, why not use it to give us the list of languages too? Since this is just for testing purposes, let's grab the top 20 human languages (by first and second language speakers).
We can prompt ChatGPT with something like:
Give me a list of the top 20 human languages with the most first and second language speakers in Python list format
...and it will return the following list:
```python
languages = [
    "English",
    "Mandarin Chinese",
    "Hindi",
    "Spanish",
    "French",
    "Standard Arabic",
    "Bengali",
    "Russian",
    "Portuguese",
    "Urdu",
]
```
I'm going to add Dutch to the list, because it's my second language. Feel free to add your own languages to the list.
Adding the Interface
Let's start adding some inputs to the UI. First we'll create the language selection drop-down boxes:
```python
import tkinter as tk
import tkinter.ttk as ttk

LANGUAGES = [
    "English",
    "Mandarin Chinese",
    "Hindi",
    "Spanish",
    "French",
    "Standard Arabic",
    "Bengali",
    "Russian",
    "Portuguese",
    "Urdu",
    "Dutch",  # Gekoloniseerd.
]

DEFAULT_SOURCE = "English"
DEFAULT_DEST = "Dutch"


class TranslatorApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("Language Translator")
        self.resizable(width=False, height=False)
        self.setup_ui()

    def setup_ui(self):
        frame = tk.Frame(self)
        frame.pack(padx=10, pady=10)

        self.logo = tk.PhotoImage(file="images/logo.png")
        tk.Label(frame, image=self.logo).grid(row=0, column=0, sticky="w")

        # Source language combobox
        self.from_language = ttk.Combobox(frame, values=LANGUAGES)
        self.from_language.current(LANGUAGES.index(DEFAULT_SOURCE))
        self.from_language.grid(row=1, column=0, sticky="we")

        # Arrow icon
        self.arrows = tk.PhotoImage(file="images/arrow.png").subsample(15, 15)
        tk.Label(frame, image=self.arrows).grid(row=1, column=1)

        # Destination language combobox
        self.to_language = ttk.Combobox(frame, values=LANGUAGES)
        self.to_language.current(LANGUAGES.index(DEFAULT_DEST))
        self.to_language.grid(row=1, column=2, sticky="we")


if __name__ == "__main__":
    app = TranslatorApp()
    app.mainloop()
```
We have added our language list as the constant LANGUAGES. We also define the default languages for when the application starts up, using constants DEFAULT_SOURCE and DEFAULT_DEST.
Next, we create two combo boxes to hold the list of source and destination languages, one on the left and another on the right. The combo boxes are created using the ttk.Combobox class. Between the combo boxes, we've also added an arrow icon loaded using the tk.PhotoImage class. Again, we've added the icon to the app's window using a tk.Label widget.
Both combo boxes take frame and values as arguments. The values argument populates the combo boxes with languages. To specify the default language, we use the current() method, looking up the position of our default languages in the languages list with .index().
To position the combo boxes inside the frame, we use the grid() geometry manager with the appropriate arguments. Run the application, and you will see the following window:
Source and destination languages
With the source and destination combo boxes in place, let's add three more widgets: two scrollable text widgets and a button. The scrollable text on the left will hold the source text, while the scrollable text on the right will hold the translated text. The button will allow us to run the actual translation.
Building the Input UI
Get back to the code editor and update the setup_ui() method as follows. Note that we also need to import the ScrolledText class:
```python
import tkinter as tk
import tkinter.ttk as ttk
from tkinter.scrolledtext import ScrolledText

LANGUAGES = [
    "English",
    "Mandarin Chinese",
    "Hindi",
    "Spanish",
    "French",
    "Standard Arabic",
    "Bengali",
    "Russian",
    "Portuguese",
    "Urdu",
    "Dutch",
]

DEFAULT_SOURCE = "English"
DEFAULT_DEST = "Dutch"


class TranslatorApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("Language Translator")
        self.resizable(width=False, height=False)
        self.setup_ui()

    def setup_ui(self):
        frame = tk.Frame(self)
        frame.pack(padx=10, pady=10)

        self.logo = tk.PhotoImage(file="images/logo.png").subsample(5, 5)
        tk.Label(frame, image=self.logo).grid(row=0, column=0, sticky="w")

        # Source language combobox
        self.from_language = ttk.Combobox(frame, values=LANGUAGES)
        self.from_language.current(LANGUAGES.index(DEFAULT_SOURCE))
        self.from_language.grid(row=1, column=0, sticky="we")

        # Arrow icon
        self.arrows = tk.PhotoImage(file="images/arrow.png").subsample(15, 15)
        tk.Label(frame, image=self.arrows).grid(row=1, column=1)

        # Destination language combobox
        self.to_language = ttk.Combobox(frame, values=LANGUAGES)
        self.to_language.current(LANGUAGES.index(DEFAULT_DEST))
        self.to_language.grid(row=1, column=2, sticky="we")

        # Source text
        self.from_text = ScrolledText(
            frame,
            font=("Dotum", 16),
            width=50,
            height=20,
        )
        self.from_text.grid(row=2, column=0)

        # Translated text
        self.to_text = ScrolledText(
            frame,
            font=("Dotum", 16),
            width=50,
            height=20,
            state="disabled",
        )
        self.to_text.grid(row=2, column=2)

        # Translate button
        self.translate_button = ttk.Button(
            frame,
            text="Translate",
            command=self.translate,
        )
        self.translate_button.grid(row=3, column=0, columnspan=3, pady=10)

    def translate(self):
        pass


if __name__ == "__main__":
    app = TranslatorApp()
    app.mainloop()
```
In the code snippet, we use the ScrolledText class to create the two scrolled text areas. Both text areas take frame, font, width, and height as arguments. The second text area also takes state as an additional argument. Setting state to "disabled" allows us to create a read-only text area.
Then, we use the ttk.Button class to create a button with frame, text, and command as arguments. The command argument allows us to bind the button's click event to the self.translate() method, which we will define in a moment. For now, we've added a placeholder.
To position all these widgets on the app's window, we use the grid() geometry manager. Now, the app will look something like the following:
Translator app's GUI
Our translation app's GUI is ready! Finally, we can start adding functionality to the application.
Getting an OpenAI API Key
You can use OpenAI's APIs for free, with some limitations. To get an OpenAI API key you will need to create an account. Once you have created an account, go ahead and get an API key.
Click "Create new secret key" in the top right hand corner to create a key. Give the key a name (it doesn't matter what you use) and then click "Create secret key". Copy the resulting key and keep it safe. You'll need it in the next step.
Implementing the Translation Functionality
We'll implement the language translation functionality in the translate() method. This gets the current text from the UI and then uses openai to perform the translation. We need a few more imports, and to create the OpenAI client instance at the top of the application:
```python
import tkinter as tk
import tkinter.ttk as ttk
from tkinter.messagebox import showerror
from tkinter.scrolledtext import ScrolledText

import httpcore
from openai import OpenAI

client = OpenAI(
    api_key="<YOUR API KEY HERE>"
)
```
Here we've imported the showerror helper for displaying error boxes in our application. We've imported httpcore, which we'll use to handle HTTP errors when accessing the API. Finally, we've added an import for the OpenAI class from openai. This is what handles the actual translation.
To use it, we create an instance of the OpenAI class, named client. Replace <YOUR API KEY HERE> with the API key you generated on OpenAI just now.
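Hardcoding the key is fine for a quick experiment, but you may prefer to keep it out of the source file entirely. Here is a minimal sketch of an alternative, assuming you have exported an OPENAI_API_KEY environment variable before launching the app (recent versions of the openai library will also pick this variable up automatically if api_key is omitted):

```python
import os

from openai import OpenAI

# Assumes OPENAI_API_KEY was exported in the environment beforehand,
# so the key never appears in the source code.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```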
We'll continue by implementing the translate() method. Below we're just showing the function itself:
```python
class TranslatorApp(tk.Tk):
    # ...

    def translate(self):
        source_language = self.from_language.get()
        destination_language = self.to_language.get()
        text = self.from_text.get(1.0, tk.END).strip()
        try:
            completion = client.chat.completions.create(
                messages=[
                    {"role": "system", "content": "You are a language interpreter."},
                    {
                        "role": "user",
                        "content": (
                            f"Translate the following text from {source_language} "
                            f"to {destination_language}, only reply with the text: "
                            f"{text}"
                        ),
                    },
                ],
                model="gpt-3.5-turbo",
            )
            reply = completion.choices[0].message.content
        except httpcore.ConnectError:
            showerror(
                title="Error",
                message="Make sure you have an internet connection",
            )
            return
        except Exception as e:
            showerror(
                title="Error",
                message=f"An unexpected error occurred: {e}",
            )
            return
        self.to_text.config(state="normal")
        self.to_text.delete(1.0, tk.END)
        self.to_text.insert(tk.END, reply)
        self.to_text.config(state="disabled")
```
The translate() method handles the entire translation process. It starts by retrieving the source and destination languages from the corresponding combo boxes, and the input text from the box on the left.
If any of these are not defined, we use a showerror dialog to inform the user of the problem.
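The listings in this post don't show that guard explicitly, so here is a minimal sketch of what such a check could look like at the top of translate(), assuming we simply refuse to proceed when the input box is empty:

```python
# Hypothetical guard at the start of translate(): bail out early when there
# is nothing to translate, instead of sending an empty prompt to the API.
if not text:
    showerror(
        title="Error",
        message="Please enter some text to translate.",
    )
    return
```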
Once we have the source and destination language and some text to translate, we can perform the actual translation through ChatGPT. First, we give the language model a hint about what we want it to do -- interpret language:
python {"role": "system", "content": "You are a language interpreter."},Next we build the message we want it to respond to. We ask it to translate the provided text from the source to destination language, and to respond with only the translated text. If we don't specify this, we'll get some additional description or context.
You might want to experiment with asking for the text and context separately, as that is often helpful when learning languages.
python { "role": "user", "content": ( f"Translate the following text from {source_language} " f"to {destination_language}, only reply with the text: " f"{text}" ), },The created completion is submitted to the API and we can retrieve the resulting text from the object:
```python
reply = completion.choices[0].message.content
```
If the call to translate() finds a connection error, then we tell the user to check their internet connection. To handle any other exceptions, we catch the generic Exception class and display an error message with the exception details.
If the translation is successful, then we enable the destination scrolled area, display the translated text, and disable the area again so it remains read-only.
The complete final code is shown below:
```python
import tkinter as tk
import tkinter.ttk as ttk
from tkinter.messagebox import showerror
from tkinter.scrolledtext import ScrolledText

import httpcore
from openai import OpenAI

client = OpenAI(
    api_key="<YOUR API KEY HERE>"
)

LANGUAGES = [
    "English",
    "Mandarin Chinese",
    "Hindi",
    "Spanish",
    "French",
    "Standard Arabic",
    "Bengali",
    "Russian",
    "Portuguese",
    "Urdu",
    "Dutch",
]

DEFAULT_SOURCE = "English"
DEFAULT_DEST = "Dutch"


class TranslatorApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("Language Translator")
        self.resizable(width=False, height=False)
        self.setup_ui()

    def setup_ui(self):
        frame = tk.Frame(self)
        frame.pack(padx=10, pady=10)

        self.logo = tk.PhotoImage(file="images/logo.png")
        tk.Label(frame, image=self.logo).grid(row=0, column=0, sticky="w")

        # Source language combobox
        self.from_language = ttk.Combobox(frame, values=LANGUAGES)
        self.from_language.current(LANGUAGES.index(DEFAULT_SOURCE))
        self.from_language.grid(row=1, column=0, sticky="we")

        # Arrow icon
        self.arrows = tk.PhotoImage(file="images/arrow.png").subsample(15, 15)
        tk.Label(frame, image=self.arrows).grid(row=1, column=1)

        # Destination language combobox
        self.to_language = ttk.Combobox(frame, values=LANGUAGES)
        self.to_language.current(LANGUAGES.index(DEFAULT_DEST))
        self.to_language.grid(row=1, column=2, sticky="we")

        # Source text
        self.from_text = ScrolledText(
            frame,
            font=("Dotum", 16),
            width=50,
            height=20,
        )
        self.from_text.grid(row=2, column=0)

        # Translated text
        self.to_text = ScrolledText(
            frame,
            font=("Dotum", 16),
            width=50,
            height=20,
            state="disabled",
        )
        self.to_text.grid(row=2, column=2)

        # Translate button
        self.translate_button = ttk.Button(
            frame,
            text="Translate",
            command=self.translate,
        )
        self.translate_button.grid(row=3, column=0, columnspan=3, pady=10)

    def translate(self):
        source_language = self.from_language.get()
        destination_language = self.to_language.get()
        text = self.from_text.get(1.0, tk.END).strip()
        try:
            completion = client.chat.completions.create(
                messages=[
                    {"role": "system", "content": "You are a language interpreter."},
                    {
                        "role": "user",
                        "content": (
                            f"Translate the following text from {source_language} "
                            f"to {destination_language}, only reply with the text: "
                            f"{text}"
                        ),
                    },
                ],
                model="gpt-3.5-turbo",
            )
            reply = completion.choices[0].message.content
        except httpcore.ConnectError:
            showerror(
                title="Error",
                message="Make sure you have an internet connection",
            )
            return
        except Exception as e:
            showerror(
                title="Error",
                message=f"An unexpected error occurred: {e}",
            )
            return
        self.to_text.config(state="normal")
        self.to_text.delete(1.0, tk.END)
        self.to_text.insert(tk.END, reply)
        self.to_text.config(state="disabled")


if __name__ == "__main__":
    app = TranslatorApp()
    app.mainloop()
```
The finished app is shown below:
The completed Translator app
Conclusion
In this tutorial we built a Translator application using the Tkinter GUI library from the Python standard library. We worked step by step through building the UI using a grid layout, and then implemented the language translation functionality with openai & ChatGPT.
Try taking what you've learnt in this tutorial and applying it to your own projects!
Quansight Labs Blog: Announcing Scientific Python Accessibility Events
Goals Sprint Recap
In April we had the combined goals sprint, where a fine group of KDE people working on things around Automation & Systematization, Sustainable Software, and Accessibility got together. It was a nice cross-over of the KDE goals, taking advantage of having people in one room for a weekend to directly discuss topics of the goals and interactions between them. David, Albert, Nate, Nico, and Volker wrote about their impressions from the sprint.
So what happened regarding the Sustainable Software goal at the sprint and where are we today with these topics? There are some more detailed notes of the sprint. Here is a summary of some key topics with an update on current progress.
Kick-Off for the Opt-Green project
The Opt-Green project is the second funded project of the KDE Eco team. The first one was the Blue Angel for Free Software project, where we worked on creating material helping Free Software projects to assess and meet the criteria for the Blue Angel certification for resource and energy-efficient software products.
The Opt-Green project is about promoting the extension of the operating life of hardware with Free Software in order to reduce electronic waste. It's funded for two years by the German Federal Environment Agency and the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection, and runs from April 2024 to March 2026.
Figure: Opt-Green presentation
Joseph introduced the project, why it's important, how the environment is suffering from software-induced hardware obsolescence, and how Free Software in general and KDE specifically can help with fighting it. The approach of the project is to go beyond our typical audience and introduce people who are environmentally aware but not necessarily very technical to the idea of running sustainable, up-to-date Free Software on their computers, even devices they may think are no longer usable due to lack of vendor support. In many cases this is a perfectly fine solution, and it's surprisingly attractive to a number of people who care about sustainability but haven't really been introduced to Free Software yet.
Where we are today
The project is in full swing. It has already been present at quite a number of events to motivate people to install Free Software on their (old) devices and support them in how to do it. See for example the report about the Academy of Games for upcoming 9th graders in Hannover, Germany.
Revamping the KDE Eco website
We had a great session putting together ideas and concepts about how we could improve the KDE Eco website. From brainstorming ideas to sketching a wireframe as a group, we discussed and agreed on a direction of how to present what we are doing in the KDE Eco team.
Figure: KDE Eco website sketches
The key idea is to focus on three main audiences (end users, advocates, and developers) and present specific material targeted at these groups. This nicely matches what we already have, e.g., the KDE Eco handbook on how to fulfill the Blue Angel criteria for developers, or the material being produced for events reaching out to end users, while giving it a much more focused presentation.
Where we are today
The first iteration of the new design is now live on eco.kde.org. There is more to come, but it already gives an impression of where this is going. Anita created a wonderful set of design elements which will help to shape the visual identity of KDE Eco going forward.
Surveying end users about their attitude to hardware reuse
Making use of old hardware by installing sustainable free software on it is a wide field. There are many different variations of devices, and what users do with them also varies a lot. What are the factors that might encourage users to reuse old hardware, and what is holding them back?
To get a bit more reliable answers to these questions we came up with a concept for a user survey which can be used at events where we present the Opt Green project. This includes questions about what hardware people have and what is holding them back from installing new software on it.
Where we are today
The concept has been implemented with an online survey on KDE's survey service. It's available in English and German and is being used at the events where the Opt-Green project is present.
Figure: Opt-Green Survey
Sustainable AI
One of the big hype topics of the last two years has been Generative AI and the Large Language Models which are behind this technology. They promise to bring revolutionary new features, much closer to how humans interact in natural language, but they also come with new challenges and concerns.
One of the big questions is how this new technology affects our digital freedoms. How does it relate to Free Software? How does licensing and openness work? How does it fit KDE's values? Where does it make sense to use its technology? What are the ethical implications? What are the implications in terms of sustainability?
We had a discussion around the possible idea of adopting something like Nextcloud's Ethical AI rating in KDE as well. This would make it more transparent to users how the use of AI features affects their freedoms and give them a choice to use what they consider to be satisfactory.
Where we are today
This is still pretty much an open question. The field is moving fast, and there are legal questions around copyright and other aspects still to be answered. Local models are becoming more and more of an option. But what openness means in AI has become very blurry. KDE still has to find a position here.
KDE’s Annual report for the year 2023 is out
Everything you wanted to know about the things we did last year is in this report: the funds we raised, how we spent them, the sprints and events we attended, the projects we took on, the milestones we hit, and much, much more.
Junichi Uekawa: Google docs has some tab feature.
Plasma Crash Course - DrKonqi
A while ago a colleague of mine asked about our crash infrastructure in Plasma and whether I could give some overview on it. This seems very useful to others as well, I thought. Here I am, telling you all about it!
Our crash infrastructure is comprised of a number of different components.
- KCrash: a KDE Framework performing crash interception and preparation for handover to…
- coredumpd: a systemd component performing process core collection and handover to…
- DrKonqi: a GUI for crashes sending data to…
- Sentry: a web service and UI for tracing and presenting crashes for developers
We’ve already looked at KCrash and coredumpd. Now it is time to look at DrKonqi.
DrKonqi
DrKonqi is the UI that comes up when a crash happens. We’ll explore how it integrates with coredumpd and Sentry.
Crash Pickup
When I outlined the functionality of coredumpd, I mentioned that it starts an instance of systemd-coredump@.service. This not only allows the core dumping itself to be controlled by systemd’s resource control and configuration systems, but it also means other systemd units can tie into the crash handling as well.
That is precisely what we do in DrKonqi. It installs drkonqi-coredump-processor@.service which, among other things, contains the rule:
```
WantedBy=systemd-coredump@.service
```
…meaning systemd will not only start systemd-coredump@unique_identifier but also a corresponding drkonqi-coredump-processor@unique_identifier. This is similar to how services start as part of the system boot sequence: they all are “wanted by” or “want” some other service, and that is how systemd knows what to start and when (I am simplifying here 😉). Note that unique_identifier is actually a systemd feature called “instances” — one systemd unit can be instantiated multiple times this way.
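As a generic illustration of that instancing mechanism (a hypothetical foo@.service, not one of the actual DrKonqi or coredumpd unit files), a template unit can declare in its [Install] section that it is wanted by another template unit, and systemd carries the instance name through via the %i specifier:

```ini
# foo@.service: hypothetical template unit, shown only to illustrate instancing.
[Unit]
Description=Follow-up work for instance %i

[Service]
ExecStart=/usr/bin/echo "handling instance %i"

[Install]
# When bar@some-id starts, systemd also starts foo@some-id.
WantedBy=bar@.service
```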
drkonqi-coredump-processor
When drkonqi-coredump-processor@unique_identifier runs, it first has some synchronization to do.
As a brief recap from the coredumpd post: coredumpd’s crash collection ends with writing a journald entry that contains all collected data. DrKonqi needs this data, so we wait for it to appear in the journal.
Once the journal entry has arrived, we are good to go and will systemd-socket-activate a helper in the relevant user's session.
The way this works is a bit tricky: drkonqi-coredump-processor runs as root, but DrKonqi needs to be started as the user the crash happened to. To bridge this gap a new service drkonqi-coredump-launcher comes into play.
drkonqi-coredump-launcher
Every user session has a drkonqi-coredump-launcher.socket systemd unit running that provides a socket. This socket gets connected to by the processor (remember: it is root so it can talk to the user socket). When that happens, an instance of drkonqi-coredump-launcher@.service is started (as the user) and the processor starts streaming the data from journald to the launcher.
The crash has now traveled from the user, through the kernel, to system-level systemd services, and has finally arrived back in the actual user session.
Having been started by systemd and having received the initial crash data from the processor, drkonqi-coredump-launcher will now augment that data with the KCrash metadata originally saved to disk by KCrash. Once the crash data is complete, the launcher only needs to find a way to “pick up” the crash. This will usually be DrKonqi, but technically other types of crash pickup are also supported. Most notably, developers can set the environment variable KDE_COREDUMP_NOTIFY=1 to receive system notifications about crashes with an easy way to open gdb for debugging. I’ve already written about this a while ago.
When ready, the launcher will start DrKonqi itself and pass over the complete metadata.
```
the crashed application
└── kernel
    └── systemd-coredumpd
        ├── systemd-coredumpd@unique_identifier.service
        └── drkonqi-coredump-processor@unique_identifier.service
            ├── drkonqi-coredump-launcher.socket
            └── drkonqi-coredump-launcher@unique_identifier.service
                └── drkonqi
```
What a journey!
Crash Processing
DrKonqi kicks off crash processing. This is hugely complicated and probably worth its own post. But let’s at least superficially explore what is going on.
The launcher has provided DrKonqi with a mountain of information so it can now utilize the CLI for systemd-coredump, called coredumpctl, to access the core dump and attach an instance of the debugger GDB to it.
GDB runs as a two step automated process:
Preamble Step
As part of this automation, we run a service called the preamble: a Python program that interfaces with the Python API of GDB. Its most important functionality is to create a well-structured backtrace that can be converted to a Sentry payload. Sentry, for the most part, doesn’t ingest platform-specific core dumps or crash reports, but instead relies on an abstract payload format that is generated by so-called Sentry SDKs. DrKonqi essentially acts as such an SDK for us. Once the preamble is done, the payload is transferred into DrKonqi and the next step can continue.
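To give a flavour of that approach, here is a hypothetical sketch (not DrKonqi's actual preamble) of a script run inside GDB that uses the gdb Python module to walk the stack and emit a structured, JSON-friendly backtrace:

```python
# Hypothetical illustration only: run inside gdb (e.g. via `gdb -x`), where
# the `gdb` module is available, after a core dump has been loaded.
import json

import gdb

frames = []
frame = gdb.newest_frame()
while frame is not None:
    sal = frame.find_sal()  # source file/line, if debug info is present
    frames.append(
        {
            "function": frame.name(),
            "file": sal.symtab.filename if sal and sal.symtab else None,
            "line": sal.line if sal else None,
        }
    )
    frame = frame.older()

# A structured payload like this is what an SDK-style reporter can build on.
print(json.dumps({"frames": frames}, indent=2))
```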
Trace Step
After the preamble, DrKonqi executes an actual GDB trace (i.e. the literal backtrace command in gdb) to generate the developer output. This is also the trace that gets sent to KDE’s Bugzilla instance at bugs.kde.org if the user chooses to file a bug report. The reason this is separate from the already created backtrace is mostly for historic reasons. The trace is then routed through a text parser to figure out if it is of sufficient quality; only when that is the case will DrKonqi allow filing a report in Bugzilla.
Transmission
With all the trace data assembled, we just need to send it off to Bugzilla or Sentry, depending on what the user chose to do.
Bugzilla
The Bugzilla case is simply sending a very long string of the backtrace to the Bugzilla API (albeit surrounded by some JSON).
Sentry
The Sentry case on the other hand requires more finesse. For starters, the Sentry code also works when offline. The trace and optional user message get converted into a Sentry envelope tagged with a receiver address — a Sentry-specific URL for ingestion so it knows under which project to file the crash. The envelope is then written to ~/.cache/drkonqi/sentry-envelopes/. At this point, DrKonqi’s job is done; the actual transmission happens in an auxiliary service.
Writing an envelope to disk triggers drkonqi-sentry-postman.service which will attempt to send all pending envelopes to Sentry using the URL inside the payload. It will try to do so every once in a while in case there are pending envelopes as well, thereby making sure crashes that were collected while offline still make it to Sentry eventually. Once sent successfully, the envelopes are archived in ~/.cache/drkonqi/sentry-sent-envelopes/.
This concludes DrKonqi’s activity. There’s much more detail going on behind the scenes but it’s largely inconsequential to the overall flow. Next time we will look at the final piece in the puzzle — Sentry itself.