Feeds

Real Python: Python News: What's New From April 2024

Planet Python - Mon, 2024-05-06 10:00

In April 2024, Python’s core development team released versions 3.13.0a6 and 3.12.3 of the language! The former received several exciting features, improvements, and optimizations, while the latter got more than 300 commits for security improvements and bug fixes.

The 3.13.0a6 release is the last alpha release. In the first half of May, the code will be frozen and won’t accept new features. Note that 3.13.0a6 is a pre-release, so you shouldn’t use it in production environments. However, it provides a great way to try out some new and exciting language features.

There is also great news about PyCon US 2024, which opened its call for volunteers.

Let’s dive into the most exciting Python news from April 2024!

Python 3.13.0 Alpha 6 and 3.12.3 Arrive

This April, Python released its sixth alpha preview release, 3.13.0a6. This version is the last alpha release, as Python 3.13 will enter the beta phase on May 7. Once in beta, it won’t accept any new features.

Python 3.13 brings the following new features:

  • A new and improved interactive interpreter (REPL) with multiline editing
  • An experimental free-threaded build mode that can run without the global interpreter lock (PEP 703)
  • An experimental just-in-time (JIT) compiler (PEP 744)

Meanwhile, the standard library comes with these new features:

  • The dbm module has a new dbm.sqlite3 backend, which is used when creating new files (see the short sketch after this list).
  • Per PEP 594, many long-deprecated “dead battery” modules were removed: aifc, audioop, chunk, cgi, cgitb, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu, and xdrlib; the lib2to3 package was removed as well.
  • Many other deprecated classes, functions, and methods were removed.
  • New deprecations appeared, and many of them were scheduled for removal in Python 3.15 or 3.16.
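
As a quick taste of that new backend, here’s a minimal sketch of using dbm.sqlite3 directly (requires Python 3.13; the file name is arbitrary):

import dbm.sqlite3

# Open an SQLite-backed dbm file, creating it if needed (the "c" flag)
with dbm.sqlite3.open("example.db", "c") as db:
    db[b"language"] = b"Python"
    print(db[b"language"])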

For a detailed list of changes, additions, and removals, you can check out the Changelog document. The next pre-release of Python 3.13 will be 3.13.0b1, which is currently scheduled for May 7.

Read the full article at https://realpython.com/python-news-april-2024/ »


Categories: FLOSS Project Planets

Mike Driscoll: How to Read and Write Parquet Files with Python

Planet Python - Mon, 2024-05-06 09:57

Apache Parquet files are a popular columnar storage format used by data scientists and anyone using the Hadoop ecosystem. It was developed to be very efficient in terms of compression and encoding. Check out their documentation if you want to know all the details about how Parquet files work.

You can read and write Parquet files with Python using the pyarrow package.

Let’s learn how that works now!

Installing pyarrow

The first step is to make sure you have everything you need. In addition to the Python programming language, you will also need pyarrow and the pandas package. You will use pandas because it also represents data in columns and works well with Parquet files.

You can use pip to install both of these packages. Open up your terminal and run the following command:

python -m pip install pyarrow pandas

If you use Anaconda, you’ll want to install pyarrow using this command instead.

conda install -c conda-forge pyarrow

Anaconda should already include pandas, but if not, you can use the same command above by replacing pyarrow with pandas.

Now that you have pyarrow and pandas installed, you can use them to read and write Parquet files!

Writing Parquet Files with Python

Writing Parquet files with Python is pretty straightforward. The code to turn a pandas DataFrame into a Parquet file is about ten lines.

Open up your favorite Python IDE or text editor and create a new file. You can name it something like parquet_file_writer.py or use some other descriptive name. Then enter the following code:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

def write_parquet(df: pd.DataFrame, filename: str) -> None:
    table = pa.Table.from_pandas(df)
    pq.write_table(table, filename)

if __name__ == "__main__":
    data = {"Languages": ["Python", "Ruby", "C++"],
            "Users": [10000, 5000, 8000],
            "Dynamic": [True, True, False],
            }
    df = pd.DataFrame(data=data, index=list(range(1, 4)))
    write_parquet(df, "languages.parquet")

For this example, you have three imports:

  • One for pandas, so you can create a DataFrame
  • One for pyarrow, to create a special pyarrow.Table object
  • One for pyarrow.parquet, to transform the table object into a Parquet file

The write_parquet() function takes in a pandas DataFrame and the file name or path to save the Parquet file to. Then, you transform the DataFrame into a pyarrow Table object before converting that into a Parquet file using the write_table() function, which writes it to disk.

Now you are ready to read that file you just created!

Reading Parquet Files with Python

Reading the Parquet file you created earlier with Python is even easier. You’ll need about half as many lines of code!

You can put the following code into a new file called something like parquet_file_reader.py if you want to:

import pyarrow.parquet as pq

def read_parquet(filename: str) -> None:
    table = pq.read_table(filename)
    df = table.to_pandas()
    print(df)

if __name__ == "__main__":
    read_parquet("languages.parquet")

In this example, you read the Parquet file into a pyarrow Table format and then convert it to a pandas DataFrame using the Table’s to_pandas() method.

When you print out the contents of the DataFrame, you will see the following:

  Languages  Users  Dynamic
1    Python  10000     True
2      Ruby   5000     True
3       C++   8000    False

You can see from the output above that the DataFrame contains all the data you saved.

One of the strengths of using a Parquet file is that you can read just parts of the file instead of the whole thing. For example, you can read in just some of the columns rather than the whole file!

Here’s an example of how that works:

import pyarrow.parquet as pq

def read_columns(filename: str, columns: list[str]) -> None:
    table = pq.read_table(filename, columns=columns)
    print(table)

if __name__ == "__main__":
    read_columns("languages.parquet", columns=["Languages", "Users"])

To read in just the “Languages” and “Users” columns from the Parquet file, you pass a list containing just those column names to read_table() via its columns parameter.

Here’s the output when you run this code:

pyarrow.Table
Languages: string
Users: int64
----
Languages: [["Python","Ruby","C++"]]
Users: [[10000,5000,8000]]

This outputs the pyarrow Table format, which differs slightly from a pandas DataFrame. It tells you information about the different columns; for example, Languages are strings, and Users are of type int64.

If you prefer to work only with pandas DataFrames, the pyarrow package allows that too. As long as you know the Parquet file contains pandas DataFrames, you can use read_pandas() instead of read_table().

Here’s a code example:

import pyarrow.parquet as pq

def read_columns_pandas(filename: str, columns: list[str]) -> None:
    table = pq.read_pandas(filename, columns=columns)
    df = table.to_pandas()
    print(df)

if __name__ == "__main__":
    read_columns_pandas("languages.parquet", columns=["Languages", "Users"])

When you run this example, the output is a DataFrame that contains just the columns you asked for:

  Languages  Users
1    Python  10000
2      Ruby   5000
3       C++   8000

One advantage of using the read_pandas() and to_pandas() methods is that they will maintain any additional index column data in the DataFrame, while the pyarrow Table may not.
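
As a quick check of that behavior, you can reuse the languages.parquet file from earlier; its custom index of 1 through 3 should survive the round trip:

import pyarrow.parquet as pq

# read_pandas() keeps the pandas index metadata stored in the file
df = pq.read_pandas("languages.parquet").to_pandas()
print(df.index.tolist())  # expected: [1, 2, 3]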

Reading Parquet File Metadata

You can also get the metadata from a Parquet file using Python. Getting the metadata can be useful when you need to inspect an unfamiliar Parquet file to see what type(s) of data it contains.

Here’s a small code snippet that will read the Parquet file’s metadata and schema:

import pyarrow.parquet as pq

def read_metadata(filename: str) -> None:
    parquet_file = pq.ParquetFile(filename)
    metadata = parquet_file.metadata
    print(metadata)
    print(f"Parquet file: {filename} Schema")
    print(parquet_file.schema)

if __name__ == "__main__":
    read_metadata("languages.parquet")

There are two ways to get the Parquet file’s metadata:

  • Use pq.ParquetFile to read the file and then access the metadata property
  • Use pq.read_metadata(filename) instead

The benefit of the former method is that you can also access the schema property of the ParquetFile object.
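
If you only need the metadata, the latter approach is a one-liner. Here’s a minimal sketch using the same languages.parquet file:

import pyarrow.parquet as pq

# Read just the file-level metadata without constructing a ParquetFile
metadata = pq.read_metadata("languages.parquet")
print(metadata)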

When you run this code, you will see this output:

<pyarrow._parquet.FileMetaData object at 0x000002312C1355D0>
  created_by: parquet-cpp-arrow version 15.0.2
  num_columns: 4
  num_rows: 3
  num_row_groups: 1
  format_version: 2.6
  serialized_size: 2682
Parquet file: languages.parquet Schema
<pyarrow._parquet.ParquetSchema object at 0x000002312BBFDF00>
required group field_id=-1 schema {
  optional binary field_id=-1 Languages (String);
  optional int64 field_id=-1 Users;
  optional boolean field_id=-1 Dynamic;
  optional int64 field_id=-1 __index_level_0__;
}

Nice! You can read the output above to learn the number of rows and columns of data and the size of the data. The schema tells you what the field types are.

Wrapping Up

Parquet files are becoming more popular in big data and data science-related fields. Python’s pyarrow package makes working with Parquet files easy. You should spend some time experimenting with the code in this tutorial and using it for some of your own Parquet files.

When you want to learn more, check out the Parquet documentation.

The post How to Read and Write Parquet Files with Python appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

EuroPython Society: 🐍 Community Call for Venues - EuroPython 2025

Planet Python - Mon, 2024-05-06 09:23

Greetings to all community organizers across Europe! 🌍

We are thrilled to announce the opening of the Call for Venues for EuroPython 2025! 🌐

EuroPython is the world’s oldest volunteer-led Python Conference. We are rooted in principles of diversity, inclusion, and accessibility; advocating for the community, and striving to create a welcoming space for all.

It is our mission to ensure accessibility for the wider community of Pythonistas when selecting conference locations. In addition to ticket prices, we carefully consider the ease of access and sustainability of future venues.

Similar to the process followed for the selection of Prague for EuroPython 2023, we would like your help in choosing the most suitable location for our next editions.

If you want to propose a location on behalf of your community, please send us an email at board@europython.eu with your proposal before May 14th. We will coordinate with you to collect all the necessary data required.

📝 Important Notes:

  • The EPS will also revisit community proposals from previous years.
  • Proposals submitted currently will be retained for future years.

Your city could be the next hub for collaboration, learning, and celebration within the Python ecosystem.

Join us in shaping an unforgettable experience for EuroPython 2025 participants!

✏️ Got any questions/suggestions/comments? Drop us a line at board@europython.eu and we will get back to you.

See you all soon,

EuroPython Society Board

Categories: FLOSS Project Planets

About QML Efficiency: Compilers, Language Server, and Type Annotations

Planet KDE - Mon, 2024-05-06 09:22

In our last post we had a look at how to set up QML Modules and how we can benefit from the QML Linter. Today we’re going to set up the QML Language Server to get an IDE-like experience in an editor of our choice. We’ll also help the QML Compiler generate more efficient code.

Continue reading About QML Efficiency: Compilers, Language Server, and Type Annotations at basysKom GmbH.

Categories: FLOSS Project Planets

Django Weblog: Django bugfix releases issued: 5.0.5 and 4.2.12

Planet Python - Mon, 2024-05-06 07:57

Today we've issued 5.0.5 and 4.2.12 bugfix releases.

The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key used for this release is Sarah Boyce’s: 3955B19851EA96EF.

Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar 2024 - Call for Ideas

Planet Drupal - Mon, 2024-05-06 07:01

DrupalCon Portland starts today, so it seems a good time to start thinking about the 2024 Advent Calendar!

Advent Calendar? In May?

If there’s one thing I learned last year, it’s to start early! We had a few hairy moments last year, and a couple of later nights than I would have liked, so this year I want to get the ball moving early.

Why a Drupal Advent Calendar?

For fun, mostly!

But also to promote the great Drupal projects and the people working on them.

It started on a whim in 2022, when I had the idea at the last possible moment, as I was falling asleep on the last day of November. As there wasn’t…

Categories: FLOSS Project Planets

CKEditor: Enhance Your Drupal Experience with the Free CKEditor 5 Plugin Pack

Planet Drupal - Mon, 2024-05-06 06:19
Enhance Drupal with the CKEditor 5 Plugin Pack: all premium and free tools available at no cost for diverse web projects.
Categories: FLOSS Project Planets

The Drop Times: Embracing the Community Spirit: DrupalCon Portland 2024

Planet Drupal - Mon, 2024-05-06 04:56

Today is an exciting day as DrupalCon Portland kicks off. The Drupal community eagerly awaits this event for its wealth of sessions, interactions, and updates. From the highly anticipated Driesnote, where Dries Buytaert shares his latest thoughts and plans, to various featured sessions that delve into specific topics, there's much to look forward to.

At this year’s conference, we're seeing updates on various initiatives and a host of workshops and trainings from different organizations. We also have the Healthcare Summit and the Return of the Nonprofit Summit, a great opportunity for Drupal users working in the nonprofit sector to connect and learn from one another.

Recognizing the need to continuously attract new talents to the Drupal community, DrupalCon 2024 has made significant efforts to reach out to students. This includes targeted advertising to local student communities and focusing on the career-enhancing opportunities at the DrupalCon job fair. Mentorship programs, resume help, and a special student discount ticket priced at only $50 are also included.

Another exciting addition this year is the community-designed DrupalCon T-shirt. The Drupal Association ran a contest for the official T-shirt design, receiving many creative entries. The winning design will be announced at the event and featured on the free attendee T-shirt.

Now, let’s look back at what we covered last week:

Alka Elizabeth, sub-editor, TDT, sat down with Thor Andre Gretland, the dynamic Head of Sales and Business Advisor at Frontkom. They discussed the exciting synergies between Gutenberg and Drupal. Thor shared his extensive knowledge about the groundbreaking Drupal Gutenberg project. During this discussion, it was revealed that the Frontkom team has four updates for the community, including major enhancements that will be part of the Drupal Gutenberg 4.0 release, set to be unveiled here at DrupalCon Portland 2024.

Additionally, Alka Elizabeth talked with Angie Byron, the Community Director at Aiven and a highly respected figure in the open-source community. Throughout their conversation, Angie shared her experiences and the pivotal decisions that have shaped her career. She discussed her challenges and transformations, such as introducing automated testing in Drupal, her leadership roles in various community projects, and her advocacy for diversity and inclusion within tech communities. Dive into the interview here

Last week, I had the opportunity to share insights directly from the Drupal Initiative Lead keynote speakers at DrupalCon Portland. Among the speakers were Cristina Chumillas, Janez Urevc, Ted Bowman, Fran Garcia-Linares, Jürgen Haas, and Mateu Aguiló Bosch, who were all set to provide valuable updates and insights on various aspects of Drupal and its ecosystem.

LagoonCon Portland 2024 is set to take place on May 6, 2024. Following its successful debut in Pittsburgh last year, this free event hosted by amazee.io is designed for developers and tech leaders to dive into discussions about Lagoon. Alka Elizabeth, sub-editor at The Drop Times, has penned an article featuring detailed discussions with the LagoonCon Portland speakers: Toby Bellwood, the Lagoon Product Lead at amazee.io; Christoph Weber, Solutions Architect at Pronovix; and Bryan Gruneberg, CEO and CTO at Workshop Orange.

Norah Medlin brings over two decades of software development immersion to Stanford WebCamp, offering a strategic blueprint for maximizing project success. In her session on forging high-value partnerships and driving transformative change, she unveils essential insights for navigating the complexities of modern project management. 

Linux Foundation has launched the 2024 World of Open Source: Global Spotlight Survey, designed to examine the nuances of open-source technology in different regions and industries.

DrupalCon Barcelona 2024 is calling for submissions of case studies highlighting exceptional Drupal website projects developed between October 2023 and September 2024. The Drupal Camp Pune 2024 organisers seek talented individuals passionate about technology and community building to volunteer and add a unique touch to the array of planned activities.

DevOps professionals in the making can take the Pantheon WebOps certification exam onsite at the DrupalCon Portland venue. The exam registration is free of charge.

Additionally, Carlos O. launched the IXP Fellowship initiative to help aspiring Drupal developers bridge the skills gap. The program sought community feedback through a survey to define competencies for inexperienced developers aiming to become junior professionals.

Other generic updates are here: Mautic, for the first time, has become a mentor organization for the coveted Google Summer of Code project for 2024. Learn about their winning projects here. Drupal is introducing an experimental navigation module in version 10.3 prior to introducing it officially in Drupal 11. Selwyn Polit’s online reference book, "Drupal at your fingertips," is now officially listed on Drupal.org. Last year’s Drupal Pitch-burgh contest-winning project, Drupal API Client, has released version 1.0. A new Drupal podcast has begun: the Platform.sh DevRel Team has started a new podcast series called 'ChangeMode', hosted by Marine Gandy.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

Stay tuned with The Drop Times. We are here to ensure you don't miss out on anything happening at the event. Our volunteers are on the ground to keep you updated with interviews, featured articles, coverage of sessions, and short video interviews with attendees and speakers.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Kazima Abbas
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

Zato Blog: What is an API gateway?

Planet Python - Mon, 2024-05-06 03:43
What is an API gateway? 2024-05-06, by Dariusz Suchojad

In this article, we are going to use Zato in its capacity as a multi-protocol Python API gateway - we will integrate a few popular technologies, accepting requests sent over protocols commonly used in frontend systems, enriching and passing them to backend systems and returning responses to the API clients using their preferred data formats. But first, let's define what an API gateway is.

Clearing up the terminology

Although we will be focusing on complex API integrations later on today, to understand the term API gateway we first need to give proper consideration to the very term gateway.

What comes to mind when we hear the word "gateway", and what is correct etymologically indeed, is an opening in an otherwise impermeable barrier. We use a gateway to access that which is in other circumstances inaccessible for various reasons. We use it to leave such a place too.

In fact, both "gate" and the verb "to go" stem from the same basic root and that, again, brings to mind a notion of passing through space specifically set aside for the purpose of granting access to what normally would be unavailable. And once more, when we depart from such an area, we use a gateway too.

From the perspective of its true intended purpose, a gateway letting everyone in and out as they are would amount to little more than a hole in a wall. In other words, a gateway without a gate is not the whole story.

Yes, there is undoubtedly an immense aesthetic gratification to be drawn from being close to marvels of architecture that virtually all medieval or Renaissance gates and gateways represent, but we know that, nowadays, they do not function to the fullest of their capacities as originally intended.

Rather, we can intuitively say that a gateway is in service as a means of entry and departure if it lets its operators achieve the following, though not necessarily all at the same time, depending on one's particular needs:

  • Telling arrivals where they are, including projection of might and self-confidence
  • Confirming that arrivals are who they say they are
  • Checking if their port of origin is friendly or not
  • Checking if they are allowed to enter that which is protected
  • Directing them to specific areas behind the gateway
  • Keeping a long term and short term log of arrivals
  • Answering potential questions right by the gate, if answers are known to gatekeepers
  • Cooperating with translators and coordinators that let arrivals make use of what is required during their stay

We can now recognize that a gateway operates on the border of what is internal and external and in itself, it is a relatively narrow, though possibly deep, piece of an architecture. It is narrow because it is only through the gateway that entry is possible but it may be deeper or not, depending on how much it should offer to arrivals.

We also keep in mind that there may very well be more than a single gateway in existence at a time, each potentially dedicated to different purposes, some overlapping, some not.

Finally, it is crucial to remember that gateways are structural, architectural elements - what a gateway should do and how it should do it is a decision left to architects.

With all of that in mind, it is easy to transfer our understanding of what a physical gateway is into what an API one should be.

  • API clients should be presented with clear information that they are entering a restricted area
  • Source IP addresses or their equivalents should be checked and requests rejected if an IP address or equivalent information is not among the allowed ones
  • Usernames, passwords, API keys and similar representations of what they are should be checked by the gateway
  • Permissions to access backend systems should be checked seeing as not every API client should have access to everything
  • Requests should be dispatched to relevant backend systems
  • Requests and responses should be logged in various formats, some meant to be read by programs and applications, some by human operators
  • If applicable, responses can be served from the gateway's cache, taking the burden off the shoulders of the backend systems
  • Requests and responses can be transformed or enriched which potentially means contacting multiple backend systems before an API caller receives a response

We can now define an API gateway as an element of a systems architecture that is certainly related to security, permissions and granting or rejecting access to backend systems, applications and data sources. On top of it, it may provide audit, data transformation and caching services. The definition will always be fluid to a degree, depending on an architect's vision, but this is what can be expected from it nevertheless.

Having defined what an API gateway is, let's create one in Zato and Python.

Clients and backend systems

In this article, we will integrate two frontend systems and one backend application. Frontend ones will use REST and WebSockets whereas the backend one will use AMQP. Zato will act as an API gateway between them all.

Not granting frontend API clients direct access to backend systems is usually a good idea because the dynamics involved in creation of systems on either side are typically very different. But they still need to communicate and hence the usage of Zato as an API gateway.

Python code

First, let's show the Python code that is needed to integrate the systems in our architecture:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class APIGateway(Service):
    """ Dispatches requests to backend systems, enriching them along the way.
    """
    name = 'api.gateway'

    def handle(self):

        # Enrich incoming request with metadata ..
        self.request.payload['_receiver'] = self.name
        self.request.payload['_correlation_id'] = self.cid
        self.request.payload['_date_received'] = self.time.utcnow()

        # .. AMQP configuration ..
        outconn = 'My Backend'
        exchange = '/incoming'
        routing_key = 'api'

        # .. publish the enriched payload to an AMQP broker ..
        self.out.amqp.send(self.request.payload, outconn, exchange, routing_key)

        # .. and return a response to our API client.
        self.response.payload = {'result': 'OK, data accepted'}

There are a couple of points of interest:

  • The gateway service enriches incoming requests with metadata but it could very well enrich it with business data too, e.g. it could communicate with yet another system to obtain required information and only then pass the request to the final backend system(s)

  • In its current form we send all the information to AMQP brokers only but we could just as well send it to other systems, possibly modifying the requests along the way

  • The code is very abstract and all of its current configuration could be moved to a config file, Redis or another data source to make it even more high-level

  • Security configuration and other details are not declared directly in the body of the gateway service but they need to exist somewhere - we will describe it in the next section

Configuration

In Zato, API clients access the platform's services using channels - let's create a channel for REST and WebSockets then.

First REST:

Now WebSockets:

We create a new outgoing AMQP connection in the same way:

Using the API gateway

At this point, the gateway is ready - you can invoke it from REST or WebSockets and any JSON data it receives will be processed by the gateway service, the AMQP broker will receive it, and API clients will have replies from the gateway as JSON responses.

Let's use curl to invoke the REST channel with JSON payload on input:

$ curl --data-binary @request.json http://api:<password-here>@localhost:11223/api/v1/user ; echo
{"result": "OK, data accepted"}
$

Taken together, the channels and the service allowed us to achieve this:

  • Multiple API clients can access the backend AMQP systems, each client using its own preferred technology
  • Client credentials are checked on input, before the service starts to process requests (authentication)
  • It is possible to assign RBAC roles to clients, in this way ensuring they have access only to selected parts of the backend API (authorization)
  • Message logs keep track of data incoming and outgoing
  • Responses from channels can be cached which lessens the burden put on the shoulders of backend systems
  • Services accepting requests are free to modify, enrich and transform the data in any way required by business logic. E.g., in the code above we only add metadata but we could as well reach out to other applications before requests are sent to the intended recipients.

We can take it further. For instance, the gateway service is currently completely oblivious to the actual content of the requests.

But, since we just have a regular Python dict in self.request.payload, we can with no effort modify the service to dispatch requests to different backend systems, depending on what the request contains or possibly what other backend systems decide the destination should be.
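
For illustration, such content-based dispatch could look something like this inside handle(); the connection and routing-key names below are invented for the sketch, not part of the configuration described above:

# Hypothetical content-based dispatch - the names are made up
msg_type = self.request.payload.get('type')

if msg_type == 'order':
    outconn, routing_key = 'Orders Backend', 'orders'
else:
    outconn, routing_key = 'Default Backend', 'api'

self.out.amqp.send(self.request.payload, outconn, '/incoming', routing_key)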

Such additional logic is specific to each environment or project, which is why it is not shown here, and it is also why we end the article at this point. The central part of it all is already done; the rest is only a matter of customization and plugging in more channels for API clients or outgoing connections for backend systems.

Finally, it is perfectly fine to split access to systems among multiple gateways - each may handle requests from selected technologies on the one hand but on the other hand, each may use different caching or rate-limiting policies. If there is more than one, it may be easier to configure such details on a per-gateway basis.

Next steps More blog posts
Categories: FLOSS Project Planets

The Drop Times: Women in Drupal Luncheon at DrupalCon Portland 2024: A Convergence for Change

Planet Drupal - Mon, 2024-05-06 02:24
Delve into the experiences of women in technology at DrupalCon Portland 2024 with the "Women in Drupal Luncheon" on May 7. This crucial session features Sebastianna Skalisky, Laura Johnson, Jenna Harris, and Shanice Ortiz from Four Kitchens, who will discuss overcoming obstacles and fostering inclusion in the tech sector. Engage with these influential leaders as they share strategies for navigating a male-dominated industry, enhancing female leadership, and advocating for systemic change. Join for a compelling dialogue to empower women and expand their impact on technology.
Categories: FLOSS Project Planets

My work in KDE for April 2024

Planet KDE - Sun, 2024-05-05 20:00

Hello and sorry about the late post. I’ve been busy moving and other stuff that’s gotten in the way. I will also be idling the beginning of this month, so the next update may be shorter too.

Anyway, let’s get into the changes!

Kensa

I originally wanted to bring some of the “power-user” features from KSysGuard into the new System Monitor. I was rightfully turned down, both out of doubt that most people would find them useful and to prevent feature creep.

They suggested creating a separate application instead. So Kensa, the detailed process viewer, is born! It’s still mostly copy & pasted from old KSysGuard/libksysguard code, but updated for Qt6/KF6. And to make it clear, it’s very clearly modeled after Windows’ Process Explorer.

I have the general tab for viewing some basic information about the process. Said tab also includes an “Open With” so you can quickly open the executable in a hex viewer like Okteta.

The general tab

The memory maps tab shows what the process has mapped, most notably which shared libraries it’s currently using.

The memory maps tab

The open files tab makes its return as well - extremely useful.

The open files tab

And one of my own design, an environment variables tab. In the future I want to add a “Strings” tab for quickly viewing the executable strings and the ones currently in memory.

The environment tab

Note that Kensa is very early in development and not user-friendly. You currently have to give it a PID manually, and it lacks a process list.

Tokodon

Feature The window title now corresponds to the current page. This makes it easier to identify from a task bar, too. We know the title is duplicated inside the application as well (on desktop), but that’s Kirigami’s design decision. 24.05

Feature If your server doesn’t provide a human-readable error message, the network error is displayed instead. This is useful to see if the DNS lookup failed or some other network-related reason the server is inaccessible. 24.05

Feature Support for QtMultimedia has been added in situations where your system lacks or cannot use libmpv. This is preparatory work for a Windows version. 24.05

Feature In the same vein as the patch above, QtWebView is now optional and I included even more authentication fixes. Previously I enforced an in-app web view to facilitate authentication (compared to the external web browser method or auth code in previous versions.) This was only a stop-gap solution until I had more time to flesh out our authentication system, but now I feel much happier about its current state. 24.05

System Monitor

Bugfix Fix the column configuration dialog being shown too small on the Overview page. 6.0.4

Feature Add the About KDE page to the hamburger menu. 6.1

Bugfix Made sure cell tooltips show up more reliably. 6.1

Feature Added a menu item to copy the current column’s text. This makes System Monitor just as usable as the old KSysGuard for me now, because I tend to copy the command line a lot. (And PIDs.) 6.1

Ruqola

Bugfix Use a better fitting icon for attaching files. The previous icon - when viewed at 16px - turned into something completely different. 2.1.2

PlasmaTube

Feature Added support for viewing a channel’s playlists. 24.05

Feature I also added a Craft blueprint for Android. Note that this is only preliminary and the Android version is nowhere near ready yet. I guess this could be used for a future Windows version too.

Feature I implemented more functionality in the PeerTube backend, so now it’s possible to log in and perform searching. Subscriptions work too, but I’m running into an issue where yt-dlp fails to pull certain videos. If you know anything about using yt-dlp with PeerTube, please let me know if there’s a workaround. 24.05

Feature Added a new feature to import/export OPML subscriptions. This only works for YouTube channels at the moment, PeerTube support TBA. 24.05

Gwenview

Feature I changed the old save bar to use the standard KMessageWidget widget. This isn’t just for looks, it fixes a lot of odd visual bugs and removes a ton of cruft in the process. 6.1.0

The new Gwenview save bar. If it doesn’t look “out of place”, then my patch did its job!

NeoChat

Bugfix Fixed the share dialog not appearing properly, and improved the keyboard navigation inside of it. 24.05

Frameworks

Bugfix Remove some redundant QML_ELEMENT declarations which in turn reduces runtime warnings. 6.1.0

Bugfix Two KMessageWidget improvements: fixed handling of color palette changes and made the icon label vertically centered. This is for that Gwenview patch. 6.1.0

Android

I once again sat down and fixed a ton of build and runtime issues for our Android applications, and started fixing some of the Qt 6.7 fallout. NeoChat and Tokodon build and run again, and I spent some time ironing out their issues.

That’s all this month!

My work in KDE for March 2024

My Work in KDE

Home
Categories: FLOSS Project Planets

Seth Michael Larson: Backup Game Boy ROMs and saves on Ubuntu

Planet Python - Sun, 2024-05-05 20:00
Published 2024-05-06 by Seth Larson

I'm a big fan of retro video games, specifically the Game Boy Color, Advance, and GameCube collections. The physicality of cartridges, link cables, and accessories before the internet was widely available for gaming has a special place in my heart.

With the recent changes to the App Store to allow emulators (and judging by the influx of issues opened on the Delta emulator GitHub repo) there is a growing interest in playing these games on mobile devices.

So if you're using Ubuntu like me, how can you backup your ROMs and saves?


Using GB Operator with my copy of Pokémon FireRed

What you'll need

To backup data from Game Boy cartridges I used the following software and hardware:

  • Game Boy, GBC, or GBA cartridge
  • GB Operator from Epilogue ($50 USD + shipping)
  • Playback software from Epilogue
  • Epilogue includes a USB-C to USB-A connector, so an adapter may be needed

There are other options for backing up Game Boy ROMs and saves, some of which are less expensive than the GB Operator, but I went with the GB Operator because it explicitly listed Linux support and from watching reviews appeared to provide a streamlined experience.

Getting started with GB Operator and Playback

Download the Playback AppImage for Linux for your CPU architecture. Make the AppImage file executable with:

$ chmod a+x ./Playback.AppImage

If you try to execute this file ($ ./Playback.AppImage) and receive this error:

dlopen(): error loading libfuse.so.2

AppImages require FUSE to run.
You might still be able to extract the contents of this AppImage
if you run it with the --appimage-extract option.
See https://github.com/AppImage/AppImageKit/wiki/FUSE for more information

You'll need to install FUSE on Ubuntu:

$ sudo apt-get install libfuse2

After this you should be able to run Playback:

$ ./Playback.AppImage

From here the application should launch, but even if you have your GB Operator plugged in, it may not be detected. Most likely, your current user doesn't have access to the USB device. Epilogue provides some guides on how to enable access.

After following the above guide and logging in and out for the changes to take effect, your GB Operator should be detected. Connecting a cartridge and navigating to "Data" in the menus provides you with options to "Backup Game" and "Backup Save".

Selecting these options might trigger a crash with the following error when starting the export process:

(Playback.AppImage:44475): Gtk-WARNING **: 15:05:20.886: Could not load a pixbuf from icon theme.
This may indicate that pixbuf loaders or the mime database could not be found.
**
Gtk:ERROR:../../../../gtk/gtkiconhelper.c:494:ensure_surface_for_gicon: assertion failed (error == NULL): Failed to load /usr/share/icons/Yaru/16x16/status/image-missing.png: Unrecognized image file format (gdk-pixbuf-error-quark, 3)
Bail out! Gtk:ERROR:../../../../gtk/gtkiconhelper.c:494:ensure_surface_for_gicon: assertion failed (error == NULL): Failed to load /usr/share/icons/Yaru/16x16/status/image-missing.png: Unrecognized image file format (gdk-pixbuf-error-quark, 3)
Aborted (core dumped)

The fix that worked for me came from a Redditor who talked with Epilogue support and received the following answer:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-svg.so \ ./Playback.AppImage

Running the AppImage with the LD_PRELOAD value set fixed my issue. I've since added this shim to an alias, so I don't have to remember it. Hopefully in a future version of Playback this won't be an issue.
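
If you want the same convenience, an alias along these lines should work (assuming Playback.AppImage lives in your home directory; adjust the path as needed):

alias playback='LD_PRELOAD=/usr/lib/x86_64-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-svg.so $HOME/Playback.AppImage'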

From here backing up your ROMs and saves should work as expected. Happy gaming!

Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.

This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

Intro Blog - Krita GSOC 2024

Planet KDE - Sun, 2024-05-05 19:27

Hi! Welcome to my blog on my project experience with Krita under GSOC!

Quick introduction: I'm Ken, an aspiring software engineer from New York and I'm super excited to work with the awesome Krita team to make Krita even cooler than it already is. Being in the GSOC program means I get to experience open-source development under the supervision of mentors (Tiar and Emmet O'Neill) and I'm sure I will learn a lot from them. This will be where I document my work for KDE's Krita program under GSOC.

And a quick recap of the project: I'm trying to implement an option in Krita called Pixel Perfect, an algorithm that visually smoothens pixel-art curves. This is achieved by smartly removing corner pixels so that lines look less blocky.
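
The real implementation will eventually live in Krita's brush code, but the core corner-removal rule is small enough to sketch. Here's a hypothetical Python sketch of the general idea (not Krita code; the point format and names are my own):

def pixel_perfect(points):
    """Drop L-shaped corner pixels from a stroke so curves stay one pixel wide."""
    result = []
    for p in points:
        if (len(result) >= 2
                # the point before last touches p diagonally ..
                and abs(result[-2][0] - p[0]) == 1
                and abs(result[-2][1] - p[1]) == 1
                # .. and the last point is the orthogonal corner between them
                and (result[-1][0] == p[0] or result[-1][1] == p[1])
                and (result[-1][0] == result[-2][0] or result[-1][1] == result[-2][1])):
            result.pop()  # remove the corner pixel
        result.append(p)
    return result

# A right-angle step like (0,0) -> (0,1) -> (1,1) collapses to (0,0) -> (1,1)
print(pixel_perfect([(0, 0), (0, 1), (1, 1)]))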

As coding officially starts on May 27th, here are a few things I need to straighten out:

  • How do we want the option to be officially implemented? In Aseprite it is just a checkbox on the top. It has been suggested that this option be put under the sharpness widget when creating a brush, alongside a check that sharpness is at 100%, which is the requirement for a pixel brush

  • Where and how exactly do we want to create this algorithm? There are a few files that I noted that deal with brush presets and files that deal with producing actual artwork on the canvas. We need to figure out how brushes work in general and then decide exactly how this algorithm should interact with them.

That's it for now, thanks for reading!

Categories: FLOSS Project Planets

Event Organizers: Connect with Event Organizers at DrupalCon Portland '24

Planet Drupal - Sun, 2024-05-05 18:31

There are many opportunities to connect with fellow event organizers throughout the week at DrupalCon Portland 2024.

All Week

Community Events Booth
Expo Hall - #106
Visit with the EOWG board and other event organizers in the Expo Hall. Be sure to bring some of your stickers and swag to share with the community!

Monday, May 6 - 2:00 - 3:00pm

Event Organizers Roundtable BOF
Room G132, Table 1
Open discussion time for Drupal Event Organizers to gather and for others who are interested in organizing their own events or learning more about the Event Organizers Working Group.

Wednesday, May 8 - 9:00am - 5:00pm

Contribution Day
Room B115-116
Find us to help improve the Community Events page.

Thursday, May 9 - 9:00am - 4:00pm

Community Summit 
Room C120-122
EOWG Board members will present a panel at 9:15am. Join us for a day of community discussions. The summit is open to everyone in the Drupal community, at no additional cost.

Not joining DrupalCon? Join us online any time:

Open Meeting via Slack!
Tuesday, May 14 starting at 16:00 UTC / 12:00 pm ET

  • Initiative Updates
  • Camp Reports
  • DrupalCon Report

Join us to discuss these and other topics in the #event-organizers channel.

If there is something you want to share or discuss related to your camp, meetup, or other events organizer topics either leave a message in the Slack channel or comment on the Drupal.org meeting agenda issue.

Categories: FLOSS Project Planets

Drupal Association blog: De-jargoning Drupal – working with the community to open up Drupal’s terminology

Planet Drupal - Sun, 2024-05-05 18:18

This blog post was written by Promote Drupal committee member Emma Horrell.

If you’re familiar with Drupal, you will have learned its language. You will be familiar with words like Views, Blocks and Paragraphs, and you will appreciate their respective features and functions. But for those new to Drupal, getting to grips with what words mean can involve a steep learning curve.

The start of the Drupalisms issue

User research to improve the Drupal admin UI raised an interesting finding. When the Drupal community was asked to complete an exercise where they grouped terms from the UI into categories that made sense to them, the results showed people were unable to place some of the terms. Further investigation indicated that people weren’t sure what these outlier terms meant (and therefore they had struggled to sort them).

How we speak Drupal matters

We wanted to address this finding from the research because we recognised the importance of people understanding Drupal terminology to enable and empower them to use Drupal confidently and competently. We strongly felt that Drupal language shouldn’t be a barrier to learning for people new to the community, and we wanted to take the opportunity to address this. It felt like the most logical place to start was to identify Drupal terms that caused confusion - ‘Drupalisms’. With a core team of community volunteers behind it, work began on issue 3381734.

First endeavours to identify Drupalisms

With the issue live, we set to work, picking out terms that matched the ‘Drupalisms’ brief – in other words, terms which we felt were confusing to new Drupal users. Our initial list included words like: ‘Node’, ‘Blocks’, ‘Structure’, ‘Entity’, ‘Paragraphs’. As the issue queue gained momentum, more words flooded in, and people expressed opinions on aspects of Drupal terminology – for example, questioning Drupal’s use of ‘Site Builders’, ‘Developers’ and ‘Themers’ to describe the roles and functions of the Drupal community.

Drupalisms BoF at DrupalCon Lille

Attending DrupalCon Lille presented the chance to run a Drupalisms BoF to encourage people from the community to come together and collaborate on this issue. We spent the time looking at our initial list of terms, thinking of more, and thinking about how we would describe them to people new to Drupal. This exercise helped us appreciate the importance of preventing language from being a blocker to new Drupal users and therefore affirmed the importance of the issue.

Establishing regular meetings to work together  

Coming together after the BoF, we reflected on possible ways forward. We established that our original goal to identify Drupalisms was part of something bigger, an impetus to make sure our language opens up Drupal to everyone, to ensure we are being as accessible and inclusive as possible. We all agreed this was not something fixable quickly, and we committed to a regular pattern of fortnightly meetings to work on the issue.

Acknowledging challenges and opportunities

Our initial meetings were spent discussing the issue in more depth. We thought about the varied mental models, expectations, native languages and levels of understanding people bring to Drupal, and how their prior experiences (sometimes with other Content Management Systems) shape their language perceptions. We considered the roles that glossaries, tooltips and micro-copy can play in helping people make sense of our terminology in different contexts. We thought about the past and how historic events had led to the language we use in Drupal today, then looked ahead to the future, thinking about the impact of changing terminology and the possible consequences. We also established that language is an emotive subject, and therefore that making decisions about language should be based on evidence over opinions.

Adopting a methodical approach towards a controlled vocabulary

Ralf Koller suggested an objective, multi-step approach to working on the issue inspired by various sources including Abby Covert’s book ‘How to make sense of any mess’, Sophia Prater’s Object-Oriented-User-Experience (OOUX) methodology and the writings of other content design specialists like Erica Jorgensen. The stages we are working through are as follows:

  1. Noun foraging – making a full list of all the Drupal terms
  2. Evaluating/prioritising the most confusing terms based on responses from the community and also how easy it is to define them
  3. Deciding which terms to include in a controlled vocabulary
  4. Producing translations of these terms
  5. Developing a ‘words we don’t say’ list
  6. Establishing a process to maintain the vocabulary going forwards
Collaborating with other open-source Content Management Systems

Addressing issues of CMS language more broadly, and acknowledging that Drupal is not alone in wanting our vocabulary to be intuitive and easy-to-understand, I’ve reached out to others in the wider open-source CMS community to think of ways we can use our collective force to develop a more consistent approach to CMS terminology. It’s early days but there is interest, and we’re looking to establish a date for an initial meeting.  

Where we are now and how you can help

We’re working together on the first stage of the process - the noun foraging, gathering terms from many sources. Our current list stands at around 400 terms and we know there will be more. If you would like to join in helping us with this piece of work or you are interested to know more, please comment in the issue queue, join the Drupal Slack channel #drupalisms-working-group, message me on Drupal Slack or email me directly at emma.horrell@ed.ac.uk and I can provide more information or add you to the meeting series. You can also read the ongoing meeting notes from the issue queue.

Categories: FLOSS Project Planets

Berlin Goals sprint 2024

Planet KDE - Sun, 2024-05-05 10:00

As you have probably heard, two weeks ago a bunch of KDE contributors gathered in Berlin to attend a sprint to move forward on current KDE community Goals.

I attended the sprint and it was a pleasure to get together with fellow KDE contributors, and discuss our experiences, our plans and aspirations.

Goal Work

My main objective for this Goal was to get automated GUI testing merged for Dolphin. This is important as it paves the way to better testing, accessibility improvements and power-usage measurement, making progress towards the three Goals at once.

I've wanted to get this done since last Akademy; I gave it a try back then, but it wasn't fruitful. This time Harald, the selenium-webdriver-at-spi author, was around, so I would get a chance to poke him to overcome my difficulties.

And indeed Harald was very helpful. It took me a while to properly set up my local testing environment. But then I could update the existing MR by Marco and iron it out slightly to get it merged.

Another objective I had coming to Berlin was improving KIO's CI-testing situation. Currently a bunch of tests are failing, and we don't require tests to pass for KIO. KIO is a very important library for Dolphin; as such, I spend an important part of my contributor time on improving it, so this is quite important to me. I sent a fix for a test, but CI can't really run this kind of test. Those test the launch of programs using systemd, except our tests run inside containers, which limits what tests can do at runtime. We made a little progress, and I learned what needs to be done to fix this case. We need some work on our dev-ops side of things for those ones. A bunch of other tests still need fixing.

Dolphin

And I got busy doing Dolphin reviews. One of particular interest to developers: there will be a git clone dialog in dolphin-plugins. Nikolai Krasheninnikov, the feature contributor, and I are still improving it to be both nice and very practical.

There is another feature I have been reviewing, but it is too early to speak about publicly.

I also tried out NeoChat for the first time, and I was pleased.

Categories: FLOSS Project Planets

Pages