FLOSS Project Planets

Podcast.__init__: Hunting Black Swans With Bees: Catching Up With The Inimitable Russell Keith-Magee

Planet Python - Mon, 2022-05-23 22:34
Summary

Russell Keith-Magee is an accomplished engineer and a fixture of the Python community. His work on the Beeware suite of projects is one of the most ambitious undertakings in the ecosystem and unfailingly forward-looking. With his recent transition to working for Anaconda he is now able to dedicate his full focus to the effort. In this episode he reflects on the journey that he has taken so far, how Beeware is helping to address some of the threats to Python’s long term viability, and how he envisions its future in light of the recent release of PyScript, an in-browser runtime for Python.

Announcements
  • Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host as usual is Tobias Macey and today I’m interviewing Russell Keith-Magee about the latest status of the Beeware project, the state of Python’s black swans, and how the PyScript project ties into his ambitions for world domination
Interview
  • Introductions
  • How did you get introduced to Python?
  • For anyone who hasn’t been graced with the BeeWare vision, can you give the elevator pitch of what it is and why it matters?
  • At PyCon US 2019 you presented a keynote about the various potential threats to the Python language community and its future viability. With the clarity of 3 years hindsight, how has the landscape shifted?
  • What is PyScript and how does it fit into the venn diagram of BeeWare’s objectives and the portents of black swan events (and what is your involvement with it)?
    • How does it differ from the dozens of other "Python in the browser" and "Python transpiled to Javascript" projects that have sprouted over the years?
  • Now that you have been granted the opportunity to dedicate your full attention to BeeWare and build a team to support it, what new potential does that unlock?
  • What are the current areas of focus/challenges that you are spending your time on for the BeeWare project?
  • What are some of the efforts in the BeeWare suite that proved to be dead-ends?
  • What are the most interesting, innovative, or unexpected ways that you have seen the BeeWare suite/PyScript used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on BeeWare?
  • When is BeeWare the wrong choice?
  • What do you have planned for the future of BeeWare/PyScript/Python/world domination?
Keep In Touch
Picks
Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA

Categories: FLOSS Project Planets

Akademy 2022 Call for Participation is open

Planet KDE - Mon, 2022-05-23 17:17
The Call for Participation for Akademy is officially open!

...and it closes relatively soon: Sunday, the 12th of June 2022!

You can find more information and submit your talk abstract here: https://akademy.kde.org/2022/cfp

If you have any questions or would like to speak to the organizers, please contact akademy-team@kde.org
Categories: FLOSS Project Planets

Community Working Group posts: Evaluating Drupal's Code of Conduct

Planet Drupal - Mon, 2022-05-23 16:25

Over the past couple of years, the Drupal Community Working Group has talked about reviewing and possibly updating the existing Drupal Code of Conduct. The process has had several starts but, mainly due to limited contributor bandwidth, hasn't gained much traction to this point.

As reported in 2019, 

The current Drupal Code of Conduct was adopted in 2010 and last revised in 2014. Over the last two years, the CWG has received consistent feedback from the community that the Drupal Code of Conduct should be updated so that it is clearer and more actionable.

In the two years since, the Community Working Group has had several meetings focused on the Code of Conduct. They have identified some tasks, goals, and challenges including:

  • Reviewing Codes of Conduct from other communities.
  • Reviewing the findings from the community discussions led by Whitney Hess in 2017. 
  • Determining how best to gather and utilize community feedback.

Recently, a group of Community Health Team members met to kickstart the process once again in hopes of finding the proper balance of tasks and timeline to determine what, if any, updates to the Code of Conduct are necessary. Community Health Team members at the first meeting include: 

The group, led by George DeMet, is still in the initial planning stages, trying to figure out a rough outline of tasks to achieve the goal. Our first activity was to work with a list of tasks and challenges to refine and organize them into logical groups, such as "project goals", "what are we missing", and "action items".


Snippet of the CoC update jamboard

We are still in the very early part of this effort, but are determined to keep momentum by having bi-weekly meetings, along with a blog post (like this) after each meeting to keep the community informed on the progress that’s being made and share opportunities for community participation.

Questions? Concerns? Let us know in the comments below!
 

Categories: FLOSS Project Planets

Kraft Version 0.98

Planet KDE - Mon, 2022-05-23 16:00

We are happy to announce the new Kraft version 0.98 that is available for download.

Kraft is software for the Linux desktop that handles quotes and invoices in small businesses.

This version is packed with bugfixes as well as new features. The most important fixes are in the area of catalog handling: based on bug reports from the community, the catalog window was completely reworked. Drag and drop, sorting, and reordering of catalog items now work properly and as planned.

Another big addition is the support of XRechnung. XRechnung is an e-invoicing format that is becoming more and more mandatory in the governmental area in Germany. We are very proud that Kraft is the first open source office tool that supports that standard in a user-friendly way. All invoices can now also be exported in the XRechnung XML format.

Besides these two big improvements, there are lots of others. For example, the user manual was further improved and is now also available in Dutch. A lot of other smaller but nonetheless important improvements and fixes make version 0.98 a valuable release.

We wish you a lot of fun with this new, improved version of Kraft!

Categories: FLOSS Project Planets

Arturo Borrero González: Toolforge Jobs Framework

Planet Debian - Mon, 2022-05-23 15:19

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

This post continues the discussion of Toolforge updates as described in a previous post. Every non-trivial task performed in Toolforge (like executing a script or running a bot) should be dispatched to a job scheduling backend, which ensures that the job is run in a suitable place with sufficient resources.

Jobs can be scheduled synchronously or asynchronously, continuously, or simply executed once. The basic principle of running jobs is fairly straightforward:

  • You create a job from a submission server (usually login.toolforge.org).
  • The backend finds a suitable execution node to run the job on, and starts it once resources are available.
  • As it runs, the job will send output and errors to files until the job completes or is aborted.

So far, if a tool developer wanted to work with jobs, the Toolforge Grid Engine backend was the only suitable choice. This is despite the fact that Kubernetes supports this kind of workload natively. The truth is that we never prepared our Kubernetes environment to work with jobs. Luckily that has changed.

We no longer want to run Grid Engine

In a previous blog post we shared information about our desired future for Grid Engine in Toolforge. Our intention is to discontinue our usage of this technology.

Convenient way of running jobs on Toolforge Kubernetes

Some advanced Toolforge users really wanted to use Kubernetes. They were aware of the lack of abstractions or helpers, so they were forced to use the raw Kubernetes API. Eventually, they figured everything out and managed to succeed. The result of this move came in the form of documentation on Wikitech and a few dozen jobs running on Kubernetes for the first time.

We were aware of this, and this initiative was much in sync with our ultimate goal: to promote Kubernetes over Grid Engine. We rolled up our sleeves and started thinking of a way to abstract and make it easy to run jobs without having to deal with lots of YAML and the raw Kubernetes API.

There is a precedent: the webservice command does exactly that. It hides all the details behind a simple command line interface to start/stop a web app running on Kubernetes. However, we wanted to go even further, be more flexible and prepare ourselves for more situations in the future: we decided to create a complete new REST API to wrap the jobs functionality in Toolforge Kubernetes. The Toolforge Jobs Framework was born.

Toolforge Jobs Framework components

The new framework is a small collection of components. As of this writing, we have three:

  • The REST API — responsible for creating/deleting/listing jobs on the Kubernetes system.
  • A command line interface — to interact with the REST API above.
  • An emailer — to notify users about their jobs activity in the Kubernetes system.

There were a couple of challenges that weren’t trivial to solve. The authentication and authorization against the Kubernetes API was one of them. The other was deciding on the semantics of the new REST API itself. If you are curious, we invite you to take a look at the documentation we have in wikitech.
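To make the idea more concrete, here is a minimal, hypothetical sketch of how a tool could talk to such a jobs REST API from Python using the requests library. The base URL, endpoint paths, payload field names, and certificate-based authentication shown here are assumptions made purely for illustration; the real API and its usage are documented on Wikitech.

import requests

# Hypothetical base URL -- not the real Toolforge endpoint.
API = "https://jobs.example.toolforge.org/api/v1"
# Kubernetes-style client certificate auth (an assumption for this sketch).
CERT = ("client.crt", "client.key")

# Create a one-off job (field names are illustrative, not the real schema).
resp = requests.post(
    f"{API}/jobs/",
    json={
        "name": "daily-update",
        "cmd": "./venv/bin/python update.py",
        "imagename": "python3.9",
    },
    cert=CERT,
)
resp.raise_for_status()

# List the tool's current jobs.
for job in requests.get(f"{API}/jobs/", cert=CERT).json():
    print(job)

In practice, the command line interface listed above wraps this kind of call, so tool authors never have to deal with the HTTP layer directly.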

Open beta phase

Once we gained some confidence with the new framework, in July 2021 we decided to start a beta phase. We suggested some advanced Toolforge users try out the new framework. We tracked this phase in Phabricator, where our collaborators quickly started reporting some early bugs, helping each other, and creating new feature requests.

Moreover, when we launched the Grid Engine migration from Debian 9 Stretch to Debian 10 Buster we took a step forward and started promoting the new jobs framework as a viable replacement for the grid. Some official documentation pages were created on wikitech as well.

As of this writing the framework continues in beta phase. We have solved basically all of the most important bugs, and we have already started thinking about how to address the remaining feature requests.

We haven't yet established the criteria for leaving the beta phase, but it would be good to have:

  • Critical bugs fixed and most feature requests addressed (or at least somehow planned).
  • Proper automated test coverage. We can do better on testing the different software components to ensure they are as bug free as possible. This also would make sure that contributing changes is easy.
  • REST API swagger integration.
  • Deployment automation. Deploying the REST API and the emailer is tedious. This is tracked in Phabricator.
  • Documentation, documentation, documentation.
Limitations

One of the limitations we have kept in mind since early on in the development of this framework is the lack of support for mixing different programming languages or runtime environments in the same job.

Solving this limitation is currently one of the WMCS team priorities, because this is one of the key workflows that was available on Grid Engine. The moment we address it, the framework adoption will grow, and it will pretty much enable the same workflows as in the grid, if not more advanced and featureful.

Stay tuned for more upcoming blog posts with additional information about Toolforge.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #348 - A Website’s Carbon Footprint

Planet Drupal - Mon, 2022-05-23 14:00

Today we are talking about A Website’s Carbon Footprint with Gerry McGovern.

www.talkingDrupal.com/348

Topics
  • Earth day
  • What is a carbon footprint
  • How do websites contribute
  • How can you calculate your site’s impact
  • Cloud vs dedicated hosting
  • How do you determine a vendor’s impact
  • Small sites VS FAANG
  • How to improve your site
Resources Guests

Gerry McGovern - gerrymcgovern.com @gerrymcgovern

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan John Picozzi - www.epam.com @johnpicozzi Chris Wells - redfinsolutions.com - @chrisfromredfin

MOTW

Config Pages At some point I was tired of creating custom pages using menu and form API, writing tons of code just to have a page with an ugly form where a client can enter some settings, and as soon as a client wants to add some interactions to the page (drag&drop, ajax etc) things starts to get hairy. The same story was with the creation of dedicated CT just to theme a single page (like homepage) and explaining why you can only have 1 node of this type, or force it programmatically.

Categories: FLOSS Project Planets

DrupalEasy: Replacing Docker with Colima for use with DDEV - first impressions

Planet Drupal - Mon, 2022-05-23 12:15

Back in March 2022, the DDEV team announced support for Colima, an open-source Docker Desktop replacement for Mac OS X. Given that Colima is open-source, that Docker Desktop has new license terms, and that Colima offers apparent performance gains, it seemed like a no-brainer to give it a spin.

First off, it's almost a drop-in replacement for Docker Desktop. I say almost for one reason, as any existing DDEV projects will need to have their databases reimported. In other words, if you have an existing project up-and-running in DDEV, then add Colima, then restart the project, your database won't be found. The easy fix is to first export your database, then start Colima, then import it. Easy.

The reason for this (as I understand it) is because Colima uses the open-source Lima project for managing its containers and volumes (the latter being where DDEV project databases are stored). 

For those of us that are casual Docker users (outside of DDEV), one confusing bit is that we still need the open-source docker client installed - which is installed by default with Docker Desktop for Mac. The docker client is used on the command line to connect to the installed Docker provider (Colima or Docker Desktop for Mac, in this context). If you want to go 100% pure Colima and you uninstall Docker Desktop for Mac, you'll need to install and configure the Docker client independently. Full installation instructions can be found on the DDEV docs site

If you choose to keep using both Colima and Docker Desktop, then when issuing docker commands from the command line you'll first need to specify which set of containers you want to work with - Docker Desktop's or Colima's. More on this in the next section.

How I use Colima

I currently have some local projects using Docker and some using Colima. Once I understood the basics, it's not too difficult to switch between. 

Installing Colima alongside Docker Desktop for Mac and starting a fresh Drupal 9 site
  • To get started, I first installed Colima using Homebrew: "brew install colima"
  • "ddev poweroff" (just to be safe)
  • Next, I started Colima with "colima start --cpu 4 --memory 4". The --cpu and --memory bits only have to be done once. After the first time, only "colima start" is necessary.
  • Next, I spun up a new Drupal 9 site via "ddev config", "ddev start", etc... (It is recommended to enable DDEV's Mutagen functionality to maximize performance.)
Switching between a Colima DDEV project and a Docker Desktop for Mac DDEV project
  • "ddev poweroff"
  • "colima stop"
  • "docker context use default" - this is the command I alluded to above that tells the Docker client which containers we want to work with. "default" is the traditional Docker Desktop for Mac containers. When "colima start" is run, it automatically switches docker to the "colima" context.
  • "ddev start" (on an existing project I had previously set up while running Docker Desktop for Mac).

Technically, starting and stopping Colima isn't necessary, but the "ddev poweroff" command when switching between the two contexts is. 

Also - recent versions of Colima revert the Docker context back to "default" when Colima is stopped, so the "docker context use default" command is no longer necessary. Regardless, I use "docker context show" to verify that either the "default" (Docker Desktop for Mac) or "colima" context is in use. Basically, the "context" refers to which Docker provider the Docker client will route commands to. 

Summarizing

Overall, I'm liking what I see so far. I haven't run into any issues, and Colima-based sites seem a bit snappier (especially when DDEV's Mutagen functionality is enabled). I definitely foresee myself migrating project sites to Colima over the next few weeks.

Thanks to Randy Fay for reviewing this blog post. Randy is the lead maintainer of the DDEV project. If you use DDEV, then you should support the DDEV project!

Categories: FLOSS Project Planets

Łukasz Langa: Weekly Report, May 16 - 22

Planet Python - Mon, 2022-05-23 10:16

I need to return to those logs, it’s been a while since I made one. This one isn’t particularly exciting but puts me back on track!

Categories: FLOSS Project Planets

PyCon: PyCon US 2022 Recap and Recording Announcement

Planet Python - Mon, 2022-05-23 10:00
We would like to announce that the first group of PyCon US 2022 recordings, the Keynotes and Lightning Talks, is now available on our YouTube channel. Be sure to subscribe to our channel for notifications of new content. We will be publishing more talks daily as they become available.

The recordings from the Typing Summit are also available to view on the PyCon US channel. The Maintainers Summit recordings are available for you to watch on their channel here.
As we wrap up PyCon US 2022, we can’t express enough gratitude for all who attended this year’s conference, whether in-person or online. We had an amazing and diverse group of community members join us for PyCon US 2022. By the numbers, we saw a total attendance of 2,422 – with 1,753 attendees joining us in person and 669 joining us online. We couldn’t be more grateful for all who supported the Python ecosystem and helped make PyCon US 2022 a great success!
Here is a comprehensive recap of this year’s PyCon US conference:


Without the attendees, volunteers, speakers, and sponsors who were part of PyCon US and the support of the entire community, the work of the Python Software Foundation would not be possible. You have all helped make our return to an in-person PyCon US a great success!

If you have any questions, please feel free to contact us at pycon-reg@python.org.
Categories: FLOSS Project Planets

Real Python: How to Publish an Open-Source Python Package to PyPI

Planet Python - Mon, 2022-05-23 10:00

Python is famous for coming with batteries included, and many sophisticated capabilities are available in the standard library. However, to unlock the full potential of the language, you should also take advantage of the community contributions at PyPI: the Python Packaging Index.

PyPI, typically pronounced pie-pee-eye, is a repository containing several hundred thousand packages. These range from trivial Hello, World implementations to advanced deep learning libraries. In this tutorial, you’ll learn how to upload your own package to PyPI. Publishing your project is easier than it used to be. Yet, there are still a few steps involved.

In this tutorial, you’ll learn how to:

  • Prepare your Python package for publication
  • Handle versioning of your package
  • Build your package and upload it to PyPI
  • Understand and use different build systems

Throughout this tutorial, you’ll work with an example project: a reader package that can be used to read Real Python tutorials in your console. You’ll get a quick introduction to the project before going in depth about how to publish this package. Click the link below to access the GitHub repository containing the full source code of reader:

Get Source Code: Click here to get access to the source code for the Real Python Feed Reader that you’ll work with in this tutorial.

Get to Know Python Packaging

Packaging in Python can seem complicated and confusing for both newcomers and seasoned veterans. You’ll find conflicting advice across the Internet, and what was once considered good practice may now be frowned upon.

The main reason for this situation is that Python is a fairly old programming language. Indeed, the first version of Python was released in 1991, before the World Wide Web became available to the general public. Naturally, a modern, web-based system for distribution of packages wasn’t included or even planned for in the earliest versions of Python.

Instead, Python’s packaging ecosystem has evolved organically over the decades as user needs became clear and technology offered new possibilities. The first packaging support came in the fall of 2000, with the distutils library being included in Python 1.6 and 2.0. The Python Packaging Index (PyPI) came online in 2003, originally as a pure index of existing packages, without any hosting capabilities.

Note: PyPI is often referred to as the Python Cheese Shop in reference to Monty Python’s famous Cheese Shop sketch. To this day, cheeseshop.python.org redirects to PyPI.

Over the last decade, many initiatives have improved the packaging landscape, bringing it from the Wild West and into a fairly modern and capable system. This is mainly done through Python Enhancement Proposals (PEPs) that are reviewed and implemented by the Python Packaging Authority (PyPA) working group.

The most important documents that define how Python packaging works are the following PEPs:

  • PEP 427 describes how wheels should be packaged.
  • PEP 440 describes how version numbers should be parsed.
  • PEP 508 describes how dependencies should be specified.
  • PEP 517 describes how a build backend should work.
  • PEP 518 describes how a build system should be specified.
  • PEP 621 describes how project metadata should be written.
  • PEP 660 describes how editable installs should be performed.

You don’t need to study these technical documents. In this tutorial, you’ll learn how all these specifications come together in practice as you go through the process of publishing your own package.

For a nice overview of the history of Python packaging, check out Thomas Kluyver’s presentation at PyCon UK 2019: Python packaging: How did we get here, and where are we going? You can also find more presentations at the PyPA website.

Create a Small Python Package

In this section, you’ll get to know a small Python package that you can use as an example that can be published to PyPI. If you already have your own package that you’re looking to publish, then feel free to skim this section and join up again at the next section.

The package that you’ll see here is called reader. It can be used both as a library for downloading Real Python tutorials in your own code and as an application for reading tutorials in your console.

Note: The source code as shown and explained in this section is a simplified—but fully functional—version of the Real Python feed reader. Compared to the version currently published on PyPI, this version lacks some error handling and extra options.

First, have a look at the directory structure of reader. The package lives completely inside a directory that can be named anything. In this case, it’s named realpython-reader/. The source code is wrapped inside an src/ directory. This isn’t strictly necessary, but it’s usually a good idea.

Note: The use of an extra src/ directory when structuring packages has been a point of discussion in the Python community for years. In general, a flat directory structure is slightly easier to get started with, but the src/-structure provides several advantages as your project grows.

The inner src/reader/ directory contains all your source code:

realpython-reader/
│
├── src/
│   └── reader/
│       ├── __init__.py
│       ├── __main__.py
│       ├── config.toml
│       ├── feed.py
│       └── viewer.py
│
├── tests/
│   ├── test_feed.py
│   └── test_viewer.py
│
├── LICENSE
├── MANIFEST.in
├── README.md
└── pyproject.toml

The source code of the package is in an src/ subdirectory together with a configuration file. There are a few tests in a separate tests/ subdirectory. The tests themselves won’t be covered in this tutorial, but you’ll learn how to treat test directories later. You can learn more about testing in general in Getting Started With Testing in Python and Effective Python Testing With Pytest.

Read the full article at https://realpython.com/pypi-publish-python-package/ »


Categories: FLOSS Project Planets

Drupal Association blog: Imre Gmelig Meijling joining the Board of Directors

Planet Drupal - Mon, 2022-05-23 09:53

The Drupal Association is proud to announce our newest Board of Directors member, Imre Gmelig Meijling. Imre will serve out the remainder of the term for Pedro Cambra, who resigned from the board in March 2022 after serving 1.5 years of a 2-year term. We're happy Imre is willing to step up and contribute to the Board this year.

Involvement with Drupal

Imre has been involved with the Drupal community since 2005 and was the Chair of the Board of the Dutch Drupal Association. He is a co-organizer of the Dutch Drupal camp Drupaljam and started the Splash Awards in 2014. He has also been an active contributor to DrupalCon Europe for the past few years as a member of the DrupalCon Europe Advisory Committee. Imre has worked for various agencies in Europe, driving Drupal adoption and raising awareness of Drupal among regional and international brands and communities. Besides being a Drupal volunteer, Imre is co-owner of Drupal agency React Online in The Netherlands.

During the most recent at-large community elections, Imre came in second place, which makes him a natural fill-in for the at-large community elected position. Imre is excited to share his experience and join the Board, where he will help amplify Drupal's position and message across the global community and organizations.  

You can read more about Imre on his candidate page.  

The Drupal Association is a non-profit organization focused on accelerating Drupal, fostering the growth of the Drupal community, and supporting the project’s vision to create a safe, secure, and open web for everyone. Are you using Drupal or are you a Drupal community? Feel free to connect.

Categories: FLOSS Project Planets

Python for Beginners: How To Save a Dictionary to File in Python

Planet Python - Mon, 2022-05-23 09:00

A python dictionary is used to store key-value mappings in a program. Sometimes, we might need to store the dictionary directly in a file. In this article, we will discuss how we can save a dictionary directly to a file in Python.

Save Dictionary to File Using Strings in Python

 To save a dictionary into a file, we can first convert the dictionary into a string. After that, we can save the string in a text file. For this, we will follow the following steps.

  •  First, we will convert the dictionary into a string using the str() function. The str() function takes an object as input and returns its string representation. 
  • After obtaining the string representation of the dictionary, we will open a text file in write mode using the open() function. The open() function takes the file name and the mode as input arguments and returns a file stream object say myFile. 
  • After obtaining the file stream object myFile, we will write the string to the text file using the write() method. The write() method, when invoked on a file object, takes a string as an input argument and writes it to the file.
  • After execution of the write() method, we will close the file stream using the close() method.

By following the above steps, you can save a dictionary to a file in string form. After saving the dictionary to the file, you can verify the file content by opening the file. In the following code, we first save a python dictionary to a file and then read the file back to check its contents.

myFile = open('sample.txt', 'w')
myDict = {'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}
print("The dictionary is:")
print(myDict)
myFile.write(str(myDict))
myFile.close()
myFile = open('sample.txt', 'r')
print("The content of the file after saving the dictionary is:")
print(myFile.read())

Output:

The dictionary is:
{'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}
The content of the file after saving the dictionary is:
{'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}
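Note that read() returns the saved data as a plain string. If you also want to turn that string back into a dictionary object, one convenient option (an extra step beyond the approach above, shown here only as an illustration) is the ast.literal_eval() function from the standard library, which safely evaluates the string representation.

import ast

myFile = open('sample.txt', 'r')
content = myFile.read()
myFile.close()
# Safely parse the saved string back into a dictionary object.
restoredDict = ast.literal_eval(content)
print(restoredDict['Name'])

Here, restoredDict is a regular dictionary again, so restoredDict['Name'] prints Joel.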

Save Dictionary to File in Binary Format in Python

Instead of storing the dictionary in text format, we can store it directly in binary format. For this, we will use the pickle module in Python. To save the dictionary to a file using the pickle module, we will follow the following steps.

  • First, we will open a file in write binary (wb) mode using the open() function. The open() function takes the file name and the mode as input arguments and returns a file stream object say myFile.
  • The pickle module provides us with the dump() method with the help of which, we can save a dictionary in binary format to the file. The dump() method takes an object as its first input argument and a file stream as the second input argument. After execution, it saves the object to the file in binary format. We will pass the dictionary as the first argument and myFile as the second input argument to the dump() method. 
  • After execution of the dump() method, we will close the file using the close() method.

Following is the python code to save a dictionary to a file in python. 

import pickle

myFile = open('sample_file', 'wb')
myDict = {'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}
print("The dictionary is:")
print(myDict)
pickle.dump(myDict, myFile)
myFile.close()

After saving the dictionary in the binary format, we can retrieve it using the load() method in the pickle module. The load() method takes the file stream containing the python object in binary form as its input argument and returns the Python object. After saving the dictionary to file using the dump() method, we can recreate the dictionary from the file as shown below.

import pickle

myFile = open('sample_file', 'wb')
myDict = {'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}
print("The dictionary is:")
print(myDict)
pickle.dump(myDict, myFile)
myFile.close()
myFile = open('sample_file', 'rb')
print("The content of the file after saving the dictionary is:")
print(pickle.load(myFile))

Output:

The dictionary is:
{'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}
The content of the file after saving the dictionary is:
{'Roll': 4, 'Name': 'Joel', 'Language': 'Golang'}

Conclusion

In this article, we have discussed two ways to save a dictionary to a file in python. To know more about dictionaries, you can read this article on dictionary comprehension in python.

The post How To Save a Dictionary to File in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

PyCharm: PyCharm 2022.2 EAP is open!

Planet Python - Mon, 2022-05-23 08:39

We’re announcing the next Early Access Program and we invite you to take part in testing and validating new features that are expected to be included in the PyCharm 2022.2 release. The first PyCharm 2022.2 EAP build brings a number of useful improvements to various parts of the product along with moving the IDE to JBR 17, which will boost IDE performance.

You can download the EAP build from our website, get it from the free Toolbox App, or use snaps if you are using Ubuntu.

Important: EAP builds are not fully tested and might be unstable.

Download PyCharm EAP

Let’s take a closer look at the updates included in this first EAP build.

WSL
Improved WSL support

Back in 2021.1 we added the ability to work with projects stored on the WSL file system without having to copy them to your Windows file system. That same version gave PyCharm the ability to detect the WSL interpreter and configured the PyCharm Terminal to run on WSL. (You can read more about what was originally added in this blog post.)

Now with PyCharm 2022.2, we will be introducing significant improvements for working with WSL. In particular, generating and updating skeletons for WSL 2 interpreters will be faster, which obviously will make working with WSL faster in general.

Moreover, you will be able to set up virtual environments and run Jupyter notebooks on WSL from PyCharm. Stay tuned for more updates about the improved WSL support throughout this EAP.

Python 3.11
Support for PEP 654: Exception groups

With the official release of Python 3.11 approaching, we are starting to implement support for its new features. In this EAP we are adding code insight for the new exception groups and except* operator from PEP 654. 

From now on, PyCharm will warn you about forbidden combinations, like except and except* operators in the same try statement, or continue, break, and return operators inside except* clauses. It will also warn you if you try to catch an ExceptionGroup with an except* clause. We encourage you to give this functionality a try if you are already working with Python 3.11.
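If you want a quick feel for the syntax PyCharm is now checking, here is a small, generic Python 3.11 sketch of exception groups and except*. This illustrates the language feature itself, not code taken from PyCharm.

# Requires Python 3.11 or newer.
def run_tasks():
    # An ExceptionGroup bundles several exceptions raised together.
    raise ExceptionGroup(
        "some tasks failed",
        [ValueError("bad input"), TypeError("wrong type")],
    )

try:
    run_tasks()
except* ValueError as group:
    # group is an ExceptionGroup holding only the matching ValueError(s).
    print("value errors:", group.exceptions)
except* TypeError as group:
    print("type errors:", group.exceptions)

Both except* blocks run here because the group contains a match for each of them, which is exactly the kind of semantics the new code insight helps keep straight.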

User interface
Merge All Project Windows action on macOS

For macOS users, we’ve introduced the ability to merge all open project windows into one, turning them into tabs. This action is available from the Window menu.

New Description field for mnemonic bookmarks

We’ve integrated a Description field into the Add Mnemonic Bookmark dialog so that you can add optional descriptions to your bookmarks right when you create them.

Font size indicator on zoom

When you zoom in on or out from your code within the editor, you will now see an indicator that shows the current font size and an option to revert it back to the default. You can zoom in on and out from the editor by holding ⌘ / Ctrl and using the mouse wheel. To adjust this shortcut, go to Preferences / Settings | Editor | General and, in the Mouse Control section, select Change font size with ⌘ / Ctrl + Mouse Wheel in.

Notable bug fixes:
  • Autoformatting (auto-indentation) for chained methods now works correctly: [PY-28496], [PY-27660].

Those are the main updates for week one. For more details, check out the release notes. Please give the new features a try and provide us with your feedback in the comment section below, on Twitter, or using our issue tracker.

Ready to join the EAP?
Ground rules
  • EAP builds are free to use and require a valid JetBrains account.
  • EAP builds expire 30 days after the build date.
  • You can install an EAP build side by side with your stable PyCharm version.
  • These builds are not fully tested and can be unstable.
  • Your feedback is always welcome. Please use our issue tracker to report any bugs or inconsistencies.
Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Brian Skinn

Planet Python - Mon, 2022-05-23 08:30
This week we welcome Brian Skinn (@btskinn) as our PyDev of the Week! Brian maintains the "from python import logging" RSS feed on Python news / personal blog. Brian is active in the Python community as well. Let's spend some time getting to know Brian better!

Can you tell us a little about yourself (hobbies, education, etc):

Sure! My background is in chemical engineering -- I have a B.S. from Case Western Reserve University and a Ph.D. from MIT. For the last ten years, I've worked at a small business in the Dayton, OH area doing electrochemical process R&D, though I recently made the jump over to the Python world in the form of a developer and services partner relations position at OpenTeams. I'm really excited to start working more directly with Python and open source generally.

For fun, I like to read fiction, play music and board games, and hack on my side projects. I'm a big Dresden Files fan, just finished Scalzi's Kaiju Protection Society, and have been working my way through Jane Austen's novels. I play clarinet, piano and guitar, and sing tenor in a men's quartet.

Why did you start using Python?

I've been programming in various languages since high school (TI-BASIC, a bit of C++ and Java, and a lot of Excel VBA), and I also picked up an interest in quantum chemistry in grad school. Around 2014, I decided that I wanted to try implementing one of the quantum chemical methods I'd been reading about. However, I only really knew Excel VBA well enough to build something substantial, so that's what I started with. It ... was pretty terrible. VBA can do object-oriented programming, technically, but it's awkward and difficult. After a thousand or so lines of code, I realized that I *had* to find something else. I'd been aware of Python through my experience with Linux, so I read up on it a bit, decided it seemed like a promising option, started learning... and have never looked back.

What other programming languages do you know and which is your favorite?

VBA in its various flavors is definitely the other language I know best, and it's been invaluable throughout the engineering portion of my career -- automating all of the boring stuff in Excel, Word and Outlook has been a huge productivity gain. Other than that, I know just enough Javascript, Java and C to be dangerous, but not to build anything substantial with them.

What projects are you working on now?

In open-source land, I'm most actively working on my tool for inspecting/manipulating Sphinx objects.inv files, sphobjinv. Short-term, there are some aspects of the CLI that need improvement; medium-term, I want to look into multiprocessing to speed up a handful of key spots in the internals; long-term, I need to revamp the exceptions model and I want to try to improve the user experience around the core Inventory API.

In terms of other projects, I just cut a first release of jupyter-tempvars, a Jupyter extension that provides easy temporary variables management. I have some tweaks to make to that, some freshening to do on the underlying tempvars library, and I want to (a) package it for conda-forge and (b) adapt it for JupyterLab. I also want to completely overhaul the parser construction model for pent, which extracts structured data (e.g., numerical arrays) from long-form string text.

Which Python libraries are your favorite (core or 3rd party)?

I use pathlib and argparse from the standard library a lot. I recently found pprint (pretty printing) and calendar, and they're hugely helpful. itertools (and the 3rd-party more-itertools) is fantastic for writing concise, readable, functional code.

In terms of 3rd-party packages, I use attrs for most of my projects, even after the introduction of dataclasses -- I like the flexibility of the conversion, validation, etc. features. I've recently picked up BeautifulSoup (beautifulsoup4) and sqlalchemy and am really liking them. The scientific stack (numpy, scipy, pandas, etc.) is key, of course. For tooling, I heavily use Sphinx, pytest, tox, coverage, and flake8, and I've recently started adding pre-commit to my projects. I build packages using setuptools.

How did your pylogging feed come about and how is it doing?

I had done some blogging before, and after having been coding in Python for a couple of years it made sense to restart with a focus on what I was doing in Python. The goals for the blog to date have mostly been split between (i) describing what I've been working on on a relatively high level, and (ii) going into detail on specific technical elements, to try to explain how things work. I also had an early inkling that I might want to switch over to Python from chemical engineering and knew that the portfolio aspect of the blog could be beneficial.

The blog has been languishing for the last year-plus, unfortunately, because my free time for code work has been pretty slim, and I've wanted to focus on moving projects forward, instead of blogging. I hope to wake it back up before too long.

Is there anything else you'd like to say?

If you've ever thought about contributing to an open-source project, but haven't done it yet -- go for it! It's definitely intimidating at first, because there's a lot to learn about the tooling, the processes, the etiquette, and so on. But most maintainers are very welcoming to new contributors, and are happy to guide them through everything.

I would recommend looking into making your first contributions to small- to medium-sized projects, though, and ones with at least a measure of visibility (a dozen or more GitHub stars, say). This will hopefully guide you toward projects with engaged maintainer(s) that will have bandwidth to engage with you in some detail. (I will note, a larger project may still work for this; you can monitor the flow of issues/PRs on the repo, and if new issues and PRs are getting steady engagement from maintainers, then it might work well.)

Be aware that your first contribution doesn't necessarily need to involve code -- clarifying something in the documentation, fixing a typo in the README or contributor's guide, and other buffs to a project are quite valuable, too!

Thanks for doing the interview, Brian!

The post PyDev of the Week: Brian Skinn appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Kdenlive 22.04.1 released

Planet KDE - Mon, 2022-05-23 07:13

The first maintenance release of the 22.04 series is out with two out-of-the-box effect templates: Secondary Color Correction and Shut-off as well as a new Box Blur filter. This version fixes incorrect levels displayed in the audio mixer, timeline preview rendering, thumbnail caching and text alignment in the Titler. There is also a reverse option in same track transitions.

Full log

  • Add ‘reverse’ parameter to transition ‘mix’. Commit.
  • Fix custom effect type sometimes incorrect. Commit.
  • Fix drag incorrectly terminating in icon view. Commit.
  • Fix freeze cause by incorrect duplicate entry in thumbnail cache. Commit.
  • Fix crash trying to drag in empty space in Bin icon view. Commit.
  • Update kdenliveeffectscategory.rc new mlt’s box_blur added to the ‘Blur and Sharpen’ category. Commit.
  • Update CMakeLists.txt adding the new mlt’s Box_Blur. Commit.
  • Add new mlt’s Box_Blur ui. It was not working with the automatic one. Commit.
  • Update secondary_color_correction.xml fixing Transparency default value error. Commit.
  • Fix titler text alignment. Commit.
  • Fix potential deadlock, maybe related to #1380. Commit.
  • Small refactoring of cache get thumbnail. Commit.
  • Fix timeline preview failing when creating a new project. Commit.
  • Timeline preview profiles – remove unused audio parameters, fix interlaced nvenc. Commit.
  • Another set of minor improvements for monitor audio level. Commit.
  • Minor fix in audio levels look. Commit.
  • Ensure all color clips use the RGBA format. Commit.
  • Show dB in mixer tooltip. Commit.
  • Fix audio levels showing incorrect values, and not impacted by master effects. Commit.

The post Kdenlive 22.04.1 released appeared first on Kdenlive.

Categories: FLOSS Project Planets

Django Weblog: The Call for Proposals for DjangoCon US 2022 Is Now Open!

Planet Python - Mon, 2022-05-23 07:00

The DjangoCon 2022 organizers are excited to announce that the first in-person DjangoCon since 2019 is now open for talk submissions: call for proposals! The deadline for submissions is June 10th, 2022 AoE. As long as it’s still June 10th anywhere on earth, you can submit your proposal.

We invite you to submit your proposal no matter your background or experience level with Django. Proposals can be from a wide range of topics; non-Django and community topics are welcome. You can look at our talk schedule from last year for reference.

We fancy first-timers! If you haven’t spoken at a conference or given a tutorial before, this is your invitation to do so. Don’t let the idea that you’re not famous or an expert stop you from submitting. It certainly won’t stop us from selecting your talk or tutorial and it won’t stop the audience from enjoying it!

Plus there are perks! Presenters get free admission to DjangoCon US! Grants to assist with your travel and lodging expenses are available as well. Fill out the Opportunity Grant form by June 10th, 2022. Decision notifications will be sent by July 8, 2022.

For more information on talk and tutorial formats, please check out our speaker information page.

We want everyone attending DjangoCon US to feel safe, welcome, and included. To that end, we have a Code of Conduct for all speakers and attendees.

If you have questions feel free to contact us.

We look forward to your proposals!

Categories: FLOSS Project Planets

KDE Goals Retrospective: Consistency

Planet KDE - Mon, 2022-05-23 06:45

As part of the preparation for the new round of KDE Goals (as described last week), I’ll be interviewing our Goal Champions.

The purpose is to learn what went well and what could have gone better, and to share wisdom with everyone who is thinking about becoming a new Champion.

This week we start with Niccolò Venerandi. Check it out here:

Thank you Niccolò! You can find him on Twitter and YouTube.

See you next week with another retrospective!

Categories: FLOSS Project Planets

Danny Englander: Implementing a React App and Connecting it to a Custom Drupal Block to Pull in Remote API Data

Planet Drupal - Mon, 2022-05-23 01:00

These days, Drupal core out of the box is sufficient for many website use cases and with the advent of Drupal 8 and 9, one can build a highly functional site with minimal contrib module add-ons. In the past few years, decoupled sites have also become quite popular which usually means a separate front-end framework such as Gatsby / React that queries data from a Drupal data endpoint, typically using something like GraphQL as middleware.

But there are also points in between where you might do something like "progressive decoupling", such as a custom block that is built with React and integrated right inside Drupal. At my current job, I was recently tasked with pulling in data from a remote API and rendering it as a custom Drupal block. I decided to use React for this task, utilizing the Axios library, a promise-based HTTP client for the browser and node.js, which is superb at pulling data from remote APIs.

Basic recipe

Here is an outline of the basic "ingredients" I used for this project.

  1. A custom Drupal module, react_jobs_teaser that will contain our React app, a custom block, and a Twig template for the block
  2. The React app pulls in data from the remote API via Axios
  3. Create a custom Drupal block in react_jobs_teaser/src/Plugin/Block/ReactJobsTeaserBlock.php
  4. Create a Twig template for the custom block with an ID that the React app will target via ReactDOM.render()
  5. A Drupal field that is rendered in the theme that the React app will target so as to make the API data contextual for each node
  6. Render the block in a theme template using Twig Tweak
  7. Expose the custom Drupal field data to React in the same theme Twig template
Getting started: the React app

The basic idea here is to build the React app and test it inside its own environment and then wire it up to Drupal for seamless integration. For my project, I used create-react-app which is a quick way to get up and running with a simple one page React site. We pull data from the USA ...

Categories: FLOSS Project Planets

Ulrike Uhlig: How do kids conceive the internet? - part 3

Planet Debian - Sun, 2022-05-22 18:00

I received some feedback on the first part of interviews about the internet with children that I’d like to share publicly here. Thank you! Your thoughts and experiences are important to me!

In the first interview round there was this French girl.

Asked what she would change if she could, the 9 year old girl advocated for a global usage limit of the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones and people should rather spend more time with their children.

To this bit, one person reacted saying that they first laughed when reading her proposal, but then felt extremely touched by it.

Another person reacted to the same bit of text:

That’s just brilliant. We spend so much time worrying about how the internet will affect children while overlooking how it has already affected us as parents. It actively harms our relationship with our children (keeping us distracted from their amazing life) and sets a bad example for them.

Too often, when we worry about children, we should look at our own behavior first. Until about that age (9-10+) at least, they are such a direct reflection of us that it’s frightening…

Yet another person reacted to the fact that many of the interviewees in the first round seemed to believe that the internet is immaterial, located somewhere in the air, while being at the same time omnipresent:

It reminds me of one time – about a dozen years ago, when i was still working closely with one of the city high schools – where i’d just had a terrible series of days, dealing with hardware failure, crappy service followthrough by the school’s ISP, and overheating in the server closet, and had basically stayed overnight at the school and just managed to get things back to mostly-functional before kids and teachers started showing up again.

That afternoon, i’d been asked by the teacher of a dystopian fiction class to join them for a discussion of Feed, which they’d just finished reading. i had read it the week before, and came to class prepared for their questions. (the book is about a near-future where kids have cybernetic implants and their society is basically on a runaway communications overload; not a bad Y[oung]A[dult] novel, really!)

The kids all knew me from around the school, but the teacher introduced my appearance in class as “one of the most Internet-connected people” and they wanted to ask me about whether i really thought the internet would “do this kind of thing” to our culture, which i think was the frame that the teacher had prepped them with. I asked them whether they thought the book was really about the Internet, or whether it was about mobile phones. Totally threw off the teacher’s lesson plans, i think, but we had a good discussion.

At one point, one of the kids asked me “if there was some kind of crazy disaster and all the humans died out, would the internet just keep running? what would happen on it if we were all gone?”

all of my labor – even that grueling week – was invisible to him! The internet was an immaterial thing, or if not immaterial, a force of nature, a thing that you accounted for the way you accounted for the weather, or traffic jams. It didn’t occur to him, even having just read a book that asked questions about what hyperconnectivity does to a culture (including grappling with issues of disparate access, effective discrimination based on who has the latest hardware, etc), it didn’t occur to him that this shit all works to the extent that it does because people make it go.

I felt lost trying to explain it to him, because where i wanted to get to with the class discussion was about how we might decide collectively to make it go somewhere else – that our contributions to it, and our labor to perpetuate it (or not) might actually help shape the future that the network helps us slide into. but he didn’t even see that human decisions or labor played a role it in at all, let alone a potentially directive role. We were really starting at square zero, which wasn’t his fault. Or the fault of his classmates that matter – but maybe a little bit of fault on the teacher, who i thought should have been emphasizing this more – but even the teacher clearly thought of the internet as a thing being done to us not as something we might actually drive one way or another. And she’s not even wrong – most people don’t have much control, just like most people can’t control the weather, even as our weather changes based on aggregate human activity.

I was quite impressed by seeing the internet perceived as a force of nature, so we continued this discussion a bit:

that whole story happened before we started talking about “the cloud”, but “the cloud” really reinforces this idea, i think. not that anyone actually thinks that “the cloud” is a literal cloud, but language shapes minds in subtle ways.

(Bold emphasis in the texts is mine.)

Thanks :) I’m happy and touched that these interviews prompted your wonderful reactions, and I hope that there’ll be more to come on this topic. I’m working on it!

Categories: FLOSS Project Planets
