Feeds
Antoine Beaupré: Major outage with Oricom uplink
The server that normally serves this page, all my email, and many more services was unavailable for about 24 hours. This post explains how and why.
What happened?
Starting February 2nd, I started seeing intermittent packet loss on the network. Every hour or so, the link would go down for one or two minutes, then come back up.
At first, I didn't think much of it because I was away and could blame the crappy wifi or the uplink I was using. But when I came into the office on Monday, the service was indeed seriously degraded. I could barely do videoconferencing calls, as they would cut out after about half an hour.
I opened a ticket with my uplink, Oricom. They replied that it was an issue they couldn't fix on their end, and that someone would need to be on site to fix it.
So, the next day (Tuesday, at around 10EST) I called Oricom again, and they made me do a full modem reset, which involves plugging a pin in a hole for 15 seconds on the Technicolor TC4400 cable modem. Then the link went down, and it didn't come back up at all.
Boom.
Oricom then escalated this to their upstream (Oricom is a reseller of Videotron, who has basically the monopoly on cable in Québec) which dispatched a tech. This tech, in turn, arrived some time after lunch and said the link worked fine and it was a hardware issue.
At this point, Oricom put a new modem in the mail and I started mitigation.
Mitigation
Website
The first thing I did, weirdly, was trying to rebuild this blog. I figured it should be pretty simple: install ikiwiki and hit rebuild. I knew I had some patches on ikiwiki to deploy, but surely those are not a deal breaker, right?
Nope. Turns out I wrote many plugins that still don't ship with ikiwiki, despite having been sent upstream years ago.
So I deployed the plugins inside the .ikiwiki directory of the site in the hope of making things a little more "standalone". Unfortunately, that didn't work either because the theme must be shipped in the system-wide location: I couldn't figure out how to have it bundled with the main repository. At that point I mostly gave up, because I had spent too much time on this and I had to do something about email before it started to bounce.
Email
So I made a new VM at Linode (thanks 2.5admins for the credits) to build a new mail server.
This wasn't the best idea, in retrospect, because it was really overkill: I started rebuilding the whole mail server from scratch.
Ideally, this would be in Puppet and I would just deploy the right profile and the server would be rebuilt. Unfortunately, that part of my infrastructure is not Puppetized, and even if it were, the Puppet server was also down, so I would have had to bring that up first.
At first, I figured I would just make a secondary mail exchanger (MX), to spool mail for longer so that I wouldn't lose it. But I decided against that: I thought it was too hard to make a "proper" MX as it needs to also filter mail while avoiding backscatter. Might as well just build a whole new server! I had a copy of my full mail spool on my laptop, so I figured that was possible.
I mostly got this right: added a DKIM key, installed Postfix, Dovecot, OpenDKIM, and OpenDMARC, glued it all together, and voilà, I had a mail server. Oh, and spampd. Oh, and I needed the training data, oh, and this and... I wasn't done, and it was time to sleep.
The mail server went online this morning, and started accepting mail. I tried syncing my laptop mail spool against it, but that failed because Dovecot generated new UIDs for the emails, and isync correctly refused to sync. I tried to copy the UIDs from the server in the office (which I still had access to locally), but that somehow didn't work either.
But at least the mail was getting delivered and stored properly. I even had the Sieve rules set up so it would get sorted properly too. Unfortunately, I didn't hook that up properly, so those didn't actually get sorted. Thankfully, Dovecot can re-filter emails with the sieve-filter command, so that was fixed later.
At this point, I started looking for other things to fix.
Web, again
I figured I was almost done with the website, might as well publish it. So I installed the Nginx Debian package, got a cert with certbot, and added the certs to the default configuration. I rsync'd my build in /var/www/html and boom, I had a website. The Goatcounter analytics were timing out, but that was easy to turn off.
Resolution
Almost at that exact moment, a bang on the door told me mail was here and I had the modem. I plugged it in and a few minutes later, marcos was back online.
So this was a lot (a lot!) of work for basically nothing. I could have just taken the day off and waited for the package to be delivered. It would definitely have been better to make a simpler mail exchanger to spool the mail and avoid losing it. And in fact, that's what I eventually ended up doing: I converted the Linode server into a mail relay to continue accepting mail while DNS propagates, but without having to sort the mail out of there...
Right now I have about 200 mails in a mailbox that I need to move back onto marcos. Normally, this would just be a simple rsync, but because both servers accepted mail simultaneously, it's going to be simpler to just move those exact mails over. Because Dovecot helpfully names delivered files with the hostname it's running on, it's easy to find those files and transfer them, basically:
rsync -v -n --files-from=<(ssh colette.anarc.at find Maildir -name '*colette*' ) colette.anarc.at: colette/
rsync -v -n --files-from=<(ssh colette.anarc.at find Maildir -name '*colette*' ) colette/ marcos.anarc.at:

Overall, the outage lasted about 24 hours, from 11:00EST (16:00UTC) on 2023-02-07 to the same time today.
Future work
I'll probably keep a mail relay to make those situations more manageable in the future. At first I thought that mail filtering would be a problem, but that happens post queue anyways and I don't bounce mail based on Spamassassin, so back-scatter shouldn't be an issue.
I basically need Postfix, OpenDMARC, and Postgrey. I'm not even sure I need OpenDKIM as the server won't process outgoing mail, so it doesn't need to sign anything, just check incoming signatures, which OpenDMARC can (probably?) do.
Thanks to everyone who supported me through this ordeal, you know who you are (and I'm happy to give credit here if you want to be deanonymized)!
How to report Multiscreen bugs
As announced previously, Plasma 5.27 will have a significantly reworked multiscreen management, and we want to make sure this will be the best LTS Plasma release we’ve had so far.
Of course, this doesn’t mean it will be perfect from day one, and your feedback is really important, as we want to fix any potential issues as fast as they get noticed.
As you know, for our issue tracking we use Bugzilla at this address. We have different products and components that are involved in the multiscreen management.
First, under New bug, choose the “plasma” category. Then there are 4 possible combinations of products and components, depending on the symptoms:
Possible problems:
- The output of the command kscreen-doctor -o looks wrong, such as:
- The listed “priority” is not the one you set in systemsettings
- Geometries look wrong
- Desktops or panels are on the wrong screen
- There are black screens, but it is possible to move the cursor inside them
- Ordinary application windows appear on the wrong screen or get moved to unexpected screens when screens are connected/disconnected
- Some screens are black and it is not possible to move the mouse inside them, but they look enabled in the systemsettings displays module or in the output of the command kscreen-doctor -o
- The systemsettings displays module shows settings that don’t match reality
- The systemsettings displays module shows settings that don’t match the output of the command kscreen-doctor -o
In order for us to have good, complete information on the affected system, its configuration, and the configuration of our multiscreen management, the following information would be needed, if you can provide it:
- Whether the problem happens in a Wayland or X11 session (or both)
- A good description of the scenario: how many screens, whether it is a laptop or desktop, and when the problem happens (startup, connecting/disconnecting, waking from sleep, and things like that)
- The output of the terminal command: kscreen-doctor -o
- The output of the terminal command: kscreen-console
- The main plasma configuration file: ~/.config/plasma-org.kde.plasma.desktop-appletsrc
Those items of information already help a lot in figuring out what the problem is and where it resides.
Afterwards we may still ask for more information, like an archive of the main screen config files (the contents of the directory ~/.local/share/kscreen/), but normally we wouldn’t need that.
One more word on kscreen-doctor and kscreen-console
Those two commands are very useful for understanding what Plasma and the rest of the system think about every screen that’s connected and how they intend to treat them.
kscreen-doctor
Here is a typical output of the command kscreen-doctor -o:
Output: 1 eDP-1 enabled connected priority 2 Panel Modes: 0:1200x1920@60! 1:1024x768@60 Geometry: 1920,0 960x600 Scale: 2 Rotation: 8 Overscan: 0 Vrr: incapable RgbRange: Automatic
Output: 2 DP-3 enabled connected priority 3 DisplayPort Modes: 0:1024x768@60! 1:800x600@60 2:800x600@56 3:848x480@60 4:640x480@60 5:1024x768@60 Geometry: 1920,600 1024x768 Scale: 1 Rotation: 1 Overscan: 0 Vrr: incapable RgbRange: Automatic
Output: 3 DP-4 enabled connected priority 1 DisplayPort Modes: 0:1920x1080@60*! 1:1920x1080@60 2:1920x1080@60 3:1680x1050@60 4:1600x900@60 5:1280x1024@75 6:1280x1024@60 7:1440x900@60 8:1280x800@60 9:1152x864@75 10:1280x720@60 11:1280x720@60 12:1280x720@60 13:1024x768@75 14:1024x768@70 15:1024x768@60 16:832x624@75 17:800x600@75 18:800x600@72 19:800x600@60 20:800x600@56 21:720x480@60 22:720x480@60 23:720x480@60 24:720x480@60 25:640x480@75 26:640x480@73 27:640x480@67 28:640x480@60 29:640x480@60 30:720x400@70 31:1280x1024@60 32:1024x768@60 33:1280x800@60 34:1920x1080@60 35:1600x900@60 36:1368x768@60 37:1280x720@60 Geometry: 0,0 1920x1080 Scale: 1 Rotation: 1 Overscan: 0 Vrr: incapable RgbRange: Automatic

Here we can see we have 3 outputs, one internal and two via DisplayPort. DP-4 is the primary (priority 1), followed by eDP-1 (internal) and DP-3 (those correspond to the new reordering UI in the systemsettings screen module).
The screen geometries (the Geometry fields in the snippet) are also important data points, as they tell the screens’ relative positions.
kscreen-console
This gives a bit more verbose information. Here is a sample (only the data of a single screen is copied here, as the full output is very long):
Id: 3
Name: "DP-4"
Type: "DisplayPort"
Connected: true
Enabled: true
Priority: 1
Rotation: KScreen::Output::None
Pos: QPoint(0,0)
MMSize: QSize(520, 290)
FollowPreferredMode: false
Size: QSize(1920, 1080)
Scale: 1
Clones: None
Mode: "0"
Preferred Mode: "0"
Preferred modes: ("0")
Modes: "0" "1920x1080@60" QSize(1920, 1080) 60 "1" "1920x1080@60" QSize(1920, 1080) 60 "10" "1280x720@60" QSize(1280, 720) 60 "11" "1280x720@60" QSize(1280, 720) 60 "12" "1280x720@60" QSize(1280, 720) 59.94 "13" "1024x768@75" QSize(1024, 768) 75.029 "14" "1024x768@70" QSize(1024, 768) 70.069 "15" "1024x768@60" QSize(1024, 768) 60.004 "16" "832x624@75" QSize(832, 624) 74.551 "17" "800x600@75" QSize(800, 600) 75 "18" "800x600@72" QSize(800, 600) 72.188 "19" "800x600@60" QSize(800, 600) 60.317 "2" "1920x1080@60" QSize(1920, 1080) 59.94 "20" "800x600@56" QSize(800, 600) 56.25 "21" "720x480@60" QSize(720, 480) 60 "22" "720x480@60" QSize(720, 480) 60 "23" "720x480@60" QSize(720, 480) 59.94 "24" "720x480@60" QSize(720, 480) 59.94 "25" "640x480@75" QSize(640, 480) 75 "26" "640x480@73" QSize(640, 480) 72.809 "27" "640x480@67" QSize(640, 480) 66.667 "28" "640x480@60" QSize(640, 480) 60 "29" "640x480@60" QSize(640, 480) 59.94 "3" "1680x1050@60" QSize(1680, 1050) 59.883 "30" "720x400@70" QSize(720, 400) 70.082 "31" "1280x1024@60" QSize(1280, 1024) 59.895 "32" "1024x768@60" QSize(1024, 768) 59.92 "33" "1280x800@60" QSize(1280, 800) 59.81 "34" "1920x1080@60" QSize(1920, 1080) 59.963 "35" "1600x900@60" QSize(1600, 900) 59.946 "36" "1368x768@60" QSize(1368, 768) 59.882 "37" "1280x720@60" QSize(1280, 720) 59.855 "4" "1600x900@60" QSize(1600, 900) 60 "5" "1280x1024@75" QSize(1280, 1024) 75.025 "6" "1280x1024@60" QSize(1280, 1024) 60.02 "7" "1440x900@60" QSize(1440, 900) 59.901 "8" "1280x800@60" QSize(1280, 800) 59.91 "9" "1152x864@75" QSize(1152, 864) 75
EDID Info:
Device ID: "xrandr-Samsung Electric Company-S24B300-H4MD302024"
Name: "S24B300"
Vendor: "Samsung Electric Company"
Serial: "H4MD302024"
EISA ID: ""
Hash: "eca6ca3c32c11a47a837d696a970b9d5"
Width: 52
Height: 29
Gamma: 2.2
Red: QQuaternion(scalar:1, vector:(0.640625, 0.335938, 0))
Green: QQuaternion(scalar:1, vector:(0.31543, 0.628906, 0))
Blue: QQuaternion(scalar:1, vector:(0.15918, 0.0585938, 0))
White: QQuaternion(scalar:1, vector:(0.3125, 0.329102, 0))

The EDID Info section is also important, to see if the screen has a good and unique EDID, as invalid EDIDs, especially in combination with DisplayPort, are a known source of problems.
Real Python: How to Split a Python List or Iterable Into Chunks
Splitting a Python list into chunks is a common way of distributing the workload across multiple workers that can process them in parallel for faster results. Working with smaller pieces of data at a time may be the only way to fit a large dataset into computer memory. Sometimes, the very nature of the problem requires you to split the list into chunks.
In this tutorial, you’ll explore the range of options for splitting a Python list—or another iterable—into chunks. You’ll look at using Python’s standard modules and a few third-party libraries, as well as manually looping through the list and slicing it up with custom code. Along the way, you’ll learn how to handle edge cases and apply these techniques to multidimensional data by synthesizing chunks of an image in parallel.
In this tutorial, you’ll learn how to:
- Split a Python list into fixed-size chunks
- Split a Python list into a fixed number of chunks of roughly equal size
- Split finite lists as well as infinite data streams
- Perform the splitting in a greedy or lazy manner
- Produce lightweight slices without allocating memory for the chunks
- Split multidimensional data, such as an array of pixels
Throughout the tutorial, you’ll encounter a few technical terms, such as sequence, iterable, iterator, and generator. If these are new to you, then check out the linked resources before diving in. Additionally, familiarity with Python’s itertools module can be helpful in understanding some of the code snippets that you’ll find later.
To download the complete source code of the examples presented in this tutorial, click the link below:
Free Sample Code: Click here to download the free source code that you’ll use to split a Python list or iterable into chunks.
Split a Python List Into Fixed-Size Chunks
There are many real-world scenarios that involve splitting a long list of items into smaller pieces of equal size. The whole list may be too large to fit in your computer’s memory. Perhaps it’s more convenient or efficient to process the individual chunks separately rather than all at once. But there could be other reasons for splitting.
For example, when you search for something online, the results are usually presented to you in chunks, called pages, containing an equal number of items. This technique, known as content pagination, is common in web development because it helps improve the website’s performance by reducing the amount of data to transfer from the database at a time. It can also benefit the user by improving their browsing experience.
Most computer networks use packet switching to transfer data in packets or datagrams, which can be individually routed from the source to the destination address. This approach doesn’t require a dedicated physical connection between the two points, allowing the packets to bypass a damaged part of the network. The packets can be of variable length, but some low-level protocols require the data to be split into fixed-size packets.
Note: When splitting sequential data, you need to consider its size while keeping a few details in mind.
Specifically, if the total number of elements to split is an exact multiple of the desired chunk’s length, then you’ll end up with all the chunks having the same number of items. Otherwise, the last chunk will contain fewer items, and you may need extra padding to compensate for that.
Additionally, your data may have a known size up front when it’s loaded from a file in one go, or it can consist of an indefinite stream of bytes—while live streaming a teleconference, for example. Some solutions that you learn in this tutorial will only work when the number of elements is known before the splitting begins.
Most web frameworks, such as Django, will handle content pagination for you. Also, you don’t typically have to worry about some low-level network protocols. That being said, there are times when you’ll need to have more granular control and do the splitting yourself. In this section, you’ll take a look at how to split a list into smaller lists of equal size using different tools in Python.
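To make the idea concrete before reaching for library helpers, here is a minimal hand-rolled sketch of fixed-size chunking with list slicing; the function name chunk_list is just an illustrative placeholder rather than anything from the standard library:

def chunk_list(items, size):
    # Collect consecutive slices of at most `size` elements each
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk_list(list("ABCDEFGHIJ"), 4))
# [['A', 'B', 'C', 'D'], ['E', 'F', 'G', 'H'], ['I', 'J']]

As with the library helpers below, the last chunk comes out shorter whenever the list's length isn't an exact multiple of the chunk size.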
Standard Library in Python 3.12: itertools.batched()
Using the standard library is almost always your best choice because it requires no external dependencies. The standard library provides concise, well-documented code that’s been tested by millions of users in production, making it less likely to contain bugs. Besides that, the standard library’s code is portable across different platforms and typically much more performant than a pure-Python equivalent, as most of it is implemented in C.
Unfortunately, the Python standard library hasn’t traditionally had built-in support for splitting iterable objects like Python lists. At the time of writing, Python 3.11 is the most recent version of the interpreter. But you can put yourself on the cutting edge by downloading a pre-release version of Python 3.12, which gives you access to the new itertools.batched(). Here’s an example demonstrating its use:
>>> from itertools import batched
>>> for batch in batched("ABCDEFGHIJ", 4):
...     print(batch)
...
('A', 'B', 'C', 'D')
('E', 'F', 'G', 'H')
('I', 'J')

The function accepts any iterable object, such as a string, as its first argument. The chunk size is its second argument. Regardless of the input data type, the function always yields chunks or batches of elements as Python tuples, which you may need to convert to something else if you prefer working with a different sequence type. For example, you might want to join the characters in the resulting tuples to form strings again.
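For instance, a small sketch of that conversion could look like the following; the helper name chunk_string is only illustrative, and it assumes a Python 3.12 interpreter where itertools.batched() is available:

from itertools import batched  # requires Python 3.12+

def chunk_string(text, size):
    # Glue each tuple of characters back together into a string
    return ["".join(batch) for batch in batched(text, size)]

print(chunk_string("ABCDEFGHIJ", 4))
# ['ABCD', 'EFGH', 'IJ']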
Note: The underlying implementation of itertools.batched() could’ve changed since the publishing of this tutorial, which was written against an alpha release of Python 3.12. For example, the function may now yield lists instead of tuples, so be sure to check the official documentation for the most up-to-date information.
Also, notice that the last chunk will be shorter than its predecessors unless the iterable’s length is divisible by the desired chunk size. To ensure that all the chunks have an equal length at all times, you can pad the last chunk with empty values, such as None, when necessary:
>>> def batched_with_padding(iterable, batch_size, fill_value=None):
...     for batch in batched(iterable, batch_size):
...         yield batch + (fill_value,) * (batch_size - len(batch))
...
>>> for batch in batched_with_padding("ABCDEFGHIJ", 4):
...     print(batch)
...
('A', 'B', 'C', 'D')
('E', 'F', 'G', 'H')
('I', 'J', None, None)

This adapted version of itertools.batched() takes an optional argument named fill_value, which defaults to None. If a chunk’s length happens to be less than batch_size, then the function appends additional elements to that chunk’s end using fill_value as padding.
You can supply either a finite sequence of values to the batched() function or an infinite iterator yielding values without end:
>>> from itertools import count
>>> finite = batched([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4)
>>> infinite = batched(count(1), 4)
>>> finite
<itertools.batched object at 0x7f4e0e2ee830>
>>> infinite
<itertools.batched object at 0x7f4b4e5fbf10>
>>> list(finite)
[(1, 2, 3, 4), (5, 6, 7, 8), (9, 10)]
>>> next(infinite)
(1, 2, 3, 4)
>>> next(infinite)
(5, 6, 7, 8)
>>> next(infinite)
(9, 10, 11, 12)

In both cases, the function returns an iterator that consumes the input iterable using lazy evaluation by accumulating just enough elements to fill the next chunk. The finite iterator will eventually reach the end of the sequence and stop yielding chunks. Conversely, the infinite one will continue to produce chunks as long as you keep requesting them, for instance by calling the built-in next() function on it.
Read the full article at https://realpython.com/how-to-split-a-python-list-into-chunks/ »
Python for Beginners: Convert YAML to JSON in Python
JSON and YAML are two of the most widely used file formats in software development. YAML files are mainly used for configuration, whereas JSON files are used to store and transmit data. This article discusses how to convert YAML to JSON in Python.
Table of Contents
- What is YAML?
- What is JSON?
- YAML String to JSON String in Python
- YAML string to JSON file
- Convert YAML file to JSON string
- Convert YAML File to JSON File in Python
- Conclusion
What is YAML?
YAML is a human-readable data serialization language used for data storage and data exchange formats. It is often used for configuration files, but we can also use it for data exchange between systems.
YAML uses indentation to denote structure and nested elements, instead of using brackets or tags as in XML or JSON. This makes YAML files more readable and easier to edit than other data serialization formats. In YAML, a list of items is denoted by a dash (-) followed by a space, and key-value pairs are separated by a colon (:) followed by a space. YAML also supports comments, which can be added using the “#” symbol.
YAML supports various data types, including strings, integers, floating-point numbers, booleans, and arrays. It also supports nested structures, allowing for the creation of complex data structures. This makes YAML suitable for a wide range of use cases, from simple configurations to more complex data structures.
Following is a sample YAML file. It contains the details of an employee.
employee:
  name: John Doe
  age: 35
  job:
    title: Software Engineer
    department: IT
    years_of_experience: 10
  address:
    street: 123 Main St.
    city: San Francisco
    state: CA
    zip: 94102

In the above YAML file,
- The employee attribute has four nested attributes i.e. name, age, job, and address.
- The name and age attributes contain the name and age of the employee.
- The job attribute has three nested attributes i.e. title, department, and years_of_experience.
- Similarly, the address attribute has four inner attributes namely street, city, state, and zip.
Hence, you can observe that we can easily represent complex data with multiple levels of hierarchy using the YAML format.
What is JSON?
JSON (JavaScript Object Notation) is a lightweight data-interchange format that is used for exchanging data between systems. It is a text-based format that is based on a subset of the JavaScript programming language. JSON is widely used for storing and exchanging data on the web, as well as for creating APIs (Application Programming Interfaces) that allow different systems to communicate with each other.
The basic structure of JSON data consists of key-value pairs, where keys are used to identify data elements, and values can be of various data types, including strings, numbers, booleans, arrays, and objects.
In JSON, data is enclosed in curly braces ({}) and separated by commas, while arrays are enclosed in square brackets ([]). Unlike YAML, standard JSON does not support comments.
We can represent the data shown in the previous section using JSON format as shown below.
{ "employee": { "name": "John Doe", "age": 35, "job": { "title": "Software Engineer", "department": "IT", "years_of_experience": 10 }, "address": { "street": "123 Main St.", "city": "San Francisco", "state": "CA", "zip": 94102 } } }JSON data is lightweight and easy to parse, making it a popular choice for exchanging data on the web. It is also easily supported by many programming languages, including JavaScript, Python, Ruby, PHP, and Java, making it a versatile and universal format for exchanging data between different systems.
YAML String to JSON String in Python
To convert a YAML string to a JSON string, we will use the following steps.
- First, we will import the json and yaml modules using the import statement.
- Next, we will convert the YAML string to a python dictionary using the load() method defined in the yaml module. The load() method takes the yaml string as its first input argument and the loader type in its Loader argument. After execution, it returns a python dictionary.
- After obtaining the dictionary, we can convert it to a json string using the dumps() method defined in the json module. The dumps() method takes the python dictionary as its input argument and returns the json string.
You can observe this in the following example.
import json
import yaml
from yaml import SafeLoader

yaml_string = """employee:
  name: John Doe
  age: 35
  job:
    title: Software Engineer
    department: IT
    years_of_experience: 10
  address:
    street: 123 Main St.
    city: San Francisco
    state: CA
    zip: 94102
"""

print("The YAML string is:")
print(yaml_string)

python_dict = yaml.load(yaml_string, Loader=SafeLoader)
json_string = json.dumps(python_dict)

print("The JSON string is:")
print(json_string)

Output:
The YAML string is:
employee:
  name: John Doe
  age: 35
  job:
    title: Software Engineer
    department: IT
    years_of_experience: 10
  address:
    street: 123 Main St.
    city: San Francisco
    state: CA
    zip: 94102

The JSON string is:
{"employee": {"name": "John Doe", "age": 35, "job": {"title": "Software Engineer", "department": "IT", "years_of_experience": 10}, "address": {"street": "123 Main St.", "city": "San Francisco", "state": "CA", "zip": 94102}}}

In this example, we have converted a YAML string to a JSON string.
YAML string to JSON file
Instead of obtaining a JSON string, you can also convert a YAML string to a JSON file. For this, we will use the following steps.
- First, we will convert the yaml string to a python dictionary using the load() method defined in the yaml module.
- Next, we will open a json file in write mode using the open() function. The open() function takes the file name as its first input argument and the python literal “w” as its second input argument. After execution, it returns the file pointer.
- After opening the file, we will convert the python dictionary obtained from the yaml string into the json file. For this, we will use the dump() method defined in the json module. The dump() method takes the python dictionary as its first input argument and the file pointer returned by the open() function as the second input argument.
- After execution of the dump() method, the json file will be saved on your machine. Then, we will close the json file using the close() method.
You can observe this in the following example.
import json
import yaml
from yaml import SafeLoader

yaml_string = """employee:
  name: John Doe
  age: 35
  job:
    title: Software Engineer
    department: IT
    years_of_experience: 10
  address:
    street: 123 Main St.
    city: San Francisco
    state: CA
    zip: 94102
"""

print("The YAML string is:")
print(yaml_string)

python_dict = yaml.load(yaml_string, Loader=SafeLoader)

file = open("person_details.json", "w")
json.dump(python_dict, file)
file.close()

print("JSON file saved")

Output:
The YAML string is:
employee:
  name: John Doe
  age: 35
  job:
    title: Software Engineer
    department: IT
    years_of_experience: 10
  address:
    street: 123 Main St.
    city: San Francisco
    state: CA
    zip: 94102

JSON file saved

In this example, we have converted the YAML string to a JSON file using the json and yaml modules in Python. The output JSON file looks as follows.
[Output JSON file]

Convert YAML file to JSON string
We can also convert a yaml file to a json string. For this, we will use the following steps.
- First, we will open the yaml file in read mode using the open() function. The open() function takes the file name of the yaml file as its first input argument and the python literal “r” as its second argument. After execution, it returns a file pointer.
- Next, we will obtain the python dictionary from the yaml file using the load() method. The load() method takes the file pointer as its first input argument and the loader type in its Loader argument. After execution, it returns a dictionary.
- After obtaining the dictionary, we will convert it to JSON string using the dumps() method defined in the json module. The dumps() method takes the python dictionary as its input argument and returns the json string.
You can observe this in the following example.
import json
import yaml
from yaml import SafeLoader

yaml_file = open("person_details.yaml", "r")
python_dict = yaml.load(yaml_file, Loader=SafeLoader)
json_string = json.dumps(python_dict)

print("The JSON string is:")
print(json_string)

Output:
The JSON string is: {"employee": {"name": "John Doe", "age": 35, "job": {"title": "Software Engineer", "department": "IT", "years_of_experience": 10}, "address": {"street": "123 Main St.", "city": "San Francisco", "state": "CA", "zip": 94102}}}In this example, we have converted a YAML file to JSON string.
Convert YAML File to JSON File in Python
Instead of converting it into a JSON string, we can also convert the YAML file into a JSON file. For this, we will use the following steps.
- First, we will obtain the python dictionary from the yaml file using the open() function and the load() method.
- Then, we will open a json file in write mode using the open() function. The open() function takes the file name as its first input argument and the literal “w” as its second input argument. After execution, it returns the file pointer.
- After opening the file, we will convert the python dictionary obtained from the yaml file into the json file. For this, we will use the dump() method defined in the json module. The dump() method takes the python dictionary as its first input argument and the file pointer returned by the open() function as its second input argument.
- After execution of the dump() method, the json file will be saved on your machine. Then, we will close the json file using the close() method.
You can observe the entire process in the following example.
import json
import yaml
from yaml import SafeLoader

yaml_file = open("person_details.yaml", "r")
python_dict = yaml.load(yaml_file, Loader=SafeLoader)

file = open("person_details1.json", "w")
json.dump(python_dict, file)
file.close()

print("JSON file saved")

After execution of the above code, the YAML file person_details.yaml is saved as JSON into the file person_details1.json.
Conclusion
In this article, we have discussed how to convert a YAML string or file to JSON format.
To learn more about python programming, you can read this article on how to convert JSON to YAML in Python. You might also like this article on custom json encoders in python.
I hope you enjoyed reading this article. Stay tuned for more informative articles.
Happy Learning!
The post Convert YAML to JSON in Python appeared first on PythonForBeginners.com.
Python Insider: Python 3.11.2, Python 3.10.10 and 3.12.0 alpha 5 are available
Hi everyone,
I am happy to report that after solving some last-minute problems we have a bunch of fresh releases for you!
Python 3.12.0 alpha 5
Check the new alpha of 3.12 with some Star Trek vibes:
https://www.python.org/downloads/release/python-3120a5/
210 new commits since 3.12.0a4 last month
Python 3.11.2
A shipment of bugfixes and security fixes for the newest Python!
https://www.python.org/downloads/release/python-3112/
194 new commits since 3.11.1
Python 3.10.10
Your trusty Python 3.10 just got more stable and secure!
https://www.python.org/downloads/release/python-31010/
131 new commits since 3.10.9
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.
Your friendly release team,
Ned Deily @nad
Steve Dower @steve.dower
Pablo Galindo Salgado @pablogsal
Łukasz Langa @ambv
Thomas Wouters @thomas
Dual boot, secure boot & bitlocker
I have installed GNU/Linux on many a computer in ~20 years (some automated, most individually). In the university, I used to be woken past midnight by someone knocking at the door — someone who had reinstalled Windows and now couldn’t boot because grub was overwritten. I’d rub my eyes, pick up the latest bunch of Fedora CDs and go rescue the beast of a machine. Linux installation, customization and grub recovery was my specialization (no, the course didn’t have credit for that).
Technologies (libre & otherwise) have improved since then. Instead of MBR, there’s GPT (no, not that one). Instead of BIOS, there’s UEFI. Dual booting Windows with GNU/Linux has become mostly painless. Then there’s Secure Boot. Libre software works with that too. You may still run into issues; I ran into one recently and if someone is in the same position I hope this helps:
A friend of mine got an Ideapad 3 Gaming laptop and we tried to install Fedora 37 on it (remotely, of course; thanks to screensharing and cameras on mobile phones). The bootable USB pendrive was not being listed in the boot options (F12), so we fiddled with the TPM & Secure Boot settings in the EFI settings (F2). No luck, and troubleshooting eventually concluded that the USB pendrive was faulty. We tried another one, and this time it was detected and Fedora 37 installed happily (in under 15 mins, because instead of a spinning hard disk, there’s an SSD). Fedora boots & works fine.
A day later, the friend selects Windows to boot into (from grub menu) and gets greeted by a BitLocker message: “Enter bitlocker recovery key” because “Secure boot is disabled”.
Dang. I thought we had re-enabled Secure Boot, but apparently not. Go to the EFI settings and turn it back on; save & reboot; select Windows — but BitLocker kept asking for the recovery key, this time with a different reason: “Secure Boot policy has unexpectedly changed”.
That led to scrambling & searching, as BitLocker had been enabled not by the user but by the OEM, and thus there was no recovery key in the user’s Microsoft online account (if the user had enabled it manually, they could have found the key there).
The nature of the error message made me conclude that the Fedora installation with Secure Boot disabled had somehow altered the TPM settings and Windows (rightfully) refused to boot. The EFI settings have an option to ‘Restore Factory Keys’ which resets the Secure Boot DB. I could try that to remove the Fedora keys, pray Windows boots and, if it works, recover grub (my specialty) or reinstall Fedora in the worst-case scenario.
Enter Matthew Garrett. Matthew was instrumental in making GNU/Linux systems work with Secure Boot (and was awarded the prestigious Free Software Foundation Award for it). He is a security researcher who frequently writes about computer security.
I sought Matthew’s advice before trying anything stupid, and he suggested the following (reproduced with permission):
First, how are you attempting to boot Windows? If you’re doing this via grub then this will result in the secure boot measurements changing and this error occurring – if you pick Windows from the firmware boot menu (which I think should appear if you hit F12 on an Ideapad?) then this might solve the problem.

Secondly, if the owner added a Microsoft account when setting up the Windows system, they can visit https://account.microsoft.com/devices/recoverykey and a recovery key should be available there.

If neither of these approaches work, then please try resetting the factory keys, reset the firmware to its default settings, and delete any Fedora boot entries from the firmware (you can recover them later), and with luck that’ll work.
Thankfully, the first option of booting Windows directly via F12 — without involving grub — works. And the first thing the user does after logging in is back up the recovery keys.
PyCharm: Using PyCharm to Read Data From a MySQL DataBase Into pandas
Sooner or later in your data science journey, you’ll hit a point where you need to get data from a database. However, making the leap from reading a locally-stored CSV file into pandas to connecting to and querying databases can be a daunting task. In the first of a series of blog posts, we’ll explore how to read data stored in a MySQL database into pandas, and look at some nice PyCharm features that make this task easier.
Viewing the database contents
In this tutorial, we’re going to read some data about airline delays and cancellations from a MySQL database into a pandas DataFrame. This data is a version of the “Airline Delays from 2003-2016” dataset by Priank Ravichandar licensed under CC0 1.0.
One of the first things that can be frustrating about working with databases is not having an overview of the available data, as all of the tables are stored on a remote server. Therefore, the first PyCharm feature we’re going to use is the Database tool window, which allows you to connect to and fully introspect a database before doing any queries.
To connect to our MySQL database, we’re first going to navigate over to the right-hand side of PyCharm and click the Database tool window.
On the top left of this window, you’ll see a plus button. Clicking on this gives us the following dropdown dialog window, from which we’ll select Data Source | MySQL.
We now have a popup window which will allow us to connect to our MySQL database. In this case, we’re using a locally hosted database, so we leave Host as “localhost” and Port as the default MySQL port of “3306”. We’ll use the “User & Password” Authentication option, and enter “pycharm” for both the User and Password. Finally, we enter our Database name of “demo”. Of course, in order to connect to your own MySQL database you’ll need the specific host, database name, and your username and password. See the documentation for the full set of options.
Next, click Test Connection. PyCharm lets us know that we don’t have the driver files installed. Go ahead and click Download Driver Files. One of the very nice features of the Database tool window is that it automatically finds and installs the correct drivers for us.
Success! We’ve connected to our database. We can now navigate to the Schemas tab and select which schemas we want to introspect. In our example database we only have one (“demo”), but in cases where you have very large databases, you can save yourself time by only introspecting relevant ones.
With all of that done, we’re ready to connect to our database. Click OK, and wait a few seconds. You can now see that our entire database has been introspected, down to the level of table fields and their types. This gives us a great overview of what is in the database before running a single query.
Reading in the data using MySQL Connector
Now that we know what is in our database, we are ready to put together a query. Let’s say we want to see the airports that had at least 500 delays in 2016. From looking at the fields in the introspected airlines table, we see that we can get that data with the following query:
SELECT AirportCode, SUM(FlightsDelayed) AS TotalDelayed
FROM airlines
WHERE TimeYear = 2016
GROUP BY AirportCode
HAVING SUM(FlightsDelayed) > 500;

The first way we can run this query using Python is using a package called MySQL Connector, which can be installed from either PyPI or Anaconda. See the linked documentation if you need guidance on setting up pip or conda environments or installing dependencies. Once installation is finished, we’ll open a new Jupyter notebook and import both MySQL Connector and pandas.
import mysql.connector
import pandas as pd

In order to read data from our database, we need to create a connector. This is done using the connect method, to which we pass the credentials needed to access the database: the host, the database name, the user, and the password. These are the same credentials we used to access the database using the Database tool window in the previous section.
mysql_db_connector = mysql.connector.connect(
    host="localhost",
    database="demo",
    user="pycharm",
    password="pycharm"
)

We now need to create a cursor. This will be used to execute our SQL queries against the database, and it uses the credentials stored in our connector to get access.
mysql_db_cursor = mysql_db_connector.cursor()

We’re now ready to execute our query. We do this using the execute method from our cursor and passing the query as the argument.
delays_query = """
SELECT AirportCode, SUM(FlightsDelayed) AS TotalDelayed
FROM airlines
WHERE TimeYear = 2016
GROUP BY AirportCode
HAVING SUM(FlightsDelayed) > 500;
"""
mysql_db_cursor.execute(delays_query)

We then retrieve the result using the cursor’s fetchall method.
mysql_delays_list = mysql_db_cursor.fetchall()

However, we have a problem at this point: fetchall returns the data as a list. To get it into pandas, we can pass it into a DataFrame, but we’ll lose our column names and will need to manually specify them when we want to create the DataFrame.
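For illustration, a minimal sketch of that manual route might look like this; the column names are assumptions taken from the aliases in the query above, since fetchall itself doesn’t return them:

# Column names must be spelled out by hand when building a DataFrame from fetchall results
mysql_delays_df = pd.DataFrame(
    mysql_delays_list,
    columns=["AirportCode", "TotalDelayed"],
)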
Luckily, pandas offers a better way. Rather than creating a cursor, we can read our query into a DataFrame in one step, using the read_sql method.
mysql_delays_df2 = pd.read_sql(delays_query, con=mysql_db_connector)

We simply need to pass our query and connector as arguments in order to read the data from the MySQL database. Looking at our dataframe, we can see that we have the exact same results as above, but this time our column names have been preserved.
A nice feature you might have noticed is that PyCharm applies syntax highlighting to the SQL query, even when it’s contained inside a Python string. We’ll cover another way that PyCharm allows you to work with SQL later in this blog post.
Reading in the data using SQLAlchemy
An alternative to using MySQL Connector is using a package called SQLAlchemy. This package offers a one-stop method for connecting to a range of different databases, including MySQL. One of the nice things about using SQLAlchemy is that the syntax for querying different database types remains consistent, saving you from remembering a bunch of different commands if you’re working with a lot of different databases.
To get started, we need to install SQLAlchemy either from PyPI or Anaconda. We then import the create_engine method, and of course, pandas.
import pandas as pd
from sqlalchemy import create_engine

We now need to create our engine. The engine allows us to tell pandas which SQL dialect we’re using (in our case, MySQL) and provide it with the credentials it needs to access our database. This is all passed as one string, in the form of [dialect]://[user]:[password]@[host]/[database]. Let’s see what this looks like for our MySQL database:
mysql_engine = create_engine("mysql+mysqlconnector://pycharm:pycharm@localhost/demo")

With this created, we simply need to use read_sql again, this time passing the engine to the con argument:
mysql_delays_df3 = pd.read_sql(delays_query, con=mysql_engine)

As you can see, we get the same result as when using read_sql with MySQL Connector.
Advanced options for working with databases
Now these connector methods are very nice for extracting a query that we already know we want, but what if we want to get a preview of what our data will look like before running the full query, or an idea of how long the whole query will take? PyCharm is here again with some advanced features for working with databases.
If we navigate back over to the Database tool window and right-click on our database, we can see that under New we have the option to create a Query Console.
This allows us to open a console which we can use to query against the database in native SQL. The console window includes SQL code completion and introspection, giving you an easier way to create your queries prior to passing them to the connector packages in Python.
Highlight your query and click the Execute button in the top left corner.
This will retrieve the results of our query in the Services tab, where it can be inspected or exported. One nice thing about running queries against the console is that only the first 500 rows are initially retrieved from the database, meaning you can get a sense of the results of larger queries without committing to pulling all of the data. You can adjust the number of rows retrieved by going to Settings/Preferences | Tools | Database | Data Editor and Viewer and changing the value under Limit page size to:.
Speaking of large queries, we can also get a sense of how long our query will take by generating an execution plan. If we highlight our query again and then right-click, we can select Explain Plan | Explain Analyse from the menu. This will generate an execution plan for our query, showing each step that the query planner is taking to retrieve our results. Execution plans are their own topic, and we don’t really need to understand everything our plan is telling us. Most relevant for our purposes is the Actual Total Time column, where we can see how long it will take to return all of the rows at each step. This gives us a good estimate of the overall query time, as well as whether any parts of our query are likely to be particularly time consuming.
You can also visualize the execution by clicking on the Show Visualization button to the left of the Plan panel.
This will bring up a flowchart that makes it a bit easier to navigate through the steps that the query planner is taking.
Getting data from MySQL databases into pandas DataFrames is straightforward, and PyCharm has a number of powerful tools to make working with MySQL databases easier. In the next blog post, we’ll look at how to use PyCharm to read data into pandas from another popular database type, PostgreSQL databases.
Python Software Foundation: Announcing Python Software Foundation Fellow Members for Q4 2022! 🎉
The PSF is pleased to announce its fourth batch of PSF Fellows for 2022! Let us welcome the new PSF Fellows for Q4! The following people continue to do amazing things for the Python community:
- Sayan Chowdhury: Twitter, GitHub, Website
- Soong Chee Gi: Twitter, GitHub, Website
- Yung-Yu Chen: LinkedIn, Twitter, Website
Thank you for your continued contributions. We have added you to our Fellow roster online.
The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.
Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members is available online: https://www.python.org/psf/fellows/. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. We are accepting nominations for quarter 1 through February 20, 2023.
Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.
Talk Python to Me: #402: Polars: A Lightning-fast DataFrame for Python [updated audio]
ADCI Solutions: Claro: What New Drupal 10 Admin Panel Theme Looks Like
Finally Drupal has a new default admin theme — Claro. Read our post to learn how the Drupal community made its way toward the update and what theme features let this CMS progress to a higher level.
Tryton News: Tryton Unconference 2023 in Berlin on May 22nd - 24th
The Tryton Foundation is happy to announce the next Tryton Unconference in Berlin on May 22nd - 24th.
The first day will be mainly dedicated to presentations at Change Hub.
The second and third days will be dedicated to a code sprint at inmedio Berlin.
More information and registration at Tryton - Unconference Berlin - May 22nd-24th, 2023
Many thanks to m-ds for organizing the event.
Tryton News: The history behind the heptapod migration
As was announced in the February newsletter we’ve migrated our development to heptapod. This means that you no longer need a Google account in order to contribute to Tryton, and none of our tools depend on Google services any more. It took us 11 years to reach this point!
Now, one month on from the migration, we have more than 20 members contributing to the Tryton project, who have created more than 200 Merge Requests. These are good numbers for the project and I’m sure they will keep increasing in the future.
Such a migration was only possible with the help of many people, some of which I would like to thank publicly now.
The migration was fully sponsored by Jonathan Levy (@jonl) who contributed all the funds required to create the migration scripts. Jon is an entrepreneur who has been working with Tryton since 2012. Mr. Levy says:
Tryton is truly an undersung gem in the open-source software world. It is beautifully structured, flexible, and reliable, and I continue to be impressed by its core community. I would recommend it, either as an off-the-shelf ERP, or to anyone needing to encode custom business logic for their enterprise. I hope the recent Heptapod migration, which updates Tryton’s old contribution workflow, will help Tryton flourish in the years to come.
Thanks, Jon, for your contribution, your wonderful endorsement, and your best wishes for our project!
This migration also required lots of work from other people:
- The Octobus team, who created the scripts to import our repository and all our bug tracker history.
- The B2CK guys (@nicoe and @ced) who coordinated the work with Octobus and ensured everything went smoothly.
- Last but not least, @htgoebel who helped with the CI configuration.
This is clear proof that a good team and hard work can achieve amazing results, and that there is no limit if we share the work between us, following the spirit of open source.
I cannot end this without giving big thanks to everyone who helped us finish this important task. Please also express your gratitude to them with some likes on this topic.
Codementor: Silly Mistakes to Avoid while Coding
Stephan Lachnit: Installing Debian on F2FS rootfs with debootstrap and systemd-boot
I recently got a new NVME drive. My plan was to create a fresh Debian install on an F2FS root partition with compression for maximum performance. As it turns out, this is not entirely trivial to accomplish. For one, the Debian installer does not support F2FS (here is my attempt to add it from 2021). And even if it did, grub does not support F2FS with the extra_attr flag that is required for compression support (at least as of grub 2.06).
Luckily, we can install Debian anyway with all these shiny new features when we go the manual road with debootstrap, using systemd-boot as the bootloader. We can break down the process into several steps:
- Creating the partition table
- Creating and mounting the root partition
- Bootstrapping with debootstrap
- Chrooting into the system
- Configure the base system
- Define static file system information
- Installing the kernel and bootloader
- Finishing touches
Warning: Playing around with partitions can easily result in data loss if you mess up! Make sure to double check your commands and create a data backup if you don’t feel confident about the process.
Creating the partition table
The first step is to create the GPT partition table on the new drive. There are several tools to do this; I recommend the ArchWiki page on this topic for details. For simplicity I just went with GParted since it has an easy GUI, but feel free to use any other tool. The layout should look like this:
Type       │ Partition      │ Suggested size
───────────┼────────────────┼───────────────
EFI        │ /dev/nvme0n1p1 │ 512MiB
Linux swap │ /dev/nvme0n1p2 │ 1GiB
Linux fs   │ /dev/nvme0n1p3 │ remainder

Notes:
- The disk names are just an example and have to be adjusted for your system.
- Don’t set disk labels, they don’t appear on the new install anyway and some UEFIs might not like it on your boot partition.
- The size of the EFI partition can be smaller; in practice it’s unlikely that you need more than 300 MiB. However some UEFIs might be buggy, and if you ever want to install an additional kernel or something like memtest86+ you will be happy to have the extra space.
- The swap partition can be omitted; it is not strictly needed. If you need more swap for some reason you can also add more using a swap file later (see ArchWiki page). If you know you want to use hibernation (suspend-to-disk), you want to increase the size to something more than the size of your memory.
- If you used GParted, create the EFI partition as FAT32 and set the esp flag. For the root partition use ext4 or F2FS if available.
To create the root partition, we need to install the f2fs-tools first:
sudo apt install f2fs-tools

Now we can create the file system with the correct flags:
mkfs.f2fs -O extra_attr,inode_checksum,sb_checksum,compression,encrypt /dev/nvme0n1p3

For details on the flags visit the ArchWiki page.
Next, we need to mount the partition with the correct flags. First, create a working directory:
mkdir bootstrap
cd bootstrap
mkdir root
export DFS=$(pwd)/root

Then we can mount the partition:
sudo mount -o compress_algorithm=zstd:6,compress_chksum,atgc,gc_merge,lazytime /dev/nvme0n1p3 $DFS

Again, for details on the mount options visit the above mentioned ArchWiki page.
Bootstrapping with debootstrap
First we need to install the debootstrap package:
sudo apt install debootstrap

Now we can do the bootstrapping:
debootstrap --arch=amd64 --components=main,contrib,non-free,non-free-firmware unstable $DFS http://deb.debian.org/debian

Notes:
- --arch=amd64 sets the CPU architecture (see Debian Wiki).
- --components=main,contrib,non-free,non-free-firmware sets the archive components; if you don’t want non-free packages you might want to remove some entries here.
- unstable is the Debian release, you might want to change that to testing or bookworm.
- $DFS points to the mounting point of the root partition.
- http://deb.debian.org/debian is the Debian mirror; you might want to set that to http://ftp.de.debian.org/debian or similar if you have a fast mirror in your area.
Chrooting into the system
Before we can chroot into the newly created system, we need to prepare and mount virtual kernel file systems. First create the directories:
sudo mkdir -p $DFS/dev $DFS/dev/pts $DFS/proc $DFS/sys $DFS/run $DFS/sys/firmware/efi/efivars $DFS/boot/efi

Then bind-mount the directories from your system to the mount point of the new system:
sudo mount -v -B /dev $DFS/dev
sudo mount -v -B /dev/pts $DFS/dev/pts
sudo mount -v -B /proc $DFS/proc
sudo mount -v -B /sys $DFS/sys
sudo mount -v -B /run $DFS/run
sudo mount -v -B /sys/firmware/efi/efivars $DFS/sys/firmware/efi/efivars

As a last step, we need to mount the EFI partition:
sudo mount -v -B /dev/nvme0n1p1 $DFS/boot/efi

Now we can chroot into the new system:
sudo chroot $DFS /bin/bash

Configure the base system
The first step in the chroot is setting the locales. We need this since we might leak the locales from our base system into the chroot, and if this happens we get a lot of annoying warnings.
export LC_ALL=C.UTF-8 LANG=C.UTF-8
apt install locales console-setup

Set your locales:
dpkg-reconfigure locales

Set your keyboard layout:
dpkg-reconfigure keyboard-configuration

Set your timezone:
dpkg-reconfigure tzdata

Now you have a fully functional Debian chroot! However, it is not bootable yet, so let’s fix that.
Define static file system information
The first step is to make sure the system mounts all partitions on startup with the correct mount flags. This is done in /etc/fstab (see ArchWiki page). Open the file and change its content to:
# file system                               mount point  type  options                                                            dump  pass
# NVME efi partition
UUID=XXXX-XXXX                              /boot/efi    vfat  umask=0077                                                         0     0
# NVME swap
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  none         swap  sw                                                                 0     0
# NVME main partition
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  /            f2fs  compress_algorithm=zstd:6,compress_chksum,atgc,gc_merge,lazytime   0     1

You need to fill in the UUIDs for the partitions. You can use
ls -lAph /dev/disk/by-uuid/
to match the UUIDs to the more readable disk name under /dev.
Installing the kernel and bootloader

First install the systemd-boot and efibootmgr packages:
apt install systemd-boot efibootmgr
Now we can install the bootloader:
bootctl install --path=/boot/efi
You can verify the procedure worked with
efibootmgr -v
The next step is to install the kernel; you can find a fitting image with:
apt search linux-image-*
In my case:
apt install linux-image-amd64
After the installation of the kernel, apt will add an entry for systemd-boot automatically. Neat!
However, since we are working in a chroot, the current settings are not bootable. The first problem is the root partition referenced by the boot entry, which will likely be the one from your host system. To change that, navigate to /boot/efi/loader/entries; it should contain one config file. When you open this file, it should look something like this:
title      Debian GNU/Linux bookworm/sid
version    6.1.0-3-amd64
machine-id 2967cafb6420ce7a2b99030163e2ee6a
sort-key   debian
options    root=PARTUUID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 ro systemd.machine_id=2967cafb6420ce7a2b99030163e2ee6a
linux      /2967cafb6420ce7a2b99030163e2ee6a/6.1.0-3-amd64/linux
initrd     /2967cafb6420ce7a2b99030163e2ee6a/6.1.0-3-amd64/initrd.img-6.1.0-3-amd64

The PARTUUID needs to point to the partition equivalent to /dev/nvme0n1p3 on your system. You can use
ls -lAph /dev/disk/by-partuuid/
to match the PARTUUIDs to the more readable disk name under /dev.
The second problem is the ro flag in options, which tells the kernel to boot in read-only mode. The default is rw, so you can just remove the ro flag.
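For the example entry above, the adjusted options line would then read something like this (with the PARTUUID replaced by the one for your root partition):
options    root=PARTUUID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 systemd.machine_id=2967cafb6420ce7a2b99030163e2ee6a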
Once this is fixed, the new system should be bootable. You can change the boot order with:
efibootmgr --bootorder XXXX,YYYY
(where XXXX,YYYY are the boot entry numbers shown by efibootmgr -v, in the desired order)
However, before we reboot we might as well add a user and install some basic software.
Finishing touches

Add a user:
useradd -m -G sudo -s /usr/bin/bash -c 'Full Name' username
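You will probably also want to give the new account a password so you can actually log in after rebooting:
passwd username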
Debian provides a TUI to install a Desktop Environment. To open it, run:
tasksel
Now you can finally reboot into your new system:
reboot

Resources for further reading:
https://ivanb.neocities.org/blogs/y2022/debootstrap
https://www.debian.org/releases/stable/amd64/apds03.en.html
https://www.addictivetips.com/ubuntu-linux-tips/set-up-systemd-boot-on-arch-linux/
https://p5r.uk/blog/2020/using-systemd-boot-on-debian-bullseye.html
https://www.linuxfromscratch.org/lfs/view/stable/chapter07/kernfs.html
Thanks for reading!
TestDriven.io: Deploying a Django App to Google App Engine
FSF Events: Talk by RMS on March 17
PyCoder’s Weekly: Issue #563 (Feb. 7, 2023)
#563 – FEBRUARY 7, 2023
In this step-by-step project, you’ll build your own Wordle clone with Python. Your game will run in the terminal, and you’ll use Rich to ensure your word-guessing app looks good. Learn how to build a command-line application from scratch and then challenge your friends to a wordly competition!
REAL PYTHON
“This article provides an overview of how Python works in WebAssembly environments and provides a step by step guide on how to use it.” See also the associated Hacker News Conversation.
ASEN ALEXANDROV
Discover how Cisco teams use Python and InfluxDB to create custom DevOps and network monitoring solutions →
INFLUXDATA sponsor
An opinion piece on three trends likely to attract attention in the Python world in 2023: Python/Rust co-projects, web apps, and more typing. Read on for examples in each category.
JERRY CODES
This conversation is around Luke Plant’s excellent article Python’s “Disappointing” Superpowers, which describes specific uses of Python’s dynamic capabilities that wouldn’t be possible in a statically typed language.
HACKER NEWS
Beginners often stumble when it’s finally time to get their Django app online. Instead of another deployment recipe, this post seeks to explain the fundamental concepts of deploying a Django app and equip developers to think through the process for themselves when they’re ready to make the transition from their code editor to the web.
JAMES WALTERS • Shared by James Walters
Most modern websites are powered by a REST API. That way, you can separate the front-end code from the back-end logic, and users can interact with the interface dynamically. In this step-by-step tutorial, you’ll learn how to build a single-page Flask web application with HTML, CSS, and JavaScript.
REAL PYTHON
In this e-book, we share three popular design patterns that developers use with Redis to improve application performance with MEAN and MERN stack applications. We explain each pattern in detail, and accompany it with an overview, typical use cases, and a code example →
REDIS LABS sponsor
A synopsis of a deep paper analyzing Static Python, a Python variant developed at Instagram to move from gradually-typed to statically-typed. Full paper available as PDF.
LU, GREENMAN, MEYER, ET AL
The dictionary dispatch pattern uses a dict to store references to functions, letting you replace long if/else chains or serve as an alternative to the match statement. Read on for how and where to use it, and see the short sketch below.
MARTIN HEINZ
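The dictionary dispatch idea from the item above can be sketched in a few lines of Python (the operation names and functions here are invented for illustration):

def add(a, b):
    return a + b

def sub(a, b):
    return a - b

# Look up the handler in a dict instead of chaining if/elif branches.
dispatch = {'add': add, 'sub': sub}

def calculate(op, a, b):
    return dispatch[op](a, b)

print(calculate('add', 2, 3))  # prints 5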
Asyncio is one of several approaches to concurrency in Python, built around coroutines. This article describes five common errors people new to asyncio may make and how to avoid them.
JASON BROWNLEE
It is becoming increasingly common to ship Rust components as part of a Python package. This blog post is a dev journal on how Peter did just that with one of his packages.
PETER BAUMGARTNER
This posting is about how to use an object detection model to control a DS emulator to become an expert in playing the Super Mario 64 DS minigame “Wanted!”
MEDIUM.COM/@NATHANCOOPERJONES • Shared by Nate Jones
Take part in the new Developer Nation survey and shape the ecosystem. Plus for every survey response, Developer Nation will donate to one of the charities of respondents’ choosing. Hurry up, the survey is open until February 12! Start now.
SLASHDATA LTD sponsor
Ever want a C-style for-loop in Python? No? Well, you can have one anyway. See how Tushar implemented it with for (i := var(0), i < 10, i + 2):
TUSHAR SADHWANI
This article walks you through how to use typing.Protocol to help detect and avoid problems caused by circular imports.
BRIAN OKKEN
GITHUB.COM/GUILATROVA • Shared by Gui Latrova
Xorbits: Compatible, Scalable Data Science
GITHUB.COM/XPROBE-INC • Shared by Chris Qin
flatliner-src: Convert Python Programs Into One Line of Code
anywidget: Custom Jupyter Widgets Made Easy
pynecone: Web Apps in Pure Python

Events

pyCologne User Group Treffen
February 8, 2023
MEETUP.COM
February 8, 2023
REALPYTHON.COM
February 9, 2023
MEETUP.COM
February 11, 2023
MEETUP.COM
February 11, 2023
MEETUP.COM
February 16 to February 20, 2023
PYCON.FR
Happy Pythoning!
This was PyCoder’s Weekly Issue #563.
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
kevinquillen.com: I'm officially a published author!
Data School: How to use Python's f-strings with pandas
Python introduced f-strings back in version 3.6 (six years ago!), but I've only recently realized how useful they can be.
In this post, I'll start by showing you some simple examples of how f-strings are used, and then I'll walk you through a more complex example using pandas.
Here's what I'll cover:
- Substituting objects
- Calling methods and functions
- Evaluating expressions
- Formatting numbers
- Real-world example using pandas
- Further reading
Substituting objects:
To make an f-string, you simply put an f in front of a string. By putting the name and age objects inside of curly braces, those objects are automatically substituted into the string.
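For example (the name and age values here are just illustrative):
name = 'Kevin'
age = 41
print(f'My name is {name} and I am {age} years old.')
My name is Kevin and I am 41 years old.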
Calling methods and functions:
role = 'Daddy'
print(f'Sometimes my 6-year-old yells: {role.upper()}!!!')
Sometimes my 6-year-old yells: DADDY!!!
Strings have an upper() method, and so I was able to call that method on the role string from within the f-string.
Evaluating expressions:
days_completed = 37
print(f'This portion of the year remains: {(365 - days_completed) / 365}')
This portion of the year remains: 0.8986301369863013
You can evaluate an expression (a math expression, in this case) within an f-string.
Formatting numbers:
print(f'This percentage of the year remains: {(365 - days_completed) / 365:.1%}')
This percentage of the year remains: 89.9%
This looks much nicer, right? The : begins the format specification, and the .1% means "format as a percentage with 1 digit after the decimal point."
Real-world example using pandas:
Recently, I was analyzing the survey data submitted by 500+ Data School community members. I asked each person about their level of experience with 11 different data science topics, plus their level of interest in improving those skills this year.
Thus I had 22 columns of data, with names like:
- python_experience
- python_interest
- pandas_experience
- pandas_interest
- ...
Each “experience” column was coded from 0 (None) to 3 (Advanced), and each “interest” column was coded from 0 (Not interested) to 2 (Definitely interested).
Among other things, I wanted to know the mean level of interest in each topic, as well as the mean level of interest in each topic by experience level.
Here's what I did to answer those questions:
cats = ['python', 'pandas']  # this actually had 11 categories

for cat in cats:
    mean_interest = df[f'{cat}_interest'].mean()
    print(f'Mean interest for {cat.upper()} is {mean_interest:.2f}')
    print(df.groupby(f'{cat}_experience')[f'{cat}_interest'].mean(), '\n')

Mean interest for PYTHON is 1.77
python_experience
0    1.590909
1    1.857143
2    1.781759
3    1.630769
Name: python_interest, dtype: float64

Mean interest for PANDAS is 1.67
pandas_experience
0.0    1.500000
1.0    1.825806
2.0    1.709924
3.0    1.262295
Name: pandas_interest, dtype: float64

Notice how I used f-strings:
- Because of the naming convention, I could access the DataFrame columns using df[f'{cat}_interest'] and df[f'{cat}_experience'].
- I capitalized the category using f'{cat.upper()}' to help it stand out.
- I formatted the mean interest to 2 decimal places using f'{mean_interest:.2f}'.
Further reading:
- Guide to f-strings (written by my pal Trey Hunner)
- f-string cheat sheet (also by Trey)
P.S. This blog post originated as one of my weekly data science tips. Sign up below to receive data science tips every Tuesday! 👇
Django Weblog: DSF calls for applicants for a Django Fellow
After five years as part of the Django Fellowship program, Carlton Gibson has decided to step down as a Django Fellow this spring to explore other things. Carlton has made an extraordinary impact as a Django Fellow. The Django Software Foundation is grateful for his service and assistance.
The Fellowship program was started in 2014 as a way to dedicate high-quality and consistent resources to the maintenance of Django. As Django has matured, the DSF has been able to fundraise and earmark funds for this vital role. As a result, the DSF currently supports two Fellows - Carlton and Mariusz Felisiak. With the departure of Carlton, the Django Software Foundation is announcing a call for Django Fellow applications. The new Fellow will work alongside Mariusz.
The position of Fellow is focused on maintenance and community support - the work that benefits most from constant, guaranteed attention rather than volunteer-only efforts. In particular, the duties include:
- Answering contributor questions on the Django Forum and the django-developers mailing list
- Helping new Django contributors land patches and learn our philosophy
- Monitoring the security@djangoproject.com email alias and ensuring security issues are acknowledged and responded to promptly
- Fixing release blockers and helping to ensure timely releases
- Fixing severe bugs and helping to backport fixes for these and for security issues
- Reviewing and merging pull requests
- Triaging tickets on Trac
Being a Django contributor isn't a prerequisite for this position — we can help get you up to speed. We'll consider applications from anyone with a proven history of working with either the Django community or another similar open-source community. Geographical location isn't important either - we have several methods of remote communication and coordination that we can use depending on the timezone difference to the supervising members of Django.
If you're interested in applying for the position, please email us describing why you would be a good fit along with details of your relevant experience and community involvement. Also, please include the amount of time each week you'd like to dedicate to the position (a minimum of 20 hours a week), your preferred hourly rate, and when you'd like to start working. Lastly, please include at least one recommendation.
Applicants will be evaluated based on the following criteria:
- Details of Django and/or other open-source contributions
- Details of community support in general
- Understanding of the position
- Clarity, formality and precision of communications
- Strength of recommendation(s)
Applications will be open until 1200 AoE, February 28, 2023, with the expectation that the successful candidate will be notified around March 15, 2023.