FLOSS Project Planets
Real Python: Summing Values the Pythonic Way With sum()
Python’s built-in function sum() is an efficient and Pythonic way to sum a list of numeric values. Adding several numbers together is a common intermediate step in many computations, so sum() is a pretty handy tool for a Python programmer.
As an additional and interesting use case, you can concatenate lists and tuples using sum(), which can be convenient when you need to flatten a list of lists.
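As a quick, hedged illustration (this snippet is not from the course itself), sum() handles both numeric totals and sequence concatenation via its optional start argument:

numbers = [1, 2, 3, 4, 5]
print(sum(numbers))        # 15
print(sum(numbers, 100))   # 115 -- the optional start value is added to the total

matrix = [[1, 2], [3, 4], [5, 6]]
print(sum(matrix, []))     # [1, 2, 3, 4, 5, 6] -- list concatenation via start=[]

For large lists of lists, itertools.chain.from_iterable() is usually the faster way to flatten, which is the kind of trade-off the course weighs when comparing sum() with alternative tools.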
In this video course, you’ll learn how to:
- Sum numeric values by hand using general techniques and tools
- Use Python’s sum() to add several numeric values efficiently
- Concatenate lists and tuples with sum()
- Use sum() to approach common summation problems
- Use appropriate values for the arguments in sum()
- Decide between sum() and alternative tools to sum and concatenate objects
This knowledge will help you efficiently approach and solve summation problems in your code using either sum() or other alternative and specialized tools.
Fixing QtKeychain freezing on Apple devices
Matt Glaman: Using the bundle specific list cache tag for entity types
In a previous blog post, I explained the list cache tag for entity types, which you use when displaying a list of entities. This cache tag ensures that appropriate render caches and response caches are invalidated whenever a new entity is created, or an existing one is saved. One problem is that the {ENTITY_TYPE}_list cache tag is generic for all entities of that entity type. Invalidating it can cause a lot of cache churn for a large site with heavy activity.
What problem does a bundle-specific list cache tag solve?

Imagine a simplistic site with two content types: "Pages" (page) and "Blog posts" (blog_post). We will also have the following assumptions:
Python for Beginners: Convert String to DataFrame in Python
We use strings for text manipulation in Python. On the other hand, we use dataframes to handle tabular data in Python. Despite this dissimilarity, we may need to convert a string to a pandas dataframe. This article discusses different ways to convert a string to a dataframe in Python.
Table of Contents

- Convert String to DataFrame in Python
- Convert String to DataFrame Column
- JSON to Pandas DataFrame in Python
- Create DataFrame From Dictionary String in Python
- List String to DataFrame in Python
- Conclusion
To convert a string into a dataframe of characters in Python, we will first convert the string into a list of characters using the list() function. The list() function takes the string as its input argument and returns a list of characters.
Next, we will pass this list to the DataFrame() function to create a dataframe using all the characters of the string. You can observe this in the following example.
import pandas as pd myStr="PFB" print("The string is:") print(myStr) myList=list(myStr) df=pd.DataFrame(myList) print("The output dataframe is:") print(df)Output:
The string is:
PFB
The output dataframe is:
   0
0  P
1  F
2  B

In the above example, we first converted the string "PFB" to a list of characters. Then, we used the DataFrame() function to create a dataframe from the list of characters.
Convert String to DataFrame Column

If you want to convert a string to a dataframe column, you can use the columns parameter in the DataFrame() function. When we pass a list of strings to the columns parameter in the DataFrame() function, the newly created dataframe contains all the strings as its column names.
To create a dataframe column from a string, we will first put the string into a list. Then, we will pass the list to the columns parameter in the DataFrame() function. After executing the DataFrame() function, we will get the dataframe with the given string as its column name as shown in the following example.
import pandas as pd myStr="PFB" print("The string is:") print(myStr) df=pd.DataFrame(columns=[myStr]) print("The output dataframe is:") print(df)Output:
The string is:
PFB
The output dataframe is:
Empty DataFrame
Columns: [PFB]
Index: []

In this example, you can observe that the string "PFB" is converted to a column of the output dataframe. This is because we assigned the list containing the string to the columns parameter as an input argument.
JSON to Pandas DataFrame in Python

JSON strings are used to store and transmit data in software systems. Sometimes, we might need to convert a json string to a dataframe in Python. For this, we will use the following steps.
- First, we will convert the json string to a Python dictionary using the loads() method defined in the json module. The loads() method takes the json string as its input argument and returns the corresponding Python dictionary.
- Next, we will put the dictionary into a list. After that, we will pass the list to the DataFrame() function as input.
After execution of the DataFrame() function, we will get the dataframe created from the json string. You can observe this in the following example.
import pandas as pd
import json
jsonStr='{"firstName": "John", "lastName": "Doe", "email": "john.doe@example.com", "age": 32}'
print("The json string is:")
print(jsonStr)
myDict=json.loads(jsonStr)
df=pd.DataFrame([myDict])
print("The output dataframe is:")
print(df)

Output:
The json string is:
{"firstName": "John", "lastName": "Doe", "email": "john.doe@example.com", "age": 32}
The output dataframe is:
  firstName lastName                 email  age
0      John      Doe  john.doe@example.com   32

Create DataFrame From Dictionary String in Python

To create a dataframe from a dictionary string, we will use the eval() function. The eval() function is used to evaluate expressions in Python. When we pass a string containing a dictionary to the eval() function, it returns a Python dictionary.
After creating the dictionary, we will put it into a list and pass it to the DataFrame() function. After executing the DataFrame() function, we will get the output dataframe as shown below.
import pandas as pd
dictStr='{"firstName": "John", "lastName": "Doe", "email": "john.doe@example.com", "age": 32}'
print("The dictionary string is:")
print(dictStr)
myDict=eval(dictStr)
df=pd.DataFrame([myDict])
print("The output dataframe is:")
print(df)

Output:
The dictionary string is:
{"firstName": "John", "lastName": "Doe", "email": "john.doe@example.com", "age": 32}
The output dataframe is:
  firstName lastName                 email  age
0      John      Doe  john.doe@example.com   32

In this example, we first converted the dictionary string into a dictionary. Then, we inserted the dictionary into a list. Finally, we converted the list of dictionaries to a dataframe using the DataFrame() function.
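A note on safety (not part of the original example): eval() will execute arbitrary Python code, so it should not be used on untrusted strings. For dictionary literals, the standard library's ast.literal_eval() is a safer alternative, as in this sketch:

import ast
import pandas as pd

dictStr='{"firstName": "John", "lastName": "Doe", "email": "john.doe@example.com", "age": 32}'
myDict=ast.literal_eval(dictStr)  # parses Python literals only; rejects arbitrary code
df=pd.DataFrame([myDict])
print(df)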
List String to DataFrame in Python

Instead of a dictionary string, you can also convert a list string to a dataframe using the eval() function and the DataFrame() function, as shown in the following example.
import pandas as pd
listStr='[1,22,333,4444,55555]'
print("The list string is:")
print(listStr)
myList=eval(listStr)
df=pd.DataFrame([myList])
print("The output dataframe is:")
print(df)

Output:
The list string is:
[1,22,333,4444,55555]
The output dataframe is:
   0   1    2     3      4
0  1  22  333  4444  55555

Conclusion

In this article, we discussed different ways to convert a string to a dataframe in Python. To learn more about Python programming, you can read this article on how to convert a pandas series to a dataframe. You might also like this article on how to iterate rows in a pandas dataframe.
I hope you enjoyed reading this article. Stay tuned for more informative articles.
Happy learning!
The post Convert String to DataFrame in Python appeared first on PythonForBeginners.com.
LabPlot 2.10
Today we are announcing the availability of the latest release of LabPlot: Say hello to LabPlot 2.10!
This release comes with many new features, improvements and performance optimizations in different areas, as well as with support for new data formats and visualization types.
The major new features are introduced below. For a more detailed review of the changes in this new release, please refer to the ChangeLog file.
The source code of LabPlot, the Flatpak and Snap packages for Linux, as well as the installer for Windows and the image for macOS are available from our download page.
What’s new in 2.10?

Worksheet

LabPlot’s worksheet comes with new visualizations and more advanced plots, including:
- Bar plots
- Plot templates that allow you to save and re-use custom plot configurations
- Error bars in histograms
- Rug plots for box plots and histograms
- More flexible and customizable box plots
- Reference ranges, that is, custom areas on the plot to highlight x- and y-ranges
- LaTeX error messages in text labels when rendering with LaTeX
The Spreadsheets gain more functions and operations to modify and generate data:
- Data sampling (random and periodic methods)
- Data flattening to convert pivoted data to column-based format
- Baseline subtraction using the arPLS algorithm
- Heat-map formatting for categorical data in text columns
- Column statistics for text columns, including the frequency table, bar and Pareto plots
- Functions to access arbitrary cells of columns with cell (f(index), g(column, ..))
- Functions to work with column statistics (size, mean, stddev, etc.)
- Ability to specify the seed number when generating random numbers
The new analysis tools added to LabPlot 2.10 include:
- Maximum likelihood estimation for several distributions
- Guess start values of fit parameters for polynomial models by linear regression
- Export the results of a computation to a new spreadsheet
- Fourier filtering for DateTime data
LabPlot 2.10 adds support for new file formats and multiple optimizations to improve import performance:
- Import of Excel .xlsx files
- Export spreadsheet and matrices to Excel .xlsx format
- Import of Binary Log File (BLF) files from Vektor Informatik
- HDF5: support VLEN data import
- Reduced memory consumption when importing from a database table into a spreadsheet
- Reduced memory consumption during the spreadsheet export to SQLite databases
- Faster import of files with a large number of columns
The 2.10 release improves the variable panel and plot export:
- Show the type of a variable, its size (in Bytes), and its dimension (number of rows and columns) for backends that provide this information
- Properly show the values of Octave’s row vectors and matrices
- Allow to copy variable names and values to the clipboard
- Export plot results to vector graphic formats (PDF and SVG)
Bálint Réczey: Building the Linux kernel under 10 seconds with Firebuild
Russell published an interesting post about his first experience with Firebuild accelerating refpolicy‘s and the Linux kernel‘s build. It turned out a few small tweaks could accelerate the builds even more, crossing the 10 second barrier with Linux’s build.
Build performance with 18 cores

The Linux kernel’s build time is a widely used benchmark for compilers, making it a prime candidate to test a build accelerator as well. In the first run on Russell’s 18 core test system the observed user+sys CPU time was cut by 44%, but with an actual increase in wall clock time, which was quite unusual. Firebuild performed much better than that in prior tests. To replicate the results I’ve set up a clean Debian Bookworm VM on my machine:
lxc launch images:debian/bookworm --vm -c limits.cpu=18 -c limits.memory=16GB bookworm-vm

Compiling Linux 6.1.10 in this clean Debian VM showed build times closer to what I expected to see, ~72% less wall clock time and ~97% less user+sys CPU time:
$ make defconfig && time make bzImage -j18

real    1m31.157s
user    20m54.256s
sys     2m25.986s

$ make defconfig && time firebuild make bzImage -j18

# first run:
real    2m3.948s
user    21m28.845s
sys     4m16.526s

# second run
real    0m25.783s
user    0m56.618s
sys     0m21.622s

There are multiple differences between Russell’s and my test system including having different CPUs (E5-2696v3 vs. virtualized Ryzen 5900X) and different file systems (BTRFS RAID-1 vs ext4), but I don’t think those could explain the observed mismatch in performance. The difference may be worth further analysis, but let’s get back to squeezing out more performance from Firebuild.
Firebuild was developed on Ubuntu. I was wondering if Firebuild was faster there, but I got only slightly better build times in an identical VM running Ubuntu 22.10 (Kinetic Kudu):
$ make defconfig && time make bzImage -j18

real    1m31.130s
user    20m52.930s
sys     2m12.294s

$ make defconfig && time firebuild make bzImage -j18

# first run:
real    2m3.274s
user    21m18.810s
sys     3m45.351s

# second run
real    0m25.532s
user    0m53.087s
sys     0m18.578s

The KVM virtualization certainly introduces an overhead, thus builds must be faster in LXC containers. Indeed, all builds are faster by a few percent:
$ lxc launch ubuntu:kinetic kinetic-container
...
$ make defconfig && time make bzImage -j18

real    1m27.462s
user    20m25.190s
sys     2m13.014s

$ make defconfig && time firebuild make bzImage -j18

# first run:
real    1m53.253s
user    21m42.730s
sys     3m41.067s

# second run
real    0m24.702s
user    0m49.120s
sys     0m16.840s

# Cache size: 1.85 GB

Apparently this ~72% reduction in wall clock time is what one should expect by simply prefixing the build command with firebuild on a similar configuration, but we should not stop here. Firebuild does not accelerate quicker commands by default, to save cache space. This howto suggests letting firebuild accelerate all commands, including even "sh", by passing "-o 'processes.skip_cache = []'" to firebuild.
Accelerating all commands in this build’s case increases cache size by only 9%, and increases the wall clock time saving to 91%, not only making the build more than 10X faster, but finishing it in less than 8 seconds, which may be a new world record!:
$ make defconfig && time firebuild -o 'processes.skip_cache = []' make bzImage -j18

# first run:
real    1m54.937s
user    21m35.988s
sys     3m42.335s

# second run
real    0m7.861s
user    0m15.332s
sys     0m7.707s

# Cache size: 2.02 GB

There are even faster CPUs on the market than this 5900X. If you happen to have access to one, please leave a comment if you could go below 5 seconds!
Scaling to higher core counts and comparison with ccache

Russell raised the very valid point about Firebuild’s single threaded supervisor being a bottleneck on high core systems, and comparison to ccache also came up in the comments. Since ccache does not have a central supervisor it could scale better with more cores, but let’s see if ccache could go below 10 seconds with the build times…
firebuild -o ‘processes.skip_cache = []’ and ccache scaling to 24 cores

Well, no. The best time for ccache is 18.81s, with -j24. Both firebuild and ccache keep gaining from extra cores up to 8 cores, but beyond that the wall clock time improvements diminish. The more interesting difference is that firebuild’s user and sys time is basically constant from -j1 to -j24, similarly to ccache’s user time, but ccache’s sys time increases linearly or exponentially with the number of used cores. I suspect this is due to the many parallel ccache processes performing file operations to check if cache entries could be reused, while in firebuild’s case the supervisor performs most of that work – not requiring in-kernel synchronization across multiple cores.
It is true that the single threaded firebuild supervisor is a bottleneck, but the supervisor also implements a central filesystem cache, thus checking if a command’s cache entry can be reused can be implemented with far fewer system calls and much less user space hashing, making the architecture more efficient overall than ccache’s.
The beauty of Firebuild is not being faster than ccache, but being faster than ccache with basically no hard-coded information about how C compilers work. It can accelerate any other compiler or program that generates deterministic output from its input, just by observing what they did in their prior runs. It is like having ccache for every compiler including in-house developed ones, and also for random slow scripts.
Specbee: What You Need To Know About Continuous Integration and Testing in Drupal
Drupal is a rapidly growing content management system (CMS). It has 1.3 million users, and that number is increasing daily. The platform helps in creating different websites, intranets, and web applications. Drupal is widely used because it integrates with Continuous Integration and Continuous Testing (CI/CT) tools, which bring numerous benefits.
This blog will discuss everything about CI/CT and Drupal.
Importance of Continuous Integration

Continuous testing ensures that the testing process is easy and automatic, while continuous integration merges code changes into a shared repository, addressing issues early in the development process and making it easier to find and remove bugs from the software.
Integration is a very important part of the software development process. Here, members of the team have to perform multiple integrations every day. An automated build is used to check those integrations, and it includes tests for detecting integration errors faster.
CI helps in testing, reviewing, and integrating the changes into the codebase more quickly and efficiently. Working on isolated code branches can cause several issues. CI prevents those issues and reduces the risk of a merge conflict.
Benefits of Continuous Integration

Continuous Integration is used in Drupal development for a variety of reasons. Some of them are given below.
The key benefits of Using Continuous Integration are:
● Build Automation and Self-testing
Automated environments help in building and launching the system with a single command, while self-testing makes detecting and eradicating bugs much easier.
● Daily Commits and Integration machine
Developers are encouraged to commit to the mainline every day. This way, build tests run immediately and correct code is produced. Integration machines require regular builds and successful build integration.
● Immediate Fix of broken builds and rapid feedback
Continuous build is done to fix the issues in the mainline build immediately. Also, it is necessary to keep the build fast and provide rapid feedback.
● State of the system and Deployment automation
The working of the system should be visible to everyone. The alterations that have been made must be visible to every team member. Deployment automation requires the testers and developers to have scripts. These scripts will help them deploy the application easily into different environments.
How Does Continuous Integration Work?

There are several steps that developers need to follow for successful integration. Changes must be committed to the repository, and the code must be thoroughly checked. Developers should first review the code in their private workspaces.
A CI server is used to check the changes and build the system. The server runs unit and integration tests and alerts the team members if the build or tests fail. The team members fix the issue and continue to test and integrate the project.
The four key steps of CI are code, build, test, and deploy.
- Developers write code and commit changes to a shared code repository.
- A CI server monitors the code repository for changes, and when changes are detected, the server checks out the latest code and builds the software.
- The CI server runs automated tests on the built software to verify that the code changes have introduced no bugs or broken any existing functionality.
- If the tests pass, the CI server may deploy the code changes to a staging or production environment, depending on the organization's release process.
CI typically involves using a version control system (such as Git or SVN) to manage code changes and a build server (such as Jenkins, Travis CI, or CircleCI) to build and test the code changes. Automated testing is critical to CI, enabling developers to quickly catch and fix bugs introduced by code changes.
By catching problems early in the development process, CI can help teams to reduce the time and cost of software development while also improving the quality and reliability of the software being produced.
What Are The Continuous Integration Tools?

Many Continuous Integration (CI) tools are available, each with strengths and weaknesses. Here are some of the most popular CI tools used by software development teams:
● Jenkins
This is a popular open-source CI tool with a large user community. It can be easily customized with plugins and has integrations with many other tools.
● Travis CI
This cloud-based CI tool is popular for its ease of use and seamless integration with GitHub.
● CircleCI
This cloud-based CI tool is popular for its speed and scalability. It also integrates with many other tools, such as Docker and AWS.
● GitLab CI/CD
This is a built-in CI/CD tool within GitLab, a popular Git repository management system. It is open source and has integrations with many other tools.
● Bamboo
This is a CI/CD tool from Atlassian, the makers of JIRA and Confluence. It has integrations with many other Atlassian tools, as well as other third-party tools.
● TeamCity
This is a CI tool from JetBrains, the makers of IntelliJ IDEA, and other IDEs. Its adaptability and simplicity make it appealing.
● Azure DevOps
This is a cloud-based CI/CD tool from Microsoft. It integrates with many other Microsoft tools, such as Visual Studio and GitHub.
These are just a few of the many CI tools available. When choosing a CI tool, it's important to consider factors such as ease of use, integrations with other tools, cost, and the size and complexity of the development team.
Key Practices That Form An Effective Continuous Integration

Here are some key practices that form an effective Continuous Integration (CI) process:
Version Control
A CI process starts with version control, essential for managing code changes, resolving conflicts, and collaborating effectively. Git, SVN, and Mercurial are popular version control systems.
Automated Build
In a CI process, code is committed to the version control system frequently, and each commit triggers an automated build process to compile and package the code. This ensures that the code builds successfully and eliminates manual errors.
Automated Testing
Automated testing is a critical component of a CI process. Tests should be automated so that they can be run every time code is committed, and they should cover both functional and non-functional aspects of the application.
Continuous Feedback
CI provides continuous feedback to developers through automated build and test processes. Any issues or failures should be identified and reported immediately to be addressed promptly.
Continuous Deployment
Automated deployment can help reduce the time to get code into production and ensure that the deployment process is consistent and reliable.
Continuous Improvement

A CI process should be constantly monitored and improved. This includes reviewing build and test results, identifying and addressing issues, and optimizing the process to make it faster and more effective.
Effective communication and collaboration among team members are essential for a successful CI process. Developers, testers, and operations personnel should work together closely to identify issues and resolve them quickly.
By following these key practices, teams can implement an effective CI process that helps to ensure high-quality software development and deployment.
What Is Continuous Integration For Drupal?

Continuous integration (CI) for Drupal involves regularly integrating code changes from multiple developers into a shared code repository, building and testing the code changes, and automatically deploying the changes to a testing or staging environment.
Here are some of the key benefits of implementing CI for Drupal:
● Reduced risk
By regularly integrating and testing code changes, CI can help catch and fix errors early in the development cycle, reducing the risk of introducing bugs or breaking functionality.
● Improved collaboration
Developers can collaborate more easily and effectively by working from a shared code repository.
● Faster feedback
With automated testing, developers can get feedback on their code changes quickly, enabling them to make corrections and improvements more rapidly. Different cloud-based testing platforms like LambdaTest can help you achieve faster feedback on code commits and get a quicker go-to-market.
LambdaTest is a digital experience testing cloud that allows organizations and enterprises to perform manual and automated testing for web and mobile. It offers different offerings like real-time testing, Selenium testing, Cypress testing, Appium testing, OTT testing, testing on real device cloud, and more.
LambdaTest’s online device farm lets you test at scale across 3000+ real browsers, devices, and OS combinations. It also integrates with many CI/CD tools like Jenkins, CircleCI, and Travis CI.
● Consistency
By using consistent tools and processes for development, testing, and deployment, teams can ensure that all code changes are properly vetted and tested before they are deployed to production.
Implementing CI and Testing In Drupal

Like many web application frameworks, Drupal can benefit from continuous integration (CI) and testing practices. Here are some general steps that can be taken to implement CI and testing in Drupal:
- Set up a version control system (VCS) such as Git or SVN to manage code changes. All developers should be encouraged to commit their changes to the VCS regularly.
- Use a continuous integration (CI) tool such as Jenkins, Travis CI, or CircleCI to automatically build and test Drupal code changes whenever they are committed to the VCS.
- Write automated Drupal tests using a framework like PHPUnit or Behat. Tests should cover both functional and non-functional aspects of the application.
- Configure the CI tool to run automated tests whenever new code changes are detected. If any tests fail, developers should be notified immediately so they can fix the issue.
- Use tools like CodeSniffer and PHPMD to check for violations of coding standards and best practices.
- Consider using tools like Docker or Vagrant to help automate the setup and configuration of development environments and ensure consistency across development, testing, and production environments.
- There are also contributed modules available for Drupal that can help with testing, such as SimpleTest or the Drupal Extension for Behat.
To implement CI for Drupal, development teams can use various tools like Jenkins, Travis CI, or CircleCI and write automated tests using a testing framework such as PHPUnit or Behat. They can also use tools like Docker or Vagrant to help automate the setup and configuration of development environments and ensure consistency across development, testing, and production environments.
Additionally, contributed Drupal modules are available, such as SimpleTest or the Drupal Extension for Behat, which can help test Drupal-specific functionality. By implementing continuous integration and testing practices in Drupal, developers can catch and fix issues early in the development process, leading to faster, higher-quality development and Deployment.
Guest Author: Shubham Gaur
Shubham Gaur is a freelance writer who writes on the fundamentals and trends of Software testing. With more than 5 years of experience in writing on different technologies, he explains complex and technical testing subjects in a comprehensive language.
Python Bytes: #328 We are going to need some context here
Axelerant Blog: What Is Salesforce Integration? An Introduction
The Salesforce platform makes creating engaging customer and employee experiences with third-party data integrations easier. Experts can combine a composable architecture with building a unified view of all customers. When used strategically, robust tools and powerful APIs can dramatically reduce integration time and unlock modernized back-office systems.
Codementor: Python Position and Keyword Only Arguments
GitHub 2FA
GitHub is rolling out 2FA, and Calamares is one of the repositories I maintain there. Calamares seems like kind-of-important infrastructure for some things (e.g. Microsoft’s own Linux distro). Enabling 2FA was remarkably painless because I already had a bunch of 2FA stuff set up for KDE’s Invent. Invent is a GitLab instance and all-round more pleasant, frankly. Enabling 2FA was funny because the first thing FreeOTP (the 2FA authenticator I use) said was “weak crypto settings” when scanning the GitHub QR code. Good job, folks.
So Calamares is still on GitHub. Thanks to Kevin I’m reminded that GH is like an addiction. Also that there have been calls to leave GH for years. As a maintainer-with-no-time of a repo, there are still no concrete plans to move. KDE Invent still seems like a non-starter because of translations workflow.
Anyway, rest assured that the Calamares repo is now 2FA-safe. And that a 3.3 release will happen someday.
FSF Blogs: From Freedom Trail to free boot and free farms: Charting the course at LibrePlanet day two
Django Weblog: Want to host DjangoCon Europe 2024?
DjangoCon Europe 2023 will be held May 29th-June 2nd in Edinburgh, Scotland, but we're already looking ahead to next year's conference. Could your town - or your football stadium, circus tent, private island or city hall - host this wonderful community event?
Hosting a DjangoCon is an ambitious undertaking. It's hard work, but each year it has been successfully run by a team of community volunteers, not all of whom have had previous experience - more important is enthusiasm, organisational skills, the ability to plan and manage budgets, time and people - and plenty of time to invest in the project.
You'll find plenty of support on offer from previous DjangoCon organisers, so you won't be on your own.
How to apply

If you're interested, we'd love to hear from you. Following the established tradition, the selected hosts will be announced at this year's DjangoCon by last year's organiser. The conference dates must fall more than one month from DjangoCon US, PyCon US, and EuroPython in the same calendar year. In order to make the announcement at DjangoCon Europe we will need to receive your proposal by May 10.
The more detailed and complete your proposal, the better. Things you should consider, and that we'd like to know about, are:
- dates: ideally between mid May and mid June 2024
- numbers of attendees
- venue(s)
- accommodation
- transport links
- budgets and ticket prices
- committee members
We'd like to see:
- timelines
- pictures
- prices
- draft agreements with providers
- alternatives you have considered
Email your proposals to djangocon-europe-2024-proposals at djangoproject dot com. They will all help show that your plans are serious and thorough and that you have the organisational capacity to make it a success.
We will be hosting a virtual informational session for those who are interested, or may be interested, in organising a DjangoCon. Please indicate your interest here.
If you have any questions or concerns about organising a DjangoCon you can just drop us a line.
Talking Drupal: Talking Drupal #391 - Building Your Career
Today we are talking about Building Your Career with Mike Anello.
For show notes visit: www.talkingDrupal.com/391
Topics

- How we started our careers
- Broad career opportunities
- Mentorship
- Roles
- First step after graduating
- First step in switching
- Common hurdles
- Resources like Drupal Easy
- Value of a career in Drupal
- How do you find jobs
- How do you build and maintain your Drupal career
- How about your Drupal resume
- Any advice
- Drupal easy
- Drupal Jobs
- Kint snippet
Mike Anello - Drupal Easy @ultimike
Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan
John Picozzi - www.epam.com @johnpicozzi
Jacob Rockowitz - www.jrockowitz.com @jrockowitz
MOTW Correspondent

Martin Anderson-Clutz - @mandclu
Devel Debug Log - Allows developers to inspect the contents of variables. If those are classes, you can inspect nested properties and all available methods.
The Drop Times: To Become a Hedgehog
Last week, TheDropTimes (TDT) was able to publish two interviews. In one of those interviews, Holmes Consulting Group founder Robbie Holmes mentioned a concept.
Many management professionals might know and practice it. But for me, it was new. I am not a management guy, and such concepts seldom graced my reading list. Listening to what others say has helped me, and I can also say the same about watching Alethia’s interview with Robbie.
The concept he shared is not new. Isaiah Berlin proposed it in his 1953 essay, ‘The Hedgehog and the Fox: An Essay on Tolstoy’s View of History’; later, Jim Collins developed it in his book, ‘Good to Great: Why Some Companies Make the Leap, and Others Don’t.’ The core theme of this book is that greatness is not primarily a function of circumstance but largely a matter of conscious choice and discipline.
How Jim Collins describes the hedgehog concept intrigued me. He begins with Berlin’s adaptation of the ancient Greek parable, “The fox knows many things, but the hedgehog knows one big thing.” Jim tries to teach us how to find that one big thing. It lies where three thought circles intersect:
- What you are deeply passionate about.
- What you can be the best in the world at.
- What drives your economic or resource engine.
Jim explains that transformation from good to great comes about by a series of good decisions made consistently with a Hedgehog Concept, supremely well executed, accumulating one upon another over a long period.
Pardon my audacity in pushing this concept again. But what I saw after going through it is that we at TDT can excel in creating more and more good-to-great interviews with the fantastic people working around Drupal and related projects with your active help. Also, we urge the Drupal agencies to find their one big thing and excel in it.
As I mentioned, you can watch our interview video with Robbie Holmes here. The other interview we published last week was with Chris Wells, the co-lead of Project Browser Initiative. Chris is the founder of Redfin Solutions. You can read the interview here. We made both conversations as part of DrupalCamp NJ.
As for other stories from last week, here is a comprehensive list:
Drupal Developer Days Vienna has started accepting session proposals. MidCamp is happening next month, and here is how you can help organize the camp. OpenSource North Conference has announced the lineup of speakers. Drupal Netherlands opened the sale of early bird tickets for Drupaljam 2023 in June. You may submit sessions to DrupalCon Lille until April 24. Drupal Camping Wolfsburg treats all sponsors as gold sponsors. DrupalSouth Wellington has put out a call for volunteers. You can submit sessions for DrupalCamp Asheville 2023 until April 25. Both DrupalCamp NJ and NERD Summit are over. DrupalCon Pittsburgh is looking for a launch sponsor or co-working space sponsor.
Drupal Community Working Group has asked Drupalers to nominate candidates for Aaron Winborn Award 2023, and you have only five more days to do that. Kanopi and Pantheon have announced a joint webinar on Drupal 7 to 10 migration. Salsa Digital has started a blog series on ‘Rules as code insights.’ SFDUG is hosting a Technical Writing Workshop on April 13. We revisited a blog post from HTML Panda from May 2022, comparing WordPress and Drupal. A Drupal distribution focussed on the publishing industry, ‘Thunder CMS 7’ based on Drupal 10, published its beta release. Von Eaton, Director of Programs in Drupal Association, addressed the ‘Back to Work for Women’ program conducted by ICFOSS and supported by Zyxware.
That is for the week, folks; thank you.
Sincerely,
Sebin A. Jacob
Editor-in-Chief
coreutils @ Savannah: coreutils-9.2 released [stable]
This is to announce coreutils-9.2, a stable release.
See the NEWS below for a brief summary.
There have been 209 commits by 14 people in the 48 weeks since 9.1.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Arsen Arsenović (1) Jim Meyering (7)
Bernhard Voelker (3) Paul Eggert (90)
Bruno Haible (1) Pierre Marsais (1)
Carl Edquist (2) Pádraig Brady (98)
ChuanGang Jiang (2) Rasmus Villemoes (1)
Dennis Williamson (1) Stefan Kangas (1)
Ivan Radić (1) Álvar Ibeas (1)
Pádraig [on behalf of the coreutils maintainers]
==================================================================
Here is the GNU coreutils home page:
http://gnu.org/s/coreutils/
For a summary of changes and contributors, see:
http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.2
or run this command from a git-cloned coreutils directory:
git shortlog v9.1..v9.2
To summarize the 665 gnulib-related changes, run these commands
from a git-cloned coreutils directory:
git checkout v9.2
git submodule summary v9.1
==================================================================
Here are the compressed sources:
https://ftp.gnu.org/gnu/coreutils/coreutils-9.2.tar.gz (14MB)
https://ftp.gnu.org/gnu/coreutils/coreutils-9.2.tar.xz (5.6MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/coreutils/coreutils-9.2.tar.gz.sig
https://ftp.gnu.org/gnu/coreutils/coreutils-9.2.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
6afa9ce3729afc82965a33d02ad585d1571cdeef coreutils-9.2.tar.gz
ebWNqhmcY84g95GRF3NLISOUnJLReVZPkI4yiQFZzUg= coreutils-9.2.tar.gz
3769071b357890dc36d820c597c1c626a1073fcb coreutils-9.2.tar.xz
aIX/R7nNshHeR9NowXhT9Abar5ixSKrs3xDeKcwEsLM= coreutils-9.2.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify coreutils-9.2.tar.xz.sig
The signature should match the fingerprint of the following key:
pub rsa4096 2011-09-23 [SC]
6C37 DC12 121A 5006 BC1D B804 DF6F D971 3060 37D9
uid [ unknown] Pádraig Brady <P@draigBrady.com>
uid [ unknown] Pádraig Brady <pixelbeat@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key P@draigBrady.com
gpg --recv-keys DF6FD971306037D9
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify coreutils-9.2.tar.gz.sig
This release was bootstrapped with the following tools:
Autoconf 2.71
Automake 1.16.5
Gnulib v0.1-5857-gf17d397771
Bison 3.8.2
==================================================================
NEWS
* Noteworthy changes in release 9.2 (2023-03-20) [stable]
** Bug fixes
'comm --output-delimiter="" --total' now delimits columns in the total
line with the NUL character, consistent with NUL column delimiters in
the rest of the output. Previously no delimiters were used for the
total line in this case.
[bug introduced with the --total option in coreutils-8.26]
'cp -p' no longer has a security hole when cloning into a dangling
symbolic link on macOS 10.12 and later.
[bug introduced in coreutils-9.1]
'cp -rx / /mnt' no longer complains "cannot create directory /mnt/".
[bug introduced in coreutils-9.1]
cp, mv, and install avoid allocating too much memory, and possibly
triggering "memory exhausted" failures, on file systems like ZFS,
which can return varied file system I/O block size values for files.
[bug introduced in coreutils-6.0]
cp, mv, and install now immediately acknowledge transient errors
when creating copy-on-write or cloned reflink files, on supporting
file systems like XFS, BTRFS, APFS, etc.
Previously they would have tried again with other copy methods
which may have resulted in data corruption.
[bug introduced in coreutils-7.5 and enabled by default in coreutils-9.0]
cp, mv, and install now handle ENOENT failures across CIFS file systems,
falling back from copy_file_range to a better supported standard copy.
[issue introduced in coreutils-9.0]
'mv --backup=simple f d/' no longer mistakenly backs up d/f to f~.
[bug introduced in coreutils-9.1]
rm now fails gracefully when memory is exhausted.
Previously it may have aborted with a failed assertion in some cases.
[This bug was present in "the beginning".]
rm -d (--dir) now properly handles unreadable empty directories.
E.g., before, this would fail to remove d: mkdir -m0 d; src/rm -d d
[bug introduced in v8.19 with the addition of this option]
runcon --compute no longer looks up the specified command in the $PATH
so that there is no mismatch between the inspected and executed file.
[bug introduced when runcon was introduced in coreutils-6.9.90]
'sort -g' no longer infloops when given multiple NaNs on platforms
like x86_64 where 'long double' has padding bits in memory.
Although the fix alters sort -g's NaN ordering, that ordering has
long been documented to be platform-dependent.
[bug introduced 1999-05-02 and only partly fixed in coreutils-8.14]
stty ispeed and ospeed options no longer accept and silently ignore
invalid speed arguments, or give false warnings for valid speeds.
Now they're validated against both the general accepted set,
and the system supported set of valid speeds.
[This bug was present in "the beginning".]
stty now wraps output appropriately for the terminal width.
Previously it may have output 1 character too wide for certain widths.
[bug introduced in coreutils-5.3]
tail --follow=name works again with non seekable files. Previously it
exited with an "Illegal seek" error when such a file was replaced.
[bug introduced in fileutils-4.1.6]
'wc -c' will again efficiently determine the size of large files
on all systems. It no longer redundantly reads data from certain
sized files larger than SIZE_MAX.
[bug introduced in coreutils-8.24]
** Changes in behavior
Programs now support the new Ronna (R), and Quetta (Q) SI prefixes,
corresponding to 10^27 and 10^30 respectively,
along with their binary counterparts Ri (2^90) and Qi (2^100).
In some cases (e.g., 'sort -h') these new prefixes simply work;
in others, where they exceed integer width limits, they now elicit
the same integer overflow diagnostics as other large prefixes.
'cp --reflink=always A B' no longer leaves behind a newly created
empty file B merely because copy-on-write clones are not supported.
'cp -n' and 'mv -n' now exit with nonzero status if they skip their
action because the destination exists, and likewise for 'cp -i',
'ln -i', and 'mv -i' when the user declines. (POSIX specifies this
for 'cp -i' and 'mv -i'.)
cp, mv, and install again read in multiples of the reported block size,
to support unusual devices that may have this constraint.
[behavior inadvertently changed in coreutils-7.2]
du --apparent now counts apparent sizes only of regular files and
symbolic links. POSIX does not specify the meaning of apparent
sizes (i.e., st_size) for other file types, and counting those sizes
could cause confusing and unwanted size mismatches.
'ls -v' and 'sort -V' go back to sorting ".0" before ".A",
reverting to the behavior in coreutils-9.0 and earlier.
This behavior is now documented.
ls --color now matches a file extension case sensitively
if there are different sequences defined for separate cases.
printf unicode \uNNNN, \UNNNNNNNN syntax, now supports all valid
unicode code points. Previously it was restricted to the C
universal character subset, which restricted most points <= 0x9F.
runcon now exits with status 125 for internal errors. Previously upon
internal errors it would exit with status 1, which was less distinguishable
from errors from the invoked command.
'split -n N' now splits more evenly when the input size is not a
multiple of N, by creating N output files whose sizes differ by at
most 1 byte. Formerly, it did this only when the input size was
less than N.
'stat -c %s' now prints sizes as unsigned, consistent with 'ls'.
** New Features
cksum now accepts the --base64 (-b) option to print base64-encoded
checksums. It also accepts/checks such checksums.
cksum now accepts the --raw option to output a raw binary checksum.
No file name or other information is output in this mode.
cp, mv, and install now accept the --debug option to
print details on how a file is being copied.
factor now accepts the --exponents (-h) option to print factors
in the form p^e, rather than repeating the prime p, e times.
ls now supports the --time=modification option, to explicitly
select the default mtime timestamp for display and sorting.
mv now supports the --no-copy option, which causes it to fail when
asked to move a file to a different file system.
split now accepts options like '-n SIZE' that exceed machine integer
range, when they can be implemented as if they were infinity.
split -n now accepts piped input even when not in round-robin mode,
by first copying input to a temporary file to determine its size.
wc now accepts the --total={auto,never,always,only} option
to give explicit control over when the total is output.
** Improvements
cp --sparse=auto (the default), mv, and install,
will use the copy_file_range syscall now also with sparse files.
This may be more efficient, by avoiding user space copies,
and possibly employing copy offloading or reflinking,
for the non sparse portion of such sparse files.
On macOS, cp creates a copy-on-write clone in more cases.
Previously cp would only do this when preserving mode and timestamps.
date --debug now diagnoses if multiple --date or --set options are
specified, as only the last specified is significant in that case.
rm outputs more accurate diagnostics in the presence of errors
when removing directories. For example EIO will be faithfully
diagnosed, rather than being conflated with ENOTEMPTY.
tail --follow=name now works with single non regular files even
when their modification time doesn't change when new data is available.
Previously tail would not show any new data in this case.
tee -p detects when all remaining outputs have become broken pipes, and
exits, rather than waiting for more input to induce an exit when written.
tee now handles non blocking outputs, which can be seen for example with
telnet or mpirun piping through tee to a terminal.
Previously tee could truncate data written to such an output and fail,
and also potentially output a "Resource temporarily unavailable" error.
Python Morsels: What is a context manager?
Context managers power Python's with blocks. They sandwich a code block between enter code and exit code. They're most often used for reusing common cleanup/teardown functionality.
Table of contents
- Files opened with with close automatically
- Context managers work in with statements
- Context managers are like a try-finally block
- Life without a context manager
- Using a with block requires a context manager
Context managers are objects that can be used in Python's with statements.
You'll often see with statements used when working with files in Python.
This code opens a file, uses the f variable to point to the file object, reads from the file, and then closes the file:
>>> with open("my_file.txt") as f: ... contents = f.read() ...Notice that we didn't explicitly tell Python to close our file.
But the file did close:
>>> f.closed
True

The file closed automatically when the with block was exited.
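Roughly speaking, the with block above behaves like the following try/finally sketch (not code from the excerpt; it assumes my_file.txt exists, just like the earlier example): the file is closed even if reading raises an exception.

f = open("my_file.txt")
try:
    contents = f.read()
finally:
    f.close()  # runs whether or not f.read() raised an exception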
Context managers work in with statements

Any object that can be …
Read the full article: https://www.pythonmorsels.com/what-is-a-context-manager/