Feeds
Drupal Association blog: Board Election 2023 Candidate: Esaya Jokonya
I started my career in technology in 2005 and since then I have dedicated myself to staying up to date on the latest information technology industry trends. Throughout my career, I have developed a keen interest in open source technologies, particularly Drupal and WordPress.
Throughout the years, I’ve leveraged my expertise in technology, design, and development to lead projects and configure custom solutions that solve challenging problems. I have managed hundreds of successful projects for clients, including small businesses.
I believe strongly in the power of collaboration and sharing. This is why I am heavily involved in the open-source community, leading development teams and contributing to projects.
My goal is to use my expertise and knowledge to help organizations become more successful, productive, and secure.
I strongly believe in creating an inclusive and equitable organizational culture where diversity, respect, and collaboration are celebrated. Everyone should have a seat at the table and a voice that is heard regardless of race, gender, age, religion, socioeconomic status, educational background, or any other attribute. In addition, I believe strongly in the importance of taking a stand against discrimination, bigotry, and prejudice. It is absolutely essential that we create a supportive and respectful environment in all facets of corporate and organizational life.
Why are you running for a board seat at the Drupal Association? (mission/motivation)
I am running for a board seat at the Drupal Association because I am passionate about advocating for diversity, inclusion, and equity in the realm of open source technology. As a member of a traditionally underrepresented group, I recognize the importance of speaking up and representing different demographics, backgrounds, and perspectives in all aspects of open source building. I believe that software should be crafted with a human touch, especially in terms of considering the full range of voices and life experiences users bring to their engagement with Drupal. I am keen to leverage my experience in the industry, working with organizations of various sizes, to foster equal opportunities for all and create a more diverse and equitable community. My presence on the board will be a testament to the value of the input of those who were traditionally overlooked in the past.
Why should members vote for you? (qualifications)
I am a dedicated professional committed to the success of the Drupal Association and the Drupal project. My experience in technology, software, hardware, telecommunications, public speaking and cloud computing gives me a broad understanding of the complex matters that the Drupal Association faces on a daily basis.
My commitment to the success of the Drupal Association and the Drupal Project is unwavering and my dedication to the work of the board is unparalleled. I understand the rights of all members and will be an active and respected member of the board. In addition, I have excellent communication and leadership skills to assist the board in making decisions and delegate the appropriate tasks and processes.
I am confident I can provide valuable insight and advice to the board, helping to ensure that the Drupal Association is successful in meeting its goals. I believe I would be an excellent choice for the board seat and I encourage all members to consider my candidacy when voting.
File attachments: Esaya.png

Drupal Association blog: Board Election 2023 Candidate: Fei Lauren
Since 2012 I have worked in technical roles including Front-End Developer, Tech Lead, and Dev Manager. I fell into Drupal because the possibilities felt endless, the community so capable and inspired. There was always more to learn and even early on, opportunities to give back. 2022 brought a career change for me and I am now a Scrum Master who focuses heavily on Continuous Process Improvement. The title may be different but ultimately, it's just the next step in a long journey through the world of Drupal.
When I am not working or volunteering, I love dismantling electronics and tinkering with Arduinos. One of my more interesting projects involved a mind-controlled RGB LED using tech from a children’s toy. My hope is to continue establishing myself in the community, but to also make time to work on personal projects like this on a more regular basis.
Why are you running for a board seat at the Drupal Association? (mission/motivation)
After a decade of working with Drupal, it was time for a change. The importance and value I place on community contribution and participation was a major factor in this decision. I wanted to find opportunities that were more aligned with both my personal values and those of the Drupal community.
In 2021, I signed up to volunteer for DrupalCon Portland helping with setup and Trivia Night. I spoke to nearly every single booth, went to BOFs, and met with initiative leads. By the time DrupalCon Pittsburgh rolled around this year, I had stepped into a role training as a Drupal Diversity and Inclusion lead, found a job that supports volunteer work and community contribution, worked with the EOWG writing a couple of interview articles about hybrid event organization, and helped organize DDI Camp 2022. For DrupalCon itself, I presented a session on neurodiversity, had the honor of participating in the Session Review Committee, and excitingly, was invited back as a Trivia Night Coordinator.
I wasn’t intending to take on anything more, but in the weeks leading up to DrupalCon, a painful rift emerged in our community that started with a LinkedIn post. As challenging as this topic has been, in some ways for me it has also provided a valuable gift of experience and knowledge. I did not show up in Pittsburgh feeling prepared to support the community as a DDI Lead, but over the course of the convention I had the opportunity to experience how deeply important and valuable this work is in a way I had not felt before. I decided to step into a more visible leadership role. It was the right decision and I am so glad I did.
Since then, I have continued to slowly build momentum in the community by promoting our Slack channel, connecting with other community contributors, and hosting video calls. This is just the beginning; I have so much more I would like to accomplish that feels very much in the realm of achievable. Even a very small change can sometimes have an important impact. I believe that there are times when we need to express our frustration or hurt and to feel seen. There are also times when we need to unify our voices and focus on healthy, achievable, positive change. The latter is where I have been very intentionally directing conversation because I do believe that this is what many parts of the community need right now. There will always be challenges and conflict, but strength and unity are how we endure and move forward.
I will continue forging strong relationships and building the DDI space no matter what, but my motivation for self-nominating is two-fold. First, to develop a deeper understanding of the challenges that exist in the broader community. To effectively advocate for change, one must understand where the needs of different groups align. Second, to help amplify the voices of marginalized populations that may lack representation so that we can strive to make informed and thoughtful decisions during strategic planning.
Why should members vote for you? (qualifications)
I am passionate about my values, which largely center around lifting up the people around me. As I write these words, I talk about people, empathy, and passion as if this comes naturally. The truth is that I am naturally a technical and analytical thinker who has worked very hard over the years to learn about people. This has become one of the greatest assets in my current work and may be my most powerful qualification for this role. Aside from this, some items from my CV include:
Technical achievements
- Tech lead and developer for Nestle for 2+ years
- Front End Dev Management experience, FE practice area leadership
- Agile Scrum & Iterative Process Improvement
- Post secondary: Comp Sci (Drupal-focused) and Graphic Design
- Curriculum development for an accredited training institution (still taught in Vancouver BC)
Community contributions
- DrupalCon speaking & volunteering
- Upcoming Oct 2023 JiraCon speaker
- EOWG author/contributor
- Community-driven initiative leadership (Drupal Diversity & Inclusion)
- DDI Camp visual design and event organization
Drupal Association blog: Board Election 2023 Candidate: Stephen Mustgrave
smustgrave
Why are you running for a board seat at the Drupal Association? (mission/motivation)
Been working with Drupal for almost 10 years now, but just got involved with the community in the last year and have loved it! Started in Bugsmash, helped start the Needs Review Queue initiative, and would like to see how I can continue to give back!
Why should members vote for you? (qualifications)
- Been heavily involved over the last year
- Active on Slack and trying to answer community threads
- Tapping into community frustrations and helping resolve them. I think this would be an opportunity to continue that fantastic work!
- Possibly help to start new initiatives to help push Drupal even further.
Drupal Association blog: Board Election 2023 Candidate: Carlos Ospina
I have worked with Drupal since 2012. Today I work for Acquia as a Technical Account Manager and focus my community efforts on the Houston Drupal User Groups. I was part of the team organizing DrupalCon Latin America in 2015, a DrupalCamp Colombia in Medellin in 2020 (which was stopped by the COVID-19 pandemic), and some other events. I have recently rejoined Drupal Colombia to start planning new events, and I am also working with Drupaleros, a Spain-led initiative that is looking to promote Drupal and bring new people to the project and community. I have previously participated in the board elections, and wish to participate again to promote new ideas.
Why are you running for a board seat at the Drupal Association? (mission/motivation)
In the past, my motivation was to bring better participation of the Latin America Drupal community into the larger Drupal community. While this still remains true, I also want to double down on the effort to bring new talent into Drupal.
I have been working with different groups and companies that work on training and preparing new Drupal talent and noticed that we, as a community, do not have a real way to bridge the gap for newcomers to become what we may consider junior and even senior Drupal developers.
We are working on proposals so that the whole community not only works on the growth of Drupal as software but also brings new blood and talent into the ecosystem, to address the talent shortage that many companies face in today's market.
Why should members vote for you? (qualifications)
I have been with Drupal for 10 years. I have worked with different Drupal communities in both English and Spanish, and have been part of the efforts to bridge the gap between the two communities and to promote Drupal.
I have been recognized as a Drupal evangelist and would love to use this opportunity to further grow the Drupal community and software to ensure that we have a sustainable model in the future.
File attachments: 20230901_174204.jpg

Drupal Association blog: Board Election 2023 Candidate: Brad Jones
Brad Jones is a 14+ year member of the Drupal community and a regular contributor to both core and contrib codebases and initiatives. Professionally, he is founder/CTO at Not Vanilla, Inc., a dating-industry startup with a Drupal and open-source technology stack. He has previously worked in state and local politics, investigative journalism, technology consulting, and served as CTO at a Drupal web development agency. A full-time digital nomad living primarily in his RV throughout the mountain American west, he also serves as a part-time paramedic in a rural Colorado county every summer.
Drupal.org profile: bradjones1
Why are you running for a board seat at the Drupal Association? (mission/motivation)
Drupal was recently recognized as a “digital public good,” and this distinction sums up well the important place Drupal has carved out in both the open-source ecosystem as well as the core global tech infrastructure. As Drupal matures from scrappy roots to a longer-term legacy, so should its caretaker organization. The Drupal Association has grown also, from early conference sponsor to infrastructure caretaker to policy advocate. As the association grows and matures – especially with a new CEO – it is important for its board to carefully craft and guide its strategic vision with care. I believe the Drupal Association can serve as a model for open-source project governance and longevity, but we must stay vigilant to balancing many competing pressures (both internal and external) on its mission.
Why should members vote for you? (qualifications)
My experience as the chairman of the supervisory (audit) committee at a nonprofit $2bn credit union, as well as an agency executive and now startup founder, provides me with a unique perspective on nonprofit governance and cooperation with corporate partners and individual contributors.
As a board member, I pledge to be open and accessible to stakeholders as we take on important issues of the association’s mission, vision and program execution. While our visions and beliefs may not always overlap, it is important for a strong diversity of ideas to flourish in boardroom discussions with the shared goal of building a stronger and more effective organization. Building on the strong work of the DA to date, I believe the Drupal project’s best days are still before us.
File attachments: brad.jpeg

Jacob Rockowitz: Building complex content models using the Schema.org Blueprints module’s configuration, mapping sets, and starter kits
I recently recorded a video that walks through installing the Schema.org Blueprints module for Drupal. Taking a Schema.org-first approach to building a content model and authoring experience in Drupal can be overwhelming. It is relatively straightforward to generate a single content type based on Schema.org. Still, as one starts to build a full content model, it can take time to understand the relationships between Schema.org types and what submodules are required to provide the ideal authoring experience.
Early on, I implemented the concept of mapping sets, which set up multiple Schema.org types in sequence to test and review the module’s out-of-the-box configuration. As I have worked to implement Schema.org for my client, I’ve realized that the generated Schema.org types and properties need additional configuration to create the ideal content authoring experience and website. While learning more about Schema.org’s history and evolution, it has become apparent that schemas for different sectors are led by working groups that research and recommend additional types and properties. For example, information architects and content experts have collaborated to define comprehensive schemas for the automotive, hospitality, medical, and financial industries. These very thorough industry schemas can feel overwhelming and make it difficult for developers to leverage and understand the available schemas and possibilities. To help developers get started and maintain their organization’s schemas, I’ve implemented a Schema.org Blueprints specific starter kit API, which can either provide the beginning or an example of an approach for implementing…
Real Python: Class Concepts: Object-Oriented Programming in Python
Python includes mechanisms for doing object-oriented programming, where the data and operations on that data are structured together. The class keyword is how you create these structures in Python. Attributes are the data values, and methods are the function-like operations that you can perform on classes. In this course, you’ll explore writing code using class and its associated attributes and methods.
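As a minimal, hedged sketch (not taken from the course itself), here is how a class bundles attributes and methods:

class Dog:
    def __init__(self, name):
        self.name = name  # attribute: data stored on the instance

    def speak(self):  # method: behavior defined on the class
        return f"{self.name} says woof"

print(Dog("Rex").speak())  # Rex says woof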
In this video course, you’ll learn about:
- Why you write object-oriented code
- How to write a class
- What attributes and methods are
- How to use the descriptor protocol
This course is the first in a three-part series. Part two covers how to write reusable hierarchies of classes using inheritance, while part three dives deeper into the philosophy behind writing good object-oriented code.
Downloads: Sample Code (.zip, 5.2 KB) and Course Slides (.pdf, 1013.9 KB)
Chromatic: How to Inventory your Drupal 7 Modules for Modern Drupal Readiness
Stack Abuse: Differences Between iloc and loc in Pandas
When working with data in Python, Pandas is a library that often comes to the rescue, especially when dealing with large datasets. One of the most common tasks you'll be performing with Pandas is data indexing and selection. This Byte will introduce you to two powerful tools provided by Pandas for this purpose: iloc and loc. Let's get started!
Indexing in Pandas

Pandas provides several methods to index data. Indexing is the process of selecting particular rows and columns of data from a DataFrame. This can be done in Pandas through explicit index and label-based index methods. This Byte will focus on the latter, specifically on the loc and iloc functions.
What is iloc?

iloc is a Pandas function used for index-based selection. This means it indexes based on the integer positions of the rows and columns. For instance, in a DataFrame with n rows, the index of the first row is 0, and the index of the last row is n-1.
Note: iloc stands for "integer location", so it only accepts integers.
Example: Using iloc

Let's create a simple DataFrame and use iloc to select data.
import pandas as pd

# Creating a simple DataFrame
data = {'Name': ['John', 'Anna', 'Peter', 'Linda'],
        'Age': [28, 24, 35, 32],
        'Profession': ['Engineer', 'Doctor', 'Lawyer', 'Writer']}
df = pd.DataFrame(data)
print(df)

This will output:
    Name  Age Profession
0   John   28   Engineer
1   Anna   24     Doctor
2  Peter   35     Lawyer
3  Linda   32     Writer

Let's use iloc to select the first row of this DataFrame:
first_row = df.iloc[0]
print(first_row)

This will output:
Name              John
Age                 28
Profession    Engineer
Name: 0, dtype: object

Here, df.iloc[0] returned the first row of the DataFrame. Similarly, you can use iloc to select any row or column by its integer index.
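As a small, hedged extension of the example above (rebuilding the same DataFrame), iloc also accepts slices and combined row/column positions:

import pandas as pd

df = pd.DataFrame({'Name': ['John', 'Anna', 'Peter', 'Linda'],
                   'Age': [28, 24, 35, 32],
                   'Profession': ['Engineer', 'Doctor', 'Lawyer', 'Writer']})

print(df.iloc[1:3])   # rows at positions 1 and 2 (the end position is excluded)
print(df.iloc[0, 1])  # row 0, column 1 -> 28 (John's age)
print(df.iloc[:, 2])  # every row, third column (the 'Profession' values)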
What is loc?

loc is another powerful data selection method provided by Pandas. It works by allowing you to do label-based indexing, which means you select data based on the data's actual label, not its position. It's one of the two primary ways of indexing in Pandas, along with iloc.
Unlike iloc, which uses integer-based indexing, loc uses label-based indexing. This can be a string, or an integer label, but it's not based on the position. It's based on the label itself.
Note: Label-based indexing means that if your DataFrame's index is a list of strings, for example, you'd use those strings to select data, not their position in the DataFrame.
Example: Using loc

Let's look at a simple example of how to use loc to select data. First, we'll create a DataFrame:
import pandas as pd

data = {
    'fruit': ['apple', 'banana', 'cherry', 'date'],
    'color': ['red', 'yellow', 'red', 'brown'],
    'weight': [120, 150, 10, 15]
}
df = pd.DataFrame(data)
df.set_index('fruit', inplace=True)
print(df)

Output:
         color  weight
fruit
apple      red     120
banana  yellow     150
cherry     red      10
date     brown      15

Now, let's use loc to select data:
print(df.loc['banana'])

Output:
color     yellow
weight       150
Name: banana, dtype: object

As you can see, we used loc to select the row for "banana" based on its label.
Differences Between iloc and loc

The primary difference between iloc and loc comes down to label-based vs integer-based indexing. iloc uses integer-based indexing, meaning you select data based on its numerical position in the DataFrame. loc, on the other hand, uses label-based indexing, meaning you select data based on its label.
Another key difference is how they handle slices. With iloc, the end point of a slice is not included, just like with regular Python slicing. But with loc, the end point is included.
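To make the slicing difference concrete, here is a short sketch using the fruit DataFrame from the loc example above (fruit names as the index):

import pandas as pd

df = pd.DataFrame({'fruit': ['apple', 'banana', 'cherry', 'date'],
                   'color': ['red', 'yellow', 'red', 'brown'],
                   'weight': [120, 150, 10, 15]}).set_index('fruit')

# loc includes the end label; iloc excludes the end position
print(df.loc['apple':'cherry'])  # apple, banana AND cherry
print(df.iloc[0:2])              # only the rows at positions 0 and 1 (cherry excluded)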
Conclusion

In this short Byte, we showed examples of using the loc method in Pandas, saw it in action, and compared it with its counterpart, iloc. These two methods are both useful tools for selecting data in Pandas, but they work in slightly different ways.
Specbee: DESIGNING RESPONSIVELY: A DEEP DIVE INTO CSS CONTAINER QUERIES
style.scss

.card {
  display: flex;
  flex-direction: row;
  gap: 1rem;
  border: 1px solid #ccc;
  padding: 1rem;

  &__media {
    flex-shrink: 0;
  }
}

On addition of another component on the left, we expect the card to stack into a better view, but without any modifications it will look something like this: [screenshot of the demo card with placeholder text omitted]. The card looks a little weird without stacking.
style.scss

.card {
  display: flex;
  flex-direction: row;
  gap: 1rem;
  border: 1px solid #ccc;
  padding: 1rem;

  &__media {
    flex-shrink: 0;
  }
}

// Adding CSS for the newly added component
.card-container {
  flex-basis: 50%;
}

.big-wrapper {
  display: flex;
}

.article {
  padding-inline-end: 20px;
  flex-basis: 50%;
}

To fix this we can add the flex-wrap property to the card, which will help in stacking the elements inside the card.

.card {
  display: flex;
  flex-direction: row;
  gap: 1rem;
  border: 1px solid #ccc;
  padding: 1rem;
  // Adding flex-wrap
  flex-wrap: wrap;

  &__media {
    flex-shrink: 0;
  }
}

But this changes the default card behavior even if it is placed alone: there is unwanted stacking even when the card has the full width to occupy. To fix this problem, we remove the flex-wrap property and introduce a container query based on the container size. This way the component behaves appropriately whether it is placed alone or in a smaller space.

style.scss

.card {
  display: flex;
  flex-direction: row;
  gap: 1rem;
  border: 1px solid #ccc;
  padding: 1rem;

  &__media {
    flex-shrink: 0;
  }
}

.big-wrapper {
  display: flex;
}

.article {
  padding-inline-end: 20px;
  flex-basis: 50%;
}

// Adding container-type
.card-container {
  flex-basis: 50%;
  container-type: inline-size;
}

// Adding container query here
@container (max-width: 750px) {
  .card {
    flex-direction: column;
  }
}

[Screenshots omitted: card with article, card without article.] Thus, the behavior is much better without changing the actual style of the card.

Container Units

With the advent of container queries, it was obvious that there would be some CSS units to take care of sizes relative to the container size. [The new container query units are listed in an image in the original article.]

Browser support

[Browser support table omitted. Source: MDN]

Fallback

As older browsers do not support container queries, it is recommended to use one of the following fallback options:
- Use @supports to check the availability of the feature and add a relevant CSS fallback, e.g. media queries, flex, and grid options.
- Use container-query-polyfill to keep using container queries as-is.

Final Thoughts

In wrapping up, it's clear that CSS container queries are more than just a solution to today's design challenges – they are a glimpse into the exciting future of web development. As technology and user expectations continue to evolve, container queries are poised to play a pivotal role in creating adaptive, user-centric websites. Embrace this innovation and stay ahead in the ever-evolving world of web design! Looking for a Drupal agency to bring your web projects to life? With a proven track record in delivering top-notch Drupal solutions, we understand the importance of responsive design and keeping up with the latest web development trends. Talk to us today!

Diverse Open Source uses highlight need for precision in Cyber Resilience Act
As the European Cyber Resilience Act (CRA) enters its final legislative phase, it still has problems, arising from framing by the Commission or Parliament, that result in breakage no matter how issues within its scope are “fixed”.
Here’s a short list to help the co-legislators understand the engagement from the Open Source community.
- OSI and the experts with whom they engage are not trying to get all of Open Source out of scope as maximalist lobbyists do for other aspects of technology. An exclusion from the regulation for Open Source software per se would open a significant loophole for openwashing. But the development of Open Source software in the open needs to be excluded from scope just as the development of software in private is. Our goal in engaging is just to prevent unintentional breakage while largely embracing the new regulation.
- There is no one way to use Open Source. Many of the policymakers we’ve spoken to think of Open Source components in supply chains under the care of foundations like the Eclipse Foundation that are used essentially as-is. But the freedoms of Open Source are also used for stack building, consumer tools, enabling research, hobbyist tinkering, as the basis for European small businesses like XWiki, Open-Xchange, Abilian, and more. All these many other uses exist and are broken differently by the CRA. Software is primarily a cultural artifact and that aspect must be prioritized.
- There is no single Open Source business model. People make money from Open Source (by charging for it, running it as a service and supporting it) and with Open Source (by simplifying their businesses and reducing costs); they shape markets via Open Source by enabling adjacent businesses, commoditising competitors without then monetising their customers, and more – there are a significant number of business models made possible by software freedom. So any attempt to identify commerciality is sure to be model-specific and consequently have unintended consequences for other models.
- Even larger foundations like the Linux Foundation do not actually employ the sort of staff who ensure code compliance – Open Source is conceptually disjoint from proprietary software. To comply with the CRA – if they find themselves in scope – they will need to hire a whole new operating unit. To them, the burden of compliance is not a cost of development funded by revenue as it would be for a manufactured physical good where staffing exists and just needs adapting.
OSI still recommends the Cyber Resilience Act should exclude all activities prior to commercial deployment of software and clearly ensure that responsibility for CE marks does not rest with any actor who is not a direct commercial beneficiary of deployment.
The post “Diverse Open Source uses highlight need for precision in Cyber Resilience Act” appeared first on Voices of Open Source.
Russ Allbery: Review: Before We Go Live
Review: Before We Go Live, by Stephen Flavall
Publisher: Spender Books
Copyright: 2023
ISBN: 1-7392859-1-3
Format: Kindle
Pages: 271

Stephen Flavall, better known as jorbs, is a Twitch streamer specializing in strategy games and most well-known as one of the best Slay the Spire players in the world. Before We Go Live, subtitled Navigating the Abusive World of Online Entertainment, is a memoir of some of his experiences as a streamer. It is his first book.
I watch a lot of Twitch. For a long time, it was my primary form of background entertainment. (Twitch's baffling choices to cripple their app have subsequently made YouTube somewhat more attractive.) There are a few things one learns after a few years of watching a lot of streamers. One is that it's a precarious, unforgiving living for all but the most popular streamers. Another is that the level of behind-the-scenes drama is very high. And a third is that the prevailing streaming style has converged on fast-talking, manic, stream-of-consciousness joking apparently designed to satisfy people with very short attention spans.
As someone for whom that manic style is like nails on a chalkboard, I am therefore very picky about who I'm willing to watch and rarely can tolerate the top streamers for more than an hour. jorbs is one of the handful of streamers I've found who seems pitched towards adults who don't need instant bursts of dopamine. He's calm, analytical, and projects a relaxed, comfortable feeling most of the time (although like the other streamers I prefer, he doesn't put up with nonsense from his chat). If you watch him for a while, he's also one of those people who makes you think "oh, this is an interestingly unusual person." It's a bit hard to put a finger on, but he thinks about things from intriguing angles.
Going in, I thought this would be a general non-fiction book about the behind-the-scenes experience of the streaming industry. Before We Go Live isn't really that. It is primarily a memoir focused on Flavall's personal experience (as well as the experience of his business manager Hannah) with the streaming team and company F2K, supplemented by a brief history of Flavall's streaming career and occasional deeply personal thoughts on his own mental state and past experiences. Along the way, the reader learns a lot more about his thought processes and approach to life. He is indeed a fascinatingly unusual person.
This is to some extent an exposé, but that's not the most interesting part of this book. It quickly becomes clear that F2K is the sort of parasitic, chaotic, half-assed organization that crops up around any new business model. (Yes, there's crypto.) People who are good at talking other people out of money and making a lot of big promises try to follow a startup fast-growth model with unclear plans for future revenue and hope that it all works out and turns into a valuable company. Most of the time it doesn't, because most of the people running these sorts of opportunistic companies are better at talking people out of money than at running a business. When the new business model is in gaming, you might expect a high risk of sexism and frat culture; in this case, you would not be disappointed.
This is moderately interesting but not very revealing if one is already familiar with startup culture and the kind of people who start businesses without doing any of the work the business is about. The F2K principals are at best opportunistic grifters, if not actual con artists. It's not long into this story before this is obvious. At that point, the main narrative of this book becomes frustrating; Flavall recognizes the dysfunction to some extent, but continues to associate with these people. There are good reasons related to his (and Hannah's) psychological state, but it doesn't make it easier to read. Expect to spend most of the book yelling "just break up with these people already" as if you were reading Captain Awkward letters.
The real merit of this book is that people are endlessly fascinating, Flavall is charmingly quirky, and he has the rare mix of the introspection that allows him to describe himself without the tendency to make his self-story align with social expectations. I think every person is intriguingly weird in at least some ways, but usually the oddities are smoothed away and hidden under a desire to present as "normal" to the rest of society. Flavall has the right mix of writing skill and a willingness to write with direct honesty that lets the reader appreciate and explore the complex oddities of a real person, including the bits that at first don't make much sense.
Parts of this book are uncomfortable reading. Both Flavall and his manager Hannah are abuse survivors, which has a lot to do with their reactions to their treatment by F2K, and those reactions are both tragic and maddening to read about. It's a good way to build empathy for why people will put up with people who don't have their best interests at heart, but at times that empathy can require work because some of the people on the F2K side are so transparently sleazy.
This is not the sort of book I'm likely to re-read, but I'm glad I read it simply for that time spent inside the mind of someone who thinks very differently than I do and is both honest and introspective enough to give me a picture of his thought processes that I think was largely accurate. This is something memoir is uniquely capable of doing if the author doesn't polish all of the oddities out of their story. It takes a lot of work to be this forthright about one's internal thought processes, and Flavall does an excellent job.
Rating: 7 out of 10
Read the Docs: Read the Docs newsletter - September 2023
🚀 We started testing a new flyout menu as part of our beta test for documentation addons. The beta test is currently limited to projects using the build.commands configuration key.
🛣️ We continue to have a number of deprecations in progress. This month we announced deprecations of installing using system packages, the configuration key build.image, and installation of pinned versions of Sphinx and MkDocs. Keep an eye on your email for any deprecation notifications, as we will continue to notify maintainers of projects that might be impacted.
📚 The Read the Docs Sphinx theme package, sphinx-rtd-theme, had two releases. Version 1.3.0 was released, adding support for Sphinx 7.0. Version 2.0.0rc2 is also now out. This is a release candidate that will remove support for HTML4 output and will drop support for Sphinx versions prior to 5.0. We will be announcing the release candidate more widely this month and will be looking for feedback from users.
🔐 A security advisory involving symlink abuse during project builds was raised and patched.
📉 Changes to our request handling resulted in a 30% reduction in response times for 404 error responses.
Request times for 404 handling have dropped 30% since last release.
🏁 Updates to Proxito are now fully rolled out to Read the Docs Community and Read the Docs for Business.
✨ We upgraded our application to use Django 4.2.
⚠️ In line with our deprecation plans for builds without a configuration file, projects will be required to specify a configuration file to continue building their documentation starting September 25th. Our last brownout date is September 4th, lasting 48 hours. To avoid any problems building your project, ensure it has a configuration file before then.
🚢️ We will be continuing our work on our beta test of Read the Docs Addons. Our focus is still on improving the new flyout menu, search, and adding more new features.
🛠️ Our new dashboard, currently available for beta testing at https://beta.readthedocs.org, will receive some small feature additions, and we are working towards a beta of the new dashboard for Read the Docs for Business as well. We expect to have more news here in the coming months.
Want to follow along with our development progress? View our full roadmap 📍️
Possible issues

⚠️ Make sure to follow directions from notifications regarding deprecations. We have notified project maintainers for any project that could be affected by one of our ongoing deprecations. Updating your project ahead of brownout dates and final deprecation cutoff dates will ensure your project continues to successfully build.
Questions? Comments? Ideas for the next newsletter? Contact us!
Stack Abuse: Deleting DataFrame Rows Based on Column Value in Pandas
Data processing is a common task in any data analysis codebase. And in Python, the Pandas library is one of the most popular tools for data analysis, which also provides high-performance, easy-to-use data structures and data analysis tools, one of which is the DataFrame. In this Byte, we're going to explore how to delete rows in a DataFrame based on the value in a specific column.
DataFrames in Pandas

A DataFrame is a two-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dictionary of Series objects. It is generally the most commonly used Pandas object. Let's create a simple DataFrame:
import pandas as pd

data = {
    'Name': ['John', 'Anna', 'Peter', 'Linda'],
    'Age': [28, 24, 35, 32],
    'Country': ['USA', 'USA', 'Canada', 'USA']
}
df = pd.DataFrame(data)
print(df)

Output:
    Name  Age Country
0   John   28     USA
1   Anna   24     USA
2  Peter   35  Canada
3  Linda   32     USA

Why Delete Rows Based on Column Value?

Data comes in all shapes and sizes, and not all of it is useful or relevant. Sometimes, you might want to remove rows based on a specific column value to clean your data or focus on a subset of it. For instance, you might want to remove all rows related to a particular country in a dataset of global statistics.
How to Delete Rows Based on Column Value

There are several ways to delete rows in a DataFrame based on column value, which we'll explore here.
Method 1: Using Boolean Indexing

Boolean indexing is a powerful feature in Pandas that allows us to select data based on the actual values in the DataFrame. It's a method that should be easy for most to understand, making it a great starting point for our discussion.
Let's say we have a DataFrame of student grades and we want to delete all rows where the grade is below 60. Here's how we can do it with Boolean indexing:
import pandas as pd

# Create a simple dataframe
df = pd.DataFrame({
    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],
    'grade': [85, 55, 65, 70, 50]
})

# Create a boolean mask for grades >= 60
mask = df['grade'] >= 60

# Apply the mask to the dataframe
df = df[mask]

print(df)

Output:
      name  grade
0    Alice     85
2  Charlie     65
3    David     70

As you can see, the rows for Bob and Eve, who scored below 60, have been removed.
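As a hedged extension of the same idea, boolean masks can be combined or inverted, which helps when the rows you want to delete are easier to describe than the rows you want to keep:

import pandas as pd

df = pd.DataFrame({
    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],
    'grade': [85, 55, 65, 70, 50]
})

# Describe the rows to delete, then invert the mask to keep everything else
to_delete = (df['grade'] < 60) | (df['name'] == 'David')
df = df[~to_delete]
print(df)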
Method 2: Using the drop() Function

Pandas' drop() function offers another way to remove rows from a DataFrame. This method requires a bit more setup than Boolean indexing, but it can be more intuitive for some users.
First, we need to identify the index values of the rows we want to drop. In our case, we want to drop rows where the grade is below 60. Here's how we can do it:
import pandas as pd

# Create a simple dataframe
df = pd.DataFrame({
    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],
    'grade': [85, 55, 65, 70, 50]
})

# Identify indices of rows with grade < 60
drop_indices = df[df['grade'] < 60].index

# Drop these indices from the dataframe
df = df.drop(drop_indices)

print(df)

Output:
      name  grade
0    Alice     85
2  Charlie     65
3    David     70

Again, Bob and Eve's rows have been removed, just like in our first method.
Method 3: Using the query() Function

The query() function in Pandas helps us to filter data using a query string syntax, much like SQL. This can be a popular choice, especially for those with experience in SQL.
Let's use query() to delete rows where the grade is below 60:
import pandas as pd

# Create a simple dataframe
df = pd.DataFrame({
    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],
    'grade': [85, 55, 65, 70, 50]
})

# Filter dataframe using query() function
df = df.query('grade >= 60')

print(df)

Output:
      name  grade
0    Alice     85
2  Charlie     65
3    David     70

Once again, we have successfully removed Bob and Eve's rows from our DataFrame.
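One more hedged variation on the same example: query() can reference Python variables with the @ prefix, which keeps the cutoff value out of the query string itself:

import pandas as pd

df = pd.DataFrame({
    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],
    'grade': [85, 55, 65, 70, 50]
})

threshold = 60
df = df.query('grade >= @threshold')  # @threshold refers to the local Python variable
print(df)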
Conclusion

And there you have it - three different methods for deleting rows in a Pandas DataFrame based on column values. Each method has its own strengths and use cases, and which one you choose to use may depend on your specific needs and personal preference.
Brian Okken: pytest Course - Ch4 Builtin Fixtures is up
Stack Abuse: Guide to Profiling Python Scripts
Even a "simple" language like Python is not immune to performance issues. As your codebase grows, you may start to notice that certain parts of your code are running slower than expected. This is where profiling comes into play. Profiling is an important tool in every developer's toolbox, allowing you to identify bottlenecks in your code and optimize it accordingly.
Profiling and Why You Should Do It

Profiling, in the context of programming, is the process of analyzing your code to understand where computational resources are being used. By using a profiler, you can gain insights into which parts of your code are running slower than expected and why. This can be due to a variety of reasons like inefficient algorithms, unnecessary computations, bugs, or memory-intensive operations.
Note: Profiling and debugging are very different operations. However, profiling can be used in the process of debugging as it can both help you optimize your code and find issues via performance metrics.
Let's consider an example. Suppose you've written a Python script to analyze a large dataset. The script works fine with a small subset of data, but as you increase the size of the dataset, the script takes an increasingly long time to run. This is a classic sign that your script may need optimization.
Here's a simple Python script that calculates the factorial of a number using recursion:
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)

print(factorial(5))

When you run this script, it outputs 120, which is the factorial of 5. However, if you try to calculate the factorial of a very large number, say 10000, you'll notice that the script takes a considerable amount of time to run (and, with Python's default recursion limit, a recursive version like this may even raise a RecursionError before finishing). This is a perfect candidate for profiling and optimization.
Overview of Python Profiling Tools

Profiling is a crucial aspect of software development, particularly in Python where the dynamic nature of the language can sometimes lead to unexpected performance bottlenecks. Fortunately, Python provides a rich ecosystem of profiling tools that can help you identify these bottlenecks and optimize your code accordingly.
The built-in Python profiler is cProfile. It's a module that provides deterministic profiling of Python programs. A profile is a set of statistics that describes how often and for how long various parts of the program executed.
Note: Deterministic profiling means that every function call, function return, exception, and other CPU-intensive tasks are monitored. This can provide a very detailed view of your application's performance, but it can also slow down your application.
Another popular Python profiling tool is line_profiler. It is a module for doing line-by-line profiling of functions. line_profiler gives you a line-by-line report of execution time, which can be more helpful than the function-by-function report that cProfile provides.
There are other profiling tools available for Python, such as memory_profiler for profiling memory usage and py-spy, a sampling profiler. The choice of which tool to use depends on your specific needs and the nature of the performance issues you're facing.
How to Profile a Python Script

Now that we've covered the available tools, let's move on to how to actually profile a Python script. We'll take a look at both cProfile and line_profiler.
Using cProfile

We'll start with the built-in cProfile module. This module can either be used as a command line utility or within your code directly. We'll first look at how to use it in your code.
First, import the cProfile module and run your script within its run function. Here's an example:
import cProfile
import re

def test_func():
    re.compile("test|sample")

cProfile.run('test_func()')

When you run this script, cProfile will output a table with the number of calls to each function, the time spent in each function, and other useful information.
The output might look something like this:
         234 function calls (229 primitive calls) in 0.001 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.001    0.001 <stdin>:1(test_func)
        1    0.000    0.000    0.001    0.001 <string>:1(<module>)
        1    0.000    0.000    0.001    0.001 re.py:192(compile)
        1    0.000    0.000    0.001    0.001 re.py:230(_compile)
        1    0.000    0.000    0.000    0.000 sre_compile.py:228(_compile_charset)
        1    0.000    0.000    0.000    0.000 sre_compile.py:256(_optimize_charset)
        1    0.000    0.000    0.000    0.000 sre_compile.py:433(_compile_info)
        2    0.000    0.000    0.000    0.000 sre_compile.py:546(isstring)
        1    0.000    0.000    0.000    0.000 sre_compile.py:552(_code)
        1    0.000    0.000    0.001    0.001 sre_compile.py:567(compile)
      3/1    0.000    0.000    0.000    0.000 sre_compile.py:64(_compile)
        5    0.000    0.000    0.000    0.000 sre_parse.py:138(__len__)
       16    0.000    0.000    0.000    0.000 sre_parse.py:142(__getitem__)
       11    0.000    0.000    0.000    0.000 sre_parse.py:150(append)
# ...

Now let's see how we can use it as a command line utility. Assume we have the following script:
def calculate_factorial(n):
    if n == 1:
        return 1
    else:
        return n * calculate_factorial(n-1)

def main():
    print(calculate_factorial(10))

if __name__ == "__main__":
    main()

To profile this script, you can use the cProfile module from the command line as follows:
$ python -m cProfile script.py

The output will show how many times each function was called, how much time was spent in each function, and other useful information.
Using Line Profiler

While cProfile provides useful information, it might not be enough if you need to profile your code line by line. This is where the line_profiler tool comes in handy. It's an external tool that provides line-by-line profiling statistics for your Python programs.
First, you need to install it using pip:
$ pip install line_profiler

Let's use line_profiler to profile the same script we used earlier. To do this, you need to add a decorator to the function you want to profile:
from line_profiler import LineProfiler

def profile(func):
    profiler = LineProfiler()
    profiler.add_function(func)
    return profiler(func)

@profile
def calculate_factorial(n):
    if n == 1:
        return 1
    else:
        return n * calculate_factorial(n-1)

def main():
    print(calculate_factorial(10))

if __name__ == "__main__":
    main()

Now, if you run your script, line_profiler will output statistics for each line in the calculate_factorial function.
Remember to use the @profile decorator sparingly, as it can significantly slow down your code.
Profiling is an important part of optimizing your Python scripts. It helps you to identify bottlenecks and inefficient parts of your code. With tools like cProfile and line_profiler, you can get detailed statistics about the execution of your code and use this information to optimize it.
Interpreting Profiling Results

After running a profiling tool on your Python script, you'll be presented with a table of results. But what do these numbers mean? How can you make sense of them? Let's break it down.
The results table typically contains columns like ncalls for the number of calls, tottime for the total time spent in the given function excluding calls to sub-functions, percall referring to the quotient of tottime divided by ncalls, cumtime for the cumulative time spent in this and all subfunctions, and filename:lineno(function) providing the respective data of each function.
Here's a sample output from cProfile:
         5 function calls in 0.000 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 <ipython-input-1-9e8e3c5c3b72>:1(<module>)
        1    0.000    0.000    0.000    0.000 {built-in method builtins.exec}
        1    0.000    0.000    0.000    0.000 {built-in method builtins.len}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

The tottime and cumtime columns are particularly important as they help identify which parts of your code are consuming the most time.
Note: The output is sorted by the function name, but you can sort it by any other column by passing the sort parameter to the print_stats method. For example, p.print_stats(sort='cumtime') would sort the output by cumulative time.
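For a file-based workflow, here is a minimal sketch (reusing the test_func example from earlier) that saves the raw stats and reloads them with the standard pstats module:

import cProfile
import pstats
import re

def test_func():
    re.compile("test|sample")

cProfile.run('test_func()', 'profile.out')    # write raw stats to a file
stats = pstats.Stats('profile.out')
stats.sort_stats('cumtime').print_stats(10)   # show the 10 entries with the highest cumulative time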
Optimization Techniques Based on Profiling ResultsOnce you've identified the bottlenecks in your code, the next step is to optimize them. Here are some general techniques you can use:
- Avoid unnecessary computations: If your profiling results show that a function is called multiple times with the same arguments, consider using memoization techniques to store and reuse the results of expensive function calls.
- Use built-in functions and libraries: Built-in Python functions and libraries are usually optimized for performance. If you find that your custom code is slow, see if there's a built-in function or library that can do the job faster.
- Optimize data structures: The choice of data structure can greatly affect performance. For example, if your code spends a lot of time searching for items in a list, consider using a set or a dictionary instead, which can do this much faster (see the short timing sketch after this list).
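As a rough sketch of the data-structure point above (exact numbers will vary by machine), membership tests on a set are far faster than on a list:

import timeit

items_list = list(range(100_000))
items_set = set(items_list)

# Searching for a value near the end of the collection
list_time = timeit.timeit(lambda: 99_999 in items_list, number=1_000)
set_time = timeit.timeit(lambda: 99_999 in items_set, number=1_000)
print(f"list lookup: {list_time:.4f}s, set lookup: {set_time:.4f}s")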
Let's see an example of how we can optimize a function that calculates the Fibonacci sequence. Here's the original code:
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1) + fib(n-2)

Running a profiler on this code will show that the fib function is called multiple times with the same arguments. We can optimize this using a technique called memoization, which stores the results of expensive function calls and reuses them when needed:
def fib(n, memo={}):
    if n <= 1:
        return n
    else:
        if n not in memo:
            memo[n] = fib(n-1) + fib(n-2)
        return memo[n]

With these optimizations, the fib function is now significantly faster, and the profiling results will reflect this improvement.
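A common alternative, sketched here for comparison rather than taken from the article itself, is to let the standard library handle the caching with functools.lru_cache:

from functools import lru_cache

@lru_cache(maxsize=None)  # cache the result for every distinct argument
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(35))  # fast, because intermediate results are reused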
Remember, the key to efficient code is not to optimize everything, but rather focus on the parts where it really counts - the bottlenecks. Profiling helps you identify these bottlenecks, so you can spend your optimization efforts where they'll make the most difference.
Conclusion

After reading this article, you should have a good understanding of how to profile a Python script. We've discussed what profiling is and why it's crucial for optimizing your code. We've also introduced you to a couple of Python profiling tools, namely cProfile, a built-in Python profiler, and Line Profiler, an advanced profiling tool.
We've walked through how to use these tools to profile a Python script and how to interpret the results. Based on these results, you've learned some optimization techniques that can help you improve the performance of your code.
Just remember that profiling is a powerful tool, but it's not a silver bullet. It can help you identify bottlenecks and inefficient code, but it's up to you to come up with the solutions.
In my experience, the time invested in learning and applying profiling techniques has always paid off in the long run. Not only does it lead to more efficient code, but it also helps you become a more proficient and knowledgeable Python programmer.
Python Morsels: What is recursion?
Recursion is when a function calls itself. Loops are usually preferable to recursion, but recursion is excellent for traversing tree-like structures.
Table of contents
- Recursive functions call themselves
- Use a base case to stop recursion
- Recursion works thanks to the call stack
- Using for loops instead of recursion
- Recursion's most common use case
- Loops are great, but recursion does have its uses
Here's a Python script that counts up to a given number:
from argparse import ArgumentParser

def count_to(number):
    for n in range(1, number+1):
        print(n)

def parse_args():
    parser = ArgumentParser()
    parser.add_argument("stop", type=int)
    return parser.parse_args()

def main():
    args = parse_args()
    count_to(args.stop)

if __name__ == "__main__":
    main()

Note that the main function in that program calls the parse_args function as well as the count_to function.
Functions can call other functions in Python. But functions can also call themselves!
Here's a function that calls itself:
def factorial(n):
    if n < 0:
        raise ValueError("Negative numbers not accepted")
    if n == 0:
        return 1
    return n * factorial(n-1)

A function that calls itself is called a recursive function.
It might seem like a bad idea for a function to call itself and it often is a bad idea... but not always.
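For comparison, and in the spirit of the point that loops are usually preferable, here is a sketch of the same factorial written with a plain for loop:

def factorial(n):
    if n < 0:
        raise ValueError("Negative numbers not accepted")
    product = 1
    for factor in range(1, n + 1):
        product *= factor
    return product

print(factorial(5))  # 120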
Use a base case to stop recursion

If a function calls itself …
Read the full article: https://www.pythonmorsels.com/what-is-recursion/

Talking Drupal: Talking Drupal #414 - Future of Web Content
Today we are talking about The Future of Content Management, What we see for Drupal in the future, and How AI might factor in with guest John Doyle. We’ll also cover Access Policy as our module of the week.
For show notes visit: www.talkingDrupal.com/414
Topics
- Digital Polygon
- Content Management can mean many things, what is our definition
- What factors contribute to the changes moving to a more centralized model
- How do organizations manage content for different channels
- Where do design systems collide with content management
- Why is Drupal a good fit
- How does headless fit in
- Maintaining content architecture long term
- Drupal adaptations over the next 5 years
- Talking Drupal #409 - Data Lakes
- Hey everyone! Our friends at the Linux Foundation are offering Talking Drupal Listeners 25% off on any e-learning course, certification exam or bundle. Good from August 22-Sept 30, 2023. With discount code LFDrupal25 Please note Bootcamps, ILTs and FinOps courses are excluded… Again that code is LFDrupal25 and you can use that at https://training.linuxfoundation.org/ Thank you to the linux foundation!
- Flexible Permissions
John Doyle - digitalpolygon.com _doyle_
Hosts
- Nic Laflin - nLighteneddevelopment.com - nicxvan
- John Picozzi - epam.com - johnpicozzi
- Andy Blum - andy-blum.com - andy_blum
MOTW Correspondent
Martin Anderson-Clutz - @mandclu
Access Policy
- Brief description:
- Does your Drupal site need a flexible way to manage access to content? There’s a module for that!
- Brief history
- How old: created in Nov 2022
- Versions available: 1.0.0-beta1, works with D9 and 10
- Maintainership
- Actively maintained, most recent release was in the past week
- Number of open issues:
- 4, none of which are bugs
- Test coverage
- Usage stats:
- 12 sites
- Maintainer(s):
- partdigital
- Module features and usage
- Allows a site builder to define different policies that can be used to manage content access or editing capabilities based on various factors, all within the Drupal UI
- Criteria can include field values of the content, field values on the current user’s profile, the time of day, and more
- The policy can restrict access, for example limiting view access to selected people, or to people with a certain role or field value on their profile
- Once defined, policies can be assigned manually, or automatically applied based on configurable selection criteria
- The project page describes this as an Attribute Based Access Control (ABAC) architecture, which complements Drupal core’s Role Based Access Control that our listeners are probably familiar with
- I used it for the first time recently, and found that given the power and flexibility the module provides, it’s great that it has in-depth documentation
- I filed a couple of issues (technically half of the open issues) and partdigital was very responsive
- The module also provides an API for defining your own policy type, access rules, rule widgets, and more. So if you need a setup even more custom than what Access Policy can provide out of the box, it’s likely you can extend it to meet your individual use case
FSF Events: GNU40: Hacker meeting in Switzerland
Real Python: Python News: What's New From August 2023
In August 2023, Python 3.12.0rc1 came out! With several exciting features, improvements, and optimizations, this release is only two steps away from the final release scheduled for October. If you want to stay on the cutting edge, then you must give it a try. But note that you shouldn’t use it in production.
Another exciting release was Python in Excel, which allows you to leverage the power of Python inside your Excel workbooks. You’ll be able to use Python’s data science ecosystem while you remain in your Excel comfort zone with known formulas, charts, and more.
But that’s not all! The Python Software Foundation (PSF) announced a new roster of fellows and a safety and security engineer for PyPI. Some key package maintainers were busy with the Python DataFrame Summit 2023, and several key libraries released new versions.
Let’s dive into the most exciting Python news from August 2023!
Python 3.12.0 Release Candidate ArrivesThis August, Python put out its first release candidate version, 3.12.0rc1. This version is only two steps away from the final release, 3.12.0, which is scheduled for October 2. Before that, Python will deliver another release candidate, planned for September 4.
As Python ventures into the release candidate stage, only confirmed bug fixes will be accepted into the codebase. The goal is to have as few code changes as possible. Most likely, the core team will focus on:
- Polishing and documenting all the changes
- Updating the What’s New document
As you can read in the release notes, 3.12 will have the following list of new features compared to Python 3.11:
- More flexible f-string parsing, allowing you to do more with your f-strings than you previously could (PEP 701); a short sketch follows after this list
- Support for the buffer protocol in Python code (PEP 688)
- A new debugging/profiling API (PEP 669)
- Support for isolated subinterpreters with separate global interpreter locks (PEP 684)
- Even more improved error messages, meaning that you get suggestions for more exceptions potentially caused by typos
- Support for the Linux perf profiler to report Python function names in traces
- Many large and small performance improvements, such as PEP 709, which deliver an estimated 5 percent overall performance improvement
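As a small illustrative sketch of the f-string change (requires Python 3.12; not taken from the article itself), PEP 701 lets you reuse the same quote character inside an f-string expression, which earlier versions rejected:

# Works on Python 3.12 thanks to PEP 701; a SyntaxError on 3.11 and earlier
names = ["Ada", "Grace"]
print(f"Team: {", ".join(names)}")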
If you’d like to learn more about some of these improvements, then check out Real Python’s previews of more intuitive and consistent f-strings, ever better error messages, and support for the Linux perf profiler.
As usual, this version also brings several deprecations that you may need to consider. For a detailed list of changes, additions, and removals, you can check the changelog document.
Just like other pre-release versions, Python 3.12.0rc1 is intended for experimentation and testing purposes only and isn’t recommended for use in production.
Python Makes Its Way Into Microsoft ExcelOn August 22, Microsoft announced Python in Excel, a new and exciting feature that combines the flexibility of Excel and the power of Python. This combination may have a considerable impact on the data science industry.
This is a huge announcement, and even Guido van Rossum himself has been helping with the integration of both tools:
[Image of the post omitted; image source linked in the original article.]

Python in Excel allows you to natively use Python inside an Excel workbook without any additional setup requirements. You only need the new PY function, which lets you input Python code directly into Excel cells:
Read the full article at https://realpython.com/python-news-august-2023/ »