Feeds

Mike Driscoll: An Intro to Logging with Python and Loguru

Planet Python - Wed, 2024-05-15 10:08

Python’s logging module isn’t the only way to create logs. There are several third-party packages you can use, too. One of the most popular is Loguru. Loguru intends to remove all the boilerplate you get with the Python logging API.

You will find that Loguru greatly simplifies creating logs in Python.

This chapter has the following sections:

  • Installation
  • Logging made simple
  • Handlers and formatting
  • Catching exceptions
  • Terminal logging with color
  • Easy log rotation

Let’s find out how much easier Loguru makes logging in Python!

Installation

Before you can start with Loguru, you will need to install it. After all, the Loguru package doesn’t come with Python.

Fortunately, installing Loguru is easy with pip. Open up your terminal and run the following command:

python -m pip install loguru

Pip will install Loguru and any dependencies it might have for you. You will have a working package installed if you see no errors.

Now let’s start logging!

Logging Made Simple

Logging with Loguru can be done in two lines of code. Loguru is really that simple!

Don’t believe it? Then open up your Python IDE or REPL and add the following code:

# hello.py

from loguru import logger

logger.debug("Hello from loguru!")
logger.info("Informed from loguru!")

One import is all you need. Then, you can immediately start logging! By default, the log messages go to stderr.

Here’s what the output looks like in the terminal:

2024-05-07 14:34:28.663 | DEBUG    | __main__:<module>:5 - Hello from loguru!
2024-05-07 14:34:28.664 | INFO     | __main__:<module>:6 - Informed from loguru!

Pretty neat! Now, let’s find out how to change the handler and add formatting to your output.

Handlers and Formatting

Loguru doesn’t think of handlers the way the Python logging module does. Instead, you use the concept of sinks. The sink tells Loguru how to handle an incoming log message and write it somewhere.

Sinks can take lots of different forms:

  • A file-like object, such as sys.stderr or a file handle
  • A file path as a string or pathlib.Path
  • A callable, such as a simple function (see the sketch after this list)
  • An asynchronous coroutine function that you define using async def
  • A built-in logging.Handler. If you use one of these, Loguru records are converted to logging records automatically
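
For example, here is a minimal sketch of a callable sink; the function name and output format are just illustrative, not from the original article:

# callable_sink.py

from loguru import logger

def my_sink(message):
    # message is the fully formatted log line (a str subclass)
    print(f"Captured: {message}", end="")

logger.add(my_sink, level="DEBUG")
logger.info("This message goes to the callable sink")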

To see how this works, create a new file called file_formatting.py in your Python IDE. Then add the following code:

# file_formatting.py

from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("formatted.log", format=fmt, level="INFO")

logger.debug("This is a debug message")
logger.info("This is an informational message")

If you want to change where the logs go, use the add() method. Note that this adds a new sink, which, in this case, is a file. The logger will still log to stderr, too, as that is the default, and you are adding to the handler list. If you want to remove the default sink, call logger.remove() before you call add().
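
Here is a minimal sketch of that pattern, assuming you want file output only (the file name is illustrative):

# file_only.py

from loguru import logger

logger.remove()  # drop the default stderr sink
logger.add("app.log", level="INFO")

logger.info("This message only goes to app.log")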

When you call add(), you can pass in several different arguments:

  • sink – Where to send the log messages
  • level – The logging level
  • format – How to format the log messages
  • filter – A logging filter

There are several more, but those are the ones you would use the most. If you want to know more about add(), you should check out the documentation.
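
As a quick illustration of the filter argument, you can pass a callable that receives the log record and returns True to keep it. This sketch is ours, and the module name it checks for is hypothetical:

# filtered_sink.py

from loguru import logger

def only_my_module(record):
    # Keep only records that originate from a module named "my_module"
    return record["name"] == "my_module"

logger.add("filtered.log", filter=only_my_module)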

You might have noticed that the formatting of the log records is a little different than what you saw in Python’s own logging module.

Here is a listing of the formatting directives you can use for Loguru:

  • elapsed – The time elapsed since the app started
  • exception – The formatted exception, if there was one
  • extra – The dict of attributes that the user bound
  • file – The name of the file where the logging call came from
  • function – The function where the logging call came from
  • level – The logging level
  • line – The line number in the source code
  • message – The unformatted logged message
  • module – The module that the logging call was made from
  • name – The __name__ where the logging call came from
  • process – The process in which the logging call was made
  • thread – The thread in which the logging call was made
  • time – The aware local time when the logging call was made

You can also change the time formatting in the logs. In this case, you would use a subset of the formatting from the Pendulum package. For example, if you wanted to make the time exclude the date, you would use this: {time:HH:mm:ss} rather than simply {time}, which you see in the code example above.
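
For instance, here is a quick sketch of a sink whose timestamps show only hours, minutes, and seconds (the file name is illustrative):

from loguru import logger

logger.add("time_only.log", format="{time:HH:mm:ss} | {level} | {message}")
logger.info("Logged with a short timestamp")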

See the documentation for details on formatting time and messages.

When you run the code example, you will see something similar to the following in your log file:

2024-05-07T14:35:06.553342-0500 - __main__ - INFO - This is an informational message

You will also see log messages sent to your terminal in the same format as you saw in the first code example.

Now, you’re ready to move on and learn about catching exceptions with Loguru.

Catching Exceptions

Catching exceptions with Loguru is done by using a decorator. You may remember that when you use Python’s own logging module, you use logger.exception in the except portion of a try/except statement to record the exception’s traceback to your log file.

When you use Loguru, you use the @logger.catch decorator on the function that contains code that may raise an exception.

Open up your Python IDE and create a new file named catching_exceptions.py. Then enter the following code:

# catching_exceptions.py

from loguru import logger

@logger.catch
def silly_function(x, y, z):
    return 1 / (x + y + z)

def main():
    fmt = "{time:HH:mm:ss} - {name} - {level} - {message}"
    logger.add("exception.log", format=fmt, level="INFO")
    logger.info("Application starting")
    silly_function(0, 0, 0)
    logger.info("Finished!")

if __name__ == "__main__":
    main()

According to Loguru’s documentation, the @logger.catch decorator will catch regular exceptions and also works in applications with multiple threads. In this example, you add a file handler on top of the default stream handler and then start logging.

Then you call silly_function() with a bunch of zeroes, which causes a ZeroDivisionError exception.

When you run the code, Loguru prints the log messages along with a colorized traceback to your terminal.

If you open up exception.log, you will see that the contents are a little different, both because you formatted the timestamp and because the annotated lines that show what arguments were passed to silly_function() don’t translate that well to a plain-text file:

14:38:30 - __main__ - INFO - Application starting
14:38:30 - __main__ - ERROR - An error has been caught in function 'main', process 'MainProcess' (8920), thread 'MainThread' (22316):
Traceback (most recent call last):

  File "C:\books\11_loguru\catching_exceptions.py", line 17, in <module>
    main()
    └ <function main at 0x00000253B01AB7E0>

> File "C:\books\11_loguru\catching_exceptions.py", line 13, in main
    silly_function(0, 0, 0)
    └ <function silly_function at 0x00000253ADE6D440>

  File "C:\books\11_loguru\catching_exceptions.py", line 7, in silly_function
    return 1 / (x + y + z)
               │    │   └ 0
               │    └ 0
               └ 0

ZeroDivisionError: division by zero
14:38:30 - __main__ - INFO - Finished!

On the whole, using the @logger.catch decorator is a nice way to catch exceptions.

Now, you’re ready to move on and learn about changing the color of your logs in the terminal.

Terminal Logging with Color

Loguru will print out logs in color in the terminal by default if the terminal supports color. Colorful logs can make reading through the logs easier as you can highlight warnings and exceptions with unique colors.

You can use markup tags to add specific colors to any formatter string. You can also apply bold and underline to the tags.

Open up your Python IDE and create a new file called terminal_formatting.py. After saving the file, enter the following code into it:

# terminal_formatting.py

import sys

from loguru import logger

fmt = ("<red>{time}</red> - "
       "<yellow>{name}</yellow> - "
       "{level} - {message}")

logger.add(sys.stdout, format=fmt, level="DEBUG")

logger.debug("This is a debug message")
logger.info("This is an informational message")

You create a special format that sets the “time” portion to red and the “name” to yellow. Then, you add() that format to the logger. You now have two sinks: the default sink, which logs to stderr, and the new sink, which logs to stdout. Keeping both lets you compare the default colors to your custom ones.

Go ahead and run the code to see the default and custom colors side by side in your terminal.

Neat! Spend a few moments studying the documentation and trying out some of the other colors. For example, you can use hex and RGB colors and a handful of named colors.

The last section you will look at is how to do log rotation with Loguru!

Easy Log Rotation

Loguru makes log rotation easy. You don’t need to import any special handlers. Instead, you only need to specify the rotation argument when you call add().

Here are a few examples:

  • logger.add("file.log", rotation="100 MB")
  • logger.add("file.log", rotation="12:00")
  • logger.add("file.log", rotation="1 week")

These examples rotate the log when it reaches 100 megabytes, at noon every day, or once a week, respectively.

Open up your Python IDE so you can create a full-fledged example. Name the file log_rotation.py and add the following code:

# log_rotation.py

from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("rotated.log", format=fmt, level="DEBUG", rotation="50 B")

logger.debug("This is a debug message")
logger.info("This is an informational message")

Here, you set up a log format, set the level to DEBUG, and set the rotation to every 50 bytes. When you run this code, you will get a couple of log files. Loguru will add a timestamp to the file’s name when it rotates the log.

What if you want to add compression? You don’t need to override the rotator like you did with Python’s logging module. Instead, you can turn on compression using the compression argument.

Create a new Python script called log_rotation_compression.py and add this code for a fully working example:

# log_rotation_compression.py

from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("compressed.log", format=fmt, level="DEBUG",
           rotation="50 B", compression="zip")

logger.debug("This is a debug message")
logger.info("This is an informational message")

for i in range(10):
    logger.info(f"Log message {i}")

The new file is automatically compressed in the zip format when the log rotates. There is also a retention argument that you can use with add() to tell Loguru to clean the logs after so many days:

logger.add("file.log", rotation="100 MB", retention="5 days")

If you were to add this code, the logs that were more than five days old would get cleaned up automatically by Loguru!

Wrapping Up

The Loguru package makes logging much easier than Python’s logging library. It removes the boilerplate needed to create and format logs.

In this chapter, you learned about the following:

  • Installation
  • Logging made simple
  • Handlers and formatting
  • Catching exceptions
  • Terminal logging with color
  • Easy log rotation

Loguru can do much more than what is covered here, though. You can serialize your logs to JSON or contextualize your logger messages. Loguru also allows you to add lazy evaluation to your logs to prevent them from affecting performance in production. Loguru also makes adding custom log levels very easy. For full details about all the things Loguru can do, you should consult Loguru’s website.
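
As a taste of those extra features, here is a short sketch based on Loguru’s documented API; the sink names and the custom level are illustrative, not from this chapter:

# extras.py

from loguru import logger

# Serialize each record as a JSON line
logger.add("app.json", serialize=True)

# Bind contextual data to the logger
logger.bind(user="alice").info("Contextualized message")

# Lazily evaluate expensive arguments only if the message is actually logged
logger.opt(lazy=True).debug("Expensive value: {}", lambda: sum(range(1_000_000)))

# Register and use a custom log level
logger.level("NOTICE", no=25, color="<cyan>")
logger.log("NOTICE", "A custom-level message")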

The post An Intro to Logging with Python and Loguru appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Real Python: Python's Built-in Exceptions: A Walkthrough With Examples

Planet Python - Wed, 2024-05-15 10:00

Python has a complete set of built-in exceptions that provide a quick and efficient way to handle errors and exceptional situations that may happen in your code. Knowing the most commonly used built-in exceptions is key for you as a Python developer. This knowledge will help you debug code because each exception has a specific meaning that can shed light on your debugging process.

You’ll also be able to handle and raise most of the built-in exceptions in your Python code, which is a great way to deal with errors and exceptional situations without having to create your own custom exceptions.

In this tutorial, you’ll:

  • Learn what errors and exceptions are in Python
  • Understand how Python organizes the built-in exceptions in a class hierarchy
  • Explore the most commonly used built-in exceptions
  • Learn how to handle and raise built-in exceptions in your code

To smoothly walk through this tutorial, you should be familiar with some core concepts in Python. These concepts include Python classes, class hierarchies, exceptions, try … except blocks, and the raise statement.

Get Your Code: Click here to download the free sample code that you’ll use to learn about Python’s built-in exceptions.

Errors and Exceptions in Python

Errors and exceptions are important concepts in programming, and you’ll probably spend a considerable amount of time dealing with them in your programming career. Errors are concrete conditions, such as syntax and logical errors, that make your code work incorrectly or even crash.

Often, you can fix errors by updating or modifying the code, installing a new version of a dependency, checking the code’s logic, and so on.

For example, say you need to make sure that a given string has a certain number of characters. In this case, you can use the built-in len() function:

Python >>> len("Pythonista") = 10 File "<input>", line 1 ... SyntaxError: cannot assign to function call here. Maybe you meant '==' instead of '='? Copied!

In this example, you use the wrong operator. Instead of using the equality comparison operator, you use the assignment operator. This code raises a SyntaxError, which represents a syntax error as its name describes.

Note: In the above code, you’ll note how nicely the error message suggests a possible solution for correcting the code. Starting in version 3.10, the Python core developers have put a lot of effort into improving the error messages to make them more friendly and useful for debugging.

To fix the error, you need to localize the affected code and correct the syntax. This action will remove the error:

Python >>> len("Pythonista") == 10 True Copied!

Now the code works correctly, and the SyntaxError is gone. So, your code won’t break, and your program will continue its normal execution.

There’s something to learn from the above example. You can fix errors, but you can’t handle them. In other words, if you have a syntax error like the one in the example, then you won’t be able to handle that error and make the code run. You need to correct the syntax.

On the other hand, exceptions are events that interrupt the execution of a program. As their name suggests, exceptions occur in exceptional situations that should or shouldn’t happen. So, to prevent your program from crashing after an exception, you must handle the exception with the appropriate exception-handling mechanism.

To better understand exceptions, say that you have a Python expression like a + b. This expression will work if a and b are both strings or numbers:

>>> a = 4
>>> b = 3
>>> a + b
7

In this example, the code works correctly because a and b are both numbers. However, the expression raises an exception if a and b are of types that can’t be added together:

Python >>> a = "4" >>> b = 3 >>> a + b Traceback (most recent call last): File "<input>", line 1, in <module> a + b ~~^~~ TypeError: can only concatenate str (not "int") to str Copied!

Because a is a string and b is a number, your code fails with a TypeError exception. Since there is no way to add text and numbers, your code has faced an exceptional situation.
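
To keep a program running past an exceptional situation like this one, you can handle the exception instead of letting it propagate. Here is a minimal sketch; the recovery strategy of converting the string is just illustrative:

a = "4"
b = 3

try:
    result = a + b
except TypeError:
    # Recover by converting the string to an integer first
    result = int(a) + b

print(result)  # Prints: 7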

Python uses classes to represent exceptions and errors. These classes are generically known as exceptions, regardless of whether a concrete class represents an exception or an error. Exception classes give you information about an exceptional situation or about an error detected during the program’s execution.

The first example in this section shows a syntax error in action. The SyntaxError class represents an error, but it’s implemented as a Python exception. This can be confusing, but Python uses exception classes for both errors and exceptions.
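
You can verify this relationship in the REPL. This quick check is ours rather than from the article, but it relies only on standard Python:

>>> issubclass(SyntaxError, Exception)
True
>>> issubclass(TypeError, Exception)
True
>>> SyntaxError.__mro__
(<class 'SyntaxError'>, <class 'Exception'>, <class 'BaseException'>, <class 'object'>)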

Read the full article at https://realpython.com/python-built-in-exceptions/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Real Python: How to Get the Most Out of PyCon US

Planet Python - Wed, 2024-05-15 10:00

Congratulations! You’re going to PyCon US!

Whether this is your first time or not, going to a conference full of people who love the same thing as you is always a fun experience. There’s so much more to PyCon than just a bunch of people talking about the Python language, and that can be intimidating for first-time attendees. This guide will help you navigate all there is to see and do at PyCon.

PyCon US is the biggest conference centered around the Python language. Originally launched in 2003, this conference has grown exponentially and has even spawned several other PyCons and workshops around the world.

Everyone who attends PyCon will have a different experience, and that’s what makes the conference truly unique. This guide is meant to help you, but you don’t need to follow it strictly.

By the end of this article, you’ll know:

  • How PyCon consists of tutorials, conference, and sprints
  • What to do before you go
  • What to do during PyCon
  • What to do after the event
  • How to have a great PyCon

This guide will have links that are specific to PyCon 2024, but it should be useful for future PyCons as well.

Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.

What PyCon Involves

Before considering how to get the most out of PyCon, it’s important to first understand what PyCon involves.

PyCon is broken up into three stages:

  1. Tutorials: PyCon starts with two days of three-hour workshops, during which you get to learn in depth with instructors. These are great to go to since the class sizes are small, and you can ask questions of the instructors. You should consider going to at least one of these if you can, but they do have an additional cost of $150 per tutorial.

  2. Conference: Next, PyCon offers three days of talks. Each presentation lasts for thirty to forty-five minutes, and there are about five talks going on at a time, including a Spanish language charlas track. But that’s not all: there are also open spaces, sponsors, posters, lightning talks, dinners, and so much more.

  3. Sprints: During this stage, you can take what you’ve learned and apply it! This is a four-day exercise where people group up to work on various open-source projects related to Python. If you’ve got the time, going to one or more sprint days is a great way to practice what you’ve learned, become associated with an open-source project, and network with other smart and talented people. Learn more about sprints in this blog post from an earlier year.

Since most PyCon attendees go to the conference part, that’ll be the focus of this article. However, don’t let that deter you from attending the tutorials or sprints if you can!

You may even learn more technical skills by attending the tutorials rather than listening to the talks. The sprints are great for networking and applying the skills that you’ve already got, as well as learning new ones from the people you’ll be working with.

What to Do Before You Go

In general, the more prepared you are for something, the better your experience will be. The same applies to PyCon.

It’s really helpful to plan and prepare ahead of time, which you’re already doing just by reading this article!

Look through the talk schedule and see which talks sound most interesting to you. This doesn’t mean you need to plan out all of the talks that you’re going to see, in every slot possible. But it helps to get an idea of which topics are going to be presented so that you can decide what you’re most interested in.

Getting the PyCon US mobile app will help you plan your schedule. This app lets you view the schedule for the talks and add reminders for the ones that you want to attend. If you’re having a hard time picking which talks to go to, you can come prepared with a question or problem that you need to solve. Doing this can help you focus on the topics that are important to you.

If you can, come a day early to check in and attend the opening reception. The line to check in on the first day is always long, so you’ll save time if you check in the day before. There’s also an opening reception that evening, so you can meet other attendees and speakers, as well as get a chance to check out the various sponsors and their booths.

If you’re brand-new to PyCon, the Newcomer Orientation can help you get caught up on what the conference involves and how you can participate.

Read the full article at https://realpython.com/pycon-guide/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

The Open Source AI Definition gets closer to reality with a global workshop series

Open Source Initiative - Wed, 2024-05-15 08:05

The OSI community is traveling to five continents seeking diverse input on how to guarantee the freedoms to use, study, share and modify Open Source AI systems.

SAN FRANCISCO – May 14, 2024 – Open Source Initiative (OSI), globally recognized by individuals, companies and public institutions as the authority that defines Open Source, is driving a global multi-stakeholder process to define “Open Source AI.” This definition will provide a framework to help AI developers and users determine if an AI system is Open Source or not, meaning that it’s available under terms that allow unrestricted rights to use, study, modify and share. There are currently no accepted means by which openness can be validated for AI, yet many organizations are claiming their AI to be “Open Source.” Just as the Open Source Definition serves as the globally accepted standard for Open Source software, so will the Open Source AI Definition act as a standard for openness in AI systems and their components.

In 2022 the OSI started an in-depth global initiative to engage key players, including corporations, academia, the legal community and organizations and nonprofits representing wider civil society, in a collaborative effort to draft a definition of Open Source AI that ensures that society at large can retain agency and control over the technology. The project has increased in importance as legislators around the world started regulating AI, asking for feedback as guardrails are defined.

This open process has resulted in a massive body of work including podcasts, panel discussions, webinars, published reports, and a plethora of town halls, workshops and conference sessions around the world. A big emphasis was given to make the process as inclusive and representative as possible: 53% of the working groups were composed of people of color. Women and femmes, including transgender women, accounted for 28% of the total and 63% of those individuals are women of color. 

After months of weekly town hall meetings, draft releases, and reviews, the OSI is nearing a stable version of the Open Source AI Definition. Now, the OSI is embarking on a roadshow of workshops to be held on five continents to solicit input from diverse stakeholders on the draft definition. The goal is to present a stable version of the definition in October at the All Things Open event in Raleigh, North Carolina. This “Open Source AI Definition Roadshow” is sponsored by the Alfred P. Sloan Foundation, and OSI’s sponsors and donors.

“AI is different from regular software and forces all stakeholders to review how the Open Source principles apply to this space,” said Stefano Maffulli, executive director of the OSI. “OSI believes that everybody deserves to maintain agency and control of the technology. We also recognize that markets flourish when clear definitions promote transparency, collaboration and permissionless innovation. After spending almost two years gathering voices from all over the world to identify the principles of Open Source suitable for AI systems, we’re embarking on a worldwide roadshow to refine and validate the release candidate version of the Open Source AI Definition.”

The schedule of workshops is as follows: 

  • North America
  • Europe
  • Africa
    • Nigeria, Lagos, August tentative
  • Asia Pacific
    • Hong Kong, AI_Dev (August 23)
    • Asia – details TBD, DPGA members meeting (November 12 – 14)
  • Latin America
    • Argentina, Buenos Aires, Nerdearla (September 24 – 28)

For weekly updates, town hall recordings and access to all the previously published material, visit opensource.org/deepdive.

Supporters of the Open Source AI Definition Process

The Deep Dive: Defining Open Source AI co-design process is made possible thanks to grant 2024-22486 from Alfred P. Sloan Foundation, donations from Google Open Source, Cisco, Amazon and others, and donations by individual members. The media partner is OpenSource.net.
Others interested in offering support can contact OSI at sponsors@opensource.org.

Categories: FLOSS Research

Real Python: Quiz: What Are CRUD Operations?

Planet Python - Wed, 2024-05-15 08:00

In this quiz, you’ll test your understanding of CRUD Operations.

By working through this quiz, you’ll revisit the key concepts and techniques related to CRUD operations. Good luck!

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

PyCon: PyCon US 2024 Sprints will be here before you know it!

Planet Python - Wed, 2024-05-15 06:03

The Development Sprints are coming soon. Make sure you plan ahead:

When: Sprints will take place on May 20, 2024 8:00am through May 23, 2024 11:00pm EST

Where: At PyCon US at the David L. Lawrence Convention Center in rooms 308-311 and 315-321

Project Signups: Get your project listed so that attendees can help support it by signing up via the Submit Sprint Project form.

What are Sprints?

PyCon Development Sprints are up to four days of intensive learning and development on an open source project(s) of your choice, in a team environment. It's a time to come together with colleagues, old and new, to share what you've learned and apply it to an open source project.

It's a time to test, fix bugs, add new features, and improve documentation. And it's a time to network, make friends, and build relationships that go beyond the conference.

PyCon US provides the opportunity and infrastructure; you bring your skills, humanity, and brainpower (oh! and don't forget your computer).

For those that have never attended a development sprint before or want to brush up on basics, on Sunday, May 19th, there will be an Introduction to Sprinting Workshop that will guide you through the basics of git, github, and what to expect at a Sprint. The Introduction to Sprint Workshop takes place in Room 402 on Sunday, May 19th from 5:30pm - 8:30pm EST.

Who can participate?

You! All experience levels are welcome; sprints are a great opportunity to get connected with, and start contributing to your favorite Python project. Participation in the sprints is free and included in your conference registration. Please go to your attendee profile on your dashboard and indicate the number of sprint days you will be attending. 

Mentors: we are always looking for mentors to help new sprinters get up and running. Reach out to the sprint organizers for more info. 

Which Projects are Sprinting?

Project Leads: Any Python project can sign up and invite sprinters to contribute to their project. If you would like your project to be included, add your project to the list. Attendees, check here to see which projects have signed up so far.

Thanks to our sponsors and support team!

Have questions? Reach out to pycon-sprints@python.org

Categories: FLOSS Project Planets

Evgeni Golov: Using HPONCFG on CentOS Stream 9 with OpenSSL 3.2

Planet Debian - Wed, 2024-05-15 05:14

Today I've updated an HPE ProLiant DL325 G10 from CentOS Stream 8 to CentOS Stream 9 (details on that to follow) and realized that hponcfg was broken afterwards.

As I do not have a support contract with HPE, I couldn't just yell at them in private, so I am doing this in public now ;-)

# hponcfg
HPE Lights-Out Online Configuration utility
Version 5.6.0 Date 11/30/2020 (c) 2005,2020 Hewlett Packard Enterprise Development LP

Error: Unable to locate SSL library. Install latest SSL library to use HPONCFG.

Welp, what the heck?

But wait, 5.6.0 from 2020 looks old, let's update this first!

hponcfg is part of the "Management Component Pack" (at least if you're not running RHEL or SLES where you get it via the "Service Pack for ProLiant" which requires a support contract) and can be downloaded from the Software Delivery Repository.

The Software Delivery Repository tells you to configure it in /etc/yum.repos.d/mcp.repo as

[mcp]
name=Management Component Pack
baseurl=http://downloads.linux.hpe.com/repo/mcp/dist/dist_ver/arch/project_ver
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-mcp

gpgcheck=0? Suuure! Plain HTTP? Suuure!

But it gets better! When you look at https://downloads.linux.hpe.com/repo/mcp/centos/ (you have to substitute dist with your distribution!) you'll see that there is no 9 folder and thus no packages for CentOS (Stream) 9. There are however folders for Oracle, Rocky and Alma. Phew. Let's take one of these!

[mcp]
name=Management Component Pack
baseurl=https://downloads.linux.hpe.com/repo/mcp/rocky/9/x86_64/current/
enabled=1
gpgcheck=1
gpgkey=https://downloads.linux.hpe.com/repo/mcp/GPG-KEY-mcp

dnf upgrade hponcfg updates it to hponcfg-6.0.0-0.x86_64 and:

# hponcfg
HPE Lights-Out Online Configuration utility
Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP

Error: Unable to locate SSL library. Install latest SSL library to use HPONCFG.

Fuck.

ldd doesn't show hponcfg being linked to libssl, do they dlopen() at runtime and fucked something up? ltrace to the rescue!

# ltrace hponcfg
…
popen("strings /bin/openssl | grep 'Ope"..., "r")    = 0x621700
fgets("OpenSSL 3.2.1 30 Jan 2024\n", 256, 0x621700)  = 0x7ffd870e2e10
strstr("OpenSSL 3.2.1 30 Jan 2024\n", "OpenSSL 3.0") = nil
…

WAT?

They run strings /bin/openssl |grep 'OpenSSL' and compare the result with "OpenSSL 3.0"?!

Sure, OpenSSL 3.2 in EL9 is rather fresh and didn't hit RHEL/Oracle/Alma/Rocky yet, but surely there are better ways to check for a compatible version of OpenSSL than THIS?!

Anyway, I am not going to downgrade my OpenSSL. Neither will I patch it to pretend to be 3.0.

But I can patch the hponcfg binary!

# vim /sbin/hponcfg
<go to line 146>
<replace 3.0 with 3.2>
:x

Yes, I used vim. Yes, it works. No, I won't guarantee this won't kill a kitten somewhere.

# ./hponcfg
HPE Lights-Out Online Configuration utility
Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP
Firmware Revision = 2.44 Device type = iLO 5 Driver name = hpilo

USAGE:
  hponcfg -?
  hponcfg -h
  hponcfg -m minFw
  hponcfg -r [-m minFw] [-u username] [-p password]
  hponcfg -b [-m minFw] [-u username] [-p password]
  hponcfg [-a] -w filename [-m minFw] [-u username] [-p password]
  hponcfg -g [-m minFw] [-u username] [-p password]
  hponcfg -f filename [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]
  hponcfg -i [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]

  -h,  --help           Display this message
  -?                    Display this message
  -r,  --reset          Reset the Management Processor to factory defaults
  -b,  --reboot         Reboot Management Processor without changing any setting
  -f,  --file           Get/Set Management Processor configuration from "filename"
  -i,  --input          Get/Set Management Processor configuration from the XML input
                        received through the standard input stream.
  -w,  --writeconfig    Write the Management Processor configuration to "filename"
  -a,  --all            Capture complete Management Processor configuration to the file.
                        This should be used along with '-w' option
  -l,  --log            Log replies to "filename"
  -v,  --xmlverbose     Display all the responses from Management Processor
  -s,  --substitute     Substitute variables present in input config file with values
                        specified in "namevaluepairs"
  -g,  --get_hostinfo   Get the Host information
  -m,  --minfwlevel     Minimum firmware level
  -u,  --username       iLO Username
  -p,  --password       iLO Password

For comparison, here is the diff --text output:

# diff -u --text /sbin/hponcfg ./hponcfg
--- /sbin/hponcfg	2022-08-02 01:07:55.000000000 +0000
+++ ./hponcfg	2024-05-15 09:06:54.373121233 +0000
@@ -143,7 +143,7 @@
 helpget_hostinforesetwriteconfigallfileinputlogminfwlevelxmlverbosesubstitutetimeoutdbgverbosityrebootusernamepasswordlibpath%Ah*Ag7Ar=AwIAaMAfRAiXAl\AmgAvrAs}At�Ad�Ab�Au�Ap�Azhgrbaw:f:il:m:vs:t:d:z:u:p:tmpXMLinputFile%2d.xmlw+Error: Syntax Error - Invalid options present. =O@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@�M@�M@aQ@�M@aQ@�N@�M@�N@�P@aQ@aQ@�M@�M@aQ@aQ@LN@aQ@�M@�O@�M@�M@�M@�M@aQ@aQ@�M@<!----><LOGINUSER_LOGINPASSWORD<LOGIN USER_LOGIN="%s" PASSWORD="%s"ERROR: LOGIN tag is missing. >ERROR: LOGIN end tag is missing.
-strings | grep 'OpenSSL 1' | grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.0which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
+strings | grep 'OpenSSL 1' | grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.2which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
 Current version %s
 No response from command.

Pretty sure it won't apply like this with patch, but you get the idea.

And yes, double-giggles for the fact that the error message says "Install latest SSL library to use HPONCFG" and the issue is that I have the latest SSL library installed…

Categories: FLOSS Project Planets

Glyph Lefkowitz: How To PyCon

Planet Python - Wed, 2024-05-15 05:12

These tips are not the “right” way to do PyCon, but they are suggestions based on how I try to do PyCon. Consider them reminders to myself, an experienced long-time attendee, which you are welcome to overhear.

See Some Talks

The hallway track is awesome. But the best version of the hallway track is not just bumping into people and chatting; it’s the version where you’ve all recently seen the same thing, and thereby have a shared context of something to react to. If you aren’t going to talks, you aren’t going to get a good hallway track. Therefore: choose talks that interest you, attend them and pay close attention, then find people to talk to about them.

Given that you will want to see some of the talks, make sure that you have the schedule downloaded and available offline on your mobile device, or printed out on a piece of paper.

Make a list of the talks you think you want to see, but have that schedule with you in case you want to call an audible in the middle of the conference, switching to a different talk you didn’t notice based on some of those “hallway track” conversations.

Participate In Open Spaces

The name “hallway track” itself is antiquated, in a way which is relevant and important to modern conferences. It used to be that conferences were exclusively oriented around their scheduled talks; it was called the “hallway” track because the way to access it was to linger in the hallways, outside the official structure of the conference, and just talk to people.

However, at PyCon and many other conferences, this unofficial track is now much more of an integrated, official part of the program. In particular, open spaces are not only a more official hallway track, they are considerably better than the historical “hallway” experience, because these ad-hoc gatherings can be convened with a prepared topic and potentially a loose structure to facilitate productive discussion.

With open spaces, sessions can have an agenda and so conversations are easier to start. Rooms are provided, which is more useful than you might think; literally hanging out in a hallway is actually surprisingly disruptive to speakers and attendees at talks; us nerds tend to get pretty loud and can be quite audible even through a slightly-cracked door, so avail yourself of these rooms and don’t be a disruptive jerk outside somebody’s talk.

Consult the open space board, and put up your own proposed sessions. Post them as early as you can, to maximize the chance that they will get noticed. Post them on social media, using the conference's official hashtag, and ask any interested folks you bump into to help boost it.1

Remember that open spaces are not talks. If you want to give a mini-lecture on a topic and you can find interested folks you could do that, but the format lends itself to more peer-to-peer, roundtable-style interactions. Among other things, this means that, unlike proposing a talk, where you should be an expert on the topic that you are proposing, you can suggest open spaces where you are curious — but ignorant — about something, in the hopes that some experts will show up and you can listen to their discussion.

Be prepared for this to fail; there’s a lot going on and it’s always possible that nobody will notice your session. Again, maximize your chances by posting it as early as you can and promoting it, but be prepared to just have a free 30 minutes to check your email. Sometimes that’s just how it goes. The corollary here is to balance proposing your own spaces with attending others’. After all, if someone else proposed it, you know at least one other person is gonna be there.

Take Care of Your Body

Conferences can be surprisingly high-intensity physical activities. It’s not a marathon, but you will be walking quickly from one end of a large convention center to another, potentially somewhat anxiously.

Hydrate, hydrate, hydrate. Bring a water bottle, and have it with you at all times. It might be helpful to set repeating timers on your phone to drink water, since it can be easy to forget in the middle of engaging conversations. If you take advantage of the hallway track as much as you should, you will talk more than you expect; talking expels water from your body. All that aforementioned walking might make you sweat a bit more than you realize.

Hydrate.

More generally, pay attention to what you are eating and drinking. Conference food isn’t always the best, and in a new city you might be tempted to load up on big meals and junk food. You should enjoy yourself and experience the local cuisine, but do it intentionally. While you enjoy the local fare, do so in whatever moderation works best for you. Similarly for boozy night-time socializing. Nothing stings quite as much as missing a morning of talks because you’ve got a hangover or a migraine.

This is worth emphasizing because in the enthusiasm of an exciting conference experience, it’s easy to lose track and overdo it.

Meet Both New And Old Friends: Plan Your Socializing

A lot of the advice above is mostly for first-time or new-ish conferencegoers, but this one might be more useful for the old heads. As we build up a long-time clique of conference friends, it’s easy to get a bit insular and lose out on one of the bits of magic of such an event: meeting new folks and hearing new perspectives.

While open spaces can address this a little bit, there's one additional thing I've started doing in the last few years: dinners are for old friends, but lunches are for new ones. At least half of the days I'm there, I try to go to a new table with new folks that I haven't seen before, and strike up a conversation. I even have a little canned icebreaker prompt, which I would suggest to others as well, because it’s worked pretty nicely in past years: “what is one fun thing you have done with Python recently?”2.

Given that I have a pretty big crowd of old friends at these things, I actually tend to avoid old friends at lunch, since it’s so easy to get into multi-hour conversations, and meeting new folks in a big group can be intimidating. Lunches are the time I carve out to try and meet new folks.

I’ll See You There

I hope some of these tips were helpful, and I am looking forward to seeing some of you at PyCon US 2024!

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!

  1. In PyCon2024's case, #PyConUS on Mastodon is probably the way to go. Note, also, that it is #PyConUS and not #pyconus, which is much less legible for users of screen-readers. 

  2. Obviously that is specific to this conference. At the O’Reilly Software Architecture conference, my prompt was “What is software architecture?” which had some really fascinating answers. 

Categories: FLOSS Project Planets

Electric Citizen: Big Changes Ahead for Drupal

Planet Drupal - Wed, 2024-05-15 04:05

Our team recently attended (and once again sponsored!) the DrupalCon North America conference in Portland, OR. 

This annual conference brings together the Drupal community, from the agencies who provide Drupal services to the industry clients who rely on it, along with contributors and open-source enthusiasts from around the world.

From my perspective on the exhibitors floor, working the booth, I didn’t see as many of the great individual sessions as I have in past years. But I did leave with some important takeaways from this year’s event, especially around some upcoming changes for Drupal. 

Categories: FLOSS Project Planets

Talk Python to Me: #462: Pandas and Beyond with Wes McKinney

Planet Python - Wed, 2024-05-15 04:00
This episode dives into some of the most important data science libraries from the Python space with one of its pioneers: Wes McKinney. He's the creator or co-creator of the pandas, Apache Arrow, and Ibis projects and an entrepreneur in this space.

Episode sponsors

  • Neo4j: talkpython.fm/neo4j-graphstuff
  • Mailtrap: talkpython.fm/mailtrap
  • Talk Python Courses: talkpython.fm/training

Links from the show

  • Wes' Website: wesmckinney.com
  • Pandas: pandas.pydata.org
  • Apache Arrow: arrow.apache.org
  • Ibis: ibis-project.org
  • Python for Data Analysis - Groupby Summary: wesmckinney.com/book
  • Polars: pola.rs
  • Dask: dask.org
  • Sqlglot: sqlglot.com
  • Pandoc: pandoc.org
  • Quarto: quarto.org
  • Evidence framework: evidence.dev
  • pyscript: pyscript.net
  • duckdb: duckdb.org
  • Jupyterlite: jupyter.org
  • Djangonauts: djangonaut.space
  • Watch this episode on YouTube: youtube.com
  • Episode transcripts: talkpython.fm

Stay in touch with us:

  • Subscribe to us on YouTube: talkpython.fm/youtube
  • Follow Talk Python on Mastodon: @talkpython
  • Follow Michael on Mastodon: @mkennedy
Categories: FLOSS Project Planets

The Drop Times: Policy-Based Access in Core by Kristiaan Van den Eynde

Planet Drupal - Wed, 2024-05-15 02:13
Kristiaan Van den Eynde, Senior Drupal Developer at Factorial, has made substantial contributions to Drupal, including the widely-used Group module and VariationCache. His project, Policy-Based Access in Core, introduced a dynamic system for managing permissions based on predefined policies. This initiative, set to debut in Drupal 10.3, promises enhanced flexibility and security. Kristiaan shares insights into his development process, the challenges faced, and the future of access control in Drupal.
Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10 using the Migrate API: Avoiding entity ID conflicts

Planet Drupal - Wed, 2024-05-15 01:54

By default, the Drupal 7 to 10 upgrade path preserves entity IDs. In the previous article, we explained that this would cause problems if content or configuration already exists in the destination Drupal 10 site. Let’s explore this further and evaluate ways to work around the issue.

Categories: FLOSS Project Planets

Debug Academy: How to create custom sorting logic for Drupal views

Planet Drupal - Wed, 2024-05-15 01:29

Drupal websites sometimes have a need to implement more advanced sorting logic than what's available out of the box.

One of our career-changing Drupal training course alumni asked me how to handle this today. After answering them, I decided to copy the answer into a blogpost.

The views module creates dynamic queries for us based on the configuration options we select. The UI essentially allows us to use any field for sorting in ascending (smallest to largest) or descending (largest to smallest) order. This is extremely helpful and covers the vast majority of use cases - date sorting, alphabetical sorting, and numeric sorting are all supported - but we sometimes run into limitations when we have more complicated requirements.

Some examples of these scenarios include:

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RApiSerialize 0.1.3 on CRAN: Skipping XDR

Planet Debian - Tue, 2024-05-14 19:28

A new bug fix release 0.1.3 of RApiSerialize got onto CRAN earlier today. This is the first release in well over a year, and it permits skipping the XDR serialization format, which is only needed when transferring data between big- and little-endian machines but comes at a certain run-time cost that one can avoid on the (much more common) little-endian machines. This is a new option, and the old behavior remains the default. Those who want to can now skip the step.

The RApiSerialize package is used by both my RcppRedis package as well as by Travers’ excellent qs package. We also addressed the recent nag from CRAN concerning ‘NO_REMAP’.

Changes in version 0.1.3 (2024-05-13)
  • Add an xdr argument to disable XDR for an approx. threefold speed increase (Travers Ching and Dirk in #6)

  • Use R_NO_REMAP and Rf_* prefix for API calls

  • Minor continuous integration updates

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. are at the GitHub repository.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Evgeni Golov: Using Packit to build RPMs for projects that depend on or vendor your code

Planet Debian - Tue, 2024-05-14 16:12

I am a huge fan of Packit as it allows us to provide RPMs to our users and testers directly from a pull-request, thus massively tightening the feedback loop and involving people who otherwise might not be able to apply the changes (for whatever reason) and "quickly test" something out. It's also a great way to validate that a change actually builds in a production environment, where no unnecessary development and test dependencies are installed.

You can also run tests of the built packages on Testing Farm and automate pushing releases into Fedora/CentOS Stream, but this is neither a (plain) Packit advertisement post, nor is that functionality that I can talk about with a certain level of experience.

Adam recently asked why we don't have Packit builds for our Puppet modules and my first answer was: "well, puppet-* doesn't produce a thing we ship directly, so nobody dared to do it".

My second answer was that I had blogged how to test a Puppet module PR with Packit, but I totally agree that the process was a tad cumbersome and could be improved.

Now some madman did it and we all get to hear his story! ;-)

What is the problem anyway?

The Foreman Installer is a bit of Ruby code1 that provides a CLI to puppet apply based on a set of Puppet modules. As the Puppet modules can also be used outside the installer and have their own lifecycle, they live in separate git repositories and their releases get uploaded to the Puppet Forge. Users however do not want to (and should not have to) install the modules themselves.

So we have to ship the modules inside the foreman-installer package. Packaging 25 modules for two packaging systems (we support Enterprise Linux and Debian/Ubuntu) seems like a lot of work. Especially if you consider that the main foreman-installer package would need to be rebuilt after each module change as it contains generated files based on the modules which are too expensive to generate at runtime.

So we can ship the modules inside the foreman-installer source release, thus vendoring those modules into the installer release.

To do so we use librarian-puppet with a Puppetfile and either a Puppetfile.lock for stable releases or by letting librarian-puppet fetch latest for nightly snapshots.

This works beautifully for changes that land in the development and release branches of our repositories - regardless if it's foreman-installer.git or any of the puppet-*.git ones. It also works nicely for pull-requests against foreman-installer.git.

But because the puppet-* repositories do not map to packages, we assumed it wouldn't work well for pull-requests against those.

How can we solve this?

Well, the "obvious" solution is to build the foreman-installer package via Packit also for pull-requests against the puppet-* repositories. However, as usual, the devil is in the details.

Packit by default clones the repository of the pull-request and tries to create a source tarball from that using git archive. As this might be too simple for many projects, one can define a custom create-archive action that runs after the pull-request has been cloned and produces the tarball instead. We already use that in the Packit configuration for foreman-installer to run the pkg:generate_source rake target which executes librarian-puppet for us.

But now the pull-request is against one of the Puppet modules, so Packit will clone that, not the installer.

We gotta clone foreman-installer on our own. And then point librarian-puppet at the pull-request. Fun.

Cloning is relatively simple, call git clone -- sorry Packit/Copr infrastructure.

But the Puppet module pull-request? One can use :git => 'https://git.example.com/repo.git' in the Puppetfile to fetch a git repository. In fact, that's what we already do for our nightly snapshots. It also supports :ref => 'some_branch_or_tag_name', if the remote HEAD is not what you want.

My brain first went "I know this! GitHub has this magic refs/pull/1/head and refs/pull/1/merge refs you can checkout to get the contents of the pull-request without bothering to add a remote for the source of the pull-request". Well, this requires to know the ID of the pull-request and Packit does not expose that in the environment variables available during create-archive.

Wait, but we already have a checkout. Can we just say :git => '../.git'? Cloning a .git folder is totally possible after all.

[Librarian] --> fatal: repository '../.git' does not exist
Could not checkout ../.git: fatal: repository '../.git' does not exist

Seems librarian disagrees. Damn. (Yes, I checked, the path exists.)

💡 does it maybe just not like relative paths?! Yepp, using an absolute path absolutely works!

For some reason it ends up checking out the default HEAD of the "real" (GitHub) remote, not of ../. Luckily this can be fixed by explicitly passing :ref => 'origin/HEAD', which resolves to the branch Packit created for the pull-request.

Now we just need to put all of that together and remember to execute all commands from inside the foreman-installer checkout as that is where all our vendoring recipes etc live.

Putting it all together

Let's look at the diff between the packit.yaml for foreman-installer and the one I've proposed for puppet-pulpcore:

--- a/foreman-installer/.packit.yaml	2024-05-14 21:45:26.545260798 +0200
+++ b/puppet-pulpcore/.packit.yaml	2024-05-14 21:44:47.834162418 +0200
@@ -18,13 +18,15 @@
 actions:
   post-upstream-clone:
     - "wget https://raw.githubusercontent.com/theforeman/foreman-packaging/rpm/develop/packages/foreman/foreman-installer/foreman-installer.spec -O foreman-installer.spec"
+    - "git clone https://github.com/theforeman/foreman-installer"
+    - "sed -i '/theforeman.pulpcore/ s@:git.*@:git => \"#{__dir__}/../.git\", :ref => \"origin/HEAD\"@' foreman-installer/Puppetfile"
   get-current-version:
-    - "sed 's/-develop//' VERSION"
+    - "sed 's/-develop//' foreman-installer/VERSION"
   create-archive:
-    - bundle config set --local path vendor/bundle
-    - bundle config set --local without development:test
-    - bundle install
-    - bundle exec rake pkg:generate_source
+    - bash -c "cd foreman-installer && bundle config set --local path vendor/bundle"
+    - bash -c "cd foreman-installer && bundle config set --local without development:test"
+    - bash -c "cd foreman-installer && bundle install"
+    - bash -c "cd foreman-installer && bundle exec rake pkg:generate_source"
  1. It clones foreman-installer (in post-upstream-clone, as that felt more natural after some thinking)
  2. It adjusts the Puppetfile to use #{__dir__}/../.git as the Git repository, abusing the fact that a Puppetfile is really just a Ruby script (sorry Ben!) and knows the __dir__ it lives in
  3. It fetches the version from the foreman-installer checkout, so it's sort-of reasonable
  4. It performs all building inside the foreman-installer checkout
Can this be used in other scenarios?

I hope so! Vendoring is not unheard of. And testing your "consumers" (dependents? naming is hard) is good style anyway!

  1. three Ruby modules in a trench coat, so to say 

Categories: FLOSS Project Planets

The Accidental Coder: AI Translation - Not Ready for Prime Time?

Planet Drupal - Tue, 2024-05-14 15:49

While working on the latest (D10) version of my blog, I wanted to add multilingual functionality.

Investigation suggested that in order to capture the largest language groups in the U.S./Canada a site should offer:

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #629 (May 14, 2024)

Planet Python - Tue, 2024-05-14 15:30

#629 – MAY 14, 2024

Flattening a List of Lists in Python

In this video course, you’ll learn how to flatten a list of lists in Python. You’ll use different tools and techniques to accomplish this task. First, you’ll use a loop along with the .extend() method of list. Then you’ll explore other tools, including reduce(), sum(), itertools.chain(), and more.
REAL PYTHON course
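
To give a taste of two of the techniques covered, a quick sketch:

from itertools import chain

matrix = [[1, 2], [3, 4], [5, 6]]

# a loop plus list.extend()
flat = []
for row in matrix:
    flat.extend(row)

# itertools.chain.from_iterable() produces the same result
flat_chain = list(chain.from_iterable(matrix))

print(flat == flat_chain)  # True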

What’s New in Python 3.13

Python 3.13 has gone into beta, which means the feature freeze is now in place. This is the official listing of the new features in 3.13. This release includes changes to the REPL, new typing features, experimental support for disabling the GIL, dead battery removal, and more.
PYTHON

[Webinar] Saga Pattern Simplified: Building Sagas with Temporal

Join us on May 30th: we’ll give a brief overview of Sagas, including challenges and benefits. Then we’ll introduce you to Temporal and demonstrate how easy it is to build, test, and run Sagas using our platform and coding in your preferred language. Prior knowledge of Temporal is not required →
TEMPORAL sponsor

Sets as Dictionaries With No Values

A set is a built-in data type that provides fast lookup and insertion with characteristics similar to those of dictionary keys. This article explores the relationship between sets and dictionaries by implementing a set class.
RODRIGO GIRÃO SERRÃO
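
The underlying idea is easy to demonstrate; here is a minimal sketch of a set-like class backed by a dictionary (not the article's actual implementation):

class DictSet:
    """A tiny set-like container that stores items as dictionary keys."""

    def __init__(self, iterable=()):
        self._data = dict.fromkeys(iterable)

    def add(self, item):
        self._data[item] = None

    def __contains__(self, item):
        # hash-based lookup, just like dictionary keys
        return item in self._data

    def __iter__(self):
        return iter(self._data)

ds = DictSet([1, 2, 3])
ds.add(4)
print(2 in ds, 5 in ds)  # True False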

2023 PSF Annual Impact Report

PYTHON

Python Software Foundation Board Election Dates for 2024

PYTHON SOFTWARE FOUNDATION

Python 3.13.0 Beta 1 Released

CPYTHON DEV BLOG

Articles & Tutorials

A 100x Speedup With Unsafe Python

This is a deep, in-the-weeds analysis of how different packages can store the same kinds of data in a different order, and how row-based vs. column-based storage order can affect NumPy’s speed in processing the data. The rarely examined “strides” value of a NumPy array specifies how its data is laid out in memory, and this article shows an interesting approach to working around that layout for a speed-up.
YOSSI KREININ
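
You can inspect strides yourself; a small sketch showing how the layout differs between row-major and column-major order:

import numpy as np

a = np.zeros((3, 4), dtype=np.int64)  # C (row-major) order by default
b = np.asfortranarray(a)              # Fortran (column-major) order copy

# bytes to step over to reach the next row / the next column
print(a.strides)  # (32, 8)
print(b.strides)  # (8, 24)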

The New REPL in Python 3.13

Python 3.13 just hit feature freeze with the first beta release, and it includes a host of improvements to the REPL. Automatic indenting, block-level editing, and more make the built-in REPL more powerful and easier to use.
TREY HUNNER

How to Read and Write Parquet Files With Python

Apache Parquet files are a popular columnar storage format used by data scientists and anyone using the Hadoop ecosystem. By using the pyarrow package, you can read and write Parquet files; this tutorial shows you how.
MIKE DRISCOLL
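
The basic round-trip takes only a few lines; a minimal sketch using pyarrow:

import pyarrow as pa
import pyarrow.parquet as pq

# build a small table and write it out as Parquet
table = pa.table({"name": ["ada", "grace"], "year": [1815, 1906]})
pq.write_table(table, "people.parquet")

# read it back into memory
loaded = pq.read_table("people.parquet")
print(loaded.to_pydict())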

Generating Fake Django Model Instances With Factory Boy

Writing good tests means having data to test with. The factory-boy library helps you create fake data that you can use with your tests. This article shows you how to use factory-boy with Django ORM models.
AIDAS BENDORAITIS
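
A minimal sketch of the idea, assuming a hypothetical Author model with name and email fields:

import factory
from myapp.models import Author  # hypothetical Django model

class AuthorFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Author

    name = factory.Faker("name")
    email = factory.Faker("email")

# in a test, this creates and saves an Author populated with fake data
author = AuthorFactory()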

Python Sequences: A Comprehensive Guide

This tutorial dives into Python sequences, one of the main categories of data types. You’ll learn about the properties that make an object a sequence and how to create user-defined sequences.
REAL PYTHON

How LLMs Work, Explained Without Math

You’ve probably come across articles on Large Language Models (LLMs) and may have tried products like ChatGPT. This article explains how these tools work without resorting to advanced math.
MIGUEL GRINBERG

Creating a Calculator With wxPython

wxPython is a GUI toolkit for the Python programming language. This article introduces you to building GUIs by creating a personal calculator.
MIKE DRISCOLL

Asyncio Run Multiple Concurrent Event Loops

Ever wanted to add concurrency to your concurrency? You can run multiple asyncio event loops by using threading. This article shows you how.
JASON BROWNLEE
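
A minimal sketch of the pattern: each thread runs its own event loop via asyncio.run():

import asyncio
import threading

async def worker(name):
    await asyncio.sleep(1)
    print(f"{name} done")

def run_loop(name):
    # asyncio.run() creates a fresh event loop in the current thread
    asyncio.run(worker(name))

threads = [threading.Thread(target=run_loop, args=(f"loop-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()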

How Python asyncio Works: Recreating It From Scratch

This article explains how asyncio works by showing you how to re-create it using generators and the __await__ method.
JACOB PADILLA • Shared by Jacob Padilla
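
For a flavor of the building blocks involved: any object whose __await__ method is a generator can be awaited, and a minimal event loop can drive coroutines with .send(). A tiny sketch (not the article's code):

class Ready:
    """An awaitable that yields control once, then produces a value."""

    def __init__(self, value):
        self.value = value

    def __await__(self):
        yield  # hand control back to the event loop once
        return self.value

async def main():
    result = await Ready(42)
    print(result)  # 42

# drive the coroutine by hand, as a minimal event loop would
coro = main()
try:
    coro.send(None)  # runs until the bare yield
    coro.send(None)  # resumes and finishes
except StopIteration:
    pass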

Comments on Understanding Software

Nat responds to a presentation by C J Silverio on how software gets made at small- to medium-sized organizations.
NAT BENNETT

Projects & Code

A Raspberry Pi Document Scanner

CAELESTIS COSPLAY

horus: An OSINT, Digital Forensics Tool

GITHUB.COM/6ABD

simple-spaced-repetition: Simple Spaced Repetition Scheduler

GITHUB.COM/VLOPEZFERRANDO

Best Python Chart Examples

PYTHON-GRAPH-GALLERY.COM

WireViz: Easily Document Cables and Wiring Harnesses

GITHUB.COM/WIREVIZ

Events

Weekly Real Python Office Hours Q&A (Virtual)

May 15, 2024
REALPYTHON.COM

PyCon US 2024

May 15 to May 24, 2024
PYCON.ORG

PyData Bristol Meetup

May 16, 2024
MEETUP.COM

PyLadies Dublin

May 16, 2024
PYLADIES.COM

Flask Con 2024

May 17 to May 18, 2024
FLASKCON.COM

PyGrunn 2024

May 17 to May 18, 2024
PYGRUNN.ORG

Django Girls Ecuador 2024

May 17, 2024
OPENLAB.EC

Happy Pythoning!
This was PyCoder’s Weekly Issue #629.

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Aten Design Group: Drupal API Development Simplified with APITools Module

Planet Drupal - Tue, 2024-05-14 14:55

One of Drupal’s most important features is its ability to integrate seamlessly with other systems (CRMs, eCommerce platforms, event management platforms, etc.). Drupal can expose data using modules like JSON:API, which are integral parts of Drupal Core. Moreover, it can also consume data and make HTTP requests using standard HTTP methods. This post will focus primarily on the latter, highlighting how a module named APITools simplifies the process for Drupal developers.

Background

In researching the history of HTTP request handling in Drupal, I discovered that drupal_http_request has been around since version 5.x. It was described as:

"A flexible and powerful HTTP client implementation that correctly handles GET, POST, PUT, or any other HTTP requests, including handling redirects."

Throughout Drupal versions 6.x and 7.x, drupal_http_request continued to be a go-to option, seemingly simpler than using PHP's cURL functions directly, a tool that many developers find intricate. With the release of Drupal 8, Drupal::httpClient replaced drupal_http_request, granting developers access to Guzzle, the de facto HTTP client in the PHP community.
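
For anyone who has not made the jump yet, a minimal sketch of the modern approach (the URL is a placeholder):

<?php
// \Drupal::httpClient() returns a Guzzle client instance.
$client = \Drupal::httpClient();
$response = $client->get('https://api.example.com/v1/items');
$data = json_decode((string) $response->getBody(), TRUE);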

While httpClient/Guzzle is typically the preferred choice for HTTP requests, it's rare that any request happens without some form of authentication. Although OAuth 2 has emerged as a standard for API authentication, the specifics can vary considerably between different APIs. This variability doesn’t mean the principles of OAuth 2 aren’t followed; rather, the implementations differ just enough that attempts to abstract this functionality into a universal module have faced challenges. As a result, developers frequently find themselves writing slightly different code for each API integration to accommodate these nuances. APITools attempts to be just helpful enough in these sorts of situations without making too many assumptions.

Leveraging APITools for the Drupal Zoom API Module

I personally maintain the Drupal Zoom API module, and over the past year, Zoom has changed their authentication requirements. This challenge prompted me to explore the APITools module, maintained by my friend and colleague Alan Sherry. What attracted me most to APITools was its ability to offer configurable options for storing credentials and an extensible client plugin that routes all API requests through a specified authentication method. By using APITools, I significantly reduced the amount of code in the Zoom API module and quickly released a version 3.x, which is compatible with Zoom’s "Server-to-Server OAuth" authentication method. The configuration form and the majority of the API client are now provided by APITools, reducing the amount of code I’ll need to maintain in the Zoom API module.

If you, like me, maintain an API-focused contrib module or need a reliable HTTP client for one-off tasks, I highly encourage you to explore APITools. With a little setup time, you can configure your ApiToolsClient and start making requests effortlessly.

The fact is, there are numerous API client modules on Drupal.org, each tailored for different services. APITools offers an opportunity for a more consistent and efficient approach. I hope you'll check it out!

Getting Started / Examples

We've written some documentation on Drupal.org for you to reference. For a fairly complete example in the Drupal contrib space, check out the client plugin that is part of the Zoom API module.

Additional Examples

We’ve created a repository with various API clients that will hopefully help you get started.

  • Acalog - Simple API key implementation
  • Auth0 - Access token request with audience and grant type
  • Brandfolder - An example of using an SDK as a base with an APITools client wrapper around it
  • Localist - Example of a static access token created by an administrator
  • Sharepoint - Access token with audience/grant type, and “ext_expires_in” instead of “expires_in”

If you decide to use APITools, we’d love to hear about your experience in the blog comments below.

Joel Steidl
Categories: FLOSS Project Planets
