FLOSS Project Planets

Matt Layman: Settings and Billing Portal - Building SaaS with Python and Django #190

Planet Python - Wed, 2024-05-15 20:00
In this episode, I worked on the settings page for the user. This was a vital addition because it allows users to access the Stripe billing portal and close their account if they no longer wish to use JourneyInbox.
Categories: FLOSS Project Planets

Armin Ronacher: Using Rust Macros for Custom VTables

Planet Python - Wed, 2024-05-15 20:00

Given that building programming languages and interpreters is the developer's most favorite hobby, I will never stop writing templating engines. About three years ago I first wanted to see if I could make an implementation of my Jinja2 template engine for Rust. It's called MiniJinja and is very close in behavior to Jinja2. Close enough that I have seen people pick it up more than I thought they would. For instance, the Hugging Face Text Generation Inference uses it for chat templates.

I wrote it primarily just to see how you would introduce dynamic things into a language that doesn't have much of a dynamic runtime. A few weeks ago I released a major new version of the engine that has a very different internal object model for values, and in this post I want to share a bit of how it works and what you can learn from it. At the heart of it is a type_erase! macro originally contributed by Sergio Benitez. This post goes into the need for, and usefulness of, that macro.

Runtime Values

To understand the problem you first need to understand that a template engine like Jinja2 has requirements for runtime types that are a bit different from how Rust likes to think about data. The runtime is entirely dynamic and requires a form of garbage collection for those values. In the case of a simple templating engine like Jinja2, you can largely get away with reference counting. The way this works in practice is that MiniJinja has a type called Value which can be cloned to increment the refcount; when it's dropped, the refcount is decremented. Value is the basic type that can hold all kinds of things (integers, strings, functions, sequences, etc.). In MiniJinja you can thus do something like this:

use minijinja::Value;

// primitives
let int_val = Value::from(42);
let str_val = Value::from("Maximilian");
let bool_val = Value::from(true);

// complex objects
let vec_val = Value::from(vec![1, 2, 3]);

// reference counting
let vec_val2 = vec_val.clone(); // refcount = 2
drop(vec_val);                  // refcount = 1
drop(vec_val2);                 // refcount = 0 -> gone

Inside the engine these objects have all kinds of behaviors to make templates like this work:

{{ int_val }}          ->  42
{{ str_val|upper }}    ->  MAXIMILIAN
{{ not bool_val }}     ->  false
{{ vec_val }}          ->  [1, 2, 3]
{{ vec_val|reverse }}  ->  [3, 2, 1]

Some of that functionality is also exposed via Rust APIs. So for instance you can iterate over values if they contain sequences:

let vec_val = Value::from(vec![1, 2, 3]);
for value in vec_val.try_iter()? {
    println!("{} ({})", value, value.kind());
}

If you run this, it will print the following:

1 (number)
2 (number)
3 (number)

So each value in this object has itself been “boxed” in a value. As far as the engine is concerned, everything is a value.

Objects

But how do you get something interesting into these values that is not just a basic type that could be hardcoded (such as a vector)? Imagine you have a custom object that you want to efficiently expose to the engine. This is in fact something the engine itself needs to do internally. For instance, Jinja has first-class functions in the form of macros, so it needs to expose those into the engine as well. Additionally, Rust functions passed to the engine also need to be represented.

This is why a Value type can hold objects internally. These objects also support downcasting:

// box a vector in a value
let value = Value::from_object(vec![1i32, 2, 3]);
println!("{} ({})", value, value.kind());

// downcast it back into a reference of the original object
let v: &Vec<i32> = value.downcast_object_ref().unwrap();
println!("{:?}", v);

In order to do this, MiniJinja provides a trait called Object; any type that implements it can be boxed into a value. All the dynamic operations on the value are forwarded to the internal Object. These operations are the following:

  • repr(): returns the “representation” of the object. The representation defines how the object is represented (serialized) and how it behaves. Valid representations are Seq (the object is a list or sequence), Map (the object is a struct or map), Iterable (the object can be iterated over but not indexed), and Plain (the object is just a plain object, for instance used for functions)
  • get_value(key): looks up a key in the object
  • enumerate(): returns the contents of the object if there are any

Additionally there are quite a few extra APIs (to render objects to strings, to make them callable, etc.) and a few more methods that just have default implementations; we can ignore those for now. For instance, the “length” of an object by default comes from the length of the enumerator returned by enumerate().

So how would one design a trait like this? For the sake of keeping this post brief, let's pretend there is only repr, get_value and enumerate. Remember that we need reference counting, so we might be encouraged to write a trait like the following:

pub trait Object: Debug + Send + Sync {
    fn repr(self: &Arc<Self>) -> ObjectRepr {
        ObjectRepr::Map
    }

    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> {
        None
    }

    fn enumerate(self: &Arc<Self>) -> Enumerator {
        Enumerator::NonEnumerable
    }
}

This trait looks pretty appealing. The self receiver type is reference counted (thanks to &Arc<Self>) and the interface is pretty minimal. Enumerator maybe needs a bit of explanation before we go further. In Rust, when you iterate over an object you usually have something called an Iterator. Iterators usually borrow, and you use traits to give an iterator additional functionality. For instance, a DoubleEndedIterator can be reversed. In a template engine like Jinja, however, we need to do everything dynamically, and we also need to ensure that we do not end up borrowing from the object with lifetimes. The engine needs to be able to hold on to the iterator independently of the object it iterates over. To simplify this process, the engine uses this Enumerator type internally. It looks a bit like the following:

#[non_exhaustive]
pub enum Enumerator {
    // object cannot be enumerated
    NonEnumerable,
    // object is empty
    Empty,
    // iterate over static strings
    Str(&'static [&'static str]),
    // iterate over an actual dynamic iterator
    Iter(Box<dyn Iterator<Item = Value> + Send + Sync>),
    // iterate by calling `get_value` in sequence from 0 to `usize`
    Seq(usize),
}

There are many more versions (for instance for DoubleEndedIterators and a few more) but again, let's keep it simple.

Why Arc Receiver?

So why do you need &Arc<Self> as the receiver? Because in a lot of cases you really need to bump your own refcount to do something useful. For instance, here is how iteration is implemented for sequence objects:

fn try_iter(self: &Arc<Self>) -> Option<Box<dyn Iterator<Item = Value> + Send + Sync>>
where
    Self: 'static,
{
    match self.enumerate() {
        Enumerator::Seq(l) => {
            let self_clone = self.clone();
            Some(Box::new((0..l).map(move |idx| {
                self_clone.get_value(&Value::from(idx)).unwrap_or_default()
            })))
        }
        // ...
    }
}

If we did not have a way to bump our own refcount, we could not implement something like this.

Boxing Up Objects

We can now use this to implement a custom struct for instance (say a 2D point with two attributes: x and y):

#[derive(Debug)]
struct Point(f32, f32);

impl Object for Point {
    fn repr(self: &Arc<Self>) -> ObjectRepr {
        ObjectRepr::Map
    }

    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> {
        match key.as_str()? {
            "x" => Some(Value::from(self.0)),
            "y" => Some(Value::from(self.1)),
            _ => None,
        }
    }

    fn enumerate(self: &Arc<Self>) -> Enumerator {
        Enumerator::Str(&["x", "y"])
    }
}

Or alternatively as a custom sequence:

#[derive(Debug)]
struct Point(f32, f32);

impl Object for Point {
    fn repr(self: &Arc<Self>) -> ObjectRepr {
        ObjectRepr::Seq
    }

    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> {
        match key.as_usize()? {
            0 => Some(Value::from(self.0)),
            1 => Some(Value::from(self.1)),
            _ => None,
        }
    }

    fn enumerate(self: &Arc<Self>) -> Enumerator {
        Enumerator::Seq(2)
    }
}

Now that we have the object, we need to box it up into an Arc. Unfortunately this is where we hit a hurdle:

error[E0038]: the trait `Object` cannot be made into an object
  --> src/main.rs:29:15
   |
29 |     let val = Arc::new(Point(1.0, 2.5)) as Arc<dyn Object>;
   |               ^^^^^^^^^^^^^^^^^^^^^^^^^ `Object` cannot be made into an object
   |
note: for a trait to be "object safe" it needs to allow building a vtable
      to allow the call to be resolvable dynamically

The reason it cannot be made into an object is that we declared the receiver as &Arc<Self> instead of &Self. This is a limitation because Rust is not capable of building a vtable for us here. A vtable is nothing more than a struct that holds a function pointer for each method on the trait. So our plan of using Arc<dyn Object> won't work, but we can in fact build our own version of this. To accomplish it we just need to build something like a DynObject which internally implements trampolines that call into the original methods and manage the refcounting for us.

Macro Magic

Since this requires a lot of unsafe code, and we want to generate all the necessary trampolines to put into the vtable automatically, we will use a macro. The invocation of that macro which generates the final type looks like this:

type_erase! {
    pub trait Object => DynObject {
        fn repr(&self) -> ObjectRepr;
        fn get_value(&self, key: &Value) -> Option<Value>;
        fn enumerate(&self) -> Enumerator;
    }
}

You can read this as “map trait Object into a DynObject smart pointer”. The actual macro has a few extra things (it also supports building the necessary vtable entries for fmt::Debug and other traits) but let's focus on the simple pieces. This macro generates some pretty wild output.

I cleaned it up and added some comments about what it does. Later I will show you the macro that generates it. First let's start with the definition of the fat pointer:

use std::sync::Arc;
use std::any::{type_name, TypeId};

pub struct DynObject {
    /// ptr points to the payload of the Arc<T>
    ptr: *const (),
    /// this points to our vtable. The actual type (`VTable`)
    /// is hidden in a local scope.
    vtable: *const (),
}

And this is the implementation of the vtable and the type:

// this is a trick that is useful for generated macros to hide a type
// at a local scope
const _: () = {
    /// This is the actual vtable.
    struct VTable {
        // regular trampolines
        repr: fn(*const ()) -> ObjectRepr,
        get_value: fn(*const (), key: &Value) -> Option<Value>,
        enumerate: fn(*const ()) -> Enumerator,
        // method to return the type ID of the internal type for casts
        __type_id: fn() -> TypeId,
        // method to return the type name of the internal type
        __type_name: fn() -> &'static str,
        // method used to drop the refcount by one
        __drop: fn(*const ()),
    }

    /// Utility function to return a reference to the real vtable.
    fn vt(e: &DynObject) -> &VTable {
        unsafe { &*(e.vtable as *const VTable) }
    }

    impl DynObject {
        /// Takes ownership of an Arc<T> and boxes it up.
        pub fn new<T: Object + 'static>(v: Arc<T>) -> Self {
            // "shrinks" an Arc into a raw pointer. This returns the
            // address of the payload it carries, just behind the
            // refcount.
            let ptr = Arc::into_raw(v) as *const T as *const ();
            let vtable = &VTable {
                // example trampoline that is generated for each method
                repr: |ptr| unsafe {
                    // before we reconstruct the Arc<T>, first ensure
                    // we have incremented the refcount for panic
                    // safety. If `repr()` panics, we will decref the
                    // arc on unwind.
                    Arc::<T>::increment_strong_count(ptr as *const T);
                    // now take ownership of the ptr
                    let arc = Arc::<T>::from_raw(ptr as *const T);
                    // and invoke the original method via the arc
                    <T as Object>::repr(&arc)
                },
                get_value: |ptr, key| unsafe {
                    Arc::<T>::increment_strong_count(ptr as *const T);
                    let arc = Arc::<T>::from_raw(ptr as *const T);
                    <T as Object>::get_value(&arc, key)
                },
                enumerate: |ptr| unsafe {
                    Arc::<T>::increment_strong_count(ptr as *const T);
                    let arc = Arc::<T>::from_raw(ptr as *const T);
                    <T as Object>::enumerate(&arc)
                },
                // these are pretty trivial, they are modelled after
                // rust's `Any` type.
                __type_id: || TypeId::of::<T>(),
                __type_name: || type_name::<T>(),
                // on drop take ownership of the pointer (decrements
                // refcount by one)
                __drop: |ptr| unsafe {
                    Arc::from_raw(ptr as *const T);
                },
            };
            Self {
                ptr,
                vtable: vtable as *const VTable as *const (),
            }
        }

        /// DynObject::repr() just calls via the vtable into the
        /// original type.
        pub fn repr(&self) -> ObjectRepr {
            (vt(self).repr)(self.ptr)
        }

        pub fn get_value(&self, key: &Value) -> Option<Value> {
            (vt(self).get_value)(self.ptr, key)
        }

        pub fn enumerate(&self) -> Enumerator {
            (vt(self).enumerate)(self.ptr)
        }
    }
};

At this point the object is functional, but it's kind of problematic because it does not yet have memory management, so we would just leak memory. So we need to add that:

/// Clone just increments the strong refcount of the Arc.
impl Clone for DynObject {
    fn clone(&self) -> Self {
        unsafe {
            Arc::increment_strong_count(self.ptr);
        }
        Self {
            ptr: self.ptr,
            vtable: self.vtable,
        }
    }
}

/// Drop decrements the refcount via a method in the vtable.
impl Drop for DynObject {
    fn drop(&mut self) {
        (vt(self).__drop)(self.ptr);
    }
}

Additionally, to make the object useful, we need to add support for downcasting, which is surprisingly easy at this point. If the type ID matches, we're good to cast:

impl DynObject {
    pub fn downcast_ref<T: 'static>(&self) -> Option<&T> {
        if (vt(self).__type_id)() == TypeId::of::<T>() {
            unsafe {
                return Some(&*(self.ptr as *const T));
            }
        }
        None
    }

    pub fn downcast<T: 'static>(&self) -> Option<Arc<T>> {
        if (vt(self).__type_id)() == TypeId::of::<T>() {
            unsafe {
                Arc::<T>::increment_strong_count(self.ptr as *const T);
                return Some(Arc::<T>::from_raw(self.ptr as *const T));
            }
        }
        None
    }

    pub fn type_name(&self) -> &'static str {
        (vt(self).__type_name)()
    }
}

The Macro

So now that we know what we want, we can use a Rust macro to generate this stuff for us. I will leave most of it undocumented given that you now know what it expands to. Here are just some notes to better understand what is going on:

  1. The const _: () = { ... } trick is useful because macros today cannot generate custom identifiers. Unlike C macros, where you can concatenate identifiers to create temporary names, that is unavailable in Rust. But you can use this trick to hide a type in a local scope, as we are doing with the VTable struct.
  2. Since we cannot prefix identifiers, there is a potential conflict between the trait's method names and the internal entries (__type_id etc.) in the vtable struct. To reduce the likelihood of collision, the internal names are prefixed with two underscores.
  3. All names are fully canonicalized (eg: std::sync::Arc instead of Arc) to make the macro work without having to bring types into scope.

The macro is surprisingly only a bit awful:

macro_rules! type_erase {
    ($v:vis trait $t:ident => $erased_t:ident {
        $(fn $f:ident(&self $(, $p:ident: $p_ty:ty $(,)?)*) $(-> $r:ty)?;)*
    }) => {
        $v struct $erased_t {
            ptr: *const (),
            vtable: *const (),
        }

        const _: () = {
            struct VTable {
                $($f: fn(*const (), $($p: $p_ty),*) $(-> $r)?,)*
                __type_id: fn() -> std::any::TypeId,
                __type_name: fn() -> &'static str,
                __drop: fn(*const ()),
            }

            fn vt(e: &$erased_t) -> &VTable {
                unsafe { &*(e.vtable as *const VTable) }
            }

            impl $erased_t {
                $v fn new<T: $t + 'static>(v: std::sync::Arc<T>) -> Self {
                    let ptr = std::sync::Arc::into_raw(v) as *const T as *const ();
                    let vtable = &VTable {
                        $(
                            $f: |ptr, $($p),*| unsafe {
                                std::sync::Arc::<T>::increment_strong_count(ptr as *const T);
                                let arc = std::sync::Arc::<T>::from_raw(ptr as *const T);
                                <T as $t>::$f(&arc, $($p),*)
                            },
                        )*
                        __type_id: || std::any::TypeId::of::<T>(),
                        __type_name: || std::any::type_name::<T>(),
                        __drop: |ptr| unsafe {
                            std::sync::Arc::from_raw(ptr as *const T);
                        },
                    };
                    Self {
                        ptr,
                        vtable: vtable as *const VTable as *const (),
                    }
                }

                $(
                    $v fn $f(&self, $($p: $p_ty),*) $(-> $r)? {
                        (vt(self).$f)(self.ptr, $($p),*)
                    }
                )*

                $v fn type_name(&self) -> &'static str {
                    (vt(self).__type_name)()
                }

                $v fn downcast_ref<T: 'static>(&self) -> Option<&T> {
                    if (vt(self).__type_id)() == std::any::TypeId::of::<T>() {
                        unsafe {
                            return Some(&*(self.ptr as *const T));
                        }
                    }
                    None
                }

                $v fn downcast<T: 'static>(&self) -> Option<std::sync::Arc<T>> {
                    if (vt(self).__type_id)() == std::any::TypeId::of::<T>() {
                        unsafe {
                            std::sync::Arc::<T>::increment_strong_count(self.ptr as *const T);
                            return Some(std::sync::Arc::<T>::from_raw(self.ptr as *const T));
                        }
                    }
                    None
                }
            }

            impl Clone for $erased_t {
                fn clone(&self) -> Self {
                    unsafe {
                        std::sync::Arc::increment_strong_count(self.ptr);
                    }
                    Self {
                        ptr: self.ptr,
                        vtable: self.vtable,
                    }
                }
            }

            impl Drop for $erased_t {
                fn drop(&mut self) {
                    (vt(self).__drop)(self.ptr);
                }
            }
        };
    };
}

The full macro that is in MiniJinja is a bit more feature rich. It also generates documentation and implementations for other traits. If you want to see the full one look here: type_erase.rs.

Putting it Together

So now that we have this DynObject internally it's trivially possible to use it in the internals of our value type:

#[derive(Clone)]
pub(crate) enum ValueRepr {
    Undefined,
    Bool(bool),
    U64(u64),
    I64(i64),
    F64(f64),
    None,
    String(Arc<str>, StringType),
    Bytes(Arc<Vec<u8>>),
    Object(DynObject),
}

#[derive(Clone)]
pub struct Value(pub(crate) ValueRepr);

And make the downcasting and construction of such types directly available:

impl Value {
    pub fn from_object<T: Object + Send + Sync + 'static>(value: T) -> Value {
        Value::from(ValueRepr::Object(DynObject::new(Arc::new(value))))
    }

    pub fn downcast_object_ref<T: 'static>(&self) -> Option<&T> {
        match self.0 {
            ValueRepr::Object(ref o) => o.downcast_ref(),
            _ => None,
        }
    }

    pub fn downcast_object<T: 'static>(&self) -> Option<Arc<T>> {
        match self.0 {
            ValueRepr::Object(ref o) => o.downcast(),
            _ => None,
        }
    }
}

What do we learn from this? Not sure. I at least learned that just because Rust tells you that you cannot make something into an object does not mean you actually can't. It just requires some creativity and the willingness to use unsafe code. Another thing is that this yet again makes a pretty good argument in favor of compile-time introspection. Zig programmers will laugh / cry about this, since comptime is a much more powerful system for making something like this work compared to the ridiculous macro abuse necessary in Rust.

Anyways. Maybe this is useful to you.

Categories: FLOSS Project Planets

PreviousNext: Starshot and Experience Builder

Planet Drupal - Wed, 2024-05-15 16:13

Last week, I attended DrupalCon Portland 2024, and, like many others, I was swept up in the excitement of the Starshot announcement. The PreviousNext team is ready to support this initiative, focusing our efforts on the Experience Builder project for maximum impact.

by kim.pepper / 16 May 2024

Starshot

Starshot is a new concept that accelerates Drupal innovation by providing recipes or templates of best-practice features and configurations when creating a new Drupal site. It’s a separate product built on top of Drupal Core and has the working title “Drupal CMS”.

For years, we’ve pondered the question, “Is Drupal a product or a framework?” The answer has always been “both.” However, we can now clearly distinguish between the two.

We’re fully committed to the vision of bringing Drupal to new audiences by offering a straightforward way to create new Drupal sites using best-practice contributed modules and configuration. Combining Recipes with Project Browser, Automated Updates, and the new Experience Builder initiative will demonstrate Drupal’s full potential for product evaluators.

Releases for Drupal CMS will not be tied to Drupal Core, allowing it to innovate rapidly and evolve as contributed module updates and new best practices emerge. Drupal Core can simultaneously focus on maintaining quality and stability.

Experience Builder

Experience Builder is an ambitious initiative to reinvent how we build pages (experiences) in Drupal. Core committer Lauri Eskola undertook an extensive review of our own tools (Layout Builder, Paragraphs) and research into competing products to find a model that would best combine innovative user interface design with Drupal’s strengths in structured data.

Our team is in a strong and unique position to meaningfully contribute to the Experience Builder initiative. We have successfully delivered the Pitchburgh competition winner Decoupled Layout Builder prototype. We also provided numerous contributions to Layout Builder in core and contributed modules.

Experience Builder will become our primary contribution focus for the short and medium term, so watch this space.

We hope you are as excited as we are about the future of Drupal. We’re just getting started!

Categories: FLOSS Project Planets

Five Jars: Drupal Starshot: Reflections on DrupalCon Portland 2024

Planet Drupal - Wed, 2024-05-15 12:05
The opening day of DrupalCon Portland 2024 was met as usual with the keynote by Dries Buytaert, addressing a not-so-usual announcement.
Categories: FLOSS Project Planets

Five Jars: The Driesnote 2024 at DrupalCon Portland

Planet Drupal - Wed, 2024-05-15 12:05
The opening day of DrupalCon Portland 2024 was met as usual with the keynote by Dries Buytaert, addressing a not-so-usual announcement.
Categories: FLOSS Project Planets

Mike Driscoll: An Intro to Logging with Python and Loguru

Planet Python - Wed, 2024-05-15 10:08

Python’s logging module isn’t the only way to create logs. There are several third-party packages you can use, too. One of the most popular is Loguru. Loguru intends to remove all the boilerplate you get with the Python logging API.

You will find that Loguru greatly simplifies creating logs in Python.

This chapter has the following sections:

  • Installation
  • Logging made simple
  • Handlers and formatting
  • Catching exceptions
  • Terminal logging with color
  • Easy log rotation

Let’s find out how much easier Loguru makes logging in Python!

Installation

Before you can start with Loguru, you will need to install it. After all, the Loguru package doesn’t come with Python.

Fortunately, installing Loguru is easy with pip. Open up your terminal and run the following command:

python -m pip install loguru

Pip will install Loguru and any dependencies it might have for you. You will have a working package installed if you see no errors.

Now let’s start logging!

Logging Made Simple

Logging with Loguru can be done in two lines of code. Loguru is really that simple!

Don’t believe it? Then open up your Python IDE or REPL and add the following code:

# hello.py
from loguru import logger

logger.debug("Hello from loguru!")
logger.info("Informed from loguru!")

One import is all you need. Then, you can immediately start logging! By default, the log will go to stderr.

Here’s what the output looks like in the terminal:

2024-05-07 14:34:28.663 | DEBUG    | __main__:<module>:5 - Hello from loguru!
2024-05-07 14:34:28.664 | INFO     | __main__:<module>:6 - Informed from loguru!

Pretty neat! Now, let’s find out how to change the handler and add formatting to your output.

Handlers and Formatting

Loguru doesn’t think of handlers the way the Python logging module does. Instead, you use the concept of sinks. The sink tells Loguru how to handle an incoming log message and write it somewhere.

Sinks can take lots of different forms:

  • A file-like object, such as sys.stderr or a file handle
  • A file path as a string or pathlib.Path
  • A callable, such as a simple function
  • An asynchronous coroutine function that you define using async def
  • A built-in logging.Handler. If you use these, the Loguru records convert to logging records automatically

To see how this works, create a new file called file_formatting.py in your Python IDE. Then add the following code:

# file_formatting.py
from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("formatted.log", format=fmt, level="INFO")

logger.debug("This is a debug message")
logger.info("This is an informational message")

If you want to change where the logs go, use the add() method. Note that this adds a new sink, which, in this case, is a file. The logger will still log to stderr, too, as that is the default, and you are adding to the handler list. If you want to remove the default sink, call logger.remove() before you call add().
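To make that concrete, here is a minimal sketch that swaps the default sink for a custom one (the WARNING threshold is an arbitrary choice for illustration):

# replace_default_sink.py
import sys

from loguru import logger

logger.remove()  # drop the default stderr sink
logger.add(sys.stderr, level="WARNING")  # re-add stderr, but only for warnings and up

logger.info("This is filtered out")
logger.warning("This still shows up")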

When you call add(), you can pass in several different arguments:

  • sink – Where to send the log messages
  • level – The logging level
  • format – How to format the log messages
  • filter – A logging filter

There are several more, but those are the ones you would use the most. If you want to know more about add(), you should check out the documentation.
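As a rough sketch of how these arguments combine, note that filter can also be a callable that receives the log record and returns whether to keep it (the "audit" key below is a made-up example):

# filter_example.py
import sys

from loguru import logger

logger.remove()  # drop the default sink so only the filtered one remains

# only pass records that carry extra={"audit": True}
logger.add(
    sys.stderr,
    level="INFO",
    format="{time} | {message}",
    filter=lambda record: record["extra"].get("audit", False),
)

logger.bind(audit=True).info("This one is kept")
logger.info("This one is filtered out")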

You might have noticed that the formatting of the log records is a little different than what you saw in Python’s own logging module.

Here is a listing of the formatting directives you can use for Loguru:

  • elapsed – The time elapsed since the app started
  • exception – The formatted exception, if there was one
  • extra – The dict of attributes that the user bound
  • file – The name of the file where the logging call came from
  • function – The function where the logging call came from
  • level – The logging level
  • line – The line number in the source code
  • message – The unformatted logged message
  • module – The module that the logging call was made from
  • name – The __name__ where the logging call came from
  • process – The process in which the logging call was made
  • thread – The thread in which the logging call was made
  • time – The aware local time when the logging call was made

You can also change the time formatting in the logs. In this case, you would use a subset of the formatting from the Pendulum package. For example, if you wanted to make the time exclude the date, you would use this: {time:HH:mm:ss} rather than simply {time}, which you see in the code example above.
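For instance, here is a minimal sketch of a date-free timestamp (the file name time_only.log is arbitrary):

# time_formatting.py
from loguru import logger

fmt = "{time:HH:mm:ss} - {name} - {level} - {message}"

logger.add("time_only.log", format=fmt, level="INFO")
logger.info("Timestamped without the date")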

See the documentation for details on formatting time and messages.

When you run the code example, you will see something similar to the following in your log file:

2024-05-07T14:35:06.553342-0500 - __main__ - INFO - This is an informational message

You will also see log messages sent to your terminal in the same format as you saw in the first code example.

Now, you’re ready to move on and learn about catching exceptions with Loguru.

Catching Exceptions

Catching exceptions with Loguru is done by using a decorator. You may remember that when you use Python’s own logging module, you use logger.exception in the except portion of a try/except statement to record the exception’s traceback to your log file.
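As a quick refresher, the standard library version of that pattern looks roughly like this:

# stdlib_exception.py
import logging

logging.basicConfig(filename="app.log")

try:
    1 / 0
except ZeroDivisionError:
    # records the message at ERROR level along with the full traceback
    logging.exception("Division failed")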

When you use Loguru, you use the @logger.catch decorator on the function that contains code that may raise an exception.

Open up your Python IDE and create a new file named catching_exceptions.py. Then enter the following code:

# catching_exceptions.py
from loguru import logger


@logger.catch
def silly_function(x, y, z):
    return 1 / (x + y + z)


def main():
    fmt = "{time:HH:mm:ss} - {name} - {level} - {message}"
    logger.add("exception.log", format=fmt, level="INFO")
    logger.info("Application starting")
    silly_function(0, 0, 0)
    logger.info("Finished!")


if __name__ == "__main__":
    main()

According to Loguru’s documentation, the @logger.catch decorator will catch regular exceptions and also works in applications with multiple threads. For this example, you add another file handler on top of the stream handler and start logging.

Then you call silly_function() with a bunch of zeroes, which causes a ZeroDivisionError exception.

Loguru prints a nicely annotated, colorized traceback to the terminal. If you open up exception.log, you will see that the contents are a little different: the timestamp follows your custom format, and the annotated lines that show which arguments were passed to silly_function() don't translate quite as well to a plain file:

14:38:30 - __main__ - INFO - Application starting
14:38:30 - __main__ - ERROR - An error has been caught in function 'main',
process 'MainProcess' (8920), thread 'MainThread' (22316):
Traceback (most recent call last):

  File "C:\books\11_loguru\catching_exceptions.py", line 17, in <module>
    main()
    └ <function main at 0x00000253B01AB7E0>

> File "C:\books\11_loguru\catching_exceptions.py", line 13, in main
    silly_function(0, 0, 0)
    └ <function silly_function at 0x00000253ADE6D440>

  File "C:\books\11_loguru\catching_exceptions.py", line 7, in silly_function
    return 1 / (x + y + z)
               │   │   └ 0
               │   └ 0
               └ 0

ZeroDivisionError: division by zero
14:38:30 - __main__ - INFO - Finished!

On the whole, using the @logger.catch decorator is a nice way to catch exceptions.

Now, you’re ready to move on and learn about changing the color of your logs in the terminal.

Terminal Logging with Color

Loguru will print out logs in color in the terminal by default if the terminal supports color. Colorful logs can make reading through the logs easier as you can highlight warnings and exceptions with unique colors.

You can use markup tags to add specific colors to any formatter string. You can also apply bold and underline to the tags.

Open up your Python IDE and create a new file called terminal_formatting.py. After saving the file, enter the following code into it:

# terminal_formatting.py
import sys

from loguru import logger

fmt = ("<red>{time}</red> - "
       "<yellow>{name}</yellow> - "
       "{level} - {message}")

logger.add(sys.stdout, format=fmt, level="DEBUG")

logger.debug("This is a debug message")
logger.info("This is an informational message")

You create a special format that sets the “time” portion to red and the “name” to yellow. Then, you add() that format to the logger. You now have two sinks: the default handler, which logs to stderr, and the new sink, which logs to stdout. Having both lets you compare the default colors to your custom ones.

Go ahead and run the code. Each message appears twice, once per sink, with the time shown in red and the logger name in yellow in your custom format. Neat! Now spend a few moments studying the documentation and trying out some of the other colors. For example, you can use hex and RGB colors and a handful of named colors.
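As a sketch of what else the markup can express (these tags follow Loguru's documented markup syntax, but the combination here is just an illustration):

# more_markup.py
import sys

from loguru import logger

fmt = (
    "<green>{time:HH:mm:ss}</green> | "
    "<level>{level}</level> | "  # <level> picks the color based on severity
    "<bold>{message}</bold>"
)

logger.add(sys.stdout, format=fmt, level="DEBUG")
logger.info("Bold message with level-based coloring")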

The last section you will look at is how to do log rotation with Loguru!

Easy Log Rotation

Loguru makes log rotation easy. You don’t need to import any special handlers. Instead, you only need to specify the rotation argument when you call add().

Here are a few examples:

  • logger.add("file.log", rotation="100 MB")
  • logger.add("file.log", rotation="12:00")
  • logger.add("file.log", rotation="1 week")

These examples rotate the log when it reaches 100 megabytes, at noon every day, or once a week, respectively.

Open up your Python IDE so you can create a full-fledged example. Name the file log_rotation.py and add the following code:

# log_rotation.py
from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("rotated.log", format=fmt, level="DEBUG", rotation="50 B")

logger.debug("This is a debug message")
logger.info("This is an informational message")

Here, you set up a log format, set the level to DEBUG, and set the rotation to every 50 bytes. When you run this code, you will get a couple of log files. Loguru will add a timestamp to the file’s name when it rotates the log.

What if you want to add compression? You don’t need to override the rotator like you did with Python’s logging module. Instead, you can turn on compression using the compression argument.

Create a new Python script called log_rotation_compression.py and add this code for a fully working example:

# log_rotation_compression.py
from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("compressed.log", format=fmt, level="DEBUG",
           rotation="50 B", compression="zip")

logger.debug("This is a debug message")
logger.info("This is an informational message")

for i in range(10):
    logger.info(f"Log message {i}")

The new file is automatically compressed in the zip format when the log rotates. There is also a retention argument that you can use with add() to tell Loguru to clean the logs after so many days:

logger.add("file.log", rotation="100 MB", retention="5 days")

If you were to add this code, the logs that were more than five days old would get cleaned up automatically by Loguru!

Wrapping Up

The Loguru package makes logging much easier than Python’s logging library. It removes the boilerplate needed to create and format logs.

In this chapter, you learned about the following:

  • Installation
  • Logging made simple
  • Handlers and formatting
  • Catching exceptions
  • Terminal logging with color
  • Easy log rotation

Loguru can do much more than what is covered here, though. You can serialize your logs to JSON or contextualize your logger messages. Loguru also allows you to add lazy evaluation to your logs to prevent them from affecting performance in production. Loguru also makes adding custom log levels very easy. For full details about all the things Loguru can do, you should consult Loguru’s website.
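Here is a small taste of those extras as a minimal sketch (the file name and the expensive_summary() helper are made up for illustration):

# loguru_extras.py
from loguru import logger


def expensive_summary():
    # stands in for a costly computation you only want when DEBUG is enabled
    return sum(range(1_000_000))


logger.add("records.json", serialize=True)  # write each record as a JSON line
logger.bind(user="alice").info("Contextualized message")  # attach extra context
logger.opt(lazy=True).debug("Summary: {s}", s=expensive_summary)  # evaluated only if needed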

The post An Intro to Logging with Python and Loguru appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Real Python: Python's Built-in Exceptions: A Walkthrough With Examples

Planet Python - Wed, 2024-05-15 10:00

Python has a complete set of built-in exceptions that provide a quick and efficient way to handle errors and exceptional situations that may happen in your code. Knowing the most commonly used built-in exceptions is key for you as a Python developer. This knowledge will help you debug code because each exception has a specific meaning that can shed light on your debugging process.

You’ll also be able to handle and raise most of the built-in exceptions in your Python code, which is a great way to deal with errors and exceptional situations without having to create your own custom exceptions.

In this tutorial, you’ll:

  • Learn what errors and exceptions are in Python
  • Understand how Python organizes the built-in exceptions in a class hierarchy
  • Explore the most commonly used built-in exceptions
  • Learn how to handle and raise built-in exceptions in your code

To smoothly walk through this tutorial, you should be familiar with some core concepts in Python. These concepts include Python classes, class hierarchies, exceptions, try … except blocks, and the raise statement.

Get Your Code: Click here to download the free sample code that you’ll use to learn about Python’s built-in exceptions.

Errors and Exceptions in Python

Errors and exceptions are important concepts in programming, and you’ll probably spend a considerable amount of time dealing with them in your programming career. Errors are concrete conditions, such as syntax and logical errors, that make your code work incorrectly or even crash.

Often, you can fix errors by updating or modifying the code, installing a new version of a dependency, checking the code’s logic, and so on.

For example, say you need to make sure that a given string has a certain number of characters. In this case, you can use the built-in len() function:

>>> len("Pythonista") = 10
  File "<input>", line 1
    ...
SyntaxError: cannot assign to function call here. Maybe you meant '==' instead of '='?

In this example, you use the wrong operator. Instead of using the equality comparison operator, you use the assignment operator. This code raises a SyntaxError, which represents a syntax error as its name describes.

Note: In the above code, you’ll note how nicely the error message suggests a possible solution for correcting the code. Starting in version 3.10, the Python core developers have put a lot of effort into improving the error messages to make them more friendly and useful for debugging.

To fix the error, you need to localize the affected code and correct the syntax. This action will remove the error:

>>> len("Pythonista") == 10
True

Now the code works correctly, and the SyntaxError is gone. So, your code won’t break, and your program will continue its normal execution.

There’s something to learn from the above example. You can fix errors, but you can’t handle them. In other words, if you have a syntax error like the one in the example, then you won’t be able to handle that error and make the code run. You need to correct the syntax.

On the other hand, exceptions are events that interrupt the execution of a program. As their name suggests, exceptions occur in exceptional situations that should or shouldn’t happen. So, to prevent your program from crashing after an exception, you must handle the exception with the appropriate exception-handling mechanism.

To better understand exceptions, say that you have a Python expression like a + b. This expression will work if a and b are both strings or numbers:

>>> a = 4
>>> b = 3
>>> a + b
7

In this example, the code works correctly because a and b are both numbers. However, the expression raises an exception if a and b are of types that can’t be added together:

>>> a = "4"
>>> b = 3
>>> a + b
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    a + b
    ~~^~~
TypeError: can only concatenate str (not "int") to str

Because a is a string and b is a number, your code fails with a TypeError exception. Since there is no way to add text and numbers, your code has faced an exceptional situation.
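Unlike the syntax error earlier, this exception can be handled at runtime. Here is a minimal sketch that recovers by converting the string first:

>>> a = "4"
>>> b = 3
>>> try:
...     result = a + b
... except TypeError:
...     result = int(a) + b
...
>>> result
7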

Python uses classes to represent exceptions and errors. These classes are generically known as exceptions, regardless of whether a concrete class represents an exception or an error. Exception classes give you information about exceptional situations and also about errors detected during the program’s execution.

The first example in this section shows a syntax error in action. The SyntaxError class represents an error, but it’s implemented as a Python exception. This can be confusing, but Python does use exception classes for both errors and exceptions.
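You can verify this in the REPL. SyntaxError is an ordinary class that lives inside the built-in exception hierarchy:

>>> issubclass(SyntaxError, Exception)
True
>>> SyntaxError.__mro__
(<class 'SyntaxError'>, <class 'Exception'>, <class 'BaseException'>, <class 'object'>)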

Read the full article at https://realpython.com/python-built-in-exceptions/ »


Categories: FLOSS Project Planets

Real Python: How to Get the Most Out of PyCon US

Planet Python - Wed, 2024-05-15 10:00

Congratulations! You’re going to PyCon US!

Whether this is your first time or not, going to a conference full of people who love the same thing as you is always a fun experience. There’s so much more to PyCon than just a bunch of people talking about the Python language, and that can be intimidating for first-time attendees. This guide will help you navigate all there is to see and do at PyCon.

PyCon US is the biggest conference centered around the Python language. Originally launched in 2003, this conference has grown exponentially and has even spawned several other PyCons and workshops around the world.

Everyone who attends PyCon will have a different experience, and that’s what makes the conference truly unique. This guide is meant to help you, but you don’t need to follow it strictly.

By the end of this article, you’ll know:

  • How PyCon consists of tutorials, conference, and sprints
  • What to do before you go
  • What to do during PyCon
  • What to do after the event
  • How to have a great PyCon

This guide will have links that are specific to PyCon 2024, but it should be useful for future PyCons as well.

Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.

What PyCon Involves

Before considering how to get the most out of PyCon, it’s important to first understand what PyCon involves.

PyCon is broken up into three stages:

  1. Tutorials: PyCon starts with two days of three-hour workshops, during which you get to learn in depth with instructors. These are great to go to since the class sizes are small, and you can ask questions of the instructors. You should consider going to at least one of these if you can, but they do have an additional cost of $150 per tutorial.

  2. Conference: Next, PyCon offers three days of talks. Each presentation lasts for thirty to forty-five minutes, and there are about five talks going on at a time, including a Spanish language charlas track. But that’s not all: there are also open spaces, sponsors, posters, lightning talks, dinners, and so much more.

  3. Sprints: During this stage, you can take what you’ve learned and apply it! This is a four-day exercise where people group up to work on various open-source projects related to Python. If you’ve got the time, going to one or more sprint days is a great way to practice what you’ve learned, become associated with an open-source project, and network with other smart and talented people. Learn more about sprints in this blog post from an earlier year.

Since most PyCon attendees go to the conference part, that’ll be the focus of this article. However, don’t let that deter you from attending the tutorials or sprints if you can!

You may even learn more technical skills by attending the tutorials rather than listening to the talks. The sprints are great for networking and applying the skills that you’ve already got, as well as learning new ones from the people you’ll be working with.

What to Do Before You Go

In general, the more prepared you are for something, the better your experience will be. The same applies to PyCon.

It’s really helpful to plan and prepare ahead of time, which you’re already doing just by reading this article!

Look through the talk schedule and see which talks sound most interesting to you. This doesn’t mean you need to plan out all of the talks that you’re going to see, in every slot possible. But it helps to get an idea of which topics are going to be presented so that you can decide what you’re most interested in.

Getting the PyCon US mobile app will help you plan your schedule. This app lets you view the schedule for the talks and add reminders for the ones that you want to attend. If you’re having a hard time picking which talks to go to, you can come prepared with a question or problem that you need to solve. Doing this can help you focus on the topics that are important to you.

If you can, come a day early to check in, since the line to check in on the first day is always long. That evening, there’s also an opening reception, so you can meet other attendees and speakers, as well as get a chance to check out the various sponsors and their booths.

If you’re brand-new to PyCon, the Newcomer Orientation can help you get caught up on what the conference involves and how you can participate.

Read the full article at https://realpython.com/pycon-guide/ »


Categories: FLOSS Project Planets

Real Python: Quiz: What Are CRUD Operations?

Planet Python - Wed, 2024-05-15 08:00

In this quiz, you’ll test your understanding of CRUD Operations.

By working through this quiz, you’ll revisit the key concepts and techniques related to CRUD operations. Good luck!


Categories: FLOSS Project Planets

PyCon: PyCon US 2024 Sprints will be here before you know it!

Planet Python - Wed, 2024-05-15 06:03

The Development Sprints are coming soon. Make sure you plan ahead:

When: Sprints will take place on May 20, 2024 8:00am through May 23, 2024 11:00pm EST

Where: At PyCon US at the David L. Lawrence Convention Center in rooms 308-311 and 315-321

Project Signups: Get your project listed so that attendees can help support it by signing up here: Submit Sprint Project

What are Sprints?

PyCon Development Sprints are up to four days of intensive learning and development on an open source project(s) of your choice, in a team environment. It's a time to come together with colleagues, old and new, to share what you've learned and apply it to an open source project.

It's a time to test, fix bugs, add new features, and improve documentation. And it's a time to network, make friends, and build relationships that go beyond the conference.

PyCon US provides the opportunity and infrastructure; you bring your skills, humanity, and brainpower (oh! and don't forget your computer).

For those who have never attended a development sprint before or want to brush up on the basics, there will be an Introduction to Sprinting Workshop that will guide you through the basics of git, GitHub, and what to expect at a sprint. The workshop takes place in Room 402 on Sunday, May 19th from 5:30pm - 8:30pm EST.

Who can participate?

You! All experience levels are welcome; sprints are a great opportunity to get connected with, and start contributing to, your favorite Python project. Participation in the sprints is free and included in your conference registration. Please go to your attendee profile on your dashboard and indicate the number of sprint days you will be attending.

Mentors: we are always looking for mentors to help new sprinters get up and running. Reach out to the sprint organizers for more info.

Which Projects are Sprinting?

Project Leads: Any Python project can sign up and invite sprinters to contribute to it. If you would like your project to be included, add your project to the list. Attendees: check here to see which projects have signed up so far.

Thanks to our sponsors and support team!

Have questions? Reach out to pycon-sprints@python.org

Categories: FLOSS Project Planets

Evgeni Golov: Using HPONCFG on CentOS Stream 9 with OpenSSL 3.2

Planet Debian - Wed, 2024-05-15 05:14

Today I've updated an HPE ProLiant DL325 G10 from CentOS Stream 8 to CentOS Stream 9 (details on that to follow) and realized that hponcfg was broken afterwards.

As I do not have a support contract with HPE, I couldn't just yell at them in private, so I am doing this in public now ;-)

# hponcfg
HPE Lights-Out Online Configuration utility
Version 5.6.0 Date 11/30/2020 (c) 2005,2020 Hewlett Packard Enterprise Development LP

Error: Unable to locate SSL library.
       Install latest SSL library to use HPONCFG.

Welp, what the heck?

But wait, 5.6.0 from 2020 looks old, let's update this first!

hponcfg is part of the "Management Component Pack" (at least if you're not running RHEL or SLES where you get it via the "Service Pack for ProLiant" which requires a support contract) and can be downloaded from the Software Delivery Repository.

The Software Delivery Repository tells you to configure it in /etc/yum.repos.d/mcp.repo as

[mcp]
name=Management Component Pack
baseurl=http://downloads.linux.hpe.com/repo/mcp/dist/dist_ver/arch/project_ver
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-mcp

gpgcheck=0? Suuure! Plain HTTP? Suuure!

But it gets better! When you look at https://downloads.linux.hpe.com/repo/mcp/centos/ (you have to substitute dist with your distribution!) you'll see that there is no 9 folder and thus no packages for CentOS (Stream) 9. There are however folders for Oracle, Rocky and Alma. Phew. Let's take one of these!

[mcp]
name=Management Component Pack
baseurl=https://downloads.linux.hpe.com/repo/mcp/rocky/9/x86_64/current/
enabled=1
gpgcheck=1
gpgkey=https://downloads.linux.hpe.com/repo/mcp/GPG-KEY-mcp

dnf upgrade hponcfg updates it to hponcfg-6.0.0-0.x86_64 and:

# hponcfg
HPE Lights-Out Online Configuration utility
Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP

Error: Unable to locate SSL library.
       Install latest SSL library to use HPONCFG.

Fuck.

ldd doesn't show hponcfg being linked to libssl, do they dlopen() at runtime and fucked something up? ltrace to the rescue!

# ltrace hponcfg
…
popen("strings /bin/openssl | grep 'Ope"..., "r")    = 0x621700
fgets("OpenSSL 3.2.1 30 Jan 2024\n", 256, 0x621700) = 0x7ffd870e2e10
strstr("OpenSSL 3.2.1 30 Jan 2024\n", "OpenSSL 3.0") = nil
…

WAT?

They run strings /bin/openssl |grep 'OpenSSL' and compare the result with "OpenSSL 3.0"?!

Sure, OpenSSL 3.2 in EL9 is rather fresh and didn't hit RHEL/Oracle/Alma/Rocky yet, but surely there are better ways to check for a compatible version of OpenSSL than THIS?!
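(Not that hponcfg would use Python, but just to illustrate how easy asking the linked library directly is, compared to grepping strings out of a binary; a sketch:)

# ask the SSL library itself instead of string-grepping /bin/openssl
import ssl
print(ssl.OPENSSL_VERSION)       # e.g. 'OpenSSL 3.2.1 30 Jan 2024'
print(ssl.OPENSSL_VERSION_INFO)  # a tuple you can compare numerically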

Anyway, I am not going to downgrade my OpenSSL. Neither will I patch it to pretend to be 3.0.

But I can patch the hponcfg binary!

# vim /sbin/hponcfg
<go to line 146>
<replace 3.0 with 3.2>
:x

Yes, I used vim. Yes, it works. No, I won't guarantee this won't kill a kitten somewhere.

# ./hponcfg
HPE Lights-Out Online Configuration utility
Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP
Firmware Revision = 2.44 Device type = iLO 5 Driver name = hpilo

USAGE:
  hponcfg -?
  hponcfg -h
  hponcfg -m minFw
  hponcfg -r [-m minFw] [-u username] [-p password]
  hponcfg -b [-m minFw] [-u username] [-p password]
  hponcfg [-a] -w filename [-m minFw] [-u username] [-p password]
  hponcfg -g [-m minFw] [-u username] [-p password]
  hponcfg -f filename [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]
  hponcfg -i [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]

  -h,  --help           Display this message
  -?                    Display this message
  -r,  --reset          Reset the Management Processor to factory defaults
  -b,  --reboot         Reboot Management Processor without changing any setting
  -f,  --file           Get/Set Management Processor configuration from "filename"
  -i,  --input          Get/Set Management Processor configuration from the XML input
                        received through the standard input stream.
  -w,  --writeconfig    Write the Management Processor configuration to "filename"
  -a,  --all            Capture complete Management Processor configuration to the file.
                        This should be used along with '-w' option
  -l,  --log            Log replies to "filename"
  -v,  --xmlverbose     Display all the responses from Management Processor
  -s,  --substitute     Substitute variables present in input config file
                        with values specified in "namevaluepairs"
  -g,  --get_hostinfo   Get the Host information
  -m,  --minfwlevel     Minimum firmware level
  -u,  --username       iLO Username
  -p,  --password       iLO Password

For comparison, here is the diff --text output:

# diff -u --text /sbin/hponcfg ./hponcfg
--- /sbin/hponcfg   2022-08-02 01:07:55.000000000 +0000
+++ ./hponcfg       2024-05-15 09:06:54.373121233 +0000
@@ -143,7 +143,7 @@
 helpget_hostinforesetwriteconfigallfileinputlogminfwlevelxmlverbosesubstitutetimeoutdbgverbosityrebootusernamepasswordlibpath%Ah*Ag7Ar=AwIAaMAfRAiXAl\AmgAvrAs}At�Ad�Ab�Au�Ap�Azhgrbaw:f:il:m:vs:t:d:z:u:p:tmpXMLinputFile%2d.xmlw+Error: Syntax Error - Invalid options present.
 =O@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@�M@�M@aQ@�M@aQ@�N@�M@�N@�P@aQ@aQ@�M@�M@aQ@aQ@LN@aQ@�M@�O@�M@�M@�M@�M@aQ@aQ@�M@<!----><LOGINUSER_LOGINPASSWORD<LOGIN USER_LOGIN="%s" PASSWORD="%s"ERROR: LOGIN tag is missing. >ERROR: LOGIN end tag is missing.
-strings | grep 'OpenSSL 1' | grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.0which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
+strings | grep 'OpenSSL 1' | grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.2which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
 Current version %s
 No response from command.

Pretty sure it won't apply like this with patch, but you get the idea.

And yes, double-giggles for the fact that the error message says "Install latest SSL library to use HPONCFG" and the issue is because I have the latest SSL library installed…

Categories: FLOSS Project Planets

Glyph Lefkowitz: How To PyCon

Planet Python - Wed, 2024-05-15 05:12

These tips are not the “right” way to do PyCon, but they are suggestions based on how I try to do PyCon. Consider them reminders to myself, an experienced long-time attendee, which you are welcome to overhear.

See Some Talks

The hallway track is awesome. But the best version of the hallway track is not just bumping into people and chatting; it’s the version where you’ve all recently seen the same thing, and thereby have a shared context of something to react to. If you aren’t going to talks, you aren’t going to get a good hallway track. Therefore: choose talks that interest you, attend them and pay close attention, then find people to talk to about them.

Given that you will want to see some of the talks, make sure that you have the schedule downloaded and available offline on your mobile device, or printed out on a piece of paper.

Make a list of the talks you think you want to see, but have that schedule with you in case you want to call an audible in the middle of the conference, switching to a different talk you didn’t notice based on some of those “hallway track” conversations.

Participate In Open Spaces

The name “hallway track” itself is antiquated, in a way which is relevant and important to modern conferences. It used to be that conferences were exclusively oriented around their scheduled talks; it was called the “hallway” track because the way to access it was to linger in the hallways, outside the official structure of the conference, and just talk to people.

However, at PyCon and many other conferences, this unofficial track is now much more of an integrated, official part of the program. In particular, open spaces are not only a more official hallway track; they are considerably better than the historical “hallway” experience, because these ad-hoc gatherings can be convened with a prepared topic and potentially a loose structure to facilitate productive discussion.

With open spaces, sessions can have an agenda and so conversations are easier to start. Rooms are provided, which is more useful than you might think; literally hanging out in a hallway is actually surprisingly disruptive to speakers and attendees at talks; us nerds tend to get pretty loud and can be quite audible even through a slightly-cracked door, so avail yourself of these rooms and don’t be a disruptive jerk outside somebody’s talk.

Consult the open space board, and put up your own proposed sessions. Post them as early as you can, to maximize the chance that they will get noticed. Post them on social media, using the conference's official hashtag, and ask any interested folks you bump into to help boost it.1

Remember that open spaces are not talks. If you want to give a mini-lecture on a topic and you can find interested folks you could do that, but the format lends itself to more peer-to-peer, roundtable-style interactions. Among other things, this means that, unlike proposing a talk, where you should be an expert on the topic that you are proposing, you can suggest open spaces where you are curious — but ignorant — about something, in the hopes that some experts will show up and you can listen to their discussion.

Be prepared for this to fail; there’s a lot going on and it’s always possible that nobody will notice your session. Again, maximize your chances by posting it as early as you can and promoting it, but be prepared to just have a free 30 minutes to check your email. Sometimes that’s just how it goes. The corollary here is to balance attending others’ spaces with proposing your own. After all, if someone else proposed a session, you know at least one other person is gonna be there.

Take Care of Your Body

Conferences can be surprisingly high-intensity physical activities. It’s not a marathon, but you will be walking quickly from one end of a large convention center to another, potentially somewhat anxiously.

Hydrate, hydrate, hydrate. Bring a water bottle, and have it with you at all times. It might be helpful to set repeating timers on your phone to drink water, since it can be easy to forget in the middle of engaging conversations. If you take advantage of the hallway track as much as you should, you will talk more than you expect; talking expels water from your body. All that aforementioned walking might make you sweat a bit more than you realize.

Hydrate.

More generally, pay attention to what you are eating and drinking. Conference food isn’t always the best, and in a new city you might be tempted to load up on big meals and junk food. You should enjoy yourself and experience the local cuisine, but do it intentionally. While you enjoy the local fare, do so in whatever moderation works best for you. Similarly for boozy night-time socializing. Nothing stings quite as much as missing a morning of talks because you’ve got a hangover or a migraine.

This is worth emphasizing because in the enthusiasm of an exciting conference experience, it’s easy to lose track and overdo it.

Meet Both New And Old Friends: Plan Your Socializing

A lot of the advice above is mostly for first-time or new-ish conferencegoers, but this one might be more useful for the old heads. As we build up a long-time clique of conference friends, it’s easy to get a bit insular and lose out on one of the bits of magic of such an event: meeting new folks and hearing new perspectives.

While open spaces can address this a little bit, there's one additional thing I've started doing in the last few years: dinners are for old friends, but lunches are for new ones. At least half of the days I'm there, I try to go to a new table with new folks that I haven't seen before, and strike up a conversation. I even have a little canned icebreaker prompt, which I would suggest to others as well, because it’s worked pretty nicely in past years: “what is one fun thing you have done with Python recently?”2.

Given that I have a pretty big crowd of old friends at these things, I actually tend to avoid old friends at lunch, since it’s so easy to get into multi-hour conversations, and meeting new folks in a big group can be intimidating. Lunches are the time I carve out to try and meet new folks.

I’ll See You There

I hope some of these tips were helpful, and I am looking forward to seeing some of you at PyCon US 2024!

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!

  1. In PyCon 2024's case, #PyConUS on Mastodon is probably the way to go. Note also that it is #PyConUS and not #pyconus, which is much less legible for users of screen-readers. 

  2. Obviously that is specific to this conference. At the O’Reilly Software Architecture conference, my prompt was “What is software architecture?” which had some really fascinating answers. 

Categories: FLOSS Project Planets

Electric Citizen: Big Changes Ahead for Drupal

Planet Drupal - Wed, 2024-05-15 04:05

Our team recently attended (and once again sponsored!) the DrupalCon North America conference in Portland, OR. 

This annual conference brings together the Drupal community, from the agencies who provide Drupal services to the industry clients who rely on it, along with contributors and open-source enthusiasts from around the world.

From my perspective on the exhibitors’ floor, working the booth, I didn’t see as many of the great individual sessions as I have in past years. But I did leave with some important takeaways from this year’s event, especially around some upcoming changes for Drupal. 

Categories: FLOSS Project Planets

Talk Python to Me: #462: Pandas and Beyond with Wes McKinney

Planet Python - Wed, 2024-05-15 04:00
This episode dives into some of the most important data science libraries from the Python space with one of its pioneers: Wes McKinney. He's the creator or co-creator of the pandas, Apache Arrow, and Ibis projects and an entrepreneur in this space.

Episode sponsors: Neo4j (https://talkpython.fm/neo4j-graphstuff), Mailtrap (https://talkpython.fm/mailtrap), Talk Python Courses (https://talkpython.fm/training)

Links from the show:
  Wes' website: https://wesmckinney.com
  pandas: https://pandas.pydata.org
  Apache Arrow: https://arrow.apache.org
  Ibis: https://ibis-project.org
  Python for Data Analysis, GroupBy summary: https://wesmckinney.com/book/data-aggregation.html#groupby-summary
  Polars: https://pola.rs
  Dask: https://www.dask.org
  sqlglot: https://sqlglot.com/sqlglot.html
  Pandoc: https://pandoc.org
  Quarto: https://quarto.org
  Evidence framework: https://evidence.dev
  PyScript: https://pyscript.net
  DuckDB: https://duckdb.org
  JupyterLite: https://jupyter.org/try-jupyter/lab/
  Djangonauts: https://djangonaut.space
  Watch this episode on YouTube: https://www.youtube.com/watch?v=iBe1-o8LYE4
  Episode transcripts: https://talkpython.fm/episodes/transcript/462/pandas-and-beyond-with-wes-mckinney

Stay in touch: subscribe on YouTube (https://talkpython.fm/youtube), follow Talk Python on Mastodon (https://fosstodon.org/web/@talkpython), follow Michael on Mastodon (https://fosstodon.org/web/@mkennedy)
Categories: FLOSS Project Planets

The Drop Times: Policy-Based Access in Core by Kristiaan Van den Eynde

Planet Drupal - Wed, 2024-05-15 02:13
Kristiaan Van den Eynde, Senior Drupal Developer at Factorial, has made substantial contributions to Drupal, including the widely used Group module and VariationCache. His project, Policy-Based Access in Core, introduced a dynamic system for managing permissions based on predefined policies. This initiative, set to debut in Drupal 10.3, promises enhanced flexibility and security. Kristiaan shares insights into his development process, the challenges faced, and the future of access control in Drupal.
Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10 using the Migrate API: Avoiding entity ID conflicts

Planet Drupal - Wed, 2024-05-15 01:54

By default, the Drupal 7 to 10 upgrade path preserves entity IDs. In the previous article, we explained that this would cause problems if content or configuration already exists in the destination Drupal 10 site. Let’s explore this further and evaluate ways to work around the issue.

Categories: FLOSS Project Planets

Debug Academy: How to create custom sorting logic for Drupal views

Planet Drupal - Wed, 2024-05-15 01:29

Drupal websites sometimes have a need to implement more advanced sorting logic than what's available out of the box.

One of our career-changing Drupal training course alumni asked me today how to handle this. After answering them, I decided to turn the answer into a blog post.

The views module creates dynamic queries for us based on the configuration options we select. The UI essentially allows us to use any field for sorting in ascending (smallest to largest) or descending (largest to smallest) order. This is extremely helpful and covers the vast majority of use cases - date sorting, alphabetical sorting, and numeric sorting are all supported - but we sometimes run into limitations when we have more complicated requirements.
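When the requirements outgrow the UI, one common escape hatch is to alter the query behind the view in a custom module. A hedged sketch (the module, view, and field names here are hypothetical):

<?php

use Drupal\views\Plugin\views\query\QueryPluginBase;
use Drupal\views\Plugin\views\query\Sql;
use Drupal\views\ViewExecutable;

/**
 * Implements hook_views_query_alter().
 *
 * Sketch: add a sort the views UI configuration does not offer.
 */
function mymodule_views_query_alter(ViewExecutable $view, QueryPluginBase $query) {
  if ($view->id() === 'content_list' && $query instanceof Sql) {
    // Add an ORDER BY on the sticky flag alongside the configured sorts.
    $query->addOrderBy('node_field_data', 'sticky', 'DESC');
  }
}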

Some examples of these scenarios include:

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RApiSerialize 0.1.3 on CRAN: Skipping XDR

Planet Debian - Tue, 2024-05-14 19:28

A new bug-fix release 0.1.3 of RApiSerialize got onto CRAN earlier today. This is the first release in well over a year, and it permits skipping the XDR serialization format, which is needed when transferring data between big- and little-endian machines but comes at a certain run-time cost that one can avoid on the (much more common) little-endian machines. This is a new option, and the old behavior remains the default; those who want to can now skip the step.

The RApiSerialize package is used by both my RcppRedis package and Travers’ excellent qs package. We also addressed the recent nag by CRAN concerning ‘NO_REMAP’.

Changes in version 0.1.3 (2024-05-13)
  • Add an xdr argument to disable XDR for an approx. threefold speed increase (Travers Ching and Dirk in #6)

  • Use R_NO_REMAP and Rf_* prefix for API calls

  • Minor continuous integration updates
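As a quick, hedged sketch of what the new option looks like in use, via the package's serializeToRaw() / unserializeFromRaw() pair (the exact argument position may differ from what is shown here):

library(RApiSerialize)
## Skipping XDR is safe when writer and reader share endianness,
## e.g. two little-endian x86_64 machines; xdr=TRUE keeps the old default.
fast <- serializeToRaw(mtcars, xdr = FALSE)
stopifnot(identical(unserializeFromRaw(fast), mtcars))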

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. are at the GitHub repository.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Evgeni Golov: Using Packit to build RPMs for projects that depend on or vendor your code

Planet Debian - Tue, 2024-05-14 16:12

I am a huge fan of Packit as it allows us to provide RPMs to our users and testers directly from a pull-request, thus massively tightening the feedback loop and involving people who otherwise might not be able to apply the changes (for whatever reason) and "quickly test" something out. It's also a great way to validate that a change actually builds in a production environment, where no unnecessary development and test dependencies are installed.

You can also run tests of the built packages on Testing Farm and automate pushing releases into Fedora/CentOS Stream, but this is neither a (plain) Packit advertisement post, nor is that functionality I can speak about with any real level of experience.

Adam recently asked why we don't have Packit builds for our Puppet modules, and my first answer was: "well, puppet-* doesn't produce a thing we ship directly, so nobody dared to do it".

My second answer was that I had blogged how to test a Puppet module PR with Packit, but I totally agree that the process was a tad cumbersome and could be improved.

Now some madman did it and we all get to hear his story! ;-)

What is the problem anyway?

The Foreman Installer is a bit of Ruby code1 that provides a CLI to puppet apply based on a set of Puppet modules. As the Puppet modules can also be used outside the installer and have their own lifecycle, they live in separate git repositories and their releases get uploaded to the Puppet Forge. Users however do not want to (and should not have to) install the modules themselves.

So we have to ship the modules inside the foreman-installer package. Packaging 25 modules for two packaging systems (we support Enterprise Linux and Debian/Ubuntu) seems like a lot of work. Especially if you consider that the main foreman-installer package would need to be rebuilt after each module change as it contains generated files based on the modules which are too expensive to generate at runtime.

So we can ship the modules inside the foreman-installer source release, thus vendoring those modules into the installer release.

To do so we use librarian-puppet with a Puppetfile and either a Puppetfile.lock for stable releases or by letting librarian-puppet fetch latest for nightly snapshots.

This works beautifully for changes that land in the development and release branches of our repositories - regardless if it's foreman-installer.git or any of the puppet-*.git ones. It also works nicely for pull-requests against foreman-installer.git.

But because the puppet-* repositories do not map to packages, we assumed it wouldn't work well for pull-requests against those.

How can we solve this?

Well, the "obvious" solution is to build the foreman-installer package via Packit also for pull-requests against the puppet-* repositories. However, as usual, the devil is in the details.

Packit by default clones the repository of the pull-request and tries to create a source tarball from that using git archive. As this might be too simple for many projects, one can define a custom create-archive action that runs after the pull-request has been cloned and produces the tarball instead. We already use that in the Packit configuration for foreman-installer to run the pkg:generate_source rake target which executes librarian-puppet for us.
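Boiled down, such an action looks roughly like this in a .packit.yaml (a minimal sketch with an invented package name; the real Foreman configuration appears in the diff further below):

specfile_path: example.spec
upstream_package_name: example
actions:
  create-archive:
    # Packit picks the tarball path up from this action's output
    - bundle install
    - bundle exec rake pkg:generate_source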

But now the pull-request is against one of the Puppet modules, so Packit will clone that, not the installer.

We gotta clone foreman-installer on our own. And then point librarian-puppet at the pull-request. Fun.

Cloning is relatively simple: call git clone (sorry, Packit/Copr infrastructure).

But the Puppet module pull-request? One can use :git => 'https://git.example.com/repo.git' in the Puppetfile to fetch a git repository. In fact, that's what we already do for our nightly snapshots. It also supports :ref => 'some_branch_or_tag_name', if the remote HEAD is not what you want.
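Such a Puppetfile entry looks roughly like this (module choice and ref picked for illustration):

# from the Puppet Forge, as usual
mod 'puppetlabs/stdlib'

# straight from a git repository instead
mod 'theforeman/pulpcore',
  :git => 'https://github.com/theforeman/puppet-pulpcore.git',
  :ref => 'master'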

My brain first went: "I know this! GitHub has these magic refs/pull/1/head and refs/pull/1/merge refs you can check out to get the contents of the pull-request without bothering to add a remote for its source." Well, this requires knowing the ID of the pull-request, and Packit does not expose that in the environment variables available during create-archive.
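For reference, outside of Packit that trick looks like this, with 1 standing in for the pull-request ID:

git fetch origin refs/pull/1/head
git checkout FETCH_HEAD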

Wait, but we already have a checkout. Can we just say :git => '../.git'? Cloning a .git folder is totally possible after all.

[Librarian]     --> fatal: repository '../.git' does not exist
Could not checkout ../.git: fatal: repository '../.git' does not exist

Seems librarian disagrees. Damn. (Yes, I checked, the path exists.)

💡 does it maybe just not like relative paths?! Yepp, using an absolute path absolutely works!

For some reason it ends up checking out the default HEAD of the "real" (GitHub) remote, not of ../. Luckily this can be fixed by explicitly passing :ref => 'origin/HEAD', which resolves to the branch Packit created for the pull-request.

Now we just need to put all of that together and remember to execute all commands from inside the foreman-installer checkout as that is where all our vendoring recipes etc live.

Putting it all together

Let's look at the diff between the .packit.yaml for foreman-installer and the one I've proposed for puppet-pulpcore:

--- a/foreman-installer/.packit.yaml    2024-05-14 21:45:26.545260798 +0200
+++ b/puppet-pulpcore/.packit.yaml      2024-05-14 21:44:47.834162418 +0200
@@ -18,13 +18,15 @@
 actions:
   post-upstream-clone:
     - "wget https://raw.githubusercontent.com/theforeman/foreman-packaging/rpm/develop/packages/foreman/foreman-installer/foreman-installer.spec -O foreman-installer.spec"
+    - "git clone https://github.com/theforeman/foreman-installer"
+    - "sed -i '/theforeman.pulpcore/ s@:git.*@:git => \"#{__dir__}/../.git\", :ref => \"origin/HEAD\"@' foreman-installer/Puppetfile"
   get-current-version:
-    - "sed 's/-develop//' VERSION"
+    - "sed 's/-develop//' foreman-installer/VERSION"
   create-archive:
-    - bundle config set --local path vendor/bundle
-    - bundle config set --local without development:test
-    - bundle install
-    - bundle exec rake pkg:generate_source
+    - bash -c "cd foreman-installer && bundle config set --local path vendor/bundle"
+    - bash -c "cd foreman-installer && bundle config set --local without development:test"
+    - bash -c "cd foreman-installer && bundle install"
+    - bash -c "cd foreman-installer && bundle exec rake pkg:generate_source"
  1. It clones foreman-installer (in post-upstream-clone, as that felt more natural after some thinking)
  2. It adjusts the Puppetfile to use #{__dir__}/../.git as the Git repository, abusing the fact that a Puppetfile is really just a Ruby script (sorry Ben!) and knows the __dir__ it lives in
  3. It fetches the version from the foreman-installer checkout, so it's sort-of reasonable
  4. It performs all building inside the foreman-installer checkout
Can this be used in other scenarios?

I hope so! Vendoring is not unheard of. And testing your "consumers" (dependents? naming is hard) is good style anyway!

  1. three Ruby modules in a trench coat, so to say 

Categories: FLOSS Project Planets

The Accidental Coder: AI Translation - Not Ready for Prime Time?

Planet Drupal - Tue, 2024-05-14 15:49

While working on the latest (D10) version of my blog, I wanted to add multilingual functionality.

Investigation suggested that in order to capture the largest language groups in the U.S./Canada a site should offer:

Categories: FLOSS Project Planets
