Feeds

Calligra Office 4.0 is Out!

Planet KDE - Tue, 2024-08-27 07:05

Calligra is the office and graphics suite developed by KDE and is the successor to KOffice. With some traditional parts like Kexi and Plan having an independent release schedule, this release only contains the following four components: Calligra Words, Sheets, Stage, and Karbon.

The most significant updates are that Calligra has been fully transitioned to Qt6 and KF6, along with a major overhaul of its user interface.

General

Words, Sheets, and Stage now feature a new sidebar design. Currently, this is implemented using a proxy style, which will no longer be necessary once the related merge request in Breeze is merged.

Sidebar with the new immutable tab design

I revamped the content of each sidebar tab, addressing various visual glitches and making the spacing much more consistent.

The “Custom Shape” docker has been removed, and custom shapes are now accessible through a popup menu in the toolbar across all Calligra applications.

Custom shapes popup

Regarding the toolbar, I streamlined the default layout by removing basic actions like copy, cut, and paste.

The settings dialogs were also cleaned up and are now using the new FlatList style also used by System Settings and most Kirigami applications.

Settings Dialog

Words

Words now features the new sidebar design, and the main view uses a shadow to define the document borders.

Calligra Words

The Style Manager and Page Layout dialog were also updated.

Style Manager

Page Layout

Stage

Stage didn’t really change aside from the sidebar redesign. But I am using it to work on my slides for Akademy and it is a pretty solid choice.

Calligra Stage

The tooltips for the slides are now compatible with Wayland.

Tooltip showing a slide

Calligra Sheets

As part of the Qt6 port, Sheets lost its scripting system based on the unmaintained Kross framework. In the future, it would be possible to add Python scripting, thanks to the work of Manuel Alcaraz Zambrano on getting Python bindings for the KDE Frameworks.

Visually, a noticeable change is that the cell editor moved from a docker positioned by default on the left of the spreadsheet view to a normal widget at the top. This takes up a lot less space, which can now be used by the spreadsheet.

Calligra Sheets

Karbon

Karbon didn’t receive many changes outside of the ones affecting the whole platform.

Karbon

Launcher

The initial window shown when opening one of the Calligra applications was redesigned and adopted the new “frameless style”.

Custom Document tab of the launcher page

Template tab of the launcher page

Other
  • Braindump is now able to compile again, but since it lacks an active maintainer, the component is disabled in release builds.

  • The webshape plugin has been ported from the outdated QtWebkit module to QtWebEngine and is no longer exclusive to Braindump. This means you can now embed websites directly into your word documents, slides, and spreadsheets.

Webshape

  • The AppStream id of every component is now prefixed with org.kde.calligra. This allows Flatpak to expose every Calligra application to your application launcher.
Get Involved

Calligra needs your support! You can contribute by getting involved in development, providing new or updated templates, or making a donation to KDE e.V. Join the discussion in our Matrix channel.

Credits

This release would not have been possible without the high quality mockups provided by Manuel Jesús de la Fuente. Also big thanks to everyone who contributed to this Calligra release: Evgeniy Harchenko, Dmitrii Fomchenkov and bob sayshilol.

Packager Section

You can find the package on download.kde.org and it has been signed with my GPG key.

Categories: FLOSS Project Planets

Specbee: Why User Experience (UX) matters and how it can transform your website

Planet Drupal - Tue, 2024-08-27 06:10
UX (User Experience) and UI (User Interface) design are the backbone of any successful digital product. They don’t just create pretty interfaces—they shape how users interact with your website or app. A well-crafted UX and UI keep users engaged, make their journey smooth, and leave them satisfied. When done right, they turn casual visitors into loyal users. In this article, you'll learn what UX is, how it differs from UI design, and why both are crucial in crafting engaging, intuitive experiences for users. We'll break down the UX design process, from research to testing, and highlight key goals like accessibility, usability, and delight.

What is UX

UX is all about how someone interacts with a digital product. It’s the user’s thoughts, feelings, and actions before, during, and after they use it. Good UX meets their needs efficiently and leaves them with a positive vibe. It’s not just about usability—it’s about making sure it’s accessible, useful, and emotionally satisfying too.

UX vs UI: The Ketchup Bottle Example

To get the difference between UX and UI, think about a ketchup bottle:

  • UI: This is the look and feel of the ketchup bottle—the shape, color, label design, and even the texture. UI design is all about making it visually appealing and easy to interact with.

  • UX: UX goes beyond just the bottle. It’s everything about how you use the ketchup and how it makes you feel. From how easy it is to open, to the consistency and taste, to your overall satisfaction. UX design focuses on making sure every part of your experience, from start to finish, works smoothly and leaves you happy.

Image Source: UXDesign.CC

Image Source: Patrick Hansen.com

In a nutshell, UI is all about the look and feel of a product's interface. UX, on the other hand, covers the whole user experience. It’s about making sure interactions are meaningful, intuitive, and match what users need and expect.

Looking to boost engagement and increase conversion rates with expert UI/UX design and research services? Let’s create a website your users will love. Talk to us today.

Goals of UX: Accessibility, Usability, Utility, Delight

Effective UX design revolves around four key goals:

  • Accessibility: Design for everyone. Accessibility means making digital products usable for people with all abilities. It’s about adding features that cater to diverse needs so everyone can access and use the product easily.
  • Usability: Keep it simple and engaging. Usability is all about reducing friction and making sure users can accomplish their tasks smoothly and with satisfaction.
  • Utility: Make it useful. Utility means ensuring the product does what users need it to do. It’s about understanding what users want and designing features that meet those needs.
  • Delight: Add a little joy. Delight is about creating experiences that evoke positive emotions and build loyalty by surprising and exceeding user expectations.

Now, let’s talk about the UX of a banana:

  • Accessibility: The peel is easy to remove, whether you like it ripe or unripe.
  • Usability: It’s straightforward to eat, with minimal mess.
  • Utility: Bananas provide quick energy and nutrition.
  • Delight: They smell great, taste good, and even come in biodegradable packaging.

  • Green Banana: Not ready for consumption yet—still ripening. Best to wait for optimal flavor and texture.
  • Brown Banana: Overripe—not ideal for eating fresh; may be too mushy. The outside color clearly signals that it’s not suitable for immediate consumption.

A perfect example of a banana. Image Source: accubits

A perfect banana vs. rotten banana.

UX Design Process

A process is a mix of different methods all aimed at one goal: creating a delightful user experience. It’s a structured sequence of stages designed to understand what users need and improve the final product to go beyond their expectations.

  • Research: The first step is research. This means talking to users through interviews and surveys to gather insights into their preferences, expectations, and challenges. We also analyze data from tools like Google Analytics to spot user patterns and behaviors. Checking out what competitors are doing gives us a look at industry standards and opportunities for improvement.
  • Analysis: Next up is analysis. We take all that data and look for patterns and insights that can guide our design. Creating user personas helps us represent key user types. Journey mapping shows us the user experience step by step, highlighting where things go right or wrong. Defining user goals helps us focus on features that make the biggest impact.
  • Design: Design is at the heart of everything. Here, we come up with solutions that meet user needs and align with business goals. We start with wireframing—simple sketches that outline key screens and interactions. Then, we build interactive prototypes to test how users will interact with the design. Finally, we add visual elements like branding, typography, and colors to make the design look and feel just right.
  • Testing: Once the designs are ready, we move to testing. We let real users interact with the prototype to see how it works for them. Usability testing helps us spot any issues and gather feedback. We refine the design based on this feedback, ensuring it’s as user-friendly and satisfying as possible.

By following this structured UX design process, teams can keep improving the user experience. The result? A product that not only meets but exceeds user expectations and aligns with business goals. Each stage adds valuable insights, making sure the design is user-centered and effective.

Final thoughts

UX/UI design matters because it affects how appealing and functional a product is. By focusing on accessibility, usability, utility, and delight, designers create experiences that resonate with users, boosting engagement and satisfaction. This approach is especially important in Drupal development, ensuring that websites and apps are not just visually appealing but also optimized for a great user experience.
Categories: FLOSS Project Planets

Russ Allbery: Review: Dark Horse

Planet Debian - Mon, 2024-08-26 22:22

Review: Dark Horse, by Michelle Diener

Series: Class 5 #1 Publisher: Eclipse Copyright: June 2015 ISBN: 0-9924559-3-6 Format: Kindle Pages: 366

Dark Horse is a science fiction romance novel, the first of a five book series as of this writing. It is self-published, although it is sufficiently well-edited and packaged that I had to do some searching to confirm that.

Rose was abducted by aliens. The Tecrans picked her up along with a selection of Earth animals, kept her in a cell in their starship, and experimented on her. As the book opens, she has managed to make her escape with the aid of an AI named Sazo who was also imprisoned on the Tecran ship. Sazo dealt with the Tecrans, dropped the ship in the middle of Grih territory, and then got Rose and most of the animals on shuttles to a nearby planet.

Dav Jallan is the commander of the ship the Grih sent to investigate the unexplained appearance of a Class 5 Tecran warship in the middle of their territory. The Grih and the Tecran, along with three other species, are members of the United Council, which means in theory they're all at peace. With the Tecran, that theory is often strained. Dav is not going to turn down one of their highly-advanced Class 5 warships delivered to him on a silver platter. There is only the matter of the unexpected cargo, the first orange dots (indicating unknown life forms) that most of the Grih have ever seen.

There is a romance. That romance did not work for me. I thought it was highly unprofessional on Dav's part and a bit too obviously constructed on the author's part. It also leans on the subgenre convention that aliens can be remarkably physically similar and sexually compatible, which always causes problems for my suspension of disbelief even though I know it's no less plausible than faster-than-light travel.

Despite that, I had so much fun with this book! It was absolutely delightful and weirdly grabby in a way that caught me by surprise. I was skimming some parts of it to write this review and found myself re-reading multiple pages before I dragged myself back on task.

I think the most charming part of this book is that the United Council has a law called the Sentient Beings Agreement that makes what the Tecran were doing extremely illegal, and the Grih and the other non-Tecran aliens take this very seriously and with a refreshing lack of cynicism. Rose has a typical human reaction to ending up in a place where she doesn't know the rules and isn't entirely an expected guest. She almost reflexively smoothes over miscommunications and tensions, trying to adapt to their expectations. And then, repeatedly, the Grih realize how much work she's doing to adapt to them, feel enraged at the Tecran and upset that they didn't understand or properly explain something, and find some way to make Rose feel more comfortable. It's surprisingly soothing and comforting to read.

It occurred to me in several places that Dark Horse could be read as a wish-fulfillment fantasy of what life as a woman could be like if men took their fair share of the mental load. (This concept is usually applied to housework, but I think it generalizes to other social and communication contexts.) I suspect this was not an accident.

There is a lot of wish fulfillment in this book. The Grih are very human-like but hunky, which is convenient for the romance subplot. They struggle to sing, value music exceptionally highly, and consider Rose's speaking voice beautifully musical. Her typical human habit of singing to herself is a source of immediate and almost overwhelming fascination. The supplies Rose takes from the Tecran ship when she flees just happen to be absurdly expensive scented shampoo and equally expensive luxury adaptable clothing. The world she lands on, and the Grih ship, are low-gravity compared to Earth, so Rose is unusually strong for her size. Grih military camouflage has no effect on her human vision. The book is set up to make Rose special.

If that type of wish fulfillment is going to grate, wait on this book until you're more in the mood for it. But I like wish fulfillment books when they're done well. Part of why I like to read is to imagine a better world. And Rose isn't doted on; despite their hospitality, she's constantly underestimated by the Grih. Even with their deep belief in the Sentient Beings Agreement, they find it hard to believe that an unknown sentient, even an advanced sentient, is really their equal. Their concern at the start is somewhat patronizing, so watching Rose constantly surprise them delighted the part of my brain that likes both competence porn and deserved reversals, even though the competence here is often due to accidents of biology. It helps that Diener tells the story in alternating perspectives, so the reader first watches Rose do something practical and straightforward from her perspective and then gets to enjoy the profound surprise and chagrin of the aliens.

There is a plot beneath this first contact story, and beyond the political problem of figuring out what to do with Rose and the Tecran. Sazo, Rose's AI friend, does not want the Grih to know he exists. He has a history that Rose does not know about and may not be entirely safe. As the political situation with the Tecran escalates, Sazo is pursuing goals of his own, and Rose has a firm opinion about where her loyalties should lie. The resolution is nothing ground-breaking as far as SF goes, but I thought it was satisfyingly tense and complex. Dark Horse leaves obvious room for a sequel, but it comes to a satisfying conclusion.

The writing is serviceable, particularly once you get into the story. I would not call it great, and it's not going to win any literary awards, but it didn't interfere with my enjoyment of the story.

This is not the sort of book that will make anyone's award list, but it is easily in the top five of books I had the most fun reading this year. Maybe save it for when you're looking for something light and wholesome and don't mind some rather obvious tropes, but if you're in the mood for imagining people who take laws seriously and sincerely try to help other people, I found this an utterly delightful way to pass the time. I immediately bought the sequel. Recommended.

Followed by Dark Deeds.

Rating: 8 out of 10

Categories: FLOSS Project Planets

Armin Ronacher: MiniJinja: Learnings from Building a Template Engine in Rust

Planet Python - Mon, 2024-08-26 20:00

Given that I can't stop creating template engines, I figured I might write a bit about my learnings from creating MiniJinja, which is an implementation of my Jinja2 template engine for Rust. Disclaimer: this post might be a bit more technical.

There is a good chance you have come across Jinja2 templates before as they have become quite commonplace over the years. They look a bit like this:

{% extends "layout.html" %}
{% block body %}
  <p>Hello {{ name }}!</p>
{% endblock %}

If you want to play around with it yourself, here are some links:

Why MiniJinja?

Maybe we start with the initial question of why I wrote MiniJinja. It's the year 2024 and people don't create a ton of HTML with server-side rendered template engines any more. While there is some resurgence of that model thanks to HTMX, hotwire and livewire, I personally use SolidJS for my internal UI needs. There is however always a need to generate some form of text, and so the need for Jinja2 never really went away. When I originally created it, it was clearly meant for generating HTML with some JavaScript sprinkled on top, but in the years since I have encountered Jinja templates in many more places, primarily for generating YAML and similar formats. Lately it comes up for LLM prompt generation.

My personal need for MiniJinja came out of an experiment I built for infrastructure automation. Since the templates had to be loaded dynamically I could not use a system like Askama. Askama has type-safe templates that just generate Rust code. On the other hand most Jinja inspired template engines that are dynamic in Rust really do not try very hard to be Jinja compatible. Because writing template engines is also fun, I figured I might give it another try.

Over the last two years I kept adding to the engine until it got to the point where it's at almost feature parity with Jinja2 and quite enjoyable to use.
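
To give a feel for the basic API before diving into the internals, here is a minimal usage sketch (based on MiniJinja's documented Environment and context! APIs; the template name and context values are made up for illustration):

use minijinja::{context, Environment};

fn main() {
    // An Environment owns templates, filters and globals.
    let mut env = Environment::new();
    env.add_template("hello.txt", "Hello {{ name }}!").unwrap();

    // Look up the template and render it with a context built by the context! macro.
    let tmpl = env.get_template("hello.txt").unwrap();
    println!("{}", tmpl.render(context! { name => "World" }).unwrap());
}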

Runtime Values

When building a template engine for Rust you end up building a little dynamic programming language that is optimized for text generation. Consequently you pull in most of the challenges of building a dynamic language. Particularly when working in Rust the immediate challenge is memory management and exposing native Rust objects to the embedded language. So the interesting bit here is how to create a system that allows interactions between the template engine and the Rust world around it.

MiniJinja, unlike Jinja2, does not use code generation but has a basic stack-based VM and an AST-based bytecode compiler. Since MiniJinja follows Jinja2 it inherits a lot of the realities of the underlying object system that Jinja2 inherits from Python. For instance macros (functions) are first class objects and they can have closures. This has challenges because it's easy to create cycles and Rust has no garbage collector that can help with this problem.

The core object model in MiniJinja is a Value type which is represented by an enum that looks as follows (some less important variants removed):

#[derive(Clone)]
pub struct Value(ValueRepr);

#[derive(Clone)]
pub(crate) enum ValueRepr {
    Undefined,
    None,
    Bool(bool),
    U64(u64),
    I64(i64),
    F64(f64),
    String(Arc<str>, StringType),
    SmallStr(SmallStr),
    Invalid(Arc<Error>),
    Object(DynObject),
}

Externally everything is a Value. If you Clone it, you usually bump a reference count or you make a cheap memcpy. Values are either primitives such as strings, numbers etc. or objects.

For objects MiniJinja provides a trait called Object which can be implemented by most Rust types. The engine provides a DynObject wrapper, which is a fancy Arc<dyn Object> that supports borrowing and object safety. I wrote about this before. What you will notice is that quite a few of the types involved have an Arc. That's because these values are for the most part reference counted. Since values here are really fat (they are 24 bytes in memory) a SmallStr type is used to hold up to 22 bytes of string data inline. One byte is used to encode the length of the string, and another byte is then used by the ValueRepr to mark which enum variant is in use. In pure theory this is all wrong. We never use weak references, so the weak count in the Arc is not used and clever bit hackery could be used to greatly reduce the size of the value type. I think one could get the whole thing down to 16 bytes trivially or even 8 bytes with NaN tagging. However I did not want to walk into the world of unsafe code more than feels appropriate.

MiniJinja is also plenty fast.

One variant that is worth calling out is Invalid. That's a value that can exist in the system but it carries an error. When you're trying to interact with it, in most cases it will propagate this error. That's used in the engine in places where the API assumes infallibility (particularly during iteration) but it needs a way to emit an error. This concept is quite common when writing an engine in C, though typically the actual error is carried out of band. For instance in QuickJS there is a marker value that indicates a failure, but the actual error is held on the interpreter runtime.

The trait definition for objects looks like this:

pub trait Object: Debug + Send + Sync {
    fn repr(self: &Arc<Self>) -> ObjectRepr { ... }
    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> { ... }
    fn enumerate(self: &Arc<Self>) -> Enumerator { ... }
    fn enumerator_len(self: &Arc<Self>) -> Option<usize> { ... }
    fn is_true(self: &Arc<Self>) -> bool { ... }
    fn call(
        self: &Arc<Self>,
        state: &State<'_, '_>,
        args: &[Value],
    ) -> Result<Value, Error> { ... }
    fn call_method(
        self: &Arc<Self>,
        state: &State<'_, '_>,
        method: &str,
        args: &[Value],
    ) -> Result<Value, Error> { ... }
    fn render(self: &Arc<Self>, f: &mut Formatter<'_>) -> Result
    where
        Self: Sized + 'static
    { ... }
}

Some of these methods are implemented automatically. For instance many of the methods such as is_true or enumerator_len have a default implementation that is based on object repr and the return value from enumerate. But they can be overridden to change the default behavior or to add some potential optimizations.

One of the most important types in Jinja is a map as it holds the template context. They are implemented as you can imagine as Object. The implementation is in fact pretty trivial:

impl<V> Object for BTreeMap<Value, V>
where
    V: Into<Value> + Clone + Send + Sync + fmt::Debug + 'static,
{
    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> {
        self.get(key).cloned().map(|v| v.into())
    }

    fn enumerate(self: &Arc<Self>) -> Enumerator {
        self.mapped_enumerator(|this| Box::new(this.keys().cloned()))
    }
}

This reveals two interesting aspects of the object model: First, Value implements Hash. That means any value can be used as a key in a map. While this is untypical for Rust and even not what happens in Python, it simplifies the system greatly. When in the template engine you write {{ object.key }}, behind the scenes object.get_value(Value::from("key")) is called. Since most keys are typically less than 22 characters, creating a dummy Value wrapper around the key is not too problematic.

The second and probably more interesting part here is that you can sort of borrow out of an object for the enumerator. The mapped_enumerator helper takes a reference to self and invokes a closure which itself can borrow from self. This adjacent borrowing is implemented with unsafe code as there is no other way to make it work. The combination of repr (defaults to Map), get_value and enumerate gives the object the behavior, shape and contents.
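
As a quick illustration of how repr, get_value and enumerate fit together, here is a minimal sketch that exposes a custom struct to a template. The Point type and its field names are made up for illustration; the trait methods follow the signatures shown above and Value::from_object is MiniJinja's documented way to wrap an Object into a Value:

use std::sync::Arc;
use minijinja::value::{Enumerator, Object, Value};
use minijinja::{context, Environment};

#[derive(Debug)]
struct Point {
    x: f64,
    y: f64,
}

impl Object for Point {
    // the default repr() is Map, so the object is indexed by key
    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> {
        match key.as_str()? {
            "x" => Some(Value::from(self.x)),
            "y" => Some(Value::from(self.y)),
            _ => None,
        }
    }

    fn enumerate(self: &Arc<Self>) -> Enumerator {
        // advertise the available keys so loops and default rendering work
        Enumerator::Values(vec![Value::from("x"), Value::from("y")])
    }
}

fn main() {
    let mut env = Environment::new();
    env.add_template("point.txt", "{{ p.x }} / {{ p.y }}").unwrap();
    let p = Value::from_object(Point { x: 1.0, y: 2.5 });
    let rendered = env.get_template("point.txt").unwrap().render(context! { p }).unwrap();
    println!("{rendered}");
}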

Vectors look quite similar:

impl<T> Object for Vec<T>
where
    T: Into<Value> + Clone + Send + Sync + fmt::Debug + 'static,
{
    fn repr(self: &Arc<Self>) -> ObjectRepr {
        ObjectRepr::Seq
    }

    fn get_value(self: &Arc<Self>, key: &Value) -> Option<Value> {
        self.get(key.as_usize()?).cloned().map(|v| v.into())
    }

    fn enumerate(self: &Arc<Self>) -> Enumerator {
        Enumerator::Seq(self.len())
    }
}

Enumerators and Object Behaviors

Enumeration in MiniJinja is a way to allow an object to describe what's inside of it. In combination with the return values from repr() the engine changes how iteration is performed. These are possible enumerators:

pub enum Enumerator {
    NonEnumerable,
    Empty,
    Iter(Box<dyn Iterator<Item = Value> + Send + Sync>),
    Seq(usize),
    Values(Vec<Value>),
}

It's probably easier to explain how enumerators turn into iterators by showing you the try_iter method in the engine:

impl DynObject {
    fn try_iter(self: &Self) -> Option<Box<dyn Iterator<Item = Value> + Send + Sync>>
    where
        Self: 'static,
    {
        match self.enumerate() {
            Enumerator::NonEnumerable => None,
            Enumerator::Empty => Some(Box::new(None::<Value>.into_iter())),
            Enumerator::Seq(l) => {
                let self_clone = self.clone();
                Some(Box::new((0..l).map(move |idx| {
                    self_clone.get_value(&Value::from(idx)).unwrap_or_default()
                })))
            }
            Enumerator::Iter(iter) => Some(iter),
            Enumerator::Values(v) => Some(Box::new(v.into_iter())),
        }
    }
}

Some of the trivial enumerators are quick to explain: Enumerator::NonEnumerable just does not support iteration and Enumerator::Empty does but won't yield any values. The more interesting one is Enumerator::Seq(n) which basically tells the engine to call get_value from 0 to n to yield items from the object. This is how sequences are implemented. The rest are enumerators that just directly yield values.

So when you want to iterate over a map, you will usually use something like Enumerator::Iter and iterate over all the keys in the map.

The engine then uses ObjectRepr to figure out what to do with it. For a value marked as ObjectRepr::Seq it will display like a sequence, it can be indexed with integers, and it iterates over the values in the sequence. If the repr is ObjectRepr::Map then the expectation is that it will be indexable by key and it will iterate over the keys when used in a loop. Its default rendering also is a key-value pair list wrapped in curly braces.

Now quite frankly I don't like that iteration protocol. I think it's more sensible for maps to naturally iterate over the key-value pairs, but since MiniJinja follows Jinja2 and Jinja2 follows Python, emulating that behavior was important.

Enumerators are a bit different than iterators because they might only define how iteration is performed (see: Enumerator::Seq). To actually create an iterator, the object is then passed to it. They are also asked to provide a length. When an enumerator provides a length it's an indication to the engine that the object can be iterated over more than once (you can re-create the enumerator). This is why an object can land in a MiniJinja template looking like a list when it is actually just an iterable object with a known length. For this MiniJinja uses a trick where it will inspect the size hint of the iterator to make assumptions about it. Internally every enumerator allows the engine to query the length of it:

impl Enumerator {
    fn query_len(&self) -> Option<usize> {
        Some(match self {
            Enumerator::Empty => 0,
            Enumerator::Values(v) => v.len(),
            Enumerator::Iter(i) => match i.size_hint() {
                (a, Some(b)) if a == b => a,
                _ => return None,
            },
            Enumerator::RevIter(i) => match i.size_hint() {
                (a, Some(b)) if a == b => a,
                _ => return None,
            },
            Enumerator::Seq(v) => *v,
            Enumerator::NonEnumerable => return None,
        })
    }
}

The important part here is the call to size_hint. If the upper bound is known, and the lower bound matches the upper bound then MiniJinja will assume the iterator will always have that length (for as long as not iterated). As a result it will change the way the object is interacted with. This for instance means that if you run range(10) in a template it looks like a list when printed even though iteration and number creation is lazy. On the other hand if you use the Value::make_one_shot_iterator API the length hint will always be disabled and MiniJinja will not attempt to interact with the iterator when printing it:

{{ range(4) }}        -> prints [0, 1, 2, 3]
{{ a_real_iterator }} -> prints <iterator>

Building a VM

Lexing and parsing I think is not too puzzling in Rust, but making an AST and making a VM is kinda unusual. The first thing is that Rust is just not particularly amazing at tree structures. In MiniJinja I really wanted to avoid having the AST at all, but it does come in handy to implement some of the functionality that Jinja2 requires. For instance to establish closures it will just walk the AST to figure out which names are looked up within a function. I tried a few things to improve how memory allocations work with the AST. There are great crates out there for doing this, but I really wanted MiniJinja to be light on dependencies so I ended up opting against all of them.

For the AST design I went with large enums that hold Spanned<T> values:

pub enum Expr<'a> {
    Var(Spanned<Var<'a>>),
    Const(Spanned<Const>),
    ...
}

pub struct Var<'a> {
    pub id: &'a str,
}

pub struct Const {
    pub value: Value,
}

You might now be curious what Spanned<T> is. It's a wrapper type that does two things: it boxes the inner node and it stores an adjacent Span which is basically the code location in the original input template for debugging:

pub struct Spanned<T> {
    node: Box<T>,
    span: Span,
}

It implements Deref like a smart pointer so you can poke right through it to interact with the node. The code generator just walks the AST and emits instructions for it.

The instructions themselves are a large enum but the number of arguments to the variants is kept rather low to not waste too much memory. The base size of the instruction is dominated by it being able to hold a Value which as we have established is a pretty hefty thing:

pub enum Instruction<'source> {
    EmitRaw(&'source str),
    StoreLocal(&'source str),
    Lookup(&'source str),
    LoadConst(Value),
    Jump(usize),
    JumpIfFalse(usize),
    JumpIfFalseOrPop(usize),
    JumpIfTrueOrPop(usize),
    ...
}

The VM keeps most of the runtime state on a State object that is passed to a few places. For instance you have already seen this in the call signature further up. The state for instance holds the loaded instructions or the template context. The VM itself maintains a stack of values and then just steps through a list of instructions on the state in a loop. Since there are a lot of instructions you can have a look on GitHub to see it in its entirety. Here however is a small part that shows roughly how this works:

let mut pc = 0;
loop {
    let instr = match state.instructions.get(pc) {
        Some(instr) => instr,
        None => break,
    };

    let a;
    let b;

    match instr {
        Instruction::EmitRaw(val) => {
            out.write_str(val).map_err(Error::from)?;
        }
        Instruction::Emit => {
            self.env.format(&stack.pop(), state, out)?;
        }
        Instruction::StoreLocal(name) => {
            state.ctx.store(name, stack.pop());
        }
        Instruction::Lookup(name) => {
            stack.push(assert_valid!(state
                .lookup(name)
                .unwrap_or(Value::UNDEFINED)));
        }
        Instruction::GetAttr(name) => {
            a = stack.pop();
            stack.push(match a.get_attr_fast(name) {
                Some(value) => value,
                None => undefined_behavior.handle_undefined(a.is_undefined())?,
            });
        }
        Instruction::LoadConst(value) => {
            stack.push(value.clone());
        }
        Instruction::Jump(jump_target) => {
            pc = *jump_target;
            continue;
        }
        Instruction::JumpIfFalse(jump_target) => {
            a = stack.pop();
            if !undefined_behavior.is_true(&a)? {
                pc = *jump_target;
                continue;
            }
        }
        // ...
    }
    pc += 1;
}

Basically the current instruction is held in pc (short for program counter), normally it's advanced by one but jump instructions can change the pc to any other location. If you run out of instructions the evaluation ends.

One piece of complexity in the VM comes down to macros. That's because lifetimes make that really tricky. A macro is just a Value that holds a Macro Object internally. So how can that macro reference the instructions, if the instructions themselves have a lifetime to the template 'source? The answer is that they can't (at least I have not found a reasonable way). So instead a macro has an ID which acts as a handle to look up the instructions dynamically from the execution state. Additionally each state has a unique ID so the engine can assert that nothing funny was happening. The downside of this is that a macro cannot be "returned" from a template. They can however be imported from one template into another.

Here is what a macro object looks like in code (abbreviated):

pub(crate) struct Macro {
    pub name: Value,
    pub arg_spec: Vec<Value>,
    pub macro_ref_id: usize, // id of the macro
    pub state_id: isize,
    pub closure: Value,
    pub caller_reference: bool,
}

impl Object for Macro {
    fn call(self: &Arc<Self>, state: &State<'_, '_>, args: &[Value]) -> Result<Value, Error> {
        // we can only call macros that point to loaded template state.
        // if a template would be returned from a template this will
        // fail.
        if state.id != self.state_id {
            return Err(Error::new(
                ErrorKind::InvalidOperation,
                "cannot call this macro. template state went away.",
            ));
        }

        // ... argument parsing
        let arg_values = ...;

        // find referenced instructions
        let (instructions, offset) = &state.macros[self.macro_ref_id];

        // create a nested vm and evaluate the macro
        let vm = Vm::new(state.env());
        let mut rv = String::new();
        let mut out = Output::with_string(&mut rv);
        let closure = self.closure.clone();
        ok!(vm.eval_macro(
            instructions,
            *offset,
            self.closure.clone(),
            state.ctx.clone_base(),
            caller,
            &mut out,
            state,
            arg_values
        ));

        // return rendered template as string from the call
        Ok(if !matches!(state.auto_escape(), AutoEscape::None) {
            Value::from_safe_string(rv)
        } else {
            Value::from(rv)
        })
    }
}

Additionally the closure is a good source of cycles. For that reason the engine keeps track of all closures during the execution and breaks cycles caused by closures manually by clearing them out.

Cool APIs

The last part that I want to go over is the magic that makes this work:

fn slugify(value: String) -> String {
    value.to_lowercase().split_whitespace().collect::<Vec<_>>().join("-")
}

fn timeformat(state: &State, ts: f64) -> String {
    let configured_format = state.lookup("TIME_FORMAT");
    let format = configured_format
        .as_ref()
        .and_then(|x| x.as_str())
        .unwrap_or("HH:MM:SS");
    format_unix_timestamp(ts, format)
}

let mut env = Environment::new();
env.add_filter("slugify", slugify);
env.add_filter("timeformat", timeformat);

You might have seen something like this in Rust before, but it's still a bit magical. How can you make functions with seemingly different signatures register with the add_filter function? How does the engine perform the type conversions (as we know the engine has Value types, so where does the String conversion take place)? This is a topic for a blog post on its own but the answer behind this lies in a lot of clever trait hackery. The add_filter function reveals a bit of that hackery:

pub fn add_filter<N, F, Rv, Args>(&mut self, name: N, f: F)
where
    N: Into<Cow<'source, str>>,
    F: Filter<Rv, Args> + for<'a> Filter<Rv, <Args as FunctionArgs<'a>>::Output>,
    Rv: FunctionResult,
    Args: for<'a> FunctionArgs<'a>,
{
    let filter = BoxedFilter(Arc::new(move |state, args| -> Result<Value, Error> {
        f.apply_to(Args::from_values(Some(state), args)?).into_result()
    }));
    self.filters.insert(name.into(), filter);
}

Hidden behind this rather complex set of traits are some basic ideas:

  1. FunctionArgs is a helper trait for type conversions. It's implemented for tuples of different sizes made of ArgType values. These tuples represent the signature of the function. It has a method called from_values which performs that conversion via ArgType.
  2. ArgType which you can't really see in the code above, is a trait that knows how to convert a Value into whatever the function desires as argument.
  3. Filter is a trait implemented for functions with qualifying FunctionArgs signatures returning a FunctionResult.
  4. A FunctionResult is a trait that represents potential return values from the function such as a Value, something that can be converted into a Value or a Result.
  5. The BoxedFilter type is what converts the passed closure into a reference counted object that is held in the environment.
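
Putting these pieces together, the registered filters can then be used directly from templates. Here is a minimal sketch of the resulting ergonomics, reusing the slugify filter from above together with MiniJinja's render_str helper for one-off templates (the expected output is an assumption based on what the filter does):

use minijinja::Environment;

fn slugify(value: String) -> String {
    value.to_lowercase().split_whitespace().collect::<Vec<_>>().join("-")
}

fn main() {
    let mut env = Environment::new();
    env.add_filter("slugify", slugify);

    // render_str compiles and renders a one-off template; () is an empty context
    let rv = env.render_str("{{ 'Hello MiniJinja World'|slugify }}", ()).unwrap();
    assert_eq!(rv, "hello-minijinja-world");
}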
Conclusion

I think a lot of the patterns in MiniJinja are useful for projects outside of MiniJinja. There is quite a bit more hidden in it than I have talked about here, such as how MiniJinja is abusing serde. If you have a need for a Jinja2 compatible template engine I would love it if you get some use out of it. If you're curious about how to build a runtime and object system in Rust, you might also find some utility in the codebase.

I myself learned quite a bit about what creative API design can look like in Rust by building it. At this point I am incredibly happy with how the public API of the engine turned out. The engine is extensively documented both internally and publicly and you can read all about it in the API docs.

Categories: FLOSS Project Planets

GSoC '24 Update- Porting Arianna to Foliate-js

Planet KDE - Mon, 2024-08-26 16:59
Project Recap

As my Google Summer of Code 2024 journey concludes, I'm excited to share the updates on my project: Porting Arianna to Foliate-js. The main goal was to replace the outdated epub.js with the actively maintained Foliate-js. In my previous blog post, I discussed the initial progress on integrating Foliate-js into Arianna, including the implementation of Table of Contents (TOC) and metadata handling.

My work done so far

Overcoming Rendering Challenges
  • Rendering Issues: One of the major hurdles was fixing the rendering issues that were causing the book to not be visible on the screen. This was a complex problem, but with the guidance of my mentor, we were able to resolve it successfully, and the book became visible on the screen.

  • Text Color in Light Theme: I also addressed the text color issues in the light theme mode, ensuring the text colors remain visible and maintaining visual consistency across different themes.

  • Navigation Buttons: Enabled the navigation buttons by setting backend.locationsReady to true when the book is ready. This was a key fix to enhance user navigation within the ebook, allowing users to move from one page to another by means other than the arrow keys.

  • Theme Color Handling: Lastly, I worked on the handling of theme colors to provide a consistent visual experience across different themes.

light: {
    fg: Config.invert ? Kirigami.Theme.backgroundColor.toString() : Kirigami.Theme.textColor.toString(),
    bg: Config.invert ? Kirigami.Theme.textColor.toString() : Kirigami.Theme.backgroundColor.toString()
}

User Experience and Functionality Refinements
  • Slider and Progress Percentage: I fixed the slider functionality, making sure it accurately reflects the reading progress. This update ensures that users can track their progress through the ebook with precision.
QC2.Slider {
    anchors.fill: parent
    value: backend.progress * value
    onMoved: {
        backend.progress = value
    }
    onPressedChanged: {
        if (pressed) {
            backend.progress = value
        }
    }
    live: true
    Layout.fillWidth: true
}
  • Reading Position Accuracy: I also ensured that the slider accurately reflects the reading position when users interact with it, improving the overall usability.

  • Book Progress Display: I resolved issues with the book progress display, updated the time-left calculation, and fixed the popup behavior, refining the user interface for a smoother reading experience.

Reflections and Takeaways

This project has been a significant learning experience. The most challenging part for me was making things work and realizing that not everything is as straightforward as I initially thought. It was daunting to dive into a large codebase and try to understand how everything fits together, but this experience taught me the importance of patience and how to break problems down so they can be solved simply.

Looking Ahead

While significant progress has been made, many things are left to do:

  • Fixing right-click copy and search functionality
  • Implementing Ctrl+ shortcut for increasing font size
  • Addressing link color and redirect issues
  • Features like bookmarking and annotations
What’s next?

While my GSoC journey is coming to an end, my contributions to Arianna and the open-source community will continue.

I'd like to thank my mentor, Carl Schwan, for the guidance and support throughout the project, the KDE community, and the Google Summer of Code program for this opportunity. This experience has not only improved Arianna but has also been a transformative journey for me as a developer.

Thank you for following along with my progress, see you in my next blog with more progress.

Categories: FLOSS Project Planets

Open Source AI – Weekly update August 26

Open Source Initiative - Mon, 2024-08-26 15:33
Week 34 summary

Share your thoughts about draft v0.0.9

As we move toward the release of the first-ever Open Source AI Definition in October at All Things Open, the publication of the 0.0.9 draft brings us one step closer to realizing this goal.

  • OSAID 0.0.9 draft definition is live
  •   Changelog includes:
    • New Feature: Clarified Open Source Models and Weights
      • Added a new paragraph under “What is Open Source AI” to define “system” as including both models and weights.
      • Clarified that all components of a larger system must meet the standard.
      • Updated paragraph after the “share” bullet to emphasize this point.  
    • New Section: Open Source Models and Open Source Weights
      • Added descriptions of components for both models and weights in machine learning systems.
      • Edited subsequent paragraphs to eliminate redundancy.
    • Training Data: Defined as a Benefit, Not a Requirement
      • Defined open, public, and unshareable non-public training data.
      • Explained the role of training data in studying AI systems and understanding biases.
      • Emphasized extra requirements for data to advance openness, especially in privacy-first areas like healthcare.
    • Separation of Checklist
      • The Checklist is now a separate document from the main Definition.
      • Fully aligned Checklist content with the Model Openness Framework (MOF).
    • Terminology Changes
      • Replaced “Model” with “Weights” under “Preferred form to make modifications” for consistency.
    • Explicit Reference to Recipients of the Four Freedoms
      • Added specific references to developers, deployers, and end users of AI systems.
    • Credits and References
      • Incorporated credit to the Free Software Definition.
      • Added references to conditions of availability of components, referencing the Open Source Definition.
  • Initial reactions on the forum: 
    • @shujisado praises the updates in version 0.0.9, particularly the decision to separate the checklist from the main document, which clarifies the intent behind OSAID. He also supports the separation of “code” and “weights,” noting that in Japan, “code” clearly falls under copyright, making this distinction logical. He acknowledges revisions in the checklist that consider the importance of complete datasets, even though he disagrees with making datasets mandatory. 
  • Comments on the draft on HackMD
    • @Joshua Gay adds that instead of narrowing the focus to machine-learning systems, the emphasis should be on “parameters” as a whole since weights are just one type of parameter. He suggests a rewrite that highlights making model parameters, such as weights and other settings, available under OSI-approved terms, with examples across various AI models.
      • He further suggests using broader language that covers more AI systems instead of narrower terminology. Specifically, he proposes replacing “Open Source models and Open Source weights” with “Open Source models and Open Source parameters,” and using “AI systems” instead of “machine learning systems.” Additionally, he recommends redefining an AI model to include architecture, parameters like weights and decision boundaries, and inference code, while referring to AI parameters as configuration settings that produce outputs from inputs.
    • Under “Open Source models and Open Source weights”, @shujisado adds that the last paragraph titled “Open Source models and Open Source weights” actually explains “AI model” and “AI weights,” leading to a mismatch between the title and content, and notes that these terms are not used elsewhere in the definition.
    • Under “Preferred form to make modifications to machine-learning systems”, @shujisado suggests some grammatical corrections.
  • Next steps
    • The OSI has recently presented at the following events: 
    • Iterate Drafts: Continue refining drafts with feedback from the worldwide roadshow, considering new dissenting opinions.
    • Review Licenses: Decide on the best approach for reviewing new licenses for datasets, documentation, and model parameters.
    • Enhance FAQ: Continue improving the FAQ to address emerging questions.
    • Post-Stable Release Plan: Establish a process for reviewing and updating future versions of the Open Source AI Definition.
 Explaining the concept of Data information
  •  @Kjetilk points out the legal distinction between using copyrighted works for AI training (reproduction) and incorporating them into publishable datasets, questioning the fairness of allowing exploitative models without compensation while potentially banning those that benefit society.
  • @Shujisado clarifies that compensation for copyrighted works used in AI training is possible for both open source and closed models, distinguishing it from “royalty,” and notes that Japan’s copyright law exempts such uses for machine learning.
    • @Kjetilk reiterates the relevance of “royalty” for compensation in closed, non-published models, suggesting it makes sense under copyright law if required, but if not, it could benefit science and the arts.
Open Source AI Definition Town Hall
  • The slides and recording from the town hall meeting held on August 23, 2024 are available here.
  • The next town hall meeting will be held on September 6th. Sign up for the event here.
Categories: FLOSS Research

Kate & Fonts

Planet KDE - Mon, 2024-08-26 14:45

With the Qt 6.7 release, Qt introduced a wide range of improvements for text rendering and font shaping.

One element of this is that you can now configure OpenType font features.

Many of the 'new cool' programming fonts have such features integrated. That includes both free fonts like Cascadia Code or paid fonts like MonoLisa.

Let's use the features of Cascadia Code as an example; that is the stuff they promote on their GitHub page:

For example if you set the ss01 feature, you get some alternative italics. The same holds for MonoLisa, where it is the ss02 feature. Already that shows: these features are often not very usefully named and very font specific.

Thanks to Waqar and me, with the upcoming KF 6.6 release, one will be able to configure that in Kate and other KTextEditor based applications.

The generic KDE Frameworks font chooser now allows configuring that stuff, and KTextEditor will keep these settings around.

See here enabled alternative italics in Kate with the enhanced font chooser still open (look at the SPDX markers in the code):

A remaining issue is how to best handle the configuration saving in a more generic way. Ideas on how to add that to KConfig without breaking compatibility of the configuration files we write with older applications would be welcome. For KTextEditor we just add an extra key for the features, which will be ignored by old versions.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #464 - Drupal Content Production

Planet Drupal - Mon, 2024-08-26 14:00

Today we are talking about Producing content with Drupal, How Drupal can help content producers, and ways it could be better with guest Jerry Ta. We’ll also cover Stage File Proxy as our module of the week.

For show notes visit: www.talkingDrupal.com/464

Topics
  • Brief overview of Urban Institute using Drupal
  • What are the day to day responsibilities of a content producer
  • Layout Builder or Paragraphs
    • What is your opinion
  • You've been in content production for almost 2 decades, what was your first website editing tool.
  • How long have you been using Drupal
  • What is your number one wish the Drupal community would solve
  • Drupalcon
    • What value do you look for for a content producer
  • What is the hardest part of using Drupal
  • Starshot reaction
  • Predictions for Drupal in 5 years for content producers
Resources

Guests

Jerry Ta - joshmiller

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Josh Miller - joshmiller

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to work on code or configuration changes to your Drupal site in a non-production environment, without having to copy over all the images and other content files? There’s a module for that.
  • Module name/project name:
  • Brief history
    • How old: created in Jan 2011 by netaustin, but recent releases are by Stephen Mustgrave, who listeners will probably recognize from the Needs Review initiative, among his many other Drupal contributions
    • Versions available: 7.x-1.10, 3.0.0-alpha2, and 3.1.0, the last of which works with Drupal 10.3 and 11
  • Maintainership
    • Actively maintained
    • Security coverage
    • Test coverage
    • Documentation - not a lot, but it has been the subject of numerous blog posts over the years
    • Number of open issues: 15 open issues, 2 of which are bugs against the current branch
  • Usage stats:
    • 16,710 sites
  • Module features and usage
    • Once you have Stage File Proxy set up on your non-production site, when the environment gets a request for a content file it doesn’t have, like an image, it will query the production site to create a local copy
    • It also has a mode where those requests are served 301 redirects to their location on the production server, so no files are ever copied
    • Once you have the module installed, you can set the origin website URL using the admin UI, using a drush variable-set command, or you can add a line to your settings.php file.
    • Also, if you have simple HTTP authentication set up on the site you want to pull from (for example using the Shield module), you can add URL-encoded versions of the username and password to the origin URL, and the module will still be able to copy down the files.
    • This module was previously covered in this podcast way back in episode #33, but I thought it was worth bringing back because it is so useful for working on a site locally or across non-production environments
Categories: FLOSS Project Planets

ImageX: Gutenberg Editor: an Alternative Approach to Creating Drupal Content Pages

Planet Drupal - Mon, 2024-08-26 11:36

Authored by Nadiia Nykolaichuk.

It’s great to have a choice of different options when it comes to creating content pages. In addition to Drupal core’s Layout Builder and CKEditor, you are always free to consider installing alternative contributed tools if that’s what resonates with your team’s preferences. One of the prominent examples is Drupal Gutenberg.

Categories: FLOSS Project Planets

Members Newsletter – August 2024

Open Source Initiative - Mon, 2024-08-26 11:27
August 2024 Members Newsletter

The lively conversation about the role of data in building and modifying AI systems will continue as the OSI travels to China this month for AI_dev (August 21-23 in Hong Kong) and Open Source Congress (August 25-27 in Beijing). The OSI has been able to chime in on news stories on the topic, several of which are linked here in the newsletter.

Last month the OSI was at the United Nations in New York City for OSPOs for Good, an event that covered key areas of open source policy, as well as emerging examples of ‘Open Source for good’ from across the globe. I participated in a panel on Open Source AI.

Creating an Open Source AI Definition has been an arduous task over the past couple of years, but we know the importance of creating this standard so the freedoms to use, study, share and modify AI systems can be guaranteed. Those are the core tenets of Open Source, and it warrants the dedicated work it has required. Please read about the people who have played key roles in bringing the Definition to life in our Voices of Open Source AI Definition on the blog.

Stefano Maffulli

Executive Director, OSI 

I hold weekly office hours on Fridays with OSI members: book time if you want to chat about OSI’s activities, if you want to volunteer or have suggestions.

News from the OSI

OSI at the United Nations OSPOs for Good

From the Research and Advocacy program

Earlier this month the Open Source Initiative participated in the “OSPOs for Good” event promoted by the United Nations in NYC. Read more.

The Open Source Initiative joins CMU in launching Open Forum for AI: A human-centered approach to AI development

From the Research and Advocacy program

The Open Source Initiative (OSI) is pleased to share that we are joining the founding team of Open Forum for AI (OFAI), an initiative designed by Carnegie Mellon University (CMU). Read more

GUAC adopts license metadata from ClearlyDefined

From the License and Legal program

The software supply chain just gained some transparency thanks to an integration of the Open Source Initiative (OSI) project, ClearlyDefined, into GUAC (Graph for Understanding Artifact Composition), an OpenSSF project from the Linux Foundation. Read more.

Better identifying conda packages with ClearlyDefined

From the License and Legal program

ClearlyDefined now provides a new harvester implementation for conda, a popular package manager with a large collection of pre-built packages for various domains, including data science, machine learning, scientific computing and more. Read more.

OSI in the news

Can AI even be open source? It’s complicated

OSI at ZDNet

AI can’t exist without open source, but the top AI vendors are unwilling to commit to open-sourcing their programs and data sets. To complicate matters further, defining open-source AI is a messy issue that has yet to be settled. Read more.

Open Source AI: What About Data Transparency?

OSI at The New Stack

AI uses both code and data, and this combination continues to be a challenge for open source, said experts at the United Nations OSPOs for Good Conference. Read more.

A new White House report embraces open-source AI

OSI at ZDNet

The National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI, while emphasizing the need for vigilant risk monitoring. Read more.

With Open Source Artificial Intelligence, Don’t Forget the Lessons of Open Source Software

OSI at CISA

While there is not yet a consensus on the definition of what constitutes “open source AI”, the Open Source Initiative, which maintains the “Open Source Definition” and a list of approved OSS licenses, has been “driving a multi-stakeholder process to define an ‘Open Source AI’”. Read more.

Meta inches toward open source AI with new LLaMA 3.1

OSI at ZDNet

Is Meta’s 405 billion parameter model really open source? Depends on who you ask. Here’s how to try out the new engine for yourself. Read more.

Other news

News from OSI affiliates

News from OpenSource.net

Voices of the Open Source AI Definition

The Open Source Initiative (OSI) is running a series of stories about a few of the people involved in the Open Source AI Definition (OSAID) co-design process.

7th annual OSPO and Open Source Management Survey

The TODO Group and Linux Foundation Research, in partnership with Cisco, NGINX, Open Source Initiative, InnerSource Commons, and CHAOSS, are excited to be launching the 7th annual OSPO and Open Source Management survey! Take survey here.

2024 Open Source Software Funding Survey

This survey tries to better understand how organizations fund, contribute to, and support open source software projects. This survey is a collaboration between GitHub, Inc., the Linux Foundation, and researchers from Harvard University. Take survey here.

Events

Upcoming events

Thanks to our sponsors

New members and renewals
  • Cisco
  • Microsoft
  • Bloomberg
  • SAS
  • Intel
  • Look to the right

Interested in sponsoring, or partnering with, the OSI? Please see our Sponsorship Prospectus and our Annual Report. We also have a dedicated prospectus for the Deep Dive: Defining Open Source AI. Please contact the OSI to find out more about how your company can promote open source development, communities and software.

Support OSI by becoming a member!

Let’s build a world where knowledge is freely shared, ideas are nurtured, and innovation knows no bounds! 

Join the Open Source Initiative!

Categories: FLOSS Research

The Drop Times: 'Drupal at Your Fingertips' Is Designed as a Quick Reference for Experienced Developers: Selwyn Polit

Planet Drupal - Mon, 2024-08-26 10:07
Explore how Selwyn Polit, an experienced Drupal developer, created "Drupal at Your Fingertips" to serve as a quick reference guide for seasoned developers. Learn about the book's focus on providing concise, actionable information and how it continues to evolve with the Drupal ecosystem.
Categories: FLOSS Project Planets

Real Python: How to Install Python on Your System: A Guide

Planet Python - Mon, 2024-08-26 10:00

Installing the latest version of Python on your computer is a common requirement for Python programmers. Fortunately, you’ll have a multitude of installation options. For example, you can download the official Python installer from Python.org, use your operating system’s package manager or app store, and more.

In this tutorial, you’ll focus on official CPython distributions, which are generally the best option for learning to program with the language. However, you’ll also learn about a few other distributions, like the one available on Homebrew for macOS users.

In this tutorial, you’ll learn how to:

  • Check whether a version of Python is installed on your system
  • Install or update to the latest Python on Windows, macOS, and Linux
  • Install Python on mobile devices like phones or tablets
  • Use Python on your browser with online interpreters

This tutorial covers installing the latest Python on the most important platforms or operating systems, such as Windows, macOS, Linux, iOS, and Android. However, it doesn’t cover every existing Linux distribution, which would be a huge task. Instead, you’ll find instructions for the most popular distros.

To get the most out of this tutorial, you should be comfortable using your operating system’s terminal or command line.

Free Bonus: Click here to get a Python Cheat Sheet and learn the basics of Python 3, like working with data types, dictionaries, lists, and Python functions.

Take the Quiz: Test your knowledge with our interactive “Python Installation and Setup” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

Python Installation and Setup

In this quiz, you'll test your understanding of how to install or update Python on your computer. With this knowledge, you'll be able to set up Python on various operating systems, including Windows, macOS, and Linux.

Windows: How to Check or Get Python

In this section, you’ll learn to check whether Python is installed on your Windows operating system (OS) and which version you have. You’ll also explore three installation options that you can use on Windows.

Note: In this tutorial, you’ll focus on installing the latest version of Python in your current operating system (OS) rather than on installing multiple versions of Python. If you want to install several versions of Python in your OS, then check out the Managing Multiple Python Versions With pyenv tutorial. Note that on Windows machines, you’d have to use pyenv-win instead of pyenv.

For a more comprehensive guide on setting up a Windows machine for Python programming, check out Your Python Coding Environment on Windows: Setup Guide.

Checking the Python Version on Windows

To check whether you already have Python on your Windows machine, open a command-line application like PowerShell or the Windows Terminal.

Follow the steps below to open PowerShell on Windows:

  1. Press the Win key.
  2. Type PowerShell.
  3. Press Enter.

Alternatively, you can right-click the Start button and select Windows PowerShell or Windows PowerShell (Admin). In some versions of Windows, you’ll find Terminal or Terminal (admin).

Note: To learn more about your options for the Windows terminal, check out Using the Terminal on Windows.

With the command line open, type in the following command and press the Enter key:

Windows PowerShell
PS> python --version
Python 3.x.z

Using the --version switch will show you the installed version. Note that the 3.x.z part is a placeholder here. On your machine, x and z will be numbers corresponding to the specific version you have installed.

Alternatively, you can use the -V switch:

Windows PowerShell
PS> python -V
Python 3.x.z

Using the python -V or python --version command, you can check whether Python is installed on your system and learn what version you have. If Python isn’t installed on your OS, you’ll get an error message.
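
If Python is installed, you can also confirm the version from inside the interpreter itself. This is a general Python idiom rather than anything specific to this tutorial:

# Check the running interpreter's version from within Python (works on any platform).
import sys

print(sys.version)       # full version string, e.g. "3.x.z (tags/...) [...]"
print(sys.version_info)  # structured info: major, minor, micro, releaselevel, serial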

Knowing the Python Installation Options on Windows

Read the full article at https://realpython.com/installing-python/ »


Categories: FLOSS Project Planets

Ned Batchelder: Coverage branches instead of arcs

Planet Python - Mon, 2024-08-26 09:18

As I mentioned in a few recent posts, I’ve been doing some significant work in coverage.py to take advantage of new capabilities in Python.

Mark Shannon has been improving the sys.monitoring API so that branch coverage can be done with low overhead. I want to take advantage of that in coverage.py, but I needed to do some refactoring work first. The tests were focused on mapping the complete set of code pathways (which I called arcs), but using low-overhead branch monitoring won’t provide those complete pathways. If the tests continued to focus on them, they would fail with sys.monitoring.

But the complete pathways aren’t actually needed. The useful information is where the branches are, and which branches were taken. That can be measured with sys.monitoring. So a first step was to refactor the tests to focus on branches instead of arcs. That took a while, but is now done.
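
As a rough illustration of the direction described here - not coverage.py's actual implementation - the BRANCH event that sys.monitoring already exposes in Python 3.12 reports where a branch is and where it went:

# A minimal sketch, assuming Python 3.12+: record (function, branch offset,
# destination offset) triples via the sys.monitoring BRANCH event.
import sys

TOOL = sys.monitoring.COVERAGE_ID
sys.monitoring.use_tool_id(TOOL, "branch-demo")

branches_taken = set()

def on_branch(code, instruction_offset, destination_offset):
    # Which branch fired, and where it jumped to, as bytecode offsets.
    branches_taken.add((code.co_qualname, instruction_offset, destination_offset))

sys.monitoring.register_callback(TOOL, sys.monitoring.events.BRANCH, on_branch)
sys.monitoring.set_events(TOOL, sys.monitoring.events.BRANCH)

def demo(x):
    return "positive" if x > 0 else "non-positive"

demo(1)
demo(-1)

sys.monitoring.set_events(TOOL, sys.monitoring.events.NO_EVENTS)
sys.monitoring.free_tool_id(TOOL)
print(branches_taken)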

Not needing all those arcs also meant I could simplify the AST-based parser that found the arcs, removing about 150 lines. I suspect there’s more that could be removed. Maybe it will happen over time. Also, the new code.co_branches() method might eventually make it all obsolete.

If you read Coverage at a crossroads on this blog, I talked about using ideas from SlipCover like inserting fake lines with an import hook. Those exotic ideas were appealing in their way, but are no longer needed, and they would have brought a bunch of complexity. With the two new sys.monitoring events, we can get the branch information directly without advanced shenanigans.

There’s more work to do, including attending to incoming bug reports. If you’d like to help, or learn more about any of this, we have a #coverage-py channel in the Python Discord.

Categories: FLOSS Project Planets

The Drop Times: Out-of-the-Box Functionality Survey Reveals the Community's Enthusiasm for Starshot

Planet Drupal - Mon, 2024-08-26 05:34

The Drupal community has taken another step forward under the Starshot Initiative. Recently, the team concluded a survey aimed at pinpointing the most desired out-of-the-box features and contributed modules for the upcoming ‘Drupal CMS’. This survey targeted ambitious marketers as part of the broader Drupal Starshot strategy, resulting in 60 detailed submissions and over 100 feature suggestions. These insights, now available on Drupal.org thanks to Pamela Barone's announcement, will play a crucial role in shaping the platform’s future.

The feedback received from the survey highlights a strong community interest in several key areas. Among the most frequently mentioned were enhancements to page-building tools, SEO capabilities, improved form builders, and content management functionalities. The desire for better security, media management, and multilingual support also stood out as significant themes. Interestingly, while many of these suggestions align with existing development initiatives, the survey also introduced several fresh ideas that are now under consideration by the Drupal leadership team.

Particularly noteworthy are the suggestions for modules that could elevate Drupal’s out-of-the-box experience. Modules like Metatag, Webform, and Admin Toolbar were repeatedly mentioned and are now being evaluated for possible inclusion in future releases. These modules, known for their functionality and ease of use, could significantly enhance the user experience if integrated into the out-of-the-box Drupal CMS offering.

While the survey is not being treated as a direct vote, it serves as a powerful validation tool. The results ensure that the Drupal development tracks are closely aligned with the needs and expectations of its community. As the leadership team assesses these suggestions, they are keenly aware of the balance between innovation and the consistency of user experience that Drupal is known for.

Curious about the detailed findings and how they might shape the next generation of Drupal? You can dive deeper into the survey results here: Community Demands Enhanced Out-of-the-Box Features in DrupalCMS. As the Starshot Initiative continues to gather momentum, the community eagerly awaits the next steps in this exciting journey.

As we turn our attention to the latest from The Drop Times, the focus has been on the ongoing Drupal Association Board Elections. As part of their "Meet the Candidate" campaign, several candidates have shared their visions and plans for Drupal's future.

Matthew Saunders discusses his candidacy in an interview with Alka Elizabeth, a sub-editor at The Drop Times. Focusing on improving governance, fostering inclusivity, and supporting neurodiverse individuals, Matthew outlines his motivations for running for the Drupal Association Board. His ideas provide valuable insights for voters as the election progresses.

Kevin Quillen, Practice Lead at Velir, brings over 16 years of experience to his candidacy. In his interview with Alka Elizabeth, Kevin emphasizes the importance of modernizing Drupal.org, attracting new developers, and enhancing Drupal's global appeal. His vision for the future could significantly impact the platform’s evolution.

Albert Hughes, Product Owner at Stanford University, offers a unique perspective on expanding Drupal’s reach. His candidacy is grounded in his diverse experiences and a strong commitment to innovation. As the election continues, Albert’s ideas for growth and development resonate with many in the community.

In the final installment of The Drop Times' campaign, Alejandro Moreno Lopez, Partner Manager and Developer Relations at Pantheon, shares his journey within the Drupal community. Alejandro is passionate about reducing the Association's dependency on DrupalCon and fostering collaboration and innovation. His interview provides a compelling case for his candidacy as voting continues until September 5th.

Discover why Drupal's latest product will be called 'Drupal CMS' and not just 'Drupal.' An insightful article authored by Sebin A Jacob, Editor-in-Chief of The Drop Times, explores the strategic decision-making, community feedback, and future implications behind this significant naming shift that redefines the way we think about Drupal's evolution.

The Drupal Decoupled project, also known as headless Drupal, has introduced a new feature to simplify the adoption and implementation of decoupled architecture. This project, which separates the back-end content management from the front-end display, now leverages "Recipes" and can be easily adopted as a Composer Project Template. Jesús Manuel Olivas, Co-Founder and CEO of Octahedroid and Composabase, recently announced this update.

Morpht has launched its "Content Recommendation Playbook," showcasing how personalized content recommendations using Recombee's service can enhance user experiences. The playbook explains how to integrate these systems into Drupal and GovCMS to deliver tailored content based on user behavior, boosting engagement. 

During DrupalCon Portland 2022, concerns over the sustainability of free software led to the conception of Drupal Forge, a platform aimed at financially supporting project maintainers. The idea, sparked by Webform developer Jacob Rockowitz, was further developed by Darren Oh, who proposed adding a launch button for trial sites on project pages to generate recurring revenue. While the initiative has garnered interest, challenges remain in implementing and scaling this solution.

Sponsorship opportunities for BADCamp 2024, set for October 24-25 in Oakland, California, are now open, offering extensive visibility to organizations within the Drupal community. With packages ranging from $1,000 to $2,000, sponsors can gain exposure through speaking engagements, branding at summits, and hosting social events.

Chattanooga Open Source Camp, featuring DrupalCamp Chattanooga 2024, seeks sponsors for its November 2nd event at Chattanooga State Community College. Sponsorships range from $20 to $2,000, offering opportunities for businesses to gain visibility within the tech community. In-kind sponsorships are also welcomed, with a total event budget of $6,500.

The Drop Times has been named the official Media Partner for DrupalCamp Pune 2024, set for October 19-20 at Yashada, Pune. This partnership will ensure comprehensive coverage of the event, featuring sessions, workshops, and keynotes from industry leaders. Organized by the Drupalers Association Pune, the camp aims to foster innovation, learning, and networking within the Drupal community.

The Splash Awards will debut in Asia during DrupalCon Singapore 2024, with submissions open until September 27. The prestigious event, recognizing excellence in Drupal web development, will culminate in a ceremony on December 9 at the Garden Ballroom, PARKROYAL Collection Marina Bay.

The Drupal CEO Network and the Drupal Association have extended the deadline for the 2024 Drupal Business Survey to September 4th. This annual survey gathers crucial insights from Drupal business leaders, shaping an anonymized industry report to guide strategic decisions. The results will be unveiled at DrupalCon Barcelona 2024, with discussions set for September 25 and 26.

The Aten Design Group will host an online session on August 28, 2024, at 2:00 PM EDT to discuss the recent release of Drupal 11. Seth Hill, Senior Developer at Aten, will lead the session designed for Drupal site owners, content administrators, and developers who want to learn more about the new version and its potential benefits.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. You can also join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Kazima Abbas
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

FrOScon 2024

Planet KDE - Mon, 2024-08-26 04:45

This year, I attended FrOScon for the first time. FrOScon is the biggest conference about free and open-source software in Germany. It takes place every year over a weekend in Bonn/Siegburg (Germany) and is free to attend.

For the first time, I was not at a conference to staff a KDE stand. My employer had a stand there, and it was a great occasion for me to meet some colleagues as well as fellow KDE and Matrix contributors.

GnuPG Stand

So I spent the majority of my time at the GnuPG stand, discussing many things with Volker, including KDE PIM and the future of KWallet.

I also met many Matrix community members and am excited to attend the Matrix Conference next month.

Matrix Stand

All in all, it was a great conference, and I hope to see more KDE people there next year, maybe even with our own KDE stand.

Categories: FLOSS Project Planets

Python Bytes: #398 Open source makes you rich? (and other myths)

Planet Python - Mon, 2024-08-26 04:00

Topics covered in this episode:

  • Open Source Myths
  • uv 0.3.0 and all the excitement
  • Top pytest Plugins
  • A comparison of hosts / providers for Python serverless functions (aka FaaS)
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=whaXyRCrrtc

About the show

Sponsored by us! Support our work through our courses at Talk Python Training, the pytest courses and community at PythonTest.com, and our Patreon supporters.

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: Open Source Myths

  • Josh Bressers’ Mastodon post kicked off a list of open source myths; feedback and additional myths were compiled into a shared doc.
  • Some favorites:
    • All open source developers live in Nebraska
    • It’s all run by hippies
    • Everything is being rewritten in Rust
    • Features are planned
    • If the source code is available, it’s open source
    • A project with no commits for 12 months is abandoned
    • Many eyes make all bugs shallow
    • Open source has worse UX
    • Open source has better UX
    • Open source makes you rich

Michael #2: uv 0.3.0 and all the excitement (https://astral.sh/blog/uv-unified-python-packaging)

  • Thanks to Skyler Kasko and John Hagen for the emails.
  • Additional write-ups by Simon Willison and Armin Ronacher.
  • End-to-end project management: uv run, uv lock, and uv sync
  • Tool management: uv tool install and uv tool run (aliased to uvx)
  • Python installation: uv python install
  • Script execution: uv can now manage hermetic, single-file Python scripts with inline dependency metadata based on PEP 723.

Brian #3: Top pytest Plugins (https://pythontest.com/top-pytest-plugins/)

  • Inspired by (and assisted by) Hugo’s Top PyPI Packages
  • Write-up: Finding the top pytest plugins
  • BTW, pytest-check has made it to 25.
  • Same day, Jeff Triplett throws my code into Claude 3.5 Sonnet and refactors it.
  • Thanks to Jeff Triplett and Hugo for answering how to add Summary and other info.

Michael #4: A comparison of hosts / providers for Python serverless functions, aka FaaS (https://github.com/hbmartin/comparison-hosts-serverless-cloud-function-faas-for-python)

  • Nice feature matrix of all the options, frameworks, costs, and more
  • The WASM ones look particularly interesting to me.

Extras

Brian:

  • When is the next live episode of Python Bytes? - via arewemeetingyet.com, thanks to Hugo van Kemenade
  • Some more cool projects by Hugo: Python Logos, PyPI Downloads by Python version for various Python tools (in pretty colors), and Python Core Developers over time

Michael:

  • Code in a Castle course event - just a couple of weeks left
  • Ladybird: a truly independent browser (https://ladybird.org)
  • “I’m also interested in your video recording setup, would be nice to have that in the extras too”: OBS Studio, Elgato Stream Deck, Elgato Key Light, DaVinci Resolve

Joke: DevOps Support Group (via Blaise)

  • Hi, my name is Bob
  • Group: Hi Bob
  • It’s been 42 days since I last ssh’d into production.
  • Group: Applause
  • But only 4 days since I accidentally took down the website
  • Someone in back: Oh Bob…
Categories: FLOSS Project Planets

Zato Blog: Integrating with Jira APIs

Planet Python - Mon, 2024-08-26 04:00
Integrating with Jira APIs

2024-08-26, by Dariusz Suchojad

Overview

Continuing in the series of articles about newest cloud connections in Zato 3.2, this episode covers Atlassian Jira from the perspective of invoking its APIs to build integrations between Jira and other systems.

There are essentially two use modes of integrations with Jira:

  1. Jira reacts to events taking place in your projects and invokes your endpoints accordingly via WebHooks. In this case, it is Jira that explicitly establishes connections with and sends requests to your APIs.
  2. Jira projects are queried periodically or as a consequence of events triggered by Jira using means other than WebHooks.

The first case is usually more straightforward to conceptualize - you create a WebHook in Jira, point it to your endpoint and Jira invokes it when a situation of interest arises, e.g. a new ticket is opened or updated. I will talk about this variant of integrations with Jira in a future instalment as the current one is about the other situation, when it is your systems that establish connections with Jira.

The reason why it is more practical to first speak about the second form is that, even if WebHooks are somewhat easier to reason about, they do come with their own ramifications.

To start off, assuming that you use the cloud-based version of Jira (e.g. https://example.atlassian.net), you need to have a publicly available endpoint for Jira to invoke through WebHooks. Very often, this is undesirable because the systems that you need to integrate with may be internal ones, never meant to be exposed to public networks.

Secondly, your endpoints need to have a TLS certificate signed by a public Certificate Authority and they need to be accessible on port 443. Again, both of these are something that most enterprise systems will not allow at all or it may take months or years to process such a change internally across the various corporate departments involved.

Lastly, even if a WebHook can be used, it is not always a given that the initial information that you receive in the request from a WebHook will already contain everything that you need in your particular integration service. Thus, you will still need a way to issue requests to Jira to look up details of a particular object, such as tickets, in this way reducing WebHooks to the role of initial triggers of an interaction with Jira, e.g. a WebHook invokes your endpoint, you have a ticket ID on input and then you invoke Jira back anyway to obtain all the details that you actually need in your business integration.

The end situation is that, although WebHooks are a useful concept that I will write about in a future article, they may very well not be sufficient for many integration use cases. That is why I start with integration methods that are alternative to WebHooks.

Alternatives to WebHooks

If, in our case, we cannot use WebHooks then what next? Two good approaches are:

  1. Scheduled jobs
  2. Reacting to emails (via IMAP)

Scheduled jobs will let you periodically inquire with Jira about the changes that you have not processed yet. For instance, with a job definition as below:

Now, the service configured for this job will be invoked once per minute to carry out any integration works required. For instance, it can get a list of tickets since the last time it ran, process each of them as required in your business context and update a database with information about what has been just done - the database can be based on Redis, MongoDB, SQL or anything else.

Integrations built around scheduled jobs make the most sense when you need to make periodic sweeps across large swaths of business data - these are the "Give me everything that changed in the last period" kind of interactions, where you do not know precisely how much data you are going to receive.

In the specific case of Jira tickets, though, an interesting alternative may be to combine scheduled jobs with IMAP connections:

The idea here is that when new tickets are opened, or when updates are made to existing ones, Jira will send out notifications to specific email addresses and we can take advantage of it.

For instance, you can tell Jira to CC or BCC an address such as zato@example.com. Now, Zato will still run a scheduled job but, instead of connecting with Jira directly, that job will look up unread emails in its inbox ("UNSEEN" per the relevant RFC).

Anything that is unread must be new since the last iteration, which means that we can process each such email from the inbox, in this way guaranteeing that we process only the latest updates and dispensing with the need for our own database of tickets already processed. We can extract the ticket ID or other details from the email, look up its details in Jira and then continue as needed.
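
Outside of Zato, the same idea can be sketched with nothing but the standard library - a minimal illustration of fetching only the unread ("UNSEEN") messages, where the host, login and mailbox below are placeholders rather than Zato configuration:

# A standalone sketch of the UNSEEN-based approach using imaplib;
# the server, credentials and folder are assumed values for illustration.
import imaplib

with imaplib.IMAP4_SSL('imap.example.com') as conn:
    conn.login('zato@example.com', 'app-password')
    conn.select('INBOX')

    # Only messages that have not been read yet, i.e. new since the last run.
    _, data = conn.search(None, 'UNSEEN')

    for num in data[0].split():
        _, msg_data = conn.fetch(num, '(RFC822)')  # fetching also marks the message as seen
        raw_email = msg_data[0][1]
        # .. parse raw_email with the email module and extract the ticket details ..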

All the details of how to work with IMAP emails are provided in the documentation but it would boil down to this:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):

    def handle(self):

        conn = self.email.imap.get('My Jira Inbox').conn

        for msg_id, msg in conn.get():

            # Process the message here ..
            process_message(msg.data)

            # .. and mark it as seen in IMAP.
            msg.mark_seen()

The natural question is - how would the "process_message" function extract details of a ticket from an email?

There are several ways:

  1. Each email has a subject of a fixed form - "[JIRA] (ABC-123) Here goes description". In this case, ABC-123 is the ticket ID.
  2. Each email will contain a summary, such as the one below, which can also be parsed:
Summary: Here goes description
Key: ABC-123
URL: https://example.atlassian.net/browse/ABC-123
Project: My Project
Issue Type: Improvement
Affects Versions: 1.3.17
Environment: Production
Reporter: Reporter Name
Assignee: Assignee Name
  3. Finally, each email will have an "X-Atl-Mail-Meta" header with interesting metadata that can also be parsed and extracted:
X-Atl-Mail-Meta: user_id="123456:12d80508-dcd0-42a2-a2cd-c07f230030e5", event_type="Issue Created", tenant="https://example.atlassian.net"

The first option is the most straightforward and likely the most convenient one - simply parse out the ticket ID and call Jira with that ID on input for all the other information about the ticket. How to do it exactly is presented in the next chapter.
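
For instance, the subject line from the first option can be handled with a short regular expression - a sketch for illustration only, since the exact subject format may vary between Jira configurations:

# Extract the ticket key from a subject such as
# "[JIRA] (ABC-123) Here goes description"; the pattern below is an assumption
# based on the format shown above, not part of Zato's or Jira's API.
import re

subject_key = re.compile(r'\[JIRA\]\s*\((?P<key>[A-Z][A-Z0-9]*-\d+)\)')

def extract_ticket_key(subject):
    match = subject_key.search(subject)
    return match.group('key') if match else None

print(extract_ticket_key('[JIRA] (ABC-123) Here goes description'))  # ABC-123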

Regardless of how we parse the emails, the important part is that we know that we invoke Jira only when there are new or updated tickets - otherwise there would not have been any new emails to process. Moreover, because it is our side that invokes Jira, we do not expose our internal system to the public network directly.

However, from the perspective of the overall security architecture, email is still part of the attack surface so we need to make sure that we read and parse emails with that in view. In other words, regardless of whether it is Jira invoking us or our reading emails from Jira, all the usual security precautions regarding API integrations and accepting input from external resources, all that still holds and needs to be part of the design of the integration workflow.

Creating Jira connections

The above presented the ways in which we can arrive at the step of when we invoke Jira and now we are ready to actually do it.

As with other types of connections, Jira connections are created in Zato Dashboard, as below. Note that you use the email address of a user on whose behalf you connect to Jira but the only other credential is that user's API token previously generated in Jira, not the user's password.

Invoking Jira

With a Jira connection in place, we can now create a Python API service. In this case, we accept a ticket ID on input (called "a key" in Jira) and we return a few details about the ticket to our caller.

This is the kind of service that could be invoked from a service that is triggered by a scheduled job. That is, we would separate the tasks: one service would be responsible for opening IMAP inboxes and parsing emails, and the one below would be responsible for communication with Jira.

Thanks to this loose coupling, we make everything much more reusable - the fact that the services can be changed independently is just one part; the more important aspect is that, with such separation, both of them can be reused by future services as well, without tying them rigidly to this one integration alone.

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.common.typing_ import cast_, dictnone
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.jira_ import JiraClient

# ###########################################################################

@dataclass(init=False)
class GetTicketDetailsRequest(Model):
    key: str

@dataclass(init=False)
class GetTicketDetailsResponse(Model):
    assigned_to: str = ''
    progress_info: dictnone = None

# ###########################################################################

class GetTicketDetails(Service):

    class SimpleIO:
        input = GetTicketDetailsRequest
        output = GetTicketDetailsResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: GetTicketDetailsRequest

        # .. create a reference to our connection definition ..
        jira = self.cloud.jira['My Jira Connection']

        # .. obtain a client to Jira ..
        with jira.conn.client() as client:

            # Cast to enable code completion
            client = cast_('JiraClient', client)

            # Get details of a ticket (issue) from Jira
            ticket = client.get_issue(input.key)

            # Observe that ticket may be None (e.g. invalid key), hence this 'if' guard ..
            if ticket:

                # .. build a shortcut reference to all the fields in the ticket ..
                fields = ticket['fields']

                # .. build our response object ..
                response = GetTicketDetailsResponse()
                response.assigned_to = fields['assignee']['emailAddress']
                response.progress_info = fields['progress']

                # .. and return the response to our caller.
                self.response.payload = response

# ###########################################################################

Creating a REST channel and testing it

The last remaining part is a REST channel to invoke our service through. We will provide the ticket ID (key) on input and the service will reply with what was found in Jira for that ticket.

We are now ready for the final step - we invoke the channel, which invokes the service which communicates with Jira, transforming the response from Jira to the output that we need:

$ curl localhost:17010/jira1 -d '{"key":"ABC-123"}'
{
    "assigned_to": "zato@example.com",
    "progress_info": {
        "progress": 10,
        "total": 30
    }
}
$
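
The same call can also be issued from Python - a small sketch using only the standard library, mirroring the curl invocation above:

# Invoke the REST channel just as the curl example does; the endpoint and
# payload come from the example above.
import json
import urllib.request

request = urllib.request.Request(
    'http://localhost:17010/jira1',
    data=json.dumps({'key': 'ABC-123'}).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))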

And this is everything for today - just remember that this is just one way of integrating with Jira. The other one, using WebHooks, is something that I will go into in one of the future articles.

More resources

➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?

More blog posts
Categories: FLOSS Project Planets

Haruna 1.2.0

Planet KDE - Sun, 2024-08-25 23:00

Haruna version 1.2.0 is out with a new footer style.

Availability of other package formats depends on your distro and the people who package Haruna.

Windows version:

If you like Haruna then support its development: GitHub Sponsors | Liberapay | PayPal

Feature requests and bugs should be posted on bugs.kde.org, but for bugs make sure to fill in the template and provide as much information as possible.

Changelog: 1.2.0
  • Added floating footer/bottom toolbar style with 2 ways to trigger it:
    • on every mouse movement of the video area
    • only when the mouse is in the lower part of the video area
  • Removed the docbook and moved its content to tooltips
  • Middle clicking the playlist scrolls to the playing item
Categories: FLOSS Project Planets

KDE Goals - Our Cumulative Culture

Planet KDE - Sun, 2024-08-25 20:00

Every two years, the KDE community selects three goals that serve as focal points for the entire community's efforts in the coming years. This cyclical process of goal-setting and community-wide focus is a great example of KDE's Cumulative Culture in action.

This concept, typically observed in human societies, refers to the ability to build upon previous knowledge and innovations to create increasingly complex and effective solutions. In KDE's case, each cycle of goals represents a new layer of accumulated wisdom, i.e. new features and more stability.

The First Cycle (2018-2020)

The first cycle of goals laid the groundwork with its focus on community growth, privacy, and usability.

  • Streamlined Onboarding: Focused on attracting and retaining new contributors by making the onboarding process smoother and more engaging.
  • Privacy Software: Prioritized user privacy and security, ensuring KDE software respects user data and complies with security standards.
  • Usability & Productivity: Aimed to enhance the usability and productivity of KDE software, making it powerful yet easy to use.
The Second Cycle (2020-2022)

The second cycle tackled more complex challenges, with goals like Wayland implementation improvements (which laid the foundation for the Plasma 6 release), improving the app ecosystem, and ensuring consistency in design and functionality.

  • Wayland: This task aimed at stabilizing Wayland support across KDE apps.
  • All About the Apps: Improved KDE's app infrastructure, enabling more efficient app delivery and better support services.
  • Improve Consistency across the Board: Ensured uniformity in design and functionality across KDE software, improving usability and reducing redundancy.
The Third Cycle (2022-2024)

The third cycle, which is currently coming to an end, was about progress and adaptation. A focus to include environmental responsibility, operational efficiency, and inclusive design.

  • Sustainable Software: Focused on making KDE software more energy-efficient and environmentally friendly by implementing practices that reduce resource consumption and ensure long-term sustainability.
  • Automate and Systematize Internal Processes: Aimed to streamline KDE’s internal workflows by automating repetitive tasks, adding code tests across projects, and creating a Quality Assurance team, among other improvements.
  • KDE For All: Sought to make KDE software accessible and inclusive for all users.
A New Cycle A Comin' (2024-2026)

Now, as we enter the fourth cycle of the KDE Goals, we see the full power of this cumulative process. Each goal, whether fully achieved or not, contributes to the collective knowledge and capability of the KDE community. Ideas and partial solutions from past cycles become a solid foundation of knowledge and experience that support future efforts.

The community is currently voting on the following proposals for the next KDE Goals cycle that will guide our efforts and shape our focus for the coming years:

KDE Goals at Akademy

The three most voted goals will be announced at Akademy, where there will also be a wrap-up talk about the achievements of the current goals, as well as Birds-of-a-Feather (BoF) sessions with the new goal champions.

Join the Matrix room and keep an eye on the website for the latest KDE Goals updates.

Categories: FLOSS Project Planets

Matt Layman: Layman's Guide to Python Built-in Functions

Planet Python - Sun, 2024-08-25 20:00
Quick Jump List A: abs, aiter, all, anext, any, ascii, B: bin, bool, breakpoint, bytearray, bytes, C: callable, chr, classmethod, compile, complex, D: delattr, dict, dir, divmod E: enumerate, eval, exec, F: filter, float, format, frozenset, G: getattr, globals, H: hasattr, hash, help, hex, I: id, input, int, isinstance, issubclass, iter, L: len, list, locals, M: map, max, memoryview, min, N: next, O: object, oct, open, ord, P: pow, print, property, R: range, repr, reversed, round, S: set, setattr, slice, sorted, staticmethod, str, sum, super, T: tuple, type, V: vars, Z: zip, _: __import__,
Categories: FLOSS Project Planets
