FLOSS Project Planets

PyBites: From Excel to Python and succeeding as a Developer by building your portfolio

Planet Python - Thu, 2023-02-09 13:08

Listen here:

Or watch here:

Welcome back to the Pybites podcast. This week we have an inspirational chat with Juanjo:

– How he started his programming journey and what he finds so compelling about the craft.

– How he fell in love with Python.

– How he overcame tutorial paralysis.

– How PDM helped him improve his skills and the positive effect it’s having on his daily work going forward.

– How important succeeding as a developer is for him.

– How he coped with imposter syndrome as he grew as a developer.

– Tips for people aspiring to become software developers who want to make a greater impact using Python.

– The importance of choosing a good community as your support system towards your goal.

“Harness the power of the long term achievements by focusing on the short term actions”

We also celebrate wins and share what we’re reading (books linked below).

– PDM program
– Connect with Juanjo on LinkedIn or Slack

– Dynamic Economic Systems
– Why Stock Markets Crash 
– Crafting Test-Driven Software with Python 

Find your strength! We hope you enjoy this episode and reach out if you have any feedback: info@pybit.es

Categories: FLOSS Project Planets

PyCharm: “Building the tooling I wish I’d had”. An Interview With Charlie Marsh

Planet Python - Thu, 2023-02-09 10:03

Python has a rich ecosystem of quality, mature tooling: linting, formatting, type checking, and the like. Each has decent performance, but what if the tooling was fast? Really fast – as in, instantaneous?

This is the argument posed by Charlie Marsh when he introduced Ruff: a linter with the engine written in Rust. The performance numbers were incredible from the start, as was the reception in the Python community. Ruff is developing quickly – not just filling in the details, but expanding beyond just linting.

PyCharm is hosting Charlie for a February 14th webinar. We caught up with Charlie to collect some background on him and the project.

Register for the webinar

Quick hit: Why should people care about Ruff?

Ruff’s “flagship feature” is performance – it aims to be orders of magnitude faster than existing tools. Even if you don’t think of your existing linter as slow, you might be surprised by how different it feels to use Ruff. Even on very large projects, Ruff can give you what is effectively an instant feedback loop.

But even beyond performance, I think Ruff offers something pretty different: it’s a single tool that can replace dozens of existing tools, plugins, and packages – it’s one tool to learn and configure that gives you access to hundreds of rules and automated code transformations, with new capabilities shipping every day.

To get a detailed overview of Ruff, check out this recent Talk Python podcast episode.

Now, some introductions: Tell us about Charlie Marsh.

I’ve kind of inadvertently spent my career jumping between programming ecosystems. At Khan Academy, I worked professionally on web, Android, iOS, and with Python on the back-end. At Spring Discovery, I joined as the second engineer and led the development of our software, data, and machine learning platforms, which meant mostly Python with frequent detours into web (and, later on, Rust).

Moving between these ecosystems has really influenced how I think about tooling – I see something that the web does well, and I want to bring it to Python, or vice versa. Ruff is based on a lot of those observations, and motivated by a lot of my own experiences at Spring – it’s the tooling I wish I’d had.

Outside of work: I live in Brooklyn, NY with my wife and four-month-old son.

The JavaScript world has embraced fast tooling. Can you describe the what/why/how?

From my perspective, the first project that comes to mind is esbuild, a fast JavaScript “bundler” written in Go. I always loved this line from their homepage (“The main goal of the esbuild bundler project is to bring about a new era of build tool performance”), because along with being fast, esbuild was able to change expectations around how fast tooling could be.

Later on came swc, a TypeScript and JavaScript compiler written in Rust. And since then, we’ve seen Turbopack, Parcel, Bun, and more, all of which contain native implementations of functionality that was once implemented in pure JavaScript.

Ignoring the details of what these tools actually do, the higher-level thesis is that web tooling doesn’t have to be written in JavaScript. Instead, it can be written in lower-level, more performant languages, with users reaping the benefits of greatly-increased performance.

(We’ve seen this trend outside of the JavaScript world too. For example, the rubyfmt autoformatter was recently rewritten in Rust.)

How does this translate to the what/why/how for Python?

Most Python tooling is written in Python. There are of course exceptions: Mypy is compiled to a C extension via Mypyc; Pyright is written in Node; the scientific Python stack like NumPy is written in C and other languages; much of CPython itself is written in C and highly optimized; etc. But if you look at, for example, the existing linters, or the modal popular Python developer tool, it’s probably written in Python.

That’s not meant as a criticism – I’m not a Rust maximalist. That is: I don’t believe that every piece of software ever should be rewritten in Rust. (If I did, it’d be a strange choice to work on Python tooling!) But if you buy into the lessons learned from the web ecosystem, it suggests that there’s room to innovate on Python tooling by, in some cases, exploring implementations in more performant languages, and exposing those implementations via Python interfaces.

And if you accept that premise, then Rust is a natural fit, since Python has a really good story when it comes to integrating and interoperating with Rust: you can ship pure Rust and mixed Rust-Python projects to PyPI using Maturin, and your users can install them with pip just like any other Python package; you can implement your “Python” library in Rust, and expose it on the Python side with PyO3. It feels magical, and my experience with those tools at Spring Discovery was a big part of why I considered building a Rust-based Python linter in the first place.

While the Rust-Python community still feels nascent in some ways, I think Ruff is part of a bigger trend here. Polars is another good example of this kind of thinking, where they’ve built an extremely performant DataFrame library in Rust, and exposed it with Python bindings.

You’ve been on a performance adventure. What has surprised you?

Ideas are great, but benchmarks are where they meet reality, and are either proven or disproven. Seemingly small optimizations can have a significant impact. However, not all apparent optimizations end up improving performance in practice.

When I have an idea for an optimization, my goal is always to benchmark it as quickly as possible, even if it means cutting corners, skipping cases, writing messy code, etc. Sometimes, the creative and exciting ideas make no measurable difference. Other times, a rote change can move the needle quite a bit. You have to benchmark your changes, on “real” code, to be certain.

Another project-related tension that I hadn’t anticipated is that, if you really care about performance, you’re constantly faced with decisions around how to prioritize. Almost every new feature will reduce performance in some way, since you’re typically doing more work than you were before. So what’s the acceptable limit? What’s the budget? If you make something slower, can you speed up something else to balance the scales?

Is it true you might be thinking beyond linting?

It’s absolutely true! I try to balance being open about the scope of my own interests against a fear of overcommitting and overpromising.

But with that caveat… My dream is for Ruff to evolve into a unified linter, autoformatter, and type checker – in short, a complete static analysis toolchain. There are significant benefits to bundling all of that functionality: you can do less repeated work, and each tool can do a better job than if any of them were implemented independently.

I think we’re doing a good job with the linting piece, and I’ve been starting to work on the autoformatter. I’ll admit that I don’t know anything about building a type checker, except that it’s complicated and hard, so I consider that to be much further out. But it’s definitely on my mind.

You’re a PyCharm user. We also think a lot about tooling. What’s your take on the Python world’s feelings about tooling?

I talk to a lot of people about Python tooling, and hear a lot of complaints – but those complaints aren’t always the same.

Even still, I look back just a few years and see a lot of progress, both technologically and culturally – better tools, better practices (e.g., autoformatters, lockfiles), PEPs that’ve pushed standards forward. So I try to remain optimistic, and view every complaint as an opportunity.

On a more specific note: there’s been a lot of discussion around packaging lately, motivated by the Python Packaging Survey that the PSF facilitated. (Pradyun Gedam wrote a nice blog post in response.) One of the main critiques was around the amount of fragmentation in the ecosystem: you need to use a bunch of different tools, and there are multiple tools to do any one job. The suggestion of consolidating a lot of that functionality into a single, blessed tool (like Rust’s cargo) came up a few times.

I tend to like bundling functionality; but I also believe that competition can push tooling forward. You see this a lot in the web ecosystem, where npm, yarn, pnpm, and bun are all capable of installing packages. But they all come with different tradeoffs.

I’d like to see tools in the Python ecosystem do a better job of articulating those tradeoffs. Python is used by so many different audiences and userbases, for so many different things. Who’s your target user? Who’s not? What tradeoffs are they making by choosing your tool over another?

Register for the webinar

To help fill the seats: give us a teaser for what folks will see in the webinar.

I’d like to give viewers a sense for what it feels like to use Ruff, and the kinds of superpowers it can give you as a developer – not just in terms of performance, but code transformation, and simplicity of configuration too.

Categories: FLOSS Project Planets

Jacob Rockowitz: Is there no future for the Schema.org Blueprints module?

Planet Drupal - Thu, 2023-02-09 06:40

In the last three months of 2022, I built out the Schema.org Blueprints module. As of last week, there are finally two reported installs after hundreds of hours of work and thought. I am unsure if anyone’s using the module or even understands its use case and goals.

The Schema.org Blueprints module has 49 stars, 5 open issues with 33 tickets, and 113 passing tests. The project page is descriptive, with links to documentation. Every sub-module, of which there are many, has an up-to-date README.md file. There are half a dozen screencasts and presentations, with a few other people posting videos and discussions on YouTube. On the Talking Drupal podcast we discussed the module's history, use cases, and its future. My organization is only beginning to plan for a gradual migration to a decoupled Drupal instance using the Schema.org Blueprints module. Lastly, I submitted a session proposal to discuss the module at DrupalCon Pittsburgh.

After all my effort, this project could sit idle on Drupal.org with the nightly automated tests running until an API change or a new version of Drupal core suddenly breaks the tests.

Did I do something wrong, or did I waste my time?

Before writing this blog post and everything I mentioned above, I thought a need existed; however, the lack of activity I’m seeing means I need to ask these hard questions. I’m hoping to be able to provide answers, document the module’s status, understand better what has been completed, and ask what is needed to move forward.

As a developer, "wasting...Read More

Categories: FLOSS Project Planets

QObjects, Ownership, propagate_const and C++ Evolution

Planet KDE - Thu, 2023-02-09 05:00

A very common implementation pattern for QObject subclasses is to declare their child QObjects as data members of type “pointer to child.” Raise your hand… no, keep your hand on your computer input device 🙂 Nod if you have ever seen code like this (and maybe even written code like this yourself):

class MyWidget : public QWidget
{
    Q_OBJECT
public:
    explicit MyWidget(QWidget *parent = nullptr);

private:
    DataModel *m_dataModel;
    QTreeView *m_view;
    QLineEdit *m_searchField;
};

A fairly common question regarding this pattern is: “Why are we using (raw) pointers for data members”?

Of course, if we are just accessing an object that we don’t own/manage, it makes perfect sense to simply store a pointer to it (so that we’re actually able to use the object). A raw pointer is actually the Modern C++™ design here, to deliberately express the lack of ownership.

Ownership Models

The answer becomes slightly more nuanced when MyWidget actually owns the objects in question. In this case, there are basically two implementation designs:

  1. As shown above, use pointers to the owned objects. Qt code would typically still use raw pointers in this case, and rely on the parent/child relationship in order to manage the ownership. In other words, we will use the pointers to access the objects but not to manage their lifetime.

     class MyWidget : public QWidget
     {
         Q_OBJECT
     public:
         explicit MyWidget(QWidget *parent = nullptr);

     private:
         DataModel *m_dataModel;
         QTreeView *m_view;
         QLineEdit *m_searchField;
     };

     MyWidget::MyWidget(QWidget *parent)
         : QWidget(parent)
     {
         // Children parented to `this`
         m_dataModel = new DataModel(this);
         m_view = new QTreeView(this);
         m_searchField = new QLineEdit(this);
     }

    This approach makes developers familiar with Modern C++ paradigms slightly uncomfortable: a bunch of “raw news” in the code, no visible deletes — eww! Sure, you can replace the raw pointers with smart pointers, if you wish to. But that’s a discussion for another blog post.

  2. Another approach is to declare the owned objects as…objects, and not pointers:

     class MyWidget : public QWidget
     {
         Q_OBJECT
     public:
         explicit MyWidget(QWidget *parent = nullptr);

     private:
         DataModel m_dataModel; // not pointers
         QTreeView m_view;
         QLineEdit m_searchField;
     };

     MyWidget::MyWidget(QWidget *parent)
         : QWidget(parent)
     {
     }

    This makes it completely clear that the lifetime of those objects is tied to the lifetime of MyWidget.

To Point, or Not to Point?

“So what is the difference between the two approaches?”, you may be wondering. That is just another way of asking: “Which one is better, and which one should I use in my code?”

The answers to this question involve a lot of interesting aspects, such as:

  • Using pointers will significantly increase the number of memory allocations: we are going to do one memory allocation per child object. The irony here is that, most of the time, those child objects are, themselves, pimpl’d. Creating a QLineEdit object will, on its own, already allocate memory for its private data; we’re compounding that allocation with another one. Eventually, in m_searchField, we’ll store a pointer to a heap-allocated…”pointer” (the QLineEdit object, itself, which just contains the pimpl pointer) that points to another heap-allocated private object (QLineEditPrivate). Yikes!
  • Pointers allow the user to forward declare the pointed-to datatypes in MyWidget’s header. This means that users of MyWidget do not necessarily have to include the headers that define QTreeView, QLineEdit, etc. This improves compilation times.
  • Pointers allow you to establish “grandchildren” and similar, not just direct children. A grandchild is going to be deleted by someone else (its parent), and not directly by our MyWidget instances. If that grandchild is a sub-object of MyWidget (like in the second design), this will mean destroying a sub-object via delete, and that’s bad.
  • Pointers force (or, at least, should force) users to properly parent the allocated objects. This has an impact in a few cases, for instance if one moves the parent across threads. When using full objects as data members, it’s important to remember to establish a parent/child relationship by parenting them.

     class MyObject : public QObject
     {
         Q_OBJECT
         QTimer m_timer; // timer as sub-object
     public:
         MyObject(QObject *parent = nullptr)
             : QObject(parent)
             , m_timer(this) // remember to do this...!
         {}
     };

     MyObject *obj = new MyObject;
     obj->moveToThread(anotherThread); // ...or this will likely break

    Here the parent/child relationship is not going to be used to manage memory, but only to keep the objects together when moving them between threads (so that the entire subtree is moved by moveToThread).

So, generally speaking, using pointers seems to offer more advantages than disadvantages, and that is why they are so widely employed by developers that use Qt.

Const Correctness

All this is good and everything — and I’ve heard it countless times. What does all of this have to do with the title of the post? We’re getting there!

A consideration that I almost never hear as an answer to the pointer/sub-object debate is const correctness.

Consider this example:

class MyWidget : public QWidget
{
    Q_OBJECT
    QLineEdit m_searchField; // sub-object
public:
    explicit MyWidget(QWidget *parent = nullptr)
        : QWidget(parent)
    {
    }

    void update() const
    {
        m_searchField.setText("Search"); // ERROR
    }
};

This does not compile; update() is a const method and, inside of it, m_searchField is a const QLineEdit. This means we cannot call setText() (a non-const method) on it.

From a design perspective, this makes perfect sense: in a const method we are not supposed to modify the “visible state” of *this, and the “visible state” of QLineEdit logically belongs to the state of the *this object, as it’s a sub-object.

However, we have just discussed that the design of using sub-objects isn’t common; Qt developers usually employ pointers. Let’s rewrite the example:

class MyWidget : public QWidget
{
    Q_OBJECT
    QLineEdit *m_searchField; // pointer
public:
    explicit MyWidget(QWidget *parent = nullptr)
        : QWidget(parent)
    {
        m_searchField = new QLineEdit(this);
    }

    void update() const
    {
        m_searchField->setText("Search");
    }
};

What do you think happens here? Is the code still broken?

No, this code compiles just fine. We’re modifying the contents of m_searchField from within a const method.

Pointer-to-const and Const Pointers

This is not entirely surprising to seasoned C++ developers. In C++, pointers and references are shallow const: a const pointer can point to a non-const object and allow the user to mutate it.

Constness of the pointer and constness of the pointed-to object are two independent qualities:

// Non-const pointer, pointing to non-const object
QLineEdit *p1 = ~~~;
p1 = new QLineEdit; // OK, can mutate the pointer
p1->mutate();       // OK, can mutate the pointed-to object

// Const pointer, pointing to non-const object
QLineEdit *const p2 = ~~~;
p2 = new QLineEdit; // ERROR, cannot mutate the pointer
p2->mutate();       // OK, can mutate the pointed-to object

// Non-const pointer, pointing to const object
const QLineEdit *p3 = ~~~;
p3 = new QLineEdit; // OK, can mutate the pointer
p3->mutate();       // ERROR, cannot mutate the pointed-to object

// Non-const pointer, just like p3, but using East Const (Qt uses West Const)
QLineEdit const *p3b = ~~~;

// Const pointer, pointing to const object
const QLineEdit *const p4 = ~~~;
p4 = new QLineEdit; // ERROR, cannot mutate the pointer
p4->mutate();       // ERROR, cannot mutate the pointed-to object

// Const pointer, just like p4, using East Const
QLineEdit const *const p4b = ~~~;

This is precisely what is happening in our update() method. In there, *this is const, which means that m_searchField is a const pointer. (In code, this type would be expressed as QLineEdit * const.)

I’ve always felt mildly annoyed by this situation and have had my share of bugs due to modifications of sub-objects from const methods. Sure, most of the time, it was entirely my fault for calling the wrong function on the sub-object in the first place. But some of the time, it has led me to write “accidental” const functions with visible side-effects (like the update() function above)!

Deep-const Propagation

The bad news is that there isn’t a solution for this issue inside the C++ language. The good news is that there is a solution in the “C++ Extensions for Library Fundamentals”, a set of (experimental) extensions to the C++ Standard Library. This solution is called std::experimental::propagate_const (cppreference, latest proposal at the time of this writing).

propagate_const acts as a pointer wrapper (wrapping both raw pointers and smart pointers) and will deeply propagate constness. If a propagate_const object is const itself, then the pointed-to object will be const.

// Non-const wrapper => non-const pointed-to object
propagate_const<QLineEdit *> p1 = ~~~;
p1->mutate(); // OK

// Const wrapper => const pointed-to object
const propagate_const<QLineEdit *> p2 = ~~~;
p2->mutate(); // ERROR

// Const reference to the wrapper => const pointed-to object
const propagate_const<QLineEdit *> &const_reference = p1;
const_reference->mutate(); // ERROR

This is great, because it means that we can use propagate_const as a data member instead of a raw pointer and ensure that we can’t accidentally mutate a child object:

class MyWidget : public QWidget
{
    Q_OBJECT
    std::experimental::propagate_const<QLineEdit *> m_searchField; // drop-in replacement
public:
    explicit MyWidget(QWidget *parent = nullptr)
        : QWidget(parent)
    {
        m_searchField = new QLineEdit(this);
    }

    void update() const
    {
        m_searchField->clear(); // ERROR!
    }
};

Again, if you’re a seasoned C++ developer, this shouldn’t come as a surprise to you. My colleague Marc Mutz wrote about this a long time ago, although in the context of the pimpl pattern. KDAB’s R&D efforts in this area led to things such as a deep-const alternative to QExplicitlySharedDataPointer (aptly named QExplicitlySharedDataPointerV2) and QIntrusiveSharedPointer.

propagate_const and Child QObjects

As far as I know, there hasn’t been much research about using propagate_const at large in a Qt-based project in order to hold child objects. Recently, I’ve had the chance to use it in a medium-sized codebase and I want to share my findings with you.

Compiler Support

Compiler support for propagate_const is still a problem, notably because MSVC does not implement the Library Fundamentals TS. It is, however, shipped by GCC and Clang, and there are third-party implementations available, including one in KDToolBox (keep reading!).

Source Compatibility

If one already has an existing codebase, one may want to start gradually adopting propagate_const by replacing existing usages of raw pointers. Unfortunately, in a lot of cases, propagate_const isn’t simply a drop-in replacement and will cause a number of source breaks.

What I’ve discovered (at the expense of my own sanity) is that some of these incompatibilities are caused by implementations relying on niche C++ features in order to stay compatible with older C++ versions; these accidentally introduce quirks, and some are compounded by compiler bugs.

Here are all the nitty-gritty details; feel free to skim over them 🙂

Broken Conversions to Superclasses

Consider this example:

class MyWidget : public QWidget
{
    Q_OBJECT
    QLineEdit *m_searchField;
public:
    explicit MyWidget(QWidget *parent = nullptr)
        : QWidget(parent)
    {
        // ... set up layouts, etc. ...
        m_searchField = new QLineEdit(this);
        layout()->addWidget(m_searchField); // this is addWidget(QWidget *)
    }
};

Today, this works just fine. However, this does not compile:

class MyWidget : public QWidget
{
    Q_OBJECT
    // change to propagate_const ...
    std::experimental::propagate_const<QLineEdit *> m_searchField;
public:
    explicit MyWidget(QWidget *parent = nullptr)
        : QWidget(parent)
    {
        // ... set up layouts, etc. ...
        m_searchField = new QLineEdit(this);
        layout()->addWidget(m_searchField);
        // ^^^ ERROR, cannot convert propagate_const to QWidget *
    }
};

C++ developers know the drill: smart pointer classes are normally not a 1:1 replacement for raw pointers. Most smart pointer classes do not implicitly convert to raw pointers, for very good reasons. In situations where raw pointers are expected (for instance, addWidget in the snippet above wants a parameter of type QWidget *), one has to be slightly more verbose, for instance, by calling m_searchField.get().

Here, I was very confused. propagate_const was meant to be a 1:1 replacement. Here are a couple of quotes from the C++ proposal for propagate_const:

The change required to introduce const-propagation to a class is simple and local enough to be enforced during code review and taught to C++ developers in the same way as smart-pointers are taught to ensure exception safety.

operator value*

When T is an object pointer type operator value* exists and allows implicit conversion to a pointer. This avoids using get to access the pointer in contexts where it was unnecesary before addition of the propagate_const wrapper.

In other words, it has always been a design choice to keep source compatibility in cases like the one above! propagate_const<T *> actually has a conversion operator to T *.

So why doesn’t the code above work? Here’s a reduced testcase:

std::experimental::propagate_const<Derived *> ptr;

Derived *d1 = ptr;                      // Convert precisely to Derived *: OK
Base *b1 = ptr;                         // Convert to a pointer to a base: ERROR
Base *b2 = static_cast<Derived *>(ptr); // OK
Base *b3 = static_cast<Base *>(ptr);    // ERROR
Base *b4 = ptr.get();                   // OK

At first, this almost caused me to ditch the entire effort; Qt uses inheritance very aggressively and we pass pointers to derived classes to functions taking pointers to base classes all the time. If that required sprinkling calls to .get() (or casts) everywhere, it would have been a massive refactoring. This is something that a tool like clazy can automate for you; but that’s a topic for another time.

Still, I couldn’t find a justification for the behavior shown above. Take a look at this testcase where I implement a skeleton of propagate_const, focusing on the conversion operators:

template <typename T>
class propagate_const
{
public:
    using element_type = std::remove_reference_t<decltype(*std::declval<T>())>;

    operator element_type *();
    operator const element_type *() const;
};

propagate_const<Derived *> ptr;
Base *b = ptr; // OK

This now compiles just fine. Let’s make it 100% compatible with the specification by adding the necessary constraints:

template <typename T>
class propagate_const
{
public:
    using element_type = std::remove_reference_t<decltype(*std::declval<T>())>;

    operator element_type *()
        requires (std::is_pointer_v<T> || std::is_convertible_v<T, element_type *>);

    operator const element_type *() const
        requires (std::is_pointer_v<T> || std::is_convertible_v<const T, const element_type *>);
};

propagate_const<Derived *> ptr;
Base *b = ptr; // still OK

This still works flawlessly. So what’s different with propagate_const as shipped by GCC or Clang?

libstdc++ and libc++ do not use constraints (as in, the C++20 feature) on the conversion operators, because they want propagate_const to also work in earlier C++ versions. Instead, they use the pre-C++20 way of making overloads available only when certain conditions are satisfied: SFINAE.

The conversion operators in libstdc++ and libc++ are implemented like this:

template <typename T>
class propagate_const
{
public:
    using element_type = std::remove_reference_t<decltype(*std::declval<T>())>;

    template <typename U = T,
              std::enable_if_t<(std::is_pointer_v<U> || std::is_convertible_v<U, element_type *>), bool> = true>
    operator element_type *();

    template <typename U = T,
              std::enable_if_t<(std::is_pointer_v<U> || std::is_convertible_v<const U, const element_type *>), bool> = true>
    operator const element_type *() const;
};

propagate_const<Derived *> ptr;
Base *b = ptr; // ERROR

This specific implementation is broken on all major compilers, which refuse to use the operators for any conversion other than precisely to element_type *.

What’s going on? The point is that converting propagate_const to Base * is a user-defined conversion sequence: first, we convert the propagate_const to Derived * through the conversion operator; then, we perform a pointer conversion from Derived * to Base *.

But here’s what The Standard says when templates are involved:

If the user-defined conversion is specified by a specialization of a conversion function template, the second standard conversion sequence shall have exact match rank.

That is, we cannot “adjust” the return type of a conversion function template, even if the conversion would be implicit.

The takeaway is: SFINAE on conversion operators is user-hostile.

Restoring the Conversions Towards Superclasses

We can implement a workaround here by deviating a bit from the specification. If we add conversions towards any pointer type that T implicitly converts to, then GCC and Clang (and MSVC) are happy:

template <typename T>
class non_standard_propagate_const // not the Standard one!
{
public:
    using element_type = std::remove_reference_t<decltype(*std::declval<T>())>;

    // Convert to "any" pointer type
    template <typename U, std::enable_if_t<std::is_convertible_v<T, U *>, bool> = true>
    operator U *();

    template <typename U, std::enable_if_t<std::is_convertible_v<const T, const U *>, bool> = true>
    operator const U *() const;
};

non_standard_propagate_const<Derived *> ptr;
Base *b = ptr; // OK

This is by far the simplest solution; it is, however, non-standard.

I’ve implemented a better solution in KDToolBox by isolating the conversion operators in base classes and applying SFINAE on the base classes instead.

For instance, here’s the base class that defines the non-const conversion operator:

// Non-const conversion
template <typename T,
          bool = std::disjunction_v<
              std::is_pointer<T>,
              std::is_convertible<T, propagate_const_element_type<T> *>>>
struct propagate_const_non_const_conversion_operator_base
{
};

template <typename T>
struct propagate_const_non_const_conversion_operator_base<T, true>
{
    constexpr operator propagate_const_element_type<T> *();
};

Then, propagate_const<T> will inherit from propagate_const_non_const_conversion_operator_base<T> and the non-template operator will be conditionally defined.

Deletion and Pointer Arithmetic

Quoting again from the original proposal for propagate_const:

Pointer arithemtic [sic] is not supported, this is consistent with existing practice for standard library smart pointers.

… or is it? Herb Sutter warned us about having implicit conversions to pointers, as they would also enable pointer arithmetic.

Consider again the C++20 version:

template <typename T>
class propagate_const
{
public:
    using element_type = std::remove_reference_t<decltype(*std::declval<T>())>;

    operator element_type *()
        requires (std::is_pointer_v<T> || std::is_convertible_v<T, element_type *>);

    operator const element_type *() const
        requires (std::is_pointer_v<T> || std::is_convertible_v<const T, const element_type *>);
};

propagate_const<SomeClass *> ptr;
SomeClass *ptr2 = ptr + 1; // OK!

The arithmetic operators are not deleted for propagate_const. This means that the implicit conversion operators to raw pointers can (and will) be used, effectively enabling pointer arithmetic — something that the proposal said it did not want to support!

non_standard_propagate_const (as defined above), instead, does not support pointer arithmetic, as it needs to deduce the pointer type to convert to, and that deduction is not possible.

On a similar note, what should delete ptr; do, if ptr is a propagate_const object? Yes, there is some Qt-based code that simply deletes objects in an explicit way. (Luckily, it’s very rare.)

Again GCC and Clang reject this code, but MSVC accepts it:

template <typename T>
class propagate_const
{
public:
    using element_type = std::remove_reference_t<decltype(*std::declval<T>())>;

    operator element_type *()
        requires (std::is_pointer_v<T> || std::is_convertible_v<T, element_type *>);

    operator const element_type *() const
        requires (std::is_pointer_v<T> || std::is_convertible_v<const T, const element_type *>);
};

propagate_const<SomeClass *> ptr;
delete ptr; // ERROR on GCC, Clang; OK on MSVC

It is not entirely clear to me which compiler is right, here. A delete expression requires us to convert its argument to a pointer type, using what the Standard defines as a contextual implicit conversion. Basically, the compiler needs to search for a conversion operator that can convert a propagate_const to a pointer and find only one such conversion operator. Yes, there are two available, but shouldn't overload resolution select the best one? According to MSVC, yes; according to GCC, no.

(Note that this wording has been introduced by N3323, which allegedly GCC does not fully implement. I have opened a bug report against GCC.)

Anyways, remember what we’ve just learned: we are allowed to perform pointer arithmetic! That means that we can “fix” these (rare) usages by deploying a unary operator plus:

propagate_const<SomeClass *> ptr;
delete +ptr; // OK! (ewww...)

That is pretty awful. I've decided instead to take my time and, in my project, refactor those "raw deletes" to wrap smart pointers:

propagate_const<std::unique_ptr<SomeClass>> ptr; // better

A Way Forward

In order to support propagate_const on all compilers, and work around the limitations of the upstream implementations explained above, I have reimplemented propagate_const in KDToolBox, KDAB’s collection of miscellaneous useful C++ classes and stuff. You can find it here.

I’ve, of course, also submitted bug reports against libstdc++ and libc++. It was promptly fixed in GCC (GCC 13 will ship with the fix). Always report bugs upstream!

The Results
  • You can start using propagate_const and similar wrappers in your Qt projects, today. Some source incompatibilities are unfortunately present, but can be mitigated.
  • An implementation of propagate_const is available in KDToolBox. You can use it while you wait for an upgraded toolchain with the bugs fixed. 🙂
  • C++17 costs more. I cannot emphasize this enough. Not using the latest C++ standards costs more in development time, design, and debugging. While both GCC and MSVC (and upstream Clang) have very good C++20 support, Apple Clang is still lagging behind.
  • SFINAE on conversion operators is user-hostile. The workarounds are even worse. Use concepts and constraints instead.

Thank you for reading!

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post QObjects, Ownership, propagate_const and C++ Evolution appeared first on KDAB.

Categories: FLOSS Project Planets

The Drop Times: Drupal 10 Development Cookbook Releasing Tomorrow

Planet Drupal - Thu, 2023-02-09 04:09
To be released on 10th February 2023, the Drupal 10 Development Cookbook is Matt Glaman's third book. It touches upon various areas of Drupal 10 to level up your site building and development chops. Kevin Quillen co-authors this edition.
Categories: FLOSS Project Planets

The Drop Times: Upcoming Interview with Adam Varn

Planet Drupal - Thu, 2023-02-09 03:39
Our next interview of the series will be with Adam Varn. Stay tuned for the release of the interview on 10th February 2023!
Categories: FLOSS Project Planets

Codementor: Pointers in C Language

Planet Python - Thu, 2023-02-09 00:37
Basics of C programming series continued...
Categories: FLOSS Project Planets

Armin Ronacher: Everybody is More Complex Than They Seem

Planet Python - Wed, 2023-02-08 19:00

This year I decided that I want to share my most important learnings about engineering, teams and quite frankly personal mental health. My hope is that those who want to learn from me find it useful.

When I wake up in the morning I usually have something to do. That doesn't necessarily mean I will do that, but it grounds me. When I was 21 my existence was quite monochromatic. I went to bed in the evening and I continued my work in the morning where I left it off the day before. And like a good performing stock, through that I went “up and to the right”. Probably all the metrics I would have used to measure my life were trending in only one direction and life was good. Work defined me and by my own standards and enough people that I interacted with I was successful.

But this monochromatic experience eventually becomes a lot more complex because you're forced to make choices in life. When I went to conferences or interacted with other people online it was impossible not to compare myself in one way or another. My expectations and ambitions were steered by the lives of others around me. As much as I wanted to not compare myself to others, I did. Social media in particular is an awful way to do that. Everybody self-censors. You will see much more of people's brightest sides of their lives than of all the things that go wrong.

However even armed with that knowledge, it took me a long time to figure out how to think about myself in that. In the most trivial of all comparisons you take yourself and you plot yourself against other people of similar age that you aspire to and then measure yourself against in some form and then you keep doing that over time.

There are some metrics that are somewhat obvious: your salary or income, your wealth, your debts, how much money you're able to spend without thinking about it. These are somewhat obvious and usually you're on some sort of trajectory about all of these. However there are less obvious things that are harder to measure. For instance if you are married, if you have children, what clout you have in your field or at work, if you are doing well mentally or physically.

I realized more than once that for me to be happy, I have to balance out a lot of these and sometimes they are at odds with each other, and sometimes you don't know what you have been missing until after you made a decision. I did not know I wanted to be a father until we decided to become parents. But the moment we made that decision, everything changed. Now that this is part of me it's part of my personality going forward. The act of being a parent does not make me a better or worse person, but it makes my life just be fundamentally different than before. These significant changes to how we live our lives are sudden and deep. We are not ballistic objects flying along a single trajectory representing our success and life accomplishments; our lives are too nuanced for that. The graph you can plot about your income might not correlate with the graph about the state of your mental health or the graph of the quality of your relationships. It might be nice if they all go up at once, but will they ever?

I still wake up in the morning with a purpose and goals. What has changed is that what starts me into the day is now more colorful. I make more explicit choices in the evening about what my next day comprises. The tasks of the day feed from many different parts of my life. There is work, there is career progression, there is health, there is family, there is amusement. There are good days where all these things line up well and there are days where nothing really wants to work.

The most important lesson for me was loving myself and the path I'm on, and how utterly destructive it can be to myself to not be in balance about my true goals and desires. Finding this balance for me became significantly easier by recognizing that my goals and desires have to come from myself and not by looking outwards to others. Something that became significantly easier for me when I started picturing others as the complex and multifaceted beings they are.

Categories: FLOSS Project Planets

Chris Lamb: Most Anticipated Films of 2023

Planet Debian - Wed, 2023-02-08 18:42

Very few highly-anticipated movies appear in January and February, as the bigger releases are timed so they can be considered for the Golden Globes in January and the Oscars in late February or early March, so film fans have the advantage of a few weeks after the New Year to collect their thoughts on the year ahead. In other words, I'm not actually late in outlining below the films I'm most looking forward to in 2023...



No, seriously! If anyone can make a good film about a doll franchise, it's probably Greta Gerwig. Not only was Little Women (2019) more than admirable, the same could definitely be said for Lady Bird (2017). More importantly, I can't help but feel she was the real 'Driver' behind Frances Ha (2012), one of the better modern takes on Claudia Weill's revelatory Girlfriends (1978). Still, whenever I remember that Barbie will be a film about a billion-dollar toy and media franchise with a nettlesome history, I recall I rubbished the "Facebook film" that turned into The Social Network (2010). Anyway, the trailer for Barbie is worth watching, if only because it seems like a parody of itself.



It's difficult to overstate just how crucial the aerial bombing of London during World War II is to understanding the British psyche, despite it being a constructed phenomenon from the outset. Without wishing to underplay the deaths of over 40,000 civilians, Angus Calder pointed out in the 1990s that the modern mythology surrounding the event "did not evolve spontaneously; it was a propaganda construct directed as much at [then neutral] American opinion as at British." It will therefore be interesting to see how British-Grenadian-Trinidadian director Steve McQueen addresses a topic so essential to the British self-conception. (Remember the controversy in right-wing circles about the sole Indian soldier in Christopher Nolan's Dunkirk (2017)?) McQueen is perhaps best known for his 12 Years a Slave (2013), but he recently directed a six-part film anthology for the BBC which addressed the realities of post-Empire immigration to Britain, and this leads me to suspect he sees the Blitz and its surrounding mythology with a more critical perspective. But any attempt to complicate the story of World War II will be vigorously opposed in a way that will make the recent hullabaloo surrounding The Crown seem tame. All this is to say that the discourse surrounding this release may be as interesting as the film itself.


Dune, Part II

Coming out of the cinema after the first part of Denis Villeneuve's adaptation of Dune (2021), I was struck by the conception that it was less of a fresh adaptation of the 1965 novel by Frank Herbert than an attempt to rehabilitate David Lynch's 1984 version… and in a broader sense, it was also an attempt to reestablish the primacy of cinema over streaming TV and the myriad of other distractions in our lives. I must admit I'm not a huge fan of the original novel, finding within it a certain prurience regarding hereditary military regimes and writing about them with a certain sense of glee that belies a secret admiration for them... not to mention an eyebrow-raising allegory for the Middle East. Still, Dune, Part II is going to be a fantastic spectacle.



It'll be curious to see how this differs substantially from the recent Ford v Ferrari (2019), but given that Michael Mann's Heat (1995) so effectively re-energised the gangster/heist genre, I'm more than willing to kick the tires of this film about the founder of the eponymous car manufacturer. I'm in the minority for preferring Mann's Thief (1981) over Heat, in part because the former deals in more abstract themes, so I'd have perhaps preferred a more conceptual film from Mann over a story about one specific guy.


How Do You Live

There are a few directors one can look forward to watching almost without qualification, and Hayao Miyazaki (My Neighbor Totoro, Kiki's Delivery Service, Princess Mononoke, Howl's Moving Castle, etc.) is one of them. And this is especially so given that The Wind Rises (2013) was meant to be the last collaboration between Miyazaki and Studio Ghibli. Let's hope he is able to come out of retirement in another ten years.


Indiana Jones and the Dial of Destiny

Given I had a strong dislike of Indiana Jones and the Kingdom of the Crystal Skull (2008), I seriously doubt I will enjoy anything this film has to show me, but with 1981's Raiders of the Lost Ark remaining one of my most treasured films (read my brief homage), I still feel a strong sense of obligation towards the Indiana Jones name, despite it feeling like the copper is being pulled out of the walls of this franchise today.



I only know Polish filmmaker Agnieszka Holland through her Spoor (2017), an adaptation of Olga Tokarczuk's 2009 eco-crime novel Drive Your Plow Over the Bones of the Dead. I wasn't an unqualified fan of Spoor (nor the book on which it is based), but I am interested in Holland's take on the life of Czech author Franz Kafka, an author enmeshed with twentieth-century art and philosophy, especially that of central Europe. Holland has mentioned she intends to tell the story "as a kind of collage," and I can hope that it is an adventurous take on the over-furrowed biopic genre. Or perhaps Gregor Samsa will awake from uneasy dreams to find himself transformed in his bed into a huge verminous biopic.


The Killer

It'll be interesting to see what path David Fincher is taking today, especially after his puzzling and strangely cold Mank (2020) portraying the writing process behind Orson Welles' Citizen Kane (1941). The Killer is said to be a straight-to-Netflix thriller based on the graphic novel about a hired assassin, which makes me think of Fincher's Zodiac (2007), and, of course, Se7en (1995). I'm not as entranced by Fincher as I used to be, but any film with Michael Fassbender and Tilda Swinton (with a score by Trent Reznor) is always going to get my attention.


Killers of the Flower Moon

In Killers of the Flower Moon, Martin Scorsese directs an adaptation of a book about the FBI's investigation into a conspiracy to murder Osage tribe members in the early years of the twentieth century in order to deprive them of their oil-rich land. (The only thing more quintessentially American than apple pie is a conspiracy combined with a genocide.) Separate from learning more about this disquieting chapter of American history, I'd love to discover what attracted Scorsese to this particular story: he's one of the few top-level directors who have the ability to lucidly articulate their intentions and motivations.



It often strikes me that, despite all of his achievements and fame, it's somehow still possible to claim that Ridley Scott is relatively underrated compared to other directors working at the top level today. Besides that, though, I'm especially interested in this film, not least of all because I just read Tolstoy's War and Peace (read my recent review) and am working my way through the mind-boggling 431-minute Soviet TV adaptation, but also because several auteur filmmakers (including Stanley Kubrick) have tried to make a Napoleon epic… and failed.



In a way, a biopic about the scientist responsible for the atomic bomb and the Manhattan Project seems almost perfect material for Christopher Nolan. He can certainly rely on stars to queue up to be in his movies (Robert Downey Jr., Matt Damon, Kenneth Branagh, etc.), but whilst I'm certain it will be entertaining on many fronts, I fear it will fall into the well-established Nolan mould of yet another single man struggling with obsession, deception and guilt who is trying in vain to balance order and chaos in the world.


The Way of the Wind

Marked by philosophical and spiritual overtones, all of Terrence Malick's few films have explored themes of transcendence, nature and the inevitable conflict between instinct and reason. My particular favourite is his Days of Heaven (1978), but The Thin Red Line (1998) and A Hidden Life (2019) touched me in a way that is difficult to relate, and are one of the few films about the Second World War that don't touch off my sensitivity about them (see my remarks about Blitz above). It is therefore somewhat Malickian that his next film will be a biblical drama about the life of Jesus. Given Malick's filmography, I suspect this will be far more subdued than William Wyler's 1959 Ben-Hur and significantly more equivocal in its conviction compared to Paolo Pasolini's ardently progressive The Gospel According to St. Matthew (1964). However, little beyond that can be guessed, and the film may not even appear until 2024 or even 2025.


Zone of Interest

I was mesmerised by Jonathan Glazer's Under the Skin (2013), and there is much to admire in his borderline 'revisionist gangster' film Sexy Beast (2000), so I will definitely be on the lookout for this one. The only thing making me hesitate is that Zone of Interest is based on a book by Martin Amis about a romance set inside the Auschwitz concentration camp. I haven't read the book, but Amis has something of a history in his grappling with the history of the twentieth century, and he seems to do it in a way that never sits right with me. But if Paul Verhoeven's Starship Troopers (1997) proves anything at all, it's all in the adaptation.

Categories: FLOSS Project Planets

Gary Benson: Flask on Elastic Beanstalk

GNU Planet! - Wed, 2023-02-08 18:35

I had a play with Elastic Beanstalk the other day. It’s one of those things people turn their noses up at, but it seems pretty good for prototyping and small things. My biggest issue so far has been that, for Python applications, it expects a WSGI callable called application in the file application.py… but I was using Flask, and every single Flask application ever has a WSGI callable called app in the file app.py. I tried to not care, but it got too much after about an hour so I went and found how to override it:

$ cat .ebextensions/01_wsgi.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: "app:app"
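For context, all that path has to point at is a standard WSGI callable. A minimal, stdlib-only sketch of what the default application.py contract looks like (no Flask; the response body here is made up for illustration):

```python
# application.py -- a bare WSGI callable, named as Elastic Beanstalk
# expects by default: module "application", callable "application".
def application(environ, start_response):
    body = b"Hello from Elastic Beanstalk\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # WSGI apps return an iterable of byte strings
```

The WSGIPath override in the config above simply tells Beanstalk to satisfy this same contract with Flask's app object from app.py instead.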

Thank you Nik Tomazic for that! (=⌒‿‿⌒=)

Categories: FLOSS Project Planets

Packaging recommendations

Planet KDE - Wed, 2023-02-08 17:41

I’d like to draw attention to a fairly new wiki page that might be of interest to both packagers and users of DIY-style distros like Arch Linux: our Packaging Recommendations. This page is a reference for how Plasma developers would like to see Plasma set up, and it goes over topics like packages to pre-install by default, packages to avoid, and recommended system configuration tweaks.

This data comes from years of experience with distros that didn’t ship a complete Plasma experience, not out of malice or neglect, but rather because it’s really hard to know the full list of things to do and install! Most of us have had the experience of distro-hopping, only to discover that some issue that was solved in one distro is present in another. Maybe you gained video thumbnails by default after switching, but KDE Connect stopped working, or maybe color Emojis started working but Samba sharing broke. Not fun! This page aims to solve that by providing a reference of how to ship and configure Plasma vis-a-vis these topics for an optimal user experience.

So if you’re a KDE packager, please have a look and adjust your packaging if you find that you’re currently missing anything!

Categories: FLOSS Project Planets

TestDriven.io: Introduction to Django Channels

Planet Python - Wed, 2023-02-08 17:28
This tutorial shows how to use Django Channels to create a real-time application.
Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Board Member Announcement - Welcome, Rosa Ordinana!

Planet Drupal - Wed, 2023-02-08 15:24

Please join the Drupal Association in welcoming our new board member:

Rosa Ordinana

“As a passionate advocate for open-source technology in the public sector, I am proud to have been invited to serve on the Drupal association Directors Board,” Rosa shared upon accepting her position. “I am eager to contribute my expertise and experience to help the Drupal community grow, support interoperability, digital sovereignty, and create a safe, secure, and open web for everyone.”

About Rosa
Rosa is currently working in the Informatics directorate of the European Commission in Brussels. Since 2009 she and her team have promoted the use of open source solutions and Drupal as CMS for public websites of the European Commission and other EU institutions.

Today, her multicultural team manages hundreds of Drupal sites and is one of the largest Drupal teams in Europe. Rosa also has a master’s degree in Computer Science.

We are thrilled to have Rosa on the Drupal Association Board!

Categories: FLOSS Project Planets

The Drop Times: Accessibility Not an Option; Should Be a Default: AmyJune Hineline |FLDC

Planet Drupal - Wed, 2023-02-08 15:24
"When we choose inaccessibility, we are disabling people," says AmyJune Hineline. She was speaking to TDT over an email interview with Alethia Braganza prior to the upcoming Florida DrupalCamp for which, she is one of the lead organizers. Read on to know more.
Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Board Member Announcement - Welcome, Lynne Capozzi!

Planet Drupal - Wed, 2023-02-08 15:13

Please join the Drupal Association in welcoming our new board member:

Lynne Capozzi

When accepting her position, Lynne shared: “I’m excited to join the Drupal Association Board, and I’m hopeful that my past experience at Acquia and my nonprofit and marketing experience can benefit the community and help the community to grow!”

About Lynne
Most recently, Lynne was the Chief Marketing Officer (CMO) at Acquia in Boston. Lynne was at Acquia for eight years in this role.

As Acquia’s CMO, Lynne Capozzi oversaw all global marketing functions, including digital marketing, demand generation, operations, regional and field marketing, customer and partner marketing, events, vertical strategy, analyst relations, content, and corporate communications.
Lynne is one of Acquia’s boomerang stories, first serving as Acquia’s CMO in 2009. Lynne left Acquia in 2011 to pursue her nonprofit work full-time and then returned to Acquia in late 2016 to lead the marketing organization into its next growth stage.

Prior to her experience at Acquia, Lynne held various marketing leadership roles in the technology space. She served as CMO at JackBe, an enterprise mashup software company for real-time intelligence applications that Software AG acquired. Before that, Lynne was CMO at Systinet, which Mercury Interactive acquired. Lynne was also a VP at Lotus Development, which IBM later acquired.

Lynne is on the board of directors at the Boston Children’s Hospital Trust and runs a nonprofit through the hospital. She is also on the Advisory Board of Family Services of Merrimack Valley and the Board chair of the West Parish Garden Cemetery and Chapel in Andover, Mass.

We are thrilled to have Lynne on the Drupal Association Board!

Categories: FLOSS Project Planets

Stephan Lachnit: Setting up fast Debian package builds using sbuild, mmdebstrap and apt-cacher-ng

Planet Debian - Wed, 2023-02-08 13:49

In this post I will give a quick tutorial on how to set up fast Debian package builds using sbuild with mmdebstrap and apt-cacher-ng.

  1. Background
  2. Setting up apt-cacher-ng
  3. Setting up mmdebstrap
  4. Setting up sbuild
  5. Finishing touches

The usual tool for building Debian packages is dpkg-buildpackage, or a user-friendly wrapper like debuild. These are great tools; however, if you want to upload something to the Debian archive, they lack the required separation from the system they run on to ensure that your packaging also works on a different system. The usual candidate here is sbuild. But setting up a schroot is tedious and performance tuning can be annoying. There is an alternative backend for sbuild that promises to make everything simpler: unshare. In this tutorial I will show you how to set up sbuild with this backend.

In addition to the normal performance tweaking, caching downloaded packages can be a huge performance increase when rebuilding packages. I do rebuilds quite often, mostly when a new dependency got introduced that I didn't specify in debian/control yet, or when lintian notices something I can easily fix. So let's begin with setting up this caching.

Setting up apt-cacher-ng

Install apt-cacher-ng:

sudo apt install apt-cacher-ng

A pop-up will appear, if you are unsure how to answer it select no, we don’t need it for this use-case.

To enable apt-cacher-ng on your system, create /etc/apt/apt.conf.d/02proxy and insert:

Acquire::http::proxy "http://127.0.0.1:3142";
Acquire::https::proxy "DIRECT";

In /etc/apt-cacher-ng/acng.conf you can increase the value of ExThreshold to hold packages for a shorter or longer duration. The length depends on your specific use case and resources. A longer threshold takes more disk space; a short threshold like one day effectively only reduces the build time for rebuilds.
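For example, to expire cached files after four days (the value here is illustrative; pick whatever fits your disk space), the relevant line in /etc/apt-cacher-ng/acng.conf would look like:

```ini
# expire cached files not used for 4 days (illustrative value)
ExThreshold: 4
```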

If you encounter weird issues on apt update at some point in the future, you can try to clean the cache from apt-cacher-ng with a small script.

Setting up mmdebstrap

Install mmdebstrap:

sudo apt install mmdebstrap

We will create a small helper script to ease creating a chroot. Open ~/.local/bin/mmupdate and insert:

#!/bin/sh
mmdebstrap \
  --variant=buildd \
  --aptopt='Acquire::http::proxy "http://127.0.0.1:3142";' \
  --arch=amd64 \
  --components=main,contrib,non-free \
  unstable \
  ~/.cache/sbuild/unstable-amd64.tar.xz \
  http://deb.debian.org/debian


  • aptopt enables apt-cacher-ng inside the chroot.
  • --arch sets the CPU architecture (see Debian Wiki).
  • --components sets the archive components; if you don’t want non-free packages you might want to remove some entries here.
  • unstable sets the Debian release, you can also set for example bookworm-backports here.
  • unstable-amd64.tar.xz is the output tarball containing the chroot, change accordingly to your pick of the CPU architecture and Debian release.
  • http://deb.debian.org/debian is the Debian mirror; you should set this to the same one you use in your /etc/apt/sources.list.

Make mmupdate executable and run it once:

chmod +x ~/.local/bin/mmupdate
mkdir -p ~/.cache/sbuild
~/.local/bin/mmupdate

If you execute mmupdate again you can see that the downloading stage is much faster thanks to apt-cacher-ng. For me the difference is from about 115s to about 95s. Your results may vary, this depends on the speed of your internet, Debian mirror and disk.

If you have used the schroot backend and sbuild-update before, you probably notice that creating a new chroot with mmdebstrap is slower. It would be a bit annoying to do this manually before we start a new Debian packaging session, so let’s create a systemd service that does this for us.

First create a folder for user services:

mkdir -p ~/.config/systemd/user

Create ~/.config/systemd/user/mmupdate.service and add:

[Unit]
Description=Run mmupdate
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=%h/.local/bin/mmupdate

Start the service and test that it works:

systemctl --user daemon-reload
systemctl --user start mmupdate
systemctl --user status mmupdate

Create ~/.config/systemd/user/mmupdate.timer:

[Unit]
Description=Run mmupdate daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable the timer:

systemctl --user enable mmupdate.timer

Now mmupdate will be run automatically every day. You can adjust the period if you think daily rebuilds are a bit excessive.
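For example, using standard systemd calendar syntax, switching to weekly rebuilds only requires changing the [Timer] section of ~/.config/systemd/user/mmupdate.timer:

```ini
[Timer]
OnCalendar=weekly
Persistent=true
```

Remember to run systemctl --user daemon-reload after editing the unit file.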

A neat advantage of periodic rebuilds is that they keep the base files in your apt-cacher-ng cache warm every time they run.

Setting up sbuild

Install sbuild and (optionally) autopkgtest:

sudo apt install --no-install-recommends sbuild autopkgtest

Create ~/.sbuildrc and insert:

# backend for using mmdebstrap chroots
$chroot_mode = 'unshare';

# build in tmpfs
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXXXX';

# upgrade before starting build
$apt_update = 1;
$apt_upgrade = 1;

# build everything including source for source-only uploads
$build_arch_all = 1;
$build_arch_any = 1;
$build_source = 1;
$source_only_changes = 1;

# go to shell on failure instead of exiting
$external_commands = { "build-failed-commands" => [ [ '%SBUILD_SHELL' ] ] };

# always clean build dir, even on failure
$purge_build_directory = "always";

# run lintian
$run_lintian = 1;
$lintian_opts = [ '-i', '-I', '-E', '--pedantic' ];

# do not run piuparts
$run_piuparts = 0;

# run autopkgtest
$run_autopkgtest = 1;
$autopkgtest_root_args = '';
$autopkgtest_opts = [ '--apt-upgrade', '--', 'unshare', '--release', '%r', '--arch', '%a' ];

# set uploader for correct signing
$uploader_name = 'Stephan Lachnit <stephanlachnit@debian.org>';

You should adjust uploader_name. If you don’t want to run autopkgtest or lintian by default, you can also disable them here. Note that for packages that need a lot of space for building, you might want to comment out the unshare_tmpdir_template line to prevent an OOM build failure.

You can now build your Debian packages with the sbuild command :)

Finishing touches

You can add these variables to your ~/.bashrc as a bonus (with adjusted name/email):

export DEBFULLNAME="<your_name>"
export DEBEMAIL="<your_email>"
export DEB_BUILD_OPTIONS="parallel=<threads>"

In particular adjust the value of parallel to ensure parallel builds.

If you are new to signing / uploading your package, first install the required tools:

sudo apt install devscripts dput-ng

Create ~/.devscripts and insert:

DEBSIGN_KEYID=<your_gpg_fingerprint>
USCAN_SYMLINK=rename

You can now sign the .changes file with:

debsign ../<pkgname_version_arch>.changes

And for source-only uploads with:

debsign -S ../<pkgname_version_arch>_source.changes

If you don’t introduce a new binary package, you always want to go with source-only changes.

You can now upload the package to Debian with

dput ../<filename>.changes

Resources for further reading:

Thanks for reading!

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in January 2023

Planet Debian - Wed, 2023-02-08 13:45
FTP master

This month I accepted 419 and rejected 46 packages. The overall number of packages that got accepted was 429. Looking at these numbers and comparing them to the previous month, one can see: the freeze is near. Everybody wants to get some packages into the archive and I hope nobody is disappointed.

Debian LTS

This was my hundred-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. 

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3272-1] sudo (embargoed) security update for one CVE
  • [DLA 3286-1] tor security update for one CVE
  • [DLA 3290-1] libzen security update for one CVE
  • [libzen Bullseye] debdiff sent to maintainer
  • [DLA 3294-1] libarchive security update for one CVE

I also attended the monthly LTS meeting and did some days of frontdesk duties.

Debian ELTS

This month was the fifty-fourth ELTS month.

  • [ELA-772-1] sudo security update of Jessie and Stretch for one CVE
  • [ELA-781-1] libzen security update of Stretch for one CVE
  • [ELA-782-1] xorg-server security update of Jessie and Stretch for six CVEs
  • [ELA-790-1] libarchive security update of Jessie and Stretch for one CVE

Last but not least I did some days of frontdesk duties.

Debian Astro

This month I uploaded improved packages or new versions of:

I also uploaded new packages:

Debian IoT

This month I uploaded improved packages of:

Debian Printing

This month I uploaded new versions or improved packages of:

I also uploaded new packages:


Antoine Beaupré: Major outage with Oricom uplink

Planet Debian - Wed, 2023-02-08 12:45

The server that normally serves this page, all my email, and many more services was unavailable for about 24 hours. This post explains how and why.

What happened?

Starting February 2nd, I started seeing intermittent packet loss on the network. Every hour or so, the link would go down for one or two minutes, then come back up.

At first, I didn't think much of it because I was away and could blame the crappy wifi or the uplink I was using. But when I came into the office on Monday, the service was indeed seriously degraded. I could barely do videoconferencing calls as they would cut out after about half an hour.

I opened a ticket with my uplink, Oricom. They replied that it was an issue they couldn't fix on their end and would need someone on site to fix.

So, the next day (Tuesday, at around 10EST) I called Oricom again, and they made me do a full modem reset, which involves plugging a pin in a hole for 15 seconds on the Technicolor TC4400 cable modem. Then the link went down, and it didn't come back up at all.


Oricom then escalated this to their upstream (Oricom is a reseller of Videotron, who has basically the monopoly on cable in Québec) which dispatched a tech. This tech, in turn, arrived some time after lunch and said the link worked fine and it was a hardware issue.

At this point, Oricom put a new modem in the mail and I started mitigation.

Mitigation

Website

The first thing I did, weirdly, was trying to rebuild this blog. I figured it should be pretty simple: install ikiwiki and hit rebuild. I knew I had some patches on ikiwiki to deploy, but surely those are not a deal breaker, right?

Nope. Turns out I wrote many plugins and those still don't ship with ikiwiki, despite having been sent upstream a while back, some years ago.

So I deployed the plugins inside the .ikiwiki directory of the site in the hope of making things a little more "standalone". Unfortunately, that didn't work either because the theme must be shipped in the system-wide location: I couldn't figure out how to bundle it with the main repository. At that point I mostly gave up because I had spent too much time on this and I had to do something about email, otherwise it would start to bounce.


Mail

So I made a new VM at Linode (thanks 2.5admins for the credits) to build a new mail server.

This wasn't the best idea, in retrospect, because it was really overkill: I started rebuilding the whole mail server from scratch.

Ideally, this would be in Puppet and I would just deploy the right profile and the server would be rebuilt. Unfortunately, that part of my infrastructure is not Puppetized and even if it would, well the Puppet server was also down so I would have had to bring that up first.

At first, I figured I would just make a secondary mail exchanger (MX), to spool mail for longer so that I wouldn't lose it. But I decided against that: I thought it was too hard to make a "proper" MX as it needs to also filter mail while avoiding backscatter. Might as well just build a whole new server! I had a copy of my full mail spool on my laptop, so I figured that was possible.

I mostly got this right: added a DKIM key, installed Postfix, Dovecot, OpenDKIM, OpenDMARC, glue it all together, and voilà, I had a mail server. Oh, and spampd. Oh, and I need the training data, oh, and this and... I wasn't done and it was time to sleep.

The mail server went online this morning, and started accepting mail. I tried syncing my laptop mail spool against it, but that failed because Dovecot generated new UIDs for the emails, and isync correctly failed to sync. I tried to copy the UIDs from the server in the office (which I had still access to locally), but that somehow didn't work either.

But at least the mail was getting delivered and stored properly. I even had the Sieve rules setup so it would get sorted properly too. Unfortunately, I didn't hook that up properly, so those didn't actually get sorted. Thankfully, Dovecot can re-filter emails with the sieve-filter command, so that was fixed later.

At this point, I started looking for other things to fix.

Web, again

I figured I was almost done with the website, might as well publish it. So I installed the Nginx Debian package, got a cert with certbot, and added the certs to the default configuration. I rsync'd my build in /var/www/html and boom, I had a website. The Goatcounter analytics were timing out, but that was easy to turn off.


Almost at that exact moment, a bang on the door told me mail was here and I had the modem. I plugged it in and a few minutes later, marcos was back online.

So this was a lot (a lot!) of work for basically nothing. I could have just taken the day off and waited for the package to be delivered. It would definitely have been better to make a simpler mail exchanger to spool the mail to avoid losing it. And in fact, that's what I eventually ended up doing: I converted the Linode server into a mail relay to continue accepting mail while DNS propagates, but without having to sort the mail out of there...

Right now I have about 200 mails in a mailbox that I need to move back into marcos. Normally, this would just be a simple rsync, but because both servers have accepted mail simultaneously, it's going to be simpler to just move those exact mails on there. Because dovecot helpfully names delivered files with the hostname it's running on, it's easy to find those files and transfer them, basically:

rsync -v -n --files-from=<(ssh colette.anarc.at find Maildir -name '*colette*' ) colette.anarc.at: colette/
rsync -v -n --files-from=<(ssh colette.anarc.at find Maildir -name '*colette*' ) colette/ marcos.anarc.at:

Overall, the outage lasted about 24 hours, from 11:00EST (16:00UTC) on 2023-02-07 to the same time today.

Future work

I'll probably keep a mail relay to make those situations more manageable in the future. At first I thought that mail filtering would be a problem, but that happens post queue anyways and I don't bounce mail based on Spamassassin, so back-scatter shouldn't be an issue.

I basically need Postfix, OpenDMARC, and Postgrey. I'm not even sure I need OpenDKIM as the server won't process outgoing mail, so it doesn't need to sign anything, just check incoming signatures, which OpenDMARC can (probably?) do.

Thanks to everyone who supported me through this ordeal, you know who you are (and I'm happy to give credit here if you want to be deanonymized)!


How to report Multiscreen bugs

Planet KDE - Wed, 2023-02-08 11:14

As announced previously, Plasma 5.27 will have a significantly reworked multiscreen management, and we want to make sure this will be the best LTS Plasma release we had so far.

Of course, this doesn’t mean it will be perfect from day one, and your feedback is really important, as we want to fix any potential issue as soon as it gets noticed.

As you know, for our issue tracking we use Bugzilla at this address. We have different products and components that are involved in the multiscreen management.

First, under New bug, choose the “plasma” category. Then there are four possible combinations of products and components, depending on the symptoms:

Possible problem → Product / Component

  • The output of the command kscreen-doctor -o looks wrong, such as:
    • The listed “priority” is not the one you set in systemsettings
    • Geometries look wrong
  • Desktops or panels are on the wrong screen
  • There are black screens, but it is possible to move the cursor inside them

→ Product: plasmashell, Component: Multi Screen Support

  • Ordinary application windows appear on the wrong screen or get moved to unexpected screens when screens are connected/disconnected
  • Some screens are black and it is not possible to move the mouse inside them, but they look enabled in the systemsettings displays module or in the output of the command kscreen-doctor -o
  • The systemsettings displays module shows settings that don’t match reality
  • The systemsettings displays module shows settings that don’t match the output of the command kscreen-doctor -o

In order to have complete information on the affected system, its configuration, and the configuration of our multiscreen management, please provide the following if you can:

  • Whether the problem happens in a Wayland or X11 session (or both)
  • A good description of the scenario: how many screens, whether it is a laptop or a desktop, and when the problem happens (startup, connecting/disconnecting screens, waking from sleep, and the like)
  • The output of the terminal command: kscreen-doctor -o
  • The output of the terminal command: kscreen-console
  • The main plasma configuration file: ~/.config/plasma-org.kde.plasma.desktop-appletsrc

Those items of information already help a lot in figuring out what the problem is and where it resides.

Afterwards we may still ask for more information, like an archive of the main screen config files (the contents of the directory ~/.local/share/kscreen/), but normally we wouldn’t need that.

One more word on kscreen-doctor and kscreen-console

Those two commands are very useful to understand what Plasma and the rest of the system think about every screen that’s connected and how they intend to treat them.


Here is a typical output of the command kscreen-doctor -o:

Output: 1 eDP-1 enabled connected priority 2 Panel Modes: 0:1200x1920@60! 1:1024x768@60 Geometry: 1920,0 960x600 Scale: 2 Rotation: 8 Overscan: 0 Vrr: incapable RgbRange: Automatic
Output: 2 DP-3 enabled connected priority 3 DisplayPort Modes: 0:1024x768@60! 1:800x600@60 2:800x600@56 3:848x480@60 4:640x480@60 5:1024x768@60 Geometry: 1920,600 1024x768 Scale: 1 Rotation: 1 Overscan: 0 Vrr: incapable RgbRange: Automatic
Output: 3 DP-4 enabled connected priority 1 DisplayPort Modes: 0:1920x1080@60*! 1:1920x1080@60 2:1920x1080@60 3:1680x1050@60 4:1600x900@60 5:1280x1024@75 6:1280x1024@60 7:1440x900@60 8:1280x800@60 9:1152x864@75 10:1280x720@60 11:1280x720@60 12:1280x720@60 13:1024x768@75 14:1024x768@70 15:1024x768@60 16:832x624@75 17:800x600@75 18:800x600@72 19:800x600@60 20:800x600@56 21:720x480@60 22:720x480@60 23:720x480@60 24:720x480@60 25:640x480@75 26:640x480@73 27:640x480@67 28:640x480@60 29:640x480@60 30:720x400@70 31:1280x1024@60 32:1024x768@60 33:1280x800@60 34:1920x1080@60 35:1600x900@60 36:1368x768@60 37:1280x720@60 Geometry: 0,0 1920x1080 Scale: 1 Rotation: 1 Overscan: 0 Vrr: incapable RgbRange: Automatic

Here we can see we have 3 outputs, one internal and two via DisplayPort. DP-4 is the primary (priority 1), followed by eDP-1 (internal) and DP-3 (these correspond to the new reordering UI in the systemsettings screen module).

Other important data points are the screen geometries, which tell the outputs’ relative positions.


kscreen-console gives a bit more verbose information. Here is a sample (showing only the data of a single screen, as the full output is very long):

Id: 3
Name: "DP-4"
Type: "DisplayPort"
Connected: true
Enabled: true
Priority: 1
Rotation: KScreen::Output::None
Pos: QPoint(0,0)
MMSize: QSize(520, 290)
FollowPreferredMode: false
Size: QSize(1920, 1080)
Scale: 1
Clones: None
Mode: "0"
Preferred Mode: "0"
Preferred modes: ("0")
Modes: "0" "1920x1080@60" QSize(1920, 1080) 60 "1" "1920x1080@60" QSize(1920, 1080) 60 "10" "1280x720@60" QSize(1280, 720) 60 "11" "1280x720@60" QSize(1280, 720) 60 "12" "1280x720@60" QSize(1280, 720) 59.94 "13" "1024x768@75" QSize(1024, 768) 75.029 "14" "1024x768@70" QSize(1024, 768) 70.069 "15" "1024x768@60" QSize(1024, 768) 60.004 "16" "832x624@75" QSize(832, 624) 74.551 "17" "800x600@75" QSize(800, 600) 75 "18" "800x600@72" QSize(800, 600) 72.188 "19" "800x600@60" QSize(800, 600) 60.317 "2" "1920x1080@60" QSize(1920, 1080) 59.94 "20" "800x600@56" QSize(800, 600) 56.25 "21" "720x480@60" QSize(720, 480) 60 "22" "720x480@60" QSize(720, 480) 60 "23" "720x480@60" QSize(720, 480) 59.94 "24" "720x480@60" QSize(720, 480) 59.94 "25" "640x480@75" QSize(640, 480) 75 "26" "640x480@73" QSize(640, 480) 72.809 "27" "640x480@67" QSize(640, 480) 66.667 "28" "640x480@60" QSize(640, 480) 60 "29" "640x480@60" QSize(640, 480) 59.94 "3" "1680x1050@60" QSize(1680, 1050) 59.883 "30" "720x400@70" QSize(720, 400) 70.082 "31" "1280x1024@60" QSize(1280, 1024) 59.895 "32" "1024x768@60" QSize(1024, 768) 59.92 "33" "1280x800@60" QSize(1280, 800) 59.81 "34" "1920x1080@60" QSize(1920, 1080) 59.963 "35" "1600x900@60" QSize(1600, 900) 59.946 "36" "1368x768@60" QSize(1368, 768) 59.882 "37" "1280x720@60" QSize(1280, 720) 59.855 "4" "1600x900@60" QSize(1600, 900) 60 "5" "1280x1024@75" QSize(1280, 1024) 75.025 "6" "1280x1024@60" QSize(1280, 1024) 60.02 "7" "1440x900@60" QSize(1440, 900) 59.901 "8" "1280x800@60" QSize(1280, 800) 59.91 "9" "1152x864@75" QSize(1152, 864) 75
EDID Info:
  Device ID: "xrandr-Samsung Electric Company-S24B300-H4MD302024"
  Name: "S24B300"
  Vendor: "Samsung Electric Company"
  Serial: "H4MD302024"
  EISA ID: ""
  Hash: "eca6ca3c32c11a47a837d696a970b9d5"
  Width: 52
  Height: 29
  Gamma: 2.2
  Red: QQuaternion(scalar:1, vector:(0.640625, 0.335938, 0))
  Green: QQuaternion(scalar:1, vector:(0.31543, 0.628906, 0))
  Blue: QQuaternion(scalar:1, vector:(0.15918, 0.0585938, 0))
  White: QQuaternion(scalar:1, vector:(0.3125, 0.329102, 0))

The EDID Info section is also important, to see whether the screen has a good and unique EDID, as invalid EDIDs, especially in combination with DisplayPort, are a known source of problems.


Real Python: How to Split a Python List or Iterable Into Chunks

Planet Python - Wed, 2023-02-08 09:00

Splitting a Python list into chunks is a common way of distributing the workload across multiple workers that can process them in parallel for faster results. Working with smaller pieces of data at a time may be the only way to fit a large dataset into computer memory. Sometimes, the very nature of the problem requires you to split the list into chunks.

In this tutorial, you’ll explore the range of options for splitting a Python list—or another iterable—into chunks. You’ll look at using Python’s standard modules and a few third-party libraries, as well as manually looping through the list and slicing it up with custom code. Along the way, you’ll learn how to handle edge cases and apply these techniques to multidimensional data by synthesizing chunks of an image in parallel.

In this tutorial, you’ll learn how to:

  • Split a Python list into fixed-size chunks
  • Split a Python list into a fixed number of chunks of roughly equal size
  • Split finite lists as well as infinite data streams
  • Perform the splitting in a greedy or lazy manner
  • Produce lightweight slices without allocating memory for the chunks
  • Split multidimensional data, such as an array of pixels

Throughout the tutorial, you’ll encounter a few technical terms, such as sequence, iterable, iterator, and generator. If these are new to you, then check out the linked resources before diving in. Additionally, familiarity with Python’s itertools module can be helpful in understanding some of the code snippets that you’ll find later.

To download the complete source code of the examples presented in this tutorial, click the link below:

Free Sample Code: Click here to download the free source code that you’ll use to split a Python list or iterable into chunks.

Split a Python List Into Fixed-Size Chunks

There are many real-world scenarios that involve splitting a long list of items into smaller pieces of equal size. The whole list may be too large to fit in your computer’s memory. Perhaps it’s more convenient or efficient to process the individual chunks separately rather than all at once. But there could be other reasons for splitting.

For example, when you search for something online, the results are usually presented to you in chunks, called pages, containing an equal number of items. This technique, known as content pagination, is common in web development because it helps improve the website’s performance by reducing the amount of data to transfer from the database at a time. It can also benefit the user by improving their browsing experience.
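As a toy illustration of the pagination idea, a page of results can be carved out of a list with plain slicing. The function name and page size below are our own choices for this sketch, not part of any particular web framework:

```python
def get_page(items, page_number, page_size=10):
    """Return the items belonging to a given 1-based page number."""
    start = (page_number - 1) * page_size
    return items[start:start + page_size]

results = list(range(1, 26))  # pretend these are 25 search results
print(get_page(results, 1, page_size=10))  # first page holds ten items
print(get_page(results, 3, page_size=10))  # the last page is shorter
```

Because slicing never raises an IndexError, a page number past the end simply yields an empty list, which is usually the desired behavior for pagination.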

Most computer networks use packet switching to transfer data in packets or datagrams, which can be individually routed from the source to the destination address. This approach doesn’t require a dedicated physical connection between the two points, allowing the packets to bypass a damaged part of the network. The packets can be of variable length, but some low-level protocols require the data to be split into fixed-size packets.
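The fixed-size packet idea can be mimicked in a few lines of Python. This is only a sketch: the four-byte packet size and the null-byte padding are arbitrary choices for illustration, not taken from any real network protocol:

```python
def to_packets(payload: bytes, packet_size: int) -> list[bytes]:
    """Split a byte payload into fixed-size packets, padding the last one."""
    packets = [
        payload[i:i + packet_size]
        for i in range(0, len(payload), packet_size)
    ]
    if packets and len(packets[-1]) < packet_size:
        # Pad the final packet with null bytes so all packets share a length.
        packets[-1] = packets[-1].ljust(packet_size, b"\x00")
    return packets

print(to_packets(b"HELLOWORLD", 4))
# [b'HELL', b'OWOR', b'LD\x00\x00']
```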

Note: When splitting sequential data, you need to consider its size while keeping a few details in mind.

Specifically, if the total number of elements to split is an exact multiple of the desired chunk’s length, then you’ll end up with all the chunks having the same number of items. Otherwise, the last chunk will contain fewer items, and you may need extra padding to compensate for that.

Additionally, your data may have a known size up front when it’s loaded from a file in one go, or it can consist of an indefinite stream of bytes—while live streaming a teleconference, for example. Some solutions that you learn in this tutorial will only work when the number of elements is known before the splitting begins.

Most web frameworks, such as Django, will handle content pagination for you. Also, you don’t typically have to worry about some low-level network protocols. That being said, there are times when you’ll need to have more granular control and do the splitting yourself. In this section, you’ll take a look at how to split a list into smaller lists of equal size using different tools in Python.
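Before diving into the library tools, here is what the most bare-bones version looks like with plain slicing; the name chunked is our own and not from the tutorial's sample code:

```python
def chunked(sequence, size):
    """Split a sequence into consecutive chunks of at most `size` elements."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

print(chunked([1, 2, 3, 4, 5, 6, 7], 3))  # [[1, 2, 3], [4, 5, 6], [7]]
```

Note that this only works on sequences that support slicing, so it covers lists, tuples, and strings, but not arbitrary iterables or infinite streams, which the tools below handle.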

Standard Library in Python 3.12: itertools.batched()

Using the standard library is almost always your best choice because it requires no external dependencies. The standard library provides concise, well-documented code that’s been tested by millions of users in production, making it less likely to contain bugs. Besides that, the standard library’s code is portable across different platforms and typically much more performant than a pure-Python equivalent, as most of it is implemented in C.

Unfortunately, the Python standard library hasn’t traditionally had built-in support for splitting iterable objects like Python lists. At the time of writing, Python 3.11 is the most recent version of the interpreter. But you can put yourself on the cutting edge by downloading a pre-release version of Python 3.12, which gives you access to the new itertools.batched(). Here’s an example demonstrating its use:

>>> from itertools import batched
>>> for batch in batched("ABCDEFGHIJ", 4):
...     print(batch)
...
('A', 'B', 'C', 'D')
('E', 'F', 'G', 'H')
('I', 'J')

The function accepts any iterable object, such as a string, as its first argument. The chunk size is its second argument. Regardless of the input data type, the function always yields chunks or batches of elements as Python tuples, which you may need to convert to something else if you prefer working with a different sequence type. For example, you might want to join the characters in the resulting tuples to form strings again.

Note: The underlying implementation of itertools.batched() could’ve changed since the publishing of this tutorial, which was written against an alpha release of Python 3.12. For example, the function may now yield lists instead of tuples, so be sure to check the official documentation for the most up-to-date information.

Also, notice that the last chunk will be shorter than its predecessors unless the iterable’s length is divisible by the desired chunk size. To ensure that all the chunks have an equal length at all times, you can pad the last chunk with empty values, such as None, when necessary:

>>> def batched_with_padding(iterable, batch_size, fill_value=None):
...     for batch in batched(iterable, batch_size):
...         yield batch + (fill_value,) * (batch_size - len(batch))
...
>>> for batch in batched_with_padding("ABCDEFGHIJ", 4):
...     print(batch)
...
('A', 'B', 'C', 'D')
('E', 'F', 'G', 'H')
('I', 'J', None, None)

This adapted version of itertools.batched() takes an optional argument named fill_value, which defaults to None. If a chunk’s length happens to be less than batch_size, then the function appends additional elements to that chunk’s end using fill_value as padding.

You can supply either a finite sequence of values to the batched() function or an infinite iterator yielding values without end:

>>> from itertools import count
>>> finite = batched([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4)
>>> infinite = batched(count(1), 4)
>>> finite
<itertools.batched object at 0x7f4e0e2ee830>
>>> infinite
<itertools.batched object at 0x7f4b4e5fbf10>
>>> list(finite)
[(1, 2, 3, 4), (5, 6, 7, 8), (9, 10)]
>>> next(infinite)
(1, 2, 3, 4)
>>> next(infinite)
(5, 6, 7, 8)
>>> next(infinite)
(9, 10, 11, 12)

In both cases, the function returns an iterator that consumes the input iterable using lazy evaluation by accumulating just enough elements to fill the next chunk. The finite iterator will eventually reach the end of the sequence and stop yielding chunks. Conversely, the infinite one will continue to produce chunks as long as you keep requesting them—for instance, by calling the built-in next() function on it.

Read the full article at https://realpython.com/how-to-split-a-python-list-into-chunks/ »

