FLOSS Project Planets

KDE Gear 21.04 is coming this week! But what is KDE Gear?

Planet KDE - Sun, 2021-04-18 18:52

Let's dig a bit in our history.

In the "good old days" (TM) there was KDE, life was simple, everything we did was KDE and everything we released was KDE [*]


Then at some point we realized we wanted to release some stuff with a different frequency, so KDE Extragear [**] was born.

Then we said "KDE is the community", so we couldn't release "KDE" anymore. Thus we said "ok, the thing we release with all the stuff that releases at the same time will be KDE Software Compilation", which I think we all agree was not an awesome name, but "names are hard" (TM) (this whole blog is about that :D)

We went on like that for a while, but then we realized we wanted different schedules for the things that were inside the KDE Software Compilation. 


We thought it made sense for the core libraries to be released monthly, and the Plasma team also wanted its own release schedule (which has been tweaked over the years).

That meant the names "KDE Frameworks" and "Plasma" (of KDE Plasma) were born as things we release (Plasma was already a name in use before, so that one was easy). The problem was that we had to find a name for "KDE Software Compilation" minus "KDE Frameworks" minus "Plasma".

One option would have been to keep calling it "KDE Software Compilation", but we thought it would be confusing to keep the name while making it contain far fewer things, so we used the un-imaginative name (which, as far as I remember, I proposed) "KDE Applications".

And we released "KDE Applications" for a long time, but you know what, "KDE Applications" is not a good name either. The first reason is that "KDE Applications" was not only applications, it also contained libraries; but that's OK, no one would have really cared if that was the only problem. The important issue was that if you call something "KDE Applications", you make it seem like these are all the applications KDE releases. But no, that's not the truth; remember our old friend KDE Extragear of independently released applications?

So we sat down at Akademy 2019 in Milan and tried to find a better name. And we couldn't. So we all said let's go the "there is no spoon" route: you don't need a name if you don't have a thing. We basically de-branded the whole thing. The logic was that, after all, it's just a bunch of applications that are released at the same time because it makes things super easy from a release engineering point of view, but Okular doesn't "have anything to do" with Dolphin nor with krdc nor with kpat; they just happen to be released at the same time.

So we kept the release engineering side under the boring and non-capitalized name of "release service" and we patted ourselves on the back for having solved a decade-long problem.

Narrator voice: "they didn't solve the problem"

After a few releases it became clear that our promotion people were having some trouble writing announcements, because "Dolphin, Okular, Krdc, kpat .. 100 app names.. is released" doesn't really sell very well.

Since promotion is important, we sat down again and did some more thinking. OK, we need a name, but it can't be a name that is "too specific" about applications, because otherwise it will have the same problem as "KDE Applications". So it had to be a bit generic. At some point I jokingly suggested "KDE Gear", tied to our logo and to our old friend that we have almost killed by now, "KDE Extragear".

Narrator voice: "they did not realize it was a joke"


And people liked "KDE Gear", so yeah, this week we're releasing KDE Gear 21.04 whose heritage can be traced to "release service 21.04", "KDE Applications 21.04", "KDE Software Compilation 21.04" and "KDE 21.04" [***]

P.S.: Lots of these decisions happened a long time ago, so my recollection, especially of my involvement in the suggestion of the names, may not be as accurate as I think it is.

[*] May not be an accurate depiction, I wasn't around in the "good old days"

[**] A term we've been killing off over the last few years, because the term "extra" implied to some degree that these were not important things, and they are totally important; the only difference is that they are released on their own, so personally I try to use something like "independently released".

[***] it'd be great if you could stop calling the things we release "KDE"; we haven't used that name for releases of code for more than a decade now.

Categories: FLOSS Project Planets

One more week of CfP

Planet KDE - Sun, 2021-04-18 15:08

Usually foss-north takes place around April. This year, foss-north 2021 will be virtual. We shifted the date to the end of May to try to make it possible to at least go hybrid and have some sort of conference experience, but in light of the current COVID-19 situation and the pace of the roll-out of the vaccination programmes, we decided on a virtual event.

One of the benefits of going virtual is that it is a lot easier to attend – both as a speaker and as audience. For those of you who want to speak, you have one week left to submit a talk proposal to the Call for Papers.

Registering a talk requires you to log in using OAuth via either GitHub or Google. We are working on adding more login alternatives, but as with many volunteer-run efforts, time is the current limiting factor. If you feel that this is a blocker, please reach out to me over email and we can sort it out.

Categories: FLOSS Project Planets

The effect of CPU, link-time (LTO) and profile-guided (PGO) optimizations on the compiler itself

Planet KDE - Sun, 2021-04-18 14:18

 In other words, how much faster will a compiler be after it's been built with various optimizations?

Given the recent Clang 12 release, I've decided to update my local build of Clang 11 that I've been using for building LibreOffice. I switched to using my own Clang build instead of the openSUSE packages at some point in the past because it was faster. I've meanwhile forgotten how much faster :), and the openSUSE packages now build with LTO, so I've built Clang 12 in several different ways to test the effect, and this is it:

The file compiled is LO Calc's document.cxx, a fairly large source file, in a debug LO build. The compilation of the file is always the same, the only thing that differs is the compiler used and whether LO's PCH support is enabled. And the items are:

  1. Base - A release build of Clang12, with (more or less) the default options.
  2. CPU - As above, with -march=native -mtune=native added.
  3. LTO - As above, with link-time optimization used. Building Clang this way takes longer.
  4. LTO+PGO - As above, also with profile-guided optimization used. Building Clang this way takes even longer, as it needs two extra Clang builds to collect the PGO data.
  5. Base PCH - As Base, and the file is built with PCH used.
  6. LTO+PGO PCH - As LTO+PGO, again with PCH used.

Or, if you want this as numbers, then with Base being 100%, CPU is 85%, LTO is 78%, LTO+PGO is 59%, Base PCH is 37% and LTO+PGO PCH is 25%. Not bad.

Mind you, this is just for one randomly selected file. YMMV. For the build from the video from the last time, the original time of 4m39s with Clang11 LTO PCH goes down to 3m31s for Clang12 LTO+PGO PCH, which is 76%, which is consistent with the LTO->LTO+PGO change above.
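The claimed 76% ratio for the full build is easy to sanity-check (a trivial sketch, just redoing the post's arithmetic):

```python
# Verify the claimed speedup ratio: 4m39s with Clang 11 LTO PCH
# versus 3m31s with Clang 12 LTO+PGO PCH.
old_seconds = 4 * 60 + 39   # 279 s
new_seconds = 3 * 60 + 31   # 211 s
ratio = new_seconds / old_seconds
print(f"{ratio:.0%}")  # roughly 76%, matching the LTO -> LTO+PGO change above
```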


Categories: FLOSS Project Planets

Speeding up apps.kde.org

Planet KDE - Sun, 2021-04-18 14:00

Apps.kde.org is a great website listing all the KDE applications and their addons. Under the hood, it uses AppStream, a standard for adding metadata to Linux applications. Linux application managers (Discover, GNOME Software, …) display this metadata, so it stays up to date and is translated. There was only one problem: apps.kde.org has always been a bit slow to load. That is a problem, since slow websites tend to be less visible in Google search results, and slow loading is not a good browsing experience for users.

There were many reasons why it was slow. We had designed it in a way that on each page load the backend loaded the data from JSON files. On the individual application pages, we only needed to read one JSON file, but on the homepage every single JSON file was loaded. The Linux kernel probably caches every JSON file in memory, so the IO load wasn't that bad, but parsing the JSON files still needed to be done on each page load.

Using PHP and Symfony (a big PHP framework) for the routing, templating and internationalization (loading additional .mo files) of the website adds an overhead. The final result was that the load time for just one HTML file was between 300ms and 500ms.

That doesn't sound like a big deal, but it is one. Rendering a page takes more than just downloading one HTML file: the browser needs to load the other assets (images, CSS and JavaScript files); it needs to parse the HTML and CSS files and compute the layout. These operations are done in parallel, but the initial loading of the HTML file blocks every other operation. Also, these 300ms loading times can be a lot longer on bad internet connections.

As is often the case with performance problems, my usual solution is to port the website to a static site generator. This isn't always a solution that can be used, but in this case a generator was already producing the data as JSON files twice a day, so there was no technical reason for dynamically generating the pages. I chose Hugo because it is my preferred static site generator, it supports internationalization, and we have a shared theme for it in KDE, so most of the layouts, CSS files and translations can be shared with the other KDE websites.

Porting to Hugo was not difficult. I was already using a templating engine (Twig) in the old website, and porting to the Hugo templating engine wasn't complicated. I also wrote a small Python script converting the JSON files containing the AppStream metadata to markdown files in a form that Hugo can read. The side effect is that this makes the generation slower. Generating apps.kde.org is really pushing Hugo to its limits, and even though it is one of the fastest static site generators, it now takes 20s for Hugo to generate all the pages. This can be explained by the fact that it generates 240 pages in almost 30 languages.
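The conversion step can be sketched roughly like this. This is a simplified, hypothetical version, not the actual KDE script – the JSON field names and output layout here are assumptions:

```python
# Hypothetical sketch: turn an AppStream-style JSON file into a markdown
# file with YAML front matter, which Hugo can pick up as a content page.
import json
from pathlib import Path


def appstream_to_hugo(json_path: Path, content_dir: Path) -> Path:
    data = json.loads(json_path.read_text(encoding="utf-8"))
    # Hugo reads the metadata from the YAML front matter block.
    front_matter = "\n".join([
        "---",
        f"name: {data['Name']}",
        f"genericName: {data.get('GenericName', '')}",
        f"icon: {data.get('Icon', '')}",
        "---",
    ])
    # Everything after the front matter becomes the page body.
    body = data.get("Description", "")
    out = content_dir / f"{data['ID']}.md"
    out.write_text(front_matter + "\n\n" + body, encoding="utf-8")
    return out
```

One such file per application and per language quickly adds up, which is consistent with the 20s generation time mentioned above.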

Fortunately, while this made generation in the CI slower, it made loading faster for the users. It moved the loading time from ~300-500ms to under 100ms. This is already a nice speedup, but it wasn't enough for me. The second step was to improve the rendering time of the homepage. Looking at Google Lighthouse, one of the reasons why the homepage was still slow was the large number of DOM elements. It was over 2000, so I used some tricks to decrease that number:

  • The first one was to remove all the addons from the homepage; they aren't really helpful for visitors, yet about 15% of the content was about them. Instead, I moved the addon information to the applications they extend. This makes it easier to get a list of addons for each application without adding too much clutter to the homepage.
  • The second trick was to port the application grid from a CSS flex element to grid. Flex has the disadvantage that it doesn't allow specifying a gap between the items, so to create spacing I needed to wrap every element in a div with padding. (Flex does support gap, but only on Firefox for now.) display: grid also has more advantages; for example, I don't need to specify how many elements I want per row at each screen size, but can instead specify the width of an item, and the spacing will be adjusted automatically. Here is the new CSS rule …
```css
.application-list {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(240px, 1fr));
  align-items: top;
  grid-gap: 1em;
}
```

… and the corresponding changes applied to the HTML. I also merged the two <a> elements together.

```diff
- <div class="application-list row align-items-stretch">
+ <div class="application-list">
    {{ $category := .Params.categoryName }}
    {{ range where (where site.RegularPages "Section" "applications") ".Params.appType" "!=" "addon" }}
    {{ if and (eq $category .Params.MainCategory) (ne .Params.appType "addon") }}
-   <div class="app text-center col-12 col-sm-6 col-md-4 col-lg-3 p-2">
-     <div class="p-3 h-100">
-       <div aria-hidden="true">
-         <a href="{{ .Permalink }}">
-           <img width="48" height="48" src="//carlschwan.eu/app-icons/{{ .Params.icon }}"
-                loading="lazy"
-                alt="{{ .Params.Name }}" title="{{ .Params.name }}"/>
-         </a>
-       </div>
-       <a><h3>{{ .Params.name }}</h3></a>
-       <p>{{ .Params.GenericName }}</p>
-     </div>
+   <div class="app text-center">
+     <a href="{{ .Permalink }}" class="d-flex flex-column">
+       <img width="48" height="48" aria-hidden=true class="icon" src="//carlschwan.eu/app-icons/{{ .Params.icon }}"
+            loading="lazy"
+            alt="{{ .Params.Name }}" title="{{ .Params.name }}"/>
+       <h3>{{ .Params.name }}</h3>
+     </a>
+     <p>{{ .Params.GenericName }}</p>
    </div>
    {{ end }}
    {{ end }}
  </div>
```

This change removed 2 DOM elements per item, reducing the number of elements that the browser needs to download, parse and render by another 5%.

I'm actually wondering if display: grid is also easier for the browser engine to render, since my impression is that the improvement was bigger than just 5%.

The end result is that the homepage gets a score of 96/100 on Google Lighthouse, a big improvement from the 56/100 we were getting a few weeks ago. Hopefully Google's PageRank algorithm also likes the change.

There are still a few possible improvements, mostly slimming down the CSS files. That should be doable thanks to the fact that we can use Hugo to ship a customized version of the SCSS code shared with the other websites, with fewer imported modules: just the ones required for apps.kde.org.

Categories: FLOSS Project Planets

BreadcrumbsCollector: Meet python-mockito and leave built-in mock & patch behind

Planet Python - Sun, 2021-04-18 13:40
Batteries included can give you a headache

unittest.mock.[Magic]Mock and unittest.mock.patch are powerful utilities in the standard library that can help us in writing tests. Although it is easy to start using them, there are several pitfalls waiting for unaware beginners. For example, forgetting about the optional spec or spec_set can give us green tests for code that will fail in prod immediately. You can find several other examples + solutions in the second half of my other post – How to mock in Python? Almost definitive guide.

Last but not least, the vocabulary used in the standard library is at odds with the general testing nomenclature. This has a negative effect on learning effective testing techniques. Whenever a Pythonista needs to replace a dependency in tests, they use a mock. Generally, this type of replacement object is called a Test Double; Mock is merely one specialized type of Test Double. What is more, there are limited situations when it's the right Test Double. You can find more details in Robert Martin's post The Little Mocker. Or just stay with me for the rest of this article – I'll guide you through. To summarise, if the philosopher Ludwig Wittgenstein was right in saying…

The limits of my language mean the limits of my world

…then Pythonistas are missing A LOT by sticking to “mocking”.

python-mockito – a modern replacement for Python mock & patch

It is said that experience is the best teacher. However, experience does not have to be our own – if we can learn from others' mistakes, then it's even better. Developers in other programming languages also face the challenges of testing. The library I want to introduce to you – python-mockito – is a port of Java's testing framework of the same name. It's safe by default, unlike mock from the standard library. python-mockito has a nice, easy-to-use API. It also helps you with the maintenance of your tests by being very strict about unexpected behaviours. Plus, it has a pytest integration – pytest-mockito – for seamless use and automatic cleanup.

Introduction to test double types

I must admit that the literature is not 100% consistent on the taxonomy of test doubles, but the generally accepted definitions are:

  • Dummy – an object required to be passed around (e.g. to __init__) but often is not used at all during test execution
  • Stub – an object returning hardcoded data which was set in advance before test execution
  • Spy – an object recording interactions and exposing an API to query them (e.g. which methods were called and with what arguments)
  • Mock – an object with calls expectations set in advance before test execution
  • Fake – an object that behaves just like a production counterpart, but has a simpler implementation that makes it unusable outside tests (e.g. in-memory storage).

If this is the first time you've seen the test double types and you find them a bit imprecise or overlapping, that's fine – their implementations can be similar at times. What makes a great difference is how they are used during the Assert phase of a test. (Quick reminder – a typical test consists of Arrange – Act – Assert phases.)
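As a refresher, the three phases look like this in a plain pytest-style test. The Basket class is a made-up toy example for illustration only, not from any of the libraries discussed here:

```python
class Basket:
    """Toy shopping basket used only to illustrate the test phases."""

    def __init__(self) -> None:
        self._items = []

    def add(self, item: str, price: int) -> None:
        self._items.append((item, price))

    def total(self) -> int:
        return sum(price for _, price in self._items)


def test_total_sums_item_prices():
    # Arrange: build the object under test and set up its state
    basket = Basket()
    basket.add("book", 700)
    basket.add("pen", 99)
    # Act: execute the behaviour being tested
    result = basket.total()
    # Assert: verify the outcome
    assert result == 799
```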

A great rule of thumb I found recently gives the following hints on when to use which type:

  • use Dummy when a dependency is expected to remain unused
  • use Stub for read-only dependency
  • use Spy for write-only dependency
  • use Mock for write-only dependency used across a few tests (DRY expectation)
  • use Fake for dependency that’s used for both reading and writing.

…plus, mix features when needed, or intentionally break the rules when you have a good reason to do so.

This comes from The Test Double Rule of Thumb article by Matt Parker, linked at the end of this post.

Of course, we use test doubles only when we have to. Please don't write only unit tests separately for each class/function.

python-mockito versus built-in mock and patch

Installation

I'm using Python 3.9 for the following code examples. unittest.mock is included. To get python-mockito, run:

```shell
pip install mockito pytest-mockito
```

pytest-mockito will come in handy a bit later.

Implementing Dummy

Sometimes a Dummy doesn't even require any test double library. When a dependency doesn't really have any effect on the test and/or is not used during execution, we can sometimes just pass None. If mypy (or another type checker) complains and the dependency is simple to create (e.g. it is an int), we create and pass it.

```python
def test_sends_request_to_3rd_party():
    # setting up spy (omitted)
    interfacer = ThirdPartyInterfacer(max_returned_results=0)  # "0" is a dummy
    interfacer.create_payment(...)
    # spy assertions (omitted)
```

If a dependency is an instance of a more complex class, then we can use unittest.mock.Mock + seal or mockito.mock. In the following example, we'll be testing the is_healthy method of some Facade. Facades by design can get a bit incohesive and use dependencies only in some methods. A Dummy is an ideal choice then:

```python
from logging import Logger
from unittest.mock import Mock, seal

from mockito import mock


class PaymentsFacade:
    def __init__(self, logger: Logger) -> None:
        self._logger = logger

    def is_healthy(self) -> bool:
        # uncomment this line if you want to see error messages
        # self._logger.info("Checking if is healthy!")
        return True


def test_returns_true_for_healthcheck_stdlib():
    logger = Mock(spec_set=Logger)
    seal(logger)
    facade = PaymentsFacade(logger)
    assert facade.is_healthy() is True


def test_returns_true_for_healthcheck_mockito():
    logger = mock(Logger)
    facade = PaymentsFacade(logger)
    assert facade.is_healthy() is True
```

python-mockito requires less writing, and the error message is also much better (at least in Python 3.9). unittest.mock (part of a HUGE stack trace):

```
    if self._mock_sealed:
        attribute = "." + kw["name"] if "name" in kw else "()"
        mock_name = self._extract_mock_name() + attribute
>       raise AttributeError(mock_name)
E       AttributeError: mock.info  # WTH?

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/unittest/mock.py:1017: AttributeError
```


…and python-mockito:

```
self = <dummy.PaymentsFacade object at 0x7fb3880cba00>

    def is_healthy(self) -> bool:
>       self._logger.info("Checking if is healthy!")
E       AttributeError: 'Dummy' has no attribute 'info' configured  # CLEAR

dummy.py:12: AttributeError
```

Dummies are useful when we know they will not (or should not) be used during the test execution. As a side note, dependencies like a logger are rarely problematic in tests, and we could also write the same test scenario without using a test double at all.

Implementing Stub

With stubs we are only interested in ensuring they will return some pre-programmed data. WE DO NOT EXPLICITLY VERIFY IF THEY WERE CALLED DURING ASSERT. Ideally, we should see if they were used or not purely by looking at the test itself.

In the following example, our PaymentsFacade has a dependency on PaymentsProvider, which is an interfacer to some external API. Obviously, we cannot use the real implementation in the test. For this particular case, we have a read-only collaboration: the Facade asks for the payment status and interprets it to tell if the payment is complete.

```python
from enum import Enum
from unittest.mock import Mock, seal

from mockito import mock


class PaymentStatus(Enum):
    AUTHORIZED = 'AUTHORIZED'
    CAPTURED = 'CAPTURED'
    RELEASED = 'RELEASED'


class PaymentsProvider:
    def __init__(self, username: str, password: str) -> None:
        self._auth = (username, password)

    def get_payment_status(self, payment_id: int) -> PaymentStatus:
        # make some requests using auth info
        raise NotImplementedError


class PaymentsFacade:
    def __init__(self, provider: PaymentsProvider) -> None:
        self._provider = provider

    def is_paid(self, payment_id: int) -> bool:
        status = self._provider.get_payment_status(payment_id)
        is_paid = status == PaymentStatus.CAPTURED
        return is_paid


def test_returns_true_for_status_captured_stdlib():
    provider = Mock(spec_set=PaymentsProvider)
    provider.get_payment_status = Mock(return_value=PaymentStatus.CAPTURED)
    seal(provider)
    facade = PaymentsFacade(provider)
    assert facade.is_paid(1) is True


def test_returns_true_for_status_captured_mockito(when):
    provider = mock(PaymentsProvider)
    when(provider).get_payment_status(2).thenReturn(PaymentStatus.CAPTURED)
    facade = PaymentsFacade(provider)
    assert facade.is_paid(2) is True
```

python-mockito gives a test-specific API. when (coming from pytest-mockito) is called on a mock, specifying the argument. Next, thenReturn defines what will be returned. Analogously, there is a thenRaise method for raising an exception. Notice a difference (apart from length) – if we call the mock with an unexpected argument, mockito raises an exception:

```python
def test_returns_true_for_status_captured_mockito(when):
    provider = mock(PaymentsProvider)
    when(provider).get_payment_status(2).thenReturn(PaymentStatus.CAPTURED)
    facade = PaymentsFacade(provider)
    assert facade.is_paid(3) is True  # stub is configured with 2, not 3
```

```
# stacktrace
    def is_paid(self, payment_id: int) -> bool:
>       status = self._provider.get_payment_status(payment_id)
E       mockito.invocation.InvocationError:
E       Called but not expected:
E
E           get_payment_status(3)
E
E       Stubbed invocations are:
E
E           get_payment_status(2)

stub.py:28: InvocationError
```

If we don’t want this behaviour, we can always use ellipsis:

```python
def test_returns_true_for_status_captured_mockito(when):
    provider = mock(PaymentsProvider)
    when(provider).get_payment_status(...).thenReturn(PaymentStatus.CAPTURED)
    facade = PaymentsFacade(provider)
    assert facade.is_paid(3) is True
```

If we want to remain safe in every case, we should also use a type checker (e.g. mypy).

Digression – patching

when can also be used for patching. Let's assume PaymentsFacade for some reason creates an instance of PaymentsProvider itself, so we cannot explicitly pass a mock into __init__:

```python
import os


class PaymentsFacade:
    def __init__(self) -> None:
        self._provider = PaymentsProvider(
            os.environ["PAYMENTS_USERNAME"],
            os.environ["PAYMENTS_PASSWORD"],
        )

    def is_paid(self, payment_id: int) -> bool:
        status = self._provider.get_payment_status(payment_id)
        is_paid = status == PaymentStatus.CAPTURED
        return is_paid
```

Then, monkey patching is a usual way to go for Pythonistas:

```python
from unittest.mock import patch, seal


def test_returns_true_for_status_captured_stdlib_patching():
    with patch.object(
        PaymentsProvider, "get_payment_status", return_value=PaymentStatus.CAPTURED
    ) as mock:
        seal(mock)
        facade = PaymentsFacade()
        assert facade.is_paid(1) is True


def test_returns_true_for_status_captured_mockito_patching(when):
    when(PaymentsProvider).get_payment_status(...).thenReturn(
        PaymentStatus.CAPTURED
    )
    facade = PaymentsFacade()
    assert facade.is_paid(3) is True
```

The python-mockito implementation is even shorter with patching than without it – but do not treat this as an invitation to patch. An important note: the context manager with patch.object makes sure there is a cleanup. For pytest, I strongly recommend using the fixtures provided by pytest-mockito. They will do the cleanup automatically for you; otherwise, one would have to call mockito.unstub manually. More details are in the documentation of pytest-mockito and python-mockito. The documentation of python-mockito states there is also a way to use it with context managers, but personally I've never done so.

Monkey patching is a dubious practice at best – especially if done on unstable interfaces. It should be avoided because it tightly couples tests with the implementation. It can be your last resort, though. A frequent need for patching in tests is a strong indicator of untestable design, poor tests, or both.

Digression – pytest integration

For daily use with the standard library mocks, there is a lib called pytest-mock. It provides a mocker fixture for easy patching and automatic cleanup. The outcome is similar to pytest-mockito.

Implementing Spy

Now, let's consider a scenario of starting a new payment. PaymentsFacade calls PaymentsProvider after validating the input and converting the money amount to conform to the API's expectations.

```python
from dataclasses import dataclass
from decimal import Decimal
from unittest.mock import Mock, seal

from mockito import mock, verify


@dataclass(frozen=True)
class Money:
    amount: Decimal
    currency: str

    def __post_init__(self) -> None:
        if self.amount < 0:
            raise ValueError("Money amount cannot be negative!")


class PaymentsProvider:
    def __init__(self, username: str, password: str) -> None:
        self._auth = (username, password)

    def start_new_payment(self, card_token: str, amount: int) -> None:
        raise NotImplementedError


class PaymentsFacade:
    def __init__(self, provider: PaymentsProvider) -> None:
        self._provider = provider

    def init_new_payment(self, card_token: str, money: Money) -> None:
        assert money.currency == "USD", "Only USD are currently supported"
        amount_in_smallest_units = int(money.amount * 100)
        self._provider.start_new_payment(card_token, amount_in_smallest_units)


def test_calls_provider_with_799_cents_stdlib():
    provider = Mock(spec_set=PaymentsProvider)
    provider.start_new_payment = Mock(return_value=None)
    seal(provider)
    facade = PaymentsFacade(provider)
    facade.init_new_payment("nonsense", Money(Decimal(7.99), "USD"))
    provider.start_new_payment.assert_called_once_with("nonsense", 799)


def test_calls_provider_with_1099_cents_mockito(when):
    provider = mock(PaymentsProvider)
    when(provider).start_new_payment(...).thenReturn(None)
    facade = PaymentsFacade(provider)
    facade.init_new_payment("nonsense", Money(Decimal(10.99), "USD"))
    verify(provider).start_new_payment("nonsense", 1099)
```

Here, a major difference between unittest.mock and mockito is that the latter:

  • lets us specify input arguments (not shown here, but present in previous examples)
  • (provided input arguments were specified) fails if there are any additional, unexpected interactions.

The second behaviour is added by pytest-mockito, which, apart from calling unstub automatically, also calls verifyNoUnwantedInteractions.

Implementing Mock

Let's consider an identical test scenario as for the Spy – but this time assume we have some duplication in the verification and want to refactor the Spy into a Mock. Now, the funniest part – it turns out that the standard library, which only has classes called "Mock", does not really make it any easier to create mocks as understood by the literature. On the other hand, it's such a simple thing that we can do it by hand without any harm. To make this duel even, I'll use pytest fixtures for both:

```python
import pytest


@pytest.fixture()
def stdlib_provider():
    provider = Mock(spec_set=PaymentsProvider)
    provider.start_new_payment = Mock(return_value=None)
    seal(provider)
    yield provider
    provider.start_new_payment.assert_called_once_with("nonsense", 799)


def test_returns_none_for_new_payment_stdlib(stdlib_provider):
    facade = PaymentsFacade(stdlib_provider)
    result = facade.init_new_payment("nonsense", Money(Decimal(7.99), "USD"))
    assert result is None


@pytest.fixture()
def mockito_provider(expect):
    provider = mock(PaymentsProvider)
    expect(provider).start_new_payment("nonsense", 1099)
    return provider


def test_returns_none_for_new_payment_mockito(mockito_provider):
    facade = PaymentsFacade(mockito_provider)
    result = facade.init_new_payment("nonsense", Money(Decimal(10.99), "USD"))
    assert result is None
```

expect will also call verifyNoUnwantedInteractions to make sure there are no unexpected calls.

Implementing Fake

For Fakes, there are no shortcuts or libraries. We are better off writing them manually. You can find an example here – InMemoryAuctionsRepository . It is meant to be a test double for a real implementation that uses a relational database.
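To give a flavour of what such a hand-written Fake can look like, here is a minimal sketch in the spirit of InMemoryAuctionsRepository. The Payment class and the repository's method names here are made up for illustration; the linked repository has the real thing:

```python
from dataclasses import dataclass


@dataclass
class Payment:
    id: int
    status: str


class InMemoryPaymentsRepository:
    """Fake: honours the same contract as a database-backed repository,
    but keeps everything in a dict, which makes it useless outside tests."""

    def __init__(self) -> None:
        self._rows: dict[int, Payment] = {}

    def save(self, payment: Payment) -> None:
        # A real implementation would INSERT or UPDATE a row here.
        self._rows[payment.id] = payment

    def get(self, payment_id: int) -> Payment:
        # A real implementation would SELECT and re-hydrate the object.
        return self._rows[payment_id]
```

Because a Fake supports both reads and writes, the same instance can be used to Arrange state before the test and to Assert on it afterwards, with no mocking library involved.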


Initially this blog post was meant to be only about the tool, but I couldn’t resist squeezing in some general advice about testing techniques.

While python-mockito does not solve the issue of calling every test double a mock, it definitely deserves attention. Test doubles created with it require less code and are by default much more secure and strict than those created using unittest.mock. Regarding cons, the camelCasing can be a little distracting at first, but this is not a huge issue. The most important thing is that the safety we get out of the box with python-mockito has only gradually been added to the Python standard library over several versions, and is still not as convenient.

I strongly recommend reading python-mockito's documentation and trying it out!

Further reading


Categories: FLOSS Project Planets

poke @ Savannah: GNU poke 1.2 released

GNU Planet! - Sun, 2021-04-18 13:21

I am happy to announce a new release of GNU poke, version 1.2.

This is a bug fix release in the poke 1.x series, and is the
result of all the user feedback we have received since we did
the last release.  Our big thanks to everyone who provided
feedback :)

See the file NEWS in the released tarball for a detailed list
of changes in this release.

The tarball poke-1.2.tar.gz is now available at

  GNU poke (http://www.jemarch.net/poke) is an interactive,
  extensible editor for binary data.  Not limited to editing basic
  entities such as bits and bytes, it provides a full-fledged
  procedural, interactive programming language designed to describe
  data structures and to operate on them.

This release is the product of a month of work resulting in 37
commits, made by 5 contributors.

Thanks to the people who contributed with code and/or
documentation to this release.  In certain but no significant
order they are:

   Mohammad-Reza Nabipoor
   David Faust
   Egeyar Bagcioglu
   Konstantinos Chasialis

Thank you all!  It is a real pleasure to hack with you.

And this is all for now.
Happy poking!

Jose E. Marchesi
Frankfurt am Main
18 April 2021

Categories: FLOSS Project Planets

Community Working Group posts: Crafting the 2021 Aaron Winborn Award

Planet Drupal - Sun, 2021-04-18 11:20

A few years ago, during our preparations for the 2018 Aaron Winborn Award, we had the idea that the award should be created by a community member. 

Rachel Lawson, a former member of the Drupal Community Working Group's conflict resolution team, created hand-blown glass awards for both the 2018 and 2019 winners, Kevin Thull and Leslie Glynn. Last year, Bo Shipley created the award for Baddý Breidert. We were lucky to have Bo create the award for this year's winner, AmyJune Hineline, as well.

Bo crafted the awards out of leather: stacking, gluing, carving, then oiling the leather into its final shape and finish. He was generous enough to share some photos of the process.

Sanding the stacked leather.

Applying the stencil.

Cutting and chiseling the design.

Cutting and chiseling the design.

Closeup of the texture.

Mounting to the base.

Mounting to the base.

We cannot thank Bo enough for donating his time and talent for this project!

If you are interested in crafting a future Aaron Winborn Award, please let us know at drupal-cwg at drupal dot org!

Categories: FLOSS Project Planets

Tales of the KDE Network Kerala

Planet KDE - Sun, 2021-04-18 08:55

Post written by Aiśwarya KK (Aish)

Kerala is a state on the southwest coast of the Indian subcontinent. Those who attended Akademy 2019 may already know it, or at least have heard the name once, because Timothée and I gave a talk there on the topic “GCompris in Kerala-Part 2”. This is also where I met Bhavisha and we made a good connection. (I really miss in-person Akademy!)
It was Bhavisha who told me about the nice idea of KDE Networks last year. And she motivated me to join one of the meetings. Inspired by the discussions, I contacted Subin, coordinator of the KDE Malayalam translation team (Malayalam is the language of Kerala), to inform him about the program. We decided to start KDE Network Kerala, and he started to add members. We have Sreeram, Kannan and Akhil on board now.

Kerala is fertile ground for Free/Libre Software. If you want to know a more detailed history, please see the article by Sasi Kumar. KDE hasn't yet gained the momentum it deserves there, but various KDE products are well integrated into the curriculum of public schools, such as GCompris, Kpaint, KStars, etc. Apart from those, Kdenlive is used as a tool in the creative work of school children, Krita is becoming famous in universities, and the Plasma desktop is used by a daily newspaper.

Our strategy for reaching more users and growing the community is through the already-famous applications. But we felt that we were not well enough equipped to reach more people, so in the first half of 2021 we decided to fix that. We are making the Plasma workspace localization as good and complete as we can, and we decided to make GCompris and Krita tutorials in Malayalam. Our work is not restricted to that, though: we haven't missed any occasion to talk about and spread the news of KDE and its products. Here are some important accomplishments so far:

• The team has decided to use some Malayalam channels for people in Kerala to contact the KDE Network Kerala team. Links for KDE Malayalam channels are Matrix: #kde-ml:poddery.com and Telegram: https://t.me/kde_ml

• Malayalam and Assamese localization teams have joined hands together; both are on the same server now.

• GCompris was featured during Pehia Annual Summit 2021 conducted by Pehia foundation, a community based non-profit working towards the cause of bridging the gender gap in technology with a primary focus on computer science & programming in Kerala.

• Subin gave a talk on January 22nd 2021 about FOSS at the College of Engineering Trivandrum, where he discussed about KDE too. Thanks to Aniqa and Paul for collecting all the promo documents in one place which we can easily refer to and use.

• Aiswarya and Timothée participated in the ICEFOSS conference to talk about GCompris.

• Public schools in the state of Kerala in India are being equipped to use the Qt version of GCompris. We will officially announce it later when the schools start using it.

• Plasma workspace Malayalam translation is 79% complete.

See you next time!

Categories: FLOSS Project Planets

Skrooge 2.25.0 released

Planet KDE - Sun, 2021-04-18 07:57

The Skrooge Team announces release 2.25.0 of its popular personal finances manager based on KDE Frameworks.

  • Correction bug 429356: Please make qtwebengine dependency optional
  • Correction bug 430535: % increase calculation in dashboard account widgets
  • Correction bug 430242: changing filter criteria while grouping Operations by Category expands a different set of categories
  • Correction bug 432423: Skrooge crashes when copying tables in categories with subcategories unfold
  • Correction bug 433514: Currency download from Yahoo uses "low" instead of "close"
  • Correction bug 435330: Sub operations view's tab confusingly looks identical to just "Operations"
  • Correction bug 435847: Bookmarks are not created with tab name and icon
  • Correction bug 422273: strange appearance of Skrooge bank icon pop-up menu if no scrollbar
  • Correction: Column combo box not visible in report in dashboard
  • Feature: opt-out accounts from the "accounts" widget in the Dashboard (see https://forum.kde.org/viewtopic.php?f=210&t=165735)
  • Feature: New dashboard layouts: by columns and with layouts in layouts
Get it, Try it, Love it...

Grab Skrooge from your distro's packaging system. If it is not yet included in repositories, go get it from our website, and bug your favorite distro for inclusion.

Now, you can also try the AppImage or the Flatpak!

Get Involved

To enhance Skrooge, we need you! There are many ways you can help us:

  • Submit bug reports
  • Discuss on the KDE forum
  • Contact us, give us your ideas, explain to us where we can improve...
  • Can you design good interfaces? Can you code? Do you have webmaster skills? Are you a billionaire looking for a worthy investment? We will be very pleased to welcome you to the Skrooge team; contact us!
Categories: FLOSS Project Planets

Russell Coker: IMA/EVM Certificates

Planet Debian - Sun, 2021-04-18 07:56

I’ve been experimenting with IMA/EVM. Here is the Sourceforge page for the upstream project [1]. The aim of that project is to check hashes and maybe public key signatures on files before performing read/exec type operations on them. It can be used as the next logical step from booting a signed kernel with TPM. I am a long way from getting that sort of thing going; just getting the kernel to boot and load keys is my current challenge, and it isn’t helped by the lack of documentation on error messages. This blog post started as a way of documenting the error messages so future people who google errors can get a useful result. I am not trying to document everything, just help people get through some of the first problems.

I am using Debian for my work, but some of this will apply to other distributions (particularly the kernel error messages). The Debian distribution has the ima-evm-utils package but no other support for IMA/EVM. To get this going on Debian you need to compile your own kernel with IMA support and then boot it with kernel command-line options to enable IMA; in recent kernels that includes “lsm=integrity” as a mandatory requirement to prevent a kernel Oops after mounting the initrd (there is already a patch to fix this).

If you want to just use IMA (not get involved in development) then a good option would be to use RHEL (here is their documentation) [2] or SUSE (here is their documentation) [3]. Note that both RHEL and SUSE use older kernels so their documentation WILL lead you astray if you try and use the latest kernel.org kernel.

The Debian initrd

I created a script named /etc/initramfs-tools/hooks/keys with the following contents to copy the key(s) from /etc/keys to the initrd where the kernel will load it/them. The kernel configuration determines whether x509_evm.der or x509_ima.der (or maybe both) is loaded. I haven’t yet worked out which key is needed when.

#!/bin/bash
mkdir -p ${DESTDIR}/etc/keys
cp /etc/keys/* ${DESTDIR}/etc/keys

Making the Keys

#!/bin/sh
GENKEY=ima.genkey

cat << __EOF__ >$GENKEY
[ req ]
default_bits = 1024
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = v3_usr

[ req_distinguished_name ]
O = `hostname`
CN = `whoami` signing key
emailAddress = `whoami`@`hostname`

[ v3_usr ]
basicConstraints=critical,CA:FALSE
#basicConstraints=CA:FALSE
keyUsage=digitalSignature
#keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
#authorityKeyIdentifier=keyid,issuer
__EOF__

openssl req -new -nodes -utf8 -sha1 -days 365 -batch -config $GENKEY \
        -out csr_ima.pem -keyout privkey_ima.pem
openssl x509 -req -in csr_ima.pem -days 365 -extfile $GENKEY -extensions v3_usr \
        -CA ~/kern/linux-5.11.14/certs/signing_key.pem -CAkey ~/kern/linux-5.11.14/certs/signing_key.pem -CAcreateserial \
        -outform DER -out x509_evm.der

To get the below result I used the above script to generate a key, it is the /usr/share/doc/ima-evm-utils/examples/ima-genkey.sh script from the ima-evm-utils package but changed to use the key generated from kernel compilation to sign it. You can copy the files in the certs directory from one kernel build tree to another to have the same certificate and use the same initrd configuration. After generating the key I copied x509_evm.der to /etc/keys on the target host and built the initrd before rebooting.

[    1.050321] integrity: Loading X.509 certificate: /etc/keys/x509_evm.der
[    1.092560] integrity: Loaded X.509 cert 'xev: etbe signing key: 99d4fa9051e2c178017180df5fcc6e5dbd8bb606'

Errors

Here are some of the kernel error messages I received along with my best interpretation of what they mean.

[ 1.062031] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[ 1.063689] integrity: Problem loading X.509 certificate -74

Error -74 means -EBADMSG, which means there’s something wrong with the certificate file. I have got that from /etc/keys/x509_ima.der not being in der format and I have got it from a der file that contained a key pair that wasn’t signed.

[    1.049170] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[    1.093092] integrity: Problem loading X.509 certificate -126

Error -126 means -ENOKEY, so the key wasn’t in the file or the key wasn’t signed by the kernel signing key.

[ 1.074759] integrity: Unable to open file: /etc/keys/x509_evm.der (-2)

Error -2 means -ENOENT, so the file wasn’t found on the initrd. Note that it does NOT look at the root filesystem.
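A quick way to translate these negated kernel return codes is Python's errno module (a minimal sketch of my own; the names shown are for Linux, not something from the post itself):

```python
import errno
import os

# The kernel integrity messages report negated errno values, e.g. -74.
for code in (74, 126, 2):
    name = errno.errorcode.get(code, "unknown")
    print(f"-{code}: {name}: {os.strerror(code)}")
```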


Categories: FLOSS Project Planets

Junichi Uekawa: Rewrote my pomodoro technique timer.

Planet Debian - Sun, 2021-04-18 04:56
Rewrote my pomodoro technique timer. I've been iterating on how I operate and focus. Too much focus exhausts me. I'm trying out Focusmate's method of 50 minutes of focus time and a 10-minute break. Here is a web app that tries to start the timer at the top of the hour and starts the break in the last 10 minutes.
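The scheduling rule described (focus until the last 10 minutes of the hour, then break) can be sketched like this; this is my illustration, not the author's actual timer code:

```python
import datetime

def phase(now: datetime.datetime) -> str:
    """Focus for the first 50 minutes of each hour, break for the last 10."""
    return "focus" if now.minute < 50 else "break"

print(phase(datetime.datetime(2021, 4, 18, 9, 10)))  # focus
print(phase(datetime.datetime(2021, 4, 18, 9, 55)))  # break
```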

Categories: FLOSS Project Planets

Talk Python to Me: #312 Python Apps that Scale to Billions of Users

Planet Python - Sun, 2021-04-18 04:00
How do you build Python applications that can handle literally billions of requests? It has certainly been done to great success at places like YouTube (handling 1M requests/sec) and Instagram, as well as for internal pricing APIs at places like PayPal and other banks.

While Python can be fast at some operations and slow at others, it's generally not so much about raw language performance as it is about building an architecture for this scale. That's why it's great to have Julian Danjou on the show today. We'll dive into his book "The Hacker's Guide to Scaling Python" as well as some of the performance work he's doing over at Datadog.

Links from the show:

  • Julian on Twitter: https://twitter.com/juldanjou
  • Scaling Python Book: https://scaling-python.com/
  • DD Trace production profiling code: https://github.com/DataDog/dd-trace-py
  • Futurist package: https://pypi.org/project/futurist/
  • Tenacity package: https://tenacity.readthedocs.io/en/latest/
  • Cotyledon package: https://cotyledon.readthedocs.io/en/latest/
  • Locust.io load testing: https://locust.io/
  • Datadog: https://talkpython.fm/datadog
  • daiquiri package: https://daiquiri.readthedocs.io/en/latest
  • YouTube live stream video: https://www.youtube.com/watch?v=MEyxf7fOoxg

Sponsors:

  • 45Drives: https://talkpython.fm/45drives
  • Talk Python Training: https://talkpython.fm/training
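The Tenacity package mentioned in the show notes wraps up the classic retry-with-exponential-backoff pattern used when scaling services. A minimal stdlib-only sketch of the underlying idea (my illustration, not Tenacity's actual API):

```python
import random
import time

def retry(func, attempts=5, base_delay=0.1):
    """Call func, retrying with exponential backoff and jitter on failure."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt * random.random())

calls = []
def flaky():
    """Simulated transient failure: fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # succeeds on the third call
```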
Categories: FLOSS Project Planets

Bits from Debian: Debian Project Leader election 2021, Jonathan Carter re-elected.

Planet Debian - Sun, 2021-04-18 04:00

The voting period and tally of votes for the Debian Project Leader election has just concluded, and the winner is Jonathan Carter!

455 of 1,018 Developers voted using the Condorcet method.
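For the curious, a plain Condorcet winner is a candidate who beats every other candidate in head-to-head comparisons; Debian's actual tally uses the Schulze refinement of this. A toy sketch with made-up ballots (candidate names and ballots are purely illustrative):

```python
def condorcet_winner(ballots):
    """Return the candidate that beats every other one head-to-head, or None.
    Each ballot is a full ranking of all candidates, best first."""
    candidates = set(ballots[0])  # assumes every ballot ranks all candidates

    def beats(a, b):
        # a beats b if a strict majority of ballots rank a above b
        a_wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        return a_wins > len(ballots) - a_wins

    for c in candidates:
        if all(beats(c, o) for o in candidates if o != c):
            return c
    return None

ballots = [["A", "B", "C"]] * 3 + [["B", "A", "C"]] * 2
print(condorcet_winner(ballots))  # A
```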

More information about the results of the voting is available on the Debian Project Leader Elections 2021 page.

Many thanks to Jonathan Carter and Sruthi Chandran for their campaigns, and to our Developers for voting.

Categories: FLOSS Project Planets

Fabio Zadrozny: PyDev 8.3.0 (Java 11, Flake8, Code-completion LRU, issue on Eclipse 4.19)

Planet Python - Sun, 2021-04-18 00:53

PyDev 8.3.0 is now available!

Let me start with some warnings here:

First, PyDev now requires Java 11. I believe that Java 11 is pretty standard nowadays and the latest Eclipse also requires Java 11 (if you absolutely need Java 8, please keep using PyDev 8.2.0 -- or earlier -- indefinitely, otherwise, if you are still using Java 8, please upgrade to Java 11 -- or higher).

Second, Eclipse 2021-03 (4.19) is broken and cannot be used with any version of PyDev due to https://bugs.eclipse.org/bugs/show_bug.cgi?id=571990, so, if you use PyDev, please stick to Eclipse 4.18 (or get a newer version if available). The latest version of PyDev warns about this; older versions will not complain, but some features will not function properly, so please skip Eclipse 4.19 if you use PyDev.

Now, on to the goodies ;)

On the linters front, the configurations for the linters can now be saved to the project or user settings, and flake8 now has a much more flexible configuration UI which allows changing the severity of any error.

A new option which allows all comments to be added at a single indent was added (and this is now the default).

The code-completion and quick fixes which rely on automatically adding some import will now cache the selection, so that once a given token's import is chosen, that choice is saved and reused the next time it is asked for (for instance, if you just resolved Optional to typing.Optional, that will be the first choice the next time around).
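The "LRU" in the release title refers to exactly this kind of cache. A minimal sketch of the idea (my illustration, not PyDev's actual implementation): remember the last choice per token and evict the least recently used entries when full.

```python
from collections import OrderedDict

class ChoiceCache:
    """Remember the user's last import choice per token, evicting the
    least recently used entries once the cache is full."""

    def __init__(self, maxsize=64):
        self._data = OrderedDict()
        self._maxsize = maxsize

    def record(self, token, choice):
        self._data.pop(token, None)
        self._data[token] = choice
        if len(self._data) > self._maxsize:
            self._data.popitem(last=False)  # drop least recently used

    def preferred(self, token):
        choice = self._data.get(token)
        if choice is not None:
            self._data.move_to_end(token)  # mark as recently used
        return choice

cache = ChoiceCache()
cache.record("Optional", "typing.Optional")
print(cache.preferred("Optional"))  # typing.Optional
```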

Environment variables are now properly supported in .pydevproject. The expected format is: ${env_var:VAR_NAME}.
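For illustration, such a variable could appear in a source-path entry of a .pydevproject file. The fragment below is a sketch: the PROJECT_SOURCE_PATH property name is the usual source-path key, but the paths and the EXTRA_PYTHONPATH variable are made up for this example.

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?eclipse-pydev version="1.0"?><pydev_project>
  <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
    <path>/${PROJECT_DIR_NAME}/src</path>
    <path>${env_var:EXTRA_PYTHONPATH}/lib</path>
  </pydev_pathproperty>
</pydev_project>
```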


Thanks to Luis Cabral, who is now helping on the project, for doing many of these improvements (and to the patrons at https://www.patreon.com/fabioz who enabled it to happen).


Categories: FLOSS Project Planets

hussainweb.me: What I look for in Drupal contrib modules

Planet Drupal - Sat, 2021-04-17 23:56
As of this writing, there are 47,008 modules available on Drupal.org. Even if you filter for Drupal 8 or Drupal 9, there is still an impressive number of modules available (approximately 10,000 and 5,000 respectively). Chances are that you would find just the module you are looking for to build what you want. In fact, chances are that you will find more than one module to do what you want. How do you decide which module to pick?
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppAPT 0.0.7: Micro Update

Planet Debian - Sat, 2021-04-17 12:33

A new version of the RcppAPT package interfacing from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN yesterday. This comes a good year after the previous maintenance update for release 0.0.6.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

The maintenance release responds to a call for updates from CRAN asking that all implicit dependencies on the packages markdown and rmarkdown be made explicit via a Suggests: entry. Two of the many packages I maintain were part of the (large!) list in the CRAN email, and this is one of them. While making the update, we refreshed two other packaging details.

Changes in version 0.0.7 (2021-04-16)
  • Add rmarkdown to Suggests: as an implicit conditional dependency

  • Switch vignette to minidown and its water framework, add minidown to Suggests as well

  • Update two URLs in the README.md file

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Chris Lamb: Tour d'Orwell: Wallington

Planet Debian - Sat, 2021-04-17 10:56

Previously in George Orwell travel posts: Sutton Courtenay, Marrakesh, Hampstead, Paris, Southwold & The River Orwell.


Wallington is a small village in Hertfordshire, approximately fifty miles north of London and twenty-five miles from the outskirts of Cambridge. George Orwell lived at No. 2 Kits Lane, better known as 'The Stores', on a mostly-permanent basis from 1936 to 1940, but he would continue to journey up from London on occasional weekends until 1947.

His first reference to The Stores can be found in early 1936, where Orwell wrote from Lancashire during research for The Road to Wigan Pier to lament that he would very much like "to do some work again — impossible, of course, in the [current] surroundings":

I am arranging to take a cottage at Wallington near Baldock in Herts, rather a pig in a poke because I have never seen it, but I am trusting the friends who have chosen it for me, and it is very cheap, only 7s. 6d. a week [£20 in 2021].

For those not steeped in English colloquialisms, "a pig in a poke" is an item bought without seeing it in advance. In fact, one general insight that may be drawn from reading Orwell's extant correspondence is just how much he relied on a close network of friends, belying the lazy and hagiographical picture of an independent and solitary figure. (Still, even Orwell cultivated this image at times, such as in a patently autobiographical essay he wrote in 1946. But note the off-hand reference to varicose veins here, for they would shortly re-appear as a symbol of Winston's repressed humanity in Nineteen Eighty-Four.)

Nevertheless, the porcine reference in Orwell's idiom is particularly apt, given that he wrote the bulk of Animal Farm at The Stores — his 1945 novella, of course, portraying a revolution betrayed by allegorical pigs. Orwell even drew inspiration for his 'fairy story' from Wallington itself, principally by naming the novel's farm 'Manor Farm', just as it is in the village. But the allusion to the purchase of goods is just as appropriate, as Orwell returned The Stores to its former status as the village shop, even going so far as to drill peepholes in a door to keep an Orwellian eye on the jars of sweets. (Unfortunately, we cannot complete a tidy circle of references, as whilst it is certainly Napoleon — Animal Farm's substitute for Stalin — who is quoted as describing Britain as "a nation of shopkeepers", it was actually the maraisard Bertrand Barère who first used the phrase).


"It isn't what you might call luxurious", he wrote in typical British understatement, but Orwell did warmly emote on his animals. He kept hens in Wallington (perhaps even inspiring the opening line of Animal Farm: "Mr Jones, of the Manor Farm, had locked the hen-houses for the night, but was too drunk to remember to shut the pop-holes.") and a photograph even survives of Orwell feeding his pet goat, Muriel. Orwell's goat was the eponymous inspiration for the white goat in Animal Farm, a decidedly under-analysed character who, to me, serves to represent an intelligentsia that is highly perceptive of the declining political climate but, seemingly content with merely observing it, does not offer any meaningful opposition. Muriel's aesthetic of resistance, particularly in her reporting on the changes made to the Seven Commandments of the farm, thus rehearses the well-meaning (yet functionally ineffective) affinity for 'fact checking' which proliferates today. But I digress.

There is a tendency to "read Orwell backwards", so I must point out that Orwell wrote several other works whilst at The Stores as well. This includes his Homage to Catalonia, his aforementioned The Road to Wigan Pier, not to mention countless indispensable reviews and essays as well. Indeed, another result of focusing exclusively on Orwell's last works is that we only encounter his ideas in their highly-refined forms, whilst in reality, it often took many years for concepts to fully mature — we first see, for instance, the now-infamous idea of "2 + 2 = 5" in an essay written in 1939.

This is important to understand for two reasons. Although the ostentatiously austere Barnhill might have housed the physical labour of its writing, it is refreshing to reflect that the philosophical heavy-lifting of Nineteen Eighty-Four may have been performed in a relatively undistinguished North Hertfordshire village. But perhaps more importantly, it emphasises that Orwell was just a man, and that any of us is fully capable of equally significant insight, with — to quote Christopher Hitchens — "little except a battered typewriter and a certain resilience."


The red commemorative plaque not only limits Orwell's tenure to the time he was permanently in the village, it omits all reference to his first wife, Eileen O'Shaughnessy, whom he married in the village church in 1936.

Wallington's Manor Farm, the inspiration for the farm in Animal Farm.

The lower sign enjoins the public to inform the police "if you see anyone on the [church] roof acting suspiciously". Non-UK-residents may be surprised to learn about the systematic theft of lead.
Categories: FLOSS Project Planets

Steve Kemp: Having fun with CP/M on a Z80 single-board computer.

Planet Debian - Sat, 2021-04-17 06:45

In the past, I've talked about building a Z80-based computer. I made some progress towards that goal, in the sense that I took the initial (trivial steps) towards making something:

  • I built a clock-circuit.
  • I wired up a Z80 processor to the clock.
  • I got the thing running an endless stream of NOP instructions.
    • No RAM/ROM connected, tying all the bus-lines low, meaning every attempted memory-read returned 0x00 which is the Z80 NOP instruction.

But then I stalled, repeatedly, at designing an interface to RAM and ROM, so that it could actually do something useful. Over the lockdown I've been in two minds about getting sucked back down the rabbit-hole, so I compromised. I did a bit of searching on tindie, and similar places, and figured I'd buy a Z80-based single board computer. My requirements were minimal:

  • It must run CP/M.
  • The source-code to "everything" must be available.
  • I want it to run standalone, and connect to a host via a serial-port.

With those goals there were a bunch of boards to choose from, rc2014 is the standard choice - a well engineered system which uses a common backplane and lets you build mini-boards to add functionality. So first you build the CPU-card, then the RAM card, then the flash-disk card, etc. Over-engineered in one sense, extensible in another. (There are some single-board variants to cut down on soldering overhead, at a cost of less flexibility.)

After a while I came across https://8bitstack.co.uk/, which describes a simple board called the Z80 playground.

The advantage of this design is that it loads code from a USB stick, making it easy to transfer files to/from it, without the need for a compact flash card, or similar. The downside is that the system has only 64K RAM, meaning it cannot run CP/M 3, only 2.2. (CP/M 3.x requires more RAM, and a banking/paging system setup to swap between pages.)

When the system boots it loads code from an EEPROM, which then fetches the CP/M files from the USB-stick, copies them into RAM and executes them. The memory map can be split so you either have ROM & RAM, or you have just RAM (after the boot the ROM will be switched off). To change the initial stuff you need to reprogram the EEPROM, after that it's just a matter of adding binaries to the stick or transferring them over the serial port.

In only a couple of hours I got the basic stuff working as well as I needed:

  • A z80-assembler on my Linux desktop to build simple binaries.
  • An installation of Turbo Pascal 3.00A on the system itself.
  • An installation of FORTH on the system itself.
    • Which is nice.
  • A couple of simple games compiled from Pascal
    • Snake, Tetris, etc.
  • The Zork trilogy installed, along with Hitchhikers guide.

I had some fun with a CP/M emulator to get my hand back in things before the board arrived, and using that I tested my first "real" assembly language program (cls to clear the screen), as well as got the hang of using the wordstar keyboard shortcuts as used within the turbo pascal environment.

I have some plans for development:

  • Add command-line history (page-up/page-down) for the CP/M command-processor.
  • Add paging to TYPE, and allow terminating with Q.

Nothing major, but fun changes that won't be too difficult to implement.

Since CP/M 2.x has no concept of sub-directories you end up using drives for everything. I implemented a "search-path" so that when you type "FOO" it will attempt to run "A:FOO.COM" if there is no matching file on the current drive. That's a much nicer user-experience.

I also wrote some Z80-assembly code to search all drives for an executable, if it is not found on the current drive and not already qualified (remember, CP/M doesn't have a concept of sub-directories). That's actually pretty useful:
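The drive-search logic described above can be sketched in Python (purely my illustration of the idea, not the actual Z80 implementation; the set of drive letters is an assumption):

```python
def resolve(command, current_drive, file_exists):
    """Find an executable CP/M-style: current drive first, then the other
    drives in order, like a search path.  file_exists is a callable taking
    a name like 'A:FOO.COM' and returning bool."""
    if ":" in command:  # already qualified, e.g. "B:FOO"
        name = command + ".COM"
        return command if file_exists(name) else None
    candidates = [current_drive] + [d for d in "ABCDEFGHIJKLMNOP" if d != current_drive]
    for drive in candidates:
        name = f"{drive}:{command}.COM"
        if file_exists(name):
            return name
    return None

files = {"A:FOO.COM", "C:BAR.COM"}
print(resolve("FOO", "C", files.__contains__))  # A:FOO.COM
```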


I've also written some other trivial assembly language tools, which was surprisingly relaxing. Especially once I got back into the zen mode of optimizing for size.

I forked the upstream repository, mostly to tidy up the contents, rather than because I want to go into my own direction. I'll keep the contents in sync, because there's no point splitting a community even further - I guess there are fewer than 100 of these boards in the wild, probably far far fewer!

Categories: FLOSS Project Planets

Building Android release packages with KDE Craft

Planet KDE - Sat, 2021-04-17 05:45

One of the probably biggest gaps to make KDE Itinerary widely usable is the fact it is not available as a released package in any of the major APK stores such as F-Droid or Google Play. Unlike on Linux platforms there are no distributors handling this for us on Android-based platforms, we need to take care of that ourselves.

Current Situation

So far there are only a few KDE applications with release packages for Android, namely KDE Connect, which is special in the technology and build system it uses compared to most of our other applications, and Krita, which uses a custom build script.

On the other hand we do have the nightly debug builds, provided via a separate F-Droid repository, which are built by a common infrastructure for more than 25 apps. Besides tracking the latest development branches of the apps and their dependencies, those are fairly heavy packages, as they are all based on a common base build and thus include a lot of unnecessary content. At the same time they are not complete either, and for example miss translations.

None of the above is immediately useful for getting to KDE Itinerary release APKs, so we need something else. Ideally this would:

  • Reuse as much as possible of already existing build infrastructure and build metadata, to minimize maintenance and scale easier to more KDE mobile apps in the future.

  • Provide full flexibility to fine-tune and tweak the build of the app and all its dependencies. This is necessary in order to strip out everything we don’t need to get the package to an acceptable size. For dependencies outside our control we might need the ability to apply patches on top of their official releases.

Possible Approaches

Since we want to reuse something we already have, let’s review the options:

  • The existing nightly debug APK infrastructure: already deployed and working for Android builds, but very much focused on building the latest branches of everything. Support for using release tarballs, patches, etc. would need to be added.

  • Adapting Krita’s solution: already working and offering sufficient flexibility for customization (it seems to bypass androiddeployqt which in that regard is appealing), however there’s likely little room for generalizing things sufficiently for reuse without ending up with something entirely different.

  • kdesrc-build: well known in the community, works for Android builds and is driven by config files containing dependencies and their build configurations. However it's primarily focused on building code from version control systems, and has no support for patches.

  • Craft: also well known in the community, and driving the existing release package builds for Windows, macOS and Linux AppImages. It can build release tarballs and version control checkouts, patch those if necessary, and can be configured in great detail, and a large set of relevant build recipes already exists. There was however no support for Android yet, or any cross-compiled platform for that matter.

  • Yocto: the ultimate level of flexibility and cross-compilation support. That comes at a high complexity cost though, and with a steep learning curve. It's also primarily aimed at embedded Linux; whether adding Android as a target platform would be feasible I don't know.

Of course there's a spoiler in the title already: based on those options I picked Craft as the first one to explore further.

Android Support for Craft

On its currently supported desktop platforms, Craft basically sets up the entire development environment from scratch, compiling missing host tools itself where necessary. Doing the same for Android (or any other cross-compiled platform) would be a massive task, basically reviewing every single platform check and every single build job whether it refers to the host or the target platform.

That’s why Craft for Android currently doesn’t do the complete setup, but assumes it’s running inside a properly set up environment with all required host tools, in particular the KDE Android SDK Docker image which we already have for this purpose. This simplifies the necessary changes on Craft considerably, from adding full cross-compilation support to merely adding yet another Unix-like platform.

All of the required changes have passed review by now, basically falling into two categories:

  • Configuring build systems to pick up the desired cross-compilation toolchain. This has been done for CMake and Autotools builds so far, as well as some custom build systems relevant for us (OpenSSL, Qt).
  • Adjusting package dependencies and build options for things not being available on Android, or being otherwise different there. These are typically very small and straightforward changes.
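For the first category, the core of the change amounts to pointing the build system at the NDK’s toolchain file. A minimal Python sketch of what that means for CMake (the helper function and its parameters are illustrative, not Craft’s actual API; the CMake variables themselves are the standard Android NDK ones):

```python
def android_cmake_args(ndk_path, abi="arm64-v8a", api_level=21):
    """Build the CMake arguments for an Android cross-compilation.

    Uses the toolchain file shipped with the Android NDK;
    CMAKE_TOOLCHAIN_FILE, ANDROID_ABI and ANDROID_PLATFORM are the
    standard CMake variables for NDK-based cross builds.
    """
    return [
        f"-DCMAKE_TOOLCHAIN_FILE={ndk_path}/build/cmake/android.toolchain.cmake",
        f"-DANDROID_ABI={abi}",
        f"-DANDROID_PLATFORM=android-{api_level}",
    ]

print(android_cmake_args("/opt/android-ndk"))
```

Once the toolchain file is in play, CMake-based packages mostly cross-compile unchanged, which is why the per-package adjustments in the second category stay small.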

Additionally, there were a number of changes to enable more fine-grained control over build options or to allow static builds. Those changes are generally useful and not specific to Android.

With all that done, I managed to build working APKs of at least KDE Itinerary and KTrip for four different architectures (ARM 32/64bit, x86 32/64bit), with working translations. Other applications likely need additional fixes, at least if they use additional dependencies.
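The four architectures map to the standard Android ABI names. A small sketch of iterating over them to produce one package per ABI (the file naming scheme here is hypothetical, purely for illustration):

```python
# The four Android ABIs mentioned above: ARM 32/64 bit and x86 32/64 bit.
ANDROID_ABIS = ["armeabi-v7a", "arm64-v8a", "x86", "x86_64"]

def apk_names(app, abis=ANDROID_ABIS):
    """Produce one (hypothetical) package file name per target ABI."""
    return [f"{app}-{abi}.apk" for abi in abis]

print(apk_names("ktrip"))
```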

Release Builds on Binary Factory

Being able to build a package locally isn’t good enough though; it needs to be automated on central infrastructure and has to work reliably there. For this we can however reuse many existing parts of KDE’s Binary Factory:

  • The KDE Android SDK Docker image provides us with a reproducible environment that has all the needed host tools installed, and that is in use for CI and nightly debug package builds. With its latest build it already contains everything we need for Craft-based release builds.

  • The orchestration of Windows, macOS and Linux AppImage builds on Binary Factory can be extended with minimal changes to cover additional platforms and configurations. Patches for this are currently in review.

  • For signing and publishing packages, we should be able to reuse the existing system for signing and publishing nightly debug packages; the necessary changes for this have yet to be implemented though.


Building and publishing packages is only part of the job though, we also need to review the packaged content, and where possible, remove unneeded things to keep the package sizes manageable. We also need to collect the required metadata for the app stores.

Since this is getting too long already, that’s going to be the subject of another post next week.
