Planet Debian

Planet Debian - https://planet.debian.org/

Steinar H. Gunderson: When your profiler fools you

19 hours 34 min ago

If you've been following my blog, you'll know about Nageru, my live video mixer, and Futatabi, my instant replay program with slow motion. Nageru and Futatabi both work on the principle that the GPU should be responsible for all the pixel pushing—it's just so much better suited than the CPU—but to do that, the GPU first needs to get at the data.

Thus, in Nageru, pushing the data from the video card to the GPU is one of the main CPU drivers. (The CPU also runs the UI, does audio processing, runs an embedded copy of Chromium if needed—we don't have full GPU acceleration there yet—and not the least encodes the finished video with x264 if you don't want to use Quick Sync for that.) It's a simple task; take two pre-generated OpenGL textures (luma and chroma) with an associated PBO, take the frame that the video capture card has DMAed into system RAM, and copy it while splitting luma from chroma. It goes about as fast as memory bandwidth will allow.

However, when you also want to run Futatabi, it runs on a different machine and also needs to get at the video somehow. Nageru solves this by asking Quick Sync (through VA-API) to encode it to JPEG, and then sending it over the network. Assuming your main GPU is an NVIDIA one (which gives oodles more headroom for complicated things than the embedded Intel GPU), this means you now need an extra copy, and that's when things started to get hairy for me.

To simplify a long and confusing search, what I found was that with five inputs (three 1080p, two 720p), Nageru would use around two cores (200% CPU in top), and with six (an additional 1080p), it would go to 300%. (This is on a six-core machine.) This made no sense, so I pulled up “perf record”, which said… nothing. The same routines and instructions were hot (mostly the memcpy into the NVIDIA GPU's buffers); there was no indication of anything new taking time. How could it be?

Eventually I looked at top instead of perf, and saw that some of the existing threads went from 10% to 30% CPU. In other words, everything just became uniformly slower. I had a hunch, and tried “perf stat -ddd” instead. One value stood out:

3 587 503 LLC-load-misses # 94,72% of all LL-cache hits (71,91%)

Aha! It was somehow falling off a cliff in L3 cache usage. It's a bit like going out of RAM and starting to swap, just more subtle.
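For reference, gathering those detailed counters from an already-running process looks roughly like this (a hedged example, not taken from the post; the exact invocation and process name are illustrative):

# Attach to the running mixer for ten seconds and print the detailed counter set:
perf stat -ddd -p "$(pidof nageru)" -- sleep 10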

So, what was using so much extra cache? I'm sure someone extremely well-versed in perf or Intel VTune would figure out a way to look at all the memory traffic with PEBS or something, but I'm not a perf expert, so I went for the simpler choice: Remove things. E.g., if you change a memcpy to a memset(dst, 0, len), the GPU will still be encoding a frame, and you'll still be writing things to memory, but you won't be reading anymore. I noticed a really confusing thing: Occasionally, when I did less work, CPU usage would go up. E.g., have a system at 2.4 cores used, remove some stuff, go up to 3.1 cores! Even worse, occasionally after start, it would be at e.g. 2.0 cores, then ten seconds later, it would be at 3.0. I suspected overheating, but even stopping and waiting ten minutes would not be enough to get it reliably down to 2.0 again.

It eventually turned out (again through perf stat) that power saving was getting the better of me. I hadn't expected it to kick in, but seemingly it did, and top doesn't compensate in any way. Change from the default powersave governor to performance (I have electric heating anyway, and it's still winter up here), and tada... My original case was now down from 3.0 to 2.6 cores.
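On Debian, that governor switch can be done with cpupower (from the linux-cpupower package) or directly through sysfs; a minimal sketch, not from the original post:

# Switch all cores from the default "powersave" governor to "performance":
sudo cpupower frequency-set -g performance

# Equivalent sysfs route; tee writes the value to every CPU's governor file:
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor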

However, this still wasn't good enough, so I had to do some real (if simple) optimization. Some rearchitecting saved a copy; instead of the capture card receiver thread memcpy-ing into a buffer and the MJPEG encoder thread memcpy-ing it from there to the GPU's memory-mapped buffers, I let the capture card receiver memcpy it directly over instead. (This wasn't done originally due to some complications around locking and such; it took a while to find a reasonable architecture. Of course, when you look at the resulting patch, it doesn't look hard at all.)

This took me down from 3.0 to 2.1 cores for the six-camera case. Success! But still too much; the cliff was still there. It was pretty obvious that it was stalling on something in the six-camera case:

 8 552 429 486      cycle_activity.stalls_mem_any    # 5 inputs
17 082 914 261      cycle_activity.stalls_mem_any    # 6 inputs

I didn't get a whole lot more L3 misses, though; they seemed to just get much slower (occasionally as much as 1200 cycles on average, which is about four times as much as normal). Unfortunately, you have zero insight into what the Intel GPU is doing to your memory bus, so it's not easy to know if it's messing up your memory or what. Turning off the actual encoding didn't help much, though; it was really the memcpy into them that hurt. And since it was into uncached (write-combined) memory, doing things like non-temporal writes didn't help at all. Neither did combining the two copies into one loop (read once, write twice). Or really anything else I could think of.

Eventually someone on ##intel-media came up with a solution; instead of copying directly into the memory-mapped buffers (vaDeriveImage), copy into a VAImage in system RAM, and then let the GPU do the copy. For whatever reason, this helped a lot; down from 2.1 to 1.4 cores! And 0.6 of those cores were non-VA-API related things. So, another patch, and happiness all around.

At this point, I wanted to see how far I could go, so I borrowed more capture inputs and set up my not-so-trusty SDI matrix to create an eight-headed capture monster:

After a little more tuning of freelist sizes and such, it could seemingly sustain eight 1080p59.94 MJPEG inputs, or 480 frames per second if you wanted to—at around three cores again. Now the profile was starting to look pretty different, too, so there were more optimization opportunities, resulting in this pull request (helping ~15% of a core). Also, setting up the command buffers for the GPU copy seemingly takes ~10% of a core now, but I couldn't find a good way of improving it. Most of the time now is spent in the original memcpy to NVIDIA buffers, and I don't think I can do much better than that without getting the capture card to peer-to-peer DMA directly into the GPU buffers (which is a premium feature you'll need to buy Quadro cards for, it seems). In any case, my original six-camera case now is a walk in the park (leaving CPU for a high-quality x264 encode), which was the goal of the exercise to begin with.

So, lesson learned: Sometimes, you need to look at the absolutes, because the relative times (which is what you usually want) can fool you.


Jonathan Carter: Running for DPL

Mon, 2019-03-18 23:30

I am running for Debian Project Leader. My official platform is published on the Debian website (it currently looks a bit weird, but a fix is pending publication), with a more readable version available on my website as well as a plain-text version.

Shortly after I finished writing the first version of my platform page, I discovered an old talk from Ian Murdock at Microsoft Research where he said something that resonated well with me, and I think also with my platform. Paraphrasing him:

You don’t want design by committee, but you want to tap in to the wisdom of the crowd. Lay out a vision for solving the problem, but be flexible enough to understand that you don’t have all the answers yourself, that other people are equally if not more intelligent than you are, and collectively, the crowd is the most intelligent of all.

– paraphrasing of Ian Murdock during a talk at Microsoft Research.

I’m feeling oddly calm about all of this and I like it. It has been a good exercise taking the time to read what everyone has said across all the mediums and trying to piece together all the pertinent parts and form a workable set of ideas.

I’m also glad that we have multiple candidates; I hope that it will inspire some thought, ideas and actions around the topics that occupy our collective head space.

The campaign period remains open until 2019-04-06. During this period, you can ask me or any of the other candidates about how they envision their term as DPL.

There are 5 candidates in total; here’s the full list of candidates (in order of self-nomination, with links to platforms as available):


Laura Arjona Reina: A weekend for the Debian website and friends

Mon, 2019-03-18 16:05

Last weekend (15-17 March 2019) some members of the Debian web team met at my place in Madrid to advance work together in what we call a Debian Sprint. A report will be published in the following days, but I want to say thank you to everybody who made this meeting happen.

We shared 2-3 very nice days, agreed on many topics, and started to design a new homepage focused on newcomers (since a Debianite usually just goes to the subpage they need) and showing that Debian, the operating system, is made by a community of people. We are committed to simplifying the content and the structure of www.debian.org, we have fixed some bugs already, and we talked about many other aspects. We shared some time to get to know each other, and I think all of us became motivated to continue working on the website (because there is still a lot of work to do!) and to make it easy for new contributors to get involved too.

For me, a person who rarely finds a way to attend Debian events, it was amazing to meet in person people with whom I have shared work and mails and IRC chats, in some cases for years, and to offer my place for the meeting. I enjoyed preparing all the logistics a lot and I’m very happy that it went very well. Now I walk through my neighbourhood on my daily commute and every place is full of small memories of these days. Thanks, friends!


Jonathan Dowland: WadC 3.0

Mon, 2019-03-18 11:12

blockmap.wl being reloaded

A couple of weeks ago I released version 3.0 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

3.0 introduces more flexible randomness with rand; two new test maps (blockmap and bsp) that demonstrate approaches to random dungeon generation; some useful data structures in the library; better Hexen support and a bunch of other improvements.

Check the release notes for the full details.

Version 3.0 of WadC is dedicated to Lu (1972-2019). RIP.


Dirk Eddelbuettel: RQuantLib 0.4.8: Small updates

Sun, 2019-03-17 15:12

A new version 0.4.8 of RQuantLib reached CRAN and Debian. This release was triggered by a CRAN request for an update to the configure.ac script which was easy enough (and which, as it happens, did not result in changes in the configure script produced). I also belatedly updated the internals of RQuantLib to follow suit to an upstream change in QuantLib. We now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

In other news, we finally have a macOS binary package on CRAN. After several rather frustrating months of inaction on the pull request put together to enable this, it finally happened last week. Yay. So CRAN currently has an 0.4.7 macOS binary and should get one based on this release shortly. With Windows restored with the 0.4.7 release, we are in the best shape we have been in years. Yay and three cheers for Open Source and open collaboration models!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.8 (2019-03-17)
  • Changes in RQuantLib code:

    • Source code supports Boost shared_ptr and C++11 shared_ptr via QuantLib::ext namespace like upstream.
  • Changes in RQuantLib build system:

    • The configure.ac file no longer upsets R CMD check; the change does not actually change configure.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Dirk Eddelbuettel: Rcpp 1.0.1: Updates

Sun, 2019-03-17 09:49

Following up on the 10th anniversary and the 1.0.0 release, we are excited to share the news of the first update release 1.0.1 of Rcpp. It arrived at CRAN overnight, Windows binaries have already been built and I will follow up shortly with the Debian binary.

We had four years of regular bi-monthly releases leading up to 1.0.0, and have now taken four months since the big 1.0.0 one. Maybe three (or even just two) releases a year will establish itself as a natural cadence. Time will tell.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1598 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 152 in BioConductor release 3.8. Per the (partial) logs of CRAN downloads, we currently average 921,000 downloads a month.

This release features a number of different pull requests as detailed below.

Changes in Rcpp version 1.0.1 (2019-03-17)
  • Changes in Rcpp API:

    • Subsetting is no longer limited by an integer range (William Nolan in #920 fixing #919).

    • Error messages from subsetting are now more informative (Qiang and Dirk).

    • Shelter increases count only on non-null objects (Dirk in #940 as suggested by Stepan Sindelar in #935).

    • AttributeProxy::set() and a few related setters get Shield<> to ensure rchk is happy (Romain in #947 fixing #946).

  • Changes in Rcpp Attributes:

    • A new plugin was added for C++20 (Dirk in #927)

    • Fixed an issue where 'stale' symbols could become registered in RcppExports.cpp, leading to linker errors and other related issues (Kevin in #939 fixing #733 and #934).

    • The wrapper macro gets an UNPROTECT to ensure rchk is happy (Romain in #949 fixing #948).

  • Changes in Rcpp Documentation:

    • Three small corrections were added in the 'Rcpp Quickref' vignette (Zhuoer Dong in #933 fixing #932).

    • The Rcpp-modules vignette now has documentation for .factory (Ralf Stubner in #938 fixing #937).

  • Changes in Rcpp Deployment:

    • Travis CI again reports to CodeCov.io (Dirk and Ralf Stubner in #942 fixing #941).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Andy Simpkins: Race for science: A Prostate Cancer Research Centre charity challenge

Sat, 2019-03-16 12:18

This post may feel like an advert – to an extent it is. I won’t plug events very often; however, this is a charity event in aid of prostate cancer research and I genuinely think it will be great fun for anybody able to take part, hence the unashamed plug. The Race for Science will happen on 30th March in and around Cambridge. There is still time to enter, have fun and raise some money for a deserving charity at the same time.

A few weeks ago friends of ours, Jo (Randombird) and Josh, were celebrating their birthdays; we split into a couple of groups and had a go at a couple of escape room challenges at Lock House Games. Four of us (Josh, Isy, Jane and myself) had a go at the Egyptian Tomb. Whilst this was the 2nd time Isy had tried an escape room, the rest of us were N00bs. I won’t describe the puzzles inside because that would spoil the enjoyment of anyone who later goes on to play the game. We did make it out of the room inside our allotted time with only a couple of hints. It was great fun and is suitable for all ages: our group ranged from 12 to 46, and this would work well for all-adult teams as well as more mature children.

We escaped the tomb…

Anyway, having had a good time, and whilst we waited for the other group to finish their challenge, I spotted a request for people to beta test some puzzles for the Prostate Cancer Research Centre‘s 2019 Race For Science; the testing was to happen a couple of days later.

I volunteered my time and re-arranged my Monday so that I could spend the afternoon trialling the escape room elements of the challenge. Some great people are involved and the puzzles were very good indeed. I visited 3 different locations in Cambridge, each with very different puzzles to solve. One challenge stood head and shoulders above the others; not that the others were poor – they are great, at least as good as a commercial escape room. The reason this challenge was so outstanding wasn’t because of the challenge itself, it was because it had been set specifically for the venue that will host it, using the venue as part of the puzzle (the Race For Science is a scavenger hunt and contains several challenges at different locations within the city).

Beta testing one of the puzzles

Last Thursday I went back into town in the afternoon to beta test yet another location. Once again this was outstanding; the challenge differed from those that I had trialled the previous week. This was another fantastic puzzle to solve and took full advantage of its location, both for its ‘back story’ and for the puzzle itself. After this we moved on to testing the scavenger hunt part of the event. This is played on foot on the streets of the city; following clues will take you in and around some of the city’s museums and landmarks, and will unlock access to the escape room challenges I had been testing earlier. My only concern is that it is played using a browser on a mobile device (i.e. a phone). I had to move around a bit in some locations to ensure that I had adequate signal. You may want to make sure that you have a fully charged battery!

The event is open to teams of up to 6 people and will take the form of an “immersive scavenger hunt adventure”. Unfortunately I cannot take part as I have already played the game, but there is still time for you to register and take part. Anyway, if you are able to get to Cambridge at the end of the month, please enter the Race for Science.


Dirk Eddelbuettel: littler 0.3.7: Small tweaks

Fri, 2019-03-15 20:15

The eighth release of littler as a CRAN package is now available, following in the thirteen-ish year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loads the methods package, which Rscript only started doing rather recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!).

A few examples as highlighted at the Github repo, as well as in the examples vignette.

This release brings a small update (thanks to Gergely) to the scripts install2.r and installGithub.r, allowing more flexible setting of repositories, and fixes a minor nag from CRAN concerning autoconf programming style.

The NEWS file entry is below.

Changes in littler version 0.3.7 (2019-03-15)
  • Changes in examples

    • The scripts installGithub.r and install2.r get a new option -r | --repos (Gergely Daroczi in #67)
  • Changes in build system

    • The AC_DEFINE macro use rewritten to please R CMD check.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Arnaud Rebillout: Building your Pelican website with schroot

Fri, 2019-03-15 20:00

Lately I moved my blog to Pelican. I really like how simple and flexible it is. So in this post I'd like to highlight one particular aspect of my Pelican workflow: how to set up a Debian-based environment to build your Pelican website, and how to leverage Pelican's Makefile to transparently use this build environment. Overall, this post has more to do with the Debian tooling, and little to do with Pelican itself.

Introduction

First things first: why would you set up a build environment for your project?

Imagine that you run Debian stable on your machine, then you want to build your website with a fancy theme, that requires the latest bleeding edge features from Pelican. But hey, in Debian stable you don't have these shiny new things, and the version of Pelican you need is only available in Debian unstable. How do you handle that? Will you start messing around with apt configuration and pinning, and try to install an unstable package in your stable system? Wrong, please stop.

Another scenario, the opposite: you run Debian unstable on your system. You have all the new shiny things, but sometimes an update of your system might break things. What if you update, and then can't build your website because of this or that? Will you wait a few days until another update comes and fixes everything? How many days before you can build your blog again? Or will you dive into the issues and debug, which is nice and fun, but can also keep you busy all night, and is not exactly what you wanted to do in the first place, right?

So, for both of these issues, there's one simple answer: setup a build environment for your project. The most simple way is to use a chroot, which is roughly another filesystem hierarchy that you create and install somewhere, and in which you will run your build process. A more elaborate build environment is a container, which brings more isolation from the host system, and many more features, but for something as simple as building your website on your own machine, it's kind of overkill.

So that's what I want to detail here: I'll show you the way to set up and use a chroot. There are many tools for the job, and for example Pelican's official documentation recommends virtualenv, which is kind of the standard Python solution for that. However, I'm not too much of a Python developer, and I'm more familiar with the Debian tools, so I'll show you the Debian way instead.

Version-wise, it's 2019, we're talking about Pelican 4.x, if ever it matters.

Create the chroot

To create a basic, minimal Debian system, the usual command is debootstrap. Then in order to actually use this new system, we'll use schroot. So be sure to have these two packages installed on your machine.

sudo apt install debootstrap schroot

It seems that the standard location for chroots is /srv/chroot, so let's create our chroot there. It also seems that the traditional naming scheme for these chroots is something like SUITE-ARCH-APPLICATION, at least that's what other tools like sbuild do. While you're free to do whatever you want, in this tutorial we'll try to stick to the conventions.

Let's go and create a buster chroot:

SYSROOT=/srv/chroot/buster-amd64-pelican
sudo mkdir -p ${SYSROOT:?}
sudo debootstrap --variant=minbase buster ${SYSROOT:?}

And there we are, we just installed a minimal Debian system in $SYSROOT, how easy and neat is that! Just run a quick ls there, and see for yourself:

ls ${SYSROOT:?}

Now let's setup schroot to be able to use it. schroot will require a bit of a configuration file that tells it how to use this chroot. This is where things might get a bit complicated and cryptic for the newcomer.

So for now, stick with me, and create the schroot config file as follows:

cat << EOF | sudo tee /etc/schroot/chroot.d/buster-amd64-pelican.conf
[buster-amd64-pelican]
users=$LOGNAME
root-users=$LOGNAME
source-users=$LOGNAME
source-root-users=$LOGNAME
type=directory
union-type=overlay
directory=/srv/chroot/buster-amd64-pelican
EOF

Here, we tell schroot who can use this chroot ($LOGNAME, that's you), as normal user and root user. We also say where the chroot directory is located, and that we want an overlay, which means that the chroot will actually be read-only, and during operation a writable overlay will be stacked up on top of it, so that modifications are possible, but are not saved when you exit the chroot.

In our case, it makes sense because we have no intention to modify the build environment. The basic idea with a build environment is that it's identical for every build, we don't want anything to change, we hate surprises. So we make it read-only, but we also need a writable overlay on top of it, in case some process might want to write things in /var, for example. We don't care about these changes, so we're fine discarding this data after each build, when we leave the chroot.

And now, for the last step, let's install Pelican in our chroot:

schroot -c source:buster-amd64-pelican -u root -- \
    bash -c "apt update && apt install --yes make pelican && apt clean"

In this command, we log into the source chroot as root, and we install the two packages make and pelican. We also clean up after ourselves, to save a bit of space on the disk.

At this point, our chroot is ready to be used. If you're new to all of this, then read the next section, I'll try to explain a bit more how it works.

A quick introduction to schroot

In this part, let me try to explain a bit more how schroot works. If you're already acquainted, you can skip this part.

So now that the chroot is ready, let's experiment a bit. For example, you might want to start by listing the chroots available:

$ schroot -l
chroot:buster-amd64-pelican
source:buster-amd64-pelican

Interestingly, there are two of them... So, this is due to the overlay thing that I mentioned just above. Using the regular chroot (chroot:) gives you the read-only version, for daily use, while the source chroot (source:) allows you to make persistent modifications to the filesystem, for install and maintenance basically. In effect, the source chroot has no overlay mounted on top of it, and is writable.

So you can experiment some more. For example, to have a shell into your regular chroot, run:

$ schroot -c chroot:buster-amd64-pelican

Notice that the namespace (eg. chroot: or source:) is optional, if you omit it, schroot will be smart and choose the right namespace. So the command above is equivalent to:

$ schroot -c buster-amd64-pelican

Let's try to see the overlay thing in action. For example, once inside the chroot, you could create a file in some writable place of the filesystem.

(chroot)$ touch /var/tmp/this-is-an-empty-file
(chroot)$ ls /var/tmp
this-is-an-empty-file

Then log out with <Ctrl-D>, and log in again. Have a look in /var/tmp: the file is gone. The overlay in action.

Now, there's a bit more to that. If you look into the current directory, you will see that you're not within any isolated environment, you can still see all your files, for example:

(chroot)$ pwd
/home/arno/my-pelican-blog
(chroot)$ ls
content  Makefile  pelicanconf.py  ...

Not only are all your files available in the chroot, you can also create new files, delete existing ones, and so on. It doesn't even matter if you're inside or outside the chroot, and the reason is simple: by default, schroot will mount the /home directory inside the chroot, so that you can access all your files transparently. For more details, just type mount inside the chroot, and see what's listed.

So, this default of schroot is actually what makes it super convenient to use, because you don't have to bother about bind-mounting every directory you care about inside the chroot, which is actually quite annoying. Having /home directly available saves time, because what you want to isolate are the tools you need for the job (so basically /usr), but what you need is the data you work with (which is supposedly in /home). And schroot gives you just that, out of the box, without having to fiddle too much with the configuration.

If you're not familiar with chroots, containers, VMs, or more generally bind mounts, maybe it's still very confusing. But you'd better get used to it, as virtual environments are very standard in software development nowadays.

But anyway, let's get back to the topic. How to make use of this chroot to build our Pelican website?

Chroot usage with Pelican

Pelican provides two helpers to build and manage your project: one is a Makefile, and the other is a Python script called fabfile.py. As I said before, I'm not really a seasoned Pythonista, but it happens that I'm quite a fan of make, hence I will focus on the Makefile for this part.

So, here's what your daily blogging workflow might look like, now that everything is in place.

Open a first terminal, and edit your blog posts with your favorite editor:

$ nano content/bla-bla-bla.md

Then open a second terminal, enter the chroot, build your blog and serve it:

$ schroot -c buster-amd64-pelican
(chroot)$ make html
(chroot)$ make serve

And finally, open your web browser at http://localhost:8000 and enjoy yourself.

This is easy and neat, but guess what, we can even do better. Open the Makefile and have a look at the very first lines:

PY?=python3
PELICAN?=pelican

It turns out that the Pelican developers know how to write Makefiles, and they were kind enough to allow their users to easily override the default commands. In our case, it means that we can just replace these two lines with these ones:

PY?=schroot -c buster-amd64-pelican -- python3
PELICAN?=schroot -c buster-amd64-pelican -- pelican

And after these changes, we can now completely forget about the chroot, and simply type make html and make serve. The chroot invocation is now handled automatically in the Makefile. How neat!

Maintenance

So you might want to update your chroot from time to time, and you do that with apt, like for any Debian system. Remember the distinction between regular chroot and source chroot due to the overlay? If you want to actually modify your chroot, what you want is the source chroot. And here's the one-liner:

schroot -c source:$PROJECT -u root -- \
    bash -c "apt update && apt --yes dist-upgrade && apt clean"

If one day you stop using it, just delete the chroot directory, and the schroot configuration file:

sudo rm /etc/schroot/chroot.d/buster-amd64-pelican.conf
sudo rm -fr /srv/chroot/buster-amd64-pelican

And that's about it.

Last words

The general idea of keeping your build environments separated from your host environment is a very important one if you're a software developer, and especially if you're doing consulting and working on several projects at the same time. Installing all the build tools and dependencies directly on your system can work at the beginning, but it won't get you very far.

schroot is only one of the many tools that exist to address this, and I think it's Debian specific. Maybe you have never heard of it, as chroots in general are far from the container hype, even though they have some common use-cases. schroot has been around for a while, it works great, it's simple, flexible, what else? Just give it a try!

It's also well integrated with other Debian tools, for example you might use it through sbuild to build Debian packages (another daily task that is better done in a dedicated build environment), so I think it's a tool worth knowing if you're doing some Debian work.

That's about it, in the end it was mostly a post about schroot, hope you liked it.


Romain Perier: Hello planet !

Fri, 2019-03-15 14:07
Introducing myself
My name is Romain, and I have recently been granted the status of Debian Maintainer. I have been part of the debian-kernel team (still a padawan) for a few months, and, as a DM, I will co-maintain the raspi3-firmware package with Gunnar Wolf.

Current contributions

As a kernel and Linux engineer, I focus on embedded stuff and kernel development. This is a summary of what I have done in the previous months.
Kernel team

As a contributor, I work on various things and try to help where it is needed most. I wrote a Python script for generating the Debian changelog in firmware-nonfree, and I have bumped the package for new releases. I bump the Linux kernel for new upstream releases, help to close and resolve bugs, backport new features when it makes sense to do so, enable new hardware, and recently I added a new flavour for the RPi 1 and RPi Zero in armel! (spoiler)
Raspi3-firmware

I have recently added a new mode in the configuration file of the package that lets you decide what you would like to boot from the firmware. You can either boot a Linux kernel directly, passing the address of the initramfs to use, a bare-metal application, or a second-level bootloader like u-boot or barebox (personally I prefer u-boot). From u-boot, you can then use extlinux and get a nice menu generated by u-boot-menu. I have also added support for using the devicetree blob of the RPi 1 and the RPi Zero W when the firmware boots the kernel directly. I am also participating in reducing lintian warnings, new upstream releases and improvements in general.
U-Boot

I have recently sent an MR enabling support for the RPi Zero W in u-boot for armel and it was accepted (thanks to Vagrant). As I use U-Boot every day on my boards, I will probably send other MRs ;)

Raspberry Pi Zero

As described above, I have added a flavour enabling support for the RPi 1 and RPi Zero in armel for Linux 4.19.x. Like for the Raspberry Pi 3, there are no official images for this, but you can use debos or vmdb2 to build a buster image for your Pi Zero. I have personally tried it at home and was able to run an LXDE session with llvmpipe (I am still investigating whether vc4 in Gallium works for this SoC or not; while it works perfectly fine on the Pi 3, it falls back to llvmpipe on the Pi Zero).

Raspberry Pi 3

As Gunnar posted on Planet recently, you can find an unofficial image for the Pi 3 if you want to try it. On buster you will be able to run a 4.19.x LTS kernel with excellent DRM/KMS support and Gallium support in Mesa. I was able to run an LXDE session with VC4 Gallium here!

Future work

I will try my best to get excellent support for all Raspberry Pis in Debian (with unofficial images at the beginning), including kernel support, kernel bug fixes and improvements, debos and/or vmdb2 recipes for generating buster images easily, and even graphical stack hacks :). I will continue my work in the kernel team, because there are tons of things to do, and of course, as co-maintainer, maintain raspi3-firmware (which will probably be renamed to something more generic, *spoiler*).

John Goerzen: The Rightward, Establishment Bias of Lazy Journalism

Fri, 2019-03-15 11:21

Note: I also posted this post on medium.

I remember clearly the moment I’d had enough of NPR for the day. It was early morning January 25 of this year, still pretty dark outside. An NPR anchor was interviewing an NPR reporter — they seem to do that a lot these days — and asked the following simple but important question:

“So if we know that Roger Stone was in communications with WikiLeaks and we know U.S. intelligence agencies have said WikiLeaks was operating at the behest of Russia, does that mean that Roger Stone has been now connected directly to Russia’s efforts to interfere in the U.S. election?”

The factual answer, based on both data and logic, would have been “yes”. NPR, in fact, had spent much airtime covering this; for instance, a June 2018 story goes into detail about Stone’s interactions with WikiLeaks, and less than a week before Stone’s arrest, NPR referred to “internal emails stolen by Russian hackers and posted to Wikileaks.” In November of 2018, The Atlantic wrote, “Russia used WikiLeaks as a conduit — witting or unwitting — and WikiLeaks, in turn, appears to have been in touch with Trump allies.”

Why, then, did the NPR reporter begin her answer with “well,” proceed to hedge, repeat denials from Stone and WikiLeaks, and then wind up saying “authorities seem to have some evidence” without directly answering the question? And what does this mean for bias in the media?

Let us begin with a simple principle: facts do not have a political bias. Telling me that “the sky is blue” no more reflects a Democratic bias than saying “3+3=6” reflects a Republican bias. In an ideal world, politics would shape themselves around facts; ideas most in agreement with the data would win. There are not two equally-legitimate sides to questions of fact. There is no credible argument against “the earth is round”, “climate change is real,” or “Donald Trump is an unindicted co-conspirator in crimes for which jail sentences have been given.” These are factual, not political, statements. If you feel, as I do, a bit of a quickening pulse and escalating tension as you read through these examples, then you too have felt the forces that wish you to be uncomfortable with unarguable reality.

That we perceive some factual questions as political is a sign of a deep dysfunction in our society. It’s a sign that our policies are not always guided by fact, but that a sustained effort exists to cause our facts to be guided by policy.

Facts do not have a political bias. There are not two equally-legitimate sides to questions of fact. “Climate change is real” is a factual, not a political, statement. Our policies are not always guided by fact; a sustained effort exists to cause our facts to be guided by policy.

Why did I say right-wing bias, then? Because at this particular moment in time, it is the political right that is more engaged in the effort to shape facts to policy. Whether it is denying the obvious lies of the President, the clear consensus on climate change, or the contours of various investigations, it is clear that they desire to confuse and mislead in order to shape facts to their whim.

It’s not always so consequential when the media gets it wrong. When CNN breathlessly highlights its developing story — that an airplane “will struggle to maintain altitude once the fuel tanks are empty” —it gives us room to critique the utility of 24/7 media, but not necessarily a political angle.

Hey thanks CNN for making sure we all knew the crucial fact that planes cannot fly on empty fuel tanks pic.twitter.com/P6uXuTCUgI

— Katie Pavlich (@KatiePavlich) March 31, 2014

But ask yourself: who benefits when the media is afraid to report a simple fact about an investigation with political connotations? The obvious answer, in the NPR example I gave, is that Republicans benefit. They want the President to appear innocent, so every hedge on known facts about illegal activities of those in Trump’s orbit is a gift to the right. Every time a reporter gives equal time to climate change deniers is a gift to the right and a blow to informed discussion in a democracy.

Not only is there a rightward bias, but there is also an establishment bias that goes hand-in-hand. Consider this CNN report about Facebook’s “pivot to privacy”, in which CEO Zuckerberg is credited with “changing his tune somewhat”. To the extent to which that article highlights “problems” with this, they take Zuckerberg at face-value and start to wonder if it will be harder to clamp down on fake news in the news feed if there’s more privacy. That is a total misunderstanding of what was being proposed; a more careful reading of the situation was done by numerous outlets, resulting in headlines such as this one in The Intercept: “Mark Zuckerberg Is Trying to Play You — Again.” They correctly point out the only change actually mentioned pertained only to instant messages, not to the news feed that CNN was talking about, and even that had a vague promise to happen “over the next few years.” Who benefited from CNN’s failure to read a press release closely? The established powers — Facebook.

Pay attention to the media and you’ll notice that journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots. Much has been written about the “media narrative,” often critical, with good reason. Back in November of 2018, an excellent article on “The Unbearable Rightness of Seth Abramson” covered one particular case in delightful detail.

Journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots.

Seth Abramson himself wrote, “Trump-Russia is too complex to report. We need a new kind of journalism.” He argues the culprit is not laziness, but rather that “archive of prior relevant reporting that any reporter could review before they publish their own research is now so large and far-flung that more and more articles are frustratingly incomplete or even accidentally erroneous than was the case when there were fewer media outlets, a smaller and more readily navigable archive of past reporting for reporters to sift through, and a less internationalized media landscape.” Whether laziness or not, the effect is the same: a failure to properly contextualize facts leading to misrepresented or outright wrong outcomes that, at present, have a distinct bias towards right-wing and establishment interests.

Yes, the many scandals in Trumpland are extraordinarily complex, and in this age of shrinking newsroom budgets, it’s no wonder that reporters have trouble keeping up. Highly-paid executives like Zuckerberg and politicians in Congress have years of practice with obfuscation, and it takes skill to find the truth (if there even is any) behind a corporate press release or political talking point. One would hope, though, that reporters would be less quick to opine if they lack those skills or the necessary time to dig in.

There’s not just laziness; there’s also, no doubt, a confusion about what it means to be a balanced journalist. It is clear that there are two sides to a debate over, say, whether to give a state’s lottery money to the elementary schools or the universities. When there is the appearance of a political debate over facts, shouldn’t that also receive equal time for each side? I argue no. In fact, politicians making claims that contradict established fact should be exposed by journalists, not covered by them.

And some of it is, no doubt, fear. Fear that if they come out and say “yes, this implicates Stone with Russian hacking” that the Fox News crowd will attack them as biased. Of course this will happen, but that attack will be wrong. The right has done an excellent job of convincing both reporters and the public that there’s a big left-leaning bias that needs to be corrected, by yelling about it every time a fact is mentioned that they don’t like. The unfortunate result is that the fact-leaning bias in the media is being whittled away.

Politicians making claims that contradict established fact should be exposed by journalists, not covered by them. The fact-leaning bias in the media is being whittled away.

Regardless of the cause, media organizations and their reporters need to be cognizant of the biases actors of all stripes wish them to display, and refuse to play along. They need to be cognizant of the demands they put on their own reporters, and give them space to understand the context of a story before explaining it. They need to stand up to those that try to diminish facts, to those that would like them to be uninformed.

A world in which reporters know the context of their stories and boldly state facts as facts, come what may, is a world in which reporters strengthen the earth’s democracies. And, by extension, its people.


Ritesh Raj Sarraf: Linux Desktop Usage 2019

Fri, 2019-03-15 10:35

If I look back now, it must be more than 20 years since I got fascinated with GNU/Linux ecosystem and started using it.

Back then, it was more the curiosity of a young teenager and the excitement to learn something. There’s one thing that I have always admired and respected about Free Software’s values: access for everyone to learn. This is something I never forget, and I still try to do my bit.

It was perfect timing and I was lucky to be part of it. Free Software was (and still is) a great platform to learn upon, if you have the willingness and desire for it.

Over the years, a lot, lot, lot has changed, evolved and improved. Going from the days of writing down the XF86Config configuration file to get the X server running, to a new world where now almost everything is dynamic, is a great milestone that we have achieved.

All through these years, I have always used the GNU/Linux platform as my primary computing platform. The CLI, shell and tools have all been a great source of learning. Most of the stuff was (and to an extent, still is) standardized, and focus was usually on a single project.

There was less competition on that front; rather, there was more collaboration. For example, standard tools like sed, awk, grep etc. were single tools; you didn’t have two variants of each. So enhancements to these tools were timely and consistent, and learning these tools was an incremental task.

On the other hand, on the Desktop side of things, projects started out, and for a very long time stood by, doing things their own ways. But eventually, quite a lot of those things have standardized, thankfully.

For the larger part of my desktop usage, I have mostly been a KDE user. I have used other environments like IceWM and Enlightenment briefly but always felt the need to fall back to KDE, as it provided a full and uniform solution. For quite some time, I was more of a user preferring to only use the K* tools, as in: if it wasn’t written with kdelibs, I’d try to avoid it. But in the last 5 years, I took a detour and tried to unlearn and re-learn the other major desktop environment, GNOME.

GNOME is an equally beautiful and elegant desktop environment with a minimalistic user interface (which at many times ends up plaguing its applications’ feature sets too, making them "minimalistic feature set applications"). I realized that quite a lot of time and money is invested into the GNOME project, especially by the leading Linux Distribution Vendors.

But the fact is that GNU/Linux is still not a major player on the Desktop market. Some believe that the Desktop Market itself has faded and been replaced by the Mobile market. I think Desktop Computing still has a critical place in the near foreseeable future and the Mobile Platform is more of an extension shell to it. For example, for quickies, the Mobile platform is perfect. But for a substantial amount of work to be done, we still fallback to using our workstations. Mobile platform is good for a quick chat or email, but if you need to write a review report or a blog post or prepare a presentation or update an excel sheet, you’d still prefer to use your workstation.

So… after using the GNOME platform for a couple of years, I realized that there’s a lot of work and thought put into this platform too, just like the KDE platform. BUT to really be able to dream about the “Year of the dominance of the GNU/Linux desktop platform”, all these projects need to work together and synergise their efforts.

Pain points:

  • Multiple tools, multiple efforts wasted. Could be synergised.
  • Data accessibility
  • Integration and uniformity
Multiple tools

Kmail used to be an awesome email client. Evolution today is an awesome email client. Thunderbird was an awesome email client which, from what I last remember, Mozilla lacked the funds to continue maintaining. And then there’s the never-ending stream of new/old projects that come and go. Thankfully, email is pretty standardized in its data format. Otherwise, it would be a nightmare to switch between these clients. But still, GNU/Linux platforms have the potential to provide a strong and viable offering if they could synergise their work together. Today, a lot of resource is just wasted and nobody wins. Definitely not the GNU/Linux platform. Who wins are: GMail, Hotmail etc.

If you even look at the browser side of things, Google realized the potential of the Web platform for its business. So they do have a Web client for GNU/Linux. But you’ll never see an equivalent for Email/PIM. Not because it is obsolete. But more because it would hurt their business instead.

Data accessibility

My biggest gripe is data accessibility. Thankfully, for most of the stuff that we rely upon (email, documents etc), things are standardized. But there still are annoyances. For example, when the KDE 4.x debacle occurred, kwallet could not export its password database to the newer one. When I moved to GNOME, I had another very very very hard time extracting passwords from kwallet and feeding them to Seahorse. Then, when I recently switched back to KDE, I had to similarly struggle exporting my data back from Seahorse (no, not back to KWallet). Over the years, I realized that critical data should be kept in its simplest format. And let the front-ends do all the bling they want to. I realized this more with Emails. Maildir is a good clean format to store my email in, irrespective of how I access my email. Whether it is dovecot, Evolution, Akonadi, Kmail etc, I still have my bare data intact.

I had burnt myself on the password front quite a bit, so on this migration back to KDE, I wanted an email-like solution. So there’s pass, a password store, which fits the bill just like the Email use case. It would make a lot more sense for all Desktop Password Managers to instead just be a frontend interface to pass and let it keep the crucial data in a bare minimal format, accessible at all times, irrespective of the overhauling that the Desktop projects tend to do every couple of years or so.
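For reference, the basic pass workflow looks like this (a generic sketch; the GPG key ID and entry names below are placeholders, not taken from this post):

# Initialise the store with your GPG key; entries are just GPG-encrypted files
# organised in plain directories under ~/.password-store.
pass init "your-gpg-id@example.com"
pass insert mail/example.com        # add a password
pass show mail/example.com          # print it
pass generate web/some-site 20      # create a random 20-character password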

Data is critical. Without retaining its compatibility (both backwards and forward), no battle can you win.

I honestly feel the Linux Desktop Architects from the different projects should sit together and agree on a set of interfaces/tools (yes yes there is fd.o) and stick to it. Too much time and energy is wasted otherwise.

Integration and Uniformity

This is something I have always desired, and I was quite impressed (and delighted) to see some progress on the KDE desktop in the UI department. On GNOME, I developed a liking for the Evolution email client. In fact, it is currently my client for Email, NNTP and other PIM. And I still get to use it nicely in a KDE environment. Thank you.


Enrico Zini: gitpython: list all files in a git commit

Fri, 2019-03-15 06:41

A little gitpython recipe to list the paths of all files in a commit:

#!/usr/bin/python3
import git
from pathlib import Path
import sys

def list_paths(root_tree, path=Path(".")):
    for blob in root_tree.blobs:
        yield path / blob.name
    for tree in root_tree.trees:
        yield from list_paths(tree, path / tree.name)

repo = git.Repo(".", search_parent_directories=True)
commit = repo.commit(sys.argv[1])

for path in list_paths(commit.tree):
    print(path)

It can be a good base, for example, for writing a script that, given two git branches, shows which django migrations are in one and not in the other, without doing any git checkout of the code.
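For comparison, a plain-git shell sketch of that idea (not using gitpython; the branch names and the migrations path convention are placeholders):

# List django migrations present in branch A but not in branch B,
# without checking anything out:
comm -23 \
    <(git ls-tree -r --name-only feature-branch | grep '/migrations/.*\.py$' | sort) \
    <(git ls-tree -r --name-only master | grep '/migrations/.*\.py$' | sort)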


Dirk Eddelbuettel: #20: Dependencies. Now with badges!

Thu, 2019-03-14 23:41

Welcome to post number twenty in the randomly redundant R rant series of posts, or R4 for short. It has been a little quiet since the previous post last June as we’ve been busy with other things but a few posts (or ideas at least) are queued.

Dependencies. We wrote about this a good year ago in post #17 which was (in part) tickled by the experience of installing one package … and getting a boatload of others pulled in. The topic and question of dependencies has seen a few posts over the year, and I won’t be able to do them all justice. Josh and I have added a few links to the tinyverse.org page. The (currently) last one by Russ Cox, titled Our Software Dependency Problem, is particularly trenchant.

And just this week the topic came up in two different, and unrelated, posts. First, in What I don’t like in your repo, Oleg Kovalov lists a brief but decent number of items by which a repository can be evaluated. And one is about [b]loated dependencies, where he nails it with a quick “When I see dozens of deps in the lock file, the first question which comes to my mind is: so, am I ready to fix any failures inside any of them?” This is pretty close to what we have been saying around the tinyverse.

Second, in Beware the data science pin factory, Eric Colson brings an equation. Quoting from footnote 2: […] the number of relationships (r) grows as a function of the number of members (n) per this equation: r = (n^2-n) / 2. Granted, this was about human coordination and ideal team size. But let’s just run with it: For n=10, we get r=45, which is not so bad. For n=20, it is r=190. And for n=30 we are at r=435. You get the idea. “Big-Oh-N-squared”.
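A quick sanity check of those numbers (a trivial sketch, not from the original post):

# r = (n^2 - n) / 2 pairwise relationships among n members:
for n in 10 20 30; do echo "n=$n r=$(( (n*n - n) / 2 ))"; done
# n=10 r=45
# n=20 r=190
# n=30 r=435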

More dependencies means more edges between more nodes. Which eventually means more breakage.

Which gets us to announcement embedded in this post. A few months ago, in what still seems like a genuinely extra-clever weekend hack in an initial 100 or so lines, Edwin de Jonge put together a remarkable repo on GitLab. It combines Docker / Rocker via hourly cron jobs with deployment at netlify … giving us badges which visualize the direct as well as recursive dependencies of a package. All in about 100 lines, fully automated, autonomously running and deployed via CDN. Amazing work, for which we really need to praise him! So a big thanks to Edwin.

With these CRAN Dependency Badges being available, I have been adding them to my repos at GitHub over the last few months. As two quick examples you can see

  • Rcpp
  • RcppArmadillo

to get the idea. RcppArmadillo (or RcppEigen or many other packages) will always have one: Rcpp. But many widely-used packages such as data.table also get by with a count of zero. It is worth showing this – and the badge does just that! And I even sent a PR to the badger package: if you’re into this, you can have a badge made for yours via badger::badge_dependencies(pkgname).

Otherwise, more details at Edwin’s repo and of course his actual tinyverse.netlify.com site hosting the badges. It’s as easy as all other badges: reference the CRAN package, get a badge.

So if you buy into the idea that lightweight is the right weight then join us and show it via the dependency badges!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Hideki Yamane: pbuilder hack with new debootstrap option

Thu, 2019-03-14 22:17
Suddenly I noticed that maybe I can use the --cache-dir option that I added to debootstrap some time ago for pbuilder, too. So I hacked it.
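The idea, roughly (a hedged sketch; the paths are examples and this is not necessarily how the merge request itself wires it up):

# Standalone debootstrap run that reuses already-downloaded .debs:
sudo debootstrap --cache-dir=/var/cache/pbuilder/aptcache buster /srv/chroot/buster-test

# One possible pbuilder hookup via ~/.pbuilderrc, passing the option through:
DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--cache-dir=/var/cache/pbuilder/aptcache")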

> original
real    3m34.811s
user    1m6.676s
sys     0m33.051s

> use aptcache for debootstrap
real    2m52.397s
user    0m59.660s
sys     0m28.631s
It cuts 40s off creating base.tgz. Nice, isn't it? :) I hope the pbuilder team will accept this Merge Request and push it to buster, since it's worthwhile for the stable release, IMHO.

Ben Hutchings: Debian LTS work, February 2019

Thu, 2019-03-14 19:46

I was assigned 19.5 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from January. I worked only 4 hours and so will carry over 16.5 hours.

I backported various security fixes to Linux 3.16, but did not upload a new release yet.

Categories: FLOSS Project Planets

Craig Small: WordPress 5.1.1

Thu, 2019-03-14 07:09

The Debian packages for WordPress version 5.1.1 are being updated as I write this. This is a security fix for WordPress that stops comments from being used to trigger a cross-site scripting bug. It’s an important one to apply.

The backports should happen soon so even if you are using Debian stable you’ll be covered.

Categories: FLOSS Project Planets

Shirish Agarwal: The road to hell is paved with good intentions

Wed, 2019-03-13 19:41

First of all, I would like to share a video which I should have talked about or added in the ‘Celebrating National Science Day at GMRT’ blog post. It’s a documentary called ‘The Most Unknown’. It’s a great documentary, as it gives you a glimpse of how much there is yet to discover. The reason I share it is that I have seen a lot of money being taken away from Government research and put god knows where. Just a fair warning: this will be somewhat of a long post.

Almost all of the IITs are in bad shape; in fact IIT Mumbai, which I know and have often had the privilege to be associated with, has been going through tough times. This is despite institutes such as IIT Mumbai, NCRA, GMRT, FTII and similar institutions having made loads of contributions in creating awareness and giving the public the ability to question rather than just ‘believe’. For any innovation to happen, you have to question, investigate, prove, and share the findings and the way you arrived at them so they can be reproduced.

The social sciences, as shared in the documentary and from my brief learnings and takeaways from TISS, are in the same situation: they too are in somewhat dire straits. I was having a conversation a few days back with a friend who is in higher education, and learned that IISER Pune, where the recent WordCamp happened, had to commercialize and open its doors to events in order to sustain itself. While I, and perhaps all WordCampers, will be forever grateful to them for sharing such a great place with us, as well as a studious vibe which also influenced how the WordCamp was held, I did feel sad that we intruded into study areas which should be meant for IISER students only.

Before I get too carried away, I should point out that people should also look at some of Ian Cheney’s older documentaries (he is the one who made The Most Unknown); I have found his previous work compelling as well. The City Dark is a beautiful masterpiece and shares a lot of insights about light pollution, which India could use to improve our lighting as well as reduce light pollution in the atmosphere.

Meeting with Bhakts and ‘Good intentions’

The reason I shared the above was also keeping in mind the conversations I have whenever I meet bhakts. The term bhakt comes from bhakti in Sanskrit, which at one time meant spirituality and purity, although in politics it now means one who chooses to believe in the leader and the party absolutely. Whenever a bhakt starts losing a logical argument, one of the arguments that is often meted out is ‘whatever you say, you cannot doubt Mr. Narendra Modi’s intentions’, which is why I took the often-used proverb that serves as the heading of this blog post to answer exactly that point. The problem with the whole ‘good intentions’ argument is that it is pretty much a strawman: anybody can state or mask their intentions. Even ISIS says that they want to bring back the golden phase of Islam; we have seen their actions, so should we believe what they say? Or Hitler, who proclaimed ‘One people, one empire, one leader’ and claimed that the Aryans were superior to the Jewish people, while history has gone on to show the exact opposite. Israel today is the eighth-biggest arms supplier in the world, our military is the second-biggest buyer of arms from them, and they are far more prosperous than us and many other countries. There is much we could learn from their work on drip irrigation, water retention and agricultural techniques, and the same goes for manning borders and such. While I could give many such examples, the easiest one to share in the context of good intentions gone wrong is demonetisation in India, which deserves its own section.

Demonetisation

Demonetisation was heralded by Mr. Modi with great fanfare. It was supposed to take out black money. We learned later that black money didn’t get wiped out but instead became more in-your-face. We also learned later that the idea had been debunked by the former R.B.I. Governor Raghuram Rajan before Mr. Narendra Modi even announced demonetisation, and now it has been debunked by the R.B.I. itself. Sharing below an excerpt from the Freakonomics Radio show featuring Mr. Rajan’s interview; it makes for interesting reading, or listening, as the case may be.

DUBNER: Now, shortly after your departure as governor of the R.B.I., Prime Minister Modi executed a sudden, controversial plan to abolish 500- and 1,000-rupee banknotes, hoping to crack down on the shadow economy and tax evasion. I understand you had not been in favor of that idea, correct?

RAJAN: Absolutely. It didn’t make sense. I was asked for my opinion, and I said, “Look, it is taking away money that people use in transactions. It’s going to create enormous disruption unless we replace it overnight with freshly printed money.” And it’s very important that we have all that in place, difficult to maintain secrecy, and then the fundamental sort of objective of this, which was to get people to bring out the money that they hoarded in their basements and pay taxes on them — I said, “That’s probably not going to work out, because they’re going to find ways to infuse the money back into the system without paying those taxes.”

DUBNER: It’s been roughly two years now. What have been the effects of this demonetization?

RAJAN: Well, I think more than the numbers suggest, because India was growing at that time. And we had numbers which were in the 7.5 percent growth range at that point, at a time, in 2016, when the world was actually growing quite slowly. When growth picked up in 2017, instead of going along with the world, which we typically do and we exceed world growth significantly, we went down. That suggests it had a tremendous effect on growth, but that, the numbers don’t capture it all, because what actually got killed was the informal sector — the people who were doing work with notes rather than with checks, who didn’t have formal bank accounts. And when you look at the job numbers that some private-sector people estimate 10, 12 million jobs were lost in that episode. And, of course, we haven’t recovered them yet. It was one of those places where more economic thinking would have helped.

DUBNER: Was it a coincidence that Prime Minister Modi went ahead with the plan only after you’d left?

RAJAN: Well, I can’t speak on that. I can only say that I made my objections very, very clear.

Freakonomics Podcast: Stephen J. Dubner interviewing Mr. Raghuram Rajan, R.B.I. Governor from 5th September 2013 to 5th September 2017. Aired on 6th February 2019.

I would urge people to listen to Freakonomics Radio, as there are lots of pearls of wisdom in there. There is also the ‘Good ideas are not enough’ episode, which is very relevant to the topic at hand, but I will stop digressing about Freakonomics Radio for now.

The interesting questions to ask, given the details now known from the R.B.I., are –


a. Why did Mr. Narendra Modi feel the need to get the R.B.I.’s permission after 38 days?


b. If Mr. Modi was confident of the end result, shouldn’t the PMO have taken all the responsibility instead of asking for permission?


In any case, as was seen from the R.B.I.’s counting, only 0.3% of the money did not come back, even though many people’s valid claims were thrown out, and the expense of the whole exercise was much more than the shortfall: the R.B.I. gained only about INR 10k crore from notes that never returned, while it spent INR 13k crore on the new currency. Does anybody see any saving here?

The bhakts’ counter-argument is that the bankers were bad: if everybody had done their work, it would all have worked out as Mr. Modi wanted. The statement itself implies that they didn’t know the reality. Even if we take at face value the claim that all the bankers were cheaters (which I don’t agree with at all), didn’t they know this during all the years they were in opposition? Where was the party’s economic intelligence? Knowing the state of the economy and how it works is, to say the least, part of what an opposition party should be doing.

There is also this https://www.scribd.com/document/401570379/Minutes-of-RBI-s-board-meeting-on-demonetisation

These are the minutes of the R.B.I. board meeting, obtained by Venkatesh Nayak under the RTI Act.

To rub salt in the wounds, the index of industrial production (IIP) is now at a low of 1.7 percent as well. As always, I’m sure the BJP will say these are not the final numbers.

I am curious to know what RSS people would think or make of this video –

https://www.youtube.com/watch?v=DYKKrnG26YA

The other line people use when they are unable to win an argument is ‘I don’t know in detail, why don’t you come to the shakha and meet our pracharak’. Pracharak, while a Hindi word that used to mean a wise man disseminating knowledge, in RSS-speak means a political spinner. In style and mannerisms they are very close to born-again Jesuits or missionaries, the ones who are forceful and don’t want to have a meaningful conversation. The most interesting video on this topic can be seen on Twitter:

https://twitter.com/akashbanerjee/status/1105309181240885248/

Important Legislation which Needs Public Scrutiny

The other interesting development, or rather regression, which I and probably many tech-savvy Indians have noted is the total lack of comments by Mozilla on any of the new Internet regulations, whether it is the new draft Intermediary Rules, the Aadhaar Amendment Bill, 2019 which was passed recently, the draft E-commerce Bill, or the Data Protection Bill, all of which are important pieces of legislation which need, and needed, careful study. While the Government of India isn’t going to do anything apart from asking for comments, people should have come forward and built better systems. One of the things that any social group could do is run a Stet or co-ment instance so it could capture people’s comments and also mail them from there to MeitY.

The BJP Site hack

Last and not least was the BJP site hack, now in its ninth day, with the site still under maintenance. We know it was a hack because a meme from it went viral on the web. Elliot, in his inimitable style, also shared how they should have backed up their site. In an unrelated event I was attending a devops meetup where how web apps and websites should be run was discussed. It’s not rocket science: if any of the people involved had looked up ‘high availability’ they would have found loads of web pages and links explaining how to be secure and still serve content. Apparently the attackers did not just take BJP site data but probably also donor data (people who have donated wealth to the BJP). There is a high possibility, if it is a real hack, that crackers or foreign agents could hold the BJP to ransom and have dominion over India if the BJP comes back to power. While I do hope such a scenario doesn’t play out, you never know. I will probably share about the devops event some other time, as there was much happening there and it deserves its own blog post.

On the other hand, it could be a ploy to tell the EC (Election Commission) ‘we don’t have the data, our website got hacked’ when the EC asks where the funding for the elections came from.

Either way, it does not look good for the BJP, or for India as a whole, if Government I.T. is this weak.

Frankly speaking, when I first heard it, I thought that surely they wouldn’t have put their donor details on the same server, and that they would be taking backups. I chided myself for even worrying: a guy like me might be sloppy with his backups due to time and budget constraints, but a political party like the BJP, which has unbridled wealth, wouldn’t make such rookie mistakes and would have the best technical talent available. Many BJP well-wishers were thinking they would be able to punish the culprit, and I had to tell them that he could use any number of tools to hide his real identity: a VPN, Tor, or even plain old IP spoofing. The possibilities are just endless, and that is just what an infosec rookie like me can think of.

The other interesting part of this hack is that, to date, the BJP has neither acknowledged it nor shared what went wrong. I think we have become comfortable with, and used to, hacks at Reddit, Gmail or Twitter, where the developers feel that users should know the extent of the hack, what was lost and not lost, what they are doing to recover, when services can be expected to resume, and, later, a full disclosure report on what they could determine. Of course, there could be disinformation in such reports as well, but that would have been a better response than the one from the BJP IT Cell. Not something I would expect from a well-oiled structure.

Update – 14/03/19 – It seems Facebook, Instagram and WhatsApp are down. Also see how Facebook responded on Twitter. Indian Express also ran an article on the recent BJP hack. Hopefully we will know more in the next few days.

Categories: FLOSS Project Planets

Jo Shields: Too many cores

Wed, 2019-03-13 10:48
Arming yourself

ARM is important for us. It’s important for IoT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per-core than the AMD chips – a good 25-50%. Which raised the question “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de-facto main OS for Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on Raspberry Pi.

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
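As a rough sketch (suite, target path and keyring location are illustrative; the keyring file comes from the raspbian-archive-keyring package mentioned above):

  # bootstrap a Raspbian stretch chroot on the arm64 build host; the 32-bit
  # userland runs directly on the 64-bit kernel, modulo the deprecated
  # instructions discussed below
  sudo debootstrap --arch=armhf \
      --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg \
      stretch /srv/chroot/raspbian-stretch http://raspbian.raspberrypi.org/raspbian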

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
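For example (a sketch; the exact value semantics are spelled out in the kernel's arm64 legacy-instructions documentation):

  # inspect how the deprecated instructions are currently handled
  sysctl abi.cp15_barrier abi.setend abi.swp

  # 1 means trap-and-emulate in the kernel rather than delivering SIGILL
  sudo sysctl -w abi.cp15_barrier=1 abi.setend=1 abi.swp=1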

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones, because I had 64 cores not 8 – and that I could improve performance by running a VM, with only one core in it (so CP15 calls inside that environment would only affect the entire VM, not the rest of the computer).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.
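For completeness, the guest can also be created from the command line; a sketch with placeholder names, sizes and ISO path (the qemu-efi-aarch64 package provides the firmware that makes --boot uefi work here):

  # create a deliberately single-core aarch64 guest for the Raspbian builds
  virt-install --name raspbian-builder --vcpus 1 --memory 8192 \
      --arch aarch64 --boot uefi \
      --cdrom ubuntu-18.04-server-arm64.iso \
      --disk size=40 --graphics none --console pty,target_type=serial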

Because I’m an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible, meaning using pbuilder, this simply wasn’t a needed step. The VM’s host OS didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #202

Wed, 2019-03-13 10:24

Here’s what happened in the Reproducible Builds effort between Sunday March 3 and Saturday March 9 2019:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

Chris Lamb uploaded version 113 to Debian unstable fixing a long list of issues. It included contributions already covered in previous weeks as well as new ones by Chris, including:

  • Provide explicit help when the libarchive system package is missing or “incomplete”. (#50)
  • Explicitly mention when the guestfs module is missing at runtime and we are falling back to a binary diff. (#45)

Vagrant Cascadian made the corresponding update to GNU Guix. []

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made the following improvements:

  • Analyse node maintenance job runs to determine whether to mark nodes offline. []
  • Detect hanging health check runs, not just failed ones. []
  • Allow members of the jenkins UNIX group to sudo(8) to the jenkins user [] and simplify adding users to said group [].
  • Improve the “SHA1 checker” script to deal with packages with more than one version [] and to re-download buildinfo.debian.net’s files if they are older than two weeks. []
  • Node maintenance. [][][][]
  • In the version checker, correctly deal with a rare situation when several, say, diffoscope versions are available in one Debian suite at the same time. []

In addition, Alexander “lynxis” Couzens, made a number of changes to our OpenWrt support, including:

  • Add OpenWrt support to our database. []
  • Adding a reproducible_openwrt_package_parser.py script. []
  • Strip unreproducible certificates from images. []
Outreachy

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy. Outreachy provides internships to work on free software. Internships are open to applicants around the world; interns work remotely and are not required to relocate. Interns are paid a stipend of $5,500 for the three-month internship and have an additional $500 travel stipend to attend conferences/events.

So far, we have received more than ten initial requests from candidates. The closing date for applications is April 2nd. More information is available on the application page.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets
