Planet Debian


Steinar H. Gunderson: Offline routing in Maps

Thu, 2015-05-28 19:30

My[1] work project for the last year or so was shown on Google I/O! And the reaction was seemingly positive, also in the tech press.

[1]: Of course, I'm only one of many developers.

Categories: FLOSS Project Planets

Sven Hoexter: RMS, free software and where I fail the goal

Thu, 2015-05-28 15:31

You might have already read this comment by RMS in the Guardian. That comment, and a recent discussion about the relevance of GPL changes after GPLv2, made me think again about the battle RMS started to fight. While some think RMS should "retire", I still fail at my personal goal of not depending on non-free software and services. So for me this battle is far from over, and here is my personal list of "non-free debt" I have to pay off.

general purpose systems aka your computer

Looking at the increasing list of firmware blobs required to use a GPU, wireless chipsets and more and more wired NICs, the situation seems to be worse than in the late 90s. Back then the primary issue was finding supported hardware, but the driver was free. Nowadays even open-sourced firmware often requires obscure patched compilers to build. Looking at this stuff, I think the OpenBSD project got it right with its more radical position.

Oh and then there is CPU microcode. I'm not yet sure what to think about it, but in the end it's software and it's not open source. So it's non-free software running on my system.

Maybe my memory is blurred by the fact that the separation of firmware from the Linux kernel, and proper firmware loading, were only implemented years later. I remember the discussion about the pwc driver and its removal from Linux. Maybe the situation wasn't better at that time and the firmware was just hidden inside the Linux driver code?

On my system at work I have to add the Flash plugin to the list, due to my latest test with Prezi, which I'll come back to later.

I also own a few Humble Indie Bundles. I played parts of Osmos after a recommendation by Joey Hess, later played through Limbo, and got pretty far with Machinarium on a Windows system I still had at that time. I also tried a few others but never got far or soon lost interest.

Another thing I cannot really get rid of is unrar, because of stuff I need to pull from xda-developers links just to keep a cell phone running. Update: Josh Triplett pointed out that unar is available in the Debian archive. And indeed it works on the rar file I just extracted.

Android ecosystem

I will soon get rid of a stock S3 mini and replace it with a Moto G loaded with CyanogenMod. That leaves me with a working phone with an OS that only works because of a shitload of non-free blobs. The time and work required to get there is another story. Among other things you need a new bootloader, which requires a newer fastboot than what we have in Jessie, and later you also need the newer adb to sideload the CM image. There I gave in and just downloaded the pre-built SDK from Google. And there you have another binary I did not even try to build from source. The same goes for the CM image itself, though that is not much different from using a GNU/Linux distribution if you ignore the trust issues.

It's hard to trust the phone I've built that way, but it's the best I can get at the moment with at least some bigger chunks of free software inside. So let's move on to the applications on the phone. I do not use Google Play, so I rely on F-Droid and freeware I can download directly from the vendor.

  • AndFTP: best sftp client I could find so far
  • Threema: a bit (a single one) more trustworthy than WhatsApp; they started around the company of Michael Kasper
  • Wunderlist: well done shared shopping list, also non-free webservice
  • Opera: the compression proxy is awesome, also kind of a non-free webservice
"Cloud" services

This category overlaps a lot with the stuff listed above; most of the entries are not just an application. In fact Threema and Wunderlist are useless without their backend service. And Opera is degraded to just another browser, to be replaced with Firefox, if you discount the compression proxy.

The other big addition in this category is Prezi. We tried it out at work after it came to my attention through a post by Dave Aitel. It's kind of the poster child of non-freeness. It requires a non-free, unstable, insecure and halfway-deprecated browser plugin to work; you cannot download your result in a useful format; you have to buy storage for your presentation at this one vendor; and you have to pay if you want to keep your presentation private. It's the perfect lock-in situation. But it's still very convenient, it prevents a lot of common mistakes you can make when creating a presentation, and they invented a new concept of presenting.

I know about impress.js (hosted on a non-free platform, by the way, but at least you can export it from there) and I also know about hovercraft. I'm impressed by them, but they are still not close to the ease of use of Prezi. So here you can see very prominently the cost of free versus non-free software: invest the time to write something cool with CSS3 and impress.js, or pay Prezi and just click yourself through. To add something about the instability: I had to use a Windows laptop for presenting with Prezi because the Flash plugin on Jessie crashed in presentation mode. I have not yet checked the latest Flash update; I guess it did not make the situation worse, it already is horrible.

I also use some database-like web services. When I was younger you bought such things printed on dead trees, but they did not update very well.

Thinking a bit further: a Certification Authority is not only questionable due to the whole trust issue, it also provides an OCSP responder as a kind of web service. And I have already experienced what the internet looks like when the OCSP systems of GlobalSign failed.

So there is still a lot to fight for and a lot of "personal non-free debt" to pay off.


Richard Hartmann: On SourceForge

Thu, 2015-05-28 08:09

You either die a hero or you live long enough to see yourself become the villain.

And yes, we all know that SF decided to wrap crapware around Windows installers ages ago and then made it opt-in after the backlash. Doing so for stale accounts makes sense from their PoV, which makes it all the worse.

And no, I don't know how stale that account actually was, but that's irrelevant in this context either way.


Steve Kemp: A brief examination of tahoe-lafs

Wed, 2015-05-27 20:00

Continuing the theme from the last post I made, I've recently started working my way down the list of existing object-storage implementations.

tahoe-LAFS is a well-established project which looked like a good fit for my needs:

  • Simple API.
  • Good handling of redundancy.

Getting the system up and running on four nodes was very simple: set up a single, simple "introducer", which is a well-known node that all hosts can use to find each other, and then set up four daemons for storage.

When files are uploaded they are split into chunks, and these chunks are then distributed amongst the various nodes. There are some configuration settings which determine how many chunks files are split into (10 by default), how many chunks are required to rebuild the file (3 by default) and how many copies of the chunks will be created.
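The property these parameters buy can be sketched in a few lines. This is a toy model of the recoverability rule, not tahoe's actual code: with the defaults (10 chunks, any 3 sufficient to rebuild), a file survives as long as at least 3 chunks remain reachable, assuming one chunk per node.

```python
def file_recoverable(total_chunks, needed_chunks, lost_chunks):
    """With an erasure code that splits a file into `total_chunks`,
    any `needed_chunks` of which suffice to rebuild it, the file is
    recoverable as long as enough chunks survive."""
    return total_chunks - lost_chunks >= needed_chunks

# tahoe's defaults: 10 chunks, any 3 rebuild the file,
# so up to 7 chunks (nodes) can be lost
assert file_recoverable(10, 3, 7)
assert not file_recoverable(10, 3, 8)
```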

The biggest problem I have with tahoe is that there is no rebalancing support: you set up four nodes, and the space becomes full? You can add more nodes, but new uploads go to the new nodes while old ones stay on the old. Similarly, if you change your replication counts because you're suddenly more or less paranoid, this doesn't affect existing nodes.

In my perfect world you'd distribute blocks around pretty optimistically, and I'd probably run more services:

  • An introducer - To allow adding/removing storage-nodes on the fly.
  • An indexer - to store the list of "uploads", meta-data, and the corresponding block-pointers.
  • The storage-nodes - to actually store the damn data.

The storage nodes would have the primitives "List all blocks", "Get block", "Put block", and using that you could ensure that each node had sent its data to at least N other nodes. This could be done in the background.

The indexer would be responsible for keeping track of which blocks live where, and which blocks are needed to reassemble upload N. There's probably more that it could do.
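The three storage-node primitives above are enough to sketch the background replication idea. The following toy in-memory version (names and structure are my own illustration, not tahoe's or any real API) shows a node and a naive "ensure at least N copies" pass:

```python
import hashlib

class StorageNode:
    """Toy storage node exposing the three primitives:
    list all blocks, get a block, put a block."""
    def __init__(self):
        self._blocks = {}

    def put_block(self, data):
        # content-addressed: the block ID is the hash of its data
        block_id = hashlib.sha256(data).hexdigest()
        self._blocks[block_id] = data
        return block_id

    def get_block(self, block_id):
        return self._blocks[block_id]

    def list_blocks(self):
        return set(self._blocks)

def replicate(nodes, min_copies):
    """Background-task sketch: ensure every known block is held
    by at least `min_copies` nodes."""
    all_blocks = set().union(*(n.list_blocks() for n in nodes))
    for block_id in all_blocks:
        holders = [n for n in nodes if block_id in n.list_blocks()]
        source = holders[0]
        for n in nodes:
            if len(holders) >= min_copies:
                break
            if n not in holders:
                n.put_block(source.get_block(block_id))
                holders.append(n)
```

Running `replicate()` periodically would also cover the rebalancing gap described above: newly added nodes pick up copies of old blocks.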


Norbert Preining: USB stick update: TAILS 1.4, GParted 0.22, SysResCD 4.5.2, Debian Jessie

Wed, 2015-05-27 18:48

I have posted a few times (here and here) about how to get a multi-boot/multi-purpose USB stick working. Now that TAILS has seen a major upgrade and Debian 8.0 Jessie has been released, I think it is time to update the procedure to reflect the latest releases. That turned out to be a painful experience, in particular since Debian removed support for any reasonable boot method.

Going through these explanations, one will end up with a usable USB stick that can boot you into TAILS, System Rescue CD, and GNU Parted Live CD, but unfortunately no longer into an installation of Debian 8.0 Jessie. The USB stick will still be usable as normal media.

Let us repeat some things from the original post concerning the wishlist and the main players:

I have a long wishlist of items a boot stick should fulfill

  • boots into Tails, SystemRescueCD, and GParted
  • boots on both EFI and legacy systems
  • uses the full size of the USB stick (user data!)
  • allows installation of Debian (not possible anymore)
  • if possible, preserve already present user data on the stick

This time I have added the GNOME Partition Editor gparted, as it came in useful at times.


You need: a USB stick; the iso images of TAILS 1.4, SystemRescueCD 4.5.2, and GParted Live CD 0.22.0; and some tool to access iso images, for example ISOmaster (often available from your friendly Linux distribution).

I assume that you already have a USB stick prepared as described previously. If this is not the case, please go there and follow the section on preparing your USB stick.

Two types of boot options

We will employ two different approaches to boot these systems: one boots directly from an iso image, the other via extraction of the necessary kernels and images.

At the moment we have the following status with respect to boot methods:

  • Booting directly from ISO image: System Rescue CD and GNOME Parted Live CD
  • Extraction of kernels/images: TAILS (Debian Jessie does not work in any way)

It is a pity that, during the testing phase, booting and installing from testing images worked for Debian as documented previously. But with the final images (my guess is it has to do with systemd; it wouldn't surprise me), no Debian CD is detected, as the installer cannot find the iso image on the USB stick. Bummer. That means that for a Debian USB stick you need a dedicated one.

Booting from ISO image

Quite some time ago, Grub gained the ability to boot directly from an ISO image. In this case the iso image is mounted via loopback, and the kernel and initrd are specified relative to the iso image root.

For both SystemRescueCD and GNOME Partition Live CD, just drop the iso files into /boot/iso/, in my case /boot/iso/systemrescuecd-x86-4.5.2.iso and /boot/iso/gparted-live-0.22.0-1-i586.iso.

After that, entries like the following have to be added to grub.cfg. For the full list see grub.cfg:

submenu "System Rescue CD 4.5.2 (via ISO) ---> " {
    set isofile="/boot/iso/systemrescuecd-x86-4.5.2.iso"
    menuentry "SystemRescueCd (64bit, default boot options)" {
        set gfxpayload=keep
        loopback loop (hd0,1)$isofile
        linux (loop)/isolinux/rescue64 isoloop=$isofile
        initrd (loop)/isolinux/initram.igz
    }
    ...
}

submenu "GNU/Gnome Parted Live CD 0.22.0 (via ISO) ---> " {
    set isofile="/boot/iso/gparted-live-0.22.0-1-i586.iso"
    menuentry "GParted Live (Default settings)" {
        loopback loop (hd0,1)$isofile
        linux (loop)/live/vmlinuz boot=live username=user config components quiet noswap noeject ip= nosplash findiso=$isofile
        initrd (loop)/live/initrd.img
    }
    ...
}

Note the added isoloop=$isofile and findiso=$isofile that helps the installer find the iso images.

Booting via extraction of kernels and images

This is a bit more tedious, but still not too bad.

Installation of TAILS files

Assuming you have access to the files on the TAILS CD via the directory ~/tails, execute the following commands:

mkdir -p /usbstick/boot/tails
cp -a ~/tails/live/* /usbstick/boot/tails/

The grub.cfg entries look now similar to the following:

submenu "TAILS Environment 1.4 ---> " {
    menuentry "Tails64 Live System" {
        linux /boot/tails/vmlinuz2 boot=live live-media-path=/boot/tails config live-media=removable nopersistent noprompt timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash noautologin module=Tails libata.force=noncq
        initrd /boot/tails/initrd2.img
    }
    ...
}

The important part here is the live-media-path=/boot/tails, otherwise TAILS will not find the correct files for booting. The rest of the information was extracted from the boot setup of TAILS itself.

Current status of USB stick

Just to make sure, at this stage the USB stick should contain the following files:

/boot/
    iso/
        gparted-live-0.22.0-1-i586.iso
        systemrescuecd-x86-4.5.2.iso
    tails/
        vmlinuz
        Tails.module
        initrd.img
        ....
    grub/
        fonts/        (lots of files)
        locale/       (lots of files)
        x86_64-efi/   (lots of files)
        font.pf2
        grubenv
        grub.cfg      *this file we create in the next step!!*
/EFI
    BOOT/
        BOOTX64.EFI

The Grub config file grub.cfg

The final step is to provide a grub config file in /usbstick/boot/grub/grub.cfg. I created one by looking at the isoboot.cfg files in the SystemRescueCD, TAILS, and GParted iso images, as well as the Debian/Jessie image, and converting them to grub syntax. Excerpts have been shown above in the various sections.

I spare you all the details, grab a copy here: grub.cfg


That’s it. Now you can anonymously provide data about your evil government, rescue your friend's computer, fix a forgotten Windows password, and above all, install a proper free operating system.

If you have any comments, improvements or suggestions, please drop me a comment. I hope this helps a few people getting a decent USB boot stick running.


Postscriptum concerning Debian/Jessie

I first tried to boot directly into the Debian Jessie firmware ISO image by dropping the ISO into /boot/iso/firmware-8.0.0-amd64-i386-netinst.iso and adding a grub entry like:

submenu "Debian 8.0 Jessie NetInstall ---> " {
    set isofile="/boot/iso/firmware-8.0.0-amd64-i386-netinst.iso"

    menuentry '64 bit Install' {
        set background_color=black
        loopback loop (hd0,1)$isofile
        linux (loop)/install.amd/vmlinuz iso-scan/ask_second_pass=true iso-scan/filename=$isofile vga=788 -- quiet
        initrd (loop)/install.amd/initrd.gz
    }
    ...
}

That was a no-go. I added the iso-scan/ask_second_pass=true iso-scan/filename=$isofile options after some research in forums and on the web, without any change. Of course I also tried the official ISO image debian-8.0.0-amd64-netinst.iso, with the same effect.

Although I was sure it wouldn't make any difference, I also tried to extract the kernel and initrd and boot directly from them, i.e., copying the files to /usbstick/boot/debian/ as follows:

mkdir -p /usbstick/boot/debian
cp -a ~/tails/install.amd /usbstick/boot/debian/
cp -a ~/tails/install.386 /usbstick/boot/debian/

In addition, copy the iso image itself into /usbstick/boot/iso (or directly into the root of the USB stick; it didn't change anything), and add a grub.cfg entry as follows:

submenu "Debian 8.0 Jessie NetInstall ---> " {
    menuentry '64 bit Install' {
        set background_color=black
        linux /boot/debian/install.amd/vmlinuz vga=788 -- quiet
        initrd /boot/debian/install.amd/initrd.gz
    }
    ...
}

All of this was without success, always ending in error messages like: Mounting sdb, this is not a Debian Installation CD, etc.

I have also submitted a bug report to the installation reports, unfortunately without an answer (not that I expected one). In case someone has a better idea, please let me know!

It is very sad that this kind of support has been removed. The Installation Manual lists some options to install from USB, and the only difference is that there syslinux on a FAT16 filesystem is used. This restricts the USB stick size a lot, and does not allow for easy multi-boot via grub.

As reported here, I actually had this running. Unfortunately, I stupidly removed the old iso image and replaced it with the current Jessie ISO image, assuming that there would be no regression. Wrong assumption. Now I cannot even investigate the changes by looking into the initrd. Looking at the date of the original post, I see it was more or less one year ago, so before the systemd introduction. I don't know whether this has any effect; I doubt it, as the installer is separate. But something happened in the meantime. Bad for us.


Daniel Pocock: Quick start using Blender for video editing

Wed, 2015-05-27 16:14

Although it is mostly known for animation, Blender includes a non-linear video editing system that is available in all the current stable versions of Debian, Ubuntu and Fedora.

Here are some screenshots showing how to start editing a video of a talk from a conference.

In this case, there are two input files:

  • A video file from a DSLR camera, including an audio stream from a microphone on the camera
  • A separate audio file with sound captured by a lapel microphone attached to the speaker's smartphone. This is a much better quality sound and we would like this to replace the sound included in the video file.
Open Blender and choose the video editing mode

Launch Blender and choose the video sequence editor from the pull down menu at the top of the window:

Now you should see all the video sequence editor controls:

Setup the properties for your project

Click the context menu under the strip editor panel and change the panel to a Properties panel:

The video file we are playing with is 720p, so it seems reasonable to use 720p for the output too. Change that here:

The input file is 25fps, so we need to use exactly the same frame rate for the output; otherwise the video will either play at the wrong speed, or there will be a CPU-intensive conversion that degrades the quality:

Now specify an output filename and location:

Specify the file format:

and the video codec:

and specify the bitrate (smaller bitrate means smaller file but lower quality):

Specify the AAC audio codec:

Now your basic rendering properties are set. When you want to generate the output file, come back to this panel and use the Animation button at the top.

Editing the video

Use the context menu to change the properties panel back to the strip view panel:

Add the video file:

and then right click the video strip (the lower strip) to highlight it and then add a transform strip:

Audio waveform

Right click the audio strip to highlight it and then go to the properties on the right hand side and click to show the waveform:

Rendering length

By default, Blender assumes you want to render 250 frames of output. Looking in the properties to the right of the audio or video strip you can see the actual number of frames. Put that value in the box at the bottom of the window where it says 250:

Enable AV-sync

Also at the bottom of the window is a control to enable AV-sync. If your audio and video are not in sync when you preview, you need to set this AV-sync option and also make sure you set the frame rate correctly in the properties:

Add the other sound strip

Now add the other sound file that was recorded using the lapel microphone:

Enable the waveform display for that sound strip too; this will allow you to align the sound strips precisely:

You will need to listen to the strips to estimate the time difference between them. Use this estimate to set the "start frame" in the properties for your audio strip; it will be a negative value if the audio strip starts before the video. You can then zoom the strip panel to show about 3 to 5 seconds of sound and try to align the peaks. An easy way to do this is to look for applause at the end of the audio strips; the applause generates a large peak that is easily visible.
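Converting the estimated time difference into Blender's "start frame" value is just multiplication by the frame rate. A tiny helper sketching that arithmetic (illustrative only, not part of Blender's API):

```python
def start_frame(offset_seconds, fps=25):
    """Frame offset for an audio strip: negative when the audio
    starts `offset_seconds` before the video, positive when after."""
    return round(offset_seconds * fps)

# e.g. audio recorded 2.4 s before the camera started, at the 25 fps
# used in this project
assert start_frame(-2.4, fps=25) == -60
```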

Once you have synced the audio, you can play the track and you should not be able to hear any echo. You can then silence the audio track from the camera by right clicking it, look in the properties to the right and change volume to 0.

Make any transforms you require

For example, to zoom in on the speaker, right click the transform strip (3rd from the bottom) and then in the panel on the right, click to enable "Uniform Scale" and then set the scale factor as required:

Next steps

There are plenty of more comprehensive tutorials, including some videos on YouTube, explaining how to do more advanced things like fading in and out or zooming and panning dynamically at different points in the video.

If the lighting is not good (faces too dark, for example), you can right click the video strip, go to the properties panel on the right hand side and click Modifiers, Add Strip Modifier and then select "Color Balance". Use the Lift, Gamma and Gain sliders to adjust the shadows, midtones and highlights respectively.


Vincent Bernat: Live patching QEMU for VENOM mitigation

Wed, 2015-05-27 11:52

CVE-2015-3456, also known as VENOM, is a security vulnerability in QEMU's virtual floppy controller:

The Floppy Disk Controller (FDC) in QEMU, as used in Xen […] and KVM, allows local guest users to cause a denial of service (out-of-bounds write and guest crash) or possibly execute arbitrary code via the FD_CMD_READ_ID, FD_CMD_DRIVE_SPECIFICATION_COMMAND, or other unspecified commands.

Even when QEMU has been configured with no floppy drive, the floppy controller code is still active. The vulnerability is easy to test1:

#include <sys/io.h>     /* ioperm(), outb() */
#include <stddef.h>     /* size_t */

#define FDC_IOPORT 0x3f5
#define FD_CMD_READ_ID 0x0a

int main() {
    ioperm(FDC_IOPORT, 1, 1);
    outb(FD_CMD_READ_ID, FDC_IOPORT);
    for (size_t i = 0;; i++)
        outb(0x42, FDC_IOPORT);
    return 0;
}

Once the fix is installed, all running QEMU processes still have to be restarted for the upgrade to be effective. It is possible to minimize the downtime by leveraging virsh save.

Another possibility would be to patch the running processes. The Linux kernel attracted a lot of interest in this area, with solutions like Ksplice (mostly killed by Oracle), kGraft (by SUSE) and kpatch (by Red Hat) and the inclusion of a common framework in the kernel. The userspace has far fewer out-of-the-box solutions2.

I present here a simple and self-contained way to patch a running QEMU to remove the vulnerability, without requiring any noticeable downtime. Here is a short demonstration:

Proof of concept

First, let’s find a workaround that would be simple to implement through live patching: while modifying running code text is possible, it is easier to modify a single variable.


Looking at the code of the floppy controller and the patch, we can avoid the vulnerability by not accepting any command on the FIFO port. Each request would be answered by “Invalid command” (0x80), and a user won't be able to push more bytes to the FIFO until the answer is read and the FIFO queue is reset. Of course, the floppy controller would be rendered useless in this state. But who cares?

The list of commands accepted by the controller on the FIFO port is contained in the handlers[] array:

static const struct {
    uint8_t value;
    uint8_t mask;
    const char* name;
    int parameters;
    void (*handler)(FDCtrl *fdctrl, int direction);
    int direction;
} handlers[] = {
    { FD_CMD_READ, 0x1f, "READ", 8, fdctrl_start_transfer, FD_DIR_READ },
    { FD_CMD_WRITE, 0x3f, "WRITE", 8, fdctrl_start_transfer, FD_DIR_WRITE },
    /* [...] */
    { 0, 0, "unknown", 0, fdctrl_unimplemented }, /* default handler */
};

To avoid browsing the array each time a command is received, another array is used to map each command to the appropriate handler:

/* Associate command to an index in the 'handlers' array */
static uint8_t command_to_handler[256];

static void fdctrl_realize_common(FDCtrl *fdctrl, Error **errp)
{
    int i, j;
    static int command_tables_inited = 0;

    /* Fill 'command_to_handler' lookup table */
    if (!command_tables_inited) {
        command_tables_inited = 1;
        for (i = ARRAY_SIZE(handlers) - 1; i >= 0; i--) {
            for (j = 0; j < sizeof(command_to_handler); j++) {
                if ((j & handlers[i].mask) == handlers[i].value) {
                    command_to_handler[j] = i;
                }
            }
        }
    }
    /* [...] */
}
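To see why the fill loop iterates from the last handler down to the first, here is a small model of the same logic (the opcodes and masks are made up for illustration, not QEMU's real command set): because lower indices are written last, entries earlier in handlers[] take precedence, and the catch-all { 0, 0, ... } entry only catches commands nothing else matched.

```python
# hypothetical handler table: (value, mask, name); the last entry,
# with mask 0, matches every command and acts as the default
handlers = [
    (0x06, 0x1f, "READ"),
    (0x05, 0x3f, "WRITE"),
    (0x00, 0x00, "unknown"),
]

command_to_handler = [0] * 256
for i in range(len(handlers) - 1, -1, -1):   # last entry first...
    value, mask, _ = handlers[i]
    for j in range(256):
        if (j & mask) == value:
            command_to_handler[j] = i        # ...so earlier entries win

assert handlers[command_to_handler[0x06]][2] == "READ"
assert handlers[command_to_handler[0x42]][2] == "unknown"
```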

Our workaround is to modify the command_to_handler[] array to map all commands to the fdctrl_unimplemented() handler (the last one in the handlers[] array).

Testing with gdb

To check if the workaround works as expected, we test it with gdb. Unless you have compiled QEMU yourself, you need to install a package with debug symbols. Unfortunately, on Debian, they are not available, yet3. On Ubuntu, you can install the qemu-system-x86-dbgsym package after enabling the appropriate repositories.

The following function for gdb maps every command to the unimplemented handler:

define patch
  set $handler = sizeof(handlers)/sizeof(*handlers)-1
  set $i = 0
  while ($i < 256)
    set variable command_to_handler[$i++] = $handler
  end
  printf "Done!\n"
end

Attach to the vulnerable process (with attach), call the function (with patch) and detach from the process (with detach). You can check that the exploit no longer works. This could easily be automated.


Using gdb has two main limitations:

  1. It needs to be installed on each host to be patched.
  2. The debug packages need to be installed as well. Moreover, it can be difficult to fetch previous versions of those packages.
Writing a custom patcher

To overcome those limitations, we can write a custom patcher using the ptrace() system call, without relying on debug symbols being present.

Finding the right memory spot

Before being able to modify the command_to_handler[] array, we need to know its location. The first clue is given by the symbol table. To query it, use readelf -s:

$ readelf -s /usr/lib/debug/.build-id/09/95121eb46e2a4c13747ac2bad982829365c694.debug | \
>   sed -n -e 1,3p -e /command_to_handler/p
Symbol table '.symtab' contains 27066 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
  8485: 00000000009f9d00   256 OBJECT  LOCAL  DEFAULT   26 command_to_handler

This table is usually stripped out of the executable to save space, as shown below:

$ file -b /usr/bin/qemu-system-x86_64 | tr , \\n
ELF 64-bit LSB shared object
 x86-64
 version 1 (SYSV)
 dynamically linked
 interpreter /lib64/
 for GNU/Linux 2.6.32
 BuildID[sha1]=0995121eb46e2a4c13747ac2bad982829365c694
 stripped

If your distribution provides a debug package, the debug symbols are installed in /usr/lib/debug. Most modern distributions are now relying on the build ID4 to map an executable to its debugging symbols, like the example above. Without a debug package, you need to recompile the existing package without stripping debug symbols in a clean environment5. On Debian, this can be done by setting the DEB_BUILD_OPTIONS environment variable to nostrip.
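The mapping from a build ID to the detached debug file is mechanical, as the readelf example above shows: the first two hex digits of the build ID become a directory under /usr/lib/debug/.build-id/, and the rest the file name. A small helper sketching this convention:

```python
def debug_path(build_id):
    """Map a build ID to its detached debug file, following the
    /usr/lib/debug/.build-id/<first byte>/<remaining bytes>.debug
    convention used by modern distributions."""
    return "/usr/lib/debug/.build-id/{}/{}.debug".format(
        build_id[:2], build_id[2:])

# the build ID of qemu-system-x86_64 from the `file` output above
assert debug_path("0995121eb46e2a4c13747ac2bad982829365c694") == \
    "/usr/lib/debug/.build-id/09/95121eb46e2a4c13747ac2bad982829365c694.debug"
```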

We now have two possible cases:

  • the easy one, and
  • the hard one.
The easy case

On x86, here is the standard layout of a regular Linux process in memory6:

The random gaps (ASLR) are here to prevent an attacker from reliably jumping to a particular exploited function in memory. On x86-64, the layout is quite similar. The important point is that the base address of the executable is fixed.

The memory mapping of a process is also available through /proc/PID/maps. Here is a shortened and annotated example on x86-64:

$ cat /proc/3609/maps
00400000-00401000 r-xp 00000000 fd:04 483    not-qemu  [text segment]
00601000-00602000 r--p 00001000 fd:04 483    not-qemu  [data segment]
00602000-00603000 rw-p 00002000 fd:04 483    not-qemu  [BSS segment]
[random gap]
02419000-0293d000 rw-p 00000000 00:00 0      [heap]
[random gap]
7f0835543000-7f08356e2000 r-xp 00000000 fd:01 9319 /lib/x86_64-linux-gnu/
7f08356e2000-7f08358e2000 ---p 0019f000 fd:01 9319 /lib/x86_64-linux-gnu/
7f08358e2000-7f08358e6000 r--p 0019f000 fd:01 9319 /lib/x86_64-linux-gnu/
7f08358e6000-7f08358e8000 rw-p 001a3000 fd:01 9319 /lib/x86_64-linux-gnu/
7f08358e8000-7f08358ec000 rw-p 00000000 00:00 0
7f08358ec000-7f083590c000 r-xp 00000000 fd:01 5138 /lib/x86_64-linux-gnu/
7f0835aca000-7f0835acd000 rw-p 00000000 00:00 0
7f0835b08000-7f0835b0c000 rw-p 00000000 00:00 0
7f0835b0c000-7f0835b0d000 r--p 00020000 fd:01 5138 /lib/x86_64-linux-gnu/
7f0835b0d000-7f0835b0e000 rw-p 00021000 fd:01 5138 /lib/x86_64-linux-gnu/
7f0835b0e000-7f0835b0f000 rw-p 00000000 00:00 0
[random gap]
7ffdb0f85000-7ffdb0fa6000 rw-p 00000000 00:00 0      [stack]

With a regular executable, the value given in the symbol table is an absolute memory address:

$ readelf -s not-qemu | \
>   sed -n -e 1,3p -e /command_to_handler/p
Symbol table '.dynsym' contains 9 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
    47: 0000000000602080   256 OBJECT  LOCAL  DEFAULT   25 command_to_handler

So, the address of command_to_handler[], in the above example, is just 0x602080.

The hard case

To enhance security, it is possible to load some executables at a random base address, just like a library. Such an executable is called a Position Independent Executable (PIE). An attacker won’t be able to rely on a fixed address to find some helpful function. Here is the new memory layout:

With a PIE process, the value in the symbol table is now an offset from the base address.

$ readelf -s not-qemu-pie | sed -n -e 1,3p -e /command_to_handler/p
Symbol table '.dynsym' contains 17 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
    47: 0000000000202080   256 OBJECT  LOCAL  DEFAULT   25 command_to_handler

If we look at /proc/PID/maps, we can figure out where the array is located in memory:

$ cat /proc/12593/maps
7f6c13565000-7f6c13704000 r-xp 00000000 fd:01 9319 /lib/x86_64-linux-gnu/
7f6c13704000-7f6c13904000 ---p 0019f000 fd:01 9319 /lib/x86_64-linux-gnu/
7f6c13904000-7f6c13908000 r--p 0019f000 fd:01 9319 /lib/x86_64-linux-gnu/
7f6c13908000-7f6c1390a000 rw-p 001a3000 fd:01 9319 /lib/x86_64-linux-gnu/
7f6c1390a000-7f6c1390e000 rw-p 00000000 00:00 0
7f6c1390e000-7f6c1392e000 r-xp 00000000 fd:01 5138 /lib/x86_64-linux-gnu/
7f6c13b2e000-7f6c13b2f000 r--p 00020000 fd:01 5138 /lib/x86_64-linux-gnu/
7f6c13b2f000-7f6c13b30000 rw-p 00021000 fd:01 5138 /lib/x86_64-linux-gnu/
7f6c13b30000-7f6c13b31000 rw-p 00000000 00:00 0
7f6c13b31000-7f6c13b33000 r-xp 00000000 fd:04 4594 not-qemu-pie  [text segment]
7f6c13cf0000-7f6c13cf3000 rw-p 00000000 00:00 0
7f6c13d2e000-7f6c13d32000 rw-p 00000000 00:00 0
7f6c13d32000-7f6c13d33000 r--p 00001000 fd:04 4594 not-qemu-pie  [data segment]
7f6c13d33000-7f6c13d34000 rw-p 00002000 fd:04 4594 not-qemu-pie  [BSS segment]
[random gap]
7f6c15c46000-7f6c15c67000 rw-p 00000000 00:00 0    [heap]
[random gap]
7ffe823b0000-7ffe823d1000 rw-p 00000000 00:00 0    [stack]

The base address is 0x7f6c13b31000, the offset is 0x202080 and therefore, the location of the array is 0x7f6c13d33080. We can check with gdb:

$ print &command_to_handler
$1 = (uint8_t (*)[256]) 0x7f6c13d33080 <command_to_handler>

Patching a memory spot

Once we know the location of the command_to_handler[] array in memory, patching it is quite straightforward. First, we start tracing the target process:

/* Attach to the running process */
static int
patch_attach(pid_t pid)
{
    int status;
    siginfo_t si;    /* declaration added; the original excerpt used `si` without declaring it */

    printf("[.] Attaching to PID %d...\n", pid);
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        fprintf(stderr, "[!] Unable to attach to PID %d: %m\n", pid);
        return -1;
    }
    if (waitpid(pid, &status, 0) == -1) {
        fprintf(stderr, "[!] Error while attaching to PID %d: %m\n", pid);
        return -1;
    }
    assert(WIFSTOPPED(status)); /* Tracee may have died */

    if (ptrace(PTRACE_GETSIGINFO, pid, NULL, &si) == -1) {
        fprintf(stderr, "[!] Unable to read siginfo for PID %d: %m\n", pid);
        return -1;
    }
    assert(si.si_signo == SIGSTOP); /* Other signals may have been received */

    printf("[*] Successfully attached to PID %d\n", pid);
    return 0;
}

Then, we retrieve the command_to_handler[] array, modify it and put it back in memory [7].

static int
patch_doit(pid_t pid, unsigned char *target)
{
    int ret = -1;
    unsigned char *command_to_handler = NULL;
    size_t i;

    /* Get the table */
    printf("[.] Retrieving command_to_handler table...\n");
    command_to_handler = ptrace_read(pid, target,
                                     QEMU_COMMAND_TO_HANDLER_SIZE);
    if (command_to_handler == NULL) {
        fprintf(stderr, "[!] Unable to read command_to_handler table: %m\n");
        goto out;
    }

    /* Check if the table has already been patched. */
    /* [...] */

    /* Patch it */
    printf("[.] Patching QEMU...\n");
    for (i = 0; i < QEMU_COMMAND_TO_HANDLER_SIZE; i++) {
        command_to_handler[i] = QEMU_NOT_IMPLEMENTED_HANDLER;
    }
    if (ptrace_write(pid, target, command_to_handler,
                     QEMU_COMMAND_TO_HANDLER_SIZE) == -1) {
        fprintf(stderr, "[!] Unable to patch command_to_handler table: %m\n");
        goto out;
    }
    printf("[*] QEMU successfully patched!\n");
    ret = 0;

out:
    free(command_to_handler);
    return ret;
}

Since ptrace() only allows reading or writing one word at a time, ptrace_read() and ptrace_write() are wrappers to read or write arbitrarily large chunks of memory [8]. Here is the code for ptrace_read():

/* Read memory of the given process */
static void *
ptrace_read(pid_t pid, void *address, size_t size)
{
    /* Allocate the buffer */
    uword_t *buffer = malloc((size/sizeof(uword_t) + 1)*sizeof(uword_t));
    if (!buffer) return NULL;

    /* Read word by word */
    size_t readsz = 0;
    do {
        errno = 0;
        if ((buffer[readsz/sizeof(uword_t)] =
                 ptrace(PTRACE_PEEKTEXT, pid,
                        (unsigned char*)address + readsz, 0)) && errno) {
            fprintf(stderr, "[!] Unable to peek one word at address %p: %m\n",
                    (unsigned char *)address + readsz);
            free(buffer);
            return NULL;
        }
        readsz += sizeof(uword_t);
    } while (readsz < size);
    return (unsigned char *)buffer;
}

Putting the pieces together
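The write-side counterpart ptrace_write() is not shown in the post. Symmetric to ptrace_read() above, it could be sketched like this; this is my own illustration, not the repository code, and it assumes the same uword_t word type:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* Word type used by the wrappers; assumed to match the patcher's typedef. */
typedef unsigned long uword_t;

/* Write memory of the given process, word by word (sketch only). */
static int
ptrace_write(pid_t pid, void *address, const void *buffer, size_t size)
{
    size_t written = 0;
    while (written < size) {
        uword_t word;
        size_t chunk = size - written;
        if (chunk >= sizeof(uword_t)) {
            memcpy(&word, (const unsigned char *)buffer + written,
                   sizeof(uword_t));
        } else {
            /* Partial final word: peek first so the tracee's trailing
             * bytes are preserved when we poke the full word back. */
            errno = 0;
            word = ptrace(PTRACE_PEEKTEXT, pid,
                          (unsigned char *)address + written, 0);
            if (word == (uword_t)-1 && errno)
                return -1;
            memcpy(&word, (const unsigned char *)buffer + written, chunk);
        }
        if (ptrace(PTRACE_POKETEXT, pid,
                   (unsigned char *)address + written, word) == -1) {
            fprintf(stderr, "[!] Unable to poke one word at address %p: %m\n",
                    (unsigned char *)address + written);
            return -1;
        }
        written += sizeof(uword_t);
    }
    return 0;
}
```

PTRACE_POKETEXT always writes a full word, hence the peek-then-overlay dance for the tail.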

The patcher is provided with the following information:

  • the PID of the process to be patched,
  • the command_to_handler[] offset from the symbol table, and
  • the build ID of the executable file used to get this offset (as a safety measure).

The main steps are:

  1. Attach to the process with ptrace().
  2. Get the executable name from /proc/PID/exe.
  3. Parse /proc/PID/maps to find the address of the text segment (it’s the first one).
  4. Do some sanity checks:
    • check there is an ELF header at this location (4-byte magic number),
    • check the executable type (ET_EXEC for regular executables, ET_DYN for PIE), and
    • get the build ID and compare with the expected one.
  5. From the base address and the provided offset, compute the location of the command_to_handler[] array.
  6. Patch it.
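Steps 3 and 4 above can be sketched with a small helper; find_text_base is a hypothetical name of my own, not the patcher's actual code:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Return the start address of the first mapping of the given executable
 * in /proc/PID/maps; that mapping contains the ELF header. */
static unsigned long
find_text_base(pid_t pid, const char *exe)
{
    char path[64], line[1024];
    unsigned long start = 0;
    snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
    FILE *maps = fopen(path, "r");
    if (maps == NULL)
        return 0;
    while (fgets(line, sizeof(line), maps)) {
        if (strstr(line, exe) != NULL) {  /* first mapping of the executable */
            sscanf(line, "%lx", &start);  /* lines start with start-end */
            break;
        }
    }
    fclose(maps);
    return start;
}
```

The ELF sanity check then boils down to comparing the first four bytes at that address with the magic number "\177ELF" (through ptrace_read() when inspecting another process).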

You can find the complete patcher on GitHub.

$ ./patch --build-id 0995121eb46e2a4c13747ac2bad982829365c694 \
>         --offset 9f9d00 \
>         --pid 16833
[.] Attaching to PID 16833...
[*] Successfully attached to PID 16833
[*] Executable name is /usr/bin/qemu-system-x86_64
[*] Base address is 0x7f7eea912000
[*] Both build IDs match
[.] Retrieving command_to_handler table...
[.] Patching QEMU...
[*] QEMU successfully patched!
  1. The complete code for this test is on GitHub.

  2. An interesting project seems to be Katana. But there are also some insightful hacking papers on the subject. 

  3. Some packages come with a -dbg package with debug symbols, some others don’t. Fortunately, a proposal to automatically produce debugging symbols for everything is near completion. 

  4. The Fedora Wiki contains the rationale behind the build ID.

  5. If the build is incorrectly reproduced, the build ID won’t match. The information provided by the debug symbols may or may not be correct. Debian currently has a reproducible builds effort to ensure that each package can be reproduced. 

  6. Anatomy of a program in memory is a great blog post explaining in more details how a program lives in memory. 

  7. Being an uninitialized static variable, the variable is in the BSS section. This section is mapped to a writable memory segment. Even if it weren't, on Linux the ptrace() system call is still allowed to write: Linux will copy the page and mark it as private. 

  8. With Linux 3.2 or later, process_vm_readv() and process_vm_writev() can be used to transfer data from/to a remote process without using ptrace() at all. However, ptrace() would still be needed to reliably stop the main thread. 
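The process_vm_readv() alternative mentioned in the last footnote can be sketched as follows; reading one's own memory needs no ptrace() attachment at all, while reading another process is subject to the usual ptrace access checks:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Copy `len` bytes at `addr` in process `pid` into `out` using
 * process_vm_readv() instead of word-by-word PTRACE_PEEKTEXT.
 * Returns the number of bytes read, or -1 on error. */
static ssize_t
read_remote(pid_t pid, void *addr, void *out, size_t len)
{
    struct iovec local  = { .iov_base = out,  .iov_len = len };
    struct iovec remote = { .iov_base = addr, .iov_len = len };
    return process_vm_readv(pid, &local, 1, &remote, 1, 0);
}
```

A single syscall transfers the whole buffer, which is noticeably faster than looping over ptrace() one word at a time.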

Categories: FLOSS Project Planets

Jonathan Carter: Of course I support Jonathan

Wed, 2015-05-27 11:41

Spending yesterday mostly away from the computer screen, I was shocked this morning when I read about the Ubuntu Community Council’s request for Jonathan Riddell to step down from the Kubuntu Council. I knew that things have been rough lately, and honestly there were some situations that Jonathan could have handled better, but I didn’t expect anything as drastic and sudden as this without seeing any warning signs.

Looking at the mails from the Kubuntu Council that Scott Kitterman posted, it seems like it’s been a surprise to the KC as well.

I’m disappointed in the way the Ubuntu Community Council has handled this and I think the way they treated Jonathan is appalling, even taking into account that he could’ve communicated his grievances better. I’m also unconvinced that the Ubuntu Community Council, in its current form, is as beneficial to the Ubuntu community as it could be. The way it is structured and reports to the SABDFL means that it will always favour Canonical when there’s a conflict of interest. I brought this up with two different CC members last year, who both provided shruggy answers in the vein of “Sorry, but we have a framework that’s set up on how we can work in here and there’s just so much we can do about it.” They seem to fear the leadership too much to question it, and that’s a pity, because everyone makes mistakes.

This request to step down is probably going to sour the Ubuntu project’s relationship with Jonathan Riddell even more, which is especially sad because he’s one of the really good community guys left who keeps both the CoC and the original Ubuntu manifesto ethos in high regard while striving for technical excellence. On top of that, it seems like it may result in at least one other such person leaving.

I hope that the CC also takes this opportunity to take a step back and re-evaluate its structure and purpose, instead of just shrugging it off with a corporate-sounding statement. I’d also urge them to retract their statement to Jonathan Riddell and attempt to find a more amicable solution.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: drat 0.0.4: Yet more features and documentation

Wed, 2015-05-27 08:06

A new version, now at 0.0.4, of the drat package arrived on CRAN yesterday. Its name stands for drat R Archive Template; it helps with easy-to-create and easy-to-use repositories for R packages, and is finding increasing use by other projects.

Version 0.0.4 brings both new code and more documentation:

  • support for binary repos on Windows and OS X thanks to Jan Schulz;
  • new (still raw) helper functions initRepo() to create a git-based repository, and pruneRepo() to remove older versions of packages;
  • the insertRepo() function now uses tryCatch() around git commands (with thanks to Carl Boettiger);
  • when adding a file to a drat repo we ensure that the repo path does not contain spaces (with thanks to Stefan Bache);
  • stress that file-based repos need a URL of the form file:/some/path with one colon but not two slashes (also thanks to Stefan Bache);
  • new Using Drat with Travis CI vignette thanks to Colin Gillespie;
  • new Drat FAQ vignette;
  • other fixes and extensions.

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Matthew Garrett: This is not the UEFI backdoor you are looking for

Wed, 2015-05-27 02:38
This is currently the top story on the Linux subreddit. It links to this Tweet which demonstrates using a System Management Mode backdoor to perform privilege escalation under Linux. This is not a story.

But first, some background. System Management Mode (SMM) is a feature in most x86 processors since the 386SL back in 1990. It allows for certain events to cause the CPU to stop executing the OS, jump to an area of hidden RAM and execute code there instead, and then hand off back to the OS without the OS knowing what just happened. This allows you to do things like hardware emulation (SMM is used to make USB keyboards look like PS/2 keyboards before the OS loads a USB driver), fan control (SMM will run even if the OS has crashed and lets you avoid the cost of an additional chip to turn the fan on and off) or even more complicated power management (some server vendors use SMM to read performance counters in the CPU and adjust the memory and CPU clocks without the OS interfering).

In summary, SMM is a way to run a bunch of non-free code that probably does a worse job than your OS does in most cases, but is occasionally helpful (it's how your laptop prevents random userspace from overwriting your firmware, for instance). And since the RAM that contains the SMM code is hidden from the OS, there's no way to audit what it does. Unsurprisingly, it's an interesting vector to insert malware into - you could configure it so that a process can trigger SMM and then have the resulting SMM code find that process's credentials structure and change it so it's running as root.

And that's what Dmytro has done - he's written code that sits in that hidden area of RAM and can be triggered to modify the state of the running OS. But he's modified his own firmware in order to do that, which isn't something that's possible without finding an existing vulnerability in either the OS or the firmware (or, more recently, both). It's an excellent demonstration that what we knew to be theoretically possible is practically possible, but it's not evidence of such a backdoor being widely deployed.

What would that evidence look like? It's more difficult to analyse binary code than source, but it would still be possible to trace firmware to observe everything that's dropped into the SMM RAM area and pull it apart. Sufficiently subtle backdoors would still be hard to find, but enough effort would probably uncover them. A PC motherboard vendor managed to leave the source code to their firmware on an open FTP server and copies leaked into the wild - if there's a ubiquitous backdoor, we'd expect to see it there.

But still, the fact that system firmware is almost entirely closed is still a problem in engendering trust - the means to inspect large quantities of binary code for vulnerabilities is still beyond the vast majority of skilled developers, let alone the average user. Free firmware such as Coreboot gets part way to solving this, but still doesn't solve the case of the pre-flashed firmware being backdoored and then installing the backdoor into any new firmware you flash.

This specific case may be based on a misunderstanding of Dmytro's work, but figuring out ways to make it easier for users to trust that their firmware is tamper free is going to be increasingly important over the next few years. I have some ideas in that area and I hope to have them working in the near future.

Categories: FLOSS Project Planets

Lisandro Damián Nicanor Pérez Meyer: The last planned Qt 4 release is here: Qt 4.8.7. Is your app running with Qt5?

Tue, 2015-05-26 12:36
Qt 4.8.7 has been released today. Quoting from the blog post (emphasis is mine):

Many users have already moved their active projects to Qt 5 and we encourage also others to do so. With a high degree of source compatibility, we have ensured that switching to Qt 5 is smooth and straightforward. It should be noted that Qt 4.8.7 provides only the basic functionality to run Qt based applications on Mac OS X 10.10, full support is in Qt 5.
Qt 4.8.7 is planned to be the last patch release of the Qt 4 series. Standard support is available until December 2015, after which extended support will be available. We recommend all active projects to migrate to Qt 5, as new operating systems and compilers with Qt 4.8 will not be supported. If you have challenges migrating to Qt 5, please contact us or some of our service partners for assistance
Have you started to port your project?
Categories: FLOSS Project Planets

Lunar: Reproducible builds: week 4 in Stretch cycle

Tue, 2015-05-26 08:19

What happened in the reproducible builds effort this week:

Toolchain fixes

Lunar rebased our custom dpkg on the new release, removing a now unneeded patch identified by Guillem Jover. An extra sort in the buildinfo generator prevented a stable order and was quickly fixed once identified.

Mattia Rizzolo also rebased our custom debhelper on the latest release.

Packages fixed

The following 30 packages became reproducible due to changes in their build dependencies: animal-sniffer, asciidoctor, autodock-vina, camping, cookie-monster, downthemall, flashblock, gamera, httpcomponents-core, https-finder, icedove-l10n, istack-commons, jdeb, libmodule-build-perl, libur-perl, livehttpheaders, maven-dependency-plugin, maven-ejb-plugin, mozilla-noscript, nosquint, requestpolicy, ruby-benchmark-ips, ruby-benchmark-suite, ruby-expression-parser, ruby-github-markup, ruby-http-connection, ruby-settingslogic, ruby-uuidtools, webkit2gtk, wot.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

  • #775531 on console-setup by Reiner Herrmann: update and split patch written in January.
  • #785535 on maradns by Reiner Herrmann: use latest entry in debian/changelog as build date.
  • #785549 on dist by Reiner Herrmann: set hostname and domainname to predefined value.
  • #785583 on s5 by Juan Picca: set timezone to UTC when unzipping files.
  • #785617 on python-carrot by Juan Picca: use latest entry in debian/changelog as documentation build date.
  • #785774 on afterstep by Juan Picca: modify documentation generator to allow a build date to be set instead of the current time, then use latest entry in debian/changelog as reference.
  • #786508 on ttyload by Juan Picca: remove timestamp from documentation.
  • #786568 on linux-minidisc by Lunar: use latest entry in debian/changelog as build date.
  • #786615 on kfreebsd-10 by Steven Chamberlain: make order of file in source tarballs stable.
  • #786633 on webkit2pdf by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
  • #786634 on libxray-scattering-perl by Reiner Herrmann: tell Storable::nstore to produce sorted output.
  • #786637 on nvidia-settings by Lunar: define DATE, WHOAMI, and HOSTNAME_CMD to stable values.
  • #786710 on armada-backlight by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
  • #786711 on leafpad by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
  • #786714 on equivs by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.

Also, the following bugs have been reported:

  • #785536 on maradns by Reiner Herrmann: unreproducible deadwood binary.
  • #785624 on doxygen by Christoph Berg: timestamps in manpages generated makes builds non-reproducible.
  • #785736 on git-annex by Daniel Kahn Gillmor: documentation should be made reproducible.
  • #786593 on wordwarvi by Holger Levsen: please provide a --distrobuild build switch.
  • #786601 on sbcl by Holger Levsen: FTBFS when locales-all is installed instead of locales.
  • #786669 on ruby-celluloid by Holger Levsen: tests sometimes fail, causing ftbfs sometimes.
  • #786743 on obnam by Holger Levsen: FTBFS.

Holger Levsen made several small bug fixes and a few more visible changes:

  • For packages in testing, comparisons will be done using the sid version of debbindiff.
  • The scheduler will now schedule old packages from sid twice as often as the ones in testing, as we care more about the former at the moment.
  • More statistics are now visible and the layout has been improved.
  • Variations between the first and second build are now explained on the statistics page.

Version 0.007-1 of strip-nondeterminism—the tool to post-process various file formats to normalize them—has been uploaded by Holger Levsen. Version 0.006-1 was already in the reproducible repository; the new version mainly improves the detection of Maven's files.

debbindiff development

At the request of Emmanuel Bourg, Reiner Herrmann added a comparator for Java .class files.

Documentation update

Christoph Berg created a new page for the timestamps in manpages created by Doxygen.

Package reviews

93 obsolete reviews have been removed, 76 added and 43 updated this week.

New identified issues: timestamps in manpages generated by Doxygen, modification time differences in files extracted by unzip, tstamp task used in Ant build.xml, timestamps in documentation generated by ASDocGen. The description for build id related issues has been clarified.


Holger Levsen announced a first meeting on Wednesday, June 3rd, 2015, 19:00 UTC. The agenda is amendable on the wiki.


Lunar worked on a proof-of-concept script to import the build environment found in .buildinfo files to UDD. Lucas Nussbaum has positively reviewed the proposed schema.

Holger Levsen cleaned up various experimental toolchain repositories, marking merged branches as such.

Categories: FLOSS Project Planets

Ricardo Mones: Downgrading to stable

Tue, 2015-05-26 06:11
This weekend I had to downgrade my home desktop to stable thanks to a strange Xorg bug which I've been unable to identify among the current ones. Both testing and sid versions seem affected, and all you can see after booting is a screen of abstract colored patterns.

The system works fine otherwise and can be accessed via ssh, but restarting kdm doesn't help to fix it, it just changes the pattern. Anyway, as explaining to a toddler that he cannot watch his favourite youtube cartoons because the computer screen has suddenly become an abstract art work is not easy, I quickly decided to downgrade.

Downgrading went fine (APT pinning to stable, then apt-get update/upgrade/dist-upgrade), but today I noticed libreoffice stopped working with this message:
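The pinning itself boils down to a few lines in an APT preferences file; something along these lines (the file name is my choice, and the key point is that a priority above 1000 allows APT to downgrade packages to the pinned release):

```
# /etc/apt/preferences.d/downgrade-to-stable
Package: *
Pin: release a=stable
Pin-Priority: 1001
```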

Warning: failed to launch javaldx - java may not function correctly
/usr/lib/libreoffice/program/soffice.bin: error while loading shared libraries: cannot open shared object file: No such file or directory

All I found related to that is a post on forums, which didn't help much (neither the original poster nor me). But I just found that the library was not missing; it was installed:

# locate /usr/lib/ure/lib/

But that was not part of any ldconfig conf file, hence the fix was easy:

# echo '/usr/lib/ure/lib' > /etc/
# ldconfig

And presto! libreoffice is working again :-)
Categories: FLOSS Project Planets

Julien Danjou: OpenStack Summit Liberty from a Ceilometer & Gnocchi point of view

Tue, 2015-05-26 05:39

Last week I was in Vancouver, BC for the OpenStack Summit, discussing the new Liberty version that will be released in 6 months.

I've attended the summit mainly to discuss and follow-up new developments on Ceilometer, Gnocchi and Oslo. It has been a pretty good week and we were able to discuss and plan a few interesting things.

Ops feedback

We had half a dozen Ceilometer sessions, and the first one was dedicated to getting feedback from operators using Ceilometer. We had a few operators present, as well as a few of the Ceilometer team. We had a constructive discussion, and my feeling is that operators struggle with two things so far: scaling Ceilometer storage, and keeping Ceilometer from killing the rest of OpenStack.

We discussed the first point as being addressed by Gnocchi, and I presented Gnocchi itself a bit, as well as how and why it will fix the storage scalability issues operators have encountered so far.

Ceilometer bringing down the OpenStack installation is a more interesting problem. Ceilometer pollsters request information from Nova, Glance… to gather statistics. Until Kilo, Ceilometer used to do that regularly and at a fixed interval, causing high peak loads in OpenStack. With the introduction of jitter in Kilo, this should be less of a problem. However, Ceilometer hits various endpoints in OpenStack that are poorly designed, and hitting those endpoints of Nova or other components triggers a lot of load on the platform. Unfortunately, this makes operators blame Ceilometer rather than the components guilty of poor design. We'd like to push forward improving these components, but it's probably going to take a long time.


When I started the Gnocchi project last year, I pretty soon realized that we would be able to split Ceilometer itself into different smaller components that could work independently, while being able to leverage each other. For example, Gnocchi can run standalone and store your metrics even if you don't use Ceilometer – nor even OpenStack itself.

My fellow developer Chris Dent had the same idea about splitting Ceilometer a few months ago and drafted a proposal. The idea is to have Ceilometer split into different parts that people could assemble together or run on their own.

Interestingly enough, we had three 40-minute sessions planned to talk and debate about this division of Ceilometer, though we all agreed in 5 minutes that it was the right thing to do. Five more minutes later, we agreed on which parts to split out. The rest of the time was allocated to discussing various details of that split, and I committed to start the work on the Ceilometer alarming subsystem.

I wrote a specification on the plane bringing me to Vancouver, which should be approved pretty soon now. I have already started the implementation work. So fingers crossed, Ceilometer should have a new component in Liberty handling alarming on its own.

This would allow users, for example, to deploy only Gnocchi and the Ceilometer alarm component. They would be able to feed data to Gnocchi using their own system, and build alarms using the Ceilometer alarm subsystem relying on Gnocchi's data.


We didn't have a Gnocchi-dedicated slot – mainly because I indicated I didn't feel we needed one. We discussed a few points around coffee anyway, and I've been able to draw up a few new ideas and changes I'd like to see in Gnocchi: mainly changing the API contract to be more asynchronous so we can support InfluxDB more correctly, and improving the drivers based on Carbonara (the library we created to manipulate timeseries) to be faster.

All of those – plus a few Oslo tasks I'd like to tackle – should keep me busy for the next cycle!

Categories: FLOSS Project Planets

Norbert Preining: Debian/TeX Live 2015 preparations

Mon, 2015-05-25 21:20

I have uploaded a preliminary version of the texlive-bin based on the 2015 sources (plus the first fixes) to the Debian archive, targeting experimental. As there are four new packages built from the sources (libtexlua52, -dev, libtexluajit2, -dev), the packages have to go through the NEW queue, which at the moment is an impressive 500+ entries long (nearly the longest it has ever been). But the ftp-masters are currently very active and I hope they continue for some time.

Anyway, there are still many things to work out, especially a rework of tex-common for the new fmtutil, in the same way we did some time ago for updmap. That also means that all packages shipping formats will need to be rebuilt with the new tex-common. Fortunately there are not many additional formats shipped; I think all of them are under my control, so that should be easy.

For those who want to peek at the current packages, here they are, compiled for amd64. Also available as apt sources:

deb exp/
deb-src exp/

WARNING: Do not try to actually run these binaries unless you know what you are doing!

Categories: FLOSS Project Planets

Elena 'valhalla' Grandi: Free Software on Free Hardware

Mon, 2015-05-25 15:53
Free Software on Free Hardware

I've posted a new article on my website:
Categories: FLOSS Project Planets

Russ Allbery: Catch-up haul

Sun, 2015-05-24 19:44

As always, even though I've not been posting much, I'm still buying books. This is a catch-up post listing a variety of random purchases.

Katherine Addison — The Goblin Emperor (sff)
Milton Davis — From Here to Timbuktu (sff)
Mark Forster — How to Make Your Dreams Come True (non-fiction)
Angela Highland — Valor of the Healer (sff)
Marko Kloos — Terms of Enlistment (sff)
Angela Korra'ti — Faerie Blood (sff)
Cixin Liu — The Three-Body Problem (sff)
Emily St. John Mandel — Station Eleven (sff)
Sydney Padua — The Thrilling Adventures of Lovelace and Babbage (graphic novel)
Melissa Scott & Jo Graham — The Order of the Air Omnibus (sff)
Andy Weir — The Martian (sff)

Huh, for some reason I thought I'd bought more than that.

I picked up the rest of the Hugo nominees that aren't part of a slate, and as it happens have already read all the non-slate nominees at the time of this writing (although I'm horribly behind on reviews). I also picked up the first book of Marko Kloos's series, since he did the right thing and withdrew from the Hugos once it became clear what nonsense was going on this year.

The rest is a pretty random variety of on-line recommendations, books by people who made sense on the Internet, and books by authors I like.

Categories: FLOSS Project Planets

Norbert Preining: TeX Live 2015 DVD preparation starts

Sun, 2015-05-24 18:50

As the last step in the preparation of the TeX Live 2015 release we have now completely frozen updates to the repository and built the (hopefully) final image. The following weeks will see more testing and preparation of the gold master for DVD preparation.

The last weeks were full of frantic rebuilds of binaries, in particular due to bugs in several engines that were, as always, found at the last minute. Even after the closing time we found a serious problem with Windows installations in administrator mode, where unprivileged users didn't have read access to the format dumps. This was due to the File::Temp usage on Windows, which sets too restrictive ACLs.

Unless really serious bugs show up, further changes will have to wait till after the release. Let’s hope for some peaceful two weeks.

So, it is time to prepare for release parties!

Categories: FLOSS Project Planets

Wouter Verhelst: Fixing CVE-2015-0847 in Debian

Sun, 2015-05-24 15:18

Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, for which there are various supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for the various supported versions of nbd-server in Debian. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:

  • For squeeze-lts (oldoldstable), you check out the secure-testing repository, run a script from that repository that generates a DLA number and email template, commit the result, and send a signed mail (whatever format) to the relevant mailinglist. Uploads go to ftp-master with squeeze-lts as target distribution.
  • For backports, you send a mail to the team alias requesting a BSA number, do the upload, and write the mail (based on a template that you need to modify yourself), which you then send (inline signed) to the relevant mailinglist. Uploads go to ftp-master with $dist-backports as target distribution, but you need to be in a particular ACL to be allowed to do so. However, due to backports policy, packages should never be in backports before they are in the distribution from which they are derived -- so I refrained from uploading to backports until the regular security update had been done. Not sure whether that's strictly required, but I didn't think it would do harm; even so, that did mean the procedure for backports was even more involved.
  • For the distributions supported by the security team (stable and oldstable, currently), you prepare the upload yourself, ask permission from the security team (by sending a debdiff), do the upload, and then ask the security team to send out the email. Uploads go to security-master, which implies that you may have to use dpkg-buildpackage's -sa parameter in order to make sure that the orig.tar.gz is actually in the security archive.
  • For unstable and upstream, you Just Upload(TM), because it's no different from a regular release.

While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.

As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.

Categories: FLOSS Project Planets

Russ Allbery: git-pbuilder 1.34

Sat, 2015-05-23 20:59

Long time without a blog post. My time got eaten by work and travel and work-related travel. Hopefully more content soon.

This is just a quick note about the release of version 1.34 of the git-pbuilder script (which at some point really should just be rewritten in Python and incorporated entirely into the git-buildpackage package). Guido Günther added support for creating chroots for LTS distributions.

You can get the latest version from my scripts page.

Categories: FLOSS Project Planets