Planet Debian

Planet Debian - http://planet.debian.org/

John Goerzen: ssh suddenly stops communicating with some hosts

Mon, 2015-03-30 18:13

Here’s a puzzle I’m having trouble figuring out. This afternoon, ssh from my workstation and laptop stopped working to any of my servers (at OVH). The servers are all running wheezy, the local machines jessie. This happens both on my DSL and when tethered to my mobile phone. No updates had been applied since the last time ssh worked. When looking at it with ssh -v, the connections were all hanging after:

debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr umac-64@openssh.com none
debug1: kex: client->server aes128-ctr umac-64@openssh.com none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

Now, I noticed that a server on my LAN — running wheezy — could successfully connect. It was a little different:

debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

And indeed, if I run ssh -o MACs=hmac-md5, it works fine.
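
As a temporary workaround, the same override can live in ~/.ssh/config (the Host pattern below is just an example):

Host *.ovh.example
    MACs hmac-md5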

Now, I tried rebooting machines at multiple ends of this. No change. I tried connecting from multiple networks. No change. And then, as I was writing this blog post, all of a sudden it works normally again. Supremely weird! Any ideas what I can blame here?

Categories: FLOSS Project Planets

Carl Chenet: Verify the backups of backup-manager

Mon, 2015-03-30 18:00

Follow me on Identi.ca or Twitter or Diaspora*

Backup-manager is a tool creating backups and storing them locally. It’s really useful to keep a regular backup of a quickly-changing tree of files (like a development environment) or for traditional backups if you have an NFS mount on your server. Backup-manager is also able to send the backups to another server by FTP.

In order to verify the backups created by backup-manager, we will also use Backup Checker (stars appreciated :) ), the automated tool to verify backups. For each newly-created backup we want to check that:

  • the directory wip/data exists
  • the file wip/dump/dump.sql exists and has a size greater than 100MB
  • the file wip/config/accounts did not change and has a specific MD5 hash sum.
Installing what we need

We install backup-manager and Backup Checker. If you use Debian Wheezy, just use the following command:

apt-key adv --keyserver pgp.mit.edu --recv-keys 2B24481A \
&& echo "deb http://debian.mytux.fr wheezy main" > /etc/apt/sources.list.d/mytux.list \
&& apt-get update \
&& apt-get install backupchecker backup-manager

Backup Checker is also available for Debian Squeeze, Debian Sid and FreeBSD. Check out the documentation to install it from PyPI or from the sources.

Configuring Backup-Manager

Backup-manager will ask which directory you want to back up; in our case we choose /home/joe/dev/wip.

In the configuration file /etc/backup-manager.conf, you need to have the following lines:

export BM_BURNING_METHOD="none"
export BM_UPLOAD_METHOD="none"
export BM_POST_BACKUP_COMMAND="backupchecker -c /etc/backupchecker -l /var/log/backupchecker.log"

Configuring Backup Checker

In order to configure Backup Checker, use the following commands:

# mkdir /etc/backupchecker && touch /var/log/backupchecker.log

Then write the following in /etc/backupchecker/backupmanager.conf:

[main]
name=backupmanager
type=archive
path=/var/archives/laptop-home-joe-dev-wip.%Y%m%d.master.tar.gz
files_list=/etc/backupchecker/backupmanager.list

You can see we’re using placeholders in the path value, in order to always match the latest archive. More information about Backup Checker placeholders is available in the official documentation.

The last step is the description of your checks on the backup:

[files]
wip/data| type|d
wip/config/accounts| md5|27c9d75ba5a755288dbbf32f35712338
wip/dump/dump.sql| >100mb

Launch Backup Manager

Just launch the following command:

# backup-manager

After Backup Manager is launched, Backup Checker is automatically launched and verifies the newly-created backup in the directory where Backup Manager stores the backups.
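
Backup Manager is usually run from cron rather than by hand. For a nightly run, an entry along these lines in /etc/cron.d would do (the binary path is an assumption, and the Debian package may already ship its own cron job):

0 3 * * * root /usr/sbin/backup-manager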

Possible control failures

Let’s say the dump does not have the expected size. It means someone may have messed with the database! Backup Checker will warn you with the following message in /var/log/backupchecker.log:

$ cat /var/log/backupchecker.log
WARNING:root:1 file smaller than expected while checking /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/dump/dump.sql size is 18. Should have been bigger than 104857600.

Another possible failure: someone created an account without asking anyone. The hash sum of the file will change. Here is the alert generated by Backup Checker:

$ cat /var/log/backupchecker.log
WARNING:root:1 file with unexpected hash while checking /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/config/accounts hash is 27c9d75ba5a755288dbbf32f35712338. Should have been 27c9d75ba3a755288dbbf32f35712338.

Another possible failure: someone accidentally (or not) removed the data directory! Backup Checker will detect the missing directory and warn you:

$ cat /var/log/backupchecker.log
WARNING:root:1 file missing in /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/data

Awesome, isn’t it? The power of a backup tool combined with an automated backup checker: no more surprises when you need your backups. Moreover, you are spared the time and effort of checking the backups by yourself.

What about you? Let us know what you think of it. We would be happy to get your feedback. The project cares about its users, and this feature came from an awesome idea in a feature request by one of the Backup Checker users. Thanks Laurent!

 


Categories: FLOSS Project Planets

Yves-Alexis Perez: 3.2.68 Debian/grsec kernel and update on the process

Mon, 2015-03-30 16:27

It's been a long time since I updated my repository with a recent kernel version, sorry for that. This is now done, the kernel (sources, i386 and amd64) is based on the (yet unreleased) 3.2.68-1 Debian kernel, patched with grsecurity 3.1-3.2.68-201503251805, and has the version 3.2.68-1~grsec1.

It works fine here, but as always, no warranty. If any problem occurs, try to reproduce using vanilla 3.2.68 + grsec patch before reporting here.

And now that the Jessie release approaches, the question of what to do with those Debian/grsec kernels still arises: the Jessie kernel is based on the 3.16 branch, which is not a (kernel.org) long term branch. Actually, the upstream support already ended some time ago, and the (long term) maintenance is now assured by the Canonical Kernel Team (thus the -ckt suffix) with some help from the Debian kernel maintainers. So there's no grsecurity patch following 3.16, and there's no easy way to forward-port the 3.14 patches.

At that point, and considering the support I got the last few years on this initiative, I don't think it's really worth it to continue providing those kernels.

One initiative which might be interesting, though, is the Mempo kernels. The Mempo team works on kernel reproducible builds, but they also include the grsecurity patch. Unfortunately, it seems that building the kernel their way involves calling a bash script which calls another one, and another one. A quick look at the various repositories only confused me about how they actually build the kernel in the end, so I'm unsure it's the perfect fit for a supposedly secure kernel. Not that the Debian way of building the kernel doesn't involve calling a lot of scripts (either bash or python), but still. After digging a bit, it seems that they're using make-kpkg (from the kernel-package package), which is not the recommended way anymore. Also, they're currently targeting Wheezy, so the 3.2 kernel, and I have no idea what they'll choose for Jessie.

In the end, for myself, I might just do a quick script which checks out a git repository at the right version, picks the latest grsec patch for that branch, applies it, then runs make deb-pkg and is done with it (see the sketch after the list below). That still leaves the problem of which branch to follow:

  • run a 3.14 kernel instead of the 3.16 (I'm unsure how much I'd lose / not gain from going from 3.2 to 3.14 instead of 3.16);
  • run a 3.19 kernel, then upgrade when it's time, until a new LTS branch appears.
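
A minimal sketch of such a script, assuming a hypothetical 3.14.x release and patch file name (both placeholders, not actual versions):

#!/bin/sh
# Sketch: check out a stable kernel, apply the matching grsecurity patch,
# and build Debian packages with the in-tree deb-pkg target.
set -e
KVER=3.14.37                                   # placeholder version
GRSEC_PATCH=grsecurity-3.1-${KVER}-test.patch  # placeholder patch name
git clone --depth 1 --branch "v${KVER}" \
    git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git "linux-${KVER}"
cd "linux-${KVER}"
patch -p1 < "../${GRSEC_PATCH}"
cp "/boot/config-$(uname -r)" .config          # reuse the running kernel's config
make olddefconfig
make -j"$(nproc)" deb-pkg LOCALVERSION=-grsec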

There's also the config file question, but if I'm just using the kernels for myself and not sharing them, that's easier too; and if some people are actually interested, it's not hard to publish them.

Categories: FLOSS Project Planets

Matthias Klumpp: Limba Project: Another progress report

Mon, 2015-03-30 15:46

And once again, it’s time for another Limba blogpost.

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution’s native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba works distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, with which most of the important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates etc. Using Limba is still a bit rough, but most things work pretty well already.

So, it’s time for another progress report. Since a FAQ-like list is easier to digest compared to a long blogpost, I go with this format again. So, let’s address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don’t know about GNOME’s sandboxes, take a look at the GNOME Wiki – Alexander Larsson also blogged about it recently)

First of all: There is no rivalry here and no NIH syndrome involved. Limba and GNOME’s Sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between both projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple single libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.

Both projects have their individual upsides and downsides: While the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. we will have some applications ship smaller shared libraries with their binaries, as they are not part of the big runtime).

Limba does not have this issue, but instead, with its dynamic runtimes, relies on upstreams behaving nicely and not breaking ABIs in security updates, so existing applications continue to work even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible, and even allows mimicking the behaviour of GNOME’s XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball – Alexander Larsson also created an XdgApp bundle for it, and due to the low amount of stuff Neverball depends on, it was a perfect test subject.

One of the main things I want to achieve with Limba is to integrate it well with continuous integration systems, so you can automatically get a Limba package built for your application and have it tested with the current set of dependencies. Also, building packages should be very easy, and as failsafe as possible.

You can find the current Neverball test in the Limba-Neverball repository on Github. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script.

Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact version depended on in the Limba package.

Creating a Limba package is trivial: it boils down to creating a simple “control” file describing the dependencies of the package, and to writing an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file (which uses a subset of the Travis build config schema).
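
Just to give a feel for how little is involved, a control file could look roughly like the sketch below (the field names are invented for illustration and are not Limba’s actual syntax – see the Limba-Neverball repository for the real thing):

Format-Version: 1.0

Requires:
 SDL2 (>= 2.0.0)
 SDL2_ttf (>= 2.0.0)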

This is the Neverball Limba package, built on Tanglu 3, run on Fedora 21:

Which kernel do I need to run Limba?

The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum version requirement to Linux 4.0+ very soon, since this release contains some improvements in OverlayFS and a few other kernel features I am thinking about making use of.

Linux 3.18 is included in most Linux distributions released in 2015 (and of course any rolling release distribution and Fedora have it).
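
A quick way to check whether a given system qualifies is to look at the kernel version and whether OverlayFS is available (a generic kernel check, not a Limba command; overlay only shows up in /proc/filesystems once the module is loaded or built in):

$ uname -r
$ grep overlay /proc/filesystems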

Building all these little Limba packages and keeping them up-to-date is annoying…

Yes indeed. I expect that we will see some “bigger” Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba currently, since there are so few packages available you can reuse. But I plan to address this. Behind the scenes, I am working on a webservice, which will allow developers to upload Limba packages.

This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, map the available software with CVE databases to see if a component is vulnerable and publish that information, etc.

All of this is currently planned, and I can’t say a lot more yet. Stay tuned! (As always: If you want to help, please contact me)

Are the Limba interfaces stable? Can I use it already?

The Limba package format should be stable by now – since Limba is still Alpha software, I will, however, make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don’t think that this will happen though, as the Limba packages are designed to be easily backward- and forward-compatible.

For the Limba repository format, I might make some more changes though (less invasive, but you might need to rebuild the repository).

tl;dr: Yes! Please use Limba and report bugs, but keep in mind that Limba is still in an early stage of development, and we need bug reports!

Will there be integration into GNOME-Software and Muon?

From the GNOME-Software side, there were positive signals about that, but some technical obstacles need to be resolved first. I did not yet get in contact with the Muon crew – they are just implementing AppStream, which is a prerequisite for having any support for Limba[1].

Since PackageKit dropped its support for plugins, every software manager needs to implement support for Limba itself.

So, thanks for reading this (again too long) blogpost. There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu!

 

[1]: And I should actually help with the AppStream support, but currently I cannot allocate enough time to take on that additional project as well – this might change in a few weeks. Also, Muon does pretty well already!

Categories: FLOSS Project Planets

Daniel Leidert: Prevent suspend/hibernate if system is remotely backed up via rdiff-backup

Mon, 2015-03-30 13:01

I usually use rdiff-backup to back up several of my systems. One is a workstation which goes to sleep after some time of idling around. Now, having a user logged in and running rdiff-backup (or rsync, rsnapshot etc. for that matter) won't prevent the system from being put to sleep. Naturally this happens before the backup is complete. So some time ago I was looking for a solution and received a suggestion to use a script in /etc/pm/sleep.d/. I had to modify the script a bit, because the query result was always true. So this is my solution in /etc/pm/sleep.d/01_prevent_sleep_on_backup now:


#!/bin/sh

. "${PM_FUNCTIONS}"

command_exists rdiff-backup || exit $NA

case "$1" in
    hibernate|suspend)
        # refuse to sleep while an rdiff-backup process is running
        if ps cax | grep -q rdiff-backup
        then
            exit 1
        fi
        ;;
esac

exit 0

Currently testing ...

Update

The above works with pm-utils, but it fails with systemd. It seems I have to move and modify the script for my system.

Update 2

It doesn't work. In short: exit 1 doesn't prevent systemd from going to suspend. I can see that the script itself leads to the results I want, so the logic is correct. But I cannot find a way to tell systemd to stop the suspend. Shouldn't it be doing this automatically in a case where a remote user is logged in and runs a command?
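
One thing still left to try: systemd-inhibit(1) holds a lock for the duration of a command, so wrapping the server-side rdiff-backup invocation in it might do the trick (untested here; paths and hostname are placeholders):

systemd-inhibit --what=sleep --who=rdiff-backup \
    --why="remote backup in progress" \
    rdiff-backup /home remote.example.org::/backups/home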

Update 3

There is also a related bug report.

Categories: FLOSS Project Planets

Dimitri John Ledkov: Boiling frog, or when did we lose it with /etc?

Mon, 2015-03-30 11:15
$ sudo find /etc -type f | wc -l
2794

Stateless

When was the last time you looked at /etc and thought "I honestly know what every single file in here is"? Or, for example, had the thought "Each file in here is a configuration change that I made"? Or, for example, do you have confidence that your system will continue to function correctly if any of those files and directories are removed?

Traditionally most *NIX utilities are simple enough that they do not require any configuration files whatsoever. However most have command line arguments and environment variables to manipulate their behavior. Some of the more complex utilities have configuration files under /etc, sometimes with "layered" configuration from the user's home directory (~/). Most of these are generally widely accepted. However, they do not segregate upstream / distribution / site administrator / local administrator / user configuration changes. Most update mechanisms created various ways to deal with merging and maintaining the correct state of those. For example both dpkg & RPM (%config) have elaborate strategies and policies and ways to deal with them. However, even today, they still cause problems: prompting the user for whitespace changes in config files, not preserving user changes, or failing to migrate them.

I can't find the exact date, but it has now been something like 12 years since the XDG Base Directory Specification was drafted. It came from Desktop Environment requirements, but one thing it achieves is segregation between upstream / distro / admin / user induced changes. When applications started to implement the Base Directory Specification, I started to feel empowered. Upstream ships sensible configs in /usr, distribution integrators ship their overlay tweaks packaged in /usr, my site admin applies further requirements in /etc, and as a user I am free to improve or break everything with configs in ~/. One of the best things about this setup: no upgrade prompts, and ease of reverting each layer of those configs (or at least auditing where the settings are coming from).

However, the uptake of the XDG Base Directory spec is slow / non-existent among the core components of any OS today. And at the same time /etc has grown to be a dumping ground for pretty much everything under the sun:
  • Symlink farms - E.g. /etc/rc*.d/*, /etc/systemd/system/*.wants/*, /etc/ssl/certs/*
  • Cache files - E.g. /etc/ld.so.cache
  • Empty (and mandatory) directories
  • Empty (and mandatory) "configuration" files. - E.g. whitespace & comments only
Let's be brutally honest and say that none of the above belongs in /etc. /etc must be for end-user configuration only, made by the end user alone and nobody else (or e.g. an automation tool driven by the end-user, like puppet).

Documentation of available configuration options and the syntax to specify those in the config files should be shipped... in the documentation. E.g. man pages, /usr/share/doc, and so on. And not as system-wide "example" config files. Absence of files in /etc must not be treated as fatal, but as the norm, since most users use default settings (especially for the most obscure options). Lastly, compiled-in defaults should be used where possible, or e.g. layer configuration from multiple locations (e.g. /usr, /etc, ~/ where appropriate).
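
A minimal sketch of that layering in shell (paths are illustrative):

# Layer configuration from multiple locations; later files override
# earlier ones, and a missing file is the norm, not an error.
for conf in /usr/share/example/defaults.conf \
            /etc/example.conf \
            "${XDG_CONFIG_HOME:-$HOME/.config}/example.conf"; do
    [ -r "$conf" ] && . "$conf"
done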

The above observations are not novel, and are shared by most developers and users in the wider open source ecosystem. There are many projects and concepts to deal with this problem by using automation (e.g. puppet, chef), by migrating to new layouts (e.g. implementing / supporting the XDG base dir spec), using "app bundles" (e.g. mobile apps, docker), or fully enumerating/abstracting everything in a generic manner (e.g. NixOS). Whilst fixing the issue at hand, these solutions do increase the dependency on the files in /etc being available. In other words we grew a de-facto user-space API we must not break, because modifications to the well-known files in /etc are expected to take effect by both users and many administrator tools.

Since August last year, I have joined the Open Source Technology Center at Intel, and have been working on the Clear Linux* Project for Intel Architecture. One of the goals we have set out is to achieve stateless operation - that is, to have an empty /etc by default, reserved for user modification alone, yet continuing to support all legacy / well-known configuration paths. The premise is that all software can be patched with auto-detection, built-in defaults or support for layered configuration to achieve this. I hope that this work will interest everyone and will be widely adopted.

Whilst the effort to convert everything is still ongoing, I want to discuss a few examples from the core of any system.

Shadow

The login(1) command, whilst having a built-in default for every single option, exits with status 1 if it cannot stat(2) the login.defs(5) file.

The passwd(1) command will write out the salted/hashed password in the passwd(5) file, rather than in shadow(5), if it cannot stat the shadow(5) file. There is similar behavior with gshadow. I found it very ironic that the upstream project "shadow" does not use shadow(5) by default.

Similarly, the stock files manipulated by the passwd/useradd/groupadd utilities are not created if missing.

Some settings in login.defs(5) are not applicable when compiled with PAM support, yet are present in the default shipped login.defs(5) file.

Patches to resolve the above issues are undergoing review on the upstream mailing list.

DBus

In the XML-based configuration, `includedir' elements are mandatory to exist on disk, that is, an empty directory must be present if referenced. If these directories are non-existent, the configuration fails to load and the system or session bus is not started.

Similarly, upstream is in general agreement with the stateless concept, and patches to move all of dbus' default configuration from /etc to /usr are being reviewed for inclusion at the bug tracker. I hope this change will make it into the 1.10 stable release.
GNU Lib C

Today, we live in a dual-stack IPv4 and IPv6 world, where even the localhost has multiple IP addresses. As a slightly ageist time reference, the first VCS I ever used was git. Thus when I read below, I get very confused:

$ cat /etc/host.conf
# The "order" line is only used by old versions of the C library.
order hosts,bind
multi on

Why not simply do this:
--- a/resolv/res_hconf.c
+++ b/resolv/res_hconf.c
@@ -309,6 +309,8 @@ do_init (void)
   if (hconf_name == NULL)
     hconf_name = _PATH_HOSTCONF;

+  arg_bool (ENV_MULTI, 1, "on", HCONF_FLAG_MULTI);
+
   fp = fopen (hconf_name, "rce");
   if (fp)
     {
There are still many other packages that need fixes similar to the above. Stay tuned for further stateless observations about glibc, OpenSSH, systemd and other well-known packages.

In the meantime, you can try out the https://clearlinux.org/ images that implement the above and more already. If you want to chat about it more, comment on G+, find me on irc - xnox @ irc.freenode.net #clearlinux - and join our mailing list to kick the conversation off, if you are interested in making the world more stateless.

ps.
I am a professional Linux Distribution developer, currently employed by Intel, however the postings on this site are my own and don't necessarily represent Intel's or any other past/present/future employer positions, strategies, or opinions.

* Other names and brands may be claimed as the property of others


Categories: FLOSS Project Planets

Steve McIntyre: UEFI Debian installer work for Jessie, part 6

Sun, 2015-03-29 21:44

One final update on my work for UEFI improvements in Jessie!

All of my improvements have been committed into the various Debian packages involved, and the latest release candidate for Jessie's debian-installer build (RC2) works just as well as my test builds on the Bay Trail system I've been using (Asus X205TA). Job done! :-)

I'm still hoping to maybe get more support for this particular hardware included in Jessie, but I can't promise. The mixed EFI work has also improved things for a lot of Mac users, and I'm planning to write up a more comprehensive list of supported machines in the Debian wiki (for now).

There's now no need to use any of the older test installer images - please switch to RC2 for now. See http://cdimage.debian.org/cdimage/jessie_di_rc2/ for the images. If you want to install a 64-bit system with the 32-bit UEFI support, make sure you use the multi-arch amd64/i386 netinst or DVD. Otherwise, any of the standard i386 images should work for a 32-bit only system.

Upstreaming

My kernel patch to add the new /sys file was accepted upstream a while back, and has been in Linus' master branch for some time. It'll be in 4.0 unless something goes horribly wrong, and as it's such a tiny piece of code it's trivial to backport to anything remotely recent too.

I've also just seen that my patch for grub2 to use this new /sys file has been accepted upstream this week. Again, the change is small and self-contained so should be easy to copy across into other trees too.

Mixed EFI systems should now have better support across all distros in the near future, I hope.

Categories: FLOSS Project Planets

Eddy Petrișor: HOWTO: Disassemble a big endian Arm raw memory dump with objdump

Sun, 2015-03-29 09:30
This is trivial and very useful for embedded code dumps, but in case somebody (including future me) needs this, here it goes:
arm-none-eabi-objdump -D -b binary -m arm -EB dump.bin | less

The options mean:
  • -D - disassemble
  • -b binary - input file is a raw file
  • -m arm - arm architecture
  • -EB - big endian
By default, endianness is assumed to be little endian, or at least that's what happened with my toolchain.
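
For a little-endian dump, the same invocation with -EL instead of -EB should do:

arm-none-eabi-objdump -D -b binary -m arm -EL dump.bin | less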
Categories: FLOSS Project Planets

Zlatan Todorić: It's all about fun

Sat, 2015-03-28 21:00

The percentage that women occupy among Debian DDs is ~2%. Yes, just ~2% of DDs are ladies! So that means ~98% of DDs are gentlemen.

I know there are more ladies in Debian, so I firstly urge you, for the love of Debian, to apply if you are contributing to this project, love its community and want to see Debian taking over the universe (okay, it seems that we conquered outer space, so we need help on Earth).

So why is the number this low? Well, maybe it's too precious to us currently inside, and we want to prevent it being spoiled from outside. There also seem to be not that many younger DDs. Why is that important? Well, young people like to do it and not to think about it. Many times they just break things, but many times they also make a breakthrough. Why is difference important and why should we embrace it? It's very important because it breaks a monopoly on view and behavior. It brings views not just from a larger number of people, but also from people from different backgrounds, and in constructive conversation it can put even more pluses on the current workflow or it can counter it with good arguments. In a project of its size, with the worldwide geolocation of its developers, this is true for Debian more than for any other project I know. We need more women so we can balance our inner workings and have a better understanding of humanity and how it is moving, what it needs and why, and where it is steering. That way we can produce a community which will improve the quality of the OS that we produce - because of the sheer number of different people working on the same thing, each bringing to it their own personal touch. So, ladies and youth all over the world, unite and join Debian, because without diversity Debian can't grow beyond its current size. Also, no, Debian is not about code only; it needs painters, musicians, people that want to talk about Debian, people that share love and happiness, people that want to build better communities, UI/UX designers, makers, people who know how to repair a bike, athletes, homebrew beer producers, lawyers (just while the world gets rid of laws, then we don't need you), actors, writers... Why? Well, because the world and communities are made up of all that diversity, and that's what makes them a better and not a monotone place.

But I just use Debian. Well, do you feel love towards Debian and its work? Would you like to feel more like an integral part of the community? If the answer is a big fat YES, then you should be a DD too. Every person that agrees with Debian's philosophy about freedom and behaving in a good manner should join Debian. Every person that feels touched and enhanced by Debian's work should become part of the community and share their experience of how Debian touched their soul and impacted their life. If you love Debian, you should be free to contribute to it in whatever manner and you should be free to express your love towards it. If you think lintian is sexy, or shebang is a good friend of yours, or you enjoy talking to MadameZou about Debian and zombies (yeah, we do have all kinds here), or you like Krita, or you hate the look of the default XFCE theme, or you can prove that you are a crazier developer than paultag - just hop into the community and try to integrate into it. You will meet great folks, have a lot of conversations about wine and cheese, play some dangerous card games and even learn about things like bokononism (yeah, I am looking at you, dkg!).

Now for the current Debian community - what the hell are packaging and non-packaging Debian Developers? Are ones better than the others? Do the others stink? They don't know how to hug? WHAT? Yes, I know that an inexperienced person shouldn't have permission to access the Debian packaging infrastructure, but I have the feeling that even that person knows that. Every person should have a place in Debian and acknowledge other fields. So yes, software developers need access to the Debian packaging infrastructure, painters don't. I think we can agree on this. So let's abolish the stupid term and remove the difference in our community. Let's embrace the difference, because if someone writes a good poem about Debian heroism I could like it more than flashplugin-nonfree! Yep, I made that comparison on purpose, so you can give it a thought.

Debian has an excellent community for the operating system that it's producing. And it's not going away, not anytime soon at least. But it will not go forward if we don't give an additional push as human beings, as people who care about their fellow Debianites. And we do care, I know that; we just need to make it more public. We don't hide bugs; we for sure shouldn't hide features. It will probably bring bad seeds too, but we have the mechanisms and will to counter that. If for, on average, 10 bad seeds we get some crazy good hacker or a crazy lovely positive person like this lady, we will be on the right path. Debian is a better place; it should lead the effort to bring more people into the FLOSS world, and it should allow people to bring more diversity into Debian.

Categories: FLOSS Project Planets

Eddy Petrișor: Net Neutrality

Sat, 2015-03-28 19:40
I have seen this awesomeness way too late, but it is still awesome.
Categories: FLOSS Project Planets

Leo 'costela' Antunes: Go linear programming library

Sat, 2015-03-28 16:55

After a way too long hiatus, I finally got back to working on some side-projects and wrote a small go library for solving linear programming problems. Say hi to golp!

Since I’m no LP expert, golp makes use of GLPK to do the actual weight-lifting. Unfortunately, GLPK currently isn’t reentrant, so it can’t really be used with go’s great goroutines. Still, it works well enough to be used for the next little project.

Now, if only I could get back to working on Debian…

Categories: FLOSS Project Planets

Matt Zimmerman: What I think about thought

Sat, 2015-03-28 12:50

Only parts of us will ever
touch o̶n̶l̶y̶ parts of others –
one’s own truth is just that really — one’s own truth.
We can only share the part that is u̶n̶d̶e̶r̶s̶t̶o̶o̶d̶ ̶b̶y̶ within another’s knowing acceptable t̶o̶ ̶t̶h̶e̶ ̶o̶t̶h̶e̶r̶—̶t̶h̶e̶r̶e̶f̶o̶r̶e̶ so one
is for most part alone.
As it is meant to be in
evidently in nature — at best t̶h̶o̶u̶g̶h̶ ̶ perhaps it could make
our understanding seek
another’s loneliness out.

– unpublished poem by Marilyn Monroe, via berlin-artparasites

This poem inspired me to put some ideas into words this morning, an attempt to summarize my current working theory of consciousness.

Ideas travel through space and time. An idea that exists in my mind is filtered through my ability to express it somehow (words, art, body language, …), and is then interpreted by your mind and its models for understanding the world. This shifts your perspective in some way, some or all of which may be unconscious. When our minds encounter new ideas, they are accepted or rejected, reframed, and integrated with our existing mental models. This process forms a sort of living ecosystem, which maintains equilibrium within the realm of thought. Ideas are born, divide, mutate, and die in the process. Language, culture, education and so on are stable structures which form and support this ecosystem.

Consciousness also has analogues of the immune system, for example strongly held beliefs and models which tend to reject certain ideas. Here again these can be unconscious or conscious. I’ve seen it happen that if someone hears an idea they simply cannot integrate, they will behave as if they did not hear it at all. Some ideas can be identified as such a serious threat that ignoring them is not enough to feel safe: we feel compelled to eliminate the idea in the external world. The story of Christianity describes a scenario where an idea was so threatening to some people that they felt compelled to kill someone who expressed it.

A microcosm of this ecosystem also exists within each individual mind. There are mental structures which we can directly introspect and understand, and others which we can only infer by observing our thoughts and behaviors. These structures communicate with each other, and this communication is limited by their ability to “speak each other’s language”. A dream, for example, is the conveyance of an idea from an unconscious place to a conscious one. Sometimes we get the message, and sometimes we don’t. We can learn to interpret, but we can’t directly examine and confirm if we’re right. As in biology, each part of this process introduces uncountable “errors”, but the overall system is surprisingly robust and stable.

This whole system, with all its many minds interacting, can be thought of as an intelligence unto itself, a gestalt consciousness. This interpretation leads to some interesting further conclusions:

  • The notion that an individual person possesses a single, coherent point of view seems nonsensical
  • The separation between “my mind” and “your mind” seems arbitrary
  • The attribution of consciousness only to humans, or only to living beings, seems absurd

Naturally, this is by no means an original idea (can such a thing exist?). It is my own take on the subject, informed both consciously and unconsciously by my own study, first-hand experience, conversations I’ve had with others, and so on. It’s informed by the countless thinkers who have influenced me. Its expression is limited by my ability to write about it in a way that makes sense to other people.
Maybe some of this makes sense to you, and maybe I seem insane, or maybe both. Hopefully you don’t find that you have an inexplicable unconscious desire to kill me!


Categories: FLOSS Project Planets

Joachim Breitner: An academic birthday present

Sat, 2015-03-28 08:30

Yesterday, which happened to be my 30th birthday, a small package got delivered to my office: The printed proceedings of last year's “Trends in Functional Programming” conference, where I published a paper on Call Arity (preprint). Although I doubt the usefulness of printed proceedings, it was a nicely timed birthday present.

Looking at the rather short table of contents – only 8 papers, after 27 presented and 22 submitted – I thought that this might mean that, with some luck, I might have a chance of getting the “Best student paper award”, which I presumed would be announced at the next iteration of the conference.

For no particular reason I was leisurely browsing through the book, and started to read the preface. And what do I read there?

Among the papers selected for these proceedings, two papers stood out. The award for Best Student Paper went to Joachim Breitner for his paper entitled Call Arity, and the award for Best Paper Overall went to Edwin Brady for his paper entitled Resource-dependent Algebraic Effects. Congratulations!

Now, that is a real nice birthday present! I'm not sure I would even have found out about it, had I not thrown a quick glance at page V...

I hope that it is a good omen for my related ICFP'15 submission.

Categories: FLOSS Project Planets

Richard Hartmann: Release Critical Bug report for Week 13

Fri, 2015-03-27 16:42

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1039 (Including 155 bugs affecting key packages)
    • Affecting Jessie: 97 (key packages: 65) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 77 (key packages: 51) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 13 bugs are tagged 'patch'. (key packages: 9) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 4 bugs are marked as done, but still affect unstable. (key packages: 1) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 60 bugs are neither tagged patch, nor marked done. (key packages: 41) Help make a first step towards resolution!
      • Affecting Jessie only: 20 (key packages: 14) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 11 bugs are in packages that are unblocked by the release team. (key packages: 7)
        • 9 bugs are in packages that are not unblocked. (key packages: 7)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy         Jessie
43    284 (213+71)   468 (332+136)  319 (240+79)
44    261 (201+60)   408 (265+143)  274 (224+50)
45    261 (205+56)   425 (291+134)  295 (229+66)
46    271 (200+71)   401 (258+143)  427 (313+114)
47    283 (209+74)   366 (221+145)  342 (260+82)
48    256 (177+79)   378 (230+148)  274 (189+85)
49    256 (180+76)   360 (216+155)  226 (147+79)
50    204 (148+56)   339 (195+144)  ???
51    178 (124+54)   323 (190+133)  189 (134+55)
52    115 (78+37)    289 (190+99)   147 (112+35)
1     93 (60+33)     287 (171+116)  140 (104+36)
2     82 (46+36)     271 (162+109)  157 (124+33)
3     25 (15+10)     249 (165+84)   172 (128+44)
4     14 (8+6)       244 (176+68)   187 (132+55)
5     2 (0+2)        224 (132+92)   175 (124+51)
6     release!       212 (129+83)   161 (109+52)
7     release+1      194 (128+66)   147 (106+41)
8     release+2      206 (144+62)   147 (96+51)
9     release+3      174 (105+69)   152 (101+51)
10    release+4      120 (72+48)    112 (82+30)
11    release+5      115 (74+41)    97 (68+29)
12    release+6      93 (47+46)     87 (71+16)
13    release+7      50 (24+26)     97 (77+20)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Categories: FLOSS Project Planets

Michal Čihař: Porting python-gammu to Python 3

Fri, 2015-03-27 13:00

Over time I started to get more and more requests to have python-gammu working with Python 3. Of course this request makes sense, but I somehow failed to find time for it.

Also, for quite some time python-gammu has been distributed together with the Gammu sources. This was another struggle to overcome when supporting Python 3, as in many cases users will want to build the module for both Python 2 and 3 (at least most distributions will want to do so), and with the current CMake-based build system this did not seem to be easy to achieve.

So I've decided it's time to split the python module out of the library. The reasons for having them together are no longer valid (libGammu has a quite stable API these days) and having a standard module which can be installed by pip is a nice thing.

Once the code had been put into a separate git repository, I slowly progressed on porting to Python 3. Most of the problems were on the C side of the code, where Python really does not make it easy to support both Python 2 and 3. So the code ended up with many #ifdefs, but I see no other way. While doing these changes, many points in the API were fixed to accept unicode strings in Python 2 as well.
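
The shape of those #ifdefs is the usual one for C extensions supporting both Python versions; here is a minimal sketch with a hypothetical module name (not python-gammu's actual code):

#include <Python.h>

static PyMethodDef example_methods[] = {
    {NULL, NULL, 0, NULL}   /* sentinel */
};

#if PY_MAJOR_VERSION >= 3
/* Python 3: the module is created from a PyModuleDef */
static struct PyModuleDef example_module = {
    PyModuleDef_HEAD_INIT, "example", NULL, -1, example_methods,
    NULL, NULL, NULL, NULL
};

PyMODINIT_FUNC PyInit_example(void)
{
    return PyModule_Create(&example_module);
}
#else
/* Python 2: the module is registered with Py_InitModule */
PyMODINIT_FUNC initexample(void)
{
    Py_InitModule("example", example_methods);
}
#endif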

Anyway, today we have the first successful build of python-gammu working on both Python 2 and 3. I'm afraid there is still some bug leading to occasional segfaults on Travis, which is not reproducible locally. But hopefully this will be fixed in the upcoming weeks and we can release the separate python-gammu module again.

Filed under: English Gammu python-gammu Wammu

Categories: FLOSS Project Planets

Olivier Berger: New short paper : “Designing a virtual laboratory for a relational database MOOC” with Vagrant, Debian, etc.

Fri, 2015-03-27 07:07

Here’s a short preview of our latest accepted paper (to appear at CSEDU 2015), about the construction of VMs for the Relational Database MOOC using Vagrant, Debian, PostgreSQL (previous post), etc. :

Designing a virtual laboratory for a relational database MOOC

Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac

Keywords: Remote Learning, Virtualization, Open Education Resources, MOOC, Vagrant

Abstract: Technical advances in machine and system virtualization are creating opportunities for remote learning to provide significantly better support for active education approaches. Students now, in general, have personal computers that are powerful enough to support virtualization of operating systems and networks. As a consequence, it is now possible to provide remote learners with a common, standard, virtual laboratory and learning environment, independent of the different types of physical machines on which they work. This greatly enhances the opportunity for producing re-usable teaching materials that are actually re-used. However, configuring and installing such virtual laboratories is technically challenging for teachers and students. We report on our experience of building a virtual machine (VM) laboratory for a MOOC on relational databases. The architecture of our virtual machine is described in detail, and we evaluate the benefits of using the Vagrant tool for building and delivering the VM.

TOC :

  • Introduction
    • A brief history of distance learning
    • Virtualization : the challenges
    • The design problem
  • The virtualization requirements
    • Scenario-based requirements
    • Related work on requirements
    • Scalability of existing approaches
  • The MOOC laboratory
    • Exercises and lab tools
    • From requirements to design
  • Making the VM as a Vagrant box
    • Portability issues
    • Delivery through Internet
    • Security
    • Availability of the box sources
  • Validation
    • Reliability Issues with VirtualBox
    • Student feedback and evaluation
  • Future work
    • Laboratory monitoring
    • More modular VMs
  • Conclusions

Bibliography

  • Alario-Hoyos et al., 2014
Alario-Hoyos, C., Pérez-Sanagustín, M., Kloos, C. D., and Muñoz Merino, P. J. (2014).
    Recommendations for the design and deployment of MOOCs: Insights about the MOOC digital education of the future deployed in MiríadaX.
    In Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM ’14, pages 403-408, New York, NY, USA. ACM.
  • Armbrust et al., 2010
    Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., and Zaharia, M. (2010).
    A view of cloud computing.
    Commun. ACM, 53:50-58.
  • Billingsley and Steel, 2014
    Billingsley, W. and Steel, J. R. (2014).
    Towards a supercollaborative software engineering MOOC.
    In Companion Proceedings of the 36th International Conference on Software Engineering, pages 283-286. ACM.
  • Brown and Duguid, 1996
    Brown, J. S. and Duguid, P. (1996).
    Universities in the digital age.
    Change: The Magazine of Higher Learning, 28(4):11-19.
  • Bullers et al., 2006
    Bullers, Jr., W. I., Burd, S., and Seazzu, A. F. (2006).
    Virtual machines – an idea whose time has returned: Application to network, security, and database courses.
    SIGCSE Bull., 38(1):102-106.
  • Chen and Noble, 2001
    Chen, P. M. and Noble, B. D. (2001).
    When virtual is better than real [operating system relocation to virtual machines].
    In Hot Topics in Operating Systems, 2001. Proceedings of the Eighth Workshop on, pages 133-138. IEEE.
  • Cooper, 2005
    Cooper, M. (2005).
    Remote laboratories in teaching and learning-issues impinging on widespread adoption in science and engineering education.
    International Journal of Online Engineering (iJOE), 1(1).
  • Cormier, 2014
    Cormier, D. (2014).
    Rhizo14-the MOOC that community built.
    INNOQUAL-International Journal for Innovation and Quality in Learning, 2(3).
  • Dougiamas and Taylor, 2003
    Dougiamas, M. and Taylor, P. (2003).
    Moodle: Using learning communities to create an open source course management system.
    In World conference on educational multimedia, hypermedia and telecommunications, pages 171-178.
  • Gomes and Bogosyan, 2009
    Gomes, L. and Bogosyan, S. (2009).
    Current trends in remote laboratories.
    Industrial Electronics, IEEE Transactions on, 56(12):4744-4756.
  • Hashimoto, 2013
    Hashimoto, M. (2013).
    Vagrant: Up and Running.
    O’Reilly Media, Inc.
  • Jones and Winne, 2012
    Jones, M. and Winne, P. H. (2012).
    Adaptive Learning Environments: Foundations and Frontiers.
    Springer Publishing Company, Incorporated, 1st edition.
  • Lowe, 2014
    Lowe, D. (2014).
    MOOLs: Massive open online laboratories: An analysis of scale and feasibility.
    In Remote Engineering and Virtual Instrumentation (REV), 2014 11th International Conference on, pages 1-6. IEEE.
  • Ma and Nickerson, 2006
    Ma, J. and Nickerson, J. V. (2006).
    Hands-on, simulated, and remote laboratories: A comparative literature review.
    ACM Computing Surveys (CSUR), 38(3):7.
  • Pearson, 2013
    Pearson, S. (2013).
    Privacy, security and trust in cloud computing.
    In Privacy and Security for Cloud Computing, pages 3-42. Springer.
  • Prince, 2004
    Prince, M. (2004).
    Does active learning work? A review of the research.
    Journal of engineering education, 93(3):223-231.
  • Romero-Zaldivar et al., 2012
    Romero-Zaldivar, V.-A., Pardo, A., Burgos, D., and Delgado Kloos, C. (2012).
    Monitoring student progress using virtual appliances: A case study.
    Computers & Education, 58(4):1058-1067.
  • Sumner, 2000
    Sumner, J. (2000).
    Serving the system: A critical history of distance education.
    Open learning, 15(3):267-285.
  • Watson, 2008
    Watson, J. (2008).
    Virtualbox: Bits and bytes masquerading as machines.
    Linux J., 2008(166).
  • Winckles et al., 2011
    Winckles, A., Spasova, K., and Rowsell, T. (2011).
    Remote laboratories and reusable learning objects in a distance learning context.
    Networks, 14:43-55.
  • Yeung et al., 2010
    Yeung, H., Lowe, D. B., and Murray, S. (2010).
    Interoperability of remote laboratories systems.
    iJOE, 6(S1):71-80.
Categories: FLOSS Project Planets

Michal Čihař: Spring is here

Fri, 2015-03-27 01:00

Finally winter seems to be over and it's time to take out the camera and take some pictures. Out of the many areas where you can see spring snowflakes, we've chosen the Čtvrtě area near Mcely, a village which is less famous, but still very nice.

Filed under: English Photography Travelling

Categories: FLOSS Project Planets

Daniel Pocock: WebRTC: DruCall in Google Summer of Code 2015?

Thu, 2015-03-26 17:58

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015, here are some of the possibilities that are achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call
Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Categories: FLOSS Project Planets

Zlatan Todorić: Random bits

Thu, 2015-03-26 11:04
Gogs

Today I installed Gogs and configured it with MySQL (yes, yes, I know - use postgres, you punk!). I will not post details of how I did it because:

  • It still has "weird" coding, as already pointed out by others
  • It doesn't have fork and pull request abilities yet

And that was the end of the journey. When they code in fork/PR, I will close my eyes on the other coding stuff and try it again, because Gitlab is not close to my heart and installing their binary takes ~850MB of space, which means a lot of ruby code that could go the wrong way.

It would be really awesome to have in the archive something to apt install and get a github-like place. It would be great if the Debian infrastructure had the possibility to host that.

Diaspora*

Although I am thrilled about it finally reaching the Debian archive, it still isn't ready. Not even close. I couldn't even finish the installation of it, and it's not suitable for the main archive as it takes files from the github repo of diaspora. Maybe I should poke the Bitnami folks about how they did it.

The power of Free software

TextSecure is a mobile app that I thought could take on Viber or WhatsApp. Besides all its goodies, it had the ability to send encrypted SMS to other TS users. Not anymore. Fortunately, there is a fork called SMSSecure which still has that ability.

Trolls

So there is this Allwinner company that does crap after crap. Their latest will reach a wider audience, and I hope it gets resolved in the manner they would react if some big proprietary company was stealing their code. It seems Allwinner is a pseudonym for Alllooser. Whoa, that was fun!

A year old experiment

So I had a bet with a friend that I would run Debian Unstable, mixed with some packages from experimental, for a year and do some random testing on packages of interest to them. I also promised to update aggressively, so it was to be twice a day. This was my only machine, so the bet was really good, as in theory it could break very often. Well, on behalf of the Debian community, I can say that Debian hasn't had a single big breakage. Yay!

The good side: on average I had ~3000 packages installed (the number ranged from 2500 to 3500). I had, for example, xmonad, e17, gnome, cinnamon, xfce, systemd from experimental, kernels from experimental, nginx, apache, a lot of heavy packages, mixed packages from pip, npm, gems etc. So that makes it even more incredible that it stayed stable. There is no bigger kudos to the people working on Debian than when some sadist tries countless ways to break it and Debian just keeps running. I mean, I was doing my $PAID_WORK on this machine!

The bad side: there were small breakages. It seems that polkit and the systemd side of gnome were going through a lot of changes, because sometimes the system would ask for a password for every action (logout, suspend, poweroff, connect to network etc.), audio would work and then not work, would often just mute the sound by itself on every play or take it to 100% (which would blow my head off when I had earplugs in), and bluetooth is almost de facto not working in gnome (my bluetooth mouse worked without a single problem in lenny and squeeze; in wheezy it maybe had a problem once or twice, but in this year-long test it was almost useless). The system would also have random hangs from time to time.

The test: in the beginning my radeon card was too new and not supported by the FLOSS driver, so I ended up using fglrx, which caused me a lot of annoyance (no brightness control, flickering of the screen), but once the FLOSS driver got support I switched to it, and it performed more fluidly (no glitches while moving windows). As my friends knew that I have a radeon and they want to play games on their machines (I play my Steam games on the FLOSS driver), they set me the task of trying the fglrx driver every now and then. End result: there was no stable fglrx driver for almost a year; it breaks the graphical interface, so I didn't even log into a DE with it for at least 8 months, if not more. On the good side, my expeditions into fglrx were quick: install it, boot into disaster, remove it, boot into freedom. The downside seems to be that removing the fglrx driver leaves a lot of its own crap on the system (I may be mistaken, but it seems I am not).

Well, that's all for today. I think so. You can never be sure.

Categories: FLOSS Project Planets

Patrick Matthäi: More wheezy-backports work

Thu, 2015-03-26 04:01

Hello,

now you can install the following package versions from wheezy-backports:

  • apt-dater-host (Source split, 0.9.0-3+wheezy1 => 1.0.0-2~bpo70+1)
  • glusterfs (3.2.7-3+deb7u1 => 3.5.2-1~bpo70+1)
  • geoip-database (20141009-1~bpo70+1 => 20150209-1~bpo70+1)
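
As usual for backports, the packages are pulled in explicitly with apt-get's -t switch, for example:

apt-get -t wheezy-backports install apt-dater-host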

geoip-database introduces a new package geoip-database-extra, which includes the free GeoIP City and GeoIP ASNum databases.

glusterfs will get an update in a few days to fix CVE-2014-3619.

Categories: FLOSS Project Planets