Planet Debian

Planet Debian - http://planet.debian.org/
Updated: 21 hours 42 min ago

Joey Hess: late summer

Tue, 2016-08-30 21:15

With days beginning to shorten toward fall, my house is in initial power saving mode. Particularly, the internet gateway is powered off overnight. Still running electric lights until bedtime, and still using the inverter and other power without much conservation during the day.

Indeed, I had two laptops running cpu-melting keysafe benchmarks for much of today and one of them had to charge up from empty too. That's why the house power is a little low, at 11.0 volts now, despite over 30 amp-hours of power having been produced on this mostly clear day. (1 week average is 18.7 amp-hours)

September/October is the tricky time where it's easy to fall off a battery depletion cliff and be stuck digging out for a long time. So time to start dusting off the conservation habits after summer's excess.

I think this is the first time I've mentioned any details of living off grid with a bare minimum of PV capacity in over 4 years. Solar has a lot of older posts about it, and I'm going to note down the typical milestones and events over the next 8 months.

Categories: FLOSS Project Planets

Mike Gabriel: credential-sheets: User Account Credential Sheets Tool

Tue, 2016-08-30 16:05
Preface

This little piece of work has been pending on my todo list for about two years now. For our local school project "IT-Zukunft Schule" I wrote the little tool credential-sheets. It is a little Perl script that turns a series of import files (CSV format), as they have to be provided for user mass import into GOsa² (i.e. LDAP), into a series of A4 sheets with little cards on them containing initial user credential information. The upstream sources are on GitHub and I have just uploaded this little tool to Debian.

Introduction

After a mass import of user accounts (e.g. into LDAP), most site administrators have to create information sheets (or snippets) containing those new credentials (like username, password, policy of usage, etc.). With this tiny tool, providing these pieces of information to multiple users becomes really simple. Account data is taken from a CSV file and the sheets are output as PDF using easily configurable LaTeX template files.

Usage

Synopsis: credential-sheets [options] <CSV-file-1> [<CSV-file-2> [...]]

Options

The credential-sheets command accepts the following command-line options:

  • --help -- Display a help screen with all available command-line options and exit.
  • --template=<tpl-name> -- Name of the template to use.
  • --cols=<x> -- Render <x> columns per sheet.
  • --rows=<y> -- Render <y> rows per sheet.
  • --zip -- Create a ZIP file at the end.
  • --zipfilename=<zip-file-name> -- Alternative ZIP file name (default: name of parent folder).
  • --debug -- Don't remove temporary files.

CSV File Column Arrangement

The credential-sheets tool can handle any sort of column arrangement in the given CSV file(s). It expects the CSV file(s) to have column names in their first line. The given column names have to map to the VAR-<column-name> placeholders in credential-sheets's LaTeX templates. The shipped-with templates (students, teachers) can handle these column names (a short usage sketch follows the list below):
  • login -- The user account's login id (uid)
  • lastName -- The user's last name(s)
  • firstName -- The user's first name(s)
  • password -- The user's password
  • form -- The form name/ID (student template only)
  • subjects -- A list of subjects taught by a teacher (teacher template only)
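As an illustration only (the file name and all values below are made up, not taken from the package documentation), a minimal students CSV file and a matching invocation might look like this:

    login,lastName,firstName,password,form
    jdoe,Doe,Jane,s3cr3tPw,7a
    mmuster,Mustermann,Max,topSecret1,7a

    $ credential-sheets --template=students --cols=2 --rows=5 --zip students.csv

This would render the credential cards onto A4 sheets, output them as PDF and, because of --zip, bundle the result into a ZIP file.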
If you create your own templates, you can be very flexible in using your own column names and template names. Just make sure that the column names provided in the first line of the CSV file(s) match the variables used in the customized LaTeX template.

Customizations

The shipped-with credential-sheets templates are expected to be installed in /usr/share/credential-sheets/ for system-wide installations. When customizing templates, simply place a modified copy of any of those files into ~/.credential-sheets/ or /etc/credential-sheets/. For further details, see below. The credential-sheets tool uses these configuration files:
  • header.tex (LaTeX file header)
  • <tpl-name>-template.tex (where <tpl-name> is students or teachers on default installations; this is extensible by defining your own template files, see below).
  • footer.tex (LaTeX file footer)
Search paths for configuration files (in listed order):
  • $HOME/.credential-sheets/
  • ./
  • /etc/credential-sheets/
  • /usr/local/share/credential-sheets/
  • /usr/share/credential-sheets/
You can easily customize the resulting PDF files generated with this tool by placing your own template, header and footer files where appropriate.

Dependencies

This project requires the following dependencies:
  • Text::CSV Perl module
  • Archive::Zip Perl module
  • texlive-latex-base
  • texlive-fonts-extra
Copyright and License

Copyright © 2012-2016, Mike Gabriel <mike.gabriel@das-netzwerkteam.de>. Licensed under GPL-2+ (see COPYING file).
Categories: FLOSS Project Planets

Daniel Stender: My work for Debian in August

Tue, 2016-08-30 13:42

Here again is a little list of my humble off-time contributions, which I'm happy to add to the large amount of work we're completing all together each month. There is also one more "new in Debian" (meaning: "new in unstable") announcement. First, the uploads (a few of them are from July):

  • afl/2.21b-1
  • djvusmooth/0.2.17-1
  • python-bcrypt/3.1.0-1
  • python-latexcodec/1.0.3-4 (closed #830604)
  • pylint-celery/0.3-2 (closed #832826)
  • afl/2.28b-1 (closed #828178)
  • python-afl/0.5.4-1
  • vulture/0.10-1
  • afl/2.30b-1
  • prospector/0.12.2-1
  • pyinfra/0.1.1-1
  • python-afl/0.5.4-2 (fix of elinks_dump_varies_output_with_locale)
  • httpbin/0.5.0-1
  • python-afl/0.5.5-1 (closed #833675)
  • pyinfra/0.1.2-1
  • afl/2.33b-1 (experimental, build/run on llvm 3.8)
  • pylint-flask/0.3-2 (closed #835601)
  • python-djvulibre/0.8-1
  • pylint-flask/0.5-1
  • pytest-localserver/0.3.6-1
  • afl/2.33b-2
  • afl/2.33b-3

New packages:

  • keras/1.0.7-1 (initial packaging into experimental)
  • lasagne/0.1+git20160728.8b66737-1

Sponsored uploads:

  • squirrel3/3.1-4 (closed #831210)

Requested or suggested for packaging:

  • yapf: Python code formatter
  • spacy: industrial-strength natural language processing for Python
  • ralph: asset management and DCIM tool for data centers
  • pytest-cookies: Pytest plugin for testing Cookiecutter templates
  • blocks: another deep learning framework built on top of Theano
  • fuel: data provider for Blocks and Python DNN frameworks in general
New in Debian: Lasagne (deep learning framework)

Now that the mathematical expression compiler Theano is available in Debian, deep learning frameworks and toolkits built on top of it can become available within Debian, too (like Blocks, mentioned before). Theano is a general computing engine developed with a focus on machine learning and neural networks, featuring its own declarative tensor language. The toolkits built upon it vary in how much they abstract the bare features of Theano, whether they are "thick" or "thin", so to speak. The higher the abstraction, the more end-user convenience you gain, up to the level that the architectural components of neural networks become available for combination like bricks in a Lego box, while the more complicated things going on "under the hood" (like how the networks are actually implemented) are hidden. The downside is that thick abstraction layers usually make it difficult to implement novel features like custom layers or loss functions. So more experienced users and specialists might seek out the lower-abstraction toolkits, where you have to think more in terms of Theano.

I've got an initial package of Keras in experimental (1.0.7-1). It runs (only a Python 3 package is available so far) but needs some more work (e.g. building the documentation with mkdocs). Keras is a minimalistic, highly modular DNN library inspired by Torch [1]. It has a clean, rather easy API for experimenting and fast prototyping. It can also run on top of Google's TensorFlow, and we're going to have it ready for that, too.
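To give a flavour of that API (this is just a sketch, not taken from the Debian package or the Keras documentation, and the layer sizes are made up), a small classifier in Keras 1.x is typically assembled like this:

    from keras.models import Sequential
    from keras.layers import Dense

    # stack fully-connected layers; input_dim is only needed on the first one
    model = Sequential()
    model.add(Dense(64, input_dim=784, activation='relu'))
    model.add(Dense(10, activation='softmax'))

    # compiling builds the underlying Theano (or TensorFlow) computation graph
    model.compile(optimizer='sgd', loss='categorical_crossentropy')

    # training is then a single call, e.g. model.fit(X_train, Y_train)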

Lasagne follows a different approach. Like Keras and Blocks, it's a Python library to create and train multi-layered artificial neural networks in/on Theano, for applications like image recognition and classification, speech recognition, image caption generation, or other purposes like style transfers from paintings to pictures [2]. It abstracts Theano as little as possible, and could be seen more as an extension or an add-on than as an abstraction [3]. Therefore, knowledge of how things work in Theano is needed to make full use of this piece of software.
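For a rough impression of how thin that layer is, here is a minimal sketch of a small multi-layer perceptron defined with Lasagne (purely illustrative, not taken from the package or its documentation; the sizes are made up):

    import theano.tensor as T
    import lasagne

    # symbolic Theano input for a batch of flattened 28x28 images
    input_var = T.matrix('inputs')

    # layers are stacked explicitly, each taking the previous layer as input
    l_in = lasagne.layers.InputLayer(shape=(None, 784), input_var=input_var)
    l_hid = lasagne.layers.DenseLayer(l_in, num_units=100,
                                      nonlinearity=lasagne.nonlinearities.rectify)
    l_out = lasagne.layers.DenseLayer(l_hid, num_units=10,
                                      nonlinearity=lasagne.nonlinearities.softmax)

    # get_output returns a plain Theano expression, so defining a loss,
    # computing gradients and compiling a training function remain ordinary Theano code
    prediction = lasagne.layers.get_output(l_out)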

With the new Debian package (0.1+git20160728.8b66737-1) [4], the whole required software stack (the corresponding Theano package, NumPy, SciPy, a BLAS implementation, and the nvidia-cuda-toolkit and NVIDIA kernel driver to carry out computations on the GPU [5]) can be installed most conveniently by a single apt-get install python{,3}-lasagne command [6]. If wanted, the documentation package lasagne-doc can be added for offline use (no running around remote airports looking for a WiFi spot), in either the Python 2 or the Python 3 branch, or both flavours altogether [7]. While others have to spend a whole weekend gathering, compiling and installing the needed libraries, you can grab yourself a fresh cup of coffee. These are the advantages of a fully integrated system (subliminal message, as always: desktop users, switch to Linux!).

When the installation of the packages has completed, the MNIST example of Lasagne can be used as a quick check that the whole library stack works properly [8]:

$ THEANO_FLAGS=device=gpu,floatX=float32 python /usr/share/doc/python-lasagne/examples/mnist.py mlp 5
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN 5005)
Loading data...
Downloading train-images-idx3-ubyte.gz
Downloading train-labels-idx1-ubyte.gz
Downloading t10k-images-idx3-ubyte.gz
Downloading t10k-labels-idx1-ubyte.gz
Building model and compiling functions...
Starting training...
Epoch 1 of 5 took 2.488s
  training loss:        1.217167
  validation loss:      0.407390
  validation accuracy:  88.79 %
Epoch 2 of 5 took 2.460s
  training loss:        0.568058
  validation loss:      0.306875
  validation accuracy:  91.31 %

The example of how to train a neural network on the MNIST database of handwritten digits is refined (it also provides --help) and explained in detail in the Tutorial section of the documentation in /usr/share/doc/lasagne-doc/html/. Very good starting points are also the IPython notebooks that are available from the tutorials by Eben Olson [9] and by Geoffrey French at PyData London 2016 [10]. There you have Theano basics, examples of employing convolutional neural networks (CNN) and recurrent neural networks (RNN) for a range of different purposes, how to use pre-trained networks for image recognition, and more.

  1. For a quick comparison of Keras and Lasagne with other toolkits, see Alex Rubinsteyn's PyData NYC 2015 presentation on using LSTM (long short term memory) networks on varying length sequence data like Grimm's fairy tales (https://www.youtube.com/watch?v=E92jDCmJNek 27:30 sq.) 

  2. https://github.com/Lasagne/Recipes/tree/master/examples/styletransfer 

  3. Great introduction to Theano and Lasagne by Eben Olson on the PyData NYC 2015: https://www.youtube.com/watch?v=dtGhSE1PFh0 

  4. The package is currently "freelancing" in collab-maint; setting up a deep learning packaging team within Debian is at the stage of discussion. 

  5. Only available for amd64 and ppc64el. 

  6. At present you would need "testing" as a package source in /etc/apt/sources.list to install it from the archive (I have run that for years, but whether Debian testing can be recommended as a production system is to be discussed elsewhere); it's coming up for Debian 9. The cuda-toolkit and pycuda are in the non-free section of the archive, thus non-free (mostly used in combination with contrib) must be added alongside main. Plus, the CUDA packages are a mere suggestion of the Theano packages (to keep Theano in main), so --install-suggests is needed to pull them in automatically with the same command, or they must be given explicitly. 

  7. For dealing with Theano in Debian, see this previous blog posting 

  8. As suggested in the guide From Zero to Lasagne on Ubuntu 14.04. cuDNN isn't available as an official Debian package yet, but can be downloaded as a .deb package after registration at https://developer.nvidia.com/cudnn. It integrates well out of the box. 

  9. https://github.com/ebenolson/pydata2015 

  10. https://github.com/Britefury/deep-learning-tutorial-pydata2016, video: https://www.youtube.com/watch?v=DlNR1MrK4qE 

Categories: FLOSS Project Planets

Christoph Egger: DANE and DNSSEC Monitoring

Tue, 2016-08-30 13:11

At this year's FrOSCon I repeated my presentation on DNSSEC. In the audience, someone pointed out the lack of easily available monitoring plugins for a DANE and DNSSEC infrastructure. As I already had some personal tools around and some spare time to burn, I've just started a repository with some useful tools. It's available on my website and has mirrors on GitLab and GitHub. I intend to keep this repository up to date with my personal requirements (which also means adding an XMPP check soon) and am happy to take any contributions (either by mail or as "pull requests" on one of the two mirrors). It currently has smtp (both ssmtp and starttls) and https support, as well as support for checking valid DNSSEC configuration of a zone.

While working on it, it turned out that some things can be complicated. My language of choice was python3 (if only because the ssl library has improved a lot since 2.7), however ldns and unbound in Debian lack python3 support in their bindings. This seems fixable, as the source in Debian is buildable and usable with python3, so it just needs packaging adjustments. Funnily enough, the ldns module (which is only needed for check_dnssec) is currently buggy in Debian for both python2 and python3, and ldns' python3 support is somewhat lacking, so I spent several hours hunting SWIG problems.
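For illustration only, and not one of the repository's check scripts: a very stripped-down https DANE check for the common "3 0 1" TLSA case could look roughly like the sketch below. It uses the dnspython module instead of the ldns or unbound bindings discussed above, and it skips the DNSSEC validation that a real check must also perform.

    import binascii
    import hashlib
    import ssl

    import dns.resolver  # python3-dnspython

    def check_dane_https(host, port=443):
        # hash the server certificate (selector 0 = full cert, matching type 1 = SHA-256)
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        cert_sha256 = hashlib.sha256(der).hexdigest()

        # fetch the TLSA records published for this service
        # (a real check must additionally insist on a validated, DNSSEC-signed answer)
        answers = dns.resolver.query('_{0}._tcp.{1}'.format(port, host), 'TLSA')
        for rr in answers:
            # only handle usage 3 (DANE-EE), selector 0, matching type 1
            if (rr.usage, rr.selector, rr.mtype) == (3, 0, 1):
                if binascii.hexlify(rr.cert).decode() == cert_sha256:
                    return True
        return False

    # e.g. check_dane_https('www.example.org')  # placeholder host name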

Categories: FLOSS Project Planets

Rhonda D'Vine: Thomas D

Tue, 2016-08-30 12:12

It's not often that an artist touches you deeply, but Thomas D managed to do so to the point that I am (only half) jokingly saying that if there were a church of Thomas D I would absolutely join it. His lyrics always stood out for me in the context of the band through which I found out about him, and the way he lives his life is definitely outstanding. And additionally there are these special songs that give so much and share a lot. I feel sorry for the people who don't understand German and so can't appreciate him.

Here are three songs that I suggest you listen to closely:

  • Fluss: This song gave me a lot of strength in a difficult time of my life. And it still works wonders to get my ass up from the floor again when I feel down.
  • Gebet an den Planeten: This song gives me shivers. Let the lyrics touch you. And take the time to think about them.
  • An alle Hinterbliebenen: This song might be a bit difficult to deal with. It's about loss and how to deal with suffering.

Like always, enjoy!


Categories: FLOSS Project Planets

Joachim Breitner: Explicit vertical alignment in Haskell

Tue, 2016-08-30 09:35

Chris Done’s automatic Haskell formatter hindent has been released in a new version, and is getting quite a bit of deserved attention. He is polling Haskell programmers on whether two or four spaces are the right indentation. But that is just cosmetics…

I am in principle very much in favor of automatic formatting, and I hope that a tool like hindent will eventually be better at formatting code than a human.

But it currently is not there yet. Code is literature meant to be read, and good code goes to great lengths to be easily readable, and formatting can carry semantic information.

The Haskell syntax was (at least I get that impression) designed to allow the authors to write nice-looking, easy-to-understand code. One important tool here is vertical alignment of corresponding concepts on different lines. Compare

maze :: Integer -> Integer -> Integer
maze x y | abs x > 4  || abs y > 4  = 0
         | abs x == 4 || abs y == 4 = 1
         | x == 2     && y <= 0     = 1
         | x == 3     && y <= 0     = 3
         | x >= -2    && y == 0     = 4
         | otherwise                = 2

with

maze :: Integer -> Integer -> Integer
maze x y
  | abs x > 4 || abs y > 4 = 0
  | abs x == 4 || abs y == 4 = 1
  | x == 2 && y <= 0 = 1
  | x == 3 && y <= 0 = 3
  | x >= -2 && y == 0 = 4
  | otherwise = 2

The former is a quick to grasp specification, the latter (the output of hindent at the moment) is a desert of numbers and operators.

I see two ways forward:

  • Tools like hindent get improved to the point that they are able to detect such patterns and indent them properly (which would be great, but very tricky, and probably never complete), or
  • We give the user a way to indicate intentional alignment in a non-obtrusive way that gets detected and preserved by the tool.

What could such ways be?

  • For guards, it could simply detect that within one function definition there are multiple | in the same column, and keep them aligned.
  • More generally, one could take the approach that lhs2TeX takes (which, IMHO, with careful input, a proportional font and the great polytable LaTeX backend, produces the most pleasing code listings). There, two spaces or more indicate an alignment point, and if two such alignment points are in the same column, their alignment is preserved – even if there are lines in between!

    With the latter approach, the code up there would be written

    maze :: Integer -> Integer -> Integer
    maze x y | abs x > 4   || abs y > 4   = 0
             | abs x == 4  || abs y == 4  = 1
             | x == 2      && y <= 0      = 1
             | x == 3      && y <= 0      = 3
             | x >= -2     && y == 0      = 4
             | otherwise                  = 2

    And now the intended alignment is explicit.

(This post is cross-posted on reddit.)

Categories: FLOSS Project Planets

Petter Reinholdtsen: First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public

Tue, 2016-08-30 04:10

In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administrate a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it on the Get the Debian Administrator's Handbook page (under Other languages). The first eight chapters have a first draft translation, and we are working on proofreading the content. If you want to help out, please start contributing using the hosted Weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update Weblate if you find errors.

Our goal is still to make the Norwegian book available on paper as well as in electronic form.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RProtoBuf 0.4.5: now with protobuf v2 and v3!

Mon, 2016-08-29 22:55

A few short weeks after the 0.4.4 release of RProtoBuf, we are happy to announce a new version 0.4.5 which appeared on CRAN earlier today.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release brings support for the recently released 'version 3' of the Protocol Buffers standard, used e.g. by the (very exciting) gRPC project (which was just released as version 1.0). RProtoBuf continues to support 'version 2' but now also cleanly supports 'version 3'.

Changes in RProtoBuf version 0.4.5 (2016-08-29)
  • Support for version 3 of the Protocol Buffers API

  • Added 'syntax = "proto2";' to all proto files (PR #17)

  • Updated Travis CI script to test against both versions 2 and 3 using custom-built .deb packages of version 3 (PR #16)

  • Improved build system with support for custom CXXFLAGS (Craig Radcliffe in PR #15)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

David Moreno: Webhook Setup with Facebook::Messenger::Bot

Mon, 2016-08-29 14:49

The documentation for the Facebook Messenger API points out how to set up your initial bot webhook. I just committed a quick patch that makes it very easy to set up a quick script to get it done, using the unreleased and still-in-progress Perl module Facebook::Messenger::Bot:

use Facebook::Messenger::Bot;

use constant VERIFY_TOKEN => 'imsosecret';

my $bot = Facebook::Messenger::Bot->new(); # no config specified!

$bot->expect_verify_token( VERIFY_TOKEN );

$bot->spin();

This should get you sorted. What endpoint would that be, though? Well, that depends on how you're giving Facebook access to your Plack .psgi application.

Categories: FLOSS Project Planets

Michal Čihař: motranslator 1.1

Mon, 2016-08-29 12:00

Four months after the 1.0 release, motranslator 1.1 is out. If you happen to use it for untrusted data, this might as well be called a security release, though that is still not a good idea until we remove the use of eval() to evaluate the plural formula.

Full list of changes:

  • Improved handling of corrupted mo files
  • Minor performance improvements
  • Stricter validation of plural expression

motranslator is a translation library used in current phpMyAdmin master (upcoming 4.7.0), with a focus on speed and memory usage. It uses Gettext MO files to load the translations. It also comes with a test suite (100% coverage) and basic documentation.

The recommended way to install it is using Composer from the Packagist repository:

composer require phpmyadmin/motranslator

The Debian package will probably be available around the time phpMyAdmin 4.7.0 is out, but if you need it earlier, just let me know.

Filed under: Debian English phpMyAdmin | 0 comments

Categories: FLOSS Project Planets

Zlatan Todorić: Support open source motion comic

Mon, 2016-08-29 09:25

There is an ongoing campaign for a motion comic. It will be done entirely with FLOSS tools (Blender, Krita, GNU/Linux) and, besides that, it really looks great (and no, it is not only for the kids!). Please support this effort if you can, because it also shows the power of Free Software tools. Everything will be released under a Creative Commons Attribution-ShareAlike license, together with all sources.

Categories: FLOSS Project Planets

Michal Čihař: Improving phpMyAdmin Docker container

Mon, 2016-08-29 04:00

Since I created the phpMyAdmin container for Docker, I've always felt strange about using PHP's built-in web server there. It really made it a poor choice for any production setup and was probably causing a lot of the problems users saw with this container. During the weekend, I changed it to use a more complex setup with Supervisor, nginx and PHP-FPM.

As building this container was one of my first experiences with Docker (together with the Weblate container), it was not as straightforward as I'd hoped for, but in the end it seems to be working just fine. While touching the code, I've also improved testing of the Docker container to test all supported setups and to report better in case of test failures.

The nice side effect of this is that the PHP code is no longer being executed as root in the container, so that should make it more sane for production use as well (honestly, I never liked the approach of running almost everything as root in Docker containers).

Filed under: Debian English phpMyAdmin | 2 comments

Categories: FLOSS Project Planets

Gergely Nagy: Recruitment mistakes: part 3

Mon, 2016-08-29 04:00

It has been a while since I was last contacted by a recruiter, and the last few ones were fairly decent conversations, where they made an effort to research me first, and even if they did not get everything right, they still listened, and we had a productive talk. But four days ago, I had another recruiter reach out to me, from a company I know oh so well, one I ranted about before: Google. Apparently, their recruiters still do carpet-bombing style outreach. My first thought was "what took them so long?" - it has been five years since my last contact with a Google recruiter. I almost started missing them. Almost. To think that Google is now powerful enough to read my mind is scary. Yet, I believe, this is not the case; rather, it's just another embarrassing mistake.

To make my case, let me quote the full e-mail I was sent, with the name of the sender redacted, and my comments - which I'm also sending to the recruiter:

Hi Gergely,

Hi!

How are you? Hope you're well.

Thank you, I'm fine, just back from vacation, and I was thrilled to read your e-mail. Although I did find it surprising too, and considering past events, I at first thought it to be spam.

My name is XXX and I am recruiting for the Google Engineering team. I have just found your details on Github...

I'm happy that you found me through GitHub, but I'm curious why you mailed me at my debian.org address then? That address is not on my GitHub profile, and even though I have some code signed with that address, that's not what I use normally. My GitHub profile also links to a page I created especially for recruiters, which I don't think you have read - but more on that below.

...and your experience with software development combined with your open source contributions is particularly relevant to Google

Which part of my recent contributions are relevant to Google? For the past few months, the vast majority (over 90%) of my open source work has been on keyboard firmware. If Google is developing a keyboard, then this may be relevant, otherwise, I find it doubtful.

Some of my past OSS contributions may be more relevant, but that's in the past, and it would take some digging to see those from GitHub. And if you did that kind of digging, you would have found the page on my site for recruiters, and would not have e-mailed me at my Debian address, either.

We are always interested in talking to top engineers with your mix of skills and I was wondering if you are at all open to exploring roles with Google in EMEA.

To make things short, my stance is the same as it was five years ago, when I wrote - and I quote - "So, here's a public request: do not email me ever again about job opportunities within Google. I do not wish to work for Google. Not now, not tomorrow, not ever."

This still stands. Please do not ever e-mail me about job opportunities within Google. I do not wish to work for Google, not now, not tomorrow, not five years from now, not ever. This will not change. Many things may change, this however, will not.

But even if I ignore this, let me ask you: which mix of skills exactly are you interested in? Keyboard firmware hacking, mixed with Emacs Lisp, some Clojure, and hacking on Hy (which, if you have explored my GitHub profile, you will surely know about) from time to time? Or is it my Debian Developer hat that got your interest? If so, why didn't you say so? Not that it would change anything, but I'm curious.

Is it my participation in 24pullrequests? Or my participation in GSoC in years past?

Nevertheless, the list of conditions I mentioned above applies. Is Google able to fulfill all my requirements, and the preferences too? Last time I heard, working for Google required one to relocate, which, as I clearly said on that page, I'm not willing to do.

The teams I recruit for are responsible for Google's planet-scale systems. Just to give you more of an idea, some of the projects we work on that might be of interest to you would include:

  • MapReduce
  • App Engine
  • Mesa
  • Maglev

And how would these be interesting to me, considering my recent OSS work? (Hint: none of them interest me, not a bit. Ok perhaps MapReduce, a little.)

I think you could be a great fit here, where you can develop a great career and at the same time you will be part of the most mission critical team, designing and developing systems which run Google Search, Gmail, YouTube, Google+ and many more as you can imagine.

My career is already developing fine, thank you, I do not need Google to do that. I am already working on things I consider important and interesting; if I wanted to change, I would. But I certainly would not consider a company which I had asked numerous times NOT to contact me about opportunities. If you can't respect this wish, and forget about it in a mere five years, why would I trust you to keep any other promises you make now?

Thank you so much for your time and I look forward to hearing from you.

I wish you had spent as much time researching me - or even half that - as I have spent replying to you. I suppose this is not the reply you were expecting, or looking for, but this is the only one I'll ever give to anyone from Google.

And here, at the end, if you read this far, I'm asking you, and everyone at Google, to never contact me about job opportunities. I will not work for Google. Not now, not tomorrow, not five years from now, not ever. Please do not e-mail me again, and do not reply to this e-mail either. I'm interested in neither apologies nor promises that you won't contact me - just simply don't do it.

Thank you.

Categories: FLOSS Project Planets

Russell Coker: Monitoring of Monitoring

Sun, 2016-08-28 22:23

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier, SATA is designed for hot-swap and even systems that don’t support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who’s not a sysadmin SATA provides more benefits over IDE than for some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

Doing it Properly

Ever since RAID was introduced there has been no excuse for having a single disk on its own with important data. Linux Software RAID didn't support online rebuild when 10G was a large disk. But since the late 90s it has worked well and there's no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.

Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1 and both disks had errors on one hot summer. I wrote a program that’s like ddrescue but which would read from the second disk if the first gave a read error and ended up not losing any important data AFAIK. BTRFS has some potential benefits for recovering from such situations but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user monitoring it themselves. Since I started using mon (which is very easy to set up) I've had it notify me of some problems with my laptop that I wouldn't have otherwise noticed. I think that ideally for desktop systems you should have monitoring of disk space, temperature, and certain critical daemons that need to be running but which the user wouldn't immediately notice if they crashed (such as cron and syslogd).

There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there's a problem with the hard drive etc, and also if the system fails to send an "I'm OK" message for a certain period of time.

I don’t know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening it’s significantly reducing the value derived from monitoring.

Categories: FLOSS Project Planets

Joey Hess: hiking the Roan

Sun, 2016-08-28 22:10

Three moments from earlier this week..

Sprawled under a tree after three hours of hiking with a heavy, water-filled pack, I look past my feet at six ranges of mountains behind mountains behind flowers.

From my campsite, I can see the rest of the path of the Appalachian Trail across the Roan balds, to Big Hump mountain. It seems close enough to touch, but not this trip. Good to have a goal.

Near sunset, land and sky merge as the mist moves in.

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible builds: week 70 in Stretch cycle

Sun, 2016-08-28 19:01

What happened in the Reproducible Builds effort between Sunday August 21 and Saturday August 27 2016:

GSoC and Outreachy updates

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

10 package reviews have been added and 6 have been updated this week, adding to our knowledge about identified issues.

A large number of issue types have been updated.

Weekly QA work

29 FTBFS bugs have been reported by:

  • Chris Lamb (27)
  • Daniel Stender (1)
  • Santiago Vila (1)
diffoscope development

Holger also created another test job for diffoscope on jenkins.debian.net, so that now all commits to branches other than master are also being tested.

strip-nondeterminism development

strip-nondeterminism 0.023-1 was uploaded by Chris Lamb:

  * Support Android .apk files with the JAR normalizer.
  * handlers/png.pm: Drop unused Archive::Zip import.
  * Remove hyphen from non-determinism and non-deterministic.
  * javaproperties.pm: Match more styles of .properties and loosen filename matching.
  * Improve tests:
    - Make fixture runner generic to all normalizer types.
    - Replace (single) pearregistry test with a fixture.
    - Set a canonical time for fixture tests.
    - Add gzip testcase fixture.
    - Replace t/javadoc.t with fixture.
    - Replace t/ar.t with a fixture.
    - t/javaproperties: move pom.properties and version.properties tests to fixtures.
    - t/fixtures.t: move to using subtests.
    - t/fixtures.t: Explicitly test that we can find a normalizer.
    - t/fixtures.t: Don't run normalizer if we didn't find one.

strip-nondeterminism 0.023-2 uploaded by Mattia Rizzolo to allow stderr in autopkgtest.

disorderfs development

tests.reproducible-builds.org

Debian:

  • Since we introduced build path variations for unstable and experimental last week, our IRC channel has been flooded with notifications about packages becoming unreproducible - and you might have noticed some of your packages having become unreproducible recently too. To make our IRC channel more bearable again, notifications for status changes on i386 and armhf have been disabled, so that now we only get notifications for status changes in unstable. (h01ger)
  • Link to jenkins documentation in every page (h01ger)
  • The "pre build" check, whether a node is up, now also detects if a node has a read-only filesystem, which sometimes happens on some broken armhf nodes. (h01ger)
  • To further improve monitoring of those armhf nodes, work to make them send mails (through an ISP which is blocking outgoing mails) has been started and should be finished next week. (h01ger)
  • As one of the armhf nodes (opi2a) is acting strange, a workaround has been added to make its deployment work despite that. (h01ger)
  • Collapse whitespace to avoid ugly "trailing underlines" in hyperlinks for diffoscope results and pkg sets (Chris Lamb)
  • Give details HTML elements "cursor: pointer" CSS property to highlight they are clickable. (Chris Lamb)
  • The db connection timeout has been raised to a minute when using SQLAlchemy too. (h01ger).

Somewhat related to reproducible builds, there has been a first Debian jenkins team maintenance meeting on the #debian-qa IRC channel, to discuss current issues with the setup and to start the work of migrating jenkins.debian.net to jenkins.debian.org. The next meeting will take place on September 28th 2016 at 19 UTC.

Misc.

This week's edition was written by Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Categories: FLOSS Project Planets

Martín Ferrari: On speaking at community conferences

Sun, 2016-08-28 16:06

Many people reading this have already suffered through me talking to them about Prometheus, whether in personal conversation or in the talks I gave at DebConf15 in Heidelberg, the Debian SunCamp in Lloret de Mar, BRMlab in Prague, and even in a talk on a different topic at the RABS in Cluj-Napoca.

Since their public announcement, I have been trying to support the project in the ways I could: by packaging it for Debian, and by spreading the word.

Last week the first ever Prometheus conference took place, so this time I did the opposite thing and spoke about Debian to Prometheus people: I gave a 5-minute lightning talk on Debian support for Prometheus.

What blew me away was the response I got: after this tiny non-talk that I prepared in 30 minutes, many people stopped me later to thank me for packaging Prometheus, and for Debian in general. They told me they were using my packages, gave me feedback, and even some RFPs!

At the post-conference beers, I had quite a few interesting discussions about Debian, release processes, library versioning, and culture clashes with upstream. I was expecting some Debian-bashing (we are old-fashioned, slow, etc.), instead I had intelligent and enriching conversations.

To me, this reinforces once again my support of and commitment to community conferences, where nobody is a VIP and everybody has something to share. It also taught me the value of intersecting different groups, even when there seems to be little in common.


Categories: FLOSS Project Planets

Dirk Eddelbuettel: rfoaas 1.1.0

Sun, 2016-08-28 10:08

FOAAS upstream came out with release 1.1.0 earlier last week. Tyler Hunt was kind enough to provide an almost immediate pull request adding support for the extended capabilities. Following yesterday's upload, we now have version 1.1.0 of rfoaas on CRAN.

It brings six more accessors: maybe(), blackadder(), horse(), deraadt(), problem(), cocksplat() and too().

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Craig Sanders: fakecloud

Sun, 2016-08-28 01:05

I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by debian, ubuntu, etc) without having to install and manage a complete, full-featured cloud environment like openstack.

I hacked up something similar several years ago when I was regularly building VM images at home for openstack at work, with just plain-text files served by apache, but that had pretty-much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day’s hacking with a new web framework.

https://github.com/craig-sanders/fakecloud
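The underlying idea is simple enough to sketch in a few lines. The following is not fakecloud itself (which is written in Perl with Mojolicious), just a hypothetical Python illustration of an EC2-style meta-data endpoint that hands out per-VM user-data keyed on the IP address of the requesting host:

    import http.server

    # hypothetical per-VM configuration, keyed by the IP of the requesting host
    USER_DATA = {
        '192.0.2.10': '#cloud-config\nhostname: web1\n',
        '192.0.2.11': '#cloud-config\nhostname: db1\n',
    }

    class MetaDataHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            client_ip = self.client_address[0]
            if self.path == '/latest/meta-data/instance-id':
                body = 'i-' + client_ip.replace('.', '-')
            elif self.path == '/latest/user-data':
                body = USER_DATA.get(client_ip, '')
            else:
                self.send_error(404)
                return
            data = body.encode()
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain')
            self.send_header('Content-Length', str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == '__main__':
        # cloud-init's EC2 data source looks for http://169.254.169.254/ by default,
        # so either listen there or point the image at another meta-data URL
        http.server.HTTPServer(('', 8000), MetaDataHandler).serve_forever()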

fakecloud is a post from: Errata

Categories: FLOSS Project Planets

Hideki Yamane: please help to maintain net-snmp package

Sat, 2016-08-27 23:30
I have stepped down from the net-snmp package maintainer team, and want someone else to maintain it.
Categories: FLOSS Project Planets