FLOSS Project Planets

SystemSeed: Apple Pay on Concern.net

Planet Drupal - Mon, 2017-04-03 03:35

We are very happy to announce that, after several months of combined work with Concern Worldwide and Apple, we now support Apple Pay via desktop, iPhone, iPad and Apple Watch at Concern.net!

This was a fascinating endeavour for our team with many requirements spanning both hardware and software.

Concern.net donation forms now let existing donors make a donation, or increase their regular donations, by following a marketing link, pressing a single button on the form and authenticating the payment with their fingerprint.

There is a famous saying that “The best GUI is no GUI.” That ideal is now a tangible possibility.

Stay tuned for further developments as to how you can donate without fuss to one of the world’s leading charities.

Our many thanks to all the teams at SystemSeed, Concern and Apple that helped facilitate this work.

Please consider making the world a brighter place for those in need by going to Concern.net and donating via Apple Pay or any other method today.

Categories: FLOSS Project Planets

Sean Whitton: A different reason there are so few tenure-track jobs in philosophy

Planet Debian - Sun, 2017-04-02 21:05

Recently I heard a different reason suggested as to why there are fewer and fewer tenure-track jobs in philosophy. University administrators are taking control of the tenure review process; previously departments made decisions and the administrators rubber-stamped them. The result of this is that it is easier to get tenure. This is because university administrators grant tenure based on quantitatively measurable achievements, rather than a qualitative assessment of the candidate qua philosopher. If a department thought that someone shouldn’t get tenure, the administration might turn around and say that they are going to grant it because the candidate has fulfilled such-and-such requirements.

Since it is easier to get tenure, hiring someone at the assistant professor level is much riskier for a philosophy department: they have to assume the candidate will get tenure. So the pre-tenure phase is no longer a probationary period. That is being pushed onto post-docs and graduate students. This results in the intellectual maturity of published work going down.

There are various assumptions in the above that could be questioned, but what’s interesting is that it takes a lot of the blame for the current situation off the shoulders of faculty members (there have been accusations that they are not doing enough). If tenure-track hires are a bigger risk to the quality of the academic philosophers who end up with permanent jobs, it is good that departments are averse to that risk.

Categories: FLOSS Project Planets

Wingware News: Wing Python IDE 6.0.4: April 3, 2017

Planet Python - Sun, 2017-04-02 21:00
This release fixes remote development to systems with Python 3, addresses problems seen when switching between remote projects and when remote host configurations are missing or invalid, fixes text drag-and-drop, solves problems with analysis of some type annotations, and makes about 30 other minor improvements.
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RApiDatetime 0.0.3

Planet Debian - Sun, 2017-04-02 20:55

A brown bag bug fix release 0.0.3 of RApiDatetime is now on CRAN.

RApiDatetime provides six entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. These six functions are all fairly essential and useful, but not one of them was previously exported by R.

I left two undefined variables in calls in the exported header file; this only became an issue once I actually tried accessing the API from another package, as I am now doing in anytime.

Changes in RApiDatetime version 0.0.3 (2017-04-02)
  • Correct two simple copy-and-paste errors in RApiDatetime.h

  • Also enable registration in useDynLib, and explicitly export known and documented R access functions provided for testing

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the rapidatetime page.

For questions or comments please use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

PyBites: Code Challenge 13 - Highest Rated Movie Directors

Planet Python - Sun, 2017-04-02 18:25

Hi Pythonistas, a new week, a new 'bite' of Python coding! After last week's tic-tac-toe game, we'd like to sharpen your data analysis skills this week by parsing a movie data set in search of the highest rated directors. Enjoy, and we'll review solutions at the end of the week.

Categories: FLOSS Project Planets

Carl Chenet: DiscountBot, Your Bot To Create And Send A Discount To Customers

Planet Python - Sun, 2017-04-02 18:00

You are the founder of a SaaS product, and lots of your users create an account but don’t buy the service? What about sending them a 20% discount after a while, maybe one week, in order to motivate them to subscribe? You could do nothing and dream they’ll come back. Or you could try one last call… and convert some of them!

Some context

The original idea for this bot came from a tweet by Peter Levels, a.k.a. @levelsio, serial entrepreneur and founder of NomadList, RemoteOK and lots of other websites dedicated to nomad and remote workers. Many thanks to him for sharing his business ideas on a regular basis.

Details

DiscountBot, a free software tool written in Python, connects to your database (any type supported by the SQLAlchemy library), gets the list of customers to send a discount to (using an SQL query you provide), creates a discount (using another SQL query you also provide) and sends the discount to each customer by email.

The Project: DiscountBot

Example configuration of DiscountBot

Here is the example of a configuration of DiscountBot you could use to automate sending discounts for your SaaS product:

[discount]
code=MYBIGCOMPANY-WELCOME
duration=2

[database]
db_connector=mysql+mysqlconnector
db_name=customers
db_user=customers_app
db_pass=V3rYS3cR3tP4sSw0rd
sql_get_users=select customerinfos.email from customerinfos where customerinfos.id not in (select customer_id from payments)
sql_create_discount=insert into discounts (code,percentage,start,end,status,created) values ('{code}',20.00,'{start}','{end}',1,'{created}')

[email]
email_template_path=/usr/share/discountbot/mybigcompany.discount.template.txt
email_subject=Your Incredible Discount!
email_from=sales@mybigcompany.com

[sqlite]
sqlite_path=/var/lib/discountbot/discountbot.db

Let’s explain a bit:

  • the [discount] section needs code, the prefix of your discount code (a suffix will be generated from the current time), and duration, the time-to-live of your discount offer.
  • the [database] section needs db_connector, db_name, db_user and db_pass to connect to your database. It also needs two complete and valid SQL queries:
    • sql_get_users provides a single column with the email addresses of all the customers to send a discount to
    • sql_create_discount creates a discount in your database. You can use placeholders for dynamic values (see the sketch after this list), and read the official DiscountBot documentation about them.
  • the [email] section defines the values needed to send the discount emails. The email_template_path provides the path to the HTML template of your discount email. The email_subject value is the subject of your email. Customize the sender email address with email_from.
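As a rough illustration of how those placeholders could be filled in, here is a minimal sketch in Python. This is not DiscountBot's actual code: the suffix scheme, date format and helper name are assumptions, and a real implementation should prefer parameterized queries over string formatting.

    # Illustrative sketch only, not DiscountBot's implementation: fill the
    # {code}/{start}/{end}/{created} placeholders of sql_create_discount.
    import datetime

    def build_discount_sql(template, prefix, duration_days):
        now = datetime.datetime.utcnow()
        values = {
            'code': '%s-%s' % (prefix, now.strftime('%Y%m%d%H%M%S')),  # prefix + time-based suffix
            'start': now.strftime('%Y-%m-%d %H:%M:%S'),
            'end': (now + datetime.timedelta(days=duration_days)).strftime('%Y-%m-%d %H:%M:%S'),
            'created': now.strftime('%Y-%m-%d %H:%M:%S'),
        }
        return template.format(**values)

    template = ("insert into discounts (code,percentage,start,end,status,created) "
                "values ('{code}',20.00,'{start}','{end}',1,'{created}')")
    print(build_discount_sql(template, 'MYBIGCOMPANY-WELCOME', 2))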
Less Talk, More Walk

Ok, ok jeeeeeez, here is a full example of how to set up and configure DiscountBot on your server.

The first step is to install DiscountBot from PyPI

# pip3 install discountbot[mysql]   # if you use MySQL; PostgreSQL is also supported, otherwise install the database driver manually

Create a dedicated user in order to safely manipulate your data

# adduser --home /var/lib/discountbot discountbot

Create a /etc/discountbot directory to store your DiscountBot configuration, and set the correct permissions on the directory and file

# mkdir -p /etc/discountbot
# vim /etc/discountbot/discountbot.ini
# chown -R discountbot:root /etc/discountbot/
# chmod 640 /etc/discountbot/discountbot.ini

Create a /usr/share/discountbot directory to store the HTML template

# mkdir -p /usr/share/discountbot
# vim /usr/share/discountbot/mybigcompany.discount.template.txt
# chown -R discountbot:root /usr/share/discountbot
# chmod 640 /usr/share/discountbot/mybigcompany.discount.template.txt

Now let's configure a cron job to execute DiscountBot each weekday at 10:30 am

30 10 * * 1-5 discountbot discountbot

If you have recently sent some discounts to your customers manually, you should populate the DiscountBot database with those existing entries, so that the first launch of DiscountBot does not resend discounts to every customer returned by your SQL query

# su - discountbot
$ discountbot -p

Conclusion

DiscountBot is still a really young project and will improve as bugs are discovered and your feedback comes in.

Categories: FLOSS Project Planets

Ilian Iliev: Django and working with large database tables

Planet Python - Sun, 2017-04-02 15:39

The slides from my presentation for the Django Stockholm Meetup group. They contain a small comparison between MySQL and PostgreSQL in terms of performance when modifying table structures.

Django and working with large database tables from Ilian Iliev
Categories: FLOSS Project Planets

Weekly Python Chat: Learning Programming with Others

Planet Python - Sun, 2017-04-02 13:00

Special guest Kim Schlesinger will join us to talk about learning programming with others.

Whether you're working with a mentor or studying with a similarly-skilled peer, learning collaboratively can be more effective and more fun.

Categories: FLOSS Project Planets

Eugene V. Lyubimkin: experiment: optionalising shared libraries without dl_open via generating stub libraries

Planet Debian - Sun, 2017-04-02 10:11
Reading the discussion on debian-devel about shared library dependencies in C/C++, I wondered if it's possible to link with a shared library without having an absolute dependency on it.

The issue often comes up when one has a piece of software which could use extra functionality that a shared library (let's call it biglib) provides, but the developer/maintainer doesn't want to force all users to install it. The common solution seems to be either:

- defining a plugin interface and a bridge library (better detailed here);

- using dlopen/dlsym to load each library symbol by hand (good luck doing that for high-level C++ libraries).


Both solutions involve dlopen at one stage or another to avoid linking with biglib, since if you link with it, the application loader will refuse to load the application unless biglib and all of its dependencies are present. But what if the application is prepared to fall back at run time, and just needs a way to be able to start without biglib?

I went ahead and checked what happens if there is a stub library providing all the symbols which the application uses (directly or indirectly) out of biglib. It turns out that yes, at least for simple cases it seems to work. I've published my experimental stub generator at https://github.com/jackyf/so-stub .
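To make the idea more concrete, here is a rough sketch of what a stub generator could do. This is not so-stub itself: the helper and symbol names are made up, and a real generator also has to deal with data symbols, symbol versions and extracting the actual list of needed symbols (e.g. with nm -D or readelf).

    # Illustrative sketch only: build a stub shared library that defines the
    # listed function symbols as empty no-ops, so the dynamic loader is
    # satisfied even when the real biglib is absent.
    import subprocess
    import tempfile

    def build_stub(symbols, soname, output):
        source = "\n".join("void %s(void) {}" % sym for sym in symbols)
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as src:
            src.write(source)
            src_path = src.name
        # compile the stubs into a shared object carrying the same soname
        # as the real library
        subprocess.check_call(["gcc", "-shared", "-fPIC",
                               "-Wl,-soname," + soname,
                               "-o", output, src_path])

    # hypothetical symbol list and library name, for illustration only
    build_stub(["biglib_init", "biglib_render"], "libbig.so.1", "libbig.so.1")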

For practical use, we'd also need a way to tell the loader that a stub library has to be loaded if the real library is not found. While there are many ways to instruct the loader to load something instead of system libraries (LD_PRELOAD, LD_LIBRARY_PATH, runpath, rpath), I found no way to load something if a system library was not found. In other words, I'd like to say "dear linker/loader, when you're loading myapp: if you didn't find libfoo1.so.4 in any of the configured system directories, also try /usr/lib/myapp/stubs/ (which would contain the stubbed libfoo1.so.4)". Apparently, nothing like "rpath-fallback" exists right now. I'm considering creating a feature request for such an "rpath-fallback" if the "optional libraries via stubs" idea gets wider support.
Categories: FLOSS Project Planets

nano @ Savannah: GNU nano 2.8.0 was released

GNU Planet! - Sun, 2017-04-02 06:54

This version of nano changes the way softwrap works: the <Up> and <Down> cursor keys now move through visual rows instead of jumping between logical lines. And nano now makes use of gnulib, to get rid of some custom shims and to avoid the need for new ones. The use of gnulib has increased the size of nano's tarball by some thirty percent, but... in lines of code this is the smallest nano since 2.2.0.

Categories: FLOSS Project Planets

Micro-services are only half the picture

Planet KDE - Sun, 2017-04-02 06:30

Hello,

today I want to share my thoughts about the general hype around micro-services.

The first objection one can raise against this approach is that it does not really solve the problem of keeping code maintainable, because the same principles can be found in a lot of other paradigms that did not prevent bad software from being produced.

I believe that the real turning point of micro-services is that they are compatible with the devops philosophy.

With the combination of micro-services and devops you get software that has reasonably well-defined boundaries and whose management is assigned to the people who developed it.

This combination avoids development shortcuts that make management more difficult (maintenance is a big deal).

It also addresses one of the great open problems in IT: documentation.

It is true that it cannot force us to produce documentation, but at least the people who run the code are exactly the people who produced it, and I would guess that whoever writes the code knows how it is supposed to work.

It is now possible to build applications with performance and functionality that were unimaginable before, thanks to the fact that each component can be built, evolved and deployed with the best life cycle we can come up with, without constraining the entire ecosystem.

Thanks for reading, see you next time

Categories: FLOSS Project Planets

PyBites: Twitter digest 2017 week 13

Planet Python - Sun, 2017-04-02 06:29

Every weekend we share a curated list of 15 cool things (mostly Python) that we found / tweeted throughout the week.

Categories: FLOSS Project Planets

freedink @ Savannah: New FreeDink game data release

GNU Planet! - Sun, 2017-04-02 04:34

Here's a new release of freedink-data :)
http://ftp.gnu.org/gnu/freedink/freedink-data-1.08.20170401.tar.xz

It adds 2 new sounds, a new Swedish translation and updates the
Catalan, Spanish and German ones.

As a side note, the whole (simple) build process is now reproducible.

About GNU FreeDink:

Dink Smallwood is an adventure/role-playing game, similar to Zelda, made by RTsoft. Besides twisted humor, it includes the actual game editor, allowing players to create hundreds of new adventures called Dink Modules or D-Mods for short.

GNU FreeDink is a new and portable version of the game engine, which runs the original game as well as its D-Mods, with close
compatibility, under multiple platforms.

freedink-data contains the original game story, along with free sound and music replacements.
Your help is welcome to fill the gap!
https://www.gnu.org/software/freedink/doc/sounds/

Categories: FLOSS Project Planets

Ben Hutchings: Debian LTS work, March 2017

Planet Debian - Sat, 2017-04-01 22:57

I was assigned 14.75 hours of work by Freexian's Debian LTS initiative and worked all of these hours.

I prepared a security update for the Linux kernel and issued DLA 849-1. I also continued catching up with the backlog of fixes for the Linux 3.2 longterm stable branch. I released stable update 3.2.87 and started preparing the next stable update.

Categories: FLOSS Project Planets

Mike Hommey: git-cinnabar experimental features

Planet Debian - Sat, 2017-04-01 18:54

Since version 0.4.0, git-cinnabar has a few hidden experimental features. Two of them are available in 0.4.0, and a third was recently added on the master branch.

The basic mechanism to enable experimental features is to set a preference in the git configuration with a comma-separated list of features to enable, or all, for all of them. That preference is cinnabar.experiments.

Any means to set a git configuration can be used. You can:

  • Add the following to .git/config: [cinnabar] experiments=feature
  • Or run the following command: $ git config cinnabar.experiments feature
  • Or only enable the feature temporarily for a given command: $ git -c cinnabar.experiments=feature command arguments

But what features are there?

wire

In order to talk to Mercurial repositories, git-cinnabar normally uses mercurial python modules. This experimental feature allows accessing Mercurial repositories without using the mercurial python modules. It then relies on git-cinnabar-helper to connect to the repository through the mercurial wire protocol.

As of version 0.4.0, the feature is automatically enabled when Mercurial is not installed.

merge

Git-cinnabar currently doesn’t allow pushing merge commits. The main reason for this is that generating the correct mercurial data for those merges is tricky, and needs to be gotten right.

In version 0.4.0, enabling this feature allows pushing merge commits as long as the parent commits are available on the mercurial repository. If they aren’t, you need to first push them independently, and then push the merge.

On current master, that limitation doesn’t exist anymore; you can just push everything in one go.

The main caveat with this experimental support for pushing merges is that it currently doesn’t handle the case where a file was moved on one of the branches the same way mercurial would (i.e. the information would be lost to mercurial users).

clonebundles

As of mercurial 3.6, Mercurial servers can opt-in to providing pre-generated bundles, which, when clients support it, takes CPU load off the server when a clone is performed. Good for servers, and usually good for clients too when they have a fast network connection, because downloading a pre-generated bundle is usually faster than waiting for the server to generate one.

As of a few days ago, the master branch of git-cinnabar supports cloning using those pre-generated bundles, provided the server advertises them (mozilla-central does).

Categories: FLOSS Project Planets

Anarcat: My free software activities, February and March 2017

Planet Python - Sat, 2017-04-01 18:51
Looking into self-financing

Before I begin, I should mention that I started tracking my time working on free software more systematically. I spend a lot of time on the computer, as regular readers of this blog might remember, so I wanted to know exactly how much time was paid vs free work. I was already using org-mode's time clock system to keep track of my work hours, so I just extended this to my regular free software contributions, which also helps in writing those reports.

It turns out that over 60% of my computer time is spent working on free software. That's huge! I was expecting something more in the range of 20 to 40% of my time. So I started thinking about ways of financing this work. I created a Patreon page but I'm hesitant about launching such a campaign: the only thing worse than "no patreon page" is "a patreon page with failed goals and no one financing it". So before starting such an effort, I'd like to get a feeling of what other people's experiences with it have been. I know that joeyh is close to achieving his goals, but I can't compare with the guy that invented git-annex or debhelper, so I'm concerned I wouldn't be able to raise the same level of funding.

So if you have any advice, feel free to contact me in private or in the comments. If you would be ready to fund my work, I'd love to know about it, obviously, but I guess I wouldn't get real numbers until I actually open up such a page...

Now, onto the regular report.

Wallabako

I spent a good chunk of time completing most of the things I had in mind for Wallabako, which I mentioned quickly in the previous report. Wallabako is now much easier to install, with clearer instructions, an easier-to-use configuration file, more reliable synchronization and read-status propagation. As usual, the Wallabako README file has all the details.

I've also looked at better integration with Koreader, the free software e-reader that forms the basis of the okreader free software distribution, which has been able to port Debian to the Kobo e-readers, a project I am really excited about. This project has the potential of supporting Kobo readers beyond the lifetime that upstream grants them, and removes a lot of proprietary software and spyware that ships with the Kobo readers. So I have made a few contributions to okreader and also to koreader, the ebook reader okreader is based on.

Stressant

I rewrote stressant, my simple burn-in and stress-testing tool. After struggling in turn with Debirf, live-build, vmdebootstrap and even FAI, I just figured maybe it wasn't the best idea to try and reinvent that particular wheel: instead of reinventing how to build yet another Debian system build tool, maybe I should just reuse what's already there.

It turns out there's a well known, successful and fairly complete recovery system called Grml. It is a Debian Derivative, so all I needed to do was stop procrastinating and write the actual stressant tool instead of just creating a distribution with a bunch of random tools shipped in. This allowed me to focus on which tools were the best to stress test different components. This selection ended up being:

fio can also be used to overwrite disk drives with the proper options (--overwrite and --size=100%), although grml also ships with nwipe for wiping old spinning disks and hdparm to do a secure erase of SSD disks (whatever that's worth).

Stressant still needs to be shipped with grml for this transition to be complete. In the meantime, I was able to configure the excellent public Gitlab CI service to provide ISO images with Stressant built-in as a stopgap measure. I also need to figure out a way to automate starting stressant from a boot menu to automate deployments on a larger scale, although because I have little need for the feature at this moment in time, this will likely wait for a sponsor to show up for this to be implemented.

Still, stressant has useful features like the capability of sending logs by email using a fresh new implementation of the Python SMTPHandler (BufferedSMTPHandler) which waits for logging to complete before sending a single email. Another interesting piece of code in there is the NegateAction argparse handler that enables the use of "toggle flags" (e.g. --flag / --no-flag). I'm so happy with the code that I figure I could just share it here directly:

class NegateAction(argparse.Action):
    '''add a toggle flag to argparse

    this is similar to 'store_true' or 'store_false', but allows arguments
    prefixed with --no to disable the default. the default is set depending
    on the first argument - if it starts with the negative form (define by
    default as '--no'), the default is False, otherwise True.
    '''
    negative = '--no'

    def __init__(self, option_strings, *args, **kwargs):
        '''set default depending on the first argument'''
        default = not option_strings[0].startswith(self.negative)
        super(NegateAction, self).__init__(option_strings, *args,
                                           default=default, nargs=0, **kwargs)

    def __call__(self, parser, ns, values, option):
        '''set the truth value depending on whether it starts with the negative form'''
        setattr(ns, self.dest, not option.startswith(self.negative))

Short and sweet. I wonder why stuff like this is not in the standard library yet - maybe just because no one bothered yet? It'd be great to get feedback from more experienced Pythonistas on this one.
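For context, here is a minimal usage sketch, assuming the NegateAction class above is in scope (the option names are made up for illustration):

    import argparse

    parser = argparse.ArgumentParser()
    # registering both spellings lets --flag set the value to True and
    # --no-flag set it to False; the default (True) comes from the first form
    parser.add_argument('--flag', '--no-flag', dest='flag', action=NegateAction)
    print(parser.parse_args([]).flag)             # True
    print(parser.parse_args(['--no-flag']).flag)  # False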

I hope that my work on Stressant is complete. I get zero funding for this work, and have little use for it myself: I manage only a few machines and such a tool really shines when you regularly put new hardware online, which is (fortunately?) not my case anymore. I'd be happy, of course, to accompany organisations and people that wish to further develop and use such a tool.

A short demo of stressant as well as detailed description of how it works is of course available in its README file.

Standard third party repositories

After looking at improvements for the grml repository instructions, I realized there was no real "best practices" document on how to configure an Apt repository. Sure, there are tools like reprepro and others, but those hardly qualify as policy: they are very flexible and there are lots of ways to create insecure repositories or curl | sh style instructions, which we of course generally want to avoid.

While the larger problem of Untrusted Debian packages remains generally unsolved (e.g. when you install any .deb file, it can get root on your system), it seemed to me one critical part of this problem was how to add a random third-party repository to your machine while limiting, as much as possible, what possible attackers could do with such a repository. In other words, to solve the more general problem of insecure .deb files, we also need to solve the distribution problem, otherwise fixing the .deb files themselves will be useless.

This led to the creation of standardized repository instructions that define:

  1. how to distribute the repository's public signing key (i.e. over HTTPS)
  2. how to name suites and components (e.g. use stable and main unless you have a good reason, and explain yourself)
  3. recommend a healthy dose of apt preferences pinning
  4. how to distribute keys (e.g. with a deriv-archive-keyring package)

I've seen so many third party repositories get this wrong. For example, a lot of repositories recommend this type of command to initialize the OpenPGP trust path:

curl http://example.com/key.asc | apt-key add -

This has the following problems:

  • the key is transferred in plaintext and can easily be manipulated by an active attacker (e.g. a router on your path to the server or a neighbor in a Wifi cafe)
  • the key is added to the main trust root, which allows the key to authenticate as the real Debian archive, therefore giving it all rights over all packages
  • since it's part of the global archive, it's difficult for a package to remove/add the key when a key rollover is necessary (and repositories generally don't provide a deriv-archive-keyring to do that process anyway)

An example of this is the Docker install instructions, which at least manage to do this over HTTPS. Some other repositories don't even bother teaching people about the proper way of adding those keys. We settled for:

wget -O /usr/share/keyrings/deriv-archive-keyring.gpg https://deriv.example.net/debian/deriv-archive-keyring.gpg

That location was explicitly chosen to be out of the main trust directory, so that it needs to be explicitly added to the sources.list as well:

deb [signed-by=/usr/share/keyrings/deriv-archive-keyring.gpg] https://deriv.example.net/debian/ stable main

Similarly, we highly recommend users setup "apt pinning" to restrict what a given repository can do. Since pinning is so confusing, most people don't actually bother even configuring it and I have yet to see a single repo advise its users to configure those preferences, which are essential to limit what a repository can do. To keep configuration simple, we recommend this:

Package: *
Pin: origin deriv.example.net
Pin-Priority: 100

Obviously, for a single-package repository, the actual package name should be listed, e.g.:

Package: foo
Pin: origin deriv.example.net
Pin-Priority: 100

And the priority should probably be set to 1 unless you want to allow automatic upgrades.

It is my hope that this design will get more traction in the years to come and become a de-facto standard that will be a key part in safely adding third party repositories. There is obviously much more work to be done to improve security when installing untrusted .deb files, and I encourage Debian developers to consider contributing to the UntrustedDebs discussions and particularly to the Teams/Dpkg/Spec/DeclarativePackaging work.

Signal R&D

I spent a significant amount of time this month struggling with the Signal project on my phone. I'm still ambivalent on Signal: it's a centralized design, too dependent on phone numbers, but I must admit they get a lot of things right and it's the only free-software platform that allows for easy-to-use, multi-platform videoconferencing that my family can use.

I've been following Signal for a while: up until now, I had been using the LibreSignal rebuild of the official client, as it is distributed on an F-Droid repository. Because I try to avoid Google (proprietary) software on my phone, it's basically the only way I could even install Signal. Unfortunately, the repository is out of date and introduces another point of trust in the distribution model: now you not only need to trust the Signal authors to do the right thing, you also need to trust that F-Droid repo not to inject nasty code on your phone. I've therefore started a discussion about how Signal could be distributed outside of the Google Play Store. I'd like to think it's one of the things that led the Signal people to distribute an official copy of Signal outside of the playstore.

After much struggling, I was able to upgrade to this official client and will be able to upgrade easily by just downloading the APK. (Do note that I ended up reinstalling and re-registering Signal, which unfortunately changed my secret keys.) I do hope Signal enters F-Droid one day, but it could take a while because it still doesn't work without Google services and barely works with MicroG, the free software alternative to the Google services clients. Moxie also set a list of requirements like crash reporting and statistics that need to be implemented on F-Droid's side before he agrees to the deployment, so this could take a while.

I've also participated in the, ahem, discussion on the JWZ blog regarding a supposed vulnerability in Signal where it would leak previously unknown phone numbers to third parties. I reviewed the way the phone number is uploaded and, while it's possible to create a rainbow table of phone numbers (which are hashed with a truncated SHA-1 checksum), I couldn't verify the claims of other participants in the thread. For me, Signal still does the right thing with contacts, although I do question the way "read status" notifications get transmitted, but that belongs in another bug report / blog post.
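To illustrate why hashing alone offers little protection here, a small sketch of the rainbow-table idea follows; the exact encoding and truncation length Signal uses are not reproduced here, so treat those details as assumptions:

    # Sketch: phone numbers live in a small keyspace, so a table mapping
    # truncated SHA-1 hashes back to numbers is cheap to precompute.
    import hashlib

    def truncated_sha1(number, length=10):
        # hypothetical truncation length, for illustration only
        return hashlib.sha1(number.encode()).digest()[:length]

    # enumerate one (tiny) slice of a single prefix to show the principle
    table = {truncated_sha1("+1555%07d" % n): "+1555%07d" % n
             for n in range(10000)}
    print(table[truncated_sha1("+15550001234")])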

Debian Long Term Support (LTS)

It's been more than a year now that I have been working on Debian LTS, the initiative started by Raphael Hertzog at Freexian. I didn't work much in February, so I had a lot of hours to catch up on, and was unfortunately unable to do so, partly because I was busy with other projects, and partly because my colleagues are doing a great job at resolving the most important issues.

So one of my concerns this month was finding work. It seemed that all the hard packages were either taken (e.g. my usual favorites, tiff and imagemagick, were done by others) or just too challenging (e.g. I don't feel quite comfortable tackling the LTS branch of the Linux kernel yet).

I spent quite a bit of time trying to figure out what was wrong with pcre3, only to realise the "32" in the report was not about the architecture, but about the character width. Because of this, I marked 4 CVEs (CVE-2017-7186, CVE-2017-7244, CVE-2017-7245, CVE-2017-7246) as "not-affected", since the 32-bit character support wasn't enabled in wheezy (or jessie, for that matter). I still spent some time trying to reproduce the issues, which require a compiler with an AddressSanitizer, something that was introduced in both Clang and GCC after Wheezy was released, which makes reproducing this fairly complicated...

This allowed me to experiment more with Vagrant, however, and I have provided the Debian cloud team with a 32-bit Vagrant box that was merged in shortly after, although it doesn't show up yet in the official list of Debian images.

Then I looked at the apparmor situation (CVE-2017-6507, Debian bug #858768). That was one tricky bug as well, since it's not a security issue in apparmor per se, but more an issue with things that assume a certain behavior from apparmor. I have concluded that Wheezy was not affected because there are no assumptions of proper isolation there - which are provided only starting from LXC 1.0 - and Docker is not in Wheezy. I also couldn't reproduce the issue on Jessie, but, as it turns out, the issue was sysvinit-specific, which is why I couldn't reproduce it under the default systemd configuration shipped with Jessie.

I also looked at the various binutils security issues: as I reported on the mailing list, I didn't see anything serious enough in there to warrant a security release and followed the lead of both the stable and Red Hat security teams by marking this "no-dsa". I similarly reviewed the mp3splt security issues (specifically CVE-2017-5666) and was fairly puzzled by that issue, which seems to be triggered only by the same address sanitization extensions as PCRE, although there was some pretty wild interplay with debugging flags in there. All in all, it seems we can't reproduce that issue in wheezy, but I do not feel confident enough in the results to push that issue aside for now.

I finally uploaded the pending graphicsmagick issue (DLA-547-2), a regression update to fix a crash that was introduced in the previous release (DLA-547-1, mistakenly named DLA-574-1). Hopefully that release should clear up some of the confusion and fix the regression.

I also released DLA-879-1 for CVE-2017-6369 in firebird2.5, which was an interesting experiment: I couldn't reproduce the issue in a local VM, even after following the Ubuntu setup tutorial (I wasn't too familiar with the Firebird database until now; hint: the default username and password is sysdba/masterkey). I ended up assuming we were vulnerable and just backported the patch after seeing the jessie folks push out a release just in case.

I also looked at updating the ca-certificates package to deal with the pending WoSign/Startcom removal: I made an explicit list of the CAs that need to be removed after reviewing the Mozilla list. I also sent a patch for an unrelated issue where ca-certificates is writing to /usr/local (!!) in Debian bug #843722.

I have also done some "meta" work in starting a discussion about fixing the missing DLA links in the tracker, as you will notice all of the above links lead to nowhere. Thanks to pabs, there are now some links, but unfortunately there are about 500 DLAs missing from the website. We also discussed ways to address Debian bug #859123, something which is currently a manual process. This is now in the hands of the excellent webmaster team.

I have also filed a few missing security bugs (Debian bug #859135, Debian bug #859136), partly because I wanted to help the security team. But it turned out that I felt the script needed some improvements, so I submitted a patch to make it easier to run.

Other projects

As usual, there's the usual mixed bag of chaos:

More stuff on Github...

Categories: FLOSS Project Planets

Enrico Zini: Stereo remote control

Planet Debian - Sat, 2017-04-01 18:10

Wouldn't it be nice if I could use the hifi remote control to control mpd?

It turns out many wishes can come true when one has a GPIO board.

A friend of mine had a pile of IR receiver components in his stash and gave me one. It is labeled "38A 1424A", and the closest matching datasheet I found is this one.

I wired the receiver with the control pin on GPIO port 24, and set up lirc by following roughly this guide.

Enable lirc_rpi support

I had to add these lines to /boot/config.txt to enable lirc_rpi support:

dtoverlay=lirc-rpi,gpio_in_pin=24,gpio_out_pin=22
dtparam=gpio_in_pull=up

At first I had missed configuration of the internal pull up resistor, and reception worked but was very, very poor.

Then reboot.

Install and configure lirc

apt install lirc

I added these lines to /etc/lirc/hardware.conf:

DRIVER="default" DEVICE="/dev/lirc0" MODULES="lirc_rpi"

Stopped lircd:

systemctl stop lirc

Tested that the receiver worked:

mode2 -d /dev/lirc0

Downloaded remote control codes for my hifi and put them in /etc/lirc/lircd.conf.

Started lircd

systemctl start lirc

Tested that lirc could parse commands from my remote control:

$ irw
0000400405506035 00 CD_PAUSE RAK-SC304W
0000400405506035 01 CD_PAUSE RAK-SC304W
0000400405506035 02 CD_PAUSE RAK-SC304W
0000400405505005 00 CD_PLAY RAK-SC304W
0000400405505005 01 CD_PLAY RAK-SC304W

Interface lirc with mpd

I made this simple lirc program and saved it in ~pi/.lircrc:

begin
    prog = irexec
    button = CD_NEXT
    config = mpc next
end
begin
    prog = irexec
    button = TAPE_FWD
    config = mpc next
end
begin
    prog = irexec
    button = TAPE_REW
    config = mpc prev
end
begin
    prog = irexec
    button = CD_PREV
    config = mpc prev
end
begin
    prog = irexec
    button = TAPE_PAUSE
    config = mpc pause
end
begin
    prog = irexec
    button = CD_PAUSE
    config = mpc pause
end
begin
    prog = irexec
    button = CD_PLAY
    config = mpc toggle
end
begin
    prog = debug
    button = TAPE_PLAY_RIGHT
    config = mpc toggle
end

Then wrote a systemd unit file to start irexec and saved it as /etc/systemd/system/mpd-irexec.service:

[Unit]
Description=Control mpd via lirc remote control
After=lirc mpd

[Service]
Type=simple
ExecStart=/usr/bin/irexec
Restart=always
User=pi
WorkingDirectory=~

[Install]
WantedBy=multi-user.target

Then systemctl start mpd-irexec to start irexec, and systemctl enable mpd-irexec to start irexec at boot.

Profit!

All of this was done by me, with almost no electronics training, following online tutorials for the hardware parts.

To connect components I used a breadboard and female-male jumper leads, so I didn't have to solder, for which I have very little practice.

Now the Raspberry Pi is so much a part of my hifi component that I can even control it with the hifi remote control.

Given that I disconnected the CD and tape players, there are now at least 16 free buttons on the remote control that I can script however I like.

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in March 2017

Planet Debian - Sat, 2017-04-01 17:18

FTP assistant

This month I marked 111 packages for accept and sent four emails to maintainers asking questions. The bad number of the month is the 41 packages I had to reject. This rejection rate was the worst of all my NEW months.

May I ask everybody to pay a bit more attention before uploading/sponsoring a package?

Debian LTS

This was my thirty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14.75h. During that time I did uploads of:

  • [DSA 3798-1] tnef security update for four CVEs
  • [DLA 839-2] tnef regression update
  • [DSA 3798-2] tnef regression update
  • tnef security update in unstable/testing for four CVEs
  • [DLA 878-1] libytnef security update for ten CVEs

I also took care of radare and marked all CVEs as not-affected in Wheezy. My next package on the list will be qbittorrent.

Other stuff

I uploaded a new version of entropybroker to fix a bug in the handling of ppoll return codes. This version will also make it into Stretch. The same goes for a bug fix in alljoyn-services-1509. I don’t know why everybody talks about unblock bugs that need to be filed!? The release team was always faster at granting the unblock than I was at filing the corresponding bug.

As my DOPOM for this month I adopted httperf, took care of some bugs and sent patches upstream.

I also created a new project on Alioth called debian-mobcom (Alioth page), which shall be a place for all packages concerning the network side of mobile communication. So far I have only uploaded libosmocore to experimental, so the package list is rather short.

Categories: FLOSS Project Planets

Jeff Geerling's Blog: MidCamp 2017 Presentation - Drupal VM for Drupal 8 Development

Planet Drupal - Sat, 2017-04-01 14:23

MidCamp is one of my favorite Drupal events—it hits the sweet spot (at least for me) in terms of diversity, topics, and camp size. I was ecstatic when one of my session submissions was accepted, and just finished presenting Developing for Drupal 8 with Drupal VM.

You can see slides from the presentation here: Drupal VM for Drupal 8 Development, but without the full video there are a lot of gaps (especially on slides where there's just a giant emoji!). Luckily, Kevin Thull of Blue Drop Shop is hard at work recording all the sessions and posting them to YouTube. He's already processed the video from my session, and it's available below:

Categories: FLOSS Project Planets