Planet Debian


Thomas Goirand: About the privacy of the unlocking procedure for Xiaomi’s Mi 5s plus

7 hours 5 min ago

My little girl decided my wife's old OnePlus One had to take a swim in the toilet. So we had to buy a new phone. Since I know how bad standard ROMs are, I looked through the LineageOS list of supported devices, found out that the Xiaomi Mi 5s Plus was not too bad, and we bought one. The phone itself looks quite nice: 64 bits, lots of RAM, nice screen, etc. Then I tried the procedure for unlocking…

First, you have to register on Xiaomi's website and request permission to unlock the device. That's already bad enough: why should I ask for permission to use the device I own as I please? Anyway, I did that. The procedure includes receiving an SMS. Again, worse: why should I give up such a private thing as my phone number? Anyway, I did it, and received the code to activate my website account. Then I started the unlock program in a VirtualBox Windows XP VM (yeah right… I wasn't expecting anything better anyway…), and then the program told me that I need to add my Xiaomi account on the phone. Of course, it then sends a web request to Xiaomi's servers. I'm already not happy with all of this, but that's not all. After all of these privacy breaches, the unlock app tells me that I need to wait 72 hours for the phone-to-account association to be activated. Since I won't be available in the middle of the week, that means waiting until next weekend to do it. Silly…

Let’s recap. During this unlock procedure, I had to give-up:

  • My phone number (due to the SMS)
  • My phone ID (probably the IMEI was sent)
  • My email address (truth is: I could have given them a temporary email address)
  • Hours of my time spent understanding and running the stupid procedure

So my advice: if you want an unlocked Android device, do not choose Xiaomi, unless you're OK with giving up all of the above private information.

Categories: FLOSS Project Planets

Uwe Kleine-König: Using the switch on Turris Omnia with Debian

Fri, 2018-03-23 17:29

After installing Debian on Turris Omnia there are a few more steps needed to make use of the network switch.

The Armada 385 CPU provides three network interfaces. Two are connected to the switch (but only one of them is used to "talk" to the switch), and one is routed directly to the WAN port.

After booting you might have to issue the following commands to make the devices representing the five external ports of the switch appear and become functional:

# ip link set eth1 up
# modprobe mv88e6xxx

After that you can use the network devices lan0 to lan4 like normal network devices. To make them actually behave as you would expect from a network switch, you have to put them into a bridge. The driver then offloads forwarding between the ports to the switch hardware, so the CPU doesn't have to handle each single packet.

To automate setup of the bridged ports I used systemd-networkd as follows:

# echo mv88e6xxx > /etc/modules-load.d/switch.conf
# printf '[Match]\nPath=platform-f1030000.ethernet\n[Link]\n#MACAddress=...\nName=eth1\n' > /etc/systemd/network/
# printf '[NetDev]\nName=brlan\nKind=bridge\n' > /etc/systemd/network/brlan.netdev
# printf '[Match]\nName=brlan\n\n[Network]\nLinkLocalAddressing=ipv6\n' > /etc/systemd/network/
# printf '[Match]\nName=lan[01234]\n\n[Network]\nBridge=brlan\nBindCarrier=eth1\n' > /etc/systemd/network/
# printf '[Match]\nName=eth1\n' > /etc/systemd/network/
# systemctl enable --now systemd-networkd.service

You also might want to mask NetworkManager and/or ifupdown so they don't interfere with the above setup. And obviously you might want to add some more options to configure the addresses used there; see the systemd.network(5) man page for the details.

Categories: FLOSS Project Planets

Aigars Mahinovs: Automation of embedded development

Fri, 2018-03-23 09:35

I am wondering if there is a standard solution to a problem that I am facing. Say you are developing an embedded Debian Linux device. You want to have a "test farm" - a bunch of copies of your target hardware running a lot of tests, while the development is ongoing. For this to work automatically, your automation setup needs to have a way to fully re-flash the device, even if the image previously flashed to it does not boot. How would that be usually achieved?

I'd imagine some sort of option in the initial bootloader that would look at some hardware switch (that your test host could trip programmatically) and, if that is set, boot into a very minimal and very stable "emergency" Linux system; then you could ssh into that, mount the target partitions and rewrite their contents with the new image to be tested.

Are there ready-made solutions that do such a thing? Generically, or even just for some specific development boards? Do people solve this problem in a completely different way? I was unable to find any good info online.
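Assuming such an emergency system exists and exposes the target's storage as an ordinary block device, the host-side reflash step itself reduces to streaming the image in chunks and keeping a checksum for verification. A minimal sketch of that step (the function name, chunk size and the returned digest are my own illustration, not from any existing test farm):

```python
import hashlib

def flash_image(image_path: str, device_path: str,
                chunk: int = 4 * 1024 * 1024) -> str:
    """Stream image_path onto device_path in fixed-size chunks and
    return the image's sha256 so the write can be verified afterwards."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:
                break
            digest.update(block)
            dst.write(block)
    return digest.hexdigest()
```

On a real farm this loop would run inside the emergency system (or be fed over ssh, dd-style); the interesting automation problem remains how to reliably force the board into that emergency system in the first place.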

Categories: FLOSS Project Planets

Renata D'Avila: Pushing a commit to a different repo

Thu, 2018-03-22 23:49

View from the Barra-Galheta beach trail, in Florianopolis, Brazil

My Outreachy internship with Debian is over. I'm still going to write an article about it, to let everyone know what I worked on towards the end, but I simply haven't had the time yet to sit down and compile all the information.

As you might or might not have noticed, right after my last Outreachy activity, I sort of took a week off at the beach. \o/

Mila, a cute stray dog that accompanied us during a whole trail

For the past weeks, I've also been involved in the organization of three events (one of them was a Debian Women meeting in Curitiba that took place two Saturdays ago, and another is Django Girls Porto Alegre, which starts tonight). Because of this last one, I was reviewing their Brazilian Portuguese tutorial and adding some small fixes to the language. After all, we are talking to women who read the tutorial during the workshop, so why should all the mentions of programmers and hackers use the male form in Portuguese? Women program, too!

When I was going to commit my fixes, though, I got an error:

remote: error: GH006: Protected branch update failed for refs/heads/master.
 ! [remote rejected] master -> master (protected branch hook declined)


Yup, as happens more often than not, I forgot to fork the repository before starting to change the files! I just did 'git clone' straight from Django Girls' tutorial repository. But, since I had already done all the steps up to the commit, what could I do to avoid losing the changes? Could I just push this commit to another repository of my own and then open a Pull Request to DjangoGirls/tutorial?

Of course I had to go and search for that. Isn't that what all programmers do? Go and find someone else who already had the same problem they have to try and find a solution?

Quick guide to the solution I've found:

  • Fork the original repository to my collection of repos (on Github, just clicking 'Fork' will do).

  • Get the branch and the id of the commit that had been created. For instance, in this case:

[master 4d314550] Small fixes for pt-br version

The branch is: master

The id is: 4d314550

  • Use the URL for the new repository (your fork), the branch and the commit id for a new git push command, like this:
git push URL_FOR_THE_NEW_REPO commit_id:branch

Example with my repo:

git push URL_FOR_THE_NEW_REPO 4d314550:master
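The whole rescue can also be replayed with throwaway local repositories. This is only a sketch: the repository paths below are stand-ins for the real GitHub URLs, and it assumes nothing beyond a git binary on the PATH.

```python
# Replaying the rescue locally: commit in a clone of "upstream", then
# push the commit id to a *different* repository.
import os
import subprocess
import tempfile

def run(*args, cwd=None):
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

def demo():
    with tempfile.TemporaryDirectory() as tmp:
        upstream = os.path.join(tmp, "upstream.git")  # plays DjangoGirls/tutorial
        fork = os.path.join(tmp, "fork.git")          # plays my own fork
        work = os.path.join(tmp, "work")
        run("git", "init", "--bare", upstream)
        run("git", "init", "--bare", fork)
        # oops: cloned upstream directly instead of the fork
        run("git", "clone", upstream, work)
        with open(os.path.join(work, "intro.md"), "w") as f:
            f.write("Mulheres programam!\n")
        run("git", "add", "intro.md", cwd=work)
        run("git", "-c", "user.name=t", "-c", "user.email=t@example.org",
            "commit", "-m", "Small fixes for pt-br version", cwd=work)
        sha = run("git", "rev-parse", "HEAD", cwd=work)
        # the rescue: push the existing commit to the other repository
        # (full refs/heads/ form, since the branch doesn't exist there yet)
        run("git", "push", fork, sha + ":refs/heads/master", cwd=work)
        # the commit is now on the fork's master branch
        return run("git", "rev-parse", "master", cwd=fork) == sha
```

Note that pushing a raw commit id to a branch that does not yet exist on the remote requires the fully qualified refs/heads/ destination; the shorter commit_id:master form works once the branch already exists, as it did on my fork.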

And this was yet another article for future reference.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppCNPy 0.2.9

Thu, 2018-03-22 20:07

Another minor maintenance release of the RcppCNPy package arrived on CRAN this evening.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

There is only a small code change: a path is now checked before an attempt to save. Thanks to Wush for the suggestion. I also added a short new vignette showing how reticulate can be used for NumPy data.

Changes in version 0.2.9 (2018-03-22)
  • The npySave function has a new option to check the path in the given filename.

  • A new vignette was added showing how the reticulate package can be used instead.

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Christoph Egger: iCTF 2018 Spiderman writeup

Thu, 2018-03-22 08:06

This is FAUST playing CTF again, this time iCTF. We somehow managed to score an amazing 5th place.

Crew: izibi, siccegge

spider is a patched python interpreter. man is a pyc file, but with different magic values (which explains the patched python interpreter, for now). Plain decompiling fails due to some (dead) cruft code at the beginning of all methods; it can be patched away, or you do more manual disassembling.


  • does something with RSA
  • public exponent is slightly uncommon (\(2^{16}+3\) instead of \(2^{16}+1\)) but that should be fine.
  • uses openssl prime -generate to generate the RSA key. Doesn't use -safe but should also be fine for RSA purposes
  • You need to do a textbook RSA signature on a challenge to get the flag

Fine, so far nothing obvious to break. When interacting with the service, you will likely notice the Almost Equal function in the Fun menu. According to the bytecode, it takes two integers \(a\) and \(b\) and outputs whether \(a = b \pm 1\), but looking at the gameserver traffic, these two numbers are also considered to be almost equal:

$$ a = 33086666666199589932529891 \\ b = 35657862677651939357901381 $$

So something's strange here. Starting the spider binary gives a python shell where you can play around with these numbers, and you will find that a == b - 1 actually results in True. So there is something wrong with the == operator in the shipped python interpreter; however, it doesn't seem to be any sort of overflow, and the bit representation doesn't show anything obvious either. Lucky guess: why the strange public exponent? Let's try the usual suspects here, and indeed \(a = b - 1 \pmod{2^{16}+1}\). Given this operator is also used to compare the signature on the challenge, the check becomes easily bruteforceable.
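To see why that congruence is fatal, here is a toy reconstruction of the flawed check in plain Python (the primes and challenge below are made up for illustration, not the gameserver's values): if equality is only tested modulo \(w = 2^{16}+1\), a passing "signature" can be found by scanning at most roughly \(w\) candidates.

```python
# Toy model of the bug: '==' behaving as congruence mod w = 2**16 + 1
# makes a textbook RSA signature check bruteforceable.
e = 2**16 + 3
w = 2**16 + 1

p, q = 32749, 65521              # small primes, for demonstration only
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (Python >= 3.8)

challenge = 123456789 % n
real_sig = pow(challenge, d, n)  # what the legitimate signer would send

# The broken check only compares residues mod w, so search for any
# answer whose "signature" lands in the challenge's residue class:
target = challenge % w
answer = 0
while pow(answer, e, n) % w != target:
    answer += 1

assert pow(answer, e, n) % w == challenge % w   # forged answer passes
assert pow(real_sig, e, n) == challenge         # honest check still holds
```

The same idea, plus the netcat plumbing and the random starting point, is what our exploit script does against the live service.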

#!/usr/bin/env python3
import nclib, sys
from random import getrandbits

e = 2**16+3  # exponent
w = 2**16+1  # wtf

nc = nclib.Netcat((sys.argv[1], 20005), udp=False, verbose=True)

nc.recv_until(b'4) Exit\n')
nc.send(b'3\n')  # Read
nc.recv_until(b'What do you want to read?\n')
nc.send(sys.argv[2].encode() + b'\n')
nc.recv_until(b'solve this:\n')

modulus, challenge = map(int, nc.recv_until(b'\n').decode().split()[:2])
challenge %= w

# Starting at 0 would also work, but using large random numbers makes
# it less obvious that we only bruteforce a small set of numbers
answer = getrandbits(2000)
while (pow(answer, e, modulus)) % w != challenge:
    answer += 1

nc.send(str(answer).encode() + b'\n')
flag = nc.recv_until(b'\n')
nc.recv_until(b'4) Exit\n')
nc.send(b'4\n')
Categories: FLOSS Project Planets

Petter Reinholdtsen: Self-appointed leaders of the Free World

Thu, 2018-03-22 06:00

The leaders of the world have started to congratulate the re-elected Russian head of state, and this causes some criticism. I am though a little fascinated by a comment from US senator John McCain, cited by The Hill and others:

"An American president does not lead the Free World by congratulating dictators on winning sham elections."

While I totally agree with the senator here, the way the quote is phrased makes me suspect that he is unaware of the simple fact that the USA has not led the Free World since at least before its government kidnapped a completely innocent Canadian citizen in transit on his way home to Canada via John F. Kennedy International Airport in September 2002 and sent him to be tortured in Syria for a year.

The USA might be running ahead, but the path it is taking is not one taken by any Free World.

Categories: FLOSS Project Planets

Vincent Fourmond: Release 2.2 of QSoas

Wed, 2018-03-21 16:46
The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets, and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.
Freely available binary images for QSoas 1.0

In addition to the new release, we are now releasing the binary images for MacOS and Windows for release 1.0. They are also freely available for download from the downloads page.
About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.
Categories: FLOSS Project Planets

Julien Danjou: On blog migration

Wed, 2018-03-21 14:46

I started my first Web page in 1998, and one could say it has evolved quite a bit since then. From a Frontpage-designed Web site with frames, it evolved into plain HTML files. I started blogging in 2003, though the archives of this blog only go back to 2007. Truth is, many things I wrote in the first years were short (there was no Twitter) and not that relevant nowadays. Therefore, I never migrated them along the many migrations the site went through.

The last time I switched this site's engine was in 2011, when I switched from Emacs Muse (and my custom muse-blog.el extension) to Hyde, a static Web site generator written in Python.

That taught me a few things.

First, you can't really know for sure which project will be a ghost in 5 years. I had no clue back then that Hyde's author would lose interest and struggle to pass maintainership to someone else. The community was not big, but it existed. Betting on a horse is part skill and part chance. My skills were probably lower seven years ago, and I also may have had bad luck.

Secondly, maintaining a Web site is painful. I used to blog more regularly a few years ago, as the friction of using a dynamic blog engine was lower than that of spawning my deprecated static engine. Knowing that it takes 2 minutes to generate the static Web site really makes it difficult to compose and see the result at the same time without losing patience. It took me a few years to decide it was time to invest in the migration. I just jumped from Hyde to Ghost, hosted on their Pro engine, as I don't want to do any maintenance. Let's be honest: I have no will to inflict on myself the maintenance of a JavaScript blogging engine.

The positive side is that this is still Markdown based, so the migration was not so painful. Ghost offers a REST API which allows manipulating most of the content. It works fine, and I was able to leverage the Python ghost-client to write a tiny migration script to migrate every post.

I am looking forward to sharing most of the things I work on during the next months. I have really enjoyed reading the content written by great hackers these last years, and I've learned a ton of things by reading the adventures of smarter engineers.

It might be my time to share.

Categories: FLOSS Project Planets

Jeremy Bicha: gksu is dead. Long live PolicyKit

Wed, 2018-03-21 13:29

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.


Categories: FLOSS Project Planets

Petter Reinholdtsen: Facebooks ability to sell your personal information is the real Cambridge Analytica scandal

Wed, 2018-03-21 11:30

So, Cambridge Analytica is getting some well deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices who are handing information about the visitors of their web pages over to Facebook? No-one who looked at the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of when exposed. It is well known that Facebook is selling out its users' privacy, but it is a scandal nevertheless. Of course the information provided to them by Facebook would be misused by one of the parties given access to personal information about the millions of Facebook users. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, and taking back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already got a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend the video recording of the one-hour talk Paul-Olivier Dehaye gave to NUUG last April about data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.

Categories: FLOSS Project Planets

Iustin Pop: Hakyll basics

Tue, 2018-03-20 22:00

As part of my migration to Hakyll, I had to spend quite a bit of time understanding how it works before I became somewhat “at home” with it. There are many posts that show “how to do x”, but not so many that explain its inner workings. Let me try to fix that: at its core, Hakyll is nothing else than a combination of make and m4 all in one. Simple, right? Let’s see :)

Note: in the following, basic proficiency with Haskell is assumed.

Monads and data types

Rules

The first area (the make equivalent), more precisely the Rules monad, concerns itself with the rules for mapping source files into output files, or creating output files from scratch.

Key to this mapping is the concept of an Identifier, which is a name in an abstract namespace. Most of the time—e.g. for all the examples in the upstream Hakyll tutorial—this identifier actually maps to a real source file, but this is not required; you can create an identifier from any string value.

The similarity, or relation, to file paths manifests in two ways:

  • the Identifier data type, although opaque, is internally implemented as a simple data type consisting of a file path and a “version”; the file path here points to the source file (if any), while the version is rather a variant of the item (not a numeric version!).
  • if the identifier has been included in a rule, it will have an output file (in the Compiler monad, via getRoute).

In effect, the Rules monad is all about taking source files (as identifiers) or creating them from scratch, and mapping them to output locations, while also declaring how to transform—or create—the contents of the source into the output (more on this later). Anyone can create an identifier value via fromFilePath, but “registering” them into the rules monad is done by one of:

Note: I’m probably misusing the term “registered” here. It’s not the specific value that is registered, but the identifier’s file path. Once this string value has been registered, one can use a different identifier value with a similar string (value) in various function calls.

Note: whether we use match or create doesn’t matter; only the actual values matter. So a match "" is equivalent to create [""]; match here takes the list of identifiers from the file-system, but does not associate them with the files themselves—it’s just a way to get the list of strings.

The second argument to the match/create calls is another rules monad, in which we’re processing the identifiers and tell how to transform them.

This transformation has, as described, two aspects: how to map the file path to an output path, via the Rules data type, and how to compile the body, in the Compiler monad.

Name mapping

The name mapping starts with the route call, which lifts the routes into the rules monad.

The routing has the usual expected functionality:

  • idRoute :: Routes, which maps 1:1 the input file name to the output one.
  • setExtension :: String -> Routes, which changes the extension of the filename, or sets it (if there wasn’t any).
  • constRoute :: FilePath -> Routes, which is special in that it will result in the same output filename, which is obviously useful only for rules matching a single identifier.
  • and a few more options, like building the route based on the identifier (customRoute), building it based on metadata associated to the identifier (metadataRoute), composing routes, match-and-replace, etc.

All in all, routes offer all the needed functionality for mapping.

Note that how we declare the input identifier and how we compute the output route is irrelevant, what matters is the actual values. So for an identifier with name (file path), route idRoute is equivalent to constRoute "".


Slightly into more interesting territory here, as we’re moving beyond just file paths :) Lifting a compiler into the rules monad is done via the compile function:

compile :: (Binary a, Typeable a, Writable a) => Compiler (Item a) -> Rules ()

The Compiler monad result is an Item a, which is just an identifier with a body (of type a). This type variable a means we can return any Writable item. Many of the compiler functions work with/return String, but the flexibility to use other types is there.

The functionality in this module revolves around four topics:

The current identifier

First the very straightforward functions for the identifier itself:

  • getUnderlying :: Compiler Identifier, just returns the identifier
  • getUnderlyingExtension :: Compiler String, returns the extension

And the for the body (data) of the identifier (mostly copied from the haddock of the module):

  • getResourceBody :: Compiler (Item String): returns the full contents of the matched source file as a string, but without metadata preamble, if there was one.
  • getResourceString :: Compiler (Item String), returns the full contents of the matched source file as a string.
  • getResourceLBS :: Compiler (Item ByteString), equivalent to the above but as lazy bytestring.
  • getResourceFilePath :: Compiler FilePath, returns the file path of the resource we are compiling.

More or less, these return the data to enable doing arbitrary things with it, and they are the cornerstone of a static site compiler. One could implement a simple “copy” compiler by doing just:

match "*.html" $ do
    -- route to the same path, per earlier explanation.
    route idRoute
    -- the compiler just returns the body of the source file.
    compile getResourceLBS

All the other functions in the module work on arbitrary identifiers.


I’m used to Yesod and its safe routes functionality. Hakyll has something slightly weaker, but with programmer discipline it can allow similar levels of “I know this will point to the right thing” (and maybe correct escaping as well). Enter the:

getRoute :: Identifier -> Compiler (Maybe FilePath)

function which I alluded to earlier, and which—either for the current identifier or another identifier—returns the destination file path, which is useful for composing links (as in HTML links) to it.

For example, instead of hard-coding the path to the archive page, as /archive.html, one can instead do the following:

let archiveId = "archive.html"

create [archiveId] $ do
    -- build here the archive page

-- later, in the index page
create ["index.html"] $ do
    compile $ do
        -- compute the actual url:
        archiveUrl <- toUrl <$> getRoute archiveId
        -- then use it in the creation of the index.html page

The reuse of archiveId above ensures that if the actual path to the archive page changes (renames, site reorganisation, etc.), then all the links to it (assuming, again, discipline of not hard-coding them) are automatically pointing to the right place.

Working with other identifiers

Getting to the interesting aspect now. In the compiler monad, one can ask for any other identifier, whether it was already loaded/compiled or not—the monad takes care of tracking dependencies/compiling automatically/etc.

There are two main functions:

  • load :: (Binary a, Typeable a) => Identifier -> Compiler (Item a), which returns a single item, and
  • loadAll :: (Binary a, Typeable a) => Pattern -> Compiler [Item a], which return a list of items, based on the same patterns used in the rules monad.

If the requested identifier/pattern does not match actual identifiers declared in the “parent” rules monad, then these calls will fail (as in monadic fail).

The use of other identifiers in a compiler step is what allows moving beyond “input file to output file”; aggregating a list of pages (e.g. blog posts) into a single archive page is the most obvious example.

But sometimes getting just the final result of the compilation step (of other identifiers) is not flexible enough—in case of HTML output, this includes the entire page, including the <html><head>…</head> part, not only the body we might be interested in. So, to ease any aggregation, one uses snapshots.


Snapshots allow, well, snapshotting the intermediate result under a specific name, to allow later retrieval:

  • saveSnapshot :: (Binary a, Typeable a) => Snapshot -> Item a -> Compiler (Item a), to save a snapshot
  • loadSnapshot :: (Binary a, Typeable a) => Identifier -> Snapshot -> Compiler (Item a), to load a snapshot, similar to load
  • loadAllSnapshots :: (Binary a, Typeable a) => Pattern -> Snapshot -> Compiler [Item a], similar to loadAll

One can save an arbitrary number of snapshots at various steps of the compilation, and then re-use them.

Note: load and loadAll are actually just the snapshot variants, with a hard-coded value for the snapshot. As I write this, the value is "_final", so it’s probably best not to use an underscore prefix for one’s own snapshots. A bit of a shame that this is not done better, type-wise.

What next?

We have rules to transform things, including smart name transforming, we have compiler functionality to transform the data. But everything mentioned until now is very generic, fundamental functionality, bare-bones to the bone (ha!).

With just this functionality, you have everything needed to build an actual site. But starting at this level would be too tedious even for hard-core fans of DIY, so Hakyll comes with some built-in extra functionality.

And that will be the next post in the series. This one is too long already :)

Categories: FLOSS Project Planets

Steinar H. Gunderson: Debian CEF packages

Tue, 2018-03-20 19:38

I've created some Debian CEF packages—CEF isn't the easiest thing to package (and it takes an hour to build even on my 20-core server, since it needs to build basically all of Chromium), but it's fairly rewarding to see everything fall into place. It should benefit not only Nageru, but also OBS and potentially CasparCG if anyone wants to package that.

It's not in the NEW queue because it depends on a patch to chromium that I hope the Chromium maintainers are brave enough to include. :-)

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #151

Tue, 2018-03-20 15:59

Here's what happened in the Reproducible Builds effort between Sunday March 11 and Saturday March 17 2018:

Upcoming events

Patches submitted

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (168)
  • Emmanuel Bourg (2)
  • Pirate Praveen (1)
  • Tiago Stürmer Daitx (1)

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

Neil McGovern: ED Update – week 11

Tue, 2018-03-20 11:45

It’s time (Well, long overdue) for a quick update on stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.

  • The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
  • GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
    • Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
  • Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
  • The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
  • A couple of administrative bits:
    • We now have a filing system for paperwork in NextCloud
    • Reviewing accounts for the end of year accounts – it’s the end of the tax year, so our finances need to go to the IRS
    • Tracking of accounts receivable hasn’t been great in the past, probably not helped by GNUCash. I’m looking at alternatives at the moment.
  • Helping out with a couple of trademark issues that have come up
  • Regular working sessions for Flathub legal bits with our lawyers
  • I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and finding those small things which annoy people.
Categories: FLOSS Project Planets

Holger Levsen: 20180319-some-problems

Tue, 2018-03-20 08:26
Some problems with Code of Conducts

shiromarieke took her time and wrote an IMHO very good text about problems with Codes of Conduct, which I wholeheartedly recommend reading.

I'll just quote two sentences which I think are essential:

Quote 1: "This is not a rant - it is a call for action: Let's gather, let's build the structures we need to make all people feel safe and respected in our communities." - in that sense, if you have feedback, please share it with shiromarieke as suggested by her. I'm very thankful she is taking the time to discuss her criticism and work on possible improvements! (I'll likely not discuss this online, though I'll be happy to discuss it offline.) I just wanted to share this link with the Debian communities, as I agree with many of shiromarieke's points and because I want to support efforts to improve this, as I believe those efforts will benefit everyone (as diversity and a welcoming atmosphere benefit everyone).

Quote 2: "Although I don't believe CoC are a good solution to help fix problems I have and will always do my best to respect existing CoC of workplaces, events or other groups I am involved with and I am thankful for your attempt to make our places and communities safer." - me too.


Daniel Pocock: Can a GSoC project beat Cambridge Analytica at their own game?

Tue, 2018-03-20 08:15

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day; what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, Duhigg shows how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.


Shirish Agarwal: Debconf 2018, MATE 1.20, libqalculate transition etc.

Tue, 2018-03-20 01:20

Dear all,

First up is news on Debconf 2018, which will be held in Hsinchu, Taiwan. Apparently the CFP, or Call for Proposals, was made just a few days ago and I probably forgot to share it. Registration has also opened now.

The only thing most people have to figure out is how to get a system-generated certificate; make sure it has an expiry date reasonably far out. I usually pick a year, but make it at least six months, as you will need to put up your proposal for consideration and let the content team decide on its merit. This may at some point move from alioth to salsa, as the alioth service is going away.

The best advice I can give is to put your proposal in and keep reworking/polishing it till the end date for applications is near. At the same time, do not over-commit yourself. From a very Indian perspective, and as somebody who has been to one debconf, you can think of the debconf as a kind of ‘Kumbh Mela‘ or gathering, as it were. You can definitely network around all the topics and people you care for, but the most rewarding talks are often those which were totally unplanned. Also, it does get crazy sometimes, so it’s nice if you can have some sane time for yourself, even if it’s just a 5-10 minute walk.

On the budgeting side of things, things have been going well but could be better. The team has managed to raise probably a bit more than half the target; see the list of the sponsors of Debconf. With so many companies using the products that Debian Developers work hard at maintaining, it would be in the companies’ enlightened self-interest to keep the pot going. There are high hopes for a healthy turnout, one that influences hardware, software and information-technology policymakers towards a more open and secure society where people are not just data.

In other news, I’m excited to see MATE 1.20, which is now in testing. I asked people from the mate-team last week about the new packages, and came to know of the gtk3+ port, part of which was unfortunately postponed to 1.20.1, which is also complete and might come in a little later. I love MATE quite a bit for the functionality and yet low memory usage it provides. I tried to push to have a mate-desktop install CD but was denied. While the reasons weren’t elaborated, I can hypothesize some that might have been an influence –

a. Any -desktop CD would not be for a single architecture but all of the architectures.
b. Which in turn would bring storage headaches at the mirror network
c. Not to mention making sure that MATE is always in a releasable state, especially for point releases.

I have to admit that I have become a bit of a MATE fanboy since I started using it some time back.

The mate-team atm consists of Mike Gabriel and Martin Wimpress, with Martin usually doing the patching work while Mike does the uploads to the archive. There are well-wishers like me who chime in from time to time, but it probably needs 1 or 2 more dedicated people to make things easier. If you have the technical chops and want to learn packaging, it might be a good way to get into it. MATE isn’t big and heavy like GNOME, nor is it as light as some of the other competitors in the desktop space. It’s just right. Add to that, it brings its own unique theming and looks, which make it stand out from other desktops.

The only bad thing about it is that upstream is a bit secretive about what we can expect in releases around the corner and in the near/late future; probably part of the reason is constrained resources.

Update – For what it’s worth, they have started the package uploads of the new version, which means the pkg-mate-team repositories on alioth are now in read-only mode. While I dunno what the long-term plans for the alioth infrastructure are, I’d think that as more and more packages’ support shifts over, repositories will turn read-only and then at some point become a highly compressed data dump kept for historical purposes, where crazy people like me might come from time to time in order to look at the history of the packaging, collaboration or something about any of the teams or the packages that the teams maintained.

One of the interesting and yet frustrating things I have been seeing from afar is how nodejs is missing whole builds from source and, in turn, the reproducible-builds concept. I have been hoping to see it come into Debian, but it seems it will just take forever. Just this github discussion is enough to highlight the difference between the two cultures and understandings. And that is a problem, because you are then trusting this software with your data. As both a maintainer and a user you want something which you can trust.

In Debian, even if nothing else, at least many packages’ initial packaging has some hardening flags enabled, which at the very least give some protection. The goal though is to have the whole archive in hardened mode and more. The matrix wiki page is a nice page to watch to see how things are coming along in Debian.

I don’t want to delve too much into it, as there is a whole team called debian-security where probably a lot of white hats, grey hats and even black hats might be congregating.


Jonathan McDowell: First impressions of the Gemini PDA

Mon, 2018-03-19 16:41

Last March I discovered the IndieGoGo campaign for the Gemini PDA, a plan to produce a modern PDA with a decent keyboard inspired by the Psion 5. At that point in time the estimated delivery date was November 2017, and it wasn’t clear they were going to meet their goals. As someone who has owned a variety of phones with keyboards, from a Nokia 9000i to a T-Mobile G1, I’ve been disappointed by the lack of mobile devices with keyboards. The Gemini seemed like a potential option, so I backed it, paying a total of $369 including delivery. And then I waited. And waited. And waited.

Finally, one year and a day after I backed the project, I received my Gemini PDA. Now, I don’t get as much use out of such a device as I would have in the past. The Gemini is definitely not a primary phone replacement. It’s not much bigger than my aging Honor 7 but there’s no external display to indicate who’s calling and it’s a bit clunky to have to open it to dial (I don’t trust Google Assistant to cope with my accent enough to have it ring random people). The 9000i did this well with an external keypad and LCD screen, but then it was a brick so it had the real estate to do such things. Anyway. I have a laptop at home, a laptop at work and I cycle between the 2. So I’m mostly either in close proximity to something portable enough to move around the building, or travelling in a way that means I couldn’t use one.

My first opportunity to actually use the Gemini in anger therefore came last Friday, when I attended BelFOSS. I’d normally bring a laptop to a conference, but instead I decided to just bring the Gemini (in addition to my normal phone). I have the LTE version, so I put my FreedomPop SIM into it - this did limit the amount I could do with it due to the low data cap, but for a single day was plenty for SSH, email + web use. I already have the Pro version of the excellent JuiceSSH, am a happy user of K-9 Mail and tend to use Chrome these days as well. All 3 were obviously perfectly happy on the Android 7.1.1 install.

Aside: Why am I not running Debian on the device? Planet do have an image available from their Linux Support page, but it’s running on top of the crufty 3.18 Android kernel and isn’t yet a first-class citizen - it’s not clear the LTE will work outside Android easily and I’ve no hope of ARM opening up the Mali-T880 drivers. I’ve got plans to play around with improving the support, but for the moment I want to actually use the device a bit until I find sufficient time to be able to make progress.

So how did the day go? On the whole, a success. Battery life was great - I’d brought a USB battery pack expecting to need to boost the charge at some point, but I last charged it on Thursday night and at the time of writing it’s still claiming 25% battery left. LTE worked just fine; I had a 4G signal for most of the day with occasional drops down to 3G but no noticeable issues. The keyboard worked just fine; much better than my usual combo of a Nexus 7 + foldable Bluetooth keyboard. Some of the symbols aren’t where you’d expect, but that’s understandable on a scaled-down keyboard. Screen resolution is great. I haven’t used the USB-C ports other than to charge and back up so far, but I like the fact there are 2 provided (even if you need a custom cable to get HDMI rather than it following the proper standard). The device feels nice and solid in your hand - the case is mostly metal plates that remove to give access to the SIM slot and (non-removable but user-replaceable) battery. The hinge mechanism seems robust; I haven’t been worried about breaking it at any point since I got the device.

What about problems? I can’t deny there are a few. I ended up with a Mediatek X25 instead of an X27 - that matches what was initially promised, but there had been claims of an upgrade. Unfortunately issues at the factory meant that the initial production run got the older CPU. Later backers are supposed to get the upgrade. As someone who took the early risk this does leave a slightly bitter taste, but I doubt I’ll actually notice any significant performance difference. The keys on the keyboard are a little lopsided in places. This seems to be just a cosmetic thing and I haven’t noticed any issues in typing. The lack of first-class Debian support is disappointing, but I believe it will be resolved in time (by the community if not Planet). The camera isn’t as good as my phone’s, but then it’s a front-facing webcam style thing and it’s at least as good as my laptop’s.

Bottom line: Would I buy it again? At $369, absolutely. At the current $599? Probably not - I’m simply not on the move enough to need this on a regular basis, so I’d find it hard to justify. Maybe the 2nd gen, assuming it gets a bit more polish on the execution and proper mainline Linux support. Don’t get me wrong, I think the 1st gen is lovely and I’ve had lots of envious people admiring it, I just think it’s ended up priced a bit high for what it is. For the same money I’d be tempted by the GPD Pocket instead.


Vincent Bernat: Integration of a Go service with systemd: socket activation

Mon, 2018-03-19 04:28

In a previous post, I highlighted some useful features of systemd when writing a service in Go, notably to signal readiness and prove liveness. Another interesting bit is socket activation: systemd listens on behalf of the application and, on incoming traffic, starts the service with a copy of the listening socket. Lennart Poettering details in a blog post:

If a service dies, its listening socket stays around, not losing a single message. After a restart of the crashed service it can continue right where it left off. If a service is upgraded we can restart the service while keeping around its sockets, thus ensuring the service is continuously responsive. Not a single connection is lost during the upgrade.

This is one solution to get zero-downtime deployment for your application. Another upside is that you can run your daemon with fewer privileges—dropping privileges is a difficult task in Go.1
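On the systemd side, socket activation is wired up by pairing a .socket unit with a .service unit of the same name: systemd binds the port itself and only starts the service on the first connection, handing it the already-open socket. A minimal sketch (the unit names, port and binary path here are hypothetical):

```ini
# demo.socket – systemd owns the listening socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# demo.service – started on demand; the open socket is passed
# to the process as file descriptor 3
[Service]
ExecStart=/usr/local/bin/demo
```

Because the socket belongs to systemd rather than the service, restarting or upgrading the service never closes the port.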

The basics