Planet Debian

Planet Debian - https://planet.debian.org/

Aigars Mahinovs: Debconf 19 photos

6 hours 13 min ago

The main feed for my photos from Debconf 19 in Curitiba, Brazil is currently in my GPhoto album. I will later also sync it to Debconf git share.

The first batch is up, but now the hardest part comes - the group photo will be happening a bit later today :)

Categories: FLOSS Project Planets

Molly de Blanc: Free software activities (June 2019)

8 hours 1 min ago

I know this is almost a month late, but I am sharing it nonetheless. My June was dominated by my professional and personal life, leaving little time for expansive free software activities. I’ll write a little more in my OSI report for June.

Activities (Personal)
  • The biggest thing I did was head over to the Other Cambridge (a.k.a. Cambridge Prime, a.k.a. Cambridge, UK) for a Debian sprint with the Debian Project Leader, Debian Account Managers, and Debian Anti-Harassment team.
  • We had some Anti-Harassment meetings.
  • We had some Outreach meetings.
  • I helped both teams prep for DebConf.
Activities (Professional)
  • Worked on organizing sponsorships for GUADEC. If you’re interested in attending or sponsoring GUADEC, I highly recommend it!
  • Wrote profiles of members of the GNOME community for the GNOME Engagement blog. I also wrote a newsletter for Friends of GNOME. You can see both online.
  • Attended Diversity & Inclusion team meetings, participated in the Engagement team discussions, and spoke with several GUADEC organizers.
Categories: FLOSS Project Planets

Candy Tsai: Outreachy Week 6 – Week 7: Getting Code Merged

Mon, 2019-07-22 06:20

Already half way through the internship! I have implemented some features and opened a merge request. So… what now? Let’s get those changes merged once and for all! Since I’m already at mid-point, there’s also a video shared on what I’ve done so far in this project.

  • Breaking large merge request into smaller pieces
  • Thoughts on remote pair programming
  • Video sharing for the current progress with the project

Making that video was probably the most time-consuming part. Paying great respects to all YouTubers out there!

Breaking The Merge Request

When I looked back at my merge request, it actually started out quite small and precise. After discussions in the merge request, I kept fixing things in that same merge request, so it just got bigger and bigger, and we had to separate out the “mergeable parts” to make actual progress in this project.

Remote Pair Programming

You can’t overhear what others are doing or learn something about your colleagues through gossip over lunch break when working remotely. So after being stuck for quite a bit, terceiro suggested that we try pair programming.

After our first remote pair programming session, I think it is really no different from pair programming in person. We shared the same terminal, looked at the same code and discussed just like people standing side by side.

Through our pair programming session, I found out that I had a bad habit. I didn’t run tests on my code that often, so when I had failing tests that hadn’t failed before, I spent more time debugging than I should have. Pair programming gave insight into how others work, and I think little improvements go a long way.

Video Report of the Internship

This was initially made for DebConf 2019.

Week 6

And then I took almost a week off, so my week 7 was delayed.

Week 7

I found out that I can make small merge requests and list the merge requests each one depends on. GitLab will automatically handle the rest for me once a request is merged.

  • finally finished breaking down my large merge request
  • added the history section
Categories: FLOSS Project Planets

Daniel Lange: Security is hard, open source security unnecessarily harder

Sun, 2019-07-21 21:15

Now it is a commonplace that security is hard. It involves advanced mathematics and a single, tiny mistake or omission in implementation can spoil everything.

And the only sane IT security is open source security. Because you need to assess the algorithms and their implementation, and you need to be able to verify the implementation completely. You simply can't if you don't have the code and can't compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

Now, securely encrypting a mass communication medium like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless.[1]
Still, it makes wiretapping harder, and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. Cleaner code for the chosen few. Duh.

I predict the re-implementation and gradual-improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. Most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. No reference implementations with excellent test servers and well-documented test suites; just products. "Use my App.", "No, use MY App!!!".
  3. Only secures chat or email, i.e. the VC-powered ("next WhatsApp") mass-adoption markets, but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption the heavy lifting is still on servers. We need sane libraries and APIs. No App for that.

So we need a new development, a new code base, a new open source product. Sadly, the Core Infrastructure Initiative so far only funds existing open source projects in dire need and people hunting bugs.

It basically makes the bad solutions of today a bit more secure and ensures maintenance of decade old crufty code bases. That way it extends the suffering of everybody using the inadequate solutions of today.

That's inevitable until we have a better stack but we need to look into getting rid of gpg and OpenSSL and replacing it with something new. Something designed well from the ground up, technically and from a user experience perspective.

Now who's in for a five-year funding plan? $3m[2] annually. ROCE 0. But a very good chance to get the OBE awarded.

Updates:

21.07.19: A current essay on "The PGP problem" is making rounds and lists some valid issues with the file format, RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition correct. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another try to get GPG usable for more people than a small technical elite), he specifically created different identities for himself as a person and for his two main email addresses. Two days later he had to invalidate his new gpg key and fall back to less "modern" identity layouts, because many of the brittle pieces of infrastructure around gpg, from emacs to gpg signature management frontends to mailing list managers, fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably"[3] still good enough to use. To me a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. It doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk on the OpenPGP conference 2016 with similar content, slides.

14.02.17: James Stanley has written up a nice account of his two hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the last ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.
  1. Quakenet has removed that article, citing "near constant misrepresentation of the presented argument", sometime in 2018. The contents (not misrepresented) are still valid, so I have added an archive.org Wayback Machine link instead. 

  2. The estimate was $2m until the end of 2018. The longer we wait, the more expensive it'll get. And - obviously - ever harder. E.g. nobody needed to care about side-channel attacks on big.LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices. 

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

Categories: FLOSS Project Planets

Giovanni Mascellani: Bootstrappable Debian BoF

Sun, 2019-07-21 20:30

Greetings from DebConf 19 in Curitiba! Just a quick reminder that I will run a Bootstrappable Debian BoF on Tuesday 23rd, at 13.30 Brasilia time (which is 16.30 UTC, if I am not mistaken). If you are curious about bootstrappability in Debian, why we want it and where we are right now, you are welcome to come in person if you are at DebConf, or to follow the streaming.

Categories: FLOSS Project Planets

Vincent Bernat: A Makefile for your Go project (2019)

Sun, 2019-07-21 15:20

My most loathed feature of Go was the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. I was not alone and people devised tools or crafted their own Makefile to avoid organizing their code around GOPATH.

Fortunately, since Go 1.11, it is possible to use Go’s modules to manage dependencies without relying on GOPATH. First, you need to convert your project to a module:[1]

$ go mod init hellogopher
go: creating new go.mod: module hellogopher
$ cat go.mod
module hellogopher

Then, you can invoke the usual commands, like go build or go test. The go command resolves imports by using versions listed in go.mod. When it runs into an import of a package not present in go.mod, it automatically looks up the module containing that package using the latest version and adds it.

$ go test ./...
go: finding github.com/spf13/cobra v0.0.5
go: downloading github.com/spf13/cobra v0.0.5
?       hellogopher     [no test files]
?       hellogopher/cmd [no test files]
ok      hellogopher/hello       0.001s
$ cat go.mod
module hellogopher
require github.com/spf13/cobra v0.0.5

If you want a specific version, you can either edit go.mod or invoke go get:

$ go get github.com/spf13/cobra@v0.0.4
go: finding github.com/spf13/cobra v0.0.4
go: downloading github.com/spf13/cobra v0.0.4
$ cat go.mod
module hellogopher
require github.com/spf13/cobra v0.0.4

Add go.mod to your version control system. Optionally, you can also add go.sum as a safety net against overridden tags. If you really want to vendor the dependencies, you can invoke go mod vendor and add the vendor/ directory to your version control system.
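
For reference, a minimal sketch of the vendoring workflow just described (assuming the project is tracked with Git):

$ go mod vendor                  # copy the modules required by go.mod into vendor/
$ git add go.mod go.sum vendor/  # track them alongside the code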

Thanks to the modules, in my opinion, Go’s dependency management is now on a par with other languages, like Ruby. While it is possible to run day-to-day operations—building and testing—with only the go command, a Makefile can still be useful to organize common tasks, a bit like Python’s setup.py or Ruby’s Rakefile. Let me describe mine.

Using third-party tools

Most projects need some third-party tools for testing or building. We can either expect them to be already installed or compile them on the fly. For example, here is how code linting is done with Golint:

BIN = $(CURDIR)/bin
$(BIN):
	@mkdir -p $@
$(BIN)/%: | $(BIN)
	@tmp=$$(mktemp -d); \
	   env GO111MODULE=off GOPATH=$$tmp GOBIN=$(BIN) go get $(PACKAGE) \
	    || ret=$$?; \
	   rm -rf $$tmp ; exit $$ret

$(BIN)/golint: PACKAGE=golang.org/x/lint/golint

GOLINT = $(BIN)/golint
lint: | $(GOLINT)
	$(GOLINT) -set_exit_status ./...

The first block defines how a third-party tool is built: go get is invoked with the package name matching the tool we want to install. We do not want to pollute our dependency management and therefore, we are working in an empty GOPATH. The generated binaries are put in bin/.

The second block extends the pattern rule defined in the first block by providing the package name for golint. Additional tools can be added by just adding another line like this.

The last block defines the recipe to lint the code. The default linting tool is the golint binary built using the first block, but it can be overridden with make GOLINT=/usr/bin/golint.

Tests

Here are some rules to help running tests:

TIMEOUT  = 20
PKGS     = $(or $(PKG),$(shell env GO111MODULE=on $(GO) list ./...))
TESTPKGS = $(shell env GO111MODULE=on $(GO) list -f \
            '{{ if or .TestGoFiles .XTestGoFiles }}{{ .ImportPath }}{{ end }}' \
            $(PKGS))

TEST_TARGETS := test-default test-bench test-short test-verbose test-race
test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
test-short:   ARGS=-short
test-verbose: ARGS=-v
test-race:    ARGS=-race
$(TEST_TARGETS): test
check test tests: fmt lint
	go test -timeout $(TIMEOUT)s $(ARGS) $(TESTPKGS)

A user can invoke tests in different ways:

  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented, otherwise the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want an output compatible with Jenkins, you need some additional tools.

COVERAGE_MODE    = atomic
COVERAGE_PROFILE = $(COVERAGE_DIR)/profile.out
COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml
COVERAGE_HTML    = $(COVERAGE_DIR)/index.html

test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❶
test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
test-coverage: fmt lint test-coverage-tools
	@mkdir -p $(COVERAGE_DIR)/coverage
	@for pkg in $(TESTPKGS); do \ # ❷
		go test \
			-coverpkg=$$(go list -f '{{ join .Deps "\n" }}' $$pkg | \
					grep '^$(MODULE)/' | \
					tr '\n' ',')$$pkg \
			-covermode=$(COVERAGE_MODE) \
			-coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
	 done
	@$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
	@go tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)
	@$(GOCOV) convert $(COVERAGE_PROFILE) | $(GOCOVXML) > $(COVERAGE_XML)

First, we define some variables to let the user override them. In ❶, we require the following tools—built like golint previously:

  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format, for Jenkins;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.

In ❷, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg by using go list to get the list of dependencies for the tested package and keeping only our own.

Build

Another useful recipe is to build the program. While this could be done with just go build, it is not uncommon to have to specify build tags, additional flags, or to execute supplementary build steps. In the following example, the version is extracted from Git tags. It will replace the value of the Version variable in the hellogopher/cmd package.

VERSION ?= $(shell git describe --tags --always --dirty --match=v* 2> /dev/null || \
            echo v0)
all: fmt lint | $(BIN)
	go build \
		-tags release \
		-ldflags '-X hellogopher/cmd.Version=$(VERSION)' \
		-o $(BIN)/hellogopher main.go

The recipe also runs code formatting and linting.

The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks, including fancy output and integrated help!

  1. For an application not meant to be used as a library, I prefer to use a short name instead of a name derived from a URL, like github.com/vincentbernat/hellogopher. It makes it easier to read import sections:

    import (
        "fmt"
        "os"

        "hellogopher/cmd"

        "github.com/pkg/errors"
        "github.com/spf13/cobra"
    )


Categories: FLOSS Project Planets

Bits from Debian: DebConf19 starts today in Curitiba

Sun, 2019-07-21 15:10

DebConf19, the 20th annual Debian Conference, is taking place in Curitiba, Brazil from July 21 to 28, 2019.

Debian contributors from all over the world have come together at Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, to participate and work in a conference exclusively run by volunteers.

Today the main conference starts with over 350 attendees expected and 121 activities scheduled, including 45- and 20-minute talks and team meetings ("BoF"), workshops, a job fair as well as a variety of other events.

The full schedule at https://debconf19.debconf.org/schedule/ is updated every day, including activities planned ad-hoc by attendees during the whole conference.

If you want to engage remotely, you can follow the video streaming of the events happening in the three talk rooms, available from the DebConf19 website: Auditório (the main auditorium), Miniauditório and Sala de Videoconferencia. Or you can join the conversation about what is happening in the talk rooms: #debconf-auditorio, #debconf-miniauditorio and #debconf-videoconferencia (all those channels are on the OFTC IRC network).

You can also follow the live coverage of news about DebConf19 on https://micronews.debian.org or the @debian profile in your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Anti-Harassment team) are available to help so that both on-site and remote participants get the best experience at the conference, and find solutions to any issue that may arise. See the web page about the Code of Conduct on the DebConf19 website for more details on this.

Debian thanks the numerous sponsors for their commitment to supporting DebConf19, particularly our Platinum Sponsors: Infomaniak, Google and Lenovo.

Categories: FLOSS Project Planets

Holger Levsen: 20190721-piuparts-was-not-down

Sun, 2019-07-21 13:31
piuparts.debian.org wasn't down for maintenance

I hadn't actually shut down piuparts.debian.org for maintenance; I just said so to make you attend my talk, as my last call for help at DebConf17 was attended by only 3 people...

So please join the session about piuparts(d.o.) today at 14:30 localtime.

Please help help help!

Categories: FLOSS Project Planets

Sylvain Beucler: Planet clean-up

Sun, 2019-07-21 12:57

I did some clean-up / resync on the planet.gnu.org setup:

  • Fix issue with newer https websites (SNI)
  • Re-sync Debian base config, scripts and packaging, update documentation; the planet-venus package is still in bad shape though, it's not officially orphaned but the maintainer is unreachable AFAICS
  • Fetch all Savannah feeds using https
  • Update feeds with redirections, which seem to mess up caching
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RPushbullet 0.3.2

Sun, 2019-07-21 10:28

A new release 0.3.2 of the RPushbullet package is now on CRAN. RPushbullet interfaces with the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts to your browser, phone, tablet, … – or all at once.

This is the first new release in almost 2 1/2 years, and it once again benefits greatly from contributed pull requests by Colin (twice!) and Chan-Yub – see below for details.

Changes in version 0.3.2 (2019-07-21)
  • The Travis setup was robustified with respect to the token needed to run tests (Dirk in #48)

  • The configuration file is now readable only by the user (Colin Gillespie in #50)

  • At startup initialization is now more consistent (Colin Gillespie in #53 fixing #52)

  • A new function to fetch prior posts was added (Chanyub Park in #54).

Courtesy of CRANberries, there is also a diffstat report for this release. More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Jose M. Calhariz: New release of switchconf 0.0.16

Sat, 2019-07-20 19:56

I have not touched switchconf for a long time. Being at DebCamp19 was a good time to work on it.

I have moved the development of switchconf from a private svn repo to a git repo on salsa: https://salsa.debian.org/debian/switchconf. I also created a virtual host, http://software.calhariz.com, where I will publish the sources of the software that I take care of. Finally, I updated the Makefile to point to the git repo and released version 0.0.16.

You can download the latest version of switchconf from here: http://software.calhariz.com/switchconf

Categories: FLOSS Project Planets

John Goerzen: Alas, Poor PGP

Sat, 2019-07-20 19:15

Over in The PGP Problem, there’s an extended critique of PGP (and also specifics of the GnuPG implementation) in a modern context. Robert J. Hansen, one of the core GnuPG developers, has an interesting response:

First, RFC4880bis06 (the latest version) does a pretty good job of bringing the crypto angle to a more modern level. There’s a massive installed base of clients that aren’t aware of bis06, and if you have to interoperate with them you’re kind of screwed: but there’s also absolutely nothing prohibiting you from saying “I’m going to only implement a subset of bis06, the good modern subset, and if you need older stuff then I’m just not going to comply.” Sequoia is more or less taking this route — more power to them.

Second, the author makes a couple of mistakes about the default ciphers. GnuPG has defaulted to AES for many years now: CAST5 is supported for legacy reasons (and I’d like to see it dropped entirely: see above, etc.).

Third, a couple of times the author conflates what the OpenPGP spec requires with what it permits, and with how GnuPG implements it. Cleaner delineation would’ve made the criticisms better, I think.

But all in all? It’s a good criticism.

The problem is, where does that leave us? I found the suggestions in the original author’s article (mainly around using IM apps such as Signal) to be unworkable in a number of situations.

The Problems With PGP

Before moving on, let’s tackle some of the problems identified.

The first is an assertion that email is inherently insecure and can’t be made secure. There are some fairly convincing arguments to be made on that score; as it currently stands, there is little ability to hide metadata from prying eyes. And any format that is capable of talking on the network — as HTML is — is just begging for vulnerabilities like EFAIL.

But PGP isn’t used just for this. In fact, one could argue that sending a binary PGP message as an attachment gets around a lot of that email clunkiness — and would be right, at the expense of potentially more clunkiness (and forgetfulness).

What about the web-of-trust issues? I’m in agreement. I have never really used WoT to authenticate a key, only in rare instances trusting an introducer I know personally and from personal experience understand how stringent they are in signing keys. But this is hardly a problem for PGP alone. Every encryption tool mentioned has the problem of validating keys. The author suggests Signal. Signal has some very strong encryption, but you have to have a phone number and a smartphone to use it. Signal’s strength when setting up a remote contact is as strong as SMS. Let that disheartening reality sink in for a bit. (A little social engineering could probably get many contacts to accept a hijacked SIM in Signal as well.)

How about forward secrecy? This is protection against a private key that gets compromised in the future, because an ephemeral session key (or more than one) is negotiated on each communication, and the secret key is never stored. This is a great plan, but it really requires synchronous communication (or something approaching it) between the sender and the recipient. It can’t be used if I want to, for instance, burn a backup onto a Bluray and give it to a friend for offsite storage without giving the friend access to its contents. There are many, many situations where synchronous key negotiation is impossible, so although forward secrecy is great and a nice enhancement, we should not assume it is always applicable.

The saltpack folks have a more targeted list of PGP message format problems. Both they, and the article I link above, complain about the gpg implementation of PGP. There is no doubt truth to these. Among them is a complaint that gpg can emit unverified data. Well sure, because it has a streaming mode. It exits with a proper error code and warnings if a verification fails at the end — just as gzcat does. This is a part of the API that the caller needs to be aware of. It sounds like some callers weren’t handling this properly, but it’s just a function of a streaming tool.
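
As a rough illustration of that caller responsibility (a sketch, not any project's official recipe; file names are made up): when streaming, the output has to be treated as untrusted until gpg's exit status has been checked.

# Decrypt/verify a stream; only trust the output if gpg exits successfully.
if gpg --decrypt message.gpg > message.out; then
    echo "verification succeeded; message.out can be trusted"
else
    echo "verification failed; discarding output" >&2
    rm -f message.out
fi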

Suggested Solutions

The Signal suggestion is perfectly reasonable in a lot of cases. But the suggestion to use WhatsApp — a proprietary application from a corporation known to brazenly lie about privacy — is suspect. It may have great crypto, but if it uploads your address book to a suspicious company, is it a great app?

Magic Wormhole is a pretty neat program I hadn’t heard of before. But it should be noted it’s written in Python, so it’s probably unlikely to be using locked memory.

How about backup encryption? Backups are a lot more than just filesystems; maybe somebody has a 100GB MySQL database or a zfs send stream. How should this be encrypted?
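
Today this usually means piping the stream through gpg with the recipient's public key, roughly like the following sketch (dataset, address and output path are invented). It is exactly the kind of use case the chat-app suggestions do not cover.

# Sketch: stream a snapshot through gpg to a friend's public key for offsite storage.
zfs send tank/data@backup-2019-07-20 \
    | gpg --encrypt --recipient friend@example.org \
    > /media/offsite/backup.zfs.gpg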

My current estimate is that there’s no magic solution right now. The Sequoia PGP folks seem to have a good thing going, as does Saltpack. Both projects are early in development, so as a privacy-concerned person, should you trust them more than GPG with appropriate options? That’s really hard to say.

Additional Discussions
Categories: FLOSS Project Planets

Gunnar Wolf: DebConf19 Key Signing Party: Your personalized map is ready!

Sat, 2019-07-20 14:13

When facing a large key signing party, even in a group where you are already well connected socially, you often lose track of whom you have already cross-signed with, and of who is farther away from you (in the interest of better weaving the Web of Trust)...

So, after Samuel announced the DebConf19 KSP fingerprints list, I hacked a bit to improve the scripts I used in previous years, and... Behold!

The DC19 KSP personalized maps!

This time it's even color-coded! People you have not cross-signed with are in light grey. People whose keys have been signed by you are presented with blue text. People that have signed your key are presented with green background. Of course, people you have cross-signed with have blue text and green background :-]

The graph is up to date as of early today, pulling the data from keys.gnupg.net. Sorry for the huge size, but it's the only way I found to make it useful for seeing both the big picture and the detailed information. Of course — you can zoom in and out at will!

Categories: FLOSS Project Planets

Bits from Debian: DebConf19 invites you to Debian Open Day at the Federal University of Technology - Paraná (UTFPR), in Curitiba

Sat, 2019-07-20 12:15

DebConf, the annual conference for Debian contributors and users interested in improving the Debian operating system, will be held at the Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28, 2019. The conference is preceded by DebCamp from July 14 to 19, and the DebConf19 Open Day on July 20.

The Open Day, Saturday, 20 July, is targeted at the general public. Events of interest to a wider audience will be offered, ranging from topics specific to Debian to the greater Free Software community and maker movement.

The event is a perfect opportunity for interested users to meet the Debian community, for Debian to broaden its community, and for the DebConf sponsors to increase their visibility.

Less purely technical than the main conference schedule, the events on Open Day will cover a large range of topics from social and cultural issues to workshops and introductions to Debian.

The detailed schedule of the Open Day's events includes events in English and Portuguese. Some of the talks are:

  • "The metaverse, gaming and the metabolism of cities" by Bernelle Verster
  • "O Projeto Debian quer você!" by Paulo Henrique de Lima Santana
  • "Protecting Your Web Privacy with Free Software" by Pedro Barcha
  • "Bastidores Debian - Entenda como a distribuição funciona" by Joao Eriberto Mota Filho
  • "Caninos Loucos: a plataforma nacional de Single Board Computers para IoT" by geonnave
  • "Debian na vida de uma Operadora de Telecom" by Marcelo Gondim
  • "Who's afraid of Spectre and Meltdown?" by Alexandre Oliva
  • "New to DebConf BoF" by Rhonda D'Vine

During the Open Day, there will also be a Job Fair with booths from several of our sponsors, a workshop about the Git version control system and a Debian installfest, for attendees who would like to get help installing Debian on their machines.

Everyone is welcome to attend. As with the rest of the conference, attendance is free of charge, but registration on the DebConf19 website is highly recommended.

The full schedule for the Open Day's events and the rest of the conference is at https://debconf19.debconf.org/schedule and the video streaming will be available on the DebConf19 website.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the numerous sponsors for their commitment to DebConf19, particularly its Platinum Sponsors: Infomaniak, Google and Lenovo.

Categories: FLOSS Project Planets

Michael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Sat, 2019-07-20 12:06

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.
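
For illustration, here is a hedged sketch of what such a hook might look like on Debian, doing the first of these: a hypothetical postinst maintainer script (the package and user name "exampled" are invented for the example).

#!/bin/sh
# Hypothetical postinst for an imaginary "exampled" daemon package.
set -e

case "$1" in
    configure)
        # Create a dedicated system user on first installation.
        if ! getent passwd exampled >/dev/null; then
            adduser --system --group --home /var/lib/exampled exampled
        fi
        ;;
esac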

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers commonly can make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
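
A sketch of what such a declarative interface can look like today, using systemd’s sysusers.d mechanism (the package and user name are invented):

# A package ships a one-line declaration instead of a user-creating script...
cat > /usr/lib/sysusers.d/exampled.conf << 'EOF'
u exampled - "Example daemon user" /var/lib/exampled
EOF
# ...and a single tool applies all such declarations in one pass.
systemd-sysusers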

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.
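
For the sshd example, the hook can be replaced by a lazy check at service start, roughly along these lines (a sketch, not the actual Fedora or Debian implementation):

# Run before sshd starts (e.g. from a start-up wrapper or service unit):
# generate any missing host keys with the default types and paths.
if [ ! -s /etc/ssh/ssh_host_ed25519_key ]; then
    ssh-keygen -A
fi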

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Categories: FLOSS Project Planets

Steinar H. Gunderson: Nageru 1.9.0 released

Sat, 2019-07-20 11:45

I've just released version 1.9.0 of Nageru, my live video mixer. This contains some fairly significant changes to the way themes work, and I'd like to elaborate a bit about why:

Themes in Nageru govern what's put on screen at any given time (this includes the actual output, of course, but also the preview channels shown in the UI). They were always a compromise between flexibility and implementation cost; with limited resources, I just could not create a full-fledged animation studio like VizRT has.

Themes work by defining chains (now called scenes) at startup, which get optimized and compiled down to a set of OpenGL shaders. In the beginning, most chains were fairly pedestrian; take an input and put it on screen:

local chain = EffectChain.new(16, 9)
input = chain:add_live_input(false, false)
chain:finalize(true)

You'd actually have to create each chain twice, since the live output and the previews need different output formats (Y'CbCr vs. RGB), but that wasn't worse than a little for loop and calling finalize() with true or false, respectively.

After a while, one would want to support e.g. 1080p inputs in a 720p stream. By default, those would be scaled directly by the GPU, which is acceptable but not the best one could do, so one would add a high-quality Lanczos3 resampler:

local chain = EffectChain.new(16, 9)
input = chain:add_live_input(false, false)
local resample_effect = chain:add_effect(ResampleEffect.new())
resample_effect:set_int("width", 1280)
resample_effect:set_int("height", 720)
chain:finalize(true)

But you wouldn't want to waste GPU resources on resampling if the input signal were already the right resolution, so you'd build chains with and without resampling and choose the right one ahead-of-time.

At some point, Nageru started supporting interlaced inputs, by means of deinterlacing them. This requires adding a deinterlacer in the chain and also keeping some history of previous frames; most of this is transparent, but it would need to be specified when building the chain. So now we're up to eight possibilities; all combinations of deinterlacing on/off, scaling on/off, and preview or live output.

And as I started doing sports, I wanted fades. This means you would have two different inputs to deal with, and you're up to 32 different kinds. And as Nageru started to support multiple input types, such as images or HTML inputs (rendered via an embedded Chromium), there would be even more. And the for loops would grow, and be replaced by some fairly elaborate multidimensional Lua tables, and as I one day needed to add crop support for some inputs to alleviate letterboxing, I thought working with themes does not spark joy and it was time to do something.

So Nageru 1.9.0 moves a lot of this complexity to where you no longer need to think about it. You just do add_input(), and that can display any kind of input (be it progressive, deinterlaced, image, or HTML). And you can add optional effects, such as the resampling mentioned above, and turn them on and off as needed. You're still writing themes in Lua instead of drawing boxes in a neat GUI, and there's still combinatorial explosion behind the scenes (no pun intended), but it's much, much more manageable. Here's an example from the included theme:

local scene = Scene.new(16, 9)
local simple_scene = {
    scene = scene,
    input = scene:add_input(),
    resample_effect = scene:add_effect({ResampleEffect.new(), ResizeEffect.new(), IdentityEffect.new()}),
    wb_effect = scene:add_effect(WhiteBalanceEffect.new())
}
scene:finalize()

So that's an input (of any kind), a high-quality resize, low-quality resize or no resize, a white balance adjustment, and then finalization. This becomes 24 different sets of shaders internally, and you don't really need to know anything about it. You just do

simple_scene.resample_effect:choose(ResampleEffect)
simple_scene.resample_effect:set_int("width", width)
simple_scene.resample_effect:set_int("height", height)

or

simple_scene.resample_effect:disable() -- No scaling.

There are also many other small tweaks to how themes work, I believe all of them strongly for the better. However, all old themes continue to work as before; I don't like breaking people's hard work for no reason. I do recommend you move to the newer interfaces as soon as possible, though!

As usual, Nageru 1.9.0 can be downloaded from https://nageru.sesse.net/. It is also uploaded to Debian experimental, but not to unstable yet—it depends on a newer version of bmusb clearing the NEW queue for a soname bump. The documentation is updated with the new theme interfaces, too.

Categories: FLOSS Project Planets

Holger Levsen: 20190719-piuparts-down

Sat, 2019-07-20 11:14
piuparts.debian.org down for maintenance

So I've just shut down piuparts.debian.org for maintenance, the website is still up but the slaves won't be running for the next week. I think this will block testing migration for a few packages, but probably that's how it is.

If you want to know more, please join my session about piuparts(d.o.) tomorrow on the first day of DebConf19 at 14:30 localtime.

With a little help from some friends the service should soon be running nicely again for many more years!

Please help help help!

Categories: FLOSS Project Planets

Hideki Yamane: Debian 10 "buster" release party @Tokyo (7/7)

Sat, 2019-07-20 11:01

We ate a delicious cake to celebrate the Debian 10 "buster" release at a party in Tokyo (my employer provided the venue, cake and wine. Thanks to SIOS Technology, Inc.! :)

Hope we'll do the same in 2 years for "bullseye".


Categories: FLOSS Project Planets

Vincent Bernat: Sustainable Python scripts

Sat, 2019-07-20 09:15

Python is a great language to write a standalone script. Getting to the result can be a matter of a dozen to a few hundred lines of code and, moments later, you can forget about it and focus on your next task.

Six months later, a co-worker asks you why the script fails and you don’t have a clue: no documentation, hard-coded parameters, nothing logged during the execution and no sensible tests to figure out what may go wrong.

Turning a “quick-and-dirty” Python script into a sustainable version, which will be easy to use, understand and support by your co-workers and your future self, only takes some moderate effort. As an illustration, let’s start from the following script solving the classic Fizz-Buzz test:

import sys

for n in range(int(sys.argv[1]), int(sys.argv[2])):
    if n % 3 == 0 and n % 5 == 0:
        print("fizzbuzz")
    elif n % 3 == 0:
        print("fizz")
    elif n % 5 == 0:
        print("buzz")
    else:
        print(n)

Documentation

I find it useful to write documentation before coding: it makes the design easier and it ensures I will not postpone this task indefinitely. The documentation can be embedded at the top of the script:

#!/usr/bin/env python3

"""Simple fizzbuzz generator.

This script prints out a sequence of numbers from a provided range
with the following restrictions:

 - if the number is divisible by 3, then print out "fizz",
 - if the number is divisible by 5, then print out "buzz",
 - if the number is divisible by 3 and 5, then print out "fizzbuzz".
"""

The first line is a short summary of the script's purpose. The remaining paragraphs contain additional details on its action.

Command-line arguments

The second task is to turn hard-coded parameters into documented and configurable values through command-line arguments, using the argparse module. In our example, we ask the user to specify a range and allow them to modify the modulo values for “fizz” and “buzz”.

import argparse
import sys


class CustomFormatter(argparse.RawDescriptionHelpFormatter,
                      argparse.ArgumentDefaultsHelpFormatter):
    pass


def parse_args(args=sys.argv[1:]):
    """Parse arguments."""
    parser = argparse.ArgumentParser(
        description=sys.modules[__name__].__doc__,
        formatter_class=CustomFormatter)

    g = parser.add_argument_group("fizzbuzz settings")
    g.add_argument("--fizz", metavar="N", default=3, type=int,
                   help="Modulo value for fizz")
    g.add_argument("--buzz", metavar="N", default=5, type=int,
                   help="Modulo value for buzz")

    parser.add_argument("start", type=int, help="Start value")
    parser.add_argument("end", type=int, help="End value")

    return parser.parse_args(args)


options = parse_args()
for n in range(options.start, options.end + 1):
    # ...

The added value of this modification is tremendous: parameters are now properly documented and are discoverable through the --help flag. Moreover, the documentation we wrote in the previous section is also displayed:

$ ./fizzbuzz.py --help
usage: fizzbuzz.py [-h] [--fizz N] [--buzz N] start end

Simple fizzbuzz generator.

This script prints out a sequence of numbers from a provided range
with the following restrictions:

 - if the number is divisible by 3, then print out "fizz",
 - if the number is divisible by 5, then print out "buzz",
 - if the number is divisible by 3 and 5, then print out "fizzbuzz".

positional arguments:
  start       Start value
  end         End value

optional arguments:
  -h, --help  show this help message and exit

fizzbuzz settings:
  --fizz N    Modulo value for fizz (default: 3)
  --buzz N    Modulo value for buzz (default: 5)

The argparse module is quite powerful. If you are not familiar with it, skimming through the documentation is helpful. I like to use the ability to define sub-commands and argument groups.

Logging

A nice addition to a script is to display information during its execution. The logging module is a good fit for this purpose. First, we define the logger:

import logging
import logging.handlers
import os
import sys

logger = logging.getLogger(
    os.path.splitext(os.path.basename(sys.argv[0]))[0])

Then, we make its verbosity configurable: logger.debug() should output something only when a user runs our script with --debug, and --silent should mute the logs unless an exceptional condition occurs. For this purpose, we add the following code in parse_args():

# In parse_args()
g = parser.add_mutually_exclusive_group()
g.add_argument("--debug", "-d", action="store_true",
               default=False,
               help="enable debugging")
g.add_argument("--silent", "-s", action="store_true",
               default=False,
               help="don't log to console")

We add this function to configure logging:

def setup_logging(options):
    """Configure logging."""
    root = logging.getLogger("")
    root.setLevel(logging.WARNING)
    logger.setLevel(options.debug and logging.DEBUG or logging.INFO)
    if not options.silent:
        ch = logging.StreamHandler()
        ch.setFormatter(logging.Formatter(
            "%(levelname)s[%(name)s] %(message)s"))
        root.addHandler(ch)

The main body of our script becomes this:

if __name__ == "__main__":
    options = parse_args()
    setup_logging(options)

    try:
        logger.debug("compute fizzbuzz from {} to {}".format(options.start,
                                                             options.end))
        for n in range(options.start, options.end + 1):
            # ...
    except Exception as e:
        logger.exception("%s", e)
        sys.exit(1)
    sys.exit(0)

If the script may run unattended (e.g. from a crontab), we can make it log to syslog:

def setup_logging(options):
    """Configure logging."""
    root = logging.getLogger("")
    root.setLevel(logging.WARNING)
    logger.setLevel(options.debug and logging.DEBUG or logging.INFO)
    if not options.silent:
        if not sys.stderr.isatty():
            facility = logging.handlers.SysLogHandler.LOG_DAEMON
            sh = logging.handlers.SysLogHandler(address='/dev/log',
                                                facility=facility)
            sh.setFormatter(logging.Formatter(
                "{0}[{1}]: %(message)s".format(
                    logger.name,
                    os.getpid())))
            root.addHandler(sh)
        else:
            ch = logging.StreamHandler()
            ch.setFormatter(logging.Formatter(
                "%(levelname)s[%(name)s] %(message)s"))
            root.addHandler(ch)

For this example, this is a lot of code just to use logger.debug() once, but in a real script, this will come in handy to help users understand how the task is completed.

$ ./fizzbuzz.py --debug 1 3
DEBUG[fizzbuzz] compute fizzbuzz from 1 to 3
1
2
fizz

Tests

Unit tests are very useful to ensure an application behaves as intended. It is not common to use them in scripts, but writing a few of them greatly improves their reliability. Let’s turn the code in the inner “for” loop into a function with some interactive examples of use in its documentation:

def fizzbuzz(n, fizz, buzz):
    """Compute fizzbuzz nth item given modulo values for fizz and buzz.

    >>> fizzbuzz(5, 3, 5)
    'buzz'
    >>> fizzbuzz(3, 3, 5)
    'fizz'
    >>> fizzbuzz(15, 3, 5)
    'fizzbuzz'
    >>> fizzbuzz(4, 3, 5)
    4
    >>> fizzbuzz(4, 4, 6)
    'fizz'

    """
    if n % fizz == 0 and n % buzz == 0:
        return "fizzbuzz"
    if n % fizz == 0:
        return "fizz"
    if n % buzz == 0:
        return "buzz"
    return n

pytest can ensure the results are correct:[1]

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
============================ test session starts =============================
platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/bernat/code/perso/python-script, inifile:
plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
collected 1 item

fizzbuzz.py::fizzbuzz.fizzbuzz PASSED                                  [100%]

========================== 1 passed in 0.05 seconds ==========================

In case of an error, pytest displays a message describing the location and the nature of the failure:

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py -k fizzbuzz.fizzbuzz
============================ test session starts =============================
platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/bernat/code/perso/python-script, inifile:
plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
collected 1 item

fizzbuzz.py::fizzbuzz.fizzbuzz FAILED                                  [100%]

================================== FAILURES ==================================
________________________ [doctest] fizzbuzz.fizzbuzz _________________________
100
101     >>> fizzbuzz(5, 3, 5)
102     'buzz'
103     >>> fizzbuzz(3, 3, 5)
104     'fizz'
105     >>> fizzbuzz(15, 3, 5)
106     'fizzbuzz'
107     >>> fizzbuzz(4, 3, 5)
108     4
109     >>> fizzbuzz(4, 4, 6)
Expected:
    fizz
Got:
    4

/home/bernat/code/perso/python-script/fizzbuzz.py:109: DocTestFailure
========================== 1 failed in 0.02 seconds ==========================

We can also write unit tests as code. Let’s suppose we want to test the following function:

def main(options):
    """Compute a fizzbuzz set of strings and return them as an array."""
    logger.debug("compute fizzbuzz from {} to {}".format(options.start,
                                                         options.end))
    return [str(fizzbuzz(i, options.fizz, options.buzz))
            for i in range(options.start, options.end+1)]

At the end of the script,[2] we add the following unit tests, leveraging pytest’s parametrized test functions:

# Unit tests
import pytest            # noqa: E402
import shlex             # noqa: E402


@pytest.mark.parametrize("args, expected", [
    ("0 0", ["fizzbuzz"]),
    ("3 5", ["fizz", "4", "buzz"]),
    ("9 12", ["fizz", "buzz", "11", "fizz"]),
    ("14 17", ["14", "fizzbuzz", "16", "17"]),
    ("14 17 --fizz=2", ["fizz", "buzz", "fizz", "17"]),
    ("17 20 --buzz=10", ["17", "fizz", "19", "buzz"]),
])
def test_main(args, expected):
    options = parse_args(shlex.split(args))
    options.debug = True
    options.silent = True
    setup_logging(options)
    assert main(options) == expected

The test function runs once for each of the provided parameters. The args part is used as input for the parse_args() function to get the appropriate options we need to pass to the main() function. The expected part is compared to the result of the main() function. When everything works as expected, pytest says:

python3 -m pytest -v --doctest-modules ./fizzbuzz.py
============================ test session starts =============================
platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/bernat/code/perso/python-script, inifile:
plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
collected 7 items

fizzbuzz.py::fizzbuzz.fizzbuzz PASSED                                  [ 14%]
fizzbuzz.py::test_main[0 0-expected0] PASSED                           [ 28%]
fizzbuzz.py::test_main[3 5-expected1] PASSED                           [ 42%]
fizzbuzz.py::test_main[9 12-expected2] PASSED                          [ 57%]
fizzbuzz.py::test_main[14 17-expected3] PASSED                         [ 71%]
fizzbuzz.py::test_main[14 17 --fizz=2-expected4] PASSED                [ 85%]
fizzbuzz.py::test_main[17 20 --buzz=10-expected5] PASSED               [100%]

========================== 7 passed in 0.03 seconds ==========================

When an error occurs, pytest provides a useful assessment of the situation:

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
[...]
================================== FAILURES ==================================
__________________________ test_main[0 0-expected0] __________________________

args = '0 0', expected = ['0']

    @pytest.mark.parametrize("args, expected", [
        ("0 0", ["0"]),
        ("3 5", ["fizz", "4", "buzz"]),
        ("9 12", ["fizz", "buzz", "11", "fizz"]),
        ("14 17", ["14", "fizzbuzz", "16", "17"]),
        ("14 17 --fizz=2", ["fizz", "buzz", "fizz", "17"]),
        ("17 20 --buzz=10", ["17", "fizz", "19", "buzz"]),
    ])
    def test_main(args, expected):
        options = parse_args(shlex.split(args))
        options.debug = True
        options.silent = True
        setup_logging(options)
>       assert main(options) == expected
E       AssertionError: assert ['fizzbuzz'] == ['0']
E         At index 0 diff: 'fizzbuzz' != '0'
E         Full diff:
E         - ['fizzbuzz']
E         + ['0']

fizzbuzz.py:160: AssertionError
----------------------------- Captured log call ------------------------------
fizzbuzz.py                125 DEBUG    compute fizzbuzz from 0 to 0
===================== 1 failed, 6 passed in 0.05 seconds =====================

The call to logger.debug() is included in the output. This is another good reason to use the logging feature! If you want to know more about the wonderful features of pytest, have a look at “Testing network software with pytest and Linux namespaces.”

To sum up, enhancing a Python script to make it more sustainable can be done in four steps:

  1. add documentation at the top,
  2. use the argparse module to document the different parameters,
  3. use the logging module to log details about progress, and
  4. add some unit tests.

You can find the complete example on GitHub and use it as a template!

  1. This requires the script name to end with .py. I dislike appending an extension to a script name: the language is a technical detail that shouldn’t be exposed to the user. However, it seems to be the easiest way to let test runners, like pytest, discover the enclosed tests. ↩︎

  2. Because the script ends with a call to sys.exit(), when invoked normally, the additional code for tests will not be executed. This ensures pytest is not needed to run the script. ↩︎

Categories: FLOSS Project Planets

Sean Whitton: Debian Policy call for participation -- July 2019

Sat, 2019-07-20 08:37

Debian Policy started off the Debian 11 “bullseye” release cycle with the release of Debian Policy 4.4.0.0. Please consider helping us fix more bugs and prepare more releases (whether or not you’re at DebCamp19!).

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Categories: FLOSS Project Planets
