Feeds

Mike Driscoll: PyCon 2017 – Second Day

Planet Python - Sun, 2017-05-21 12:27

The second day of the PyCon 2017 conference was kicked off by breakfast with people from NASA and Pixar, among others, followed by several lightning talks. I didn’t see them all, but they were kind of fun. Then they moved on to the day’s first keynote by Lisa Guo & Hui Ding from Instagram. I hadn’t realized that they used Django and Python as their core technology.

They spoke on how they transitioned from Django 1.3 to 1.8 and from Python 2 to 3. It was a really interesting talk with a pretty deep dive into how they use Python at Instagram. It’s really neat to see Python being able to scale to several hundred million users. If I remember correctly, they also mentioned that Python 3 saved them 30% in memory utilization compared with Python 2, along with a 12% improvement in CPU utilization. They also mentioned that they did their conversion in the main branch, by making the code compatible with both Python 2 and 3 while continually releasing product to their users. You can see the video on YouTube:
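
They didn’t show their code on stage, but the dual-compatibility style such a migration relies on generally looks like the minimal, purely illustrative sketch below (the function and names are my own, not Instagram’s): __future__ imports plus guarded imports let a single codebase run on both interpreters.

from __future__ import absolute_import, division, print_function

try:
    from urllib.parse import urlencode   # Python 3 location
except ImportError:
    from urllib import urlencode         # Python 2 fallback


def build_query(params):
    """Return a URL query string; behaves the same on Python 2 and 3."""
    return urlencode(sorted(params.items()))


print(build_query({"page": 1, "user": "instagram"}))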

The next keynote was done by Katy Huff, a nuclear engineer. While I personally didn’t find it as interesting as the Instagram one, it was fun to see how Python is being used in so many scientific communities and in so many disparate ways. If you’re interested, you can watch the keynote here:

After that, I went to my first talk of the day which was Debugging in Python 3.6: Better, Faster, Stronger by Elizaveta Shashkova who works for the PyCharm team. Her talk focused on the new frame evaluation API that was introduced to CPython in PEP 523 and how it can make debugging easier and faster, albeit with a longer lead time to set up. Here’s the video:

Next up was Static Types for Python by Jukka Lehtosalo and David Fisher from the Dropbox team. They discussed how to use MyPy to introduce static typing using a live code demo as well as how they used it at Dropbox to add typing to 700,000 lines of code. I thought it was fascinating, even though I really enjoy Python’s dynamic nature. I can see this as a good way to enforce docstrings as well as make them more readable. Here’s the video:
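
The live demo isn’t reproduced here, but to give a flavour of what adding static types looks like, here is a minimal, hypothetical example (not from the talk) of the kind of annotations mypy can check:

from typing import List, Optional


def find_user(user_ids: List[int], wanted: int) -> Optional[int]:
    """Return the first matching user id, or None if it is absent."""
    for user_id in user_ids:
        if user_id == wanted:
            return user_id
    return None


# Running `mypy` over this file would flag a call such as
# find_user(["1", "2"], 1), because the list elements are str, not int.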

After lunch, I went to an Open Space room about Python 201, which ended up being about what problems people face when they are trying to learn Python. It was really interesting and gave me many new insights into what people without a background in computer science are facing.

I attempted my own open space on wxPython, but somehow the room was commandeered by a group of people talking about drones and as far as I could tell, no one showed up to talk about wxPython. Disappointing, but whatever. I got to work on a fun wxPython project while I waited.

The last talk I attended was one given by Jean-Baptiste Aviat entitled Writing a C Python extension in 2017. He mentioned several different ways to interact with C/C++ from Python, such as ctypes, cffi, Cython, and SWIG. His choice was ctypes. He was a bit hard to understand, so I highly recommend watching the video yourself to see what you think:

My other highlights were just random encounters in the hallways or at lunch where I got to meet other interesting people using Python.

Categories: FLOSS Project Planets

Dan Crosta: Introducing Tox-Docker

Planet Python - Sun, 2017-05-21 10:55

Today I released Tox-Docker to GitHub and the Python Package Index. Tox-Docker is a plugin for Tox, which, you guessed it, manages one or more Docker containers during test runs.

Why Tox-Docker?

Tox-Docker began its life because I needed to test some code that uses the PostgreSQL JSONB column type. In another life, I might have done this by instructing Jenkins to first install Postgres, start the server, create a user, create a database, and so on. There's even a small chance that this would work well, some of the time -- so long as tests didn't fail, the build didn't die unexpectedly without cleaning up, multiple tests didn't run at once, and so on. In fact, I've done exactly this sort of hackery a few times in the past already. It is dirty, and often requires manual cleanup after failures.

So, when confronted with the need to write tests that talk to a real Postgres instance, rather than reaching into my old toolbox I was determined to find a better solution. Docker can run multiple instances of Postgres at the same time in isolation from one another, which obviates the need for mutual exclusion of builds. Docker containers are lightweight to start, and easy to clean up (you can delete them all at once with a single command), so when the tests are done, we can simply remove them and move on.

There was still the question of how to manage the lifecycle of the container, though, which is where Tox comes in. Tox is a test automation tool that standardizes an interface between your machine (or your continuous integration environment) and your test code. Like Docker, Tox encourages isolation, by creating a clean virtualenv for each test run, free of old package installs, custom hacks, and so on. Tox already has a well-defined set of steps it runs to build your package, install dependencies, start tests, and gather results. Happily, it allows plugins to hook into this sequence to add custom behavior.

How Tox-Docker Works

Tox's plugins implement callback hooks to participate in the test workflow. For Tox-Docker, we use the pre-test and post-test hooks, which set up and tear down our Docker environment, respectively. Importantly, the post-test hook runs regardless of whether the tests passed, failed, or errored, ensuring that we'll have an opportunity to clean up any Docker containers we started during the pre-test hook. Finally, Tox plugins can also hook into the configuration system, so that projects using Tox-Docker can specify what Docker containers they require.
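
Tox-Docker’s real implementation lives in its GitHub repository; purely to illustrate the hook mechanism described above, a tox plugin skeleton looks roughly like this (hook names as provided by tox’s pluggy-based plugin API, with placeholder bodies):

import tox


@tox.hookimpl
def tox_configure(config):
    # Read plugin-specific settings (for example a `docker = ...` key)
    # from the parsed tox.ini configuration.
    pass


@tox.hookimpl
def tox_runtest_pre(venv):
    # Runs before the test commands: pull and start the containers here.
    pass


@tox.hookimpl
def tox_runtest_post(venv):
    # Runs after the tests, whether they passed or failed: remove containers.
    pass

A real plugin also advertises itself to tox through a "tox" entry point in its packaging metadata so that these hooks get discovered.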

The simplest use of Tox-Docker is to specify the Docker image or images, including version tags, that are required during test runs. For instance, if your project requires Postgres, you might add this to your tox.ini:

[testenv]
docker = postgres:9.6

With Tox-Docker installed, the next time you run tox, you will see something like the following:

py27 docker: pull 'postgres:9.6'
py27 docker: run 'postgres:9.6'
py27 runtests: PYTHONHASHSEED='944551639'
py27 runtests: commands[0] | py.test test_your_project.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /home/travis/build/you/your-project, inifile:
collected 3 items

test_your_project.py ...

=========================== 3 passed in 0.02 seconds ===========================
py27 docker: remove '72e2ffea02' (forced)

Tox-Docker has picked up your configuration, and pulled and started a PostgreSQL container, which it shuts down after the tests finish. This is equivalent to running docker pull, docker run, and docker rm yourself, but without the manual hassle.

Challenges and Helpers

Not every Dockerized component can be started and be expected to "just work". Most services will require or allow for some amount of configuration, and your tests will need some information back out of Docker to know how to use the services. In particular, we need:

  1. A way to pass settings into Docker containers, in cases where the defaults are not sufficient
  2. A way to inform the tests how to communicate with the service inside the container, specifically, what ports are exposed
  3. A way to delay beginning the test run until the container has started and the application within it is ready to work

Tox-Docker lets you specify environment variables with the dockerenv setting in the tox.ini file:

[testenv]
docker = postgres:9.6
dockerenv =
    POSTGRES_USER=user_name
    POSTGRES_DB=database_name

Tox-Docker takes these variables and passes them to Docker as it launches the container, just as you might do manually with the --env flag. These variables are also made available in the environment that tests run in, so they can be used to construct a connection string, for instance.

Additionally, Tox-Docker interrogates the container just after it's started, to get the list of exposed TCP or UDP ports. For each port, Tox-Docker constructs an environment variable named after the container and exposed port, whose value is the host-side port number that Docker has mapped the exposed port to. Postgres listens on TCP port 5432 within the container, which might be mapped to port 32187 on your host system. In this case, an environment variable POSTGRES_5432_TCP will be set with value "32187".

Tests can use these environment variables to parameterize connections to the Dockerized services, rather than having to hard-code knowledge of the environment.
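
For example, a test helper could assemble a connection string from those variables. This is a hypothetical sketch rather than part of Tox-Docker itself, and it assumes the container’s ports are published on localhost (the variable names follow the convention described above):

import os


def postgres_dsn():
    # POSTGRES_5432_TCP holds the host-side port Docker mapped for the
    # container's TCP port 5432; the other variables come from dockerenv.
    host = "localhost"
    port = os.environ["POSTGRES_5432_TCP"]
    user = os.environ.get("POSTGRES_USER", "postgres")
    database = os.environ.get("POSTGRES_DB", user)
    return "postgresql://{0}@{1}:{2}/{3}".format(user, host, port, database)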

Finally, in order to avoid false-negative test failures or errors, Tox-Docker waits until it can connect to each of the ports exposed by the Docker container. This is not a perfect way to determine that the service inside is actually ready, but Docker provides no way for a service inside the container to signal to the outside world that it's finished starting up. In practice, I hope that this heuristic is good enough.
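
That readiness check boils down to retrying a TCP connection until it succeeds or a deadline passes; a simplified sketch of the idea (not Tox-Docker’s exact code) looks like this:

import socket
import time


def wait_for_port(host, port, timeout=30.0):
    # Keep trying to open a TCP connection until it works or we give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=1.0)
        except OSError:
            time.sleep(0.5)      # not accepting connections yet; retry
        else:
            sock.close()
            return True
    raise RuntimeError("{0}:{1} never became reachable".format(host, port))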

Future Work

The most obvious next step for Tox-Docker is to support Docker Compose, the Docker tool that lets you launch a cluster of interconnected containers in a single command. For the projects I am working with, I haven't yet had need of Docker Compose, but for projects of a certain level of complexity this will be preferable to attempting to manually manage this in tox.ini.

Installation and Feedback

Tox-Docker is available in the Python Package Index for installation via pip install tox-docker. Contributions, suggestions, questions, and feedback are welcome via GitHub.

Categories: FLOSS Project Planets

Bryan Pendleton: Back online

Planet Apache - Sun, 2017-05-21 09:01

I took a break from computers.

I had a planned vacation, and so I did something that's a bit rare for me: I took an 11 day break from computers.

I didn't use any desktops or laptops. I didn't have my smartphone with me.

I went 11 days without checking my email, or signing on to various sites where I'm a regular, or opening my Feedly RSS reader, or anything like that.

Now, I wasn't TOTALLY offline: there were newspapers and television broadcasts around, and I was traveling with other people who had computers.

But, overall, it was a wonderful experience to just "unplug" for a while.

I recommend it highly.

Categories: FLOSS Project Planets

Elena 'valhalla' Grandi: Modern XMPP Server

Planet Debian - Sun, 2017-05-21 07:30
Modern XMPP Server

I've published a new HOWTO on my website http://www.trueelena.org/computers/howto/modern_xmpp_server.html:

Enrico Zini (http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/) already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.

How

I've decided to install https://prosody.im/, mostly because it was recommended by the RTC QuickStart Guide http://rtcquickstart.org/; I've heard that similar results can be reached with https://www.ejabberd.im/ and other servers.

I'm also targeting https://www.debian.org/ stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the https://backports.debian.org/ repository and then install the packages prosody and prosody-modules.

You also need to setup some TLS certificates (I used Let's Encrypt https://letsencrypt.org/); and make them readable by the prosody user; you can see Chapter 12 of the RTC QuickStart Guide http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html for more details.

On your firewall, you'll need to open the following TCP ports:


  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)



The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:


c2s_require_encryption = true
s2s_secure_auth = true


and then, sadly, add to the whitelist any server that you want to talk to and doesn't support the above:


s2s_insecure_domains = { "gmail.com" }


virtualhosts

For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:


VirtualHost "chat.example.org"
enabled = true
ssl = {
key = "/etc/ssl/private/example.org-key.pem";
certificate = "/etc/ssl/public/example.org.pem";
}

For the domains where you also want to enable MUCs, add the following lines:


Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"

The "local" value configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usage of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):


Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on what knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.

additional modules

Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:


"something";

Most of these come from the prosody-modules package (and thus from https://modules.prosody.im/ ) and some may require changes when prosody 0.10 becomes available; when this is the case it is mentioned below.



  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.



  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.



  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.



  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for an SQL-backed storage backend with archiving capabilities.



  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.




@Gruppo Linux Como @LIFO
Categories: FLOSS Project Planets

Getting Free Software into our users’ hands

Planet KDE - Sun, 2017-05-21 04:00

In KDE we cover a mix of platforms and form factors that make our technology very powerful. But how to reach so many different systems while maintaining high quality on all of them?

What variables are we talking about?

Form factors

We use different form factors daily nowadays. When on the move, we need things to be straightforward; when focused, we want all the functionality.

Together with QtQuick Controls, Kirigami offers ways for us to be flexible both in input types and screen sizes.

Platforms

We are not constantly on the same device; diversity is part of our lives. Recommending the tools we make to our peers should always be a possibility, without forcing them into major workflow changes (like changing OS, yes).

Qt has been our tool of choice for years and it’s proven to keep up with the latest industry changes, embracing mobile and adapting to massively different form factors and operating systems. This includes some integration with their native look and feel, which is very important to many of us.

Devices & Quality Assurance

We are targeting different devices, so we need to allow developers to test, make issues easy to reproduce, and make the most out of the testing we get, learning from our users.

We should deliver whatever is native to the platform: APK (and possibly even Google Play) on Android, installers on Windows, and distribution packages for GNU/Linux.
Furthermore, we’ve been embracing new technologies on GNU/Linux systems that can help a lot on this front, including Snap/Flatpak/AppImage, which could help streamline this process as well.

What needs to happen?

Some of these technologies are slowly blooming as they get widely adopted, and our community needs as well to lead in offering tooling and solutions to make all of this viable.

  • We need straightforward quality assurance. We should ensure the conditions under which we develop and test are our users’ platforms. When facing an error, being able to reproduce and test is fundamental.
  • We should allow for swift release cycles. Users should always be on fresh stable releases. When a patch release is submitted, we should test it and then have it available to the users. Nowadays, some users are not benefiting from most stable releases, and that makes a lot of our work in vain.
  • Feedback makes us grow. We need to understand how our applications are being used, if we want to solve the actual problems users are having.

All of this won’t happen automatically. We need people who want to get their hands dirty and help build the infrastructure to make it happen.

There are different skills that you can put into practice here, ranging from DevOps, helping to offer fresh quality recipes for your platform of choice, and improving testing infrastructure, to actual development on our development tools and, of course, any of the upstream projects we use.

Hop on! Help KDE put Free Software on every device!

Categories: FLOSS Project Planets

GSoC - Community Bonding Period with Krita

Planet KDE - Sun, 2017-05-21 03:00

Hi everyone, is everything ok? I hope so.

Today, I will talk about my week working on Krita during this community bonding period that ends next Sunday.

You are probably asking why I put a Garfield comic here. First, I love cats and Garfield :). Second, I figured out that it represents what an open source community needs: more consistency and constant work. Boud told me sometimes that we need to commit and be in touch with the community every day. It's a problem to send one huge modification at once or to stay away from IRC for a long time. I'm trying to be more constant, because it's not all about code.

This week was pretty cool because I could get to know the community better, talking with users and devs to define the initial set for Krita's showcase.

  • Monday - I opened a discussion in the Krita’s forum to obtain new suggestions for the Krita’s showcase.

  • Tuesday - I was trying to understand some current features of Krita that users told me about, like Image Reference and Palette.

  • Wednesday - I organized and wrote up all the suggestions from users on the forum and on IRC in the Phabricator task.

  • Thursday - I asked more experienced devs for help with suggestions in the task thread, as you can see here.

  • Friday - A day to solve some personal problems.

  • Saturday - I wrote an answer with my guideline for the GSoC period.

That’s it for now. Thanks, Krita community. Until next week, See ya!!

Categories: FLOSS Project Planets

Shawn McKinney: The Anatomy of a Secure Web App Using Java EE, Spring Security and Apache Fortress

Planet Apache - Sat, 2017-05-20 22:55

Had a great time this week at ApacheCon.  This talk was presented on Thursday…


Categories: FLOSS Project Planets

Plasma 5.11 Wallpaper Production – Part 1

Planet KDE - Sat, 2017-05-20 20:20

Today I streamed the first half of the Plasma 5.11 wallpaper production, and it was an interesting experience. The video above is the abridged version sped up ~20x, heavily edited to the actual creation, and should be a fun watch for the interested.

It looks like there’s another full work-day that needs to go into the wallpaper still, and while I think I’ll also record the second half I don’t think I’ll livestream it; while I’m very appreciative of the viewers I had, it was quite a bit of extra work and quite difficult to carry on a one-man conversation for 8 hours, while working, for at most a few people. Like I said, I will still record the second half of the wallpaper for posterity, I simply don’t think I’ll be streaming it. I do think I’ll keep streaming the odd icon batch, as those are about as long as I want, so they can be kept to a digestible hour.

The wallpaper as it stands is based on an image of a reef, along with a recent trip to the beach during the Blue Systems sprint. There’s still a long way to go, and I can easily see another 8 hours going into this before it’s completed; there’s water effects, tides, doing the rocks, and taking a second pass at the foam – among other things – especially before I hit the level of KDE polish I’d like to meet.

Looking at it, I may also make a reversed image with only the shoreline components for dual-screen aficionados.

Within the next week or so I’ll post the next timelapse after I complete the wallpaper.


Categories: FLOSS Project Planets

Stein Magnus Jodal: netsgiro: a parser and builder for AvtaleGiro and OCR Giro files

Planet Python - Sat, 2017-05-20 20:00

Today I released the first stable release of netsgiro to PyPI. netsgiro is a Python 3.4+ library for parsing and building Nets “OCR” files.

AvtaleGiro is a direct debit solution that is in widespread use in Norway, with more than 15 000 companies offering it to their customers. OCR Giro is used by Nets and Norwegian banks to update payees on recent deposits to their bank accounts. In combination, AvtaleGiro and OCR Giro allows for a high level of automation of invoicing and payment processing.

The “OCR” file format and the format's name originate from the days when giro payments were delivered on paper to your bank and then processed either manually or using optical character recognition, OCR. I’m not sure how old the format is, but some of the examples in the OCR Giro specification use dates in 1993, and the specification changelog starts in 1999 and ends in 2003. A couple of decades later, the file format is still in daily use by Nets’ AvtaleGiro and OCR Giro services. In other words, I have high hopes that this will be a very stable open source project requiring minimal maintenance efforts.

Here’s an example OCR file, to be used in later examples:

>>> data = '''
... NY000010555555551000081000080800000000000000000000000000000000000000000000000000
... NY210020000000000400008688888888888000000000000000000000000000000000000000000000
... NY2121300000001170604 00000000000000100 008000011688373000000
... NY2121310000001NAVN 00000
... NY212149000000140011 Gjelder Faktura: 168837 Dato: 19/03/0400000000000000000000
... NY212149000000140012 ForfallsDato: 17/06/0400000000000000000000
... NY2121300000002170604 00000000000000100 008000021688389000000
... NY2121310000002NAVN 00000
... NY212149000000240011 Gjelder Faktura: 168838 Dato: 19/03/0400000000000000000000
... NY212149000000240012 ForfallsDato: 17/06/0400000000000000000000
... NY2121300000003170604 00000000000000100 008000031688395000000
... NY2121310000003NAVN 00000
... NY2121300000004170604 00000000000000100 008000041688401000000
... NY2121310000004NAVN 00000
... NY2121300000005170604 00000000000000100 008000051688416000000
... NY2121310000005NAVN 00000
... NY212149000000540011 Gjelder Faktura: 168841 Dato: 19/03/0400000000000000000000
... NY212149000000540012 ForfallsDato: 17/06/0400000000000000000000
... NY2102300000006170604 00000000000000100 008000061688422000000
... NY2102310000006NAVN 00000
... NY210088000000060000002000000000000000600170604170604000000000000000000000000000
... NY000089000000060000002200000000000000600170604000000000000000000000000000000000
... '''.strip()

netsgiro is obviously not a child of recent changes in my spare time interests, but was conceived as a part of our ongoing efforts in automating all aspects of invoicing and payment processing for Otovo’s residential solar and electricity customers. As such, netsgiro is Otovo’s first open source project. Hopefully, it will soon get some company on our public GitHub profile.

As of netsgiro 1.0.0, sloccount(1) counts 1160 lines of code and 948 lines of tests, providing 97% statement coverage. In other words, netsgiro is a small library trying to do one thing well. Meanwhile, enough effort has been poured into it that I’m happy I was immediately allowed to open source the library, hopefully saving others and my future self from doing the same work again.

The library is cleanly split in two layers. The lower level is called the “records API” and is imported from netsgiro.records. The records API parses OCR files line by line and returns one record object for each line it has parsed. This is done with good help from Python’s multiline regexps and enum.IntEnum, and Hynek Schlawack’s excellent attrs. Conversely, you can also create a bunch of record objects and convert them to an OCR file.

>>> import netsgiro.records
>>> records = netsgiro.records.parse(data)
>>> len(records)
22
>>> from pprint import pprint
>>> pprint(records)
[TransmissionStart(service_code=<ServiceCode.NONE: 0>, transmission_number='1000081', data_transmitter='55555555', data_recipient='00008080'),
 AssignmentStart(service_code=<ServiceCode.AVTALEGIRO: 21>, assignment_type=<AssignmentType.TRANSACTIONS: 0>, assignment_number='4000086', assignment_account='88888888888', agreement_id='000000000'),
 TransactionAmountItem1(service_code=<ServiceCode.AVTALEGIRO: 21>, transaction_type=<TransactionType.PURCHASE_WITH_TEXT: 21>, transaction_number=1, nets_date=datetime.date(2004, 6, 17), amount=100, kid='008000011688373', centre_id=None, day_code=None, partial_settlement_number=None, partial_settlement_serial_number=None, sign=None),
 TransactionAmountItem2(service_code=<ServiceCode.AVTALEGIRO: 21>, transaction_type=<TransactionType.PURCHASE_WITH_TEXT: 21>, transaction_number=1, reference=None, form_number=None, bank_date=None, debit_account=None, _filler=None, payer_name='NAVN'),
 TransactionSpecification(service_code=<ServiceCode.AVTALEGIRO: 21>, transaction_type=<TransactionType.PURCHASE_WITH_TEXT: 21>, transaction_number=1, line_number=1, column_number=1, text=' Gjelder Faktura: 168837 Dato: 19/03/04'),
 TransactionSpecification(service_code=<ServiceCode.AVTALEGIRO: 21>, transaction_type=<TransactionType.PURCHASE_WITH_TEXT: 21>, transaction_number=1, line_number=1, column_number=2, text=' ForfallsDato: 17/06/04'),
 ...
 AssignmentEnd(service_code=<ServiceCode.AVTALEGIRO: 21>, assignment_type=<AssignmentType.TRANSACTIONS: 0>, num_transactions=6, num_records=20, total_amount=600, nets_date_1=datetime.date(2004, 6, 17), nets_date_2=datetime.date(2004, 6, 17), nets_date_3=None),
 TransmissionEnd(service_code=<ServiceCode.NONE: 0>, num_transactions=6, num_records=22, total_amount=600, nets_date=datetime.date(2004, 6, 17))]

The higher level “objects API” is imported from netsgiro. It combines multiple records into higher level objects. For example, an AvtaleGiro payment request can consist of up to 86 records, which in the higher level API is represented by a single PaymentRequest object.

>>> import netsgiro
>>> transmission = netsgiro.parse(data)
>>> transmission
Transmission(number='1000081', data_transmitter='55555555', data_recipient='00008080', date=datetime.date(2004, 6, 17))
>>> len(transmission.assignments)
1
>>> transmission.assignments[0]
Assignment(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<AssignmentType.TRANSACTIONS: 0>, number='4000086', account='88888888888', agreement_id='000000000', date=None)
>>> len(transmission.assignments[0].transactions)
6
>>> from pprint import pprint
>>> pprint(transmission.assignments[0].transactions)
[PaymentRequest(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<TransactionType.PURCHASE_WITH_TEXT: 21>, number=1, date=datetime.date(2004, 6, 17), amount=Decimal('1'), kid='008000011688373', reference=None, text=' Gjelder Faktura: 168837 Dato: 19/03/04 ForfallsDato: 17/06/04\n', payer_name='NAVN'),
 PaymentRequest(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<TransactionType.PURCHASE_WITH_TEXT: 21>, number=2, date=datetime.date(2004, 6, 17), amount=Decimal('1'), kid='008000021688389', reference=None, text=' Gjelder Faktura: 168838 Dato: 19/03/04 ForfallsDato: 17/06/04\n', payer_name='NAVN'),
 PaymentRequest(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<TransactionType.PURCHASE_WITH_TEXT: 21>, number=3, date=datetime.date(2004, 6, 17), amount=Decimal('1'), kid='008000031688395', reference=None, text='', payer_name='NAVN'),
 PaymentRequest(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<TransactionType.PURCHASE_WITH_TEXT: 21>, number=4, date=datetime.date(2004, 6, 17), amount=Decimal('1'), kid='008000041688401', reference=None, text='', payer_name='NAVN'),
 PaymentRequest(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<TransactionType.PURCHASE_WITH_TEXT: 21>, number=5, date=datetime.date(2004, 6, 17), amount=Decimal('1'), kid='008000051688416', reference=None, text=' Gjelder Faktura: 168841 Dato: 19/03/04 ForfallsDato: 17/06/04\n', payer_name='NAVN'),
 PaymentRequest(service_code=<ServiceCode.AVTALEGIRO: 21>, type=<TransactionType.AVTALEGIRO_WITH_PAYEE_NOTIFICATION: 2>, number=6, date=datetime.date(2004, 6, 17), amount=Decimal('1'), kid='008000061688422', reference=None, text='', payer_name='NAVN')]

The higher level API also features some helper methods to quickly build payment requests, the only file variant typically created by anyone else than Nets.

>>> from datetime import date
>>> from decimal import Decimal
>>> import netsgiro
>>> transmission = netsgiro.Transmission(
...     number='1703231',
...     data_transmitter='01234567',
...     data_recipient=netsgiro.NETS_ID)
>>> assignment = transmission.add_assignment(
...     service_code=netsgiro.ServiceCode.AVTALEGIRO,
...     assignment_type=netsgiro.AssignmentType.TRANSACTIONS,
...     number='0323001',
...     account='99998877777')
>>> payment_request = assignment.add_payment_request(
...     kid='000133700501645',
...     due_date=date(2017, 4, 6),
...     amount=Decimal('5244.63'),
...     reference='ACME invoice #50164',
...     payer_name='Wonderland',
...     bank_notification=None)
>>> transmission.get_num_transactions()
1
>>> transmission.get_total_amount()
Decimal('5244.63')
>>> data = transmission.to_ocr()
>>> print(data)
NY000010012345671703231000080800000000000000000000000000000000000000000000000000
NY210020000000000032300199998877777000000000000000000000000000000000000000000000
NY2102300000001060417 00000000000524463 000133700501645000000
NY2102310000001Wonderland ACME invoice #50164 00000
NY210088000000010000000400000000000524463060417060417000000000000000000000000000
NY000089000000010000000600000000000524463060417000000000000000000000000000000000

Otovo has been using netsgiro in production for about a month now, and so far so good. We’re surely not the only shop in Norway doing invoicing with Python, so I hope netsgiro will be a useful building block for others as well. If you’re interested in learning more, start with the quickstart guide and then continue with the API reference.

Happy invoicing!

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-05-20

Planet Apache - Sat, 2017-05-20 19:58
Categories: FLOSS Project Planets

Randa Meetings 2017 – Registration goes live

Planet KDE - Sat, 2017-05-20 16:29

Two days after the Global Accessibility Awareness Day we go live with the registration for the Randa Meetings 2017. Thus we would like to bring as many people as possible to Randa this September to make more of our software, and other Free Software, more accessible.

Another topic during this year’s Randa Meetings will be KDE PIM, but it’s for sure not forbidden to work on accessibility features of our PIM stuff as well.

So please come and make KDE more accessible. CU there.

Categories: FLOSS Project Planets

Simon 0.4.90 beta released

Planet KDE - Sat, 2017-05-20 16:08
KDE Project:

The second version (0.4.90) towards Simon 0.5.0 is out in the wild. Please download the source code, test it and send us feedback.

What we changed since the alpha release:

  • Bugfix: The download of Simon Base Models works again flawlessly (bug: 377968)
  • Fix detection of utterid APIs in Pocketsphinx

You can get it here:
https://download.kde.org/unstable/simon/0.4.90/simon-0.4.90.tar.xz.mirrorlist

In the work is also an AppImage version of Simon for easy testing. We hope to deliver one for the Beta release coming soon.

Known issues with Simon 0.4.90 are:

  • Some Scenarios available for download don't work anymore (BUG: 375819)
  • Simon can't add Arabic or Hebrew words (BUG: 356452)

We hope to fix these bugs and look forward to your feedback and bug reports and maybe to see you at the next Simon IRC meeting: Tuesday, 23rd of May, at 10pm (UTC+2) in #kde-accessibility on freenode.net.

About Simon
Simon is an open source speech recognition program that can replace your mouse and keyboard. The system is designed to be as flexible as possible and will work with any language or dialect. For more information take a look at the Simon homepage.

Categories: FLOSS Project Planets

BangPypers: Talks - May, 2017

Planet Python - Sat, 2017-05-20 14:25

For May's session, we continued with the “Talks” pattern, but we had a unique theme by the name of “Journey to the Kernel”. The venue was VM Ware, and this time we had 5 speakers, all of them focusing, in some way, on the internals of Python. Each talk was 40 minutes long.

The first talk was by Naveen Sivanandam of VM Ware on Containers - Deploying Web Services on Kubernetes. He spoke briefly about Kubernetes and the typical infrastructure using a master, nodes, pods and services. He also demoed how to deploy 3 pods and run a Flask app on them. The code for his talk along with the presentation can be found here.

YouTube video for the talk -

The second talk was by Dhilip Siva about Dictionaries in Python 2 and 3.
This was quite a nice session where people learned how a dictionary could be implemented, starting with a naive implementation that only looked like a dictionary but violated its basic principles by indexing keys instead of hashing them. This was gradually built upon to demonstrate key collisions, how to avoid them, and so on.
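
To give a rough idea of that progression, a toy hash table with chained buckets, written purely for illustration, might look like this; CPython's real dict is far more sophisticated (open addressing, perturbed probing, compact ordered storage in 3.6):

class ToyDict(object):
    """Bucketed hash table: hash the key to pick a slot, chain on collisions."""

    def __init__(self, size=8):
        self._buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def __setitem__(self, key, value):
        bucket = self._bucket(key)
        for i, (existing_key, _) in enumerate(bucket):
            if existing_key == key:
                bucket[i] = (key, value)        # replace an existing key
                return
        bucket.append((key, value))             # collision: chain in the bucket

    def __getitem__(self, key):
        for existing_key, value in self._bucket(key):
            if existing_key == key:
                return value
        raise KeyError(key)


d = ToyDict()
d["answer"] = 42
print(d["answer"])    # 42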

YouTube video for the talk -

This was followed by a brief break where tea and biscuits were available at VM Ware. Naveen also spoke during this interval about how VM Ware was using Python.

The third talk was presented by Sasidhar and he spoke about How Import Works in Python, i.e., what exactly happens when you say “import math” etc. in a Python program. He explained the different levels of searches that take place (first looking up in sys.modules and then in sys.meta_path and so on), finally resulting in the loading of the module. His code and slides can be obtained here.
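
You can observe that machinery yourself; the small illustrative snippet below (mine, not from the talk) shows the sys.modules cache being consulted first and the finders registered on sys.meta_path:

import sys

import math                                # first import: found, loaded, cached
print("math" in sys.modules)               # True; later imports hit this cache

import math as math_again                  # satisfied straight from the cache
print(math_again is sys.modules["math"])   # True: the very same module object

for finder in sys.meta_path:               # asked in order when the cache misses
    print(finder)
# On Python 3 this typically prints BuiltinImporter, FrozenImporter, PathFinder.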

YouTube video for the talk -

The fourth talk was by Rivas Hameed and he spoke about the Garbage Collector in Python - basic memory allocation, garbage collection by ref counts and by tracing, along with some live examples.
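
Both mechanisms are easy to poke at from a Python prompt; the short snippet below (my illustration, not the speaker's) shows a reference count and a cycle that only the tracing collector can reclaim:

import gc
import sys

data = []
print(sys.getrefcount(data))   # 2: the `data` name plus the temporary
                               # reference passed into getrefcount() itself

a = {}
b = {"other": a}
a["other"] = b                 # a and b now reference each other
del a, b                       # the names are gone, but the cycle keeps
                               # both dicts alive; ref counts never hit zero
print(gc.collect())            # the tracing collector reports what it
                               # reclaimed (at least the 2 dicts here)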

YouTube video for the talk -

The fifth and final talk was by Piyus Gupta, speaking about Concurrency and Parallelism in Python. He spoke about CPU scheduling and thread safety, and demonstrated how different implementations - with or without multiple CPUs, single- or multi-threading, or multiprocessing - can positively or negatively influence the performance of programs. The slides for this talk can be found here.
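
As a small illustration of the CPU-bound case (my sketch, not Piyus's slides), the snippet below runs the same work on a thread pool and a process pool; on CPython the process pool usually finishes faster because threads contend for the GIL:

import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor


def busy(n):
    # Pure-Python CPU-bound work: never releases the GIL.
    total = 0
    for i in range(n):
        total += i
    return total


def timed(executor_cls):
    start = time.time()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(busy, [2000000] * 4))
    return time.time() - start


if __name__ == "__main__":
    print("threads:   %.2fs" % timed(ThreadPoolExecutor))
    print("processes: %.2fs" % timed(ProcessPoolExecutor))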

YouTube video for the talk -

Hope you enjoyed the talks! See you again next time!

Some pics from the meetup -

The entire YouTube playlist for the above-mentioned talks is available here

Categories: FLOSS Project Planets

PyCon: Come contribute to open source, come sprint!

Planet Python - Sat, 2017-05-20 10:28
PyCon 2017 is in full swing. The last four days of the conference will be development sprints. If you've never heard about sprints before, this is the time when developers, maintainers, regular users/contributors, AND complete newcomers get together and develop features or fix bugs in their favorite projects. Many projects will be sprinting throughout various rooms. Last year there were roughly 500 people sprinting on many different projects.

If you ever thought of contributing to Open Source projects, but did not know where to start, PyCon sprints are a great place to learn new skills. Having the maintainers of the projects sit at the same table with new contributors always helps to solve issues fast.
I am a complete newcomer, don’t know where to start. Is joining development sprints a good idea for me?

Quick answer: yes, of course. Not only do the experienced mentors help new developers at the sprints, we also have some extra help for beginners:

* We try to identify the projects which are good candidates for newcomers.
* Within projects we point newcomers to "easy to fix" bugs or "simple to create" features.
* We also have an “Intro to Sprinting” workshop on the last day of the main conference (Sunday, May 21 - Room C123+C124) >> details below.
* We will have sprint ambassadors to answer questions and provide encouragement.

In case you did not register for sprints during the registration process, no worries. There are no registration fees for the sprints. You can still come to the sprints and stay for one day OR all four. Also remember that anyone can participate in the sprints (no registration required), even if they are not registered for the main PyCon conference.

Introduction to Sprints Workshop

We will host the “Introduction to Sprints” workshop for first-time sprinters in room C123+C124 @ 5:00. This is a good place to start as there will be experienced sprinters to mentor and coach newcomers. We will teach you, via hands-on exercises, many of the most important technical skills and we will address a number of the soft-skills aspects of participating in sprints to help set you at ease.

Mentors/Teachers: If you want to join us in mentoring newcomers, please sign up here to mentor/teach.
How to prepare for PyCon development sprints?

Maintainer Preparation: If you are a project maintainer, and you want to sprint on your project, this is a good time to add your project details to the event page. Adding your project details to the events page will help people get ready for your project. Remember to create a list of “easy issues” which can be solved by the newcomers during the sprints. Also having clear steps on how to build your project from the source code is always helpful.
Participant Preparation: If you are planning to participate in any of the projects during the development sprints:

* As a first step you should update your operating system. Even though there will be Internet access during the sprints, having your system ready for development will save time.
* The next step will be installing a version control system; to start, you can install both git and Mercurial on your computer. Just in case you are planning to contribute to a project which is written in C/C++, please install the corresponding compiler on your operating system, e.g. Xcode on your Mac, or make/gcc toolchains on your Linux system (Don't know yet what your project will need? The project maintainer will help you figure that out, so come anyway).
* Stop by the workshop on Sunday in Rooms C123 and C124 @ 5:00 pm
* The sprints will be spread across a number of rooms. If you are not sure which project you want to work on, please make sure to visit all the rooms and meet the sprinters. We will also have the regular sprints “Help Table” with a list of projects and rooms on a board.
* Just in case you are interested in hardware related projects, we will also have that at the development sprints. Last year, the tables related to MicroPython and Microbit were full during the sprint days.
Categories: FLOSS Project Planets

Correcting mistakes from the past

Planet KDE - Sat, 2017-05-20 09:00

Not only, but to a large extent, I worked in the last few months on foundational improvements to KWin’s DRM backend, which is a central building block of KWin’s Wayland session. The idea in the beginning was to directly expand upon my past Atomic Mode Setting (AMS) work from last year. We’re talking about direct scanout of graphic buffers for fullscreen applications and later layered compositing. Indeed this was my Season of KDE project with Martin Flöser as my mentor, but in the end, relative to the initial goal, it was unsuccessful.

The reason for the missed goal wasn’t a lack of work or enthusiasm from my side, but the realization that I need to go back and first rework the foundations, which were in some kind of disarray, mostly because of mistakes I did when I first worked on AMS last year, partly because of changes Daniel Stone made to his work-in-progress patches for AMS support in Weston, which I used as an example throughout my work on AMS in KWin, and also because of some small flaws introduced to our DRM backend before I started working on it.

The result of this rework is three separate patches depending on each other, and all of them got merged last week. They will be part of the 5.10 release. The reason for doing three patches instead of only one was to ease the review process.

The first patch dealt with the query of important kernel display objects, which represent real hardware, the CRTCs and Connectors. KWin didn’t remember these objects in the past, although they are static while the system is running. This meant, for example, that KWin requeried all of them on a hot plugging event and had no prolonged knowledge about their state after a display was disconnected again. The last point made it particularly difficult to do a proper cleanup of the associated memory after a disconnect. So changing this in a way that the kernel objects are only queried once in the beginning made sense. Also, from my past work I had already created a generic class for kernel objects with the necessary subclasses, which could be used in this context. But still, to me this patch was the most “controversial” one of the three, which means it was the one I was most worried about being somehow “wrong”, not just in details, but in general, especially since it didn’t solve any observable specific misbehaviour which it could be benchmarked against. Of course I did my research, but there is always the anxiety of overlooking something crucial. Too bad the other patches depended on it. But the patch was accepted and to my relief everything seems to work well on the current master and the beta branch for the upcoming release as well.

The second patch restructured the DrmBuffer class. We support KWin builds with or without Generic Buffer Manager (GBM). It therefore made sense to split off the GBM-dependent part of DrmBuffer into a separate file, which only gets included when GBM is available. Martin had this idea and, although the patch is still quite large because of all the moved-around code and renamed classes, the change was straightforward. I still managed to introduce a build-breaking regression, which was quickly discovered and easy to solve. This patch was also meant as a preparation for the future direct scanout of buffers, which will then be done by a new subclass of DrmBuffer, also depending on GBM.

The last patch finally tackled all the issues I experienced when trying to use the previously rather underwhelming AMS code path. Yes, you saw the picture on the screen, the buffer flipping worked, but basic functionality like hot plugging or display suspending was not working at all or led to unpredictable behaviour. Basically, one complete rewrite and many, many manual pluggings and unpluggings of external monitors later, the problems have been solved to the point that I now consider the AMS code path ready for daily use. For Plasma 5.11 I therefore plan to make it the new default. That means that it will be available on Intel graphics automatically from Linux kernel 4.12 onwards, when on the kernel side the Intel driver also defaults to it. If you want to test my code on Plasma 5.10 you need to set the environment variable KWIN_DRM_AMS, and on kernels older than 4.12 you need to add the boot parameter i915.nuclear_pageflip. If you use Nvidia with the open source Nouveau driver, AMS should be available to you since kernel 4.10. In this case you should only need to set the environment variable above on 5.10, if you want to test it. Since I have only tested AMS with Intel graphics until now, some reports back on how it works with Nouveau would be great.

That’s it for now. But of course there is more to come. I haven’t given up on the direct scanout, and at some point in the future I want to finish it. I already had a working prototype and mainly waited for my three patches to land. But for now I’ll postpone further work on direct scanout and layered compositing. Instead, over the last weeks I worked on something special for our Wayland session in 5.11. I call it Night Color, and with this name you can probably guess what it will be. And did I mention that I was accepted as a Google Summer of Code student for the X.org Foundation with my project to implement multi-buffered present support in XWayland? Nah, I didn’t. Sorry for asking rhetorically in this smug way, but I’m just very happy and also a bit proud of having learned so much in basically only one year, to the point of now being able to start work on an X.org project directly. I’ll write about it in another blog post in the near future.

Categories: FLOSS Project Planets

Drupal Association blog: Insight into Drupal Association Financials

Planet Drupal - Sat, 2017-05-20 08:47

To give more insight into Drupal Association financials, we are launching a blog series. This is the first one in the series and it is for all of you who love knowing the inner workings. It provides an overview of:

  • Our forecasting process
  • How financial statements are approved
  • The auditing process
  • How we report financials to the U.S. government via 990s

There’s a lot to share in this blog post and we appreciate you taking the time to read it.

Replacing Annual Budgets With Rolling Forecasts

Prior to 2016, the Drupal Association produced an annual budget, which is a common practice for non-profits. However, two years ago, we found that the Drupal market was changing quickly and that impacted our projected revenue. Plus, we needed faster and more timely performance analysis of pilot programs so we could adjust projections and evaluate program success throughout the year. In short, we needed to be more agile with our financial planning, so we moved to a rolling forecast model, using conservative amounts.

Using a rolling forecast means we don’t have a set annual budget. Instead, we project revenue and expense two years out into a forecast. Then, we update the forecast several times a year as we learn more. The first forecast of the year is similar to a budget. We study variance against this version throughout the year. As we conduct the additional forecasts during the year, we replace forecasts of completed months with actual expenses and income (“actuals”) and revise forecasts for the remaining months. This allows us to see much more clearly if we are on or off target and to adjust projections as conditions that could impact our financial year change and evolve. For example, if we learn that the community wants us to change a drupal.org ad placement that could impact revenue, we will downgrade the revenue forecast appropriately for this product.

In 2017, there will be three forecasts:

  • December 2016:  The initial forecast was created. This serves as our benchmark for the year and we run variances against it.
  • May 2017: We updated the forecast after DrupalCon Baltimore since this event has the biggest impact on both our expenses and revenue for the year.
  • October 2017: We will reforecast again after DrupalCon Vienna. This is our final update before the end of the year and will be the benchmark forecast for 2018.

Creating and approving the forecasts is a multi-party process.

  1. Staff create the initial forecast much like you would a budget. They are responsible for their income and expense budget line items and insert them into the forecasting worksheet. They use historical financials, vendor contracts and quotes, and more to project the amount for each line item and document all of their assumptions. Each budget line manager reviews those projections and assumptions with me. I provide guidance, challenge assumptions, and sign off on the inputs.

  2. Our virtual CFO firm, Summit CPA, analyzes the data and provides financial insight including: Income Statement, Balance Sheet, Cash Flow, and Margin Analysis. Through these reports, we can see how we are positioned to perform against our financial KPIs. This insight allows us to make changes or strengthen our focus on certain areas to ensure we are moving towards those KPIs - which I will talk about in another blog post. Once these reports are generated, the Drupal Association board finance committee receives them along with the forecasting assumptions. During a committee meeting, the committee is briefed by Summit and myself. They ask questions to make sure various items are considered in the forecast and they provide advice for me to consider as we work to improve our financial health.  

  3. Once the committee reviews the forecast and assumptions, then, the full board reviews it in an Executive Session. The board asks questions and provides advice as well. This review process happens with all three forecasts for the year.

Approving Financial Reports

As we move through the year, our Operations Manager and CFO team work together to close the books each month. This ensures our monthly actuals are correct. Then, our CFO team creates a monthly financial report that includes our financial statements (Income Statement and Balance Sheet) for the finance committee to review and approve. Each month the finance committee meets virtually and the entire team reviews the most recently prepared report. After asking questions and providing advice, the committee approves the report.

The full board receives copies of the financial reports quarterly and is asked to review and approve the statements for the preceding three months. Board members can ask questions, provide advice, and approve the statements in Executive Session or in the public board meeting. After approval, I write a blog post so the community can access and review the financial statements. You can see an example of the Q3 2016 financial statement blog here. The board just approved the Q4 2016 financials and I will do a blog post shortly to share the financial statements.

Financial Audits

Every two or three years the Association contracts to have the financial practices and transactions audited.  For the years that we do not conduct a full audit, we will contract for a “financial review” by our CPA firm (which is separate from our CFO firm) to ensure our financial policies and transactions are in good order.

An audit is an objective examination and evaluation of the financial statements of an organization to make sure that the records are a fair and accurate representation of the transactions they claim to represent. It can be done internally by employees of the organization, or externally by an outside firm.  Because we want accountability, we contracted with an external CPA firm, McDonald Jacobs, to handle the audit.

The Drupal Association conducts audits for several reasons:

  1. to demonstrate our commitment to financial transparency.

  2. to assure our community that we follow appropriate procedures to ensure that the community funds are being handled with care.  

  3. to give our board of directors outside assurance that the financial statements are free of material misstatements.

What do the auditors look at?  For 2016, our auditors will generally focus on three points:

  • Proper recording of income and expense: Auditors will ensure that our financial statements are an accurate representation of the business we have conducted. Did we record transactions on the right date, to the right account, and the right class? In other words, if we said that 2016 revenue was a certain amount, is that really true?

  • Financial controls: Preventing fraud is an important part of the audit. It is important to put the kinds of controls in place that can prevent common types of fraud, such as forged checks and payroll changes. Auditors look to see that there are two sets of eyes on every transaction, and that documentation is provided to verify expenses and check requests.

  • Policies and procedures: There are laws and regulations that require we have certain policies in place at our organization. Our auditors will look at our current policies to ensure they were in place and, in some cases, had been reviewed by the board and staff.

The primary goal of the audit is for the auditor to express an opinion on two aspects of the financial statements of the Association: the financial statements are fairly presented, and they are in accordance with generally accepted accounting principles (GAAP). Generally accepted accounting principles are the accepted body of accounting rules and policies established by the accounting profession. The purpose of these rules is to promote consistency and fairness in financial reporting throughout the business community. These principles provide comparability of financial information.

Once our audit for 2016 is complete and approved by the board (expected in early summer), we can move to have the 990 prepared.  We look to have this item completed by September 2017.

Tax Filing: The Form 990

As a U.S.-based 501c3 exempt organization, and to maintain this tax-exempt status, the U.S. Internal Revenue Service (IRS) requires us to file a 990 each year. This form is also filed with state tax departments. The 990 is meant for the IRS and state regulators to ensure that non-profits continue to serve their stated charitable activities. The 990 can be helpful when you are reviewing our programs and finances, but know that it’s only a “snapshot” of our year.

You can find our past 990s here.

Here are some general points, when reviewing our 990.

FORM 990, PART I—REVENUES, EXPENSES, AND CHANGES IN NET ASSETS OR FUND BALANCES

Lines 8-12 indicate our yearly revenue: not only the total revenue (line 12), but also where we have earned our income, broken out into four groups. Line 12 is the most important: total income for the year.

Lines 13-18 shows expenses for the year, and where we focused.

Cash Reserves are noted on lines 20-22 on page 1.

The 990 compares the net assets from last year (or the beginning of the year) with those at the end of the current year, and also illustrates the total assets and liabilities of the Association.

FORM 990, PART II—STATEMENT OF FUNCTIONAL EXPENSES

Part II shows our expenditures by category and major function (program services, management and general, and fundraising).

FORM 990, PART III—STATEMENT OF PROGRAM SERVICE ACCOMPLISHMENTS

In Part III, we describe the activities performed in the previous year that adhere to our 501c3 designation.  You can see here that Drupal.org, DrupalCon and our Fiscal Sponsorship programs are noted.

FORM 990, PART IV—BALANCE SHEETS

Part IV details our assets and liabilities. Assets are our resources that we have at our disposal to execute on our mission.  Liabilities are the outstanding claims against those assets.

FORM 990, PART V—LIST OF OFFICERS, DIRECTORS, TRUSTEES AND KEY EMPLOYEES

Part V lists the board and staff members who are responsible, in whole or in part, for the operations of the organization. These entries include the titles and compensation of key employees.

FORM 990, PART VI—OTHER INFORMATION

This section contains a number of questions regarding our operations over the year. Any “yes” answers require explanation on the following page.

Schedule A, Part II—Compensation of the Five Highest Paid Independent Contractors for Professional Services

On this schedule we list any contractors to whom we have paid more than $50,000 for professional services.

Once our 990 is complete and filed, we are required to post the return publicly, which we do here on our website. We expect to have the 2016 990 return completed, filed, and posted by September 2017.

Phew. I know that was long. Thank you for taking the time to read all of the steps we take to ensure financial health and accuracy. We are thankful for the great team work that goes into this process. Most of all we are thankful for our funders who provide the financial fuel for us to do our mission work.

Stay tuned for our next blog in this series: an update on our Q4 2016 financials (following up on our Q3 2016 financial update).

Categories: FLOSS Project Planets

Neil Williams: Software, service, data and freedom

Planet Debian - Sat, 2017-05-20 03:24
Free software, free services but what about your data?

I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well because my principal free software development is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this, and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime with restrictions available for certain use cases.

What else can we be doing? Well it was a simple question which started me thinking.

    The lava documentation has various example test scripts, e.g.
    https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml
    These have no licence information; we've adapted them for a Linux Foundation project. What licence should apply to these files?

    -- Robert Marshall

Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?

Data Freedom

LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to setup themselves. The AGPL covers this nicely.

What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.)

Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but getting a test job to run exactly what the test writer requires can still involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.)

At what point do these works become software? At what point do these need licensing? How could that be declared?

Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap.

I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.)

I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services.

The same problems affect trying to untangle sharing the test job data within LAVA.

Adding Licence text

The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human-readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.

Adding Licence files

This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem for our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.

metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause

Metadata in LAVA test job submissions is free-form but if the example was adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions.
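
If a convention like this caught on, pulling the licence information back out again would be straightforward. Here is a minimal sketch in Python, assuming PyYAML is installed and that the submissions of interest are available as local .yaml files; the directory pattern and the licence.* key names simply follow the example above and are not part of LAVA itself:

import glob

import yaml  # PyYAML, an assumed dependency for this sketch


def licences_for(pattern="jobs/*.yaml"):
    """Collect (licence name, licence URL) pairs from local test job submissions."""
    found = {}
    for path in glob.glob(pattern):
        with open(path) as handle:
            job = yaml.safe_load(handle)
        if not isinstance(job, dict):
            continue
        metadata = job.get("metadata") or {}
        name = metadata.get("licence.name")
        text = metadata.get("licence.text")
        if name or text:
            found[path] = (name, text)
    return found


for path, (name, url) in licences_for().items():
    print("%s: %s (%s)" % (path, name, url))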

Currently, LAVA does not store metadata from the test shell definitions other than the URL of the git repo for the definition, but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.
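
Whether the repo URL alone is enough will vary. As a purely illustrative sketch of that workflow (an assumption on my part, not a LAVA feature), one could clone the declared repository and look for a conventional licence file, assuming git is installed and the repository is publicly readable:

import subprocess
import tempfile
from pathlib import Path


def find_licence(repo_url):
    """Clone a test shell definition repo and return the first licence file found."""
    with tempfile.TemporaryDirectory() as workdir:
        # A shallow clone is enough; we only need the top-level licence file.
        subprocess.run(
            ["git", "clone", "--depth", "1", repo_url, workdir],
            check=True,
        )
        for candidate in ("COPYING", "LICENCE", "LICENSE"):
            path = Path(workdir) / candidate
            if path.exists():
                return candidate, path.read_text()
    return None, None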

Which licence?

This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because the author is usually not a member of the LAVA project).

Why not Creative Commons?

Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.

Results

Finally, a word about what comes back from your data submission - the results. This data cannot be restricted by any licence affecting either the submission or the software; it can be restricted using the API or left as the default of public access.

If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.

What happens next?
  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others; it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know how to license and distribute that data in a new format (i.e. modification). Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?
Contact

I don't enable comments on this blog, but there are enough ways to contact me and the LAVA project in the body of this post, so it really shouldn't be a problem for anyone to comment.

Categories: FLOSS Project Planets

Ritesh Raj Sarraf: Patanjali Research Foundation

Planet Debian - Sat, 2017-05-20 00:46

PSA: Research in the domain of Ayurveda

http://www.patanjaliresearchfoundation.com/patanjali/
 

I am so glad to see this initiative taken by the Patanjali group. This is a great stepping stone in the health and wellness domain.

So far, Allopathy has been blunt in discarding alternative medicine practices, without much solid justification. The only response I've heard, repeatedly, is "lack of research". This initiative is definitely a great step in that regard.

Ayurveda (Ancient Hindu art of healing) has a huge potential to touch lives. For the Indian sub-continent, this has the potential of a blessing.

The Prime Minister of India himself inaugurated the research centre.

Categories: FLOSS Project Planets

Sandipan Dey: Some Image and Video Processing: Motion Estimation with Block-Matching in Videos, Noisy and Motion-blurred Image Restoration with Inverse Filter in Python and OpenCV

Planet Python - Fri, 2017-05-19 20:08
The following problems appeared in the exercises in the coursera course Image Processing (by Northwestern University). The following descriptions of the problems are taken directly from the exercises’ descriptions.

1. Analysis of an Image quality after applying an nxn Low Pass Filter (LPF) for different n

The next figure shows the problem statement. Although it was … Continue reading Some Image and Video Processing: Motion Estimation with Block-Matching in Videos, Noisy and Motion-blurred Image Restoration with Inverse Filter in Python and OpenCV
Categories: FLOSS Project Planets