FLOSS Project Planets

Norbert Preining: TensorFlow 2.0 with GPU on Debian/sid

Planet Debian - Fri, 2019-10-04 04:37

Some time ago I wrote about how to get TensorFlow 1.x running on the then-current Debian/sid. It turned out that those instructions are no longer correct and need an update, so here it is: getting the most up-to-date TensorFlow 2.0 with NVIDIA GPU support running on Debian/sid.

Step 1: Install CUDA 10.0

Follow more or less the instructions here and do

wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-0

Warning! Don’t install the 10-1 version since the TensorFlow binaries need 10.0.

This will install lots of libs into /usr/local/cuda-10.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-10-0.conf.

Step 2: Install CUDA 10.0 CuDNN

One dependency that is difficult to satisfy is the cuDNN libraries. In our case we need the version 7 library for CUDA 10.0. Downloading these files requires an NVIDIA developer account, which is quick and painless to set up. After that, go to the cuDNN page, select Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.0, and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

As of today this downloads a file libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb, which needs to be installed with dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb.

Step 3: Install Tensorflow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and can find your GPU. This can be done with the following one-liner, which in my case gives the following output:

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2019-10-04 17:29:26.020013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3390 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
tf.Tensor(444.98087, shape=(), dtype=float32)
$

I haven’t tried to get R working with the newest TensorFlow/Keras combination, though. Hope the above helps.

Categories: FLOSS Project Planets

DrupalCon News: Be inspired by standout keynote address

Planet Drupal - Fri, 2019-10-04 04:20

For DrupalCon Amsterdam, we’ve curated keynote speakers who are engaging presenters — and add value to the gathering as a whole. Enhance your professional life by hearing more about differing experiences in tech and open source. These keynotes are only for conference attendees, so we invite you to join us by registering.  

Categories: FLOSS Project Planets

Talk Python to Me: #232 Become a robot developer with Python

Planet Python - Fri, 2019-10-04 04:00
When you think about the types of jobs you get as a Python developer, you probably weigh the differences between data science and web development.
Categories: FLOSS Project Planets

Matthew Garrett: Investigating the security of Lime scooters

Planet Debian - Fri, 2019-10-04 02:04
(Note: to be clear, this vulnerability does not exist in the current version of the software on these scooters. Also, this is not the topic of my Kawaiicon talk.)

I've been looking at the security of the Lime escooters. These caught my attention because:
(1) There's a whole bunch of them outside my building, and
(2) I can see them via Bluetooth from my sofa
which, given that I'm extremely lazy, made them more attractive targets than something that would actually require me to leave my home. I did some digging. Limes run Linux and have a single running app that's responsible for scooter management. They have an internal debug port that exposes USB and which, until this happened, ran adb (as root!) over this USB. As a result, there's a fair amount of information available in various places, which made it easier to start figuring out how they work.

The obvious attack surface is Bluetooth (Limes have wifi, but only appear to use it to upload lists of nearby wifi networks, presumably for geolocation if they can't get a GPS fix). Each Lime broadcasts its name as Lime-12345678 where 12345678 is 8 digits of hex. They implement Bluetooth Low Energy and expose a custom service with various attributes. One of these attributes (0x35 on at least some of them) sends Bluetooth traffic to the application processor, which then parses it. This is where things get a little more interesting. The app has a core event loop that can take commands from multiple sources and then makes a decision about which component to dispatch them to. Each command is of the following form:

AT+type,password,time,sequence,data$

where type is one of either ATH, QRY, CMD or DBG. The password is a TOTP derived from the IMEI of the scooter, the time is simply the current date and time of day, the sequence is a monotonically increasing counter and the data is a blob of JSON. The command is terminated with a $ sign. The code is fairly agnostic about where the command came from, which means that you can send the same commands over Bluetooth as you can over the cellular network that the Limes are connected to. Since locking and unlocking is triggered by one of these commands being sent over the network, it ought to be possible to do the same by pushing a command over Bluetooth.

Unfortunately for nefarious individuals, all commands sent over Bluetooth are ignored until an authentication step is performed. The code I looked at had two ways of performing authentication - you could send an authentication token that was derived from the scooter's IMEI and the current time and some other stuff, or you could send a token that was just an HMAC of the IMEI and a static secret. Doing the latter was more appealing, both because it's simpler and because doing so flipped the scooter into manufacturing mode at which point all other command validation was also disabled (bye bye having to generate a TOTP). But how do we get the IMEI? There's actually two approaches:

1) Read it off the sticker that's on the side of the scooter (obvious, uninteresting)
2) Take advantage of how the scooter's Bluetooth name is generated

Remember the 8 digits of hex I mentioned earlier? They're generated by taking the IMEI, encrypting it using DES and a static key (0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88), discarding the first 4 bytes of the output and turning the last 4 bytes into 8 digits of hex. Since we're discarding information, there's no way to immediately reverse the process - but IMEIs for a given manufacturer are all allocated from the same range, so we can just take the entire possible IMEI space for the modem chipset Lime use, encrypt all of them and end up with a mapping of name to IMEI (it turns out this doesn't guarantee that the mapping is unique - for around 0.01%, the same name maps to two different IMEIs). So we now have enough information to generate an authentication token that we can send over Bluetooth, which disables all further authentication and enables us to send further commands to disconnect the scooter from the network (so we can't be tracked) and then unlock and enable the scooter.
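
To make the name-derivation step concrete, here is a minimal Python sketch of the scheme described above, using the pycryptodome library. The exact packing of the 15 IMEI digits into an 8-byte DES block isn't specified in the post, so the BCD-style packing below, the function names, and the example IMEI prefix are all assumptions:

from Crypto.Cipher import DES  # pycryptodome

STATIC_KEY = bytes([0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88])

def ble_name(imei: str) -> str:
    # Assumption: the 15 IMEI digits are packed as BCD nibbles plus one
    # pad nibble, giving exactly one 8-byte DES block.
    block = bytes.fromhex(imei + "f")
    ciphertext = DES.new(STATIC_KEY, DES.MODE_ECB).encrypt(block)
    # Discard the first 4 bytes, keep the last 4 as 8 hex digits.
    return "Lime-" + ciphertext[4:].hex()

def build_name_to_imei_map(imei_range):
    # Brute force over a candidate IMEI range; for ~0.01% of names two
    # IMEIs collide, hence a list per name.
    mapping = {}
    for imei in imei_range:
        mapping.setdefault(ble_name(imei), []).append(imei)
    return mapping

# Hypothetical usage: enumerate the serial portion under a made-up 8-digit prefix.
candidates = (f"35847209{n:07d}" for n in range(10_000_000))
name_to_imei = build_name_to_imei_map(candidates)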

(Note: these are actual crimes)

This all seemed very exciting, but then a shock twist occurred - earlier this year, Lime updated their authentication method and now there's actual asymmetric cryptography involved and you'd need to engage in rather more actual crimes to obtain the key material necessary to authenticate over Bluetooth, and all of this research becomes much less interesting other than as an example of how other companies probably shouldn't do it.

In any case, congratulations to Lime on actually implementing security!

Categories: FLOSS Project Planets

Dave Hall Consulting: Announcing the DrupalSouth Diversity Scholarship

Planet Drupal - Fri, 2019-10-04 01:11

Over the years I have benefited greatly from the generosity of the Drupal Community. In 2011 people sponsored me to write lines of code to get me to DrupalCon Chicago.

Today Dave Hall Consulting is a very successful small business. We have contributed code, time and content to Drupal. It is time for us to give back in more concrete terms.

We want to help someone from an underrepresented group take their career to the next level. This year we will provide a Diversity Scholarship for one person, covering attendance at DrupalSouth, our 2 day Gettin’ Git training course, and 5 nights at the conference hotel. This will allow the recipient to attend the premier Drupal event in the region while also learning everything there is to know about git.

To apply for the scholarship, fill out the form by 23:59 AEST, 12 October 2019.

Categories: FLOSS Project Planets

Plasma Mobile: weekly update: part 1

Planet KDE - Thu, 2019-10-03 20:00

Starting from today, the Plasma Mobile team is beginning a weekly blog series to highlight the fixes and features landing in the various modules that make up Plasma Mobile.

Phone shell

At Akademy, Bhushan and Marco presented the Plasma Nano shell to the community. Earlier this week the changes to use plasma-nano as a base shell package landed in plasma-phone-components. The shell includes an updated look for the app launcher and for several of the shell interactions, including adding and removing widgets and changing the wallpaper.

Kirigami

A very common pattern for applications, both mobile and desktop, is to include some kind of menu which loads different “main pages” of the application. On a desktop application, you’ll have a sidebar on the left which switches the pages on the right. On a mobile application you’ll have this list either as the first page or in the left side drawer, accessible by swiping right. Since it’s a pattern that ended up being needed by many apps, we introduced a new dedicated API for it: PagePool and PagePoolAction. This API makes it possible to implement this paradigm with just a few lines of code.

Here is a minimal example of an application that implements this behavior with PagePool and PagePoolAction:

import QtQuick 2.6
import org.kde.kirigami 2.11 as Kirigami

Kirigami.ApplicationWindow {
    id: root

    Kirigami.PagePool {
        id: mainPagePool
    }

    globalDrawer: Kirigami.GlobalDrawer {
        title: "Hello App"
        titleIcon: "applications-graphics"
        actions: [
            Kirigami.PagePoolAction {
                text: i18n("Page1")
                icon.name: "speedometer"
                pagePool: mainPagePool
                page: "Page1.qml"
            },
            Kirigami.PagePoolAction {
                text: i18n("Page2")
                icon.name: "window-duplicate"
                pagePool: mainPagePool
                page: "Page2.qml"
            }
        ]
    }

    contextDrawer: Kirigami.ContextDrawer {
        id: contextDrawer
    }

    pageStack.initialPage: mainPagePool.loadPage("Page1.qml")
}

See it in action in the following video:

Maui Project

MauiKit, the UI framework, is now making further usage of Kirigami properties, components and helpers for visual consistency. It has now become more integrated into the platform by using KF5 libraries and has gained new features to improve the user interaction patterns both on mobile and desktop.

Settings application

Nicolas Fella updated the “Settings” app to fix the module activation when the app is already running, commit.

Code of the “Accounts” module was moved from the Settings app to kaccounts-integration, replacing the existing module there. This makes the desktop and mobile platforms use the same unified code base.

Jonah Brüchert added an “Information” module to the Settings application, which will eventually replace the “about-distro” module in the kinfocenter code base.

Applications

Dan Leinir Turthra Jensen fixed Peruse, making it usable on HiDPI screens, commit.

Jonah Brüchert introduced changes in Plasma Angelfish to port the settings screen to match the Kirigami look-and-feel and navigation and usage patterns. Changes were also introduced to split the global drawer and context drawer.

Bhushan Shah fixed a crash in the dialer code at startup, which was then tested by Luca Weiss on the Pinephone developer kit.

Index, the file manager, now makes use of KIO for file operations. It also uses the same model for bookmarks and places as the desktop. This means tighter integration with other apps and the system, providing progress notifications when moving, copying and removing files and when browsing remote locations like SFTP.

There is now a collapsible sidebar that, when collapsed, can be dragged to preview its contents. This is useful on small screens, such as on Plasma Mobile devices.

You can now also browse your files with the new Miller Column View, and open different places in different tabs.

With the Selection Bar interaction pattern you can select files across different places. This interaction pattern has been improved a lot and the selection state in the different views and directories is preserved.

Index incorporates a file preview which lets you quickly preview files and get basic information from text, images, videos and audio files. Coming soon: PDFs.

Nota, the simple text editor, has gained syntax highlighting support, and you can also open multiple files in different tabs, thanks to the KIO libraries and KQuickSyntaxHighlighter.

Buho, the note-taking and link-collecting app, can now sync notes using NextCloud’s Notes app API, and can benefit from the MauiKit Editor component for syntax highlighting to save snippets of code.

Some of the Maui apps are about to have stable releases, and you can try them out on Android as well!

Johan Ouwerkerk has made major improvements to the otpclient app in the last few weeks. otpclient is an app for generating two-factor login codes, similar to Google’s Authenticator or SailOTP.

Currently the basic feature works, but a lot of work remains to be done. One way you can help us a lot is by suggesting a better name for the app on the theme of “keys”, “two factor”, “login” and “authentication”.

Downstream

In postmarketOS, changes by Clayton Craft to update device support for the Librem 5 devkit were merged, along with changes by Bart Ribbers to update Plasma to the latest pre-release and the settings app to the latest revision. Bhushan created a merge request to update the mesa and kernel used for the PinePhone in postmarketOS. You can watch a video by the postmarketOS developer Martijn Braam, in which he puts together the final PinePhone prototype, including a sneak preview of Plasma Mobile!

You can also watch a demo of Plasma Mobile running on Librem 5 devkit using updated postmarketOS packages:

The KDE Neon team upgraded the Qt version from 5.12.3 to 5.13.1. This includes several bugfixes and new features. These upgrades have landed in the new edge image for Halium based devices like the Nexus 5X.

Want to help?

Next time your name could be here! To find out the right task for you, from promotion to core system development, check out Find your way in Plasma Mobile. We are also always happy to welcome new contributors on our public channels. See you there!

Categories: FLOSS Project Planets

FSF Events: Sign-making party at the FSF office to prepare for IDAD 2019!

GNU Planet! - Thu, 2019-10-03 17:25

The International Day Against DRM (IDAD), organized yearly by the Defective by Design campaign, is promising to be an exciting day of protest against Digital Restrictions Management (DRM). This year we are standing up for readers' rights against the restrictive behavior of DRM-encumbered textbooks and digital learning environments from groups like Pearson, and our protestors will gather at the Pearson Education offices in Boston on October 12th, 2019.

The day's success depends on the number of people showing up and, of course, on the visuals that we provide to supplement our message. And so, we're inviting you to our sign-making party at 17:30 on October 9th, at the Free Software Foundation (FSF) office in downtown Boston! We will provide a light dinner, art materials, and instructions to make your own protest signs, so all you have to do is join in the fun!

Volunteering at the sign-making party is a great way to meet new community members and to contribute to the fight against DRM. We are also still looking for people to join us in our Boston IDAD protest on October 12 at noon, as well as an evening hackathon, or collaboration session, on unrestricted and truly shareable educational materials in the FSF offices from 17:00 onward.

Please visit the Defective by Design Web site or our page on LibrePlanet for more information.

If you have any questions, please feel free to email info@defectivebydesign.org.

Categories: FLOSS Project Planets

Joey Hess: Project 62 Valencia Floor Lamp review

Planet Debian - Thu, 2019-10-03 16:30

From Target, this brass finish floor lamp evokes 60's modernism, updated for the mid-Anthropocene with a touch plate switch.

The integrated microcontroller consumes a mere 2.2 watts while the lamp is turned off, in order to allow you to turn the lamp on with a stylish flick. With a 5 watt LED bulb (sold separately), the lamp will total a mere 7.2 watts while on, making it extremely energy efficient. While off, the lamp consumes a mere 19 kilowatt-hours per year.

Though the lamp shade at first appears perhaps flimsy, while you are drilling a hole in it to add a physical switch, you will discover metal, though not brass all the way through. Indeed, this lamp should last for generations, should the planet continue to support human life for that long.

As an additional bonus, the small plastic project box that comes free in this lamp will delight any electrical enthusiast. As will the approximately 1 hour conversion process to delete the touch switch phantom load. The 2 cubic feet of styrofoam packaging are less delightful.

Two allen screws attach the pole to the base; one was missing in my lamp. Also, while the base is very heavily weighted, the lamp still rocks a bit when using the aftermarket switch. So I am forced to give it a mere 4 out of 5 stars.

Categories: FLOSS Project Planets

Vinta Software: PyGotham 2019: Talking Python in NY!

Planet Python - Thu, 2019-10-03 15:47
We are arriving at New York! Part of our team is on their way to PyGotham 2019, the biggest event of the Python community in New York. The experience last year was amazing, so we decided to come back. We are also sponsoring it this year, so if you are going to the event make sure to stop by our booth, we are bringing lots of cool swags and some br
Categories: FLOSS Project Planets

Molly de Blanc: Free software activities (September 2019)

Planet Debian - Thu, 2019-10-03 12:49

September marked the end of summer and the end of my summer travel.  Paid and non-paid activities focused on catching up with things I fell behind on while traveling. Towards the middle of September, the world of FOSS blew up, and then blew up again, and then blew up again.

Free software activities: Personal
  • I caught up on some Debian Community Team emails I’ve been behind on. The CT is in search of new team members. If you think you might be interested in joining, please contact us.
  • After much deliberation, the OSI decided to appoint two directors to the board. We will decide who they will be in October, and are welcoming nominations.
  • On that note, the OSI had a board meeting.
  • Wrote a blog post on rights and freedoms to create a shared vocabulary for future writing concerning user rights. I also wrote a bit about leadership in free software.
  • I gave out a few pep talks. If you need a pep talk, hmu.
Free software activities: Professional
  • Wrote and published the September Friends of GNOME Update.
  • Interviewed Sammy Fung for the GNOME Engagement Blog.
  • Did a lot of behind the scenes work for GNOME, that you will hopefully see more of soon!
  • I spent a lot of time fighting with CiviCRM.
  • I attended GitLab Commit on behalf of GNOME, to discuss how we implement GitLab.


Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association collaborates on new groundbreaking tech initiative as featured on TagTeamTalk

Planet Drupal - Thu, 2019-10-03 12:40

The Drupal Association collaborated on Automatic Updates, one of the Drupal Core Strategic Initiatives that was funded by the European Commission. We are excited to partner with MTech, Tag1 Consulting, and the European Commission FOSSA program on this new initiative and share information with you about its features.

Automatic Updates has three components.

Public safety messaging

This feature pulls a feed of alerts from Drupal.org directly into Drupal's administrative interface. This helps ensure that critical Public Service Announcements (PSAs) or Security Advisories (SAs) from the Drupal security team will be seen directly by site owners.

  • This provides yet another communication mechanism before an update so site owners can verify they are ready for an upcoming update, before it lands.

  • The feed of alerts comes directly from the feed of PSAs and SAs that the security team and release managers are already producing. 

  • This will vastly increase the ability of the Drupal project to get the word out about critical and highly critical updates - ensuring the community can respond fast. 

Readiness checks, or “Pre-flight” checks

These automated and extensible readiness checks are built into the Automatic Updates system to verify that a site doesn't have any blockers that would prevent it from being updated.

  • These checks are slated to run at least every 6 hours on a site via Drupal Cron and will inform site owners if they are ready to auto update their site.

  • Examples of the readiness checks include:

    • Is the site running on a read-only file system?

    • Have any files included in the update been modified from what they should be? 

    • Does the site still need to run database updates, etc.? 

There are about 8 or 9 of these readiness checks; some are warnings (e.g., cron isn’t running frequently enough to automatically update the site in a timely manner) and some are errors (e.g., the file system is read-only). Warnings won’t stop automatic updates, but errors will.

In place updates

Finally, the key pillar of the automatic updates feature is the update itself. Drupal.org generates a signed and secure package of files which can be overlaid atop the existing site files in order to apply the update. 

  • This update package is downloaded as a signed zip file from Drupal.org. The Automatic Updates module on the site then verifies the signature of the zip file using drupal/php-signify, which is based on BSD’s Signify and libsodium (a simplified sketch of this verification step follows this list).

  • It then proceeds to back up the files about to be updated and updates the site.

  • If all goes well, the site is upgraded. If something fails, the backup is restored.

  • Many workflows are supported, and you can customize how the updates are performed. Updates can flow through your CI/CD system, be staged for review and approval, and/or automatically go live.
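
To illustrate the verification step referenced above, here is a minimal Python sketch using PyNaCl's Ed25519 bindings, the primitive that BSD's Signify builds on. This is not the actual drupal/php-signify code; the function name and the detached-signature layout are assumptions:

from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def verify_update_package(archive_path, sig_path, public_key_bytes):
    """Check a detached Ed25519 signature over an update archive."""
    with open(archive_path, "rb") as f:
        payload = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()  # raw 64-byte Ed25519 signature (assumed layout)
    try:
        VerifyKey(public_key_bytes).verify(payload, signature)
        return True   # safe to back up the current files and apply the update
    except BadSignatureError:
        return False  # refuse the update and leave the site untouched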

In the past few weeks, the Drupal Association has been invited to participate in TagTeamTalks, a new recorded talk series about various tech projects supporting the Drupal project. This bi-weekly format provides real-time shared collaboration and informative discussions. 

TagTeamTalk launched its webinar focused on Automatic Updates this week. The group dives deep into the nuts and bolts of Drupal's groundbreaking Automatic Updates feature, and the strategic initiative sponsored by the Drupal Association, MTech, Tag1 Consulting, and the European Commission. Guests include Preston So (prestonso), Contributing Editor at Tag1 and Moderator of the TagTeamTalks; Michael Meyers (michalemeyers), Managing Director of Tag1; Lucas Hedding (heddn), Senior Architect and Data and Application Migration Expert at Tag1; Fabian Franz (Fabianx), Senior Technical Architect and Performance Lead at Tag1; and Tim Lehnen (hestenet) CTO at the Drupal Association. Read the TagTeamTalks blog.

“Content marketing is one of the most effective ways to promote your brand and capabilities - it has been a really powerful approach for the organizations that I’ve worked for,” said Michael. “The goal is to give our team an opportunity to talk about the cool things they’re working on and excited about and to share it with people. It helps get the word out about the latest developments in the open source communities we contribute to, and it promotes Tag1’s expertise - it helps us recruit new hires, and drives new business.”

Meyers is the Managing Director of Tag1 and has been involved with the Drupal community for over 15 years. He was Founder and CTO of the first venture-backed Drupal-based startup, CTO of the first Top 100 website on Drupal, and VP of Developer Relations at Acquia before joining Tag1. “The great thing about TagTeamTalks is that it doesn’t take a tremendous amount of effort or energy. Our engineers are subject matter experts. We decide on a topic for the week, spend 15 minutes brainstorming a rough outline as a guide, and then record the talk. We don’t want to be rehearsed. The conversation is what makes it dynamic and enjoyable for us to do, and for people to listen to. And, the team loves it because they want to talk about what they are working on, and this format doesn’t take a lot of time away from what they enjoy doing most - writing code.”

Hedding is one of the top 20 most active contributors to Drupal 8, and is also the Drupal Core Migrate Sub-system Maintainer, a core contribution mentor, and a D.O. project application reviewer. “Auto Updates has long been one of the most requested Drupal features; it is a capability the platform really needs that will help everyone using Drupal. Now that the alpha is available, we need early adopters to start using it, and we need feedback so we can continue to improve it. We also need to get more people involved in development, and we need to raise more money from organizations to support the project - it might sound like a simple feature, but it is actually really complex and requires a lot of effort. TagTeamTalks are a great way to get the word out and to enlist support from the Drupal community.”

Lucas added, “The European Commission provided generous funding for this initiative. The focus has been exclusively or largely on the European Commission’s features and functionality. The funding is running out very soon. There is a need for other people to help continue to build Automatic Updates, by adding the features they need with their developers or by providing funding.”

“It is critical for us to spread the message and make that call to action; that this is a community-driven effort and that without continued community support, it is not going to be as successful or as robust in the timeframe that we would like,” said Meyers.

The first year of funding from the European Commission provided for readiness checking, delivery of update 'quasi-patches,’ and a robust package signing system. The focus of this first phase of the Automatic Updates initiative has been on support for security updates in particular. 

In the second phase, as yet unfunded, we hope to extend this foundational work in the following ways:

  • Provide more robust Composer support. The first phase of the Automatic Updates project should be compatible with Composer-ready sites, but as a site’s composer.json file and vendor directory diverge from the defaults, more controls and thought need to be implemented.

  • Create an A/B front-end controller for the site being updated to further increase our confidence in the success of the update, allow for additional post-update testing, and provide an easy mechanism to roll back the update. This is also when updates will be able to move into Drupal core from the contrib project.

  • Expand to more types of updates (particularly further support for contrib updates), and also handle multiple updates in a row, for sites that are several versions behind. 

To accomplish all of this, we will continue to seek more funding and more partners. 

“I’m looking forward to seeing where this goes now that we have the first release out,” said Hedding. “There’s a larger community needed to get this initiative completed.”

The initial alpha version of the Automatic Updates module can be tested by the community right now. The plan is to: demonstrate Automatic Updates at DrupalCon Amsterdam this month, complete the scope of the funded work by the European Commission by the end of this year, and stabilize Automatic Updates by DrupalCon Minneapolis in May 2020. 

“The Automatic Updates initiative is designed to reduce the friction in keeping a Drupal site secure and up-to-date. The team behind the initiative is architecting a robust system, secure by design, and building components that can be shared with the broader PHP community,” said Tim Lehnen.

Many thanks to MTech, Tag1 Consulting, and the European Commission FOSSA program for funding this initiative. The Drupal Association is proud to be a part of this initiative.

Categories: FLOSS Project Planets

KDE & Qt Applications and High DPI Displays with Scaling

Planet KDE - Thu, 2019-10-03 12:40
What is a High DPI Display?

In the past, most displays had (or the OS pretended to have) around 96 PPI, more or less.

If your display differed a bit and UI elements were too small or too large, you mostly just adjusted your default font size a bit and were kind of happy.

In the last few years, more and more displays have appeared with much higher PPI values, which allows e.g. very crisp rendering of text.

I arrived late in that era; for my Linux machines I have by now started to use two 163 PPI displays.

Just tweaking your fonts doesn’t help here; all other things will still be unbearably small, even if you additionally increase e.g. icon sizes.

A solution for this is the current trend to just “scale” your UI by some factor; for my displays a factor of 1.5 leads to the most pleasant sizes.

How does Qt handle that?

A detailed description of how Qt tries to tackle the challenges of such displays can be found here.

More or less the gist of this is: In your application you work on logical pixels (in most cases) and Qt will do the hard work for you to then paint that in real pixels with the right scaling applied.

In practice, this isn’t fully transparent to the programmer. For example, as soon as you work with QPixmap, you will have to think a bit about where which pixel variant is used. You need to be careful not to mix up the size() of a QPixmap 1:1 with, let’s say, layout/widget sizes in such scaled scenarios, see here.
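
As a small illustration of that pitfall, here is a minimal PyQt5 sketch (the sizes and the ratio are arbitrary; the same applies to the C++ API):

from PyQt5.QtGui import QGuiApplication, QPixmap

app = QGuiApplication([])  # QPixmap requires a running GUI application

pixmap = QPixmap(300, 300)       # 300x300 *device* pixels
pixmap.setDevicePixelRatio(1.5)  # mark it as pre-scaled for a 1.5x screen

print(pixmap.size())                              # QSize(300, 300): device pixels
print(pixmap.size() / pixmap.devicePixelRatio())  # QSize(200, 200): logical size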

Fine, nice, but what does that mean in practice?

Let’s take a look at how this works out in practice using the latest stable release of KDE & Qt stuff:

  • KDE Plasma 5.16.5
  • KDE Applications 19.08.1
  • KDE Frameworks 5.62.0
  • Qt 5.13.1

My setup for the experiments below is two 163 PPI displays with a scale factor of 1.5.

I use some Manjaro Linux with open-source AMD drivers for some average middle-class card.

The screenshots are taken on my second screen. I used PNG to avoid JPEG artifacts making the real rendering artifacts unclear; bear with the large size.

Experiments on Kate & Konsole

Let’s show the current state with Kate & Konsole. Here is how Kate 19.08.1 looks if you start it on the second screen with the default configuration, with COPYING.LIB from ktexteditor.git as the file:

This looks kind of strange. What you see is actually not a split screen; even that is a pure rendering artifact. Actually, the whole Kate window is more or less one big artifact.

A user reported this in Bug 411965 - Rendering issue in dual screen hidpi setup. With my new setup I was able to reproduce that, on every Kate start :/

The user investigated this himself and came to the same conclusion as me: the culprit is a winId() call in KonsolePart. As Kate constructs the KonsolePart widget without a parent first and then inserts it into a layout, the code inside the part will call winId() on a non-native widget.

This is now fixed and backported to the 19.08 branch.

This means, with 19.08.2, you will have the following experience:

This somehow looks more like an actual working application.

For people not able to update, a workaround is to disable both the project and terminal plugins in your Kate setup; no ideal solution, but it makes Kate at least usable again.

Is everything fine with Kate now? Unfortunately not; let’s change my font size a bit and select things:

I selected text both in the text view (KTextEditor) and in the KonsolePart to show that the issue is not just a plain “we are too dumb to render things” in KTextEditor. You get equal artifacts with most of our software :(

I spent some time tracing these issues down in QTBUG-66036 - QTextLayout draw() rendering issues with "some" font sizes.

It turned out not to be a text-related issue at all.

To give a small outline of how KTextEditor and Konsole render stuff:

  • All things are in pure integer coordinates inside the applications.
  • More or less we render some uniform high lines of text.
  • Most background/selection coloring is done via fillRect with integer coordinates/sizes in both KTextEditor/Konsole.
  • KTextEditor paints parts of the text background via QTextLayout::setFormats.
  • KTextEditor and Konsole rely in some parts on the clipping to avoid over-painting.

Given that neither rendering “engine” works anywhere with non-integer coordinates and sizes, the artifacts seem strange. They only occur with fractional scaling, e.g. with 1.5, not with e.g. 2.0.

During debugging, three major issues that lead to the artifacts came up. I created separate bugs for them, as they are not text-rendering related:

QTBUG-78964 - fillRect + Anti-Aliasing + hi-dpi Scaling => missing filled pixels

If you use fillRect, even with purely integer coordinates and sizes, and the scaling is fractional with the render hint “QPainter::Antialiasing” turned on, it will miss filling one pixel at the border - for KTextEditor/Konsole mostly at the lower part of the filling. A workaround, now committed for the KTextEditor framework and Konsole, is to turn anti-aliasing off for large parts of the rendering. Only the parts that actually need it turn it on again; this doesn’t affect e.g. the text anti-aliasing.

QTBUG-78962 - setClipRect misbehavior for hi-dpi scaling with QRect overload vs. QRectF overload

KTextEditor uses setClipRect to avoid overpainting between individual lines. Unfortunately, like fillRect, setClipRect leads to one pixel being clipped away too early with fractional scaling. A workaround is to use the QRectF overload of setClipRect. Even though the passed QRectF has the same pure-integer coordinates, this avoids the clipping errors thanks to different internal handling. KTextEditor now uses this workaround.
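
To make the two workarounds concrete, here is a minimal paint-routine sketch. PyQt5 is used for brevity and the geometry is arbitrary; the real fixes live in the C++ code of KTextEditor and Konsole:

from PyQt5.QtCore import QRect, QRectF
from PyQt5.QtGui import QColor, QPainter
from PyQt5.QtWidgets import QWidget

class LineView(QWidget):
    def paintEvent(self, event):
        painter = QPainter(self)

        # Workaround for QTBUG-78964: keep anti-aliasing off while filling
        # integer-aligned background rects, so fractional scaling does not
        # drop a row of pixels at the border.
        painter.setRenderHint(QPainter.Antialiasing, False)
        painter.fillRect(QRect(0, 0, self.width(), 16), QColor("#ccddee"))

        # Workaround for QTBUG-78962: clip with the QRectF overload, even
        # for pure integer coordinates, to avoid the off-by-one clipping
        # of the QRect overload under fractional scaling.
        painter.setClipRect(QRectF(0, 0, self.width(), 16))

        # Text anti-aliasing can stay on just for the glyphs.
        painter.setRenderHint(QPainter.TextAntialiasing, True)
        painter.drawText(4, 12, "example line")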

QTBUG-78963 - Misbehavior of clipping done for ::paintEvent with hi-dpi fractional scaling

Even after all this is fixed, Konsole still draws some artifacts. Konsole relies more on paintEvent clipping correctly than KTextEditor does. Unfortunately, the internal clipping done for the paintEvent seems to have the same off-by-one rounding issues as the manual setClipRect with QRect instead of QRectF. As we can’t control this clipping region in Konsole, I see no easy workaround besides triggering full widget updates more often, which is costly.

This leads to the current state of the rendering in the master branch. I did select + deselect a bit of text in the terminal to trigger the paintEvent-related clipping failure; you can see a few small one-pixel-high selection leftovers below the selection area. You need to play a bit with the scaling factor and font size, too, to trigger the effects, as, like all rounding errors, they only show up for specific values.

I hope the Qt bugs linked above can be fixed in the near future, as I doubt we can add workarounds to all the affected applications (nor do we want to), and the clipping issue of the paintEvent, if it really is the reason for the last remaining Konsole artifacts, seems not really fixable in application code at all, short of moving away from fine-grained repaints.

Here are the matching KTextEditor and Konsole bug reports for the above issues. The relevant Qt bugs are linked there again, too.

Are these all the current issues? I assume not.

I think there are for sure more pitfalls hidden if you use fractional scaling with Qt & KDE applications. Some of our applications are even still horribly broken for any kind of scaling :(

We are open-source software; patches to improve the current situation are very welcome.

Perhaps you are able to fix one of the above Qt bugs, that would be great!

P.S. Floating point math is hard!

One thing that disturbed me while trying to get rid of the rendering artifacts is the somewhat careless choice of scaling factors people use.

I can understand that you want to have things 20% larger, but unfortunately a factor of 1.2 leads to rounding errors all over the place, as 1.2 is not the nice number it seems to be in the hardware double-precision floats we use.

If you want to avoid running into bad artifacts more than needed, please scale with some factor that is nicely representable as a float, like some multiple of 1⁄16 or 1⁄32.

For example, scaling with 1.25 will lead to far fewer issues than 1.2.

For details, just read up about how stuff like 0.1 or 0.2 is represented ;=)
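
For instance, a Python shell shows directly why 1.25 is harmless while 1.2 is not:

>>> from decimal import Decimal
>>> Decimal(1.2)     # not exactly representable as a binary float
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> Decimal(1.25)    # exactly representable (1 + 1/4)
Decimal('1.25')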

You can ignore that advice and scale like you want, but I won’t take care of the artifacts that remain for e.g. 1.1 scaling in some corner cases ;=)

Discussion

Feel free to join the discussion at the KDE reddit.

Categories: FLOSS Project Planets

wishdesk.com: Access control in Drupal 8 with the Rabbit Hole module

Planet Drupal - Thu, 2019-10-03 12:24
In this post, we describe how your website can benefit from one of the most interesting Drupal 8 modules for user access and page display control — the Rabbit Hole.
Categories: FLOSS Project Planets

mark.ie: Printing Values of a Parent Node from a Drupal Paragraphs Field

Planet Drupal - Thu, 2019-10-03 12:01

Someone asked in Slack today how to print the URL of the node that a paragraph is on. I was up to the challenge.

First off, you can do this with php in your .theme file quite easily, but I like to keep my template items in my templates.

Here's the code I used to first get the node id, then the node title, and then create a link from these two pieces of information.

{% set parent = paragraph._referringItem.parent.parent.entity %}

What this does is:

  1. Set a variable called parent - note it uses parent twice and then entity.

    You won't see parent or entity in your kint/dpm/dd output, which is a pity, because entity is great: it loads the entity you want to get information from.

  2. Use parent to get the node id and title values: parent.nid.value and parent.title.value.
  3. Create a link using these variables, as shown in the sketch below.
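
Putting the three steps together, the whole thing looks something like this in a paragraph template (the path() call is one way to build the link; a hardcoded /node/ path would also work):

{% set parent = paragraph._referringItem.parent.parent.entity %}
{% set nid = parent.nid.value %}
{% set title = parent.title.value %}
<a href="{{ path('entity.node.canonical', {'node': nid}) }}">{{ title }}</a>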

It's quite simple really. You can now use this approach to get other fields/data from your host node.

Categories: FLOSS Project Planets

OpenSense Labs: OpenSense Labs as a Silver Sponsor of DrupalCon Amsterdam 2019

Planet Drupal - Thu, 2019-10-03 11:34
The Drupal community is one of the largest open source communities in the world. Each year, we meet at Drupal Camps, meet-ups, and other events organized around the world. 

But the biggest event, DrupalCon, happens twice every year. It is a platform where developers, designers, and marketers come together to explore the most ambitious and cutting edge case studies. It offers prospective users, a glimpse into “the art of the possible” when you choose Drupal. It is a collaborative event where anyone can learn to use Drupal to make the Internet a better place. 

This year, OpenSense Labs is a silver sponsor of DrupalCon Europe 2019 to be held in Amsterdam, Netherlands.
  Join us for the Sessions


What will you learn?

  • How to devise the right content strategy for your agency
  • Which form of content works best?
  • How do you measure the success of your content strategy?
  • How do you create the right lean team to help you achieve your content goals?
  • Which channels should you use to market your content?



Theming Drupal 8 is a challenging job, and not many are aware of how to smartly theme e-commerce sites. Here are some major components which we will focus on in this session:

  • Product pages
  • Product-level field variables
  • Product-variation-level variables
  • Checkout flows
  • Creating a flow as per requirements
  • Customizing checkout progress
Be in touch!

We can’t wait to talk to you about the amazing offers our team has for you. Ask us about our Agency++ programs to scale higher and discover more about higher-ed and e-learning systems. We have loads to unveil at DrupalCon Amsterdam!
So swing by our booth; our team would love to meet and connect with you.

You can register now to be a part of the event.

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in September 2019

Planet Debian - Thu, 2019-10-03 11:08

FTP master

This month I accepted 246 packages and rejected 28. The overall number of packages that got accepted was 303.

Debian LTS

This was my sixty-third month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75h. During that time I did LTS uploads of:

    [DLA 1911-1] exim4 security update for one CVE
    [DLA 1936-1] cups security update for one CVE
    [DLA 1935-1] e2fsprogs security update for one CVE
    [DLA 1934-1] cimg security update for 8 CVEs
    [DLA 1939-1] poppler security update for 3 CVEs

I also started to work on opendmarc and spip but did not finish testing yet.
Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the sixteenth ELTS month.

During my allocated time I uploaded:

  • ELA-160-1 of exim4
  • ELA-166-1 of libpng
  • ELA-167-1 of cups
  • ELA-169-1 of openldap
  • ELA-170-1 of e2fsprogs

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new packages of …

I also uploaded new upstream versions of …

I improved packaging of …

On my Go challenge I uploaded golang-github-rivo-uniseg, golang-github-bruth-assert, golang-github-xlab-handysort, golang-github-paypal-gatt.

I also sponsored the following packages: golang-gopkg-libgit2-git2go.v28.

Categories: FLOSS Project Planets

Drudesk: Smart internal linking with D8 Editor Advanced Link module

Planet Drupal - Thu, 2019-10-03 10:30

The proper use of internal linking can turn any website into a powerful marketing tool. It is a vital part of effective content writing strategies. In this post, we explore why it is so, as well as review a helpful module for smart content linking in Drupal 8 — D8 Editor Advanced Link. Let’s go.

Categories: FLOSS Project Planets

PyCharm: 2019.3 EAP 4

Planet Python - Thu, 2019-10-03 10:11

This week’s Early Access Program (EAP) for PyCharm 2019.3 is available now! Download it from our website.

New for this version

Test templates for pytest support

Support for test creation using pytest templates was added: you can now create and edit test files based on these templates.

To create a test using these templates, first set pytest as the default test runner (Settings/Preferences | Tools | Python Integrated Tools, then select pytest in the Default test runner option). Then open the context menu from the method declaration you wish to create a test for, click Go To | Test, and select Create New Test. A dialog will open so you can configure your test file accordingly. Once you click OK in this dialog, PyCharm will generate a file with the appropriate test method.
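
As a rough illustration, the generated file is a plain pytest-style skeleton; the module and names below are hypothetical, and the real contents depend on your code and the options chosen in the dialog:

# test_calculator.py -- hypothetical result for a function named "add"
from calculator import add


def test_add():
    assert add(2, 3) == 5  # replace the placeholder with real assertions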

Fixed in this version
  • The “Go to Declaration”/”Go to Implementations” actions were corrected so that they properly lead to library implementations and not to other files.
  • We fixed an error that caused imports to be inserted before module-level dunder names. Now, in compliance with PEP 8, imports are placed after dunders.
  • An issue was solved that prevented using the quick fix to install missing packages when the interpreter is switched through the Status Bar.
  • Some issues were solved that prevented an interpreter from being removed through the project settings or changed via the interpreter widget.
  • For more details on what’s new in this version, see the release notes.
Interested?

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP.

If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP, and stay up to date. You can find the installation instructions on our website.

Categories: FLOSS Project Planets

Stack Abuse: File Management with AWS S3, Python, and Flask

Planet Python - Thu, 2019-10-03 08:46
Introduction

One of the key driving factors of technology growth is data. As technology advances, data has become more important and more crucial to the tools being built, and how to collect, store, secure, and distribute it has become a central concern.

This data growth has led to an increase in the use of cloud architecture to store and manage data while minimizing the hassle required to maintain consistency and accuracy. As consumers of technology, we generate and consume data, and this has necessitated elaborate systems to help us manage it.

The cloud architecture gives us the ability to upload and download files from multiple devices as long as we are connected to the internet. And that is part of what AWS helps us achieve through S3 buckets.

What is S3?

Amazon Simple Storage Service (S3) is an offering by Amazon Web Services (AWS) that allows users to store data in the form of objects. It is designed to cater to all kinds of users, from enterprises to small organizations or personal projects.

S3 can be used to store data ranging from images, video, and audio all the way up to backups, or website static data, among others.

An S3 bucket is a named storage resource used to store data on AWS. It is akin to a folder that is used to store data on AWS. Buckets have unique names and based on the tier and pricing, users receive different levels of redundancy and accessibility at different prices.

Access privileges to S3 Buckets can also be specified through the AWS Console, the AWS CLI tool, or through provided APIs and libraries.

What is Boto3?

Boto3 is a software development kit (SDK) provided by AWS to facilitate interaction with S3 APIs and other services such as Elastic Compute Cloud (EC2). Using Boto3, we can list all the S3 buckets, create EC2 instances, or control any number of AWS resources.

Why use S3?

We can always provision our own servers to store our data and make it accessible from a range of devices over the internet, so why should we use AWS's S3? There are several scenarios where it comes in handy.

First, AWS S3 eliminates all the work and costs involved in building and maintaining servers that store our data. We do not have to worry about acquiring the hardware to host our data or the personnel required to maintain the infrastructure. Instead, we can focus solely on our code and ensuring our services are in the best condition.

By using S3, we get to tap into the impressive performance, availability, and scalability capabilities of AWS. Our code will be able to scale effectively and perform under heavy loads and be highly available to our end users. We get to achieve this without having to build or manage the infrastructure behind it.

AWS offers tools to help us with analytics and audit, as well as management and reports on our data. We can view and analyze how the data in our buckets is accessed or even replicate the data into other regions to enhance the access of the data by the end-users. Our data is also encrypted and securely stored so that it is secure at all times.

Through AWS Lambda we can also react to data being uploaded to or downloaded from our S3 buckets, and reach users through configured alerts or reports for a more personalized and instant experience, as expected from technology.

Setting Up AWS

To get started with S3, we need to set up an account on AWS or log in to an existing one.

We will also need to set up the AWS CLI tool to be able to interact with our resources from the command line, which is available for Mac, Linux, and Windows.

We can install it by running:

$ pip install awscli

Once the CLI tool is set up, we can generate our credentials under our profile dropdown and use them to configure our CLI tool as follows:

$ aws configure

This command will give us prompts to provide our Access Key ID, Secret Access Key, default regions, and output formats. More details about configuring the AWS CLI tool can be found here.

Our Application - FlaskDrive Setup

Let's build a Flask application that allows users to upload and download files to and from our S3 buckets, as hosted on AWS.

We will use the Boto3 SDK to facilitate these operations and build out a simple front-end to allow users to upload and view the files as hosted online.

It is advisable to use a virtual environment when working on Python projects, and for this one we will use the Pipenv tool to create and manage our environment. Once set up, we create and activate our environment with Python3 as follows:

$ pipenv install --three
$ pipenv shell

We now need to install Boto3 and Flask that are required to build our FlaskDrive application as follows:

$ pipenv install flask
$ pipenv install boto3

Implementation

After setting up, we need to create the buckets to store our data and we can achieve that by heading over to the AWS console and choosing S3 in the Services menu.

After creating a bucket, we can use the CLI tool to view the buckets we have available:

$ aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "robley",
        "ID": "##########################################"
    },
    "Buckets": [
        {
            "CreationDate": "2019-09-25T10:33:40.000Z",
            "Name": "flaskdrive"
        }
    ]
}

We will now create the functions to upload, download, and list files on our S3 buckets using the Boto3 SDK, starting off with the upload_file function:

def upload_file(file_name, bucket):
    """
    Function to upload a file to an S3 bucket
    """
    object_name = file_name
    s3_client = boto3.client('s3')
    response = s3_client.upload_file(file_name, bucket, object_name)

    return response

The upload_file function takes in a file and the bucket name and uploads the given file to our S3 bucket on AWS.

def download_file(file_name, bucket):
    """
    Function to download a given file from an S3 bucket
    """
    s3 = boto3.resource('s3')
    output = f"downloads/{file_name}"
    s3.Bucket(bucket).download_file(file_name, output)

    return output

The download_file function takes in a file name and a bucket and downloads it to a folder that we specify.

def list_files(bucket):
    """
    Function to list files in a given S3 bucket
    """
    s3 = boto3.client('s3')
    contents = []
    for item in s3.list_objects(Bucket=bucket)['Contents']:
        contents.append(item)

    return contents

The function list_files is used to retrieve the files in our S3 bucket and list their names. We will use these names to download the files from our S3 buckets.

With our S3 interaction file in place, we can build our Flask application to provide the web-based interface for interaction. The application will be a simple single-file Flask application for demonstration purposes with the following structure:

.
├── Pipfile         # stores our application requirements
├── __init__.py
├── app.py          # our main Flask application
├── downloads       # folder to store our downloaded files
├── s3_demo.py      # S3 interaction code
├── templates
│   └── storage.html
└── uploads         # folder to store the uploaded files

The core functionality of our Flask application will reside in the app.py file:

import os

from flask import Flask, render_template, request, redirect, send_file
from s3_demo import list_files, download_file, upload_file

app = Flask(__name__)
UPLOAD_FOLDER = "uploads"
BUCKET = "flaskdrive"


@app.route('/')
def entry_point():
    return 'Hello World!'


@app.route("/storage")
def storage():
    contents = list_files("flaskdrive")
    return render_template('storage.html', contents=contents)


@app.route("/upload", methods=['POST'])
def upload():
    if request.method == "POST":
        f = request.files['file']
        f.save(os.path.join(UPLOAD_FOLDER, f.filename))
        upload_file(f"uploads/{f.filename}", BUCKET)
        return redirect("/storage")


@app.route("/download/<filename>", methods=['GET'])
def download(filename):
    if request.method == 'GET':
        output = download_file(filename, BUCKET)
        return send_file(output, as_attachment=True)


if __name__ == '__main__':
    app.run(debug=True)

This is a simple Flask application with 4 endpoints:

  • The /storage endpoint will be the landing page where we will display the current files in our S3 bucket for download, and also an input for users to upload a file to our S3 bucket,
  • The /upload endpoint will be used to receive a file and then call the upload_file() method that uploads a file to an S3 bucket
  • The /download endpoint will receive a file name and use the download_file() method to download the file to the user's device

And finally, our HTML template will be as simple as:

<!DOCTYPE html>
<html>
  <head>
    <title>FlaskDrive</title>
  </head>
  <body>
    <div class="content">
      <h3>Flask Drive: S3 Flask Demo</h3>
      <p>Welcome to this AWS S3 Demo</p>
      <div>
        <h3>Upload your file here:</h3>
        <form method="POST" action="/upload" enctype=multipart/form-data>
          <input type=file name=file>
          <input type=submit value=Upload>
        </form>
      </div>
      <div>
        <h3>These are your uploaded files:</h3>
        <p>Click on the filename to download it.</p>
        <ul>
          {% for item in contents %}
          <li>
            <a href="/download/{{ item.Key }}">{{ item.Key }}</a>
          </li>
          {% endfor %}
        </ul>
      </div>
    </div>
  </body>
</html>

With our code and folders set up, we start our application with:

$ python app.py

When we navigate to http://localhost:5000/storage we are welcomed by the following landing page:

Let us now upload a file using the input field and this is the output:

We can confirm the upload by checking our S3 dashboard, and we can find our image there:

Our file has been successfully uploaded from our machine to AWS's S3 Storage.

On our FlaskDrive landing page, we can download the file by simply clicking on the file name then we get the prompt to save the file on our machines.

Conclusion

In this post, we have created a Flask application that stores files on AWS's S3 and allows us to download the same files from our application. We used the Boto3 library alongside the AWS CLI tool to handle the interaction between our application and AWS.

We have eliminated the need to run our own servers to handle the storage of our files, and tapped into Amazon's infrastructure to handle it for us through the AWS Simple Storage Service. It has taken us a short time to develop, deploy and make our application available to end-users, and we can now enhance it to add permissions, among other features.

The source code for this project is available here on Github.

Categories: FLOSS Project Planets

Andrew Dalke: mmpdb crowdfunding consortium

Planet Python - Thu, 2019-10-03 08:00

How can we raise money to fund open source software development in cheminformatics? It's a hard question. Asking for donations doesn't work – companies might not even have a mechanism to make donations. Consultant-based funding doesn't work that well either, because the cost of developing a general-purpose tool is several times more expensive than developing a tool which only meets the specialized needs of one client, and few clients are willing to subsidize the rest of the field. Proprietary software development solves the problem by getting many people to pay for the same product. Can we learn from the success of proprietary software to get the funds which would certainly be useful in improving open source software?

I have started the mmpdb crowdfunding consortium to see if crowdfunding can be used to fund further development of the matched molecular pair program mmpdb. The deadline to join is 1 February 2020 – join now!

Background

mmpdb is an open source success story. It started as the mmpa program developed by Jameed Hussain and Ceara Rea. Their employer, GSK, contributed it to the RDKit project. There was no more GSK funding, but others could study and improve the code.

Roche then funded me, Christian Kramer, and Jérôme Hert to add several improvements:

  • better support for symmetry, which results in fully canonical pair descriptions
  • support for chirality, including matching chiral with prochiral structures
  • can include the chemical environment when finding pairs
  • generate property change statistics for each pair, environment, and property type
  • parallelized fragmentation
  • fragmentation can re-use fragmentations from a previous run
  • performance speedups during indexing
  • pair, environment, and property statistics are stored in a SQLite database
  • analysis tools to propose possible transforms to an input structure, or to predict property shifts between two structures
The final code was also contributed to the RDKit project.

Now what?

Mmpdb is popular. Several people at the 2019 RDKit User Group meeting in Hamburg presented work which used it or at least referenced it.

But who supports it? Who adds features? There is no more funding from GSK or Roche, so all we have is precious and scarce volunteer time. Others might fund their own developers to improve mmpdb, but the code is pretty complicated and it will take a while for new developers to get up to speed.

Sustainability

There is a long and ongoing discussion about how to fund open source projects. I won't even attempt to summarize them here, though I will point to Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure as one starting point.

My question is, are mmpdb users willing to fund its further development? If not, the project is not sustainable. I believe they are willing; the problem is that it's hard to justify paying money for software anyone can download for free.

Crowdfunding consortium

I previously tried to develop chemfp as a purely open source commercial product. When customers bought the product, they got the software under the MIT license. This ended up being difficult, for reasons I'll likely blog about later. I now also offer chemfp with proprietary licensing, at a cheaper price.

With mmpdb, I am trying crowdfunding, along the lines of Kickstarter. The basic goals are:

  • Postgres support
  • new command-line option ("proprulecat") to export property tables as CSV
Everyone who joins will get these two features, under the existing 3-clause BSD license.

Beyond that are stretch goals. The one many people want is to store the chemical environment in the database as a fragment SMILES, rather than a hex-encoded SHA256 hash of the rooted Morgan fingerprints.

As more people sign up, I'll develop mmpdb further. Many of the stretch goals are related to documentation and testing. Mmpdb was developed as a research project, and needs those sorts of infrastructure improvements to allow future growth.

If enough people join, there will definitely be future crowdfunding efforts, perhaps a web interface, or support for categorial statistics, or other features people have asked me about.

I don't think people will pay for features that are available for free, so these changes will not be made available to the public until specific funding goals are reached.

How do you explain crowdfunding to accounting?

Don't. (Unless you really want to.) Tell them you are going to purchase a new version of mmpdb with Postgres and "proprulecat" support. You will receive these within two weeks of sending me – that is, my Sweden-based software company – a purchase order.

In addition, purchase includes membership in the mmpdb consortium. As more people join, and additional funding goals met, I will continue to improve mmpdb, and you will get those improvements as part of your membership.

Join now!

Categories: FLOSS Project Planets
