Feeds

Fabio Zadrozny: PyDev 5.9.2 released (Debugger improvements, isort, certificate)

Planet Python - Tue, 2017-08-15 07:34
PyDev 5.9.2 is now available for download.
This version integrates the performance improvements made in PyDev.Debugger for Python 3.6 (these use the new hook made available by Python 3.6 and change bytecode to add calls to the debugger, so that there's less overhead during debugging -- note that this only really takes effect if breakpoints are added before a given piece of code is loaded; adding or removing breakpoints afterwards falls back to the previous approach of tracing).
Another nice feature in this release is that isort (https://github.com/timothycrosley/isort) can be used as the default engine for sorting imports (it needs to be configured in preferences > PyDev > Editor > Code Style > Imports -- note that in that same preferences dialog you may save the settings to a project, not only globally).
There were also a number of bug fixes... in particular, one that prevented text searches from working if the user had another plugin which used a different version of Lucene was really nasty... http://www.pydev.org has more details on the changes.
This is also the first release which is signed with a proper certificate (provided by Comodo) -- so, it's nice that Eclipse won't complain that the plugin is not signed when it's being installed. Although I discovered it isn't as useful as I thought: it does work as intended for Eclipse plugins, but on Windows, even signing the LiClipse installer will still show a dialog for users (there's a more expensive certificate with extended validation which could be used, but I didn't go for that one). On Mac OS I haven't even tried to sign, as it seems Comodo certificates are worthless there (the only choice is having a developer subscription from Apple and using a certificate Apple gives you... the verification they do seems comparable to what Comodo does, which uses a DUNS number, so it's apparently just a matter of them wanting more $$$/control, not really being more secure). So, currently, Mac users will still get unsigned binaries (the sha256 is provided for users who want to check that what they download is what's being distributed).
Categories: FLOSS Project Planets

foss-gbg gets going again

Planet KDE - Tue, 2017-08-15 06:07

It is time for foss-gbg to get going again. A week from now, Zeeshan will talk about The good kind of Rust. Tickets are free, so if you are in Gothenburg, feel free to drop by for some snacks, the Rust talk and some lightning talks.

foss-gbg is a local group sharing ideas and knowledge around Free and Open Source Software in the Gothenburg area.

Categories: FLOSS Project Planets

Alpha APX Details

Planet KDE - Tue, 2017-08-15 06:03

I just want to share some links. This is the type of article that makes me follow Raymond Chen.

As he lives outside of the open source space, his blog might be a gem that many have missed.

Categories: FLOSS Project Planets

Martin Fitzpatrick: KropBot: Multiplayer Internet-controlled robot

Planet Python - Tue, 2017-08-15 05:00

KropBot is a little multiplayer robot you can control over the internet. Co-operate with random internet strangers to drive around in circles and into walls.

If it is online, you can drive the KropBot yourself!

KropBot is dead. 15 minutes after posting to Planet Python, Kropbot was mercilessly driven down a flight of stairs. He is no more, he is kaput. He is an ex-robot.

Requirements

If you already have a working 2-motor robot platform you can skip straight to the code. The code shown below will work with any Raspberry Pi with WiFi (Zero W recommended) and a MotorHAT.

  • Raspberry Pi Zero W or Raspberry Pi 3B
  • Raspberry Pi Camera
    If you’re using a Zero W you need the specific Pi Zero camera cable (it’s smaller on the Pi end)
  • MotorHAT
    There are official kits from AdaFruit but cheaper options are also available.
  • 2-motor robot platform/chassis — In this example I’m using a rubber tank chassis but this required some hacking (see below) to get a 4xAA battery pack inside and is not recommended. Try this shock-absorbed robot tank-chassis or a 4WD car base. If you want to go off road avoid bases with 2 wheels + a bumper which are only suitable for flat smooth floors.
  • 4x AA (or bigger, check motor specifications) battery pack to power motors.
  • Li-Ion battery pack for Pi Zero — I recommend using a USB backup powerbank as they’re cheap, easy to charge and will easily run a Pi Zero W for a few hours. If you want longer runtime or a permanent installation you could also look into using multiple 18650 cells with a charging board.

I also needed —

  • Short lengths of wire (to extend the short wires in the base)
  • Soldering iron, for extending wires and soldering the switch
  • Heatshrink tubing to cover solder joints
  • Card, Blu-Tack and tape to hold the camera in place
Build Chassis

The chassis I used came pre-constructed, with motors in place and seems to be the deconstructed base of a toy tank. The sales photo slightly oversells its capabilities.

There was no AA battery holder included, just a space behind a flap labelled 4.8V. The space measured the size of 4xAA batteries (giving 6V total) but the 4xAA battery holder I ordered didn’t fit. However, a 6xAA battery pack I had could be cut down to size by lopping off 2 holders and rewiring. Save yourself the hassle and get a chassis with a battery holder.

The pack is still a bit too deep, but the door can be closed with a screwdriver to wedge it shut.

An ON/OFF switch is provided in the bottom of the case, which I wanted to use to switch off the motor power (to save battery life when the Pi wasn't running). The power leads were fed through to the upper side but were too short to reach the switch, so they were first extended before being soldered to the switch.

MotorHAT

The MotorHAT (here using a cheap knock-off) is an extension board for controlling 4 motors (or stepper motors) from a Pi, with speed and direction control. This is a shortcut, but you could also use L293D motor drivers together with PWM on the GPIO pins for speed control.

Once wired into the power supply (AA batteries) the MotorHAT power LED should light up.

The motor supply is wired in separately to the HAT, and the board keeps this isolated from the Pi supply/GPIO. The + lead is wired in through the switch as already described.

Next the motors are wired into the terminals, using the outer terminals for each motor. The left motor goes on M1 and the right on M2. Getting the wires the right way around (so forward = forward) is a case of trial and error: try one way, then reverse it if it's wrong.

Once that’s wired up, you can pop the MotorHAT on top of the Pi. Make sure it goes the right way around — the MotorHAT should be over the Pi board, on the top side.

Pi Camera

The Pi Camera unit attaches to the camera port on the Pi using the cable. If using a Pi Zero you need a specific camera cable which is narrower at the Pi end. Make sure the plastic widget is on the port to hold the cable, line the cable up (metal side up) and push it in.

The camera assembly isn’t fancy, it’s just tacked to the edge of the base with some card and tape.

Once that’s done you can power up the Pi with a powerbank.

The code

The robot control code is split into 3 parts —

  1. The robot control code, which handles the inputs, moves the robot and streams the camera
  2. The server, which receives inputs from the (multiple) clients and forwards them to the robot in batches, and receives the single stream of camera images from the robot, broadcasting them to the clients.
  3. The client code which sends user inputs to the server, and renders the images being sent in return.

The server isn’t strictly necessary in this setup — the Pi itself could happily run a webserver to serve the client interface, and then interface directly with clients via websockets. However, that would require the robot to be accessible on the internet, and streaming the camera images to multiple clients would add quite a bit of overhead. Since we’re hoping to support a largish number (>25) of simultaneous clients, it’s preferable to offload that work somewhere other than the Pi. The downside is that this two-hop approach adds some control/refresh delay and makes things slightly less reliable (more on that later).

The full source is available on Github.

The client app.js

The browser part, which provides the user interface for control and a display for streaming images from the robot, was implemented in AngularJS to keep things simple. Just the controller code is shown below.

var robotApp = angular.module('robotApp', []);

robotApp.controller('RobotController', function ($scope, $http, $interval, socket) {
    var uuidv4 = function () {
        return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
            var r = Math.random() * 16 | 0,
                v = c == 'x' ? r : (r & 0x3 | 0x8);
            return v.toString(16);
        });
    }

    $scope.directions = [4,5,6,7,8,1,2,3];
    $scope.uuid = uuidv4();
    $scope.data = {
        selected: null,
        direction: null,
        magnitude: 0,
        n_controllers: 0,
        total_counts: {}
    }
    $scope.min = window.Math.min;

    socket.emit('client_ready');

    $scope.set_direction = function(i) {
        $scope.data.selected = i;
        $scope.send_instruction()
    }

    $scope.send_instruction = function() {
        socket.emit('instruction', {
            user: $scope.uuid,
            direction: $scope.data.selected,
        })
    }

    // Instruction timeout is 3 seconds; ping to ensure we stay live.
    setInterval($scope.send_instruction, 1500);

The main data receive block accepts all data from the server and assigns it onto the $scope to update the interface. Image data from the camera is sent as raw bytes, so to get it into an img element we need to convert it to a URL the browser can use. This would be more efficient to do on the robot/server (avoiding every client having to perform this operation), but base64-encoding the data there would increase the size of the data transmitted by about 1/3.

    // Receive updated status via socket and apply data.
    socket.on('updated_status', function (data) {
        $scope.data.direction = data.direction;
        $scope.data.magnitude = data.magnitude;
        $scope.data.n_controllers = data.n_controllers;
        $scope.data.total_counts = data.total_counts;
    });

    // Receive updated image via socket and apply data.
    socket.on('updated_image', function (data) {
        var blob = new Blob([data], {type: "image/jpeg"});
        $scope.data.imageUrl = window.URL.createObjectURL(blob);
    });
});
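As a rough sanity check of that one-third figure, here is a minimal Python sketch of my own (illustrative byte counts, not taken from the real camera stream); base64 produces 4 output bytes for every 3 input bytes:

import base64

raw = bytes(range(256)) * 40      # 10,240 bytes of stand-in "image" data
encoded = base64.b64encode(raw)   # 4 output bytes for every 3 input bytes

print(len(raw))                   # 10240
print(len(encoded))               # 13656
print(len(encoded) / len(raw))    # ~1.33, i.e. roughly a third larger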

Finally, we define a set of wrappers to integrate sockets in our AngularJS application. This ensures that changes to the scope that are triggered by websocket events are detected — and the view updated.

robotApp.factory('socket', function ($rootScope) {
    var socket = io.connect();
    return {
        on: function (eventName, callback) {
            socket.on(eventName, function () {
                var args = arguments;
                $rootScope.$apply(function () {
                    callback.apply(socket, args);
                });
            });
        },
        emit: function (eventName, data, callback) {
            socket.emit(eventName, data, function () {
                var args = arguments;
                $rootScope.$apply(function () {
                    if (callback) {
                        callback.apply(socket, args);
                    }
                });
            })
        }
    };
});

The server app.py

Note the (optional) use of ROBOT_WS_SECRET to change the endpoints used for communicating with the robot. This is a simple way to prevent a client connecting, pretending it’s the robot, and broadcasting naughty images to everyone. If you set this value, just make sure you set it to the same thing on both the robot and server (via the environment variable) or they won’t be able to talk to one another.

import os
import time

from flask import Flask
from flask_socketio import SocketIO, join_room

app = Flask(__name__)
app.config.from_object(os.environ.get('APP_SETTINGS', 'config.Config'))
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.secret_key = app.config['SECRET_KEY']

socketio = SocketIO(app)

# Use a secret WS endpoint for the robot.
robot_ws_secret = app.config['ROBOT_WS_SECRET']

# Buffer incoming instructions and emit direct to the robot
# on each image cycle.
instruction_buffer = {}
latest_robot_state = {}

INSTRUCTION_DURATION = 3


@app.route('/')
def index():
    """
    Return the client interface.
    :return: raw index.html response
    """
    return app.send_static_file('index.html')

To stop dead clients from continuing to control the robot, we need to expire instructions after INSTRUCTION_DURATION seconds have elapsed.

def clear_expired_instructions():
    """
    Remove all expired instructions from the buffer. Instructions are expired
    if their age > INSTRUCTION_DURATION. This needs to be low enough that the
    robot stops performing a behaviour when a client leaves, but high enough
    that an active client's instructions are not cleared due to lag.
    """
    global instruction_buffer
    threshold = time.time() - INSTRUCTION_DURATION
    instruction_buffer = {k: v for k, v in instruction_buffer.items()
                          if v['timestamp'] > threshold}

All communication between the server, the clients and the robot is handled through websockets. We assign clients to a specific socketIO “room” so we can communicate with them in bulk, without also sending the same data to the robot.

@socketio.on('client_ready')
def client_ready_join_room(message):
    """
    Receive the ready instruction from (browser) clients
    and assign them to the client room.
    """
    join_room('clients')

User instructions are received regularly from clients (timed-ping) with a unique user ID being used to ensure only one instruction is stored per client. Instructions are timestamped as they arrive, so they can be expired later.

@socketio.on('instruction')
def user_instruction(message):
    """
    Receive and buffer direction instruction from client.
    :return:
    """
    # Perform validation on inputs; direction must be in the range 1-8 or None.
    # Anything else is interpreted as None (=STOP) from that client.
    message['direction'] = message['direction'] if message['direction'] in range(1, 9) else None
    instruction_buffer[message['user']] = {
        'direction': message['direction'],
        'timestamp': int(time.time())
    }

The robot camera stream and instruction/status loop run separately, and connect to different sockets. Updates to status are forwarded onto the clients and responded to with the latest instruction buffer. The camera image is forwarded onwards with no response.

@socketio.on('robot_update_' + robot_ws_secret)
def robot_update(message):
    """
    Receive the robot's current status message (dict) and store for future
    forwarding to clients. Respond with the current instruction buffer directions.
    :param message: dict of robot status
    :return: list of directions (all clients)
    """
    # Forward latest state to clients.
    socketio.emit('updated_status', message, json=True, room='clients')
    # Clear expired instructions and return the remainder to the robot.
    clear_expired_instructions()
    return [v['direction'] for v in instruction_buffer.values()]


@socketio.on('robot_image_' + robot_ws_secret)
def robot_image(data):
    """
    Receive latest image data and broadcast to clients.
    :param data:
    """
    # Forward latest camera image.
    socketio.emit('updated_image', data, room='clients')


if __name__ == '__main__':
    socketio.run(app)

The robot robot.py

The robot is implemented in Python 3 and runs locally on the Pi (see later for instructions to run automatically at startup).

import atexit
from collections import Counter
from concurrent import futures
from io import BytesIO
import math, cmath
import os
import time

from Adafruit_MotorHAT import Adafruit_MotorHAT, Adafruit_DCMotor
from picamera import PiCamera
from socketIO_client import SocketIO

The following constants can be used to configure the behaviour of the robot; the values below were found to produce a stable and reasonably responsive bot.

SPEED_MULTIPLIER = 200
UPDATES_PER_SECOND = 5
CAMERA_QUALITY = 10
CAMERA_FPS = 5

# A store of incoming instructions from clients, stored as a list of client directions.
instructions = []

# Use a secret WS endpoint for the robot.
robot_ws_secret = os.getenv('ROBOT_WS_SECRET', '')

The robot uses 8 direction values for the compass points and intermediate directions. The exact behaviour of each direction is defined in the DIRECTIONS dictionary, as a tuple of (direction, multiplier) pairs for the two motors.

# Conversion from numeric inputs to motor instructions + multipliers. The multipliers
# are adjusted for each direction, e.g. forward is full-speed, turn is half.
DIRECTIONS = {
    1: ((Adafruit_MotorHAT.FORWARD, 0.75), (Adafruit_MotorHAT.FORWARD, 0.5)),
    2: ((Adafruit_MotorHAT.FORWARD, 0.5), (Adafruit_MotorHAT.BACKWARD, 0.5)),
    3: ((Adafruit_MotorHAT.BACKWARD, 0.75), (Adafruit_MotorHAT.BACKWARD, 0.5)),
    4: ((Adafruit_MotorHAT.BACKWARD, 1), (Adafruit_MotorHAT.BACKWARD, 1)),
    5: ((Adafruit_MotorHAT.BACKWARD, 0.5), (Adafruit_MotorHAT.BACKWARD, 0.75)),
    6: ((Adafruit_MotorHAT.BACKWARD, 0.5), (Adafruit_MotorHAT.FORWARD, 0.5)),
    7: ((Adafruit_MotorHAT.FORWARD, 0.5), (Adafruit_MotorHAT.FORWARD, 0.75)),
    8: ((Adafruit_MotorHAT.FORWARD, 1.5), (Adafruit_MotorHAT.FORWARD, 1.5)),
}

# Initialize motors, and define left and right controllers.
motor_hat = Adafruit_MotorHAT(addr=0x6f)
left_motor = motor_hat.getMotor(1)
right_motor = motor_hat.getMotor(2)


def turnOffMotors():
    """
    Shutdown motors and unset handlers. Called on exit to ensure the robot is stopped.
    """
    left_motor.run(Adafruit_MotorHAT.RELEASE)
    right_motor.run(Adafruit_MotorHAT.RELEASE)

The input direction angles of all clients are combined using complex math: we convert the angles into vectors, sum them, and calculate back the angle of the resulting single point. You could also do this using the sum of sines and cosines.

def average_radians(list_of_radians):
    """
    Return the average of a list of angles, in radians, and its amplitude.

    We calculate a vector for each angle, using a fixed distance, add up the
    x, y sums of the resulting vectors, then work back to an angle + magnitude.
    :param list_of_radians:
    :return:
    """
    vectors = [cmath.rect(1, angle)   # length 1 for each vector;
               if angle is not None
               else cmath.rect(0, 0)  # or 0,0 for null (stopped)
               for angle in list_of_radians]
    vector_sum = sum(vectors)
    return cmath.phase(vector_sum), abs(vector_sum)


def to_radians(d):
    """
    Convert 7-degrees values to radians.
    :param d:
    :return: direction in radians
    """
    return d * math.pi / 4 if d is not None else None


def to_degree7(r):
    """
    Convert radians to 'degrees' with a 0-7 scale.
    :param r:
    :return: direction in 7-value degrees
    """
    return round(r * 4 / math.pi)


def map1to8(v):
    """
    Limit v to the range 1-8 or None, with 0 being converted to 8 (straight ahead).
    This is necessary because the back-calculation to degree7 can yield negative
    values, yet the input to calculate_average_instruction must use 1-8 to weight
    forward instructions correctly.
    :param v:
    :return: v, in the range 1-8 or None
    """
    if v is None or v > 0:
        return v
    return v + 8  # if 0, return 8
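For reference, here is a minimal sketch of my own (not from the KropBot source) of the sum-of-sines-and-cosines alternative mentioned above; it returns the same (angle, magnitude) pair as the cmath version for the same inputs:

import math

def average_radians_trig(list_of_radians):
    # Treat None (stopped) as a zero-length vector, as the cmath version does.
    angles = [a for a in list_of_radians if a is not None]
    x = sum(math.cos(a) for a in angles)       # sum of vector x components
    y = sum(math.sin(a) for a in angles)       # sum of vector y components
    return math.atan2(y, x), math.hypot(x, y)  # recovered angle and magnitude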

We iterate over all the client instructions, calculate an average angle and magnitude, and count totals for each direction (to be sent back to the clients).

def calculate_average_instruction():
    """
    Return a dictionary of counts for each direction option in the current
    instructions, plus the average direction and its magnitude.

    Directions are stored in the numeric range 0-7; we first convert these
    imaginary degrees to radians, then calculate the average radians by adding
    vectors. Once we have that value in radians we can convert back to our own
    scale, which the robot understands. The amplitude value gives us a speed.

        0 = Forward
        7/1 = Forward left/right (slight)
        6/2 = Turn left/right (stationary)
        5/3 = Backwards left/right (slight)
        4 = Backwards

    :return: dict of total_counts, direction, magnitude
    """
    # If instructions remain, calculate the average.
    if instructions:
        directions_v, direction_rads = zip(*[(d, to_radians(d)) for d in instructions])
        total_counts = Counter([map1to8(v) for v in directions_v])
        rad, magnitude = average_radians(direction_rads)
        direction = map1to8(to_degree7(rad))
        # Treat very small magnitudes as stopped.
        if magnitude < 0.05:
            magnitude = 0
            direction = None
        return {
            'total_counts': total_counts,
            'direction': direction,
            'magnitude': magnitude
        }
    else:
        return {
            'total_counts': {},
            'direction': None,
            'magnitude': 0
        }

The average control data is converted to motor instructions using the DIRECTIONS dictionary defined earlier. Because of the way that multiple client instructions are combined and then multiplied, motor speeds can end up > 255, so we additionally need to cap them.

def control_robot(control):
    """
    Take the current robot control instructions and apply them to the motors.

    If direction is None, all-stop; otherwise calculate a speed for each motor
    using a combination of DIRECTIONS, magnitude and SPEED_MULTIPLIER, capped at 255.
    :param control:
    """
    if control['direction'] is None:
        # All stop.
        left_motor.setSpeed(0)
        right_motor.setSpeed(0)
        return

    direction = int(control['direction'])
    left, right = DIRECTIONS[direction]
    magnitude = control['magnitude']

    left_motor.run(left[0])
    left_speed = int(left[1] * magnitude * SPEED_MULTIPLIER)
    left_speed = min(left_speed, 255)
    left_motor.setSpeed(left_speed)

    right_motor.run(right[0])
    right_speed = int(right[1] * magnitude * SPEED_MULTIPLIER)
    right_speed = min(right_speed, 255)
    right_motor.setSpeed(right_speed)

We collect incoming messages and store them in the instruction list. This is emptied out on each loop before the callback should hit this function — we delete it in the main loop in case the callback fails.

def on_new_instruction(message):
    """
    Handler for incoming instructions from clients. Instructions are received,
    combined and expired on the server, so only active instructions (one per
    client) are received here.
    :param message: dict of all current instructions from all clients.
    :return:
    """
    instructions.extend(message)

Streaming the images from the robot is heavy work, so we spin it off into its own process, using a separate websocket to the server. The worker initializes the camera, opens a socket to the server and then iterates over the Pi camera capture_continuous iterator — a never-ending generator of images. Image data is emitted as raw bytes.

def streaming_worker():
    """
    A self-contained worker for streaming the Pi camera over websockets to the
    server as JPEG images. Initializes the camera, opens the websocket then
    enters a continuous capture loop, with each snap transmitted.
    :return:
    """
    camera = PiCamera()
    camera.resolution = (200, 300)
    camera.framerate = CAMERA_FPS

    with BytesIO() as stream, SocketIO('https://kropbot.herokuapp.com', 443) as socketIO:
        # capture_continuous is an endless iterator. Using video port + low quality for speed.
        for _ in camera.capture_continuous(stream, format='jpeg',
                                           use_video_port=True, quality=CAMERA_QUALITY):
            stream.truncate()
            stream.seek(0)
            data = stream.read()
            socketIO.emit('robot_image_' + robot_ws_secret, bytearray(data))
            stream.seek(0)

The main loop controls the sending of robot status updates to the server, and receiving of client instructions in return. The loop is throttled to UPDATES_PER_SECOND by setting wait locks at the end of each iteration. This is required to stop flooding the server with useless packets of data.

if __name__ == "__main__": # Register our function to disable motors when we shutdown. atexit.register(turnOffMotors) with futures.ProcessPoolExecutor() as executor: # Execute our camera streamer 'streaming_worker' in a separate process. # This runs continuously until exit. future = executor.submit(streaming_worker) with SocketIO('https://kropbot.herokuapp.com', 443) as socketIO: while True: current_time = time.time() lock_time = current_time + 1.0 / UPDATES_PER_SECOND # Calculate current average instruction based on inputs, # then perform the action. instruction = calculate_average_instruction() control_robot(instruction) instruction['n_controllers'] = len(instructions) # on_new_instruction is a callback to handle the server's response. socketIO.emit('robot_update_' + robot_ws_secret, instruction, on_new_instruction) # Empty all current instructions before accepting any new ones, # ensuring that if we lose contact with the server we stop. del instructions[:] socketIO.wait_for_callbacks(5) # Throttle the updates sent out to UPDATES_PER_SECOND (very roughly). time.sleep(max(0, lock_time - time.time())) Starting up Server

The code repository contains the files needed to get this set up on a Heroku host. First we need to create the app on Heroku, and add this repository as a remote for our git repo.

heroku create <app-name>
heroku git:remote -a <app-name>

The app name will be the subdomain your app is hosted under: https://<app-name>.herokuapp.com. To push the code up and start the application you can now use:

git push heroku master

Note that if you are going to set a ROBOT_WS_SECRET on your robot (optional, but recommended) you will need to set this on Heroku too. Random letters and numbers are fine.

heroku config:set ROBOT_WS_SECRET=<your-ws-secret>

Robot

Copy the robot code over to your Pi, e.g. using scp

scp robot.py pi@raspberrypi.local:/home/pi

You can then SSH in with ssh pi@raspberrypi.local. First install the PiCamera and then MotorHAT libraries from Adafruit (these work for the non-official boards too).

git clone https://github.com/adafruit/Adafruit-Motor-HAT-Python-Library.git
cd Adafruit-Motor-HAT-Python-Library
sudo apt-get install python3-dev python3-picamera
sudo python3 setup.py install

You can then install the remaining Python dependencies with pip, e.g.

sudo pip3 install git+https://github.com/wwqgtxx/socketIO-client.git

We are using a non-standard version of SocketIO_client in order to get support for binary data streaming. Otherwise we would need to base64-encode the data on the robot, increasing its size and adding load.

Finally, we need to enable the camera on the Pi. To do this open up the Raspberry Pi configuration manager and enable the camera interface.

raspi-config

Once the camera is enabled (you may need to restart), you can start the robot with:

python3 robot.py

The simplest way to get the robot starting on each boot is to use cron. Edit your crontab using crontab -e and add a line like the following, replacing your ROBOT_WS_SECRET value:

@reboot ROBOT_WS_SECRET=<your_ws_value> python3 /home/pi/robot.py

Save the file and exit back to the command line. If you run sudo reboot your Pi will restart, and the robot controller will start up automatically. Open your browser to your hosted Heroku app and as soon as the robot is live the camera stream should begin.

Notes

The configuration of the robot (fps/instructions per second) has been chosen to give decent responsiveness while not overloading the Heroku server. If you’re running on a beefier host you can probably increase these values a bit.

Streaming JPEG images is very inefficient (since each frame is encoded independently, a static stream still uses data). Using an actual video format e.g. h264 to stream would allow a huge improvement in quality with the same data rate. However, this does complicate the client (and robot) a bit, since we need to be able to send the initial stream spec data to each new client + need a raw h264 stream decoder on the client. Something for a rainy day.

RIP Kropbot

On 15th August 2017 at approximately 13.05 EST KropBot was driven down a flight of stairs by a kind internet stranger. He is no more.

Categories: FLOSS Project Planets

Colorfield: Payment and Mollie on Drupal 8

Planet Drupal - Tue, 2017-08-15 04:22
Mollie provides a facade for several payment methods (credit card, debit card, PayPal, SEPA, Bitcoin, ...) with support for various languages and frameworks. In some cases, you could decide to use the Payment module instead of the full Commerce distribution. This tutorial describes how to create a product as a node and process payment with Mollie, only via configuration. A possible use case is an existing Drupal 8 site that just needs to enable a few products (like membership, ...).
Categories: FLOSS Project Planets

Talk Python to Me: #125 Django REST framework and a new API star is born

Planet Python - Tue, 2017-08-15 04:00
APIs were once the new and enabling thing in technology. Today they are table-stakes. And getting them right is important. Today we'll talk about one of the most popular and mature API frameworks in Django REST Framework. You'll meet the creator, Tom Christie, and talk about the framework, API design, and even his successful take on funding open source projects.

But Tom is not done here. He's also creating the next generation API framework that fully embraces Python 3's features, called API Star.

Links from the show:

Django REST framework: django-rest-framework.org
API Star: github.com/tomchristie/apistar
Tom on Twitter: @_tomchristie
Categories: FLOSS Project Planets

Sixth Blog Gsoc 2017

Planet KDE - Tue, 2017-08-15 03:00

Hi, this post is general information about telemetry in Krita. I want to clarify some points. Soon we will launch preliminary testing of my branch. If testing is successful, it will go into one of the upcoming releases of Krita (not 3.2). Krita must follow the policy of...

Categories: FLOSS Project Planets

Note names

Planet KDE - Tue, 2017-08-15 03:00

I mentioned in my previous blog that I started with the note names activity. This will be a musical blog covering the different components that we have and some music knowledge :)

I have been fond of music, from playing the piano to the guitar, which is one reason I am working on background music and the musical activities as part of my GSoC. Music is generally represented on a staff. So what is a staff? The staff consists of 5 horizontal lines on which our musical notes lie. Lower pitches are represented lower on the staff and higher pitches are represented higher on the staff.

Repeater {
    model: nbLines

    Rectangle {
        width: staff.width
        height: 5
        border.width: 5
        color: "black"
        x: 0
        y: index * verticalDistanceBetweenLines
    }
}

nbLines = number of horizontal lines = 5

But with a blank staff, can you tell what notes will be played? No, we can't; we use a clef for that. We have two main clefs: the bass clef and the treble clef. More notes can be added to a staff using ledger lines, which extend the staff. We can specify the type of clef with which the notes are represented.

Repeater {
    id: staves
    model: nbStaves

    Staff {
        id: staff
        clef: multipleStaff.clef
        height: (multipleStaff.height - distanceBetweenStaff * (nbStaves - 1)) / nbStaves
        width: multipleStaff.width
        y: index * (height + distanceBetweenStaff)
        lastPartition: index == nbStaves - 1
        firstNoteX: multipleStaff.firstNoteX
    }
}

We can even have multiple staves by specifying nbStaves in the MultipleStaff component. For note names we have nbStaves = 1, with clef = treble for levels up to 10 and clef = bass for levels above 10.

MultipleStaff {
    id: staff
    nbStaves: 1
    clef: bar.level <= 10 ? "treble" : "bass"
    height: background.height / 4
    width: bar.level == 1 || bar.level == 11 ? background.width * 0.8 : background.width / 2
    nbMaxNotesPerStaff: bar.level == 1 || bar.level == 11 ? 8 : 1
    firstNoteX: bar.level == 1 || bar.level == 11 ? width / 5 : width / 2
}

I made various changes and fixes to note names in the last week, which include:

  1. Adding highlighting to options in the levels.
  2. Fixing keyboard controls, which allow you to navigate between options using the arrow keys and select the answer using the Enter or Return key.
  3. Adding the initial version of highlighting of the notes on the staff for note names.

In the coming days, I will work on the following things:

  1. Improving the highlight of notes on staff.
  2. Add a drag for the options in levels.
  3. Cleaning the code and other minor fixes :)

Did I tell you that I am also working on more animations for oware? Yes, we have more animations coming up for the movements of the seeds when they are captured to the score houses. I completed the animation movement pretty quickly this time, compared to the time it took when I was implementing the movement animations. That was probably due to all the things I learnt working on those animations, which made me realise that though it took a lot of time (and put me behind my timeline a lot :D), it was totally worth it. In the end, our aim is to provide the best activities for kids with the best experience they can get, not just workable activities, along with clean and maintainable code to make it as easy as we can for new contributors or anyone to understand. Well, that's what you learn the most :)

I will share more about the note names activity and the score animation also in my next blog post :)

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Accepted Business Sessions for DrupalCon Vienna

Planet Drupal - Tue, 2017-08-15 02:27
This year's European DrupalCon will take place in Vienna, Austria. It's still more than a month away; however, the sessions have already been selected. We will look at the ones that were accepted in the business track, and explain why. DrupalCon Vienna is one of the biggest Drupal events in the world this year. Therefore, some of our team members will be present at the event in the capital city of Austria. But once again our AGILEDROP team will not just be present at the event. We had a »bigger« role. Namely, our commercial director Iztok Smolic was invited to the Business track…
Categories: FLOSS Project Planets

Former CIA Dir. Woolsey joins OSI member NAVO in movement to open source elections

Open Source Initiative - Mon, 2017-08-14 22:31

Former CIA Director and U.S. Ambassador James Woolsey recently published a New York Times op-ed piece with open source stalwart, GNU Bash creator, and technology lead for OSI Affiliate Member NAVO/CAVO, Brian Fox, calling on politicians to expedite efforts toward open source election systems. Director Woolsey was blunt about the need for Microsoft and others to cease and desist lobbying efforts against the open source voting community, and commended the open source momentum toward securing elections.

NAVO/CAVO Secretary Brent Turner responded to the New York Times article with appreciation for Amb. Woolsey, stating, "We are pleased that this is hitting the mainstream media and look forward to helping to facilitate the transition from the at-risk proprietary code voting systems to the open source model more capable of protecting the national security".

Turner and Woolsey, along with Hollywood notables, are currently involved in filming a documentary "The Real Activist", exposing the underbelly of the election integrity world and highlighting the struggle for GPL open source paper ballot election systems.

Categories: FLOSS Research

Dirk Eddelbuettel: #9: Compacting your Shared Libraries

Planet Debian - Mon, 2017-08-14 21:49

Welcome to the ninth post in the recognisably rancid R randomness series, or R4 for short. Following on the heels of last week's post, we aim to look into the shared libraries created by R.

We love the R build process. It is robust, cross-platform, reliable and rather predictable. It. Just. Works.

One minor issue, though, which has come up once or twice in the past is the (in)ability to fully control all compilation options. R will always recall CFLAGS, CXXFLAGS, ... etc as used when it was compiled. This often entails the -g flag for debugging, which can seriously inflate the size of the generated object code. And once stored in ${RHOME}/etc/Makeconf, we cannot override these values on the fly.

But there is always a way. Sometimes even two.

The first is local and can be used via the (personal) ~/.R/Makevars file (about which I will have to say more in another post). But something I have been using quite a bit lately uses the flags for the shared library linker. Given that we can have different code flavours and compilation choices---between C, Fortran and the different C++ standards---one can end up with a few lines. I currently use this, which uses -Wl, to pass the -S (or --strip-debug) option to the linker (and also reiterates the desire for a shared library, presumably superfluously):

SHLIB_CXXLDFLAGS = -Wl,-S -shared
SHLIB_CXX11LDFLAGS = -Wl,-S -shared
SHLIB_CXX14LDFLAGS = -Wl,-S -shared
SHLIB_FCLDFLAGS = -Wl,-S -shared
SHLIB_LDFLAGS = -Wl,-S -shared

Let's consider an example: my most recently uploaded package, RProtoBuf. Built under a standard 64-bit Linux setup (Ubuntu 17.04, g++ 6.3) and not using the above, we end up with a library containing 12 megabytes (!!) of object code:

edd@brad:~/git/rprotobuf(feature/fewer_warnings)$ ls -lh src/RProtoBuf.so
-rwxr-xr-x 1 edd edd 12M Aug 14 20:22 src/RProtoBuf.so
edd@brad:~/git/rprotobuf(feature/fewer_warnings)$

However, if we use the flags shown above in .R/Makevars, we end up with much less:

edd@brad:~/git/rprotobuf(feature/fewer_warnings)$ ls -lh src/RProtoBuf.so
-rwxr-xr-x 1 edd edd 626K Aug 14 20:29 src/RProtoBuf.so
edd@brad:~/git/rprotobuf(feature/fewer_warnings)$

So we reduced the size from 12mb to 0.6mb, an 18-fold decrease. And the file tool still shows the file as 'not stripped' as it still contains the symbols. Only debugging information was removed.

What reduction in size can one expect, generally speaking? I have seen substantial reductions for C++ code, particularly when using templated code. More old-fashioned C code will be less affected. It seems a little difficult to tell---but this method is my new build default as I continually find rather substantial reductions in size (as I tend to work mostly with C++-based packages).

The second option only occurred to me this evening, and complements the first, which is after all only applicable locally via the ~/.R/Makevars file. What if we wanted it to affect each installation of a package? The following addition to its src/Makevars should do:

strippedLib: $(SHLIB)
		if test -e "/usr/bin/strip"; then /usr/bin/strip --strip-debug $(SHLIB); fi

.phony: strippedLib

We declare a new Makefile target, strippedLib. By making it dependent on $(SHLIB), we ensure the standard target of this Makefile is built. And by making the target .phony we ensure it will always be executed. It simply tests for the strip tool and invokes it on the library after it has been built. Needless to say, we get the same reduction in size. And this scheme may even pass muster with CRAN, but I have not yet tried.

Lastly, an acknowledgement. Everything in this post has benefited from discussion with my former colleague Dan Dillon, who went as far as setting up tooling in his r-stripper repository. What we have here may be simpler, but it would not have happened without what Dan had put together earlier.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Daniel Bader: Unpacking Nested Data Structures in Python

Planet Python - Mon, 2017-08-14 20:00
Unpacking Nested Data Structures in Python

A tutorial on Python’s advanced data unpacking features: How to unpack data with the “=” operator and for-loops.

Have you ever seen Python’s enumerate function being used like this?

for (i, value) in enumerate(values):
    ...

In Python, you can unpack nested data structures in sophisticated ways, but the syntax might seem complicated: Why does the for statement have two variables in this example, and why are they written inside parentheses?

This article answers those questions and many more. I wrote it in two parts:

  • First, you’ll see how Python’s “=” assignment operator iterates over complex data structures. You’ll learn about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

  • Second, you’ll discover how the for-statement unpacks data using the same rules as the = operator. Again, we’ll go over the syntax rules first and then dive into some hands-on examples.

Ready? Let’s start with a quick primer on the “BNF” syntax notation used in the Python language specification.

BNF Notation – A Primer for Pythonistas

This section is a bit technical, but it will help you understand the examples to come. The Python 2.7 Language Reference defines all the rules for the assignment statement using a modified form of Backus-Naur notation.

The Language Reference explains how to read BNF notation. In short:

  • symbol_name ::= starts the definition of a symbol
  • ( ) is used to group symbols
  • * means appearing zero or more times
  • + means appearing one or more times
  • (a|b) means either a or b
  • [ ] means optional
  • "text" means the literal text. For example, "," means a literal comma character.

Here is the complete grammar for the assignment statement in Python 2.7. It looks a little complicated because Python allows many different forms of assignment:

An assignment statement consists of

  • one or more (target_list "=") groups
  • followed by either an expression_list or a yield_expression
assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)

A target list consists of

  • a target
  • followed by zero or more ("," target) groups
  • followed by an optional trailing comma
target_list ::= target ("," target)* [","]

Finally, a target consists of any of the following

  • a variable name
  • a nested target list enclosed in ( ) or [ ]
  • a class or instance attribute
  • a subscripted list or dictionary
  • a list slice
target ::= identifier | "(" target_list ")" | "[" [target_list] "]" | attributeref | subscription | slicing

As you’ll see, this syntax allows you to take some clever shortcuts in your code. Let’s take a look at them now:

#1 – Unpacking and the “=” Assignment Operator

First, you’ll see how Python’s “=” assignment operator iterates over complex data structures. You’ll learn about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

Multiple Assignments in Python:

Multiple assignment is a shorthand way of assigning the same value to many variables. An assignment statement usually assigns one value to one variable:

x = 0
y = 0
z = 0

But in Python you can combine these three assignments into one expression:

x = y = z = 0

Recursive Variable Unpacking:

I’m sure you’ve written [ ] and ( ) on the right side of an assignment statement to pack values into a data structure. But did you know that you can literally flip the script by writing [ ] and ( ) on the left side?

Here’s an example:

[target, target, target, ...] =

or

(target, target, target, ...) =

Remember, the grammar rules allow [ ] and ( ) characters as part of a target:

target ::= identifier | "(" target_list ")" | "[" [target_list] "]" | attributeref | subscription | slicing

Packing and unpacking are symmetrical and they can be nested to any level. Nested objects are unpacked recursively by iterating over the nested objects and assigning their values to the nested targets.

Here’s what this looks like in action:

(a, b) = (1, 2)
# a == 1
# b == 2

(a, b) = ([1, 2], [3, 4])
# a == [1, 2]
# b == [3, 4]

(a, [b, c]) = (1, [2, 3])
# a == 1
# b == 2
# c == 3

Unpacking in Python is powerful and works with any iterable object; a few quick examples follow the list below. You can unpack:

  • tuples
  • lists
  • dictionaries
  • strings
  • ranges
  • generators
  • comprehensions
  • file handles.
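Here are a few illustrations of my own, in the same spirit as the article's examples:

# Strings unpack character by character:
first, second, third = 'abc'       # first == 'a', third == 'c'

# Ranges unpack like any other sequence:
low, mid, high = range(3)          # low == 0, mid == 1, high == 2

# Generators work too, consumed as they are unpacked:
x, y = (n * n for n in range(2))   # x == 0, y == 1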

Test Your Knowledge: Unpacking

What are the values of a, x, y, and z in the example below?

a = (x, y, z) = 1, 2, 3

Hint: this expression uses both multiple assignment and unpacking.

Starred Targets (Python 3.x Only):

In Python 2.x the number of targets and values must match. This code will produce an error:

x, y, z = 1, 2, 3, 4 # Too many values

Python 3.x introduced starred variables. Python first assigns values to the unstarred targets. After that, it forms a list of any remaining values and assigns it to the starred variable. This code does not produce an error:

x, *y, z = 1, 2, 3, 4 # y == [2,3]
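The starred target doesn't have to sit in the middle; it can appear at any single position in the target list, and it may even end up empty. A couple more examples (my own, in the same spirit):

first, *rest = [1, 2, 3, 4]   # first == 1, rest == [2, 3, 4]
*init, last = [1, 2, 3, 4]    # init == [1, 2, 3], last == 4
x, *y, z = 1, 2               # x == 1, y == [], z == 2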

Test Your Knowledge: Starred Variables

Is there any difference between the variables b and *b in these two statements? If so, what is it?

(a, b, c) = 1, 2, 3
(a, *b, c) = 1, 2, 3

#2 – Unpacking and for-loops

Now that you know all about target list assignment, it’s time to look at unpacking used in conjunction with for-loops.

In this section you’ll see how the for-statement unpacks data using the same rules as the = operator. Again, we’ll go over the syntax rules first and then we’ll look at a few hands-on examples.

Let’s examine the syntax of the for statement in Python:

for_stmt ::= "for" target_list "in" expression_list ":" suite ["else" ":" suite]

Do the symbols target_list and expression_list look familiar? You saw them earlier in the syntax of the assignment statement.

This has massive implications:

Everything you’ve just learned about assignments and nested targets also applies to for loops!

Standard Rules for Assignments:

Let’s take another look at the standard rules for assignments in Python. The Python Language Reference says:

The for statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable objects … Each item, in turn, is assigned to the target list using the standard rules for assignments.

You already know the standard rules for assignments. You learned them earlier when we talked about the = operator. They are:

  • assignment to a single target
  • assignment to multiple targets
  • assignment to a nested target list
  • assignment to a starred variable (Python 3.x only)

In the introduction, I promised I would explain this code:

for (i, value) in enumerate(values):
    ...

Now you know enough to figure it out yourself:

  • enumerate returns a sequence of (number, item) tuples
  • when Python sees the target list (i, value) it unpacks each (number, item) tuple into the target list, as the quick demo below shows.
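A quick interpreter session demonstrating both steps (using a made-up values list):

>>> values = ['a', 'b', 'c']
>>> list(enumerate(values))
[(0, 'a'), (1, 'b'), (2, 'c')]
>>> for (i, value) in enumerate(values):
...     print(i, value)
0 a
1 b
2 c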

Examples:

I’ll finish by showing you a few more examples that use Python’s unpacking features with for-loops. Here’s some test data we’ll use in this section:

# Test data:
negative_numbers = (-1, -2, -3, -4, -5)
positive_numbers = (1, 2, 3, 4, 5)

The built-in zip function returns pairs of numbers:

>>> list(zip(negative_numbers, positive_numbers))
[(-1, 1), (-2, 2), (-3, 3), (-4, 4), (-5, 5)]

I can loop over the pairs:

for z in zip(negative_numbers, positive_numbers):
    print(z)

Which produces this output:

(-1, 1)
(-2, 2)
(-3, 3)
(-4, 4)
(-5, 5)

I can also unpack the pairs if I wish:

>>> for (neg, pos) in zip(negative_numbers, positive_numbers):
...     print(neg, pos)
-1 1
-2 2
-3 3
-4 4
-5 5

What about starred variables? This example finds a string’s first and last character. The underscore character is often used in Python when we need a dummy placeholder variable:

>>> animals = [
...     'bird',
...     'fish',
...     'elephant',
... ]
>>> for (first_char, *_, last_char) in animals:
...     print(first_char, last_char)
b d
f h
e t

Unpacking Nested Data Structures – Conclusion

In Python, you can unpack nested data structures in sophisticated ways, but the syntax might seem complicated. I hope that with this tutorial I’ve given you a clearer picture of how it all works. Here’s a quick recap of what we covered:

  • You just saw how Python’s “=” assignment operator iterates over complex data structures. You learned about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

  • You also learned how Python’s for-statement unpacks data using the same rules as the = operator and worked through a number of examples.

It pays off to go back to the basics and to read the language reference closely—you might find some hidden gems there!

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-08-14

Planet Apache - Mon, 2017-08-14 19:58
Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #119

Planet Debian - Mon, 2017-08-14 19:30

Here's what happened in the Reproducible Builds effort between Sunday July 30 and Saturday August 5 2017:

Media coverage

We were mentioned on Late Night Linux Episode 17, around 29:30.

Packages reviewed and fixed, and bugs filed

Upstream packages:

  • Bernhard M. Wiedemann:
    • efl (merged), unique ids based on memory address
    • 389-ds (merged), SOURCE_DATE_EPOCH support.
    • plowshare, SOURCE_DATE_EPOCH support
    • sphinx, file ordering
    • sphinx, SOURCE_DATE_EPOCH support

Debian packages:

Reviews of unreproducible packages

29 package reviews have been added, 72 have been updated and 151 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (36)
  • Andreas Beckmann (2)
  • Daniel Schepler (2)
  • Logan Rosen (1)
  • Lucas Nussbaum (93)
diffoscope development

Version 85 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Mattia Rizzolo:
    • Add an explicit Recommends: on the defusedxml python package.
    • Various other code quality tweaks.
  • Juliana Oliveira Rodrigues:
    • Fix test_ico_image for ImageMagick identify >= 6.9.8.
    • Use the defusedxml XML library by default in the XML comparator, if it's available. This protects against various XML parser DoS attacks and other security holes, which other Python XML libraries are vulnerable to.
  • Ximin Luo:
    • Force a flush when writing output to diff. (Closes: #870049).

as well as previous weeks' contributions, summarised in the changelog.

There were also further commits in git, which will be released in a later version:

  • Guangyuan Yang:
    • tests/iso9660: support isoinfo's output coming from cdrtools' version instead of genisoimage's
  • Mattia Rizzolo:
    • Code quality and test fixes.
  • Chris Lamb:
    • Code quality and test fixes.
Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

Acquia Lightning Blog: Round up your front-end JavaScript libraries with Composer

Planet Drupal - Mon, 2017-08-14 18:03

In Lightning 2.1.7, we’re finally answering a long-standing question: if I’m managing my code base with Composer, how can I bring front-end JavaScript libraries into my site?

This has long been a tricky issue. drupal.org doesn’t really provide an official solution -- modules that require JavaScript libraries usually include instructions for downloading and extracting said libraries yourself. Libraries API can help in some cases; distributions are allowed to ship certain libraries. But if you’re building your site with Composer, you’ve been more or less on your own.

Now, the Lightning team has decided to add support for Asset Packagist. This useful repository acts as a bridge between Composer and the popular NPM and Bower repositories, which catalog thousands of useful front-end and JavaScript packages. When you have Asset Packagist enabled in a Composer project, you can install a Bower package like this (using Dropzone as an example):

$ composer require bower-asset/dropzone

And you can install an NPM package just as easily:

$ composer require npm-asset/dropzone

To use Asset Packagist in your project, merge the following into your composer.json:

"repositories": [ { "type: "composer", "url": "https://asset-packagist.org" } ]

Presto! You can now add Bower and NPM packages to your project as if they were normal PHP packages. Yay! However...

Normally, asset packages will be installed in the vendor directory, like any other Composer package. This probably isn’t what you want to do with a front-end JavaScript library, though -- luckily, there is a special plugin you can use to install the libraries in the right place. Note that you’ll need Composer 1.5 (recently released) or later for this to work; run composer self-update if you're using an older version of Composer.

Now, add the plugin as a dependency:

$ composer require oomphinc/composer-installers-extender

Then merge the following into your composer.json:

"extra": { "installer-types": [ "bower-asset", "npm-asset" ], "installer-paths": { "path/to/docroot/libraries/{$name}": [ "type:bower-asset", "type:npm-asset" ] } }

Now, when you install a Bower or NPM package, it will be placed in docroot/libraries/NAME_OF_PACKAGE. Boo-yah!

Let's face it -- if you're using Composer to manage your Drupal code base and you want to add some JavaScript libraries, Asset Packagist rocks your socks around the block.

BUT! Note that this -- adding front-end libraries to a browser-based application -- is really the only use case for which Asset Packagist is appropriate. If you're writing a JavaScript app for Node, you should use NPM or Yarn, not Composer! Asset Packagist isn't meant to replace NPM or Bower, and it doesn't necessarily resolve dependencies the same way they do. So use this power wisely and well!

P.S. Lightning 2.1.7 includes a script which can help set up your project's composer.json to use Asset Packagist. To run this script, switch into the Lightning profile directory and run:

$ composer run enable-asset-packagist
Categories: FLOSS Project Planets

Jamie McClelland: Diversity doesn't help the bottom line

Planet Debian - Mon, 2017-08-14 14:39

A Google software engineer's sexist screed against diversity has been making the rounds lately.

Most notable are the offensive and misguided statements about gender essentialism, which honestly make the thing hard to read at all.

What seems lost in the hype, however, is that his primary point seems quite accurate. In short: if Google successfully diversified its workforce, racial and gender tensions would increase, not decrease, divisiveness would spread and, in all likelihood, Google could be damaged.

Imagine what would happen if the thousands of existing, mostly male, white and Asian engineers, the majority of whom are convinced that they play no part in racism and sexism, were confronted with thousands of smart and ambitious women, African Americans and Latinos who were becoming their bosses, telling them to work in different ways, and taking "their" promotions.

It would be a revolution! I'd love to see it. Google's bosses definitely do not.

That's why none of the diversity programs at Google or any other major tech company are having any impact - because they are not designed to have an impact. They are designed to boost morale and make their existing engineers feel good about what they do.

Google has one goal: to make money. And one strategy: to design software that people want to use. One of their tactics that is highly effective is building tight-knit groups of programmers who work well together. If the creation of hostile, racist and sexist environments is a by-product - well, it's not one that affects their bottom line.

Would Google make better software with a more diverse group of engineers? Definitely! For one, if African American engineers were working on their facial recognition software, it's doubtful it would have mistaken people with black faces for gorillas.

However, if the perceived improvement in software outweighed the risks of diversification, then Google would not waste any time on feel-good programs and trainings - they would simply build a jobs pipeline and change their job outreach programs to recruit substantially more female, African Americans and Latino candidates.

In the end, this risk avoidance and failure to perceive the limitations of homogeneity is the Achilles heel of corporate software design.

Our challenge is to see what we can build outside the confines of corporate culture that prioritizes profits, production efficiency, and stability. What can we do with teams that are willing to embrace racial and gender tension, risk divisiveness and be willing to see benefits beyond releasing version 1.0?

Categories: FLOSS Project Planets

Go support in KDevelop. GSoC week 11. Code completion and bug fixing.

Planet KDE - Mon, 2017-08-14 14:30
Hello!

Sidenote: I'm working on Go language support in KDevelop. KDevelop is a cross-platform IDE with awesome plugin support and the possibility to implement support for various build systems and languages. Go is a cross-platform, open-source, compiled, statically-typed language which tends to be simple and readable, and mainly targets console apps and network services.

During the last week I continued working on code completion support.
Firstly, I spent time investigating what else could be added to existing support - and realized that Go channels wasn't covered really well. "Channels" in Go world are something like queues, or, maybe more exactly, pipes. They provides ability to communicate between different goroutines (think of them as of lightweight threads) - you can send a value to channel and receive it on other side.
So, my first change was related to matching types while passing values to channel - now it works correctly and suggests matching types with higher priority.
Aside from different value types channels differs in direction - there are mono-directional and bidirectional channels: in, out, and in/out.
Because of that my second change was aimed on providing support for matching these different kinds of channels. Now, if function expects, for example, in channel, both in and in/out channels will have higher priority than out channel.
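
To make the direction distinction concrete, here is a minimal sketch of my own (not taken from the plugin or its tests), using Go's syntax where <-chan is receive-only and chan<- is send-only:

    package main

    import "fmt"

    // consume accepts a receive-only channel. A bidirectional
    // channel converts to it implicitly, which is why completion
    // should rank both kinds higher than a send-only channel here.
    func consume(in <-chan int) {
        for v := range in {
            fmt.Println(v)
        }
    }

    func main() {
        ch := make(chan int) // bidirectional (in/out) channel
        go func() {
            ch <- 42 // send a value into the channel
            close(ch)
        }()
        consume(ch) // legal: chan int converts to <-chan int
    }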

After doing that I began to open various Go files/projects to find remaining bugs, and got a segfault while parsing fmt/print.go. :( After some investigating I realized that in the case of a struct variable declaration with a literal (e.g. initializing struct fields inside a {} block) no context was opened, and that led to a crash later on. Although it took me some time to find where the real problem was and how to fix it, it's fixed now, and even the 1142-line fmt/print.go opens successfully.

Despite that, I found that in the case of struct literal initialization the names of fields are not highlighted as usages - I am going to fix that during the next week and spend more time on testing and fixing the remaining issues.
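
For readers who want to picture the construct, a tiny example of my own showing the field names in a struct literal that should be highlighted as usages:

    package main

    import "fmt"

    type point struct {
        x, y int
    }

    func main() {
        // "x" and "y" below are usages of point's fields and
        // should be highlighted (and navigable) as such.
        p := point{x: 1, y: 2}
        fmt.Println(p)
    }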

Looking forward to next week!
Categories: FLOSS Project Planets

Continuum Analytics News: Five Organizations Successfully Fueling Innovation with Data Science

Planet Python - Mon, 2017-08-14 14:12
Company Blog | Tuesday, August 15, 2017 | Christine Doig, Sr. Data Scientist, Product Manager

Data science innovation requires availability, transparency and interoperability. But what does that mean in practice? At Anaconda, it means providing data scientists with open source tools that facilitate collaboration, moving beyond analytics to intelligence. Open source projects are the foundation of modern data science and are popping up across industries, making it more accessible, more interactive and more effective. So, who’s leading the open source charge in the data science community? Here are five organizations to keep your eye on:

1. TaxBrain. TaxBrain is a platform that enables policy makers and the public to simulate and study the effects of tax policy reforms using open source economic models. Using the open source platform, anyone can plug in elements of the administration’s proposed tax policy to get an idea of how it would perform in the real world.

Why public policy is going #opensource via @teoliphant @MattHJensen in @datanami https://t.co/vKTzYtdvGl #datascience #taxbrain

— Continuum Analytics (@ContinuumIO) August 17, 2016

 

2. Recursion Pharmaceuticals. Recursion is a pharmaceutical company dedicated to finding remedies for rare genetic diseases. Its drug discovery assay is built on an open source software platform, combining biological science with machine learning techniques to visualize cell data and test drugs efficiently. This approach shortens the research and development process, reducing time to market for remedies to these rare genetic diseases. Their goal is to treat 100 diseases by 2026 using this method.

3. The U.S. Government. Under the previous administration, the U.S. government launched Data.gov, an open data initiative that offers more than 197K datasets for public use. This database exists, in part, thanks to the former U.S. chief data scientist, DJ Patil. He helped drive the government’s data science projects forward at the city, state and federal levels. Recently, concerns have been raised over the Data.gov portal, as certain information has started to disappear. Data scientists are keeping a sharp eye on the portal to ensure that these resources are updated and preserved for future innovative projects.

4. Comcast. Telecom and broadcast giant Comcast runs its projects on open source platforms to drive data science innovation in the industry.

For example, earlier this month, Comcast’s advertising branch announced they were creating a Blockchain Insights Platform to make the planning, targeting, execution and measurement of video ads more efficient. This data-driven, secure approach would be a game changer for the advertising industry, which eagerly awaits its launch in 2018.

5. DARPA. The Defense Advanced Research Projects Agency (DARPA) is behind the Memex project, a program dedicated to fighting human trafficking, which is a top mission for the defense department. DARPA estimates that in two years, traffickers spent $250 million posting the temporary advertisements that fuel the human trafficking trade. Using an open source platform, Memex is able to index and cross-reference interactive and social media, text, images and video across the web. This allows it to find the patterns in web data that indicate human trafficking. Memex’s data science approach is already credited with generating at least 20 active cases and nine open indictments.

These are just some of the examples of open source-fueled data science turning industries on their head, bringing important data to the public and generally making the world a better place. What will be the next open source project to put data science in the headlines? Let us know what you think in the comments below!

Categories: FLOSS Project Planets

Elevated Third: E3 Named Finalist in 5 Acquia Engage Award Categories

Planet Drupal - Mon, 2017-08-14 14:04

As an Acquia Preferred Partner, we are thrilled to announce our work has ranked amongst the world’s most innovative websites and digital experiences in the 2017 Acquia Engage Awards. Elevated Third received recognition in the Nonprofit, Brand Experience, Financial Services, Digital Experience, and Community categories for the following projects. 

The Acquia Engage Awards recognize the amazing sites and digital experiences that organizations are building with the Acquia Platform. Nominations that demonstrated an advanced level of visual design, functionality, integration and overall experience have advanced to the finalist round, where an outside panel of experts will select the winning projects.

Winners will be announced at Acquia Engage in Boston, October 16-18, an event we are sponsoring.

“Acquia’s partners and customers are setting the benchmark for orchestrating the customer journey and driving the future of digital. Organizations are mastering the art of making every interaction personal and meaningful, and creating engaging, elegant solutions that extend beyond the browser,” said Joe Wykes, senior vice president of global channels and commerce at Acquia. “We’re laying the foundation to help our partners and customers achieve their greatest ambitions and grow their digital capabilities long into the future. We’re inspired by the nominees and impact of their amazing collective work.”

Check out our competition! The full list of finalists for the 2017 Acquia Engage Awards is posted here.

Categories: FLOSS Project Planets

Antonio Terceiro: Debconf17

Planet Debian - Mon, 2017-08-14 13:27

I’m back from Debconf17.

I gave a talk entitled “Patterns for Testing Debian Packages”, in which I presented a collection of 7 patterns that I documented while pushing the Debian Continuous Integration project and that were published in a 2016 paper. Video recording and a copy of the slides are available.

I also hosted the ci/autopkgtest BoF session, in which we discussed issues around the usage of autopkgtest within Debian, the CI system, etc. Video recording is available.

Kudos to the Debconf video team for making the recordings available so quickly!

Categories: FLOSS Project Planets