Feeds

Drupal core announcements: Modernizing JavaScript in Drupal 8

Planet Drupal - Fri, 2017-05-19 18:17

We are modernizing our JavaScript by moving to ECMAScript 6 (ES6) for core development. ES6 is a major update to JavaScript that includes dozens of new features. In order to move to ES6, contributors will have to use a new workflow. For more details, see the change record. We have adopted the Airbnb coding standards for ES6 code.

Categories: FLOSS Project Planets

Drupal Association blog: Providing Insight Into Drupal Association Financials

Planet Drupal - Fri, 2017-05-19 15:56

It is critical that the Drupal Association remains financially sustainable so we can fulfill our mission into the future. As a non-profit organization based in the United States, the responsibility of maintaining financial health falls on the Executive Director and the Drupal Association Board.

Association board members, like all board members for US-based organizations, have three legal obligations: duty of care, duty of loyalty, and duty of obedience. Additionally, there is a lot of practical work that the board undertakes. These tasks generally fall under the board's fiduciary responsibilities, which include overseeing financial performance.

The Drupal Association’s sustainability impacts everyone in the community. For this reason, we want to provide more insight into our financial process and statements with a series of blog posts covering the following topics:

  • How we create forecasts, financial statements, and ensure accounting integrity

  • Update on Q4 2016 financials (to follow up on our Q3 2016 financial update)

  • Which countries provide funding and which countries are served by that funding (a question asked in the recent public board meeting by a community member)

If you would like additional topics covered, please tell us via the comments section. 

Categories: FLOSS Project Planets

PyBites: How to Create Your Own Steam Game Release Notifier

Planet Python - Fri, 2017-05-19 13:53

In this post we demonstrate ways in which you can parse common data formats used in Python.

Categories: FLOSS Project Planets

Clint Adams: Help the Aged

Planet Debian - Fri, 2017-05-19 12:10

I keep meeting girls from Walnut Creek who don’t know about the CDROM.

Posted on 2017-05-19 Tags: ranticore
Categories: FLOSS Project Planets

Announcing QtCon Brasil 2017

Planet KDE - Fri, 2017-05-19 11:27

Hi there,

It's been almost a year since Filipe, Aracele, and I had a beer at Alexander Platz after the very last day of QtCon Berlin, when Aracele astutely came up with the crazy idea of organizing QtCon in Brazil. Since then we have been maturing that idea, and after a lot of work we are very glad to announce: QtCon Brasil 2017 happens from 18th to 20th August in São Paulo.

QtCon Brazil 2017 is the first Qt community conference in Brazil and Latin America. Its goals are twofold: i) to provide a common venue for existing Brazilian and Latin American Qt developers, engineers, and managers to share their experiences creating Qt-powered technologies, and ii) to spread Qt adoption in Latin America, with the purpose of expanding its contributor base, encouraging business opportunities, and strengthening relationships between sectors like industry, universities, and government.

In this first edition of QtCon Brazil, the conference will focus on cross-platform development enabled by Qt. With that focus, the meeting can benefit a wider range of stakeholders with an interest in all sorts of platforms, including desktop (Windows, Linux, and OS X), mobile (Android and iOS), embedded, and IoT. We are bringing together experienced Qt specialists from Brazil and overseas to deliver a state-of-the-art program of talks and training sessions that illustrate how Qt has been used as an enabling technology in many sectors of industry.

QtCon Brazil 2017 will take place in São Paulo, the most important technical, social, and cultural hub in Brazil and the world's tenth-largest GDP. São Paulo is easily reachable from most Brazilian airports, provides satisfactory infrastructure for the venue and accommodation, and is a strategic place to build on the achievements of this first edition of QtCon Brazil. The venue is Espaço Fit, a multifunctional conference center with an auditorium for 220 people that provides all the required infrastructure: multimedia equipment, Wi-Fi, and catering services. Espaço Fit is located in downtown São Paulo, at Avenida Paulista; it is easily reachable from the airports, within walking distance of metro stations, and near a large array of hotels.

QtCon Brasil 2017 is kindly sponsored by The Qt Company, Toradex, openSUSE and KDE. It also has the valuable support of Carreira RH and Embarcados for logistics and outreach. Thank you all for making QtCon Brasil possible.

You can find more information (in Portuguese) on the QtCon Brasil webpage. Also, be sure to follow us on the QtCon Brasil Twitter, Facebook and Google+ pages.

Categories: FLOSS Project Planets

Reinout van Rees: PyGrunn: google machine learning APIs for python developers - keynote from Google Cloud

Planet Python - Fri, 2017-05-19 10:29

(One of my summaries of a talk at the 2017 PyGrunn conference).

Lee Boonstra and Dmitriy Novakovskiy gave the keynote.

Python at Google. Python is widely used at Google; it is one of the company's official languages. It is integral to many Google projects, for instance YouTube and App Engine, and to lots of open source libraries: every Google API has its own Python client.

Google for Python developers. What can you do as a Python programmer with Google? The Google Cloud Platform. It consists of many parts and services that you can use.

  • You can embed machine learning services like TensorFlow, the Speech API, the Translation API, etc.
  • Serverless data processing and analytics: Pub/Sub, BigQuery (map/reduce without needing to run your own Hadoop cluster), etc.
  • Server stuff like Kubernetes and container services.

Machine learning. There have been quite a few presentations on this already. Look at it like this: how do you teach things to your kids? You show them! "That is a car", "that is a bike". After a while they start learning the words for the concepts.

Machine learning is not the same as AI (artificial intelligence). AI is the process of building smarter computers. Machine learning is getting the computer to actually learn. Which is actually much easier.

Why is machine learning so popular now? Well:

  • The amount of data. There is much more data, so we finally have the data we need to do something with it.
  • Better models. The models have gotten much better.
  • More computing power. Parallelization and so on. You now have the power to actually do it in a reasonable time.

Why Google? TensorFlow is very popular (see an earlier talk about TensorFlow).

You can do your own thing and use TensorFlow and the Cloud Machine Learning Engine. Or you can use one of the Google-trained services, like the Vision API (object recognition, text recognition, facial sentiment, logo detection), the Speech API and Natural Language API (syntax analysis, sentiment analysis, entity recognition), or the Translation API (realtime subtitles, language detection). A beta feature: the Video Intelligence API (it can detect the dogs in your video and tell you when in the video the dogs appeared...).
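The talk didn't show request code, but as a rough sketch (not from the talk): a label-detection request for the Vision REST API can be assembled with just the standard library. The field names below follow the public Cloud Vision REST documentation; `image_bytes` is a placeholder for real image data.

```python
import base64
import json

def build_label_request(image_bytes, max_results=5):
    """Build the JSON body for a Vision API images:annotate call."""
    return json.dumps({
        "requests": [{
            # the image travels inline, base64-encoded
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            # ask only for label detection ("soccer player", "tennis player", ...)
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    })
```

The body would then be POSTed to the `images:annotate` endpoint with your credentials; the label annotations in the response carry the confidence scores like the 88%/76% figures from the demo.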

Code and demos. She gave a nice demo of what she could recognize with Google's tools in an Arjen Robben image. It even detected the copyright statement text at the bottom of the photo and the text on his lanyard ("champions league final"). It was 88% sure it was a soccer player. And 76% sure it might be a tennis player :-)

Using the API looked pretty easy. Nice detail: several textual items that came back from the API were then fed to the automatic translation API to convert them to Dutch.

Tensorflow demo. He used the MNIST dataset, a set of handwritten numbers often used for testing neural nets.

Dataflow is a unified programming model for batch or stream data processing. You can use it for map/reduce-like operations and "embarrassingly parallel" workloads. It is open sourced as Apache Beam (you can use it hosted on Google's infrastructure).
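As a minimal illustration of the map/reduce pattern that Dataflow generalizes (plain Python here, not Beam code), a word count splits neatly into a map phase and a reduce phase:

```python
from collections import Counter
from itertools import chain

def map_phase(lines):
    # map: turn each input line into a list of words
    return (line.split() for line in lines)

def reduce_phase(mapped):
    # reduce: merge all per-line word lists into one set of counts
    return Counter(chain.from_iterable(mapped))
```

In Dataflow/Beam the same two phases would run distributed over many workers; here they run in a single process.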

The flow has four steps:

  • Cloud storage (storage of everything).
  • Dataflow.
  • Bigquery (data storage).
  • Data studio (data visualization).

(The demo code can be found in the sheets that will be available, googling for it probably also helps).

Photo explanation: just a nice unrelated picture from my work-in-progress German model railway

Dutch note (translated): programming Python+Django in the heart of Utrecht, next to the old canal? Water sector, so lots of data and geo. Fun! Nelen&Schuurmans is hiring. Just send me an email, because the vacancy text isn't online yet :-)

Categories: FLOSS Project Planets

tryexceptpass: Making 3D Interfaces for Python with Unity3D

Planet Python - Fri, 2017-05-19 10:20
Using Sofi and WebSockets to command game engines

A while back I wrote a post on using game engines as frontend interfaces. I want to enable rich interfaces in the Python ecosystem that are usable in virtual and augmented reality. Since then, I've been able to build some basic concepts on top of Sofi, and I'm here to share them.

The backend

Sofi works as a WebSocket server that can issue commands and handle events from a client. It’s written to simplify the use of frontend web technologies as interfaces to a Python backend. You can even package it as a desktop app with a desktop look and feel.

It functions by sending the user to a webpage with the basics needed to open a WebSocket client. This client then registers handlers that process server commands telling it how to alter the DOM or respond to events. All the logic resides with Python, the webpage is the user interface.

I wrote it with enough modularity such that the command and event structure is reusable for any technology. So it wasn’t much of a stretch to think that I could replace the webpage portion with a game engine. It was only a matter of finding one with easy access to WebSocket clients.

The game engine

I looked at Unity3D, Unreal and CryEngine. Of the three, the first two have the most expansive communities, and of those, Unity seems to have the larger ecosystem. Which one has better graphics is usually a matter of taste, and I'm not here to argue it. At this point, I want simplicity for a proof of concept, and that came in the form of WebSocketSharp.

WebSocketSharp is a C# library available in Unity3D’s asset store. It provides client and server classes usable by any script in your project’s scene. I ran a few tests and it worked well enough for my needs. Mapping back to Sofi, this part performs the function of the JavaScript WebSocket client library that runs on your browser. Now, let’s wrap it so it understands how to talk to the Sofi WebSocket server, and make the equivalent of sofi.js.

Making the frontend

If you haven’t used Unity3D before, or any game engine for that matter, getting to “hello world” can be confusing. While not required to understand this implementation, I’ll go over some of the main concepts here. Please note that Unity has their own extensive documentation available for more details.

The first thing to do when making a "game" (which is what I'm doing, for all intents and purposes) is to create a project. Think of this as a package that contains all the assets you'll need for rendering a world and its interactions. Almost everything in a project is an asset: textures, shaders, scripts, objects, etc.

Within a project there are scenes. Think of these like game levels, or scenes in a movie. It’s a collection of objects, scripts and camera views that render into an experience with a common objective.

Once there’s a scene, I can add the WebSocketSharp library as an asset. This makes it possible to import its WebSocket class into a script in which to do some magic. You can assign scripts to objects in the scene and trigger execution through the engine’s event loop.

Every scene has a default camera. Cameras, among other things, determine how to view the scene. This includes viewing angles, distance, render settings, lighting, etc. But since they’re objects themselves, they can also hold scripts. Which means I can stick a WebSocketClient script on the camera that loads immediately with the scene.

Unity scripts can schedule handlers in the main event loop. There are many game events for physical collisions or user inputs detailed in the docs. But I’m interested in those that run at initialization and on a regular basis. In our case, I’m looking for Update and Start. The former runs when rendering every frame, while the latter executes after loading. I also register a handler for OnDestroy to clean things up when exiting.

To configure the client, instantiate a WebSocket during the engine’s Start event. This requires the address and port of the server, as well as its own set of event handlers. I use OnOpen when a connection completes, OnMessage when receiving new data, and OnClose for disconnections.

It’s important to think about the event loop when working with Start. Message processing cannot happen here because it can block the scene from loading. So I made a command queue in the form of a List object that lives within the client class. The idea is to review that queue on every Update event — once every frame — and handle the commands then.

Besides message processing, the Update event is a good point at which to check for a connection. It’s easy to trigger a reconnect, allowing the scene to wait for a server to spin up if needed.

The engine gathers user inputs and queues them up, much like the earlier socket messages. So I add checks for those here as well, using the sendAsync() method to communicate them back to the server.

Communicating with Sofi

To talk a little bit about protocol, Sofi’s communication mechanism is very extensible. The socket server listens on a particular port expecting JSON data. Event messages have an 'event' key set with the name which the server uses to check for callbacks. If one is present, execution passes to that function, which can in turn use the dispatch() method to send a command. UI commands have a 'command' key to identify them and their other parameters. Except for the basics, I don’t hardcode any of this, so it’s very easy to add custom events and commands by making up new names without modifying Sofi itself.
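As an illustration of those two message shapes (the 'event' and 'command' keys come from the description above; the specific names and fields are made up for this example):

```python
import json

# an event message, sent from the client to trigger a server callback
event_msg = json.dumps({"event": "click", "target": "cube-1"})

# a UI command, dispatched from the server for the client to execute
command_msg = json.dumps({
    "command": "spawn",
    "type": "cube",
    "position": {"x": 0, "y": 1, "z": 0},
})
```

Because only the 'event' and 'command' keys are special, new message types are just new names plus whatever parameters they need.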

The WebSocketClient can handle commands for spawning new objects, updating object properties, removing objects and subscribing to events. The types of objects are the very basic shapes: cubes, spheres, cylinders, capsules, planes and text. There are quite a few settings for each, including color, size, position and rotation.

Packaging

The other neat part is that Unity3D can produce binaries for all major platforms. This includes Windows, Linux, MacOS, Android, iOS, and others. So not only does this add 3D capabilities to Python, it also comes with multi-platform support. The build process itself is simple and directed by the game engine. Couple it with PyInstaller like in my previous article, and you have a single executable for distribution, Python environment and all.

What does it look like?

For a demo of what it can do, have a look at my PyCaribbean presentation below. The demo runs at about 20:09.


I’ve posted the Python code on GitHub under the sofi-unity3d repo. The WebSocketClient implementation is located in the engine/Assets folder. It also includes the Unity3D project along with it. The full code will run through my slides for PyCaribbean and starts a listener for any #python tweet.

I also went off and built the binaries for Linux, Windows and MacOS, so you can download and play with them yourself. When you run the engine, it will load an empty sky and grassy plane scene that sits waiting for a WebSocket server. This could be the Sofi instance in server.py or something of your own making that listens on port 9000.

What’s next?

Today we’ve explored the possibilities around creating game engine based “widget libraries”. Demonstrating that it’s very possible for a Python backend to provide the logic, while leaving the UI heavy lifting to the engine. It’s also portable cross-platform and distributable with little effort.

The next step on the game engine side is to provide practical widgets and assets. With this framework we can spawn any pre-existing game objects, meshes and scripts. It’s up to us to define what makes for good reusable components and interactions, and package them with the built binaries.

On the Python side, I'd like to provide an HTTP-based asset server. This would allow folks to create custom assets in something like Blender, which is also Python-enabled, and load them into any scene. No need to touch the game engine.

I also want to build a world that enables VR or AR on hardware like the Rift, Vive, HoloLens, etc. I have a DK2 at home, but haven't looked into making any projects with it yet.

For additional reading on how I made Sofi, have a look at the A Python Ate My GUI series.

If you liked this article and want to keep up with what I’m working on, please click the heart below to recommend it, and follow me on Twitter.

Making 3D Interfaces for Python with Unity3D was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: FLOSS Project Planets

ICC Examin 1.0 on Android

Planet KDE - Fri, 2017-05-19 09:26

Since version 1.0, ICC Examin allows ICC color profile viewing on the Android mobile platform. ICC Examin shows ICC color profile elements graphically, which makes the content much easier to understand. Color primaries, white point, curves, tables and color lists are displayed both numerically and as graphics. Matrices, international texts, and metadata are much easier to read.

Features:
* most profile elements from ICC specification version 2 and version 4
* additionally, some widely used non-standard tags are understood

ICC color profiles are used in photography, print and various operating systems to improve visual appearance. An ICC profile describes the color response of a color device. Read more about the ISO 15076-1:2010 Standard / Specification ICC.1:2010-12 (Profile version 4.3.0.0), color profiles and ICC color management at www.color.org .

The ICC Examin app is completely rewritten in Qt/QML. QML is a declarative language, making it easy to define GUI elements and write layouts with less code. In recent years the Qt project has extended support from desktop platforms to mobiles like Nokia's MeeGo, Sailfish OS, iOS, Android, embedded devices and more. ICC Examin is available as a paid app in the Google Play Store. Sources are currently closed in order to financially support further development. This version of ICC Examin continues to use the Oyranos CMS. New is the dependency on RefIccMAX for parsing ICC profile binaries. In the process, both the RefIccMAX library and the Oyranos Color Management System received changes and fixes in git for cross compilation with Android libraries. Those changes will be in the next respective releases.

The FLTK toolkit, as used in previous versions, was not ported to Android or other mobile platforms, so a complete rewrite was unavoidable. The old FLTK-based version is still maintained by the same author.

Categories: FLOSS Project Planets

Python Software Foundation: 2017 Frank Willison Memorial Award Goes To Katie Cunningham And Barbara Shaurette

Planet Python - Fri, 2017-05-19 08:51
Every year the Python Software Foundation awards the Frank Willison Memorial Award to one or more members of the Python community. The purpose of this award is to recognize outstanding contributions by Python community members. It was "established in memory of Frank Willison, a Python enthusiast and O'Reilly editor-in-chief, who died in July 2001".
The Python Software Foundation has awarded the 2017 Frank Willison Award to Katie Cunningham and Barbara Shaurette in recognition of their work creating the Young Coders classes. Cunningham and Shaurette have gone above and beyond by making the Young Coders teaching materials freely available.
The program began at PyCon 2013 in Santa Clara and was an immediate success. The follow-up blog post is the second most popular post in PyCon's history by a wide margin. Additionally, the event was one of the most talked-about topics of the 2013 conference.
Lynn Root and Jesse Noller pitched the idea to Cunningham, asking her to lead it. Cunningham then reached out to Shaurette seeking her assistance, or as she said, "Omg help!"
Shaurette has experience teaching early childhood education. That experience with younger students came in handy as she reworked materials used for adult classes into the materials the program uses today. The class includes Raspberry Pis, keyboards, and mice that the students were allowed to take with them, along with two books: Python for Kids and Hello World! Computer Programming for Kids and Other Beginners.
The first class, for students aged 10 to 12, did not go off without hitches. That year there were a lot of technical issues with the Raspberry Pis. Noah Kantrowitz saved the day, helping Cunningham and Shaurette get the Raspberry Pis set up. "The setup is a little complex, but he set the guidelines for what equipment we use, and how we plan the classroom every year," Shaurette said.
“There were moments setting up that I said, ‘I don’t know if this is going to work,’” Cunningham recalls.
That first class was eight hours. Then Katie and Barbara wrapped up and did it again the next day, a second time with a whole new class.
By the end of the first day it was already a noted success. “The enthusiasm around it was insane. People were so excited that we were doing it. We were off in our own corner and not central to the conference, but people were stopping by and peeking in,” Cunningham explains.
Once the kids were let loose to experiment, they tried all sorts of things.  “I don't think you'd ever see that kind of experimentation in a classroom full of adults, who would more likely do everything in their power not to break their computers,”  Shaurette wrote of the kids’ ability to learn, write, and run code.
The second day was a whole new class, but this time a group of 13 to 16 year olds, and it was just as successful. “One thing that I find is how energized the kids get at the end,” Cunningham said.
Not long after that, Young Coders was approached by the PyOhio and PyTennessee organizers. Both conferences have held Young Coders nearly every year since. Brad Montgomery has taken over responsibility at PyTennessee, but Cunningham still runs the workshop at PyOhio.
Since the start of the program, Cunningham and Shaurette have taught over 400 kids!
We thank Cunningham and Shaurette for their work in actively promoting and teaching Python to a new generation of programmers.
Categories: FLOSS Project Planets

Join us at Akademy 2017 in Almería!

Planet KDE - Fri, 2017-05-19 08:31
KDE Project:

This July KDE's user and developer community is once again going to come together at Akademy, our largest annual gathering.

I'm going there this year as well, and you'll even be able to catch me on stage giving a talk on Input Methods in Plasma 5. Here's the talk abstract to hopefully whet your appetite:

An overview over the How and Why of Input Methods support (including examples of international writing systems, emoji and word completion) in Plasma on both X11 and Wayland, its current status and challenges, and the work ahead of us.

Text input is the foundational means of human-computer interaction: We configure our systems, program them, and express ourselves through them by writing. Input Methods help us along by converting hardware events into text - complex conversion being a requirement for many international writing systems, new writing systems such as emoji, and at the heart of assistive text technologies such as word completion and spell-checking.

This talk will illustrate the application areas for Input Methods by example, presenting short introductions to several international writing systems as well as emoji input. It will explain why solid Input Methods support is vital to KDE's goal of inclusivity and how Input Methods can make the act of writing easier for all of us.

It will consolidate input from the Input Methods development and user community to provide a detailed overview over the current Input Methods technical architecture and user experience in Plasma, as well as free systems in general. It will dive into existing pain points and present both ongoing work and plans to address them.

This will actually be the first time I'm giving a presentation at Akademy! It's a topic close to my heart, and I hope I can do a decent job conveying a snapshot of all the great and important work people are doing in this area to your eyes and ears.

See you there!

Categories: FLOSS Project Planets

ThinkShout: Skipping a Version - Migrating from Drupal 6 to Drupal 8 with Drupal Migrate

Planet Drupal - Fri, 2017-05-19 08:30

I recently had the opportunity to migrate content from a Drupal 6 site to a Drupal 8 site. This was especially interesting for me as I hadn’t used Drupal 6 before. As you’d expect, there are some major infrastructure changes between Drupal 6 and Drupal 8. Those differences introduce some migration challenges that I’d like to share.

The Migrate module is a wonderful thing. The vast majority of node-based content can be migrated into a Drupal 8 site with minimal effort, and for the content that doesn’t quite fit, there are custom migration sources. A custom migration source is a small class that can provide extra data to your migration in the form of source fields. Typically, a migration will map source fields to destination fields, expecting the fields to exist on both the source node type and destination node type. We actually published an in-depth, two-part blog series about how we use Drupal Migrate to populate Drupal sites with content in conjunction with Google Sheets in our own projects.

In the following example, we are migrating the value of content_field_text_author from Drupal 6 to field_author in Drupal 8. These two fields map one-to-one:

id: book
label: Book
migration_group: d6
deriver: Drupal\node\Plugin\migrate\D6NodeDeriver
source:
  key: migrate
  target: d6
  plugin: d6_node
  node_type: book
process:
  field_author: content_field_text_author
destination:
  plugin: entity:node

This field mapping works because content_field_text_author is a table in the Drupal 6 database and is recognized by the Migrate module as a field. Everyone is happy.

However, in Drupal 6, it’s possible for a field to exist only in the database table of the node type. These tables look like this:

mysql> DESC content_type_book;
+------------------------+------------------+------+-----+---------+-------+
| Field                  | Type             | Null | Key | Default | Extra |
+------------------------+------------------+------+-----+---------+-------+
| vid                    | int(10) unsigned | NO   | PRI | 0       |       |
| nid                    | int(10) unsigned | NO   | MUL | 0       |       |
| field_text_issue_value | longtext         | YES  |     | NULL    |       |
+------------------------+------------------+------+-----+---------+-------+

If we want to migrate the content of field_text_issue_value to Drupal 8, we need to use a custom migration source.

Custom migration sources are PHP classes that live in the src/Plugin/migrate/source directory of your module. For example, you may have a PHP file located at src/Plugin/migrate/source/BookNode.php that would provide custom source fields for a Book content type.

A simple source looks like this:

namespace Drupal\custom_migrate_d6\Plugin\migrate\source;

use Drupal\node\Plugin\migrate\source\d6\Node;

/**
 * @MigrateSource(
 *   id = "d6_book_node",
 * )
 */
class BookNode extends Node {

  /**
   * @inheritdoc
   */
  public function query() {
    $query = parent::query();
    $query->join('content_type_book', 'book', 'n.nid = book.nid');
    $query->addField('book', 'field_text_issue_value');
    return $query;
  }

}

As you can see, we are using our migration source to modify the query the Migrate module uses to retrieve the data to be migrated. Our modification extracts the field_text_issue_value column of the book content type table and provides it to the migration as a source field.

To use this migration source, we need to make one minor change to our migration. We replace this:

plugin: d6_node

With this:

plugin: d6_book_node

We do this because our migration source extends the standard Drupal 6 node migration source in order to add our custom source field.

The migration now contains two source fields and looks like this:

id: book
label: Book
migration_group: d6
deriver: Drupal\node\Plugin\migrate\D6NodeDeriver
source:
  key: migrate
  target: d6
  plugin: d6_book_node
  node_type: book
process:
  field_author: content_field_text_author
  field_issue: field_text_issue_value
destination:
  plugin: entity:node

You’ll find you can do a lot with custom migration sources, and this is especially useful with legacy versions of Drupal where you’ll have to fudge data at least a little bit. So if the Migrate module isn’t doing it for you, you’ll always have the option to step in and give it a little push.

Categories: FLOSS Project Planets

Reinout van Rees: PyGrunn: deep learning with tensorflow ("Trump tweet generator") - Ede Meijer

Planet Python - Fri, 2017-05-19 08:17

(One of my summaries of a talk at the 2017 PyGrunn conference).

He originally used a Java solution for machine learning, but that didn't work very comfortably. He then switched to TensorFlow, written in Python.

Machine learning is learning from data without being explicitly programmed. You feed the computer lots of data and it learns from that. Some examples of the techniques used: linear regression, logistic regression, decision trees, artificial neural networks and much more.

Artificial neural networks are what TensorFlow is about. A normal neural network has an "input layer", a "hidden layer" and an "output layer". The nodes in the three layers are connected, and the neural net tries to learn the "weights" of the connections.
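As a toy illustration of those weighted connections (pure Python, not TensorFlow; the structure is standard, but the weights in any example call are arbitrary):

```python
import math

def sigmoid(z):
    # squashes a weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass: input layer -> hidden layer -> single output."""
    # each hidden node is an activated weighted sum of the inputs
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # the output is a weighted sum of the hidden activations
    return sum(w * h for w, h in zip(output_weights, hidden))
```

"Learning the weights" then means adjusting hidden_weights and output_weights until the outputs match the training data.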

Deep learning means you have neural networks with multiple hidden layers. Often it deals with features at different levels of abstraction. Images that have to be recognized can be cut into several pieces of different sizes and fed to the net as those parts, but also as the full image. Training a model often works by minimizing the error using the "gradient descent" method.
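A one-dimensional sketch of gradient descent, minimizing the example function f(w) = (w - 3)^2 by repeatedly stepping against its gradient f'(w) = 2(w - 3):

```python
def gradient_descent(lr=0.1, steps=100):
    """Minimize f(w) = (w - 3)**2 starting from w = 0."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)**2
        w -= lr * grad       # step downhill, scaled by the learning rate
    return w
```

Each step shrinks the distance to the minimum at w = 3 by a constant factor; real training does the same thing simultaneously over millions of weights.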

TensorFlow? What are tensors? Well, a 0D tensor is a scalar, 1D a vector, 2D a matrix, etc.
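In other words, a tensor's dimensionality is just its nesting depth; a quick pure-Python illustration (numpy calls this `ndim`):

```python
def rank(tensor):
    """Nesting depth of a (possibly nested) list: 0D, 1D, 2D, ..."""
    depth = 0
    while isinstance(tensor, list):
        depth += 1
        tensor = tensor[0]
    return depth

scalar = 3.0                       # 0D tensor
vector = [1.0, 2.0]                # 1D tensor
matrix = [[1.0, 2.0], [3.0, 4.0]]  # 2D tensor
```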

Then he started showing code examples. The ultimate test was trying to teach the network to talk like Trump, so it got fed a huge number of Trump Twitter messages :-) It worked by defining a number of "layers" in order to predict the next character in a tweet. Very low-level, thus.

In the end the generated tweets started to look like Trump tweets. The code is here: https://github.com/EdeMeijer/trumpet

Photo explanation: just a nice unrelated picture from my work-in-progress German model railway

Dutch note (translated): programming Python+Django in the heart of Utrecht, next to the old canal? Water sector, so lots of data and geo. Fun! Nelen&Schuurmans is hiring. Just send me an email, because the vacancy text isn't online yet :-)

Categories: FLOSS Project Planets

Reinout van Rees: PyGrunn: looking at molecules using python - Jonathan Barnoud

Planet Python - Fri, 2017-05-19 07:31

(One of my summaries of a talk at the 2017 PyGrunn conference).

He researches fat molecules, applying simulation to them. F = m * a (plus some much more elaborate formulas). With an elaborate simulation, he was able to explain some of the properties of fat (using a "jupyter" notebook).

His (python) workflow? First you need to prepare the simulation. He did have (or did build) a simulation engine. The preparation takes text files with the following info:

  • Topology.
  • Initial coordinates.
  • Simulation parameters.

Those text files are prepared and fed to the simulation engine. What comes out is a trajectory (a file with the position, direction and speed of every single molecule at every timestep).

The next step is analysis. A problem here is that various simulation engines export different formats. The input side has a similar problem, by the way.

Luckily we've got python. And for python there is a huge number of libraries, including "MDAnalysis" (http://www.mdanalysis.org/), a library that can read these trajectory files. The output: python numpy arrays. Nice! This way you can use the entire python scientific stack (numpy, scipy, etc) with all its power.

Numpy? Made for matrices. So you can work with your entire data set. Or you can filter, mask or slice your data.
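A tiny illustration of that slicing and masking, assuming NumPy is installed; the positions array here is made up, not real simulation output:

```python
import numpy as np

# Hypothetical toy "trajectory" frame: x/y/z positions of 5 particles.
positions = np.array([
    [0.0, 1.0, 2.0],
    [5.0, 1.5, 0.5],
    [2.0, 2.0, 2.0],
    [9.0, 0.0, 1.0],
    [1.0, 4.0, 3.0],
])

x = positions[:, 0]                 # slice: all x coordinates
near_origin = positions[x < 3.0]    # boolean mask: particles with x < 3
```

The same one-liners keep working when the array holds millions of atoms, which is what makes numpy so comfortable for this kind of analysis.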

Thanks to these tools, he can experiment with the data in a comfortable way. And make plots.

But... movies with the 3D simulation are better! So: https://github.com/arose/nglview . His molecules are moving around. He can even place a graph nearby and bind the timeline in the graph and the 3D visualization together. It is all python!

A problem he had was his directory structure. Lots of directories with simulation config files with different settings. A mess. So: http://datreant.org/ , "persistent, pythonic trees for heterogeneous data".

Summary:

  • Python is awesome.
  • Jupyter is awesome too.
  • The python science stack is awesome as well.
  • Each field develops awesome tools based on the above.

Photo explanation: just a nice unrelated picture from my work-in-progress German model railway

Dutch note: Python+Django programming in the heart of Utrecht by the old canal? Water sector, so lots of data and geo. Nice! Nelen&Schuurmans is hiring. Just send me an email, because the job posting isn't online yet :-)

Categories: FLOSS Project Planets

Bertrand Delacretaz: Apache: lean and mean, durable, fun!

Planet Apache - Fri, 2017-05-19 05:45

Here’s another blog post of mine that was initially published by Computerworld UK.

My current Fiat Punto Sport is the second Diesel car that I own, and I love those engines. Very smooth yet quite powerful acceleration, good fuel savings, a discount on state taxes thanks to low pollution, and it’s very reliable and durable. And fun to drive. How often does Grandma go “wow” when you put the throttle down in your car? That happens here, and that Grandma is not usually a car freak.

Diesel engines used to be boring, but they have made incredible progress in the last few years – while staying true to their basic principles of simplicity, robustness and reliability.

The recent noise about the Apache Software Foundation (ASF) moving to Git, or not, made me think that the ASF might well be the (turbocharged, like my car) Diesel engine of open source. And that might be a good thing.

The ASF’s best practices are geared towards project sustainability, and building communities around our projects. That might not be as flashy as creating a cool new project in three days, but sometimes you need to build something durable, and you need to be able to provide your users with some reassurances that that will be the case – or that they can take over cleanly if not.

In a similar way to a high tech Diesel engine that’s built to last and operate smoothly, I think the ASF is well suited for projects that have a long term vision. We often encourage projects that want to join the ASF via its Incubator to first create a small community and release some initial code, at some other place, before joining the Foundation. That’s one way to help those projects prove that they are doing something viable, and it’s also clearly faster to get some people together and just commit some code to one of the many available code sharing services, than following the ASF’s rules for releases, voting etc.

A Japanese 4-cylinder 600cc gasoline-powered sports bike might be more exciting than my Punto on a closed track, but I don’t like driving those in day-to-day traffic or on long trips. Too brutal, requires way too much attention. There’s space for both that and my car’s high tech Diesel engine, and I like both styles actually, depending on the context.

Open Source communities are not one-size-fits-all: there’s space for different types of communities, and by exposing each community’s positive aspects, instead of trying to get them to fight each other, we might just grow the collective pie and live happily ever after (there’s a not-so-hidden message to sensationalistic bloggers in that last paragraph).

I’m very happy with the ASF being the turbocharged Diesel engine of Open Source – it does have to stay on its toes to make sure it doesn’t turn into a boring old-style Diesel, but there’s no need to rush evolution. There’s space for different styles.


Categories: FLOSS Project Planets

Tryton News: Bugfix release for trytond-4.4.1

Planet Python - Fri, 2017-05-19 05:00

A bug in the trytond release 4.4.0 has been found that prevents the server from creating foreign key constraints. This means that databases created or updated with this version are missing those constraints. It is important to upgrade to the new release 4.4.1 and, exceptionally, re-run the database update to create the missing foreign keys.

Categories: FLOSS Project Planets

Michael Prokop: Debian stretch: changes in util-linux #newinstretch

Planet Debian - Fri, 2017-05-19 04:42

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub’s Icon font, fonts-octicons and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new but its tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available and we also have a few new tools available.

Tools that have been taken over from other packages
  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mesg: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie
New tools
  • lsipc: show information on IPC facilities, e.g.:

    root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                               LIMIT                USED USE%
    MSGMNI   Number of message queues                  32000                   0 0.00%
    MSGMAX   Max size of message (bytes)               8192                    - -
    MSGMNB   Default max size of queue (bytes)         16384                   - -
    SHMMNI   Shared memory segments                    4096                    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    - -
    SHMMIN   Min size of shared memory segment (bytes) 1                       - -
    SEMMNI   Number of semaphore identifiers           32000                   0 0.00%
    SEMMNS   Total number of semaphores                1024000000              0 0.00%
    SEMMSL   Max semaphores per semaphore set.         32000                   - -
    SEMOPM   Max number of operations per semop(2)     500                     - -
    SEMVMX   Semaphore max value                       32767                   - -
  • lslogins: display information about known users in the system, e.g.:

    root@ff2713f55b36:/# lslogins
      UID USER     PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1
    65534 nobody      0        0        1            nobody
  • lsns: list system namespaces, e.g.:

    root@ff2713f55b36:/# lsns
            NS TYPE   NPROCS PID USER COMMAND
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices
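On Linux, the namespace inodes that lsns reports are also visible as symlink targets under /proc, which makes them easy to read from a script; a small hypothetical Python sketch:

```python
import os

# Each entry in /proc/<pid>/ns is a symlink like "pid:[4026531836]"; the
# bracketed number is the namespace inode lsns shows in its NS column.
# Linux-only: /proc must be mounted.
def namespace_ids(pid="self"):
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

ids = namespace_ids()
```

Two processes share a namespace exactly when these symlink targets match, which is essentially how lsns groups processes.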
New features/options

agetty (open a terminal and set its mode):

--reload reload prompts on running agetty instances

blkdiscard (discard the content of sectors on a device):

    -p, --step <num>  size of the discard iterations within the offset
    -z, --zeroout     zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

    -d, --deadline             set policy to SCHED_DEADLINE
    -T, --sched-runtime <ns>   runtime parameter for DEADLINE
    -P, --sched-period <ns>    period parameter for DEADLINE
    -D, --sched-deadline <ns>  deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

    -f, --from <N>    start at the track N (default 0)
    -t, --to <N>      stop at the track N
    -r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

    -B, --protect-boot            don't erase bootbits when creating a new label
    -o, --output <list>           output columns
        --bytes                   print SIZE in bytes rather than in human readable format
    -w, --wipe <mode>             wipe signatures (auto, always or never)
    -W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

    New available columns (for -o):
      gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
      dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
      bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
      sgi: Device Start End Sectors Cylinders Size Type Id Attrs
      sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

    -J, --json              use JSON output format
    -M, --mountpoint <dir>  the mountpoint directory
    -x, --verify            verify mount table content (default is fstab)
        --verbose           print more details
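The new JSON output modes (here findmnt -J, but losetup, lsblk and lslocks gained the same flag) are easy to consume from scripts; this parses a trimmed, invented sample shaped like findmnt -J output:

```python
import json

# A hypothetical, trimmed sample of what `findmnt -J` emits; real output
# has more keys and one entry per mounted filesystem.
sample = '''
{"filesystems": [
   {"target": "/", "source": "/dev/sda1", "fstype": "ext4", "options": "rw,relatime",
    "children": [
       {"target": "/boot", "source": "/dev/sda2", "fstype": "ext2", "options": "rw"}
    ]}
]}
'''

tree = json.loads(sample)
root = tree["filesystems"][0]
print(root["target"], root["fstype"])  # prints: / ext4
```

Compared to scraping the column-aligned default output, the JSON form survives renamed columns and odd mount paths.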

flock (manage file locks from shell scripts):

    -F, --no-fork  execute command without forking
        --verbose  increase verbosity
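flock(1) is a thin wrapper around the flock(2) system call, so the same locking can be sketched from Python on Unix; the lock target here is just a throwaway temp file:

```python
import fcntl
import tempfile

def with_file_lock(lockfile):
    """Take an exclusive, non-blocking flock(2) lock, do work, release.

    Roughly what `flock --nonblock <file> <command>` arranges; the
    "work" here is only a placeholder string.
    """
    fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises BlockingIOError if held
    try:
        return "did work under lock"
    finally:
        fcntl.flock(lockfile, fcntl.LOCK_UN)

with tempfile.NamedTemporaryFile() as f:
    print(with_file_lock(f))  # prints: did work under lock
```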

getty (open a terminal and set its mode):

--reload reload prompts on running agetty instances

hwclock (query or set the hardware clock):

        --get           read hardware clock and print drift corrected result
        --update-drift  update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

    -c, --intro-command <string>  intro sent before ldattach
    -p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

    -e, --skip-empty                 do not log empty lines when processing files
        --no-act                     do everything except write the log
        --octet-count                use rfc6587 octet counting
    -S, --size <size>                maximum size for a single message
        --rfc3164                    use the obsolete BSD syslog protocol
        --rfc5424[=<snip>]           use the syslog protocol (the default for remote);
                                     <snip> can be notime, or notq, and/or nohost
        --sd-id <id>                 rfc5424 structured data ID
        --sd-param <data>            rfc5424 structured data name=value
        --msgid <msgid>              set rfc5424 message id field
        --socket-errors[=<on|off|auto>]  print connection errors when using Unix sockets

losetup (set up and control loop devices):

    -L, --nooverlap             avoid possible conflict between devices
        --direct-io[=<on|off>]  open backing file with O_DIRECT
    -J, --json                  use JSON --list output format

    New available --list column:
      DIO  access backing file with direct-io

lsblk (list information about block devices):

    -J, --json  use JSON output format

    New available columns (for --output):
      HOTPLUG     removable or hotplug device (usb, pcmcia, ...)
      SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

    -y, --physical  print physical instead of logical IDs

    New available column:
      DRAWER  logical drawer number

lslocks (list local system locks):

    -J, --json            use JSON output format
    -i, --noinaccessible  ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

    -C, --cgroup[=<file>]       enter cgroup namespace
        --preserve-credentials  do not touch uids or gids
    -Z, --follow-context        set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

        --date <timestamp>  date time of timestamp to wake
        --list-modes        list available modes

sfdisk (display or manipulate a disk partition table):

    New commands:
      -J, --json <dev>                   dump partition table in JSON format
      -F, --list-free [<dev> ...]        list unpartitioned free areas of each device
      -r, --reorder <dev>                fix partitions order (by start offset)
          --delete <dev> [<part> ...]    delete all or specified partitions
          --part-label <dev> <part> [<str>]   print or change partition label
          --part-type <dev> <part> [<type>]   print or change partition type
          --part-uuid <dev> <part> [<uuid>]   print or change partition uuid
          --part-attrs <dev> <part> [<str>]   print or change partition attributes

    New options:
      -a, --append                  append partitions to existing partition table
      -b, --backup                  backup partition table sectors (see -O)
          --bytes                   print SIZE in bytes rather than in human readable format
          --move-data[=<typescript>]  move partition data after relocation (requires -N)
          --color[=<when>]          colorize output (auto, always or never); colors are enabled by default
      -N, --partno <num>            specify partition number
      -n, --no-act                  do everything except write to device
          --no-tell-kernel          do not tell kernel about changes
      -O, --backup-file <path>      override default backup file name
      -o, --output <list>           output columns
      -w, --wipe <mode>             wipe signatures (auto, always or never)
      -W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)
      -X, --label <name>            specify label type (dos, gpt, ...)
      -Y, --label-nested <name>     specify nested label type (dos, bsd)

    Available columns (for -o):
      gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
      dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
      bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
      sgi: Device Start End Sectors Cylinders Size Type Id Attrs
      sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

    -o, --options <list>  comma-separated list of swap options

    New available columns (for --show):
      UUID   swap uuid
      LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

    -C, --cgroup[=<file>]  unshare cgroup namespace
        --propagation slave|shared|private|unchanged
                           modify mount propagation in mount namespace
    -s, --setgroups allow|deny
                           control the setgroups syscall in user namespaces

Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

    -c, --id                  change or print partition Id
        --change-id           change Id
        --print-id            print Id
    -C, --cylinders <number>  set the number of cylinders to use
    -H, --heads <number>      set the number of heads to use
    -S, --sectors <number>    set the number of sectors to use
    -G, --show-pt-geometry    deprecated, alias to --show-geometry
    -L, --Linux               deprecated, only for backward compatibility
    -u, --unit S              deprecated, only sector unit is supported
Categories: FLOSS Project Planets

Python Bytes: #26 How have you automated your life, or CLI, with Python?

Planet Python - Fri, 2017-05-19 04:00
Sponsored by Rollbar: rollbar.com/pythonbytes

Brian #1: Two-part series on interactive terminal applications

Part 1: 4 terminal applications with great command-line UIs (https://opensource.com/article/17/5/4-terminal-apps)

  • For comparison, both OK but could be better: the MySQL REPL and the Python REPL.
  • bpython (https://bpython-interpreter.org/) adds autocompletion and other goodies; also check out ptpython (https://pypi.python.org/pypi/ptpython) as a REPL replacement.
  • mycli (http://mycli.net/) adds context-aware completion to MySQL; pgcli (https://www.pgcli.com/) does the same for Postgres and adds fuzzy search.
  • fish (https://fishshell.com/): like bash, but with better search history.

Part 2: 4 Python libraries for building great CLIs (https://opensource.com/article/17/5/4-practical-python-libraries)

  • prompt_toolkit (https://python-prompt-toolkit.readthedocs.io/en/latest/): for building a REPL-like interface; includes command history, auto-suggestion and auto-completion.
  • click (http://click.pocoo.org/5/): includes a pager and the ability to launch an editor.
  • fuzzyfinder (https://pypi.python.org/pypi/fuzzyfinder): makes suggestions; the article shows how to combine it with prompt_toolkit.
  • pygments (http://pygments.org/): syntax highlighting.

Michael #2: How have you automated your life with Python? (https://www.reddit.com/r/Python/comments/69ba93/how_have_you_automated_your_life_with_python_if/)

  • There is something magical about writing code that interacts with the physical world.
  • "I have a script which runs every 5 minutes between 17:00 and 17:30 which scrapes the train times website and sends me desktop notifications saying whether or not my trains home are delayed / cancelled."
  • "I recently wrote a quick python script that tells me when my girlfriend comes home: it sniffs the network for DHCP traffic, and when her phone joins the wifi network it uses the say command to let me know."
  • "Wrote a script to check if nearby ice cream shops are stocking my favourite (rare) flavour by scanning their menu page for keywords."
  • "A script to check the drive time to/from work using a route with tolls or without tolls, to try and save some money when the times aren't too different. Using the Google Maps API and a Flask site."
  • "I have a script that generates weekly status update emails based off my git commit messages and pull requests. It also creates timesheets in Harvest based on the projects I'm assigned."
  • "I have thrown together some python that automatically controls my reverse-cycle AC system so that it makes optimal use of my solar panels on my roof."

Brian #3: Building a Simple Birthday App with Flask-SQLAlchemy (http://pybit.es/flask-sqlalchemy-bday-app.html)

  • A nice simple application with a clear need: keep track of upcoming birthdays, avoid Facebook, build a simple Flask app, try SQLAlchemy.

Sponsored by Rollbar: try them at rollbar.com/pythonbytes and don't forget to visit their booth at PyCon!

Michael #4: Spelling with Elemental Symbols (https://www.amin.space/blog/2017/5/elemental_speller/)

  • How does it work? Input: "Amputations"; output: "AmPuTaTiONS", "AmPUTaTiONS".
  • Generating character groupings: 'AmPuTaTiONS' is (2,2,2,2,1,1,1) and 'AmPUTaTiONS' is (2,1,1,2,2,1,1,1). How many groupings are there in general for a word of length n? fib(n + 1)!
  • Addressing performance issues: a few attempts don't add much, but:
  • Memoization: the technique of saving a function's output and returning it if the function is called again with the same inputs. A memoized function only needs to generate output once for a given input. This can be very helpful with expensive functions that are called many times with the same few inputs, but it only works for pure functions. → 30% faster.
  • Algorithms: switching to directed graphs and recursion changes O(2^n) to O(n) and the runtime from 16 minutes to 10 seconds.
  • The project taught a great deal along the way: combinatorics, performance profiling, time complexity, memoization, recursion, graphs and trees.

Brian #5: IDEs for beginners

  • A recent discussion on Reddit about Thonny (https://www.reddit.com/r/Python/comments/6ahnsb/thonny_python_ide_for_beginners/).
  • Mixed (mostly negative) feelings about encouraging beginner IDEs; and yet there is IDLE, there is Thonny, ...
  • Are these useful? Anti-useful? Isn't learning a decent editor part of learning to program?

Michael #6: PDF Plumber (https://twitter.com/dtizzlenizzle/status/861024781273112576)

  • Plumb a PDF for detailed information about each char, rectangle, line, et cetera, and easily extract text and tables.
  • Visual debugging with .to_image().
  • Extracting tables: pdfplumber's approach to table detection borrows heavily from Anssi Nurminen's master's thesis and is inspired by Tabula. For any given PDF page it finds the lines that are (a) explicitly defined and/or (b) implied by the alignment of words on the page; merges overlapping or nearly-overlapping lines; finds the intersections of all those lines; finds the most granular set of rectangles (i.e., cells) that use those intersections as their vertices; and groups contiguous cells into tables.
  • Check out the demonstrations section.
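The fib(n + 1) grouping count from the elemental-speller segment above is a natural fit for memoization; a small sketch counting the ways to split a word into 1- and 2-character chunks:

```python
from functools import lru_cache

# Memoized count of character groupings: the number of ways to split a
# word of length n into chunks of 1 or 2 characters. The recurrence is
# Fibonacci's, so the answer is fib(n + 1).
@lru_cache(maxsize=None)
def grouping_count(n):
    if n <= 1:
        return 1  # one way to group an empty or single-character tail
    return grouping_count(n - 1) + grouping_count(n - 2)

print(grouping_count(len("Amputations")))  # prints: 144
```

Without the cache the naive recursion is O(2^n); with it, each n is computed once.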
Categories: FLOSS Project Planets

Jeff Geerling's Blog: Call for Sessions is open for DrupalCamp St. Louis 2017 - come and speak!

Planet Drupal - Thu, 2017-05-18 22:13

DrupalCamp St. Louis 2017 will be held September 22-23, 2017, in St. Louis, Missouri. This will be our fourth year hosting a DrupalCamp, and we're one of the best camps for new presenters!

If you did something amazing with Drupal, if you're an aspiring themer, site builder, or developer, or if you are working on making the web a better place, we'd love for you to submit a session. Session submissions are due by August 1.

Categories: FLOSS Project Planets

Benjamin Mako Hill: Children’s Perspectives on Critical Data Literacies

Planet Debian - Thu, 2017-05-18 20:51

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch of course — to analyze data about their own learning and social interactions. An example of one of those programs, finding how many of one’s followers in Scratch are not from the United States, is shown below.

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as, “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking“ and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, Youtube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this: when flag clicked if then user’s followers < 300 stop all. I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-05-18

Planet Apache - Thu, 2017-05-18 19:58
  • Spotting a million dollars in your AWS account · Segment Blog

    You can easily split your spend by AWS service per month and call it a day. Ten thousand dollars of EC2, one thousand to S3, five hundred dollars to network traffic, etc. But what’s still missing is a synthesis of which products and engineering teams are dominating your costs.  Then, add in the fact that you may have hundreds of instances and millions of containers that come and go. Soon, what started as a simple analysis problem has quickly become unimaginably complex.  In this follow-up post, we’d like to share details on the toolkit we used. Our hope is to offer up a few ideas to help you analyze your AWS spend, no matter whether you’re running only a handful of instances, or tens of thousands.

    (tags: segment money costs billing aws ec2 ecs ops)
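The per-service split described in the excerpt amounts to a simple aggregation; a sketch with invented line items echoing the quote's numbers:

```python
from collections import defaultdict

# Hypothetical billing line items: (service, team, monthly cost in USD).
line_items = [
    ("EC2", "search", 6000.0),
    ("EC2", "ingest", 4000.0),
    ("S3", "ingest", 1000.0),
    ("network", "search", 500.0),
]

# Roll the spend up two ways: per service (the easy cut) and per team
# (the synthesis the article says is usually missing).
by_service = defaultdict(float)
by_team = defaultdict(float)
for service, team, cost in line_items:
    by_service[service] += cost
    by_team[team] += cost

print(dict(by_service))
```

The hard part at scale is not this arithmetic but attributing ephemeral instances and containers to the right team in the first place.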

Categories: FLOSS Project Planets