FLOSS Project Planets

Wesley Chun: Generating slides from spreadsheet data

Planet Python - Wed, 2016-11-30 05:58
NOTE: The code covered in this post is also available in a video walkthrough.


Introduction

A common use case when you have data in a spreadsheet or database is to find ways of making that data more visually appealing to others. This is the subject of today's post, where we'll walk through a simple Python script that generates presentation slides based on data in a spreadsheet, using both the Google Sheets and Slides APIs.

Specifically, we'll take all spreadsheet cells containing values and create an equivalent table on a slide with that data. The Sheet also features a pre-generated pie chart, added from the Explore in Google Sheets feature, that we'll import into a blank slide. Not only that, but if the data in the Sheet is updated (meaning the chart is as well), then the imported chart image in the presentation can be updated too. These are just two examples of generating slides from spreadsheet data. The example Sheet we're getting the data from for this script looks like this:


The data in this Sheet originates from the Google Sheets API codelab. In the codelab, this data lives in a SQLite relational database, and in the previous post covering how to migrate SQL data to Google Sheets, we "imported" that data into the Sheet we're using. As mentioned before, the pie chart comes from the Explore feature.

Using the Google Sheets & Slides APIs

The scopes needed for this application are the read-only scope for Sheets (to read the cell contents and the pie chart) and the read-write scope for Slides, since we're creating a new presentation:
  • 'https://www.googleapis.com/auth/spreadsheets.readonly' — Read-only access to Google Sheets and properties
  • 'https://www.googleapis.com/auth/presentations' — Read-write access to Slides and Slides presentation properties
If you're new to using Google APIs, we recommend reviewing earlier posts & videos covering how to set up projects and the authorization boilerplate, so that we can focus on the main app. Once we've authorized our app, two service endpoints are created, one for each API. The one for Sheets is saved to the SHEETS variable, while the one for Slides goes to SLIDES.
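
For reference, here is that boilerplate as it appears in the complete script at the end of this post (nothing new, just collected in one place):

from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools

SCOPES = (
    'https://www.googleapis.com/auth/spreadsheets.readonly',
    'https://www.googleapis.com/auth/presentations',
)
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
HTTP = creds.authorize(Http())
SHEETS = discovery.build('sheets', 'v4', http=HTTP)
SLIDES = discovery.build('slides', 'v1', http=HTTP)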

Start with Sheets

The first thing to do is to grab all the data we need from the Google Sheet using the Sheets API. You can either supply your own Sheet with your own chart, or you can run the script from the post mentioned earlier to create a Sheet identical to the one above. In either case, you need to provide the ID of the Sheet to read from, which is saved to the sheetID variable. Using its ID, we call spreadsheets().values().get() to pull out all the cells (as rows & columns) from the Sheet and save them to orders:
sheetID = '. . .' # use your own!
orders = SHEETS.spreadsheets().values().get(range='Sheet1',
        spreadsheetId=sheetID).execute().get('values')
The next step is to call spreadsheets().get() to get all the sheets in the Sheet — there's only one, so grab it at index 0. Since this sheet only has one chart, we also use index 0 to get that:
sheet = SHEETS.spreadsheets().get(spreadsheetId=sheetID,
        ranges=['Sheet1']).execute().get('sheets')[0]
chartID = sheet['charts'][0]['chartId']
That's it for Sheets. Everything from here on out takes place in Slides.

Create new Slides presentation

A new slide deck can be created with SLIDES.presentations().create() — or alternatively with the Google Drive API, which we won't do here. We'll name it "Generating slides from spreadsheet data DEMO" and save its (new) ID along with the IDs of the title and subtitle textboxes on the (one) title slide created in the new deck:
DATA = {'title': 'Generating slides from spreadsheet data DEMO'}
rsp = SLIDES.presentations().create(body=DATA).execute()
deckID = rsp['presentationId']
titleSlide = rsp['slides'][0]
titleID = titleSlide['pageElements'][0]['objectId']
subtitleID = titleSlide['pageElements'][1]['objectId']

Create slides for table & chart

A mere title slide doesn't suffice: we need a place for the cell data as well as the pie chart, so we'll create a slide for each. While we're at it, we might as well fill in the text for the presentation title and subtitle. These requests are self-explanatory, as you can see below in the reqs variable. The SLIDES.presentations().batchUpdate() method is then used to send the four commands to the API. Upon return, save the IDs of both the cell table slide and the blank slide for the chart:
reqs = [
    {'createSlide': {'slideLayoutReference': {'predefinedLayout': 'TITLE_ONLY'}}},
    {'createSlide': {'slideLayoutReference': {'predefinedLayout': 'BLANK'}}},
    {'insertText': {'objectId': titleID, 'text': 'Importing Sheets data'}},
    {'insertText': {'objectId': subtitleID, 'text': 'via the Google Slides API'}},
]
rsp = SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=deckID).execute().get('replies')
tableSlideID = rsp[0]['createSlide']['objectId']
chartSlideID = rsp[1]['createSlide']['objectId']
Note the order of the requests: the create-slide requests come first, followed by the text inserts. Responses come back from the API in the same order as the requests were sent, which is why the cell table slide ID comes back first (index 0) followed by the chart slide ID (index 1). The text inserts don't have any meaningful return values and are thus ignored.

Filling out the table slide

Now let's focus on the table slide. There are two things we need to accomplish. In the previous set of requests, we asked the API to create a "title only" slide, meaning there's (only) a textbox for the slide title. The next snippet of code gets all the page elements on that slide so we can get the ID of that textbox, the only thing on that page:
rsp = SLIDES.presentations().pages().get(presentationId=deckID,
        pageObjectId=tableSlideID).execute().get('pageElements')
textboxID = rsp[0]['objectId']

On this slide, we need to add the cell table for the Sheet data, so a create-table request takes care of that. The required elements in such a call include the ID of the slide the table should go on as well as the total number of rows and columns desired. Fortunately, all of that is available from tableSlideID and the orders saved earlier. Oh, and add a title for this table slide too. Here's the code:
reqs = [
    {'createTable': {
        'elementProperties': {'pageObjectId': tableSlideID},
        'rows': len(orders),
        'columns': len(orders[0])},
    },
    {'insertText': {'objectId': textboxID, 'text': 'Toy orders'}},
]
rsp = SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=deckID).execute().get('replies')
tableID = rsp[0]['createTable']['objectId']

Another call to SLIDES.presentations().batchUpdate() and we're done, saving the ID of the newly-created table. Next, we'll fill in each cell of that table.

Populate table & add chart image

The first set of requests we need now fills in each cell of the table. The most compact way to issue these requests is with a double-for-loop list comprehension: the first loop iterates over the rows while the second loops through each column (of each row). Magically, this creates all the text insert requests needed.
reqs = [
    {'insertText': {
        'objectId': tableID,
        'cellLocation': {'rowIndex': i, 'columnIndex': j},
        'text': str(data),
    }} for i, order in enumerate(orders) for j, data in enumerate(order)]
The final request "imports" the chart from the Sheet onto the blank slide whose ID we saved earlier. Note: while the dimensions below seem completely arbitrary, be assured we're using the same size & transform as a blank rectangle we drew on a slide earlier (and then read those values back from). The alternative would be to do the math to come up with your object dimensions. Here is the code we're talking about, followed by the actual call to the API:
reqs.append({'createSheetsChart': {
    'spreadsheetId': sheetID,
    'chartId': chartID,
    'linkingMode': 'LINKED',
    'elementProperties': {
        'pageObjectId': chartSlideID,
        'size': {
            'height': {'magnitude': 7075, 'unit': 'EMU'},
            'width': {'magnitude': 11450, 'unit': 'EMU'}
        },
        'transform': {
            'scaleX': 696.6157,
            'scaleY': 601.3921,
            'translateX': 583875.04,
            'translateY': 444327.135,
            'unit': 'EMU',
        },
    },
}})
SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=deckID).execute()
Once all the requests have been created, send them to the Slides API and we're done. (In the actual app, you'll see we've sprinkled in various print() calls to let the user know which steps are being executed.)
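
If you do want to compute dimensions rather than read them off an existing shape, here is a minimal sketch of that math. The EMU-per-inch constant is standard; the 10" x 7.5" page size is an assumption for a 4:3 deck (a default 16:9 deck is 10" x 5.625"), so check your own presentation's pageSize first:

EMU_PER_INCH = 914400             # EMUs (English Metric Units) per inch
PAGE_W = 10 * EMU_PER_INCH        # assumed 4:3 page: 10" wide...
PAGE_H = int(7.5 * EMU_PER_INCH)  # ...by 7.5" tall

# e.g., size an element at half the page and center it:
elem_w, elem_h = PAGE_W // 2, PAGE_H // 2
translate_x = (PAGE_W - elem_w) // 2
translate_y = (PAGE_H - elem_h) // 2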

Conclusion

The entire script clocks in at just under 100 lines of code... see below. If you run it, you should get output that looks something like this:
$ python3 slides_table_chart.py
** Fetch Sheets data
** Fetch chart info from Sheets
** Create new slide deck
** Create 2 slides & insert slide deck title+subtitle
** Fetch table slide title (textbox) ID
** Create table & insert table slide title
** Fill table cells & create linked chart to Sheets
DONE
When the script has completed, you should have a new presentation with these 3 slides:




Below is the entire script for your convenience; it runs on both Python 2 and Python 3 (unmodified!). If I were to divide the script into major sections, they would be represented by each of the print() calls above. Here's the complete script—by using, copying, and/or modifying this code or any other piece of source from this blog, you implicitly agree to its Apache2 license:
from __future__ import print_function

from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools

SCOPES = (
    'https://www.googleapis.com/auth/spreadsheets.readonly',
    'https://www.googleapis.com/auth/presentations',
)
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
HTTP = creds.authorize(Http())
SHEETS = discovery.build('sheets', 'v4', http=HTTP)
SLIDES = discovery.build('slides', 'v1', http=HTTP)

print('** Fetch Sheets data')
sheetID = '. . .' # use your own!
orders = SHEETS.spreadsheets().values().get(range='Sheet1',
        spreadsheetId=sheetID).execute().get('values')

print('** Fetch chart info from Sheets')
sheet = SHEETS.spreadsheets().get(spreadsheetId=sheetID,
        ranges=['Sheet1']).execute().get('sheets')[0]
chartID = sheet['charts'][0]['chartId']

print('** Create new slide deck')
DATA = {'title': 'Generating slides from spreadsheet data DEMO'}
rsp = SLIDES.presentations().create(body=DATA).execute()
deckID = rsp['presentationId']
titleSlide = rsp['slides'][0]
titleID = titleSlide['pageElements'][0]['objectId']
subtitleID = titleSlide['pageElements'][1]['objectId']

print('** Create 2 slides & insert slide deck title+subtitle')
reqs = [
    {'createSlide': {'slideLayoutReference': {'predefinedLayout': 'TITLE_ONLY'}}},
    {'createSlide': {'slideLayoutReference': {'predefinedLayout': 'BLANK'}}},
    {'insertText': {'objectId': titleID, 'text': 'Importing Sheets data'}},
    {'insertText': {'objectId': subtitleID, 'text': 'via the Google Slides API'}},
]
rsp = SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=deckID).execute().get('replies')
tableSlideID = rsp[0]['createSlide']['objectId']
chartSlideID = rsp[1]['createSlide']['objectId']

print('** Fetch table slide title (textbox) ID')
rsp = SLIDES.presentations().pages().get(presentationId=deckID,
        pageObjectId=tableSlideID).execute().get('pageElements')
textboxID = rsp[0]['objectId']

print('** Create table & insert table slide title')
reqs = [
    {'createTable': {
        'elementProperties': {'pageObjectId': tableSlideID},
        'rows': len(orders),
        'columns': len(orders[0])},
    },
    {'insertText': {'objectId': textboxID, 'text': 'Toy orders'}},
]
rsp = SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=deckID).execute().get('replies')
tableID = rsp[0]['createTable']['objectId']

print('** Fill table cells & create linked chart to Sheets')
reqs = [
    {'insertText': {
        'objectId': tableID,
        'cellLocation': {'rowIndex': i, 'columnIndex': j},
        'text': str(data),
    }} for i, order in enumerate(orders) for j, data in enumerate(order)]

reqs.append({'createSheetsChart': {
    'spreadsheetId': sheetID,
    'chartId': chartID,
    'linkingMode': 'LINKED',
    'elementProperties': {
        'pageObjectId': chartSlideID,
        'size': {
            'height': {'magnitude': 7075, 'unit': 'EMU'},
            'width': {'magnitude': 11450, 'unit': 'EMU'}
        },
        'transform': {
            'scaleX': 696.6157,
            'scaleY': 601.3921,
            'translateX': 583875.04,
            'translateY': 444327.135,
            'unit': 'EMU',
        },
    },
}})
SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=deckID).execute()
print('DONE')
As with our other code samples, you can now customize it to learn more about the API or integrate it into other apps for your own needs, whether in a mobile frontend, a sysadmin script, or a server-side backend!

Code challenge

Given the knowledge you picked up from this post and its code sample, augment the script with another call to the Sheets API that updates the number of toys ordered by one of the customers, then add the corresponding call to the Slides API that refreshes the linked image based on the changes made to the Sheet (and chart). EXTRA CREDIT: Use the Google Drive API to monitor the Sheet so that any updates to toy orders will result in an "automagic" update of the chart image in the Slides presentation.
Categories: FLOSS Project Planets

SystemSeed: SystemSeed do Minsk

Planet Drupal - Wed, 2016-11-30 05:41

If you haven’t noticed from our Twitter feed, this week we’ve flown the team to Minsk, Belarus, to socialise, eat, drink and be merry while maybe taking in a little culture and even some work(!)

One of the first things that we all notice when the team gets together is that you can’t hug someone over Skype… you can’t make the other person a cup of tea, or complain about the same weather.

While distributed teams can pick from the world’s finest global talent as far as timezones and personal or client flexibility allow, meeting in person is a necessary task to undertake on a regular basis. It’s not essential that every day is spent this way, and in our modern era of collaborative tools and communication methods, restricting yourself to local talent, or to those willing to relocate, is not the most sensible choice. We’d still much rather be distributed, but we greatly appreciate these times together.

We’ll continue to blog through the week and Tweet some extras as they are happening.

From all of us in our temporary Minsk HQ - have a fun and productive day and if you are sat next to a colleague give them a hug or make them a cup of tea. Not all teams can enjoy this luxury every day of their work life.

Categories: FLOSS Project Planets

Krita 3.1 Release Candidate

Planet KDE - Wed, 2016-11-30 05:12

Due to illness this comes a week later than planned, but we are still happy to release the first release candidate for Krita 3.1 today. There are a number of important bug fixes, and we intend to fix a number of other bugs still in time for the final release.

  • Fix a crash when saving a document that has a vector layer to anything but the native format (regression in beta 3)
  • Fix exporting images using the commandline on Linux
  • Update the OSX QuickLook plugin to use the right thumbnail sizes
  • Improved zoom menu icons
  • Unify colors on all svg icons
  • Fix tilt-elevation brushes to work properly on a rotated or mirrored canvas
  • Improve drawing with the stabilizer enabled
  • Fix isotropic spacing when painting on a mirrored canvas
  • Fix a race condition when saving
  • Fix multi-window usage: the tool options palette would only be available in the last opened window; now it’s available everywhere.
  • Fix a number of memory leaks
  • Fix selecting the saving location for rendering animations (there are still several bugs in that plugin, though — we’re on it!)
  • Improve rendering speed of the popup color selector

You can find out more about what is going to be new in Krita 3.1 in the release notes. The release notes aren’t finished yet, but take a sneak peek all the same!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store is available in the beta channel.

OSX

Source code
Categories: FLOSS Project Planets

Fabio Zadrozny: PyDev 5.4.0 (Python 3.6, Patreon crowdfunding)

Planet Python - Wed, 2016-11-30 02:12
PyDev 5.4.0 is now available.

The main new feature in this release is support for Python 3.6 -- it's still not 100% complete, but at least all the new syntax is already supported (so, for instance, syntax and code analysis is already done, even inside f-strings, although code-completion is still not available inside them).

Also, it's now possible to launch modules using the python '-m' flag (so, PyDev will resolve the module name from the file and will launch it using 'python -m module.name'). Note that this must be enabled at 'Preferences > PyDev > Run'.

The last feature which I think is noteworthy is that the debugger can now show return values (note that this has a side-effect of making those variables live longer, so, if your program cares deeply about that, it's possible to disable it in Preferences > PyDev > Debug).

Now, for those that enjoy using PyDev, I've started a crowdfunding campaign at Patreon: https://www.patreon.com/fabioz so that PyDev users can help to keep it properly maintained... Really, without the PyDev supporters: https://www.brainwy.com/supporters/PyDev, it wouldn't be possible to keep it going -- thanks a lot! Hope to see you at https://www.patreon.com/fabioz too ;)
Categories: FLOSS Project Planets

Arturo Borrero González: Creating a team for netfilter packages in debian

Planet Debian - Wed, 2016-11-30 00:00

There are about 15 Netfilter packages in Debian, and they are maintained by separate people.

Yesterday, I contacted the maintainers of the main packages to propose the creation of a pkg-netfilter team to maintain all the packages together.

The benefits of maintaining packages in a team are already known to all, and I would expect this move to raise the overall quality of the packages.

By now, the involved packages and maintainers are:

We should probably ping Jochen Friedrich as well, who maintains arptables and ebtables. Also, there are some other non-official Netfilter packages, like iptables-persistent. I’m undecided about what to do with them, as my primary impulse is to only put upstream packages in the team.

Given the release of Stretch is just some months ahead, the creation of this packaging team will happen after the release, so we don’t have any hurry moving things now.

Categories: FLOSS Project Planets

Python 4 Kids: 3: Conditionals/if and Code Blocks

Planet Python - Tue, 2016-11-29 20:50

Book ref: 70ff

Python 2.7: Same, see earlier post

Note: this tutorial will be easier if you use Python-IDLE, not Python (Command Line).

So far, the programs you’ve written have gone straight from top to bottom, without skipping any lines. There are times though, when you do want a line to run only some of the time. For example, if you wrote a game, you’d only want one of these two lines of code to run:

print("Congratulations, you win!!!")
print("Sorry, you lose.")

In fact, you’d want your code to look something like:

if player wins:
    print("Congratulations, you win!!")
otherwise:
    print("Sorry, you lose.")

Python isn’t quite that much like English, but it’s pretty close. In Python you can do this by using the if and else (not otherwise) keywords, like this:

player_wins = True
if player_wins:
    print("Congratulations, you win!!")
else:
    print("Sorry, you lose.")

You see I’ve changed the two English words player wins to a single Python name player_wins. When you run this code, you get this output:

Congratulations, you win!!

If you change the line

player_wins = True

to

player_wins = False

(try it) then this is printed:

Sorry, you lose.

You need to remember that player_wins is just the name you’re using to store a value, the program can’t tell whether the player has won or lost. You need to set the value beforehand. You also need to notice that player_wins takes only two values – True and False (True and False are special values in Python, but you can take them as having their English meanings).

The if keyword needs to be followed by something that is either True or False. It could be a variable that holds one of those values or it could be an expression (see below) that evaluates to them. After “that something” you put a colon. The colon tells Python that a new code block is about to start. On the next line the code in the code block is indented. This indenting is important. Python will complain if the code is not indented (try it). If you want more things to happen, then you can put in more lines of code, as long as they’re all indented at the same level – that is, they have the same number of spaces in front of them (try 4 spaces in front). When you do more Python you’ll discover that you can put code blocks within code blocks – but that’s for later.
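
As a small preview of blocks within blocks, here's a sketch that uses only the keywords covered so far; notice the inner if and its code block are indented one more level:

player_wins = True
players_birthday = True
if player_wins:
    print("Congratulations, you win!!")
    if players_birthday:    # a code block inside a code block
        print("What a birthday present!")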

After the code block comes another keyword, else, followed by a colon and another code block. The else keyword serves the function of word otherwise in my mock code above. It is run if the if isn’t. After the else is another code block. This is indented like the first one and, like the first one, can contain more lines of code, as long as they are all indented with the same number of spaces in front of them.

Visually indented code blocks are one of Python’s great programming features. Most languages have code blocks, but few of them require them to be shown visually with indents. This makes Python programs easier to read and follow.

If you don’t have alternative code that is to be run if the conditional is not True, then you can leave out the else: and its code block. So, if you want to say Happy Birthday if it’s someone’s birthday, but nothing if it’s not, you could do something like this:

players_birthday = False
if players_birthday:
    print("Happy Birthday!")

When you run this code, it doesn’t print anything, because you’ve set players_birthday = False and have not included an else block. Set players_birthday = True and see what happens.

Expressions

Rather than setting a value of True or False expressly, you can include a comparison. Python has a lot of comparisons but the main ones are == (equal – note the two equal signs together, not one), > (greater than) and < (less than). Here are some of them in action:

>>> 1 == 1
True
>>> 1 == 2
False
>>> 1 > 2
False
>>> 1 < 2
True

See how they give either a True or False result? Try some yourself. Make sure you know the difference between 1 = 2 and 1 == 2.
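
If you try 1 = 2 at the interpreter prompt, Python refuses to run it, because a single = means assignment, not comparison (the exact error wording varies between Python versions):

>>> 1 = 2
SyntaxError: can't assign to literal
>>> 1 == 2
False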

Going back to our earlier example, rather than having a name player_wins, you’re probably more likely to have something like players_score. You can then compare the score against a winning score and print your message. For example, if players_score is 100 and a score of greater than 90 is a win, you could code this:

players_score = 100
if players_score > 90:
    print("Congratulations, you win!!")
else:
    print("Sorry, you lose.")

Run it, then change players_score to 90 and run it again.


Categories: FLOSS Project Planets

Daniel Bader: 3 Reasons why you need a programming blog

Planet Python - Tue, 2016-11-29 19:00
3 Reasons why you need a programming blog

One of the best things I ever did for my dev career: A little story and three reasons why you should start a programming portfolio website right now.

At PyCon Germany I chatted with Astrid1, a freelance Python (Django) developer looking for ways to improve her career and to find more contracts.

Astrid seemed quite frustrated with her situation — it was tough for her to get the contracts and jobs she really wanted.

Often when she sent out her resume for more desirable gigs she wouldn’t even receive an answer. It sounded like she was stuck with a certain quality of clients and couldn’t really push past that invisible barrier.

I always love to help a sister (or brother) out and went into full-on diagnosing mode. Usually I just end up spouting unsolicited advice in these situations but with Astrid I think I actually hit the nail on the head… 😉

Eventually I asked Astrid if she had a website or blog as a “programmer portfolio” of sorts.

She did not.

And I think that was a BIG mistake —

Looking back I’d say starting my personal website at dbader.org was probably the best thing I ever did for my programming career:

Reason #1: Employers loved it—it made it much easier to get interviews

In fact once I had my website up for a while, companies started contacting me through it. And these were no longer the crappy recruiter emails I got through LinkedIn, but messages from managers and dev leads at companies that I found actually interesting.

Reason #2: It was easier to get started than I thought

I launched my site with just 3 articles I wrote over the holidays hanging out with my family one year. I was surprised to find I got more (not less) traffic over time even though I didn’t post new stuff constantly. More people started linking to my posts and they ranked higher in Google (also search engines seem to favor content that has been around for a while). It was incredibly fun to see that growth and to find new ways of reaching developers.

Reason #3: It put me in touch with so many fine folks (like you!)

Most of the places I lived in didn’t have strong software dev / meetup communities. Starting a website was a fantastic way to make friends with other developers around the world and to exchange ideas.

How you can get started today

I know it seems super difficult to get everything set up in the beginning. And the work involved can seem kind of boring at first… “it’s just a website”.

What finally got me started with setting up my own website was turning it into a programming exercise.

Instead of using a pre-fab framework like WordPress, I wrote my own Python framework for generating the website.

I figured even if I wouldn’t follow through with the site I’d learn some web development skills in the process… And this was exactly true 😃

Putting myself in Astrid’s shoes again I really believe every software developer should have a personal website. The time investment is so small in comparison to the awesome benefits and opportunities it can generate for you.

If you’re sold on the idea of starting a programming blog but you don’t know how to go about it yet then check out this video I created for you.

In the video embedded below I’m going over my own website as an example and how it looks very different today compared to when I started it in 2012.

It doesn’t take much to get started with your own programming blog or portfolio website and the benefits can be huge.

  1. Not her real name. Astrid Müller is my German-sounding take on Jane Doe 😉 

Categories: FLOSS Project Planets

Drupal Modules: The One Percent: Drupal Modules: The One Percent — Configuration Split (video tutorial)

Planet Drupal - Tue, 2016-11-29 18:04

Here is where we look at Drupal modules running on less than 1% of reporting sites. Today we'll look at Configuration Split, a module which allows you to export only the Drupal 8.x configuration you want to production. More information can be found at http://nuvole.org/blog/2016/nov/28/configuration-split-first-beta-release-drupal-ironcamp.

Categories: FLOSS Project Planets

A. Jesse Jiryu Davis: Announcing Motor 1.1 For MongoDB 3.4

Planet Python - Tue, 2016-11-29 17:58

MongoDB 3.4 was released this morning; tonight I’ve released Motor 1.1 with support for the latest MongoDB features.

Motor 1.1 now depends on PyMongo 3.4 or later. (It’s an annoying coincidence that the latest MongoDB and PyMongo versions are the same number.)

With MongoDB 3.4 and the latest Motor, you can now configure unicode-aware string comparison using collations. See PyMongo’s examples for collation.

Motor also supports the new Decimal128 BSON type. The new MongoDB version supports write concern with all commands that write, so drop_database, create_collection, create_indexes, and all the other commands that modify your data accept a writeConcern parameter.
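
Here's a minimal sketch of a couple of these features in use (assuming an asyncio-based Motor client and a db handle; Motor mirrors PyMongo's Collation and Decimal128 APIs):

from decimal import Decimal

from bson.decimal128 import Decimal128
from pymongo.collation import Collation

async def demo(db):
    # Unicode-aware, case-insensitive sort using a collation (MongoDB 3.4+).
    cursor = db.contacts.find().sort('name').collation(
        Collation(locale='en_US', strength=2))
    async for doc in cursor:
        print(doc['name'])

    # Store an exact decimal value with the new Decimal128 BSON type.
    await db.prices.insert_one({'price': Decimal128(Decimal('9.99'))})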

The Max Staleness Spec I’ve labored the last few months to complete is now implemented in all drivers, including Motor 1.1.

Motor has improved support for logging server discovery and monitoring events. See PyMongo’s monitoring documentation for examples.

For a complete list of changes, see the Motor 1.1 changelog and the PyMongo 3.4 changelog. Post questions about Motor to the mongodb-user list on Google Groups. For confirmed issues or feature requests, open a case in Jira in the “MOTOR” project.

Or, just let me know on Twitter that you’re using Motor.

Categories: FLOSS Project Planets

Join us as a member to give back for the free software you use

FSF Blogs - Tue, 2016-11-29 17:48

As software permeates more and more aspects of society, the FSF must expand our work to protect and extend computer user freedom. We launched our yearly fundraiser with the goal of welcoming 500 new members and raising $450,000 before December 31st. Please support the work at the root of the free software movement: make a donation or – better yet – join us and become a member today.

For the past year, we have been very busy upgrading our server infrastructure, which we wrote about in the Fall FSF Bulletin. The new stack of machines works fully on free software all the way down through the BIOS, and makes use of redundant network attached storage over 10Gbps Ethernet. Cool stuff! We take care to prevent issues with freedom and privacy on our machines, which means avoiding the current x86 server CPUs that are encumbered with back doors as well as other components that require the user to load nonfree firmware. We use a high-end ASUS KGPE D-16 server motherboard, supported by Libreboot. Despite being a few years old – and thus supporting CPUs without known back doors – it is a beefy piece of gear, running up to 32 CPU cores, 256GB of RAM, and many terabytes of Solid State Disk storage.

Making the extra effort to build a uniquely free server stack does not come without some hiccups. Although the motherboards work fine on their own – we are already using them to run lists.gnu.org and Savannah services – they do have rough edges that need to be polished, for example, to get reliable Peripheral Component Interconnect support. The setup of the new stack and migration of our services will require a sustained effort from our three-person tech team during 2017, which cannot happen successfully without your support.

By hosting most of the GNU Project, we enable development on free software components that are key for the whole computer industry, such as Emacs, Bash, and the utilities at the base of all the GNU/Linux distributions powering supercomputers and the Internet's servers. Our public FTP server, ftp.gnu.org, serves 100Mb per second of free software all day, every day. That is more than a terabyte a day! On top of that, lists.gnu.org and lists.nongnu.org spool out about a half million emails between free software developers and users each day!

We are excited to have the opportunity to benefit the community by building, testing, and perfecting new hardware and technology that doesn't just work, but also supports our freedoms. Our ability to provide dependable servers for the FSF and GNU Project comes from your generosity and commitment. Your support funds hardware and the time of free software experts to work on deployments. We need you to give back and support this root infrastructure, enabling future free software development and distribution to thrive.

P.S.: If you have not already submitted, the LibrePlanet Call For Papers is closing tomorrow, November 30th, 10:00 EST (15:00 UTC)!

Categories: FLOSS Project Planets

Cheeky Monkey Media: Hacking DrupalCon: A Behind the Scenes Look at the Event Design and Planning Process

Planet Drupal - Tue, 2016-11-29 16:58

Ever wonder what goes into a conference or other business event that participants will gush about (in a good way) for years? After an event of these mythical proportions, participants walk away raving about the food, the speakers, the social events, the aura, and the list goes on ...

But pulling off such an event is no easy feat. Thus, I decided to speak with Lead DrupalCon Coordinator Amanda Gonser to find out how she manages to make sure the DrupalCon event design is flawless and fits into the overall event planning process seamlessly.

(This video may cause unexpected bursts of laughter. If you cannot laugh in your current environment, please scroll down for the written version.)

Categories: FLOSS Project Planets

Mediacurrent: Communicating Design to Clients

Planet Drupal - Tue, 2016-11-29 16:06

What makes a good designer? Well, of course you have to be creative, understand how to solve problems in unconventional ways, and do it all within budget. But wait, there's more to it than being super creative and solving problems. You must be able to make others understand how your design vision solves their problems.

Categories: FLOSS Project Planets

Shirish Agarwal: The Iziko South African Museum

Planet Debian - Tue, 2016-11-29 15:49

This will be a somewhat long post about my stay in Cape Town, South Africa, after DebConf16.

Before I start, let me talk about how the gallery works: you can see some photos that I have been able to upload to my gallery. It seems we are using Gallery 2, while upstream had made Gallery 3 and then the project sort of died. I actually asked on the softwarerecs Stack Exchange site if somebody knows of a drop-in replacement for Gallery and was told about Piwigo. I am sure the admin knows about it. There would be costs to migrate from Gallery to Piwigo, with the only benefit being that it would perhaps be more maintainable.

The issues I face with the current gallery system are these –

a. There is no way to know how much progress your upload has made.
b. After the upload finishes, it gives a fake error message saying some error has occurred. This has happened on every occasion/attempt. Now, I don’t know whether it is because I have slow upload speeds or something else altogether. I shared the error page in the last blog post, hence I’m not sharing it again.

All the pictures shared in this blog post, though, are from that same gallery.

Categories: FLOSS Project Planets

Lullabot: Pull Content From a Remote Drupal 8 Site Using Migrate and JSON API

Planet Drupal - Tue, 2016-11-29 15:02

I wanted to find a way to pull data from one Drupal 8 site to another, using JSON API to expose data on one site, and Drupal’s Migrate with a JSON source on another site to consume it. Much of what I wanted to do was undocumented and confusing, but it worked well once I figured it out. Nevertheless, it took me several days to get everything working, so I thought I’d write up an article to explain how I solved the problem. Hopefully, this will save someone a lot of time in the future.

I ended up using the JSON API module, along with the REST modules in Drupal Core, on the source site. On the target site, I used Migrate from Drupal Core 8.2.3 along with Migrate Plus and Migrate Tools.

Why JSON API?

Drupal 8 Core ships with two ways to export JSON data. You can access data from any entity by appending ?_format=json to its path, but that means you have to know the path ahead of time, and you’d be pulling in one entity at a time, which is not efficient.

You could also use Views to create a JSON endpoint, but it might be difficult to configure it to include all the required data, especially all the data from related content, like images, authors, and related nodes. And you’d have to create a View for every possible collection of data that you want to make available. To further complicate things, there's an outstanding bug using GET with Views REST endpoints.

JSON API provides another solution. It puts the power in the hands of the data consumer. You don’t need to know the path of every individual entity, just the general path for an entity type and bundle. For example: /api/node/article. From that one path, the consumer can select exactly what they want to retrieve just by altering the URL. For example, you can sort and filter the articles, limit the fields that are returned to a subset, and bring along any or all related entities in the same query. Because of all that flexibility, that is the solution I decided to use for my example. (The Drupal community plans to add JSON API to Core in the future.)

There’s a series of short videos on YouTube that demonstrate many of the configuration options and parameters that are available in Drupal’s JSON API.

Prepare the Source Site

There is not much preparation needed for the source because of JSON API’s flexibility. My example is a simple Drupal 8 site with an article content type that has a body and field_image image field, the kind of thing core provides out of the box.

First, download and install the JSON API module. Then, create YAML configuration to “turn on” the JSON API. This can be done by creating a simple module that has YAML file(s) in /MODULE/config/optional. For instance, if you created a module called custom_jsonapi, a file that would expose node data might look like this:

# filename: /MODULE/config/optional/rest.resource.entity.node.yml
id: entity.node
plugin_id: 'entity:node'
granularity: method
configuration:
  GET:
    supported_formats:
      - json
    supported_auth:
      - basic_auth
      - cookie
dependency:
  enforced:
    module:
      - custom_jsonapi

To expose users or taxonomy terms or comments, copy the above file, and change the name and id as necessary, like this:

# filename: /MODULE/config/optional/rest.resource.entity.taxonomy_term.yml
id: entity.taxonomy_term
plugin_id: 'entity:taxonomy_term'
granularity: method
configuration:
  GET:
    supported_formats:
      - json
    supported_auth:
      - basic_auth
      - cookie
dependency:
  enforced:
    module:
      - custom_jsonapi

That will support GET, or read-only access. If you wanted to update or post content you’d add POST or PATCH information. You could also switch out the authentication to something like OAuth, but for this article we’ll stick with the built-in basic and cookie authentication methods. If using basic authentication and the Basic Auth module isn’t already enabled, enable it.

Navigate to a URL like http://sourcesite.com/api/node/article?_format=api_json and confirm that JSON is being output at that URL.

That's it for the source.

Prepare the Target Site

The target site should be running Drupal 8.2.3 or higher. There are changes to the way file imports work that won't work in earlier versions. It should already have a matching article content type and field_image field ready to accept the articles from the other site.

Enable the core Migrate module. Download and enable the Migrate Plus and Migrate Tools modules. Make sure to get the versions that are appropriate for the current version of core. Migrate Plus had 8.0 and 8.1 branches that only work with outdated versions of core, so currently you need version 8.2 of Migrate Plus.

To make it easier, and so I don’t forget how I got this working, I created a migration example as the Import Drupal module on Github. Download this module into your module repository. Edit the YAML files in the /config/optional  directory of that module to alter the JSON source URL so it points to the domain for the source site created in the earlier step.

It is important to note that if you alter the YAML files after you first install the module, you'll have to uninstall and then reinstall the module to get Migrate to see the YAML changes.

Tweaking the Feed Using JSON API

The primary path used for our migration is (where sourcesite.com is a valid site):

http(s)://sourcesite.com/api/node/article?_format=api_json

This will display a JSON feed of all articles. The articles have related entities. The field_image field points to related images, and the uid/author field points to related users. To view the related images, we can alter the path as follows:

http(s)://sourcesite.com/api/node/article?_format=api_json&include=field_image

That will add an included array to the feed that contains all the details about each of the related images. This way we won’t have to query again to get that information, it will all be available in the original feed. I created a gist with an example of what the JSON API output at this path would look like.

To include authors as well, the path would look like the following. In JSON API you can follow the related information down through as many levels as necessary:

http(s)://sourcesite.com/api/node/article?_format=api_json&include=field_image,uid/author

Swapping out the domain in the example module may be the only change needed to the example module, and it's a good place to start. Read the JSON API module documentation to explore other changes you might want to make to that configuration to limit the fields that are returned, or sort or filter the list.

Manually test the path you end up with in your browser or with a tool like Postman to make sure you get valid JSON at that path.
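
If you'd rather script that check than click through Postman, a quick sketch with Python (assuming the third-party requests library is installed and using the example domain above) does the job:

import requests

# Fetch the article collection, including related image entities.
resp = requests.get('http://sourcesite.com/api/node/article',
                    params={'_format': 'api_json', 'include': 'field_image'})
resp.raise_for_status()           # fail loudly on HTTP errors
articles = resp.json()            # raises ValueError if the body isn't JSON
print(len(articles['data']), 'articles in this page of results')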

Migrating From JSON

I had a lot of trouble finding any documentation about how to migrate into Drupal 8 from a JSON source. I finally found some in the Migrate Plus module. The rest I figured out from my earlier work on the original JSON Source module (now deprecated) and by trial and error. Here’s the source section of the YAML I ended up with, when migrating from another Drupal 8 site that was using JSON API.

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: http://sourcesite.com/api/node/article?_format=api_json
  ids:
    nid:
      type: integer
  item_selector: data/
  fields:
    - name: nid
      label: 'Nid'
      selector: /attributes/nid
    - name: vid
      label: 'Vid'
      selector: /attributes/vid
    - name: uuid
      label: 'Uuid'
      selector: /attributes/uuid
    - name: title
      label: 'Title'
      selector: /attributes/title
    - name: created
      label: 'Created'
      selector: /attributes/created
    - name: changed
      label: 'Changed'
      selector: /attributes/changed
    - name: status
      label: 'Status'
      selector: /attributes/status
    - name: sticky
      label: 'Sticky'
      selector: /attributes/sticky
    - name: promote
      label: 'Promote'
      selector: /attributes/promote
    - name: default_langcode
      label: 'Default Langcode'
      selector: /attributes/default_langcode
    - name: path
      label: 'Path'
      selector: /attributes/path
    - name: body
      label: 'Body'
      selector: /attributes/body
    - name: uid
      label: 'Uid'
      selector: /relationships/uid
    - name: field_image
      label: 'Field image'
      selector: /relationships/field_image


One by one, I’ll clarify some of the critical elements in the source configuration.

File-based imports, like JSON and XML, use the same pattern now. The main variation is the parser, and for JSON and XML, the parser is in the Migrate Plus module:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json

The url is the place where the JSON is being served. There could be more than one URL, but in this case there is only one. Reading through multiple URLs is still pretty much untested, but I didn’t need that:

urls: http://sourcesite.com/api/node/article?_format=api_json

We need to identify the unique id in the feed. When pulling nodes from Drupal, it’s the nid:

ids:
  nid:
    type: integer

We have to tell Migrate where in the feed to look to find the data we want to read. A tool like Postman (mentioned above) helps figure out how the data is configured. When the source is using JSON API, it’s an array with a key of data:

item_selector: data/

We also need to tell Migrate what the fields are. In the JSON API, they are nested below the main item selector, so they are prefixed using an xpath pattern to find them. The following configuration lets us refer to them later by a simple name instead of the full path to the field. I think the label would only come into play if you were using a UI:

fields:
  - name: nid
    label: 'Nid'
    selector: /attributes/nid

Setting up the Image Migration Process

For the simple example in the Github module we’ll just try to import nodes with their images. We’ll set the author to an existing author and ignore taxonomy. We’ll do this by creating two migrations against the JSON API endpoint, first one to pick up the related images, and then a second one to pick up the nodes.

Most fields in the image migration just need the same values they’re pulling in from the remote file, since they already have valid Drupal 8 values, but the uri value has a local URL that needs to be adjusted to point to the full path to the file source so the file can be downloaded or copied into the new Drupal site.

Recommendations for how best to migrate images have changed over time as Drupal 8 has matured. As of Drupal 8.2.3 there are two basic ways to process images, one for local images and a different one for remote images.  The process steps are different than in earlier examples I found. There is not a lot of documentation about this. I finally found a Drupal.org thread where the file import changes were added to Drupal core and did some trial and error on my migration to get it working.  

For remote images:

source:
  ...
  constants:
    source_base_path: 'http://sourcesite.com/'
process:
  filename: filename
  filemime: filemime
  status: status
  created: timestamp
  changed: timestamp
  uid: uid
  uuid: id
  source_full_path:
    plugin: concat
    delimiter: /
    source:
      - 'constants/source_base_path'
      - url
  uri:
    plugin: download
    source:
      - '@source_full_path'
      - uri
    guzzle_options:
      base_uri: 'constants/source_base_path'

For local images change it slightly:

source:
  ...
  constants:
    source_base_path: 'http://sourcesite.com/'
process:
  filename: filename
  filemime: filemime
  status: status
  created: timestamp
  changed: timestamp
  uid: uid
  uuid: id
  source_full_path:
    plugin: concat
    delimiter: /
    source:
      - 'constants/source_base_path'
      - url
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - uri

The above configuration works because the Drupal 8 source uri value is already in the Drupal 8 format, public://image.jpg. If migrating from a pre-Drupal 7 or non-Drupal source, that uri won’t exist in the source. In that case you would need to adjust the process for the uri value to something more like this:

source:
  constants:
    is_public: true
  ...
process:
  ...
  source_full_path:
    - plugin: concat
      delimiter: /
      source:
        - 'constants/source_base_path'
        - url
    - plugin: urlencode
  destination_full_path:
    plugin: file_uri
    source:
      - url
      - file_directory_path
      - temp_directory_path
      - 'constants/is_public'
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - '@destination_full_path'

Run the Migration

Once you have the right information in the YAML files, enable the module. On the command line, type this:

drush migrate-status

You should see two migrations available to run.  The YAML files include migration dependencies and that will force them to run in the right order. To run them, type:

drush mi --all

The first migration is import_drupal_images. This has to be run before import_drupal_articles, because field_image on each article is a reference to an image file. This image migration uses the path that includes the related image details, and just ignores the primary feed information.

The second migration is import_drupal_articles. This pulls in the article information using the same url, this time without the included images. When each article is pulled in, it is matched to the image that was pulled in previously.

You can run one migration at a time, or even just one item at a time, while testing this out:

drush migrate-import import_drupal_images --limit=1

You can roll back and try again.

drush migrate-rollback import_drupal_images

If all goes as it should, you should be able to navigate to the content list on your new site and see the content that Migrate pulled in, complete with image fields. There is more information about the Migrate API on Drupal.org.

What Next?

There are lots of other things you could do to build on this. A Drupal 8 to Drupal 8 migration is easier than many other things, since the source data is generally already in the right format for the target. If you want to migrate in users or taxonomy terms along with the nodes, you would create separate migrations for each of them that would run before the node migration. In each of them, you’d adjust the include value in the JSON API path to pull the relevant information into the feed, then update the YAML file with the necessary steps to process the related entities.

You could also try pulling content from older versions of Drupal into a Drupal 8 site. If you want to pull everything from one Drupal 6 site into a new Drupal 8 site you would just use the built in Drupal to Drupal migration capabilities, but if you want to selectively pull some items from an earlier version of Drupal into a new Drupal 8 site this technique might be useful. The JSON API module won’t work on older Drupal versions, so the source data would have to be processed differently, depending on what you use to set up the older site to serve JSON. You might need to dig into the migration code built into Drupal core for Drupal to Drupal migrations to see how Drupal 6 or Drupal 7 data had to be massaged to get it into the right format for Drupal 8.

Finally, you can adapt the above techniques to pull any kind of non-Drupal JSON data into a Drupal 8 site. You’ll just have to adjust the selectors to match the format of the data source, and do more work in the process steps to massage the values into the format that Drupal 8 expects.

The Drupal 8 Migrate module and its contributed helpers are getting more and more polished, and figuring out how to pull in content from JSON sources could be a huge benefit for many sites. If you want to help move the Migrate effort forward, you can dig into the Migrate in core initiative and issues on Drupal.org.

Categories: FLOSS Project Planets

Drupal Association News: Drupal Association Financial Statements for Q3 2016

Planet Drupal - Tue, 2016-11-29 14:43

We normally share our financial statements in posts about public board meetings, since that is the time when board members approve the statements. However, I wanted to give this quarter’s update its own blog post. We’ve made many changes to improve our sustainability over the last few months, and I am fully embracing our value of communicating with transparency by giving insight into our progress.

First, a word of thanks

We are truly thankful for all the contributions that our community makes to help Drupal thrive. Your contribution comes in the form of time, talent, and treasure and all are equally important. Just as contributing code or running a camp is critical, so is financial contribution.

The Drupal Association is able to achieve its mission to unite the community to build and promote Drupal thanks to those who buy DrupalCon tickets and sponsor the event, our Supporters and Members, Drupal.org sponsors, and talent recruiters who post jobs on Drupal Jobs.

We use these funds to maintain Drupal.org and its tooling so the community can build and release the software and so technical evaluators can learn why Drupal is right for them through our new marketing content. It also funds DrupalCon production so we can bring the community together to level up skills, accelerate contribution, drive Drupal business, and build stronger bonds within our community. Plus, it funds Community Cultivation Grants and DrupalCon scholarships, removing financial blockers for those who want to do more for Drupal. And of course, these funds pay staff salaries so we have the right people on board to do all of this mission work.

I also want to thank our board members who serve on the Finance Committee, Tiffany Farris (Treasurer), Dries Buytaert, Jeff Walpole, and Donna Benjamin. They provide financial oversight for the organization, making sure we are the best stewards possible for the funds the community gives to us. I also want to thank Jamie Nau of Summit CPA, our new CFO firm. Summit prepares our financial statements and forecasts and is advising us on long term sustainability.

Q3 Financial Statements

A financial statement is a formal record of the financial activities of the Drupal Association. The financial statements present information in a structured way that should make it easy to understand what is happening with the organization's finances.

Once staff closes the books each month, Summit CPA prepares the financial statement, which the finance committee reviews and approves. Finally, the full Drupal Association Board approves the financial statements. This process takes time, which is why Q3 financials are released in Q4.

You can find the Q3 financial statements here. They explain how the Association used its money in July, August, and September of this year. It takes a little financial background to understand them, so Summit CPA provides an executive summary, and they set KPIs so it is clear how we are doing against important financial goals.

The latest executive summary is at the beginning of the September financial statement. In short, it says we are sustainable and on the right path to continue improving our financial health.

“We are working on building an adequate cash reserve balance. As of September a cash balance of $723K is 14% of twelve-months of revenue. Summit recommends a cash reserve of 15%-30% of estimated twelve-month revenue. Since Drupal’s revenue and expenditures drastically fluctuate from month to month [due to DrupalCon] a cash reserve goal closer to 30% is recommended.

Through August we have achieved a Net Income Margin of 4% and a Gross Profit Margin 33%. Our goal is to increase the Net Income Margin to over 10% during the next year.”
- Summit CPA

Improving our sustainability will continue to be an imperative through 2017, so the Association can serve its mission for generations to come. Financial health improvements will be achieved by the savings we gain over time from the staff reductions we did this summer. Another area of focus is improving our programs’ gross margins.

You can expect to see the Q4 2016 financials in Q1 2017. You can also expect to see our 2017 budget and operational focus. We are certainly excited (and thankful) for your support and we look forward to finding additional ways to serve this amazing community in 2017.

Categories: FLOSS Project Planets

Agaric Collective: Redirect users after login to the page they were viewing in Drupal 8

Planet Drupal - Tue, 2016-11-29 14:13

Have you ever been asked to log into a website while you are viewing a page? And after doing so you get redirected to some page other than the one you were reading? This is an obvious and rather common usability problem. When this happens people lose track of what they were doing and some might not even bother to go back. Let's find out how to solve this in Drupal 8.

In a recent project a client wisely requested exactly that: whenever a user logs into the site, redirect them to the page they were on before clicking the login link. This seemed like a very common request, so we looked for a contrib module that provided the functionality. Login Destination used to do this in Drupal 7. Sadly, the Drupal 8 version of that module does not provide the functionality yet.

Other modules, and some combinations of them, were tested without success. Therefore, we built Login Return Page. It is a very small module that does just one thing and does it well: it appends destination=/current/page to all the links pointing to /user/login, effectively redirecting users to the page they were viewing before login. The project is waiting to be approved before being promoted to a full project.

Have you had a similar need? Are there other things you are requested to do after login? Please share them in the comments.

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: week 83 in Stretch cycle

Planet Debian - Tue, 2016-11-29 13:12

What happened in the Reproducible Builds effort between Sunday November 20 and Saturday November 26 2016:

Reproducible work in other projects Bugs filed

Chris Lamb:

Daniel Shahaf:

Reiner Herrmann:

Reviews of unreproducible packages

63 package reviews have been added, 73 have been updated and 41 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (9)
  • Helmut Grohne (1)
  • Peter De Wachter (1)
strip-nondeterminism development

  • #845203 was fixed in git by Reiner Herrmann - the next release will be able to normalize NTFS timestamps in zip files (a sketch of the general idea follows below).
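strip-nondeterminism itself is written in Perl, so the following is only a rough Python sketch of the general idea behind timestamp normalization, not the tool's implementation (and it ignores the NTFS extra fields that #845203 is specifically about): rewrite the archive with every entry pinned to a fixed date, so that two builds of identical content produce byte-identical zips.

import zipfile

FIXED_DATE = (1980, 1, 1, 0, 0, 0)  # the earliest timestamp the zip format can store

def normalize_zip(src, dst):
    # Copy src to dst, pinning every entry's timestamp to FIXED_DATE so
    # that rebuilding the same content always yields identical bytes.
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, 'w') as zout:
        for info in zin.infolist():
            data = zin.read(info)
            info.date_time = FIXED_DATE
            zout.writestr(info, data)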
debrepatch development

Continuous integration

  • Holger updated our jenkins jobs for disorderfs and strip-nondeterminism to build these from their respective git master branches, and removed the jobs that build them from other branches, since we have none at the moment.
tests.reproducible-builds.org

Debian:

Since the stretch freeze is getting closer, Holger made the following changes:

  • Schedule testing builds to be as frequent as unstable builds, on all archs, so that testing's build results are more up-to-date.

  • Adjust the scheduling frequency of experimental builds so that experimental results are not more recent than those in unstable.

  • Disable our APT repository for the testing suite (stretch), but leave it active for the unstable and experimental suites.

    This is the repository where we uploaded patched toolchain packages from time to time that are necessary to reproduce other packages. All of our essential patches have recently been accepted into Debian stretch, so this repository is currently empty. Debian stretch will soon become the next Debian stable, and we want to get an accurate impression of how many of its packages will be reproducible.

    Therefore, disabling this repository for stretch, whilst leaving it activated for the Debian unstable and experimental suites, allows us to continue to experiment with new patches to toolchain packages without affecting our knowledge of the next Debian stable.

Misc.

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

Categories: FLOSS Project Planets

Trey Hunner: Check Whether All Items Match a Condition in Python

Planet Python - Tue, 2016-11-29 12:45

In this article, we’re going to look at a common programming pattern and discuss how we can refactor our code when we notice this pattern.
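The article's code is not reproduced in this excerpt, but the pattern it refers to is the familiar loop-with-a-flag, which Python's built-in all() collapses into a single expression. A small sketch of the refactor (the data and condition are our own illustration):

numbers = [2, 4, 6, 8]

# Before: a loop that tracks a flag
all_even = True
for n in numbers:
    if n % 2 != 0:
        all_even = False
        break

# After: all() with a generator expression, which also short-circuits
all_even = all(n % 2 == 0 for n in numbers)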

Categories: FLOSS Project Planets

Caktus Consulting Group: Django Under the Hood 2016 Recap

Planet Python - Tue, 2016-11-29 12:31

Caktus was a proud sponsor of Django Under the Hood (DUTH) 2016 in Amsterdam this year. Organized by Django core developers and community members, DUTH is a highly technical conference that delves deep into Django.

Caktus Technical Manager (and Django core developer) Karen Tracey and Caktus CEO/Co-founder Tobias McNulty both flew to Amsterdam to spend time with fellow Djangonauts. Since not all of us could go, we asked them what Django Under the Hood was like.

Can you tell us more about Django Under the Hood?

Tobias: This was my first Django Under the Hood. The venue was packed. It's an in-depth, curated talk series with invite-only speakers, and it was impeccably organized. Everything is thought through; they even have little spots where you can pick up a toothbrush and toothpaste.

Karen: I’ve been to all three. They sell out very quickly. Core developers are all invited and get tickets, plus some funding depending on sponsorship. This is the only event where some costs are covered for core developers. Core devs attend DjangoCon EU and US as well, but they fund those trips however they can.

What was your favorite part of Django Under the Hood?

Tobias: The talks: they’re longer and more detailed than typical conference talks, and they’re curated and confined to a single track, so the conference has a natural rhythm to it. I really liked the talks, but also being there with the core team. There’s big value in meeting the people you otherwise only see on IRC and the mailing lists; I was able to put people in context. I’d met quite a few of the core team before, but not all.

Karen: I don’t have much time to contribute to Django because of heavy involvement in local cat rescue and a full-time job, but this is a great opportunity to have at least a day to do Django stuff at the sprint and to see a lot of people I don’t otherwise have a chance to see.

All the talk videos are now online. Which talk do you recommend we watch first?

Karen: It depends on what you’re interested in. I really enjoyed the Instagram one. As someone who has contributed to the Django framework, it’s interesting to see it used and scaled to the size of Instagram, with its 500 million plus users.

Tobias: There were humorous insights, like the Justin Bieber effect. Originally they’d sharded their database by user ID, so everybody on the ops team had memorized his user ID to be prepared in case he posted anything. At that scale, maximizing the number of requests they can serve from a single server really matters.

Karen: All the monitoring was interesting too.

Tobias: I liked Ana Balica’s testing talk. It included a history of testing in Django, which was educational to me. Django didn’t start with a framework for testing your applications. It was added as a ticket in the low thousands. She also had practical advice on how to treat your test suite as part of the application, like splitting out functional tests and unit tests. She had good strategies to make your unit tests as fast as possible so you can run them as often as needed.
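One concrete way to act on that advice in Django (our illustration, not necessarily what the talk showed) is to tag slow functional tests so that the fast unit tests can run on their own. Django 1.10 added the tag decorator for exactly this; the test class and URL below are hypothetical:

from django.test import TestCase, tag

@tag('functional')
class CheckoutFlowTests(TestCase):
    # Slower, end-to-end style tests are tagged so they can be excluded.
    def test_full_checkout(self):
        response = self.client.get('/checkout/')  # hypothetical URL
        self.assertEqual(response.status_code, 200)

# During development, run only the untagged (fast) tests:
#   python manage.py test --exclude-tag=functional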

What was your favorite tip or lesson?

Tobias: Jennifer Akullian gave a keynote on mental health that included a diagram of how to talk about feelings in a team; the idea is to dig into what those feelings really mean. She talked about trying to destigmatize mental health in tech. I think that’s an important topic we should be discussing more.

Karen: I learned things in each of the talks. I have a hard time picking out one tip that sticks with me. I’d like to look into what Ana Balica said about mutation testing and learn more about it.

What are some trends you’re seeing in Django?

Karen: The core developers met for a half-day meeting on the first day of the conference. We talked about what’s going on with Django, what’s happened in the past year, and what the future of Django looks like. The theme was “Django is boring.”

Tobias: “Django is boring” because it is no longer unknown. It’s an established, common framework, now used by big organizations like NASA, Instagram, Pinterest, and the US Senate. At the start, it was a little-known, bootstrappy, cutting-edge web framework. The reasons we hooked up with Django nine years ago at Caktus, like security and business efficacy, are all even stronger today. That can make it seem boring for developers, but it’s a good thing for business.

Karen: It’s been around for a while now. Eleven years. A lot of the common challenges in Django have been solved. Not that there aren’t cutting-edge web problems, but should you solve some problems elsewhere? For example, in third-party, reusable apps like Channels and REST framework.

Tobias: There was also recognition that Django is so much more than the software. It’s the community and all the packages around it. That’s what makes Django great.

Where do you see Django going in the future?

Karen: I hate those sorts of questions. I don’t know how to answer that. It’s been fun to see the Django community grow and I expect to see continued growth.

Tobias: That’s not my favorite question either. But Django has a role in fostering and continuing to grow the community it has. Django can set an example for open source communities on how to operate and fund themselves in sustainable ways. Django is experimenting with funding right now. How do we make open source projects like this sustainable without relying on people with full-time jobs volunteering their nights and weekends? This is definitely not a “solved problem,” and I look forward to seeing the progress Django and other open source communities make in the coming years.

Thank you to Tobias and Karen for sharing their thoughts.

Categories: FLOSS Project Planets

Third & Grove: Theming form elements in Drupal 8

Planet Drupal - Tue, 2016-11-29 11:03
Categories: FLOSS Project Planets