Feeds

Agiledrop.com Blog: AGILEDROP: Drupal Logos Showing Emotions

Planet Drupal - Thu, 2017-03-23 04:21
It's not over yet. There are still Druplicons that need to be presented. After already exploring the fields of Humans and Superhumans, Fruits and Vegetables, Animals, Outdoor Activities and National Identities, it's now time to look into the field of emotions and see which emotions are shown by Drupal Logos. After expecting to find many Druplicons in the area of national identities, we came up with an idea of exploring something more challenging. After some thought, we decided it's time to look in the area of emotions. After all, Druplicon was designed with a mischievous smile, so it looks… READ MORE
Categories: FLOSS Project Planets

Talk Python to Me: #104 Game Theory in Python

Planet Python - Thu, 2017-03-23 04:00
Game theory is the study of competing interests, be it individual actors within an economy or healthy vs. cancer cells within a body.

Our guests this week, Vince Knight, Marc Harper, and Owen Campbell, are here to discuss their Python project built to study and simulate one of the central problems in game theory: the prisoner's dilemma.

Links from the show:

  • Axelrod on GitHub: github.com/Axelrod-Python/Axelrod
  • The docs: axelrod.readthedocs.io/en/latest
  • The tournament: axelrod-tournament.readthedocs.io/en/latest
  • Chat: Gitter room: gitter.im/Axelrod-Python
  • Peer reviewed paper: openresearchsoftware.metajnl.com/articles/10.5334/jors.125
  • Djaxelrod v2: github.com/Axelrod-Python/axelrod-api
  • Some examples with Jupyter: github.com/Axelrod-Python/Axelrod-notebooks

Find them on Twitter:

  • The project: @AxelrodPython
  • Owen on Twitter: @opcampbell
  • Vince on Twitter: @drvinceknight

Sponsored items:

  • Our courses: training.talkpython.fm
  • Podcast's Patreon: patreon.com/mkennedy
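For readers unfamiliar with the problem the episode centers on, a minimal hand-rolled iterated prisoner's dilemma looks like this (a toy sketch in the spirit of the Axelrod project, not the library's actual API):

```python
# Payoffs (mine, theirs) for each (my_move, their_move); C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's last move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play_match(strategy_a, strategy_b, turns):
    """Return cumulative scores for both players after `turns` rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(turns):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, always_defect, turns=5))  # (4, 9)
```

Tit-for-tat loses its first round to the defector, then settles into mutual defection, which is exactly the kind of tournament dynamic the Axelrod library studies at scale.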
Categories: FLOSS Project Planets

Mike Hommey: Why is the git-cinnabar master branch slower to clone?

Planet Debian - Thu, 2017-03-23 03:38

Apart from the memory considerations, one thing the data presented in the “When the memory allocator works against you” post showed, which I haven’t touched on in the followup posts, is that there is a large difference in the time it takes to clone mozilla-central with git-cinnabar 0.4.0 vs. the master branch.

One thing that was mentioned in the first followup is that reducing the amount of realloc and substring copies made the cloning more than 15 minutes faster on master. But the same code exists in 0.4.0, so this isn’t part of the difference.

So what’s going on? Looking at the CPU usage during the clone is enlightening.

On 0.4.0:

On master:

(Note: the data gathering is flawed in some ways, which explains why the git-remote-hg process goes above 100%, which is not possible for this python process. The data is however good enough for the high-level analysis that follows, so I didn’t bother to get something more accurate.)

On 0.4.0, the git-cinnabar-helper process was saturating one CPU core during the File import phase, and the git-remote-hg process was saturating one CPU core during the Manifest import phase. Overall, the sum of both processes usually used more than one and a half core.

On master, however, the total of both processes barely uses more than one CPU core.

What happened?

This and that happened.

Essentially, before those changes, git-remote-hg would send instructions to git-fast-import (technically, git-cinnabar-helper, but in this case it’s only used as a wrapper for git-fast-import), and use marks to track the git objects that git-fast-import created.

After those changes, git-remote-hg asks git-fast-import the git object SHA1 of objects it just asked to be created. In other words, those changes replaced something asynchronous with something synchronous: while it used to be possible for git-remote-hg to work on the next file/manifest/changeset while git-fast-import was working on the previous one, it now waits.

The changes helped simplify the python code, but made the overall clone process much slower.
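The difference between the two interaction styles can be sketched with a toy producer/worker pair (hypothetical names and a simulated fast-import; this is not git-cinnabar's actual code):

```python
import hashlib
import queue
import threading

# Simulated git-fast-import worker: reads commands, "creates" objects,
# and can report the SHA1 of an object on request.
def fast_import_worker(commands, replies):
    marks = {}
    while True:
        cmd, payload = commands.get()
        if cmd == "quit":
            break
        if cmd == "create":          # payload = (mark, data)
            mark, data = payload
            marks[mark] = hashlib.sha1(data).hexdigest()
        elif cmd == "get-sha1":      # payload = mark; forces a round-trip
            replies.put(marks[payload])

def clone_async(blobs, commands):
    # Old style: fire-and-forget with marks. The producer never waits,
    # so both processes can work concurrently; marks resolve to SHA1s later.
    for i, data in enumerate(blobs):
        commands.put(("create", (i, data)))
    return [f":{i}" for i in range(len(blobs))]

def clone_sync(blobs, commands, replies):
    # New style: ask for each SHA1 right away. Every object costs a
    # blocking round-trip, serializing the two processes.
    sha1s = []
    for i, data in enumerate(blobs):
        commands.put(("create", (i, data)))
        commands.put(("get-sha1", i))
        sha1s.append(replies.get())  # wait for the answer
    return sha1s
```

In the synchronous variant, the producer spends its time blocked in `replies.get()` instead of preparing the next object, which is why the combined CPU usage on master barely exceeds one core.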

If I’m not mistaken, the only real use for that information is for the mapping of mercurial to git SHA1s, which is actually rarely used during the clone, except at the end, when storing it. So what I’m planning to do is to move that mapping to the git-cinnabar-helper process, which, incidentally, will kill not 2, but 3 birds with 1 stone:

  • It will restore the asynchronicity, obviously (at least, that’s the expected main outcome).
  • Storing the mapping in the git-cinnabar-helper process is very likely to take less memory than what it currently takes in the git-remote-hg process. Even if it doesn’t (which I doubt), that should still help stay under the 2GB limit of 32-bit processes.
  • The whole thing that spikes memory usage during the finalization phase, as seen in previous post, will just go away, because the git-cinnabar-helper process will just have prepared the git notes-like tree on its own.

So expect git-cinnabar 0.5 to get moar faster, and to use moar less memory.

Categories: FLOSS Project Planets

Mikkel Høgh: A vote of no confidence in the Drupal Association leadership

Planet Drupal - Thu, 2017-03-23 02:42

I have had many differences with the Drupal Association in the past, starting with the many clashes we had with their erstwhile leadership when we were organising DrupalCon Copenhagen 2010, so I’ll admit I wasn’t their biggest fan before the latest events.

Categories: FLOSS Project Planets

Kushal Das: Running MicroPython on 96Boards Carbon

Planet Python - Thu, 2017-03-23 02:42

I received my Carbon from Seeed Studio a few months back, but I never found time to sit down and work on it. During FOSSASIA, in my MicroPython workshop, Siddhesh was working on putting MicroPython using Zephyr on his Carbon. That gave me the motivation to have a look at the same after coming back home.

What is Carbon?

Carbon is a 96Boards IoT edition compatible board, with a Cortex-M4 chip, and 512KB flash. It currently runs Zephyr, which is a Linux Foundation hosted project to build a scalable real-time operating system (RTOS).

Setup MicroPython on Carbon

To install the dependencies in Fedora:

$ sudo dnf group install "Development Tools"
$ sudo dnf install git make gcc glibc-static \
    libstdc++-static python3-ply ncurses-devel \
    python-yaml python2 dfu-util

The next step is to set up the Zephyr SDK. You can download the latest binary from here. Then you can install it under your home directory (you don’t have to install it system-wide). I installed it under ~/opt/zephyr-sdk-0.9.

Next, I had to check out the Zephyr source; I cloned it from the https://git.linaro.org/lite/zephyr.git repo. I also cloned MicroPython from its official GitHub repo. I will just copy-paste the next steps below.

$ source zephyr-env.sh
$ cd ~/code/git/
$ git clone https://github.com/micropython/micropython.git
$ cd micropython/zephyr

Then I created a project file specifically for the Carbon board, named prj_96b_carbon.conf; I am pasting its content below. I have submitted it as a patch to the upstream MicroPython project. It disables networking (otherwise you will get stuck while trying to get the REPL).

# No networking for carbon
CONFIG_NETWORKING=n
CONFIG_NET_IPV4=n
CONFIG_NET_IPV6=n

Next, we have to build MicroPython as a Zephyr application.

$ make BOARD=96b_carbon
$ ls outdir/96b_carbon/
arch     ext          isr_tables.c  lib          Makefile         scripts  tests       zephyr.hex  zephyr.map           zephyr.strip
boards   include      isr_tables.o  libzephyr.a  Makefile.export  src      zephyr.bin  zephyr.lnk  zephyr_prebuilt.elf
drivers  isrList.bin  kernel        linker.cmd   misc             subsys   zephyr.elf  zephyr.lst  zephyr.stat

After the build is finished, you will be able to see a zephyr.bin file in the output directory.

Uploading the fresh build to the carbon

Before anything else, I connected my Carbon board to the laptop using a USB cable to the OTG port (remember to check the port name). Then I had to press the BOOT0 button, and while holding it, also press the RESET button; then release the RESET button first, followed by the BOOT0 button. If you run the dfu-util command after this, you should be able to see some output like below.

$ sudo dfu-util -l
dfu-util 0.9

Copyright 2005-2009 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2016 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to http://sourceforge.net/p/dfu-util/tickets/

Found DFU: [0483:df11] ver=2200, devnum=14, cfg=1, intf=0, path="2-2", alt=3, name="@Device Feature/0xFFFF0000/01*004 e", serial="385B38683234"
Found DFU: [0483:df11] ver=2200, devnum=14, cfg=1, intf=0, path="2-2", alt=2, name="@OTP Memory /0x1FFF7800/01*512 e,01*016 e", serial="385B38683234"
Found DFU: [0483:df11] ver=2200, devnum=14, cfg=1, intf=0, path="2-2", alt=1, name="@Option Bytes /0x1FFFC000/01*016 e", serial="385B38683234"
Found DFU: [0483:df11] ver=2200, devnum=14, cfg=1, intf=0, path="2-2", alt=0, name="@Internal Flash /0x08000000/04*016Kg,01*064Kg,03*128Kg", serial="385B38683234"

This means the board is in DFU mode. Next we flash the new application to the board.

$ sudo dfu-util -d [0483:df11] -a 0 -D outdir/96b_carbon/zephyr.bin -s 0x08000000
dfu-util 0.9

Copyright 2005-2009 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2016 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to http://sourceforge.net/p/dfu-util/tickets/

dfu-util: Invalid DFU suffix signature
dfu-util: A valid DFU suffix will be required in a future dfu-util release!!!
Opening DFU capable USB device...
ID 0483:df11
Run-time device DFU version 011a
Claiming USB DFU Interface...
Setting Alternate Setting #0 ...
Determining device status: state = dfuERROR, status = 10
dfuERROR, clearing status
Determining device status: state = dfuIDLE, status = 0
dfuIDLE, continuing
DFU mode device DFU version 011a
Device returned transfer size 2048
DfuSe interface name: "Internal Flash "
Downloading to address = 0x08000000, size = 125712
Download [=========================] 100% 125712 bytes
Download done.
File downloaded successfully

Hello World on Carbon

The hello world of the hardware land is LED-blinking code. I used the on-board LEDs for the same; the sample code is given below. I have now connected the board to the UART (instead of the OTG port).

$ screen /dev/ttyUSB0 115200
>>> import time
>>> from machine import Pin
>>> led1 = Pin(("GPIOD", 2), Pin.OUT)
>>> led2 = Pin(("GPIOB", 5), Pin.OUT)
>>> while True:
...     led2.low()
...     led1.high()
...     time.sleep(0.5)
...     led2.high()
...     led1.low()
...     time.sleep(0.5)
Categories: FLOSS Project Planets

Bryan Pendleton: It's not just a game, ...

Planet Apache - Thu, 2017-03-23 00:54

... close reading shows that it's an homage to many great works of art before it: 14 Greatest Witcher 3 Easter Eggs That Will Make You Wanna Replay It Immediately

Categories: FLOSS Project Planets

Mike Hommey: Analyzing git-cinnabar memory use

Planet Debian - Thu, 2017-03-23 00:30

In the previous post, I was looking at the allocations git-cinnabar makes. While I had the data, I figured I’d also look at how the memory use correlates with expectations based on repository data, to put things in perspective.

As a reminder, this is what the allocations look like (horizontal axis being the number of allocator function calls):

There are 7 different phases happening during a git clone using git-cinnabar, most of which can easily be identified on the graph above:

  • Negotiation.

    During this phase, git-cinnabar talks to the mercurial server to determine what needs to be pulled. Once that is done, a getbundle request is emitted, whose response is read in the next three phases. This phase is essentially invisible on the graph.

  • Reading changeset data.

    The first thing that a mercurial server sends in the response for a getbundle request is changesets. They are sent in the RevChunk format. Translated to git, they become commit objects. But to create commit objects, we need the entire corresponding trees and files (blobs), which we don’t have yet. So we keep this data in memory.

    In the git clone analyzed here, there are 345643 changesets loaded in memory. Their raw size in RawChunk format is 237MB. I think that by the end of this phase, we have made 20 million allocator calls and have about 300MB of live data in about 840k allocations. (No certainty, because I don’t actually have definite data that would allow me to correlate between the phases and allocator calls, and the memory usage change between this phase and the next is not as clear-cut as with other phases.) This puts us at less than 3 live allocations per changeset, with “only” about 60MB overhead over the raw data.

  • Reading manifest data.

    In the stream we receive, manifests follow changesets. Each changeset points to one manifest; several changesets can point to the same manifest. Manifests describe the content of the entire source code tree in a similar manner as git trees, except they are flat (there’s one manifest for the entire tree, where git trees would reference other git trees for subdirectories). And like git trees, they only map file paths to file SHA1s. The way they are currently stored by git-cinnabar (which is planned to change) requires knowing the corresponding git SHA1s for those files, and we haven’t got those yet, so again, we keep everything in memory.

    In the git clone analyzed here, there are 345398 manifests loaded in memory. Their raw size in RawChunk format is 1.18GB. By the end of this phase, we made 23 million more allocator calls, and have about 1.52GB of live data in about 1.86M allocations. We’re still at less than 3 live allocations for each object (changeset or manifest) we’re keeping in memory, and barely over 100MB of overhead over the raw data, which, on average puts the overhead at 150 bytes per object.

    The three phases so far are relatively fast and account for a small part of the overall process, so they don’t appear clear-cut to each other, and don’t take much space on the graph.

  • Reading and Importing files.

    After the manifests, we finally get files data, grouped by path, such that we get all the file revisions of e.g. .cargo/.gitignore, followed by all the file revisions of .cargo/config.in, .clang-format, and so on. The data here doesn’t depend on anything else, so we can finally directly import the data.

    This means that for each revision, we actually expand the RawChunk into the full file data (RawChunks contain patches against a previous revision), and don’t keep the RawChunk around. We also don’t keep the full data after it was sent to the git-cinnabar-helper process (as far as cloning is concerned, it’s essentially a wrapper for git-fast-import), except for the previous revision of the file, which is likely the patch base for the next revision.

    We however keep in memory one or two things for each file revision: a mapping of its mercurial SHA1 and the corresponding git SHA1 of the imported data, and, when there is one, the file metadata (containing information about file copy/renames) that lives as a header in the file data in mercurial, but can’t be stored in the corresponding git blobs, otherwise we’d have irrelevant data in checkouts.

    On the graph, this is where there is a steady and rather long increase of both live allocations and memory usage, in stairs for the latter.

    In the git clone analyzed here, there are 2.02M file revisions, 78k of which have copy/move metadata for a cumulated size of 8.5MB of metadata. The raw size of the file revisions in RawChunk format is 3.85GB. The expanded data size is 67GB. By the end of this phase, we made 622 million more allocator calls, and peaked at about 2.05GB of live data in about 6.9M allocations. Compared to the beginning of this phase, that added about 530MB in 5 million allocations.

    File metadata is stored in memory as python dicts, with 2 entries each, instead of raw form for convenience and future-proofing, so that would be at least 3 allocations each: one for each value, one for the dict, and maybe one for the dict storage; their keys are all the same and are probably interned by python, so wouldn’t count.

    As mentioned above, we store a mapping of mercurial to git SHA1s, so for each file that makes 2 allocations, 4.04M total. Plus the 230k or 310k from metadata. Let’s say 4.45M total. We’re short 550k allocations, but considering the numbers involved, it would take less than one allocation per file on average to go over this count.

    As for memory size, per this answer on stackoverflow, python strings have an overhead of 37 bytes, so each SHA1 (kept in hex form) will take 77 bytes (note: that’s partly why I didn’t particularly care about storing them in binary form, as that would only save 25%, not 50%). That’s 311MB just for the SHA1s, to which the size of the mapping dict needs to be added. If it were a plain array of pointers to keys and values, it would take 2 * 8 bytes per file, or about 32MB. But that would be a hash table with no room for more items (by the way, I suspect the stairs that can be seen on the requested and in-use bytes are the hash table being realloc()ed). Plus at least 290 bytes per dict for each of the 78k metadata entries, which is an additional 22MB. All in all, 530MB doesn’t seem too much of a stretch.

  • Importing manifests.

    At this point, we’re done receiving data from the server, so we begin by dropping objects related to the bundle we got from the server. On the graph, I assume this is the big dip that can be observed after the initial increase in memory use, bringing us down to 5.6 million allocations and 1.92GB.

    Now begins the most time consuming process, as far as mozilla-central is concerned: transforming the manifests into git trees, while also storing enough data to be able to reconstruct manifests later (which is required to be able to pull from the mercurial server after the clone).

    So for each manifest, we expand the RawChunk into the full manifest data, and generate new git trees from that. The latter is mostly performed by the git-cinnabar-helper process. Once we’re done pushing data about a manifest to that process, we drop the corresponding data, except when we know it will be required later as the delta base for a subsequent RevChunk (which can happen in bundle2).

    As with file revisions, for each manifest, we keep track of the mapping of SHA1s between mercurial and git. We also keep a DAG of the manifests history (contrary to git trees, mercurial manifests track their ancestry; files do too, but git-cinnabar doesn’t actually keep track of that separately; it just relies on the manifests data to infer file ancestry).

    On the graph, this is where the number of live allocations increases while both requested and in-use bytes decrease, noisily.

    By the end of this phase, we made about 1 billion more allocator calls. Requested allocations went down to 1.02GB, for close to 7 million live allocations. Compared to the end of the dip at the beginning of this phase, that added 1.4 million allocations, and released 900MB. By now, we expect everything from the “Reading manifests” phase to have been released, which means we allocated around 620MB (1.52GB – 900MB), for a total of 3.26M additional allocations (1.4M + 1.86M).

    We have a dict for the SHA1 mapping (345k * 77 * 2 for strings, plus the hash table with 345k items, so at least 60MB), and the DAG, which, now that I’m looking at memory usage, I figure has possibly one of the worst structures, using 2 sets for each node (at least 232 bytes per set, that’s at least 160MB, plus 2 hash tables with 345k items). I think 250MB for those data structures would be largely underestimated. It’s not hard to imagine them taking 620MB, because really, that DAG implementation is awful. The number of allocations expected from them would be around 1.4M (4 * 345k), but I might be missing something. That’s way less than the actual number, so it would be interesting to take a closer look, but not before doing something about the DAG itself.

    Fun fact: the amount of data we’re dealing with in this phase (the expanded size of all the manifests) is close to 2.9TB (yes, terabytes). With about 4700 seconds spent on this phase on a real clone (less with the release branch), we’re still handling more than 615MB per second.

  • Importing changesets.

    This is where we finally create the git commits corresponding to the mercurial changesets. For each changeset, we expand its RawChunk, find the git tree we created in the previous phase that corresponds to the associated manifest, and create a git commit for that tree, with the right date, author, and commit message. For data that appears in the mercurial changeset that can’t be stored or doesn’t make sense to store in the git commit (e.g. the manifest SHA1, the list of changed files[*], or some extra metadata like the source of rebases), we keep some metadata we’ll store in git notes later on.

    [*] Fun fact: the list of changed files stored in mercurial changesets does not necessarily match the list of files in a `git diff` between the corresponding git commit and its parents, for essentially two reasons:

    • Old buggy versions of mercurial have generated erroneous lists that are now there forever (they are part of what makes the changeset SHA1).
    • Mercurial may create new revisions for files even when the file content is not modified, most notably during merges (but that also happened on non-merges due to, presumably, bugs).
    … so we keep it verbatim.

    On the graph, this is where both requested and in-use bytes are only slightly increasing.

    By the end of this phase, we made about half a billion more allocator calls. Requested allocations went up to 1.06GB, for close to 7.7 million live allocations. Compared to the end of the previous phase, that added 700k allocations, and 400MB. By now, we expect everything from the “Reading changesets” phase to have been released (at least the raw data we kept there), which means we may have allocated at most around 700MB (400MB + 300MB), for a total of 1.5M additional allocations (700k + 840k).

    All these are extra data we keep for the next and final phase. It’s hard to evaluate the exact size we’d expect here in memory, but if we divide by the number of changesets (345k), that’s less than 5 allocations per changeset and less than 2KB per changeset, which is low enough not to raise eyebrows, at least for now.

  • Finalizing the clone.

    The final phase is where we actually go ahead storing the mappings between mercurial and git SHA1s (all 2.7M of them), the git notes where we store the data necessary to recreate mercurial changesets from git commits, and a cache for mercurial tags.

    On the graph, this is where the requested and in-use bytes, as well as the number of live allocations peak like crazy (up to 21M allocations for 2.27GB requested).

    This is very much unwanted, but easily explained with the current state of the code. The way the mappings between mercurial and git SHA1s are stored is via a tree similar to how git notes are stored. So for each mercurial SHA1, we have a file that points to the corresponding git SHA1 through git links for commits or directly for blobs (look at the output of git ls-tree -r refs/cinnabar/metadata^3 if you’re curious about the details). If I remember correctly, it’s faster if the tree is created with an ordered list of paths, so the code created a list of paths, and then sorted it to send commands to create the tree. The former creates a new str of length 42 and a tuple of 3 elements for each and every one of the 2.7M mappings. With the 37 bytes overhead by str instance and the 56 + 3 * 8 bytes per tuple, we have at least 429MB wasted. Creating the tree itself keeps the corresponding fast-import commands in a buffer, where each command is going to be a tuple of 2 elements: a pointer to a method, and a str of length between 90 and 93. That’s at least another 440MB wasted.

    I already fixed the first half, but the second half still needs addressing.
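As a quick cross-check, the back-of-the-envelope figures quoted throughout the phases above do add up (a sketch using only the numbers from this post):

```python
# Sanity-checking a few of the estimates quoted in the phases above.
changesets, manifests, files = 345_643, 345_398, 2_020_000

# Reading phases: overhead per object kept in memory.
raw = 237e6 + 1.18e9                 # raw RawChunk data, changesets + manifests
overhead = 1.52e9 - raw              # live data at end of phase 3, minus raw
print(round(overhead / (changesets + manifests)))   # ~150 bytes/object

# File import: hex SHA1s at 77 bytes each (40 chars + 37 bytes str overhead),
# two per file (mercurial and git sides of the mapping).
print(round(2 * files * 77 / 1e6))   # 311 (MB just for the SHA1 strings)

# Manifest import: ~2.9TB expanded in ~4700 seconds.
print(round(2.9e12 / 4700 / 1e6))    # more than 615 MB/s

# Finalization: one 42-char str (37 + 42 bytes) and one 3-tuple
# (56 + 3 * 8 bytes) per each of the 2.7M mappings.
print(round(2.7e6 * ((37 + 42) + (56 + 3 * 8)) / 1e6))  # 429 (MB wasted)
```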

Overall, except for the stupid spike during the final phase, the manifest DAG and the glibc allocator runaway memory use described in previous posts, there is nothing terribly bad with the git-cinnabar memory usage, all things considered. Mozilla-central is just big.

The spike is already half addressed, and work is under way for the glibc allocator runaway memory use. The manifest DAG, interestingly, is actually mostly useless. It’s only used to track the heads of the DAG, and it’s very much possible to track heads of a DAG without actually storing the entire DAG. In fact, that’s what git-cinnabar already does for changeset heads… so we would only need to do the same for manifest heads.
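Tracking the heads of a DAG without storing the DAG itself can be as simple as the following (a hypothetical sketch, not git-cinnabar's actual code): when a node arrives, it becomes a head and its parents stop being heads.

```python
def track_heads(nodes):
    """Maintain the set of DAG heads incrementally.

    `nodes` yields (node, parents) pairs in topological order; only the
    current head set is kept, never the full ancestry.
    """
    heads = set()
    for node, parents in nodes:
        heads.difference_update(parents)  # parents are no longer heads
        heads.add(node)                   # the new node is a head
    return heads

# A tiny history: b and c branch off a, d merges them, e continues d.
history = [("a", []), ("b", ["a"]), ("c", ["a"]),
           ("d", ["b", "c"]), ("e", ["d"])]
print(track_heads(history))  # {'e'}
```

Memory use is proportional to the number of heads, not the number of nodes, which is the point of replacing the per-node sets.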

One could argue that the 1.4GB of raw RevChunk data we’re keeping in memory for later use could be kept on disk instead. I haven’t done this so far because I didn’t want to have to handle temporary files (and answer questions like “where to put them?”, “what if there isn’t enough disk space there?”, “what if disk access is slow?”, etc.). But the majority of this data is from manifests. I’m already planning changes in how git-cinnabar stores manifests data that will actually allow importing them directly, instead of keeping them in memory until files are imported. This would instantly remove 1.18GB of memory usage. The downside, however, is that this would be more CPU intensive: importing changesets will require creating the corresponding git trees, and getting the stored manifest data. I think it’s worth it, though.

Finally, one thing that isn’t obvious here, but that was found while analyzing why RSS would be going up despite memory usage going down, is that git-cinnabar is doing way too many reallocations and substring allocations.

So let’s look at two metrics that hopefully will highlight the problem:

  • The cumulated amount of requested memory. That is, the sum of all sizes ever given to malloc, realloc, calloc, etc.
  • The compensated cumulated amount of requested memory (naming is hard). That is, the sum of all sizes ever given to malloc, calloc, etc. except realloc. For realloc, we only count the delta in size between what the size was before and after the realloc.

Assuming all the requested memory is filled at some point, the former gives us an upper bound to the amount of memory that is ever filled or copied (the amount that would be filled if no realloc was ever in-place), while the latter gives us a lower bound (the amount that would be filled or copied if all reallocs were in-place).
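Given an allocation trace, both metrics fall out of a simple accumulation (a sketch with a made-up event format, not the actual instrumentation):

```python
def requested_metrics(trace):
    """Compute both cumulated-request metrics from a toy allocation trace.

    Each event is ("malloc", size) or ("realloc", old_size, new_size);
    calloc would be treated like malloc.
    """
    cumulated = 0      # upper bound: every byte ever requested
    compensated = 0    # lower bound: reallocs only count their size delta
    for event in trace:
        if event[0] == "realloc":
            _, old, new = event
            cumulated += new
            compensated += new - old
        else:
            _, size = event
            cumulated += size
            compensated += size
    return cumulated, compensated

# Growing a 10-byte buffer to 30 then 90 bytes via realloc:
trace = [("malloc", 10), ("realloc", 10, 30), ("realloc", 30, 90)]
print(requested_metrics(trace))  # (130, 90)
```

Note that the lower bound (90) equals the final buffer size, i.e. what would be copied if every realloc were in-place, while the upper bound (130) counts every intermediate copy.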

Ideally, we’d want the upper and lower bounds to be close to each other (indicating few realloc calls), and the total amount at the end of the process to be as close as possible to the amount of data we’re handling (which we’ve seen is around 3TB).

… and this is clearly bad. Like, really bad. But we already knew that from the previous post, although it’s nice to put numbers on it. The lower bound is about twice the amount of data we’re handling, and the upper bound is more than 10 times that amount. Clearly, we can do better.

We’ll see how things evolve after the necessary code changes happen. Stay tuned.

Categories: FLOSS Project Planets

Fabio Zadrozny: PyDev 5.6.0 released: faster debugger, improved type inference for super and pytest fixtures

Planet Python - Thu, 2017-03-23 00:29
PyDev 5.6.0 is now already available for download (and is already bundled in LiClipse 3.5.0).

There are many improvements on this version!

The major one is that the PyDev.Debugger got some attention and should now be 60%-100% faster overall -- in all supported Python versions (and that's on top of the improvements done previously).

This improvement was a nice example of trading memory for speed (the major change was that the debugger now has 2 new caches: one saving whether a frame should be skipped or not, and another saving whether a given line in a traced frame should be skipped or not, which enables the debugger to make far fewer checks on those occasions).
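The caching idea is classic memoization: spend a dict to avoid recomputing a per-frame decision. A minimal sketch of the frame-level cache (hypothetical names, not the actual pydevd code):

```python
_frame_skip_cache = {}

def should_skip_frame(code, should_trace_file):
    """Cache the per-code-object decision so the (relatively expensive)
    filename check runs only once per code object."""
    try:
        return _frame_skip_cache[code]
    except KeyError:
        skip = not should_trace_file(code.co_filename)
        _frame_skip_cache[code] = skip
        return skip

calls = []
def trace_everything(filename):
    calls.append(filename)   # count how often the expensive check runs
    return True

def demo():
    pass

# Simulate the same frame being hit 1000 times during tracing.
for _ in range(1000):
    should_skip_frame(demo.__code__, trace_everything)
print(len(calls))  # 1: the check ran once despite 1000 "frames"
```

The line-level cache described in the post would work the same way, keyed on (code object, line number).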

Also, other fixes were done in the debugger. Namely:

  • the variables are now properly displayed when the interactive console is connected to a debug session;
  • it's possible to select the Qt version for which QThreads should be patched for the debugger to work with (in preferences > PyDev > Debug > Qt Threads);
  • fixed an issue where a “native Qt signal is not callable” message was raised when connecting a signal to QThread.started;
  • fixed an issue displaying variables (Ctrl+Shift+D) when debugging.

Note: from this version onward, the debugger will now only support Python 2.6+ (I believe there should be very few Python 2.5 users -- Python 2.6 itself stopped being supported in 2013, so, I expect this change to affect almost no one -- if someone really needs to use an older version of Python, it's always possible to get an older version of the IDE/debugger too). Also, from now on, supported versions are actually properly tested on the ci (2.6, 2.7 and 3.5 in https://travis-ci.org/fabioz/PyDev.Debugger and 2.7, 3.5 in https://ci.appveyor.com/project/fabioz/pydev-debugger).

The code-completion (Ctrl+Space) and find definition (F3) also had improvements and can now deal with the Python super (so, it's possible to get completions and go to the definition of a method declared in a superclass when using the super construct) and pytest fixtures (so, if you have a pytest fixture, you should now be able to have completions/go to its definition even if you don't add a docstring to the parameter saying its expected type).
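For reference, this is the `super` construct the resolution now handles (a plain-Python example, nothing PyDev-specific; pytest fixtures are handled analogously by inferring the parameter type from the fixture definition):

```python
class Base:
    def greet(self):
        return "hello"

class Child(Base):
    def greet(self):
        # Code-completion (Ctrl+Space) and find definition (F3) on the
        # call below can now resolve through super() to Base.greet.
        return super().greet() + " from child"

print(Child().greet())  # hello from child
```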

Also, this release improved the support for third-party packages, so coverage, pycodestyle (previously pep8.py) and autopep8 now use the latest version available. Also, PyLint was improved to use the same thread pool used in code-analysis, and an issue in the Django shell was fixed for Django >= 1.10.

And to finish, the preferences for running unit-tests can now be saved to the project or user settings (i.e.: preferences > PyDev > PyUnit > Save to ...) and an issue was fixed when coloring the matrix multiplication operator (which was wrongly recognized as a decorator).

Thank you very much to all the PyDev supporters and Patrons (http://www.patreon.com/fabioz), who help to keep PyDev moving forward and to JetBrains, which sponsored many of the improvements done in the PyDev.Debugger.

Categories: FLOSS Project Planets

Dries Buytaert: Living our values

Planet Drupal - Thu, 2017-03-23 00:26

The Drupal community is committed to welcoming and accepting all people. That includes a commitment not to discriminate against anyone based on their heritage or culture, their sexual orientation, their gender identity, and more. Diversity is a strength, and as such we work hard to foster a culture of open-mindedness toward differences.

A few weeks ago, I privately asked Larry Garfield, a prominent Drupal contributor, to leave the Drupal project. I did this because it came to my attention that he holds views that are in opposition with the values of the Drupal project.

I had hoped to avoid discussing this decision publicly out of respect for Larry's private life, but now that Larry has written about it on his blog and it is being discussed publicly, I believe I have no choice but to respond on behalf of the Drupal project.

It is not for me to share any of the confidential information that I've received, so I won't point out the omissions in Larry's blog post. However, I can tell you that those who have reviewed Larry's writing, including me, suffered from varying degrees of shock and concern.

In the end, I fundamentally believe that all people are created equal. This belief has shaped the values that the Drupal project has held since its early days. I cannot in good faith support someone who actively promotes a philosophy that is contrary to this.

While the decision was unpleasant, the choice was clear. I remain steadfast in my obligation to protect the shared values of the Drupal project. This is unpleasant because I appreciate Larry's many contributions to Drupal, because this risks setting a complicated precedent, and because it involves a friend's personal life. The matter is further complicated by the fact that this information was shared by others in a manner I don't find acceptable either.

It's not for me to judge the choices anyone makes in their private life or what beliefs they subscribe to. I certainly don't take offense to the role-playing activities of Larry's alternative lifestyle. However, when a highly-visible community member's private views become public, controversial, and disruptive for the project, I must consider the impact that his words and actions have on others and the project itself. In this case, Larry has entwined his private and professional online identities in such a way that it blurs the lines with the Drupal project. Ultimately, I can't get past the fundamental misalignment of values.

First, collectively, we work hard to ensure that Drupal has a culture of diversity and inclusion. Our goal is not just to have a variety of different people within our community, but to foster an environment of connection, participation and respect. We have a lot of work to do on this and we can't afford to ignore discrepancies between the espoused views of those in leadership roles and the values of our culture. It's my opinion that any association with Larry's belief system is inconsistent with our project's goals.

Second, I believe someone's belief system inherently influences their actions, in both explicit and subtle ways, and I'm unwilling to take this risk going forward.

Third, Larry's continued representation of the Drupal project could harm the reputation of the project and cause harm to the Drupal ecosystem. Any further participation in a leadership role implies our community is complicit with and/or endorses these views, which we do not.

It is my responsibility and obligation to act in the best interest of the project at large and to uphold our values. Decisions like this are unpleasant and disruptive, but important. It is moments like this that test our commitment to our values. We must stand up and act in ways that demonstrate these values. For these reasons, I'm asking Larry to resign from the Drupal project.

(Comments on this post are allowed but for obvious reasons will be moderated.)

Categories: FLOSS Project Planets

ActiveLAMP: Shibboleth Authentication in Symfony 2.8+|3.0+

Planet Drupal - Wed, 2017-03-22 22:00

We recently had the opportunity to work on a Symfony app for one of our Higher Ed clients, for whom we had previously built a Drupal distribution. Drupal 8's move to Symfony has enabled us to expand our service offering, and we have found more opportunities to build apps directly with Symfony when a CMS is not needed. This post is not about Drupal, but it is cross-posted to Drupal Planet to demonstrate the value of getting off the island. Writing custom authentication schemes in Symfony used to be on the complicated side, but with the introduction of the Guard authentication component it has gotten a lot easier.

Read more...
Categories: FLOSS Project Planets

Matthew Rocklin: Dask Release 0.14.1

Planet Python - Wed, 2017-03-22 20:00

This work is supported by Continuum Analytics, the XDATA Program, and the Data Driven Discovery Initiative from the Moore Foundation.

I’m pleased to announce the release of Dask version 0.14.1. This release contains a variety of performance and feature improvements. This blogpost includes some notable features and changes since the last release on February 27th.

As always you can conda install from conda-forge

conda install -c conda-forge dask distributed

or you can pip install from PyPI

pip install dask[complete] --upgrade

Arrays

Recent work in distributed computing and machine learning has motivated new performance-oriented and usability changes to how we handle arrays.

Automatic chunking and operation on NumPy arrays

Many interactions between Dask arrays and NumPy arrays work smoothly. NumPy arrays are made lazy and are appropriately chunked to match the operation and the Dask array.

>>> x = np.ones(10)                 # a numpy array
>>> y = da.arange(10, chunks=(5,))  # a dask array
>>> z = x + y                       # combined become a dask.array
>>> z
dask.array<add, shape=(10,), dtype=float64, chunksize=(5,)>
>>> z.compute()
array([  1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.])

Reshape

Reshaping distributed arrays is simple in simple cases, and can be quite complex in complex cases. Reshape now supports a much broader set of shape transformations, where any dimension can be collapsed or merged into other dimensions.

>>> x = da.ones((2, 3, 4, 5, 6), chunks=(2, 2, 2, 2, 2))
>>> x.reshape((6, 2, 2, 30, 1))
dask.array<reshape, shape=(6, 2, 2, 30, 1), dtype=float64, chunksize=(3, 1, 2, 6, 1)>

This operation ends up being quite useful in a number of distributed array cases.

Optimize Slicing to Minimize Communication

Dask.array slicing optimizations are now careful to produce graphs that avoid situations that could cause excess inter-worker communication. The details of how they do this are a bit out of scope for a short blogpost, but the history here is interesting.

Historically dask.arrays were used almost exclusively by researchers with large on-disk arrays stored as HDF5 or NetCDF files. These users primarily used the single machine multi-threaded scheduler. We heavily tailored Dask array optimizations to this situation and made that community pretty happy. Now as some of that community switches to cluster computing on larger datasets the optimization goals shift a bit. We have tons of distributed disk bandwidth but really want to avoid communicating large results between workers. Supporting both use cases is possible and I think that we’ve achieved that in this release so far, but it’s starting to require increasing levels of care.

Micro-optimizations

With distributed computing also comes larger graphs and a growing importance of graph-creation overhead. This has been optimized somewhat in this release. We expect this to be a focus going forward.

DataFrames

Set_index

Set_index is smarter in two ways:

  1. If you set_index on a column that happens to be sorted then we’ll identify that and avoid a costly shuffle. This was always possible with the sorted= keyword but users rarely used this feature. Now this is automatic.
  2. Similarly when setting the index we can look at the size of the data and determine if there are too many or too few partitions and rechunk the data while shuffling. This can significantly improve performance if there are too many partitions (a common case).
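Conceptually, the sortedness check that makes the shuffle avoidable is a cheap single pass over the data; a sketch (illustrative only, not Dask's actual implementation, which works with partition metadata):

```python
def is_sorted(column):
    # If values never decrease, the data is already ordered by this
    # key, so the expensive shuffle to repartition it can be skipped.
    return all(a <= b for a, b in zip(column, column[1:]))
```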
Shuffle performance

We’ve micro-optimized some parts of dataframe shuffles. Big thanks to the Pandas developers for the help here. This accelerates set_index, joins, groupby-applies, and so on.

Fastparquet

The fastparquet library has seen a lot of use lately and has undergone a number of community bugfixes.

Importantly, Fastparquet now supports Python 2.

We strongly recommend Parquet as the standard data storage format for Dask dataframes (and Pandas DataFrames).

dask/fastparquet #87

Distributed Scheduler

Replay remote exceptions

Debugging is hard in part because exceptions happen on remote machines where normal debugging tools like pdb can’t reach. Previously we were able to bring back the traceback and exception, but you couldn’t dive into the stack trace to investigate what went wrong:

def div(x, y):
    return x / y

>>> future = client.submit(div, 1, 0)
>>> future
<Future: status: error, key: div-4a34907f5384bcf9161498a635311aeb>
>>> future.result()  # getting result re-raises exception locally
<ipython-input-3-398a43a7781e> in div()
      1 def div(x, y):
----> 2     return x / y
ZeroDivisionError: division by zero

Now Dask can bring a failing task and all necessary data back to the local machine and rerun it so that users can leverage the normal Python debugging toolchain.

>>> client.recreate_error_locally(future)
<ipython-input-3-398a43a7781e> in div(x, y)
      1 def div(x, y):
----> 2     return x / y
ZeroDivisionError: division by zero

Now if you’re in IPython or a Jupyter notebook you can use the %debug magic to jump into the stacktrace, investigate local variables, and so on.

In [8]: %debug
> <ipython-input-3-398a43a7781e>(2)div()
      1 def div(x, y):
----> 2     return x / y

ipdb> pp x
1
ipdb> pp y
0

dask/distributed #894

Async/await syntax

Dask.distributed uses Tornado for network communication and Tornado coroutines for concurrency. Normal users rarely interact with Tornado coroutines; they aren’t familiar to most people so we opted instead to copy the concurrent.futures API. However some complex situations are much easier to solve if you know a little bit of async programming.

Fortunately, the Python ecosystem seems to be embracing this change towards native async code with the async/await syntax in Python 3. In an effort to motivate people to learn async programming and to gently nudge them towards Python 3, Dask.distributed now supports async/await in a few cases.

You can wait on a dask Future

async def f():
    future = client.submit(func, *args, **kwargs)
    result = await future

You can put the as_completed iterator into an async for loop

async for future in as_completed(futures):
    result = await future
    ... do stuff with result ...

And, because Tornado supports the await protocols you can also use the existing shadow concurrency API (everything prepended with an underscore) with await. (This was doable before.)

results = client.gather(futures)         # synchronous
...
results = await client._gather(futures)  # asynchronous

If you’re in Python 2 you can always do this with normal yield and the tornado.gen.coroutine decorator.

dask/distributed #952

Inproc transport

In the last release we enabled Dask to communicate over more things than just TCP. In practice this doesn’t come up (TCP is pretty useful). However in this release we now support single-machine “clusters” where the clients, scheduler, and workers are all in the same process and transfer data cost-free over in-memory queues.

This allows the in-memory user community to use some of the more advanced features (asynchronous computation, spill-to-disk support, web-diagnostics) that are only available in the distributed scheduler.

This is on by default if you create a cluster with LocalCluster without using Nanny processes.

>>> from dask.distributed import LocalCluster, Client
>>> cluster = LocalCluster(nanny=False)
>>> client = Client(cluster)
>>> client
<Client: scheduler='inproc://192.168.1.115/8437/1' processes=1 cores=4>

>>> from threading import Lock         # Not serializable
>>> lock = Lock()                      # Won't survive going over a socket
>>> [future] = client.scatter([lock])  # Yet we can send to a worker
>>> future.result()                    # ... and back
<unlocked _thread.lock object at 0x7fb7f12d08a0>

dask/distributed #919

Connection pooling for inter-worker communications

Workers now maintain a pool of sustained connections between each other. This pool is of a fixed size and removes connections with a least-recently-used policy. It avoids re-connection delays when transferring data between workers. In practice this shaves off a millisecond or two from every communication.
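The pool policy described above (fixed size, least-recently-used eviction) can be sketched in a few lines; `connect` and `close` below are hypothetical placeholders, not the dask.distributed API:

```python
from collections import OrderedDict

class ConnectionPool:
    """Fixed-size pool that evicts the least-recently-used connection.

    `connect` and `close` are placeholders for real socket
    setup/teardown; the LRU bookkeeping is the point here.
    """
    def __init__(self, limit, connect, close):
        self.limit = limit
        self.connect = connect
        self.close = close
        self._pool = OrderedDict()   # address -> connection, oldest first

    def get(self, address):
        if address in self._pool:
            # Reuse the sustained connection; mark it most recently used.
            self._pool.move_to_end(address)
            return self._pool[address]
        if len(self._pool) >= self.limit:
            # Evict the least-recently-used connection.
            _, old = self._pool.popitem(last=False)
            self.close(old)
        conn = self._pool[address] = self.connect(address)
        return conn
```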

This is actually a revival of an old feature that we had turned off last year when it became clear that the performance here wasn’t a problem.

Along with other enhancements, this takes our round-trip latency down to 11ms on my laptop.

In [10]: %%time
    ...: for i in range(1000):
    ...:     future = client.submit(inc, i)
    ...:     result = future.result()
    ...:
CPU times: user 4.96 s, sys: 348 ms, total: 5.31 s
Wall time: 11.1 s

There may be room for improvement here though. For comparison, here is the same test with concurrent.futures.ProcessPoolExecutor.

In [14]: e = ProcessPoolExecutor(8)

In [15]: %%time
    ...: for i in range(1000):
    ...:     future = e.submit(inc, i)
    ...:     result = future.result()
    ...:
CPU times: user 320 ms, sys: 56 ms, total: 376 ms
Wall time: 442 ms

Also, just to be clear, this measures total roundtrip latency, not overhead. Dask’s distributed scheduler overhead remains in the low hundreds of microseconds.

dask/distributed #935

Related Projects

There has been activity around Dask and machine learning:

  • dask-learn is undergoing some performance enhancements. It turns out that when you offer distributed grid search people quickly want to scale up their computations to hundreds of thousands of trials.
  • dask-glm now has a few decent algorithms for convex optimization. The authors of this wrote a blogpost very recently if you’re interested: Developing Convex Optimization Algorithms in Dask
  • dask-xgboost lets you hand off distributed data in Dask dataframes or arrays and hand it directly to a distributed XGBoost system (that Dask will nicely set up and tear down for you). This was a nice example of easy hand-off between two distributed services running in the same processes.
Acknowledgements

The following people contributed to the dask/dask repository since the 0.14.0 release on February 27th

  • Antoine Pitrou
  • Brian Martin
  • Elliott Sales de Andrade
  • Erik Welch
  • Francisco de la Peña
  • jakirkham
  • Jim Crist
  • Jitesh Kumar Jha
  • Julien Lhermitte
  • Martin Durant
  • Matthew Rocklin
  • Markus Gonser
  • Talmaj

The following people contributed to the dask/distributed repository since the 1.16.0 release on February 27th

  • Antoine Pitrou
  • Ben Schreck
  • Elliott Sales de Andrade
  • Martin Durant
  • Matthew Rocklin
  • Phil Elson
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-03-22

Planet Apache - Wed, 2017-03-22 19:58
  • Why American Farmers Are Hacking Their Tractors With Ukrainian Firmware

    DRM working as expected:

    To avoid the draconian locks that John Deere puts on the tractors they buy, farmers throughout America’s heartland have started hacking their equipment with firmware that’s cracked in Eastern Europe and traded on invite-only, paid online forums. Tractor hacking is growing increasingly popular because John Deere and other manufacturers have made it impossible to perform “unauthorized” repair on farm equipment, which farmers see as an attack on their sovereignty and quite possibly an existential threat to their livelihood if their tractor breaks at an inopportune time. (via etienneshrdlu)

    (tags: hacking farming drm john-deere tractors firmware right-to-repair repair)

Categories: FLOSS Project Planets

Jeff Geerling's Blog: Use a Drupal 8 BLT project with Drupal VM on Windows 7 or Windows 8

Planet Drupal - Wed, 2017-03-22 19:09

Windows 10 is the only release Acquia's BLT officially supports. But there are still many people who use Windows 7 and 8, and most of these people don't have control over what version of Windows they use.

Drupal VM has supported Windows 7, 8, and 10 since I started building it a few years ago (at that time I was still running Windows 7), and with a little finesse you can actually get an entire modern BLT-based Drupal 8 project running on Windows 7 or 8, as long as you do all the right things, as demonstrated in this blog post.

Categories: FLOSS Project Planets

C++ Concepts TS for getting functions as arguments, and the book discount

Planet KDE - Wed, 2017-03-22 19:06

One of my pet peeves with teaching FP in C++ is that if we want to have efficient code, we need to accept functions and other callable objects as template arguments.

Because of this, we do not have function signatures that are self-documenting. Consider a function that outputs items that satisfy a predicate to the standard output:

template <typename Predicate>
void write_if(const std::vector<int> &xs, Predicate p)
{
    std::copy_if(begin(xs), end(xs),
                 std::ostream_iterator<int>(std::cout, " "),
                 p);
}

We see that the template parameter is named Predicate, so we can infer that it needs to return a bool (or something convertible to bool), and we can deduce from the function name and the type of the first argument that it should be a function that takes an int.

This is a lot of reasoning just to be able to tell what we can pass to the function.

For this reason, Bartosz uses std::function in his blog posts – it tells us exactly which functions we can pass in. But std::function is slow.

So, we either need to have a bad API or a slow API.

With concepts, things will change.

We will be able to define a really short (and a bit dirty) concept that will check whether the functions we get are of the right signature:

Edit: Changed the concept name to Callable to fit the naming in the standard [func.def] since it supports any callable, not just function objects

template <typename F, typename CallableSignature>
concept bool Callable =
    std::is_convertible<F, std::function<CallableSignature>>::value;

void foo(Callable<int(int)> f) // or Callable<auto (int) -> int>
{
    std::cout << std::invoke(f, 42) << std::endl;
}

We will be able to call foo with any callable that looks like an int-to-int function. And we will get an error ‘constraint Callable<int(int)> is not satisfied’ for those that do not have the matching signature.

An alternative approach is to use the std::is_invocable type trait (thanks to Agustín Bergé for writing the original proposal and pointing me to it). It will provide us with a cleaner definition of the concept, though the usage syntax will have to be a bit different if we want to keep the concept definition short and succinct.

template <typename F, typename R, typename ...Args>
concept bool Callable = std::is_invocable_r<R, F, Args...>::value;

void foo(Callable<int, int> f)
{
    std::cout << std::invoke(f, 42) << std::endl;
}

When we get concepts (C++20, hopefully), we will have the best of both worlds: an optimal way to accept callable objects as function arguments, without sacrificing the API to do it.

Book discount

Today, Functional Programming in C++ is again the Deal of the Day – you get half off if you use the code dotd032317au at cukic.co/to/manning-dotd


Read more...
Categories: FLOSS Project Planets

Tarek Ziade: Load Testing at Mozilla

Planet Python - Wed, 2017-03-22 19:00

After a stabilization phase, I am happy to announce that Molotov 1.0 has been released!

(Logo by Juan Pablo Bravo)

This release is an excellent opportunity to explain a little bit how we do load testing at Mozilla, and what we're planning to do in 2017 to improve the process.

I am talking here specifically about load testing our HTTP services, and when this blog post mentions what Mozilla is doing, it refers mainly to the Mozilla QA team, helped by the Services developer teams that work on some of our web services.

What's Molotov?

Molotov is a simple load testing tool

Molotov is a minimalist load testing tool you can use to load test an HTTP API using Python. Molotov leverages Python 3.5+ asyncio and uses aiohttp to send some HTTP requests.

Writing load tests with Molotov is done by decorating asynchronous Python functions with the @scenario decorator:

from molotov import scenario

@scenario(100)
async def my_test(session):
    async with session.get('http://localhost:8080') as resp:
        assert resp.status == 200

When this script is executed with the molotov command, the my_test function is going to be repeatedly called to perform the load test.
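The 100 passed to @scenario is a weight: with several scenarios registered, Molotov picks among them proportionally on each iteration. The selection logic amounts to something like this (an illustrative stdlib sketch, not Molotov's internals):

```python
import random

def pick_scenario(scenarios):
    # `scenarios` is a list of (weight, coroutine_function) pairs;
    # each load-test iteration draws one, with probability
    # proportional to its weight.
    weights = [w for w, _ in scenarios]
    funcs = [f for _, f in scenarios]
    return random.choices(funcs, weights=weights, k=1)[0]
```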

Molotov tries to be as transparent as possible and just hands over session objects from the aiohttp.client module.

The full documentation is here: http://molotov.readthedocs.io

Using Molotov is the first step to load test our services. From our laptops, we can run that script and hammer a service to make sure it can hold some minimal charge.

What Molotov is not

Molotov is not a fully-featured load testing solution

Load testing applications usually come with high-level features to understand how the tested app is performing. Things like performance metrics are displayed when you run a test, as Apache Bench does by displaying how many requests it was able to perform and their average response time.

But when you are testing web services stacks, the metrics you are going to collect from each client attacking your service will include a lot of variation because of the network and the clients' CPU overhead. In other words, you cannot guarantee reproducibility from one test to the next to track precisely how your app evolves over time.

Adding metrics directly in the tested application itself is much more reliable, and that's what we're doing these days at Mozilla.

That's also why I have not included any client-side metrics in Molotov, besides a very simple StatsD integration. When we run Molotov at Mozilla, we mostly watch our centralized metrics dashboards and see how the tested app behaves regarding CPU, RAM, Requests-Per-Second, etc.

Of course, running a load test from a laptop is less than ideal. We want to avoid the hassle of asking people to install Molotov and all the dependencies a test requires every time they want to load test a deployment -- and run something from their desktop. Doing load tests occasionally from your laptop is fine, but it's not a sustainable process.

And even though a single laptop can generate a lot of load (in one project, we're generating around 30k requests per second from one laptop, and happily killing the service), we also want to do some distributed load testing.

We want to run Molotov from the cloud. And that's what we do, thanks to Docker and Loads.

Molotov & Docker

Since running the Molotov command mostly consists of using the right command-line options and passing a test script, we've added in Molotov a second command-line utility called moloslave.

Moloslave takes the URL of a git repository, clones it, and runs the Molotov test that's in it by reading a configuration file. The configuration file is a simple JSON file that needs to be at the root of the repo, much as you would do with Travis-CI or other tools.

See http://molotov.readthedocs.io/en/latest/slave

From there, running in a Docker can be done with a generic image that has Molotov preinstalled and picks the test by cloning a repo.

See http://molotov.readthedocs.io/en/latest/docker

Having Molotov run in Docker solves all the dependency issues you can have when running a Python app. We can specify all the requirements in the configuration file and have moloslave install them. The generic Docker image I have pushed to the Docker Hub is a standard Python 3 environment that works in most cases, but it's easy to create another Docker image when a very specific environment is required.

But the bottom line is that anyone, on any OS, can "docker run" a load test by simply passing the load test's Git URL in an environment variable.

Molotov & Loads

Once you can run load tests using Docker images, you can use specialized Linux distributions like CoreOS to run them.

Thanks to boto, you can script the Amazon Cloud and deploy hundreds of CoreOS boxes and run Docker images in them.

That's what the Loads project is -- an orchestrator that will run hundreds of CoreOS EC2 instances to perform a massively distributed load test.

Someone who wants to run such a test passes to a Loads Broker running in the Amazon Cloud a configuration that says where the Docker image that runs the Molotov test is located and for how long the test needs to run.

That allows us to run hours-long tests without having to depend on a laptop to orchestrate them.

But the Loads orchestrator has been suffering from reliability issues. Sometimes EC2 instances on AWS stop responding, and Loads no longer knows what's happening in a load test. We've suffered from that and had to write specific code to clean up boxes and avoid keeping hundreds of zombie instances sticking around.

But even with these issues, we're able to perform massive load tests distributed across hundreds of boxes.

Next Steps

At Mozilla, we are in the process of gradually switching all our load testing scripts to Molotov. Using a single tool everywhere will allow us to simplify the whole process that takes that script and performs a distributed load test.

I am also looking into improving metrics. One idea is to automatically collect all the metrics that are generated during a load test and push them into a specialized performance trend dashboard.

We're also looking at switching from Loads to Ardere. Ardere is a new project that aims at leveraging Amazon ECS. ECS is an orchestrator we can use to create and manage EC2 instances. We tried ECS in the past, but it was not suited to rapidly spinning up hundreds of boxes for a load test. ECS has improved a lot since then, and we have started a prototype that leverages it; it looks promising.

For everything related to our Load testing effort at Mozilla, you can look at https://github.com/loads/

And of course, everything is open source and open to contributions.

Categories: FLOSS Project Planets

Nick Kew: The right weapon

Planet Apache - Wed, 2017-03-22 17:04

Today’s terrorist attack in London seems to have been in the worst tradition of slaughtering the innocent, but pretty feeble in its token attempt on the more noble target of Parliament.  This won’t become a Grand Tradition like Catesby’s papists’ attack.

But if we accept that the goal was slaughter of the innocent, then today’s perpetrator made a better job of it than most have done, at least since the days of the IRA, with their deep-pocketed US backers and organised paramilitary structure.  His weapon of choice was the obvious one for the purpose, having far more destructive power than many that are subject to heavy security theatre and sometimes utterly ridiculous restrictions.  Even some of those labelled “weapons of mass destruction”.

The car.  The weapon that is available freely to everyone, no questions asked.  The weapon no government dare restrict.  The weapon that kills more than all others, yet where it’s so rare as to be newsworthy for any perpetrator to be meaningfully punished.  Would the 5/11 plotters have gone to such lengths with explosives if they’d had such effective weapons to hand?

With this weapon, the only limit on terrorist attacks is the number of terrorists.  No need for preparation and planning – the kind of thing that might attract the attention of police or spooks – just go ahead.

And next time we get a display of security theatre – like banning laptops on flights – we can point to the massive double-standards.


Categories: FLOSS Project Planets

Looking for a job ?

Planet KDE - Wed, 2017-03-22 16:40
KDE Project:

Are you looking for a C++/Qt/Linux developer job in Germany ?
Then maybe this is something for you: Sharp Reflections

I'm looking forward to hearing from you. :-)
Alex

Categories: FLOSS Project Planets

Sooper Drupal Themes: Are you ready for Drupal 8?

Planet Drupal - Wed, 2017-03-22 16:30

Between the rush of product updates we're putting out lately, a moment of reflection...

Like many other Drupal shops and theme/product developers I've been taking it easy with major investment in D8. But times are changing. Now we are seeing a time where Google searches including Drupal 8 are more numerous than searches containing Drupal 7. This is by no means a guarantee that D8 is a clear winner but to me it is a sign of progress and it inspires enough confidence to push ahead with our Drupal 8 product upgrades. SooperThemes is on schedule to release our Drupal themes and modules on Drupal 8 soon and I'm sure it will be great for us and our customers.

2017 will be an interesting year for Drupal, a year in which Drupal 8 will really show whether it can be as popular as its predecessor. The lines in the chart might be crossing, but Drupal 8 has some way to go before it is as popular as 7. Understanding that Drupal 8 is more geared towards developers, one might say it never will be, but I think it's important for the open web that Drupal stays competitive in the low-end market. Start-ups like Tesla and SpaceX have demonstrated how Drupal can grow along with your business all the way to IPO and beyond.

Is your business ready for Drupal 8?

Personally, I think I will need a month or two before I can say I'm totally comfortable with shifting the focus of development to Drupal 8. Most of my existing customers are on Drupal 7, and my Drupal 7 expertise and products will not be irrelevant any time soon. One thing that is holding me back is uncertainty about media library features in Drupal 8; I hope the D8media team will be successful with their awesome work to put this critical feature set in core.

If you are a Drupal developer, themer, or business owner, how do you feel about Drupal 8? Are you getting more business for Drupal 8 than Drupal 7? How is your experience with training yourself or your staff to work with Drupal 8 and its more object-oriented code?

Let me know in the comments if you have anything to share about what Drupal 8 means to you!

Categories: FLOSS Project Planets

Meet the LibrePlanet 2017 Speakers: Denver Gingerich

FSF Blogs - Wed, 2017-03-22 15:55

Would you tell us a bit about yourself?

I was born and raised in British Columbia, Canada, and although I currently live in the New York City area, I am undeniably a West Coast boy at heart. I was always an extremely quiet and shy kid, but had no problem making friends with computers. So naturally, my high school socializing involved a lot of LAN parties, which is where I discovered that installing Apache on GNU/Linux was MUCH easier than on Windows. That was where my interest in free software really began, and it has been a big part of my life ever since. When I'm not sitting at a computer, I love traveling, and generally being outdoors as much as possible—hiking and skiing are favourite pastimes, as well as exploring new places I have never been before. I am also a transit enthusiast; I love learning about the history of subway systems, transit networks and infrastructure, and trains of all kinds. I generally find it fascinating to learn about how things work, and how things came to be the way they are, and because of that, I often fall down Wikipedia rabbit holes. I will also eat just about anything, and never turn down a free conference T-shirt, no matter how hideous the colour.

How did you first become interested in having your cell phone be fully free?

I first got a cell phone number in mid-2009, but I didn't have a cell phone—the number was hosted by Google Voice. I was mostly able to use the number with free software (using email for SMS and SIP for calls) so I didn't think a lot about the freedom implications of cell phones then.

I purchased a Nokia N900 and used it when I wasn't near a computer. It still ran a lot of non-free software. Later I learned that the most significant piece of this non-free software was the baseband firmware.

A few years ago I started my transition away from all Google services. I wanted my computer to remain my primary device for SMS and calls, so I needed a Google Voice replacement. I tried to find an equivalent service, but could not find one. So I decided to write my own.

That led to the first version of Soprani.ca, which I use to this day. I've recently created a newer version of the software, called JMP, which is easier to use for the average person. Both allow a person to use phone features like SMS and calling without a cell phone (and thus without baseband firmware). And both are free software, licensed under the GNU Affero General Public License, version 3 or later.

I'm still interested in this topic because people still use phone numbers and cell phones, even though they have certain "reprehensible" features, as RMS puts it. I hope by showing people ways to communicate with cell phone users that do not require a baseband firmware that we can take back control of our communication from the cellular companies and proprietary firmware makers.

Is this your first LibrePlanet?

No, this will actually be my fifth LibrePlanet in a row! I'm looking forward to chatting with all the wonderful people that I know I'll find there, and hearing some great ideas for how we can advance the free software movement.

In particular, it is becoming increasingly difficult to buy a computer that will function with only free software. I've met people at past LibrePlanet conferences who are building their own hardware so they can continue to run exclusively free software (such as the EOMA68 CPU card). These efforts are critically important, since existing computer manufacturers will no longer create the hardware we need. I hope to learn more about these efforts and ways I can contribute to them so that we'll still be able to run free software even after the last ThinkPad without a Management Engine stops working.

How can we follow you on social media?

I'm @ossguy on many social media sites, including Pump.io and Twitter.

What is a skill or talent you have that you wish more people knew about?

My wife says that if stubbornness and perfectionism could be counted as Olympic sports, I would win all the gold medals... She is smarter and much better looking than me, so she is probably right.

Want to hear Denver and the other amazing speakers? Join us March 25-26 for LibrePlanet 2017!

Edited for content and grammar.

Categories: FLOSS Project Planets

Valuebound: How to send custom formatted HTML mail in Drupal 8 using hook_mail_alter()

Planet Drupal - Wed, 2017-03-22 15:47

As the name suggests, hook_mail_alter() is used to alter an email created with drupal_mail() in Drupal 7, or with MailManagerInterface::mail() in Drupal 8. hook_mail_alter() allows modification of email messages, including adding and/or changing message text, message fields, and message headers.

Email sent by any means other than drupal_mail() will not trigger hook_mail_alter(). All core modules use drupal_mail(), and using it is always recommended, though not mandatory.
 

Syntax: hook_mail_alter(&$message)

Parameters

$message: An array containing the message data. Keys in this array include:

  • 'id': The id of the message.
  • 'to': The…
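As a minimal sketch of how such a hook might look (assuming a custom module named "mymodule"; the header value and footer text are illustrative, not from the original article), an implementation could set an HTML Content-Type header and append a line to the message body:

```php
<?php

/**
 * Implements hook_mail_alter().
 *
 * Switches outgoing mail to HTML and appends a footer line.
 */
function mymodule_mail_alter(&$message) {
  // Change the Content-Type header so mail clients render the body as HTML.
  $message['headers']['Content-Type'] = 'text/html; charset=UTF-8';
  // Append a footer as an additional body element.
  $message['body'][] = '-- This message was sent from our Drupal site.';
}
```

Because $message is passed by reference, changes made here affect the message for every module that sends mail through drupal_mail() / MailManagerInterface::mail().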
Categories: FLOSS Project Planets
Syndicate content