FLOSS Project Planets

Norman Maurer: Inline all the things

Planet Apache - Tue, 2014-07-29 11:28

If you are familiar with the JVM or its JIT you may know that there is a little magic happening called inlining. Inlining is often mentioned as one of the most powerful optimizations the JIT can apply to your code while it is being executed.

But what is inlining and why the heck does it make things faster?

Inlining is a technique that basically just "inlines" one method into another and so gets rid of a method invocation. The JIT automatically detects "hot" methods and tries to inline them for you. A method is considered "hot" if it was executed more than X times, where X is a threshold that can be configured using a JVM flag at startup (-XX:CompileThreshold, 10000 is the default). This is needed because inlining all methods would do more harm than good, due to the enormous amount of byte-code produced. Besides this, the JIT may "revert" previously inlined code when an optimization turns out to be wrong at a later stage. Remember, JIT stands for Just In Time, so it optimizes (which includes inlining, but also other things) while executing your code.

But even if the JVM considers a method to be "hot" it may not inline it. But why? One of the most likely reasons is that it is just too big to get inlined. How big a hot method can be and still get inlined is defined via the -XX:FreqInlineSize= flag; the default is 325 bytecode instructions on Linux 64 bit, but the value is platform dependent. Don't change this number unless you are 100% sure you understand what you are doing and the impact of it!
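By the way, you can check the actual default on your platform by asking the JVM itself; a quick way to do it (assuming a HotSpot JVM) is:

java -XX:+PrintFlagsFinal -version | grep FreqInlineSize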

So this gives us the first advice:

The JVM loves small methods

So if your method is hot but too big you should think about how you can make it smaller.

Now you may wonder how to find out what is inlined and what not. Fortunately it's quite easy to gather this information. All you need are some extra JVM flags during startup. Those are:

  • -XX:+PrintCompilation: Prints out when JIT compilation happens
  • -XX:+UnlockDiagnosticVMOptions: Is needed to use flags like -XX:+PrintInlining
  • -XX:+PrintInlining: Prints what methods were inlined

That's it. With those flags you will get a lot of information logged to STDOUT, so you should store it in a log file to better analyze it later. So with this background let us focus on how you can make the best use of it.

Optimizing performance by allowing for inlining - A real story

As most of you may know I'm working on the Netty Project as part of my day job. Netty tries to make development of asynchronous network applications easy while still providing excellent performance. So I often end up running benchmarks as part of my work, which was exactly what I did when I came across the "problem".

While doing the benchmark I started to wonder how well the JIT kicks in when Netty is used as a simple HTTP server. So I fired up the Hello World HTTP Server example with the previously mentioned JVM args like:

java -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining .... > inline.log

Now it was time to generate some workload on the HTTP server so it does some real-world work. For this I used one of my preferred benchmarking tools when it comes to HTTP: wrk.

wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 600 -c 1024 -t 8 http://127.0.0.1:8080/plaintext

This basically runs the test for 10 minutes with 1024 concurrent clients, each sending a simple GET request and waiting for a response. Nothing fancy here, so moving on ;)

After this completed it was time to review the inline.log file. Like I said before, the output is really noisy and looks like this:

66527 370 io.netty.channel.nio.NioEventLoop::processSelectedKeysOptimized (80 bytes)
    @ 14 java.nio.channels.SelectionKey::attachment (5 bytes) inline (hot)
  ! @ 33 io.netty.channel.nio.NioEventLoop::processSelectedKey (120 bytes) inline (hot)
    @ 1 io.netty.channel.nio.AbstractNioChannel::unsafe (8 bytes) inline (hot)
    @ 1 io.netty.channel.AbstractChannel::unsafe (5 bytes) inline (hot)
    @ 6 java.nio.channels.spi.AbstractSelectionKey::isValid (5 bytes) inline (hot)
    @ 26 sun.nio.ch.SelectionKeyImpl::readyOps (9 bytes) inline (hot)
    @ 1 sun.nio.ch.SelectionKeyImpl::ensureValid (16 bytes) inline (hot)
    @ 1 java.nio.channels.spi.AbstractSelectionKey::isValid (5 bytes) inline (hot)
  ! @ 42 io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe::read (191 bytes) inline (hot)
  ! @ 42 io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe::read (327 bytes) hot method too big
    @ 4 io.netty.channel.socket.nio.NioSocketChannel::config (5 bytes) inline (hot)
    @ 1 io.netty.channel.socket.nio.NioSocketChannel::config (5 bytes) inline (hot)
    @ 12 io.netty.channel.AbstractChannel::pipeline (5 bytes) inline (hot)
    @ 17 io.netty.channel.DefaultChannelConfig::getAllocator (5 bytes) inline (hot)
    @ 24 io.netty.channel.DefaultChannelConfig::getMaxMessagesPerRead (5 bytes) inline (hot)
    @ 44 io.netty.channel.DefaultChannelConfig::getRecvByteBufAllocator (5 bytes) inline (hot)
    @ 49 io.netty.channel.AdaptiveRecvByteBufAllocator::newHandle (20 bytes) executed < MinInliningThreshold times

The most interesting entries here are the methods marked as "hot method too big". Those are the methods which the JIT considers to be hot (so executed a lot) but too big to be considered for inlining at all. If you want to achieve max speed you don't want to see this ;) Or at least you want to try to make them shorter.

But how to shorten them without losing functionality? The solution is to move "less common patterns" out of the method into another method. This way the method itself becomes smaller while still being able to cover everything, at the cost of a dispatch to another method. This is exactly what I did for the io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe::read method.

Before the optimization it looked like this:

private final class NioMessageUnsafe extends AbstractNioUnsafe {
    ...
    @Override
    public void read() {
        assert eventLoop().inEventLoop();
        final SelectionKey key = selectionKey();
        if (!config().isAutoRead()) {
            int interestOps = key.interestOps();
            if ((interestOps & readInterestOp) != 0) {
                // only remove readInterestOp if needed
                key.interestOps(interestOps & ~readInterestOp);
            }
        }

        final ChannelConfig config = config();
        final int maxMessagesPerRead = config.getMaxMessagesPerRead();
        final boolean autoRead = config.isAutoRead();
        final ChannelPipeline pipeline = pipeline();
        boolean closed = false;
        Throwable exception = null;
        try {
            for (;;) {
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }
                if (readBuf.size() >= maxMessagesPerRead | !autoRead) {
                    break;
                }
            }
        } catch (Throwable t) {
            exception = t;
        }

        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        pipeline.fireChannelReadComplete();

        if (exception != null) {
            if (exception instanceof IOException) {
                // ServerChannel should not be closed even on IOException because it can often continue
                // accepting incoming connections. (e.g. too many open files)
                closed = !(AbstractNioMessageChannel.this instanceof ServerChannel);
            }
            pipeline.fireExceptionCaught(exception);
        }

        if (closed) {
            if (isOpen()) {
                close(voidPromise());
            }
        }
    }
    ...
}

What the method does is read messages and then pass them through a pipeline for further processing. But how to make the method smaller without losing functionality? The key here is that the method checks on every execution whether isAutoRead() == false, and if so removes the interest ops from the SelectionKey. But the default is true and will never be false, and almost no one will ever change this behavior (as it's for advanced usage only). So why not move that code out of the method, as we only need to save a few bytes…

So here we go:

private final class NioMessageUnsafe extends AbstractNioUnsafe {
    ...
    private void removeReadOp() {
        SelectionKey key = selectionKey();
        int interestOps = key.interestOps();
        if ((interestOps & readInterestOp) != 0) {
            // only remove readInterestOp if needed
            key.interestOps(interestOps & ~readInterestOp);
        }
    }

    @Override
    public void read() {
        assert eventLoop().inEventLoop();
        if (!config().isAutoRead()) {
            removeReadOp();
        }

        final ChannelConfig config = config();
        ...
    }
    ...
}

You see I just moved the logic into a new method called removeReadOp. Running the application again and re-running the same test as before, the JIT was finally able to inline it:

! @ 42 io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe::read (288 bytes) inline (hot)

This eliminates the overhead of a method invocation / dispatch and so makes the execution of the code faster. You can find the full details in issue #1812.

Make JIT's job easier

Besides having small methods there are other things you can do to help the JIT inline methods (see the sketch after this list). Inlining of a method gets a lot "easier" if you:

  • Use private methods when possible, as this way there is no need to check for other classes that override those methods
  • Use final classes / methods for the same reason as stated above
  • Use static methods for the same reason as stated above.
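To make the list above concrete, here is a minimal, hypothetical sketch (the class and method names are made up for illustration) showing method shapes that are easy for the JIT to bind statically and thus inline:

public final class Counter { // final: no subclass can override anything
    private long value;

    // private: statically bound, no virtual dispatch needed
    private void bump(long delta) {
        value += delta;
    }

    // static: no receiver, so no type-check at the call site
    public static int clamp(int v, int max) {
        return v < max ? v : max;
    }

    public long incrementAndGet() {
        bump(1); // small and statically bound: a good inlining candidate
        return value;
    }
}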

It's also fair to say that having a "flat" class hierarchy helps a lot. The JVM / JIT handles situations especially well where you have only two implementations of a specific interface or abstract base class, because it can handle things quite easily with an almost free instanceof check. For more information on this topic please check Cliff Click's post.
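As a hedged illustration of such a "flat" hierarchy (again with made-up names): a call site that only ever observes two concrete implementations stays bimorphic, and the JIT can inline both bodies behind that almost free check. A third implementation showing up at runtime would make the call site megamorphic and much slower.

interface Decoder {
    int decode(int b);
}

final class FastDecoder implements Decoder {
    public int decode(int b) { return b & 0x7F; } // first implementation
}

final class SafeDecoder implements Decoder {
    public int decode(int b) { return b; } // second implementation; still bimorphic
}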

Inline != Inline

Even if a method is inlined it may not perform as well as another inlined method. Why is that? Basically it makes a difference whether a protected/public method is inlined or a private/static one. Even when a protected / public method is inlined, it still needs a type-check to be safe, as another class may be loaded later that overrides/implements it. This is not the case for private / static methods, as the JVM / JIT knows those will always stay the same.

So in some situations it may pay off to just "copy" code and not use abstract / protected / public etc. to share it. But always think about the tradeoffs, the main one being maintenance hell. So only do it if you really need to. As always: measure it, and see if you really need the last 2% of performance. Don't say I haven't warned you ;)

Summary

So is it worth all the effort? As always it depends… But if you are sure the hot method is the one for the common use-pattern, and you can split it up to move the "non-common" path out of the method without making it complex as hell: YES, it's worth it.

Categories: FLOSS Project Planets

Norman Maurer: Reactive Streams

Planet Apache - Tue, 2014-07-29 11:28

As some of you may hopefully have noticed, today Reactive Streams was announced and left stealth-mode. The idea of the Reactive Streams project is to provide a well defined SPI/API for asynchronous processing of data with back pressure built in. This will make it quite easy to bridge different asynchronous frameworks together and pass data from one to the other and vice versa, with back pressure etc. working out of the box and without the user having to implement it himself.
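To give you an impression of how small the surface is, the interfaces look roughly like this (a sketch based on the published API, each interface living in its own file; the exact signatures were still in flux around the time of the announcement):

public interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

public interface Subscriber<T> {
    void onSubscribe(Subscription s);
    void onNext(T t);
    void onError(Throwable t);
    void onComplete();
}

public interface Subscription {
    void request(long n); // back pressure: the subscriber asks for n more elements
    void cancel();
}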

Vert.x and Reactive Streams

As Vert.x is one of these asynchronous frameworks / platforms that run on the JVM, we are already working on a prototype that allows Vert.x to be used with the proposed SPI/API. While the prototype is currently mainly focused on AsyncFile, I'm quite certain that other areas of Vert.x will follow once all the details are worked out and the SPI/API has stabilized.

Providing such a unified abstraction offers a lot of freedom to the user and simplifies the use of different projects that implement it.

For example once Vert.x, Akka, RxJava and Reactor all support it, passing data from one to the others would be as easy as this:

vertx.fileSystem().open("/path/to/file", new Handler<AsyncResult<AsyncFile>>() {
    @Override
    public void handle(AsyncResult<AsyncFile> result) {
        if (result.succeeded()) {
            AsyncFile file = result.result();
            file.produceTo(akkaStream).produceTo(rxjavaObservable).produceTo(reactorStream);
        } else {
            // handle error
        }
    }
});

All this processing is handled in an async manner and back pressure is applied. So stay tuned for more news on Reactive Streams, exciting times ahead.

So what?

Being part of such a movement is a big honour for me and I am looking forward to helping shape the future of asynchronous processing. Special thanks to Typesafe for driving the effort in the first place and Red Hat for allowing me to spend time on it.

Categories: FLOSS Project Planets

Norman Maurer: JNI Performance - Welcome to the dark side

Planet Apache - Tue, 2014-07-29 11:28

During the Holidays I finally found the mood to take on a task that had been on the to-do list for too long (I've been talking about it since 2012 - anyway, better late than never). The plan was to implement a native netty transport which doesn't use java.nio, but directly makes use of C and JNI and uses edge-triggered epoll, which is only available on Linux.

Seriously?

Yeah… The idea was to write a transport implementation that outperforms what ships with java.nio by making optimal use of the Thread-Model that powers Netty and is optimized for linux. Also, I wanted to practice my C and JNI skills again, as they felt a bit rusty. This blog post will talk about some performance issues related to JNI and other pitfalls that I encountered while working on the transport.

I will write up an extra post about the transport itself once it is opensourced, which will be in the next few weeks. In short, it outperforms the other netty transport which uses java.nio. This comes as no surprise, as the one provided by java.nio must be more generic than what I needed for netty and Linux.

Let me welcome you to the dark side!

Chris Isherwood

There are a few techniques you can use to improve the performance. These sections will cover them…

Caching jmethodID, jfieldID and jclass

When you work with JNI you often need to either access a method of a java object (jobject) or a field which holds some value. Also, you often need to get the class (jclass) to instantiate a new Object and return it from within your JNI call. All of this means you will need to make a "lookup" to get access to the needed jmethodID, jfieldID or jclass. But this doesn't come for free. Each lookup takes time and so affects performance if you are kicking the tires hard enough.

Luckily enough, there is a solution: caching.

Caching of jmethodID and jfieldID is straight forward. All you need to do is lookup the jmethodID or jfieldID and store it in a global field.

jmethodID limitMethodId;
jfieldID limitFieldId;

// Is automatically called once the native code is loaded via System.loadLibrary(...);
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    JNIEnv* env;
    if ((*vm)->GetEnv(vm, (void **) &env, JNI_VERSION_1_6) != JNI_OK) {
        return JNI_ERR;
    } else {
        jclass cls = (*env)->FindClass(env, "java/nio/Buffer");

        // Get the id of the Buffer.limit() method.
        limitMethodId = (*env)->GetMethodID(env, cls, "limit", "()I");

        // Get the int limit field of Buffer.
        limitFieldId = (*env)->GetFieldID(env, cls, "limit", "I");

        return JNI_VERSION_1_6;
    }
}

This way, every time you need to either access the field or the method you can just reuse the global jmethodID and jfieldID. This is safe even from different threads. You may be tempted to do the same with jclass, and it may work at first, but then bombs out later. This is because jclass is handled as a local reference and so can be recycled by the GC.

There is a solution, however, which will allow you to cache the jclass and eliminate subsequent lookups. JNI provides special methods to "convert" a local reference into a global one which is guaranteed not to be GC'ed until it is explicitly removed. For example:

jclass bufferCls;

// Is automatically called once the native code is loaded via System.loadLibrary(...);
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    JNIEnv* env;
    if ((*vm)->GetEnv(vm, (void **) &env, JNI_VERSION_1_6) != JNI_OK) {
        return JNI_ERR;
    } else {
        jclass localBufferCls = (*env)->FindClass(env, "java/nio/ByteBuffer");
        bufferCls = (jclass) (*env)->NewGlobalRef(env, localBufferCls);
        return JNI_VERSION_1_6;
    }
}

// Is automatically called once the Classloader is destroyed
void JNI_OnUnload(JavaVM *vm, void *reserved) {
    JNIEnv* env;
    if ((*vm)->GetEnv(vm, (void **) &env, JNI_VERSION_1_6) != JNI_OK) {
        // Something is wrong but nothing we can do about this :(
        return;
    } else {
        // delete global references so the GC can collect them
        if (bufferCls != NULL) {
            (*env)->DeleteGlobalRef(env, bufferCls);
        }
    }
}

Please note the explicit free of the global reference by calling DeleteGlobalRef(...). This is needed to prevent a memory leak as the GC is not allowed to release it. So remember this!

Crossing the borders

Typically, you have some native code which calls from java into your C code, but there are sometimes also situations where you need to access some data from your C (JNI) code that is stored in the java object itself. For this, you can call "back" into java from within the C code. One problem that is often overlooked is the performance hit it takes to cross the border. This is especially true when you call back from C into java.

The same problem hit me hard when I implemented the writev method of my native transport. This method basically takes an array of ByteBuffer objects and tries to write them via gathering writes for performance reasons. My first approach was to look up the ByteBuffer.limit() and ByteBuffer.position() methods and cache their jmethodIDs as explained before. This yielded the following:

JNIEXPORT jlong JNICALL Java_io_netty_jni_internal_Native_writev(JNIEnv * env, jclass clazz, jint fd, jobjectArray buffers, jint offset, jint length) {
    struct iovec iov[length];
    int i;
    int iovidx = 0;
    for (i = offset; i < length; i++) {
        jobject bufObj = (*env)->GetObjectArrayElement(env, buffers, i);
        jint pos = (*env)->CallIntMethod(env, bufObj, posId, NULL);
        jint limit = (*env)->CallIntMethod(env, bufObj, limitId, NULL);
        void *buffer = (*env)->GetDirectBufferAddress(env, bufObj);
        iov[iovidx].iov_base = buffer + pos;
        iov[iovidx].iov_len = limit - pos;
        iovidx++;
    }
    ...
    // code to write to the fd
    ...
}

After the first benchmark, I was wondering why the speed was not matching my expectations. I was only able to get about 530k req/sec with the following command against my webserver implementation:

# wrk-pipeline -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 8 --pipeline 16 http://127.0.0.1:8080/plaintext

After more thinking, I suspected that calling back into java code so often during the loop was the cause of the problems. So I checked the openjdk source code to find the names of the actual fields that hold the limit and position values. I changed my code as follows:

JNIEXPORT jlong JNICALL Java_io_netty_jni_internal_Native_writev(JNIEnv * env, jclass clazz, jint fd, jobjectArray buffers, jint offset, jint length) {
    struct iovec iov[length];
    int i;
    int iovidx = 0;
    for (i = offset; i < length; i++) {
        jobject bufObj = (*env)->GetObjectArrayElement(env, buffers, i);
        jint pos = (*env)->GetIntField(env, bufObj, posFieldId);
        jint limit = (*env)->GetIntField(env, bufObj, limitFieldId);
        void *buffer = (*env)->GetDirectBufferAddress(env, bufObj);
        iov[iovidx].iov_base = buffer + pos;
        iov[iovidx].iov_len = limit - pos;
        iovidx++;
    }
    ...
    // code to write to the fd
    ...
}

This change resulted in a boost of about 63k req/sec for a total of about 593k req/sec! Not bad at all…

Each benchmark iteration included a 20 minute warmup period followed by 3 runs of 2 minutes to gather the actual data.

The following graphs show the outcome in detail:

Lessons learned here are that crossing the border is quite expensive when you are pushing hard enough. The down-side of accessing the fields directly is that a change to the field itself will break your code. In the actual code (which I will blog about and release soon), this is handled gracefully by falling back to using the methods if the fields aren't found, and logging a warning.

Releasing with care

When using JNI, you often have to convert from some of the various j*Array instances to a pointer and release it again after you are done. So make sure all the changes are "synced" between the array you passed to the jni method and the pointer you used within the jni code. When calling Release*ArrayElements(...) you have to specify a mode to tell the JVM how it should handle the syncing of the array you passed in and the one used within your JNI code.

Different modes are:

  • 0

    Default: copy everything from the native buffer back to the java array, and free the native buffer.

  • JNI_ABORT

    Free the native buffer without copying anything back to the java array.

  • JNI_COMMIT

    Copy everything from the native buffer back to the java array, but don't free the native buffer; it must be released later.

Often people just use mode 0 as it is the "safest". But using 0 when you actually don't need it gives you a performance penalty. Why? Mainly because using 0 will trigger an array copy every time, but there are two situations where you won't need the array copy at all:

  1. You are not changing the values in the array at all; only reading them.
  2. The JVM returns a direct pointer to the java array which is pinned in memory. When this is the case, you won't need to copy the array over as you operate directly on the same data used by java itself. Whether or not the JVM does this depends on the JNI implementation. Because of this, you need to pass in a pointer to a jboolean when you obtain the elements. The value of this jboolean indicates whether a copy was made or if it is just pinned.

The following code modifies the native array and then checks if it needs to copy the data back or not.

JNIEXPORT jint JNICALL Java_io_netty_jni_internal_Native_epollWait(JNIEnv * env, jclass clazz, jint efd, jlongArray events, jint timeout) {
    int len = (*env)->GetArrayLength(env, events);
    struct epoll_event ev[len];
    int ready;

    // blocks until ev is filled and return if ready < 1.
    ....

    jboolean isCopy;
    jlong *elements = (*env)->GetLongArrayElements(env, events, &isCopy);
    if (elements == NULL) {
        // No memory left ?!?!?
        throwOutOfMemoryError(env, "Can't allocate memory");
        return -1;
    }
    int i;
    for (i = 0; i < ready; i++) {
        elements[i] = ev[i].data.u64;
    }
    jint mode;
    // release again to prevent memory leak
    if (isCopy) {
        mode = 0;
    } else {
        // was just pinned so use JNI_ABORT to eliminate a not needed copy.
        mode = JNI_ABORT;
    }
    (*env)->ReleaseLongArrayElements(env, events, elements, mode);
    return ready;
}

Doing the isCopy check may save you an array copy, so it's a good practice. There are more JNI methods that allow you to specify a mode, for which this advice also applies.

Summary

Hopefully, this post gave you some insight about JNI and the performance impact some operations have. The next post will cover the native transport for netty in detail, and give you some concrete numbers in terms of performance. So stay tuned ….

Thanks again to Nitsan Wakart, Michael Nitschinger and Jim Crossley for the review!

Useful JNI Links

Java Native Interface Specification

JNI Reference Example

JNI Tips

Categories: FLOSS Project Planets

Norman Maurer: The hidden performance costs of instantiating Throwables

Planet Apache - Tue, 2014-07-29 11:28

Today it's time to make you aware of the performance penalty you may pay when using Throwable, Error and Exception, and to give you a better idea of how to avoid it. You may never have thought about it, but using those in the wrong fashion can affect the performance of your applications to a large degree.

Alright, let us start from scratch. You may have heard that you should only use Exception / Throwable / Error for exceptional situations (something that is not the norm and signals unexpected behaviour). This is actually good advice, but even if you follow it (which I really hope you do) there may be situations where you need to throw one.

Throwing a Throwable (or one of its subtypes) is not a big deal. Well, it's not for free, but still not the main cause of performance issues. The real issue comes up when you create the object itself.

Huh?

So why is creating a Throwable so expensive? Isn't it just a simple light-weight POJO? Simple yes, but certainly not light-weight!

It's because the constructor will usually call Throwable.fillInStackTrace(), which needs to walk down the stack and record it in the newly created Throwable. This can affect the performance of your application to a large degree if you create a lot of them.

But what to do about this?

There are a few techniques you can use to improve the performance. Let's have a deeper look into them now.

Lazily create a Throwable and reuse it

There are some situations where you would like to use the same Throwable multiple times. In this case you can lazily create and then reuse it. This way you eliminate a lot of the initial overhead.

To make things clearer, let's have a look at a real-world example. In this example we assume that we have a list of pending writes which all need to be failed because the underlying Channel was closed.

The pending writes are represented by the PendingWrite interface as shown below.

public interface PendingWrite {
    void setSuccess();
    void setFailure(Throwable cause);
}

We have a Writer class which will need to fail all PendingWrite instances with a ClosedChannelException. You may be tempted to implement it like this:

public class Writer {
    ....

    private void failPendingWrites(PendingWrite... writes) {
        for (PendingWrite write: writes) {
            write.setFailure(new ClosedChannelException());
        }
    }
}

This works, but if this method is called often and with a not too small array of PendingWrites, you are in serious trouble. It will need to fill in the stacktrace for every PendingWrite you are about to fail!

This is not only very wasteful but also something that is easy to optimize, so let's bring it on…

The key is to lazily create the ClosedChannelException and reuse it for each PendingWrite that needs to be failed. And doing so will even result in the correct stacktrace being filled in… Jackpot!

So fixing this is as easy as rewriting the failPendingWrites(...) method as shown here:

public class Writer {
    ....

    private void failPendingWrites(PendingWrite... writes) {
        if (writes.length == 0) {
            return;
        }
        ClosedChannelException error = new ClosedChannelException();
        for (PendingWrite write: writes) {
            write.setFailure(error);
        }
    }
}

Notice we lazily create the ClosedChannelException only if needed (if we have something to fail) and reuse the same instance for all the PendingWrites in the array. This will dramatically cut down the overhead, but you can reduce it even more with some tradeoff which I will explain next…

Use static Throwable with no stacktrace at all

Sometimes you may not need a stacktrace at all as the Throwable itself is enough information for what's going on. In this case, you are able to just use a static Throwable and reuse it.

What you should remember in this case is to set the stacktrace to an empty array, so no "wrong" stacktrace shows up, which would cause a lot of headaches when debugging.

Let us see how this fits into our Writer class:

public class Writer {
    private static final ClosedChannelException CLOSED_CHANNEL_EXCEPTION = new ClosedChannelException();

    static {
        CLOSED_CHANNEL_EXCEPTION.setStackTrace(new StackTraceElement[0]);
    }
    ....

    private void failPendingWrites(PendingWrite... writes) {
        for (PendingWrite write: writes) {
            write.setFailure(CLOSED_CHANNEL_EXCEPTION);
        }
    }
}

But where is this useful?

For example in a network application a closed Channel is not a really exceptional state anyway. So this may be a good fit in this case. In fact we do something similar in Netty for exactly this case.

Caution: only do this if you are sure you know what you are doing!

Benchmarks

Now with all these claims it's time to actually prove them. For this I wrote a microbenchmark using JMH.

You can find the source code of the benchmark in the github repository. As there is no JMH version in any public maven repository yet, I just bundled a SNAPSHOT version of it in the repository. As this is just a SNAPSHOT it may get out of date at some point in time…. Anyway, it is good enough for us to run a benchmark and should be quite simple to update if needed.

This benchmark was run with:

# git clone https://github.com/normanmaurer/jmh-benchmarks.git
# cd jmh-benchmarks
➜ jmh-benchmarks git:(master) ✗ mvn clean package
➜ jmh-benchmarks git:(master) ✗ java -jar target/microbenchmarks.jar -w 10 -wi 3 -i 3 -of csv -o output.csv -odr ".*ThrowableBenchmark.*"

This basically means:

  • Clone the code
  • Build the code
  • Run a warmup for 10 seconds
  • Run warmup 3 times
  • Run each benchmark 3 times
  • Generate output as csv

The benchmark result contains the ops/msec. Each op represents a call of failPendingWrites(...) with an array of 10000 PendingWrites.

Enough said, time to look at the outcome:

As you can see, creating a new Throwable is by far the slowest way to handle it. Next comes lazily creating a Throwable and reusing it for the whole method invocation. The winner is reusing a static Throwable, with the drawback of not having any stacktrace. So I think it's fair to say using a lazily created Throwable is the way to go in most cases. If you really need the last 1% of performance you could also make use of the static solution, but you will lose the stacktrace for debugging. So you see, it's always a tradeoff.

Summary

You should be aware of how expensive Throwable.fillInStackTrace() is and so think hard about how and when you create new instances of Throwable. This is also true for subtypes as those will call the super constructor.
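Since Java 7 there is one more option worth noting: Throwable has a protected constructor with a writableStackTrace flag, so a custom subclass can skip the stack trace capture entirely. A minimal sketch (the exception name is made up for illustration):

public final class StaleConnectionException extends Exception {
    public StaleConnectionException(String message) {
        // enableSuppression=false, writableStackTrace=false:
        // fillInStackTrace() is never invoked for this instance
        super(message, null, false, false);
    }
}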

To make it short, nothing is for free so think about what you are doing before you run into performance problems later. Another good read on this topic is the blog post of John Rose.

Thanks again to Nitsan Wakart and Michael Nitschinger for the review!

Categories: FLOSS Project Planets

Ken Rickard: DrupalCamp Colorado

Planet Drupal - Tue, 2014-07-29 11:16

I'll be heading out to Denver to give a Sunday keynote at DrupalCamp Colorado.

The theme of the event is "Enterprise Drupal," so we'll be diving into what that phrase actually means for development firms.

If you're in Denver, please come on down and say hello.

Categories: FLOSS Project Planets

Martijn Faassen: On Naming In Open Source

Planet Python - Tue, 2014-07-29 10:37

Here are some stories on how you can go wrong with naming, especially in open source software.

Easy

Don't use the name "easy" or "simple" in your software as it won't be and people will make fun of it.

Background

People tend to want to use the word 'easy' or 'simple' when things really are not, to describe a facade. They want to paper over immense complexity. Inevitably the facade will be a leaky abstraction, and developers using the software are exposed to it. And now you named it 'easy', when it's anything but. Just don't give in to the temptation in the first place, and people won't make fun of it.

Examples

easy_install is a Python tool to easily and automatically install Python packages, similar to JavaScript npm or Ruby gems. pip is a more popular tool these days that does the same. easy_install hides, among many other complicated things, a full-fledged web scraper that follows links onto arbitrary websites to find packages. It's "easy" until it fails, and it will fail at one point or another.

SimpleItem is an infamous base class in Zope 2 that pulls in just about every aspect of Zope 2 as mixin classes. It's supposed to make it easy to create a new content type for Zope. The amount of methods made available is truly intimidating and anything but simple.

Demo

Don't use the word "demo" or "sample" in your main codebase or people will depend on it and you will be stuck with it forever.

Background

It's tempting in some library or framework consisting of many parts to want to expose an integrated set of pieces, just as an example, within that codebase itself. Real use of it will of course have the developers integrating those pieces themselves. Except they won't, and now you have people using Sample stuff in real world code.

The word Sample or Demo is fine if the entire codebase is a demo, but it's not fine as part of a larger codebase.

Examples

SampleContainer was a part of Zope 3 that ended up serving as the base class of most actual container subclasses in real world code. It was just supposed to demonstrate how to do the integration.

Rewrite

Don't reuse the name of software for an incompatible rewrite, unless you want people to be confused about it.

Background

Your software has a big installed base. But it's not perfect. You decide to create a new, incompatible version, without a clear upgrade path. Perhaps you handwave the upgrade path "until later", but that then never happens.

Just name the new version something else. Because the clear upgrade path may never materialize, and people will be confused anyway. They will find documentation and examples for the old system if they search for the new one, and vice versa. Spare your user base that confusion.

The temptation to do this is great; you want to benefit from popularity of the name of the old system and this way attract users to the shiny new system. But that's exactly the situation where doing this is most confusing.

Examples

Zope 3: there was already a very popular Zope 2 around, and then we decided to completely rewrite it and named it "Zope 3". Some kind of upgrade path was promised but conveniently handwaved. Immense confusion arose. We then landed pieces of Zope 3 in the old Zope 2 codebase, and it took years to resolve all the confusion.

Company name

If you want an open source community, don't name the software after your company, or your company after the software.

Background

If you have a piece of open source software and you want an open source community of developers for it, then don't name it after your company. You may love your company, but outside developers get a clear indication that "the Acme Platform" is something that is developed by Acme. They know that as outside developers, they will never gain as much influence on the development of that software as developers working at Acme. So they just don't contribute. They go to other open source software that isn't so clearly allied to a single business and contribute there. And you are left to wonder why developers are not attracted to work on your software.

Similarly, you may have great success with an open source project and now want to name your own company after it. That sends a powerful signal of ownership to other stakeholders, and may deter them from contributing.

Of course naming is only a part of what makes an open source project look like something a developer can safely contribute to. But if you get the naming bit wrong, it's hard to get the rest right.

Add the potential entanglement into trademark politics on top of it, and just decide not to do it.

Examples

Examples omitted so I won't get into trouble with anyone.

Categories: FLOSS Project Planets

YouID Identity Claim

Planet KDE - Tue, 2014-07-29 09:54

di:sha1;eCt+TB1Pj/vgY05nqB48sd1seqo=?http=trueg.selfhost.eu%3A8899


Categories: FLOSS Project Planets

Drupalize.Me: Guided Help Tours in Drupal 8 (sort of)

Planet Drupal - Tue, 2014-07-29 09:30
One of the neat new things in Drupal 8 is something called the Tour module. It is built on the Joyride jQuery plugin, which provides a clickable tour of HTML elements on your website. It gives you a way to walk a new user through your site or a particular interface with text instructions and next buttons. If you're not sure what this all means or looks like, have a look at the video below to see it in action in Drupal 8.

I was drawn to investigating the Tour module because I love ways of helping people through documentation. The Drupal core help system is an old system, and there have been many discussions and attempts to update it in the past. Tour certainly doesn't replace the help pages at this point, but it is an interesting new tool.

So what exactly is going on with it in Drupal 8? Will we have fancy new tours all over a default installation? Well, no. As it stands right now, there is only one tour in Drupal 8, which is for the Views building interface. It was submitted as a proof of concept with the Views module in core. So what's the deal?
Categories: FLOSS Project Planets

Matthias Wessendorf: Beta1 of the UnifiedPush Server 1.0.0 released

Planet Apache - Tue, 2014-07-29 08:47

Today we are announcing the first beta release of our 1.0.0 version. After the big overhaul with the last release, including a brand new AdminUI, this release contains several enhancements:

  • iOS8 interactive notification support
  • increased APNs payload (2k)
  • Pagination for analytics
  • improved callback for details on actual push delivery
  • optimisations and improvements

The complete list of included items is available on our JIRA instance.

iOS8 interactive notifications

Besides the work on the server, we have updated our Java and Node.js sender libraries to support the new iOS8 interactive notification message format.

If you're curious about iOS8 notifications, Corinne Krych has a detailed blog post on it and how to use it with the AeroGear UnifiedPush Server.

Swift support for iOS

On the iOS client side Corinne Krych and Christos Vasilakis were also busy starting some Swift work: our iOS registration SDK supports Swift on this branch. To give you an idea how it looks, here is some code:

func application(application: UIApplication!, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: NSData!) {
    // setup registration
    let registration = AGDeviceRegistration(serverURL: NSURL(string: "<# URL of the running AeroGear UnifiedPush Server #>"))

    // attempt to register
    registration.registerWithClientInfo({ (clientInfo: AGClientDeviceInformation!) in
        // setup configuration
        clientInfo.deviceToken = deviceToken
        clientInfo.variantID = "<# Variant Id #>"
        clientInfo.variantSecret = "<# Variant Secret #>"

        // apply the token, to identify THIS device
        let currentDevice = UIDevice()

        // --optional config--
        // set some 'useful' hardware information params
        clientInfo.operatingSystem = currentDevice.systemName
        clientInfo.osVersion = currentDevice.systemVersion
        clientInfo.deviceType = currentDevice.model
    },
    success: {
        println("UnifiedPush Server registration succeeded")
    },
    failure: { (error: NSError!) in
        println("failed to register, error: \(error.description)")
    })
}

Demos

To get easily started using the UnifiedPush Server we have a bunch of demos, supporting various client platforms:

  • Android
  • Apache Cordova (with jQuery and Angular/Ionic)
  • iOS

The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

For those of you who are into Swift, there are Swift branches for these demos as well:

Feedback

We hope you enjoy the bits and we do appreciate your feedback! Swing by on our mailing list! We are looking forward to hearing from you!


Categories: FLOSS Project Planets

End Point: Python Subprocess Wrapping with sh

Planet Python - Tue, 2014-07-29 08:35

When working with shell scripts written in bash/csh/etc., one of the primary tools you have to rely on is a simple method of piping output and input between subprocesses called by the script, to create complex logic that accomplishes the goal of the script. When working with python, this same method of calling subprocesses to redirect input/output is available, but the overhead of using it would be cumbersome enough to make python a less desirable scripting language. In effect you would be implementing large parts of the I/O facilities, and potentially even writing replacements for the existing shell utilities that perform the same work. Recently, python developers attempted to solve this problem by updating an existing python subprocess wrapper library called pbs into an easier to use library called sh.

Sh can be installed using pip, and the author has posted some documentation for the library here: http://amoffat.github.io/sh/

Using the sh library

After installing the library into your version of python, there are two ways to call any existing shell command available to the system. First, you can import the command as though it were itself a python library:

from sh import hostname
print(hostname())

In addition, you can also call the command directly by just referencing the sh namespace prior to the command name:

import sh
print(sh.hostname())

When running this command on my linux workstation (hostname atlas) it will return the expected results:

Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sh
>>> print(sh.hostname())
atlas

However at this point, we are merely replacing a single shell command which prints output to the screen; the real benefit of shell scripts was that you could chain together commands to create complex logic to help you do work.

Advanced Gymnastics

A common use of shell scripts is to provide administrators the ability to quickly filter log file output and to potentially search for specific conditions within those logs, to alert in the event that an application starts throwing errors. With python piping in sh we can create a simple log watcher, which would be capable of calling anything we desire in python when the log file contains any of the conditions we are looking for.

To pipe together commands using the sh library, you would encapsulate each command in series to create a similar syntax to bash piping:

>>> print(sh.wc(sh.ls("-l", "/etc"), "-l"))
199

This command would have been equivalent to the bash pipe "ls -l /etc | wc -l", indicating that the long listing of /etc on my workstation contained 199 lines of output. Each piped command is encapsulated inside the parentheses of the command that precedes it.

For our log listener we will use the tail command along with a python iterator to watch for a potential error condition, which I will represent with the string "ERROR":

>>> for line in sh.tail("-f", "/tmp/test_log", _iter=True):
...     if "ERROR" in line:
...         print line

In this example, once executed, python will call the tail command to follow a particular log file. It will iterate over each line of output produced by tail and, if any of the lines contain the string we are watching for, python will print that line to standard output. At this point, this would be similar to using the tail command and piping the output to a string search command like grep. However, you could replace the third line of the python with a more complex action: emailing the error condition out to a developer or administrator for review, or perhaps initiating a procedure to recover from the error automatically.

Conclusions

In this manner, with just a few lines of python, much like with bash, one can create a relatively complex process without recreating all the shell commands which already perform this work, or creating a convoluted wrapping process of passing output from command to command. This combines the existing shell commands with the power of python: you get all the functions available to any python environment, with the ease of using the shell commands to do some of the work. In the future I will definitely be using this python library for my own shell scripting needs, as I have generally preferred the syntax and ease of use of python over that of bash, but now I will be able to enjoy both at the same time.

Categories: FLOSS Project Planets

Russell Coker: Android Screen Saving

Planet Debian - Tue, 2014-07-29 08:28

Just over a year ago I bought a Samsung Galaxy Note 2 [1]. About 3 months ago I noticed that some of the Ingress menus had burned into the screen. Back in ancient computer times there were “screen saver” programs that blanked the screen to avoid this, then the “screen saver” programs transitioned to displaying a variety of fancy graphics which didn’t really fulfill the purpose of saving the screen. With LCD screens I have the impression that screen burn wasn’t an issue, but now with modern phones we have LED displays which have the problem again.

Unfortunately there doesn’t seem to be a free screen-saver program for Android in the Google Play store. While I can turn the screen off entirely, there are some apps such as Ingress that I’d like to keep running while the screen is off or greatly dimmed. Now I sometimes pull the notification menu down when I’m going to leave Ingress idle for a while; this doesn’t stop the screen burning, but it does cause different parts to burn, which alleviates the problem.

It would be nice if apps were designed to alleviate this. A long running app should have an option to change the color of its menus, and it would be ideal to randomly change the color on startup. If the common menus such as the “COMM” menu would appear in either red, green, or blue (the 3 primary colors of light) in a ratio according to the tendency to burn (blue burns fastest so should display least) then it probably wouldn’t cause noticeable screen burn after 9 months. The next thing that they could do is to slightly vary the position of the menus; instead of having a thin line that’s strongly burned into the screen there would be a fat line lightly burned in, which should be easier to ignore.

It’s good when apps have an option of a “dark” theme, that involves less light coming from the screen that should reduce battery use and screen burn. A dark theme should be at least default and probably mandatory for long running apps, a dark theme is fortunately the only option for Ingress.

I am a little disappointed with my phone. I’m not the most intensive Ingress player so I think that the screen should have lasted for more than 9 months before being obviously burned.

Related posts:

  1. Maintaining Screen Output In my post about getting started with KVM I noted...
  2. Android Device Service Life In recent years Android devices have been the most expensive...
  3. Cheap Android Tablet from Aldi I’ve just bought a 7″ Onix tablet from Aldi....
Categories: FLOSS Project Planets

GNUnet News: Talk @ Oxford: A Public Key Infrastructure for Social Movements in the Age of Universal Surveillance

GNU Planet! - Tue, 2014-07-29 08:08

On March 3rd 2014 Christian Grothoff gave a talk on "A Public Key Infrastructure for Social Movements in the Age of Universal Surveillance" at Oxford. You can now find the video below.

Categories: FLOSS Project Planets

Machinalis: Embedding Interactive Charts on an IPython Notebook - Part 2

Planet Python - Tue, 2014-07-29 08:01

On Part 1 we discussed the basics of embedding JavaScript code into an IPython Notebook, and saw how to use this feature to integrate D3.js charts. On this part we’ll show you how to do the same with Chart.js.

IPython Notebook

This post is also available as an IPython Notebook on github.com

Part 2 - Embedding ChartJS

First we need to declare the requirement using RequireJS:

%%javascript
require.config({
    paths: {
        chartjs: '//cdnjs.cloudflare.com/ajax/libs/Chart.js/0.2.0/Chart.min'
    }
});

The procedure is the same as before, we define a template that will contain the rendered JavaScript code, and we use display to embed the code into the running page.

Chart.js is an HTML5 charting library capable of producing beautiful graphics with very little code. Now we want to plot the male and female population by region, division and state. We’ll use interact with a new callback function called display_chart_chartjs, but this time we don’t need custom widgets as we’re only selecting a single item (we’ll use a DropdownWidget). We’ve also included the show_javascript checkbox.

i = interact(
    display_chart_chartjs,
    sc_est2012_sex=widgets.fixed(sc_est2012_sex),
    region=widgets.fixed(region),
    division=widgets.fixed(division),
    show_javascript=widgets.CheckboxWidget(value=False),
    show=widgets.DropdownWidget(
        values={'By Region': 'by_region', 'By Division': 'by_division', 'By State': 'by_state'},
        value='by_region'
    ),
    div=widgets.HTMLWidget(value='<canvas width=800 height=400 id="chart_chartjs"></canvas>')
)

As you can see, the library generates a beautiful and simple animated column chart. There’s not much in terms of customization of Chart.js charts, but that makes it very easy to use.

In the last part (coming soon), we’ll show you how to embed HighCharts charts.

Categories: FLOSS Project Planets

[GSoC'14]: Chronicle of a hitchhiker’s journey so far

Planet KDE - Tue, 2014-07-29 07:43

nuqneH [Klingon | in English- "Hello"], I am Avik [:avikpal] and this summer I got the opportunity to work with Andreas Cord-Landwehr [:CoLa] to contribute to the KDE-Edu project Artikulate. My task is to implement a way to tell a learner how well his/her pronunciation compares to that of a native speaker.

Let me warn you about a couple of things beforehand; firstly, the post is going to be a bit lengthy to read but I have tried to keep things interesting; secondly, I have a habit of addressing people by their IRC nicks, though I have tried to put their real names as well ;)

So let me dive right into what I have been doing for the last couple of months. The first thing I had to do was to port Artikulate to QtGStreamer 1.0. The API changes in QtGStreamer mainly follow the changes performed in GStreamer 1.0. The biggest change is that Gst::PropertyProbe is gone, or in our case QGst::PropertyProbePtr, which resulted in a compilation error. So the related code had to be adapted, i.e. reworked to do the same thing. I got some great insights and tips from George Kiagiadakis [:gkiagia] and Diane Trout [:detrout] at #qtgstreamer and finally resolved this.

But I was still getting a runtime error because Artikulate was linking to both libgstreamer-1.0.so.0 and libgstreamer-0.10.so.0. This is a very common problem, as GStreamer does not use symbol versioning and in some cases programs end up linking to both of them through indirect shared library dependencies. I used pax-utils and lddtree (thanks to CoLa for telling me about these two great tools) to find the cause of the linking error. It turned out libqtwebkit.so.4 links the GStreamer 0.10 shared library as its dependency. CoLa got libqtwebkit built against GStreamer 1.0 and did some code changes and refactoring.

We also decided against keeping the Phonon multimedia backend, and Artikulate now supports only the GStreamer backend. Precisely, with Artikulate we are at QtGStreamer 1.2, and for the last few days the CI system has had it as well. This is just a heads up - I will let CoLa share the details of this work himself, so stay tuned.

For pronunciation comparison I had initially decided to generate a fingerprint of the audio file and then compare the two fingerprints (i.e. learner pronunciation and native pronunciation). Most of the phrases available with the trainer have one or two syllables and are around 4-5 seconds in duration. The present Chromaprint APIs don’t generate distinguishable fingerprints for audio of such short duration. I talked to Lukas Lalinsky from Acoustid about how the Chromaprint library could be tweaked to get distinguishable fingerprints for short audio files. Chromaprint does an STFT analysis (FFT over a sliding window), and the window size and overlap determine how much data the algorithm generates. I went on trying to improve the results by tweaking the library, but it was giving me only erratic data.

This was the time when I decided it would be prudent to start writing a very basic audio fingerprint generator to serve my purpose. The concept is well discussed and illustrated in numerous papers and blogs, so it wasn’t hard to break it up into modules.

The first job was to generate a spectrogram of the audio clip. I used the sox API to generate a spectrogram - the following image shows such a spectrogram.

Spectrogram of ‘European Union’ pronounced by me in Bengali

Next I wrote code to find the peaks in amplitude, where a peak is a (time, frequency) pair corresponding to an amplitude value which is the greatest in a local neighborhood around it. Other pairs around it are lower in amplitude, and thus are less likely to survive noise.
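Expressed a bit more formally (my notation, not from any particular paper): treating the spectrogram as a function S(t, f), a point (t, f) is kept as a peak iff

S(t, f) \ge S(t', f') \quad \text{for all } (t', f') \text{ with } |t - t'| \le \Delta t,\; |f - f'| \le \Delta f

where \Delta t and \Delta f are the half-sizes of the local neighborhood.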

My next job is to group these neighborhood peaks into collections/bins and then use a hash function to get the final fingerprint. I am currently working on implementing this part.

Now, to get the peaks out of the spectrogram I first found the histogram of the image, and there came an idea: to see how different the histograms of the spectrograms of two pronunciations are. There are several statistical ways to compare histograms, and so far the results that I have found are quite promising. I shall try to demonstrate using an example.

I asked CoLa for audio recordings of the word “weltmeisterschaft” [World Champion in English] and he sent me several recordings - let me take a couple of those.

And its spectrogram looks like this-

CoLa’s pronunciation (sample 1)

And this is another sample from CoLa

And its spectrogram looks like-

CoLa’s pronunciation (sample 2)

It may be noted that in the above two spectrograms there is only a linear shift by a small amount, which is expected and desired.

Before giving examples of my pronunciations let me clarify how I have compared the two histograms. To compare two histograms (H1 and H2), first we have to choose a metric (d(H1,H2)) to express how well both histograms match. I have computed four different metrics to compute the matching: Correlation, Chi-Square, Intersection and Bhattacharyya distance.
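For reference, the standard definitions of these metrics (as used, for example, by OpenCV's compareHist; I quote them here for clarity, the exact implementation in my code may differ slightly) are, for histograms with N bins and \bar{H_k} = \frac{1}{N}\sum_i H_k(i):

Correlation: d(H_1, H_2) = \frac{\sum_i (H_1(i) - \bar{H_1})(H_2(i) - \bar{H_2})}{\sqrt{\sum_i (H_1(i) - \bar{H_1})^2 \sum_i (H_2(i) - \bar{H_2})^2}}

Chi-Square: d(H_1, H_2) = \sum_i \frac{(H_1(i) - H_2(i))^2}{H_1(i)}

Intersection: d(H_1, H_2) = \sum_i \min(H_1(i), H_2(i))

Bhattacharyya: d(H_1, H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1}\bar{H_2}N^2}} \sum_i \sqrt{H_1(i) \cdot H_2(i)}}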

Next I present to you my first attempt at pronouncing “weltmeisterschaft”

Yeah I admit, though it was my first attempt I obviously could have done better, and it sounds kind of like “wetmasterschaft”. And here is the report card (err….spectrogram) of my poor performance

My first attempt

But I was not ready to give up yet…. I made some disgusting though necessary gurgling sounds and tried to set my vocals into tune and this is what I came up with.

and its spectrogram looks like

weltmeisterschaft- by me after a few attempts

Now I shall show you how the comparison metrics stack up - for the correlation and intersection methods, the higher the metric the more accurate the match; for the other two, the lower the metric the better the match.

*this is actually a comparison between the two pronunciation samples from CoLa with which the rest are compared - it is just to give a sense of the accuracy achievable.

The next job is to converge on a single metric which takes into account all four metrics that I currently have. Meanwhile I will also work on the fingerprinting part, as it would make it possible to point out the specific parts of a pronunciation that need further improvement. I am working on removing noise from the spectrograms, as that is needed for finding the intensity peaks (part of the fingerprinting work) - I have finished writing code to find an intensity threshold for the noise from the histogram.

Below is a histogram of the spectrogram of my somewhat better pronunciation of “weltmeisterschaft”-

Histogram- different colours depict different channels

I hope to combine all these modules into a standalone application and share it with community members for their testing; meanwhile you may use the code at https://github.com/avikpal/noise-removal-and-sound-visualization and test it yourself.

Now it’s time to fire up the warp machine, but even in a parallel universe I will be eagerly listening to #kde-artikulate with my identifier being “avikpal” for any kind of suggestions and/or queries. You may also mail me at avikpal[dot]me[at]gmail[dot]com.

Qapla’![Klingon | in English- "Good-bye"] until next time.


Categories: FLOSS Project Planets

Russell Coker: Happiness and Lecture Questions

Planet Debian - Tue, 2014-07-29 06:57

I just attended a lecture about happiness comparing Australia and India at the Australia India Institute [1]. The lecture was interesting but the “questions” were so bad that they make a good case for entirely banning questions from public lectures. Based on this and other lectures I’ve attended, I’ve written a document about how to recognise worthless questions and cut them off early [2].

As you might expect from a lecture on happiness there were plenty of stupid comments from the audience about depression, as if happiness is merely the absence of depression.

Then they got onto stupidity about suicide. One “question” claimed that Australia has a high suicide rate; Wikipedia, however, places Australia 49th out of 110 countries, which means Australia is slightly above the median suicide rate per country. Given some of the dubious statistics in the list (for example, the countries claiming to have no suicides and the low numbers reported by some countries with extreme religious policies) I don’t think we can be sure that Australia would be above the median if we had better statistics. Another “question” claimed that Sweden had the highest suicide rate in Europe, while Greenland, Belgium, Finland, Austria, France, Norway, Denmark, Iceland, and most of Eastern Europe are higher on the list.

But the bigger problem in regard to discussing suicide is that the suicide rate isn’t a measure of happiness. When someone kills themself because they have a terminal illness, that doesn’t mean they were unhappy for the majority of their life, and it doesn’t mean they were any unhappier than the terminally ill people who don’t do that. Some countries, Japan for example, have a culture that is more accepting of suicide, which would increase the incidence. While people who kill themselves in Japan are probably quite unhappy at the time, I don’t think there is any reason to believe that they are more unhappy than people in other countries who keep living only because suicide is considered to be wrong.

It seems to me that the best strategy when giving or MCing a lecture about a potentially contentious topic is to plan ahead for what not to discuss. For a lecture about happiness it would make sense to rule out all discussion of suicide, anti-depressants, and related issues as they aren’t relevant to the discussion and can’t be handled in an appropriate manner in question time.

Related posts:

  1. Length of Conference Questions After LCA last year I wrote about “speaking stacks” and...
  2. Questions During Lectures An issue that causes some discussion and debate is the...
  3. Ziggy’s Lecture about Nuclear Power The Event I just attended a lecture by Dr Ziggy...
Categories: FLOSS Project Planets

Johannes Schauer: bootstrap.debian.net temporarily not updated

Planet Debian - Tue, 2014-07-29 04:46

I'll be moving places twice within the next month and as I'm hosting the machine that generates the data, I'll temporarily suspend the bootstrap.debian.net service until maybe around September. Until then, bootstrap.debian.net will not be updated and retain the status as of 2014-07-28. Sorry if that causes any inconvenience. You can write to me if you need help with manually generating the data bootstrap.debian.net provided.

Categories: FLOSS Project Planets

Ricardo Mones: Switching PGP keys

Planet Debian - Tue, 2014-07-29 04:42
Finally I've found the mood to do this, a process which started 5 years ago at DebConf 9 in Cáceres. I followed Ana's post, of course with my preferred options and my name, not like some others ;-).

That said, dear reader, if you have signed my old key:

1024D/C9B55DAC 2005-01-19 [expires: 2015-10-01]
Key fingerprint = CFB7 C779 6BAE E81C 3E05 7172 2C04 5542 C9B5 5DAC
and want to sign my "new" and stronger key:

4096R/DE5BCCA6 2009-07-29
Key fingerprint = 43BC 364B 16DF 0C20 5EBD 7592 1F0F 0A88 DE5B CCA6
You're welcome to do so :-)

The new key is signed with the old, and the old key is still valid, and will probably be until expiration date next year. Don't forget to gpg --recv-keys DE5BCCA6 to get the new key and gpg --refresh-keys C9B55DAC to refresh the old (otherwise it may look expired).

Debian's Keyring Team has already processed my request to add the new key, so all should keep working smoothly. Kudos to them!
Categories: FLOSS Project Planets

Pedro Rocha: Like &amp; Dislike widgets for Drupal

Planet Drupal - Tue, 2014-07-29 03:31
It sounds simple, but while Drupal has the awesome Voting API, with Fivestar, Vote Up/Down and many other voting modules, we didn't have a "ready to use" solution for a "Like" widget like the ones we see on Facebook and many other social networks. Another issue is the "Dislike" widget, which many people avoid but which we sometimes do need. Until now, that is, with the Like & Dislike module!
Categories: FLOSS Project Planets

Kristian Polso: How to make language switcher links link to frontpage in Drupal 7

Planet Drupal - Tue, 2014-07-29 02:54
Drupal has a block called "Language Switcher", which displays links to the different language versions of the current page/node. If the node does not have a translated version in the specified language, the block will not display a link for it. This can cause some confusion, since the user expects to see links to all of the site's languages. This can be fixed by modifying the block so that all of the links point to the corresponding language's frontpage. It is easy to do by editing the site's theme.
Categories: FLOSS Project Planets

Bryan Pendleton: Backpacking 2014: Matlock Lake, John Muir Wilderness

Planet Apache - Tue, 2014-07-29 00:55

It was that time of year again, so I packed up the pack, loaded up the car, and headed off with the gang.

We spent the night in Bishop so we could get an early start, and by 9:00 AM we were at the Onion Valley Trailhead.

Our destination was Matlock Lake, in the John Muir Wilderness Area.

The John Muir Wilderness Area is the most spectacular of all the central Sierra Nevada wilderness areas, and is also the location of the world famous John Muir Trail, surely the most amazing backpacking trail in the lower 48 states. Our trip didn't actually take us on the John Muir Trail, but we came very close to it, and there were many hikers on our trail headed to and from the JMT/PCT (in this area they are the same trail).

Although our hike was short, the elevation gain and overall altitude were substantial: the Onion Valley trailhead is at 9,200 feet and our campsite was at 10,600 feet; with the various ups and downs of the trail, Deb's FitBit registered an astonishing 180 staircases by the completion of the first day's hike.

So we were good and tired at the end of the first day!

We were lucky enough to have clear skies, fair weather, and a new moon, giving us near-perfect star gazing and an active evening debate about whether or not we were seeing Iridium flares.

On the second day we took a cross-country scramble to nearby Bench Lake, a gorgeous hidden lake which is about 300 feet above Matlock Lake. Bench Lake was beautiful and secluded, and the views were marvelous.

On our third day, we took a trip up and over Kearsarge Pass. At 11,800 feet, this pass was the highest I've been on foot in many years, perhaps decades. Sitting at the pass is an astonishing spectacle, as you can see, simultaneously, more than 60 miles to the east, across the Owens Valley and beyond, and more than 25 miles to the west, down through Kings Canyon National Park and beyond.

The trail to the pass is well-maintained and pleasant (it has to be, as it is a major mule train pack trail), but the conditions at the pass are not suited for long periods of relaxation; as another hiker at the pass commented, "above 10,000 feet in the Sierras there are only two situations that apply: either the sun is out and you are too hot, or there are clouds and you are too cold."

Indeed it is true.

Some of our party extended the hike by dropping down to visit the Kearsarge Lakes and then returning via the pass, while I made it a shorter day by returning straight to Matlock Lake where I had time for an afternoon dip in the lake.

The lakes in this area are at the forefront of a concerted campaign to save the mountain yellow-legged frog:

Mountain yellow-legged frogs in the California Sierra Nevada are disappearing at an alarming rate, primarily due to a virulent fungus. UCSB scientists Cheryl Briggs and Roland Knapp are racing to understand why some populations of frogs succumb and others survive, with the aim of not only saving the frogs but also gaining knowledge of how and why organisms develop resistance to virulent pathogen attack.

Of course, your perspective on this may depend on where you are standing: Feds to list frogs as ‘endangered’

A final decision on the critical habitat proposal is expected to be made early next year, but the proposals were met with opposition in the Eastern Sierra, where residents said there is a fear that the designations will close off backcountry access (or at least appear to) and negatively impact the local, tourist-based economy. Some of those fears were validated during the public comment process when it was brought to light that the California Department of Fish and Wildlife had been removing trout from backcountry lakes for several years in an effort to protect the frogs and prevent an endangered species or critical habitat listing.

With fishing season kicking off today, anglers seeking solitude in the remote reaches of the Sierra Nevada are being advised that several lakes that were once thriving fisheries have been cleared of all trout to protect the frogs, which are eaten by the fish in their tadpole form.

Sure enough, there was not a fish to be found in the lakes we visited. As the Inyo Register observed:

In the Independence area, the DFW has removed trout from Bench, Matlock and Slim lake but the higher-elevation waters continue to produce trout. "We have no plans to remove fish from any of those."

It's not clear which higher-elevation waters they mean.

But we definitely saw lots of tadpoles and frogs, so that part of the program is certainly working!

And there is plenty of other life to see in the woods. With campfire restrictions nearly universal in California this year, we were limited to enjoying the wilderness around our camp stove, but I think this is a good thing, for Dead Trees Are Anything But Dead.

After three beautiful days in the wilderness, it was time to get back to coding, so we broke camp and had a pleasant walk down the hill to our cars.

I think it was a good thing we left as we did, for between the massive storm that came racing up from the south and the forest fire near El Portal, we were looking back over our shoulders at storm clouds and smoky skies during our drive home.

Although we might have wished for a bit more solitude, overall this was a near perfect trip for us: the weather was perfect, the scenery was glorious, and everything went just as you would hope a backpacking trip would go.

If you're looking for a great place to go backpacking, and you haven't yet tried the Kearsarge Pass trail out of the Onion Valley Trailhead, you should put it on your list.

Categories: FLOSS Project Planets