FLOSS Project Planets

unifont @ Savannah: Unifont 7.0.05 Now Available

GNU Planet! - Sat, 2014-10-18 00:40

Unifont version 7.0.05 is now available for download at ftp://ftp.gnu.org/gnu/unifont/unifont-7.0.05/.

Unifont is part of the GNU Project. It is a dual-width font,
with TrueType and other versions created from an underlying pixel
map. Glyphs are composed on either an 8-by-16 pixel grid or
a 16-by-16 pixel grid. Its goal is to provide a low-resolution font that covers all of Unicode's Basic Multilingual Plane (Plane 0).

This version includes over 5,400 glyphs in the Unicode Supplemental Multilingual Plane (Plane 1), in addition to complete coverage of the Basic Multilingual Plane and several scripts in Michael Everson's ConScript Unicode Registry (CSUR).

Further details are available at https://savannah.gnu.org/projects/unifont/ and at http://unifoundry.com/unifont.html.

Paul Hardy

Categories: FLOSS Project Planets

Doug Vann: Drupal Training at Drupal Camps And Why We Need More Of It

Planet Drupal - Fri, 2014-10-17 21:41

Drupal Camp Road Warrior
By the end of 2014, I will have hit 50 Drupal Camps! It took 72 months to hit 22 cities, in 16 states! In that time, I've seen Drupal Camps run in almost every conceivable way. From Madison, WI to Orlando, FL, from New York, NY to San Diego, CA, I've seen thousands of attendees flocking to these events, all hoping to grow in their knowledge and understanding of Drupal. In my experience, the system works -- mostly.
But, we can do better.

We all know the drill
You assemble a bunch of speakers. They deliver a bunch of sessions. You try to group these sessions into tracks, if you can. You wrestle with how to add a few sessions about the Drupal community, or maybe about business, or a few odd sessions that don't fit into your tracks. Oh yeah... you almost forgot about the beginners, so you have a session or two that demystifies one topic or another.

The N00B experience
You would be surprised at how many people show up to a Drupal Camp who don't know what a node is. Or if they do know what a node is, they don't know how to create their own content types. Or if they do know how to create content types, they don't know how to create Views. These people show up and attend sessions that they have little chance of comprehending. They sit down for up to an hour per session listening to senior developers from major Drupal shops talk about nodes and fields and blocks and views-displays and modules. The whole time they may be thinking, "Dang! I thought by showing up for a day or two I would start picking this stuff up!?" But they're not.

Meet the N00Bs
Who are these people who are "New To Drupal"? Well, for starters, they're probably not really that new to Drupal! Based on my experience, here is an incomplete list of people who regularly attend my classes.

  1. Certainly anyone who just discovered Drupal very recently and has come to the camp to gain a better understanding of Drupal. [This is not always the biggest portion]
  2. Individuals who have been to a couple camps and have tried to read the books or watch the videos but still haven't had the needed "AHA!" moments to grasp it all.
  3. Individuals who work for a University or Government or Company who uses, or is considering, Drupal. [This is a BIG ONE]
    • People, often with other web skills [sys admins, java, asp, php, etc] who are sent by their employers to scope out Drupal and/or to learn how to use it.
    • People coming to gain skills in an effort to alleviate their, or their employer's, dependency on vendors. [This happens a lot!]
    • New hires to Drupal shops or Design shops or shops offering web related services who are looking to better provide Drupal related services. 
    • People who know plenty, but want to make sure they are properly grounded.
    • People who come in the hopes of asking lots of questions!

I've seen all that and more. Multitudes of people are coming to camps in hopes of really wrapping their minds around how Drupal solves the modern problem of publishing dynamic content on the web. Too often, without a day of training, they leave the camp with the same questions they arrived with, plus new ones.

What they really want/need
After attending camp after camp, one thing is clear: people are coming to learn what Drupal is and how to use it. If the camp has no full-day training opportunity, then many will drown in the other sessions and simply not get what they really need.
I'll just be frank at this point. I believe that every camp needs to have a full day of beginner training. I believe that this training should be delivered not across differing tracks with differing speakers, but by the same individual, or group of individuals, working together to provide the full day of training. I have done this time and time again and I see the relief on people's faces as they gain a practical understanding of the power and flexibility of Drupal and how they can leverage it. This day of training starts them down the road of really learning Drupal. If there's a 2nd day of camp, I can PROMISE you that they will get far more out of it after a day of training.

How to provide a day of training at a Drupal Camp
There are many ways! Here's a list that is, by no means, exhaustive.

  1. Some camps have a dedicated day just for trainings on the day before the regular camp.
    • This is effective not only for beginner classes but also for classes on SEO, Drupal 8, module development, etc.
    • Most often the training takes place in the same location as the camp, but occasionally it is held elsewhere.
  2. Some camps simply reserve one track and dedicate it to a full day of training.
    • I've done this quite a few times where I have a room all day while others hop from session to session.
    • This is easier if you can't dedicate a whole day to training.
  3. The content of the full-day Drupal beginner's training varies.
    • In some camps, someone leads the class through Acquia's Drupal In A Day curriculum.
    • Some camps have a vendor come in and do the training
      • Doug Vann! If you want me to join your camp and present a day of training call me at 765-5-DRUPAL or CONTACT ME
      • I've seen posts from BLINK REACTION & OSTRAINING about their various full day offerings at Drupal Camps as well.
      • If I missed anyone who has travelled to multiple camps and provided full day trainings in the past and would do so again, leave a comment and I'll add you here. :-)
    • Some camps have used the BuildAModule.com Mentored Training method.
  4. The finances of a full day of training. Here's how I've experienced this as a trainer.
    • Some camps offer it for free or as part of the Camp fee that attendees have already paid.
    • Some camps charge attendees enough to cover the cost of catering.
    • Some camps charge a flat fee per attendee and share a percentage with the trainer.
    • Some camps procure a "training sponsor" and hand that sum off to the trainer.

Every Drupal Camp can do this! I've been invited to one-day camps and they give me one of their rooms for the whole day. I show up and deliver the full day of Drupal Beginner Training. Sadly, I never get to see any of the other sessions. Oh well... After 50 Drupal Camps, I've seen plenty of Drupal Sessions! :-)
Providing a full day of training will definitely raise your attendance. Universities, Governments, and Companies will send people. People will ask their employers to send them. Sponsors will really appreciate the fact that you're providing extra value to a broader audience.
Seriously folks... What more can I say? 

Full Day Trainings at Drupal Camps is a Big Win for everyone involved!




Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-17

Planet Apache - Fri, 2014-10-17 19:58
Categories: FLOSS Project Planets

David Szotten: Strace to the rescue

Planet Python - Fri, 2014-10-17 19:00

Recently I was debugging a strange error when using the eventlet support in the new coverage 4.0 alpha. The issue manifested itself as a network connection problem: Name or service not known. This was confusing, since the host it was trying to connect to was localhost. How can it fail to resolve localhost?! Switching off the eventlet tracing, the problem went away.

After banging my head against this for a few days, I finally remembered a tool I rarely think to pull out: strace.

There's an excellent blog post showing the basics of strace by Chad Fowler, The Magic of Strace. After tracing my test process, I could easily search the output for my error message:

11045 write(2, "2014-10-15 09:16:48,348 [ERROR] py2neo.packages.httpstream.http: !!! NetworkAddressError: Name or service not known", 127) = 127

and a few lines above lay the solution to my mystery:

11045 open("/etc/hosts", O_RDONLY|O_CLOEXEC) = -1 EMFILE (Too many open files)

It turns out the eventlet tracer was causing my code to leak file descriptors, (a problem I'm still investigating), eventually hitting my relatively low ulimit. Bumping the limit in /etc/security/limits.conf, the problem disappeared!
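When chasing a suspected descriptor leak like this, it can help to check the process's limit and current usage directly from Python before (or alongside) strace. A minimal sketch using only the standard library (the /proc path is Linux-specific):

```python
import os
import resource

# Query the soft and hard limits on open file descriptors (RLIMIT_NOFILE)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"fd limit: soft={soft}, hard={hard}")

# On Linux, /proc/self/fd lists the process's currently open descriptors,
# so a steadily growing count points at a leak
if os.path.isdir("/proc/self/fd"):
    print(f"open fds: {len(os.listdir('/proc/self/fd'))}")
```

Logging that count periodically while the suspect code runs makes a leak visible long before EMFILE is hit.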

I must remember to reach for strace sooner when trying to debug odd system behaviours.
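Large strace logs can be noisy, and the failing call that matters is easy to miss. A small filter that pulls out only syscalls returning -1, like the EMFILE above, is a handy companion; a hypothetical sketch:

```python
import re

# Match "<pid> <syscall>(...) = -1 <ERRNO>" lines from strace output
FAIL = re.compile(r'^\s*\d+\s+(\w+)\(.*=\s*-1\s+(E\w+)')

def failing_syscalls(lines):
    """Return (syscall, errno) pairs for every failed call in the trace."""
    hits = []
    for line in lines:
        m = FAIL.match(line)
        if m:
            hits.append((m.group(1), m.group(2)))
    return hits

trace = [
    '11045 open("/etc/hosts", O_RDONLY|O_CLOEXEC) = -1 EMFILE (Too many open files)',
    '11045 write(2, "...", 127) = 127',
]
print(failing_syscalls(trace))  # [('open', 'EMFILE')]
```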

Categories: FLOSS Project Planets

teximpatient @ Savannah: Chinese translation of TeX for the Impatient

GNU Planet! - Fri, 2014-10-17 18:36

Zou Hu (thank you!) has completed a Chinese translation of TFTI (starting from earlier work by others). The sources are available from https://bitbucket.org/zohooo/impatient and a PDF is at http://zoho.is-programmer.com/user_files/zoho/epics/tex-impatient-cn.pdf ...

Categories: FLOSS Project Planets

FSF Blogs: Recap of Friday Free Software Directory IRC meetup: October 17

GNU Planet! - Fri, 2014-10-17 18:03

In today's Friday Free Software Directory (FSD) IRC Meeting we approved updates to several entries; we added a new category, System-administration/virtualization; and we also sent emails to the maintainers of two different programs asking them if, in addition to publishing their source code, they would consider making it free software. We also added a new entry that I am looking forward to trying out this weekend:

  • tocc, a tag-based file management system that allows you to tag/classify any file in your file system. It is licensed under the terms of the GNU GPL version 3 or (at your option) any later version.

In addition to all this good work, we also had some discussions related to Respects Your Freedom computer hardware certification, which makes me think that we should make RYF a theme for an upcoming meeting!

You can join in our discussions and help improve the Free Software Directory every Friday! Find out how to attend the Friday Free Software Directory IRC Meetings by checking our blog or by subscribing to the RSS feed.

Categories: FLOSS Project Planets


Forum One: DrupalCon Amsterdam: Done and Deployed

Planet Drupal - Fri, 2014-10-17 16:14

DrupalCon Amsterdam 2014…what a week! Drupal 8 Beta released, core contributions made, and successful sessions presented!

Drupal 8 Beta — has a nice ring to it, don’t you think?! But what exactly does that mean? According to the drupal.org release announcement, “Betas are good testing targets for developers and site builders who are comfortable reporting (and where possible, fixing) their own bugs, and who are prepared to rebuild their test sites from scratch if necessary. Beta releases are not recommended for non-technical users, nor for production websites.” Or more simply put, we’re over the hump, but we’re not there yet. But you can help!

Contrib to Core

One of the biggest focal points of this DrupalCon was contributing to Drupal 8 core in the largest code sprints of the year. Specially trained mentors helped new contributors set up their development environments, find tasks, and work on issues. This model is actually repeated at Drupal events all over the world, all year long. So even if you missed the Con, code sprints are happening all the time and the community truly welcomes all coders, novice or expert.

Forum One is proud that our own Kalpana Goel was featured as a mentor at DrupalCon Amsterdam. She is very passionate about helping new people contribute.

It was my third time mentoring at DrupalCon and like every time, it not only gave me an opportunity to share my knowledge, but also learn from others. Tobias Stockler took time to explain to me the Drupal 8 plugin system and walk me through an example. And fgm explained Traits to me and worked on a related issue.

-Kalpana Goel

Forum One Steps Up

While the sprints raged on, other Forum One team members led training sessions for people currently developing with Drupal. I, Campbell, presented Panels, Display Suite, and Context – oh my! to a capacity crowd (200+), and together, we presented Coder vs. Themer: Ultimate Grudge Smackdown Fight to the Death to over three hundred coders and themers. Now that Drupal 8 Beta is released we’re already looking forward to creating a Drupal 8 version of Coder vs. Themer for both Los Angeles and Barcelona!

This year’s European DrupalCon was a huge success, and a lot of fun! As a group, our Forum One team got to take a leading role in teaching, mentoring, and sharing with the rest of the Drupal community. It’s easy to pay lip service to open source values, but we really love the opportunity to show how important this community is to us. We recently estimated that we contribute almost a hundred patches to Drupal contrib projects in a good month. We’re pretty proud of that participation, but it’s only at the conventions that we get to engage with other Drupalists face to face. DrupalCon isn’t just for the code, or the sessions. It’s for seeing and having fun with our friends and colleagues, too.

At Amsterdam, we got to participate in code sprints, lead sessions and BOFs (birds of a feather sessions), and join the community in lots of fun extracurricular activities. We’re already making plans for DrupalCon LA in the spring. We’ll see you there!

Categories: FLOSS Project Planets

Facebook library in KF5: coming soon!

Planet KDE - Fri, 2014-10-17 16:11

Today I started porting LibKFbAPI, KDE's own fully asynchronous library for talking to Facebook, to Qt 5 and Frameworks 5. This should bring back the Facebook Akonadi resource for Akonadi-Qt5 and make it available for Plasma 5, including Facebook Contacts, Facebook Events, Facebook Posts, and Facebook Notifications (displayed as regular desktop notifications).

So far the library can connect to Facebook using KAccount's data (more about KAccounts next week) and retrieve logged-in user info.

I expect the port to go quite quickly and the Facebook Akonadi resource should follow up shortly. I'm considering making KFbAPI part of the KDE Frameworks collection, but as it uses some (possibly unnecessary) kdepimlibs bits, it might wait a bit for that to split up and join Frameworks too.

That is all.

Categories: FLOSS Project Planets

Drupal Watchdog: Drupal in the Age of Surveillance

Planet Drupal - Fri, 2014-10-17 15:28

On Feb. 11, 2014, Drupal.org – flagship site of the Drupal project – joined thousands of other websites in a campaign against state Internet surveillance dubbed “The Day We Fight Back.”

In announcing Drupal.org participation in the campaign, leading Drupal developer Larry Garfield made a strong link between free software and digital freedom: “Both the American and British governments have been found violating the digital privacy of millions of people in their own countries and around the world. That is exactly the sort of attack on individual digital sovereignty that Free Software was created to combat.”

What are the implications of recent surveillance revelations for Drupal site owners? What can and should Drupal site builders and developers be doing to protect user privacy? To find out, I spoke with analysts and developers both within and outside the Drupal community.

User Data and Threat Modeling

“Contemporary websites have almost innumerable places where information can be entered, logged, and accessed, by either the first party or third parties.”

That’s the frank assessment of Chris Parsons, a postdoctoral fellow at The Citizen Lab at the University of Toronto’s Munk School of Global Affairs. Parsons’ current research focus is on state access to telecommunications data, through both overt mechanisms and signals intelligence – covert surveillance.

Parsons recommends an approach to user data protection called threat modeling. “So who are you concerned about, what do you believe your ethical duties of care are, and then how do you both defend against your perceived attackers and apply your duty of care?”

Parsons suggests, “The first step is really just information inventory: what’s collected, why, where’s it going, for how long.”

Categories: FLOSS Project Planets

Lullabot: Drupal.org Initiatives

Planet Drupal - Fri, 2014-10-17 15:00

In this episode, Joshua Mitchell, CTO at the Drupal Association, talks with Amber Matz about the exciting initiatives in the works for drupal.org and associated sites. We also talk about how the community, including the D.A. Board, working groups, and volunteers, is utilized to determine priorities and work on infrastructure improvements. There are exciting changes in the works on drupal.org regarding automated testing, git, deployment, the issue queue, localize.drupal.org, and groups.drupal.org.

Categories: FLOSS Project Planets

Blink Reaction: Drupal As A Public Good and Renewing our Commitment

Planet Drupal - Fri, 2014-10-17 14:54

I was going to write a blog post about DrupalCon Amsterdam and our commitment to Drupal, and then I realized the best way to say it was to show it.

Thursday, October 16, 2014

Memo to all staff:

I am pleased to announce that starting this quarter Blink will significantly increase our efforts in support of Drupal. 

Categories: FLOSS Project Planets

Matt Raible: Developing Services with Apache Camel - Part IV: Load Testing and Monitoring

Planet Apache - Fri, 2014-10-17 14:02

Welcome to the final article in a series on my experience developing services with Apache Camel. I learned how to implement CXF endpoints using its Java DSL, made sure everything worked with its testing framework and integrated Spring Boot for external configuration. For previous articles, please see the following:

This article focuses on load testing and tools for monitoring application performance. In late July, I was asked to look into load testing the new Camel-based services I'd developed. My client's reason was simple: to make sure the new services were as fast as the old ones (powered by IBM Message Broker). I sent an email to the Camel users mailing list asking for advice on load testing.

I'm getting ready to put a Camel / CXF / Spring Boot application into production. Before I do, I want to load test and verify it has the same throughput as the IBM Message Broker system it's replacing. Apparently, the old system can only do 6 concurrent connections because of remote database connectivity issues.

I'd like to write some tests that make simultaneous requests, with different data. Ideally, I could write them to point at the old system and find out when it falls over. Then I could point them at the new system and tune it accordingly. If I need to throttle because of remote connectivity issues, I'd like to know before we go to production. Does JMeter or any Camel-related testing tools allow for this?

In reply, I received suggestions for Apache's ab tool and Gatling. I'd heard of Gatling before, and decided to try it.


I don't remember where I first heard of Gatling, but I knew it had a Scala DSL and used Akka under the covers. I generated a new project using a Maven archetype and went to work developing my first test. My approach involved three steps:

  1. Write tests to run against current system. Find the number of concurrent requests that make it fall over.
  2. Run tests against new system and tune accordingly.
  3. Throttle requests if there are remote connectivity issues with 3rd parties. If I needed to throttle requests, I was planning to use Camel's Throttler.

To develop the first test, I started with Gatling's Recorder. I set it to listen on port 8000, changed my DrugServiceITest to use the same port and ran the integration test. This was a great way to get started because it recorded my requests as XML files, and used clean and concise code.

I ended up creating a parent class for all simulations and named it AbstractSimulation. This was handy because it allowed me to pass in parameters for all the values I wanted to change.

import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef._

/**
 * Base Simulation class that allows passing in parameters.
 */
class AbstractSimulation extends Simulation {

  val host = System.getProperty("host", "localhost:8080")
  val serviceType = System.getProperty("service", "modern")
  val nbUsers = Integer.getInteger("users", 10).toInt
  val rampRate = java.lang.Long.getLong("ramp", 30L).toLong

  val httpProtocol = http
    .baseURL("http://" + host)
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Gatling 2.0")

  val headers = Map(
    """Cache-Control""" -> """no-cache""",
    """Content-Type""" -> """application/soap+xml; charset=UTF-8""",
    """Pragma""" -> """no-cache""")
}

The DrugServiceSimulation.scala class posts a SOAP request over HTTP.

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class DrugServiceSimulation extends AbstractSimulation {

  val service = if ("modern".equals(serviceType)) "/api/drugs" else "/axis2/services/DrugService"

  val scn = scenario("Drug Service :: findGpiByNdc")
    .exec(http(host)
      .post(service)
      .headers(headers)
      .body(RawFileBody("DrugServiceSimulation_request.xml")))

  setUp(scn.inject(ramp(nbUsers users) over (rampRate seconds))).protocols(httpProtocol)
}

To run tests against the legacy drug service with 100 users over 60 seconds, I used the following command:

mvn test -Dhost=legacy.server:7802 -Dservice=legacy -Dusers=100 -Dramp=60

The service property's default is "modern" and determines the service's URL. To run against the local drug service with 100 users over 30 seconds, I could rely on more defaults.

mvn test -Dusers=100

The name of the simulation to run is configured in pom.xml:

<plugin>
  <groupId>io.gatling</groupId>
  <artifactId>gatling-maven-plugin</artifactId>
  <version>${gatling.version}</version>
  <configuration>
    <simulationsFolder>src/test/scala</simulationsFolder>
    <simulationClass>com.company.app.${service.name}Simulation</simulationClass>
  </configuration>
  <executions>
    <execution>
      <phase>test</phase>
      <goals>
        <goal>execute</goal>
      </goals>
    </execution>
  </executions>
</plugin>

When the simulations were done running, the console displayed a link to some pretty snazzy HTML reports. I ran simulations until things started falling over on the legacy server. That happened at around 400 requests per second (rps). When I ran them against a local instance on my fully-loaded 2013 MacBook Pro, errors started flying at 4,000 rps, while 3,000 rps performed just fine.


I configured simulations to run in Jenkins with the Gatling Plugin. It's a neat plugin that allows you to record and compare results over time. After initial setup, I found I didn't use it much. Instead, I created a Google Doc with my findings and created screenshots of results so my client had it in an easy-to-read format.

Data Feeders

I knew the results of the simulations were likely skewed, since the same request was used for all users. I researched how to make dynamic requests with Gatling and found Feeders. Using a JDBC Feeder, I was able to make all the requests contain unique data for each user.

I added a feeder to DrugServiceSimulation, added it to the scenario, and switched to an ELFileBody so the feeder would substitute a ${NDC} variable in the XML file.

val feeder = jdbcFeeder("jdbc:db2://server:50002/database", "username", "password",
  "SELECT NDC FROM GENERICS")

val scn = scenario("Drug Service")
  .feed(feeder)
  .exec(http(host)
    .post(service)
    .headers(headers)
    .body(ELFileBody("DrugServiceSimulation_request.xml")))

I deployed the new services to a test server and ran simulations with 100 and 1000 users.

100 users over 30 seconds
Neither service had any failures with 100 users. The max response time for the legacy service was 389 ms, while the new service was 172 ms. The mean response time was lower for the legacy services: 89 ms vs. 96 ms.
1000 users over 60 seconds
When simulating 1000 users against the legacy services, 50% of the requests failed and the average response time was over 40 seconds. Against the new services, all requests succeeded and the mean response time was 100 ms.

I was pumped to see the new services didn't need any additional performance enhancements. These results were enough to convince my client that Apache Camel was going to be a performant replacement for IBM Message Broker.

I wrote more simulations for another service I developed. In doing so, I discovered I missed implementing a couple custom routes for some clients. The dynamic feeders made me stumble onto this because they executed simulations for all clients. After developing the routes, the dynamic data helped me uncover a few more bugs. Using real data to load test with was very helpful in figuring out the edge-cases our routes needed to handle.

Next, I started configuring logging for our new Camel services.

Logging with Log4j2

Log4j 2.0 had just been released and my experience integrating it in AppFuse motivated me to use it for this project. I configured Spring to use Log4j 2.0 by specifying the following dependencies. Note: Spring Boot 1.2+ has support for Log4j2.

<log4j.version>2.0</log4j.version>
...
<!-- logging -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.7</version>
</dependency>
<!-- Necessary to configure Spring logging with log4j2.xml -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jcl</artifactId>
  <version>${log4j.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>${log4j.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-web</artifactId>
  <version>${log4j.version}</version>
</dependency>

I created a src/main/resources/log4j2.xml file and configured a general log, as well as one for each route. I configured each route to use "log:com.company.app.route.input" and "log:com.company.app.route.output" instead of "log:input" and "log:output". This allowed the log-file-per-route configuration you see below.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Properties>
    <Property name="fileLogDir">/var/log/app-name</Property>
    <Property name="fileLogPattern">%d %p %c: %m%n</Property>
    <Property name="fileLogTriggerSize">1 MB</Property>
    <Property name="fileLogRolloverMax">10</Property>
  </Properties>
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d [%-15.15t] %-5p %-30.30c{1} %m%n"/>
    </Console>
    <RollingFile name="File" fileName="${fileLogDir}/all.log"
                 filePattern="${fileLogDir}/all-%d{yyyy-MM-dd}-%i.log">
      <PatternLayout pattern="${fileLogPattern}"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="${fileLogTriggerSize}"/>
      </Policies>
      <DefaultRolloverStrategy max="${fileLogRolloverMax}"/>
    </RollingFile>
    <RollingFile name="DrugServiceFile" fileName="${fileLogDir}/drug-service.log"
                 filePattern="${fileLogDir}/drug-service-%d{yyyy-MM-dd}-%i.log">
      <PatternLayout pattern="${fileLogPattern}"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="${fileLogTriggerSize}"/>
      </Policies>
      <DefaultRolloverStrategy max="${fileLogRolloverMax}"/>
    </RollingFile>
    <!-- Add a RollingFile for each route -->
  </Appenders>
  <Loggers>
    <Logger name="org.apache.camel" level="info"/>
    <Logger name="org.springframework" level="error"/>
    <Logger name="com.company.app" level="info"/>
    <Root level="error">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="File"/>
    </Root>
    <Logger name="com.company.app.drugs" level="debug">
      <AppenderRef ref="DrugServiceFile"/>
    </Logger>
    <!-- Add a Logger for each route -->
  </Loggers>
</Configuration>

I did run into some issues with this configuration:

  • The /var/log/app-name directory has to exist or there's a stacktrace on startup and no logs are written.
  • When deploying from Jenkins, I ran into permissions issues between deploys. To fix this, I chowned the directory before restarting Tomcat:

    chown -R tomcat /var/log/app-name
    /etc/init.d/tomcat start

While I was configuring the new services on our test server, I also installed hawtio at /console. I had previously configured it to run in Tomcat when running "mvn tomcat7:run":

<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
  <configuration>
    <path>/</path>
    <webapps>
      <webapp>
        <contextPath>/console</contextPath>
        <groupId>io.hawt</groupId>
        <artifactId>hawtio-web</artifactId>
        <version>1.4.19</version>
        <type>war</type>
        <asWebapp>true</asWebapp>
      </webapp>
    </webapps>
  </configuration>
  ...
</plugin>

hawtio has a Camel plugin that's pretty slick. It shows all your routes and their runtime metrics; you can even edit the source code for routes. Even though I used a Java DSL, my routes are only editable as XML in hawtio. Claus Ibsen has a good post on Camel's new Metrics Component. I'd like to learn how to build a custom dashboard for hawtio - Claus's example looks pretty nice.

The Spring Boot plugin for hawtio is not nearly as graphics-intensive. Instead, it just displays metrics and their values in a table.

There are some good-looking Spring Boot Admin UIs out there, notably JHipster's and the one in spring-boot-admin. I hope the hawtio Spring Boot plugin gets prettier as it matures.

I wanted more than just monitoring; I wanted alerts when something went wrong. For that, I installed New Relic on our Tomcat server. I'm fond of getting the Monday reports, but they only showed activity when I was load testing.

I believe all these monitoring tools will be very useful once the app is in production. My last day with this client is next Friday, October 24. I'm trying to finish up the last couple of services this week and next. With any luck, their IBM Message Broker will be replaced this year.


This article shows how to use Gatling to load test a SOAP service and how to configure Log4j2 with Spring Boot. It also shows how hawtio can help monitor and configure a Camel application. I hope you enjoyed reading this series on what I learned about developing with Camel over the past several months. If you have stories about your experience with Camel (or similar integration frameworks), Gatling, hawtio or New Relic, I'd love to hear them.

It's been a great experience and I look forward to developing solid apps, built on open source, for my next client. I'd like to get back into HTML5, AngularJS and mobile development. I've had a good time with Spring Boot and JHipster this year and hope to use them again. I find myself using Java 8 more and more; my ideal next project would embrace it as a baseline. As for Scala and Groovy, I'm still a big fan and believe I can develop great apps with them.

If you're looking for a UI/API Architect that can help accelerate your projects, please let me know! You can learn more about my extensive experience from my LinkedIn profile.

Categories: FLOSS Project Planets

PyCharm: Second PyCharm 4 EAP: IPython notebook, Attach to process and more

Planet Python - Fri, 2014-10-17 13:21

Having announced the first Early Access Preview build of PyCharm 4 almost a month ago, today we’re eager to let you know that the second PyCharm 4 EAP build 139.113 is ready for your evaluation. Please download it for your platform from our EAP page.

Just as always, this EAP build can be used for 30 days starting from its release date and it does not require any license.

The most exciting announcement of this fresh preview and the whole upcoming release of PyCharm 4 is that the IPython notebook functionality is now fully supported in PyCharm!
It has been one of the top-voted feature requests in PyCharm's public tracker for quite a long time, and now we're proud to introduce this brand new integration to you.

Note that the IPython Notebook integration is available in both PyCharm Community Edition and PyCharm Professional Edition.

Now with PyCharm you can perform all the usual IPython notebook actions with *.ipynb files. Basically everything that you're used to with the ordinary IPython notebook is now supported inside PyCharm: PyCharm recognizes different types of cells and evaluates them independently. You can delete cells or edit previously evaluated ones. Also you can output matplotlib plots or images:
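For example, a cell along these lines (assuming NumPy and matplotlib are installed; the sample data is just an illustration) renders a plot inline when evaluated:

```python
# A typical notebook cell: compute some data and plot it inline.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)   # 100 sample points over one period
y = np.sin(x)

plt.plot(x, y)
plt.title("sin(x)")
plt.show()
```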

When editing code inside cells, PyCharm provides its well-known intelligent code completion as if it were an ordinary Python file. You can also get quick documentation and perform all the other usual actions that can be done in PyCharm.
So with this integration we have great news – now you can get the best of both PyCharm and IPython Notebook using them together!
Please give it a try, and give us your feedback prior to the final release of PyCharm 4.

Stay tuned for future blog posts with detailed descriptions of this great feature!

Introducing a new feature – Attach to process

Another great feature of the second PyCharm 4 preview build is that PyCharm's debugger can now attach to a process!

Note: the “attach to process” functionality is available in both PyCharm Community Edition and PyCharm Professional Edition

With PyCharm 4 you can now attach the debugger to any running Python process and debug in attached mode. All you need to do is go to Tools | Attach to Process.
PyCharm will show you the list of running Python processes on the system. Just select the one you want to connect to and click OK:

From this point you can use the debugger as usual – setting breakpoints, stepping into/over, pausing and resuming the process, evaluating variables and expressions, and changing the runtime context:
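If you want to try the feature out, any long-running Python process will do as a target; for example, a throwaway script along these lines (entirely hypothetical, just something to attach to):

```python
# A trivial long-running process to practice "Attach to Process" on.
import time

def tick(limit, delay=1.0):
    """Count up to `limit`, sleeping between ticks; returns the final count."""
    counter = 0
    while counter < limit:
        counter += 1              # good spot for a breakpoint
        time.sleep(delay)
    return counter
```

Run it with a large limit (say `tick(10 ** 6)`) to keep the process alive, attach from Tools | Attach to Process, then set a breakpoint inside the loop and watch `counter` change.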

Currently, attaching to a process is supported only on the Windows and Linux platforms. Hopefully we'll add Mac OS support with the next EAP.
Also please note that on most Linux machines, attaching to a process is disabled by default. To enable it at the system level, run:

echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

If you want it permanently, please edit /etc/sysctl.d/10-ptrace.conf (works for ubuntu) and change the line:

kernel.yama.ptrace_scope = 1

to read:

kernel.yama.ptrace_scope = 0

Check ptrace configuration for your Linux distribution accordingly.

Better package management

The other good news is the improved package management subsystem. It got smarter and now recognizes unmet package requirements better. It also has a better UI – showing progress on package installation and a Choose Packages to Install dialog:

In case of errors, PyCharm now has better reports that include suggested solutions:

Another good thing is that the JSON support now comes bundled with PyCharm 4 in both Community Edition and Professional Edition. That means JSON is now supported on a platform level and has separate code style and appearance settings as well as its own code inspections, etc.:

And finally one more useful feature that comes from the Intellij platform. The Code Style settings now offers a new option: Detect and use existing file indents for editing (enabled by default):

This new option lets PyCharm detect certain Code Style settings (such as Use Tab character and Indent size) in the currently edited file on the fly. It means that even if a file's code style differs from your current settings, that file's style will still be preserved.
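The detection itself is conceptually simple. As a rough sketch (my own illustration, not JetBrains' actual implementation), it boils down to scanning the file's indented lines:

```python
def detect_indent(text):
    """Guess (uses_tabs, indent_size) from a file's existing indentation."""
    tab_lines = 0
    space_widths = []
    for line in text.splitlines():
        stripped = line.lstrip(" \t")
        if not stripped:
            continue                      # skip blank lines
        if line.startswith("\t"):
            tab_lines += 1
        elif line.startswith(" "):
            space_widths.append(len(line) - len(line.lstrip(" ")))
    uses_tabs = tab_lines > len(space_widths)
    # The smallest indentation step seen is a decent guess at the size.
    indent_size = min(space_widths) if space_widths else 4
    return uses_tabs, indent_size

sample = "def f():\n    if True:\n        return 1\n"
print(detect_indent(sample))  # -> (False, 4)
```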
Now you don’t need to worry about losing the formatting that is specific to certain files in your project.

That’s not all as this build has many other improvements and bug fixes – for example, improved Django 1.7 code insight. So we urge you to check the fixed issues and compare this build to the previous one!

Please give PyCharm 4 EAP a try before its official release and please report any bugs and feature request to our issue tracker.

Develop with Pleasure!
-PyCharm team

Categories: FLOSS Project Planets

NEWMEDIA: Drupal SA-CORE-2014-005

Planet Drupal - Fri, 2014-10-17 13:13
Drupal SA-CORE-2014-005
Drupal security threats and how we respond at NEWMEDIA!

Here at NEWMEDIA! we are constantly learning and improving. Over the course of the past year we have been refining our continuous integration and hosting platforms as they relate to Drupal. A significant threat, and the subsequent fix, has been identified in all versions of Drupal 7, one that has literally rocked the Drupal world. The good news is that your site is already patched if you are hosting a Drupal 7 site with us. The great news is that we have an opportunity to highlight some of the improvements we have made to our hosting offering.

The new system provides a smoother flow between development efforts and your ability to see the changes. When a developer's code is accepted to your project, it is immediately made visible to you in a password-protected staging environment. When the change is approved, it can immediately be made available on the production site. Our systems ensure that the servers developers work on are identical to the servers in the staging and production environments. This consistency increases the return on your investment by decreasing the amount of time it takes for a developer to perform their tasks. At the same time, it guarantees a smoother deployment pipeline.

We are systematically moving all of our hosting properties into this new system.

* Your sites will now be hosted in what is known as Amazon's Virtual Private Cloud. This is the next generation of Amazon's cloud offering that provides advanced network control and separation for increased performance and security.

* Your sites will move from a static IP address to utilize state-of-the-art load balancing techniques. The load balancing and proxy layers provide significant protection against DDoS and other types of attacks that might be utilized against a website.

* Your DNS management will simplify. The same technology we are using at the load balancing layer allows for a more dynamic system. Because we are moving from addressing the machines by numbers to addressing them by name, we gain additional flexibility. For example, let's say your site is under a higher-than-average load. We could temporarily add additional webservers to increase the performance of your site.

* Site performance will improve. You are being moved to a distributed system that is more capable of handling your sites needs.

The goal of this is to increase the quality of our services and offerings while continuing the tradition of giving back. It is unfortunate that a security issue of this magnitude has affected Drupal. It is good to see the community come together to help bring the current set of continuous integration and deployment practices to the next level. Come find us at the Drupal DevOps Summit (http://2013.badcamp.net/events/drupal-devops-summit) to see how we do continuous integration and deployment.

Help us figure out the best way to share!

Categories: FLOSS Project Planets

What is KTracks?

Planet KDE - Fri, 2014-10-17 13:06

In a series of articles we illustrate the user centered design process from scratch, based on a still missing application in the KDE world: KTracks, an endurance activity tracker.

Keep on reading: What is KTracks?

Categories: FLOSS Project Planets

FSF News: The Free Software Foundation opens nominations for the 17th annual Free Software Awards

GNU Planet! - Fri, 2014-10-17 13:05
Award for the Advancement of Free Software

The Free Software Foundation Award for the Advancement of Free Software is presented annually by FSF president Richard Stallman to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software.

Individuals who describe their projects as "open" instead of "free" are eligible nonetheless, provided the software is in fact free/libre.

Last year, Matthew Garrett was recognized with the Award for the Advancement of Free Software for his work to keep "Secure Boot" free software compatible, as well as his other work to make sure that so-called security measures do not come at the expense of user freedom. Garrett joined a prestigious list of previous winners including Dr. Fernando Perez, Yukihiro Matsumoto, Rob Savoye, John Gilmore, Wietse Venema, Harald Welte, Ted Ts'o, Andrew Tridgell, Theo de Raadt, Alan Cox, Larry Lessig, Guido van Rossum, Brian Paul, Miguel de Icaza, and Larry Wall.

Award for Projects of Social Benefit

Nominations are also open for the 2014 Award for Projects of Social Benefit.

The Award for Projects of Social Benefit is presented to the project or team responsible for applying free software, or the ideas of the free software movement, in a project that intentionally and significantly benefits society in other aspects of life.

We look to recognize projects or teams that encourage people to cooperate in freedom to accomplish social tasks. A long-term commitment to one's project (or the potential for a long-term commitment) is crucial to this end.

This award stresses the use of free software in the service of humanity. We have deliberately chosen this broad criterion so that many different areas of activity can be considered. However, one area that is not included is that of free software itself. Projects with a primary goal of promoting or advancing free software are not eligible for this award (we honor individuals working on those projects with our annual Award for the Advancement of Free Software).

We will consider any project or team that uses free software or its philosophy to address a goal important to society. To qualify, a project must use free software, produce free documentation, or use the idea of free software as defined in the Free Software Definition. Projects that promote or depend on the use of non-free software are not eligible for this award. Commercial projects are not excluded, but commercial success is not our scale for judging projects.

Last year, the GNOME Foundation's Outreach Program for Women (OPW) received the award, in recognition of its work to involve women (cis and trans) and genderqueer people in free software development. OPW's work benefits society more broadly, addressing gender discrimination by empowering women to develop leadership and development skills in a society which runs on technology. OPW does this critical work using the ideals and collaborative culture of the free software movement.

Other previous winners have included OpenMRS, GNU Health, Tor, the Internet Archive, Creative Commons, Groklaw, the Sahana project, and Wikipedia.


In the case of both awards, previous winners are not eligible for nomination, but renomination of other previous nominees is encouraged. Only individuals are eligible for nomination for the Advancement of Free Software Award (not projects), and only projects can be nominated for the Social Benefit Award (not individuals). For a list of previous winners, please visit https://www.fsf.org/awards.

Current FSF staff and board members, as well as award committee members, are not eligible.

The tentative award committee members are: Marina Zhurakhinskaya, Matthew Garrett, Rob Savoye, Wietse Venema, Richard Stallman, Suresh Ramasubramanian, Vernor Vinge, Hong Feng, Fernanda G. Weiden, Harald Welte, Jonas Oberg, and Yukihiro Matsumoto.


After reviewing the eligibility rules above, please send your nominations to award-nominations@gnu.org, on or before Sunday, November 16th, 2014 at 23:59 UTC. Please submit nominations in the following format:

  • In the email message subject line, either put the name of the person you are nominating for the Award for Advancement of Free Software, or put the name of the project for the Award for Projects of Social Benefit.

  • Please include, in the body of your message, an explanation (forty lines or less) of the work done and why you think it is especially important to the advancement of software freedom or how it benefits society, respectively.

  • Please state, in the body of your message, where to find the materials (e.g., software, manuals, or writing) which your nomination is based on.

Information about the previous awards can be found at https://www.fsf.org/awards. Winners will be announced at an awards ceremony at the LibrePlanet conference, March 21-22 2015, in Cambridge, Massachusetts.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Libby Reinish
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942

Categories: FLOSS Project Planets

Martin Pitt: Ramblings from LinuxCon/Plumbers 2014

Planet Debian - Fri, 2014-10-17 12:54

I'm on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was; there were about 1,500 people there! Certainly much more than last year in New Orleans.

Containers (in both LXC and docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren't mentioned at all, and (what felt like) half of the presentations were either about how to improve them or how to use these technologies to solve problems. For example, some people/companies really take LXC to the max and try to do everything in it, including tasks which in the past you had only considered full VMs for, like untrusted third-party tenants. There was an interesting talk on how to secure networking for containers, and pretty much everyone uses docker or LXC now to deploy workloads or run CI tests. There are projects like “fleet” which manages systemd jobs across an entire cluster of containers (a distributed task scheduler) or like project-builder.org which auto-builds packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images” which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is far too limited and static yet, it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

On Plumbers my main occupations were mostly the highly interesting LXC track to see what’s coming in the container world, and the systemd hackfest. On the latter I was again mostly listening (after all, I’m still learning most of the internals there..) and was able to work on some cleanups and improvements like getting rid of some of Debian’s patches and properly run the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review/include/clean up some much requested little features and some fixes.

All in all a great week to meet some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribblings.

Now, off to next week’s Canonical meeting in Washington, DC!

Categories: FLOSS Project Planets

ERPAL: IMPORTANT! Safety first - The Drupal 7.32 Update

Planet Drupal - Fri, 2014-10-17 12:39

Yesterday, when the Drupal 7.31 SQL injection vulnerability came up, I think it was one of the most critical updates I have ever seen in the Drupal world. First of all, thanks a lot to everybody who helped to find and fix this issue. With the discovery and fix of this security issue, the Drupal security team and the community behind it have shown once more how important this combination is. All Drupal sites should and MUST be updated to version 7.32 to keep their applications secure. A new ERPAL release, 2.1, is already available, and it is very important that you use this update for your ERPAL installation.

Why this hurry?

As I already mentioned above, this update is critical to all sites, as the vulnerability can be exploited by anonymous users. It is possible to get admin access (user 1) with the correct attack sequence. Some of you may ask if Drupal is still secure at all. The answer is still - YES! It is one of the most secure CMF / CMS options out there. And with a dedicated security team on Drupal.org, many security issues are discovered. Security issues are worst when they are discovered not by the admin, support or security team but only by hackers. And it becomes even worse when people don't update their sites.

So what to do?

Don't panic! You just need to update your site to the latest Drupal 7.32 version. If you are using a distribution that may have patches included in its installation profile to support all features, check for updates on its project page and get your update there. Easy - that's it.

How to avoid future problems

Please follow the Drupal security advisories and keep your site's modules up to date. That's one of the most important rules for Drupal users.

Creating business applications with Drupal means, for us, taking responsibility for all our users: keeping their data safe and their ERPAL systems running. With this blog post I want to ask every Drupal dev, maintainer, client or site builder to update their site immediately.

Categories: FLOSS Project Planets

Amazee Labs: Faster import & display with Data, Feeds, Views & Panels

Planet Drupal - Fri, 2014-10-17 12:25
Faster import & display with Data, Feeds, Views & Panels

Handling loads of data with nodes and fields in Drupal can be a painful experience: every field is put into a separate table which makes inserts and queries slow. In case you just want to import & display unstructured data without the flexibility and sugar of fields, this walkthrough is for you! 

On a recent customer project, we were tasked with importing prices and other information related to products. While we are fine with handling 10k+ products in the database, we didn't want to create field tables for the price information attached to products. For every product, we have 10 or maybe even more prices, which would result in at least 100k+ prices.

The prices shouldn't be involved in anything related to the product search, they should just appear as part of the product view itself. Also there is no commerce system involved at the current state of the project.

Putting the prices into a separate field on the product node may sound like a good idea at first. Remember, though: when loading a list of those products, all the prices will have to get loaded as well. We wanted those prices to be decoupled from the products, stored in a lightweight way and only loaded when necessary - on the single product view.

1) Light-weight data structures in Drupal using the data module

First, I thought implementing a custom entity or just a data table would be the way to go. But then we considered giving the data module a try. The data module allows site builders to work on a much lower level than with Drupal fields: you can create database tables, specify their columns and define relationships. What really makes it appealing is that you can access the structured data using Views, expose the custom data tables as custom entity types and use the Feeds module for importing that data, without any coding required.

After installing the data module, you can manage your data tables under Structure > Data tables

We create a data table for the product prices and specify the schema with all the columns that should be included. Just like fields but without any fancy formatters on top of it:

This will create the desired database table for you.

Having defined the data, we can use the Entity data module that comes with Data to expose the data table as a custom entity type. By doing so, you will get integrations like for example with Search API for free.


2) Import using Feeds and the generic entity processor

Luckily, the [Meta] Generic entity processor issue for the Feeds module has been committed after 3 years of work. As there hasn't been a release since the patch was committed (January 2014), this is only available in later dev versions of the Feeds module.

But it's worth the hassle! We can now select from a multitude of different feeds processors based on all the different entity types in the system. After clearing caches, the data tables that we have previously exposed as entity types, do now show up:

The feeds configuration is performed as usual. In the following, we map all the fields from the clients CSV file to the previously defined columns of the data table:

We are now able to import large chunks of data without pushing them through the powerful but slow Field API. A test import of ~30k items was performed within seconds. A nodes & Fields based import usually creates 200 items per minute.
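To put those figures in perspective, here is a back-of-the-envelope comparison (the 30-second figure for the data module path is my own assumption; the post only says "within seconds"):

```python
# Rough throughput comparison of the two import paths.
items = 30000                    # size of the test import
field_api_rate = 200             # items per minute via nodes & Fields
field_api_minutes = items / field_api_rate
print(field_api_minutes)         # -> 150.0 minutes, i.e. 2.5 hours

data_module_seconds = 30         # assumed duration of the data module import
speedup = field_api_minutes * 60 / data_module_seconds
print(speedup)                   # -> 300.0
```

Even with a generous estimate for the fast path, skipping the Field API buys roughly two orders of magnitude here.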

3) Data is good, display is better

In the next step, we create a View based on the custom data table to display prices for products. We specify a number of contextual filters so that users will see prices restricted to a) the current product, b) the user's price source and c) the currency.

Notice, that the Views display is a (Ctools / Views) Content pane, which has some advanced pane settings in the mid section of the views configuration.

Most importantly, we want to specify the argument input: Usually we would use Context to map the views contextual filters to Ctools contexts that we provide through Panels.

Somehow, in this case, a specific field didn't work with the context system, which automatically checks if all necessary contexts are available and only allows you to use the Views pane under such circumstances. As you can see in the screenshot above, I have set all arguments to "Input on pane config" as a workaround.

Exactly these pane config inputs show up when we configure the Views pane in Panels. In this case, we have added the Product prices view as a pane on the panelized full node display of the Product node type (Drupal jargon ftw!).

Each pane config is populated with the appropriate keyword substitutions based on available contexts node and user of the panelized node.

4) The end result

Finally, this is the site-built result: a product node including a prices table:


This concludes my how-to on the Data, Feeds, Views and Panels modules, used to attach large data sets to nodes without putting them into fields. Once you know how the pieces fit together, it will take you less time than it took me to write this blog post to import and display large amounts of data in a less flexible, but more performant way!

Categories: FLOSS Project Planets
Syndicate content