FLOSS Project Planets

Riku Voipio: Arm builder updates

Planet Debian - Thu, 2014-05-08 15:14
Debian has recently received a donation of 8 build machines from Marvell. The new machines come with quad-core MV78460 Armada XP CPUs, DDR3 DIMM slots so we can plug in more memory, and speedy SATA ports. They replace the Marvell MV78200 based builders that have served us well, building Debian armel since 2009. We are planning a more detailed announcement, but I'll provide a quick summary:

The speed increase provided by the MV78460 can be seen by comparing build times on selected packages since early April:

Qemu build times.

We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, and impressive kit from Marvell! But not all packages gain this much speedup:

webkitgtk build times.

This example, webkitgtk, builds barely 3x faster. The explanation is found in the debian/rules of webkitgtk:
# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# endif
The old builders are single-core[1], so regardless of parallel building you could easily max out the CPU. The new builders will use only 1 of 4 cores if debian/rules lacks parallel build support.
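For reference, the standard debian/rules idiom that webkitgtk has commented out looks roughly like this when enabled (a sketch - the actual build target depends on the package's build system):

```make
# Parse parallel=N from DEB_BUILD_OPTIONS, defaulting to one job
NUMJOBS = 1
ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
endif

build-stamp:
	# hand the job count down to the underlying build system
	$(MAKE) -j$(NUMJOBS)
	touch $@
```

With this in place, setting DEB_BUILD_OPTIONS=parallel=4 on a quad-core builder lets the package use all four cores.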

As this buildd CPU usage graph shows, most of the time only one CPU is busy. So, for fast package build times, make sure your package supports parallel building.

For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go.

Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen.

Meanwhile, we have some unrelated trouble - a bunch of disks have broken within a few days of each other. I take it the warranty just ran out...

[1] Only from Linux's point of view - the MV78200 actually has 2 cores, just not SMP or coherent. You could run an RTOS on one core while running Linux on the other.
Categories: FLOSS Project Planets

agoradesign: Fatal errors may cause infinite Feeds batch runs and exasperate you!

Planet Drupal - Thu, 2014-05-08 14:50

Every developer knows that kind of situation: you spend hours chasing a problem that suddenly appeared, and the more you debug and dive into fixing it, t

Categories: FLOSS Project Planets

Aten Design Group: Someone Dropped a New Website in Your Lap, Now What?

Planet Drupal - Thu, 2014-05-08 12:38

At Aten, I tend to work on already live websites. Sometimes this means small bug fixes. Sometimes it encompasses information architecture, design work, a weeklong development sprint or working on the front end. In most cases my work is on a site I didn't build originally and often on a site Aten didn't build.

I started putting some notes together on some of the gotchas I run across when working on new-to-me sites. This will become a series of blog posts, but one always has to start at the beginning: getting the site working on your local environment.

We're assuming you already have some sort of a LAMP/MAMP/WAMP stack working.

Version Control

Next, you'll need access to the code to get it into your preferred version control system. Make sure you don't commit any settings.php files, the files directory, or .htaccess if it contains server-specific settings. If using Git, remember that adding already-tracked files to .gitignore does not untrack them; you need git rm --cached to stop versioning them. If the code is already in a repository, this step should be as easy as git clone [PATH]. If not, you might need something like tar czvf ~/everything.tar.gz public_html to grab a copy of the whole file structure. I've also seen cases where JavaScript libraries are maintained as separate Git repositories within the main one. If those are in .gitignore, a regular git clone of the main repo will skip them. I prefer to keep everything together by removing the entries from .gitignore: run git clone on each library, delete each library's .git folder, and finally git add them to the main repository.
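As a concrete sketch (hypothetical paths and file names), keeping the server-specific files out of Git while versioning the rest might look like:

```shell
# Hypothetical example: put a site export under Git while keeping
# settings.php and the files directory out of version control.
mkdir -p /tmp/site-demo/sites/default/files
cd /tmp/site-demo
git init -q .
printf '<?php // site code\n' > index.php
printf '<?php $db_pass = "secret";\n' > sites/default/settings.php
cat > .gitignore <<'EOF'
sites/default/settings.php
sites/default/files/
EOF
git add .
git status --porcelain   # settings.php does not appear: it is ignored
```

If settings.php was committed before .gitignore existed, `git rm --cached sites/default/settings.php` untracks it without deleting the file.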


After you have the code, you'll need to export the database and import it locally. Drush sql-sync can do this, assuming you have SSH keys and aliases configured and an already-running site that drush can bootstrap. From the MySQL command line, create database [DATABASE_NAME]; use [DATABASE_NAME]; source ./[FILENAME.sql] always works. Don't forget to create a database user with something like: GRANT ALL ON database.* TO user@localhost IDENTIFIED BY 'someLongAndCompletelyRandomPassword'; FLUSH PRIVILEGES;.
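Spelled out as a complete MySQL session (placeholder database, user, password, and dump file names - substitute your own):

```sql
-- run as the MySQL root user
CREATE DATABASE example_site;
GRANT ALL ON example_site.* TO 'example_user'@'localhost'
  IDENTIFIED BY 'someLongAndCompletelyRandomPassword';
FLUSH PRIVILEGES;
USE example_site;
SOURCE ./example_dump.sql;
```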


Along with the database, you'll need to deal with the files directory. If you had to tar the whole site, you've already got many megabytes of files on your hard drive. If you want to avoid that, Stage File Proxy can help by downloading images from the production website as needed, on a page-by-page basis. While this helps, there are some images it has trouble dealing with. Another possibility is using Apache rewrites to load image resources from the production site.
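The rewrite approach can be sketched like this in the local vhost (the production domain here is a placeholder):

```apache
# If a file under sites/default/files doesn't exist locally,
# redirect the request to the production site instead.
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/sites/default/files/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ http://www.example.com/$1 [R=302,L]
```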


My final steps are to add an entry to /etc/hosts for the new local domain, and an entry or new vhost .conf file to direct that domain to the correct web root directory. Finally, flush the DNS cache and restart Apache. These steps will differ slightly depending on your specific LAMP implementation.
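For example (hypothetical domain and paths; Apache 2.2-style directives):

```apache
# /etc/hosts gets:  127.0.0.1  mysite.local
<VirtualHost *:80>
  ServerName mysite.local
  DocumentRoot /Users/me/Sites/mysite/public_html
</VirtualHost>
```

Then something like `sudo apachectl restart`; the DNS-flush command varies by OS.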

Once we have the site up and running, it's time to make some improvements. Next time I'll discuss how to track down some specific situations when you know very little about how a site was built.

Categories: FLOSS Project Planets

Riccardo Mottola: Tailoring OpenBSD for an old strange computer

GNU Planet! - Thu, 2014-05-08 12:35
I have an ol' OmniBook 800CT. A small, interesting computer - for its time, extremely advanced!
Small form factor, but still a very nice keyboard - something unmatched on modern netbooks. The unique pop-out mouse. The series started out with a 386 processor, b&w display and ROM expansions.
The 800CT is one of the latest models: same form factor and SCSI connector, but a color screen (800x600) and a hefty Pentium 133MHz!
But only 32 MB of RAM (the kernel reports 31 MB of real mem, 24 MB avail mem).

Original 5.4 kernel: 9.2M
Custom kernel: 5.0 M

This shrinkage is quite hefty - almost 50%! Beyond raw disk usage, the new kernel boots faster and leaves more free memory. Enough that X11 is now almost usable.

How can this be achieved? Essentially, by removing unused kernel options. If there are drivers you know you don't need - you don't have the hardware and won't plug in such a card in the future - you configure them out; they won't be built and won't end up in your kernel.
On an old laptop with no expansion except the ports and the PCMCIA port it has, this is relatively easy.

To build your custom kernel, follow the OpenBSD FAQ.

The main idea is to take the kernel configuration file, skim over it line by line and check whether you have the hardware, which you know by checking your dmesg; dmesg shows which devices and drivers were loaded. Remember that you do not modify GENERIC, but a copy of it.
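The mechanics, per the OpenBSD FAQ of that era, are roughly the following (i386 paths shown, with a hypothetical OMNIBOOK config name; run on the machine itself with the kernel source tree installed):

```
# cd /usr/src/sys/arch/i386/conf
# cp GENERIC OMNIBOOK          # work on a copy, never GENERIC itself
# vi OMNIBOOK                  # comment out drivers you don't need
# config OMNIBOOK
# cd ../compile/OMNIBOOK
# make clean && make depend && make
# cp /bsd /bsd.old             # keep the known-good kernel around
# cp bsd /bsd
```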

You can automate this with a tool called dmassage: it will parse your GENERIC configuration and produce an optimally tuned version. However, it will not work out of the box.
Why? There are drivers which do not compile if other drivers are not present.

I'm unsure if this is really a bug; in my opinion it is at least "unclean" code. However, since this kind of extreme driver-picking is rarely done, it is not fatal and probably won't be fixed.

If you remove all drivers at once, you won't easily find out which one breaks the build, so my suggestion is to remove them in sets. One by one is surely too tedious, since each removal requires a build.
  1. remove X drivers
  2. build, if it works, copy the configuration file as a backup
  3. test the kernel, optionally, by booting it
  4. continue removal

Thus, in case of breakage, you can narrow it down to fewer options.

If your machine doesn't have a certain bus, you may remove all drivers attached to it. But proceed from the leaves, not the trunk: gradually remove the peripheral drivers before removing the bus support.

In my case, I found that an unremovable driver is:
et*    at pci?                # Agere/LSI ET1310

Remember that you are running an unsupported kernel: if you need support for a problem, better reproduce it with the original kernel first - of which, for safety, you should retain a backup copy during the iterative building process.


In X11, which needs to be set to 800x600 8-bit mode, I had to uncomment these lines:
#Option "progLcdModeRegs" "true"
#Option "progLcdModeStretch" "true"
Categories: FLOSS Project Planets

GNUnet News: Presenting CADET, GNUnet's routing and transport layer

GNU Planet! - Thu, 2014-05-08 12:15

At the upcoming Med-Hoc-Net 2014 we will present a paper describing GNUnet's CADET service (previously known as "mesh"), which allows a GNUnet application to communicate securely with any peer on the network knowing only its Peer Identity.

If you want to know exactly what it offers, how it works and how it performs, you can check out the paper now.

Categories: FLOSS Project Planets

Blink Reaction: How to Set up Symfony

Planet Drupal - Thu, 2014-05-08 11:23

Long-time Drupal developer Wes Roepken walks you through the steps involved in getting Symfony set up and ready to work.

Categories: FLOSS Project Planets

Results of Card Sorting the KDE System Settings

Planet KDE - Thu, 2014-05-08 11:18

KDE tries to be as customizable as possible: all freedom to the user! This leads to an extensive configuration that can be confusing to new users. Additionally, modules from different sources are aggregated in a way that does not necessarily fit users' mental representation. For instance, the distinction between 'workspace appearance' and 'window appearance' is not common in other desktop environments.

Therefore we started a card sorting test to analyze how people think about system settings. Here we present the results.


Card sorting is the standard method in usability for analyzing hierarchical information. Participants are asked to build a structure that best fits their mental representation by creating groups and sorting all items (aka index cards in the offline world) into these groups. Items sorted into the same group get a higher similarity value, and the statistical evaluation aggregates the individual groupings into an average model represented by a dendrogram.


First: the KDE community is awesome! Within a few hours we got enough responses for the evaluation, and in total we have 331 answers. Thanks a lot, it’s great to be part of it. All analysis results are available online (http://conceptcodify.com/studies/keodby8c/analyze/).

Comparing the dendrogram based on user responses (right figure) with a virtual dendrogram constructed from the actual System Settings (left figure), the difference in categorization becomes obvious. There are about six main topics covered by System Settings, with some subordinate categories.

Dendrogram of original organization

Dendrogram of participant’s sorting

To go into detail, we built a heatmap from the similarity matrix. Items that are regularly sorted into the same group have a higher similarity and are colored from blue for low values, through green and yellow, to red for high values.

Heatmap of similarity: the brighter the intersection, the more strongly both items are associated on average. For details, have a look at the original matrix.

Some groups are very prominent:

  1. Window Management / Workspace
    It contains two subgroups: a) Activities, Activity Settings, Virtual Desktops, Manage Notifications, and b) Window Behavior, Window Rules, KWin Scripts, Task Switcher, Screen Edges.
  2. Shortcuts
    Including Global Keyboard Shortcuts, Standard Keyboard Shortcuts, Custom Shortcuts
  3. Appearance
    Desktop Theme, Icons, Cursor Theme, Style, Fonts, Colors, GTK, Emoticons, Window Decorations
  4. Personalization / Accessibility
    Containing a) Country / Region & Language, Spell Checker, Accessibility, and b) Accounts, Password and User, KDE Wallet, Social Desktop (only partially associated with 4a, and separated into distant nodes in the dendrogram)
  5. Hardware / Devices
    That is a) Mouse, Touchpad, Joystick, Keyboard, Graphic Tablet, Remote Controls and b) Devices, Adapters, Energy Saving
  6. Networking
    Connection Preferences, Proxy, File Transfers, Service Discovery, Windows Shares

A few items are ambiguous:

  • Launch Feedback, Workspace, and Screen Locker: these items are related to 1. Window Management/Workspace or 3. Appearance
  • Display Configuration and Gamma which are associated with 5. Hardware, but Gamma with 3. Appearance too
  • Web short-cuts with 2.Shortcuts or 6.Networking
  • System Bell, which seems to be very confusing (it's actually an ancient notification feature), has relations to 1a, 2, 4a, 5b, or Launch Feedback

As others have commented, the study has the bias of potentially unknown functions, rather unusual names, or items out of context. For instance, Devices and Adapters are originally items of the Bluetooth settings, and would probably have been sorted differently with this information. Furthermore, some settings target experts, like KWin Scripts, and should be placed less prominently. Hence our following suggestion is only loosely based on the results.

Six top-level groups (or topics) should be enough to organize the modules while keeping navigation manageable. This is quite similar to what we have right now, but the assignment of subordinate categories and the corresponding items is slightly different; some labels are improved, too.

  • Appearance
    • Themes (aka Workspace) (Widget Style, Desktop theme, Cursor theme)
    • Style (Window decoration, Splash screen, Gtk)
    • Colors
    • Font
    • Emoticons

    (Remark: Jens Reuterberg proposed some kind of mega theme at the KDE forum, which sounds pretty nice. But all topics need some categories to keep the navigation consistent. Another point for discussion is where Widget Style belongs: Themes or Style? Again, we should take care to balance the structure.)

  • Workspace
    • Window behavior (Desktop effects, Screen Edges, Launch Feedback, Task switcher, KWin Scripts, Window Rules)
    • Notification (Applications, System Bell)
    • Shortcuts and Gestures (Custom, Standard, Global)
    • Activities

    (Remark: Can’t we just drop the beep support (aka System Bell)? Who owns and want to use an in-built speaker instead of jingles?)

  • Personalization
    • Account Details (Password, Path, Wallet)
    • Regional Settings (aka Locale) (Country/Region, Language, Spell Checker)
    • Standard Programs (Default Applications, File Association, Desktop Search)
    • Accessibility
  • Networking
    • (Network) Settings (Proxy, Preferences, Certificates (aka SSL Preferences), +Network Manager)
    • Connectivity (Accounts aka (PIM) Personal Information, Instant Messaging and VoIP, Social Desktop, Web Shortcuts)
    • Sharing

    (Remark: Network Manager settings should get their own KCM, which is in preparation right now. Secondly, it is questionable whether PIM accounts should have a central configuration, since it applies to Kontact only. And, as is also apparent from the study results, the organization of web shortcuts is difficult and could be improved by different wording.)

  • Hardware
    • Input Devices (including Keyboard, Mouse, Touchpad, Joystick, Remote Control, Camera)
    • Display and Monitor (including Gamma)
    • Removable Devices
    • Printers
    • Multimedia
    • Device Actions
    • Power management
  • Software
    • Bodega
    • Adobe Flash Player

    (Remark: Apparently, Bodega is just an idea for future improvements. The topic could be extended with distribution-specific software management.)

Based on this list we will outline an idea how to get both simple access to a module and the full set of features without introducing a further navigation. Join the discussion at the KDE forum!

Categories: FLOSS Project Planets

ThinkShout: What Nonprofits Can Learn About Content Structure… from Pearl Jam

Planet Drupal - Thu, 2014-05-08 11:00

Photo by Phil King, licensed under CC 2.0.

Pearl Jam have been posterboys for a lot of things, but probably not structured web content. Content strategists like to point to NPR’s Create Once, Publish Everywhere (COPE) framework, to large media outlets, sometimes to the U.S. government – but given the breadth of coverage (and budgets) available to those entities, making the move to fully structured content may seem daunting in the nonprofit context.

If Pearl Jam can do it, so can you.

The basic concept is this: by separating the most important components of your content into “fields”, instead of dumping everything from images to embedded videos to pull quotes into a WYSIWYG editor, you'll be able to:

  • Display your content responsively across devices;
  • Share it more easily with your affiliates and supporters; and
  • Create dynamic ways to surface relevant content and encourage engagement.

In a striking example of why tech folks shouldn’t be allowed to name concepts, creating fields to structure your content is affectionately known as the difference between making blobs and chunks.

If you use a modern CMS, you’ve already used structured content to a degree. The title of your page or post is almost always separate from the body. This allows you, at the most basic level, to build a dynamic page of blog posts that displays only the title and maybe a snippet of the body, which then links off to a detail page containing the full post.

The New York Times uses this concept, breaking out fields for author, publication date, and more for its news stories. Amazon has taken it to an entirely different level by assigning scores of categories to its products; when you narrow down your mattress search to a queen size goose down featherbed from Pacific Coast, you’re taking advantage of structured data (in the form of faceted search).

What Pearl Jam has done – and what every nonprofit should think about doing – is match their audience's motivations for visiting the website to PJ's own organizational goals, and structure their site content so the two complement each other.

Pearl Jam’s core offering is music. People visit their website to find that music, either in the form of upcoming (or past) shows, lyrics, or songs they can buy. So, much of Pearl Jam’s website is structured around the concept of the song.

[Note that I don’t have any insider’s knowledge about the exact structure or software they’re using. This is just how we would do it if we built their site in Drupal.]

Practically every song Pearl Jam has ever recorded or performed live has a place on the website, and they’re all structured the same:

  • Title
  • Release Date
  • Composer
  • Artist
  • Image
  • Lyrics

That’s it. Everything else on that page, and much of the site, is built through the application of structured data.

If you look at an individual album, you’re actually looking at a different content type, which has its own structure:

  • Title
  • Release Date
  • Cover Image
  • Purchase Links
  • Body
  • Song [REFERENCE]

It’s that REFERENCE field that’s key. Every album is a collection of references to the individual songs, rather than list built by hand. (On Drupal, we’d probably use something like Entity Reference.) Clicking on an individual song takes you to its detail page.

It gets more interesting when you look at a Setlist, another structured content type:

  • Venue
  • Location
  • Date
  • Concert Poster Image
  • Product Links
  • Bootleg Image
  • Song [REFERENCE]
  • Live Image [REFERENCE]

A setlist is built up using the same song REFERENCE field as an album; each song exists as a single entity, but it can be referenced from hundreds of other pages (in the case of a classic like “Jeremy”).
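The reference structure can be sketched as data (a purely hypothetical illustration using the field lists above, not Drupal's actual storage format; ids and the setlist details are made up):

```yaml
# One song entity, referenced by both an album and a setlist.
song:
  id: 101
  title: "Jeremy"
  release_date: 1991-08-27

album:
  title: "Ten"
  songs: [101]           # REFERENCE field pointing at song ids

setlist:
  venue: "Example Arena"
  date: 2005-09-01
  songs: [101]           # the same song entity, reused
```

Edit the song once, and every album and setlist page that references it reflects the change.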

All the way back in 2000, Pearl Jam started recording every show they did off the mixing board so they could sell high-quality recordings. While you can’t quite get every one of the 672 versions of “Alive” they’ve performed over the years, you can come pretty close.

Setlists include the all-important link to purchase a copy of an entire live performance.

This relational system has created endless connections between the Songs they’ve performed – their core content offering – and where and when they’ve performed them. By then layering on the ability to purchase copies of those concerts at any time, Pearl Jam has taken one of the primary motivations of their audience – to engage with PJ’s music – and tied it directly to their organizational goal of making money, without shoving that in your face.

It’s also worth noting that structured data has also allowed Pearl Jam to flesh out the detail pages for each of its content types with just a few lines of code.

On a song page, the “First Played”, “Last Played”, and “Times Played” lines are created dynamically, as is the list of every place and date it’s been performed. Tours are created by referencing each of the setlists. I imagine that the slider showing all of the album covers is created by pulling the cover image associated with each album (instead of being inserted by hand).

Once your content is structured, the ways you can reformat and display it are limited only by your imagination, your communications plan, your organizational goals, and your CMS. Oh, and it helps if you understand the motivations of your various audiences.

What’s your core content offering? Can you create a similar structure? Have you already?

And if anybody with Photoshop skills wants to create that new poster for Pearl Jam, highlighting their mastery of structured content...

Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 15: Why Dear Watson

LinuxPlanet - Thu, 2014-05-08 10:44

Myself, Bryan Lunduke, Jono Bacon, and Stuart Langridge present Bad Voltage, in which:

  • Bryan gave his yearly “Linux Sucks” talk at LinuxFest Northwest, and the rest of us take issue with his approach, his arguments, his data sources, and his general sense of being
  • The XBox 360: reviewed as a TV set-top box, not as a gaming console
  • The up and coming elementary OS: is it any good, and what do we think of it?
  • Our discussion of elementary OS raised a number of questions: Daniel Foré, leader of the project, talks about the goals of the OS and answers our queries
  • Community recap: your emails and forum posts and happenings in the Bad Voltage community

Listen to: 1×15: Why Dear Watson

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.


Categories: FLOSS Project Planets

Nathan Lemoine: PyStan: A Second Intermediate Tutorial of Bayesian Analysis in Python

Planet Python - Thu, 2014-05-08 10:00
I promised a while ago that I’d give a more advanced tutorial of using PySTAN and Python to fit a Bayesian hierarchical model. Well, I’ve been waiting for a while because the paper was in review and then in print. … Continue reading →
Categories: FLOSS Project Planets

Poplarware: The case for a small Drupal shop contributing to Drupal

Planet Drupal - Thu, 2014-05-08 09:27

Dries Buytaert, the CEO of Acquia and head of the open-source Drupal project, recently wrote a blog post about the business case for hiring a Drupal core contributor. Dries wrote about the measurable effect that a larger Drupal shop can realize from hiring a contributor full-time.

read more

Categories: FLOSS Project Planets

James Oakley: Updating Drupal core with bash and drush

Planet Drupal - Thu, 2014-05-08 08:55

Yesterday, Drupal 7.28 was released.

People rush to upgrade, knowing that there will be a tranche of bug-fixes that may resolve longstanding issues.

People hesitate to upgrade, because updating Drupal core is not as simple as we'd like.

Other times, the core update is a security release, and you can't afford to wait.

This does not need to be painful!!

Upgrading core in Drupal 7

You have probably read the official documentation on doing this. … Read more about Updating Drupal core with bash and drush
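Pending the full article, the core of a bash-and-drush update routine typically looks something like this (a sketch with a hypothetical @mysite alias; not necessarily the author's approach - always take a backup first):

```
# back up, update core, run database updates
drush @mysite sql-dump > ~/backup-$(date +%F).sql
drush @mysite vset maintenance_mode 1 --yes
drush @mysite pm-update drupal --yes
drush @mysite updatedb --yes
drush @mysite vset maintenance_mode 0 --yes
drush @mysite cc all
```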

Blog Category: Drupal Planet
Categories: FLOSS Project Planets

Steve Kemp: Some brief updates

Planet Debian - Thu, 2014-05-08 08:28

Some brief notes, between tourist-moments.

Temporary file races

I reported some issues against the lisp that is bundled with GNU Emacs; the only one of any significance relates to the fall-back uudecode option supported by tramp.el.

(tramp allows you to edit files remotely, it is awesome.)

Inadvertently, I seem to have received a CVE identifier referring to the Mosaic web-browser. Damn. That's an old name now.

Image tagging

A while back I wrote about options for tagging/finding images in large collections.

Taking a step back I realized that I mostly file images in useful hierarchies:

Images/People/2014/
Images/People/2014/01/
Images/People/2014/01/03-Heidi/{ RAW JPG thumbs }
Images/People/2014/01/13-Hanna/{ RAW JPG thumbs }
..

On that basis I just dropped a .meta file in each directory with brief notes. e.g:

name = Jasmine XXX
location = Leith, Edinburgh
source = modelmayhem
theme = umbrella, rain, water
contact = 0774xxxxxxx

Then I wrote a trivial perl script to find *.meta - allowing me to create IMAGE_123.CR2.meta too - and the job was done.
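The script itself can be close to a one-liner; here is a shell sketch of the same idea (hypothetical paths and metadata, not Steve's actual perl):

```shell
# Build a tiny image tree with a directory-level .meta file,
# then find every .meta and list the files matching a search term.
mkdir -p /tmp/Images/People/2014/01/03-Heidi
cat > /tmp/Images/People/2014/01/03-Heidi/.meta <<'EOF'
name = Heidi
theme = umbrella, rain, water
EOF
# find matches both directory-level .meta and per-image *.CR2.meta files
find /tmp/Images -name '*.meta' -print0 |
  xargs -0 grep -l 'umbrella'
```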

Graphical Applications

I'm currently gluing parts of Gtk + Lua together, which is an experiment to see how hard it is to create a flexible GUI mail client. (yeah.)

So far it's easy if I restrict the view to three panes, but I'm wondering if I can defer that and let the user handle the layout 100%. I suspect "not easily".

We'll see, since I'm not 100% sold on the idea of a GUI mail client in the first place. Still it is a diversion.


I actually find myself looking forward to my next visit which is .. interesting?

Categories: FLOSS Project Planets

James Strachan: Micro Services the easy way with Fabric8

Planet Apache - Thu, 2014-05-08 06:51
Micro Services have received a lot of discussion of late. While it's easy to argue about the exact meaning of the term, it's hard to deny there's a clear movement in the Java ecosystem towards micro services: using smaller, lighter weight and isolated micro service processes instead of putting all your code into monolithic application servers, with approaches like DropWizard, Spring Boot and Vert.x. There are lots of benefits to the micro services approach, particularly considering DevOps and the cloud.
Fabric8 is a poly app server

On the fabric8 project we're very application server agnostic; there are lots of pros and cons to using different application servers. The ideal choice often comes down to your requirements and your team's knowledge and history. Folks tend to prefer sticking with the application servers they know and love (or that their operations folks are happy managing) rather than switching, as every application server takes time to learn.

OSGi is very flexible, modular and standards based, but the modularity has a cost in terms of learning and development time. (OSGi is kind of a marmite technology; you tend to either love it or hate it ;). Servlet engines like Tomcat are really simple, but with very limited modularity. Then, for those that love JEE, there's WildFly and TomEE.

In the fabric8 project we initially started supporting OSGi for managing things like JBoss Fuse, Apache Karaf, and Apache ServiceMix. If you use version 1.0.x of fabric8 (which is included in JBoss Fuse 6.1) then OSGi is the only application server model supported.

However in 1.1.x we're working hard on support for Apache Tomcat, Apache TomEE and WildFly as first class citizens in fabric8; so you can pick whichever application server model you and your team prefer; including using a mixture of container types for different services.
Fabric8 1.1.0.Beta5 now supports Java Containers

I'm personally really excited about the new Java Container capability, available in version 1.1.0.Beta5 or later of fabric8, which lets you easily provision and manage Java based micro services.

A Java Container is an alternative to using an application server; it is literally the java process run with a classpath and main class you specify. So there's no mandated application server or libraries.

With pretty much all application servers you're going to hit class loader issues at some point; with the Java Container in fabric8, it's a simple flat class loader that you fully control. Simples! If things work in maven and your tests, they work in the Java Container (*), since it's the same classpath - a flat list of jars.
(*) provided you don't include duplicate classes in different jars, where the order of the jars in the classpath can cause issues - but that's easy to check for in your build.

The easiest, simplest thing that could possibly work as an application developer is just using a simple flat class loader, i.e. using the java process on the command line like this:
java -cp "lib/*" $MAINCLASS

This then means that each micro service is a separate isolated container (an operating system process) with its own class path, so it's easy to monitor and perform incremental upgrades of dependencies without affecting other containers; this makes version upgrades a breeze.

However, the problem with micro services is managing the deployment of all these java processes: starting them, stopping them, managing them, having nice tooling to view what's happening, and performing rolling upgrades of changes. That's where fabric8 comes in! The easiest way to see it is via a demo...
Demo

Here's a screencast I just recorded to show how easy it is to work with any Java project that has a maven build and a Java main function (a static main(String[] args) function to bootstrap the Java code). I use an off-the-shelf Apache Camel and Spring example, but really any Java project with an executable jar or main class would do.

For more background on how to use this and how all this works check the documentation on the Java Container and Micro Services in Fabric8.

Please take it for a spin and tell us how you get on. We love feedback and contributions!

Categories: FLOSS Project Planets

Riccardo Mottola: DataBasin: advanced SObject describe

GNU Planet! - Thu, 2014-05-08 06:50
I enhanced DataBasin's object describe. First, I now parse the RecordTypeInfo, but that was the easy part. Salesforce.com returns only the record type ID and name, but programmatically this is not very useful. The important bit would be the Developer Name of each record type.

I enhanced the results by automatically querying the RecordType table, extracting the Developer Name, matching it via the record type Id, and merging the results back, so that the resulting DBSObject has complete Record Type information, totally transparently.
Categories: FLOSS Project Planets


LinuxPlanet - Tue, 2014-05-06 22:27
Lostnbronx buys a quadcopter, and thinks about community.
Categories: FLOSS Project Planets


LinuxPlanet - Tue, 2014-05-06 22:27
Lostnbronx is sick of everybody, and they're probably sick of him.
Categories: FLOSS Project Planets

Disable / Password Protect Single User Mode / RHEL / CentOS / 5.x / 6.x

LinuxPlanet - Mon, 2014-05-05 00:15
Hello all! If you have not protected single user mode with a password, it is a big risk for your Linux server, so password-protecting single user mode is very important when it comes to security. Today in this article I will show you how you can protect single user mode with a password on RHEL […]
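On RHEL/CentOS 6.x the usual change is in /etc/sysconfig/init, and on 5.x it is an /etc/inittab entry. To the best of my knowledge the relevant lines look like this (verify against your distribution's documentation before relying on it):

```
# RHEL/CentOS 6.x - /etc/sysconfig/init
# change the single-user shell from sushell to sulogin:
SINGLE=/sbin/sulogin

# RHEL/CentOS 5.x - add to /etc/inittab:
~~:S:wait:/sbin/sulogin
```

With sulogin in place, booting to single user mode prompts for the root password instead of dropping straight to a shell.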
Categories: FLOSS Project Planets

Separating a pdf into single pages using pdfseparate in linux

LinuxPlanet - Fri, 2014-05-02 06:24
pdfseparate: a tool that can be used to split a pdf document into individual pages, or to extract a set of pages from a pdf document as individual files.

Let us say we have a document named hello.pdf that has 100 pages, and we need to extract pages 20 to 22 as individual pages. We can use pdfseparate to achieve this.

$ pdfseparate -f 20 -l 22 hello.pdf foo%d.pdf

After the execution of the command, we should have three files, foo20.pdf, foo21.pdf and foo22.pdf, which are the 20th, 21st and 22nd pages of the hello.pdf document (pdfseparate substitutes the page number for %d in the output pattern).

If pdfseparate is not available, we need to install the package poppler-utils; on Debian based systems:

$ sudo apt-get install poppler-utils

Categories: FLOSS Project Planets