Planet Apache


Bryan Pendleton: Those were the days ...

Mon, 2017-06-26 20:53

A dear colleague of mine tracked down this wonderful picture of me with one of my favorite engineering teams:

How wonderful it was to have such brilliant co-workers to learn from!

If, in some magical way, 56-year-old Bryan could go back 23 years and talk to 33-year-old Bryan, what would I say?

Maybe something like:

Pay attention, keep listening, and work hard: you've still got a LOT left to learn.

Of course, that's just as true now.

Hmmm, maybe I just got some words of wisdom from 79-year-old Bryan, far off in the future?

As for the picture, as I recall, we were looking for a theme for a team picture, and Nat and Brian both happened to be wearing their leather coats that day, and so somebody (Rich? Ken?) suggested we all get our coats and sunglasses and "look tough". So we did...

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-26

Mon, 2017-06-26 19:58
Categories: FLOSS Project Planets

Colm O hEigeartaigh: Securing Apache Solr - part I

Mon, 2017-06-26 05:46
This is the first post in a series of articles on securing Apache Solr. In this post we will look at deploying an example SolrCloud instance and securing access to it via basic authentication.

1) Install and deploy a SolrCloud example

Download and extract Apache Solr (6.6.0 was used for the purpose of this tutorial). Now start SolrCloud via:
  • bin/solr -e cloud
Accept all of the default options. This creates a cluster of two nodes, with a collection "gettingstarted" split into two shards and two replicas per shard. A web interface is available after startup at: http://localhost:8983/solr/.

Once the cluster is up and running we can post some data to the collection we have created via the REST interface:
  • curl http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book1", "title_t" : "The Merchant of Venice", "author_s" : "William Shakespeare"}]'
  • curl http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book2", "title_t" : "Macbeth", "author_s" : "William Shakespeare"}]'
  • curl http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book3", "title_t" : "Death of a Salesman", "author_s" : "Arthur Miller"}]'
We can then query the REST interface to, for example, return all entries by William Shakespeare:
  • curl http://localhost:8983/solr/gettingstarted/query?q=author_s:William+Shakespeare
2) Authenticating users to our SolrCloud instance

Now that our SolrCloud instance is up and running, let's look at how we can secure access to it by using HTTP Basic Authentication to authenticate our REST requests. Download the following security configuration, which enables Basic Authentication in Solr:
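The linked file isn't reproduced in this syndicated copy, but as a rough sketch it looks something like the following (the credential strings are placeholders for Solr's salted SHA-256 password hashes, not literal values):

    {
      "authentication": {
        "blockUnknown": true,
        "class": "solr.BasicAuthPlugin",
        "credentials": {
          "alice": "<base64 sha256 hash> <base64 salt>",
          "bob": "<base64 sha256 hash> <base64 salt>"
        }
      },
      "authorization": {
        "class": "solr.RuleBasedAuthorizationPlugin"
      }
    }

Save it locally as "security.json".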
Two users are defined - "alice" and "bob" - both with password "SolrRocks". Now upload this configuration to the Apache Zookeeper instance that is running with Solr:
  • server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json
Now try to run the search query above again using curl. A 401 error will be returned. Once we specify the correct credentials, the request works as expected, e.g.:
  • curl -u alice:SolrRocks http://localhost:8983/solr/gettingstarted/query?q=author_s:Arthur+Miller
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-24

Sat, 2017-06-24 19:58
Categories: FLOSS Project Planets

Bruce Snyder: Annual Spinal Cord Injury Re-evaluation

Fri, 2017-06-23 12:42
Recently I went back to Craig Hospital for an annual spinal cord injury re-evaluation and the results were very positive. It was really nice to see some familiar faces of the people for whom I have such deep admiration, like my doctors, physical therapists and administrative staff. My doctor and therapists were quite surprised to see how well I am doing, especially given that I'm still seeing improvements three years later, mainly because so many spinal cord injury patients have serious issues even years later. I am so lucky to no longer be taking any medications and to be walking again.
It has also been nearly one year since I was last back at Craig Hospital and it seems like such a different place to me now. Being back there again feels odd for a couple of reasons. First, due to the extensive construction/remodel, the amount of change makes the hospital seem like a different place entirely. It used to be much smaller, which encouraged closer interaction between patients and staff. Now the place is so big (i.e., big hallways, larger individual rooms, etc.) that patients can have more privacy if they want, or even avoid some forms of interaction. Second, although I am comfortable being around so many folks who have been so severely injured (not everyone is), I have noticed that some folks are confused by me. I can tell by the way they look at me that they are wondering what I am doing there because, outwardly, I do not appear as someone who has experienced a spinal cord injury. I have been lucky enough to make it out of the wheelchair and to walk on my own. Though my feet are still paralyzed, I wear flexible, carbon fiber AFO braces on my legs and walk with one arm crutch; the braces are covered by my pants, so it's puzzling to many people.
The folks I wish I could see more of are the nurses and techs. These are the folks who helped me the most when I was so vulnerable and confused, and to whom I grew very attached. To understand just how attached: simply moving to a more independent room as I was getting better was upsetting to me. I learned that these people are cut from a unique cloth and possess very big hearts to do the work they do every day. Because they are so involved with the acute care of in-patients, they are very busy during the day and not available for much socializing as past patients come through. Luckily, I ran into one of my nurses and was able to spend some time speaking with him. I really enjoyed catching up with him and hearing about new adventures in his career. He was one of the folks I was attached to at the time and he really made a difference in my experience. I will be eternally thankful for having met these wonderful people during such a traumatic time in my life.
Today I am walking nearly 100% of the time with the leg braces and have been for over two years. I am working to rebuild my calves and my glutes, but this is a very, very long and slow process due to severe muscle atrophy after not being able to move my glutes for five months and my calves for two years. Although my feet are not responding yet, we will see what the future holds. I still feel so very lucky to be alive and continuing to make progress.
Although I cannot run at all or cycle the way I did previously, I am very thankful to be able to work out as much as I can. I am now riding the stationary bike regularly, using my Total Gym (yes, I have a Chuck Norris Total Gym) to build my calves, using a Bosu to work on balance and strength in my lower body, doing ab roller workouts and walking as much as I can both indoors on a treadmill and outside. I'd like to make time for swimming laps again, but all of this can be time consuming (and tiring!). I am not nearly as fit as I was at the time of my injury, but I continue to work hard and to see noticeable improvements for which I am truly thankful.
Thank you to everyone who continues to stay in touch and check in on me from time to time. You may not think it's much to send a quick message, but these messages have meant a lot to me through this process. The support from family and friends is what has truly kept me going. The patience displayed by Bailey, Jade and Janene is pretty amazing.
Next month will mark the three year anniversary of my injury. It seems so far away and yet it continues to affect my life every day. My life will never be the same, but I do believe I have found peace with this entire ordeal.
Categories: FLOSS Project Planets

Colm O hEigeartaigh: SSO support for Apache Syncope REST services

Fri, 2017-06-23 11:32
Apache Syncope has recently added SSO support for its REST services in the 2.0.3 release. Previously, access to the REST services of Syncope was via HTTP Basic Authentication. From the 2.0.3 release, SSO support is available using JSON Web Tokens (JWT). In this post, we will look at how this works and how it can be configured.

1) Obtaining an SSO token from Apache Syncope

As stated above, in the past it was necessary to supply HTTP Basic Authentication credentials when invoking on the REST API. Let's look at an example using curl. Assume we have a running Apache Syncope instance with a user "alice" with password "ecila". We can make a GET request to the user self service via:
  • curl -u alice:ecila http://localhost:8080/syncope/rest/users/self
It may be inconvenient to supply user credentials on each request, or the authentication process might not scale very well if we are authenticating the password against a backend resource. From Apache Syncope 2.0.3, we can instead get an SSO token by sending a POST request to "accessTokens/login" as follows:
  • curl -I -u alice:ecila -X POST http://localhost:8080/syncope/rest/accessTokens/login
The response contains two headers:
  • X-Syncope-Token: A JWT token signed according to the JSON Web Signature (JWS) spec.
  • X-Syncope-Token-Expire: The expiry date of the token
The token in question is signed using the (symmetric) "HS512" algorithm. It contains the subject "alice" and the issuer of the token ("ApacheSyncope"), as well as a random token identifier, and timestamps that indicate when the token was issued, when it expires, and when it should not be accepted before.
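Decoded, the payload of such a token looks something like this (a sketch with illustrative values; note the expiry here is 120 minutes after issuance, matching the default lifetime mentioned below):

    {
      "iss": "ApacheSyncope",
      "sub": "alice",
      "jti": "b2d7a7f0-5a2b-4c1e-9c3f-1d2e3f4a5b6c",
      "iat": 1498472400,
      "nbf": 1498472400,
      "exp": 1498479600
    }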

The signing key and the issuer name can be changed by editing 'security.properties' and specifying new values for 'jwsKey' and 'jwtIssuer'. Please note that it is critical to change the signing key from the default value! It is also possible to change the signature algorithm from the next 2.0.4 release via a custom 'securityContext.xml' (see here). The default lifetime of the token (120 minutes) can be changed via the "jwt.lifetime.minutes" configuration property for the domain.
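As a sketch, the relevant 'security.properties' entries look like this (the key value is a placeholder; generate your own random secret rather than shipping the default):

    jwsKey=<your-own-random-secret>
    jwtIssuer=ApacheSyncope

The token lifetime, by contrast, is the "jwt.lifetime.minutes" property in the domain configuration, as noted above.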

2) Using the SSO token to invoke on a REST service

Now that we have an SSO token, we can use it to invoke on a REST service instead of specifying our username and password as before. For Syncope 2.0.3 only, the token is passed in the same "X-Syncope-Token" header as above. From Syncope 2.0.4 onwards, the standard "Authorization: Bearer <token>" header is used, e.g.:
  • curl -H "Authorization: Bearer eyJ0e..." http://localhost:8080/syncope/rest/users/self
The signature is first checked on the token, then the issuer is verified so that it matches what is configured, and then the expiry and not-before dates are checked. If the identifier matches that of a saved access token then authentication is successful.
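For illustration, the same chain of checks can be expressed with a generic JWT library. This is a sketch using the jjwt library, not necessarily what Syncope does internally; the key and token values are placeholders:

    import io.jsonwebtoken.Claims;
    import io.jsonwebtoken.Jws;
    import io.jsonwebtoken.Jwts;

    public class TokenCheck {
        public static void main(String[] args) {
            byte[] jwsKey = "<the-configured-jwsKey>".getBytes();
            String token = "<the token returned at login>";
            // One call verifies the signature, the expiry and not-before dates,
            // and requires the configured issuer:
            Jws<Claims> jws = Jwts.parser()
                    .setSigningKey(jwsKey)
                    .requireIssuer("ApacheSyncope")
                    .parseClaimsJws(token);
            // The "jti" claim is what gets matched against saved access tokens:
            System.out.println("Token id to look up: " + jws.getBody().getId());
        }
    }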

Finally, SSO tokens can be seen in the admin console under "Dashboard/Access Token", where they can be manually revoked by the admin user:


Categories: FLOSS Project Planets

Chiradeep Vittal: Design patterns in orchestrators: transfer of desired state (part 3/N)

Fri, 2017-06-23 01:40

Most datacenter automation tools operate on the basis of desired state. Desired state describes what should be the end state but not how to get there. To simplify a great deal, if the thing being automated is the speed of a car, the desired state may be “60mph”. How to get there (braking, accelerator, gear changes, turbo) isn’t specified. Something (an “agent”) promises to maintain that desired speed.

The desired state and changes to the desired state are sent from the orchestrator to various agents in a datacenter. For example, the desired state may be “two apache containers running on host X”. An agent on host X will ensure that the two containers are running. If one or more containers die, then the agent on host X will start enough containers to bring the count up to two. When the orchestrator changes the desired state to “3 apache containers running on host X”, then the agent on host X will create another container to match the desired state.
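As a minimal sketch of that reconcile behaviour (in Java, with hypothetical helper names; no particular agent's API is implied), the agent-side loop might look like this:

    // Hypothetical agent-side reconcile loop; all names are illustrative.
    public final class ContainerAgent {
        private volatile int desiredCount = 2; // updated when the orchestrator sends new desired state

        // Called when the orchestrator communicates a new desired state.
        public void onDesiredState(int count) {
            this.desiredCount = count;
        }

        // Converge the actual state toward the desired state, forever.
        public void reconcileForever() throws InterruptedException {
            while (true) {
                int actual = countRunningApacheContainers();
                int desired = desiredCount;
                for (int i = actual; i < desired; i++) startApacheContainer(); // too few: start more
                for (int i = actual; i > desired; i--) stopApacheContainer();  // too many: stop some
                Thread.sleep(5_000); // periodic re-check also absorbs drift, e.g. crashed containers
            }
        }

        // Stubs standing in for calls to a real container runtime.
        private int countRunningApacheContainers() { return 2; }
        private void startApacheContainer() { }
        private void stopApacheContainer() { }

        public static void main(String[] args) throws InterruptedException {
            ContainerAgent agent = new ContainerAgent();
            agent.onDesiredState(3); // the orchestrator raises the desired count from 2 to 3
            agent.reconcileForever();
        }
    }

Note that the loop never needs to know how the state drifted; it only compares actual to desired.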

Transfer of desired state is another way to achieve idempotence (a problem described here).

We can see that there are two sources of changes that the agent has to react to:

  1. changes to desired state sent from the orchestrator and
  2. drift in the actual state due to independent / random events.

Let's examine #1 in greater detail. There are a few ways to communicate the change in desired state:

  1. Send the new desired state to the agent (a "command" pattern). This approach works most of the time, except when the size of the state is very large. For instance, consider an agent responsible for storing a million objects. Deleting a single object would involve sending the whole desired state (999999 items). Another problem is that the command may not reach the agent ("the network is not reliable"). Finally, the agent may not be able to keep up with the rate of change of desired state and start to drop some commands.  To fix this issue, the system designer might be tempted to run more instances of the agent; however, this usually leads to race conditions and out-of-order execution problems.
  2. Send just the delta from the previous desired state. This is fraught with problems. This assumes that the controller knows for sure that the previous desired state was successfully communicated to the agent, and that the agent has successfully implemented the previous desired state. For example, if the first desired state was “2 running apache containers” and the delta that was sent was “+1 apache container”, then the final actual state may or may not be “3 running apache containers”. Again, network reliability is a problem here. The rate of change is an even bigger potential problem here: if the agent is unable to keep up with the rate of change, it may drop intermediate delta requests. The final actual state of the system may be quite different from the desired state, but the agent may not realize it! Idempotence in the delta commands helps in this case.
  3. Send just an indication of change (“interrupt”). The agent has to perform the additional step of fetching the desired state from the controller. The agent can compute the delta and change the actual state to match the delta. This has the advantage that the agent is able to combine the effects of multiple changes (“interrupt debounce”). By coalescing the interrupts, the agent is able to limit the rate of change. Of course the network could cause some of these interrupts to get “lost” as well. Lost interrupts can cause the actual state to diverge from the desired state for long periods of time. Finally, if the desired state is very large, the agent and the orchestrator have to coordinate to efficiently determine the change to the desired state.
  4. The agent could poll the controller for the desired state. There is no problem of lost interrupts; the next polling cycle will always fetch the latest desired state. The polling rate is critical here: if it is too fast, it risks overwhelming the orchestrator even when there are no changes to the desired state; if too slow, it will not converge the actual state to the desired state quickly enough.

To summarize the potential issues:

  1. The network is not reliable. Commands or interrupts can be lost, or agents can restart / disconnect: there has to be some way for the agent to recover the desired state.
  2. The desired state can be prohibitively large. There needs to be some way to efficiently but accurately communicate the delta to the agent.
  3. The rate of change of the desired state can strain the orchestrator, the network and the agent. To preserve the stability of the system, the agent and orchestrator need to coordinate to limit the rate of change and the polling rate, and to execute the changes in the proper linear order.
  4. Only the latest desired state matters. There has to be some way for the agent to discard all the intermediate (“stale”) commands and interrupts that it has not been able to process.
  5. Delta computation (the difference between two consecutive sets of desired state) can sometimes be more efficiently performed at the orchestrator, in which case the agent is sent the delta. Loss of the delta message or reordering of execution can lead to irrecoverable problems.

A persistent message queue can solve some of these problems. The orchestrator sends its commands or interrupts to the queue and the agent reads from the queue. The message queue buffers commands or interrupts while the agent is busy processing a desired state request.  The agent and the orchestrator are nicely decoupled: they don’t need to discover each other’s location (IP/FQDN). Message framing and transport are taken care of (no more choosing between Thrift or text or HTTP or gRPC etc).

There are tradeoffs however:

  1. With the command pattern, if the desired state is large, then the message queue could reach its storage limits quickly. If the agent ends up discarding most commands, this can be quite inefficient.
  2. With the interrupt pattern, a message queue is not adding much value since the agent will talk directly to the orchestrator anyway.
  3. It is not trivial to operate / manage / monitor a persistent queue. Messages may need to be aggressively expired / purged, and the promise of persistence may not actually be realized. Depending on the scale of the automation, this overhead may not be worth the effort.
  4. With an “at most once” message queue, it could still lose messages. With  “at least once” semantics, the message queue could deliver multiple copies of the same message: the agent has to be able to determine if it is a duplicate. The orchestrator and agent still have to solve some of the end-to-end reliability problems.
  5. Delta computation is not solved by the message queue.

OpenStack (using RabbitMQ) and CloudFoundry (using NATS) have adopted message queues to communicate desired state from the orchestrator to the agent.  Apache CloudStack doesn’t have any explicit message queues, although if one digs deeply, there are command-based message queues simulated in the database and in memory.

Others solve the problem with a combination of interrupts and polling – interrupt to execute the change quickly, poll to recover from lost interrupts.

Kubernetes is one such framework. There are no message queues, and it uses an explicit interrupt-driven mechanism to communicate desired state from the orchestrator (the “API Server”) to its agents (called “controllers”).

(Image courtesy: https://blog.heptio.com/core-kubernetes-jazz-improv-over-orchestration-a7903ea92ca)

Developers can use (but are not forced to use) a controller framework to write new controllers. An instance of a controller embeds an “Informer” whose responsibility is to watch for changes in the desired state and execute a controller function when there is a change. The Informer takes care of caching the desired state locally and computing the delta state when there are changes. The Informer leverages the “watch” mechanism in the Kubernetes API Server (an interrupt-like system that delivers a network notification when there is a change to a stored key or value). The deltas to the desired state are queued internally in the Informer’s memory. The Informer ensures the changes are executed in the correct order.

  • Desired states are versioned, so it is easier to decide to compute a delta, or to discard an interrupt.
  • The Informer can be configured to do a periodic full resync from the orchestrator (“API Server”) – this should take care of the problem of lost interrupts.
  • Apparently, there is no problem of the desired state being too large, so Kubernetes does not explicitly handle this issue.
  • It is not clear if the Informer attempts to rate-limit itself when there are excessive watches being triggered.
  • It is also not clear if at some point the Informer “fast-forwards” through its queue of changes.
  • The watches in the API Server use Etcd watches in turn. The watch server in the API server only maintains a limited set of watches received from Etcd and discards the oldest ones.
  • Etcd itself is a distributed data store that is more complex to operate than say, an SQL database. It appears that the API server hides the Etcd server from the rest of the system, and therefore Etcd could be replaced with some other store.

I wrote a Network Policy Controller for Kubernetes using this framework and it was the easiest integration I’ve written.

It is clear that the Kubernetes creators put some thought into the architecture, based on their experiences at Google. The Kubernetes design should inspire other orchestrator-writers, or perhaps, should be re-used for other datacenter automation purposes. A few issues to consider:

  • The agents ("controllers") need direct network reachability to the API Server. This may not be possible in all scenarios, requiring another level of indirection.
  • The API server is not strictly an orchestrator; it is better described as a choreographer. I hope to describe this difference in a later blog post, but note that the API server never explicitly carries out a step-by-step flow of operations.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-22

Thu, 2017-06-22 19:58
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-21

Wed, 2017-06-21 19:58
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-20

Tue, 2017-06-20 19:58
Categories: FLOSS Project Planets

Bryan Pendleton: All Over the Place: a very short review

Tue, 2017-06-20 16:52

Is it possible that I am the first person to tell you about Geraldine DeRuiter's new book: All Over the Place: Adventures in Travel, True Love, and Petty Theft?

If I am, then yay!

For this is a simply wonderful book, and I hope everybody finds out about it.

As anyone who has read the book (or has read her blog) knows, DeRuiter can be just screamingly funny. More than once I found myself causing a distraction on the bus, as my fellow riders wondered what was causing me to laugh out loud. Many books are described as "laugh out loud funny," but DeRuiter's book truly is.

Much better, though, are the parts of her book that aren't the funny parts, for it is here where her writing skills truly shine.

DeRuiter is sensitive, perceptive, honest, and caring. Best of all, however, is that she is able to write about affairs of the heart in a way that is warm and generous, never cloying or cringe-worthy.

So yes: you'll laugh, you'll cry, you'll look at the world with fresh new eyes. What more could you want from a book?

All Over the Place is all of that.

Categories: FLOSS Project Planets

Rich Bowen: Software Morghulis

Tue, 2017-06-20 03:36

In George R R Martin’s books “A Song of Ice and Fire” (which you may know by the name “A Game of Thrones”), the people of Braavos have a saying – “Valar Morghulis” – which means “All men must die.” As you follow the story, you quickly realize that this statement is not made in a morbid, or defeatist sense, but reflects on what we must do while alive so that the death, while inevitable, isn’t meaningless. Thus, the traditional response is “Valar Dohaeris” – all men must serve – to give meaning to their life.

So it is with software. All software must die. And this should be viewed as a natural part of the life cycle of software development, not as a blight, or something to be embarrassed about.

Software is about solving problems – whether that problem is calculating launch trajectories, optimizing your financial investments, or entertaining your kids. And problems evolve over time. In the short term, this leads to the evolution of the software solving them. Eventually, however, it may lead to the death of the software. It’s important what you choose to do next.

You win, or you die

One of the often-cited advantages of open source is that anybody can pick up a project and carry it forward, even if the original developers have given up on it. While this is, of course, true, the reality is more complicated.

As we say at the Apache Software Foundation, “Community > Code”. Which is to say, software is more than just lines of source code in a text file. It’s a community of users, and a community of developers. It’s documentation, tutorial videos, and local meetups. It’s conferences, business deals and interpersonal relationships. And it’s real people solving real-world problems, while trying to beat deadlines and get home to their families.

So, yes, you can pick up the source code, and you can make your changes and solve your own problems – scratch your itch, as the saying goes. But a software project, as a whole, cannot necessarily be kept on life support just because someone publishes the code publicly. One must also plan for the support of the ecosystem that grows up around any successful software project.

Eric Raymond just recently released the source code for the 1970s computer game Colossal Cave Adventure on Github. This is cool, for us greybeard geeks, and also for computer historians. It remains to be seen whether the software actually becomes an active open source project, or if it has merely moved to its final resting place.

The problem that the software solved – people want to be entertained – still exists, but that problem has greatly evolved over the years, as new and different games have emerged, and our expectations of computer games have radically changed. The software itself is still an enjoyable game, and has a huge nostalgia factor for those of us who played it on greenscreens all those years ago. But it doesn’t measure up to the alternatives that are now available.

Software Morghulis. Not because it’s awful, but because its time has passed.

Winter is coming

The words of the house of Stark in “A Song of Ice and Fire” are “Winter is coming.” As with “Valar Morghulis,” this is about planning ahead for the inevitable, and not being caught surprised and unprepared.

How we plan for our own death, with insurance, wills, and data backups, isn’t morbid or defeatist. Rather, it is looking out for those that will survive us. We try to ensure continuity of those things which are possible, and closure for those things which are not.

Similarly, planning ahead for the inevitable death of a project isn’t defeatist. Rather, it shows concern for the community. When a software project winds down, there will often be a number of people who will continue to use it. This may be because they have built a business around it. It may be because it perfectly solves their particular problem. And it may be that they simply can’t afford the time, or cost, of migrating to something else.

How we plan for the death of the project prioritizes the needs of this community, rather than focusing merely on the fact that we, the developers, are no longer interested in working on it, and have moved on to something else.

At Apache, we have established the Attic as a place for software projects to come to rest once the developer community has dwindled. While a project’s community may reach a point where it can no longer adequately shepherd the project, the Foundation as a whole still has a responsibility to the users, companies, and customers who rely on the software itself.

The Apache Attic provides a place for the code, downloadable releases, documentation, and archived mailing lists, for projects that are no longer actively developed.

In some cases, these projects are picked up and rejuvenated by a new community of developers and users. However, this is uncommon, since there’s usually a very good reason that a project has ceased operation. In many cases, it’s because a newer, better solution has been developed for the problem that the project solved. And in many cases, it’s because, with the evolution of technology, the problem is no longer important to a large enough audience.

However, if you do rely on a particular piece of software, you can rely on it always being available there.

The Attic does not provide ongoing bug fixes or make additional releases. Nor does it make any attempt to restart communities. It is merely there, like your grandmother’s attic, to provide long-term storage. And, occasionally, you’ll find something useful and reusable as you’re looking through what’s in there.

Software Dohaeris

The Apache Software Foundation exists to provide software for the public good. That’s our stated mission. And so we must always be looking out for that public good. One critical aspect of that is ensuring that software projects are able to provide adequate oversight, and continuing support.

One measure of this is that there are always (at least) three members of the Project Management Committee (PMC) who can review commits, approve releases, and ensure timely security fixes. And when that’s no longer the case, we must take action, so that the community depending on the code has clear and correct expectations of what they’re downloading.

In the end, software is a tool to accomplish a task. All software must serve. When it no longer serves, it must die.

Categories: FLOSS Project Planets

Shawn McKinney: 2017 Dirty Kanza Finish Line

Tue, 2017-06-20 01:14

Note: This post is about my second Dirty Kanza 200 experience on June 3, 2017.

It’s broken into seven parts:

Part I – Prep / Training

Part II – Preamble

Part III – Starting Line

Part IV – Checkpoint One

Part V – Checkpoint Two

Part VI – Checkpoint Three

Part VII – Finish Line

Regroup

I went looking for Derrick but couldn’t find him.  A woman, who I found out later was his wife…

“Are you John?” she asked.

I replied with my name and didn’t make the connection.  I’d forgotten the color of his support team and he got my name wrong so that made us even.

He caught up ten miles later, by then chasing the fast chicks.  I called out as they zoomed past, wished them well.  This is how it works.  Alliances change according to the conditions and needs from one moment to the next.

A lone rider stopped at the edge of downtown — Rick from Dewitt, Arkansas.  He was ready for takeoff.

“You headed out, how bout we team up?”  I asked matter-of-factly.  The deal was struck and then there were two.

Eventually, maybe twenty miles later, we picked up Jeremy, which made three.  It worked pretty well.  Not much small talk, but lots of operational chatter.  You’d have thought we were out on military maneuvers.

  • “Rocks on left.”
  • “Mud — go right!”
  • “Off course, turning around.”
  • “Rough! Slowing!”

There were specializations.  For example, Jeremy was the scout.  His bike had fat tires and so he’d bomb the downhills, call back to us what he saw, letting us know of the dangers.  Rick did most of the navigating.  I kept watch on time, distance and set the pace.

By this time we were all suffering and made brief stops every ten miles or so.  We’d agreed that it was OK, had plenty of time, and weren’t worried.

Caught up with Derrick six miles from home.  Apparently he couldn’t keep up with the fast chicks either, but gave it the college try, and we had a merry reunion.

We rolled over the finish line somewhat past 2:00 am.

Rick and I crossing the FL

Here’s the official video feed:

https://results.chronotrack.com/athlete/index/e/29334039

And the unofficial one:

My support team was there along with a smattering of hearty locals to cheer us and offer congratulations.

Jeremy, Rick and I had a brief moment where we congratulated each other before LeLan handed over our Breakfast Club finishers patches and I overheard Rick in his southern drawl…

“I don’t care if it does say breakfast club on there.”

Next were the hugs and pictures with my pit crew and I was nearly overcome with emotion.  Felt pretty good about the finish and I don’t care if it says breakfast club on there either.

The Pit Crew, l to r, Me, Gregg, Kelly, Janice, Cheri, Kyle

Acknowledgements

In addition to my pit crew…

My wife Cindy deserves most of the credit.  She bought the bike four years ago that got me all fired up again about cycling.  Lots of times when I’m out there riding I should be home working.  Throughout all of this she has continued to support me without complaint.  Thanks baby, you’re the best, I love you.

Next are the guys at the bike shop — Arkansas Cycle and Fitness, my support team back home in Little Rock.  They tolerate my abysmal mechanical abilities, patiently listen to requirements, and teach when need be (often).  Time and again the necessary adjustments were made to correct the issues I was having with the bike.  They’ve encouraged and cheered, offered suggestions on routes, tactics, training, nutrition, hydration and everything else related to the sport of endurance cycling.

Finally, my cycling buddies — the Crackheads.  Truth be known, they’re probably more trail runners than cyclists, but they’re incredible athletes, from whom I’ve learned much about training for these types of endurance events.  In the summertime, when the skeeters and chiggers get too bad for Arkansas trail running, they come out and ride, which makes me happy.

The End


Categories: FLOSS Project Planets

Bryan Pendleton: Ghost Ship report released

Tue, 2017-06-20 01:09

The Oakland Fire Department has released their official report on last December's Ghost Ship Fire: Origin and Cause Report: Incident # 2016-085231.

The report is long, detailed, thorough, and terribly, terribly sad.

It is vividly illustrated with many pictures, which are simultaneously fascinating and heart-breaking.

In the end, the report accepts the limits of what is known:

No witnesses to the incipient stage of the fire were located. Based on witness statements and analysis of fire patterns, an area of origin was identified in the northwest area of the ground floor of the warehouse. In support of this hypothesis, fire patterns and fire behavior were considered, including ventilation effects of door openings, and the fuel load, consisting of large amounts of non-traditional building materials. This analysis was challenged with alternate hypotheses of fire origins, away from, or communicating to an area remote from, the immediate area of fire origin. Several potential ignition sources were considered. No conclusive determination of the initial heat source or the first materials ignited was made. The fire classification is UNDETERMINED.

In their Pulitzer Prize-winning coverage of the fire, At least nine dead, many missing in Oakland warehouse fire, the East Bay Times highlighted the eccentricities of the collective's building, details which are thoroughly corroborated by the OFD's detailed report.

Alemany had advertised on Facebook and Craigslist looking for renters seeking "immediate change and loving revolution," who enjoyed "poetics, dramatics, film, tantric kitten juggling and nude traffic directing." He described it as 10,000 square feet of vintage redwood and antique steel "styled beyond compare."

His 1951 purple Plymouth remained parked Saturday in front of the building that burned so hot, the “Ghost Ship” letters painted across the front had all but melted away.

"They are ex-Burning Man people and had their kids in the place -- three kids running around with no shoes," said DeL Lee, 34, who lived there for three months two years ago. "It was nuts."

He described the place as a filthy firetrap, with frequent power outages, overloaded outlets, sparks and the smell of burning wire. A camping stove with butane tanks served as the kitchen, and a hole had been chiseled through the concrete wall to access the bathroom at the adjoining automotive repair shop next door.

The staircase, which had two switchbacks to get to the second floor, was built of pallets, plywood and footholds -- like a ship’s gangplank -- and was like "climbing a fort" to get up and down, say people who had visited the building.

Pianos and old couches doubled as room dividers. Pallets covered with shingles and elaborate trim formed sculptural walls. Often, Lee said, the place was filled with the sounds of sawing and hammering as Alemany continued to build.

And the OFD report confirms that chaos:

The front staircase was located along the east wall at the front of the structure. It was constructed of various wooden planks and wooden studs, as well as portions of wooden pallets at its top where it accessed the second floor. One of two bathrooms was elevated slightly off ground level where the lower staircase landing was located. The orientation of the staircase was such that it first led eastward where it bordered the east wall, then turned north (rearward), where it then turned slightly west at the very top of the staircase at the second floor.

As the Mercury News observes, releasing the report is a milestone but it's not the last we'll hear of this; the next steps will involve the courts:

Almena and the collective’s creative director, Max Harris, are charged with 36 counts of involuntary manslaughter in Alameda County Superior Court. Prosecutors charge that the men knowingly created a fire trap and invited the public inside. Lawyers for the two men say their clients are being used as scapegoats and say the building owner Chor Ng should be facing criminal charges.

Ng, Almena and PG&E are named in a wrongful death lawsuit that is in its early stages. The City of Oakland is expected to be named in the suit as soon as this week.

“Of course, we’d like to have them say what the cause of the fire is but we also have our experts and they are not done with their analysis. I don’t think it’s the end of the story,” said Mary Alexander, the lead attorney for the victim’s families.

Meanwhile, as Rick Paulas writes at The Awl, the media attention has already brought many changes to the "artist collective" aspect of the Bay Area and beyond: How The Media Mishandled The Ghost Ship Fire

“It was a tragedy, then it became a tragedy on an entirely different spectrum,” said Friday. “The original was the loss of such vibrant, wonderful people. And that was forgotten and it just became about housing ordinances, and this witch hunt of warehouses.”

The hunt continues, not just in Oakland, but around the country. City governments — rather than focusing efforts on bringing warehouses up to code, of empathizing with tenants forced to live in unsafe conditions due to increasing rents and lower wages, hell, even pragmatically understanding the added value these fringe residents add to a city’s cultural cachet — are simply closing down venues, then moving on to close down the next. Two days after Ghost Ship, the tenants of Baltimore’s iconic Bell Foundry were evicted. A week after that, the punk venue Burnt Ramen in Richmond, CA was shut down. On April 27th, eight people living in an artist collective in San Francisco were evicted.

The cities say they don’t want another Ghost Ship, implying they mean another massive loss of life. But the speed at which they’re performing these evictions — and the lack of solutions they’re offering to those displaced — suggests that what they really don’t want is another Ghost Ship media event: vultures descending, camera lights illuminating the dark corners of their own institutional failures.

It's important that we not forget the Ghost Ship tragedy, but it's a hard story to (re-)read and (re-)consider, with plenty of sadness to go around.

Categories: FLOSS Project Planets

Colm O hEigeartaigh: Querying Apache HBase using Talend Open Studio for Big Data

Mon, 2017-06-19 12:23
Recent blog posts have described how to set up authorization for Apache HBase using Apache Ranger. However, those posts just covered inputting and reading data using the HBase Shell. In this post, we will show how Talend Open Studio for Big Data can be used to read data stored in Apache HBase. This post is along the same lines as other recent tutorials on reading data from Kafka and HDFS.

1) HBase setup

Follow this tutorial on setting up Apache HBase in standalone mode, and creating a 'data' table with some sample values using the HBase Shell.
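For reference, a table along the lines of that tutorial can be created in the HBase Shell roughly as follows (a sketch matching the column families and values used later in this post, not necessarily the exact steps from the earlier tutorial):

    create 'data', 'colfam1', 'colfam2'
    put 'data', 'row1', 'colfam1:col1', 'val1'
    put 'data', 'row1', 'colfam2:col1', 'val2'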

2) Download Talend Open Studio for Big Data and create a job

Now we will download Talend Open Studio for Big Data (6.4.0 was used for the purposes of this tutorial). Unzip the file when it is downloaded and then start the Studio using one of the platform-specific scripts. It will prompt you to download some additional dependencies and to accept the licenses. Create a new job called "HBaseRead". In the search bar on the right-hand side, enter "hbase" and hit enter. Drag "tHBaseConnection" and "tHBaseInput" onto the palette, as well as "tLogRow".

"tHBaseConnection" is used to set up the connection to "HBase", "tHBaseInput" uses the connection to read data from HBase, and "tLogRow" will log the data that was read so that we can see that the job ran successfully. Right-click on "tHBaseConnection" and select "Trigger/On Subjob Ok" and drag the resulting arrow to the "tHBaseInput" component. Now right click on "tHBaseInput" and select "Row/Main" and drag the arrow to "tLogRow".
3) Configure the components

Now let's configure the individual components. Double click on "tHBaseConnection" and select the distribution "Hortonworks" and Version "HDP V2.5.0" (from an earlier tutorial we are using HBase 1.2.6). We are not using Kerberos here so we can skip the rest of the security configuration. Now double click on "tHBaseInput". Select the "Use an existing connection" checkbox. Now hit "Edit Schema" and add two entries, each mapping to the DB column "col1" (type String) in a different column family: "c1" and "c2".


Select "data" for the table name back in tHBaseInput and add a mapping for "c1" to "colfam1", and "c2" to "colfam2".


Now we are ready to run the job. Click on the "Run" tab and then hit the "Run" button. You should see "val1" and "val2" appear in the console window.
Categories: FLOSS Project Planets

Sergey Beryozkin: SwaggerUI in CXF or what Child's Play really means

Mon, 2017-06-19 07:07
For a while we've had an extensive demonstration of how to enable Swagger UI for CXF endpoints returning Swagger documents, but the only 'problem' was that our demos only showed how to unpack a Swagger UI module into a local folder with the help of a Maven plugin and make these unpacked resources available to browsers.
It was not immediately obvious to users how to activate Swagger UI, and with the news from SpringBoot land that it is apparently really easy to do over there, it was time to look at making it easier for CXF users.
So Aki, Andriy and I talked, and this is what CXF 3.1.7 users have to do:

1. Have Swagger2Feature activated to get Swagger JSON returned
2. Add a swagger-ui dependency to the runtime classpath (see the sketch after this list).
3. Access Swagger UI
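As a sketch, the dependency in question is the swagger-ui WebJar (the version below is illustrative; check which one your CXF release expects):

    <dependency>
        <groupId>org.webjars</groupId>
        <artifactId>swagger-ui</artifactId>
        <version>2.2.6</version>
    </dependency>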

For example, run a description_swagger2 demo. After starting a server go to the CXF Services page and you will see:


Click on the link and see a familiar Swagger UI page showing your endpoint's API.

Have you wondered what some developers mean when they say whatever they have done is child's play to try? You'll find it hard to find a better example of it after trying Swagger UI with CXF 3.1.7 :-)

Note that in CXF 3.1.8-SNAPSHOT we have already fixed it to work for Blueprint endpoints in OSGi (with help from Łukasz Dywicki).  The SwaggerUI auto-linking code has also been improved to support some older browsers better.

Besides, CXF 3.1.8 will also offer proper support for Swagger correctly representing multiple JAX-RS endpoints, based on a fix contributed by Andriy and available in Swagger 1.5.10, as well as for the case where the API interface and implementation live in separate (OSGi) bundles (Łukasz figured out how to make that work).

Before I finish let me return to the description_swagger2 demo. Add a cxf-rt-rs-service-description dependency to pom.xml. Start the server and check the services page:


Of course some users do and will continue working with XML-based services, and WADL is the best language available for describing such services. If you click on a WADL link you will see an XML document returned. WADLGenerator can be configured with an XSLT template reference, and if you have a good template you can get UI as good as this Apache Syncope document.

Whatever your data representation preferences are, CXF will get you supported.

 




Categories: FLOSS Project Planets

Aaron Morton: Reaper 0.6.1 released

Sun, 2017-06-18 20:00

Since we created our hard fork of Spotify’s great repair tool, Reaper, we’ve been committed to making it the “de facto” community tool for managing repairs of Apache Cassandra clusters.
This required Reaper to support all versions of Apache Cassandra (starting from 1.2) and to add some features it lacked, like incremental repair.
Another thing we really wanted to bring in was to remove the dependency on a Postgres database to store Reaper data. As Apache Cassandra users, it felt natural to store it in our favorite database.

Reaper 0.6.1

We are happy to announce the release of Reaper 0.6.1.

Apache Cassandra as a backend storage for Reaper was introduced in 0.4.0, but it appeared that it was creating a high load on the cluster hosting its data.
While the Postgres backend could rely on indexes to search efficiently for segments to process, the C* backend had to scan all segments and filter afterwards. The initial data model didn’t account for the frequency of those scans, which generated a lot of requests per second once you had repairs with hundreds (if not thousands) of segments.
Then, it seems, Reaper was designed to work on clusters that do not use vnodes. Computing the number of possible parallel segment repairs for a job used the number of tokens divided by the replication factor, instead of the number of nodes divided by the replication factor.
This led to a lot of overhead, with threads trying and failing to repair segments because the nodes were already involved in a repair operation, each attempt generating a full scan of all segments.
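To make that concrete with the cluster shown in the graphs below (3 nodes, 32 vnodes per node, replication factor 3): tokens divided by RF gives 96 / 3 = 32 supposedly possible parallel repairs, while nodes divided by RF gives the correct answer of 3 / 3 = 1.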

Both issues are fixed in Reaper 0.6.1 with a brand new data model which requires a single query to get all segments for a run, the use of timeuuids instead of long ids (in order to avoid lightweight transactions when generating repair/segment ids), and a fixed computation of the number of possible parallel repairs.
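To illustrate the single-query idea (this is a sketch, not Reaper’s actual schema), partitioning segments by run id lets one CQL query fetch everything for a run, and timeuuids can be generated client-side with no lightweight transaction:

    CREATE TABLE repair_segment (
        run_id      timeuuid,   -- one partition per repair run
        segment_id  timeuuid,   -- generated client-side, no LWT needed
        start_token varchar,
        end_token   varchar,
        state       int,
        PRIMARY KEY (run_id, segment_id)
    );

    SELECT * FROM repair_segment WHERE run_id = ?;  -- all segments for a run, in one query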

The following graph shows the differences before and after the fix, observed on a 3-node cluster using 32 vnodes:

The load on the nodes is now comparable to running Reaper with the memory backend:

This release makes Apache Cassandra a first class citizen as a Reaper backend!

Upcoming features with the Apache Cassandra backend

On top of not having to administer yet another kind of database on top of Apache Cassandra to run Reaper, we can now better integrate with multi region clusters and handle security concerns related to JMX access.

First, the Apache Cassandra backend allows us to start several instances of Reaper instead of just one, making it fault tolerant. Instances will share the work on segments using lightweight transactions, and metrics will be stored in the database. On multi region clusters, where the JMX port is closed in cross DC communications, it will give the opportunity to start one or more instances of Reaper in each region. They will coordinate together through the backend and Reaper will still be able to apply backpressure mechanisms, by monitoring the whole cluster for running repairs and pending compactions.

Next comes the “local mode”, for companies that apply strict security policies to the JMX port and forbid all remote access. In this specific case, a new parameter was added to the configuration yaml file to activate local mode, and you will need to start one instance of Reaper on each C* node. Each instance will then only communicate with the local node on 127.0.0.1 and ignore all tokens for which this node isn’t a replica.

Those features are both available in a feature branch that will be merged before the next release.

While the fault tolerance features have been tested in different scenarios and are considered ready for use, the local mode still needs a little bit of work before it can be used on real clusters.

Improving the frontend too

So far, we had focused on the backend and hadn’t touched the frontend.
Now we are giving some love to the UI as well. On top of making it more usable and better looking, we are pushing some new features that will make Reaper “not just a tool for managing repairs”.

The first significant addition is the new cluster health view on the home screen:

One quick look at this screen will give you the nodes’ individual status (up/down) and the size on disk for each node, rack and datacenter of the clusters Reaper is connected to.

Then we’ve reorganized the other screens, making forms and lists collapsible, and adding a bit of color:

All those UI changes were just merged into master for your testing pleasure, so feel free to build, deploy and be sure to give us feedback on the reaper mailing list!

Categories: FLOSS Project Planets

Shawn McKinney: 2017 Dirty Kanza Checkpoint Three

Sun, 2017-06-18 13:26

Note: This post is about my second Dirty Kanza 200 experience on June 3, 2017.

It’s broken into seven parts:

Part I – Prep / Training

Part II – Preamble

Part III – Starting Line

Part IV – Checkpoint One

Part V – Checkpoint Two

Part VI – Checkpoint Three

Part VII – Finish Line

Don’t Worry Be Happy

My thoughts as I roll out of Eureka @ 3:30pm…

  • Thirty minutes at a checkpoint is too long, double the plan, but was overheated and feel much better now.
  • I’m enjoying myself.
  • It’s only a hundred miles back to Emporia, I could do that in my sleep.
  • What’s that, a storm cloud headed our way?  It’s gonna feel good when it gets here.
Mud & Camaraderie

That first century was ridden at a frantic pace, and there’s not much time or energy for team building.  We help each other out, but it’s all business.

The second part is when stragglers clump into semi-cohesive units.  It’s only natural and, in any case, foolish to ride alone.  A group of riders will always be safer than one, assuming everyone does their job properly.  Each new set of eyes brings another brain to identify and solve problems.

There’s Jim, who took a few years off from his securities job down in Atlanta, Georgia to help his wife with their Montessori school, and to train for this race.  He and I teamed up during the first half of the third leg, as the worst of the thunderstorms rolled over.

Before we crossed US Highway 54, a rider was waiting to be picked up by her support team.  Another victim of muddy roads, her derailleur twisted, bringing an early end to a long day.  We stopped, checked on her and offered encouragement as a car whizzed by us.

“That’s a storm chaser!!”, someone called out, leaving me to wonder just how bad these storms were gonna get.

Derrick is an IT guy from St. Joseph, Missouri, riding a single-speed bike on his way to a fifth finish, and with it a Goblet commemorating 1000 miles of toil.

We rode for a bit at the end of the third, right at dusk.  My GPS, which up to now had worked flawlessly, had changed into its nighttime display mode and I could no longer make out which lines to follow; I missed a turn and heard the buzzer telling me I’d veered off course.

I stopped and pulled out my cue sheets.  Those were supposed to be tucked away safely, sealed to stay nice and dry.  What, I forgot to seal them?  The pages were wet, stuck together and useless.

I was tired and let my mind drift.  Why didn’t I bring a headlamp on this leg?  I’d be able to read the nav screen better.  And where is everybody?  How long have I been on the wrong path?  Am I lost?

Be calm.  Get your focus and above all think.  What about the phone, maps are on it too.  It’s almost dead but plenty of reserve power available.

Just then Derrick’s dim headlight appeared in the distance.  He stopped and we quietly discussed my predicament.  For some reason his GPS device couldn’t figure that turn out either.  It was then we noticed tire tracks off to our right, turned and got back on track, both nav devices mysteriously resumed working once again.

Jeremy is the service manager at one of the better bike shops in Topeka, Kansas.  He’s making a third attempt.  Two years ago, he broke down in what turned into a mudfest.  Last year, he completed the course, but twenty minutes past due and didn’t make the 3:00 am cutoff.

His bike was a grinder of sorts with some fats.  It sounded like a Mack truck on the downhills, but geared like a mountain goat on the uphills.  I want one of them bikes.  Going to have to look him up at that bike shop one day.

I remembered him from last year, lying at the roadside, probably ten, maybe fifteen miles outside of Emporia.

“You alright?”, we stopped and asked.  It was an hour or more past midnight and the blackest of night.

“Yeah man, just tired, and need to rest a bit.  You guys go on, I’m fine”, he calmly told us.

There’s the guy from Iowa, who normally wouldn’t be at the back of the pack (with us), but his derailleur snapped and he’d just converted to a single-speed as I caught up with him and his buddy.  This was a first attempt for both.  They’d been making good time until the rains hit.

Or the four chicks, from where I do not know, who were much faster than I but somehow kept passing me.  How I kept getting back past them remains a mystery.

Also, all of the others, whose names can’t be placed, but the stories can…

Storms

 

Seven miles into that third leg came the rain.  It felt good, but introduced challenges.  The roads become slippery and a rider could easily go down.  They become muddy and the bike very much wants to break down.

Both are critical risk factors in terms of finishing.  One’s outcome much worse than the other.

Fortunately, both problems have good solutions.  The first, slow down the descents, pick through the rocks, pools of mud and water — carefully.  If in doubt stop and walk a section, although I never had to on this day, except for that one crossing with peanut butter on the other side.

By the way, these pictures that I’m posting are from the calmer sections.  It’s never a good idea to stop along a dangerous roadside just to take one.  That creates a hazard for the other riders, who then have to deal with you in their pathway, which limits their choices for a good line.  When the going is tricky, keep it moving, if possible to do so safely.

The second problem means frequent stops to flush the grit from the drivetrains.  When it starts grinding, it’s time to stop and flush.  Mind the grind.  Once I pulled out two-centimeter chunks of rock lodged in the derailleurs and chain guards.

Use whatever is on hand.  River water, bottles, puddles.  There was mud — everywhere.  In the chain, gears and brakes.  It’d get lodged in the pedals and cleats of our shoes, making it impossible to click in or (worse) to click out.  I’d use rocks to remove other rocks, or whatever was handy and/or expedient.  It helps to be resourceful at times like this.  That’s not a fork, it’s an extended, multi-pronged, mud and grit extraction tool.

The good folks alongside the road were keeping us supplied with plenty of water.  It wasn’t needed for hydration, but for maintenance.  I’d ask before using it like this (pouring their bottles of water over my bike) so as not to offend them, but they understood and didn’t seem to mind.

We got rerouted once because the water crossing decided it wanted to be a lake.  This detour added a couple of miles to a ride that was already seven over two hundred.

The rain made for slow going, but I was having a good time and didn’t want the fun to end.

Enjoy this moment.  Look over there, all the flowers growing alongside the road.  The roads were still muddy but the fields were clean and fresh, the temperatures were cool.

wild flowers along the third leg

Madison (once again)

Rolled in about 930p under the cover of night.

930p @ Madison CP3

After all that fussing over nameplates in the previous leg, I found out mine had been mounted incorrectly.  It partially blocked the headlight beam and had to be fixed.

Cheri lends a hand remounting the nameplate so I can be a happy rider

It was Cheri’s second year doing support.  Last year it was her and Kelly crewing for Gregg and me.  This year, she and Gregg came as well.  As I said earlier, the best part of this race is experiencing it with friends and family.

I was in good spirits, but hungry, my neck ached, and my bike was in some serious need of attention.  All of this was handled with calm efficiency by Kelly & Co.

Kyle, who’s an RN, provided medical support with pain relievers and ice packs.  They knew I liked pizza late in the race, and Gregg handed some over that had just been pulled fresh from the oven across the street at the EZ-mart.  It may not sound like much now, but it gave me the needed energy boost, from something that doesn’t get squeezed out of a tube.

As soon as Cheri finished the nameplate, Gregg got the drivetrain running smoothly once again.

All the while, Kelly and Mom were assisting and directing.  There’s the headlamp needing to be mounted, fresh battery packs, the change to the clear lens on the glasses, socks, gloves, cokes, energy drinks, refilling water tanks, electrolytes, gels and more.  There’s forty-some miles to go, total darkness, unmarked roads.  Possibly more mud on the remaining B roads.  Weather forecast clear and mild.

Let’s Finish This

“Who are you riding with?”, Gregg called out as I was leaving.  He ran alongside for a bit, urging me on.

Gregg runs alongside as I leave CP3

“Derrick and I are gonna team up”, I called back, which was true; that was the plan as we rolled into town.  Now I just had to find him.  Madison was practically deserted at this hour, its checkpoint regions (red, green, blue, orange) were spread out, and what color did he say he was again?

 

Twenty-two minutes spent refueling at checkpoint three, and into the darkness again.  That last leg started @ 10 pm with 45 miles to go.  I could do that in my sleep; might need to.

Next Post: Part VII – Finish Line


Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-16

Fri, 2017-06-16 19:58
Categories: FLOSS Project Planets