Planet Apache


Justin Mason: Links for 2017-10-17

Tue, 2017-10-17 19:58
Categories: FLOSS Project Planets

Ruwan Linton: Digital Transformation through Composable Integration

Tue, 2017-10-17 06:38
Enterprises today rely heavily on digitalization to achieve business success. Businesses that do not embrace this change are losing market share and declining day by day, because society is now experiencing digitalization on a global scale. This experience spans everything from day-to-day activities to major political, industrial, informational, educational and even cultural engagements. In essence, we are living through a Digital Revolution.


We are experiencing a Digital Revolution

There are many examples of businesses that failed to transform themselves with emerging technologies and were defeated by their competitors:
  • Kodak defeated by the Digital Camera, mainly by Nikon and Canon
  • Nokia defeated by the smartphones, mainly by Samsung and Apple
  • BlockBuster Video defeated by online streaming, mainly by Netflix
  • Borders Books and Music defeated by online Bookstores, mainly by Amazon
And the list continues...
Digital Transformation

With digitalization, consumer expectations of the quality of a service and of its speed or turnaround time have increased dramatically. Failure to keep up with this change is usually read by the consumer market as a lack of innovation in products and services. The process of shifting from existing business methodologies to new, digitalized business methodologies or offerings is known as Digital Transformation. A more formal definition could be given as:
“Digital Transformation is the use of technology to radically improve performance or reach of the enterprise.”

Let me take an example from the FIT (Fully Independent Traveller) domain. Assume there is a good hotel booking service, available over the internet and through many other channels (a call center, a physical office, etc.), and that it has held a very good share of the hotel booking market. However, if it keeps offering the same service it always has, it will surely lose market share as competing services emerge that provide better QoS, lower response times and possibly novel, more convenient channels (such as mobile applications); and, most importantly, a better booking experience through the use of new technology.
Success of the business relies on the satisfaction of the consumers

So how could our hotel booking service leverage digitalization to achieve this innovation and make its products and services successful? Usually this is the responsibility of the CIO and CTO, who should look at the following three key areas when transforming the existing business into a digital business.



Customer Experience

Digital advances such as analytics, social media, mobile applications and other embedded devices help enterprises improve the customer experience.

Operational Process

CIOs are using technology to stay competitive in the market, and the urge to improve business performance also leads executives to consider improving internal processes through digitalization.

Business Model

Business processes/models and marketing models have to be adapted to the new information age and transformed to function seamlessly with the rest of the digitalization stream.

However, digital transformation is not just the use of technology to achieve these qualities; rather, the complete business has to operate as one single system from an external consumer's perspective. That requires existing systems, new components, mobile applications and bleeding-edge IoT devices to be able to intercommunicate.

System Integration

With the rise of Information Technology, large businesses started to benefit from introducing software systems to accomplish different tasks within the enterprise. They introduced various monolithic systems such as HR management systems, SAP, ERP systems, CRM systems and many more. However, these systems were each designed to perform a specific task, and later on, different concepts had to be introduced to make these incompatible systems intercommunicate. Starting with manual data entry between systems, this gradually evolved into EAI and has since been driven towards API Management and Microservices, with ETL, EI and ESB in the middle. A formal definition of system integration could be presented as:
“System Integration is the process of linking together different computing systems and software applications functionally to act as a coordinated whole.”

Integration in the enterprise domain has since been extended to mobile devices and IoT through APIs and Microservices, in order to meet consumer experience expectations.
So, in essence, integration of systems, APIs and devices plays a vital role in Digital Transformation, which essentially requires all of these to connect to and intercommunicate with each other. The ability to seamlessly integrate existing systems without much overhead is key to a successful Digital Transformation.

The other side of this equation for business success is time to market, which requires integration and all of these new technologies to be adopted as fast as possible. However, these integration problems require careful design, together with a good amount of development, to achieve protocol and transport bridging, data transformation, routing and the other required integration functionality, despite the availability of dozens of frameworks and products that facilitate the same.
Composable Integration

To reduce this integration development time, AdroitLogic has developed a lean integration framework named Project-X, with a rich ecosystem of tooling around it. Project-X provides three building blocks for integration:
Connectors & Processors are used to compose integration flows, without having to write a single line of code!ConnectorsConnectors could be used either to accept messages/events from outside (via Ingress Connectors) or to send out/invoke external services/functionalities (via Egress Connectors). In the rare case of not being able to find the ideal match in the existing connector palette, you could easily implement your own reusable connector.ProcessorsAny processing of an accepted message such as conditional evaluations, routing, transformations, value extractions, composition, data enrichment, validation etc., could be achieved by the army of pre-built processors. In the rare case of not being able to find the most suitable processor to implement your functionality, you could implement your own reusable processor to be included in your integration flow.FeaturesA set of utility functions available to be utilized by the processors and connectors, all of which could also be utilized by any custom connector or processor that you want to write. On top of that, you could also write your own features, which might utilize existing features in turn.All these pieces are seamlessly assembled via a composable drag-n-drop style integration flow editor named UltraStudio, based on the IDE champion IntelliJ IDEA, allowing you to compose your integration flows in minutes, test and debug it on the IDE itself, and build the final artifact using Maven to deploy it into the UltraESB-X lean runtime container for production execution.
Compose your integration flow, then test and debug it in the IDE itself prior to deployment

You can pick and choose the relevant connectors and processors for your project from the existing connector and processor store, and the project artifact is built as a self-contained bundle that can be deployed in the integration runtime without the hassle of adding drivers or third-party jar files; the runtime will contain that set of dependencies and only that set, keeping the execution runtime as lean as possible. This also makes the project lifecycle and the maintainability of the solution more robust, as the project can use your existing version control and continuous integration setup and benefit from the collaboration practices you already follow.

AdroitLogic is in the process of building a fourth layer on top of this, named Templates. Templates will be reusable, parameterized patterns (or frameworks) for building integration flows. For example, a solution that requires guaranteed delivery to a defined set of downstream systems with specific mapping and filtering criteria, together with validation of a given upstream system plus traceability and statistics, could start from an existing template and simply supply the mapping and filtering criteria to implement the whole solution in a matter of minutes.

In conclusion, if your organization has not yet started its digital transformation, it is high time to consider stepping up your pace. While the transformation will involve multiple streams of work and have a lot of impact on how the business currently operates, one good starting point is to integrate your existing systems so they work together seamlessly, and to enable connections with your partners and consumers through the latest technologies to improve the consumer experience.
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-16

Mon, 2017-10-16 19:58
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-15

Sun, 2017-10-15 19:58
Categories: FLOSS Project Planets

Community Over Code: FAQ for Facebook React.js BSD + PATENTS License issue with the Apache Software Foundation

Fri, 2017-10-13 14:09

(Today we’re interviewing Shane Curcuru about the recent issues reported with Facebook’s React.js software’s BSD + PATENTS file license, and what the Apache Software Foundation (ASF) has to do with it all. Shane serves in a leadership position at the ASF, but he wants you to know he’s speaking only as an individual here; this does not represent an official position of the ASF.)

UPDATE: Facebook has relicensed React.js as well as some other software under the MIT license, without the FB+PATENTS file. That’s good news, in general!

Hello and welcome to our interview about the recent licensing kerfuffle around Facebook’s React.js software, and the custom license including a custom PATENTS file that Facebook uses for the software.

You’ve probably seen discussions recently, either decrying the downfall of your startup if you use React, or noting that this is an old issue that’s just a paper tiger. Let’s try to bring some clarity to the issue, and get you some easy-to-understand information to make your own decision. To start with, Shane, can you briefly describe what the current news hype is all about? Is this a new issue, or an old one?

Well, like many things around licensing, the details are complicated, but the big picture is fairly simple. Big picture, the current news hype is only about policy at the ASF, and does not directly affect anyone else. The only recent change was made for projects already at Apache, and even that change will take a while to implement.

I’m confused — isn’t this a new change in the licensing for the React.js project?

No, actually — Facebook’s React.js project has used this license (often called BSD + PATENTS, but it’s really a Facebook-specific file) for several years, so the underlying issue with this specific PATENTS file is old. It’s just getting attention now because the ASF has made a change in their licensing policy. The current change last month was to declare that for Apache projects, the custom PATENTS clause that Facebook uses on React.JS software is now officially on the “Category-X” list of licenses that may not be shipped in Apache projects.

So the news is about the fact that Apache projects will no longer include React.js in their source or releases. This is a policy change, and only affects Apache projects, but obviously it's gotten some news coverage and has gotten a lot of developers to really go back and pay attention to the licensing details around React.

Many of our readers probably don’t understand what “Category X” means, unless it’s an X-Files reference. Can you explain more how the ASF determines which kinds of software licenses are acceptable in Apache projects?

Great question. Yes, Category X is the ASF's term for software licenses that, by ASF policy, may not appear in Apache software source repositories or software releases. This is an operational decision by the ASF, and doesn't mean that the licenses involved are incompatible with the Apache 2.0 license, just that the ASF doesn't want its projects shipping code under these licenses.

The rationale is this: the ASF wants to attract the maximum number of inbound contributions. Thus, we use the permissive and as some say “business-friendly” Apache license for all ASF software. This allows maximum freedom for people who use Apache software to do as they please, including making proprietary software. Part of the Apache brand is this expectation: when you get a software product from the ASF, you know what to expect from the license. Besides not suing us and not using Apache trademarks, the only real restriction is including the license if you redistribute something based on Apache 2.0 licensed software.

Licenses that the ASF lists as Category X add restrictions on use for end users of the software, above and beyond what Apache 2.0 requires. The most obvious examples are the GPL* copyleft licenses, which require redistributors to make any changes publicly available under the GPL.

OK — So Category X isn’t a legal determination of incompatibility, it’s just a policy choice the ASF is making? Is that right?

Exactly right. Others are free to mix licenses in various ways — but the ASF chooses to not redistribute software with more restrictive licenses than Apache 2.0. So when you download an Apache product, it won’t have Category X software like React in it — but you’re free to mix Apache products with React yourself, if you like.

Aren’t there some Apache projects shipping with React today, like CouchDB?

Yes — CouchDB currently includes React in their tree and past releases, as do a few other projects. These projects will warn their users (by a NOTICE file or blog post) that their releases contain more restrictive licensed software, and are working on plans to re-design things to remove React and replace it with other, less restrictively licensed libraries.

And before you ask, yes, this is extra work for the volunteer projects at Apache, and it’s not something the ASF does lightly. But ensuring that Apache projects have clean IP that never includes any licensing restrictions beyond what the well-known Apache 2.0 license requires is critical to the broad acceptance of Apache software everywhere.

So if this recent change in ASF policy only affects Apache projects, why is it getting so much attention in tech circles these days?

Because the ASF policy announcement has made some people go back and really look at Facebook’s custom BSD + PATENTS file license used in React. This is a good thing — you should always understand the licenses of software you’re using so you follow them — and so you don’t have surprises later, like now. People using React are already bound by this license, it’s just that many people didn’t look into the details until now.

There are two conceptual issues here in terms of how open source participants decide whether they want to accept Facebook's license. First is the addition of Facebook's custom-written PATENTS file. Very briefly, it states that if you sue Facebook over (almost any) patent issue, you lose your license to Facebook patents. The first issue is that this patent termination clause, which appears in a fair number of licenses, is a strict and exclusionary one. The balance of rights granted (or taken away, if you sue) is strongly tilted toward Facebook as a specific entity. It's not the more even and generic balance of patent termination rights found in the Apache 2.0 license.

That asymmetry in patent rights is the problem: it directly puts Facebook’s interests above everyone else’s interests when patent lawsuits around React happen. Of course, there are a lot more details to the matter, but for those questions you need to ask your own attorney — all I can say is that it’s an issue that will happen incredibly rarely, if ever, for open source projects.

So the Facebook BSD + PATENTS file license favors Facebook, even though they’re an open source project that wants your contributions. We kind of get that; patents are always tricky, but the asymmetry in rights there does seem a little odd compared to other licenses. You said there were two conceptual issues?

The second conceptual issue is simpler to explain. The Facebook BSD + PATENTS file license is not on the OSI list of open source licenses.

(pause) Um, is that it? What’s the real issue here about OSI approval?

Yup, that’s the core of the issue. Being on the OSI list is huge. The generally accepted definition of “open source” is that your software’s license is listed by OSI.

OSI listing is key because enough lawyers at many, many companies have vetted the OSI-listed licenses that the ecosystem knows what to expect. The OSI has a strong reputation, so to start with, people know roughly what to expect from an OSI-listed license. More importantly, these licenses have been vetted over and over by counsel from a wide variety of companies.

A lot of law work is risk management: ensuring your rights are preserved when doing business or using licenses. OSI-listed licenses are well known, so lawyers can quickly and confidently express the level of risk in using them. Non-OSI licenses mean the lawyers have to read them in detail, and do a new and comprehensive review of risks. It’s not just the work, it’s the uncertainty with something new that typically translates into saying “This new license has more risks than those well-used ones.”

Now I get it — OSI licenses are popular and frequently reviewed, so people are comfortable with them. A new license — like the Facebook PATENTS file — might not be bad, but might be — people don’t know it well enough yet.

Exactly right. I can't think of any good reason for companies that want to work with open source communities to ever use a license that isn't OSI-listed. People keep trying, but license proliferation is not worth it. Successful open source projects need new contributors from a variety of places, and keeping barriers to entry low, such as avoiding unusual licenses, is one of the easiest ways to turn users into potential contributors.

If the Facebook PATENTS license is unusual enough to turn off other projects from using it, like Apache, why won’t Facebook consider changing the license to an OSI-approved one?

That’s a question you’ll need to ask Facebook. The ASF already asked Facebook to consider changing the license, and they said no. Facebook also wrote an explainer for their license that’s been widely shared.

We have one listener asking: Is the Facebook PATENTS license viral? That is, if you use React.js in your software, must you use the same Facebook PATENTS license?

No, the PATENTS clause is not “viral”, or rather, it’s not copyleft. So you are free to use whatever license you want on any software you write that uses or incorporates React.js.

Note that the actual patent grant from Facebook to anyone using React.js software — even if it’s inside of your software project — is still there. The PATENTS terms apply to anyone who’s running the React.js software, and are between Facebook and all the end users. So that patent licensing issue doesn’t affect you as an application builder directly, but it might affect your users.

Great, well, we've covered a lot of ground in this interview. What else should readers know so they can make up their own minds about the licensing risks around React, risks that were always there but that they might not have understood?

TL;DR: the only short-term question is if you’re thinking about donating your project to Apache. If so, start planning now to migrate away from React, because you won’t be able to bring it with you.

For everyone else, this is a non-issue in the short term. Longer term, it’s something you should make your own mind up about, by considering all the aspects of any change: legal risk (probably low, but it’s patents so who knows), technology (several replacements out there, but none yet as strong as React), and community (what development capacity do you have, and does your community of contributors care?)

I wrote a brief guide about the legal, technical, and community aspects of deciding to use or not use React earlier.

Also, if you have strong opinions about this, let people, and Facebook, know! I have to say some open source types were quite surprised when Facebook refused the ASF's request to relicense. Facebook has some great open source projects, including some with open governance. I'm personally a little surprised they aren't using an OSI license for this kind of thing.

Thanks for reading along with Shane’s interview of Shane on the React licensing issue! Good luck to your project whichever licenses you choose.

For More Information About React Licensing

The ASF publishes its licensing policies, including the Category X list, and some rationale for its policy decisions on licenses at Apache.

UPDATE! Automattic, the company behind WordPress, will be moving away from React:

“We'll look for something with most of the benefits of React, but without the baggage of a patents clause that's confusing and threatening to many people.”

Simon Phipps’ timeline and discussion about how Apache moved the PATENTS license to the Category X list:

https://meshedinsights.com/2017/07/16/apache-bans-facebooks-license-combo/

A popular post here on Medium focused on CTOs, with a balanced view, including a discussion on one patent lawsuit between Facebook and Yahoo!:

https://medium.com/@ji/the-react-license-for-founders-and-ctos-b38d2538f3e5

Detailed (long) discussion of “what does this mean for my project” from an engineer's perspective:

Brain dump: notes and questions arising from “Facebook’s BSD-3 + strong patent retaliation” license
This is a living document and I will keep updating it as necessary (medium.com)

An Apache CouchDB developer’s take on React and the license:

Understanding the Facebook vs Apache Software Foundation License Kerfuffle
Translation: French by @gnieh_. Disclaimers: I am not a lawyer. I'm not speaking for Facebook, the ASF, or CouchDB. This… (writing.jan.io)

If you’re a startup, you should not use React (community/startup aspects):

If you’re a startup, you should not use React (reflecting on the BSD + patents license)
That is, if you ever hope to be acquired by a larger company (medium.com)

Don’t over-REACT to the Facebook Patents License (legal aspects)

Don’t Over-REACT to the Facebook Patents License
Recently, Apache re-classified code under Facebook's “BSD+ Patents” license to “Category X,” effectively banning it… (blog.fossa.io)

Why the Facebook Patents License Is A Paper Tiger (legal aspects)

React, Facebook, and the Revocable Patent License: Why It's a Paper Tiger

Justin Mason: Links for 2017-10-12

Thu, 2017-10-12 19:58
Categories: FLOSS Project Planets

Claus Ibsen: Apache Camel 2.20 released - What's new

Thu, 2017-10-12 04:23
Apache Camel 2.20 was released today, and as usual I am tasked with writing a blog post about this great new release and its highlights.


The release has the following highlights.

1) Java 9 technical preview support

We have started our work to support Java 9, and this release is what we call a technical preview. The source code builds and runs on Java 9, and we will continue this work towards official support in the following release.

2) Improved startup time

We have found a few spots to optimise the startup time of Apache Camel, so it starts 100-200 milliseconds faster.

3) Optimised core to reduce footprint

There are many internal optimisations in the Camel routing engine, such as reducing thread contention when updating JMX statistics, reducing internal state objects to claim less memory, reducing the number of allocated objects to lower GC overhead, and much more.

4) Improved Spring Boot support and preparing for Spring Boot 2

We have improved Camel running on Spring Boot in various ways.

We also worked to make Apache Camel more ready for and compatible with the upcoming Spring Boot 2 and Spring Framework 5. Official support for these is expected in the Camel 2.21 release.

5) Improved Spring lifecycle

Starting and stopping the CamelContext when used with the Spring framework (SpringCamelContext) was revised to ensure that the Camel context is started last, when all resources should be available, and stopped first, while all resources are still available.

6) JMS 2.0 support

The camel-jms component now supports JMS 2.0 APIs.

7) Faster Map implementation for message headers

If you include the camel-headersmap component on the classpath, Camel will auto-detect it on startup and use a faster implementation of the case-insensitive map used for Camel message headers.
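
To illustrate what case-insensitive headers mean in practice, here is a small, hypothetical Processor (the class name and header values are invented for this example); camel-headersmap does not change this behaviour, it only swaps in a faster map implementation behind it.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class HeaderDemoProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Camel message headers are stored in a case-insensitive map,
        // so a header set as "OrderId" can be read back as "orderid".
        exchange.getIn().setHeader("OrderId", "1234");
        String id = exchange.getIn().getHeader("orderid", String.class); // "1234"
        exchange.getIn().setBody("Looked up order " + id);
    }
}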

8) Health-Check API

We have added experimental support for a new health-check API (which we will continue to work on over the next couple of releases). The health checks can be leveraged in cloud environments to detect unhealthy contexts.

9) Cluster API

Introduced an experimental Cluster SPI (which we will continue to work on over the next couple of releases) for high-availability contexts. Out of the box, Camel supports atomix, consul, file, kubernetes and zookeeper as underlying clustering technologies through the respective components.

10) RouteController API

Introduced an experimental Route Controller SPI (which we will continue to work on over the next couple of releases) aimed at providing more fine-grained control of routes. Out of the box, Camel provides the following implementations:

  • SupervisingRouteController, which delays startup of the routes until the Camel context is properly started, and attempts to restart routes that have not been started successfully.
  • ClusteredRouteController, which leverages the Cluster SPI to start routes only when the context is elected as leader.

11) More components

As usual there are a bunch of new components; for example, we now support calling AWS Lambda functions in the camel-aws component. There is also a new JSON validator component, and camel-master works with the new Cluster API to do route leader election in a cluster. There are 13 new components and 3 new data formats. You can find more details in the Camel 2.20 release notes.

We will now start working on the next release, 2.21, which is scheduled for the start of 2018. We are pushing for a slightly quicker release cycle for these bigger Camel releases, so we can go from two to three releases per year. This allows people to pick up new functionality and components more quickly.

We also want to get a release out that officially supports Java 9 and Spring Boot 2, along with all the usual great stuff we add to each release and what the community contributes.



Categories: FLOSS Project Planets

Edward J. Yoon: Organizational Inertia (조직 관성)

Thu, 2017-10-12 02:45
According to a seminar I attended recently, inertia, which exists as a law of nature, also exists in organizations.

This is a somewhat philosophical reflection, but first, let's look at inertia itself.

Inertia is the force that pushes you backward when a bus accelerates suddenly. Inertia is merely the tendency of things to stay in their current state, so it is not a real force; this is why it is called a fictitious force. The real force is the one that makes the bus accelerate.

The magnitude of the fictitious force pushing your body backward is determined solely by the magnitude of the real force accelerating the bus. This is the relativity of inertia and motion.

Thinking about the relativity of inertia and motion, organizational inertia is determined by the magnitude of the change, so it is not something to be wary of; rather, it can be seen as the wave of change itself.
Categories: FLOSS Project Planets

Timothy Chen: Hierarchical Scheduling for Diverse Datacenter Workloads

Thu, 2017-10-12 02:24

Hierarchical Scheduling for Diverse Datacenter Workloads

In this post we'll cover the paper that introduced H-DRF (Hierarchical Dominant Resource Fairness), which builds upon the team's existing work on DRF (Dominant Resource Fairness) while also providing hierarchical scheduling.

Background

The prior work, DRF, was an algorithm for deciding how to allocate multi-dimensional resources to multiple frameworks; it described how fairness can be enforced when scheduling multiple resource types with a flat hierarchy:

                  DRF
      /       |        |       \
    dev     test    staging    prod
     10       10       30       50
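
To make the flat DRF step concrete, here is a minimal Java sketch of the core idea: a framework's dominant share is the largest fraction it holds of any single resource, and the scheduler offers resources next to the framework with the smallest dominant share. The class and method names below are invented for illustration; this is not code from the paper or from Mesos.

import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Minimal sketch of flat DRF (not the paper's exact algorithm).
public class DrfSketch {
    // Total capacity per resource, e.g. {"cpu": 100.0, "mem": 1000.0}.
    private final Map<String, Double> totals;
    // Per-framework allocation, e.g. "dev" -> {"cpu": 10.0, "mem": 50.0}.
    private final Map<String, Map<String, Double>> allocations = new HashMap<>();

    public DrfSketch(Map<String, Double> totals) {
        this.totals = totals;
    }

    public void register(String framework) {
        allocations.put(framework, new HashMap<>());
    }

    // Dominant share = max over resources of (allocated / total capacity).
    double dominantShare(String framework) {
        double max = 0.0;
        for (Map.Entry<String, Double> e : allocations.get(framework).entrySet()) {
            max = Math.max(max, e.getValue() / totals.get(e.getKey()));
        }
        return max;
    }

    // DRF offers the next resources to the framework with the
    // smallest dominant share.
    String nextFrameworkToOffer() {
        return allocations.keySet().stream()
                .min(Comparator.comparingDouble(this::dominantShare))
                .orElseThrow(NoSuchElementException::new);
    }

    // Record that a framework accepted some resources.
    public void allocate(String framework, String resource, double amount) {
        allocations.get(framework).merge(resource, amount, Double::sum);
    }
}

As the rest of the post explains, simply repeating this flat step at each level of a hierarchy is not enough and can starve some leaf nodes.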

However, in most organizations it’s important to be able to describe resource allocation weights in a hierarchy that reflects its organizational intent:

                 H-DRF
      /       |        |       \
     fe      ads     spam     mail
     30       20       25       25
     /\       /\       /\       /\
    d  p     d  p     d  p     d  p      (d = dev, p = prod)
   50 50    20 80    30 70    40 60

The key difference with hierarchical scheduling is that when a node is not using its resources, they are redistributed among its sibling nodes rather than among all leaf nodes. For example, when the dev environment under fe is not using its resources, they are allocated to prod under fe instead.

Naive implementations of hierarchical and multi-resource scheduling (such as collapsing the hierarchy into a flat one, or simply running DRF from the root to the leaf nodes) can lead to starvation, where in our example certain dev and prod environments never receive any of their fair share of resources. The requirement to avoid this is referred to as the hierarchical share guarantee.

H-DRF

To avoid the problem of starvation, H-DRF incorporates two ideas when computing the dominant share at the leaf nodes. The first idea is to rescale the leaf nodes' resource consumption to that of the minimally allocated node. The second is to skip rescaling for blocked nodes, where a node is blocked if one of the resources it requests is saturated or it has no more tasks to launch. The actual proof and the steps of the implementation are covered in the paper, and I won't go over them here in detail.

Notes

The interesting piece highlighted in this paper was that Hadoop implemented a naive version of H-DRF and therefore has bugs that can cause starvation of tasks. So it is not straightforward to modify how DRF works without proving that the result is starvation-free and still provides fairness (unless that is not the primary goal of your change).

That said, there are more papers that continue to extend and modify DRF, and that point out blind spots H-DRF didn't cover, which I'll try to cover more in the future.


Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-11

Wed, 2017-10-11 19:58
  • Study: wearing hi-viz clothing does not reduce risk of collision for cyclists

    Journal of Transport & Health, 22 March 2017:

    This study found no evidence that cyclists using conspicuity aids were at reduced risk of a collision crash compared to non-users after adjustment for confounding, but there was some evidence of an increase in risk. Bias and residual confounding from differing route selection and cycling behaviours in users of conspicuity aids are possible explanations for these findings. Conspicuity aids may not be effective in reducing collision crash risk for cyclists in highly-motorised environments when used in the absence of other bicycle crash prevention measures such as increased segregation or lower motor vehicle speeds.

    (tags: health safety hi-viz clothing cycling commute visibility collision crashes papers)

Categories: FLOSS Project Planets

Bryan Pendleton: WC 2018 ? Not this year, I guess.

Tue, 2017-10-10 22:31

I can't say this was a total stunner, but still: USA Stunned by Trinidad and Tobago, Eliminated From World Cup Contention

The nightmare scenario has played out for the U.S. men's national team.

A roller coaster of a qualifying campaign ended in shambles, with a stunning 2-1 loss to Trinidad & Tobago, coupled with wins by Panama and Honduras over Costa Rica and Mexico, respectively, has eliminated the USA from the World Cup. The Americans will not be playing in Russia next summer.

Trinidad and Tobago, which hadn't won in its last nine matches (0-8-1), exacted revenge for the 1989 elimination at the hands of the United States, doing so in stunning fashion. An own goal from Omar Gonzalez and a rocket from Alvin Jones provided the offense, while Christian Pulisic's second-half goal wasn't enough to save the Americans.

Oh, my.

And it seems like there's a fair chance I won't be able to root for Leo Messi, either?

Well, what shall I do?

Let's see: there's still Iceland! They're easy to root for!

Perhaps Wales? Perhaps Costa Rica? Perhaps Chile?

I'm ready, I'm an eager Yankee, looking for a team with some charisma, some elan, some heart, some fighting spirit.

Where are you? Are you out there?

It's still a few weeks until the tournament qualifications are known.

I guess I've got time to start looking...

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-10

Tue, 2017-10-10 19:58
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-09

Mon, 2017-10-09 19:58
Categories: FLOSS Project Planets

Claus Ibsen: Apache Camel route coverage tooling on the way

Mon, 2017-10-09 04:11
Last weekend I found some time to hack on new tooling for doing Apache Camel route coverage reports. The intention is to provide APIs and functionality out of the box from Apache Camel that other tooling vendors can leverage in their tooling. For example to show route coverage in IDEA or Eclipse tooling, or to generate SonarType reports, etc.

I got as far as building a prototype capable of generating a report, which you run via the camel-maven-plugin. Having the prototype built in the camel-maven-plugin is a very good idea, as it's neutral and basically just plain Java. It also makes it possible for other vendors to look at how it's implemented in the camel-maven-plugin and be inspired in how to use this functionality in their own tooling.

I wanted to work on the hardest bit first, which is being able to parse your Java routes and correlate which EIPs were covered or not. We already have parts of such a parser, based on the endpoint validation tooling that exists in the camel-maven-plugin, which I have previously blogged about. The parser still needs a little more work, however I think I got it pretty far over just one weekend. I have not begun adding support for XML yet, but that should be much easier than Java, and I anticipate no problems there.
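
For readers who haven't seen one, here is a minimal example of the kind of Java DSL route such a parser has to walk. The route ID and endpoint URIs are invented for illustration; each EIP in it (choice, when, otherwise, to, log) is a step a coverage report could mark as covered or not.

import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Each EIP below is a node the coverage report can correlate
        // back to a line number in this source file.
        from("direct:orders").routeId("orders")
            .choice()
                .when(header("priority").isEqualTo("high"))
                    .to("jms:queue:highPriorityOrders")
                .otherwise()
                    .to("jms:queue:normalOrders")
            .end()
            .log("Order processed");
    }
}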

I recorded a video demonstrating the tooling in action.

I forgot to show in the video recording that you can change the Camel routes, for example by inserting empty lines or adding methods to the class, and when you re-run the camel-maven-plugin it will re-parse the route and output the line numbers correctly.

Anyway, enjoy the video; it's about 12 minutes long, so go grab a cup of coffee or tea first.



The plan is to include this tooling in the next Apache Camel 2.21 release which is scheduled in January 2018.

The JIRA ticket about the tooling is CAMEL-8657

Feedback is very welcome, and as always we love contributions at Apache Camel, so you are welcome to help out. The current work is upstream on this GitHub branch. The code will be merged to the master branch later, when Apache Camel 2.20.0 is officially released.

When I get some more time in the future I would like to add support for route coverage in the Apache Camel IDEA plugin. Eclipse users may want to look at the JBoss Fuse tooling which has support for Apache Camel, which could potentially also add support for route coverage as well.

Categories: FLOSS Project Planets

Bryan Pendleton: In the Woods: a very short review

Sat, 2017-10-07 12:44

One of my voracious reader friends introduced me to Tana French and her Dublin Murder Squad series, of which In the Woods is the first entry.

Structurally, In the Woods is a classic mystery: something horrible has happened, and the detectives are called; evidence is collected; witnesses are interviewed; leads are developed and followed; more is learned.

Along the way, we explore issues such as gender discrimination in the workplace and the ongoing effects of the great recession of 2008.

What distinguishes In the Woods is not these basic elements, but more the style and depth with which they are elaborated and pursued.

But did I mention style? What really makes In the Woods a delight is the ferocious lyricism that French brings to her writing.

For instance, here are three children, playing follow-the-leader in the woods:

These three children own the summer. They know the wood as surely as they know the microlandscapes of their own grazed knees; put them down blindfolded in any dell or clearing and they could find their way out without putting a foot wrong. This is their territory, and they rule it wild and lordly as young animals; they scramble through its trees and hide-and-seek in its hollows all the endless day long, and all night in their dreams.

They are running into legend, into sleepover stories and nightmares parents never hear. Down the faint lost paths you would never find alone, skidding round the tumbled stone walls, they stream calls and shoelaces behind them like comet-trails.

How marvelous is this, at every level!

Structurally, it's almost poetry, with a natural sing-song cadence and a subtly-reinforced pattern induced by the simple rhythms ("they know...", "they rule...", "they scramble...", "they stream...").

Stylistically, each little turn of phrase is so graceful and just right ("their own grazed knees", "wild and lordly as young animals", "calls and shoelaces").

And then:

They are running into legend, into sleepover stories and nightmares parents never hear.

Wow.

Anyway, that's just page 2. French is just as polished and capable on page 302, and, like any good mystery, once you start, you won't want to stop, even as you know (or think you know) what lies ahead.

From what I hear, French's subsequent books are wonderful as well; I shall certainly read more.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-06

Fri, 2017-10-06 19:58
  • The world’s first cyber-attack, on the Chappe telegraph system, in Bordeaux in 1834

    The Blanc brothers traded government bonds at the exchange in the city of Bordeaux, where information about market movements took several days to arrive from Paris by mail coach. Accordingly, traders who could get the information more quickly could make money by anticipating these movements. Some tried using messengers and carrier pigeons, but the Blanc brothers found a way to use the telegraph line instead. They bribed the telegraph operator in the city of Tours to introduce deliberate errors into routine government messages being sent over the network. The telegraph’s encoding system included a “backspace” symbol that instructed the transcriber to ignore the previous character. The addition of a spurious character indicating the direction of the previous day’s market movement, followed by a backspace, meant the text of the message being sent was unaffected when it was written out for delivery at the end of the line. But this extra character could be seen by another accomplice: a former telegraph operator who observed the telegraph tower outside Bordeaux with a telescope, and then passed on the news to the Blancs. The scam was only uncovered in 1836, when the crooked operator in Tours fell ill and revealed all to a friend, who he hoped would take his place. The Blanc brothers were put on trial, though they could not be convicted because there was no law against misuse of data networks. But the Blancs’ pioneering misuse of the French network qualifies as the world’s first cyber-attack.

    (tags: bordeaux hacking history security technology cyber-attacks telegraph telegraphes-chappe)

  • Slack 103: Communication and culture

    Interesting note on some emergent Slack communications systems using emoji: “redirect raccoon”, voting, and “I’m taking a look at this”

    (tags: slack communications emojis emoji online talk chat)

  • This Future Looks Familiar: Watching Blade Runner in 2017

    I told a lot of people that I was going to watch Blade Runner for the first time, because I know that people have opinions about Blade Runner. All of them gave me a few watery opinions to keep in mind going in—nothing that would spoil me, but things that would help me understand what they assured me would be a Very Strange Film. None of them told me the right things, though.

    (tags: culture movies film blade-runner politics slavery replicants)

  • poor man’s profiler

    ‘Sampling tools like oprofile or dtrace’s profile provider don’t really provide methods to see what [multithreaded] programs are blocking on – only where they spend CPU time. Though there exist advanced techniques (such as systemtap and dtrace call level probes), it is overkill to build upon that. Poor man doesn’t have time. Poor man needs food.’ Basically periodically grabbing stack traces from running processes using gdb.

    (tags: gdb profiling linux unix mark-callaghan stack-traces performance)

Categories: FLOSS Project Planets

Bryan Pendleton: John Cochrane's After the ACA

Fri, 2017-10-06 00:52

All that anyone has been able to talk about recently (or so it seems), is "repeal and replace."

It's a pretty interesting topic to me, partly because, as I get older, I'm thinking more and more about healthcare, and partly just because I think it's an awfully important topic.

But I didn't feel like I learned a lot during all the recent debates.

So I wandered here, and I wandered there, and eventually I found myself looking at a John Cochrane paper: After the ACA: Freeing the market for health care

Now, Cochrane is a pretty serious fellow, with pretty serious credentials, so my expectations were fairly high, perhaps unreasonably high.

And this is a major effort: the paper is nearly 50 pages long, and covers lots of ground.

At the very least, I hoped to learn something new, and certainly, the paper sets out well:

I survey the supply, demand, and market for health care, and health insurance, to think about how those markets should work to provide quality care, low cost, and technical innovation. A market-based alternative does exist, and it is realistic.

As a survey, I was surprised how narrowly-focused Cochrane seemed to be. For example, there is almost no discussion in the entire paper about the role of malpractice lawsuits in driving up healthcare cost, modulo a mostly-throwaway line about its role in constraining the outsourcing of certain medical work:

Personal-injury law firms are already lining up to sue based on the “inferior quality” of outsourced readings, with requisite horror stories.

But this is just the tip of the iceberg when it comes to the effect that malpractice lawsuits have had on healthcare costs. Surely he should have more to say than this?

And I was saddened that there was very little reflection about the basic fact that the biggest reason that the United States spends dramatically more on healthcare than we did 75 years ago is because of ADVANCES in healthcare: people are living longer, so over the course of their lives they get more healthcare. Moreover, many ailments which were formerly not treatable now are reliably and safely treatable, so we treat them.

More treatment, over longer life spans, equals a greater amount of resources spent on healthcare.

But this is a GOOD thing! We should be happy that people are living longer, and are having their illnesses treated. And Cochrane seems to understand this, for he notes that

We don’t want 1950s care at 1920s prices

But then he moves rapidly on, without really spending any time to discuss how we might get by with less healthcare, overall, in some sensible fashion.

I did learn a few things:

  • I had not previously been aware of the role of the "Certificate of Need." Here's how Cochrane describes it: In Illinois as in 35 other states, every new hospital, or even major purchase, requires a “certificate of need.” This certificate is issued by our “hospital equalization board,” appointed by the governor and, like much of Illinois politics, regularly in the newspapers for various scandals. The board has an explicit mandate to defend the profitability of existing hospitals. It holds hearings at which they can complain that a new entrant would hurt their bottom line.
  • And Cochrane makes a well-worded argument in favor of a new conceptualization of health-care insurance: To summarize briefly, health insurance should be individual, portable, life-long, guaranteed-renewable, transferrable, competitive, and lightly regulated, mostly to ensure that companies keep their contractual promises. “Guaranteed renewable” means that your premiums do not increase and you can’t be dropped if you get sick. “Transferable” gives you the right to change insurance companies, increasing competition.

    Insurance should be insurance, not a negotiator and payment plan for routine expenses. It should protect overall wealth from large shocks, leaving as many marginal decisions unaltered as possible.

These are both tremendously good insights, and were certainly worth the time I invested in Cochrane's essay.

But most of the rest of Cochrane's paper baffled me.

More than just baffled me; it flat-out astonished me.

Cochrane's main point seems to be that consuming healthcare should be much more like going to a restaurant, or hiring a gardening service for your house, or buying an airplane ticket, or choosing a new set of tires for your car: you should check Yelp before you make your decision; you should shop around for the best price; you should probably even try to use a coupon or negotiate for a better deal.

Is he serious?

Does he really think that selecting medical care is like these other activities? Apparently, he does:

Health care is not that different from the services provided by lawyers, auto mechanics, home remodelers, tax accountants, financial planners, restaurants, airlines or college professors.

Does he really think that it makes sense to change medical providers on an incident-by-incident basis, just like you go to one restaurant one day, and a different one the next week? He certainly doesn't seem to think that a person's medical information is very sensitive or private, dismissing that notion breezily as:

Confidentiality regulations, apparently more stringent than those for your money in the bank.

Is it possible that Cochrane has never had to have a sensitive discussion with his doctor? Never felt like he needed to have any deeper of a relationship than he has with the barista who makes his coffee in the morning? Is his life really that uncomplicated?

Even more astonishing is this notion he has of "negotiating" for your healthcare. Cochrane is a big proponent of negotiation, and wonders why it is missing in healthcare, when it is so prevalent elsewhere:

You don’t need an “insurance” company to negotiate your cellphone contract, home repair and rehab, mortgage, airline fare, legal bills, or clothes, as we do for health.

Is he serious?

I'll grant that people certainly negotiate the price they pay for their house, and there may be some people who negotiate the price they pay for their legal bills, but do you actually know anyone who negotiates their cellphone contract? Their airline fare? The price of their clothes?

And how many acquaintances do you have (other than medical professionals) who have the requisite base knowledge to negotiate, say, a reasonable price for spinal surgery?

Discussing the well-known (and, admittedly, frustrating) strawman that "a man in the ambulance on his way to the hospital with a heart attack is in no position to negotiate," Cochrane just completely dismisses it:

Our health care system actually does a pretty decent job with heart attacks.

... have they no families? If I’m on the way to the hospital, I call my wife. She’s a heck of a negotiator.

And then continues to invoke The Mighty Yelp:

In a competitive, transparent market, a hospital that routinely overcharged cash customers with heart attacks would be creamed by Yelp reviews

Is he serious?

When you have a heart attack, your wife should be negotiating with the hospital while you're in the ambulance? Or she should be browsing Yelp, deciding whether to tell the ambulance to take you to hospital A or hospital B?

Maybe all Cochrane means by "negotiate" is "shop around", and if that's true, then certainly I grant that there's a big place for that.

For example, when my parents were planning to get cataract surgery, they certainly did their homework, tried carefully to select the best surgeon. (Although, I don't think they actually used Yelp? Maybe they did?)

And it definitely seems like it used to be Common Wisdom that for any significant medical issue, you should get a second opinion, so maybe that's what Cochrane is trying to say.

Although, when people used to say "you should get a second opinion," it was typically the QUALITY of the medical advice that was of concern, not the PRICE of the medical advice.

The people that I know are generally much more concerned about the SUCCESS of that triple bypass, not about its cost.

Most of the people that I know don't even really negotiate the price of their house. Rather, they try to pick a decent real estate agent, and let the agent handle the negotiation. I do know a handful of people that are able to do this successfully on their own; a much smaller number of them enjoyed it; a smaller segment still have actually done that multiple times in their life.

Ask around about buying a car: this is really the experience you want when you need arthroscopic surgery on your knee?

What you want is for the pain to go away, and for you to be able to take up hiking again.

So, in the end, I struggle to comprehend what sort of world it is that Cochrane envisions.

It seems like his ideal is a situation in which we are all informed consumers, and have no trouble evaluating whether we are being given a good deal for duodenal atresia surgery or base cell carcinoma immunotherapy, in which we arrange to have strokes, aneurysms, broken arms and heart attacks with enough advance notice that we can consult Yelp before the ambulance arrives, and in which we respond to being told that the yearly mammogram will cost $375 by saying: "how about $225 instead?"

I guess I'm still looking for that informed, readable, clear-headed, approachable paper which explains what we, as a society, can truly and effectively do about healthcare costs.

Thank you Mr. Cochrane for trying.

But I'm afraid that, for me at least, you were not successful.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-05

Thu, 2017-10-05 19:58
Categories: FLOSS Project Planets

Mukul Gandhi: Zoomable Android image view

Wed, 2017-10-04 01:29
By default, an ImageView in an Android app is not zoomable by finger touch. I just found a brilliant customization of Android's ImageView that is zoomable. It's located here: https://github.com/MikeOrtiz/TouchImageView.

To use it in the app, we just have to do TouchImageView iv = (TouchImageView) findViewById(R.id.img); (in an Activity's onCreate method, for example). Of course, TouchImageView supports all the behavior of the standard ImageView class.
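
A minimal sketch of that usage in an Activity follows. The layout name, view id and drawable are hypothetical, and the layout XML is assumed to declare the view using the library's fully qualified class name; the TouchImageView import is omitted because the package depends on the library version you pull in.

import android.app.Activity;
import android.os.Bundle;

public class ImageActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Assumes res/layout/activity_image.xml contains a TouchImageView
        // declared with android:id="@+id/img".
        setContentView(R.layout.activity_image);

        TouchImageView iv = (TouchImageView) findViewById(R.id.img);
        // Standard ImageView API still works, since TouchImageView extends
        // ImageView; pinch-to-zoom and panning come from the library itself.
        iv.setImageResource(R.drawable.sample_photo);
    }
}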
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-03

Tue, 2017-10-03 19:58
  • Intel pcj library for persistent memory-oriented data structures

    This is a “pilot” project to develop a library for Java objects stored in persistent memory. Persistent collections are being emphasized because many applications for persistent memory seem to map well to the use of collections. One of this project’s goals is to make programming with persistent objects feel natural to a Java developer, for example, by using familiar Java constructs when incorporating persistence elements such as data consistency and object lifetime. The breadth of persistent types is currently limited and the code is not performance-optimized. We are making the code available because we believe it can be useful in experiments to retrofit existing Java code to use persistent memory and to explore persistent Java programming in general. (via Mario Fusco)

    (tags: persistent-memory data-structures storage persistence java coding future)

  • Google and Facebook Have Failed Us – The Atlantic

    There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.

    (tags: algorithms facebook google las-vegas news filtering hoaxes 4chan abuse breaking-news responsibility silicon-valley)

Categories: FLOSS Project Planets