Attack of the Hadoop Hybrids!
As if IBM and EMC crashing the Hadoop party wasn’t strange enough, it now seems that every data warehousing, database and appliance vendor has a Hadoop co-existence story and, in many cases, a hybrid architecture that they claim provides a best-of-both-worlds solution for data warehousing and big data analytics.
The good news is that this is proof positive that Hadoop has arrived as an enterprise solution. The bad news, particularly with some of these hybrid architectures, is that they are not a best-of-both-worlds solution. In fact, at least in our opinion, the hybrid model is a worst-of-all-worlds solution.
On the face of it, a hybrid architecture should make sense. After all, Hadoop, with its focus on unstructured data and its low-cost model, should be the ideal complement to a traditional data warehousing solution focused on structured data and pre-built enterprise integration.
But how does this work when, in many cases, the two software architectures can’t co-exist on the same hardware or operating system platform? And how does it work when already stretched IT or DevOps teams are expected to manage two entirely dissimilar hardware and software architectures, plus two datasets that must be kept in sync to support up-to-date, accurate analytics?
Of course, the answer is that, for most companies, it doesn’t work. Maybe some large enterprises have the resources to make this work but, for the rest of us, it’s hard enough to maintain one architecture let alone two.
That’s why Treasure Data’s solution is a service. Dealing with the operational complexities of hybrid environments isn’t something that analytic users – or even DevOps – should have to focus on or worry about, as it adds no value to the business. Leave that to us. After all, that’s why we’re here.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)