EMC today formally announced a reseller partnership with MapR Technologies, a start-up that plans to sell a proprietary MapReduce product based on Apache Hadoop.
To date, MapR has been in development mode. The company has 15 beta customers testing its product, which will be sold both as software and as a stand-alone appliance.
"With the EMC deal, we get worldwide distribution," said John Schroeder, CEO of MapR. "[And]...we get a worldwide support organization."
MapR will be part of the recently announced EMC Greenplum HD Enterprise Edition, an interface-compatible implementation of the Apache Hadoop software stack.
Earlier this month, EMC announced its planned partnership with MapR as part of a new push into big data database and MapReduce products.
MapReduce is a framework for processing enormous data sets and performing high-performance analytics across a distributed cluster of server nodes. In each cluster, a master node performs the mapping function: as data is input, it is partitioned into smaller subsets so that the pieces of a larger query can be processed in parallel. Because the query is broken into subsets, MapReduce is faster than traditional relational databases at processing "big data" sets.
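The two phases described above can be illustrated with a classic word-count example. This is a single-process Python sketch for illustration only; in a real Hadoop cluster the map and reduce phases run in parallel across many nodes, with the framework handling partitioning and shuffling between them.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: break each input record into (key, value) pairs."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: aggregate all values emitted for each key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data", "big analytics"]
print(reduce_phase(map_phase(docs)))  # {'big': 2, 'data': 1, 'analytics': 1}
```

Because each map call touches only its own slice of the input, the work can be spread across as many nodes as the data is partitioned into, which is the source of MapReduce's speed on large data sets.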
"This is a major advancement for Hadoop users everywhere. MapR's innovations coupled with EMC's big data analytics capabilities and service will allow more people to use the power of big data analytics and enable substantial market growth," John Webster, a senior partner at market research firm the Evaluator Group, said in a statement. "MapR has managed to innovate on performance, cost reduction, dependability and ease-of-use all at once. This marks a major shift for the Hadoop market."
Luke Lonergan, CTO of EMC's Data Computing Division and a co-founder of Greenplum, the maker of a massively parallel data warehouse that EMC bought last year, said that EMC is working with dozens of resellers to get the MapR Hadoop software to customers.
"Combined with the EMC Greenplum Database, we will allow the co-processing of both structured and unstructured data within a single, seamless solution," said Scott Yara, co-founder of Greenplum and vice president of products for EMC's Data Computing Division.
MapR built a proprietary replacement for the Hadoop Distributed File System (HDFS) that can replace existing installations of the Hadoop file system. What MapR's product adds is accelerated performance and resilience, according to Schroeder.
"HDFS is really like writing to CD-ROM. You can write a file to it, but you can't access it through multiple readers. It's very constrained," he said.
MapR's product offers multiple channels to data via the Network File System (NFS) protocol, which is widely used in network-attached storage today. The company also re-architected the NameNode, the centerpiece of an HDFS file system. The NameNode maintains the file system's hierarchical namespace, in the same vein as a single domain name space. MapR's re-architected NameNode is distributed across nodes and offers higher availability, Schroeder said.
MapR said it also eliminated all single points of failure in the Hadoop stack and created automated failover for the JobTracker, which shares application jobs among multiple nodes so that if a primary node fails, the task is automatically picked up by the next available node.
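The failover behavior described above can be sketched conceptually. This is a toy Python illustration, not MapR's implementation; real JobTracker failover involves heartbeats, shared job state, and restarting in-flight tasks, none of which are modeled here.

```python
def assign_with_failover(task, nodes, is_alive):
    """Assign a task to the first available node, skipping failed ones.

    Conceptual sketch only: tries each node in order and reassigns the
    task to the next available node when the primary is down.
    """
    for node in nodes:
        if is_alive(node):
            return node  # the task runs here
    raise RuntimeError("no available node for task: " + task)

# Hypothetical cluster where the primary node has failed.
alive = {"node1": False, "node2": True, "node3": True}
print(assign_with_failover("query-42", ["node1", "node2", "node3"], alive.get))
# node2
```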
MapR also added data mirroring for business continuity, wide-area replication support, and data snapshot capability to its software for greater resiliency.
"The only data protection within Hadoop is replication," Schroeder said. "Typically people make three copies of data. That doesn't help you if you have a user or application error."
The snapshot capability allows administrators to roll an application back to a time prior to an error. For example, if an application or user error occurred at 9 a.m., the administrator can roll the application image back to 8:59 a.m.
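The rollback described above boils down to keeping point-in-time copies and restoring the most recent one taken before the error. A minimal Python sketch of that idea, with made-up timestamps and state (this is not MapR's snapshot mechanism, which operates at the file-system level):

```python
class SnapshotStore:
    """Toy snapshot store: records (timestamp, state) pairs in time order
    and can roll back to the latest snapshot at or before a given time."""

    def __init__(self):
        self._snaps = []  # list of (timestamp, state), appended in time order

    def snapshot(self, ts, state):
        self._snaps.append((ts, dict(state)))  # copy so later edits don't leak in

    def rollback(self, ts):
        """Return the most recent state captured at or before ts."""
        candidates = [state for t, state in self._snaps if t <= ts]
        if not candidates:
            raise ValueError("no snapshot at or before " + str(ts))
        return candidates[-1]

store = SnapshotStore()
store.snapshot("08:59", {"rows": 100})
store.snapshot("09:00", {"rows": -1})  # a bad write lands at 9 a.m.
print(store.rollback("08:59"))         # {'rows': 100}
```

The 9 a.m. error in the article's example is undone by restoring the 8:59 snapshot, exactly as the administrator would with the real feature.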
"It's the same thing you have in any serious storage platform from companies like EMC, HP or NetApp," he said.
Because MapR's file system is more efficient than HDFS, users will achieve two to five times the performance of standard Hadoop nodes in a cluster, according to Schroeder. That translates into being able to use about half the number of nodes typically required in a cluster, he said.
"Hadoop nodes cost about $4,000 per node depending on configuration. If you add in power costs, HVAC, switching, and rack space, you'll probably double that," Schroeder said. "Our product can immediately save you $4,000 and over 8 years it'll save you $8,000 per node."
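Schroeder's arithmetic is easy to check. The sketch below uses his per-node figures; the 100-node cluster size is a hypothetical chosen for illustration.

```python
node_cost = 4000              # hardware cost per Hadoop node, per Schroeder
loaded_cost = 2 * node_cost   # doubled to cover power, HVAC, switching, rack space

# If MapR's efficiency lets a cluster run on roughly half the nodes,
# each node eliminated saves its hardware cost immediately and the
# fully loaded cost over the node's lifetime.
standard_nodes = 100               # hypothetical cluster size
mapr_nodes = standard_nodes // 2   # about half the nodes, per the article
nodes_saved = standard_nodes - mapr_nodes

upfront_savings = nodes_saved * node_cost
lifetime_savings = nodes_saved * loaded_cost
print(upfront_savings, lifetime_savings)  # 200000 400000
```

So for this hypothetical 100-node workload, halving the node count would save $200,000 up front and $400,000 over the nodes' lifetime, matching the per-node $4,000/$8,000 figures in the quote.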
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.