
Why Big Data Means a Big Year for Hadoop

You can't have a conversation in today's business technology world without touching on the topic of big data.

Simply put, it's about data sets so large, in volume, velocity and variety, that they're impossible to manage with conventional database tools. In 2011, our global output of data was estimated at 1.8 zettabytes (each zettabyte equals 1 billion terabytes). Even more staggering is the widely quoted estimate that 90 percent of the data in the world was created within the past two years.

Behind this explosive growth in data, of course, is the world of unstructured data. At last year's HP Discover Conference, Mike Lynch, executive vice president of information management and CEO of Autonomy, talked about the huge spike in the generation of unstructured data. He said the IT world is moving away from structured, machine-friendly information (managed in rows and columns) and toward the more human-friendly, unstructured data that originates from sources as varied as e-mail and social media and that includes not just words and numbers but also video, audio and images.

Given the rise of big data, I'm sure you're hearing the buzz around Apache Hadoop, the software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes (a thousand terabytes) of data. It certainly looks like the Holy Grail for organizing unstructured data, so it's no wonder everyone is jumping on this bandwagon. A quick Web search will show you that in just the past few months, companies including EMC, Microsoft, IBM, Oracle, Informatica, HP, Dell and Cloudera (to name a few) have adopted this software framework.
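At the heart of Hadoop is the MapReduce pattern: a "map" step that turns raw records into key-value pairs, and a "reduce" step that aggregates those pairs, with the framework handling the distribution across nodes. The sketch below, a single-process illustration in Python, shows the canonical word-count example; the function names are illustrative and are not Hadoop API calls.

```python
from collections import defaultdict

def map_phase(documents):
    # Map step: emit one (word, 1) pair for every word occurrence.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Reduce step: group pairs by key and sum the values.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data means big clusters", "big data needs hadoop"]
word_counts = reduce_phase(map_phase(docs))
print(word_counts["big"])   # 3
```

In a real Hadoop cluster, the map and reduce steps run in parallel on many machines, each working on a slice of the data; that parallelism is what lets the framework handle petabytes across thousands of nodes.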

What I find even more notable is that companies such as Yahoo, Amazon, comScore and AOL have turned to Hadoop to both scale their businesses and lower storage costs.

According to some recent research from Infineta Systems, a WAN optimization startup, traditional data storage runs $5 per gigabyte, but storing the same data costs about 25 cents per gigabyte using Hadoop.
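To see what that gap means at scale, consider a back-of-the-envelope calculation (assuming decimal units, 1 petabyte = 1,000,000 gigabytes, and the per-gigabyte prices as quoted above):

```python
# Per-gigabyte storage costs as quoted from Infineta Systems' research.
TRADITIONAL_PER_GB = 5.00  # dollars
HADOOP_PER_GB = 0.25       # dollars

petabyte_gb = 1_000_000  # decimal convention: 1 PB = 1,000,000 GB

traditional_cost = TRADITIONAL_PER_GB * petabyte_gb  # $5,000,000
hadoop_cost = HADOOP_PER_GB * petabyte_gb            # $250,000
savings_factor = traditional_cost / hadoop_cost
print(savings_factor)  # 20.0
```

In other words, a petabyte that costs $5 million to store conventionally would run about $250,000 on Hadoop, a twentyfold difference.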

That's one number any CEO will remember.

So get ready for Hadoopalooza 2012. I'd love to hear what you're doing to tackle big data storage, so please drop me a line anytime.

Michael Friedenberg is the president and CEO of CIO magazine's parent company, IDG Enterprise. Email him at [email protected].

Read more about data management in CIO's Data Management Drilldown.
