Is Google too big for the internet's good?

Google is popular. There's probably not an internet user who hasn't accessed its services for search and mapping, and Google has even achieved the distinction of turning its name into a verb. But enormous popularity and global reach place an unexpected burden on the search giant: when it goes down, the entire web is shaken.

That's exactly what happened on Thursday, May 14, when Google suffered a major failure. A routing error sent traffic to servers in Asia, creating what Google called "a traffic jam". No kidding. According to the company, 14 percent of its users experienced slowdowns or outages; many accounts put the number of those inconvenienced quite a bit higher. And we can't even guess at how many people were seriously put out by subsequent outages, including the Gmail failure last month. But this isn't the day to beat Google up, or to fret about the implications Google outages have for cloud computing.

What got my attention this week was a study of internet usage by Arbor Networks, due to be formally presented on October 19, which found that just 100 ASNs (autonomous system numbers) out of about 35,000 account for some 60 percent of traffic on the public internet. What's more, of the 40,000 routed sites on the web, 30 large companies now generate and consume a disproportionate 30 percent of all traffic, according to the two-year study.

Not surprisingly, the biggest kahuna of all the big kahunas is Google, which accounts for about 6 percent of all web traffic globally. The other big guys include Level3, LimeLight, Akamai and Microsoft, in that order.
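To get a feel for what a figure like "100 ASNs carry 60 percent of traffic" means, here is a minimal Python sketch that ranks networks by traffic volume and reports the share carried by the top N. The per-ASN volumes below are invented placeholders to show the shape of the calculation, not Arbor's measurements.

```python
# Back-of-the-envelope look at traffic concentration: what share of total
# traffic do the N highest-volume networks carry? All figures are synthetic.

def top_n_share(traffic_by_asn, n):
    """Fraction of total traffic carried by the n highest-volume ASNs."""
    volumes = sorted(traffic_by_asn.values(), reverse=True)
    total = sum(volumes)
    return sum(volumes[:n]) / total if total else 0.0

# Hypothetical per-ASN traffic volumes (arbitrary units), heavily skewed
# toward a few giants, as real internet traffic is.
traffic_by_asn = {f"AS{i}": 1_000_000 / (i + 1) for i in range(35_000)}

print(f"Top 100 ASNs carry {top_n_share(traffic_by_asn, 100):.0%} "
      "of all traffic in this toy data set")
```

The point of the exercise is simply that a heavily skewed distribution lets a tiny fraction of networks dominate the total, which is what the study measured in the wild.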

Yes, the internet is stronger - in a structural sense - than ever. But the concentration of traffic in so few hands raises troubling questions about the ability of the internet to function when a major originator of traffic goes down or becomes infected. Simply put, Google may be too big to fail, and as we learned during the financial meltdown, that ain't good.

The flat internet

I tend not to be impressed by studies conducted by vendors, but this one strikes me as quite credible. Arbor - in collaboration with the University of Michigan and Merit Network - looked at two years of internet traffic across 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. The results were based on an analysis of 2,949 peering routers across nine Tier-1, 48 Tier-2 and 33 consumer and content providers in the Americas, Asia and Europe.

The implications of the results are, well, scary. In part that's because the structure of the internet has changed significantly in the past few years, according to Danny McPherson, Arbor's chief security officer and a co-author of the study. Network traffic used to go up and down the food chain of transit providers, an inefficient situation, but one that did not create single points of failure.
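To make that structural shift concrete, here is a small, hypothetical sketch: a toy AS-level graph in which a request from a consumer ISP either climbs the transit hierarchy to reach a content network or crosses a direct peering link. The network names and links are invented purely for illustration.

```python
# Toy illustration of the "flattening" Arbor describes: in the old model a
# request climbs from an eyeball ISP through regional and Tier-1 transit and
# back down; with direct peering the content network connects straight to
# the eyeball ISP. Names and links are made up for this example.
from collections import deque

def as_hops(graph, src, dst):
    """Breadth-first search returning the number of AS-level hops from src to dst."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops + 1))
    return None  # no path

# Old-style hierarchy: eyeball ISP <-> regional transit <-> Tier-1 <-> content network
hierarchical = {
    "eyeball-ISP": ["regional-transit"],
    "regional-transit": ["eyeball-ISP", "tier1-backbone"],
    "tier1-backbone": ["regional-transit", "content-network"],
    "content-network": ["tier1-backbone"],
}

# Flattened internet: the content network also peers directly with the eyeball ISP
flattened = {k: list(v) for k, v in hierarchical.items()}
flattened["content-network"].append("eyeball-ISP")
flattened["eyeball-ISP"].append("content-network")

print("hops via transit hierarchy:", as_hops(hierarchical, "eyeball-ISP", "content-network"))  # 3
print("hops with direct peering:  ", as_hops(flattened, "eyeball-ISP", "content-network"))     # 1
```

In the toy topology, direct peering cuts the path from three AS hops to one. Multiply that by a handful of heavily peered content giants and you get the flattened, concentrated internet the study describes.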

