Simon Crosby, the godfather of Xen, on virtualization, security and wimpy private clouds

Bromium is a well-funded startup that promises to tap some little-used inherent strengths of Xen virtualization to secure public clouds, opening up the possibility of greater cost savings for businesses that will be able to trust more data to these services.

According to one of its founders, Simon Crosby, isolating functions and establishing a trusted core to hardware systems can create public cloud environments able to meet the scrutiny of regulators concerned about the safety of data.

MORE ON CROSBY: Godfather of Xen: Virtualization holds a key to public-cloud security

Because Bromium is still in stealth mode, Crosby is purposely vague about some of this, but he does indicate that the technology exists to package secure systems that can be deployed within public networks and that can assure customers that privacy of data will be maintained.

Network World Senior Editor Tim Greene recently talked to Crosby about this. Here is an edited transcript of that conversation.

How do you feel about leaving behind dealing with Xen day-to-day?

Well I didn't say I'd left it. It's an open-source code base, and everything we do at Bromium is based on everything we've ever learned how to do well, which is develop software and deliver better systems relative to open source. So open source is at the heart of everything we do at Bromium, without exception. Ian [Pratt, the father of Xen and co-founder of Bromium] remains chairman of Xen.org, and we are still very active in the Xen world. It was hard leaving behind the products we had built, specifically XenServer and XenClient, but Xen remains extremely productive as a technology, and it's going into incredible places. It's very interesting.

What do you mean by incredible places?

Every time I peel the cover off some new widget that's being delivered, I find Xen -- it's gone deeply into the science world, and lots of appliances are being built with Xen-based virtualization. It's everywhere in the cloud, in places I never would have imagined, some of which I'm not even allowed to tell you about. Xen has really dramatically transformed the whole cloud business and I think continues to do so.

Why can't you talk about some of the places Xen has been deployed?

Bromium does a lot of interesting things in a world that you might think of as security-related that I think are actually more related to trust, or being trustworthy. Many of the people we have dealt with are like that -- certainly the people in the federal government we were dealing with when we built XenClient. These folks run deeply secure systems that they won't even tell me about because I have no security clearances. So often the conversations are quite one-way, and they're always with somebody named Bob even though they all look different. It's remarkable that open source has provided a fantastic vehicle for delivering technologies into communities where trust is absolutely fundamental, and there they seem to prefer the open-source methodology because everything is in the open. Then they can get their own hands on it, and they don't have to believe anybody. They don't have to believe me or anybody else. They can put their own eyes on the code, and particularly in the case of XenClient the core security modules were written by contributors from federal security agencies, people you would never normally expect to do this work.

Xen is still the smallest, still the most mature [virtualization] platform that's ever been built. We can always make it smaller and make it more secure.

Smaller is always better. In general, the systems that people are dealing with today with XenServer, or even with what VMware does, are small systems, but where they become larger is courtesy of all those device drivers they have to lug around with them, because they end up running all the hardware. In general that is a problem that you have to deal with. So Hyper-V is small, but given all the device-driver infrastructure it becomes bigger. Getting these things smaller, more invisible and tinier is far better from a goal perspective. Ultimately what you want to be able to do is embed the hypervisor within the platform in some way so that you can deal with a finite set of hardware and you don't have to carry a whole ton of drivers around. XenClient does that for a relatively limited [hardware compatibility list]. But yes, absolutely, getting things smaller and faster and leaner is always the goal. A counterexample would be, say, Windows, which is 60 million lines of code, right? If you simply assume your vulnerability is proportional to the number of lines of code, then you want to get it down.

By the way, KVM has the same challenge, which is that in general it is as big as Linux. The KVM driver itself is tiny; it's very elegant. It's just that when you implement KVM you have Linux running underneath it, and that brings with it its own challenges.
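As a back-of-the-envelope illustration of the proportionality assumption Crosby invokes -- vulnerability scaling with lines of code -- here is a minimal sketch using his own figures: 60 million lines for Windows, and the "couple of hundred thousand lines" he cites later for a hypervisor. The gap is roughly two orders of magnitude:

```python
# Back-of-the-envelope sketch, assuming (as Crosby does) that
# vulnerability is roughly proportional to lines of code.
windows_loc = 60_000_000    # Crosby's figure for Windows
hypervisor_loc = 200_000    # "a couple of hundred thousand lines"

print(f"~{windows_loc / hypervisor_loc:.0f}x smaller attack surface")  # ~300x
```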

What do you see then as the best model for dealing with security in virtual environments?

I think that when we look back in five years we will actually figure out that the core value of hardware virtualization is security -- actually, better trust, or better isolation -- and not all of the grandiose cases we've come up with for virtualization today. So even in the cloud, the primary use case for virtualization will, in five years or so, be security, and security through isolation. Right now I think we're in a woeful state. ... It's absolutely the case that there is no Fortune 500 company out there that has not been compromised, and it is really scary what's going on out there. And I think it's mostly because for the past 10 years or so we've been enjoying the benefits of doing wonderful things while other people have been focused on how to derail that. And we're behind.

Can virtualization help in the security effort?

To be absolutely clear, virtualization is an isolation technology, and I think we're starting to see the first cases of virtualization being used as a security technology in a couple of ways. I think one will be to create a highly secure cloud system which can be used to deliver multilevel secure systems. Intel recently announced its DeepSAFE technology with McAfee, a Type 1 hypervisor that loads early in boot, whose sole purpose is to secure the runtime. So you start to see the specific use of virtualization for security on clients. I think it will eventually be the same on server systems, too. Obviously you've got to get the server hypervisor to learn new things.

What exactly do you mean by isolation?

I'll talk about it in the context of the desktop, which is 60 million beautiful lines of code from Microsoft, and every single website I've ever visited is a different domain of trust. And yet they're all cohabiting those 60 million lines of code. And that's just the problem, because the structures that we use within an operating system to isolate different domains of trust from one another are very coarse and often pretty easy to compromise. For example, when a website downloads an ActiveX control that gains a privilege, it's very easy to extend those privileges across those two domains of trust. You have maybe processes as one abstraction, or user identifiers.

These are extremely coarse, and arguably every single website or every single application I ever touch is a different domain of trust and must be respected that way. The problem we have in general is we have too many trust domains cohabiting large blobs of relatively porous code. Therein lies the opportunity for somebody to cross from the open, public, insecure world to the private world. Maybe it's an exaggeration, but I am the biggest threat to the enterprise, because every day I walk in with my milieu wrapped around me, which is all of my friends and all the people I like to talk to and a whole bunch of enterprise tasks to do. And when one of my friends asks me to open an attachment, bang, the guy is now in the enterprise.
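To make the coarse primitive Crosby is criticizing concrete, here is a minimal sketch (an illustrative stand-in, not anything Bromium has described) of per-task isolation using ordinary OS processes -- the very boundary he argues is too porous, since every process still shares one kernel:

```python
# Sketch of process-per-trust-domain isolation. Crosby's point is that
# this boundary is coarse and porous: a privilege escalation inside one
# task can still reach the shared kernel and the rest of the system,
# which is why he argues for hardware-virtualized isolation instead.
from multiprocessing import Process

def render_untrusted(doc: str) -> None:
    # Stand-in for opening a website or attachment; in a real system
    # this is where an exploit would land.
    print(f"rendering {doc} in its own process")

if __name__ == "__main__":
    for doc in ["bank.example", "birthday-card.pdf", "intranet.corp"]:
        p = Process(target=render_untrusted, args=(doc,))
        p.start()
        p.join()  # one process per trust domain; all still share the kernel
```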

The challenge is that we have centralized: we have these large blobs of code that don't do the job of isolating realms of trust, and in addition we've made very poor assumptions about how users behave in the context of security. What ought to scare us all witless is how tolerant we are of the invasion of our personal privacy zones in our consumer identities, on Facebook and everything else. But we bring those behaviors into the enterprise with us every day. So when I get the email from my friend saying happy birthday, I'm going to click on it, and we're done. Users, no matter how much you train them, are going to make mistakes. Users go for fun instead of functionality. Security generally limits functionality, and that makes users want to get out of the secure world into something like Dropbox. I can't send a big PowerPoint through my Exchange server? I'm going to have to send it by Dropbox. In general everybody is using the cloud; they just don't know about it. That's a woeful state, and the only reason that is the case is we're so poor at architecting trust. I think Microsoft is going down the right path, by the way. In Windows 8 they're doing a much, much better job, but it's still pretty bad.

Are you proposing end devices that can support different security domains depending on what they are communicating with?

The same arguments apply server side. Everybody who logs onto the same Web server is in the same context of some process. Some of us are attackers and some of us are just people who want to do our banking transactions. The problem's not particularly a cloud problem; it's a client-server problem: we are incredibly poor at isolating units of computation which ought to be isolated from each other, because they have different trust relationships with the provider or don't trust each other. So that is the problem that Bromium is going after. It's not a security company in the sense that it finds the bad guys. I think we're useless at that, and the industry in general is useless at it. And that, by the way, is nothing more than a restatement of a well-known result of computer science, which is that it is not possible for one program to decide whether another program is good or bad. We need to just face up and get out of the stupid game of trying to decide whether a piece of code is an attacker or not. Blacklisting? It's done. Over. We should get out of it. It's easy enough on any system for the bad guys to change their code before you can get any new signature out. So we need to just admit blacklisting is done. Whitelisting doesn't go far enough. The code that you know about is fine; you know that it's fine. But that doesn't say how trusted code -- that is, well-intentioned code -- behaves when it is combined with untrustworthy data. That's a very challenging problem.
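The "well-known result" Crosby restates is essentially Rice's theorem, usually shown by a halting-problem-style diagonalization. Here is a minimal sketch of the contradiction, built around a hypothetical is_malicious() oracle:

```python
# Sketch of the diagonalization behind Crosby's point. Suppose a
# perfect classifier is_malicious(program) existed and were always
# correct. We could then write a program that defeats it.

def is_malicious(program) -> bool:
    raise NotImplementedError("no such perfect classifier can exist")

def payload():
    print("stand-in for malicious behavior")

def gotcha():
    if is_malicious(gotcha):
        return        # judged malicious -> behaves benignly
    payload()         # judged benign -> misbehaves

# Either answer is_malicious(gotcha) gives is wrong, so the assumed
# perfect classifier contradicts itself. Real products therefore rely
# on heuristics and signatures, which attackers can route around.
```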

Virtualization technology can help a lot there because, first, if you have the trusted components of a system, like the hypervisor, there ought to be only a couple of hundred thousand lines of code, which is a far smaller vulnerability footprint.

Second, we need to architect systems knowing that users will make mistakes. We are the vectors of attack, and we must be able to protect the system even when the user makes a mistake.

And third, we have to be able to deal with horrible things like zero-days. We have to know that there are vulnerabilities in our code, and even when our code lets us down -- because we are just human after all and we have written bad code -- we must be able to make concrete statements about the trustworthiness of the remaining systems and whether or not they have been lost or compromised. It's an absolutely fundamental requirement; we have to.

In the specific context of cloud systems, there's no excuse for server systems to be sold anymore without TPM (Trusted Platform Module) hardware subsystems, so you are able to reason about the security of the code base. There is no excuse for any block of data in the cloud not to be encrypted. You can encrypt it at wire speed, and there is no excuse ever for the cloud provider to manage the key. So what should happen is, when you run an application in the cloud, you should provide it with the key, and only in the context of the running application, as the data comes off some storage service, is it decrypted; it goes out re-encrypted on the fly. That way, if somebody compromises the cloud provider's interface, or if someone walks into the cloud provider and walks off with a hard disk, then you are OK. And there is no reason that people should not do this.
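To make the key-handling pattern concrete, here is a minimal sketch in Python (one possible illustration, not Bromium's technology) using the widely available cryptography package's AES-GCM primitive. The customer application holds the key; the provider only ever sees sealed blobs.

```python
# Minimal sketch of customer-held-key encryption (AES-GCM).
# Assumes the `cryptography` package; the "provider storage" is implied.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held by the customer app,
aesgcm = AESGCM(key)                       # never by the cloud provider

def seal(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                 # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def unseal(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

# Data sits on the provider's disks sealed; it is unsealed only inside
# the running application and re-sealed before it leaves again.
record = seal(b"customer data")            # safe at rest on provider disk
assert unseal(record) == b"customer data"  # usable only with the key
```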

All of these technologies are there. There is no excuse for server vendors not to put this on every server. My advice to every enterprise is do not buy a server without a TPM. And do not use a hypervisor that doesn't use it. We need to use all of the capabilities that are in the hardware to make the world more secure. People should beat the heck out of their vendors until they do a better job of it -- hypervisor vendors, server vendors and everything else.

I think many of the excuses for building private clouds are wimpy, too. People want to build private clouds because they don't want to lose control. By the way, there's always a good reason for not wanting to lose control. One of them is, it's my job. The other one is that the regulatory frameworks within which we work today really are articulated in terms of technologies that were cool 20 years ago. And you can't really state anything to a regulator about the data if you can't find the hard disk. So how is the guy supposed to allow the data out of the data center? People will continue to build private clouds and spend a bunch of money on servers they don't need, when it would be much better to use shared resources that providers could deliver at much better cost -- we'd simply move to an opex-based equation instead of a capex-dominated one. They could do it in a heartbeat if we could actually sort out the regulatory frameworks for it, and if we could just get the vendors to do the obvious things in terms of adopting security technologies.

Does Bromium address this broad range of problems?

Bromium is 25 engineers. That's all we are, so the answer obviously is no. We don't have a product out and we won't for a little while, but we're going to take on a piece of it.

When are you going to announce what you're up to?

I think we're on for early in the new year. We're at the stage where we're sending systems to potential early customers for them to kick around and give us feedback on.

Which of these problems will you address?

Bromium does nothing about finding bad guys, and we never will. We know nothing about forensics or how attacks are evolving. That's not our core competency. I think the only thing we would claim is that we're pretty good at doing virtualization stuff, and virtualization can be used to build more trustworthy systems if you can figure out how to execute different domains of trust in the appropriate context. So it becomes a technology which can make a system more trustworthy. But we will never be competent at finding the bad guys.

Virtualization can isolate diverse execution paths. So if you think about what XenClient does, you have a corporate desktop and you have a personal desktop. Stuff that I do on my personal desktop, which is connected to the big Internet, is not going to touch my corporate desktop. So there you have isolation at the granularity of my corporate identity and my personal identity. That improves security because none of my personal stuff hits my corporate desktop. Does it address all of the challenges? No. But it addresses some, and the ones it does address, it addresses pretty well. The key point I'm trying to make is that virtualization technology in general, through isolation, provides a different context in which to execute code of different trust levels. By way of example, the McAfee-Intel DeepSAFE technology provides McAfee a new privileged point of execution outside of Windows where it can do all sorts of cool stuff. So that becomes the most trusted code in the system. It's more trusted than anything on the desktop, and so it can always have the privilege of inspection and introspection.

So does the McAfee effort fall within Bromium's model of what ought to be happening?

It's the first solely security use case of virtualization technology of which I am aware. I think there will be more.

Including yours?

Sure.

How big an investment in cost and commitment will Bromium's product be for corporate customers?

We haven't said a word about that, and I can't go there. I think it's fair to say that with the adoption of virtualization technology -- let's take desktop virtualization as an example -- one of the barriers is that it requires substantial investment in new technology: server-side stuff, storage, network, and a whole bunch of things. That has an impact on the practice of IT, because the guy who used to manage desktop devices is now managing server-side hypervisors and virtual desktops, and it's a fairly substantial challenge. I think that's a problem.

Do you see a way around it?

Yes.

Is that something Bromium addresses that you can't talk about?

Right. I can't talk about it.

Do you ultimately see these problems being overcome and the cloud becoming the trustworthy place you think it ought to be?

Yes, I do. I absolutely do. Look, if we don't, I think it's fair to say there is no enterprise that will not be compromised. Every single record that we own in the enterprise space [will be at risk of] being available to somebody else. It is extraordinarily scary, and there are bad things going on out there, so we have to solve these problems, and the way to solve them is through better system design. Every vendor has a stake in this. The security guys do a much better job of finding the bad guys. The desktop virtualization folks are going about delivering more trustworthy systems. Most of that comes about through centralization, but courtesy of virtualization you get the property of always being able to revert to a good golden image. All that is good, but if I click on a bad PDF an attacker could still get on my virtual desktop and steal all the data. The DLP guys are trying to get tighter and tighter controls in terms of the policy they hook into about where you can and cannot go. The problem with DLP is it doesn't actually get the opportunity to get between executing code and what happens, so it's mostly logging what happened rather than preventing it.

Will Bromium's product eliminate the need for any traditional security products?

In security it turns out people want to know that they're secure. Just telling them they have a better system isn't good enough. They always want to know if there was a compromise, or if somebody tried to attack them, and how they tried to attack them. So the business of finding the bad guys, on the fly or post hoc, is always going to be required. The ability to describe policies around how enterprise apps should be managed in practice on the fly is going to be required. I don't see that going away; I can see it changing.
