The Worst of All Possible Worlds

Sometimes I read configuration guides that advise installing anti-virus products on servers. Since I don't run Windows servers in production environments, I can usually ignore such advice. Proponents of the "anti-virus everywhere" mindset think that adding anti-virus is, at the very least, a "defense-in-depth" measure. This very question was debated last year, in fact.

A lesson I learned from the excellent book Protect Your Windows Network is that "defense-in-depth" is not a cost-free justification for security measures. Every configuration and installation aspect of a system provides benefits as well as costs. Something implemented for "defense-in-depth" (whether truly believed to be helpful, or ignorantly applied) may turn out to harm a system.

Thanks to Harlan Carvey, I learned of another example of a defense-in-depth technique damaging security. This is the worst of all possible worlds -- adding a security measure that results in massive vulnerability. This upcoming eEye advisory warns:

A remotely exploitable vulnerability exists within the Symantec Antivirus program. This flaw does not require any end user interaction for exploitation and can compromise affected systems, allowing for the execution of malicious code with SYSTEM level access.

So you add anti-virus to a server, and BANG. 0wn3d.

Harlan focused on the following quote in the email he sent me:

"People shouldn't panic," [eEye's] Maiffret said. "There shouldn't be any exploits until a patch is produced."

This is a reference to the fact that once a patch is released, white, gray, and black hat security researchers race to analyze it to identify the vulnerable code fixed by the patch. Harlan wonders (rightly) whether the underground (or others) already know about this vulnerability, and whether they are already exploiting it.
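
As a toy illustration of that race (and nothing more), the sketch below hashes two builds of a binary in fixed-size chunks and reports which regions changed. The file names are hypothetical, and real patch-diffing tools such as BinDiff work at the function and control-flow level rather than raw byte offsets, but the principle is the same: the patch itself tells attackers where to look.

    import hashlib

    CHUNK = 512  # compare the binaries in fixed-size chunks

    def chunk_hashes(path):
        """Return a list of SHA-256 digests, one per CHUNK-byte slice of the file."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(CHUNK)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def diff_binaries(old_path, new_path):
        """Print the offsets of chunks that differ between two builds."""
        old, new = chunk_hashes(old_path), chunk_hashes(new_path)
        for i, (a, b) in enumerate(zip(old, new)):
            if a != b:
                print(f"chunk at offset {i * CHUNK:#x} changed")
        if len(old) != len(new):
            print("file lengths differ; trailing data not compared")

    # Hypothetical file names, for illustration only:
    # diff_binaries("scanner_old.dll", "scanner_patched.dll")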

Keep this case in mind if you believe that "adding security" is a cost-free endeavor.

Comments

Anonymous said…
Rich,

This is the old problem of fighting coding errors with more code. You run the risk of providing a new attack vector through the very means of assuring "correct" operation. Nothing new here.

The only axe I have to grind with this post is that anti-virus is a necessary evil, if for no other reason than to detect and/or remediate the tools of the unstructured threats that exist in perpetuity. I would argue that this specific case doesn't quite merit the traditional "why introduce more code to protect against a code problem?" argument, because anti-virus is really there to save admin/IR time and should be treated as such.

However, I will agree that security vendors need to be held to a higher standard of secure coding than the rest of the industry, since they're responsible for providing that assurance. If this bug is as simple as Maiffret has implied in the Dark Reading article, then Symantec has some serious development issues to think about, especially since they have the cash and good reason to invest in reviewing their product for basic flaws.

My $0.02,

-Pete
Anonymous said…
My concern with the quote is that disclosure discussions have long focused on the perception that if the "good guys" have discovered a vulnerability, there is always the potential that the "bad guys" already know about it.

One approach is to assume that Marc's quote is completely out of context.

It's easy to say that there "shouldn't" be any exploits... but the question then becomes, how do we know that this vulnerability hasn't already been exploited? If an attacker can subvert the very "guardian" you're relying on, what then becomes your "sensor"?
Anonymous said…
Richard -

Right on. The popularity of AntiVirus on Windows file servers is an example of deploying security controls without really understanding why you are doing so. To make matters worse, I often find that the servers are running the same detection engine and signatures that are already running on the clients and gateways.

It would often be more effective to run a HIPS product on servers that prevents any unknown code from executing, while monitoring system behavior (not signatures!). This, combined with other sensible controls (such as allowing only authenticated and authorized clients to connect, and limiting traffic to that which is required), goes a long way to protecting a server.
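
For what it's worth, the core of the "only known code runs" idea is simple enough to sketch. The following Python fragment, with a hypothetical allowlist and path, illustrates the hash-rule allowlisting that HIPS products and Windows software restriction policies implement far more robustly:

    import hashlib

    # Hypothetical allowlist: SHA-256 digests of binaries an admin has approved.
    APPROVED = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_approved(path):
        """Allow execution only if the binary's hash is on the allowlist."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in APPROVED

    # A launcher built on this check would refuse anything unknown:
    # if not is_approved("C:/apps/tool.exe"):
    #     raise PermissionError("unknown code; execution denied")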

I disagree about not running Windows servers in production, but only because of the environments in which I've worked. If you can get away with using only *nix systems, great. I do find that Windows servers have some compelling features when working with Windows clients (go figure!), and that Windows Server 2003 has come a long way in improving security. Unfortunately, most admins don't bother to understand what it takes to secure them, or even to use the tools they've already got.

- Chris
Anonymous said…
I can't agree enough with what Chris said (above). I'm starting to think that signature-based antivirus is dead ...

Well, at least at the endpoints. On fileservers, we don't run AV software primarily to protect the server. It's there to catch malware before the clients download and run it!

So. I'm becoming less and less enamored of AV software on the endpoints (preferring, instead, things like PrevX, which prevents unknown code from executing, or GPO software restriction policies). But I'm not quite ready to throw out the AV baby with the bathwater because of a few quickly-patched issues. If we had to throw out every piece of software that ever had a vulnerability in it... there wouldn't be much left!
