Wednesday, April 24, 2013

It is time to patch ... again!

A recent article on net-security reports that, according to a study from the Finnish security giant F-Secure, 87% of corporate computers are not adequately patched, which represents a huge security risk.

I can't say I am surprised: I have seen more than my share of corporate computers lacking some updates. Not to say SEVERAL updates. And more often than not: critical updates, things for which an exploit is available and actively used in the wild.

Dealing with a large number of vulnerabilities is difficult: with time, patches come to depend on other patches, and pieces that seemed independent become interdependent. If you have ever helped a friend install updates, or if you work in this field yourself, you have probably experienced the dreaded "one more reboot and we should be okay", which usually ends up needing another 4 hours of work and several more reboots. Multiply that by two hundred (or two thousand) users, some of them with a quasi-religious repulsion to rebooting, and you understand why the situation can deteriorate really quickly.

An indicator I frequently use is the average vulnerability (publication) age: it is based on the CVE IDs and gives an idea of "how badly the patching front is lagging." I couple it with the publication year distribution to get a sense of whether things are slipping.
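As a rough illustration, here is a minimal sketch of how such an indicator could be computed from a flat list of CVE IDs, for example exported from a vulnerability scanner. The sample IDs below are placeholders, not real scan data; the only thing the sketch relies on is that the publication year is encoded in the CVE ID itself.

from collections import Counter
from datetime import date

# Hypothetical scan export: one CVE ID per missing-patch finding (placeholder data).
findings = [
    "CVE-2011-3402", "CVE-2012-0158", "CVE-2011-0611",
    "CVE-2012-4681", "CVE-2010-2568", "CVE-2012-1723",
]

def publication_year(cve_id):
    # The year is part of the CVE ID itself (CVE-YYYY-NNNN).
    return int(cve_id.split("-")[1])

# Average vulnerability age, in years, relative to today.
this_year = date.today().year
ages = [this_year - publication_year(c) for c in findings]
print("Average vulnerability age: %.1f years" % (sum(ages) / float(len(ages))))

# Publication year distribution: a quick way to see whether things are slipping.
distribution = Counter(publication_year(c) for c in findings)
for year in sorted(distribution):
    print("%d | %s (%d)" % (year, "#" * distribution[year], distribution[year]))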


That kind of visual representation usually helps explain why a situation is bad: it is better than dropping an A-bomb like "you have more than a hundred thousand vulnerabilities that are 2 years old or more", and at the same time it gives an idea of where the issue is. In this case, there is a spike for 2011, closely followed by 2012, which can be a lead to start investigating why things went wrong: did someone leave at that time? Was there a massive OS upgrade? Did the tool used to distribute updates run out of license?

While explaining the gravity of that kind of situation is easy and usually well understood, the problems start when it is time to remediate! I have heard excuses ranging from "we can't because it will break our main application" to "but surely we can't connect to each and every system to fix it, can we?", the median being "we are looking at implementing a tool that will automate the deployment of the patches." Of course, what is not mentioned is that during the time it takes to evaluate the tool, implement it, configure it and start getting something decent out of it, nearly a year goes by during which no problem is addressed, and in the meantime new vulnerabilities pile up and make the situation worse.

Is there a way out? Yes: the most important thing is to have a documented patch policy that permits the IT team to implement fixes as they are needed. While a cyclic patch program is okay, there is a need for out-of-band operations, such as urgent patches (Adobe Flash, anyone?) or fixing situations that have deteriorated past a certain point, for example a machine with vulnerabilities more than 6 months old. This is where good management support is important, to make sure that objections can be dealt with rather than becoming a hard stop, or an excuse to start granting exemptions and free passes.
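To keep that 6-month rule actionable rather than a dead letter, the check itself can be automated. Below is a small, hypothetical sketch: the inventory and the 180-day threshold are assumptions made up for the example, not the output of any specific tool. It simply flags machines whose oldest missing patch exceeds the policy threshold and should therefore trigger an out-of-band operation.

from datetime import date, timedelta

# Policy threshold (assumed here): vulnerabilities older than ~6 months
# on a machine trigger an out-of-band patching operation.
MAX_AGE = timedelta(days=180)

# Hypothetical inventory: machine name -> publication dates of its missing patches.
missing_patches = {
    "srv-file01": [date(2012, 6, 12), date(2013, 2, 14)],
    "wks-0042":   [date(2013, 3, 12)],
    "srv-web02":  [date(2011, 10, 11), date(2012, 12, 11)],
}

today = date.today()
for machine, published in sorted(missing_patches.items()):
    oldest = today - min(published)
    if oldest > MAX_AGE:
        print("%s: oldest missing patch is %d days old -> out-of-band patching" %
              (machine, oldest.days))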

The server situation is a bit more complex: since the team that deploys the patches is the same one that will have to fix the mess should something go wrong, there is a natural tendency to leave servers untouched if the fix doesn't bring anything they need. But given that all modern virtualization solutions offer one way or another to replicate and snapshot servers, there is no excuse for not testing the patches. Again, it is up to management to insist on having results.

Lastly, it is important to have a set of metrics that reflects reality, so that progress can be shown over time.
