Friday, August 2, 2013

FACTBOX - Hacking talks that got axed

The struggle between security researchers and private companies is nothing new: there are numerous examples of researchers or hackers being coerced into not talking about a vulnerability found in a piece of software, a website or even a hardware product. This also includes not presenting at hacker conventions.

There may be various reasons for that, such as "the company doesn't want its reputation to suffer publicly" or "black hats may get the information and turn it to their advantage." There may also be an unsaid reason: some companies develop exploits for these vulnerabilities and sell them to "trustworthy" governments and agencies. Examples include VUPEN and the (in)famous FinFisher, part of Gamma Group. Note that the latter doesn't explicitly state that its products are reserved for those same "trustworthy" governments and agencies, and as a reminder, a Gamma Group sales offer was found among torture equipment in Egypt in 2011, when protesters invaded the State Security Investigations Service HQ. Given that private companies do this, there is no reason to believe that governmental agencies around the world don't do the same.

In that light, any publication of any kind of vulnerability is a hindrance: not only may it force the vendor to take action and fix the vulnerability, but it also gives other security researchers a starting point for finding ways to detect or mitigate it.

The argument that "it may help the bad guys" is not entirely valid: the cybercrime world has shown many times that it can find vulnerabilities on its own, be it software zero-days or hardware hacks. To believe that a security researcher is the only one looking for vulnerabilities in a given piece of technology is simply unrealistic: if it can lead to money - and most of the time it can - the bad guys will take an interest in it.

That leaves the reputation concern, which may also be a poor excuse. A number of companies, mostly in the Open Source world, have opted to publicly disclose everything concerning vulnerabilities and breaches. As a result, some have actually gained recognition and the trust of their users, who know what to expect. A real excuse is the cost of fixing a vulnerability: it may take a lot of work, which translates into hard cash for the companies, often for products distributed for free (think "Adobe Reader", "Adobe Flash" or "Oracle Java", to name a few of the usual suspects.) On the hardware side, it is even worse: even if it is possible to distribute a patch, applying it to millions of cars or door locks is problematic, as the fix may require a specialized technician.

A concept that has developed over the last few years is "responsible disclosure": a discussion between the security researcher who found the vulnerability and the company that makes the affected product. The "responsible" part is that a delay is negotiated before the vulnerability is made public. However, this has slowly been replaced by "vulnerability commercialization": a company, such as iDefense or TippingPoint, pays for any discovered vulnerability (with a proof of concept). The question is: "but what happens after?"

That concept of disclosure is very sensitive: it has been used in the past as a form of blackmail against the affected company, either to make it address the problem quickly or simply to extort money. These companies are no angels either, and have often used the courts to threaten security researchers.

As you can see, this is a very difficult topic, and not one I expect to see settled in the near future.
