Wednesday, August 14, 2013

Smartphone Experts notifies customers of hack

It's the usual story: a payment-processing site or application gets hacked, customer data lands in the hackers' hands, and the company notifies its customers. However, there is something here that really shocks me:

Although stored customer data were encrypted, Diana Kingree, the Senior Vice President of Commerce, noted that the hacker may have been able to use a decryption feature of the system to view customers’ names, addresses, credit or debit card number, CVV, and card expiration date.
The PCI DSS Requirements state in point 3.2.2:

Do not store the card verification code or value (three-digit or four-digit number printed on the front or back of a payment card) used to verify card-not-present transactions. 
This code is meant to be used when the person executing the transaction and the card itself are not physically present where the payment takes place. It is, to some extent, a password or a PIN. Why do companies still store the CVV? Beats me.

Storing the CVV defeats its whole purpose: making sure that the person making the payment actually possesses the card. Once it sits in the same database as the card number and expiration date, its role is completely negated.
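To make the point concrete, here is a minimal sketch of the compliant pattern (all names are hypothetical, not any real processor's API): the CVV is passed through to the payment processor for the one-time authorization and never appears in the record that gets persisted.

```python
from dataclasses import dataclass


@dataclass
class StoredCard:
    """What a merchant may persist (and even this needs protection)."""
    holder_name: str
    pan_last4: str   # storing the full PAN requires strong encryption
    expiration: str  # MM/YY
    # Deliberately no CVV field: PCI DSS 3.2.2 forbids storing it.


def authorize_payment(holder_name: str, card_number: str,
                      expiration: str, cvv: str,
                      amount_cents: int) -> StoredCard:
    # Hypothetical call: the CVV is sent to the processor once,
    # e.g. processor.authorize(card_number, expiration, cvv, amount_cents),
    # and then simply discarded.
    record = StoredCard(holder_name=holder_name,
                        pan_last4=card_number[-4:],
                        expiration=expiration)
    # `cvv` goes out of scope here; it never touches the database.
    return record
```

The design choice is structural: since the persisted type has no CVV field, no code path can accidentally write it to storage.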



Friday, August 9, 2013

Chinese Hacking Team Caught Taking Over a Decoy Water Plant

Not a surprise, but still quite a shock: a security researcher set up a decoy water plant, simulating everything from the workstations to the industrial control systems, and caught hackers infiltrating it. If they got into his honeypot, chances are they are already inside real infrastructure, such as energy and telecommunications providers.

The article is here.

Wednesday, August 7, 2013

Researchers develop new method for understanding network connections

This is interesting: a team of researchers at MIT has designed a way to recover the underlying network beneath an observed network. This makes it possible to identify the direct dependencies, separating out in the process the indirect links, i.e. elements that just "tag along" with other elements.
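I can't check the paywalled paper, but if the method is the closed-form "network deconvolution" approach (an assumption on my part), the core idea fits in a few lines: model the observed matrix as the direct links plus all indirect paths, G_obs = G_dir + G_dir² + ..., and invert that geometric series to get G_dir = G_obs (I + G_obs)⁻¹.

```python
import numpy as np


def deconvolve(g_obs: np.ndarray) -> np.ndarray:
    """Recover direct links from an observed dependency matrix,
    assuming G_obs = G_dir + G_dir^2 + ...  (all indirect paths),
    which inverts to G_dir = G_obs @ inv(I + G_obs)."""
    n = g_obs.shape[0]
    return g_obs @ np.linalg.inv(np.eye(n) + g_obs)


# Demo: build an observed matrix from a known direct one, then recover it.
g_dir = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.5, 0.0]])
# Sum the geometric series of indirect paths: G_obs = G_dir @ inv(I - G_dir)
g_obs = g_dir @ np.linalg.inv(np.eye(3) - g_dir)
recovered = deconvolve(g_obs)
print(np.allclose(recovered, g_dir))  # True
```

The demo works because the spectral radius of the direct matrix is below 1, so the series of indirect paths converges; whether the actual paper uses this exact formulation is, again, my assumption.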

The paper is here, but behind a paywall.


Monday, August 5, 2013

You’ve got mail: Someone else’s medical test results


Doctors and the Internet don't mix well: here is an article on a few mix-ups involving e-mail and doctors. A reporter for the Boston Globe received several e-mails containing medical results for people with names similar to hers.

From this, a possible attack I can think of is to register a number of e-mail addresses under names similar to the target's. There is a chance that a misspelling at a doctor's office will land you some of their results.
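The idea can be sketched as a single-edit typo generator (purely illustrative, for thinking about how many lookalike mailboxes a defender would have to worry about): given a target mailbox name, enumerate the deletions, substitutions, and transpositions a busy receptionist might type.

```python
import string


def lookalike_mailboxes(name: str) -> set:
    """Generate mailbox names one typing error away from `name`:
    a character deleted, substituted, or swapped with its neighbor."""
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                # deletion
        for c in string.ascii_lowercase:
            if c != name[i]:
                variants.add(name[:i] + c + name[i + 1:])    # substitution
    for i in range(len(name) - 1):                           # transposition
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # the correct spelling is not an attack surface
    return variants


print(len(lookalike_mailboxes("jsmith")))  # dozens of candidates from one name
```

Even a short name yields dozens of plausible misspellings, which is exactly why "the address looked about right" is not a safe way to send medical results.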



Friday, August 2, 2013

FACTBOX - Hacking talks that got axed

The struggle between security researchers and private companies is nothing new: there are numerous examples of researchers or hackers being coerced into not talking about a vulnerability found in a piece of software, a website, or even hardware. This also includes not talking at hacker conventions.

There may be various reasons for that, such as "the company doesn't want its reputation to suffer publicly" or "black hats may get the information and turn it to their advantage." There may also be an unspoken reason: some companies develop exploits for these vulnerabilities and sell them to "trustworthy" governments and agencies. Examples include VUPEN and the (in)famous FinFisher, part of Gamma Group. It should be noted that the latter doesn't explicitly state that its products are reserved for those same "trustworthy" governments and agencies; as a reminder, a Gamma Group sales offer was found among torture equipment in 2011 in Egypt, when rioters invaded the State Security Investigations Services HQ. Given that private companies do it, there is no reason to believe that governmental agencies around the world don't do the same.

In that light, any publication of any kind of vulnerability is a hindrance: not only may it force the vendor to take action and fix the vulnerability, but it also gives other security researchers a starting point for detecting or mitigating it.

The argument that "it may help the bad guys" is not entirely valid: the cybercrime world has shown many times that it can find vulnerabilities on its own, be it software zero-days or hardware hacks. To believe that a single security researcher is the only one looking for vulnerabilities in a given piece of technology is simply unrealistic: if it can lead to money, and most of the time it can, the bad guys will take an interest in it.

That leaves the reputation concern, which may also be a poor excuse. A number of companies, mostly in the Open Source movement, have opted to publicly disclose everything concerning vulnerabilities and breaches. As a result, some have actually gained recognition and the trust of their users, who know what to expect. A real reason is the cost of fixing a vulnerability: it may take a lot of work, which translates into hard cash for the company, often for products distributed for free (think Adobe Reader, Adobe Flash, or Oracle Java, to name a few of the usual suspects). On the hardware side, it is even worse: even if it is possible to distribute a patch, applying it to millions of cars or door locks is problematic, as the fix may require a technician's visit.

A concept that has developed over the last few years is "responsible disclosure": a discussion between the security researcher who found the vulnerability and the company that makes the affected product, in which a delay is negotiated before the vulnerability is made public. However, this has slowly been replaced by "vulnerability commercialization": a company such as iDefense or TippingPoint pays for any vulnerability discovered, along with a proof of concept. The question is: what happens afterwards?

The concept of disclosure is very sensitive: it has been used in the past as a form of blackmail against the affected company, either to push it to address the problem quickly or simply to extort money. These companies are no angels either, and have often used the courts to threaten security researchers.

As you can see, this is a very difficult topic, and not one I expect to see settled in the near future.