Monday, January 7, 2013

They’re guilty of ID theft, but don’t ask the government how/where they got the personal info?

An article on DataBreaches.net caught my eye: the story of one Travonn X. Russell, whose residence was raided and searched, and in which all the material needed for ID theft was found: social security numbers, addresses, bank information and so forth.

The author comments and asks why the source of the material has not been mentioned.

While this is a totally legit question, I think the answer is quite obvious: there may be a parallel investigation into the source of the material, possibly a larger data breach. Disclosing any information about it may be illegal, and could also compromise months of painstaking, methodical work: the information can vanish quickly, the relevant systems may be outside the investigators' jurisdiction, and the theft may have gone on for a long time without being detected.

At the core of the problem is this question: "How can we force PII holders to act responsibly?" There is a ton of regulations and compliance rules - PCI DSS, HIPAA/HITECH and various state laws in the US, to mention but a few - but companies are frequently found in violation of them, TJX being the most famous case. In addition, the Heartland case proved that PCI DSS compliance is not enough. I tend to agree: PCI DSS and other regulations set the bare minimum requirements; each industry is supposed to go the extra "relevant" mile. As it was nicely put:

"Compliance is not Security Assurance."
However, for a lot of companies, being merely compliant is the target, and no additional effort is made. The reasons are legion:
  • Cost of additional measures;
  • "Incompetence" of key players;
  • "Brand engineering";
  • Lack of incentives.

Cost of additional measures

Measures that go above and beyond the bare minimum may have a cost; at the very least, the perception is that they will. Because that cost is real, whereas the cost of an incident is only potential, a number of companies will not invest in additional security.

As a result, these companies are compliant, but they are not secure. And they don't care: legally, they can't be sued for non-compliance.
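To see why this reasoning wins so often, here is a minimal sketch in Python of the classic annualized loss expectancy (ALE) comparison; every figure is invented for illustration.

    # Toy annualized-loss-expectancy (ALE) comparison; every figure is invented.
    def ale(single_loss_expectancy, annual_rate_of_occurrence):
        # ALE = SLE x ARO: the expected yearly cost of an incident.
        return single_loss_expectancy * annual_rate_of_occurrence

    breach_ale = ale(single_loss_expectancy=5_000_000,   # cost of one breach
                     annual_rate_of_occurrence=0.05)     # one every 20 years
    extra_controls_cost = 150_000  # yearly cost of going beyond the minimum

    print(f"Expected yearly breach cost: ${breach_ale:,.0f}")   # $250,000
    print(f"Extra controls, per year:    ${extra_controls_cost:,.0f}")
    # Even when the expected loss exceeds the control cost on paper, the
    # certain expense is the one that gets cut: it is real, the other is not.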

"Incompetence" of key players

And by "incompetence" I mean "ignorance", "carelessness" and "lack of understanding". As a worker in the Information Security field, I have had my share of recommendations ignored or bypassed. When a reason was provided - a.k.a my advice didn't fall into a MBH (Management Black Hole) - it was from time to time a good reason, from time to time a bad one.

As I am writing this article, I am also playing wargames. Back in the day, to get anywhere near a company's computer, you had to have at least an M.Sc. in math or something related. There was a "barrier" to cross before you could even lay your hands on a keyboard. And before you were allowed to touch it, you had to undergo training on the tools you were about to use.

Nowadays, anyone who can sign a piece of paper can own a computer. However, that doesn't mean they understand everything that goes with it. The equivalent would be to hand your car keys to any random person on your street and hope he or she won't cause an accident or wreck the car.

"Brand engineering"

IT professionals have all heard slogans such as "If you buy XYZ, you can't go wrong." And several brands play on this: they market products that will ensure your compliance with any given standard. However, this is generic and covers only what is in the tests.

An example: I have used Rapid7's Nexpose in various situations. This is a very good product - the scanner is super cool and accurate - and it has modules for generating compliance reports. In one instance, I had almost everything up to date, but a few applications (PDF Creator, Notepad++ and such) were obsolete. Nevertheless, the system came out as compliant: the outdated applications were not part of the automated tests.
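This kind of blind spot is easy to cover with a small check of your own on top of the scanner's report. A minimal sketch, assuming a hypothetical inventory of installed applications; the application names and minimum versions below are invented for the example.

    # Supplemental check for applications a compliance scan does not test.
    # The inventory and the minimum versions are hypothetical examples.
    MINIMUM_VERSIONS = {
        "pdfcreator": (1, 7, 3),
        "notepad++": (6, 2, 3),
    }

    def parse_version(text):
        # Turn "6.2.3" into (6, 2, 3) for a simple tuple comparison.
        return tuple(int(part) for part in text.split("."))

    def outdated(inventory):
        # Yield (app, installed_version) pairs below the required minimum.
        for app, installed in inventory.items():
            minimum = MINIMUM_VERSIONS.get(app.lower())
            if minimum and parse_version(installed) < minimum:
                yield app, installed

    installed_apps = {"PDFCreator": "1.2.0", "Notepad++": "6.2.3"}
    for app, version in outdated(installed_apps):
        print(f"{app} {version} is below the minimum - flag it, the scanner won't")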

"Brand engineering" is usually a substitute for people who don't understand fully and in depth the domain they are working on: not understanding the OS part of a system and blindly using Microsoft products, not understanding the storage portion and blindly ordering EMC parts, not understanding the networking aspect and blindly ordering Cisco components. All these vendors want you to think that by buying their products, you can skimp on IT personnel and hire less qualified people.

While this can work for a while, as soon as you get into the specifics, you are stuck with something that may or may not do what you need, that costs a lot of money and that has only a generic approach to your problem. And with people you can't rely on to find the solutions to your conundrums.

Lack of incentives

In life, everything is about the balance of power: force and counterforce. If an individual or an organization can impose a decision without suffering the consequences or bearing any liability for it, the risk is a "they will get used to it" mindset.

In this case, there is no real incentive beyond the "damage to the brand reputation": if you are PCI DSS compliant and your credit card records get compromised, from a legal standpoint you are golden. However, for the people affected by the breach, the situation is not as comfortable.

Instead of merely imposing a set of minimum requirements, PCI DSS and the other regulations should say "for each record compromised, a fine of $10K will be imposed, with no cap": 10 records compromised? That's $100K. 5,000? That's $50M.
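Such a deterrent scales by itself. A trivial sketch of that hypothetical fine schedule:

    # Hypothetical per-record fine schedule, as proposed above.
    FINE_PER_RECORD = 10_000

    def breach_fine(records_compromised):
        # Uncapped: the fine grows linearly with the number of records exposed.
        return records_compromised * FINE_PER_RECORD

    print(f"${breach_fine(10):,}")     # $100,000
    print(f"${breach_fine(5_000):,}")  # $50,000,000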

The lack of incentive here is that past the initial minimal requirements, there is currently no real way of forcing corporations and organizations to behave when it comes to managing PII.

It seemed like a good idea ...


There are a number of ideas that seem good on paper, but that quickly turn into a security and privacy nightmare, especially when it comes to PII.

  1. Everybody needs a laptop

In a few positions, I actually refused to get a laptop and asked for a desktop instead. The truth is, companies are throwing laptops at employees in the hope they will be more productive and work from home, but:

  • Laptops get lost or stolen;
  • Laptops require whole disk encryption;
  • Laptops are slower than same-price desktops.


Laptops get lost or stolen ...

... and there is a cost associated with that. A report from the Ponemon Institute sets the price tag for a stolen laptop at around $50K. That's a hefty sum to pay for machines that people may or may not even use at home.

Let's do some math: a company has 300 employees equipped with laptops. Of these, 10% will be stolen during their active life - usually around 3 years. That means 30 laptops will go MIA.

Each loss will cost around $50K; over 3 years, that represents $1.5M, or $500K per year.
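The same back-of-the-envelope math as a sketch, using the assumptions stated above:

    # Back-of-the-envelope cost of laptop losses, figures from the text.
    fleet_size = 300        # employees equipped with laptops
    loss_rate = 0.10        # fraction lost or stolen over a laptop's life
    lifetime_years = 3      # typical active life
    cost_per_loss = 50_000  # Ponemon Institute's per-incident estimate

    lost_laptops = fleet_size * loss_rate      # 30 units MIA
    total_cost = lost_laptops * cost_per_loss  # $1.5M over 3 years
    print(f"${total_cost:,.0f} over {lifetime_years} years, "
          f"${total_cost / lifetime_years:,.0f} per year")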

Laptops require whole disk encryption ...

... and people who understand how to use it correctly. There is no doubt that disk encryption protects data at rest. I simply contend that (a) laptops are not the best candidates for it and (b) people usually don't understand what it entails.

WDE (Whole Disk Encryption) or FDE (Full Disk Encryption) is the process by which a computer has its permanent storage encrypted, to prevent an attacker from extracting data should he get hold of the physical storage or a copy of it.

For a laptop, partial encryption is pointless: other parts of the OS can contain information associated with the protected data, such as temporary files, or even the decryption keys.
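To make the temporary-file point concrete, here is a toy sketch - not actual WDE, just symmetric encryption of one document with the third-party cryptography package - showing how an application can leave plaintext outside the protected area:

    # Toy illustration: encrypting the document is useless if the application
    # scribbles a plaintext copy into an unencrypted temp directory.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    import tempfile
    from cryptography.fernet import Fernet

    secret = b"SSN: 078-05-1120"  # a famously fake social security number

    # The "protected" part: the document itself is properly encrypted.
    key = Fernet.generate_key()
    encrypted_document = Fernet(key).encrypt(secret)

    # Meanwhile, the editor's autosave lands in plain /tmp, outside the
    # encrypted area - exactly the leak partial encryption cannot prevent.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".autosave") as tmp:
        tmp.write(secret)
        print(f"Plaintext copy left behind at: {tmp.name}")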

However, there are a few issues with WDE/FDE:

  • It slows down the computer: try it yourself - do a disk-intensive task on a computer whose storage is completely encrypted;
  • There may be bugs in the code that make recovery possible: yes, if there is a bug in the implementation, then your data can be recovered;
  • There exist attacks that circumvent W/FDE: ever heard of the Evil Maid attack Bruce Schneier wrote about?
  • There is a difference between shutdown, hibernate and sleep modes: most of the people I know, even IT people, don't realize that a laptop in sleep mode WILL NOT require the keys when waking up - hibernate and shutdown do (see the sketch below).
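A toy model of that last point, assuming a typical software-FDE setup where the volume key is held in RAM while the machine runs; this is an illustration, not an OS API.

    # Toy model: what happens to the FDE volume key across power transitions.
    # Sleep keeps RAM powered, so the key survives and no passphrase is asked.
    POWER_TRANSITIONS = {
        "sleep (suspend to RAM)": True,        # RAM stays powered
        "hibernate (suspend to disk)": False,  # RAM is powered off
        "shutdown": False,
    }

    for mode, ram_stays_powered in POWER_TRANSITIONS.items():
        key_survives = ram_stays_powered  # the volume key lives in RAM only
        prompt = "no passphrase on wake" if key_survives else "passphrase required"
        risk = " <- effectively unlocked if stolen" if key_survives else ""
        print(f"{mode:>28}: {prompt}{risk}")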

Laptops are slower than same-price desktops ...

... and frankly, I am more productive on a machine that doesn't start swapping when I have 5 windows open. Especially if the pagefile sits on a WDE disk ...



Instead of providing laptops to everybody, provide a good remote access solution and/or a "thin client" solution. For instance, the internal workstations can run Linux for the mundane tasks (surfing the web, doing heavy calculations) with access to a Windows Terminal Server of some form for the other tasks (e-mail and so forth). From the outside, a secure access to the same infrastructure gives people the same desktop inside and outside the company, without the need for a laptop.

Other solutions are of course possible.

  2. The BYOD craze

It is tempting, for a user, to request or even demand access to the company from the latest iToy or the shiny new laptop "that is way faster than the usual crap IT provides me with." And often, accounting will look at it with a "great, we don't have to pay for that" look on its face.

What are the problems with that? Several, actually. Let's mention only a few.

If you want to work on a personal device, you need to move some of the data to the device. This means it is only a matter of time before confidential or PII-bearing data is present on the device, moving in and out of the company.

And then? 

First, can you guarantee that the data is encrypted? Most likely not. You could install a W/FDE product on the personal laptop, or some encryption tool on the device, but that will probably slow the device down to the point where the user says "no more."

Second, how do you provide support for these devices? If a user is experiencing a problem with his personal device, does the IT department have to help? If so, how do you make sure your IT department has the knowledge to answer any question about any kind of device known to man?

Third - a big one - company laptops usually come with some security package (AV, firewalls and so forth). How do you guarantee that all personal devices provide the same level of trust?

Lastly comes the question: "User XYZ, who used his personal device, is no longer with the company. How do we guarantee he deleted all the company data?" Remote wipe? In some cases this is a no-go: you may erase the user's personal data at the same time. He accepted it in the BYOD clause of his contract? Are you sure it is airtight? And there is no backup at home? What if the device in question is a laptop that is not remotely managed by the company IT department? There is no way you can be sure the user got rid of everything.

As a result, and in my mind, BYOD is a big no-no for companies dealing with PII, for the risk outweighs the benefits by far.

  3. Delegation of IT knowledge/know-how

A typical one: someone thinks it would save money to reduce the IT team to the bare minimum and outsource the engineering and implementation to an integrator. This is often a bad plan: by losing the knowledge and know-how, a company puts itself in a position where the integrator is the only one able to make any change, and the understanding of the security implications can be lost as well.

Understanding the security consequences of a change is paramount for a company that deals with confidential information such as PII, credit cards or medical records. Without that understanding, there can be no educated decision on whether a change is a good idea or not.

  4. Granting exceptions ...

OK, I will make some enemies here. One of the biggest problems with security is ... exceptions. When someone asks to be exempted from a particular rule, there is always a risk of abuse. Another big risk with exceptions is the "snowball" effect: someone gets an exemption, then someone else says "hey, me too then", and before you know it, your systems are teeming with "special cases" and "exceptions".

Most of the time, the exception exists because someone didn't like the constraint and would rather have their convenience than keep the system secure.

Most companies then opt for a complex mechanism requiring an exec or the CSO to approve these exception requests. Unfortunately, in practice it doesn't work that well: since the exceptions are most often requested by executive management itself, the approval mechanism is either bypassed or biased, and the perception becomes that the mechanism is just a long way to get to "yes" anyway.
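If exceptions are unavoidable, at least make them expire and keep them auditable. A minimal sketch of what an exception registry could look like; the fields and the 90-day review period are my own assumptions, not any standard.

    # Hypothetical exception registry: every exception gets an owner, an
    # approver and an expiry date, so "temporary" never silently becomes forever.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class PolicyException:
        rule: str      # the control being waived
        owner: str     # who benefits from the exception
        approver: str  # who signed off (and owns the residual risk)
        granted: date
        review_after_days: int = 90

        def expired(self, today):
            return today > self.granted + timedelta(days=self.review_after_days)

    registry = [
        PolicyException("no-local-admin", "j.doe", "CSO", date(2013, 1, 7)),
    ]

    for exc in registry:
        if exc.expired(date.today()):
            print(f"Exception on '{exc.rule}' for {exc.owner} is up for review "
                  f"(approved by {exc.approver})")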

The issue arises when a problem happens: if it is due to an approved exception, who is responsible? Who will pay for the damages? Who will perform the remediation work? Is any individual responsibility engaged?


