A nagging issue in security is evaluating the awareness level of a group or of a single person: does that person know? How well does she understand the concepts? Does she realise what the consequences might be?
Traditionally, this is done through a series of tests and trainings: a company such as PhishMe offers a wide range of phishing tests that lead to trainings or information pages. For example, if a user provides his credentials, he is sent to a video explaining that he may have given cybercriminals access to the corporate network. While there is huge value in doing these tests, they only assess the ratio of people who failed, not the depth of awareness of those who succeeded: a person may pass the test because "providing my e-mail credentials after clicking a link is wrong", yet fail to see that the PDF file attached to the next e-mail is malicious.
In order to evaluate how well security concepts are understood, I suggest using a different scale, IKUC. This stands for
I - Ignore
K - Know
U - Understand
C - Care
Ignore - the base level: the person has no knowledge of the concept. The term may have been heard or read about, but the person can, at best, vaguely formulate it.
Know - the level at which a person can quote a precise definition or explain what the concept is, but it sounds like mechanical regurgitation.
Understand - the person not only knows the definition but can also explain how the concept works.
Care - the goal level: the person understands not only the term or concept, but also the threat and impact that may result. This is the realisation that the term or concept is not merely words, but an actual attack that can affect the person or the company in various ways.
This is a progression: to understand something, you have to know it; to care, you have to understand it. While one may argue that it is possible to care about something that is known but not understood, I think this is inefficient, as it quickly turns into recognising scenarios instead of the broader, underlying concept. An example is "don't click on links in e-mails", which leaves open the possibility of opening attached files or replying to the e-mail with the information the attacker seeks.
By elevating the user from basic knowledge to understanding, not only will the concept be clearer and easier to recognise, but the user will also be able to relate a variety of threats as being really the same "thing." In the long run, this saves the company time and money by not having to develop a scenario for everything.
Caring is the next step: it is the realisation that not only is there a threat, but that the threat has an impact on the person or the firm. That is the realisation that "bad things don't happen at random." This is, for me, "true awareness", and it is summed up in the idiom "once burned, twice shy." However, "cyberburning" can be persistent (think "credit score damage") or even fatal (DigiNotar, Mt. Gox and an article from Fox Business). This is by far the hardest step: as human beings, we tend to downplay the risks or impacts when we want something (either to possess it or as a means to achieve a goal, such as performing one's duty), but to exaggerate the inconvenience of anything that may stand between us and these goals.
Unfortunately, this "magnification of inconvenience" and "downplaying of risks" clouds the step from "Understanding" to "Caring": "if it is inconvenient and not that risky, why should I care?" Sounds familiar? To me, way too much.
A good security awareness program has to address all three states: K, U and C. It has to make sure everyone knows what is being explained (the "K"): if the topic is phishing, does everybody know what phishing is? Can it be defined in a simple way, without resorting to a string of examples? From there, does everybody understand how it works, and is everybody able to recognise such a scenario for what it is?
As I wrote, getting to the C is the hardest part, as it requires going over the ledge formed by the perceptions of "rarity", "lack of danger" and "inconvenience of doing otherwise." It is also by far the most important step. Compare it to a speed limit on a street: we all know what a speed limit is, and most of us understand why a limit may be placed somewhere, but some of us fail to care and simply disregard it. From time to time, this leads to an accident, injuries and possibly death.
I think this is where all the "phishing" companies fail: they focus on bringing people directly to the C state, regardless of their previous state. A more comprehensive process would be to make sure that everyone attending such a training has gone through K and reached the U state before leaping to C.