Monday, April 29, 2013

Spam Part II - The Different Actors




In part I, we went over a few things regarding the landing page and the way the requests flow between the different sites. Let's have a look at the various sites, the IPs and the countries involved.










Name IP Country
----------------------------------------------------------------
ruraltrauma.com 74.208.58.41 US

goodwaystoloseweightsolution.com 176.9.208.109 DE
94.185.80.109 SE
188.165.237.195 FR
46.4.172.234 DE
94.102.52.2 NL
46.4.172.228 DE
201.182.92.85 UY
94.102.55.160 NL
201.182.92.109 UY
5.149.252.21 CA
213.152.160.81 NL
176.9.208.111 DE
78.47.26.67 DE
199.91.174.75 US
199.91.174.74 US
199.91.174.71 US
188.165.237.197 FR
94.185.80.110 SE
201.182.92.110 UY
176.9.208.108 DE
5.149.252.20 CA
176.9.208.125 DE
201.182.92.86 UY
199.91.174.72 US
78.47.26.68 DE
176.9.208.126 DE

traffictrackingsys.com 78.46.196.20 DE

www.clclckck.com 107.23.215.168 US 
(Alias for wehasoffer-elb.go2cloud.org)

wehasoffers.go2cloud.org 50.18.211.52 US

processingordersonline.com 67.215.173.14 US

authenticgarciniacambogia.net 50.28.6.107 US

affiliate.cpavhits.com 67.215.170.92 US

www.mediahub.bz 192.41.78.41 US

offer.my-secure-page.com 199.189.84.137 US



The majority of the servers are hosted in the US and Germany. We can break down the hosting companies for these two countries.


The situation for Germany is interesting, because only 2 providers are involved. Furthermore, the upstream provider for "TT International d.o.o." is ... "Hetzner." It seems highly unlikely that a single hoster/provider is unlucky enough to be the only one to serve the spam. A quick search on Google reports that several of their servers are known to distribute malware. SpamHaus goes even further and lists the two "TT International d.o.o." networks as dirty.

Given the number of hosts returned for "goodwaystoloseweightsolution.com", these could be compromised servers, possibly members of a botnet. Another interesting angle is who registered which name.
The last 3 are US-based registrars. The first one is Ukrainian-based.

The "www.clclckck.com" is load-balanced across various servers on the Amazon EC cloud. The name is resolved with a TTL of 60 seconds between: 107.23.215.129, 54.246.179.176, 50.18.211.52, 107.21.29.227 or 50.241.149.139 are some of the IPs returned.

Conclusion ...

After discussing this with my friend, she told me her account had been hacked into and that she had to change her password. The account was used to send an e-mail to all her contacts.

The front-end site was most likely hacked into, and a script is present that redirects browsers based on the user agent. That way, if a search engine hits the page, it notices neither the redirection nor, ultimately, the spam content.

The number of machines suggests this is not an amateur campaign, but rather that an organization is behind it, and that the system can be extended to accommodate further "products". The Amazon EC2 cloud is leveraged to provide the core redirection, which can ensure persistence should some parts of the chain be cleaned or turned off.

Lastly, it also seems that some of the servers are from less-than-reputable hosting companies known to harbor malware and criminal activities, reinforcing the idea that a criminal organization is behind it.





Friday, April 26, 2013

Analysis of a spam site

Last week, I received an e-mail from a contact of mine. Immediately, I knew something was wrong: the subject was her name between <> and the body was only a single line.

Before 12/2012, it was possible to find where an e-mail sent through Hotmail's web interface came from: the "X-Originating-IP" header contained the IP of the machine used to send it. This is no longer possible, or at least not easily, as Microsoft has decided to replace "X-Originating-IP" with "X-EIP", which contains something that seems to be a hash. If you have more information on this, let me know.
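
As an illustration, pulling that header out of a saved message takes only a couple of lines with Python's email module. This is a generic sketch, not specific to Hotmail, and the file name is made up:

    from email import message_from_binary_file

    # hypothetical file: the raw message saved with full headers
    with open("suspicious.eml", "rb") as fh:
        msg = message_from_binary_file(fh)

    # On recent Hotmail messages this returns None, since the header is gone
    print("X-Originating-IP:", msg.get("X-Originating-IP"))
    print("X-EIP:", msg.get("X-EIP"))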


Warning: do not copy any of the following links into your browser unless you know exactly what you are doing! I have not tested any of them for possible malware. You have been warned!

The single line is actually a link (http://ruraltrauma.com/vvowfjp/xxotv685/ljr9c44/z087l8st/fwmfg). So, let's wget that bad boy.

Without a user-agent, wget doesn't hide its nature. In this case, this is greeted by a 403 code (Forbidden). Interesting. What about changing the user-agent to match a Windows 7 machine with IE9? The corresponding user-agent string is "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)". And yes, this changes the reply! The same test with a Windows XP/IE7 user agent returns the same page.
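
A quick way to reproduce this comparison is to request the same URL with and without a browser-like user agent and look at the status codes. A rough sketch with the Python requests library (again: do not run this unless you know what you are doing; note that requests, like wget, announces itself in its default user agent):

    import requests

    url = "http://ruraltrauma.com/vvowfjp/xxotv685/ljr9c44/z087l8st/fwmfg"
    ie9 = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"

    for label, headers in [("default UA", {}), ("IE9 UA", {"User-Agent": ie9})]:
        # allow_redirects=False so the 403 vs. 302 difference is visible directly
        r = requests.get(url, headers=headers, allow_redirects=False, timeout=10)
        print(label, r.status_code, r.headers.get("Location"))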

The request to "ruraltrauma.com" returns a 302 (Moved) to "http://goodwaystoloseweightsolution.com/indexer.php?a=273446&c=wl_con", which also returns a 302 to "http://goodwaystoloseweightsolution.com/diet/GarciniaCambogiaDiet/".

There are two parameters in the middle URL: a and c. I played a bit and found the following:

  • The "a" parameter seems to be some form of counter, but isn't used to select a specific page: be it there or not, the same pages are returned;
  • Two pages are returned: 'GarciniaCambogiaDiet' and 'GreenCoffeDiet' in alternance;
  • The "c" paramter seems to select the campaign, but I was not able to find another set of sites. Yet.
Code analysis - first landing page

Let's dive into the first page. 

There are a few JavaScript snippets present: one returns the date with the day of the week and the month name, the other one is the usual "you are about to pass on a once in a lifetime opportunity, do you want to reconsider?" type of message box, executed when the user leaves or closes the page.

Most of the body is the usual crap: "facts", "user comments", "leave us a comment" (which is just a decoy, there is no form or script attached to it). Dr. Oz is mentioned in the text:

    196               <h2>Conclusion</h2>
    197 <p><a  href="go.php" target="_blank"><strong>Pure Garcinia Cambogia</strong></a>
    198  is made from HCA the finest 100% Garcinia Cambogia fruits on the
    199 planet. We offer the highest potency Garcinia Cambogia extract available
    200  which meets all of the criteria put forth by Dr. Oz. We are confident
    201 that it will work for you, as it has for so many others.</p>
And there is another mention of the Doctor at the bottom of the page. There is also a YouTube video of Dr. Oz explaining the benefits of the various products being advertised here.

    418 <div id="footer">
    419
    420
    421  <p>
    422
    423 *The Dr. Oz Show is a registered trademark of ZoCo 1, LLC, which is not
    424 affiliated with and does not sponsor or endorse the products or services
    425  of 100% Pure Garcinia Cambogia With Svetol ®. All Rights Reserved.</p>
    426
    427 <p>*Reference on our Web Sites to any publication or service of any
    428 third party by me, domain name, trademark, trade identity, service mark,
    429  trade identity, logo, manufacturer or otherwise does not constitute or
    430 imply its endorsement or recommendation by Company, its parent,
    431 subsidiaries and affiliates.</p>
Yeah, to be on the safe side: let's mention him, but not too much. If you were wondering, the "conclusion" is written using the "clear" class style, while the bottom message uses the "footer" class style. The CSS files show that the conclusion will be really visible, the footer not so much (its text color is barely distinguishable from the white background).

One of the things that is quite impressive is the number of mentions of go.php: no less than 25 references. This is the target of pretty much every link in the file.

There is another PHP file used in an iframe: imp.php.

imp.php

That file is included as an iframe of size 0x0. When requested, it returns a single line, an IMG tag, that requests http://traffictrackingsys.com/imp.ashx?CID=237591&AFID=&SID=, another script. Fuzzing the CID parameter, or even removing it, didn't change the GIF file returned, which is a 1x1 pixel image.

It is apparently known to be used by malware sites.
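
To verify that the returned image really is a single pixel, one can read the dimensions straight from the GIF header (two little-endian 16-bit values at offset 6). A small sketch, again with requests; the CID value is the one observed above:

    import struct
    import requests

    url = "http://traffictrackingsys.com/imp.ashx"
    r = requests.get(url, params={"CID": "237591", "AFID": "", "SID": ""}, timeout=10)
    data = r.content

    if data[:6] in (b"GIF87a", b"GIF89a"):
        width, height = struct.unpack("<HH", data[6:10])
        print("GIF of %dx%d pixel(s), %d bytes" % (width, height, len(data)))
    else:
        print("not a GIF:", data[:16])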

go.php

Getting this file is really interesting due to the number of redirects found:

Connecting to goodwaystoloseweightsolution.com
To http://traffictrackingsys.com/click.ashx?CID=237591&AFID=266107&SID=empty
To http://www.clclckck.com/aff_c?offer_id=48&aff_id=4
To http://wehasoffers.go2cloud.org/aff_c?offer_id=48&aff_id=4
To http://processingordersonline.com/rd/r.php?sid=155&pub=410028&c1=
To http://authenticgarciniacambogia.net/intl/special/?click_id=782623411&c1=&c2=&c3=&AFID=410028&SID=

That is no less than 5 redirects! The presence of parameters on some of them may indicate that the same sites discriminate between different campaigns. More on this later.
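
Such a chain is easy to enumerate programmatically, since requests keeps every intermediate response in r.history when it follows redirects. A hedged sketch (the go.php URL is reconstructed from the landing page path above, and the browser-like user agent is still required):

    import requests

    headers = {"User-Agent": "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"}
    url = "http://goodwaystoloseweightsolution.com/diet/GarciniaCambogiaDiet/go.php"

    r = requests.get(url, headers=headers, timeout=15)
    for hop in r.history:                      # every 3xx answer along the way
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(r.status_code, r.url)                # the final landing page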

The page contains three JavaScript includes, one of which couldn't be found (js/11.js). It also contains a form to order the "goods", with the POST going to https://www.drstation.com/index.php?main_page=two_step_form_processor. Interestingly, the connection is done over HTTPS.

The certificate information is valid and gives:


subject=/OU=Domain Control Validated/CN=www.drstation.com
issuer=/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certificates.godaddy.com/repository/CN=Go Daddy Secure Certification Authority/serialNumber=07969287
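
The same details can be pulled without a browser, for instance with Python's ssl module. A rough sketch; the fields come back as tuples rather than the openssl one-liners shown above:

    import socket
    import ssl

    host = "www.drstation.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()

    print("subject:", cert["subject"])
    print("issuer:", cert["issuer"])
    print("valid until:", cert["notAfter"])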

The site www.drstation.com is running the LimeLight CRM. I don't know whether this usage is "normal" for that site or whether it was compromised.

For this branch, no further analysis.

clclckck.com

This site is interesting. It sits in the path when getting the go.php file, and it takes two parameters: offer_id and aff_id (affiliate?). Let's play a little with these parameters.


offer_id = 10    aff_id = 1               <Nothing>
offer_id = 20    aff_id = 1               Green Coffee
offer_id = 30    aff_id = 1               Green Coffee
offer_id = 36    aff_id = 1               Green Coffee Beans
offer_id = 38    aff_id = 1               Green Coffee Beans
offer_id = 40    aff_id = 1               <Redirect to www.puresaffronslims.com>
offer_id = 44    aff_id = 1               Green Coffee
offer_id = 45    aff_id = 1               <Redirect to iluv.clickbooth.com>
offer_id = 46    aff_id = 1               Garcinia Cambogia
offer_id = 47    aff_id = 1               <Nothing>
offer_id = 48    aff_id = 1               Garcinia Cambogia
offer_id = 48    aff_id = 2               <Nothing>
offer_id = 48    aff_id = 3               Garcinia Cambogia
offer_id = 48    aff_id = 4               Garcinia Cambogia
offer_id = 49    aff_id = 1               <Nothing>
offer_id = 50    aff_id = 1               Green Coffee
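
Producing that kind of table is just a matter of looping over the two parameters and classifying what comes back. A hedged sketch of how it could be scripted; the keyword-based classification is my own shortcut, and the sleep is there to stay polite:

    import time
    import requests

    base = "http://www.clclckck.com/aff_c"

    for offer_id in range(10, 51):
        r = requests.get(base, params={"offer_id": offer_id, "aff_id": 1},
                         allow_redirects=False, timeout=10)
        if r.status_code in (301, 302):
            result = "redirect to %s" % r.headers.get("Location")
        elif "Garcinia" in r.text:
            result = "Garcinia Cambogia"
        elif "Coffee" in r.text:
            result = "Green Coffee"
        elif not r.text.strip():
            result = "<nothing>"
        else:
            result = "other"
        print(offer_id, result)
        time.sleep(2)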

Other random values return one of the following: nothing, a redirect, 'Green Coffee', 'Green Coffee Beans' or 'Garcinia Cambogia'. Here is a visual representation of the paths taken (redirects, POSTs or clicks):



There is a constant: the payment/ordering site usually posts to "www.drstation.com."

Next: the different actors.

Wednesday, April 24, 2013

It is time to patch ... again!

A recent article on net-security reports that, according to a study from the Finnish security giant F-Secure, 87% of corporate computers are not adequately patched, which represents a huge security risk.

I can't say I am surprised: I have seen more than my share of corporate computers lacking some updates. Not to say SEVERAL updates. And more often than not: critical updates, things for which an exploit is available and used in the wild.

Dealing with a large number of vulnerabilities is difficult: with time, patches may depend upon patches, or pieces that seem to be independent may become interdependent. If you have helped a friend install updates or if you are yourself working in this field, you have probably experienced the dreaded "one more reboot and we should be okay." Which usually ends up needing another 4 hours of work and several reboots. If you multiply that by two hundred (or two thousand) users, some of them having a quasi-religious repulsion to rebooting, you understand why the situation can deteriorate really quickly.

An indicator I frequently use is the average vulnerability (publication) age: it is based on the CVE IDs and gives an idea of "how badly the patching effort is lagging." This is coupled with the publication year distribution to give an idea of whether things are slipping.
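
As an illustration, the indicator is trivial to compute once a scan has produced the list of missing patches with their CVE IDs, since the ID embeds the publication year. A small sketch with made-up data:

    from collections import Counter
    from datetime import date

    # hypothetical scanner output: CVE IDs of the vulnerabilities still present
    missing = ["CVE-2011-3402", "CVE-2012-0158", "CVE-2011-0611", "CVE-2013-0640"]

    years = [int(cve.split("-")[1]) for cve in missing]
    this_year = date.today().year

    avg_age = sum(this_year - y for y in years) / float(len(years))
    print("average vulnerability age: %.1f years" % avg_age)
    print("publication year distribution:", dict(Counter(years)))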


That kind of visual representation usually helps explain why a situation is bad: it is better than dropping an A-bomb of "you have more than a hundred thousand vulnerabilities that are 2 years old or more", and at the same time it gives an idea of where the issue is. In this case, there is a spike for 2011, closely followed by 2012, which can be a lead to start investigating why things went wrong: did someone leave at that time? Was there a massive OS upgrade? Did the tool used to distribute updates run out of license?

While explaining the gravity of that kind of situation is easy and usually well understood, problems start when it is time to remediate! I have heard excuses ranging from "we can't because it will break our main application" to "but surely we can't connect to each and every system to fix it, can we?", the middle ground being "we are looking at implementing a tool that will automate the deployment of the patches." Of course, what is not mentioned is that during the time it takes to evaluate the tool, implement it, configure it and start getting something decent out of it, nearly a year goes by during which no problem is addressed. And at best, new vulnerabilities pile up and make the situation worse.

Is there a way out? Yes: the most important thing is to have a documented patch policy that permits the IT team to implement fixes as they are needed. While a cyclic patch program is okay, there is a need for out-of-band operations, such as urgent patches (Adobe Flash anyone?) or fixing situations that have deteriorated past a certain point, for example a machine that has vulnerabilities more than 6 months old. This is where good management support is important, to make sure that objections can be dealt with rather than representing a hard stop. Or leading to exemptions and free passes being granted.

The server situation is a bit more complex: given that the team that deploys the patches is the same one that will have to fix the mess should something go wrong, there is a natural tendency to leave the servers untouched if the fix doesn't bring anything needed. But given that all modern virtualization solutions offer one way or another to replicate and snapshot the servers, there is no excuse for not testing the patches. Again, it is up to management to insist on having results.

Lastly, it is important to have a set of metrics that reflects the reality, so progress can be shown.








Monday, April 22, 2013

Three simple steps to determine risk tolerance (CSOONLINE)

Modern security frameworks base themselves on a determination of the risks an entity's assets are facing, and on how the entity addresses each of them: by suppressing it (the vulnerability, the probability of occurrence or the threat is removed), mitigating it (something is put in place that will lower the risk below a certain threshold or transform it into an acceptable risk) or transferring/delegating it (for example: covering it with an insurance policy).

CSOONLINE has an article about "three simple steps to determine risk tolerance." I personally find it a bit thin and light, but there are a few good pointers in it.

First - If you don't have a formal risk policy / assessment framework, put one in place. Informal methodologies don't work, are inconsistent and mask the issues rather than solve them. This includes defining who can assume the risks, and what the assumptions are.

Second - Categorize risks as enterprise-wide or business-unit-wide, and delegate these risks accordingly.

Third - Document how disputes around risks / delegations are to be solved.


Wednesday, April 17, 2013

Anatomy of a hack

In light of the ongoing WordPress attack campaign, a "souvenir" of an attack against Joomla here.

Wednesday, April 10, 2013

My hosts have Firewall

Firewalls are not something new, even though the technology has evolved since their conception in the late 1980s. Originally simple packet filters, they later gained the concept of "state tracking" (stateful firewalls), then the ability to look into the application protocol, called "deep packet inspection", "application inspection" or "application awareness."

Recently, vendors such as Palo Alto have gone a step further and are now able to recognize applications ("Apps") inside a protocol, for example, the Palo Alto firewalls are able to discern a game from a chat on Facebook.

Firewalls are still a major component of network security, as they separate zones of different security levels, for example "outside", "exposed DMZ" and "inside." However, these network firewalls are effective only when a connection goes from one network to another. Even the so-called "transparent firewalls", which operate at Layer 2, can't protect each and every machine individually.

That's where "Host Firewalls" have a role: instead of deploying a network device in front of each host, why not include the firewall part in the host itself? That way, the machine is protected not only from the other zones, but also from the other computers in the same network! Important? Let's go through a few examples.

Example 1 - Servers in an Exposed DMZ

This may be the most obvious example: several servers are in a DMZ exposed to the Internet. These present some web, file transfer and remote access services to computers on the Internet.

The front-end firewall protects these servers from the Internet and the internal networks by letting only the relevant protocols enter the DMZ, and performing some protocol validations such as restricting the methods used in HTTP, or the commands permitted in a FTP session. The front-end firewall also protects the other networks against the DMZ by letting only the relevant services through: DNS, possibly access to a back-end database and so forth.

Now, what happens if a server is compromised? The attacker will scan the local network to find other hosts, in order to be able to get further inside the network. This can be done either by using some credentials found on the first compromised servers or by exploiting other vulnerabilities. But in order to do that, the attacker has to be able to create connections from the compromised server to the neighboring servers.

If each host in the DMZ has its own firewall, the attacker might be prevented from moving laterally, and even better: provided that blocked accesses are logged, the sysadmins may detect the attempts and find the compromised server.

Example 2 - Workstations in an internal network

Another trivial example, but let's think about it: if you have a well segregated network in which the different departments/business units are separated from each other, what about the communications within a department? Do you think that user A in accounting often has to share documents on her computer with user B in the same department?

Over the last few years, attackers have been increasingly successful at exploiting the human component of computer networks through social engineering. For instance, an employee gets an e-mail with a link or a PDF file and text crafted to incite the user to click or open it. Once this is done, the malware payload executes on the computer.

One of the first steps the attacker will take is to scan nearby machines in order to further compromise the network, which is called "lateral movement." This will ensure the attacker's persistence in the network should the initially infected machine ("machine 0") be cleaned.

This lateral movement can be done in a variety of ways; let's mention two: vulnerability exploitation and account compromise.


  • Vulnerability Exploitation


Nothing new: the attacker will use an internally accessible vulnerability to propagate his/her malware to other machines. Being in the internal network, without much filtering, the possibilities expand to additional protocols such as UPnP, Netbios or RDP.


  • Account Compromise


The attacker will dump the local user database and try to find usernames and passwords. Depending on the policies in place, this can be quite easy. Additionally, Windows stores a cached version of the username/password to allow access to the device in the event the domain controllers are not reachable.

If the attacker is able to find a username/password pair, he/she can login to the systems with the user's privileges.

But the account compromise can also give the attacker access to a remote access solution, if the company has one that doesn't use a different source of authentication, or if the passwords are reused. But that is a discussion for another time.

One of the conditions for the lateral movement to be successful is that the systems on the network accept the connections from neighboring hosts. If the administrators have limited the ability of a non-IT workstation to connect to the other workstations, this will seriously hinder an attacker's ability to spread using exploits or - under some conditions - compromised credentials.

Example 3 - Internal servers

It can also be that the administrators need to protect certain servers from being accessed from certain networks or workstations. Usually, the system admins delegate that task to the network or firewall team. While this works pretty well in practice, a better approach is to duplicate these security steps by having them implemented both at the network and at the host level.

There are many reasons for that, let's mention the top two:

  • Prevention against the failure of a layer
System security is about planning the "what ifs": what if a "permit ip any any" is added by mistake in a network device? At that point, whatever is on the host will protect it against unauthorized communications.

  • Proximity of the rules

The closer the rules are to a host, the more effective they are: implementing a set of rules in a firewall 5 hops away from a server will only protect against what has to cross that firewall, which means none of the networks connected to the intermediate 5 hops will undergo any filtering.

Having the rules directly on the hosts means that everything, including the hosts on the same network segment, can be filtered or permitted.  While it is possible to achieve the same at the network level by using port ACLs or equivalent, applying the same rules at the host can be more efficient, by distributing the filtering load across multiple machines. Again, nothing prevents a duplication of these tasks.

Application filtering

Another reason to implement the filtering on the hosts themselves is the ability to filter not only the ports/protocols but also what application(s) can use them. Coupled with an explicit deny, this can be a powerful tool to limit the possibility for an attacker to compromise a host, or in the event of a compromise, to prevent further activity. 

For example, there is no reason the Netbios services on a Windows machine should be accessed by anything other than the corporate networks. Also, is there any reason Word should be making HTTPS connections to anything other than the MS networks?

Selectively allowing applications outbound or inbound - to listen on a port - is a good way to start detecting when things are getting weird: if you suddenly see Adobe Reader making requests right as it seems to crash, chances are something is wrong.

What's against: "blame the firewall"


Scott Adams summed it up in "Dilbert: Blame the firewall": at the first inconvenience, people will start blaming the firewall for not being able to "work". If we look at the cold facts, it usually means: "your firewall prevented me from accessing my radio/video/game." Only in rare cases is the problem really work-related.

Unfortunately, people are wired that way: they tend to blame anything they don't understand. I actually did the experiment once: I warned the IT team and the whole company that I was about to install an additional firewall to protect our network. My boss was aware: it was a fake announcement. However, the day after the "change", the helpdesk started receiving complaints about non-working connections. And a C-level person even demanded that the new firewall be taken offline!

Another problem that frequently arises is the "exception by level": a "sufficiently privileged" person in a company demanding an exception for the sole reason that ... he has the power to request it. In that scenario, the highest authorities in the company have a strong educational role to play, by letting everyone in the management staff know that even though they could grant themselves the exception, they won't. And neither should anyone else.

And my favorite last one: the "victory by permanent yes". In certain companies, prior to getting an exception, a manager or exec has to approve it. More often than not, the exception is almost guaranteed to be approved, which leads to the user feeling that the procedure is just bureaucratic, and to the technician feeling that there is no point in having a procedure that almost guarantees approval of the request, regardless of how stupid it may be.

Conclusions

A typical network is akin to a mellow cake: a crunchy layer with a gooey inside. Once the perimeter has been breached, not a lot prevents lateral movement. Modern attacks leverage that and target the users inside the network directly: a successful social engineering attempt is all it takes to get inside the network and meet little to no resistance while moving from machine to machine.

Most, if not all, of the tools needed to implement strong security are present in all modern OSes. However, this implies that system administrators start understanding network concepts, which is, in my experience, not an easy thing to achieve. Even more, it requires that management buy into the concept and force all the IT players to be at least proficient in each discipline (system, network, application). And maybe most problematic: management has to stand on the IT/Security side and refuse all changes unless they have a strong business justification, but also make sure that IT/Security is not seen as the problem, but rather as a defense against potential problems.




Monday, April 8, 2013

Peaceful matter-antimatter pairing looks more real

Interesting. This is the first time I have read about Majorana fermions, a very particular kind of fermion which is also its own antiparticle. Scientists at the University of Illinois at Urbana-Champaign are one step closer to showing these exist.

The article in the New Scientist.

Friday, April 5, 2013

IT Pro confession: How I helped in the BIGGEST DDoS OF ALL TIME


Spamhaus is an anti-spam organization specialized in gathering lists of spam sources on the Internet and publishing them. Many mail sites - mine as well, until I moved it to Google - use their service to drop the spam before it even hits the mailbox. So, positive.

Enter another player, CyberBunker, a Dutch company specialized in hosting pretty much anything except "terrorism and child pornography." At least that's what it claims.

Spamhaus volunteers started detecting spam being sent from the CyberBunker servers, and proceeded to blacklist them. That started the beef between the two. You will find the CloudFlare blog entry here, CloudFlare being Spamhaus's hosting company.

CyberBunker clients, instead of doing something technically clever, ran a script-kiddie type DDoS: a massive attack using DNS amplification. As someone put it, that was akin to trying to take a person down by using a machine gun in a crowd.

This was made possible by the large number of DNS servers accepting recursive resolution requests from anywhere on the Internet (open resolvers).
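
Checking whether one of your own name servers is part of the problem is straightforward: from an outside network, ask it to resolve a name it is not authoritative for. A sketch using the dnspython library (older versions use resolver.query() instead of resolve(); the server IP is a placeholder):

    import dns.exception
    import dns.resolver        # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.0.2.53"]      # placeholder: your DNS server, queried from outside
    resolver.timeout = resolver.lifetime = 5

    try:
        answer = resolver.resolve("www.example.com", "A")
        print("open resolver, got:", [rr.to_text() for rr in answer])
    except dns.resolver.NoNameservers:
        print("recursion refused - good")
    except dns.exception.DNSException as exc:
        print("no usable answer - probably fine:", exc)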

Trevor Pott, a sysadmin, had the courage to write a short article for The Register on how he was an unwilling helper of the CyberBunker side: he explains the mistakes he made and how he corrected them. He also shares a few cool tricks, some of which helped limit the damage. Even US-CERT has a document on securing common DNS servers. Oh, and this document was published in 2006 ... the problem is nothing new. Some more information and stats can be found on OpenResolver.

Bottom line:


  • Sysadmins tend to forget about the "small" services that keep the network running: NTP, DNS, DHCP and so forth, and often, these servers are left unpatched or untested;
  • That sucks, but different versions of the same software may have different default configurations! The keyword is "inventory": knowing where each component of your infrastructure runs. And reading the release notes;
  • There is no one-magic-bullet-security-stuff that will solve all the problem: Trevor had a few layers in place that mitigated the damages caused by the misconfiguration. Namely, he limited the bandwidth available for DNS to 500Kb/s. He initially saw a 10Mb/s surge;
  • If you have people like Trevor who understand what they are doing, your network is in good hands.

Wednesday, April 3, 2013

Panorama From Curiosity Rover Brings Mars to Your Computer Screen

The only word to describe this is "wow." A panorama of Mars.

Enjoy the panorama!

Monday, April 1, 2013

Employees deliberately ignore security rules

... or at least that's the perception Information Security teams have in 80% of the cases, according to a survey done by Lieberman.

That number doesn't really surprise me: for one, your usual IS team's perception is, in my view, always a bit biased toward the "users are evil/stupid" side, and second, most employees care about only one thing: getting their job done as quickly as they can.

The main culprit? Convenience! People tend to forget all their security reflexes when two conditions are met: (a) what they are working with is not their own, and (b) bypassing security will be convenient or helpful for them.

As an example of (a): no one with a brain would put a post-it in his/her wallet with the PIN code of the bank card sitting next to it. However, lots of developers don't see a problem in storing the same kind of information (credit card number, CVV2, name, address) in a single, unencrypted table.

And for (b), we probably all know someone who used an HTTPS proxy to bypass a filtering proxy, or who has all the passwords for his/her applications in a spreadsheet called "password".

Being on the IS team is often exhausting and depressing: seeing people do "stupid" things because it's convenient can be really taxing, especially when you don't have good management support, and seeing very stupid things get away without even as much as a slap on the wrist has led a few people I know to reconsider their careers. In that regard, being an IS professional is akin to being a cop: you are celebrated when you help capture the bad guy, but loathed when you stop someone and fine him for not respecting a speed limit.

Employees in a company, like drivers on a road, usually don't consider the policies as applying to them, or justify their acts by stating that the policy is "retarded", that not being able, for example, to surf Facebook over lunchtime from a desk computer is stupid, and why should it be the case. No matter how reasonable the explanation is (risk of having malware coming in, of having confidential information exposed and so forth), the employee will always reply that "this happens to others, I am careful and I pay attention not to do anything insecure."

In that regard, management and executives have a few important roles to play: educate the users by example and show that, even if they could demand an exemption, they don't bypass the security policies; and make sure that there are commensurate reactions to any actions. I am not talking about firing an employee because he/she clicked on a link in an e-mail, but having an appropriate reaction, such as the obligation to follow a security course or to assist in repairing the damage caused. For that last one, I have seen in my previous lives a number of situations where a user would introduce a virus into the network, but would be allowed to go home at 5pm, while IT had to work overnight to clean the systems.

In my view, it is important that this perception of "the users are plotting against us" changes, and it won't as long as the users don't understand that the IS team is not the problem, but rather a help.