Wednesday, January 30, 2013

18.781 Theory of Numbers - MIT OpenCourseWare

MIT has released an introductory course on number theory as course 18.781 on OpenCourseWare.

Monday, January 28, 2013

Red October

Another story that could have been a plot for a James Bond movie ... if it were not in real life.

I saw it on Slashdot: researchers at Kaspersky Lab have uncovered a very powerful piece of malware, apparently designed and engineered to act as a virtual spy and collect sensitive and very sensitive data: once it infects a host, besides trying to replicate across the internal networks, it slowly reports back all the office documents, plus a number of other formats commonly used by NATO.

Was it developed by a nation-state? Too early to tell, but the researchers note at least Chinese and Russian fingerprints in it. Another possibility is that this is a product of the criminal underground, and that all the collected information is to be sold to the highest bidder.

While it seems to target official entities in Eastern Europe, it was also found in agencies around the globe, as shown in the following map.


The malware uses at least three exploits: CVE-2009-3129 (buffer overflow in Excel when opening a crafted spreadsheet), CVE-2010-3333 (buffer overflow in several Office components when processing RTF data) and CVE-2012-0158 (buffer overflow in a Windows component when processing tree views). I said "at least" because the malware is highly modular: there is a chance additional payloads were delivered to exploit specific situations.

The initial entry is a classic social engineering technique in which the victim is tricked into opening a file. The file exploits one of the three vulnerabilities mentioned above to execute code on the victim's computer. This first phase completes with a connection to a Command & Control ("C&C") server, which provides additional orders and modules.

Phase 2 is when the spy work is done: removable media are searched, contact lists are downloaded from connected phones, Windows Mobile phones are infected, survival mechanisms are put in place, and so forth. A lot of information is captured at this time, some of which can be used to further infect the local network, for instance Windows cached credentials or database credentials. The malware also tries to exploit hosts directly using MS08-067. If admin credentials are found, it will use them to propagate as well.

Kaspersky's researchers have classified the modules into 10 groups: recon, password, email, USB drive, keyboard, persistence, spreading, mobile, exfiltration and USB infection. A second paper from Kaspersky's researchers describes each module they found. The description of each module is very interesting, and a lot of information scattered across the document can be used to generate IDS/IPS signatures.
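As a minimal illustration, assuming you have extracted C&C host names from the published reports, even a few lines of Python can sweep web-proxy logs for them after the fact. The host names below are placeholders, not actual Red October indicators:

```python
# Minimal sketch: sweep a web-proxy log for known C&C host names.
# The host names below are PLACEHOLDERS, not actual Red October
# indicators -- real ones should be taken from the published reports.
SUSPECT_HOSTS = {"cc-server.example.com", "dll-host.example.net"}

def find_hits(log_lines):
    """Return (line_number, host) pairs for lines matching an indicator."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for host in SUSPECT_HOSTS:
            if host in line:
                hits.append((n, host))
    return hits

log = [
    "10.0.0.5 GET http://www.example.org/index.html",
    "10.0.0.7 POST http://cc-server.example.com/cgi-bin/ms/check",
]
print(find_hits(log))   # -> [(2, 'cc-server.example.com')]
```

A proper IDS/IPS signature (e.g. a Snort or Suricata content match on the Host header) would do the same job inline rather than after the fact.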

Friday, January 25, 2013

Japan Cyber Cops Cuff Evil Cat

This sounds like the plot for the next James Bond movie: a cyber villain, a coded message and the police chasing a piece of malware. Except this took place in Japan, in the real world.

Japanese news organizations received e-mails promising big scoops and containing several quizzes. The police took over, and later had to acknowledge that it mistreated a few suspects in an e-mail threats case concerning different venues, a few of which are frequented by the Imperial Family.

After climbing to a mountain top, the cyber coppers finally captured a cat bearing a drive with a copy of (apparently) the iesys.exe virus. Dubbed the "remote control virus", it allows - as is the case with much malware - an attacker to seize control of a machine and use it as a proxy to send network traffic, for instance threatening e-mails, surfing illegal websites or performing DDoS attacks.

In the process, it seems the Japanese police coerced some suspects into confessing to the e-mail threats. This led to an embarrassing snafu and to the police presenting apologies to four different people. After the incidents and the apologies, the police agreed that it has to improve its cybercrime procedures.

Link to the press releases: here and in the Register.




Friday, January 18, 2013

Scientists Extend Einstein’s Relativity to the Universe’s First Moments

When one tries to apply Einstein's equations to very small spaces or times, the whole framework breaks down and makes weird predictions. So, for the last few decades, astrophysicists have been looking to unify Einstein's theory with quantum theory.

About 30 years ago, Ashtekar managed to do so and made Einstein's equations "quantum certified." A number of surprising results came out of it. One of them is that the early universe was made of discrete cells that could assume multiple simultaneous states. Another prediction is that the Big Bang was rather a Big Bounce of a previous universe that had endured a Big Crunch!

The theory also predicts that there should be observable imprints. If found, they would confirm Loop Quantum Gravity, and possibly pose other challenges to astrophysicists: what was there before? How did the whole universal cycle start? Will it end?








Wednesday, January 16, 2013

Largest structure challenges Einstein's smooth cosmos

Clowes and his team have discovered a group of quasars that challenges Einstein's large-scale homogeneity principle. This principle is simply the statement that, at large scales, the universe looks homogeneous.

However, going through the Sloan Digital Sky Survey, he and his team found a group of quasars that is above the limit generally admitted to be "large scale", meaning that in order to keep the principle valid, its lower limit has to be increased dramatically. Which also means that astrophysics just got a bit harder ;-)

Here are the links to the article in the New Scientist and the published paper on arXiv.




Monday, January 14, 2013

Friday, January 11, 2013

Potty-mouthed Watson!

Do you remember Watson, IBM's super AI which beat two human champions at Jeopardy!? Well, it seems that an experimenter played Frankenstein and let the beast awaken after a visit to Urban Dictionary. A now potty-mouthed Watson had to be toned down and restrained from going on cursing sprees. Other references here and here.

Maybe a new AI test is needed: after the Turing test, let's have the Tourette test!


Wednesday, January 9, 2013

Post Exploitation - Discovering Network Information in Windows

For those who don't know it already, Rapid7's Metasploit is a framework for penetration testing. It includes hundreds of exploits and several modules for connecting to compromised hosts. It also features post-exploitation modules, such as hash dumping or cached-information retrieval.

Here is a cool article on post-exploitation network discovery in Windows. It explains how to gather network information, discover potential neighboring hosts and so forth. It only scratches the surface, but it opens the way for more discoveries.
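To give a flavor of the idea outside the framework: once you can execute code on a host, even the output of built-in commands such as `arp -a` reveals live neighbors. A minimal Python sketch (the sample output below is made up for illustration; a real post-exploitation module would run the command on the compromised host):

```python
import re

# Illustrative sample of Windows "arp -a" output.
SAMPLE_ARP = """\
Interface: 192.168.1.10 --- 0xb
  Internet Address      Physical Address      Type
  192.168.1.1           00-11-22-33-44-55     dynamic
  192.168.1.42          66-77-88-99-aa-bb     dynamic
  224.0.0.22            01-00-5e-00-00-16     static
"""

def neighbors(arp_output):
    """Extract dynamic ARP entries: these are live hosts worth probing."""
    pat = re.compile(r"^\s*(\d+\.\d+\.\d+\.\d+)\s+\S+\s+dynamic", re.M)
    return pat.findall(arp_output)

print(neighbors(SAMPLE_ARP))   # -> ['192.168.1.1', '192.168.1.42']
```

The multicast entry (224.0.0.22) is marked "static" and correctly ignored; only the dynamic entries correspond to hosts the machine actually talked to.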



People interested can also look at these books (on Amazon): Metasploit, Metasploit Penetration Testing Cookbook and Metasploit Toolkit for Penetration Testing, Exploit Development, and Vulnerability Research. I own the first one and I recommend it to anyone interested in a crash course.



Monday, January 7, 2013

They’re guilty of ID theft, but don’t ask the government how/where they got the personal info?

An article on DataBreaches.net caught my eye: the story of one Travonn X. Russell, whose residence was raided and searched, and in which all the material needed for ID theft was found: Social Security numbers, addresses, bank information and so forth.

The author comments and asks why the source of the material has not been mentioned.

While this is a totally legit question, I think the answer is quite obvious: there may be a parallel investigation concerning the source of the material, possibly a larger data breach. Releasing any information about it may be illegal, and could also compromise months of painstaking, methodical work: the information can vanish quickly, the relevant systems may be outside the investigators' jurisdiction, and the theft may actually have gone undetected for a long time.

At the core of the problem is this question: "How can we force PII holders to act responsibly?" There is a ton of regulations and legal compliance rules, such as PCI DSS, HIPAA/HITECH and various US state laws, to mention but a few; yet companies are frequently found in violation of these, TJX being the most famous case. In addition, the Heartland case proved that PCI DSS compliance is not enough. I tend to agree: PCI DSS and the other regulations set the bare minimum requirements; each industry is supposed to go the extra "relevant" mile. As it was nicely put:



"Compliance is not Security Ensurance."



However, for a lot of companies, being just compliant is the target, and any additional effort is not taken. The reasons are legion:
  • Cost of additional measures;
  • "Incompetence" of key players;
  • "Brand engineering";
  • Lack of incentives.

Cost of additional measures

Measures that go above and beyond the bare minimum may or may not incur a cost, but the perception is that they always will. As that cost is seen as real, as opposed to the cost of an incident, which is merely potential, a number of companies will not engage in providing additional security.

As a result, these companies are compliant, but they are not secure. And they don't care: legally, they can't be sued for non-compliance.

"Incompetence" of key players

And by "incompetence" I mean "ignorance", "carelessness" and "lack of understanding". As a worker in the information security field, I have had my share of recommendations ignored or bypassed. When a reason was provided - i.e. my advice didn't fall into an MBH (Management Black Hole) - it was sometimes a good reason, sometimes a bad one.

As I am writing this article, I am also playing WarGames. Back in the day, to approach a company's computer you had to have at least an M.Sc. in math or something related. There was a "barrier" to cross before you could even lay your hands on a keyboard. And before being allowed to touch it, you had to undergo training on the tools you were about to use.

Nowadays, anyone who can sign a piece of paper can own a computer. However, that doesn't mean that they understand everything that goes with it. The equivalent would be to hand your car keys to any random person on your street and hope he or she won't cause an accident or break the car down.

"Brand engineering"

IT professionals have all heard slogans such as "If you buy XYZ, you can't go wrong." Several brands play on this: they market products that will ensure your compliance with any given standard. However, this is generic and covers only what is in the tests.

An example: I have used Rapid7's Nexpose in various situations. It is a very good product; the scanner is accurate, and it has modules for generating compliance reports. In one instance, I had almost everything up to date, but a few applications (PDF Creator, Notepad++ and such) were obsolete. Nevertheless, the system came out as compliant: the outdated applications were not part of the automated tests.

"Brand engineering" is usually a substitute for people who don't fully understand the domain they are working in: not understanding the OS part of a system and blindly buying Microsoft products, not understanding the storage portion and blindly ordering EMC parts, not understanding the networking aspect and blindly ordering Cisco components. All these vendors want you to think that by buying their products, you can skimp on IT personnel and hire less qualified people.

While this can work for a while, as soon as you get into the specifics, you are stuck with something that may or may not do what you need, that costs a lot of money and that has only a generic approach to your problem. And with people you can't rely on to find the solutions to your conundrums.

Lack of incentives

In life, everything is about the balance of powers: force and counterforce. If an individual or an organization can impose a decision for which it suffers no consequences and bears no liability, the risk is a "they will get used to it" kind of mindset.

In this case, there is no real incentive beyond the "damage to the brand reputation": if you are PCI-DSS compliant and your credit card records got compromised, from a legal standpoint you are golden. However, for the people affected by the breach, the situation is not as comfortable.

Instead of merely imposing a set of minimum requirements, PCI DSS and the other regulations should say "for each record compromised, a fine of $10K will be imposed, with no cap": 10 records compromised? That's $100K. 5,000? That's $50M.

The lack of incentive here is that past the initial minimal requirements, there is currently no real way of forcing corporations and organizations to behave when it comes to managing the PII.

It seemed like a good idea ...


There are a number of ideas that seem to be a good idea, but that quickly fade into a security and privacy nightmare, especially when it comes to PII.

  1. Everybody needs a laptop

In a few positions, I actually refused to get a laptop and asked for a desktop instead. The truth is, companies are throwing laptops at employees in the hope they will be more productive and work from home, but:

  • Laptops get lost or stolen;
  • Laptops require whole disk encryption;
  • Laptops are slower than same-price desktops.


Laptops get lost or stolen ...

... and there is a cost associated with that. A report from the Ponemon Institute sets the price tag for a stolen laptop at around $50K. That's a hefty sum to pay for machines that people may or may not use at home.

Let's do some math: a company has 300 employees equipped with laptops. Of these, 10% will be stolen during their active life - usually around 3 years. That is, 30 laptops will go MIA.

Each loss will cost around $50K; over 3 years, that represents $1.5M, or $500K per year.
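The arithmetic above can be checked in a couple of lines, using the post's figures (300 laptops, 10% loss over a 3-year life, $50K per loss):

```python
# Back-of-the-envelope laptop-loss cost, using the post's figures.
laptops = 300
loss_rate = 0.10          # fraction lost/stolen over the fleet's life
lifetime_years = 3
cost_per_loss = 50_000    # Ponemon Institute estimate, in USD

lost = round(laptops * loss_rate)       # 30 laptops go MIA
total = lost * cost_per_loss            # $1,500,000 over the period
per_year = total // lifetime_years      # $500,000 per year
print(lost, total, per_year)            # -> 30 1500000 500000
```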

Laptops require whole disk encryption ...

... and people who understand how to use it correctly. There is no doubt about using disk encryption to protect data at rest. I simply claim that (a) laptops are not the best candidates for it and (b) people usually don't understand what it entails.

WDE (Whole Disk Encryption) or FDE (Full Disk Encryption) is the process by which a computer will have its permanent storage encrypted to prevent an attacker from being able to extract data should he get the physical storage or a copy of it.

For a laptop, partial encryption is pointless: other parts of the OS can contain information associated with the protected data, such as temporary files, or even the decryption keys.

However, there are a few issues with WDE/FDE:

  • It slows down the computer: try a disk-intensive task on a computer whose storage is completely encrypted;
  • There may be bugs in the code that make recovery possible: if there is a bug in the implementation, then your data can be recovered;
  • There exist attacks that circumvent W/FDE: ever heard of Bruce Schneier's Evil Maid attack?
  • There is a difference between shutdown, hibernate and sleep modes: most of the people I know, even IT people, don't realize that a laptop in sleep mode will NOT require the keys when waking up. Hibernate and shutdown do.
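That last point is the one most often missed, so here is a toy Python model of it - purely illustrative, not a real crypto implementation: the disk key lives in RAM while the machine is unlocked, survives sleep, and disappears on hibernate or shutdown.

```python
# Toy model of W/FDE key handling across power states.
# Purely illustrative: real implementations live in the kernel/firmware.
class EncryptedLaptop:
    def __init__(self, passphrase):
        self._passphrase = passphrase
        self._key_in_ram = None

    def boot(self, passphrase):
        if passphrase == self._passphrase:
            self._key_in_ram = "disk-key"   # unlocked: key sits in RAM

    def sleep(self):
        pass                        # suspend-to-RAM: the key is NOT cleared

    def hibernate(self):
        self._key_in_ram = None     # RAM powers off: the key is gone

    def needs_passphrase(self):
        return self._key_in_ram is None

laptop = EncryptedLaptop("hunter2")
laptop.boot("hunter2")
laptop.sleep()
print(laptop.needs_passphrase())   # False: waking from sleep asks nothing
laptop.hibernate()
print(laptop.needs_passphrase())   # True: resume requires the passphrase
```

This is why a "locked" laptop in a bag, merely asleep, offers far less protection than its owner believes.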

Laptops are slower than same-price desktops ...

... and sincerely, I am more productive on a machine that doesn't start swapping when I have 5 windows open. Especially if the pagefile is on a WDE disk ...



Instead of providing laptops to everybody, provide a good remote access solution and/or a "thin client" solution. For instance, the internal workstations can be equipped with Linux for the local tasks (surfing the web, doing heavy calculations) and with access to a Windows Terminal Server of some form for the other tasks (e-mail and so forth). From the outside, a secure access to the same infrastructure lets people reach the same desktop inside and outside the company, without the need for a laptop.

Other solutions are of course possible.

  2. The BYOD craze
It is tempting, for a user, to request or even demand access to the company from the latest iToy or the shiny new laptop "that is way faster than the usual crap IT provides." And often, accounting will greet the idea with a "great, we don't have to pay for that" look on its face.

What are the problems with that? Several actually. Let's mention only a few. 

If you want to work on a personal device, you need to move some of the data to the device. This means that it is only a matter of time before confidential or PII bearing data is present on the device, and moved in and out of the company. 

And then? 

First, can you guarantee that the data is encrypted? Most likely not. You could install a W/FDE product on the personal laptop, or some encryption tool on the device, but that will probably slow the device down to the point the user says "no more."

Second, how do you provide support for these devices? If the user is experiencing a problem with his personal device, does the IT department have to help? If so, how do you make sure your IT department has the knowledge to answer any question on any kind of device known to man?

Third - a big one - company laptops usually come with a security package (AV, firewall and so forth). How do you guarantee that all personal devices offer the same level of trust?

Lastly comes the question: "user XYZ, who used his personal device, is no longer with the company. How do we guarantee he deleted all the company data?" Remote wipe? In some cases this is a no-no: you may erase the user's personal data at the same time. Did he accept that in the BYOD clause of his contract? Are you sure it is airtight? And is there no backup at home? What if the device in question is a laptop that is not remotely managed by the company IT department? There is no way you can be sure the user got rid of everything.

As a result, and in my mind, BYOD is a big no-no for companies dealing with PII, as the risks outweigh the benefits by far.

  3. Delegation of the IT knowledge/know-how
A typical one: someone thinks it would save money to reduce the IT team to the bare minimum and outsource the engineering and implementation to an integrator. This is often a bad plan: by losing the knowledge and know-how, a company puts itself in a position where the integrator is the only one able to make any change, and the understanding of the security implications can be lost as well.

Understanding the security consequences of a change is paramount to a company that deals with confidential information such as PII, Credit Cards or Medical Records. Without that, there can be no educated decision on whether a change is a good idea or not. 

  4. Granting exceptions ...
OK, I will make some enemies here. One of the biggest problems with security is ... exceptions. When someone asks to be exempted from a particular rule, there is always a risk of abuse. Another big risk with exceptions is the "snowball" effect: someone gets an exemption, then someone else says "hey, me too then", and before you know it, your systems are teeming with "special cases" and "exceptions".

Most of the time, the exception is there because someone didn't like the constraint and would rather have their convenience than a secure system.

Most companies will then opt for a complex mechanism requiring an executive or the CSO to approve these exception requests. Unfortunately, in practice it doesn't work that well: since the exceptions are most often requested by executive management, the approval mechanism is either bypassed or biased, and the perception is that it is just a long way to get a "yes" anyway.

The issue arises when a problem happens: if it is due to an approved exception, who is responsible? Who will pay for the damages? Who will perform the remediation work? Is any individual responsibility engaged?



Friday, January 4, 2013

SANS - Securing the Human

SANS has an interesting section about securing the weakest point in computer security: the user.

For the last few years, attackers have focused on fooling users into clicking on links or executing programs on their computers, either by sending e-mails, leaving USB thumb drives in parking lots or even mailing CDs. Combined with vulnerabilities in common desktop applications such as Adobe Acrobat, Adobe Flash or Oracle Java, this proved to be an optimal process: instead of trying to pry the perimeter open, the attacker has his payload injected directly at the heart of the network. To that, add the "mellow cake" network: hard at the perimeter but gooey inside.

While there is huge room for improvement on many networks (segmentation/segregation of machines, network access control and so forth), securing the human is by far the most efficient way of raising the security level of a network.

Let's make a thought experiment: what if, on your organization's networks, no one were to click on links in e-mails, no one ever connected a USB thumb drive or device to any computer, and surfing were limited to corporate/professional websites? What would the result be? I claim it would lower the risk of compromise by multiple orders of magnitude.

Dedicated to security, SANS has started a series of resources to "secure the human": there is a monthly video, plus various guides and documents. For example, December's is about the seven steps to secure a computer.

Aimed primarily at CSOs and technical security personnel, I think everybody will gain by going there and reading some of the docs.

The worst that could happen is that our security level gets raised.


Wednesday, January 2, 2013

CS191x - Quantum Mechanics and Quantum Computation

BerkeleyX has a few new courses available, among which is an introduction to quantum computation. In a few words, quantum computation is the application of quantum mechanics to computer science; researchers say it may revolutionize the field, especially the way we see NP problems.

The course is scheduled to start on 02/06/2013, the duration is not yet known. Registration is open.

People interested in a textbook can have a look at Yanofsky & Mannucci's excellent Quantum Computing for Computer Scientists.