Monday, December 30, 2013

Using openDNS

One of the main avenues for distributing malware is the Web: an e-mail contains a link, which is clicked and *bam*, the machine is infected. The mechanisms behind that are usually similar across the various strains: the website is accessed, a client-side exploit downloads a piece of malware, which executes and connects to a site to retrieve its instructions or deliver its payload.

A common theme is the use of DNS to resolve a name to an IP address: some names have hundreds of "A" records, corresponding to as many compromised machines. A freshly infected system will try to resolve one of these, then connect and proceed as explained above.

Numerous open source initiatives exist to establish lists of these "bad domains" or "malware domains". The most famous is the Malware Domain List, which offers several formats (CSV, hosts file, ...) that can be used to generate a DNS or proxy black list, firewall rules and so forth.

However, this requires that you have either your own DNS or proxy server, or that your firewall supports an automated way of importing the list. Not always possible. In addition, this covers only malware: there is no categorization. And unless you add more tools, you have little to no visibility on what is dropped.

This is where OpenDNS comes into play. The service is presented as a traditional DNS server, two in fact, which people can use instead of their own servers: as a forwarder in a corporate server, or as the DNS server in a small router or host. Immediately, resolutions for known malware domains are intercepted and return the IP of a server operated by OpenDNS, to inform the user that the attempted resolution was nefarious. This also catches common typos of popular domain names.
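The principle can be sketched in a few lines of Python (a toy illustration with made-up names and a documentation IP address, not OpenDNS's actual implementation): the resolver checks each query against a blocklist and answers with the address of its information page instead of the real record.

```python
# Toy illustration of DNS-level blocking. The domain names and the
# block-page address (an RFC 5737 documentation IP) are made up.
BLOCKLIST = {"malware.example", "phishing.example"}
BLOCK_PAGE_IP = "192.0.2.1"  # hypothetical "this resolution was blocked" server

def resolve(name, upstream):
    """Answer with the block-page IP for known bad names, else ask upstream."""
    if name.lower().rstrip(".") in BLOCKLIST:
        return BLOCK_PAGE_IP
    return upstream(name)

# Stand-in for a real upstream lookup:
upstream = {"example.org": "93.184.216.34"}.get

print(resolve("malware.example", upstream))  # 192.0.2.1
print(resolve("example.org", upstream))      # 93.184.216.34
```

A browser pointed at a blocked name then lands on the informational page instead of the malicious site.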

But the power of OpenDNS really shows when you register for an account - even a free one. You then have access to domain filtering by categories - have you ever wanted to drop all these adware sites? - and to statistics. Note that by default the stat collection is disabled.

The statistics consist of the number and type of resolutions, presented hourly, the number of unique domains resolved, the list of resolved domains and how many times each was requested over the chosen period (one or several days), and the list of blocked domains with the reason for blocking.

The OpenDNS team is constantly implementing new features, and there is an "idea bank" where users can submit proposals or requests, such as filtering based on the geolocation of the returned IP, or more logging and alerting.

But what good does it do if the only thing you can see is that "a machine in your network has attempted to resolve a known bad name"? That's why they have developed an agent to install on the end machines: it forces resolution to go through OpenDNS and provides more information, allowing for the quick identification of affected systems.

OpenDNS also offers other services, such as a web filtering proxy and more.

All in all, this is a really nice service to use. It is not expensive at all and can really complement a security solution by providing an additional filtering layer.

Tuesday, December 24, 2013

UK finally pardons Alan Turing

Alan Turing was a genius, with accomplishments spanning multiple domains such as mathematics, computer science, cryptography and more. Without him, WWII would have taken a whole different turn and may have ended with the Nazis winning. Nothing less.

However, his "crime" was being gay in the UK in the fifties: the UK only made homosexual relations legal in 1967. As such, Alan Turing was forced to undergo hormone therapy to "suppress his urges."

Queen Elizabeth II finally pardoned Alan Turing.

Wednesday, December 4, 2013

Applying Computer Science skills to Medicine and Biology

Interesting story: following her husband's disease, a computer scientist started applying her skills in natural language processing to parse texts and papers, and has drawn some conclusions. You may find her paper on her MIT page.

Monday, November 11, 2013

PCI DSS 3.0 is out

The Payment Card Industry (PCI) Security Standards Council has released version 3.0 of the Data Security Standard (DSS). It can be found in the Documents section.

Version 3.0 brings lots of changes: some controls have been rephrased for clarity, several controls related to policies and operational procedures have been added, and more emphasis is placed on the treatment of vulnerabilities. A summary of the changes can be found here.

Sunday, September 8, 2013

Tokyo to host the 2020 Olympic Summer Games

Congratulations to the Japanese people for getting these games!

Monday, September 2, 2013

Hacked through the mains!

A few months ago, I was confronted with an issue: my wireless network was not powerful enough to reach the far confines of my office, and my desktop computer would periodically lose its connection to my small LAN. To solve this, I went to a nearby computer store and bought a pair of ethernet-to-mains modems from TP-Link, which I used for a few weeks.

As I didn't have a Windows machine at that time, I left the modems in their default configuration. Then, as I had a few disconnection issues and I didn't like the fact that the traffic was not encrypted, I replaced the pair with a very long cable.

Recently, I found them again and decided to play a bit with them. To my surprise, as soon as I plugged the first one in, it picked up a connection. Surprising, as the other one was still in my hand... I decided to plug in a cable and check what network I was connected to.

The router is a Netgear DGND3300B, an interesting model with a few cool features, such as a traffic meter, a built-in shared drive - provided that a USB device is plugged in -, a media server and much more. It also happens that the box ships with a default username/password combination of "admin/password".

This gave me access to my neighbor's router. So basically, I pwned his network: I could have disabled the DHCP server and installed my own in order to play man-in-the-middle (MITM), activated random features or even leeched on his network.

These little powerline modems are cool, but they are also dangerous: as with a wireless connection, it is very difficult to put an exact border on the network once it is extended through the power lines. It falls on the user to make sure the devices are properly configured to only talk to each other, and never to any other device that happens to be reachable.

Wednesday, August 14, 2013

Smartphone Experts notifies customers of hack

That's the usual story: a payment-processing site/application got hacked, customers' data landed in hackers' hands, the company notified its customers. However, there is something that really shocks me:

Although stored customer data were encrypted, Diana Kingree, the Senior Vice President of Commerce, noted that the hacker may have been able to use a decryption feature of the system to view customers’ names, addresses, credit or debit card number, CVV, and card expiration date.
The PCI-DSS Requirements state in point 3.2.2

Do not store the card verification code or value (three-digit or four-digit number printed on the front or back of a payment card) used to verify card-not-present transactions. 
This code is meant to be used when neither the card nor the person executing the transaction is physically present where the payment takes place. It is, to some extent, a password or a PIN. Why do companies still store that CVV code? Beats me.

Storing the CVV defeats its whole purpose: making sure that the person making the payment actually possesses the card. By having it in the same database as the credit card number and expiration date, its role is completely negated.

Friday, August 9, 2013

Chinese Hacking Team Caught Taking Over a Decoy Water Plant

Not a surprise, but still, quite a shock: a security researcher set up a decoy water plant, simulating everything from the workstations to the industrial control systems, and caught some hackers infiltrating his systems. If they got into his decoy, chances are they are already inside real providers, such as energy and telecommunications companies.

The article is here.

Wednesday, August 7, 2013

Researchers develop new method for understanding network connections

This is interesting: a team of researchers at MIT has designed a way to find the underlying network beneath an observed network. This makes it possible to find the direct dependencies in a network, separating out in the process the indirect links, or elements that just "tag along" with other elements.

The paper is here, but behind a paywall.

Monday, August 5, 2013

You’ve got mail: Someone else’s medical test results

Doctors and the Internet don't mix well: here is an article on a few mix-ups involving e-mails and doctors. A reporter for the Boston Globe received several e-mails with medical results for people with names similar to hers.

From this, a possible attack I can think of is to create a number of e-mail addresses with names similar to a target's. There is a chance a misspelling at a doctor's office will land you some results.

Friday, August 2, 2013

FACTBOX - Hacking talks that got axed

The struggle between security researchers and private companies is nothing new: there are numerous examples of researchers or hackers being coerced into not talking about a vulnerability found in a software product, a website or even a hardware product. This also includes not talking at hacker conventions.

There may be various reasons for that, such as "the company doesn't want its reputation to publicly suffer" or "black hats may get the information and turn it to their advantage." There may also be an unsaid reason: some companies develop exploits for these vulnerabilities and sell them to "trustworthy" Governments and Agencies. Examples include VUPEN and the (in)famous FinFisher, part of Gamma Group. Note that the latter doesn't explicitly claim that its products are reserved for those same "trustworthy" Governments and Agencies; as a reminder, a Gamma Group offer was found among torture equipment in Egypt in 2011, when rioters invaded the State Security Investigations Services HQ. And given that private companies do it, there is no reason to believe that governmental agencies around the world don't do the same.

In that light, any publication of any kind of vulnerability is a hindrance: not only may it force the vendor to take action and fix the vulnerability, but it also gives other security researchers a base from which to start looking for ways of detecting or mitigating it.

The argument that "it may help the bad guys" is not entirely valid: the cybercrime world has shown many times it can find vulnerabilities on its own, be it software zero-days or hardware hacks. To believe that a security researcher is the only one looking for vulnerabilities in a given piece of technology is simply unrealistic: if it can lead to money - and most of the time it can - the bad guys will have an interest in it.

That leaves the reputation concern, which may also be a poor excuse. A number of companies, mostly from the Open Source movement, have opted to publicly disclose everything concerning vulnerabilities and breaches. As a result, some have actually gained recognition and the trust of their users, who know what to expect. A real excuse is the cost of fixing a vulnerability: it may take a lot of work, which translates into hard cash for the companies, often for products available for free (think "Adobe Reader", "Adobe Flash" or "Oracle Java", to name a few of the usual suspects.) On the hardware side, it is even worse: even if it is possible to distribute a patch, applying it to millions of cars or door locks is problematic, as the fix may require a specialized technician.

A concept that has developed over the last few years is "responsible disclosure": a discussion between the security researcher who found the vulnerability and the company that makes the affected product. The "responsible" part is that a delay is negotiated before the vulnerability is made public. However, this has slowly been displaced by "vulnerability commercialization": a company, such as iDefense or TippingPoint, pays for any vulnerability discovered (with a proof of concept). The question is: what happens after?

That concept of disclosure is very sensitive: it has been used in the past as a form of blackmail against the affected company, either to have it address the problem quickly or to simply extort money. These companies are no angels either, and have often used the courts to threaten security researchers.

As you see, this is a very difficult topic, and not one I expect to see settled in the near future.

Wednesday, July 31, 2013

Social Engineering as the Biggest Threat to Help Desk Security

From a recent SANS survey, 69% of respondents found that social engineering is the biggest threat to help desk security. Here is a link to the article, and a link to the SANS white paper.

Monday, July 29, 2013

“NASDAQ is owned.” Five men charged in largest financial hack ever

I am not surprised: "NASDAQ is owned" are the words sent in an IM by an Eastern European hacker just after he got administrative credentials for one of NASDAQ's internal networks.

As I said in the past: "there are two types of companies, the ones that have been hacked and the ones that don't know it yet."

Wednesday, July 10, 2013

Free Science Books!

If you like a good science book to accompany you, you will find a lot of free ones here.

Monday, July 8, 2013

NY state will fight fake IDs

An interesting article on initiatives taken by New York State against fake IDs. I disagree with the comment "While the old-school images might seem odd, the new production method and a barrage of features both seen and unseen will make the licenses virtually impossible to forge."

Nothing is impossible to forge. It is just a matter of technical difficulty, cost and time. And making a document "sufficiently close" to the legit version is enough to achieve deception.

Monday, July 1, 2013

AT&T Archives: The UNIX Operating System

A cool video from 1982 featuring some of the big names: Kernighan, Ritchie, and Aho to mention but a few.

Monday, June 17, 2013

Linux Non Root Exploits

Don't expect any earth-shaking revelations or mind-blowing hacks, but this is always a good reminder that even a non-privileged user can do some harm. This is true of any operating system, on any platform: the simple fact of having access to a machine can be enough to cause trouble.

Friday, June 14, 2013

Vast array of medical devices vulnerable to serious hacks, feds warn

The ICS-CERT has issued a warning concerning hard-coded passwords in medical devices. I see a parallel there with the vulnerabilities found in SCADA devices: applications that used to live on disconnected, or even specialized, networks now get a web front-end; developers who spent years focusing on functionality now have to include security; and the devices themselves are not widely distributed.

I suspect that before long, there will be other issues found: buffer overflows, authentication/authorization bypass and other tricks.

Wednesday, June 12, 2013

Hard drive failures

An interesting study of hard drive failures. A good statistics background is needed to fully appreciate it.

Monday, June 10, 2013

Casting a net: phishing and spearphishing

Phishing and spearphishing are terms used almost daily. The former covers the whole family of attacks in which an attacker tries to obtain information from his victims. The attack becomes spearphishing when the attacker has prior knowledge of his targets.

In the generic case, the attacker mass-mails targets without knowing them, with a generic message such as a communication from a bank, a government agency (IRS, FBI, ...), an e-mail or social network provider, or any other entity. The goal is to have the targets either provide some information about themselves - first and last names, e-mail or social network credentials, date of birth, social security or credit card numbers, passport information and so forth - or click on a link that serves malware to compromise their computers. The latter form can be used to install crimeware or a botnet agent, collect personal information, or access bank accounts.

A botnet agent can, in turn, be used to distribute spam or serve as an anonymizing proxy to access illegal content. All the connections will seem to originate from the victim's computer.

In this scenario, the attackers often use spam lists: address books of e-mail addresses collected by spammers and used to distribute junk mail. These lists can contain millions of addresses, and even if as few as 0.01% of the targets fall for it, that still represents a significant number: if the list has "only" 100,000 valid entries and 0.01% of them provide the information, that is 10 people who become victims of the phishing attack. An attacker with a list containing 1 million valid entries and a success rate of 1% will make 10,000 victims.
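The arithmetic behind those two scenarios is worth spelling out; a quick sketch:

```python
def expected_victims(valid_entries, success_rate):
    """Expected number of victims: list size times the response rate."""
    return round(valid_entries * success_rate)

# The two scenarios from the text:
print(expected_victims(100_000, 0.0001))   # 0.01% of 100,000 -> 10
print(expected_victims(1_000_000, 0.01))   # 1% of 1,000,000 -> 10,000
```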

The information collected can either be exploited directly by the phisher or sold to other parties. For example, the credentials to access a valid Bank of America account with an $18,000 balance cost $800 [2].

Corporations, governmental and non-governmental agencies face a more specific type of attack: spearphishing. The attacker gathers as much information as possible on the target, including subscriptions, centers of interest, relationships between people and so forth. The idea is to craft an e-mail that has a very high likelihood of being read and acted upon, to achieve a higher rate of success - industry statistics indicate an average rate of 19% [1].

The aim behind the attack is to obtain information, for example credentials for a corporate remote access, but also often to plant a piece of code categorized as an advanced persistent threat ("APT"). Once installed, these can stay for months without being detected, quietly sending data off the network to the attacker. That data can be intellectual property, but also classified or sensitive information, commercial offers or client lists.

Protecting against phishing or spearphishing is not easy, as these attacks appeal to our emotions and feelings: the fear of being prosecuted by a government agency or of losing a job, the lure of easy money or of an unbelievable opportunity, the compassion towards people suffering or in distress, the trust we usually have in authority figures. Against that, people have to start questioning and thinking: if it is too good to be true, then it most likely isn't true. If it seems legit but a bit off, then it most likely is off.

[1] Bimal Parmar, "Protecting against spear-phishing", Computer Fraud & Security, January 2012.
[2] "Zero-Days Hit Users Hard at the Start of the Year", TrendLabs 1Q 2013 Security Roundup, January 2013.

Friday, May 31, 2013

ODE - It's a question of options

Recently, I modeled - very summarily - a damped spring driven by a given signal on one end and free on the other, using Octave. The first results baffled me: when the signal did not start at the beginning of my time range and returned to its original value within a certain time, the output would be completely flat, as shown in the following figure.

This is clearly wrong: there is no way that kind of signal won't result in residual oscillations that dampen over time. Interestingly enough, when the driving signal repeats, the output is different. But do you notice something weird?

Yup! The oscillations start only at the third falling edge: there is no way this can be the correct solution.

So where did it go wrong?

Let's change the start time for the driving oscillation.

When the first falling edge is at t=1, t=3 or t=5, the simulation shows no oscillation. When the first falling edge occurs at t=2 or t=4, the model seems to behave correctly, and when the first falling edge is at t=6, the first two falling edges are missed. This looks like the time step is variable, and if it falls on an "unchanging" output, it doesn't bother computing the intermediate values.
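That hypothesis is easy to illustrate outside of any solver (a plain Python sketch, nothing to do with lsode itself): sampling a short pulse with a step larger than the pulse can miss it entirely, so a solver free to take arbitrarily large steps may never "see" the driving signal.

```python
def pulse(t, start=1.0, width=1.0):
    """Driving signal: 1.0 during the pulse, 0.0 otherwise."""
    return 1.0 if start <= t < start + width else 0.0

def sample(step, t_max=10.0):
    """Evaluate the signal on a regular grid with the given step."""
    return [pulse(i * step) for i in range(int(t_max / step) + 1)]

print(max(sample(2.0)))  # 0.0 -> samples land at t=0,2,4,... and miss the pulse
print(max(sample(0.5)))  # 1.0 -> several samples fall inside the pulse
```

Capping the step at half the pulse duration guarantees at least one sample inside the pulse, which is exactly what the maximum step size option does for the solver.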

Octave's lsode_options() allows getting or setting the options used by lsode(). These options include the tolerances and the integration method, but also the minimum and maximum step sizes. By default, the minimum is set to 0 (no minimum) and the maximum to -1 (no maximum): the step size can take any value and skip as many points as needed.

Here is the output from lsode_options() right after Octave was started.

Options for LSODE include:

  keyword                                             value
  -------                                             -----
  absolute tolerance                                  1.4901e-08
  relative tolerance                                  1.4901e-08
  integration method                                  stiff
  initial step size                                   -1
  maximum order                                       -1
  maximum step size                                   -1
  minimum step size                                   0
  step limit                                          100000

What if we change the maximum step size to be half of the pulse duration? Let's try that with the command lsode_options("maximum step size",0.5). The options are now

Options for LSODE include:

  keyword                                             value
  -------                                             -----
  absolute tolerance                                  1.4901e-08
  relative tolerance                                  1.4901e-08
  integration method                                  stiff
  initial step size                                   -1
  maximum order                                       -1
  maximum step size                                   0.5
  minimum step size                                   0

  step limit                                          100000

And my test set looks like this.

Which is way better: this is consistent with the spring oscillations starting at the first falling edge. Redoing the first example with the new options, I have:

And that is consistent with real-life experience.

Wednesday, May 15, 2013

Bitcoins mining and Super Computer

Some of the images in this article gave me a good laugh: the comparisons are really great. And actually, if you do the math, what you can gain in bitcoins is probably less than the cost of the electricity. Unless you have a botnet at your disposal.

Monday, May 6, 2013

Firewalling with iptables

Almost everybody knows what a firewall is. And nobody disputes the need for them in a network design or to protect a host.

Linux has a very powerful firewall: iptables. It is actually more than just a firewall and can perform:

  • Filtering
  • Packet mangling
  • Packet marking

In this post, we will just look at the filtering aspect.

Historically, firewalls started simply as filters: a firewall would drop or reject a packet based on the source and destination addresses, the protocol used (IP, ICMP, TCP, UDP) and the source and destination ports. However, this proved to have serious limitations: for instance, an administrator had to add every rule in both directions, such as

permit TCP -> port 80


permit TCP port 80 ->

This unfortunately has a very serious side effect: any machine outside can access any machine on the inside network on any TCP port, provided that the source port is set to 80!

To address that, certain vendors developed workarounds such as the "established" keyword, which checks that the packet doesn't have only the SYN flag set. However, while this prevents establishing a session, it still permits someone to scan the inside network by using the same trick as before and forcing the TCP flags to ACK, PUSH or anything non-SYN.

Then came the stateful firewalls! The idea is simple: the firewall, having access to the packets, can track the state of each session and only allow back the frames leading to an adjacent valid state. For example, if a host has sent a SYN packet from its port 3340 to port 80 on a server, the only valid replies are either an RST from port 80 to port 3340 (port 80 is closed or the connection has been refused), or a SYN-ACK from port 80 to port 3340. In the latter case, the connection is "half open", and the only valid replies are an RST (the connection is being dropped) or an ACK from port 3340 to port 80, at which point the connection is fully open and the data transfer can start.
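The tracking logic can be sketched as a small state machine (a toy model in Python; a real connection tracker is far more involved):

```python
# Valid transitions for a simplified TCP handshake: from each state, only
# a few packets lead somewhere; everything else is out of state.
TRANSITIONS = {
    ("SYN_SENT",  "RST"):     "CLOSED",       # port closed or refused
    ("SYN_SENT",  "SYN-ACK"): "HALF_OPEN",
    ("HALF_OPEN", "RST"):     "CLOSED",       # connection dropped
    ("HALF_OPEN", "ACK"):     "ESTABLISHED",  # handshake complete
}

def track(state, packet):
    """Accept a packet only if it leads to an adjacent valid state."""
    nxt = TRANSITIONS.get((state, packet))
    return (nxt, "ACCEPT") if nxt else (state, "DROP")

state, verdict = track("SYN_SENT", "SYN-ACK")
print(state, verdict)       # HALF_OPEN ACCEPT
print(track(state, "PSH"))  # ('HALF_OPEN', 'DROP') -- out-of-state, discarded
```

An out-of-state segment - like the forged ACK or PUSH scans mentioned above - simply never matches a transition and is dropped.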

This brought a slight disadvantage: in order to track the session, the firewall needs to see all the packets. Asymmetric routing is no longer possible, and if load balancing between multiple firewalls is needed, either the state tables need to be shared between all the members or the load balancing needs to take the sessions into account.

There is also another issue: while the notion of state is well defined for TCP, UDP and ICMP are connectionless, meaning there is no notion of session. So how to proceed in these cases? The usual answer is to consider that the first datagram "opens" the "session", and that anything in return on the same set of ports is to be accepted. For ICMP, it is a bit different: there is no notion of port, so the ICMP type is used instead. For instance, an ICMP Type 8 indicates an "Echo Request", for which there are only a few valid replies: Type 0 (Echo Reply), Type 3 (Unreachable), Type 4 (Source Quench) or Type 11 (TTL Expired).

TCP has a mechanism to indicate to a sender that a segment has been received out of state, or that the port is closed: replying with the RST (reset) flag set. UDP doesn't have an equivalent, and instead relies on ICMP Type 3 Code 3 ("Port Unreachable") to convey the information.
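For ICMP, the pairing of requests and replies can likewise be sketched as a lookup table (again a toy model; the type numbers come from the ICMP specification, the code is mine):

```python
# Valid replies to an ICMP Echo Request (Type 8), per the discussion above.
VALID_REPLIES = {
    8: {0, 3, 4, 11},  # Echo Reply, Unreachable, Source Quench, TTL Expired
}

def reply_allowed(request_type, reply_type):
    """Accept only the reply types solicited by the tracked request."""
    return reply_type in VALID_REPLIES.get(request_type, set())

print(reply_allowed(8, 0))   # True: an Echo Reply answers an Echo Request
print(reply_allowed(8, 5))   # False: a Redirect was never solicited
```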

Okay, now that we set the scene, let's start with iptables.

As I said, iptables is more than just a firewall. It has a modular design that allows for the quick development and implementation of new protocols and services - more on this a bit later. It operates using chains: collections of rules whose possible actions are, for the packet filter, to accept, drop or reject a packet, or to jump to another chain. This feature allows an almost "programmatic" view of the rules, with calls and returns, code sharing and so forth.
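That programmatic view can be mimicked in a few lines (a toy interpreter in Python with hypothetical rules, not iptables internals): a chain is a list of (match, target) pairs, a target is either a verdict or another chain, and falling off the end of a called chain, like an explicit RETURN, resumes evaluation in the caller.

```python
VERDICTS = {"ACCEPT", "DROP", "REJECT"}

def evaluate(chains, name, packet):
    """Walk a chain; return a verdict, or None to resume in the caller."""
    for match, target in chains[name]:
        if not match(packet):
            continue
        if target in VERDICTS:
            return target
        if target == "RETURN":
            return None
        verdict = evaluate(chains, target, packet)  # jump to another chain
        if verdict is not None:
            return verdict
    return None  # end of chain: back to the caller

chains = {
    "INPUT": [
        (lambda p: p["dport"] == 80, "WEB"),  # web traffic -> WEB chain
        (lambda p: True, "DROP"),             # everything else: drop
    ],
    "WEB": [
        (lambda p: p["src"].startswith("10."), "RETURN"),  # back to INPUT
        (lambda p: True, "ACCEPT"),
    ],
}

print(evaluate(chains, "INPUT", {"dport": 80, "src": "192.0.2.7"}))  # ACCEPT
print(evaluate(chains, "INPUT", {"dport": 80, "src": "10.0.0.5"}))   # DROP
```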

Let's start with an example.

I have a small web, name and mail server exposed to the Internet. It has a public IP directly, so I won't need NAT, and it doesn't provide access to a network: it is purely a server.

It needs to allow SMTP, POP/IMAP, DNS, HTTP and HTTPS access from the Internet, and SSH access from only a small number of IPs. I also want a blacklist for undesirable IPs and a noise list to suppress the various unsolicited packets generated by neighboring machines (Netbios, DHCP, ...). Lastly, I want to keep track of the various regions as defined by the IANA.

The rule set will look like this:

No panic! It looks way more complex than it really is.

Note: iptables does more than filtering, through the use of various tables. "filter" is one table among many; the others include "nat" for address translation, "mangle" for various operations on packets and "raw" for very, very specialized treatment. In the following, we will use the "filter" table, which is also the default.

The "filter" table comes with 3 chains:

  • INPUT - for packets destined to local sockets
  • OUTPUT - for packets locally generated 
  • FORWARD - for packets that are routed between two subnets

Our server is not used as a router, so the only chains of concern are INPUT and OUTPUT. These built-in chains also take a policy: the default action taken when none of the rules matches the packet. There are 4 built-in targets:

  • ACCEPT - the packet is let through
  • DROP - the packet is discarded
  • QUEUE - the packet is put in the queue to the userspace. This won't be discussed here.
  • RETURN - the process returns to the calling chain

For the INPUT chain, the only two targets that make sense as a policy are ACCEPT and DROP. The policy is the equivalent of an implicit last rule.

To create a new chain, the -N <chain> switch is used. In our case, we need to create the chains NOISE, BLACKHOLE, REGIONS, APNIC, LACNIC, ARIN, RIPE, AFRINIC and SSH.

iptables -N NOISE
iptables -N BLACKHOLE
iptables -N REGIONS
iptables -N APNIC
iptables -N LACNIC
iptables -N ARIN
iptables -N RIPE
iptables -N AFRINIC
iptables -N SSH

To verify the chains and their content, the -L [<chain>] switch can be used. There are also the -n (numerical) and -v (verbose) switches. 

iptables -L -nv
iptables -L INPUT -n
iptables -L REGIONS

Let's populate the INPUT chain. The order in which I want the process to occur is:

  1. Drop the NOISE
  2. Drop the BLACKHOLE
  3. Accept new HTTP sessions (go to REGIONS)
  4. Accept new HTTPS sessions (go to REGIONS)
  5. Accept new DNS sessions (go to REGIONS)
  6. Accept new SSH sessions (go to SSH)
  7. Accept already established sessions
  8. Drop and log everything else

This translates to:

iptables -A INPUT -i eth0 -j NOISE
iptables -A INPUT -i eth0 -j BLACKHOLE
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80  \
                  -m state --state NEW -j REGIONS
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 443 \
                  -m state --state NEW -j REGIONS
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 53  \
                  -m state --state NEW -j REGIONS
iptables -A INPUT -i eth0 -p udp -m udp --dport 53  \
                  -m state --state NEW -j REGIONS
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 22  \
                  -m state --state NEW -j SSH
iptables -A INPUT -i eth0 -m state --state ESTABLISHED \
                  -j ACCEPT
iptables -A INPUT -i eth0 -j LOG
iptables -A INPUT -i eth0 -j DROP
Note: the LOG target requires that the ipt_LOG module be loaded.

There are a few switches that I will describe later. 

If we left the firewall that way, nothing would get through. Why? Because the REGIONS and SSH chains are empty and would just return to INPUT. The process would fall down to the DROP target, at which point the packet would be discarded.

Let's then populate REGIONS:

iptables -A REGIONS -j APNIC
iptables -A REGIONS -j LACNIC
iptables -A REGIONS -j ARIN
iptables -A REGIONS -j RIPE
iptables -A REGIONS -j AFRINIC
iptables -A REGIONS -j ACCEPT

The closing ACCEPT target makes sure that IPs in subnets not allocated to a region are still processed. At this point, our firewall lets anyone on the Internet access the server over HTTP, HTTPS and DNS. The region chains can be populated with the information from the IANA.

That's also where an explicit RETURN target is needed: APNIC has been assigned certain subnets, but one of them was transferred to AFRINIC. This would give the following rules:

iptables -A APNIC -s -j RETURN
iptables -A APNIC -s -j ACCEPT

Dropping the noise:

If the server is on a network where there are MS Windows machines, there is a good chance the firewall is going to log an incredible quantity of noise, mostly broadcasts due to Netbios.

A way to avoid that is to drop that noise early in the process, and that's the reason for the NOISE chain.

iptables -A NOISE -i eth0 -p udp -m udp --sport 138 -j DROP
iptables -A NOISE -i eth0 -p udp -m udp --dport 138 -j DROP

This is not to be confused with the role of BLACKHOLE: that chain drops all traffic coming from certain sources. Technically, both could live in the NOISE chain, but I like to keep them separate.

An entry in the BLACKHOLE would be:

iptables -A BLACKHOLE -i eth0 -s -j DROP

The reason for an IP being in the BLACKHOLE, in my case, is too many logged attempts to access my machine on closed ports, or its presence on several blacklists.

Managing the server through SSH

In the current state, it is possible to access the server on HTTP, HTTPS and DNS, but not SSH. Again, to populate the chain, and assuming that only the RFC1918 network should be able to manage this server:

iptables -A SSH -i eth0 -s -j ACCEPT

There is no need to duplicate the tcp/22 condition: that chain is only reached through the statement in INPUT that has -j SSH, which already matches only new SSH connections.

These switches I haven't talked about yet

In the various examples, I used a few switches I haven't described yet: -i <interface>, -p <protocol>, and the -m tcp, -m udp and -m state matches.

The -i <interface> switch adds a condition to the rule: the packet must have entered through the named interface. This makes it possible to separate roles, for example one interface dedicated to management and another to the services provided. If it is not specified, the rule applies to all interfaces, including the loopback.

The -p switch specifies the protocol to match, in this case, tcp or udp. Other values are possible.

-m specifies a match module, which gives access to extra command-line switches depending on the module. For example, if a rule only has to match UDP traffic, there is no need to load the module responsible for inspecting TCP segments. I used three different modules: tcp, udp and state. The first two inspect protocol-specific options such as source and destination ports; the last gives access to the conntrack module, which keeps track of all connections. It differentiates between NEW sessions (trying to establish), ESTABLISHED sessions (already initiated), INVALID ones (for example out-of-state TCP segments) and RELATED ones (sessions tied to another session, for example FTP data connections).

With -m tcp or -m udp, two of the additional switches are --sport <port> and --dport <port>, which match the source and destination port respectively. If they are not specified, any port matches. With -m state, the --state <state> switch becomes available to match specific conntrack states.
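Putting these switches together, a minimal INPUT policy for the web/DNS server discussed above could look like the following sketch (the interface name and the exact port list are assumptions):

```shell
# Fast path: packets belonging to known flows skip further inspection.
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -m state --state INVALID -j DROP

# New connections: match the protocol, then the destination port.
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80  -m state --state NEW -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
iptables -A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT

# New SSH connections are handed to the dedicated SSH chain.
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j SSH
```

Note how only NEW connections reach the per-port rules; everything else is either already tracked or dropped as invalid.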

This was a short introduction to iptables, and going over all of its possibilities would require several books. A good starting point is the man page.

Monday, April 29, 2013

Spam Part II - The Different Actors

In part I, we went over a few things regarding the landing page and the way the requests flow between the different sites. Let's have a look at the various sites, the IPs and the countries involved.

Name / IP / Country table (host names and IPs withheld): 29 entries spread across US, DE, SE, FR, NL, UY and CA, with the bulk in Germany and the US, plus one alias resolving to six US-hosted IPs.

The majority of the servers are hosted in the US and Germany. We can break down the hosting companies for these two countries.


The situation for Germany is interesting, as only 2 providers are involved. Furthermore, the upstream provider for "TT International d.o.o." is ... Hetzner. It seems highly unlikely that a single hoster/provider is unlucky enough to be the only one serving the spam. A quick Google search reports that several of their servers are known to distribute malware. SpamHaus goes even further and lists the two "TT International d.o.o." networks as dirty.

Given the number of hosts returned for "", these could be compromised servers, possibly members of a botnet. Another interesting angle is who registered which name.
The last 3 are US-based registrars. The first one is based in Ukraine.

The "" name is load-balanced across various servers in the Amazon EC2 cloud: it resolves with a TTL of 60 seconds, rotating between a handful of IPs (addresses withheld).

Conclusion ...

After discussing with my friend, she told me her account was hacked into and she had to change her password. The account was used to send an e-mail to all her contacts.

The front-end URL was most likely hacked into, and a script is present that redirects only real browsers, selected by user agent. That way, if a search engine hits the page, it doesn't notice the redirection and ultimately the spam content.

The number of machines suggests this is not an amateur campaign: an organization is behind it, and the system can be extended to accommodate further "products". Amazon EC2 is leveraged to provide the core redirection, which ensures persistence should parts of the chain be cleaned or turned off.

Lastly, it also seems that some of the servers are from less-than-reputable hosting companies known to harbor malware and criminal activities, reinforcing the idea that a criminal organization is behind it.

Friday, April 26, 2013

Analysis of a spam site

Last week, I received an e-mail from a contact of mine. Immediately, I knew something was wrong: the subject was her name between <> and the body was only a single line.

Before 12/2012, it was possible to find where an e-mail was sent from using Hotmail: the header "X-Originating-IP" contained the IP of the machine used to send the e-mail through the web interface. Now, this is no longer possible, or at least not easily, as Microsoft has decided to replace "X-Originating-IP" with "X-EIP", which contains what seems to be a hash. If you have more information on this, let me know.

Warning: do not copy any of the following links in your browser unless you know exactly what you are doing! I have not tested any of them for possible malware. You have been warned!

The single line is actually a link ( So, let's wget that bad boy.

Without a user-agent, wget doesn't hide its nature. In this case, this is met with a 403 code (Forbidden). Interesting. What about changing the user-agent to match a Windows 7 with IE9? The corresponding user-agent string is "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)". And yes, this changes the reply! The same test with a Windows XP/IE7 string returns the same page.
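For reference, the test amounts to something like this; the URL below is a placeholder, since the real spam link is deliberately withheld:

```shell
# IE9-on-Windows-7 user-agent string used for the test
UA='Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)'
URL='http://www.example.com/'   # placeholder, NOT the real spam URL

# wget announces itself as "Wget/x.y" by default, which the server greets
# with a 403; overriding the user-agent changes the reply.
CMD="wget --user-agent=\"$UA\" --server-response -O page.html $URL"
echo "$CMD"
```

The same spoofing works with curl via its -A switch.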

The request to "" returns a 302 redirect to "", which in turn returns a 302 to "".

There are two parameters in the middle URL: a and c. I played a bit and found the following:

  • The "a" parameter seems to be some form of counter, but isn't used to select a specific page: whether it is present or not, the same pages are returned;
  • Two pages are returned, 'GarciniaCambogiaDiet' and 'GreenCoffeDiet', in alternation;
  • The "c" parameter seems to select the campaign, but I was not able to find another set of sites. Yet.
Code analysis - first landing page

Let's dive into the first page. 

There are a few JavaScript snippets present: one returns the date with the day of the week and the month name; the other is the usual "you are about to pass on a once-in-a-lifetime opportunity, do you want to reconsider?" message box, executed when the user leaves or closes the page.

Most of the body is the usual crap: "facts", "user comments", "leave us a comment" (which is just a decoy: there is no form or script attached to it). Dr. Oz is mentioned in the text:

    196               <h2>Conclusion</h2>
    197 <p><a  href="go.php" target="_blank"><strong>Pure Garcinia Cambogia</strong></a>
    198  is made from HCA the finest 100% Garcinia Cambogia fruits on the
    199 planet. We offer the highest potency Garcinia Cambogia extract available
    200  which meets all of the criteria put forth by Dr. Oz. We are confident
    201 that it will work for you, as it has for so many others.</p>
And there is another mention of the Doctor at the bottom of the page. There is also a YouTube video of Dr. Oz explaining the benefits of the products advertised here.

    418 <div id="footer">
    421  <p>
    423 *The Dr. Oz Show is a registered trademark of ZoCo 1, LLC, which is not
    424 affiliated with and does not sponsor or endorse the products or services
    425  of 100% Pure Garcinia Cambogia With Svetol ®. All Rights Reserved.</p>
    427 <p>*Reference on our Web Sites to any publication or service of any
    428 third party by me, domain name, trademark, trade identity, service mark,
    429  trade identity, logo, manufacturer or otherwise does not constitute or
    430 imply its endorsement or recommendation by Company, its parent,
    431 subsidiaries and affiliates.</p>
Yeah, to be on the safe side: mention him, but not too much. If you were wondering, the "conclusion" is written using the "clear" class style, while the bottom message uses the "footer" class style. The CSS files show that the former is clearly visible, while the footer's text color is nearly invisible against the white background.

One of the things that is quite impressive is the number of mentions of go.php: no less than 25 references. This is the target of pretty much every link in the file.

There is another PHP file, imp.php, used in an iframe.


That file is included as an iframe of size 0x0. When requested, it returns a single line, an IMG tag that requests yet another script. Fuzzing the CID parameter, or even removing it, didn't change the GIF file returned, which is a 1x1 pixel.
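A quick way to confirm a returned GIF really is a 1x1 tracking pixel is to read the dimensions straight from its header: bytes 6-9 hold the width and height as little-endian 16-bit values. The pixel below is built locally for illustration; against the real thing you would point od at the downloaded file:

```shell
# Build a minimal 1x1 GIF89a locally (stand-in for the downloaded pixel).
printf 'GIF89a\001\000\001\000\200\000\000\000\000\000\377\377\377\054\000\000\000\000\001\000\001\000\000\002\002\104\001\000\073' > pixel.gif

# Width and height: 16-bit little-endian values at offsets 6 and 8.
width=$(od -An -tu1 -j6 -N2 pixel.gif | awk '{print $1 + $2*256}')
height=$(od -An -tu1 -j8 -N2 pixel.gif | awk '{print $1 + $2*256}')
echo "${width}x${height}"   # prints: 1x1
```

A 1x1 image embedded through a 0x0 iframe has no visual purpose; it exists only to fire a logged request.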

It is apparently known to be used by malware sites.


Getting this file is really interesting due to the number of redirects found:

Connecting to

That is no less than 5 redirects! The presence of some with parameters may indicate that the same sites discriminate between different campaigns. More on this later.

The page contains three javascript includes, one of which couldn't be found (js/11.js). It also contains a form to order the "good", with the POST going to Interestingly, the connection is done through HTTPS.

The certificate information is valid and gives:

subject=/OU=Domain Control Validated/
issuer=/C=US/ST=Arizona/L=Scottsdale/, Inc./OU= Daddy Secure Certification Authority/serialNumber=07969287

The site is running the LimeLight CRM. I don't know whether this usage is "normal" for that site or whether it was compromised.

For this branch, no further analysis.

This site is interesting. It sits in the path when getting the go.php file, and it takes two parameters: offer_id and aff_id (affiliate?). Let's play a little with these parameters.

offer_id = 10    aff_id = 1               <Nothing>
offer_id = 20    aff_id = 1               Green Coffee
offer_id = 30    aff_id = 1               Green Coffee
offer_id = 36    aff_id = 1               Green Coffee Beans
offer_id = 38    aff_id = 1               Green Coffee Beans
offer_id = 40    aff_id = 1               <Redirect to>
offer_id = 44    aff_id = 1               Green Coffee
offer_id = 45    aff_id = 1               <Redirect to>
offer_id = 46    aff_id = 1               Garcinia Cambogia
offer_id = 47    aff_id = 1               <Nothing>
offer_id = 48    aff_id = 1               Garcinia Cambogia
offer_id = 48    aff_id = 2               <Nothing>
offer_id = 48    aff_id = 3               Garcinia Cambogia
offer_id = 48    aff_id = 4               Garcinia Cambogia
offer_id = 49    aff_id = 1               <Nothing>
offer_id = 50    aff_id = 1               Green Coffee

Other random values return one of the following: nothing, a redirect, 'Green Coffee', 'Green Coffee Beans' or 'Garcinia Cambogia'. Here is a visual representation of the paths taken (redirects, POSTs or clicks).
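The probing itself was nothing fancy; it amounts to a loop like the following. The hostname is a placeholder, and the real site should obviously not be poked at without good reason:

```shell
# Placeholder redirector standing in for the real tracking site.
base='http://tracker.example.com/click.php'

# Enumerate the parameter space probed above; the actual fetch
# (commented out) would use curl with the spoofed user-agent.
count=$(
  for offer in 10 20 30 36 38 40 44 45 46 47 48 49 50; do
    for aff in 1 2 3 4; do
      echo "${base}?offer_id=${offer}&aff_id=${aff}"
      # curl -s -A "$UA" -o "offer_${offer}_${aff}.html" \
      #   "${base}?offer_id=${offer}&aff_id=${aff}"
    done
  done | wc -l
)
echo "$count URLs to probe"
```

Diffing the saved pages then shows which (offer_id, aff_id) pairs map to which campaign page.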

There is a constant: the payment/ordering site usually posts to ""

Next: the different actors.

Wednesday, April 24, 2013

It is time to patch ... again!

A recent article on net-security reports that, according to a study from the Finnish security giant F-Secure, 87% of corporate computers are not adequately patched, which represents a huge security risk.

I can't say I am surprised: I have seen more than my share of corporate computers lacking some updates. Not to say SEVERAL updates. And more often than not: critical updates, things for which an exploit is available and used in the wild.

Dealing with a large number of vulnerabilities is difficult: with time, patches may depend upon patches, or pieces that seem independent may become interdependent. If you have helped a friend install updates, or if you work in this field yourself, you have probably experienced the dreaded "one more reboot and we should be okay," which usually ends up needing another 4 hours of work and several reboots. Multiply that by two hundred (or thousand) users, some of whom have a quasi-religious repulsion to rebooting, and you understand why the situation can deteriorate really quickly.

An indicator I frequently use is the average vulnerability (publication) age: it is based on the CVE IDs and gives an idea of how badly the patch front is lagging. This is coupled with the publication-year distribution, to show whether things are slipping.
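The indicator is trivial to compute, since the year is embedded in each CVE ID. A sketch, with made-up CVE IDs and 2013 as the reference year:

```shell
# Hypothetical findings from a vulnerability scan.
cves="CVE-2011-3402 CVE-2012-0158 CVE-2013-1347"
current_year=2013

total=0 count=0
for id in $cves; do
  year=${id#CVE-}; year=${year%%-*}          # pull the year field out of the ID
  total=$((total + current_year - year))
  count=$((count + 1))
done
echo "average vulnerability age: $((total / count)) year(s)"   # prints: average vulnerability age: 1 year(s)
```

Bucketing the same year field gives the publication-year distribution mentioned above.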

That kind of visual representation usually helps explaining why a situation is bad: it is better than dropping an A-Bomb of "You have more than a hundred thousand vulnerabilities that are 2 years old or more" but at the same time, it gives an idea of where the issue is. In this case, there is a spike for 2011, closely followed by 2012, which can be a lead to start investigating why things went wrong: did someone leave at that time? Was there a massive OS upgrade? Did the tool used to distribute updates come out of license?

If explaining the gravity of that kind of situation is easy and usually well-understood, problems start when it is time to remediate! I have heard excuses ranging from "we can't because it will break our main application" to "but surely we can't connect to each and every system to fix it, can we?" The median is "we are looking at implementing a tool that will automate patch deployment." Of course, what is not mentioned is that between evaluating the tool, implementing it, configuring it and getting something decent out of it, nearly a year goes by during which no problem is addressed, and at best new vulnerabilities pile up and make the situation worse.

Is there a way out? Yes. The most important step is a documented patch policy that permits the IT team to implement fixes as they are needed. While a cyclic patch program is okay, there is a need for out-of-band operations, such as urgent patches (Adobe Flash anyone?) or fixing situations that have deteriorated past a certain point, for example a machine with vulnerabilities more than 6 months old. This is where good management support is important: to make sure that objections can be dealt with rather than becoming hard stops, or turning into exemptions and free passes.

The server situation is a bit more complex: given that the team that deploys the patches is the same one that will have to fix the mess should something go wrong, there is a natural tendency to leave servers untouched if the fix doesn't bring anything needed. But given that all modern virtualization solutions offer one way or another to replicate and snapshot servers, there is no excuse for not testing the patches. Again, it is up to management to insist on having results.

Lastly, it is important to have a set of metrics that reflects the reality, so progress can be shown.

Monday, April 22, 2013

Three simple steps to determine risk tolerance (CSOONLINE)

Modern security frameworks base themselves on a determination of the risks an entity's assets are facing, and on how the entity addresses these risks: by suppressing them (removing the vulnerability, the probability of occurrence or the threat), mitigating them (putting something in place that lowers the risk below a certain threshold or transforms it into an acceptable one), or transferring/delegating them (for example, covering them with an insurance policy).

CSOONLINE has an article about "three simple steps to determine risk tolerance." I personally find it a bit thin and light, but there are a few good pointers in it.

First - If you don't have a formal risk policy / assessment framework, put one in place. Informal methodologies don't work, are inconsistent and mask the issues rather than solve them. This includes defining who can assume the risks, and what the assumptions are.

Second - Categorize risks whether they are enterprise or business unit - wide, and delegate these risks accordingly.

Third - Document how disputes around risks / delegations are to be solved.

Wednesday, April 17, 2013

Anatomy of a hack

In the light of the ongoing WordPress Attack Campaign, a "souvenir" of an attack against Joomla here.

Wednesday, April 10, 2013

My hosts have Firewall

Firewalls are not something new, even though the technology has evolved since their conception in the late 1980s. Originally simple packet filters, they later gained the concept of "state tracking" (stateful firewalls), then the ability to look into the application protocol, called "deep packet inspection", "application inspection" or "application awareness."

Recently, vendors such as Palo Alto have gone a step further and are now able to recognize applications ("Apps") inside a protocol, for example, the Palo Alto firewalls are able to discern a game from a chat on Facebook.

Firewalls are still a major security component in network security, as they separate zones of different security level, for example "outside", "Exposed DMZ" and "Inside." However, these network firewalls are effective only when a connection goes from a network to another. Even if there are the so-called "transparent firewalls" which operate at Layer 2, they can't protect each and every machine individually.

That's where "Host Firewalls" have a role: instead of deploying a network device in front of each host, why not include the firewall part in the host itself? That way, the machine is protected not only from the other zones, but also from the other computers in the same network! Important? Let's go through a few examples.

Example 1 - Servers in an Exposed DMZ

This may be the most obvious example: several servers are in a DMZ exposed to the Internet. These present some web, file transfer and remote access services to computers on the Internet.

The front-end firewall protects these servers from the Internet and the internal networks by letting only the relevant protocols enter the DMZ, and by performing some protocol validations, such as restricting the methods used in HTTP or the commands permitted in an FTP session. The front-end firewall also protects the other networks against the DMZ by letting only the relevant services through: DNS, possibly access to a back-end database and so forth.

Now, what happens if a server is compromised? The attacker will scan the local network to find other hosts, in order to get further inside the network. This can be done either by using credentials found on the first compromised server or by exploiting other vulnerabilities. But to do that, the attacker has to be able to create connections from the compromised server to the neighboring servers.

If each host in the DMZ has its own firewall, the attacker might be prevented from moving laterally, and even better: provided that blocked accesses are logged, the sysadmins may detect the attempts and find the compromised server.
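On a Linux host in that DMZ, this could be sketched with a few iptables rules. The subnet is a placeholder and the service (HTTP) is an assumption:

```shell
# Established flows are fine; everything below inspects new connections.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Neighbors in the (placeholder) DMZ subnet have no business connecting
# here: log the attempt, then drop it. A scan from a compromised
# neighbor becomes visible in the logs.
iptables -A INPUT -s 192.0.2.0/24 -j LOG --log-prefix "DMZ-lateral: "
iptables -A INPUT -s 192.0.2.0/24 -j DROP

# The service this host actually provides, for everybody else.
iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
iptables -A INPUT -j DROP
```

The ordering matters: the neighbor block sits before the service ACCEPT, so even the published service is off-limits laterally.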

Example 2 - Workstations in an internal network

Another trivial example, but let's think about it: if you have a well-segregated network in which the different departments/business units are separated from each other, what about the communications within a department? Do you think that user A in accounting often has to share documents on her computer with user B in the same department?

Over the last few years, attackers have been increasingly successful at exploiting the human component of computer networks through social engineering. For instance, an employee gets an e-mail with a link or a pdf file and a text crafted to incite the user to click or open. Once this is done, the malware payload executes on the computer.

One of the first steps the attacker will take is to scan nearby machines to further compromise the network, which is called "lateral movement." It also ensures the attacker's persistence in the network should infected machine "0" be cleaned.

This lateral movement can be done in a variety of ways; let's mention two: vulnerability exploitation and account compromise.

  • Vulnerability Exploitation

Nothing new: the attacker will use an internally accessible vulnerability to propagate his/her malware to other machines. Since he/she is in the internal network, without much filtering, the possibilities expand to additional protocols such as UPnP, NetBIOS or RDP.

  • Account Compromise

The attacker will dump the local user database and try to find usernames and passwords. Depending on the policies in place, this can be quite easy. Additionally, Windows stores a cached copy of the username/password to allow access to the device in the event the domain controllers are not reachable.

If the attacker is able to find a username/password pair, he/she can login to the systems with the user's privileges.

But the account compromise can also give the attacker access to a remote access solution, if the company has one that doesn't use a separate source of authentication, or if passwords are reused. That is for another discussion.

One of the conditions for lateral movement to succeed is that the systems on the network accept connections from neighboring hosts. If the administrators have limited the ability of a non-IT workstation to connect to other workstations, this will seriously hinder an attacker's ability to spread using exploits or - under some conditions - compromised credentials.

Example 3 - Internal servers

It can also be that the administrators need to protect certain servers from being accessed from certain networks or workstations. Usually, the system admins delegate that task to the network or firewall team. While this works pretty well in practice, a better approach is to duplicate these controls by implementing them both at the network and at the host level.

There are many reasons for that, let's mention the top two:

  • Prevention against the failure of a layer

System security is about planning the "what ifs": what if a "permit ip any any" is added by mistake to a network device? At that point, whatever is on the host will still protect it against unauthorized communications.

  • Proximity of the rules

The closer the rules are to a host, the more effective they are: a set of rules in a firewall 5 hops away from a server only protects against traffic that has to cross that firewall, which means none of the networks connected to the intermediate 5 hops undergoes any filtering.

Having the rules directly on the hosts means that everything, including hosts on the same network segment, can be filtered or permitted. While it is possible to achieve the same at the network level using port ACLs or equivalent, applying the rules at the host can be more efficient, as it distributes the filtering load across multiple machines. Again, nothing prevents duplicating these tasks.

Application filtering

Another reason to implement the filtering on the hosts themselves is the ability to filter not only the ports/protocols but also what application(s) can use them. Coupled with an explicit deny, this can be a powerful tool to limit the possibility for an attacker to compromise a host, or in the event of a compromise, to prevent further activity. 

For example, there is no reason the NetBIOS services on a Windows machine should be accessed by anything other than the corporate networks. Also, is there any reason Word should be making HTTPS connections to anything other than Microsoft's networks?

Selectively allowing applications outbound or inbound - to listen on a port - is a good way to start detecting when things are getting weird: if you suddenly see Adobe Reader making network requests right as it seems to crash, chances are something is wrong.

What's against: "blame the firewall"

Scott Adams summed it up in "Dilbert: Blame the firewall": at the first inconvenience, people blame the firewall for their inability to "work". Looking at the cold facts, it usually means "your firewall prevented me from accessing my radio/video/game." Only in rare cases is the problem really work-related.

Unfortunately, people are wired that way: they tend to blame what they don't understand. I actually ran the experiment once: I warned the IT team and the whole company that I was about to install an additional firewall to protect our network. My boss was in on it: it was a fake announcement. Yet the day after the "change", the helpdesk started receiving complaints about non-working connections, and a C-level person even demanded that the new firewall be taken offline!

Another problem that frequently arises is the "exception by level": a "sufficiently privileged" person in the company demanding an exception for the sole reason that ... he has the power to request it. In that scenario, the highest authorities in the company have a strong educational role to play, by letting everyone on the management staff know that even though they could grant themselves the exception, they won't. And neither should anyone else.

And my favorite last one: the "victory by permanent yes". In certain companies, prior to getting an exception, a manager or exec has to approve it. More often than not, the exception is almost guaranteed to be approved, which leaves the user feeling that the procedure is just bureaucratic, and the technician feeling that there is no point in a procedure that almost guarantees approval, regardless of how stupid the request may be.


A typical network is akin to a candy with a crunchy shell and a gooey inside: once the perimeter has been breached, not a lot prevents lateral movement. Modern attacks leverage that and target users directly inside the network: one successful social engineering attempt is all it takes to get in and meet little to no resistance while moving from machine to machine.

Most, if not all, of the tools needed to implement strong security are present in modern OSes. However, this implies that system administrators start understanding network concepts, which, in my experience, is not easy to achieve. It also requires management to buy into the concept and require all IT players to be at least proficient in each discipline (system, network, application). And maybe the most problematic part: management has to stand on the IT/Security side and refuse changes that lack a strong business justification, while making sure IT/Security is seen not as the problem, but as a defense against potential problems.