Firewalls are nothing new, even though the technology has evolved since their conception in the late 1980s. They started as simple packet filters; a later evolution brought the concept of "state tracking" (stateful firewalls), then came the ability to look into the application protocol, called "deep packet inspection," "application inspection" or "application awareness."
Recently, vendors such as Palo Alto have gone a step further and are now able to recognize applications ("Apps") inside a protocol; for example, Palo Alto firewalls can discern a game from a chat on Facebook.
Firewalls are still a major component of network security, as they separate zones of different security levels, for example "outside", "Exposed DMZ" and "Inside." However, these network firewalls are effective only when a connection goes from one network to another. Even though there are so-called "transparent firewalls" that operate at Layer 2, they cannot protect each and every machine individually.
That's where "Host Firewalls" have a role: instead of deploying a network device in front of each host, why not include the firewall part in the host itself? That way, the machine is protected not only from the other zones, but also from the other computers in the same network! Important? Let's go through a few examples.
Example 1 - Servers in an Exposed DMZ
This may be the most obvious example: several servers sit in a DMZ exposed to the Internet, presenting web, file transfer and remote access services to computers on the Internet.
The front-end firewall protects these servers from the Internet and the internal networks by letting only the relevant protocols enter the DMZ, and by performing some protocol validation, such as restricting the methods used in HTTP or the commands permitted in an FTP session. The front-end firewall also protects the other networks against the DMZ by letting only the relevant services through: DNS, possibly access to a back-end database and so forth.
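To illustrate the idea of protocol validation, here is a minimal Python sketch of an HTTP method allowlist, the kind of check a front-end firewall or reverse proxy could apply before letting a request reach the DMZ. The method list is an illustrative assumption, not any vendor's default configuration.

# Minimal sketch: allow only a small set of HTTP methods into the DMZ.
# The allowlist below is an illustrative assumption, not a vendor default.
ALLOWED_HTTP_METHODS = {"GET", "HEAD", "POST"}

def is_request_allowed(request_line: str) -> bool:
    """Return True if the request uses a permitted HTTP method.

    `request_line` is the first line of the request,
    e.g. 'GET /index.html HTTP/1.1'.
    """
    method = request_line.split(" ", 1)[0].upper()
    return method in ALLOWED_HTTP_METHODS

print(is_request_allowed("GET /index.html HTTP/1.1"))  # True
print(is_request_allowed("PUT /upload HTTP/1.1"))      # False: dropped at the edge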
Now, what happens if a server is compromised? The attacker will scan the local network to find other hosts in order to get further inside the network. This can be done either by using credentials found on the first compromised server or by exploiting other vulnerabilities. But to do that, the attacker has to be able to create connections from the compromised server to the neighboring servers.
If each host in the DMZ has its own firewall, the attacker might be prevented from moving laterally, and even better: provided that blocked accesses are logged, the sysadmins may detect the attempts and find the compromised server.
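As a minimal sketch of that detection idea, the following Python script counts dropped inbound connections coming from hosts on the same segment, assuming the default Windows Firewall log layout (space-separated fields: date, time, action, protocol, source IP, destination IP, ...). The log path, subnet and "same /24" heuristic are illustrative assumptions.

# Minimal sketch: spot blocked lateral-movement attempts in a host firewall log.
import collections
import ipaddress

LOG_PATH = r"C:\Windows\System32\LogFiles\Firewall\pfirewall.log"  # assumed path
LOCAL_SUBNET = ipaddress.ip_network("192.168.10.0/24")             # the DMZ segment

def blocked_neighbors(log_path=LOG_PATH):
    """Count DROPped inbound connections coming from hosts in the same subnet."""
    counters = collections.Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if line.startswith("#") or not line.strip():
                continue                      # skip header/comment lines
            fields = line.split()
            if len(fields) < 6 or fields[2] != "DROP":
                continue
            src_ip = fields[4]
            try:
                if ipaddress.ip_address(src_ip) in LOCAL_SUBNET:
                    counters[src_ip] += 1     # a neighbor is knocking
            except ValueError:
                continue                      # non-IP field, ignore
    return counters

if __name__ == "__main__":
    for src, hits in blocked_neighbors().most_common(10):
        print(f"{src} was blocked {hits} times - worth a look")

A neighbor in the DMZ that shows up here repeatedly is a good candidate for a closer inspection.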
Example 2 - Workstations in an internal network
Another trivial example, but let's think about it: if you have a well-segregated network in which the different departments/business units are separated from each other, what about the communications within a department? Do you think user A in accounting often has to share documents on her computer with user B in the same department?
Over the last few years, attackers have been increasingly successful at exploiting the human component of computer networks through social engineering. For instance, an employee gets an e-mail with a link or a PDF file and text crafted to entice the user to click or open it. Once this is done, the malware payload executes on the computer.
One of the first steps the attacker will take is to scan nearby machines to further compromise the network, which is called "lateral movement." This ensures the attacker keeps a foothold in the network should the initially infected machine ("patient zero") be cleaned.
This lateral movement can be done in a variety of ways; let's mention two: vulnerability exploitation and account compromise.
- Vulnerability Exploitation
Nothing new here: the attacker will use an internally accessible vulnerability to propagate his/her malware to other machines. Since he/she is already on the internal network, where there is usually little filtering, the possibilities expand to additional protocols such as UPnP, NetBIOS or RDP.
- Account Compromise
The attacker will dump the local user database and try to find usernames and passwords. Depending on the password policies in place, this can be quite easy. Additionally, Windows stores a cached copy of domain credentials to allow access to the device in the event the domain controllers are not reachable.
If the attacker is able to find a username/password pair, he/she can log in to other systems with that user's privileges.
Account compromise can also give the attacker access to a remote access solution, if the company has one that does not use a separate source of authentication or if passwords are reused. But that is a discussion for another time.
One of the conditions for lateral movement to be successful is that the systems on the network accept connections from neighboring hosts. If the administrators have limited the ability of a non-IT workstation to connect to other workstations, it will seriously hinder an attacker's ability to spread using exploits or, under some conditions, compromised credentials.
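The rule logic is simple enough to sketch in a few lines of Python: inbound connections from peer workstations are denied, while an IT management subnet is still allowed in. The subnets and ports below are illustrative assumptions, not a recommended policy.

# Minimal sketch of a workstation's inbound policy.
import ipaddress

IT_MGMT_SUBNET = ipaddress.ip_network("10.0.50.0/24")      # helpdesk/admin hosts (assumed)
WORKSTATION_SUBNET = ipaddress.ip_network("10.0.20.0/24")  # this department (assumed)
ALLOWED_MGMT_PORTS = {3389, 445}                           # remote support, file admin

def allow_inbound(src_ip: str, dst_port: int) -> bool:
    """Decide whether an inbound connection to this workstation is allowed."""
    src = ipaddress.ip_address(src_ip)
    if src in IT_MGMT_SUBNET and dst_port in ALLOWED_MGMT_PORTS:
        return True              # IT can still manage the machine
    if src in WORKSTATION_SUBNET:
        return False             # peer workstations have no business connecting
    return False                 # default deny for everything else

print(allow_inbound("10.0.50.12", 3389))   # True: helpdesk RDP
print(allow_inbound("10.0.20.33", 445))    # False: neighboring workstation is blocked

In practice this is expressed as host firewall rules rather than code, but the decision tree is the same: a default deny, plus a short list of management exceptions.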
Example 3 - Internal servers
It can also be that the administrators need to protect certain servers from being accessed from certain networks or workstations. Usually, the system admins delegate that task to the network or firewall team. While this works pretty well in practice, a better approach is to duplicate these security steps by implementing them both at the network and at the host level.
There are many reasons for that; let's mention the top two:
- Prevention against the failure of a layer
System security is about planning for the "what ifs": what if a "permit ip any any" is added by mistake on a network device? At that point, whatever filtering is on the host will still protect it against unauthorized communications.
The closer the rules are to a host, the more effective they are: implementing a set of rules on a firewall five hops away from a server will only protect against traffic that has to cross that firewall, which means none of the networks connected to the five intermediate hops undergoes any filtering.
Having the rules directly on the hosts means that everything, including the hosts on the same network segment, can be filtered or permitted. While it is possible to achieve the same at the network level by using port ACLs or equivalent, applying the same rules at the host can be more efficient, as it distributes the filtering load across multiple machines. Again, nothing prevents a duplication of these tasks.
- Application filtering
Another reason to implement the filtering on the hosts themselves is the ability to filter not only the ports/protocols but also what application(s) can use them. Coupled with an explicit deny, this can be a powerful tool to limit the possibility for an attacker to compromise a host, or in the event of a compromise, to prevent further activity.
For example, there is no reason the NetBIOS services on a Windows machine should be accessed by anything other than the corporate networks. Also, is there any reason Word should be making HTTPS connections to anything other than Microsoft's networks?
Selectively allowing applications outbound or inbound - that is, to listen on a port - is a good way to start detecting when things are getting weird: if you suddenly see Adobe Reader making network requests right as it appears to crash, chances are something is wrong.
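Conceptually, application filtering is an allowlist keyed on the executable rather than just the port. Here is a minimal Python sketch of that idea; the program names, ports and destination lists are illustrative assumptions, not the policy of any particular product.

# Minimal sketch of application-aware outbound filtering with an explicit deny.
APP_POLICY = {
    "winword.exe":  {"ports": {443}, "domains": {"office.com", "microsoft.com"}},
    "acrord32.exe": {"ports": set(), "domains": set()},     # Adobe Reader: no network at all
    "chrome.exe":   {"ports": {80, 443}, "domains": None},  # None = any destination
}

def allow_outbound(process: str, dst_port: int, dst_domain: str) -> bool:
    """Explicit deny: unknown programs or unexpected destinations are blocked."""
    policy = APP_POLICY.get(process.lower())
    if policy is None:
        return False                      # not in the allowlist at all
    if dst_port not in policy["ports"]:
        return False                      # unexpected port for this program
    if policy["domains"] is not None and dst_domain not in policy["domains"]:
        return False                      # unexpected destination
    return True

print(allow_outbound("winword.exe", 443, "microsoft.com"))   # True
print(allow_outbound("acrord32.exe", 443, "evil.example"))   # False: worth an alert

Every denied decision is a candidate log entry, and those logs are exactly what makes the "Adobe Reader suddenly talking to the Internet" case visible.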
What works against it: "blame the firewall"
Scott Adams summed it up in "
Dilbert: Blame the firewall": at the first inconvenience, people will start blaming the firewall for not being able to "work". If we look at the cold facts, it usually means: "your firewall prevented me from accessing my radio/video/game." Only in rare cases is the problem really a work-related problem.
Unfortunately, people are wired that way: they tend to blame anything they don't understand. I actually ran the experiment once: I warned the IT team and the whole company that I was about to install an additional firewall to protect our network. My boss was in on it: it was a fake announcement. Yet the day after the "change," the helpdesk started receiving complaints about non-working connections, and a C-level person even demanded that the new firewall be taken offline!
Another problem that frequently arises is the "exception by level": a "sufficiently privileged" person in the company demanding an exception for the sole reason that ... he has the power to request it. In that scenario, the highest authorities in the company have a strong educational role to play, by letting everyone on the management staff know that even though they could grant themselves the exception, they won't. And neither should anyone else.
And my favorite, the last one: "victory by permanent yes." In certain companies, before an exception is granted, a manager or executive has to approve it. More often than not, the approval is almost guaranteed, which leaves the user feeling that the procedure is just bureaucratic, and the technician feeling that there is no point in having a procedure that approves nearly every request, no matter how ill-advised it is.
Conclusions
A typical network is akin to a soft-centered cake: a crunchy layer on the outside with a gooey inside. Once the perimeter has been breached, not much prevents lateral movement. Modern attacks leverage that and target the users inside the network directly: one successful social engineering attempt is all it takes to get inside and meet little to no resistance while moving from machine to machine.
Most, if not all, of the tools needed to implement strong security are present in every modern OS. However, this implies that system administrators start understanding network concepts, which is, in my experience, not an easy thing to achieve. It also requires that management buys into the concept and requires all the IT players to be at least proficient in each discipline (system, network, application). And maybe most problematic: management has to stand on the IT/Security side and refuse changes unless they have a strong business justification, while also making sure that IT/Security is not seen as the problem, but rather as a defense against potential problems.