Not long ago, I was asked to install a front end for some web applications. For the sake of high availability, it was decided to install two web servers, each accessing the same back-end database.
The choice was between a few commercial products, Foundry being the best known, and several open-source alternatives. I thus decided to play with one of them, Nginx.
I had already installed Apache and Squid as reverse proxies and front-end load balancers, but Nginx was still new to me. So why hesitate?
A few articles I read on the Internet ([1], [2] and [3]) show that Nginx outperforms Apache in several scenarios, one of them being a large number of concurrent connections. In that light, it makes sense to think of:
- Nginx as a front-end, possibly with a twin using VRRP or clusterd to provide front-end high availability, and possibly serving the static parts;
- Several Apache back-ends running your favorite language: PHP, Perl, ... or any other application server: Tomcat, JBoss, Zope ...;
- A way to centralize the database (More on this later).
The front-end can have multiple roles, from merely acting as a reverse proxy between clients and back-end servers to also encrypting traffic on the fly, compressing it, making decisions based on GeoIP and so on. The sky is the limit!
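As a quick illustration of that last point, here is a sketch of a GeoIP-based decision. It assumes Nginx was built with the ngx_http_geoip_module and that a MaxMind country database sits at the path shown; both the path and the country code "XX" are assumptions of mine, not part of my setup:

http {
    # Assumed path to a MaxMind country database; adjust to your distribution.
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    server {
        listen 80;
        # Illustration only: refuse clients whose IP resolves to country code "XX".
        if ($geoip_country_code = XX) {
            return 403;
        }
    }
}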
My first test was an easy setup: a Nginx front-end and two back-end Apache servers. This was easily accomplished, with only a few directives:
In the http{} section, I declared my server farm:
upstream loadbalanced {
    server 192.168.1.71:80;
    server 192.168.1.72:80;
}
And in the server{} section, I declared that everything has to be sent to these upstream servers:
location / {
    proxy_pass http://loadbalanced;
}
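In practice, you will probably also want the back-ends to see the original client rather than the front-end's address. Here is a variant of the same location block with the usual forwarding headers; this is my own suggestion, not required for this test:

location / {
    proxy_pass http://loadbalanced;
    # Forward the original Host header and client address to the back-ends.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}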
Either way, this is the most basic setup: Nginx performs a round-robin selection between the two servers, and each new connection is sent to the next server in turn. From here, let's try two things:
- Shut down the Apache process on one of the back-end servers
- Shut down the complete server
Scenario 1 is detected immediately, and Nginx forwards all requests to the second back-end. Mission accomplished!
Scenario 2 went OK as well: Nginx tried for a bit, then switched to the second machine. Again, aside from a roughly 30-second wait, no error was returned. This can be tuned at will, see [4]. The same document describes the options to control the load balancing, make sessions sticky and so on.
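As an illustration of those options (the values below are mine, not the ones I actually used), the upstream block could be tuned like this:

upstream loadbalanced {
    # Uncomment to pin each client to the same back-end (sticky by client IP).
    # ip_hash;
    # Consider a server down after 2 failures, and retry it after 10 seconds.
    server 192.168.1.71:80 weight=2 max_fails=2 fail_timeout=10s;
    server 192.168.1.72:80 weight=1 max_fails=2 fail_timeout=10s;
}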
My second test was a wee bit more "complex": why waste precious CPU cycles on the application servers when the front-end can compress, encrypt and serve static content such as pictures? This leaves plenty of CPU resources to execute all the scripts on the back-end servers.
So, objectives:
- The front-end compresses and encrypts the connection with the client;
- The connection between the back-ends and the front-end is in clear text;
- The front-end serves static content.
That's an easy job.
First, let's create a self-signed certificate:
openssl genrsa -out nginx-key.pem
openssl req -new -key nginx-key.pem -out nginx-req.pem
<Bunch of questions suppressed>
openssl x509 -req -signkey nginx-key.pem -in nginx-req.pem -out nginx-cert.pem
Next, let's configure Nginx for SSL. Most distributions have a default "ssl.conf" file in /etc/nginx/conf.d. In there, you can find most of the needed declarations.
#
# HTTPS server configuration
#
server {
    listen 443;
    server_name _;

    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/nginx-key.pem;

    ssl_session_timeout 5m;

    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:-MEDIUM:-LOW:-SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://loadbalanced;
    }
}
No big mysteries there if you are a bit familiar with the Apache configuration. The ssl_protocols and ssl_ciphers declarations use the OpenSSL-like format. Again, I would strongly advise disabling SSLv2, as it has known weaknesses, and leaving only the "HIGH" encryption.
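For instance, a stricter cipher line, as my own suggestion rather than the distribution default, could be:

ssl_ciphers HIGH:!aNULL:!MD5;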
This alone gives me the encryption by the front-end. To compress, simply add gzip on; within the server{} section.
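A slightly more complete compression block, as a sketch (the content types, minimum size and level are illustrative choices of mine):

gzip on;
# Compress common text-like content types in addition to text/html (handled by default).
gzip_types text/plain text/css application/javascript application/json;
# Skip very small responses and keep the CPU cost reasonable.
gzip_min_length 1024;
gzip_comp_level 5;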
The next and last part is to serve the static content from Nginx itself. To make things easy, I isolated the images under /images. To serve them directly from Nginx rather than from the back-end servers, I declare that all URLs starting with '/images' are served from the local filesystem instead of being passed to the upstream servers:
location /images {
    root /usr/share/nginx;
}
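Optionally (my addition, not part of the original test), an Expires header lets clients cache those images:

location /images {
    root /usr/share/nginx;
    # Let browsers and intermediate caches keep the images for 30 days.
    expires 30d;
}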
And that's it. From here, my front-end encrypts, compresses and serves the images from its local drive.
Bibliography
[1] http://joeandmotorboat.com/2008/02/28/apache-vs-nginx-web-server-performance-deathmatch/
[2] http://www.wikivs.com/wiki/Apache_vs_nginx
[3] http://blog.webfaction.com/a-little-holiday-present
[4] http://wiki.nginx.org/NginxHttpUpstreamModule