Sunday, October 10, 2010

A clustered Samba Fileserver, Part II - Replicated block device using DRBD

In this part, I will present the configuration of DRBD.

DRBD works by creating resources that map a local block device to a device on a remote computer. Logically, the resource definition must be identical on both computers, which doesn't mean the actual physical devices have to be the same.


Here is the resource created for this test:

resource r0 {
  on linuxnas01 {
    device    /dev/drbd1;
    disk      /dev/sdb;
    address   192.168.255.1:7789;
    meta-disk internal;
  }
  on linuxnas02 {
    device    /dev/drbd1;
    disk      /dev/sdb;
    address   192.168.255.2:7789;
    meta-disk internal;
  }
}
This creates a resource named r0 that exists on the machines linuxnas01 and linuxnas02. On linuxnas01, the physical device backing the resource is /dev/sdb. In this scenario, it's the same on linuxnas02; that makes the configuration easier to read, but it is in no way mandatory.

First comment: this is a resource in "primary/secondary" mode, which means that one node has read-write access to the device while the other is read-only.

The line "meta-disk internal;" specifies that the DRBD metadata (see [1] for a reference) is written on the same physical device as the data of the resource. This will matter when we create the filesystem.

It also has a performance implication: each write operation results in (at least) two actual accesses, one to write the data sectors concerned and a second to update the metadata. If the goal is a filesystem with demanding write performance, a better solution is to store the metadata on another physical device, for instance a fast one such as a flash drive or a 15K RPM SAS disk.
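As a sketch, an external-metadata variant of the resource above could look like this (the device /dev/sdc1 is a hypothetical fast partition, not part of my test setup):

```
resource r0 {
  on linuxnas01 {
    device    /dev/drbd1;
    disk      /dev/sdb;
    address   192.168.255.1:7789;
    meta-disk /dev/sdc1;   # hypothetical fast device holding the metadata
  }
  on linuxnas02 {
    device    /dev/drbd1;
    disk      /dev/sdb;
    address   192.168.255.2:7789;
    meta-disk /dev/sdc1;
  }
}
```

With external metadata, the data device is left entirely to the data, at the price of a slightly more complex setup.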

On Fedora, don't forget to open the firewall. By default, only port 22/tcp is allowed, which prevents the DRBD connection from establishing; as a result, the nodes stay in the "Unknown" state. You also have to load the kernel module, either manually with "/etc/init.d/drbd start" or by adding the service to the required rcX.d levels, with chkconfig or update-rc.d, depending again on your flavor.
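On a Fedora box of that era, something like the following should do it; the port number (7789 here) must match the one in your resource definition:

```shell
# allow the DRBD replication port through the firewall (adjust to your resource)
iptables -A INPUT -p tcp --dport 7789 -j ACCEPT
service iptables save

# load the module now, and make sure the service starts at boot
/etc/init.d/drbd start
chkconfig drbd on
```

These are system-configuration commands; on Debian-style systems, update-rc.d replaces chkconfig.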


Once the resource is configured, attached and brought up (see [2] for the chapter dealing with the configuration), it appears and starts to sync between the nodes.
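For reference, the initialization described in [2] boils down to these commands, run on both nodes:

```shell
# write the DRBD metadata for the resource (destroys any data in that area)
drbdadm create-md r0

# attach the backing disk and connect to the peer
drbdadm up r0
```

Once both nodes are up, the initial synchronization starts, as shown below.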

[root@linuxnas01 ~]# drbd-overview
  1:r0  SyncSource Secondary/Secondary UpToDate/Inconsistent C r----
    [==>.................] sync'ed: 19.2% (1858588/2293468)K

udevadm can also be used to check that the device exists.

[root@linuxnas01 ~]# udevadm info --query=all --path=/devices/virtual/block/drbd1
P: /devices/virtual/block/drbd1
N: drbd1
W: 61
S: drbd/by-res/r0
S: drbd/by-disk/sdb
E: UDEV_LOG=3
E: DEVPATH=/devices/virtual/block/drbd1
E: MAJOR=147
E: MINOR=1
E: DEVNAME=/dev/drbd1
E: DEVTYPE=disk
E: SUBSYSTEM=block
E: RESOURCE=r0
E: DEVICE=drbd1
E: DISK=sdb
E: DEVLINKS=/dev/drbd/by-res/r0 /dev/drbd/by-disk/sdb

My excerpt shows Secondary/Secondary. To force a node to become "primary", use "drbdadm primary <resource>". Issued on the first node, this makes DRBD treat linuxnas01 as the primary node.

At this point, I have a working resource on both machines. The next step is to create the filesystem on the primary node.
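As a preview, and assuming ext4 as the filesystem (my choice here, not imposed by DRBD), that step looks like this on the primary node:

```shell
# on the primary node only, once it has been promoted with "drbdadm primary r0":
# create the filesystem on the DRBD device, never on /dev/sdb directly --
# with internal metadata, /dev/drbd1 is slightly smaller than the backing disk,
# since the metadata is stored at the end of it
mkfs -t ext4 /dev/drbd1
```

I will detail this, and the Samba layer on top, in the next part.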


Bibliography

[1] DRBD web site, chapter 18, http://www.drbd.org/users-guide/ch-internals.html
[2] DRBD web site, chapter 5, http://www.drbd.org/users-guide/ch-configure.html
