DRBD 8.0.11 + GFS2 + CMAN (RAID 1 over the network with fencing)

This guide is here to help people install and understand the concept of using general-purpose PCs as a distributed, highly available SAN solution. It requires Debian lenny or a newer release to work properly. The nodes in this example are called mail1 (10.0.0.1) and mail2 (10.0.0.2). In the following example DRBD is placed on top of a software RAID, where /dev/md4 is the RAID 1 device on both nodes. This is an initial guide and will not necessarily match your needs as written; study the man pages and understand how things really work before playing with this on a production server.

Install the necessary packages on mail1 and mail2

#apt-get install drbd8-module-source
#apt-get install drbd8-utils
#apt-get install gfs2-tools
#apt-get install cman

DRBD (Distributed Replicated Block Device) 8.0.11

Start off by loading the DRBD kernel module on your lenny or newer release of Debian

#modprobe drbd
(if this fails, build the module manually, for example with module-assistant as shown below)
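
A minimal, hedged sketch of building the module with module-assistant; the exact source package name can differ between Debian releases, so adjust it to whatever you installed above.

#apt-get install module-assistant
#m-a prepare
#m-a auto-install drbd8-module-source
#modprobe drbd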

Next we edit the drbd configuration file /etc/drbd.conf on both hosts.

#example /etc/drbd.conf configuration file
resource r0 {
        protocol C;
        startup {
                become-primary-on both;
        }
        net {
                allow-two-primaries;
                cram-hmac-alg "sha1";
                shared-secret "123456";
                after-sb-0pri discard-least-changes;
                after-sb-1pri violently-as0p;
                after-sb-2pri violently-as0p;
                rr-conflict violently;
        }
        syncer {
                rate 100M;
        }
        on mail1 {
                device    /dev/drbd0;
                disk      /dev/md4;
                address   10.0.0.1:7789;
                meta-disk internal;
        }
        on mail2 {
                device    /dev/drbd0;
                disk      /dev/md4;
                address   10.0.0.2:7789;
                meta-disk internal;
        }
}
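
The same /etc/drbd.conf must be present on both nodes. One simple way to distribute it and to sanity-check the syntax (drbdadm dump just parses and prints the configuration) is:

#scp /etc/drbd.conf mail2:/etc/drbd.conf
#drbdadm dump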

Create the DRBD metadata for resource r0 on both nodes

#drbdadm create-md r0

Next we can attempt to start DRBD on both nodes

#/etc/init.d/drbd start

Next, activate both DRBD nodes as primary

#drbdadm primary all (do not do this if you are not using GFS2 or another file system that can handle both nodes being in primary mode)
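
Note that directly after create-md both nodes are normally Inconsistent and drbdadm will refuse to promote them. In that case the initial synchronisation usually has to be forced from one node only; treat the following as a sketch and check drbdadm(8)/drbdsetup(8) for your exact version:

#drbdadm -- --overwrite-data-of-peer primary r0
(run this on one node only, e.g. mail1; once the sync has finished the other node can be promoted with drbdadm primary all)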

To check the status of both nodes you can try some of the following commands

#cat /proc/drbd
and/or
#drbdadm state all

CMAN is a symmetric, general-purpose cluster manager

CMAN requires the configuration file /etc/cluster/cluster.conf on both nodes

#example /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="network-raid1" config_version="2">
  <cman two_node="1" expected_votes="1">
  </cman>
  <clusternodes>
    <clusternode name="mail1" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="human" ipaddr="10.0.0.1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="mail2" votes="1" nodeid="2">
      <fence>
        <method name="single">
          <device name="human" ipaddr="10.0.0.2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>
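
The same cluster.conf must be present on both nodes. A quick, hedged way to check the XML (xmllint comes with the libxml2-utils package) and copy it over:

#xmllint --noout /etc/cluster/cluster.conf
#scp /etc/cluster/cluster.conf mail2:/etc/cluster/cluster.conf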

Once the configuration file is present on both nodes we can start cman

#/etc/init.d/cman start

If your nodes get stuck at fencing, check that you do not have any odd definitions in /etc/hosts; the node names from cluster.conf must resolve to the cluster interface addresses and not to a loopback entry such as the 127.0.1.1 line Debian adds by default
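
As a sketch, a matching /etc/hosts on both nodes for the addresses used in this guide would contain:

#example /etc/hosts entries
10.0.0.1    mail1
10.0.0.2    mail2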

To verify that everything is running as planned you can check the nodes using cman_tool

#cman_tool nodes
gives you the status of the nodes
#cman_tool status
gives a detailed status of cman itself and the current cluster configuration

Now that DRBD and CMAN are running we can create the GFS2 file system

GFS2 (Global File System 2)

The cluster name network-raid1 that we already defined in cluster.conf has to be used again as the first part of the lock table name when creating the GFS2 file system on /dev/drbd0

#mkfs.gfs2 -t network-raid1:* -p lock_dlm -j 4 /dev/drbd0
(the part before the colon must match the cluster name from cluster.conf, and the part after the colon names the file system within the cluster; simultaneous mounts from both hosts are possible because lock_dlm is used and -j 4 creates four journals, one per potential mounter)
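
If your mkfs.gfs2 rejects the * as a file system name, use an explicit one instead; gfs2vol below is only a placeholder, and two journals (-j 2) are enough for a two-node cluster:

#mkfs.gfs2 -t network-raid1:gfs2vol -p lock_dlm -j 2 /dev/drbd0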

The file system can now be mounted, provided the gfs2 module is loaded into the kernel on both nodes

#modprobe gfs2
#mkdir /mnt/drbd
#mount -t gfs2 /dev/drbd0 /mnt/drbd
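
If you want the mount point defined permanently, a hedged /etc/fstab sketch is shown below; noauto keeps it from being mounted at boot before DRBD and cman are up, so you still mount it by hand (or from your own init script) with mount /mnt/drbd:

#example /etc/fstab line
/dev/drbd0   /mnt/drbd   gfs2   defaults,noauto   0   0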

You can now test the file system by creating a directory on one node and listing it on the other

on mail1
#mkdir /mnt/drbd/test
on mail2
#ls /mnt/drbd/
test
#rm -rf /mnt/drbd/test

Congratulations, you now have a working RAID 1 over the network with fencing