

This article was originally written using version 8.3.7 of DRBD; the current version under Debian Jessie is 8.9.2rc1, but the commands are the same.

The Distributed Replicated Block Device (DRBD) is a distributed storage system spanning multiple hosts, similar to a network RAID 1. The data is replicated below the filesystem, at the block layer, over TCP/IP.

A proper DRBD setup, especially in HA environments with Pacemaker etc., requires a more complex configuration than described here. Please refer to the appropriate guides. The following example is nothing more than a "Hello World". Please make sure to use identical DRBD versions on all nodes.

Prepare the Debian system

The module portion of DRBD is shipped with Debian kernels, but the user space tools and configuration files ship with the drbd8-utils package.

# apt-get install drbd8-utils
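
To verify that the kernel module and the user space tools match on every node (the advice above about identical versions), a quick check like the following can be used; the exact version strings of course depend on your kernel and package:

# modinfo drbd | grep ^version
# dpkg -s drbd8-utils | grep ^Version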

Prepare simulated physical block devices

I didn't have any empty space on my box, so I created a loopback device on each node. I didn't find a proper way to set up loopback files during the boot process (maybe via udev rules?) [1]. This setup is of course not suitable for production; the use of loop devices is not recommended due to deadlock issues [2]. If you do have space, use a physical or LVM-backed partition instead (a sketch follows the commands below). The following simulates a virtual partition of 1 GB:

# dd if=/dev/zero of=drbd.testimage bs=1024k count=1024
# losetup /dev/loop1 drbd.testimage
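
If you do have free space in a volume group, an LVM-backed variant could look like the following; the volume group name vg0 is only a placeholder for your own setup, and the disk line in the resource file below would then point to /dev/vg0/drbddemo instead of /dev/loop1:

# lvcreate -L 1G -n drbddemo vg0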

Create drbd resource

Place a file with the extension .res in /etc/drbd.d/ containing the following content. NOTE: the hostnames used in the resource file (node1 and node2 here) must be identical to the actual node names (as reported by uname -n)!

# cat drbd-demo.res 
resource drbddemo {
  meta-disk internal;
  device /dev/drbd1;
  syncer {
    verify-alg sha1;
  }
  net {
    allow-two-primaries;
  }
  on node1 {
    disk /dev/loop1;
    address 192.168.1.101:7789;
  }
  on node2 {
    disk /dev/loop1;
    address 192.168.1.102:7789;
  }
}
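
As a quick sanity check (optional, but it catches typos early), drbdadm can parse the configuration and echo it back:

# drbdadm dump drbddemo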

The loopback images and the configuration must be created on both nodes, and the metadata must be initialized on both as well:

# drbdadm create-md drbddemo
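
If node1 was prepared first, the resource file can simply be copied over (scp is just one way to do it; anything that keeps the files identical on both nodes is fine), before repeating the dd, losetup and create-md steps on node2:

# scp /etc/drbd.d/drbd-demo.res root@node2:/etc/drbd.d/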

Bring the device up

On node1

# uname -n
node1
# modprobe drbd
# drbdadm up drbddemo
# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757 

 1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:1048508
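
The state cs:WFConnection simply means that node1 is waiting for its peer. To follow the state changes live while the second node is brought up, a simple watch loop on /proc/drbd is handy:

# watch -n1 cat /proc/drbd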

On node2

Make node2 the primary one:

# uname -n
node2
#  
# modprobe drbd
# drbdadm up drbddemo
# drbdadm -- --overwrite-data-of-peer primary drbddemo
# cat /proc/drbd 
version: 8.3.11 (api:88/proto:86-96)
srcversion: 2D876214BAAD53B31ADC1D6 

 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:4176 nr:0 dw:0 dr:9616 al:0 bm:0 lo:2 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
        [>....................] sync'ed:  0.4% (1048508/1048508)K
        finish: 0:43:41 speed: 0 (0) K/sec

Now we should have /dev/drbd1 up and running.
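
Instead of reading /proc/drbd, the connection state, role and disk state can also be queried directly with drbdadm (just a convenience, not needed for the rest of the example):

# drbdadm cstate drbddemo
# drbdadm role drbddemo
# drbdadm dstate drbddemo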

Store sample data

Let's create a filesystem on that block device and put some data on it.

# mkfs.xfs /dev/drbd1
meta-data=/dev/drbd1             isize=256    agcount=4, agsize=65532 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262127, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# mount /dev/drbd1 /mnt
# cat <<-END > /mnt/mytestfile
> drbd demo (it's not spelled drdb ;-) )
> END

On the other node you will get a warning when trying to mount /dev/drbd1; this is only possible in read-only mode. I also switched off the secondary node and put much more data on the primary one. After restarting the backup node (set up the loop device again and launch /etc/init.d/drbd restart), the sync process started immediately. You may verify this with a simple watch cat /proc/drbd.
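
To check the replicated data on node1 by hand, the device has to be demoted on node2 and promoted on node1 first. A minimal sketch, using the mount point and resource name from above:

On node2:

# umount /mnt
# drbdadm secondary drbddemo

On node1:

# drbdadm primary drbddemo
# mount /dev/drbd1 /mnt
# cat /mnt/mytestfile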

Pitfalls

I tried version 8.3.7 from stable together with 8.3.11 from stable-backports on the other node. The result was a protocol failure (drbd_send_block() failed) and an unresponsive system due to endless reconnect attempts. In this particular case, I used squeeze-stable on the target and squeeze-backports on the source.

After restarting the source node with the kernel shipped in stable, the sync process started immediately.

# cat /proc/drbd 
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757 

 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
    ns:0 nr:284320 dw:284320 dr:0 al:0 bm:17 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:764188
        [====>...............] sync'ed: 27.4% (764188/1048508)K
        finish: 0:15:55 speed: 416 (320) K/sec

Finally I brought both nodes to the stable-backports level and everything ran fine, too.

References

Notes

  1. http://wiki.vps.net/os-related-issues/setting-up-drbd-with-loopback-devices-and-heartbeat-on-vps-net-ubuntu/

  2. http://www.drbd.org/users-guide/ch-configure.html