Differences between revisions 1 and 12 (spanning 11 versions)
Revision 1 as of 2012-04-30 14:04:14
Size: 4754
Editor: ?MarcusOsdoba
Comment: simple drbd walkthrough
Revision 12 as of 2019-09-07 10:30:23
Size: 6052
Editor: nodiscc
Comment: add categories

Translation(s): English - Français


This article was initially created with DRBD version 8.3.7; the current version in Debian Jessie is 8.9.2rc1, but the commands are the same.

The Distributed Replicated Block Device (DRBD) is a distributed storage system spanning multiple hosts, similar to a network RAID 1. The data is replicated below the filesystem, at the block layer, over TCP/IP.

A proper DRBD setup, especially in HA environments with Pacemaker etc., requires a more complex setup than described here. Please refer to the appropriate guides. The following example is nothing more than a "Hello World". Please make sure to use identical DRBD versions on all nodes.

Prepare the Debian system

The module portion of DRBD is shipped with Debian kernels, but the user space tools and configuration files ship with the drbd8-utils package.

apt-get install drbd8-utils

Prepare simulated physical blockdevices

I didn't have any empty space on my box, so I created a loopback device on each node. I didn't find a proper way to set up loopback files during the boot process (maybe via udev rules?) [1]. This setup is of course not suitable in production; the use of loop devices is not recommended due to deadlock issues [2]. If you do have space, use a physical or LVM-backed partition. The simulation uses a virtual partition of 1 GB:

# dd if=/dev/zero of=drbd.testimage bs=1024k count=1024
# losetup /dev/loop1 drbd.testimage
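
The article notes that there is no obvious way to re-attach the loopback file at boot. One possible workaround (an untested sketch; the image path and the use of /etc/rc.local are assumptions, not part of the original setup) is to attach it late in the boot sequence and then restart DRBD, mirroring the manual recovery described later for the backup node:

```shell
# Hypothetical /etc/rc.local excerpt: re-attach the backing file and
# restart DRBD afterwards, since rc.local runs late in the boot sequence.
losetup /dev/loop1 /root/drbd.testimage
/etc/init.d/drbd restart
```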

Create drbd resource

Place a file ending in .res in /etc/drbd.d/ with the following content. NOTE: the hostname must be identical to the node name!

# cat drbd-demo.res 
resource drbddemo {
  meta-disk internal;
  device /dev/drbd1;
  syncer {
    verify-alg sha1;
  }
  net {
    allow-two-primaries;
  }
  on node1 {
    disk /dev/loop1;
    address 192.168.1.101:7789;
  }
  on node2 {
    disk /dev/loop1;
    address 192.168.1.102:7789;
  }
}
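
One caution about the configuration above: allow-two-primaries lets both nodes become primary simultaneously, which is only safe with a cluster filesystem (OCFS2, GFS2); with a plain filesystem such as the XFS used below it risks split brain and data corruption. For a simple active/passive pair, a more conservative variant of the same resource (a sketch, otherwise identical) simply drops that option:

```
resource drbddemo {
  meta-disk internal;
  device /dev/drbd1;
  syncer {
    verify-alg sha1;
  }
  # "allow-two-primaries" removed: only one node may be primary at a time
  on node1 {
    disk /dev/loop1;
    address 192.168.1.101:7789;
  }
  on node2 {
    disk /dev/loop1;
    address 192.168.1.102:7789;
  }
}
```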

The loopback images and the configuration must be created on both nodes. The initialization must be done on both, too.

# drbdadm create-md drbddemo

Bring the device up

In node1

# uname -n
node1
# modprobe drbd
# drbdadm up drbddemo
# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757 

 1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:1048508

In node2

Make node2 the primary one:

# uname -n
node2
# modprobe drbd
# drbdadm up drbddemo
# drbdadm -- --overwrite-data-of-peer primary drbddemo
# cat /proc/drbd 
version: 8.3.11 (api:88/proto:86-96)
srcversion: 2D876214BAAD53B31ADC1D6 

 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:4176 nr:0 dw:0 dr:9616 al:0 bm:0 lo:2 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
        [>....................] sync'ed:  0.4% (1048508/1048508)K
        finish: 0:43:41 speed: 0 (0) K/sec

Now we should have /dev/drbd1 up and running.
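
The cs: (connection state) and ds: (disk state) fields of /proc/drbd are easy to check from a script. A minimal sketch, run here against a saved sample line from the transcript above, since /proc/drbd only exists on a DRBD node:

```shell
# Parse the connection state (cs:) and local disk state (ds:) from a
# /proc/drbd status line; on a real node, read /proc/drbd instead.
sample=' 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----'
cs=$(printf '%s\n' "$sample" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')
ds=$(printf '%s\n' "$sample" | sed -n 's/.*ds:\([A-Za-z]*\)\/.*/\1/p')
echo "connection=$cs local-disk=$ds"
# prints: connection=SyncSource local-disk=UpToDate
```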

Store sample data

Let's create a filesystem on that block device and put some data on it.

# mkfs.xfs /dev/drbd1
meta-data=/dev/drbd1             isize=256    agcount=4, agsize=65532 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262127, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# mount /dev/drbd1 /mnt
# cat <<-END > /mnt/mytestfile
> drbd demo (it's not spelled drdb ;-) )
> END

On the other node you should get a warning when trying to mount /dev/drbd1: you can't mount the resource on node1 while node2 is primary. I also switched off the secondary node and put much more data on the primary one. After restarting the backup node (loop-mount the file image and launch /etc/init.d/drbd restart), the sync process started immediately. You may verify this with a simple watch cat /proc/drbd .

Invert resource (node2 -> node1)

Here is how to invert the primary / secondary mode between the two nodes.

In node2:

umount /mnt
drbdadm secondary drbddemo

In node1:

drbdadm primary drbddemo
mount /dev/drbd1 /mnt

Pitfalls

I tried version 8.3.7 from stable together with 8.3.11 from stable-backports on the other node. The result was a protocol failure (drbd_send_block() failed) and an unresponsive system due to endless reconnect attempts. In this particular case, I used squeeze-stable on the target and squeeze-backports on the source.

After restarting the source node with the stable-shipped kernel, the sync process started immediately.

# cat /proc/drbd 
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757 

 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
    ns:0 nr:284320 dw:284320 dr:0 al:0 bm:17 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:764188
        [====>...............] sync'ed: 27.4% (764188/1048508)K
        finish: 0:15:55 speed: 416 (320) K/sec

Finally I brought both nodes to the stable-backports level, and everything ran fine, too.
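
A cheap guard against this pitfall is to compare the first line of /proc/drbd on both nodes before connecting them. A sketch of the comparison, using the two version lines from the transcripts above as stand-ins for the real files:

```shell
# Compare the DRBD module versions of two nodes (field 2 of the
# "version:" line of /proc/drbd). Real usage would fetch the remote
# line, e.g. via ssh; here the transcript values are hard-coded.
node1_line='version: 8.3.7 (api:88/proto:86-91)'
node2_line='version: 8.3.11 (api:88/proto:86-96)'
node1_ver=$(printf '%s\n' "$node1_line" | awk '{print $2}')
node2_ver=$(printf '%s\n' "$node2_line" | awk '{print $2}')
if [ "$node1_ver" != "$node2_ver" ]; then
  echo "version mismatch: $node1_ver vs $node2_ver"
fi
# prints: version mismatch: 8.3.7 vs 8.3.11
```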

Notes

  1. http://wiki.vps.net/os-related-issues/setting-up-drbd-with-loopback-devices-and-heartbeat-on-vps-net-ubuntu/

  2. http://www.drbd.org/users-guide/ch-configure.html


CategoryHardware | CategorySystemAdministration | CategoryRedundant: merge with other CategoryRaid pages