Recently I had to install a new server that will act as a mail server (Zarafa, but that doesn't matter) and be a member of a DRBD cluster (to automagically replicate the Zarafa MySQL DB and the attachments on disk to the other node). Fine, except that only one physical node was at my disposal: we'll convert the existing M$ Exchange physical box to CentOS/DRBD after the migration. So what?

I was thinking about that nice feature in mdadm where you can create a Linux software RAID 1 array with only one available disk ("mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing" for those of you who don't know that trick) and add the second disk later. It would be cool to do exactly the same with DRBD: one node active now, the missing one added later. Don't try to find a 'missing' parameter in the drbd.conf file, but it's possible (even if not documented in the online docs). Do you remember that nice parameter you use when you initialize your first DRBD resource ("drbdadm -- --overwrite-data-of-peer primary $resourcename")? Why not test it with only one available node? Yes, it works. In fact it reminds me of the name that parameter had in previous DRBD versions (aka "-- --do-what-I-say"): that was really a way of instructing DRBD to do what you wanted it to do.
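To make it concrete, here is a minimal sketch of the setup (the resource name "r0", hostnames and IP addresses are made up for the example, not my actual config): you declare both nodes in drbd.conf as usual, even though the second box doesn't exist yet, then force the only existing node to become primary:

    # /etc/drbd.conf (excerpt) - both nodes are declared,
    # even if node2 doesn't physically exist yet
    resource r0 {
      protocol C;
      on node1.mydomain.com {
        device    /dev/drbd0;
        disk      /dev/vg_local/lv_drbd;
        address   192.168.0.1:7788;
        meta-disk internal;
      }
      on node2.mydomain.com {
        device    /dev/drbd0;
        disk      /dev/vg_local/lv_drbd;
        address   192.168.0.2:7788;
        meta-disk internal;
      }
    }

    # on the only existing node:
    drbdadm create-md r0
    drbdadm up r0
    # force it primary even though the peer is unreachable
    drbdadm -- --overwrite-data-of-peer primary r0

Later, when the second box is converted, installing DRBD there, creating its metadata and bringing the resource up should be enough for it to connect and sync everything from this primary.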

The only "issue" found so far is that it isn't possible to use the "drbdadm resize" command online to extend its size (yes, I use the nested LVM configuration : so backend disks / LVM / LV as a DRBD device / LVM / new LV on top of the drbd device) but I can easily understand why such operation really needs a connection to the second real node (which obviously is missing here)

Oh, and while I'm talking about DRBD, you have to know (if you use it already) that DRBD 8.3.2 (and the corresponding kABI kmods) are available in the [testing] repo ;-)
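Something like the following should pull them in on CentOS 5 (the repo id and package names here are from memory, so double-check them against the repo):

    yum --enablerepo=c5-testing install drbd83 kmod-drbd83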