[Pacemaker] DRBD < LVM < EXT4 < NFS performance

Christoph Bartoschek ponto at pontohonk.de
Sat Jun 2 15:55:34 EDT 2012


> Dedicated replication link?
> 
> Maybe the additional latency is all that kills you.
> Do you have non-volatile write cache on your IO backend?
> Did you post your drbd configuration settings already?

There is a dedicated 10 Gbit Ethernet replication link between the two nodes.

There is also a cache on the IO backend. I have run some additional 
measurements with dd and oflag=direct.
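
The dd invocations were along these lines (the block size, count and target
path below are illustrative, not the exact values used):

  # direct I/O write test, bypassing the page cache
  dd if=/dev/zero of=/path/to/testfile bs=1M count=512 oflag=direct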

On a remote host I get:

- With the DRBD link enabled:    3 MBytes/s
- With the DRBD link disabled:   9 MBytes/s

On one of the machines locally:

- With the DRBD link enabled:   24 MBytes/s
- With the DRBD link disabled:  74 MBytes/s
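
Toggling the replication link for a test like this can be done by
disconnecting and reconnecting the resource; a sketch, using the resource
name lfs from the config below (not necessarily how it was toggled here):

  drbdadm disconnect lfs   # stop replication to the peer ("disabled" case)
  drbdadm connect lfs      # re-establish replication ("enabled" case)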

On the same machine, but on a partition without DRBD and LVM:

- 90 MBytes/s

This is our current drbd.conf:

global {
    usage-count yes;
}
common {
  syncer {
    rate 500M;              # background resync rate cap, not the normal replication speed
  }
}
resource lfs {
  protocol C;               # synchronous replication: a write is acknowledged only after the peer has it

  startup {
    wfc-timeout         0;  # wait indefinitely for the peer on startup
    degr-wfc-timeout  120;  # but only 120s if the cluster was degraded before
  }
  disk {
    on-io-error detach;     # drop the backing device on I/O errors and continue diskless
    fencing resource-only;  # fence via the Pacemaker constraint handlers below
  }
  handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
  net {
    max-buffers    8000;    # raised from the defaults to allow more in-flight requests
    max-epoch-size 8000;
  }
  on d1106i06 {
    device     /dev/drbd0;
    disk       /dev/sda4;
    address    192.168.2.1:7788;
    meta-disk  internal;
  }
  on d1106i07 {
    device     /dev/drbd0;
    disk       /dev/sda4;
    address    192.168.2.2:7788;
    meta-disk  internal;
  }
}

Thanks
Christoph
