Mirrored SAN vs. DRBD

Every now and then we get asked "why not simply use a mirrored SAN instead of DRBD?" This post shows some important differences.

Basically, the first setup has two servers, one of which actively drives a DM mirror (RAID 1) over, e.g., two iSCSI volumes exported by two SANs; the alternative is a standard DRBD setup. Please note that both setups need some kind of cluster manager (such as Pacemaker).
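For reference, a minimal DRBD resource definition for the two-server setup could look like the sketch below. Hostnames, devices, and addresses are hypothetical; see drbd.conf(5) for the authoritative syntax.

```
resource r0 {
  protocol  C;              # synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;      # hypothetical backing device
  meta-disk internal;
  on alice { address 10.0.0.1:7789; }   # first server (made-up name)
  on bob   { address 10.0.0.2:7789; }   # second server (made-up name)
}
```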

Here are the two setups visualized [diagram: mirrored-SAN setup vs. DRBD setup]:
The main differences are:

| # | SAN (DM mirror) | DRBD |
|---|-----------------|------|
| 1 | High cost, single supplier | Lower cost, commercial off-the-shelf parts |
| 2 | At least 4 boxes (2 application servers, 2 SANs) | 2 servers are sufficient |
| 3 | DM mirror only recently gained a write-intent bitmap (needed if the active node crashes), and has had performance problems | Optimized activity log |
| 4 | Maintenance needs multiple commands | Single userspace command: drbdadm |
| 5 | Split brain not handled automatically | Automatic split-brain detection; recovery policies set via the DRBD configuration |
| 6 | Data verification has to pull all data over the network, twice | Online verify optionally transports only checksums over the wire |
| 7 | Asynchronous mode (via WAN) not in the standard product | Protocol A available; optional proxy for compression and buffering |
| 8 | Black box | GPL solution, integrated in the standard Linux kernel since 2.6.33 |
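Points 5 and 6 map directly to configuration: the split-brain recovery policies and the online-verify checksum algorithm are both set per resource. A sketch, using DRBD 8.4 syntax (in 8.3, verify-alg lives in the syncer section instead):

```
resource r0 {
  net {
    # automatic split-brain recovery policies (choose these with care)
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    # checksum algorithm used by "drbdadm verify r0"
    verify-alg sha1;
  }
}
```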

So the open-source solution, DRBD, has some clear technical advantages – not just on price.

And, if that’s not enough — with LINBIT you get world-class support, too!

4 thoughts on "Mirrored SAN vs. DRBD"

  1. Hello, I have a primary/primary cluster made with Proxmox 2.2. Can I use DRBD without fencing? This cluster runs virtual machines per server for a private cloud; no websites. I would like to use it for disaster recovery.
    I'm just starting out with DRBD, but I find it's a good solution!

    • Hi edumax64,
      We’re happy to hear your first impressions with DRBD are good, but I’m afraid I don’t 100% understand your question.

      While we don't touch on fencing above, fencing is generally preferred and normally recommended. However, you can use DRBD without fencing (single-primary only, and even then only if you are sure of what you are doing). With dual-primary, on the other hand, fencing is essential.

      On its own, DRBD simply replicates data, but teamed with a cluster manager (like Pacemaker) it can help provide high availability. For disaster recovery we offer a tool that allows DRBD replication across high-latency, low-throughput networks such as a WAN or the internet, and it has seen great success with those currently using it (see DRBD Proxy for more information).

  2. Have you ever set up DRBD Proxy combined with a Master-Master cluster manager to serve geo-localised data?

    • If “Master-Master” means dual-primary, then no. That doesn’t make much sense.

      The DRBD Proxy is used on bandwidth- and latency-limited connections; driving dual-primary with a cluster filesystem would mean that the DLM has to share that bandwidth, too – and that would make the latency even worse.

      If you can split your data, create two (or more) DRBD resources – and have one active on each side.
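      One way to sketch that split (resource names, devices, and site hostnames below are made up): two resources, each normally primary at one site and replicated asynchronously to the other.

```
resource east {
  protocol  A;              # asynchronous replication (suited to WAN links)
  device    /dev/drbd0;
  disk      /dev/vg0/east;  # hypothetical backing device
  meta-disk internal;
  on site-a { address 192.168.1.10:7788; }
  on site-b { address 192.168.2.10:7788; }
}
# a second resource "west" would use /dev/drbd1 and be primary on site-b
```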
