DRBD 8.4.3: faster than ever

For those who don’t already have DRBD 8.4.3 deployed, here is another good reason to upgrade: performance.

As you know, DRBD marks the to-be-changed disk areas in the Activity Log.

Until now that meant a DRBD speed penalty of up to 50% for random-write workloads: each application-issued write request translated into two write requests on storage.


With DRBD 8.4.3, Lars managed to reduce that overhead [1] from 1:2 down to 64:65, i.e. to about 1.6%. (In sales speak: “up to 64 times faster” ;) )
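In other words, counting storage writes per application write (just restating the ratios above):

    before 8.4.3:   1 application write   ->  1 data write + 1 AL metadata write  =  2 writes  (up to 50% penalty)
    with 8.4.3:    64 application writes  -> 64 data writes + 1 AL metadata write = 65 writes  (1/64 ≈ 1.6% overhead)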

Here are two graphs showing the difference on one of our test clusters, both using 10GigE and synchronous replication (protocol C):

[Figure: Random Writes Benchmark, Spinning Disk]
The raw LVM line shows the hardware limit of 350 IOPS; while 8.4.2 and 8.3.15 are quickly limited by hard-disk seeks, the 8.4.3 bars go up much further: in this hardware setup we get 4 times the random-write performance!


When using SSDs the difference is even more visible: the speedup from 8.4.2 to 8.4.3 is a factor of about 16.7.

[Figure: Random Writes Benchmark, SSD]
Again, the raw LVM line shows the hardware limit of 50k IOPS; 8.4.2 has to wait for its synchronous writes (at 1.5k IOPS), while 8.4.3 delivers 25k IOPS, at least half the raw SSD speed.
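For reference, random-write numbers like these are typically gathered with a tool such as fio; the job below is only a sketch, not our exact benchmark, and the device path, block size, queue depth, and job count are placeholders to adapt to your own setup:

    # hypothetical fio run: 4k random writes against a DRBD device
    # WARNING: this writes directly to the device and destroys any data on it
    fio --name=randwrite-test --filename=/dev/drbd0 \
        --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based \
        --group_reporting

Running the same job once against the raw LVM volume and once against the DRBD device gives you the kind of comparison shown in the graphs above.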


Please note that every setup is different, and storage subsystems are very complex beasts with many non-linear, interacting parts. During our tests we found many “interesting” (but reproducible) behaviours, so you will have to tune your specific setup [2][3].


Furthermore, the Activity Log can now be much bigger [4]; but, as the performance impact of leaving the “hot” area is now much reduced, you may even want to lower al-extents, i.e. tune the AL size to your working set, to reduce resync times after a failed Primary.
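As a sketch of what that could look like in the disk section of a resource (the resource name and the value are made up; each AL extent covers 4MiB of the backing device, so pick a value that roughly matches your working set):

    resource r0 {            # hypothetical resource name
      disk {
        # example: working set of ~2GiB -> 2048MiB / 4MiB per extent = 512 extents
        al-extents 512;
      }
      # ... the rest of the resource definition stays as usual ...
    }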

And, last but not least, the AL can be striped – this might help for some hardware setups, too.
Please see the documentation for more details.

BTW: these changes are in the DRBD 9 branch too, so you won’t lose the benefits.


  1. At least for I/O that is “sufficiently parallel”, i.e. a single thread doing small, synchronous, totally random writes may see no improvement, while a database with a fair number of connections should benefit nicely.
  2. To give an example: under certain circumstances, directly on this SSD, increasing the I/O depth beyond a certain limit can seriously decrease IOPS.
  3. Or ask LINBIT to help you tune your systems. (One Cluster Health Check is included with every Platinum Subscription, BTW.)
  4. The limit is now 65534 extents, i.e. more than 10 times the previous size.

6 thoughts on “DRBD 8.4.3: faster than ever”

  1. What about a 1GigE network? Could you please provide test results from the same environment, just with 1GigE?

  2. Hi!
    Unfortunately the hardware is already in use elsewhere.

    When talking about performance, please always look at the bottlenecks.

    When using 1GigE with a fast I/O subsystem, you will quickly hit a bottleneck at about 110 MB/s, because DRBD replicates fully synchronously with protocol C.
    So the performance improvements of DRBD 8.4.3 may not be as visible as in our example, because your network is holding you back.
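    As a rough sanity check on that 110 MB/s figure:

        1 Gbit/s = 1000 Mbit/s / 8 = 125 MB/s raw line rate
        minus Ethernet/IP/TCP framing overhead -> roughly 110-118 MB/s of usable payload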

    For more details on how to tune the hell out of DRBD, just send me a mail.

    Best, David
