As part of the ongoing 8.4 development effort we're currently testing the effects of early write submits, both to local storage and to the peer node(s).
That is, when DRBD can guess in advance that write requests are about to arrive, it can send the data pages to the other node ahead of time; if the application then actually writes to storage, all that still needs to be sent is a small “do it” packet. This small packet can be transmitted over the (much faster) meta-data DRBD connection, which reduces latency by a fair amount.
See the performance improvements from a trial run:
[Figure: early-write performance improvements]
For configuration there's a new item, early-write, which takes a time value in the usual tenths-of-a-second units; e.g. a value of 10 will cause DRBD to send the data one second before the application tries to write it.
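As a rough sketch, such a setting might end up looking like the snippet below; since the feature is still being tested, its placement in the disk section and the exact syntax are assumptions on our part:

    resource r0 {
      disk {
        # hypothetical placement and syntax; the value is in tenths
        # of a second, so 10 pre-sends the data pages one second early
        early-write 10;
      }
    }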
You can expect this feature in the upcoming 8.4.5 release of DRBD, so stay tuned!
As an update to the earlier blog post, take a look below. Continue reading
The threading model in DRBD Proxy 3.1 received a complete overhaul; below you can see the performance implications of these changes. Continue reading
A question we see over and over again is: “Why is umount so slow? Why does it take so long?”
Part of the answer was already given in an earlier blog post; here’s some more explanation. Continue reading
For those who don't already have DRBD 8.4.3 deployed, here's another good reason: performance. Continue reading
Every now and then we get asked, “Why not simply use a mirrored SAN instead of DRBD?” This post shows some important differences. Continue reading
DRBD 8.4.1 introduces a new feature: read-balancing, which is configured in the disk section of the configuration file(s). This feature enables DRBD to balance read requests between the Primary and Secondary nodes. Continue reading
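For illustration, a disk section enabling it could look like the sketch below; see drbd.conf(5) for the authoritative list of balancing policies available in your version:

    resource r0 {
      disk {
        # alternate read requests between the two nodes; other
        # policies include prefer-local (the default), prefer-remote,
        # and least-pending
        read-balancing round-robin;
      }
    }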
DRBD tries to ensure data integrity across different computers, and it’s quite good at it.
But, as the old saying “trust, but verify” suggests, it might be a good idea to periodically check whether the nodes really hold identical data, similar to the consistency checks done for RAID sets. Continue reading
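As a quick sketch of how that can be done with DRBD's online verification (the resource name r0 and the cron idea are just examples): pick a digest algorithm in the net section, then trigger a verify run periodically:

    # in the resource's net section: the digest used for verification
    net {
      verify-alg md5;
    }

    # e.g. once a month from cron: compare both nodes' data online
    drbdadm verify r0
    # mismatches are logged and marked out-of-sync; a subsequent
    # disconnect/connect cycle resynchronizes the affected blocks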
The sync-rate controller limits the bandwidth used during resynchronization (not during normal replication); it runs on the node in the SyncTarget state, i.e. on the (inconsistent) receiving side. Continue reading
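For reference, the dynamic controller is driven by a few items in the disk section; here's a sketch with placeholder values that would need tuning for real hardware and network links:

    disk {
      c-plan-ahead  20;   # look-ahead in tenths of a second;
                          # 0 disables the dynamic controller
      c-fill-target 1M;   # amount of in-flight data to aim for
      c-min-rate    10M;  # lower bound for the resync rate
      c-max-rate    100M; # upper bound for the resync rate
    }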
The TL;DR version: don't use data-integrity-alg in a production setup. Continue reading
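For context, this is the setting in question: it computes a digest over every replicated data packet, which is handy for debugging transport problems but costs CPU and can report false positives when pages are modified while in flight. A sketch, for test setups only:

    net {
      # per-packet digest of replicated data: useful for debugging,
      # but leave it unset (the default) in production
      data-integrity-alg crc32c;
    }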