Quote:
Originally Posted by AndrewX
Disk I/O rises somewhat incrementally with the number of containers per master node. You can check your I/O wait percentage via top (the CPU "wa" value is your I/O load). Try this command; it should give you a result similar to this one:
[root@master ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 34.3023 s, 31.3 MB/s
Anything below 20 MB/s is very unusual. FYI, RAID1 mirrors contents over 2 drives: 2x 250 GB gives 250 GB mirrored on 2 drives, which has faster read speeds but obviously the same write speed. RAID0 across 2x 250 GB gives 500 GB, and different parts of the data are written across the 2 drives simultaneously. It has faster read AND write speeds, but roughly double the risk of losing everything to a drive crash. RAID5 does not have that problem thanks to added parity, but it requires at least 3 drives.
So make sure your server has the right RAID configuration. Hardware RAID is obviously better than software RAID, so check which one you have so the benefit of your 3-drive server is not wasted.
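For anyone else following along, here is a rough way to spot-check the I/O wait AndrewX mentions (a minimal sketch, assuming a Linux box with procps top, vmstat, and the sysstat package; the exact output layout varies by distro and top version):

# CPU summary once in batch mode; "wa" is the I/O wait percentage
top -bn1 | grep -i 'cpu(s)'
# Watch it over time; the "wa" column here is the same metric
vmstat 1 5
# Per-device utilisation and await times (needs sysstat installed)
iostat -x 1 3

If "wa" stays high while the dd test runs, the disks are the bottleneck rather than CPU or RAM.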
My DigitalOcean VPS:
# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.7137 s, 188 MB/s
My dedicated server:
# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 8.98492 s, 120 MB/s
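I still need to confirm whether the dedicated box is running hardware or software RAID, as AndrewX suggests. A rough way to check (a sketch, assuming a standard Linux install with pciutils and util-linux available; controller names will vary):

# Software RAID (mdraid) arrays, if any, show up here
cat /proc/mdstat
# Hardware RAID controllers usually show up on the PCI bus
lspci | grep -i raid
# Block device layout: look for md* devices (software RAID) or one large logical drive (hardware RAID)
lsblk

If /proc/mdstat is empty and lspci lists a RAID controller, it is most likely hardware RAID presenting a single logical disk to the OS.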