6x WD2000EARS/EARX Advanced Format / RAID / Encryption Write Performance Issue (Linux)

I’m using 1x WD2000EARS and 5x WD2000EARX in an mdadm RAID 6 on Linux. With full-disk encryption on the RAID array (LUKS), I get quite low write speeds. Now I know that RAID 5/6 isn’t famous for write performance, but I want to make sure this is not an alignment issue.

From what I’ve heard, mdadm’s 1.2 metadata format is advanced-format-safe, but I’m not sure about the effects of the LUKS encryption and the fact that I’m not using any partitions.
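
One way to check this directly (device names below are just examples; the md device is guessed from the md2_crypt mapper name) is to look at where md and LUKS actually place their data. Both offsets are reported in 512-byte sectors and should be multiples of 8 to land on 4 KiB boundaries:

# where the md member data starts behind the 1.2 superblock
mdadm --examine /dev/sda | grep -i "data offset"

# where the LUKS payload starts on the array
cryptsetup luksDump /dev/md2 | grep -i "payload offset"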

I’ve done some write performance testing:

single drive, single possibly aligned partition, no encryption: ~110MB/s (starting sector 40, which is a multiple of 8 and therefore 4 KiB-aligned, but parted claims it’s not aligned for optimal performance)
single drive, single aligned partition, no encryption: ~110MB/s (maybe VERY slightly faster than “possibly aligned”)
single drive, single unaligned partition, no encryption: ~70-80MB/s (starting sector 37)

single drive, single aligned partition, encrypted: ~85-90MB/s
single drive, no partitions, encrypted: ~90-95MB/s

array of whole drives, default ext4, no encryption: ~55MB/s
array of whole drives, default ext4, encrypted: ~35MB/s
array of whole drives, optimized ext4, encrypted: ~35MB/s (mkfs.ext4 -t ext4 -j -b 4096 -m 0 -L Data -E stride=128,stripe-width=768 /dev/mapper/md2_crypt )
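
(For the stride/stripe-width values, the usual rule of thumb, assuming the stride=128 reflects a 512 KiB md chunk size, is sketched below; a 6-drive RAID 6 has only 4 data-bearing disks, so stripe-width would come out to 512 rather than 768. Just a sketch, not a tested command.)

# stride       = md chunk size / ext4 block size = 512 KiB / 4 KiB = 128
# stripe-width = stride * data-bearing disks     = 128 * (6 - 2)   = 512
mkfs.ext4 -b 4096 -E stride=128,stripe-width=512 /dev/mapper/md2_crypt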

So my question is: Is it normal for an encrypted RAID6 to be that slow?

Everything’s wrong in here.

4K WD GP (Green) drives have RAID issues on Linux no matter how the array is built; on top of that, the drives themselves are not meant to be used in a RAID 5, and WD barely rates them for a “consumer” RAID 1 or 0, well below your current build.

I’d suggest returning the drives and trading them for RAID-specific drives.

I found this discussion:
http://forums.debian.net/viewtopic.php?f=7&t=50961

This bug report is a bit old: “Properly align encrypted LV - LUKS device”
https://bugzilla.redhat.com/show_bug.cgi?id=488722

This one is from 2010, but offers some info: “Consider increasing LUKS_STRIPES because of 4096 byte sector hds”
http://code.google.com/p/cryptsetup/issues/detail?id=54
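
If the payload offset from an older cryptsetup does turn out to be the culprit, the workaround discussed in those reports is to force the data area onto a larger boundary at format time. Illustrative only, and luksFormat destroys existing data; the offset is given in 512-byte sectors:

# 2048 * 512 bytes = 1 MiB alignment for the encrypted data area
cryptsetup luksFormat --align-payload=2048 /dev/md2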

Sadly, I can’t say anything about the expected speed, because I’ve never used RAID 5/6 on my desktop or with these drives.

It seems that there is a slowdown when using LUKS.

ThePizzaMatrix wrote:

Everything’s wrong in here.

 

4K WD GP (Green) drives have RAID issues on Linux no matter how the array is built; on top of that, the drives themselves are not meant to be used in a RAID 5, and WD barely rates them for a “consumer” RAID 1 or 0, well below your current build.

 

I’d suggest returning the drives and trading them for RAID-specific drives.

Performance is not an issue per se for this array, and it’s definitely not worth spending money on more expensive drives. I just want to know whether this slowness is what’s to be expected from an encrypted RAID 6 of Caviar Greens, or whether there’s a software issue.

I recently found this:

http://www.linuxquestions.org/questions/linux-software-2/optimize-filesystem-performance-on-top-of-mdadm-raid5-luks-845868/

And this also:

http://superuser.com/questions/305716/bad-performance-with-linux-software-raid5-and-luks-encryption

I would expect that kind of performance hit; you are using software RAID which creates a lot of overhead for every I/O.

But he is not complaining about the performance of the Green drives per se; he is just asking what causes the slowdown when using LUKS on top of an mdadm RAID 5 array compared to just using the mdadm RAID 5 array.

RAID 5 will be slower on writes, but not so much on reads.

So, assuming he is using it to store movies or other big files, it will only be slow when copying files onto it, not when reading them. Again, it is known that software RAID 5 will be slower than hardware RAID 5 (and these drives are not qualified for hardware RAID in general).

The question is basically: why is LUKS on top of an mdadm RAID 5 array so much slower than the mdadm RAID 5 array alone?
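
A read-only way to locate where the throughput drops would be to compare the raw array against the dm-crypt mapping (device names are guesses based on the md2_crypt mapper name; nothing is written, so this is safe on a live array):

# raw md array, no encryption in the path
dd if=/dev/md2 of=/dev/null bs=1M count=4096 iflag=direct

# the same read going through dm-crypt
dd if=/dev/mapper/md2_crypt of=/dev/null bs=1M count=4096 iflag=direct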

Fair enough.  The reason is the same: software encryption on top of software RAID. The effects appear additive rather than multiplicative:

Array clear vs. single clear: a ~50% hit from 110 MB/s, leaving ~55 MB/s

Single clear vs. single cipher: a ~14-18% hit from 110 MB/s, i.e. ~15-20 MB/s lost

Predicted array cipher: 110 MB/s - (55 MB/s + 20 MB/s) = 35 MB/s, which matches the measured value

Non-disk hardware (CPU, memory speed, motherboard architecture) is directly relevant to this volume’s performance but wasn’t mentioned. A CPU with AES-NI support would probably go a long way.
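
Whether the CPU is the ceiling for the cipher layer is easy to check (the aes flag works on any recent kernel; the benchmark subcommand only exists in newer cryptsetup releases):

# does the CPU advertise AES-NI?
grep -m1 -o aes /proc/cpuinfo

# raw encryption/decryption throughput, independent of the disks
cryptsetup benchmark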

@Mantrum:

Can you specify how you performed the performance tests on the arrays / disks?

For example, if you used dd, which block size, and so on.

Thanks in advance.
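
(For reference, a typical sequential-write test of the sort being asked about might look like the following; the mount point, block size and count are placeholders, not taken from the thread.)

# write ~4 GiB with 1 MiB blocks and make sure it actually reaches the disks
dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=4096 conv=fdatasync

# read it back after dropping the page cache, so the result isn’t served from RAM
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/data/ddtest of=/dev/null bs=1M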