I assume this is the review you are referring to:
The first thing I notice is that the reviewer has cherry-picked the specifications. He mentions the Buffer To Host transfer rate (6 Gbps), which is merely the interface speed and essentially useless as a performance indicator, but he doesn’t mention the maximum sustained data transfer rate. The latter spec would probably place the WD models last in the comparison tests. Moreover, you need to ask yourself whether the “Enterprise Synthetic Workload Analysis” reflects your own usage patterns, or whether it is relevant only to server applications. For example, how relevant are the 4K read/write tests?
As for the figure of 224 MB/sec, the only reference I can find is 224 IOPS. That’s not the same thing.
IOPS = Input/Output Operations Per Second
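To see why the two figures aren’t interchangeable, here is a rough conversion. The 4 KiB transfer size is an assumption on my part (a plausible size for a 4K random test); the review doesn’t state it:

```python
# Rough illustration of why IOPS and MB/s are different metrics.
iops = 224              # the figure actually found in the review
io_size_bytes = 4096    # assumed 4 KiB per operation (e.g. a 4K random test)

throughput_mb_s = iops * io_size_bytes / 1_000_000
print(f"{iops} IOPS at {io_size_bytes} B/op = {throughput_mb_s:.2f} MB/s")
# -> 224 IOPS at 4096 B/op = 0.92 MB/s, nowhere near 224 MB/s
```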
The difference in the maximum sustained data transfer rate for the AF and non-AF WD drives appears to be directly related to the increase in the number of data bits per track. In a non-AF drive, each 512-byte sector in a group of 8 has its own overhead (gap, sync/address mark and ECC bytes), whereas in an AF drive a single 4KB sector has less combined overhead, so the remaining bits can be used for data storage rather than formatting. The typical improvement in data density is of the order of 10%.
171 / 154 ≈ 1.11, i.e. an increase of about 11%
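A back-of-the-envelope model shows where that gain comes from. The per-sector overhead figures below are assumptions for illustration only (real figures vary by drive), chosen to be in the right ballpark:

```python
# Toy model of the Advanced Format efficiency gain: eight 512-byte
# sectors, each with its own overhead, vs one 4 KiB sector with a
# single (larger) overhead block. Overhead byte counts are assumed.
overhead_512 = 65    # assumed gap + sync/address mark + ECC per 512-byte sector
overhead_4k = 115    # assumed overhead for one 4 KiB sector (larger ECC field)

legacy_track = 8 * (512 + overhead_512)   # eight 512-byte sectors
af_track = 8 * 512 + overhead_4k          # one 4 KiB sector, same user data

efficiency_legacy = 8 * 512 / legacy_track
efficiency_af = 8 * 512 / af_track
print(f"legacy: {efficiency_legacy:.1%}, AF: {efficiency_af:.1%}")
print(f"density gain: {efficiency_af / efficiency_legacy - 1:.1%}")
# -> legacy: 88.7%, AF: 97.3%
# -> density gain: 9.6%, i.e. of the order of 10%
```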
To achieve the observed 48% performance increase, the drive relies on intelligent caching (“Dynamic Cache Technology”) and probably intelligent seeking. NCQ (Native Command Queuing) allows the drive to queue several I/O commands and reorder them, so the drive can optimise its seeking by accessing the closest sectors first, thus minimising latency.
For example, instead of seeking to LBA 0, then LBA 1000, and then back to LBA 500, the drive could seek to LBA 0, then LBA 500, and then to LBA 1000. The other thing that all drives do is cache an entire track in memory: if a head is already sitting on a track, the drive may as well retrieve all the sectors within it, not just the requested ones, in anticipation of the next read.
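A minimal sketch of that reordering idea, using the LBAs from the example above. This is a simple greedy nearest-first pass; real firmware is more sophisticated (it also accounts for rotational position, not just seek distance):

```python
# Greedy nearest-first reordering of queued LBAs, a simplified
# stand-in for what NCQ-capable firmware does.
def reorder_queue(queue, head_pos=0):
    """Service queued LBAs in nearest-first order, starting from head_pos."""
    pending = list(queue)
    order = []
    pos = head_pos
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))  # closest LBA
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt          # head is now parked at the sector just read
    return order

print(reorder_queue([0, 1000, 500]))  # -> [0, 500, 1000]
```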
One other thing I notice is that the reviewer states that the benchmarks included a “128K (Sequential)” test, yet the graph shows the results for 128K 100% Read/Write Throughput. Can a “throughput” test really be viewed as a “sequential” test? Aren’t they measuring different things? As I see it, a true sequential test would be limited by the rate at which data could be retrieved from the platters, rather than from cache, so a “sequential” result of 186.7 MB/s doesn’t seem possible.
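A quick sanity check makes the point, assuming the 171 MB/s figure used above really is the platter-limited ceiling for the AF model:

```python
# If the platters can sustain at most ~171 MB/s, a "sequential" result
# above that figure must be getting help from the cache.
max_sustained_mb_s = 171.0   # assumed platter-limited maximum sustained rate
benchmark_mb_s = 186.7       # the review's 128K "sequential" figure

if benchmark_mb_s > max_sustained_mb_s:
    excess = benchmark_mb_s - max_sustained_mb_s
    print(f"result exceeds the platter-limited rate by {excess:.1f} MB/s,"
          " so the cache must be contributing")
```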