Let's take two imaginary examples: a drive with a single 500GB platter, and another with four 125GB platters. Since the platters are the same physical size, the 500GB platter clearly has 4 times as many bits per square inch (areal density) as a 125GB platter.
If we assume that the only difference between the two platters is one of scale, then the 500GB platter has twice as many tracks per inch and twice as many bits per track as the 125GB platter (2 x 2 = 4 times the density).
A drive's transfer rate depends on its RPM and on the number of bits per track. Assuming the two drives spin at the same speed, the single-platter drive will have double the transfer rate of the 4-platter drive.
So the rule-of-thumb relationship appears to be ...
(transfer rate A) / (transfer rate B) = sqrt(density A / density B)
This rule assumes that all other factors are equal, so it cannot be used to compare AF models with non-AF models.
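To make the rule of thumb concrete, here's a minimal sketch that plugs in the imaginary example above. The numbers are just the hypothetical platter capacities from this post, not real drive specs, and the estimate only holds when all other factors (RPM, platter size, format) are equal.

```python
import math

def transfer_rate_ratio(density_a, density_b):
    """Estimated (transfer rate A) / (transfer rate B), all else being equal.

    Density scales in two dimensions (tracks per inch and bits per track),
    but transfer rate only benefits from the bits-per-track dimension,
    hence the square root.
    """
    return math.sqrt(density_a / density_b)

# Imaginary example: one 500GB platter vs. one 125GB platter of the same size.
# Densities are proportional to platter capacity here.
ratio = transfer_rate_ratio(500, 125)
print(ratio)  # 2.0 -> the single-platter drive should be about twice as fast
```

Note that a 4x density advantage only yields a 2x transfer-rate advantage, which matches the tracks-per-inch vs. bits-per-track split described above.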
In your case I'm only offering one plausible explanation. You could probably get the same benchmark result if you took a 2-platter drive and reduced the number of bits per track while increasing the number of tracks per inch. I can't see WD doing something like that, though.
BTW, Seagate has a 2TB model which has three variants -- 4 heads, 5 heads, or 6 heads. The 4-head version is a lot faster than the 6-head. Only the 4-head version was mentioned in Seagate's Product Manual.
At Tom's Hardware I saw a WD 500GB drive which benchmarked like a short-stroked 750GB model (two 500GB platters, 3 heads, reduced number of zones).