The average data rate is the average “sustained” data rate, i.e. the transfer rate to and from the hard drive’s platters. The maximum transfer rate occurs at the outermost zone and the minimum at the innermost zone, because the outermost zone has the greatest number of bytes per track.
Data transfer rate (bytes per second) = bytes/track x revolutions/sec
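To make the arithmetic concrete, here is a quick sketch in Python. The 1.5 MB-per-track figure is purely an assumption for illustration; real values vary by model and by zone:

```python
# Hypothetical example: a 7200 RPM drive whose outermost zone holds
# roughly 1.5 MB per track (assumed value; varies by zone and model).
BYTES_PER_TRACK = 1_500_000   # assumption, not a measured figure
RPM = 7200

revs_per_sec = RPM / 60                          # 120 rev/s
transfer_rate = BYTES_PER_TRACK * revs_per_sec   # bytes/s
print(f"Sustained rate: {transfer_rate / 1e6:.0f} MB/s")   # ~180 MB/s
```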
The access time is the time required for the read head to reach and read the target sector: the head must seek to the target track, then wait for the target sector to pass underneath. Therefore the average access time equals the average seek time (approximately a half stroke), plus the average rotational latency, plus the OS overhead. A 7200 RPM drive has an average rotational latency of about 4.2 msec (half of one full rotation, which takes 8.33 ms). If you examine the spread of data points in your access time graph, you will see that it spans about 8 msec, ie the time of one full rotation, because the rotational wait for any individual access varies from zero to a full turn.
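The same arithmetic as a sketch, with the latency terms following directly from the RPM; the seek and overhead figures are assumed values for illustration, not measurements from your drive:

```python
# Access-time arithmetic for a 7200 RPM drive. The latency terms follow
# from the RPM; the seek and overhead figures are assumed for illustration.
RPM = 7200
rotation_ms = 60_000 / RPM        # 8.33 ms per full rotation
avg_latency_ms = rotation_ms / 2  # 4.17 ms average rotational latency

avg_seek_ms = 8.5                 # assumed average (half-stroke) seek time
overhead_ms = 0.5                 # assumed OS/command overhead

avg_access_ms = avg_seek_ms + avg_latency_ms + overhead_ms
print(f"One rotation: {rotation_ms:.2f} ms, avg latency: {avg_latency_ms:.2f} ms")
print(f"Average access time: {avg_access_ms:.1f} ms")
# Individual accesses wait anywhere from 0 to one full rotation, so the
# scatter in an access-time plot spans ~8.33 ms from top to bottom.
```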
As for the burst rate results, I confess that I don’t have a satisfactory explanation. As I see it, two striped 6Gb/s SATA drives would have a maximum interface data rate of 2 x 600 MB/s, ie 1200 MB/s. However, your result is nearly twice that figure. Either HD Tune is miscalculating the data rate, or perhaps the result reflects a read cache in the OS.
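For reference, this is the ceiling calculation I’m using, with the 8b/10b line-coding overhead (which reduces SATA 6 Gb/s to 600 MB/s of payload) made explicit:

```python
# SATA 6 Gb/s uses 8b/10b line coding, so the usable ceiling per link is
# 6 Gb/s x 8/10 = 4.8 Gb/s = 600 MB/s; two striped drives double that.
link_gbps = 6.0
encoding_efficiency = 8 / 10                                  # 8b/10b coding
per_drive_MBps = link_gbps * 1000 / 8 * encoding_efficiency   # 600 MB/s
striped_max_MBps = 2 * per_drive_MBps
print(f"Theoretical striped burst ceiling: {striped_max_MBps:.0f} MB/s")  # 1200 MB/s
```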
BTW, does the read performance curve have a long flat section at the beginning, or does it have a smooth downward slope?