Help understanding Throughput performance numbers on HGST Firmware WD Drives

I have three Ultrastar drives: two 18 TB WDC WUH721818ALE6L4 and one 10 TB HGST HUH721010ALE600. Can anyone explain the numbers for Throughput Performance? Is that an important one to keep track of? WD firmware (Black) drives don’t show this reading at all in CrystalDiskInfo or WD Dashboard. I’ve attached a couple of screenshots as examples. Any insights would be appreciated.

Thanks.


WUH drives are rebadged HGST models with HGST’s firmware architecture.

SMART attribute definitions are not standardised, and vendor-specific documentation is rarely publicly available. Usually we need to work out the meanings of the numbers on our own.

Let’s start with Seek Time Performance.

model              Current/Worst   Raw
---------------    -------------   ---
HUH721010ALE600    128 / 128       18
WUH721818ALE6L4    140 / 140       15
                   100 / 100       25 (calculated below)

Let’s assume that these two drives interpret these attributes in the same way. The first thing we can see is that lower Raw numbers produce larger Current/Worst (“normalised”) numbers, so this suggests that lower Raw numbers are “healthier”.

Next we can see that the difference in the Raw values is 3, and this corresponds to a difference in the Current/Worst values of 12. This suggests that the normalised value increases by 4 for each unit decrease in the Raw value. If we assume that a normalised value of 100 is “good”, we can see that this corresponds to a Raw value of 25.

One possible explanation could be that the Raw value is the full-stroke seek time in milliseconds. A faster drive would have a lower seek time, so this is consistent with the data.
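Putting that inference into a small sketch (this mapping is deduced from just two data points, so treat it as a hypothesis rather than a documented formula):

```python
def seek_time_normalised(raw_ms, baseline_raw=25, points_per_unit=4):
    """Hypothesised mapping for Seek Time Performance: a raw value of 25
    (assumed full-stroke seek time in ms) scores 100, and each 1 ms
    below that adds 4 points to the normalised value."""
    return 100 + points_per_unit * (baseline_raw - raw_ms)

# Observed: HUH721010ALE600 raw 18 -> 128, WUH721818ALE6L4 raw 15 -> 140
```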

The Raw value of Spin-Up Time is a multi-byte parameter. It is best viewed in hexadecimal mode.

model              Current/Worst   Raw
---------------    -------------   ---
HUH721010ALE600    151 / 151       38684459424 = 0x901C601A0 -> 0x0009 / 0x01C6 / 0x01A0 -> 9 / 454 / 416
WUH721818ALE6L4     84 /  84       25791234375 = 0x601470147 -> 0x0006 / 0x0147 / 0x0147 -> 6 / 327 / 327
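The hex breakdown shown above can be reproduced with a couple of lines of Python (the three-field split is an assumption based on how the hex digits line up, not a documented layout):

```python
def split_raw48(raw):
    """Split a 48-bit SMART raw value into three 16-bit fields,
    highest word first, matching the hex breakdown above."""
    return [(raw >> shift) & 0xFFFF for shift in (32, 16, 0)]

# HUH721010ALE600: 38684459424 -> [9, 454, 416]
# WUH721818ALE6L4: 25791234375 -> [6, 327, 327]
```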

I confess I don’t understand the trend in these numbers. If we assume that the Raw value is the spin-up time in tens of milliseconds, then a drive that spins up in 4.16 seconds gets a better score (151) than a drive that spins up in 3.27 seconds. Both drives are 7200 RPM models. That said, they are different models, so it could be that the parameters are weighted differently.

The Throughput Performance numbers are a mystery to me also.

model              Current/Worst   Raw
---------------    -------------   ---
HUH721010ALE600    134 / 134       96
WUH721818ALE6L4    148 / 148       48

I think that the only way to understand these numbers is to compare them against numbers from the same models. One way is to search the Internet, another is to keep monitoring and recording the attributes for your own drives and adding them to the above tables.

I found a SMART report in another forum:

https://hardforum.com/threads/smart-errors-are-these-bad.1916079/

Throughput Performance

model              Current/Worst   Raw
---------------    -------------   ---
HUH721010ALE600    134 / 134       96
                   135 / 135       92  <-- from hardforum

Spin-Up Time

model              Current/Worst   Raw
---------------    -------------   ---
HUH721010ALE600    151 / 151       38684459424 = 0x901C601A0 -> 0x0009 / 0x01C6 / 0x01A0 -> 9 / 454 / 416
                   100 / 100       0  <-- from hardforum

The normalised Throughput Performance gets better as the Raw value decreases. It appears that 4 points (or maybe 3 points, accounting for rounding) in the Raw value correspond to 1 point in the normalised value. We need more data to get a more accurate understanding of this attribute.

When looking at throughput performance numbers on HGST-firmware WD drives, it’s important to understand that throughput generally refers to the data transfer rate, or how quickly the drive can read or write data. These numbers can vary depending on several factors like drive model, interface (SATA, SAS), and whether the drive is using older or newer firmware.

HGST (now part of Western Digital) drives, especially those with WD firmware, may show different throughput depending on their intended use case—whether they’re designed for enterprise, NAS, or consumer applications. If you’re comparing the throughput numbers, you should consider the sequential read/write speeds (for large file transfers) as well as random read/write speeds (important for more fragmented data or system operations).

Additionally, firmware updates can impact throughput by improving efficiency, reducing latency, or enhancing error correction, so it’s always worth checking if your drive is running the latest firmware. Keep in mind, other factors like your system’s interface (SATA III vs. NVMe) and whether you’re using a RAID setup can also influence these throughput numbers.

If you’re seeing discrepancies or unexpected performance, make sure you’re comparing the numbers under the same conditions (e.g., no background tasks, consistent benchmarking methods) to get an accurate picture of what your drive is capable of.

I use M.2 NVMe SSD for operating systems as these have faster random performance.

Disks are better as block-oriented storage, as random speed varies widely in my experience. RAID boxes are also sluggish, as disk media read speeds vary substantially.

Disks are cheap which is why they still are in use.

SMART readings are hit and miss in data centers with millions of disks. Disks fail often which is why redundancy is several layers deep.

My old HUH721212ALE601 is stuffed in a USB box and it has racked up 31,000 hours and it is still going strong.

I find USB boxes convenient as I can use several on a hub and copy files that are important.

7-Zip can make block backups and QuickPar can make redundancy files, such that the loss of parts is not fatal to recovery.

I AM using an NVMe stick for my OS ( Samsung 980 Pro 1TB ). If you look at the screen shots from Crystaldisk, you’ll notice 12 drives. They’re ALL internal as my workstation can accommodate that many. The first 4 are 980 Pro SSD’s ( one 1TB for OS and three 2TB’s for video editing ). The second 4 are Samsung 870 EVO’s ( two 2TB’s and two 4TB’s ) which are for immediate storage, playback and quick reference. The last 4 are large capacity HDD’s which are for long term storage only. I’m not running a RAID at all, as each drive has its own individual function. For back-up and redundancy, I have 6 WD My Book/Elements external drives between 10-20 TB each and one HGST Ultrastar 10TB living in a Startech external USB case for backing up EVERYTHING! My question was more one of curiosity, because I just like knowing what things mean! That’s all.

Sorry, the screenie was too crowded to see.

I use laptops so NAS boxes are pretty much the way I run my show

USB disks galore are mostly for redundant copies

I’m sorry, but this conversation has strayed WAY off topic. My question was about one thing only, and I haven’t got time to get into discussions about unrelated subjects.

I found these CDI screenshots for the WUH721818ALE6L4:



These are the updated spin-up time results:

model              Current/Worst   Raw
---------------    -------------   ---
WUH721818ALE6L4     84 /  84       25791234375 = 0x601470147 -> 0x0006 / 0x0147 / 0x0147 -> 6 / 327 / 327
WUH721818ALE6L4     83 /  83       0006015C0154 -> 0x0006 / 0x015C / 0x0154 -> 6 / 348 / 340
WUH721818ALE6L4     82 /  82       000701600169 -> 0x0007 / 0x0160 / 0x0169 -> 7 / 352 / 361

I think that a normalised value of 100 corresponds to 0.0 seconds. Every additional 0.2 sec increase in spin-up time reduces the normalised value by 1.

model              Current/Worst   Raw
---------------    -------------   ---
WUH721818ALE6L4     100 / 100      0.0 seconds
WUH721818ALE6L4     90 /  90       2.0 seconds
WUH721818ALE6L4     85 /  85       3.0 seconds
WUH721818ALE6L4     84 /  84       3.2 seconds
WUH721818ALE6L4     83 /  83       3.4 seconds
WUH721818ALE6L4     82 /  82       3.6 seconds
WUH721818ALE6L4     80 /  80       4.0 seconds
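That model can be expressed as a one-liner (an inference from three observed data points on one drive model, not vendor documentation):

```python
def spinup_normalised(spinup_seconds):
    """Hypothesised mapping for Spin-Up Time on the WUH721818ALE6L4:
    100 points at 0.0 s, minus 1 point per additional 0.2 s."""
    return 100 - round(spinup_seconds / 0.2)

# Observed: 3.2 s -> 84, 3.4 s -> 83, 3.6 s -> 82
```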

These are the updated Throughput Performance results:

model              Current/Worst   Raw
---------------    -------------   ---
WUH721818ALE6L4    148 / 148       0x30 -> 48
HUH721010ALE600    135 / 135       92
WUH721818ALE6L4    136 / 136       0x60 -> 96

Each 1-point increase in the normalised value corresponds to a 4-point reduction in the Raw value. This means that lower Raw numbers equate to higher throughput.

Throughput could be measured as the amount of work done in a given time, in which case larger numbers would be better. Alternatively, it could be measured as the amount of time taken to perform a given task, in which case smaller numbers would be better. Since lower Raw numbers score better here, this attribute appears to be a time parameter.
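One way to express the inferred relationship is to anchor it, per model, at an observed (raw, normalised) pair (a sketch built from very few data points, not a documented formula):

```python
def throughput_normalised(raw, anchor_raw, anchor_norm):
    """Hypothesised mapping for Throughput Performance: each 4-point
    drop in the Raw value adds 1 normalised point, anchored at an
    observed (raw, normalised) pair for the specific drive model."""
    return anchor_norm + (anchor_raw - raw) // 4

# WUH721818ALE6L4: anchored at raw 96 -> 136, predicts raw 48 -> 148
# HUH721010ALE600: anchored at raw 96 -> 134, predicts raw 92 -> 135
```

Note that the anchor differs between the two models (raw 96 maps to 136 on the WUH but 134 on the HUH), which is why cross-model comparisons of this attribute are unreliable.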

The advantage of my tooless USB boxes is that disks can be swapped out as desired and copying data is just as easy.

Crystal disk chokes when disks start using spare sectors, which is standard in hard disks as the platters are pushed to the limit. Someday the default threshold will be fixed.

What do you mean by that???

Why do SMART tools report attributes differently?
https://www.hddoracle.com/viewtopic.php?p=22249#p22249

Here’s the Wiki page on S.M.A.R.T.

Scroll down the page to read the description of Throughput Performance. Just note that “real throughput” also depends on the rest of your PC, not just the drive’s performance.

Bear in mind that not all drives report the same set of attributes.

No clue why your temps and spin-up time values are so large.

That’s because you didn’t bother to read or understand my posts.

This drive (HDS722580VLAT20) has no reallocated or pending sectors, but it has a high (?) read error rate and low throughput. This suggests that these two attributes are related, as expected.

https://smartmontools-support.narkive.com/jeBruhRe/failed-smart-self-check-but-only-failing-attribute-is-throughput-performance

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      RAW_VALUE
1   Raw_Read_Error_Rate     0x000b  094   094   060    Pre-fail  262163 = 0x40013 -> 0x0004 / 0x0013 = 4 / 19
2   Throughput_Performance  0x0005  001   001   050    Pre-fail  7373 FAILING_NOW

I find disk throughput improves after a zero wipe, as the drive is able to recover from possible problems.