This is the result after smartctl -t short /dev/sdb
I entered smartctl -a /dev/sdb to get the results printed, is that correct?
=== START OF INFORMATION SECTION ===
Device Model: WDC WD30EFRX-68EUZN0
Serial Number: WD-WCC4N3DXUEFJ
Firmware Version: 82.00A82
User Capacity: 3,000,592,982,016 bytes
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 9
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Mon Sep 11 11:59:19 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 241) Self-test routine in progress…
10% of test remaining.
Total time to complete Offline
data collection: (40080) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x703d) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
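For reference, status byte 241 above means a self-test is in progress (values 241 to 249 encode 10% to 90% remaining). A hedged sketch of how one might check on it, using smartctl's real flags; the sample line parsed below is copied in shape from the output above, not fresh output:

```shell
# The short test takes ~2 minutes per the report above. To check on it:
#   sudo smartctl -l selftest /dev/sdb   # self-test log only
#   sudo smartctl -A /dev/sdb            # attribute table only
# Extracting the status byte from a saved report line (illustrative sample):
line='Self-test execution status:      ( 241) Self-test routine in progress...'
code=$(printf '%s\n' "$line" | grep -o '( *[0-9]*)' | tr -d '( )')
echo "$code"   # 241; decimal 241..249 means in progress, 10%..90% remaining
```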
When I do dd if=/dev/zero of=/dev/sdb, I see no LEDs blinking on the device.
In the terminal, when I press Enter it goes to the next line and the cursor blinks, but nothing is shown.
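That silence is normal: dd prints nothing until it finishes or is interrupted. With GNU coreutils you can ask for a live byte count via status=progress. A safe sketch on a scratch file (the real run in this thread targets /dev/sdb, which is destructive):

```shell
# Demonstrate status=progress on a throwaway file instead of a disk.
# The destructive equivalent from the thread would be:
#   sudo dd if=/dev/zero of=/dev/sdb bs=1M status=progress
dd if=/dev/zero of=/tmp/dd_demo.bin bs=1M count=8 status=progress
stat -c %s /tmp/dd_demo.bin   # 8388608 bytes (8 MiB) were written
rm /tmp/dd_demo.bin
```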
It appears to have worked and the drive has repaired the faulty sectors. If you scan back to your earlier post you will see that Current_Pending_Sector was 68 and is now 0. The drive must have repaired them without needing to reallocate them, which is good.
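To watch just that counter without reading the whole report, you could filter the attribute table. The sample line below is an assumption modelled on typical smartctl output for this drive; the raw value sits in the last column:

```shell
# Pull the Current_Pending_Sector raw value from saved `smartctl -A` output.
# Illustrative sample line (the thread saw 68 before the zero-fill, 0 after):
report='197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       68'
pending=$(printf '%s\n' "$report" | awk '/Current_Pending_Sector/ {print $NF}')
echo "$pending"   # 68
```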
If you power down the system and reboot, you may get the option to now rebuild the array. Go ahead and let it do this, and after it completes, a final long test will verify all is well again.
Yup, good to go ahead. The errors can be cleared, and they will stop once the array has been brought back online. If you have the manual rebuild option, you can continue.
Ok, because if I click manual rebuild I get this message:
Warning: The Volume is degraded now. Remember to rebuild the RAID for data integrity.
Auto-Rebuild Configuration allows you to enable or disable the Auto-Rebuild feature. You can also manually rebuild by clicking Next. Please note that rebuilding will erase all data on the newly inserted drive.
I think it’s merely saying that your data is “unsafe” until a rebuild has taken place and the array has been brought back online. Rebuilding is an intensive process, but RAID 1 arrays are the least burdened. If you are at all concerned, make sure you have a separate backup just in case.
Other than that, it’s a simple matter of following the steps in the manual. A manual rebuild is the way to go at this stage.
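If the box exposes SSH (an assumption; many WD NAS units run Linux md RAID underneath), rebuild progress can also be watched from the shell. The recovery line below is a hypothetical example of what a RAID 1 rebuild looks like:

```shell
# On the device itself you would read the live file:
#   cat /proc/mdstat
# Extracting the percentage from a (hypothetical) recovery line:
line='[==>..................]  recovery = 12.6% (371968/2930135488) finish=214.0min'
pct=$(printf '%s\n' "$line" | grep -o '[0-9.]*%')
echo "$pct"   # 12.6%
```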
It looks like everything is going perfectly so far. The update takes quite some time, but I can still access my data, and I see that it’s updating only drive 2.
I want to thank you very much for all your help, I really appreciate it!
If you ever need a photographer and I’m in your town I will do it for free for you
Excellent news and really not a problem. Information on the forums is a little sparse on how to repair the drives so hopefully your experience will be of use again to others.
Just be sure not to touch the data until the replication is complete. In theory, rebuilding a RAID array does allow you to keep working in the meantime, but WD’s implementation of it is not enterprise grade; I am unclear how the box will react when there is new data that needs to be synced across the array to keep the mirror consistent.
Should you ever be in London, feel free to look me up and we can grab a pint.
I’m not working or adding stuff while it’s rebuilding, I’ll let it finish first.
Btw, do you have any idea why this drive became faulty? Is it just random bad luck or something else?
Cool, I am in London quite often for work, so count on a message from me to grab one!
It could be caused by anything really: power cuts, knocks, or factory defects. What’s interesting (if I recall this correctly) is that the drive didn’t flag a SMART error that the WD firmware then read as a degraded disk, so it is possible (though very hypothetical!) that the defect was caused by the WD software (incidentally, what version are you running?). Another scenario is that the drive was lagging: with the data already written to sda, the write to sdb failed, knocking the drive out of the array. Or there was a write operation at the same moment a jolt knocked the heads of sdb hard enough for the write to fail and the drive to flag a bad sector. In any event it was a minor error, because the sector was still readable and was then zeroed out on the format.
Hardware RAID setups have a separate memory cache that only empties once the drive verifies the data was correctly written to it. I do not know how WD configures these boxes to ensure that minor write/read failures of this nature are mitigated; perhaps they don’t, and it takes very little for an array to fail? Hopefully someone will come along to fill in the gaps.
If you can spare the time (tonight, say), a final long test would be a good measure of things once the array is back online.
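For that final check, these are the standard smartctl invocations (the roughly 255-minute estimate comes from the report earlier in the thread); the log entry parsed below is a hypothetical sample of a clean result, not output from this drive:

```shell
# Start the extended test and, hours later, read the verdict:
#   sudo smartctl -t long /dev/sdb
#   sudo smartctl -l selftest /dev/sdb
# Checking a saved log line for a clean pass (illustrative sample line):
entry='# 1  Extended offline    Completed without error       00%     12345         -'
printf '%s\n' "$entry" | grep -q 'Completed without error' && echo PASS
```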