Sure hope someone can help. We had a momentary power loss yesterday. Our 4TB Sharespace configured with RAID5 is plugged into a UPS but only on surge suppression. Upon restart, the NAS shows 4 failed drives and states the “data volume” doesn’t exist. Can’t access the NAS from anywhere within the network. Have tried a reboot and a shutdown. No success. The data is not backed up. We’re in the middle of migrating data from the NAS to a new server. Still have 1.3 TB to go. Is there anything I can do to recover the 1.3 TB of data still on the NAS or are we pooched?
Sorry to hear that. I suspect the news is mostly bad.
AFAIK rebuilding a RAID5 after multiple simultaneous HDD failures isn't possible. I've also been told a damaged RAID5 on a Sharespace can't be rebuilt anyway, but someone else may know more.
If the data is valuable enough a recovery lab may be able to help you. I’ve used them three times over the last ten years. They succeeded two out of three times. In all cases the drives were damaged by power supply issues.
Update: After much googling and forum research, I believe the physical integrity of the drives is OK. It's the disk array that's pooched. I was able to establish an SSH connection and run cat /proc/mdstat, which leads me to believe I may be able to temporarily restore and mount the datavolume so I can copy my data off to another external HDD. If this doesn't work, then I'm in the market for a reliable data recovery company. I'm keeping my fingers and toes crossed.
Mommabear wrote:
Update: After much googling and forum research, I believe the physical integrity of the drives is OK. It's the disk array that's pooched. I was able to establish an SSH connection and run cat /proc/mdstat, which leads me to believe I may be able to temporarily restore and mount the datavolume so I can copy my data off to another external HDD. If this doesn't work, then I'm in the market for a reliable data recovery company. I'm keeping my fingers and toes crossed.
It's the right approach to use SSH to manually rebuild the array. Four failed LEDs doesn't mean all four disks are actually bad. Just use
cat /proc/mdstat
to view the state of the array,
then use mdadm to assemble it again and let it resync. I did this a few months ago and everything was OK.
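For anyone following along, here's a rough sketch of those steps over SSH. Note the device names are assumptions: on many Sharespace firmwares the data volume is md2 built from the fourth partition of each disk (sda4 through sdd4), but check /proc/mdstat and mdadm --examine on your own unit before running anything. Requires root.

```shell
# 1. Inspect the current array state: which md devices exist and
#    which members are missing or marked failed.
cat /proc/mdstat

# 2. Examine each member's RAID superblock (event counts, array UUID).
#    Partition names below are assumptions -- verify them first.
mdadm --examine /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4

# 3. Try to reassemble the data volume. --force lets mdadm use members
#    whose event counts disagree slightly, which is common after an
#    abrupt power loss.
mdadm --assemble --force /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4

# 4. If the array comes up (check /proc/mdstat again), mount it
#    read-only and copy the data off before touching anything else.
mkdir -p /mnt/recover
mount -o ro /dev/md2 /mnt/recover
```

Mounting read-only is deliberate: if the assembly is wrong you don't want the filesystem writing to a half-broken array while you're still trying to rescue data.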