Hi everyone,
I have been using the NAS for over six months and never had any issues. Today I am seeing four solid red lights underneath the drive bays and the system is down. I logged in to the dashboard and it shows 0 MB available. I checked the disks and the disk status is healthy for all of them, yet the RAID configuration seems to have simply disappeared. Under Storage -> RAID Profile -> RAID Health it says "No configured volumes. Set up a RAID mode to configure new RAID volumes on this device."
I tried the following: (1) rebooting; (2) shutting down, removing and reinserting the disks, then powering up; (3) shutting down, removing the disks, powering up, shutting down again, reinserting the disks, and powering up. In every case the lights turn blue and flicker during startup, then go back to solid red.
I ran the quick system diagnostics and it didn't pick up anything. I am now running the full test, which has been going for a while…
Is there any way to bring my system back to how it was before? Or is the only option to reconfigure the RAID and lose all the data?
Has anyone come across this issue? Any suggestions would be of great help. Thanks
My PR4100, running RAID 5 with four 6 TB drives, just did the same thing. It was about 95% full, containing Acronis system backups. I was attempting to log on to it to cull the older backups when it changed from full capacity to zero capacity with no RAID configured. It apparently lost the RAID configuration.
In my past experience I've been able to recover from RAID configuration issues on other systems by putting similar blank drives into the unit, configuring it identically to how it had been, and then swapping the drives out for the originals. That did not work for this unit. My guess is that the RAID information stored on the drives somehow got corrupted.
Any thoughts on how I might recover the data would be hugely appreciated. I'm hoping the answer is not that all the data is really lost.
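If I can get in over SSH, one read-only way to test that guess would be to ask mdadm to print whatever superblock it finds on each drive's partitions. This is only a sketch and the partition names are assumptions; the actual layout on the PR4100 may differ:
# Read-only: print the md superblock stored on a partition, if one exists
mdadm --examine /dev/sda1
mdadm --examine /dev/sda2    # repeat for the /dev/sdb*, /dev/sdc*, /dev/sdd* partitions
If the data partitions still report a superblock with the expected array UUID, the metadata is intact and the array can usually be re-assembled rather than re-created; if mdadm reports that no md superblock is detected, the metadata really is gone from that partition.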
df
blkid
mount
gdisk -l /dev/mmcblk0
gdisk -l /dev/sda [same for /dev/sdb, /dev/sdc, /dev/sdd]
Kernel info
dmesg
This last one is quite long, so please post it to Pastebin and just provide the link here.
I may be able to help you, but I need all of this info; see the sketch below for a way to collect it in one go.
Note: SSH access is at your own risk. These commands only print information, but some of these tools can also be used to wipe disks, resulting in data loss or even a bricked NAS. It is advised to create a USB rescue disk.
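Here is a rough sketch for gathering all of the above in one pass and writing it to a single file. The file name is just an example, and I'm assuming "kernel info" means uname -a; all of these commands only read information:
# Collect the requested diagnostics into one file
{
  df
  blkid
  mount
  gdisk -l /dev/mmcblk0
  for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do gdisk -l "$d"; done
  uname -a
  dmesg
} > /tmp/nas-diag.txt 2>&1
Then post the dmesg part of /tmp/nas-diag.txt to Pastebin and paste the rest here.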
Hi Tfl - I have the same problem. Drive 1 had been showing red for a year while the other three drives in the RAID 5 were fine. Two days ago I finally changed drive 1 to JBOD, and now all the other drives are red as well, with no volume.
root@MyCloudPR4100 dev # mdadm --D /dev/md0
mdadm: unrecognized option '--D'
Usage: mdadm --help
for help
root@MyCloudPR4100 dev # mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Nov 22 17:20:50 2020
Raid Level : raid1
Array Size : 2094080 (2045.00 MiB 2144.34 MB)
Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Nov 22 17:32:44 2020
State : clean, degraded
Active Devices : 3
Name : MyCloudPR4100:0 (local to host MyCloudPR4100)
UUID : 8996083b:975be7a1:673eb395:843c62bc
Events : 7
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       -       0        0        3      removed
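For what it's worth, that md0 is only a ~2 GB RAID 1 built from the first partition of each disk (my guess is a system/swap mirror the firmware creates), so its state says nothing about the big data volume, which would live on a separate md device. Two more read-only commands that may show what happened to it; the interpretation of the layout here is an assumption:
# Show the status of all md arrays the kernel currently has
cat /proc/mdstat
# Summarize the arrays mdadm currently sees, with their UUIDs
mdadm --detail --scan
If the data array does not appear in either listing, the --examine check sketched earlier in the thread, run against the larger data partitions, should show whether their superblocks survived.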