Thanks to everyone in this thread for sharing their experience. My situation is as follows:
WD ShareSpace with firmware ver. 2.2.90 with MioNet 4.3.0.8
It has four 1 TB disks configured as RAID 5 with a single volume. I was unable to access the data on it: over the network it showed all the shares, but it would not let me open any of them.
During a data copy I suspect it reached full capacity, and then "DataVolume" became unavailable. We tried restarting it, but with no success. It now shows the alert "[Volume Status] Data Volume 'DataVolume' doesn't exist!", all four disks report "Failed" status, and the volume shows as Unassigned.
I feel lucky that this thread helped me a lot; the steps worked for me in combination. Mainly I followed @footleg's instructions:
~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: forcing event count in /dev/sdd4(3) from 4283834 upto 4283840
mdadm: clearing FAULTY flag for device 3 in /dev/md2 for /dev/sdd4
mdadm: /dev/md2 has been started with 3 drives (out of 4) and 1 spare.
~ $ pvcreate /dev/md2
No physical volume label read from /dev/md2
Physical volume "/dev/md2" successfully created
~ $ vgcreate lvmr /dev/md2
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Volume group "lvmr" successfully created
~ $ lvcreate -l 714329 lvmr -n lvm0
Incorrect metadata area header checksum
Logical volume "lvm0" created
~ $ fsck.ext3 /dev/lvmr/lvm0
Some of the output from these commands differed from @footleg's output above. Sadly, I did not capture the output during the process for you lovely people, but the differences were small. Only the last command, fsck.ext3, printed a couple of lines saying it had initiated something; then my cursor just kept blinking. I waited more than 50 minutes, then cancelled it with Ctrl+C so I could check the status. The status then showed three drives healthy and the first disk as dead (which seemed much better than four dead). When I ran mdadm -D /dev/md2 it showed the same three fine and one dead drive, but no percentage against a Rebuild Status line (in fact that line was not there at all), and State : clean, degraded, with no mention of a rebuild. I think that was due to my mistake of cancelling the process.
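For anyone in the same position, a rebuild can be watched without interrupting anything. This is only a sketch, assuming the same /dev/md2 array and /dev/sdd4 member names as above:

```shell
# Kernel's own view of all arrays; an active rebuild shows a
# progress bar like [==>..........] recovery = 17.3%
cat /proc/mdstat

# Detailed per-device status for the data array, including the
# "Rebuild Status" line when a resync is actually running
mdadm --detail /dev/md2

# If the array came up degraded and the missing disk is believed
# healthy, re-adding it should start a rebuild (assumption: sdd4
# is the dropped member in your case)
mdadm /dev/md2 --add /dev/sdd4
```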
I did try running the fsck.ext3 command again, but it said something was busy. Then I tried changing lvmr to lvmr1, as suggested in another reply in this thread, but that did not work.
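The "busy" error usually just means the logical volume is still mounted or held active. A hedged sketch of freeing it before re-running the check, assuming the lvmr/lvm0 names used above:

```shell
# Make sure nothing has the filesystem mounted
umount /dev/lvmr/lvm0 2>/dev/null

# Deactivate the logical volume so fsck can get exclusive access
lvchange -an /dev/lvmr/lvm0

# Reactivate it and force a full check
lvchange -ay /dev/lvmr/lvm0
fsck.ext3 -f /dev/lvmr/lvm0
```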
Finally, I took all the drives out, numbering them 1 to 4 from bottom to top, and placed them in a PC, connecting the SATA cables from SATA 0 to SATA 3 in order. I downloaded the Fedora 14 Desktop Live image, burnt it to CD, and booted the system from it. Once it reached the desktop, an error appeared in the top-right bar saying one disk had serious bad sectors. I went through that message; it showed the disk problem status, and under multi-disk devices I clicked and selected to mount the volume (it was named lvmr, as set up earlier through SSH, so I believe that played a positive part as well). I have copied some 30 GB of data from it and it seems fine. Further copying is in progress now. Thank you all for bringing back the dead data I had thought was lost.
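If the desktop tool had not offered to mount it, the same thing can be done by hand from a live-CD terminal. A sketch, assuming the array and volume names from earlier; mounting read-only is the cautious choice while copying data off:

```shell
# Assemble the RAID set from whichever member disks are present
mdadm --assemble --scan

# Rescan for LVM metadata and activate the volume group on md2
vgscan
vgchange -ay lvmr

# Mount read-only so the recovery copy cannot make things worse
mkdir -p /mnt/recovery
mount -o ro /dev/lvmr/lvm0 /mnt/recovery
```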
On WD's part, they should take this type of support more seriously. Believe me, problems happen with even the most stable products, but there must be a good response and some resolution. I have no knowledge of Linux, yet you people threw in options which I tried and which succeeded; I hope WD could do this little thing as well. After all, it is their product, and I am not sure whether the company's directors or owners have read through the community posts here. Officially, I am waiting for their reply to the email I submitted yesterday.