This is #6 of a multipart solution. Please read the entire document for the full solution.
3. Re-establishing lvm. This is a two-step process. First, declare the appropriate lvm physical volume with the correct uuid. Second, run the restore command, vgcfgrestore, which uses the lvm configuration file we saved earlier to do the rest of the work for us.
Use pvcreate with the uuid of the physical volume as recorded in the lvm configuration file. Looking back at the earlier config file, we find this line under the pv0 properties:
id = "Tz6B09-FDMG-5N7N-2Xjg-tIcM-ceUq-GnqB75"
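If you would rather pull that line out of the saved file than hunt for it by eye, a quick grep works; this assumes the backup was saved to /etc/lvm/backup/vg0 as in the earlier step:
sudo grep -A3 'pv0 {' /etc/lvm/backup/vg0 | grep 'id ='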
That id value is your lvm physical volume uuid. Your command and its response follow (the command is a single line, wrapped here for display):
sudo pvcreate --uuid "Tz6B09-FDMG-5N7N-2Xjg-tIcM-ceUq-GnqB75" --restorefile /etc/lvm/backup/vg0 /dev/md2
Couldn't find device with uuid Tz6B09-FDMG-5N7N-2Xjg-tIcM-ceUq-GnqB75.
Physical volume "/dev/md2" successfully created
The "Couldn't find device" warning is expected; it only means the physical volume with that uuid is missing, which is exactly what we are recreating. The result can be verified with pvdisplay. Note that /dev/md2 now carries the correct uuid:
scott@ubuntu:~$ sudo pvdisplay
"/dev/md0" is a new physical volume of "203.69 MiB"
--- NEW Physical volume ---
PV Name /dev/md0
VG Name
PV Size 203.69 MiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID J6oLG5-u3zv-8qH0-H5x8-WOQO-2ih3-g8pfg5
"/dev/md1" is a new physical volume of "1019.69 MiB"
--- NEW Physical volume ---
PV Name /dev/md1
VG Name
PV Size 1019.69 MiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID kVCFaA-9n8o-o5nJ-37G7-9eEx-80f0-ziAShZ
"/dev/md2" is a new physical volume of "2.72 TiB"
--- NEW Physical volume ---
PV Name /dev/md2
VG Name
PV Size 2.72 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID Tz6B09-FDMG-5N7N-2Xjg-tIcM-ceUq-GnqB75
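If you only want a compact confirmation rather than the full pvdisplay listing, pvs can print one line per physical volume with just the fields of interest:
sudo pvs -o pv_name,vg_name,pv_uuid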
Next, run the restore command. (vgcfgrestore looks for the configuration file we saved earlier in /etc/lvm/backup; the file must have the same name as the specified volume group, e.g., vg0):
scott@ubuntu:~$ sudo vgcfgrestore vg0
Restored volume group vg0
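Should your saved copy of the configuration live somewhere other than /etc/lvm/backup/vg0, say on a USB stick, vgcfgrestore can be pointed at it explicitly with -f; the path below is only a placeholder:
sudo vgcfgrestore -f /path/to/saved/vg0 vg0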
That was easy. Let's verify the result. Note the output of vgdisplay and lvdisplay, particularly the associated uuids. In this one step you have reconfigured both the volume group and the logical volume with their correct properties and uuids, which saves you from having to run separate vgcreate and lvcreate commands, or from trying to work out the "extents" listed under Total PE.
scott@ubuntu:~$ sudo vgdisplay
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 255
Cur LV 1
Open LV 0
Max PV 255
Cur PV 1
Act PV 1
VG Size 2.72 TiB
PE Size 4.00 MiB
Total PE 714182
Alloc PE / Size 714182 / 2.72 TiB
Free PE / Size 0 / 0
VG UUID hgntSv-jM5G-LnY8-GDn9-2Lkh-C9kH-cSv2k2
scott@ubuntu:~$ sudo lvdisplay
--- Logical volume ---
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID OMmCHE-IMw3-nyw7-NplM-w5ht-FIsq-raWAkX
LV Write Access read/write
LV Status available
# open 0
LV Size 2.72 TiB
Current LE 714182
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 252:2
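If lvdisplay were instead to report the LV Status as "NOT available", the volume group simply needs to be activated before the logical volume can be used; re-run lvdisplay afterwards to confirm:
sudo vgchange -ay vg0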
This concludes restoring the lvm logical layer. We are now ready to repair the file/folder structure.
4. Repair the directory structure. This is done by running fsck.ext3 against the volume group/logical volume device, not the raw md device, and with no file system type specified. For my instance that device is /dev/vg0/lv0.
It is a really good sign if you see "NASRAID: recovering journal." The RAID volume on the WDSS is labeled NASRAID, and recovering the journal implies the system has found directory information to build on. Pass 1 can take more than an hour, so go get a cup of coffee.
The "-y" parameter tells the command to assume yes to every confirmation prompt. You almost certainly want this; if you do not, you will be standing there with your finger on the y key through a thousand confirmations!
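If you would like to preview what the check will do before letting it write anything, fsck.ext3 also accepts -n, which opens the file system read-only and answers no to every question:
sudo fsck.ext3 -n /dev/vg0/lv0
The actual repair pass below uses -y.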
scott@ubuntu:~$ sudo fsck.ext3 -y /dev/vg0/lv0
e2fsck 1.42 (29-Nov-2011)
NASRAID: recovering journal
NASRAID has gone 1027 days without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create? yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information
NASRAID: FILE SYSTEM WAS MODIFIED
NASRAID: 33539/365674496 files (8.8% non-contiguous), 113033843/731322368 blocks
This concludes repair of the file system. You are ready to mount the drive.
5. Mounting the drive. First, create a mount point, that is, a directory to which the recovered file system will be attached. I am using /mnt and adding an arbitrarily named subdirectory, "recovered".
Make the new directory.
scott@ubuntu:~$ sudo mkdir /mnt/recovered
Now, mount your recovered drive using the lvm device to the new directory.
scott@ubuntu:~$ sudo mount /dev/vg0/lv0 /mnt/recovered
No real acknowledgement here; mount is silent when it succeeds.
You should now be able to view and explore your restored files under the /mnt/recovered directory.
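If your only goal is to copy the data off and you want protection against accidental writes, you can mount the volume read-only instead:
sudo mount -o ro /dev/vg0/lv0 /mnt/recovered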
This concludes the primary recovery of a WD ShareSpace as an example of a Linux-based NAS. The discussion that follows is really an appendix covering related matters.
Please continue to the next posting for additional helpful topics:
Cloning healthy and unhealthy drives
Determining Raid Parameters when not known
Correcting mistakes