Unburying this topic…
okay, I read a lot of this EXTREMELY interesting and useful topic; now I have some questions
First, here is the basis of my reasoning:
After I logged in through ssh:
1/ “fdisk -l /dev/sdb” didn’t give me a lot of interesting info, but:
- fdisk works, so okay, it’s definitely MBR
2/ “mdadm --detail --scan” gives me useful and quite nice info
/dev/md0 and /dev/md1 are raid1
/dev/md2 is raid5
Why is this nice:
/dev/md0 (~200MB) (mirror RAID) is the root directory, as shown by “mount” or “df -h”
/dev/md1 (~1GB) (mirror RAID) is ??? don’t know… maybe a log for md2, the size seems to fit…?
/dev/md2 (~3TB) (RAID-5) is where all the data seems to be stored.
3/ “pvdisplay” shows, as expected, that LVM is used on the RAID-5 md2
“vgdisplay” and “lvdisplay” didn’t show anything interesting, except that all the space is used (a few more read-only inspection commands are sketched just below).
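For anyone wanting to reproduce this inspection, here are a few extra read-only commands that should confirm the layout; the device names are the ones from my box, and the short pvs/vgs/lvs forms are only there if the firmware’s LVM tools provide them:
cat /proc/mdstat              # quick overview of all md arrays and their member partitions
mdadm --detail /dev/md2       # level, state and member devices of the RAID-5 array
pvs ; vgs ; lvs               # compact view of the PV / VG / LV layers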
Second, here is my theory… (and I don’t have the space to back up my data to test it)
This is quite an advanced theory, I know.
Basically, ShareSpace is a “simple” “LVM over RAID-5” system.
If I remove a 1TB disk and replace it with a 4TB disk from WD, I assume I will get 4 partitions (big assumption):
- sdX1 (for the root partition on /dev/md0), sdX2 (for a log?), sdX3 (for ???, maybe a log of sdX1) and sdX4 (for data on /dev/md2)
Is this sequence possible?
1/ I unmount all /shares/* and /DataVolume
2/ I set faulty and remove, let’s say, /dev/sda4
/dev/md2 will go into a “clean, degraded” state
3/ I use fdisk on /dev/sda to:
3.1/ remove the sda4 partition
3.2/ create an extended partition
3.3/ create a new 1.9TB logical partition (/dev/sda5)
3.4/ create another 1.9TB logical partition (/dev/sda6) with the SAME size as /dev/sda5
That’s the 1st (and weakest) point of my theory: is it possible to run fdisk on a disk that is in use??? In my experience yes, but it depends…
4/ I should now have 5 partitions: /dev/sda1 (untouched), /dev/sda2 (untouched), /dev/sda3 (untouched), /dev/sda5 and /dev/sda6
Using mdadm, I add /dev/sda5 and /dev/sda6 to /dev/md2.
Here is the hard technical point: we have to modify the array so it takes all the partitions into account, here 5 partitions: /dev/sda5, /dev/sda6, /dev/sdb4, /dev/sdc4 and /dev/sdd4 (if not done properly, /dev/sda5 will be added to the array and /dev/sda6 may only be used as a spare disk… maybe an idea…). “mdadm --grow …” should do the trick (see the command sketch after this list).
This will leave a RAID-5 array with 5 used partitions and no spare.
5/ Once the array is synced (probably not even necessary), if it works as I expect, “vgdisplay” should show some free space (roughly the size of the smallest used partition in the md2 array)
6/ Grow the logical volume with lvextend, lvresize or any other LVM tool
7/ Finally run resize2fs to take all the modifications into account
8/ Mount the /shares directories and /DataVolume again
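To make steps 2 to 8 concrete, here is a rough command sketch for the first disk. I have NOT tried this on a ShareSpace, it is written from memory of generic mdadm/LVM systems: the volume group and logical volume names (vg0/lv0) are placeholders for whatever vgdisplay/lvdisplay actually report, the backup file path is just an example, the final mount assumes /DataVolume is still listed in /etc/fstab, and I added a pvresize step which I believe is needed before the volume group actually sees the extra space.
# step 2: mark the old data partition as failed and pull it out of the RAID-5 array
mdadm /dev/md2 --fail /dev/sda4
mdadm /dev/md2 --remove /dev/sda4

# step 3: repartition interactively with fdisk (delete sda4, create an extended
# partition and two ~1.9TB logical partitions), then ask the kernel to re-read
# the partition table since the disk is still in use (if partprobe exists on the box)
fdisk /dev/sda
partprobe /dev/sda

# step 4: add both new partitions, then reshape the array to 5 active members
# (--backup-file may or may not be required depending on the mdadm version)
mdadm /dev/md2 --add /dev/sda5
mdadm /dev/md2 --add /dev/sda6
mdadm --grow /dev/md2 --raid-devices=5 --backup-file=/root/md2-grow.bak

# steps 5-7: let LVM see the bigger array, grow the logical volume, grow the filesystem
pvresize /dev/md2
lvextend -l +100%FREE /dev/vg0/lv0
resize2fs /dev/vg0/lv0

# step 8: mount the data volume and shares again
mount /DataVolume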
The 2nd weak point of this theory… will all these modifications survive a reboot?
* if the assembly and start of the array is hardcoded, it will definitely fail
* if the hard disks are checked before use (manufactured by WDC, partitioned, with EXACTLY 4 partitions), it will definitely fail
but if it behaves cleverly and lets the system make its own decision based on “what is under my hand” (as I have done on other systems), the array may remain intact.
If it works, and this operation is repeated on all disks, we will have 4 disks x 2 partitions x ~2TB, thus a ~16TB server… not a good idea in MY opinion, as to MY mind a RAID array should never be 100% trusted… bad hard drives happen: trust me.
*** IMPORTANT *** the last weak point ***
If it works, it WILL BE NECESSARY to change from RAID-5 to RAID-6: think about it for 1 sec…
if one hard disk fails… 2 partitions of the array fail… an apocalypse on RAID-5, but it could be managed on RAID-6
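For the level change itself, with a reasonably recent mdadm I believe the command would look roughly like this; untested on the ShareSpace, it assumes the 8th partition is still attached as a spare (my aside about sda6 above) so the conversion can keep the same capacity, the backup file path is just an example, and the reshape will take a very long time on ~2TB partitions:
# convert the 7-member RAID-5 (plus one spare) into an 8-member RAID-6
mdadm --grow /dev/md2 --level=6 --raid-devices=8 --backup-file=/root/md2-raid6.bak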
What is sure is that the web interface will not be useful nor usable anymore (unless the wix"Files".class are tweaked), at least for the Disk/RAID management pages.
Let me jump directly to the end:
- same as before, 5 partitions on each disk
- using LVM, I create one logical volume spanning /dev/sda5 and /dev/sda6
pvcreate /dev/sda5 /dev/sda6
vgcreate disk_a /dev/sda5 /dev/sda6
lvcreate -l 100%FREE -n disk disk_a
- using mdadm, I create/add/modify a 4-disk array using those per-disk logical volumes
- using LVM,… no, ShareSpace already uses LVM to create the shares, so the stack becomes:
LVM (to manage “big” disks) over mdadm (to manage failure) over LVM (to manage the shares), spelled out in the sketch below.
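Spelled out for all four disks, the stack would look roughly like this. None of the names below come from the ShareSpace itself: the per-disk volume groups disk_a…disk_d, the array /dev/md3 and the top-level group datavg are invented for the example, and I have not tried assembling an md array on top of LVM logical volumes on this firmware.
# bottom LVM layer: one big logical volume per disk, spanning its two ~1.9TB partitions
pvcreate /dev/sda5 /dev/sda6
vgcreate disk_a /dev/sda5 /dev/sda6
lvcreate -l 100%FREE -n disk disk_a
# …repeat for disk_b (/dev/sdb5 + /dev/sdb6), disk_c and disk_d

# md layer: a 4-member RAID-5 over the per-disk logical volumes
# (RAID-5 is enough here: one dead disk only takes out one member of the array)
mdadm --create /dev/md3 --level=5 --raid-devices=4 \
      /dev/disk_a/disk /dev/disk_b/disk /dev/disk_c/disk /dev/disk_d/disk

# top LVM layer: the volume group that actually carries the shares
pvcreate /dev/md3
vgcreate datavg /dev/md3
lvcreate -l 100%FREE -n DataVolume datavg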
The weak point: beware during boot, I have no idea whether LVM over RAID over LVM will be properly initialized.
I fear the webadmin tools will no longer be useful nor usable for volume management.