Hi again,
I’ve been running OMV on my MyCloud 4TB for a while now, and I’m now having serious problems: the root partition has completely filled up (0 bytes remaining). As a result, the OMV GUI won’t get past the login screen and most other services won’t run at all.
I was able to recover about 80MB or so by deleting old logs, but this has now filled up again as well. I can’t find any obvious large files that are taking up all the space or that could be deleted easily. apt-get clean etc. have also been run and there’s just no more space to eke out.
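For reference, this is roughly how I was hunting for the space hogs (the exact depth is just an example):

```shell
# Find the biggest directories on the root filesystem only;
# -x stops du from descending into the 3.6TB data mount
sudo du -xh --max-depth=3 / 2>/dev/null | sort -rh | head -n 20

# Clear the apt package cache (already done in my case)
sudo apt-get clean
```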
I therefore hoped to be able to just resize the root partition to give it another GB or so, but now that I’m set up to do this, with the drive in an external caddy and my MyCloud dismantled, I find that the partition structure is more complicated than I first thought:
GParted reports:
15MB unallocated
489MB swap
2x 1.91GB RAID (assumed to be root)
95MB unknown fs
1MB unknown fs
2MB unknown fs
3.63TB ext4
Now I’m not sure what to do. Is it even possible to resize the root RAID partitions, or will the firmware no longer recognise the OS? I’m also not sure I trust GParted to move around the other partitions if it can’t figure out what they are.
– edit –
OK, I’ve just seen gbo’s post above, which is pretty much exactly what I want to end up with (a single 4GB root partition). However, I appear to be hitting the same issue as Tony_Le_moine:
mdadm: set device faulty failed for /dev/sda2: Device or resource busy
Inspecting the RAID with mdadm, I see the following:
sudo mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Wed May 13 19:46:29 2015
Raid Level : raid1
Array Size : 1999808 (1953.27 MiB 2047.80 MB)
Used Dev Size : 1999808 (1953.27 MiB 2047.80 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sat Jul 9 16:19:31 2016
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 7ed83112:0452ad6d:997ceb89:519129b4
Events : 0.2712
    Number   Major   Minor   RaidDevice   State
       0       0       0        0         removed
       1       8       2        1         active sync   /dev/sda2
Now, does this mean that the other RAID member has already been removed and I just need to resize the partitions? I also assume these steps need to be done from a live CD rather than via SSH?
I’ll admit to being a little lost now – I’m fine with moving partitions around in Windows (and setting up partitions in Linux), but dealing with RAID over a remote login is a little beyond my experience.
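For what it’s worth, here’s the sequence I think gbo’s approach boils down to, all run from a live environment since the OS lives on this array. This is only a sketch – the end position for the partition is purely illustrative and would only work once the small partitions 5–8 have been moved out of the way – so please correct me if it’s wrong:

```shell
# --- all from a live CD/USB; the OS is on this array, so it must not be mounted ---

# 1. Stop the degraded array; until it is stopped, any attempt to fail or
#    remove /dev/sda2 returns "Device or resource busy"
sudo mdadm --stop /dev/md1

# 2. Enlarge partition 2 (end value below is just an illustration, and assumes
#    the space immediately after sda2 has already been freed up)
sudo parted /dev/sda resizepart 2 6672MB

# 3. Re-assemble the array, grow it into the bigger partition, then grow ext3
sudo mdadm --assemble /dev/md1 /dev/sda2
sudo mdadm --grow /dev/md1 --size=max
sudo e2fsck -f /dev/md1
sudo resize2fs /dev/md1
```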
– edit – for completeness:
sudo parted -l
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
3 15.7MB 528MB 513MB linux-swap(v1) primary
1 528MB 2576MB 2048MB ext3 primary raid
2 2576MB 4624MB 2048MB ext3 primary raid
5 4624MB 4724MB 99.6MB primary
6 4724MB 4824MB 101MB primary
7 4824MB 4826MB 1049kB primary
8 4826MB 4828MB 2097kB primary
4 4828MB 4001GB 3996GB ext4 primary
Model: Linux Software RAID Array (md)
Disk /dev/md0: 2048MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 2048MB 2048MB ext3
Model: Linux Software RAID Array (md)
Disk /dev/md1: 2048MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 2048MB 2048MB ext3
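And in case it helps anyone answering, this is how I’ve been confirming which physical partitions back each array (commands run from the live environment):

```shell
# /proc/mdstat lists the sdaN members of each md array;
# "[U_]" on a two-disk mirror means one slot is empty, i.e. degraded
cat /proc/mdstat

# lsblk shows the same mapping as a tree
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda
```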