Clean OS (Debian), OpenMediaVault and other "firmwares"

Hi all,

I have OMV working flawlessly, but I would like to swap the hard drives for higher-capacity ones. Is there any way to migrate the OS to the new drives? I can replicate the data on them; it is not much.

Thanks
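For reference, a minimal sketch of one common way to do this on a RAID1 setup: replace one disk at a time so the OS arrays stay available. The device and array names below (/dev/sdb, /dev/md0, /dev/md1) are assumptions, not taken from this box.

# With the new disk installed in place of the old /dev/sdb, recreate the
# partition layout on it first (sfdisk for MBR disks, sgdisk for GPT disks).

# Add the new partitions back into the RAID1 OS arrays; they resync from /dev/sda:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# Watch the rebuild and swap the second disk only once both arrays show [UU]:
cat /proc/mdstat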

Well, I moved all the files to the hard disk and extracted the tar

root@wdmc:/tmp/Kernel-4.15.0-plus# tar -xf libs-4.15.0+.tar.xz -C /

cleared out the mtdblocks

dd if=/dev/zero of=/dev/mtdblock1
dd if=/dev/zero of=/dev/mtdblock2

and finally

root@wdmc:/tmp/Kernel-4.15.0-plus# dd if=uImage of=/dev/mtdblock1
7235+1 records in
7235+1 records out
3704603 bytes (3.7 MB, 3.5 MiB) copied, 1.71092 s, 2.2 MB/s
root@wdmc:/tmp/Kernel-4.15.0-plus# dd if=uRamdisk of=/dev/mtdblock2
5234+1 records in
5234+1 records out
2679902 bytes (2.7 MB, 2.6 MiB) copied, 1.26184 s, 2.1 MB/s
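
As a side note, one way to sanity-check that the images really landed on flash is to read back exactly as many bytes as each image file contains and compare checksums. A sketch; it assumes the image files are still in the current directory:

# Read the image's size in bytes back from each mtdblock and compare hashes.
head -c "$(stat -c %s uImage)" /dev/mtdblock1 | md5sum
md5sum uImage
head -c "$(stat -c %s uRamdisk)" /dev/mtdblock2 | md5sum
md5sum uRamdisk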

but the system does not boot again.

Well, s**t…
The OS can't find something in the kernel (I think). You need to check the boot log (UART console).

This is exactly the problem - I don't have such a UART device.

I managed to do a rescue boot from USB and copied the kernel 4.14 files into place.

The box now boots in rescue mode, I have telnet access, and my disk devices look like this:

/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md124 : active raid1 sda3[0]
      484386752 blocks [2/1] [U_]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md125 : active raid1 sda1[0] sdb1[1]
      498624 blocks [2/2] [UU]

md126 : active raid1 sda2[0] sdb2[1]
      3499968 blocks [2/2] [UU]

md127 : active raid0 sdb3[0]
      7810025984 blocks 512k chunks

It is no longer md0, md1, md2 - is this my problem now?

I corrected this by doing

/ # mdadm -S /dev/md126
mdadm: stopped /dev/md126
/ # mdadm --assemble --verbose /dev/md0 /dev/sda1 /dev/sdb1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 2 drives.
/ # mdadm --assemble --verbose /dev/md1 /dev/sda2 /dev/sdb2
mdadm: looking for devices for /dev/md1
mdadm: /dev/sda2 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdb2 is identified as a member of /dev/md1, slot 1.
mdadm: added /dev/sdb2 to /dev/md1 as 1
mdadm: added /dev/sda2 to /dev/md1 as 0
mdadm: /dev/md1 has been started with 2 drives.

but after a reboot the device names change again.
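
If these arrays use the old 0.90 metadata (which the stock WD layout does), the preferred minor number is stored in the superblock itself, so one option is to rewrite it during assembly. A sketch, not verified on this box; the mdXXX names are taken from the listing above and may differ after a reboot:

# Stop the misnamed arrays, then reassemble them under the wanted names with
# --update=super-minor so the 0.90 superblock records md0/md1 for the next boot.
mdadm -S /dev/md125 /dev/md126
mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdb1
mdadm --assemble /dev/md1 --update=super-minor /dev/sda2 /dev/sdb2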

So I cannot get further:

/ # init 2
# Loading, please wait...
# Mounting filesystems...
mount: mounting sysfs on /sys failed: Device or resource busy
mount: mounting proc on /proc failed: Device or resource busy
# Check disk and init
### Press any key to stop and run shell... (2)mdadm: No arrays found in config file or automatically
# /dev/md1 not exist!
# Runing a shell...
ifup: interface lo already configured
ifup: interface eth0 already configured

How can I solve the md___ naming problem?

Try “Option #2”.

Also type “sync” before reboot.
sync && reboot -f

I did exactly these steps, result:

/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[0]
      484386752 blocks [2/1] [U_]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md1 : active raid1 sda2[0] sdb2[1]
      3499968 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      498624 blocks [2/2] [UU]

md127 : active raid0 sdb3[0]
      7810025984 blocks 512k chunks

but after sync and reboot it looks like this:

/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md124 : active raid1 sda3[0]
      484386752 blocks [2/1] [U_]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md125 : active raid1 sda1[0] sdb1[1]
      498624 blocks [2/2] [UU]

md126 : active raid1 sda2[0] sdb2[1]
      3499968 blocks [2/2] [UU]

md127 : active raid0 sdb3[0]
      7810025984 blocks 512k chunks

unused devices: <none>

Hmm… Try changing md127 too? (To md4.) Or remove the 2nd disk?
Or use “option #1”: write the config to /etc/mdadm.conf (on the rootfs) and check /etc/fstab.
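
For “option #1”, a minimal sketch, assuming a standard Debian layout where the file lives at /etc/mdadm/mdadm.conf (with /etc/mdadm.conf sometimes a symlink to it); it only helps if whatever assembles the arrays at boot actually reads this file:

# Run this while the arrays are assembled under the wanted names (md0/md1/md2),
# so the recorded ARRAY lines carry those names:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Then make sure /etc/fstab refers to the same stable names, or to UUIDs:
blkid /dev/md0 /dev/md1
cat /etc/fstab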

But /etc is inside /dev/md0, isn’t it?

I already modified the conf file, but it doesn’t work.

After a reboot it always shows the same issue.

Almost all of the instructions I found say to run

update-initramfs -u

after making modifications, but this does not work on the WD device.

So obviously all my modifications are gone after a reboot.
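
Since update-initramfs is not usable here, changes can only survive if they end up inside the uRamdisk that gets flashed to /dev/mtdblock2. A hedged sketch of repacking it by hand, assuming it is a legacy U-Boot image (64-byte header) wrapping a gzip-compressed cpio archive and that mkimage (u-boot-tools) is available:

# Unpack: strip the 64-byte U-Boot header, decompress, extract the cpio.
dd if=uRamdisk bs=64 skip=1 | gunzip > initrd.cpio
mkdir initrd && (cd initrd && cpio -id < ../initrd.cpio)

# Edit the mdadm config (and anything else) inside initrd/ here ...

# Repack: new cpio, gzip it, wrap it in a U-Boot ramdisk header, flash it.
(cd initrd && find . | cpio -o -H newc) | gzip > initrd.new.gz
mkimage -A arm -O linux -T ramdisk -C gzip -n initramfs -d initrd.new.gz uRamdisk.new
dd if=/dev/zero of=/dev/mtdblock2
dd if=uRamdisk.new of=/dev/mtdblock2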

If I go back to

uImage-3.2.70_marvell

it works.

root@wdmc:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[0]
      484386752 blocks [2/1] [U_]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md0 : active raid1 sda1[0] sdb1[1]
      498624 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      3499968 blocks [2/2] [UU]

md127 : active raid0 sdb3[0]
      7810025984 blocks 512k chunks

unused devices: <none>

The md127 is correct.

I tried all of the kernels from

https://fox-exe.ru/WDMyCloud/WDMyCloud-Mirror-Gen2/Dev/

but the newer files don’t work.

What to do?

Sorry, no ideas.
Something is wrong with the kernel or ramdisk (auto-assembly of the md RAIDs).
But when I test it, I don't get this problem…

One last try:

I followed your steps

# Download latest kernel and install:
wget http://fox-exe.ru/WDMyCloud/WDMyCloud-Gen2/Debian/Dev/Debian-kernel-bin_4.15.0-rc6.tar.gz
tar xvf Debian-kernel-bin_4.15.0-rc6.tar.gz -C /

# Cleanup:
# Remove old network controller records:
rm /etc/udev/rules.d/70-persistent-net.rules
# Remove all "ipv6" records (lines of code) from /etc/network/interfaces
sed '/ipv6/d' /etc/network/interfaces -i
sed '/inet6/d' /etc/network/interfaces -i  

and used the 4.15.0-rc6 uImage file and did

dd if=uImage of=/dev/mtdblock1

and this time it worked.

root@wdmc:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
VERSION_CODENAME=stretch
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@wdmc:~# hostnamectl
   Static hostname: wdmc
         Icon name: computer
        Machine ID: c163733b868f4d2c84b3f6b8f68b78b2
           Boot ID: be6682552dd94338b1006bc1e3d525b1
  Operating System: Debian GNU/Linux 9 (stretch)
            Kernel: Linux 4.15.0-rc6
      Architecture: arm

But this time the root filesystem is no longer /dev/md1; it is /dev/sdb2, which was part of /dev/md1.

Why did this happen?

It's a kernel for the My Cloud Gen2 (one disk, Debian), so it looks for 3 partitions:

SWAP_DEV=/dev/sda1
DATA_DEV=/dev/sda2
ROOT_DEV=/dev/sda3

PS: 4.15.0-plus is newer.
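
A quick way to see which device the running system actually mounted as root, and what the kernel/ramdisk was told to use (nothing device-specific assumed):

findmnt -n -o SOURCE /     # what is mounted at / right now
cat /proc/cmdline          # the root= argument the kernel was booted with
cat /proc/mdstat           # whether the md arrays were assembled at all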

Does it matter whether the uImage and uRamdisk are in /boot, or do I need them only in /dev/mtdblock_x?

I would like to upgrade the kernel as well… are there many improvements/bugfixes?

The My Cloud Gen2 (single-disk) has /dev/mtd0 containing a bootloader that loads the kernel and ramdisk from the HDD (from the /boot/ folder).
The My Cloud Mirror and almost all other devices with 2 or more disks keep the kernel on internal flash (/dev/mtdblock1). You can delete the /boot/ folder on these devices.
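
To check how the flash is laid out on a given box (and therefore whether the kernel lives in flash or in /boot/), listing the MTD partitions is usually enough. A sketch:

cat /proc/mtd    # dev, size, erasesize and name for each flash partition
# The partition names usually make it obvious which one holds U-Boot,
# the kernel (uImage) and the ramdisk (uRamdisk).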

So, if I did this:

dd if=/dev/zero of=/dev/mtdblock1
dd if=uImage of=/dev/mtdblock1
dd if=/dev/zero of=/dev/mtdblock2
dd if=uRamdisk of=/dev/mtdblock2

I could remove both drives and the “box” would also boot?

It is still strange: if I install the uImage and uRamdisk from

https://fox-exe.ru/WDMyCloud/WDMyCloud-Mirror-Gen2/Dev/Kernel-4.15.0-plus/

into the mtdblocks, the device does not appear on the network at all.

All 3 LEDs are steady blue.

Yes.
But if the uImage/uRamdisk are from the WD firmware, the device will boot into the WD firmware and show an error (“No disks, please insert some”).
If Debian, it boots into recovery (with Telnet access).
If DSM, it boots into the DSM installer.

As said above, I pushed the 4.15.0-plus files into the mtdblocks on my Mirror Gen2, and the device does not even boot into rescue mode.

To force rescue mode I removed the HDDs.

What could be wrong?

At the moment I can only boot from USB into rescue mode.