After firmware 5.27.157 update, RAID configuration is gone and data is no longer accessible

root@HB-Cloud2 ~ # mknod /dev/md1 b 9 1
root@HB-Cloud2 ~ # mdadm --assemble --run /dev/md1 /dev/sda2 /dev/sdb2
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: /dev/md1 has been started with 2 drives.
root@HB-Cloud2 ~ # cat /mnt/HD_a4/.systemfile/hd_volume_info.xml
<?xml version="1.0" encoding="UTF-8"?>
<config>
  <volume_info>
    <item>
      <volume>1</volume>
      <web_cmd>9</web_cmd>
      <sdx3_flag>0</sdx3_flag>
      <raid_mode>raid1</raid_mode>
      <file_type>ext4</file_type>
      <size>3722</size>
      <device>sdasdb</device>
      <spare/>
      <mount>/dev/md1</mount>
      <to_be_sync/>
      <used_device>/dev/sda2 /dev/sdb2 </used_device>
      <dev_num>2</dev_num>
      <scsi_mapping>3</scsi_mapping>
      <volume_encrypt>0</volume_encrypt>
      <raid_uuid>UUID=d7807461:9d6f590d:c98d1f5c:3a3bef51</raid_uuid>
      <scsi0_serial>sda:WD-WXC2D90A78ZK</scsi0_serial>
      <scsi1_serial>sdb:WD-WXC2D90KF2ZV</scsi1_serial>
    </item>
  </volume_info>
</config>
root@HB-Cloud2 ~ # cat /mnt/HD_b4/.systemfile/hd_volume_info.xml
<?xml version="1.0" encoding="UTF-8"?>
<config>
  <volume_info>
    <item>
      <volume>1</volume>
      <web_cmd>9</web_cmd>
      <sdx3_flag>0</sdx3_flag>
      <raid_mode>raid1</raid_mode>
      <file_type>ext4</file_type>
      <size>3722</size>
      <device>sdasdb</device>
      <spare/>
      <mount>/dev/md1</mount>
      <to_be_sync/>
      <used_device>/dev/sda2 /dev/sdb2 </used_device>
      <dev_num>2</dev_num>
      <scsi_mapping>3</scsi_mapping>
      <volume_encrypt>0</volume_encrypt>
      <raid_uuid>UUID=d7807461:9d6f590d:c98d1f5c:3a3bef51</raid_uuid>
      <scsi0_serial>sda:WD-WXC2D90A78ZK</scsi0_serial>
      <scsi1_serial>sdb:WD-WXC2D90KF2ZV</scsi1_serial>
    </item>
  </volume_info>
</config>
root@HB-Cloud2 ~ #
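
For anyone hitting the same state, it may be worth sanity-checking the re-assembled array against those XML files before doing anything else. This is a minimal sketch, assuming the same device names as in the transcript above (/dev/md1 built from /dev/sda2 and /dev/sdb2):

# Confirm the array is running and both members are active ([UU])
cat /proc/mdstat
# Array state, member list and the array UUID in one view
mdadm --detail /dev/md1
# UUID stored in each member's superblock; both should match the
# raid_uuid recorded in hd_volume_info.xml
mdadm --examine /dev/sda2 /dev/sdb2 | grep -i uuid

If those UUIDs agree with the d7807461:... value in the XML, the array metadata itself is intact and the problem is on the firmware/UI side.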

Well, the XML config files look fine. Let me know the Scan Disk test results. Then reboot again to see what happens.

Scan Disk is greyed out, as well as Format Disk…

Eureka! The drive is back! 3.58 TB Free, diagnostics healthy!

Unfortunately there is no Nobel Prize for cloud repair, or I would nominate you!
Is there a brief explanation of what the error was?

Yes, I rebooted, but skipped the Scan Disk as it was not available.

And I just got an email notification, thanking me for updating (the FW), with release notes. Sounds like a bad joke.

Interesting. Is Scan Disk available now? Don’t run it, I just want to know if it’s still greyed out.

I strongly suspect that manually recreating the /dev/md1 device node and restarting the RAID 1 array, then waiting a while, gave the system daemon time to detect the array and make any necessary changes.
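
One cautious way to test that theory (and to confirm the data is actually reachable while the UI catches up) is a temporary read-only mount. This is only a sketch; /mnt/md1_check is a scratch directory made up for the example, not a path the firmware uses:

mkdir -p /mnt/md1_check
# Read-only, so nothing on the volume can be changed by accident
mount -o ro /dev/md1 /mnt/md1_check
ls /mnt/md1_check
umount /mnt/md1_check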

It’s just a theory, but something very similar happened recently while I was helping another user with a nearly identical problem. All of a sudden, their RAID 5 array was back and everything appeared to be normal again. I’ve never found a reason why.

Rebooting may have been incidental, but it could just as easily be a contributing factor. Regardless, you should create backups and replace those SMR hard drives ASAP, because they’re an accident waiting to happen in a RAID environment.
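
For the backup itself, something along these lines should work straight from the NAS shell once the volume is mounted again. This is a sketch only: it assumes rsync is present on the firmware, /mnt/HD_a4 is the data volume path seen earlier in the thread, and /mnt/USB_backup is a placeholder for wherever your USB drive actually mounts:

# Dry run first: -n lists what would be copied without touching anything
rsync -aHvn /mnt/HD_a4/ /mnt/USB_backup/
# Then the real copy, preserving permissions, owners and timestamps
rsync -aHv --progress /mnt/HD_a4/ /mnt/USB_backup/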

Yes, Scan Disk & Format Disk are available now.
Thanks again!

I had the same problem - Volume1 of 2 was missing… but shutting down and restarting helped…

That did not help in this case, as stated earlier in the thread…

I lost access to my JBOD volume 1. These instructions didn’t help, but I assume I may need to type something different.

root@MyCloudEX2Ultra ~ # mdadm --assemble --run /dev/md1 /dev/sda2 /dev/sdb2;
mdadm: no recogniseable superblock on /dev/sda2
mdadm: /dev/sda2 has no superblock - assembly aborted
root@MyCloudEX2Ultra ~ # cat /proc/mdstat;
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
2094080 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
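
That “no recogniseable superblock” message suggests /dev/sda2 is not (or is no longer) an md member, which is plausible for JBOD: depending on the firmware, a JBOD volume may be a linear md array or just a plain filesystem on the partition, so the RAID 1 command from earlier doesn’t necessarily apply. A hedged first step before assembling anything is to look at what is actually on those partitions:

# Any md superblock at all on the data partitions?
mdadm --examine /dev/sda2 /dev/sdb2
# Filesystem signatures as seen by blkid (ext4, linux_raid_member, etc.)
blkid /dev/sda2 /dev/sdb2
# Partition layout for reference
cat /proc/partitions

If blkid reports a plain ext4 signature, the partition may simply be mountable directly; if it reports linux_raid_member, the superblock is there and --examine will show how the volume was built. Posting that output would make it much easier for someone to suggest the right command.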