What is my PR4100 doing?

Hello! My PR4100 seems to be in a state where it’s deleting data; maybe someone in here can tell me how I can stop that. :grimacing:
Here is how I got here: I wanted to upgrade my 4x4TB RAID 5 to 4x12TB, so I went through the “expand capacity” process in the RAID menu. Swapping the drives and rebuilding them took a while, but it went pretty smoothly. Then it got stuck on “wait to resize” for almost a week.
The size of the RAID hadn’t changed yet, but I was able to access the data again. When I compared a folder to one I had backed up earlier, though, I found that files were missing. Since I wasn’t too happy with the PR4100’s network speed anyway, I planned to get a completely new NAS and use the 12TB drives with that instead. So I shut down the NAS, took them out, and put the original drives back in. This might have been a mistake. :slight_smile:
After the reboot it looked fine at first, but then I got an “Alert: Volume Degraded” email and noticed that the amount of free space was going up. I checked the RAID status and it said “resizing”, so I turned it off. :slight_smile:
So… how do I fix this? Can I just reload the default configuration? Will I be able to access my data after that and can I recover the lost files?
Any help on this is highly appreciated! Thanks!

It’s probably too late, but there may be a chance.

Enable SSH and run the following commands, one at a time, then post the results. The results of some commands will be lengthy, so I’ve grouped them according to what should fit within a single post.

How to Access WD My Cloud Using SSH (Secure Shell)

Also, please format the results before posting by selecting the text and clicking the </> preformatted-text button; it makes things so much easier to read.

Group 1 (One post):

  • smartctl --smart on --info /dev/sda;
  • smartctl --smart on --info /dev/sdb;
  • smartctl --smart on --info /dev/sdc;
  • smartctl --smart on --info /dev/sdd;

Group 2 (One post):

  • cat /proc/mdstat;
  • mdadm --detail /dev/md1;

Group 3 (One post):

  • mdadm --examine /dev/sda2;
  • mdadm --examine /dev/sdb2;
  • mdadm --examine /dev/sdc2;
  • mdadm --examine /dev/sdd2;
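
If typing those one at a time gets tedious, a small shell loop can cover Group 1 and Group 3 in one pass each. This is just a convenience sketch; it assumes the four data drives really do show up as /dev/sda through /dev/sdd with the data partition on sdX2, so compare against your own device list first:

  # Group 1: SMART info for all four drives
  for d in a b c d; do echo "=== /dev/sd${d} ==="; smartctl --smart on --info /dev/sd${d}; done

  # Group 3: mdadm superblock details for the data partitions
  for d in a b c d; do echo "=== /dev/sd${d}2 ==="; mdadm --examine /dev/sd${d}2; done

Please still split the output into separate posts as grouped above.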

Thanks for the help. Here are the results:

root@wdNAS ~ # smartctl --smart on --info /dev/sda;
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-4.14.22] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD40EFRX-68N32N0
Serial Number:    WD-WCC7K0TNY8PZ
LU WWN Device Id: 5 0014ee 26725e5b4
Firmware Version: 82.00A82
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Dec  4 13:22:56 2023 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.
root@wdNAS ~ # smartctl --smart on --info /dev/sdb;
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-4.14.22] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD40EFRX-68N32N0
Serial Number:    WD-WCC7K4HFZDJL
LU WWN Device Id: 5 0014ee 26725e6ee
Firmware Version: 82.00A82
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Dec  4 13:23:53 2023 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.
root@wdNAS ~ # smartctl --smart on --info /dev/sdc;
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-4.14.22] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD40EFRX-68WT0N0
Serial Number:    WD-WCC4E3J4XCEC
LU WWN Device Id: 5 0014ee 25ff502c2
Firmware Version: 82.00A82
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Dec  4 13:24:36 2023 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.
root@wdNAS ~ # smartctl --smart on --info /dev/sdd;
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-4.14.22] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD40EFRX-68WT0N0
Serial Number:    WD-WCC4EHAH8UJD
LU WWN Device Id: 5 0014ee 2b54b0cc5
Firmware Version: 82.00A82
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Dec  4 13:25:27 2023 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.
root@wdNAS ~ # cat /proc/mdstat;
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid5 sdb2[1] sdd2[3] sdc2[2]
      11708466624 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      bitmap: 8/8 pages [32KB], 262144KB chunk

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      2094080 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
root@wdNAS ~ # mdadm --detail /dev/md1;
/dev/md1:
           Version : 1.0
     Creation Time : Tue Nov 26 16:28:44 2019
        Raid Level : raid5
        Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
     Used Dev Size : 3902822208 (3722.02 GiB 3996.49 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Dec  4 13:26:48 2023
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : 1
              UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
            Events : 24352

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
root@wdNAS ~ # mdadm --examine /dev/sda2;
/dev/sda2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
           Name : 1
  Creation Time : Tue Nov 26 16:28:44 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805644528 (3722.02 GiB 3996.49 GB)
     Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
  Used Dev Size : 7805644416 (3722.02 GiB 3996.49 GB)
   Super Offset : 7805644784 sectors
   Unused Space : before=0 sectors, after=368 sectors
          State : clean
    Device UUID : 8dd4f833:af5d7e63:1a72ba82:b01cc1f9

Internal Bitmap : 2 sectors from superblock
    Update Time : Sat Nov 25 11:40:21 2023
       Checksum : 6dc690a - correct
         Events : 2517

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@wdNAS ~ # mdadm --examine /dev/sdb2;
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
           Name : 1
  Creation Time : Tue Nov 26 16:28:44 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805644528 (3722.02 GiB 3996.49 GB)
     Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
  Used Dev Size : 7805644416 (3722.02 GiB 3996.49 GB)
   Super Offset : 7805644784 sectors
   Unused Space : before=0 sectors, after=368 sectors
          State : clean
    Device UUID : 8ca85999:7df781da:96afc30d:53fe989b

Internal Bitmap : 2 sectors from superblock
    Update Time : Mon Dec  4 13:28:49 2023
       Checksum : 102f4235 - correct
         Events : 24360

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : .AAA ('A' == active, '.' == missing, 'R' == replacing)
root@wdNAS ~ # mdadm --examine /dev/sdc2;
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
           Name : 1
  Creation Time : Tue Nov 26 16:28:44 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805644528 (3722.02 GiB 3996.49 GB)
     Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
  Used Dev Size : 7805644416 (3722.02 GiB 3996.49 GB)
   Super Offset : 7805644784 sectors
   Unused Space : before=0 sectors, after=368 sectors
          State : clean
    Device UUID : f87c55ba:30c52ab4:21747312:df91665d

Internal Bitmap : 2 sectors from superblock
    Update Time : Mon Dec  4 13:29:19 2023
       Checksum : d1513c8b - correct
         Events : 24362

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : .AAA ('A' == active, '.' == missing, 'R' == replacing)
root@wdNAS ~ # mdadm --examine /dev/sdd2;
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
           Name : 1
  Creation Time : Tue Nov 26 16:28:44 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805644528 (3722.02 GiB 3996.49 GB)
     Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
  Used Dev Size : 7805644416 (3722.02 GiB 3996.49 GB)
   Super Offset : 7805644784 sectors
   Unused Space : before=0 sectors, after=368 sectors
          State : clean
    Device UUID : 40b89dab:8ff3ded5:d81f62ef:808b1f09

Internal Bitmap : 2 sectors from superblock
    Update Time : Mon Dec  4 13:29:50 2023
       Checksum : 6cf54bad - correct
         Events : 24364

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : .AAA ('A' == active, '.' == missing, 'R' == replacing)

Also, after restarting, all the drive LEDs are now red and the display says “RAID roaming enabled”.

Run the following command, then post the results.

  • mdadm --add /dev/md1 /dev/sda2;

Afterwards, run the following commands again, then post the results.

  • cat /proc/mdstat;
  • mdadm --detail /dev/md1;
root@wdNAS ~ # mdadm --add /dev/md1 /dev/sda2;
mdadm: added /dev/sda2
root@wdNAS ~ # cat /proc/mdstat;
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid5 sda2[4] sdb2[1] sdd2[3] sdc2[2]
      11708466624 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      [>....................]  recovery =  0.2% (9653372/3902822208) finish=437.6min speed=148266K/sec
      bitmap: 8/8 pages [32KB], 262144KB chunk

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      2094080 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
root@wdNAS ~ # mdadm --detail /dev/md1;
/dev/md1:
           Version : 1.0
     Creation Time : Tue Nov 26 16:28:44 2019
        Raid Level : raid5
        Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
     Used Dev Size : 3902822208 (3722.02 GiB 3996.49 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Dec  4 13:55:59 2023
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

    Rebuild Status : 0% complete

              Name : 1
              UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
            Events : 24512

    Number   Major   Minor   RaidDevice State
       4       8        2        0      spare rebuilding   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2

It seems that there may be some hope after all, but it may take a long time (hours to days) to learn the outcome. Your RAID array is rebuilding, so run the following command again from time to time to check the status.

  • mdadm --detail /dev/md1;

Specifically, look for the “Rebuild Status” line (right now it reads “Rebuild Status : 0% complete”), which tells you what percentage of the rebuild has finished. DO NOT REBOOT until the process is complete, and DO NOT SWAP DRIVES again without backups. Good luck!
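
Oh, and if you only want that one line rather than the whole report, you can filter it:

  • mdadm --detail /dev/md1 | grep 'Rebuild Status';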

Awesome! Thank you so much! I’ll be more cautious in the future, I promise! :smiley:

No problem. I forgot to mention that the following command can also be used to check the rebuild status, plus it includes the estimated time to finish.

  • cat /proc/mdstat;
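
If you’d rather not re-run that by hand, and assuming the watch utility is included in the PR4100 firmware (I haven’t checked), the following will refresh the status every 60 seconds until you press Ctrl+C:

  • watch -n 60 cat /proc/mdstat;

If watch isn’t there, a plain shell loop does the same job:

  # Print the RAID status every 60 seconds (Ctrl+C to stop).
  while true; do date; cat /proc/mdstat; sleep 60; done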

And from the looks of it, the rebuild should take about 7-8 hours to complete.

0.2% (9653372/3902822208) finish=437.6min

Well, that’s not too bad.

Oh, one more thing: in the storage section it still says “expanding”. Do you think this is gonna be a problem?

Honestly, I’m not sure.

The dashboard RAID status appears to be a bit confused because the drives were swapped in the middle of the process, and my guess is that rebooting after the rebuild is complete may clear it up. If not, it should be possible to fix it manually.

Regardless, the RAID rebuild should buy you time to create proper backups.

Yeah, I’ll definitely do backups after the rebuild has finished… :grimacing:
Again, thanks for your help! I’ll follow up with the results of the rebuild in a few hours.

Looks like it worked. :slight_smile:

root@wdNAS ~ # mdadm --detail /dev/md1;
/dev/md1:
           Version : 1.0
     Creation Time : Tue Nov 26 16:28:44 2019
        Raid Level : raid5
        Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
     Used Dev Size : 3902822208 (3722.02 GiB 3996.49 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Dec  5 00:47:04 2023
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : 1
              UUID : d3d455a8:7a1bf2d7:96ab1313:e28f5d7f
            Events : 32989

    Number   Major   Minor   RaidDevice State
       4       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2

Dashboard still says “expanding” and wants me to swap out drive 1 now. :person_shrugging:

Another problem now is that I can’t access one of my shares via SMB or FTP. I checked via SSH and all the files are still there, and the permissions seem fine too. Is there an easy way to find out what the problem is, or should I just add a new share and copy the files over manually?

Indeed it does, and I guess luck is on your side, because I was convinced there would be a very different outcome. It never hurts to try, so I saw no downsides.

The dashboard status is stored separately from the actual mdadm RAID configuration, and can be safely ignored while you create backups. Afterwards, a full system reset should clear it.

Despite years of RAID experience, I don’t use software RAID on consumer NAS devices, so I haven’t delved too deeply into its core functionality within My Cloud OS5. What I do know is that its status is stored in several locations, including the following.

  • /mnt/HD_a4/.systemfile
  • /mnt/HD/HD_a2/.systemfile
  • /var/www/xml/expansion_volume_info.xml
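
If you want to see what the dashboard thinks is going on, those files can be read over SSH. The paths are the ones listed above, but I can’t vouch for their exact contents or format under OS5, so treat this strictly as a look-but-don’t-edit check:

  # Read-only peek at the dashboard's stored volume/expansion state.
  cat /var/www/xml/expansion_volume_info.xml
  ls -la /mnt/HD_a4/.systemfile /mnt/HD/HD_a2/.systemfile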

Considering the confused state of things, I’d say the safest move is to create a new share and move the files manually.
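
If you go that route and would rather copy over SSH than across the network, something along these lines should do it. The share names are placeholders, and /mnt/HD/HD_a2 is the data volume path mentioned above, so adjust both to match what you actually see:

  # Copy the contents of the broken share into the new one, preserving
  # permissions and timestamps. 'OldShare' and 'NewShare' are placeholders.
  rsync -a --progress /mnt/HD/HD_a2/OldShare/ /mnt/HD/HD_a2/NewShare/

  # If rsync isn't included in the firmware, cp -a works as well:
  # cp -a /mnt/HD/HD_a2/OldShare/. /mnt/HD/HD_a2/NewShare/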