Not booting, and partitions missing

Our DL-4100 stopped booting (the blue power button just blinks at 1 Hz, and the unit spontaneously restarts every ~3 minutes).
I took the drives out and put them in a Linux machine, where mdadm recognized two RAID groups and assembled them as md126 (~11 TiB) and md127 (~2 GiB).
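
(For anyone retracing this: the arrays assembled on their own here, but done by hand it would look roughly like this:)

# Scan all block devices for md superblocks and assemble any arrays found
mdadm --assemble --scan

# See which arrays came up, and with which member devices
cat /proc/mdstat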

However, when I look at them with fdisk, no partitions seem to be defined:

root@ubunas2:~# fdisk -l /dev/md126
Disk /dev/md126: 10.9 TiB, 11989469822976 bytes, 23416933248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
root@ubunas2:~# 
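
(An empty fdisk listing only means there's no partition table on md126 itself; a filesystem could still start right at sector 0 of the array. A non-destructive way to check for such a signature, sketched from memory rather than captured on this box:)

# Low-level probe for filesystem/RAID signatures directly on the md device
blkid -p /dev/md126

# Alternative: identify whatever sits at the start of the device
file -s /dev/md126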

mdadm shows the RAID-5 to be up, active and clean:

root@ubunas2:~# mdadm --detail /dev/md126
/dev/md126:
           Version : 1.0
     Creation Time : Sat Mar 16 16:07:10 2019
        Raid Level : raid5
        Array Size : 11708466624 (11166.06 GiB 11989.47 GB)
     Used Dev Size : 3902822208 (3722.02 GiB 3996.49 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Mar  6 12:53:37 2020
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : 1
              UUID : 5fae39ae:783b2795:6fa686f3:9de09210
            Events : 16289

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       66        2      active sync   /dev/sde2
       2       8       50        3      active sync   /dev/sdd2
root@ubunas2:~#
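
(The per-member state above lives in an md superblock on each partition; for completeness, it can be read without touching the array, and the assembled layout can be captured in one line:)

# Dump the md superblock of one member partition (read-only)
mdadm --examine /dev/sdb2

# One-line summary of the assembled arrays, e.g. for /etc/mdadm/mdadm.conf
mdadm --detail --scan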

Finally, dumpe2fs seems happy to confirm the filesystem as well:

root@ubunas2:~# dumpe2fs /dev/md126
dumpe2fs 1.44.1 (24-Mar-2018)
Filesystem volume name:   <none>
Last mounted on:          /mnt/HD/HD_a2
Filesystem UUID:          611766f4-558b-4038-be17-e8ecc5b4cd1a
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr filetype meta_bg extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              731783168
Block count:              2927112448
Reserved block count:     9757012
Free blocks:              2436765958
Free inodes:              730756099
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              32
RAID stripe width:        32
Flex block group size:    16
Filesystem created:       Sat May 23 21:04:57 2015
Last mount time:          Fri Mar  6 12:53:37 2020
Last write time:          Fri Mar  6 12:53:37 2020
Mount count:              3
Maximum mount count:      -1
Last checked:             Sat Jan 11 23:51:16 2020
Check interval:           0 (<none>)
Lifetime writes:          7301 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      20798344-b764-444f-9dba-21abfca6e06e
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke journal_64bit
Journal size:             128M
Journal length:           32768
Journal sequence:         0x06604839
Journal start:            0

^C
root@ubunas2:~#
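
(Since dumpe2fs can read the superblock, the filesystem can also be sanity-checked without writing a single block; -n answers "no" to every repair prompt:)

# Force a full check, but open read-only and assume 'no' to all repairs
e2fsck -f -n /dev/md126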

But obviously, I cannot mount the filesystem, as there’s no partition defined…
(The same goes for md127, which seems to be recognized as a swap RAID-1.)

root@ubunas2:~# mdadm --detail /dev/md127
/dev/md127:
           Version : 0.90
     Creation Time : Sun Jan 19 06:32:04 2020
        Raid Level : raid1
        Array Size : 2097088 (2047.94 MiB 2147.42 MB)
     Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
      Raid Devices : 4
     Total Devices : 4
   Preferred Minor : 127
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Feb 28 04:53:11 2020
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              UUID : b544f175:b19bb410:a78b0f9d:96026709
            Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
root@ubunas2:~# 

Any help would be appreciated. We have a backup of the most important data, but if we can also recover the less important data, that'd be great!

Those are the member partitions that make up the array, not a data partition ON the array…

The partition table of the array itself is clearly empty:

root@ubunas2:~# fdisk -l /dev/md126
Disk /dev/md126: 10.9 TiB, 11989469822976 bytes, 23416933248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
root@ubunas2:~# 
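
(The member-vs-array distinction is easiest to see in the device tree; a sketch, with /mnt/recovery as a made-up mount point:)

# Show the block-device tree with detected filesystem types; the member
# partitions nest under the disks, and md126/md127 under the members
lsblk -o NAME,FSTYPE,SIZE,TYPE

# If FSTYPE for md126 shows ext4, there is no partition table to find:
# the filesystem starts at sector 0 of the array, so the md device itself
# would be what gets mounted
mount -o ro /dev/md126 /mnt/recovery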

You’ve clearly got it all figured out, so… good luck.

What kind of response is that?