cat /proc/mdstat


#1

When I connect to my My Cloud using SSH and run the following command:
cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[1]
1999808 blocks [2/1] [_U]

md0 : active raid1 sda1[0]
1999808 blocks [2/1] [U_]

unused devices: <none>

Questions:
Is mdadm correctly configured?
What do [2/1], [_U], [U_], sda1[0] and sda2[1] mean?
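For reference on the notation: in [2/1] the first number is the configured mirror slots and the second the active members; in [_U] and [U_] each position is one slot, U for up and _ for missing; sda1[0] and sda2[1] give the slot each device occupies. So both arrays here are running on a single disk. That rule can also be checked mechanically; a small sketch (mdstat_degraded is a hypothetical helper, not part of the firmware):

```shell
# Flag any array in an mdstat file whose [configured/active] counts show
# fewer active members than configured slots (e.g. "[2/1]").
mdstat_degraded() {
  awk '/blocks/ && match($0, /\[[0-9]+\/[0-9]+\]/) {
    split(substr($0, RSTART + 1, RLENGTH - 2), n, "/")
    if (n[2] + 0 < n[1] + 0) print "degraded:", $0
  }' "${1:-/proc/mdstat}"
}
```

Run against the output above, it would flag both md0 and md1.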

blkid shows:
/dev/sda1: UUID="f46f2901-3958-6958-0db8-a32f4bb8783d" TYPE="linux_raid_member"
/dev/sda2: UUID="4b50abd6-6795-6e56-997c-eb89519129b4" TYPE="linux_raid_member"
/dev/sda3: UUID="34ad7ff9-099c-4eac-b6af-c2998cf1c237" TYPE="swap"
/dev/sda4: UUID="49bbd778-1a26-4a49-8b9b-99742b7a9091" TYPE="ext4"
/dev/md0: UUID="ef69a43f-7379-49d6-a2c4-7927a8f17863" TYPE="ext3"
/dev/md1: UUID="f46f2901-3958-6958-0db8-a32f4bb8783d" SEC_TYPE="ext2" TYPE="ext3"

The UUID of /dev/sda1 is not the same as the UUID of /dev/sda2!
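That is actually consistent with the degraded state: for a linux_raid_member, the UUID blkid reports is the UUID of the md array the partition belongs to, and at this point sda1 and sda2 sit in two different single-disk arrays, so their UUIDs differ. Note also that mdadm spells array UUIDs with colons while blkid uses dashes; stripping the separators makes the two directly comparable (normalize_uuid is a hypothetical helper):

```shell
# mdadm prints f46f2901:39586958:0db8a32f:4bb8783d, blkid prints
# f46f2901-3958-6958-0db8-a32f4bb8783d: the same 32 hex digits with
# different punctuation. Remove the separators to compare them.
normalize_uuid() {
  printf '%s\n' "$1" | tr -d ':-' | tr '[:upper:]' '[:lower:]'
}
```

With the values from this post, sda1's blkid UUID and the array UUID that mdadm -Esv reports normalize to the same string.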

dmesg shows:
[ 8.764544] GPT:disk_guids don't match.
[ 8.764555] GPT: Use GNU Parted to correct GPT errors.
[ 8.764666] sda: sda1 sda2 sda3 sda4 sda5 sda6 sda7 sda8

[ 8.966195] md: raid1 personality registered for level 1

[ 10.487700] md: Autodetecting RAID arrays.
[ 10.533532] md: Scanned 2 and added 2 devices.
[ 10.538024] md: autorun ...
[ 10.540837] md: considering sda2 ...
[ 10.544449] md: adding sda2 ...
[ 10.547719] md: sda1 has different UUID to sda2
[ 10.553168] md: created md1
[ 10.556007] md: bind<sda2>
[ 10.558781] md: running: <sda2>
[ 10.562312] bio: create slab <bio-1> at 1
[ 10.566600] md/raid1:md1: active with 1 out of 2 mirrors
[ 10.572047] md1: detected capacity change from 0 to 2047803392
[ 10.578159] md: considering sda1 ...
[ 10.581784] md: adding sda1 ...
[ 10.585060] md: created md0
[ 10.587873] md: bind<sda1>
[ 10.590654] md: running: <sda1>
[ 10.594251] md/raid1:md0: active with 1 out of 2 mirrors
[ 10.599718] md0: detected capacity change from 0 to 2047803392
[ 10.605779] md: ... autorun DONE.
[ 10.616081] md0: unknown partition table
[ 10.625993] kjournald starting. Commit interval 5 seconds
[ 10.660288] EXT3-fs (md0): using internal journal
[ 10.665030] EXT3-fs (md0): mounted filesystem with ordered data mode
[ 10.671477] VFS: Mounted root (ext3 filesystem) on device 9:0.
[ 10.677891] async_waiting @ 1
[ 10.680880] async_continuing @ 1 after 0 usec
[ 10.685947] Freeing init memory: 320K
[ 40.827472] md1: unknown partition table
[ 44.719748] EXT3-fs (md0): error: cannot change data mode on remount. The filesystem is mounted in data=ordered mode and you try to remount it in data=writeback mode.
[ 44.813391] EXT3-fs (md0): error: cannot change data mode on remount. The filesystem is mounted in data=ordered mode and you try to remount it in data=writeback mode.
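The two EXT3-fs errors at the end are a separate issue: something on the system (an fstab entry or firmware script, presumably) remounts the root with data=writeback, and ext3 cannot change data mode on a live remount. If writeback mode is actually wanted, one option is to make it the superblock default so the mount option no longer conflicts (a sketch; whether the firmware expects this mode is an assumption):

```shell
# Set data=writeback as the default journal mode in the superblock;
# it takes effect from the next mount rather than on a live remount.
tune2fs -o journal_data_writeback /dev/md0
```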

The disk GUIDs in the GPT don't match, and dmesg reports GPT errors. How do I correct this?
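As for correcting the GPT complaint, the dmesg line itself points at the tool. Running parted against the whole disk re-reads both GPT copies and, when they disagree, typically offers an interactive prompt to repair the backup copy (a sketch of the usual procedure, not My Cloud-specific; back up first):

```shell
# Inspect the label; parted prompts to repair when the primary and
# backup GPT are inconsistent.
parted /dev/sda print

# Alternatively, sgdisk can check the table non-interactively.
sgdisk --verify /dev/sda
```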

I think I was able to solve this problem with the following steps:

  • mdadm --stop /dev/md1

  • mdadm /dev/md0 -a /dev/sda2

  • cat /proc/mdstat now shows:
    Personalities : [raid1]
    md0 : active raid1 sda2[1] sda1[0]
    1999808 blocks [2/2] [UU]

  • mdadm -Esv shows:
    ARRAY /dev/md0 level=raid1 num-devices=2
    UUID=f46f2901:39586958:0db8a32f:4bb8783d
    devices=/dev/sda2,/dev/sda1

  • lsblk -o NAME,FSTYPE,UUID,RO,RM,SIZE,STATE,OWNER,GROUP,MODE,TYPE,MOUNTPOINT,LABEL,MODEL
    shows:
    NAME FSTYPE UUID RO RM SIZE STATE OWNER GROUP MODE TYPE MOUNTPOINT LABEL MODEL
    sda 0 0 1.8T running root disk brw-rw---T disk WDC WD20EFRX-68E
    |-sda1 linux_raid_member f46f2901-3958-6958-0db8-a32f4bb8783d 0 0 1.9G root disk brw-rw---T part
    | `-md0 ext3 ef69a43f-7379-49d6-a2c4-7927a8f17863 0 0 1.9G root disk brw-rw---T raid1 /
    |-sda2 linux_raid_member f46f2901-3958-6958-0db8-a32f4bb8783d 0 0 1.9G root disk brw-rw---T part
    | `-md0 ext3 ef69a43f-7379-49d6-a2c4-7927a8f17863 0 0 1.9G root disk brw-rw---T raid1 /
    |-sda3 swap 34ad7ff9-099c-4eac-b6af-c2998cf1c237 0 0 489M root disk brw-rw---T part [SWAP]
    |-sda4 ext4 49bbd778-1a26-4a49-8b9b-99742b7a9091 0 0 1.8T root disk brw-rw---T part /DataVolume
    |-sda5 0 0 95M root disk brw-rw---T part
    |-sda6 0 0 96M root disk brw-rw---T part
    |-sda7 0 0 1M root disk brw-rw---T part
    `-sda8 0 0 2M root disk brw-rw---T part
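One follow-up worth considering after a repair like this: pin the array in mdadm.conf so the next boot assembles it by UUID instead of relying on in-kernel autodetection (a sketch; /etc/mdadm/mdadm.conf is the common Debian location and may differ on the My Cloud firmware):

```shell
# Append the current ARRAY definition (matching the mdadm -Esv output
# above) to the config file so assembly is pinned by UUID.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```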