Lab 17.1 Creating Software Raid - mdadm: RUN_ARRAY failed: Invalid argument

ultraninja Posts: 20
edited December 2015 in LFS201 Class Forum

I'm using the LFS Ubuntu 14.1 lab VM.

Basically, I added an additional 1GB virtual drive to the VM in Virtualbox. Partitioned it with fdisk.

pvcreate /dev/sdb1

vgcreate VG /dev/sdb1

lvcreate -L 200M -n MD1 VG

lvcreate -L 200M -n MD2 VG

root@ubuntu:/home/student# lvdisplay
--- Logical volume ---
LV Path /dev/VG/MD1
LV Name MD1
VG Name VG
LV UUID BVR2X8-jltC-MCSd-WulR-2IXQ-uc7c-6SEjvr
LV Write Access read/write
LV Creation host, time ubuntu, 2015-12-28 15:16:00 -0600
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

--- Logical volume ---
LV Path /dev/VG/MD2
LV Name MD2
VG Name VG
LV UUID A8P9JO-RKwK-XlUB-Ifw7-mrtI-yfFO-zV3lQY
LV Write Access read/write
LV Creation host, time ubuntu, 2015-12-28 15:16:03 -0600
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

root@ubuntu:/home/student# mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/VG/MD1 /dev/VG/MD2
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 204608K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Invalid argument

root@ubuntu:/home/student# cat /proc/mdstat
Personalities :
unused devices: <none>


I've also tried adding 2x 1GB virtual disks in Virtualbox, then trying to create a raid array with those 2 disks. I get the same error: "mdadm: RUN_ARRAY failed: Invalid argument"

After battling with this for a few hours, reading man pages, and googling, I'm still getting the error.

Comments

  • coop Posts: 289
    What kernel are you running? If you are running the custom kernel on the LF VM,
    it is likely not configured for RAID properly; for *this course* you are meant to use the stock distro kernel (the custom kernels are for the kernel development courses).
    Check uname -r; if it is reporting 4.3.0 or whatever, you are probably toast.

    If you are using the Ubuntu stock kernel (choose it at boot time) you should be OK.
    Check if CONFIG_MD_RAID* is set (look at /boot/config*).

    If this is not the problem, let us know.
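    As a concrete check, something like the following should work (a sketch; it assumes a Debian/Ubuntu-style /boot/config-<version> file):

    ```shell
    # Show the running kernel version, then look for md/RAID options in its config
    uname -r
    grep 'CONFIG_MD_RAID' "/boot/config-$(uname -r)"
    # "=y" or "=m" means RAID is built in or available as a module;
    # "# ... is not set" means the kernel was built without it
    ```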
  • ultraninja Posts: 20
    edited December 2015
    Thanks for the reply. Yup, it was the kernel. I just installed the generic kernel from the repositories. I have 2 remaining questions:
    1) How can I check if RAID is supported in a booted kernel?
    root@ubuntu:/boot# grep RAID config-`uname -r`
    CONFIG_RAID_ATTRS=m
    CONFIG_BLK_DEV_3W_XXXX_RAID=m
    CONFIG_SCSI_AACRAID=m
    CONFIG_MEGARAID_NEWGEN=y
    CONFIG_MEGARAID_MM=m
    CONFIG_MEGARAID_MAILBOX=m
    CONFIG_MEGARAID_LEGACY=m
    CONFIG_MEGARAID_SAS=m
    CONFIG_SCSI_PMCRAID=m
    CONFIG_MD_RAID0=m
    CONFIG_MD_RAID1=m
    CONFIG_MD_RAID10=m
    CONFIG_MD_RAID456=m
    CONFIG_DM_RAID=m
    CONFIG_DMA_ENGINE_RAID=y
    CONFIG_ASYNC_RAID6_TEST=m
    CONFIG_ASYNC_RAID6_RECOV=m
    CONFIG_RAID6_PQ=m
    
    root@ubuntu:/boot# grep RAID config-4.2.0 
    # CONFIG_RAID_ATTRS is not set
    # CONFIG_BLK_DEV_3W_XXXX_RAID is not set
    # CONFIG_SCSI_AACRAID is not set
    # CONFIG_MEGARAID_NEWGEN is not set
    # CONFIG_MEGARAID_LEGACY is not set
    # CONFIG_MEGARAID_SAS is not set
    # CONFIG_SCSI_PMCRAID is not set
    # CONFIG_MD_RAID0 is not set
    # CONFIG_MD_RAID1 is not set
    # CONFIG_MD_RAID10 is not set
    # CONFIG_MD_RAID456 is not set
    # CONFIG_DM_RAID is not set
    CONFIG_RAID6_PQ=m
    

    I'm assuming the commented-out "# CONFIG_... is not set" lines indicate the 4.2 kernel wasn't compiled with RAID support.

    2) I created the raid array with mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda3 /dev/sda4, yet when I check the array it's actually /dev/md127. I'm not sure why it's md127 rather than md0. Maybe my mdadm.conf isn't correct.
    student@ubuntu:~$ sudo fdisk -l
    
    Disk /dev/sda: 42.9 GB, 42949672960 bytes
    255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000045d1
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048    37750783    18874368   83  Linux
    /dev/sda2        37752830    41940991     2094081    5  Extended
    /dev/sda3        41940992    42350591      204800   83  Linux
    /dev/sda4        42350592    42760191      204800   83  Linux
    /dev/sda5        37752832    41940991     2094080   82  Linux swap / Solaris
    
    Disk /dev/md127: 209 MB, 209518592 bytes
    2 heads, 4 sectors/track, 51152 cylinders, total 409216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md127 doesn't contain a valid partition table
    
    root@ubuntu:/home/student# cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active (auto-read-only) raid1 sda3[0] sda4[1]
          204608 blocks super 1.2 [2/2] [UU]
          
    unused devices: <none>
    
    

    Thanks for the help
  • Hi,

    You can check if the md support is included in the kernel config file. So just do:

    grep _MD_ /boot/config-`uname -r`

    It is usually included as modules.

    Happy new year!
    Luis.
  • The reason I was getting /dev/md127 rather than /dev/md0:

    Ubuntu Forums - RAID starting at md127 instead of md0

    1: mdadm.conf needed to be edited to remove the --name directive.
    For example:
    ARRAY /dev/md0 UUID=e4665ceb:15f8e4b6:b186d497:7d365254
    

    2: You need to update initramfs so it contains your mdadm.conf settings during boot.
    sudo update-initramfs -u
    
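    One way to get a clean ARRAY line is to take the output of mdadm --detail --scan and strip the metadata= and name= fields (a sketch; the sample line below stands in for the command's real output on a live system):

    ```shell
    # Sample output of `mdadm --detail --scan`; on a real system,
    # pipe the command itself instead of echoing this sample line
    sample='ARRAY /dev/md0 metadata=1.2 name=LFS-VirtualBox:0 UUID=e4665ceb:15f8e4b6:b186d497:7d365254'

    # Drop the metadata= and name= fields, leaving only device + UUID
    printf '%s\n' "$sample" | sed -E 's/ (metadata|name)=[^ ]+//g'
    # -> ARRAY /dev/md0 UUID=e4665ceb:15f8e4b6:b186d497:7d365254
    ```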

    Looks good now:
    root@LFS-VirtualBox:~# mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Thu Jan 21 14:27:14 2016
         Raid Level : raid1
         Array Size : 204608 (199.85 MiB 209.52 MB)
      Used Dev Size : 204608 (199.85 MiB 209.52 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Thu Jan 21 15:32:57 2016
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : LFS-VirtualBox:0  (local to host LFS-VirtualBox)
               UUID : 8b25d1ce:4b396710:01d76fec:b0ba507d
             Events : 32
    
        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           1       8       18        1      active sync   /dev/sdb2
    root@LFS-VirtualBox:~# cat /proc/mdstat 
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid1 sdb2[1] sdb1[0]
          204608 blocks super 1.2 [2/2] [UU]
          
    unused devices: <none>
    
  • I'm glad to know that you solved it!

    Regards,
    Luis.