
Lab 17.1 Creating Software Raid - mdadm: RUN_ARRAY failed: Invalid argument

Posts: 20

I'm using the LFS Ubuntu 14.1 lab VM.

Basically, I added an additional 1 GB virtual drive to the VM in VirtualBox and partitioned it with fdisk. Then:

pvcreate /dev/sdb1

vgcreate VG /dev/sdb1

lvcreate -L 200M -n MD1 VG

lvcreate -L 200M -n MD2 VG
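(For anyone following along, a quick way to confirm the PV, the VG, and both LVs exist before building the array:)

pvs
vgs
lvs VG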

root@ubuntu:/home/student# lvdisplay
--- Logical volume ---
LV Path /dev/VG/MD1
LV Name MD1
VG Name VG
LV UUID BVR2X8-jltC-MCSd-WulR-2IXQ-uc7c-6SEjvr
LV Write Access read/write
LV Creation host, time ubuntu, 2015-12-28 15:16:00 -0600
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

--- Logical volume ---
LV Path /dev/VG/MD2
LV Name MD2
VG Name VG
LV UUID A8P9JO-RKwK-XlUB-Ifw7-mrtI-yfFO-zV3lQY
LV Write Access read/write
LV Creation host, time ubuntu, 2015-12-28 15:16:03 -0600
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

root@ubuntu:/home/student# mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/VG/MD1 /dev/VG/MD2
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 204608K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Invalid argument

root@ubuntu:/home/student# cat /proc/mdstat
Personalities :
unused devices: <none>


I've also tried adding two 1 GB virtual disks in VirtualBox and creating a RAID array from those two whole disks; I get the same error: "mdadm: RUN_ARRAY failed: Invalid argument".

After battling with this for a few hours, reading man pages, and googling, I'm still getting the error.
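(For anyone else debugging this: the kernel log usually records why RUN_ARRAY was rejected, so it is worth checking right after the failed create:)

dmesg | tail
# look for md/raid lines (e.g. a missing RAID personality) explaining why the array would not start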


Comments

  • Posts: 916
    What kernel are you running? If you are running the custom kernel on the LF VM,
    it is likely not configured for RAID properly; for *this course* you are meant to use the stock distro kernel (the custom kernels are for the kernel development courses).
    Check uname -r; if it is reporting 4.3.0 or similar, you are probably toast.

    If you are using the Ubuntu stock kernel (chosen at boot time) you should be OK.
    Check whether CONFIG_MD_RAID* is set (look at /boot/config*).

    If this is not the problem, let us know.
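    For example, a minimal check along those lines (a sketch, assuming the stock Ubuntu layout with config files under /boot):

    # Which kernel is actually booted?
    uname -r
    # Are the md RAID personalities enabled? (=m means module, =y means built in)
    grep 'CONFIG_MD_RAID' /boot/config-$(uname -r)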
  • Posts: 20
    edited December 2015
    Thanks for the reply. Yup, it was the kernel; I just installed the generic kernel from the repositories. I have two remaining questions:
    1) How can I check if RAID is supported in a booted kernel?
    root@ubuntu:/boot# grep RAID config-`uname -r`
    CONFIG_RAID_ATTRS=m
    CONFIG_BLK_DEV_3W_XXXX_RAID=m
    CONFIG_SCSI_AACRAID=m
    CONFIG_MEGARAID_NEWGEN=y
    CONFIG_MEGARAID_MM=m
    CONFIG_MEGARAID_MAILBOX=m
    CONFIG_MEGARAID_LEGACY=m
    CONFIG_MEGARAID_SAS=m
    CONFIG_SCSI_PMCRAID=m
    CONFIG_MD_RAID0=m
    CONFIG_MD_RAID1=m
    CONFIG_MD_RAID10=m
    CONFIG_MD_RAID456=m
    CONFIG_DM_RAID=m
    CONFIG_DMA_ENGINE_RAID=y
    CONFIG_ASYNC_RAID6_TEST=m
    CONFIG_ASYNC_RAID6_RECOV=m
    CONFIG_RAID6_PQ=m

    root@ubuntu:/boot# grep RAID config-4.2.0
    # CONFIG_RAID_ATTRS is not set
    # CONFIG_BLK_DEV_3W_XXXX_RAID is not set
    # CONFIG_SCSI_AACRAID is not set
    # CONFIG_MEGARAID_NEWGEN is not set
    # CONFIG_MEGARAID_LEGACY is not set
    # CONFIG_MEGARAID_SAS is not set
    # CONFIG_SCSI_PMCRAID is not set
    # CONFIG_MD_RAID0 is not set
    # CONFIG_MD_RAID1 is not set
    # CONFIG_MD_RAID10 is not set
    # CONFIG_MD_RAID456 is not set
    # CONFIG_DM_RAID is not set
    CONFIG_RAID6_PQ=m

    I'm assuming the commented-out "# CONFIG_... is not set" lines indicate the custom 4.2 kernel wasn't compiled with RAID support.
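    A runtime check that doesn't depend on files in /boot may also be worth noting here (a sketch; raid1 is just one personality):

    # Personalities the running md driver has loaded (empty until the modules are loaded)
    cat /proc/mdstat
    # Loading the module fails outright if the kernel has no RAID1 support
    sudo modprobe raid1 && lsmod | grep raid1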

    2) I created the RAID array with mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda3 /dev/sda4, yet when I check the array it's actually /dev/md127. I'm not sure why it's md127 rather than md0. Maybe my mdadm.conf isn't correct.
    student@ubuntu:~$ sudo fdisk -l

    Disk /dev/sda: 42.9 GB, 42949672960 bytes
    255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000045d1

    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 37750783 18874368 83 Linux
    /dev/sda2 37752830 41940991 2094081 5 Extended
    /dev/sda3 41940992 42350591 204800 83 Linux
    /dev/sda4 42350592 42760191 204800 83 Linux
    /dev/sda5 37752832 41940991 2094080 82 Linux swap / Solaris

    Disk /dev/md127: 209 MB, 209518592 bytes
    2 heads, 4 sectors/track, 51152 cylinders, total 409216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/md127 doesn't contain a valid partition table

    root@ubuntu:/home/student# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sda3[0] sda4[1]
    204608 blocks super 1.2 [2/2] [UU]
    unused devices: <none>

    Thanks for the help
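    (For anyone hitting the same md127 surprise: the name recorded in the array's superblock can be inspected directly; a sketch, where the printed values are illustrative, not from this machine:)

    sudo mdadm --detail --scan
    # prints something like: ARRAY /dev/md/0 metadata=1.2 name=ubuntu:0 UUID=...
    # an array whose name/UUID isn't claimed by mdadm.conf gets auto-assembled as md127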
  • Hi,

    You can check if the md support is included in the kernel config file. So just do:

    grep _MD_ /boot/config-`uname -r`

    It is usually included as modules.

    Happy new year!
    Luis.
  • The reason I was getting /dev/md127 rather than /dev/md0:

    Ubuntu Forums - RAID starting at md127 instead of md0

    1: mdadm.conf needed to be edited to remove the name= directive from the ARRAY line.
    For example:
    ARRAY /dev/md0 UUID=e4665ceb:15f8e4b6:b186d497:7d365254

    2: You need to update the initramfs so it contains your mdadm.conf settings during boot.
    sudo update-initramfs -u
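    (If you'd rather not hand-edit the line, the ARRAY entry can be regenerated; a sketch, assuming Ubuntu's config path /etc/mdadm/mdadm.conf:)

    # Show the ARRAY line mdadm derives from the running array
    sudo mdadm --detail --scan
    # Append it to the config (drop the name=... field if you want /dev/md0 to stick),
    # then rebuild the initramfs so assembly at boot sees it
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u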

    Looks good now:
    root@LFS-VirtualBox:~# mdadm --detail /dev/md0
    /dev/md0:
    Version : 1.2
    Creation Time : Thu Jan 21 14:27:14 2016
    Raid Level : raid1
    Array Size : 204608 (199.85 MiB 209.52 MB)
    Used Dev Size : 204608 (199.85 MiB 209.52 MB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Jan 21 15:32:57 2016
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Name : LFS-VirtualBox:0 (local to host LFS-VirtualBox)
    UUID : 8b25d1ce:4b396710:01d76fec:b0ba507d
    Events : 32

    Number Major Minor RaidDevice State
    0 8 17 0 active sync /dev/sdb1
    1 8 18 1 active sync /dev/sdb2

    root@LFS-VirtualBox:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb2[1] sdb1[0]
    204608 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
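    (For completeness: tearing the lab array back down afterwards, using the device names from the output above, is roughly:)

    sudo mdadm --stop /dev/md0
    # wipe the md superblocks so the partitions stop auto-assembling at boot
    sudo mdadm --zero-superblock /dev/sdb1 /dev/sdb2
    # then remove the ARRAY line from /etc/mdadm/mdadm.conf and re-run: sudo update-initramfs -u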
  • I'm glad to know that you solved it!

    Regards,
    Luis.
