Boot Arch Linux from an mdadm array

This method lets you put your /boot directory or partition on a raid 0, 5, or 6 array, something that wasn’t possible before grub2. The commands in this write-up were tested in a virtual machine with two 20GB hard disks, partitioned into a couple of raid 0 arrays.

First, boot your Arch install image. Once you’re running in the live environment, configure your partitions. It’s very important that your first partition begin further into the disk than the typical default of 2048 sectors (grub2 embeds its core image in the gap before the first partition, and the raid modules enlarge it); starting at sector 4096 worked for me. My test box has the following partition table on each disk.

[root@archiso ~]# fdisk -l /dev/sda

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2a8e3edb

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        4096    37752831    18874368   fd  Linux raid autodetect
/dev/sda2        37752832    41943039     2095104   fd  Linux raid autodetect
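If you’re partitioning from scratch, a parted session along these lines will reproduce that layout. This is only a sketch assuming the same 20GB disks and sector numbers shown above; adjust the end sectors for your drives, and repeat for /dev/sdb.

parted -s /dev/sda mklabel msdos
parted -s /dev/sda unit s mkpart primary 4096 37752831
parted -s /dev/sda unit s mkpart primary 37752832 41943039
parted -s /dev/sda set 1 boot on
parted -s /dev/sda set 1 raid on
parted -s /dev/sda set 2 raid on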

Next, create your arrays.

mdadm --create /dev/md0 --raid-devices=2 --level=0 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --raid-devices=2 --level=0 /dev/sda2 /dev/sdb2
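Before moving on, it’s worth verifying that both arrays assembled cleanly.

cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1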

Once your arrays are set, run through your normal Arch setup process, skipping all of the bootloader steps. On the “Prepare Hard Drives” step, remember to choose “Manually configure block devices, filesystems, and mountpoints”, and choose the appropriate /dev/mdX devices.

After the install is complete, we have a few more steps to get grub2 installed. Make sure you’ve got networking.

dhcpcd eth0
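If you’re not sure the link came up, a quick ping will confirm it.

ping -c 3 archlinux.org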

Prepare your installed root, and chroot into it.

cp /etc/resolv.conf /mnt/etc/resolv.conf 
mount -o bind /proc/ /mnt/proc/
mount -o bind /sys/ /mnt/sys/
mount -o bind /dev/ /mnt/dev/
chroot /mnt/

Now that we are essentially running within our installed environment, we can install grub2.

pacman -Sy grub2

Ignore pacman’s request to upgrade the system first, and allow it to remove the conflicting “grub” package, as we don’t need it. Next, prepare your mdadm.conf file.

mdadm --examine --scan > /etc/mdadm.conf
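The resulting file should contain one ARRAY line per array, along these lines (the UUIDs and names below are placeholders; yours will differ):

ARRAY /dev/md/0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=archiso:0
ARRAY /dev/md/1 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=archiso:1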

Add “mdadm” to the “HOOKS” array in your /etc/mkinitcpio.conf; it needs to appear before the “filesystems” hook so the arrays are assembled before the root filesystem is mounted. As of this writing, a bug prevents your /boot/grub/grub.cfg from generating correctly unless you uncomment the line “GRUB_TERMINAL_OUTPUT=console” in your /etc/default/grub.
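For reference, the relevant lines end up looking roughly like the following. The HOOKS array here is only an illustration; yours will likely list other hooks, and the important part is that mdadm comes before filesystems.

# in /etc/mkinitcpio.conf
HOOKS="base udev autodetect pata scsi sata mdadm filesystems"

# in /etc/default/grub
GRUB_TERMINAL_OUTPUT=console

With those two edits in place, generate your /boot/grub/grub.cfg and install grub2 to every device in your array.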

grub-mkconfig -o /boot/grub/grub.cfg
grub-install /dev/sda
grub-install /dev/sdb

Regenerate your initramfs image.

mkinitcpio -p linux

You should see a message go by saying:

Custom /etc/mdadm.conf file will be used in initramfs for assembling arrays.

If you don’t see this message, make sure you didn’t miss any steps above. You should now be able to boot from any disk in your array into your new Arch install.

If you land in the busybox recovery shell instead, boot back into your live environment, chroot back into the installed environment, uncomment “GRUB_DISABLE_LINUX_UUID=true” in your /etc/default/grub, and re-run the grub-mkconfig command from above. I’ve seen a udev/UUID bug that sometimes prevents the initramfs mdadm from correctly assembling your arrays.
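After that edit, /etc/default/grub should contain the uncommented line

GRUB_DISABLE_LINUX_UUID=true

which makes grub pass root=/dev/mdX on the kernel command line rather than a UUID. The config rebuild is the same command as before:

grub-mkconfig -o /boot/grub/grub.cfg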

Most of the information in the article was pieced together from https://wiki.archlinux.org/index.php/Software_RAID_and_LVM and https://wiki.archlinux.org/index.php/GRUB2 and maybe a couple of other places.

  • Ionut Biru

    syslinux has support for raid and you can use it straight from installer and is much more simple than grub2

    • http://travishegner.com travis.hegner

      I’m not too familiar with syslinux, but it sounds like I should look into it! Thanks!

    • TehGhodTrole

      > syslinux has support for raid
> and you can use it straight
      > from installer and is much more
      > simple than grub2

      DUH! Syslinux does not support RAID 0.

  • ray clancy

Pleased to find your blog about arch and mdadm raid boot.
I note the earlier date of the post and am interested in performing the feat with Compact Flash devices.

Does your procedure still work with the latest Linux kernel, version 3.3.1-1?
Mdadm has a later version as well, as does mkinitcpio.

    Please advise the status of the procedure with latest upgrades. Thanks

    • http://travishegner.com travis.hegner

      I have not done a fresh install with the newest versions, but I can say that I have several machines installed with that method, and upgraded to the latest versions without issue. I can’t imagine that the procedure has changed at all, but the best thing you can do is try it out and let us know!

  • ray clancy

Performed the procedure using two Maxell CF cards, 16GB each.
Partitioned each device with 100MB of free space at the beginning. Device sdb had its primary 100MB partition as bootable and the rest as primary ext3.
Device sdc had a 100MB primary swap and the rest as primary ext3.
mdadm created /dev/md1 from /dev/sdb2 and /dev/sdc2.
Ran archiso to install arch to the drives from an internet mirror. Skipped manual partitioning with “done”. Utilized the mountpoints step to establish each element: boot, swap and /dev/md1.
    Proceeded to install core elements via internet.
    Setup the config system elements and skipped grub.

Booted into archlinux on another device and mounted /dev/md1 at /mnt/md. cd’d to /mnt/md and set up the chroot into the raid system devices.

Downloaded grub2-bios with pacman.
Ran grub-mkconfig -o /boot/grub/grub.cfg.
Ran grub-install /dev/sdb.
Ran mdadm -D --scan >> ./etc/mdadm.conf.

Ran mkinitcpio -p linux and observed the required mdadm entry in the init sequence.

    Rebooted the /dev/sdb and reached the log-in prompt.

    Thus the system was installed via raid0 with two 16GB CF cards.

    Spent an hour adding xorg,xfce4, vlc,kdenlive,mirage,gparted,hdparm,firefox among others.

Running hdparm on the raid device md1 reports a read speed of 90MB/s, which isn’t indicative of the speed of two devices in raid zero.

This may be due to hdparm not responding to a raid array, or it may mean my system is JBOD.

Perhaps there is another reason for that.

At present, I have reservations concerning the speed reading, but the system is 29mb in size for md1, and boots with the raid0 devices.

    This has loaded the latest linux kernel from the internet mirror.

  • ray clancy

The results for this array, called /dev/md1, indicate no speed increase from the raid array in hdparm:

    sh-4.2# hdparm -tT /dev/md1
    
    /dev/md1:
     Timing cached reads:   1982 MB in  2.00 seconds = 990.90 MB/sec
     Timing buffered disk reads: 272 MB in  3.01 seconds =  90.41 MB/sec
    sh-4.2# hdparm -tT /dev/sda2
    
    /dev/sda2:
     Timing cached reads:   1988 MB in  2.00 seconds = 994.16 MB/sec
     Timing buffered disk reads: 258 MB in  3.02 seconds =  85.33 MB/sec
    sh-4.2# hdparm -tT /dev/sdb2a2
    /dev/sdb2a2: No such file or directory
    sh-4.2# hdparm -tT /dev/sdb2  
    
    /dev/sdb2:
     Timing cached reads:   1990 MB in  2.00 seconds = 994.97 MB/sec
     Timing buffered disk reads: 258 MB in  3.02 seconds =  85.51 MB/sec
    sh-4.2# 
    
    • http://travishegner.com travis.hegner

      Hmm… Seems like you may have a configuration issue with your mdadm array. I have the same type of setup in my laptop with the following results:

      [thegner@it-thegnerlptp ~]$ sudo hdparm -Tt /dev/sda
      
      /dev/sda:
       Timing cached reads:   14692 MB in  1.99 seconds = 7369.05 MB/sec
       Timing buffered disk reads: 294 MB in  3.02 seconds =  97.46 MB/sec
      [thegner@it-thegnerlptp ~]$ sudo hdparm -Tt /dev/sdc
      
      /dev/sdc:
       Timing cached reads:   14152 MB in  1.99 seconds = 7096.72 MB/sec
       Timing buffered disk reads: 294 MB in  3.02 seconds =  97.35 MB/sec
      [thegner@it-thegnerlptp ~]$ sudo hdparm -Tt /dev/md0
      
      /dev/md0:
       Timing cached reads:   14502 MB in  1.99 seconds = 7272.79 MB/sec
       Timing buffered disk reads: 650 MB in  3.01 seconds = 216.28 MB/sec
      

      What is the output of cat /proc/mdstat ?

      Mine looks like:

      [thegner@it-thegnerlptp ~]$ cat /proc/mdstat 
      Personalities : [raid0] 
      md1 : active raid0 sda2[0] sdc2[1]
            9695232 blocks super 1.2 512k chunks
            
      md0 : active raid0 sda1[0] sdc1[1]
            115341312 blocks super 1.2 512k chunks
            
unused devices: <none>
      
  • ray clancy

Solved my dilemma by placing the two drives in the MASTER sata slots, so now the hdparm read speed is 180MB/sec.

Thanks for the reply, and for the procedure that enables grub2 to run on a raid partition, which is necessary for the raid system to function at boot.

    Pleased as punch with the raid0 bootable again!

It may be impacted by the next upgrade of the kernel, but if that occurs it can be overcome with the grub2 approach you provided.

    My system has always been a maverick!

    • http://travishegner.com travis.hegner

      Wonderful! I’m glad you got it working.

  • ray clancy

The latest Linux kernel upgrade, to linux 3.3.4-1, along with an upgrade of hdparm, resulted in the hdparm read speed falling back to normal expected values.

Your post indicates you had the higher read speed readings as well. You may experience the same change as has occurred with my system.

  • http://mjanja.co.ke Alan

    I just finished an Arch install on md RAID1, and found this post helpful.

    In the end, though, I found that the mdadm_udev HOOK was necessary, rather than the apparently deprecated mdadm. Also, with this new hook you don’t even need an /etc/mdadm.conf (and fyi, my fstab has partitions listed by their UUIDs). Hope that helps someone.

  • ray clancy

    Alan:

    Interesting indeed. I tried mdadm_udev several times and it was rejected as no such animal.

    Do you need to download something?