Gentoo Software RAID Installation

Version 0.2 – 02/09/2004 – G.Tiller

Gentoo RAID Installation and other sundry ways to whack your box by G.Tiller

This documentation is free for all to read and use. If something's wrong, please feel free to correct it. I am not the best typist this side of Stone Mountain. I had a spare server and some old WD 4.3 GB drives around, so I thought I would venture down this road for a while.

Gentoo Linux Installation with RAID Using Raidtools.

If you haven't gotten a copy of O'Reilly's book Managing RAID on Linux, I would suggest you get one, as it has a lot of information about RAID. You should also check out the RAID information at The Linux Documentation Project. The HOWTOs have information pertaining to booting RAID using LILO.

Step # 1.) Install identical hard drives on your IDE controllers. One hard drive on each controller is the preferred method. Here is my configuration:

IDE0 - First hard drive
IDE1 - CD drive
IDE2 - Second hard drive  ( Port 1 of Promise Technologies Ultra100 TX2 )
IDE3 - Third hard drive   ( Port 2 of Promise controller )

Note: Identical hard drives will eliminate some of the problems that may occur if you use disks of different sizes and geometries.

Step # 2.) Boot from the Gentoo LiveCD.

Step # 3.) Partition the hard disks. My partition layout is below.

I used WD 4.3 GB Drives in my test server.

/dev/hda1    128MB     bootable  partition type -  fd
/dev/hda2    256MB               partition type -  82 ( swap)
/dev/hda3    1024MB              partition type -  fd
/dev/hda4    remainder           partition type -  fd

The other 2 hard drives were partitioned the same.
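
Since the drives are identical, one way to replicate the layout is to copy the partition table from the first disk to the others, assuming sfdisk is available on the LiveCD ( double-check the target device names first, as this overwrites their partition tables ):

sfdisk -d /dev/hda | sfdisk /dev/hde
sfdisk -d /dev/hda | sfdisk /dev/hdg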

Step # 4.) Load the kernel raid modules you intend to use.

modprobe raid0 raid1 raid5
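
To verify that the personalities actually loaded, check /proc/mdstat; the loaded RAID levels should show up on the Personalities line:

cat /proc/mdstat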

Step # 5.) Create the raidtab file. The raidtab file must reside in /etc.

My raidtab file:

raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          1
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hde1
    raid-disk               1
    device                  /dev/hdg1
    spare-disk              0

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          1
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda2
    raid-disk               0
    device                  /dev/hde2
    raid-disk               1
    device                  /dev/hdg2
    spare-disk              0

raiddev /dev/md2
    raid-level              5
    nr-raid-disks           3
    nr-spare-disks          0
    persistent-superblock   1
    parity-algorithm        left-symmetric
    chunk-size              32
    device                  /dev/hda3
    raid-disk               0
    device                  /dev/hde3
    raid-disk               1
    device                  /dev/hdg3
    raid-disk               2

raiddev /dev/md3
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          1
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda4
    raid-disk               0
    device                  /dev/hde4
    raid-disk               1
    device                  /dev/hdg4
    spare-disk              0

Step # 6.) Create the raid arrays

mkraid /dev/md0 /dev/md1 /dev/md2 /dev/md3
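
If mkraid refuses to run because it finds what looks like existing data or an old superblock on a partition, it can usually be forced ( check the mkraid man page first, as forcing destroys whatever is on those partitions ):

mkraid --really-force /dev/md0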

This may take a while depending on the size of your partitions.

Check /proc/mdstat to see the progress of the arrays.

cat /proc/mdstat - this will show active arrays and those being built, with an estimated finish time.

Step # 7.) Install Gentoo using the normal installation instructions, except use /dev/md(x) instead of the actual hard disk devices. I did not replicate the Gentoo installation instructions here, so be careful not to use this as a complete installation procedure; I only referenced parts of the relevant sections.

Installation Section 4.i. Creating Filesystems

Create your filesystems as you desire. I used ext3.

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md3

swapon /dev/md1     ( Swap doesn't really need to be RAIDed; if you choose not to, you will need multiple swap entries in your fstab file. )
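
For example, if you decide not to RAID swap, fstab would carry one swap entry per disk instead, something like this ( device names assume my partition layout; equal priorities let the kernel stripe across the swap areas ):

/dev/hda2   none   swap   sw,pri=1   0 0
/dev/hde2   none   swap   sw,pri=1   0 0
/dev/hdg2   none   swap   sw,pri=1   0 0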

Installation Section 4.j. Mounting

mount /dev/md3 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md0 /mnt/gentoo/boot

Install Section 6.a. Chrooting

Copy your raidtab file from /etc to /mnt/gentoo/etc.
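
Something like this, run before chrooting:

cp /etc/raidtab /mnt/gentoo/etc/raidtab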

You must also bind the /dev directory to /mnt/gentoo/dev even though you are using the LiveCD version. This is so that the bootloader can find the raid devices as they are not part of the stage-3 tarball.

mount -o bind /dev /mnt/gentoo/dev

Configuring the Kernel

Installation Section 7.c Default: Manual configuration

I used the manual method of configuring and installing the kernel due to problems I encountered using the latest genkernel program. Also I am not using initrd when booting the kernel.

Be sure to configure the RAID options to be compiled into the kernel and not as modules. This is necessary for the kernel to boot and mount root properly from RAID. If you build your RAID support as modules, you must use an initrd to boot and mount root.
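
As a rough sketch, the relevant 2.4 kernel options should end up built in ( =y ) rather than as modules ( =m ); exact option names may vary slightly between kernel versions:

CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID5=y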

Installation Section 8.a Filesystem Information

Edit /etc/fstab and change the partition information to reflect the raid arrays.

For example:  /dev/BOOT changes to /dev/md0
              /dev/ROOT changes to /dev/md3
              /dev/SWAP changes to /dev/md1

Also be sure to set the filesystem type field to the proper filesystem type ( ext3 in my case ).
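
For example, the RAID entries in my fstab ended up looking roughly like this ( mount options are a matter of taste, and I have left out /dev/md2 since where you mount it depends on your own layout ):

/dev/md0    /boot    ext3    noauto,noatime    1 2
/dev/md3    /        ext3    noatime           0 1
/dev/md1    none     swap    sw                0 0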

Installation Section 9.c Using LILO

Get the latest version of LILO ( emerge lilo ). This ensures you have the latest fixes and updates to LILO.

Create a lilo.conf file in /etc.

This information was taken from Boot+Root+Raid+Lilo HOWTO documentation found at the Linux Documentation Project, www.tldp.org.

My lilo.conf file for example:

# Start of Gentoo Linux Raid Configuration

disk=/dev/md3
bios=0x80
sectors=63
heads=255
cylinders=523
partition=/dev/md0
start=63
boot=/dev/hda
map=/boot/map
install=/boot/boot.b

# optional parameters - your choice
prompt
timeout=300
vga=791
# End of optional parameters

# Kernel Configuration
image=/boot/kernel-2.4.20-gentoo-r5
root=/dev/md3
read-only
label=Gentoo
append="hdc=ide-scsi"

The HOWTO documentation shows a second set of lilo entries for the second boot drive, but I can't get it to work. LILO complains of syntax errors when you define those entries twice in lilo.conf. I don't know enough about LILO to tell whether or not it is supposed to allow you to specify two boot devices.

However, I was able to use the example lilo.conf file located in "/usr/doc/lilo-22.5.8-r1" to construct a new lilo.conf that does not complain.

My new lilo.conf is as follows:

# ---------- Start of Lilo Configuration ---------- #

# -- Boot Array -- #
boot=/dev/md0

# -- Auxiliary boot records for a parallel raid array -- #
raid-extra-boot=auto

# -- Disks to boot from -- #
disk=/dev/hda
bios=0x80   # -- first disk on ide0 -- #
disk=/dev/hde
bios=0x82   # -- first disk on ide2 -- #

# -- Wait for the user to choose and set the display mode -- #
prompt
timeout=300
vga=791

# -- Use the Menu interface, lilo > 22.3 -- #
install=menu
menu-title="Gentoo Linux Raid Configuration"
menu-scheme="wk:Wg:wk:Gk"

# -- Set the Default to Boot -- #
default=Gentoo

# -- Both kernels use the same root array -- #
root=/dev/md3
read-only

# -- Kernel Image to boot -- #
image=/boot/kernel-2.4.22-r5
label=Gentoo
append="reboot=warm hdc=ide-scsi"

# ---------- End of Lilo Configuration ---------- #

Be sure to emerge both raidtools and mdadm, otherwise your RAID arrays will not be started.
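
Inside the chroot that is simply:

emerge raidtools mdadm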

Follow the rest of the Installation manual as you desire and then reboot.

This procedure worked well for me.

Gentoo Linux Installation with RAID Using mdadm.

Step # 1.) Install identical hard drives on your IDE controllers. One hard drive on each controller is the preferred method. Here is my configuration:

IDE0 - First hard drive
IDE1 - CD drive
IDE2 - Second hard drive  ( Port 1 of Promise Technologies Ultra100 TX2 )
IDE3 - Third hard drive   ( Port 2 of Promise controller )

Note: Identical hard drives will eliminate some of the problems that may occur if you use disks of different sizes and geometries.

Step # 2.) Boot from the Gentoo LiveCD.

Step # 3.) Partition the hard disks. My partition layout is below.

I used WD 4.3 GB Drives in my test server.

/dev/hda1    128MB     bootable  partition type -  fd
/dev/hda2    256MB               partition type -  82 ( swap )
/dev/hda3    1024MB              partition type -  fd
/dev/hda4    remainder           partition type -  fd

The other 2 hard drives were partitioned the same.

Step # 4.) Load the kernel raid modules you intend to use.

modprobe raid0 raid1 raid5

Step # 5.) Begin creating your raid arrays with mdadm:

CREATE A RAID1 ARRAY

mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/hda1 \
/dev/hde1 --spare-disks=1 /dev/hdg1

or use this command:

mdadm -C /dev/md0 -l1 -n2 /dev/hda1 /dev/hde1 -x1 /dev/hdg1

CREATE A RAID1 ARRAY

mdadm --create /dev/md1 --level=1 --raid-disks=2 /dev/hda2 \
/dev/hde2 --spare-disks=1 /dev/hdg2

or use this command:

mdadm -C /dev/md1 -l1 -n2 /dev/hda2 /dev/hde2 -x1 /dev/hdg2

CREATE A RAID5 ARRAY

mdadm --create /dev/md2 --level=5 --parity=left-symmetric \
--raid-disks=3 /dev/hda3 /dev/hde3 /dev/hdg3

or use this command:

mdadm -C /dev/md2 -l5 -p ls -n3 /dev/hda3 /dev/hde3 /dev/hdg3

CREATE A RAID1 ARRAY

mdadm --create /dev/md3 --level=1 --raid-disks=2 /dev/hda4 \
/dev/hde4 --spare-disks=1 /dev/hdg4

or use this command:

mdadm -C /dev/md3 -l1 -n2 /dev/hda4 /dev/hde4 -x1 /dev/hdg4

This may take a while depending on the size of your partitions.

Check /proc/mdstat to see the progress of the arrays.

cat /proc/mdstat - this will show active arrays and those being built, with an estimated finish time.
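
You can also query an individual array to confirm its state and see which disks are active and which one is the spare:

mdadm --detail /dev/md0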

Step # 6.)  Assembling / Starting your Arrays.

You must next assemble the arrays. ( This is equivalent to using the raidstart command from the raidtools set of programs. ) The mdadm --assemble command uses a configuration file, /etc/mdadm.conf, to determine which arrays and disks to start. After creating my arrays I noticed that I did not need to assemble them; they were already started by the create command. The O'Reilly book did not say this would happen when creating the arrays, but I suppose this is OK and that mdadm --assemble is what starts the arrays at boot time.

Create or add entries to your mdadm.conf file. See the example below.

DEVICE    /dev/hda1 /dev/hde1 /dev/hdg1
ARRAY     /dev/md0 level=1 num-devices=2
          devices=/dev/hda1,/dev/hde1,/dev/hdg1

DEVICE    /dev/hda2 /dev/hde2 /dev/hdg2
ARRAY     /dev/md1 level=1 num-devices=2
          devices=/dev/hda2,/dev/hde2,/dev/hdg2

DEVICE    /dev/hda3 /dev/hde3 /dev/hdg3
ARRAY     /dev/md2 level=5 num-devices=3
          devices=/dev/hda3,/dev/hde3,/dev/hdg3

DEVICE    /dev/hda4 /dev/hde4 /dev/hdg4
ARRAY     /dev/md3 level=1 num-devices=2
          devices=/dev/hda4,/dev/hde4,/dev/hdg4

MAILADDR  root@yourdomain.XXX
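
Rather than typing the ARRAY lines by hand, mdadm can print them for arrays that are already running; the output can be appended to /etc/mdadm.conf and checked against your intended layout:

mdadm --detail --scan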

The commands below will assemble the arrays and start them.

mdadm --assemble --scan /dev/md0
mdadm --assemble --scan /dev/md1
mdadm --assemble --scan /dev/md2
mdadm --assemble --scan /dev/md3
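
With a complete /etc/mdadm.conf in place, a single scan should also be enough to start everything listed in it:

mdadm --assemble --scan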

Step # 7.) Install Gentoo using the normal installation instructions, except use /dev/md(x) instead of the actual hard disk devices. I did not replicate the Gentoo installation instructions here, so be careful not to use this as a complete installation procedure; I only referenced parts of the relevant sections.

Installation Section 4.i. Creating Filesystems

Create your filesystems as you desire. I used ext3.

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md3

swapon /dev/md1     ( Swap doesn't really need to be RAIDed; if you choose not to, you will need multiple swap entries in your fstab file. )

Installation Section 4.j. Mounting

mount /dev/md3 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md0 /mnt/gentoo/boot

Install Section 6.a. Chrooting

Copy your mdadm.conf file from /etc to /mnt/gentoo/etc.

You must also bind the /dev directory to /mnt/gentoo/dev even though you are using the LiveCD version. This is so that the bootloader can find the raid devices as they are not part of the stage-3 tarball.

mount -o bind /dev /mnt/gentoo/dev

Configuring the Kernel

Installation Section 7.c Default: Manual configuration

I used the manual method of configuring and installing the kernel due to problems I encountered using the latest genkernel program. Also, I am not using an initrd when booting the kernel.

Be sure to configure the RAID options to be compiled into the kernel and not as modules. This is necessary for the kernel to boot and mount root properly from RAID. If you build your RAID support as modules, you must use an initrd to boot and mount root.

Installation Section 8.a Filesystem Information

Edit /etc/fstab and change the partition information to reflect the raid arrays.

For example:  /dev/BOOT changes to /dev/md0
              /dev/ROOT changes to /dev/md3
              /dev/SWAP changes to /dev/md1

Also be sure to set the filesystem type field to the proper filesystem type ( ext3 in my case ).

Installation Section 9.c Using LILO

Get the latest version of LILO ( emerge lilo ). This ensures you have the latest fixes and updates to LILO.

Create a lilo.conf file in /etc.

This information was taken from Boot+Root+Raid+Lilo HOWTO documentation found at the Linux Documentation Project, www.tldp.org.

My lilo.conf file for example:

# Start of Gentoo Linux Raid Configuration

disk=/dev/md3
bios=0x80
sectors=63
heads=255
cylinders=523
partition=/dev/md0
start=63
boot=/dev/hda
map=/boot/map
install=/boot/boot.b

# optional parameters - your choice
prompt
timeout=300
vga=791
# End of optional parameters

# Kernel Configuration
image=/boot/kernel-2.4.20-gentoo-r5
root=/dev/md3
read-only
label=Gentoo
append="hdc=ide-scsi"

The HOWTO shows a second set of lilo entries for the second boot drive, but I can't get it to work. LILO complains of syntax errors when you define those entries twice in lilo.conf. I don't know enough about LILO to tell whether or not it is supposed to allow you to specify two boot devices.

However, I was able to use the example lilo.conf file located in "/usr/doc/lilo-22.5.8-r1" to construct a new lilo.conf that does not complain.

My new lilo.conf is as follows:

# ---------- Start of Lilo Configuration ---------- #

# -- Boot Array -- #
boot=/dev/md0

# -- Auxiliary boot records for a parallel raid array -- #
raid-extra-boot=auto

# -- Disks to boot from -- #
disk=/dev/hda
bios=0x80   # -- first disk on ide0 -- #
disk=/dev/hde
bios=0x82   # -- first disk on ide2 -- #

# -- Wait for the user to choose and set the display mode -- #
prompt
timeout=300
vga=791

# -- Use the Menu interface, lilo > 22.3 -- #
install=menu
menu-title="Gentoo Linux Raid Configuration"
menu-scheme="wk:Wg:wk:Gk"

# -- Set the Default to Boot -- #
default=Gentoo

# -- Both kernels use the same root array -- #
root=/dev/md3
read-only

# -- Kernel Image to boot -- #
image=/boot/kernel-2.4.22-r5
label=Gentoo
append="reboot=warm hdc=ide-scsi"

# ---------- End of Lilo Configuration ---------- #

Be sure to emerge both raidtools and mdadm, otherwise your RAID arrays may not be started.

Follow the rest of the Installation manual as you desire, but be sure to edit the file "/etc/init.d/checkfs". This script has to be modified to include the mdadm command necessary to start the RAID array for swap and any other arrays that are not started automatically.

These are the changes I made to “/etc/init.d/checkfs”

line # 41 change this:  if [ -f /proc/mdstat -a -f /etc/raidtab ]
to:                     if [ -f /proc/mdstat -a -f /etc/raidtab -o -f /etc/mdadm.conf ]

line # 48 add this:     if [ -f /etc/raidtab ]
                        then

line # 105 add the following:   elif [ -f /etc/mdadm.conf ]
                                then
                                        mdadm -As
                                        retval=$?
                                fi
                                if [ "${retval}" -gt 0 ]
                                then
                                        rc=1
                                        eend ${retval}
                                else
                                        ewend ${retval}
                                fi

Follow the final instructions in the Installation manual to perform your reboot.

This procedure worked well for me.

Now for some test scenarios.

Server Configuration

MB – ASUS A7N8X
RAM – 512 M
Video – Geforce MX-400 with 64M
Disk Drives – 3 Western Digital 4.3 GB ata-33
CDROM – HP 8200+ cdrw

Test # 1 – Server is booted up and running normally. This test is to fail the primary boot drive “/dev/hda”. ( pulling the power plug )

Note:  I do not recommend doing this to achieve hot-swappability as this could cause severe damage to your MB or Drives.

Result:  Server still up and running; raid-device 0 of all arrays marked as faulty. Reconstruction began using the spare drive. After reconstruction, mdadm --detail shows the faulty raid-device to be 2 and the spare is now active as raid-device 0.

Test # 2 – Rebooting the server with a failed primary boot drive, "/dev/hda". This test is to determine if the alternate boot strategy works as indicated with LILO.

Result:  Reboot with a failed "/dev/hda" was successful. The server booted off of "/dev/hde", which was marked as HDD-2 in the BIOS since it is on the Promise controller.

Test # 3 – Reinsertion of raid-device 0 ( “/dev/hda” ) into the live system. I do not recommend reinserting a drive into the live system.  Your hardware could suffer severe damage.

Result:  System is still up and running. (I put the power plug back in). The disk is not recognized by the system. The device “/dev/hda” no longer exists in the /dev directory.

The system must be rebooted in order for the missing drive to be added back into the /dev directory. ( There may be a way to do this without rebooting, but I don't know how to make the active system detect the drive without doing a reboot. )

After the reboot you must hot-add the drive back into your arrays.

mdadm -a /dev/md0 /dev/hda1
mdadm -a /dev/md1 /dev/hda2
mdadm -a /dev/md2 /dev/hda3
mdadm -a /dev/md3 /dev/hda4

This will invoke reconstruction of the newly added drive. Under raid1 the newly added drive becomes the spare if you had originally configured your arrays with a spare drive.

Under raid5 the newly added drive is again made an active part of the array.

You should also set up a way to monitor your arrays and have it alert you to problems. You can use the "mdadm --monitor --scan" command to alert you through email or some external alerting program.
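
For example, something along these lines should run mdadm in monitor mode as a daemon, polling periodically and mailing alerts to the given address ( the address is a placeholder; check these options against your mdadm man page ):

mdadm --monitor --scan --mail=root@yourdomain.XXX --delay=300 --daemonise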

While these tests are not very detailed ( and not necessarily a very smart way of testing ), they do show that the RAID setup I built does seem to work properly. There are other possible RAID configurations, and other types of failures that can happen, which I don't have the hardware resources to try or the burning desire to destroy my motherboards and drives over.

As with everything your mileage may vary.