
RHEL/CentOS 6.x software RAID LIVE! Both LVM and standard partitions, with GRUB

It seems there are still many machines out in the world today that need software RAID to protect their data. Recently I've been working on some POS machines that can house 2 drives but do not have any hardware RAID option. This post walks through creating a software RAID1 on a live system, covers changing partition sizes along the way, and handles systems that use LVM as well as those with standard partitions. (I am a big fan of LVM and use it as much as possible, even on small drives.) This post is a bit in-depth; follow the instructions in order and do not reboot until told to. It works on both physical and virtual machines (though you will rarely need it on a virtual machine), and it is always best to use two drives of the same size, or to create partitions small enough that the same layout fits on both drives.

Make sure mdadm is installed by running yum install mdadm -y
Install the 2nd drive and reboot, then verify that the new drive is visible. In my case I've added sdb, which is the same size as my first disk. The 2nd disk can be bigger or smaller, but the partitions will then need to be adjusted so that the same layout fits on both drives.
[root@cent6 ~]# cat /proc/partitions
major minor #blocks name
8 0 8388608 sda
8 1 1048576 sda1
8 2 7339008 sda2
8 16 8388608 sdb
253 0 6287360 dm-0
253 1 1048576 dm-1

Layout of my original disk
[root@cent6 ~]# fdisk -l /dev/sda
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d4e48
Device Boot Start End Blocks Id System
/dev/sda1 * 1 131 1048576 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 131 1045 7339008 8e Linux LVM
[root@cent6 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 cent6vg lvm2 a--u 7.00g 0
[root@cent6 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cent6vg 1 2 0 wz--n- 7.00g 0
[root@cent6 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root cent6vg -wi-ao---- 6.00g
swap cent6vg -wi-ao---- 1.00g

You can see that sda has 2 partitions: /dev/sda1 for /boot and /dev/sda2 for LVM. The LVM side contains my root and swap logical volumes in the cent6vg volume group.

Check the 2nd disk to make sure its partition table is empty and the disk is clean.
[root@cent6 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000eb497
Device Boot Start End Blocks Id System
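
If sdb is not clean and still shows leftover partitions, one quick way to wipe it (just a sketch, and it assumes nothing on sdb needs to be kept) is to zero out the first sector, which holds the MBR and partition table, and then have the kernel re-read it:
# destroys the partition table (and boot code) on the new disk
dd if=/dev/zero of=/dev/sdb bs=512 count=1
# re-read the now-empty partition table (or simply reboot)
partprobe /dev/sdb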

IF you want to keep the partition layout the same, you can simply clone it using sfdisk. The following command takes the partition table from sda and applies it to sdb. If there is any data on sdb it will be overwritten.
[root@cent6 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb --force
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 1044 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0 - 0 0 0 Empty
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 2099199 2097152 83 Linux
/dev/sdb2 2099200 16777215 14678016 8e Linux LVM
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

[root@cent6 ~]# cat /proc/partitions
major minor #blocks name

8 0 8388608 sda
8 1 1048576 sda1
8 2 7339008 sda2
8 16 8388608 sdb
8 17 1048576 sdb1
8 18 7339008 sdb2
253 0 6287360 dm-0
253 1 1048576 dm-1

You can see that the partition table on sdb was copied exactly from sda.

IF you want to change the partition layout for some reason, such as increasing the size of /boot or of swap, you will have to create the partitions manually using either parted or fdisk, and if you want LVM on the root disk you can create a new LVM structure and use it as well. Just make sure that each new partition is bigger than the used space on the corresponding original partition (a sketch follows).
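
For example, a non-interactive way to lay out sdb with a bigger /boot might look like this (a sketch only; the 2 GiB /boot and the exact boundaries are hypothetical, so adjust the sizes to your own disk and make sure everything still fits):
# label the new disk and create a 2 GiB boot partition plus an LVM partition with the rest
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 1MiB 2GiB
parted -s /dev/sdb set 1 boot on
parted -s /dev/sdb mkpart primary 2GiB 100%
parted -s /dev/sdb set 2 lvm on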

Clear any old md superblocks on the new disk. In my example I only have /dev/sdb1 and /dev/sdb2, so I will clear those.
[root@cent6 ~]# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
[root@cent6 ~]# mdadm --zero-superblock /dev/sdb2
mdadm: Unrecognised md component device - /dev/sdb2

Since my devices did not have any superblocks to clear, mdadm reports them as Unrecognised, which is exactly what we want.

Time to create the arrays, which also loads the needed md kernel drivers. (For my example I will be creating RAID1, but you can create RAID0/RAID5/etc. as well.)
[root@cent6 ~]# mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: array /dev/md0 started.
[root@cent6 ~]# mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm: array /dev/md1 started.
[root@cent6 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sdb2[1]
7338944 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
1048512 blocks [2/1] [_U]

unused devices: <none>

md0 will become the new /boot and md1 will become the new PV. If you are not using LVM, you can create multiple partitions for swap, the root disk, etc. and follow the same steps to create additional RAID sets (see the sketch below).
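
For instance, if you were skipping LVM and had carved out a third partition on sdb for swap, the extra array would be created the same way (a sketch; /dev/sdb3 and md2 are hypothetical names for this example):
# one more degraded RAID1 mirror, this time for swap
mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-disks=2 missing /dev/sdb3
mkswap /dev/md2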

You will see that I created 2 RAID1 devices, each with only 1 disk in it for now. Let's find out what type of filesystem is on /dev/sda1 (/boot) by looking at mount, and then create the same filesystem on md0:
[root@cent6 ~]# mount | grep boot
/dev/sda1 on /boot type ext4 (rw)
mkfs.ext4 /dev/md0

FOR LVM (as in this example): the PV will be migrated from sda2 over to md1, and then sda2 will be joined into md1. We will pvcreate md1, extend the VG onto it, pvmove the data, reduce the VG off of /dev/sda2, remove the original PV label from /dev/sda2, and finally add /dev/sda2 into md1.

[root@cent6 ~]# pvcreate /dev/md1
Physical volume "/dev/md1" successfully created
[root@cent6 ~]# vgextend cent6vg /dev/md1
Volume group "cent6vg" successfully extended
[root@cent6 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 cent6vg lvm2 a--u 7.00g 7.00g
/dev/sda2 cent6vg lvm2 a--u 7.00g 0
[root@cent6 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cent6vg 2 2 0 wz--n- 13.99g 7.00g
[root@cent6 ~]# pvmove /dev/sda2 /dev/md1
/dev/sda2: Moved: 0.6%
/dev/sda2: Moved: 9.4%
/dev/sda2: Moved: 35.9%
/dev/sda2: Moved: 85.7%
/dev/sda2: Moved: 100.0%
[root@cent6 ~]# vgreduce cent6vg /dev/sda2
Removed "/dev/sda2" from volume group "cent6vg"
[root@cent6 ~]# pvremove /dev/sda2
Labels on physical volume "/dev/sda2" successfully wiped
[root@cent6 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 cent6vg lvm2 a--u 7.00g 0
[root@cent6 ~]# cat /proc/partitions
major minor #blocks name

8 0 8388608 sda
8 1 1048576 sda1
8 2 7339008 sda2
8 16 8388608 sdb
8 17 1048576 sdb1
8 18 7339008 sdb2
253 0 6287360 dm-0
253 1 1048576 dm-1
9 0 1048512 md0
9 1 7338944 md1
[root@cent6 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cent6vg-root
5.8G 695M 4.8G 13% /
tmpfs 499M 0 499M 0% /dev/shm
/dev/sda1 976M 32M 893M 4% /boot
[root@cent6 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 cent6vg lvm2 a--u 7.00g 0
[root@cent6 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sdb2[1]
7338944 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
1048512 blocks [2/1] [_U]

unused devices: <none>

The following will vastly speed up the RAID syncing process (the value is in KB/sec):
[root@cent6 ~]# echo 100000 > /proc/sys/dev/raid/speed_limit_min
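
If the resync still crawls you can raise the matching ceiling as well (500000 here is just an example value, also in KB/sec):
echo 500000 > /proc/sys/dev/raid/speed_limit_max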

We will now add sda2 into md1
[root@cent6 ~]# mdadm --add /dev/md1 /dev/sda2
mdadm: added /dev/sda2
[root@cent6 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sda2[2] sdb2[1]
7338944 blocks [2/1] [_U]
[>....................] recovery = 4.1% (303488/7338944) finish=1.5min speed=75872K/sec

md0 : active raid1 sdb1[1]
1048512 blocks [2/1] [_U]

unused devices: <none>

FOR standard partitions instead of LVM:
Run mkfs with the same filesystem type on the other md* devices such as md1…, then mount the md1 device on /mnt and start the copy. (I am a big fan of rsync, so I will use it here.)
mount /dev/md1 /mnt
rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt

We wouldn't be able to add /dev/sda2… onto md1… yet since they are still in use, so we need to configure the initramfs and grub and reboot into the new environment first. Follow the initramfs and grub steps below, and also use blkid to update the fstab on the copy with the UUIDs for /boot, /, and any other filesystems needed (a sketch follows).
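
For the standard-partition case, remember that the fstab you need to edit is the one on the copy, /mnt/etc/fstab, since that copy is what you will boot into. A sketch of what the relevant lines might end up looking like (the UUID placeholders are illustrative; take the real values from your own blkid output):
# /mnt/etc/fstab - point / and /boot at the new md devices
UUID=<uuid-of-md1-from-blkid>   /       ext4    defaults        1 1
UUID=<uuid-of-md0-from-blkid>   /boot   ext4    defaults        1 2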

Please wait until the syncing process is complete before proceeding. You can run watch cat /proc/mdstat, which refreshes every 2 seconds, to keep tabs on the progress.
[root@cent6 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sda2[0] sdb2[1]
7338944 blocks [2/2] [UU]

md0 : active raid1 sdb1[1]
1048512 blocks [2/1] [_U]

unused devices: <none>

My md1 is now in sync and no longer degraded since it contains both sda2 and sdb2.

Now to generate some config files, copy /boot, set up grub and fstab, and generate a new initramfs.
Generate mdadm.conf:
[root@cent6 ~]# mdadm --examine --scan > /etc/mdadm.conf
[root@cent6 ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 UUID=fce7e007:9d9eb32e:4939fa6b:98dab77a
ARRAY /dev/md1 UUID=805afa6e:f9f16540:4939fa6b:98dab77a

We want to update fstab so that it uses the new md0 device instead of /dev/sda1.
[root@cent6 ~]# blkid
/dev/sda1: UUID="d36bd560-e7ed-4f12-a18c-2369b53fdc0b" TYPE="ext4"
/dev/sda2: UUID="805afa6e-f9f1-6540-4939-fa6b98dab77a" TYPE="linux_raid_member"
/dev/sdb1: UUID="fce7e007-9d9e-b32e-4939-fa6b98dab77a" TYPE="linux_raid_member"
/dev/sdb2: UUID="805afa6e-f9f1-6540-4939-fa6b98dab77a" TYPE="linux_raid_member"
/dev/mapper/cent6vg-root: UUID="d8e0b910-ea14-4dad-bea2-68bf90a8fdec" TYPE="ext4"
/dev/mapper/cent6vg-swap: UUID="6959a544-7e9a-4db8-9fae-2b4d070e4c58" TYPE="swap"
/dev/md0: UUID="f075c927-28c4-42c4-bd81-acda4abb7159" TYPE="ext4"
/dev/md1: UUID="3N8qhj-kTxw-EXZQ-smqm-Hfgv-k2ja-oA70G9" TYPE="LVM2_member"
[root@cent6 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Dec 14 22:37:18 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/cent6vg-root / ext4 defaults 1 1
UUID=d36bd560-e7ed-4f12-a18c-2369b53fdc0b /boot ext4 defaults 1 2
/dev/mapper/cent6vg-swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0

We will replace d36bd560-e7ed-4f12-a18c-2369b53fdc0b with f075c927-28c4-42c4-bd81-acda4abb7159 from the blkid output above.
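
You can make that swap by hand with vi, or with a quick one-liner (shown with the UUIDs from this example; substitute your own):
# keep a backup, then swap the old /boot UUID for the new md0 UUID in fstab
cp /etc/fstab /etc/fstab.bak
sed -i 's/d36bd560-e7ed-4f12-a18c-2369b53fdc0b/f075c927-28c4-42c4-bd81-acda4abb7159/' /etc/fstab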

Create a new initramfs
[root@cent6 boot]# cd /boot
[root@cent6 boot]# pwd
/boot
[root@cent6 boot]# ls
config-2.6.32-642.el6.x86_64 initramfs-2.6.32-642.el6.x86_64.img System.map-2.6.32-642.el6.x86_64
efi lost+found vmlinuz-2.6.32-642.el6.x86_64
grub symvers-2.6.32-642.el6.x86_64.gz
[root@cent6 boot]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
[root@cent6 boot]# dracut --mdadmconf --add-drivers "raid0 raid1 raid5 raid10" --filesystems "ext4 ext3 swap tmpfs devpts sysfs proc" --force /boot/initramfs-$(uname -r).img $(uname -r)
[root@cent6 boot]# ls
config-2.6.32-642.el6.x86_64 initramfs-2.6.32-642.el6.x86_64.img symvers-2.6.32-642.el6.x86_64.gz
efi initramfs-2.6.32-642.el6.x86_64.img.old System.map-2.6.32-642.el6.x86_64
grub lost+found vmlinuz-2.6.32-642.el6.x86_64

Time to edit the /boot/grub/grub.conf file. We will remove rd_NO_MD if it exists, add a fallback entry, and add a 2nd entry for the 2nd drive so that if the first drive fails or is removed the system will fall back and boot from the 2nd drive.
BEFORE :
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/cent6vg-root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS 6 (2.6.32-642.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=/dev/mapper/cent6vg-root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=cent6vg/swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=cent6vg/root rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-642.el6.x86_64.img

AFTER :
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/cent6vg-root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
#splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
title CentOS 6 (2.6.32-642.el6.x86_64)
root (hd1,0)
kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=/dev/mapper/cent6vg-root LANG=en_US.UTF-8 rd_LVM_LV=cent6vg/swap SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=cent6vg/root
initrd /initramfs-2.6.32-642.el6.x86_64.img
title CentOS 6 (2.6.32-642.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=/dev/mapper/cent6vg-root LANG=en_US.UTF-8 rd_LVM_LV=cent6vg/swap SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=cent6vg/root
initrd /initramfs-2.6.32-642.el6.x86_64.img

I also removed some extra items from the kernel lines that are not really needed. Also, we want hd1 to be first so that /dev/md0 will be mounted for /boot, which lets us add /dev/sda1 into md0 after the reboot.

Now to sync /boot over to md0:
[root@cent6 ~]# mount /dev/md0 /mnt
[root@cent6 ~]# rsync -ravt /boot/ /mnt/
sending incremental file list
./
.vmlinuz-2.6.32-642.el6.x86_64.hmac
System.map-2.6.32-642.el6.x86_64
config-2.6.32-642.el6.x86_64
initramfs-2.6.32-642.el6.x86_64.img
initramfs-2.6.32-642.el6.x86_64.img.old
symvers-2.6.32-642.el6.x86_64.gz
vmlinuz-2.6.32-642.el6.x86_64
efi/
efi/EFI/
efi/EFI/redhat/
efi/EFI/redhat/grub.efi
grub/
grub/device.map
grub/e2fs_stage1_5
grub/fat_stage1_5
grub/ffs_stage1_5
grub/grub.conf
grub/iso9660_stage1_5
grub/jfs_stage1_5
grub/menu.lst -> ./grub.conf
grub/minix_stage1_5
grub/reiserfs_stage1_5
grub/splash.xpm.gz
grub/stage1
grub/stage2
grub/ufs2_stage1_5
grub/vstafs_stage1_5
grub/xfs_stage1_5
lost+found/

sent 53505171 bytes received 475 bytes 35670430.67 bytes/sec
total size is 53496849 speedup is 1.00
[root@cent6 ~]# umount /mnt

Now to install grub into the MBR of both disks:
[root@cent6 ~]# grub
Probing devices to guess BIOS drives. This may take a long time.

GNU GRUB version 0.97 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename.]
grub> root (hd0,0)
root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 27 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+27 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> root (hd1,0)
root (hd1,0)
Filesystem type is ext2fs, partition type 0x83
grub> setup (hd1)
setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 27 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+27 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> quit
quit

Now we are ready for our first reboot. Make sure that you double-check your /boot/grub/grub.conf and /etc/fstab and match up the UUIDs with the blkid output before rebooting.... and REBOOT!

After the reboot we will see that md0 is still missing the 2nd disk, md1 is in sync with both disks, /boot is mounted from /dev/md0, and the PV is on md1.
[root@cent6 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
7338944 blocks [2/2] [UU]

md0 : active raid1 sdb1[1]
1048512 blocks [2/1] [_U]

unused devices: <none>
[root@cent6 ~]# cat /proc/partitions
major minor #blocks name

8 0 8388608 sda
8 1 1048576 sda1
8 2 7339008 sda2
8 16 8388608 sdb
8 17 1048576 sdb1
8 18 7339008 sdb2
9 0 1048512 md0
9 1 7338944 md1
253 0 6287360 dm-0
253 1 1048576 dm-1
[root@cent6 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cent6vg-root
5.8G 697M 4.8G 13% /
tmpfs 499M 0 499M 0% /dev/shm
/dev/md0 992M 53M 889M 6% /boot
[root@cent6 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 cent6vg lvm2 a--u 7.00g 0

Time to add /dev/sda1 onto md0
[root@cent6 ~]# mdadm --add /dev/md0 /dev/sda1
mdadm: added /dev/sda1
[root@cent6 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
7338944 blocks [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[1]
1048512 blocks [2/1] [_U]
[==================>..] recovery = 93.8% (984960/1048512) finish=0.0min speed=196992K/sec

unused devices: <none>

Once md0 is in sync (a quick check is sketched below), run mdadm --examine --scan > /etc/mdadm.conf again just to make sure it is current, and reboot for the final time.... REBOOT!
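
A quick way to confirm the rebuild has finished before that last reboot (just a convenience check) is:
# should show "State : clean" and "Active Devices : 2" once the resync is done
mdadm --detail /dev/md0 | grep -E 'State :|Active Devices'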

Now we are in business! The 2 entries in GRUB mean it will try hd1 first and fall back to hd0 if hd1 has failed, so we have redundancy at the grub/MBR level. Also, mount and pvs show that we are using our md* devices, and /proc/mdstat shows a good status for both drives.

[root@cent6 ~]# mount
/dev/mapper/cent6vg-root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
[root@cent6 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 cent6vg lvm2 a--u 7.00g 0
[root@cent6 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
7338944 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
1048512 blocks [2/2] [UU]

unused devices: <none>

And now we are complete!

I know there may be better or faster ways of doing this, but this process worked for me!

jlim0930
