While my server suffered some (serious) downtime behind the scenes,
I dove into the concept of RAID setups on Linux. I had already configured
a software RAID to protect my precious data, but had no clue
there were different kinds of RAID setups.
That explained the config of my existing RAID: I had configured my
onboard VIA VT6420 SATA RAID controller through its menu to be a
RAID 1 controller, and then in Linux I configured the same disks with YaST
as a software RAID 1. Yeah, you know: just do, don’t read.
I took this dive because my root disk was at the end of its life.
smartctl -H didn’t give me a pretty sight: just a failing hard disk.
Luckily I was able to rescue most of the files that were
on the root disk; it only held an openSUSE 11.2 installation and some
application config files. But I decided that I wanted my root disk
to be a RAID 1 setup as well.
I ordered two SATA disks and a SATA RAID 0/1 controller. Then my
mission started to get this setup working as a root disk.
Searching my way through Google on how to do this, I discovered
that there are multiple options to create a RAID config on Linux.
The two relevant for me were:
Software RAID (operating-system based)
For a software RAID you don’t need a RAID controller at all, so there
I was, looking good with my just-bought RAID controller :-).
Fake-RAID (firmware/driver-based RAID)
A Fake-RAID setup utilizes the capabilities of a Fake-RAID controller.
You can read up on the whole RAID thing at Wikipedia:
“Hardware RAID controllers are expensive and proprietary. To fill this
gap, cheap Fake-Raid controllers were introduced.”
I decided to go with the Fake-RAID solution; I didn’t want my just-bought
RAID controller to be a waste of money. Although it only cost me 20 euro,
it’s all about the principle :-).
Below is a summary of all the things I had to do to make my 20 euro
a worthy investment.
Create a RAID 1 set with the menu tool of the RAID controller.
Then partition the RAID 1 set and create a filesystem on it.
To find out the device name of your RAID device, use these commands:
# dmraid -ay
# ls -la /dev/mapper/
Then use cfdisk, fdisk, or whatever partitioning program you prefer
to create a partition. In my situation it was:
# cfdisk /dev/mapper/sil_bgadaecebiea
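cfdisk only writes the partition table; on a device-mapper device the kernel does not automatically create nodes for the new partitions. A quick sketch of the follow-up steps (the device name is from my setup, and ext3 is just an assumed filesystem choice):

```shell
# Create device nodes for the partitions on the dmraid set
# (produces e.g. /dev/mapper/sil_bgadaecebieap1)
kpartx -a /dev/mapper/sil_bgadaecebiea

# Put a filesystem on the new partition (ext3 here; pick your own)
mkfs.ext3 /dev/mapper/sil_bgadaecebieap1
```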
To be able to boot from a Fake-RAID device, you need an initrd file
that contains a driver which supports this.
From the man page of mkinitrd:
dmraid = Include support for software RAID over device mapper
         (known as Fake-RAID)
dm     = Include support for device mapper in general
md     = Include support for software RAID (md)
kpartx = Include support for kpartx partitioning. Always use this
         if you have device-mapper devices.
What was important for me, and something I only found out later,
is that you can’t boot from a Fake-RAID config and use a software-based
RAID device at the same time.
“Both of the drivers will attach themselves to all the drives with
any partitions set as type fd (Linux raid autodetect).”
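To see whether both drivers would fight over the same disks, you can look for partitions of type fd. A sketch (the device name is just an example):

```shell
# Show the partition table; the Id column 'fd' means
# Linux raid autodetect
fdisk -l /dev/sda

# If a partition is type fd but should only be handled by dmraid,
# change its type (e.g. to 83, Linux) with fdisk's 't' command.
```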
Later down the road, when I was able to boot from my Fake-RAID root disk,
I converted my software RAID setup to a Fake-RAID as well.
I found out that creating a Fake-RAID-capable initrd file on openSUSE
is a mission of its own.
“Next we have to edit another file: /lib/mkinitrd/scripts/boot-dmraid.sh
(note that I don’t know the state it was originally in);
you have to change the line that reads
/sbin/dmraid -a y -p
With all the tips/tricks/hints in mind, my final mkinitrd command looked like this:
# mkinitrd -f "dm dmraid kpartx"
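As a sanity check you can peek into the generated initrd to see that the dmraid bits really made it in. A sketch, assuming the usual gzipped-cpio initrd format of that openSUSE era:

```shell
# List the initrd contents and look for the dmraid binary/scripts
zcat /boot/initrd | cpio -t | grep -i dmraid
```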
In order to be able to boot from a Fake-RAID device, I had to add the
following two parameters to the kernel line in my GRUB menu.lst and set
root to my new root device:
root = /dev/mapper/sil_bgadaecebieap1
root_dm = 1
kernel (hd0,0)/boot/vmlinuz-220.127.116.11-0.1-default root=/dev/mapper/sil_bgadaecebieap1 root_dm=1 root_dmraid=1 splash=silent agp=off clocksource=hpet 001 showopts
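Put together, a full menu.lst entry could look like the fragment below. The kernel and initrd file names are placeholders, and the initrd line is my assumption based on the standard openSUSE layout:

```
title openSUSE 11.2 (Fake-RAID)
    kernel (hd0,0)/boot/vmlinuz-<version>-default root=/dev/mapper/sil_bgadaecebieap1 root_dm=1 root_dmraid=1 showopts
    initrd (hd0,0)/boot/initrd-<version>-default
```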
To install the GRUB bootloader on my RAID setup I had to create the
following device.map:
hertogjan:/boot/grub # cat device.map
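The contents of my device.map are not shown above, but for a setup like this it boils down to mapping the BIOS drive to the dmraid device (the device name below is from my config; yours will differ):

```
(hd0) /dev/mapper/sil_bgadaecebiea
```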
Then install GRUB
grub> root (hd0,0)
grub> setup (hd0)
And there you go: if everything works out for you, you
should be able to boot from your Fake-RAID root disk.
It seems booting from a “broken” Fake-RAID on Linux is not supported,
as can be read at the bottom of this website:
“Booting with degraded array
One drawback of the fake RAID approach on GNU/Linux is that
dmraid is currently unable to handle degraded arrays, and will
refuse to activate.
In this scenario, one must resolve the problem from within another
OS (e.g. Windows) or via the BIOS/chipset RAID utility.
Alternatively, if using a mirrored (RAID 1) array, users may temporarily
bypass dmraid during the boot process and boot from a single drive:
1. Edit the kernel line from the GRUB menu
2. Remove references to dmraid devices
(e.g. change /dev/mapper/raidSet1 to /dev/sda1)
3. Append disablehooks=dmraid to prevent a kernel panic
when dmraid discovers the degraded array
4. Boot the system “
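As a concrete sketch of steps 1–3, the temporarily edited kernel line could look like this (the device and file names are illustrative, not from my setup):

```
kernel (hd0,0)/boot/vmlinuz root=/dev/sda1 disablehooks=dmraid showopts
```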
The sites already mentioned in the story above were very
helpful in my attempt at fixing this one.