RAID
Software
Introduction
https://wiki.archlinux.org/index.php/RAID
Cancel A RAID Resync
sudo /usr/share/mdadm/checkarray -x --all
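Alternatively (a minimal sketch, assuming the array is /dev/md0), a running check or resync can be stopped through the md sysfs interface and the result confirmed afterwards:

echo idle | sudo tee /sys/block/md0/md/sync_action
cat /proc/mdstat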
HOWTO: Rename A RAID Array
http://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array
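In outline (a hedged sketch based on the link above, assuming the array is /dev/md0, built from /dev/sda1 and /dev/sdb1, and the new name is mynas:0): stop the array, reassemble it with --update=name, then refresh mdadm.conf and the initramfs.

sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 --name=mynas:0 --update=name /dev/sda1 /dev/sdb1
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

Remember to remove any stale ARRAY line for the old name from /etc/mdadm/mdadm.conf first.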
MD Device Number Suddenly Changes
FIX - http://ubuntuforums.org/showthread.php?t=1764861
BEST PRACTICE - http://askubuntu.com/questions/211180/ubuntu-server-12-04-mdadm-device-number-suddenly-changes
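The usual best-practice fix (a sketch, not a verbatim copy of those threads) is to make sure /etc/mdadm/mdadm.conf contains ARRAY lines for your arrays and then rebuild the initramfs so the device numbers stick across reboots:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u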
Change Preferred Minor MD Superblock On Software RAID Hard Disk
This only works with version 0.9 software RAID metadata.
A hard disk drive failure on one of my servers prompted a manual software RAID 1 check and resync. This was done with good old SystemRescueCD, but in the process the preferred minor number in the md superblock on the array's hard disk drives was inadvertently changed from 3 to 124.
mdadm --misc --examine /dev/sda3 | grep 'Preferred Minor'
Preferred Minor : 124
To fix this, stop the array and reassemble it, updating the superblock as it assembles. Make sure you declare which hard disk drive partitions you are using when you assemble:-
mdadm --verbose --misc --stop /dev/md124
mdadm --verbose --assemble --update=super-minor --run /dev/md3 /dev/sda3 /dev/sdb3
To check it has worked, run the following 2 commands:-
mdadm --misc --examine /dev/sda3 | grep 'Preferred Minor'
Preferred Minor : 3
cat /proc/mdstat
md3 : active raid1 sdb3[0] sda3[1]
      10490368 blocks [2/2] [UU]
HOWTO: Install MDADM Without Postfix
sudo apt-get install mdadm --no-install-recommends
HOWTO: Send Test Email
mdadm --monitor --scan --test --oneshot
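For the test email to go anywhere useful, mdadm needs a destination address set in /etc/mdadm/mdadm.conf; the address below is only an example:

MAILADDR admin@example.com

Restart the mdadm monitoring service after editing the file.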
ERROR: ubuntu boot failed device or resource busy
Possibly a hard disk fault, or the initramfs has not allowed enough time for the RAID devices to assemble properly.
The fix is to add a delay to the boot process.
- Hold the right SHIFT key down to bring up the GRUB menu
- Select 'Advanced options for Ubuntu...'
- Choose 'Ubuntu Recovery Mode'
- Press the e key
- Add rootdelay=90 on the line just before the root= part
- Press F10 to boot into Recovery Mode
- FSCK the disks
- Drop to root prompt
- reboot
- Repeat for normal 'Ubuntu' line
- When properly booted, do the following...
sudo -i
echo "sleep 60" > /etc/initramfs-tools/scripts/init-premount/delay_for_raid_array_to_build_before_mounting
chmod a+x /etc/initramfs-tools/scripts/init-premount/delay_for_raid_array_to_build_before_mounting
update-initramfs -u
reboot
Thanks - http://ubuntuforums.org/showthread.php?t=2241430
Thanks - http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re58.html
HOWTO: Set Up A 3TB Disk
With hard disk drives larger than 2TB, you need to use different software and commands to set up Linux Software RAID 1.
The crucial difference is that such drives use the newer GPT (GUID Partition Table) layout.
You use parted instead of fdisk, and sgdisk instead of sfdisk.
This will make a disk with 3 partitions...
- bios_grub (boot) ~2MB
- raid (swap) ~1GB
- raid (rootfs) ~3TB
Stage 1 - Partition The Drives (parted)
parted -a optimal /dev/sda
(parted) mklabel gpt
(parted) unit mib
(parted) mkpart primary 1 3
(parted) name 1 grub
(parted) set 1 bios_grub on
(parted) mkpart primary 3 1000
(parted) name 2 swap
(parted) set 2 raid on
(parted) mkpart primary 1000 -1
(parted) name 3 rootfs
(parted) set 3 raid on
(parted) align-check optimal 1
(parted) align-check optimal 2
(parted) align-check optimal 3
(parted) print
Number  Start   End     Size    File system  Name    Flags
 1      1049kB  3146kB  2097kB               grub    bios_grub
 2      3146kB  1049MB  1045MB               swap    raid
 3      1049MB  3001GB  3000GB               rootfs  raid
(parted) quit
Stage 2 - Copy Partitions To Other Drives (sgdisk)
sgdisk --backup=table /dev/sda
sgdisk --load-backup=table /dev/sdb
sgdisk -G /dev/sdb
sgdisk --backup=table /dev/sda
sgdisk --load-backup=table /dev/sdc
sgdisk -G /dev/sdc
Stage 3 - Check Partitions
lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0  2.7T  0 disk
├─sda1    8:1    0    2M  0 part
├─sda2    8:2    0  997M  0 part
└─sda3    8:3    0  2.7T  0 part
sdb       8:16   0  2.7T  0 disk
├─sdb1    8:17   0    2M  0 part
├─sdb2    8:18   0  997M  0 part
└─sdb3    8:19   0  2.7T  0 part
sdc       8:32   0  2.7T  0 disk
├─sdc1    8:33   0    2M  0 part
├─sdc2    8:34   0  997M  0 part
└─sdc3    8:35   0  2.7T  0 part
Stage 4 - Set Up RAID Arrays (mdadm)
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/sdb2 --spare-devices=1 /dev/sdc2
mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sda3 /dev/sdb3 --spare-devices=1 /dev/sdc3
Stage 5 - Format (Ubuntu Installer or mkfs.ext4)
...
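If formatting by hand rather than via the Ubuntu installer, a minimal sketch (assuming /dev/md0 is the swap array and /dev/md1 the root filesystem array created in Stage 4):

mkswap --label swap /dev/md0
mkfs.ext4 -L rootfs /dev/md1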
Deleting The RAID Superblock / Wiping A Hard Disk Drive
- Remove the superblock
mdadm --zero-superblock /dev/sdX1
If you get this error...
mdadm: Couldn't open /dev/sdX1 for write - not zeroing
Then stop the arrays which have already been started by mdadm...
mdadm --manage --stop /dev/md127
mdadm --manage --stop /dev/md126
mdadm --manage --stop /dev/md125
mdadm --manage --stop /dev/md124
Then stop the mdadm service...
/etc/init.d/mdadm stop
And try again...
mdadm --zero-superblock /dev/sdX1
- Delete the partitions
dd if=/dev/zero of=/dev/sdX bs=512 count=1
- Securely wipe the data
time shred -n 1 -vz /dev/sdX
Tips To Speed Up RAID Rebuilding And Resync
http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
Remove mdadm Rebuild Speed Restriction
sudo cp /etc/sysctl.conf /etc/sysctl.conf_ORIG
sudo nano /etc/sysctl.conf

Then add the following line:

dev.raid.speed_limit_max = 51200
Where 51200 is the speed limit in KB/s you would like to use; in this case 50 MB/s.
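To apply the new limit without rebooting, reload sysctl. The minimum guaranteed rebuild speed (dev.raid.speed_limit_min) can be raised in the same way, for example:

sudo sysctl -p
sudo sysctl -w dev.raid.speed_limit_min=50000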
Adding Bitmap Indexes To mdadm
Adding a bitmap index to an mdadm array before rebuilding it can also speed up the rebuild process.
sudo mdadm --grow --bitmap=internal /dev/md0
After the array has been rebuilt the bitmap index can be removed:
mdadm --grow --bitmap=none /dev/md0
NOTE: The above example assumes the array can be found at md0.
Thanks go to James Coyle's article here.
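You can confirm whether the bitmap is present by checking /proc/mdstat, which shows a "bitmap:" line for the array while an internal bitmap exists:

cat /proc/mdstat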
Replacing A Failed Hard Drive In A Software RAID1 Array
Oh no, the second hard drive has broken and we need to swap it!
- Mark the hard drive as failed in all arrays
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
- Remove the hard drive from all arrays
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
- Power down the system
poweroff
- Replace the hard drive and boot the system
- Copy the partitions from the current disk to the new disk (for GPT disks, see the note at the end of this procedure)
sfdisk -d /dev/sda | sfdisk /dev/sdb
- Check the partitions match
fdisk -l /dev/sda /dev/sdb
- Add the new hard drive to all arrays
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
- Check when finished
cat /proc/mdstat
- Reboot just for good measure
reboot
http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array
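NOTE: sfdisk's dump format traditionally only handles MBR partition tables; if the disks use GPT, copy the partition table with sgdisk instead, as in the 3TB section above. A sketch, assuming /dev/sda is the good disk and /dev/sdb the new one:

sgdisk --backup=table /dev/sda
sgdisk --load-backup=table /dev/sdb
sgdisk -G /dev/sdb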
Scan, Unmount, Stop and Start A RAID Array
Scan
mdadm --examine --scan
Unmount
umount /dev/md0
Stop
mdadm --misc --verbose --stop /dev/md0
Start
mdadm --assemble --verbose --run /dev/md0
Scan And Start Array
sudo mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=a3ae9396:8085fdd8:ce69dd19:28a50bb8 name=sysresccd:0 spares=1
ARRAY /dev/md/1 metadata=1.2 UUID=3ce66ea0:21de7908:856e1488:18471642 name=sysresccd:1 spares=1

sudo mdadm --assemble --scan
mdadm: /dev/md/1 has been started with 2 drives and 1 spare.
mdadm: /dev/md/0 has been started with 2 drives and 1 spare.

sudo cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda2[0] sdc2[2](S) sdb2[1]
      1020352 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda3[0] sdc3[2](S) sdb3[1]
      2929110464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
Set Up RAID Arrays For Swap And Root Filesystem
sudo mkswap --label swap /dev/md0
Setting up swapspace version 1, size = 996.4 MiB (1044836352 bytes)
LABEL=swap, UUID=85ec8988-da79-4747-9025-32a185b1cb95

swapon -L swap

sudo free -m
              total        used        free      shared  buff/cache   available
Mem:           3860          43        3671           2         145        3773
Swap:           996           0         996

sudo mkfs.ext4 -L rootfs /dev/md1
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 732277616 4k blocks and 183074816 inodes
Filesystem UUID: 38f6c857-7c5b-45e2-92ef-a4da101b9b9e
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
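To have these mounted automatically at boot, the labels can be referenced from /etc/fstab; a minimal sketch (mount point and options are only examples):

LABEL=swap    none  swap  sw                 0  0
LABEL=rootfs  /     ext4  errors=remount-ro  0  1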
Start Array With 1 Drive To Access Data
Find names of arrays to use...
sudo mdadm --examine --scan
Assemble array with just one drive...
sudo mdadm --assemble --run /dev/md1 /dev/sdc1
sudo mdadm --assemble --run /dev/md2 /dev/sdc2
sudo mdadm --assemble --run /dev/md3 /dev/sdc3
sudo mdadm --assemble --run /dev/md4 /dev/sdc4
Check filesystem type...
sudo blkid /dev/md1
sudo blkid /dev/md2
sudo blkid /dev/md3
sudo blkid /dev/md4
Create mount directories...
sudo mkdir /mnt/md{1,2,3,4}
Mount as required...
sudo mount -v -t ext3 /dev/md3 /mnt/md3
Read as required...
sudo ls -lah /mnt/md3
When you have finished...
sudo sync
sudo umount /mnt/md3
sudo mdadm --stop --scan
...job, done.
Extending a RAID Device
To add a new device to an existing array, use the command in the following form as root:
mdadm raid_device --add component_device
This will add the device as a spare device.
To grow the array to use this device actively, type the following at a shell prompt:
mdadm --grow raid_device --raid-devices=number
Assume the system has an active RAID 1 device, /dev/md3, with the following layout:
mdadm --detail /dev/md3 | tail -n 3
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
Also assume that a new SCSI disk drive, /dev/sdc, has been added and has exactly one partition. To add it to the /dev/md3 array, type the following at a shell prompt:
mdadm /dev/md3 --add /dev/sdc1
mdadm: added /dev/sdc1
This will add /dev/sdc1 as a spare device. To grow the array so that it actively uses the new device, type:
mdadm --grow /dev/md3 --raid-devices=3
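Growing the number of raid-devices triggers a resync onto the new member; progress can be watched with:

watch cat /proc/mdstat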
Hardware
How to read a RAID1 hard disk if you can’t use the RAID Controller?
http://robert.penz.name/4/how-to-read-a-raid1-hard-disk-if-you-cant-use-the-raid-controller/
Miscellaneous
HP Smart RAID Array HP 410i
http://hwraid.le-vert.net/wiki/SmartArray
http://sysadm.pp.ua/linux/hpraid-monitoring.html
http://downloads.linux.hp.com/SDR/downloads/MCP/Ubuntu/dists/
http://downloads.linux.hp.com/SDR/downloads/MCP/Ubuntu/pool/non-free/?C=M;O=D
wget http://downloads.linux.hp.com/SDR/downloads/MCP/GPG-KEY-MCP
sudo apt-key add GPG-KEY-MCP
apt-get update
apt-get install hpacucli
A basic report showing the status of your RAID arrays:
hpacucli ctrl all show config
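Other useful read-only queries with the same tool include (a sketch, assuming the controller is in slot 0):

hpacucli ctrl all show status
hpacucli ctrl slot=0 pd all show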
http://www.randomhacks.co.uk/installing-hp-proliant-support-pack-psp-on-ubuntu-12-04/
http://www.datadisk.co.uk/html_docs/redhat/hpacucli.htm
Investigation
server:~# lspci | grep -i 'raid'
02:05.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID (rev 01)
server:~# lsmod | grep -i raid | sort
async_memcpy            2304  1 raid456
async_tx                6316  3 raid456,async_xor,async_memcpy
async_xor               3520  1 raid456
md_mod                 67036  6 raid10,raid456,raid1,raid0,multipath,linear
megaraid_mbox          25872  2
megaraid_mm             8284  1 megaraid_mbox
raid0                   6368  0
raid10                 18560  0
raid1                  18016  0
raid456               117264  0
scsi_mod              129356  7 sg,sd_mod,libata,mptspi,mptscsih,scsi_transport_spi,megaraid_mbox
xor                    14696  2 raid456,async_xor
server:~# ll /dev/megadev0
crw-rw---- 1 root root 10, 60 2013-04-25 13:55 /dev/megadev0