Direct Attached Storage
Introduction
XCASE: Mini Tower RAID System. Ideal for home server or workplace storage. 5 hot-swap bays.
http://www.xcase.co.uk/Cfi-B8253ER-5-Drive-Hotswap-External-Raid-System-p/storage-cfi-b8253er.htm
This is a 5-bay DAS (Direct Attached Storage) enclosure for high-capacity storage needs. The front panel is a ventilated protective screen, and the unit uses a cable-less backplane to hold 5 hard drives. Each hard drive sits in an easy-access tray for simple insertion and removal, and a cooling fan on the back of the unit keeps the drives cool. It is a compact RAID tower that uses only one cable to access all 5 SATA hard drives via SATA II port multiplier technology, and it also supports RAID 0, 1, 5 and 10.
I have decided to use this device in a Linux Software RAID Level 10 (RAID-10) array set up with four 1.5TB hard disk drives - giving me a total of 3TB of file storage space.
Linux Software MD RAID Level 10
The Linux kernel software RAID driver (called md, for "multiple device") can be used to build a classic RAID 1+0 array...
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
8 x HDD RAID 10 means striping 4 pairs of RAID 1. As long as no two failed drives are in the same RAID 1 pair, the RAID 10 will just keep on going without any sign of slowing down. Thus, you can have up to 4 failed drives in an 8 x HDD RAID 10 and the array remains intact. However, you don't really want to play Russian roulette, and you ought to replace any failed drive as soon as you can. A degraded RAID 10 won't suffer a performance loss, but a degraded RAID 5 will slow down by ~50%.
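To see how md has paired the devices (and therefore which combinations of failures an array can survive), the array can be queried directly. A quick sketch, assuming the array device is /dev/md0:

sudo mdadm --detail /dev/md0   # shows the level, layout and which device sits in which slot
cat /proc/mdstat               # one-line summary of every md array on the system

With the default near=2 layout and an even number of drives, adjacent raid-device slots (0+1, 2+3, and so on) hold the mirrored copies.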
4 x HDD
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
a=b RAID 1 \
            } RAID 0
c=d RAID 1 /
8 x HDD
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
a=b RAID 1 \
c=d RAID 1  \
             } RAID 0
e=f RAID 1  /
g=h RAID 1 /
In my 8-drive DAS...
[8|/dev/sde|WD-WMAZA1547403] \
                              = RAID 1
[7|/dev/sdd|WD-WMAZA1455936] /
          +
[6|/dev/sdc|WD-WMAZA1455763] \
                              = RAID 1
[5|/dev/sdb|WD-WMAZA1461618] /
          + RAID 0
[4|/dev/sdi|WD-WMAZA1263294] \
                              = RAID 1
[3|/dev/sdh|WD-WMAZA1447312] /
          +
[2|/dev/sdg|WD-WCAZA6573412] \
                              = RAID 1 (which is why it was only using these 2 drives when recovering)
[1|/dev/sdf|WD-WMAZA1548052] /
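One way to map the /dev/sdX names to physical bays is to compare drive serial numbers against the labels on the caddies. A quick sketch, assuming hdparm (or smartmontools) is installed:

sudo hdparm -I /dev/sde | grep 'Serial Number'     # serial reported by the drive itself
sudo smartctl -i /dev/sde | grep 'Serial Number'   # alternative, via SMART identity info

Repeat for each /dev/sd? device and match the serials against the drives in the bays.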
Installation In Ubuntu Linux
- Add hard drives
- Plug in eSATA and power cables
- Plug into Ubuntu Linux PC
- Partition each drive as type 'fd'
- Use 'mdadm' to create RAID10 array
- Format with chosen file system
- Use and enjoy!
http://www.youtube.com/watch?v=fIWLaqT0-DQ
More to come. For now, command line history... :-)
cat /proc/partitions
modprobe raid10
mknod /dev/md0 b 9 0
fdisk /dev/sdb
sfdisk -d /dev/sdb | sfdisk /dev/sdc
sfdisk -d /dev/sdb | sfdisk /dev/sdd
sfdisk -d /dev/sdb | sfdisk /dev/sde
apt-get install mdadm
cat /proc/mdstat
mdadm --create /dev/md0 --verbose --raid-devices=4 --level=10 --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat
mkfs.ext4 /dev/md0
reboot
fdisk output - one hard disk drive
# fdisk -l /dev/sdb

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x97837bce

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      182401  1465136001   fd  Linux raid autodetect
Partitions output
# cat /proc/partitions
major minor  #blocks  name

   8        0 1465138584 sda
   8        1 1454002956 sda1
   8        2          1 sda2
   8        5   11133013 sda5
   8       16 1465138584 sdb
   8       17 1465136001 sdb1
   8       32 1465138584 sdc
   8       33 1465136001 sdc1
   8       64 1465138584 sde
   8       65 1465136001 sde1
   8       48 1465138584 sdd
   8       49 1465136001 sdd1
   9        5 2930271872 md0
Kernel output
[   74.873945] md: bind<sdb1>
[   74.875586] md: bind<sde1>
[   74.877133] md: bind<sdd1>
[   75.075389] md: bind<sdc1>
[   75.086003] raid10: raid set md0 active with 4 out of 4 devices
[   75.086021] md0: detected capacity change from 0 to 3000598396928
MD output
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdc1[1] sdd1[2] sde1[3] sdb1[0]
      2930271872 blocks 64K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
df output
# df -H -T -x tmpfs
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sda1      ext4  1.5T  314G  1.1T  23% /
/dev/md0       ext4  3.0T  2.0T  805G  72% /mnt/3TB
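The df output shows the array mounted at /mnt/3TB. To have it assemble and mount automatically after a reboot, something along these lines should work; treat it as a sketch and check the mdadm.conf handling against your release of Ubuntu:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record the array so it assembles at boot
sudo update-initramfs -u                                         # pick up the new config in the initramfs
sudo mkdir -p /mnt/3TB
echo '/dev/md0  /mnt/3TB  ext4  defaults  0  2' | sudo tee -a /etc/fstab
sudo mount /mnt/3TB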
EDGE10 DAS801T
http://www.edge10.com/digital_storage.php
This is the bay listing after ONE of my drives failed...
[8|/dev/sde|WD-WMAZA1547403]
[7|/dev/sdd|WD-WMAZA1455936]
[6|/dev/sdc|WD-WMAZA1455763]
[5|/dev/sdb|WD-WMAZA1461618]
[4|/dev/sdh|WD-WMAZA1263294]
[3|/dev/sdg|WD-WMAZA1447312]
[2|  empty  |               ]
[1|/dev/sdf|WD-WMAZA1548052]
cat /proc/partitions /proc/mdstat
major minor  #blocks  name

   8        0 1465138584 sda
   8        1 1457032242 sda1
   8        2    8106310 sda2
   8       16 1953514584 sdb
   8       17 1953514552 sdb1
   8       32 1953514584 sdc
   8       33 1953514552 sdc1
   8       48 1953514584 sdd
   8       49 1953514552 sdd1
   8       64 1953514584 sde
   8       65 1953514552 sde1
   8       80 1953514584 sdf
   8       81 1953514552 sdf1
   8       96 1953514584 sdg
   8       97 1953514552 sdg1
   8      112 1953514584 sdh
   8      113 1953514552 sdh1
 254      384 7814057728 md_d6

md_d6 : active raid10 sdc1[1] sdf1[4] sde1[3] sdb1[0] sdh1[7] sdd1[2] sdg1[6]
      7814057728 blocks 64K chunks 2 near-copies [8/7] [UUUUU_UU]
UPDATE: Tuesday, 23 August 2011
Partition the new drive by copying the partition table from one of the existing drives...
sudo sfdisk -d /dev/sdc | sudo sfdisk --force /dev/sdg
Check the drive has partitioned correctly...
sudo fdisk -l /dev/sdg

Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      243202  1953514552   fd  Linux raid autodetect
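If the old drive were still showing in the array as faulty at this point, it would need to be removed before the replacement could be hot-added. A sketch only, using mdadm's special 'failed' and 'detached' keywords (check that your mdadm version supports them):

sudo mdadm --manage /dev/md_d6 --remove failed     # drop any member already marked faulty
sudo mdadm --manage /dev/md_d6 --remove detached   # drop any member whose device node has gone

In this case the failed drive had already been pulled and md no longer listed it, so that step was not needed here.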
Hot add the new drive to the RAID10 array...
sudo mdadm --manage /dev/md_d6 --add /dev/sdg1
Check the kernel log...
dmesg |tail -n 15
[  983.627653] md: bind<sdg1>
[  984.075096] RAID10 conf printout:
[  984.075104]  --- wd:7 rd:8
[  984.075111]  disk 0, wo:0, o:1, dev:sdb1
[  984.075117]  disk 1, wo:0, o:1, dev:sdc1
[  984.075122]  disk 2, wo:0, o:1, dev:sdd1
[  984.075127]  disk 3, wo:0, o:1, dev:sde1
[  984.075132]  disk 4, wo:0, o:1, dev:sdf1
[  984.075136]  disk 5, wo:1, o:1, dev:sdg1
[  984.075140]  disk 6, wo:0, o:1, dev:sdh1
[  984.075145]  disk 7, wo:0, o:1, dev:sdi1
[  984.075259] md: recovery of RAID array md_d6
[  984.075264] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[  984.075269] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[  984.075289] md: using 128k window, over a total of 1953514432 blocks.
Which means...
wd == working disks
rd == raid disks
wo == write-only
o  == online
Check the RAID status...
cat /proc/mdstat
md_d6 : active raid10 sdg1[8] sdf1[4] sdc1[1] sdh1[6] sdi1[7] sdd1[2] sde1[3] sdb1[0]
      7814057728 blocks 64K chunks 2 near-copies [8/7] [UUUUU_UU]
      [>....................]  recovery =  1.8% (35372032/1953514432) finish=493.3min speed=64795K/sec
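Recovery of a 2TB member at ~65MB/sec takes the best part of eight hours. If the machine is otherwise idle, the kernel's rebuild speed limits (the 1000 KB/sec and 200000 KB/sec figures in the dmesg output above) can be raised. A sketch; the values here are only examples:

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max   # current limits, in KB/sec
sudo sysctl -w dev.raid.speed_limit_min=50000    # raise the guaranteed minimum rebuild rate
sudo sysctl -w dev.raid.speed_limit_max=500000   # raise the ceiling too, if the drives can keep up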
This is the listing with all of my drives...
[8|/dev/sde|WD-WMAZA1547403]
[7|/dev/sdd|WD-WMAZA1455936]
[6|/dev/sdc|WD-WMAZA1455763]
[5|/dev/sdb|WD-WMAZA1461618]
[4|/dev/sdi|WD-WMAZA1263294]
[3|/dev/sdh|WD-WMAZA1447312]
[2|/dev/sdg|WD-WCAZA6573412]
[1|/dev/sdf|WD-WMAZA1548052]
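To hear about the next drive failure by email, rather than noticing it at mount time, the mdadm monitor that ships with the Ubuntu package can be pointed at a mail address. A sketch, assuming a working local mailer and your own address in place of the example:

echo 'MAILADDR admin@example.com' | sudo tee -a /etc/mdadm/mdadm.conf   # where alerts should go
sudo service mdadm restart                                              # restart the monitor daemon
sudo mdadm --monitor --scan --oneshot --test                            # send a test alert for each array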
UPDATE: Wednesday, 1 May 2012
Today, I went to mount my DAS and found that all the drives were marked as SPARE. Panic set in (obviously).
$ cat /proc/partitions /proc/mdstat
major minor  #blocks  name

   8        0 1465138584 sda
   8        1 1457032242 sda1
   8        2    8106310 sda2
   8       16 1953514584 sdb
   8       17 1953514552 sdb1
   8       32 1953514584 sdc
   8       33 1953514552 sdc1
   8       48 1953514584 sdd
   8       49 1953514552 sdd1
   8       64 1953514584 sde
   8       65 1953514552 sde1
   8       80 1953514584 sdf
   8       81 1953514552 sdf1
   8       96 1953514584 sdg
   8       97 1953514552 sdg1
   8      112 1953514584 sdh
   8      113 1953514552 sdh1
   8      128 1953514584 sdi
   8      129 1953514552 sdi1

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md_d6 : inactive sdg1[5](S) sdb1[0](S) sdi1[7](S) sdh1[6](S) sdf1[4](S) sde1[3](S) sdc1[1](S)
      13674601024 blocks

unused devices: <none>
The solution? Power everything off. Wait 5 minutes. Try again. If all OK, put the correct info into mdadm.conf:-
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo service mdadm restart
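If a power cycle had not brought it back, stopping the half-assembled array and reassembling it by hand would have been the next thing to try. A rough sketch, using the device names from the listing above:

sudo mdadm --stop /dev/md_d6                      # release the members stuck as spares
sudo mdadm --assemble /dev/md_d6 /dev/sd[b-i]1    # reassemble from the eight RAID partitions
cat /proc/mdstat                                  # confirm the array is active again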
The system had a Funny Five Minutes... but it was not funny for me.