INSTALLING CENTOS, RAID, AND LVM
The CentOS installation process is mostly the same as on other distros, as can be seen in the following screenshots. We will just go into more detail at the partition-creation step.
Since there are quite a lot of screenshots, only the ones related to the RAID and LVM setup are shown here. The complete installation walkthrough can be found in the PDF, which can be downloaded from the download menu.
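For orientation, here is a rough command-line equivalent of what the installer's partitioning step builds. This is only a sketch (the installer does all of this for you), and the LV sizes are approximated from the df and vgdisplay output later in this article:

# Minimal sketch, assuming two disks (/dev/sda and /dev/sdb) that each
# already carry a small partition 1 for /boot and a large partition 2,
# both with partition type fd (Linux raid autodetect):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1                         # LVM sits on top of the big mirror
vgcreate VolGroup00 /dev/md1
lvcreate -L 5G -n LogVolRoot VolGroup00   # sizes approximated from df -h below
lvcreate -L 68G -n LogVolHome VolGroup00
# (a third LV for swap also exists; vgdisplay later shows "Cur LV 3")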
After the installation finishes, the partitions and mount points look like this:
[root@server-mail ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVolRoot
4.9G 2.2G 2.5G 47% /
/dev/md0 289M 17M 257M 7% /boot
none 506M 0 506M 0% /dev/shm
/dev/mapper/VolGroup00-LogVolHome
68G 85M 64G 1% /home
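The two /dev/mapper entries are LVM logical volumes. If you want to look at them directly, the lvm2 tools can list the volume group and its volumes:

# List the volume group and its logical volumes:
vgs VolGroup00
lvs VolGroup00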
TESTING RAID-1 REDUNDANCY
Now we have reached the interesting part: testing whether the RAID-1 we just set up really works.
First, let's look at the current state of the RAID.
We check the dmesg output:
[root@server-mail ~]# dmesg
ata1: SATA max UDMA/133 cmd 0xEFF0 ctl 0xEFE6 bmdma 0xEF60 irq 185
ata2: SATA max UDMA/133 cmd 0xEFA8 ctl 0xEFE2 bmdma 0xEF68 irq 185
ata1: dev 0 cfg 49:2f00 82:7c6b 83:7b09 84:4003 85:7c69 86:3a01 87:4003 88:207f
ata1: dev 0 ATA, max UDMA/133, 160086528 sectors:
ata1: dev 0 configured for UDMA/133
scsi0 : ata_piix
ata2: dev 0 cfg 49:2f00 82:7469 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:207f
ata2: dev 0 ATA, max UDMA/133, 156301488 sectors: lba48
ata2: dev 0 configured for UDMA/133
scsi1 : ata_piix
Vendor: ATA Model: Maxtor 6Y080M0 Rev: YAR5
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sda: 160086528 512-byte hdwr sectors (81964 MB)
SCSI device sda: drive cache: write back
sda: sda1 sda2
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
Vendor: ATA Model: WDC WD800JD-22LS Rev: 06.0
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sdb: 156301488 512-byte hdwr sectors (80026 MB)
SCSI device sdb: drive cache: write back
sdb: sdb1 sdb2
Attached scsi disk sdb at scsi1, channel 0, id 0, lun 0
device-mapper: 4.4.0-ioctl (2005-01-12) initialised: dm-devel@redhat.com
md: raid1 personality registered as nr 3
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sdb2 ...
md: adding sdb2 ...
md: sdb1 has different UUID to sdb2
md: adding sda2 ...
md: sda1 has different UUID to sdb2
md: created md1
md: bind<sdb2>
md: bind<sda2>
md: running: <sdb2><sda2>
raid1: raid set md1 active with 2 out of 2 mirrors
md: considering sdb1 ...
md: adding sdb1 ...
md: adding sda1 ...
md: created md0
md: bind<sdb1>
md: bind<sda1>
md: running: <sdb1><sda1>
raid1: raid set md0 active with 2 out of 2 mirrors
md: ... autorun DONE.
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
But the RAID-1 is not finished yet: it is automatically building the mirror in the background. We can see this with:
[root@server-mail ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
77842880 blocks [2/2] [UU]
[==================>..] resync = 93.0% (72464960/77842880) finish=2.5min speed=34857K/sec
md0 : active raid1 sdb1[1] sda1[0]
305088 blocks [2/2] [UU]
resync=DELAYED
unused devices: <none>
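A convenient way to follow the resync is to re-read that file periodically, for example:

# Refresh /proc/mdstat every 5 seconds until the resync finishes:
watch -n 5 cat /proc/mdstat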
And in top we can see the resync process at work:
[root@server-mail ~]# top
top - 20:58:46 up 30 min, 2 users, load average: 1.10, 1.08, 0.86
Tasks: 95 total, 1 running, 94 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2% us, 0.7% sy, 0.0% ni, 99.0% id, 0.0% wa, 0.2% hi, 0.0% si
Mem: 1034376k total, 254720k used, 779656k free, 9792k buffers
Swap: 950264k total, 0k used, 950264k free, 121768k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
216 root 15 0 0 0 0 S 0.7 0.0 0:33.04 md1_raid1
217 root 15 0 0 0 0 D 0.3 0.0 0:22.58 md1_resync
4015 root 16 0 3492 1004 776 R 0.3 0.1 0:00.01 top
1 root 16 0 1960 592 508 S 0.0 0.1 0:00.71 init
2 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
We can also view the detailed status of the RAID with this command:
[root@server-mail ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Fri Mar 3 03:08:01 2006
Raid Level : raid1
Array Size : 77842880 (74.24 GiB 79.71 GB)
Device Size : 77842880 (74.24 GiB 79.71 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Mar 2 20:59:49 2006
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
UUID : 1aba11f9:82f69106:f2c8fc07:22f0e395
Events : 0.36
Once the mirror has finished building, we can check the RAID status again:
[root@server-mail ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
77842880 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
305088 blocks [2/2] [UU]
unused devices: <none>
We can also inspect the partitions with the fdisk command:
[root@server-mail ~]# fdisk -l
Disk /dev/sda: 81.9 GB, 81964302336 bytes
255 heads, 63 sectors/track, 9964 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 38 305203+ fd Linux raid autodetect
/dev/sda2 39 9729 77842957+ fd Linux raid autodetect
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 38 305203+ fd Linux raid autodetect
/dev/sdb2 39 9729 77842957+ fd Linux raid autodetect
Disk /dev/md0: 312 MB, 312410112 bytes
2 heads, 4 sectors/track, 76272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Device Boot Start End Blocks Id System
Disk /dev/md1: 79.7 GB, 79711109120 bytes
2 heads, 4 sectors/track, 19460720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
INSTALLING GRUB
Enter the GRUB command line:
# grub
Install GRUB on the MBR (here on the second disk, so the machine can still boot if the first disk dies):
grub> device (hd0) /dev/sdb (or /dev/hdb for IDE drives)
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
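The same steps can also be scripted. This is a sketch using GRUB legacy's batch mode; mapping (hd0) to /dev/sdb makes GRUB write a bootable MBR onto the second disk as well:

# Non-interactive equivalent of the session above:
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF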
TESTING: PULL ONE OF THE HARD DISKS
OK, now that we are sure the RAID is working, we shut the computer down and pull one of the hard disks.
Power it back on... and... Linux still runs! Heh, don't celebrate just yet: the next step is to rebuild the 'broken' mirror.
When we boot with only one hard disk, dmesg shows that Linux has detected the failed disk; note the message: raid1: raid set md1 active with 1 out of 2 mirrors.
[root@server-mail ~]# dmesg
ata1: SATA max UDMA/133 cmd 0xEFF0 ctl 0xEFE6 bmdma 0xEF60 irq 185
ata2: SATA max UDMA/133 cmd 0xEFA8 ctl 0xEFE2 bmdma 0xEF68 irq 185
ata1: SATA port has no device.
scsi0 : ata_piix
ata2: dev 0 cfg 49:2f00 82:7469 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:207f
ata2: dev 0 ATA, max UDMA/133, 156301488 sectors: lba48
ata2: dev 0 configured for UDMA/133
scsi1 : ata_piix
Vendor: ATA Model: WDC WD800JD-22LS Rev: 06.0
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sda: 156301488 512-byte hdwr sectors (80026 MB)
SCSI device sda: drive cache: write back
sda: sda1 sda2
Attached scsi disk sda at scsi1, channel 0, id 0, lun 0
device-mapper: 4.4.0-ioctl (2005-01-12) initialised: dm-devel@redhat.com
md: raid1 personality registered as nr 3
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sda2 ...
md: adding sda2 ...
md: sda1 has different UUID to sda2
md: created md1
md: bind<sda2>
md: running: <sda2>
raid1: raid set md1 active with 1 out of 2 mirrors
md: considering sda1 ...
md: adding sda1 ...
md: created md0
md: bind<sda1>
md: running: <sda1>
raid1: raid set md0 active with 1 out of 2 mirrors
md: ... autorun DONE.
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
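If you don't feel like scrolling through the whole boot log, a simple filter is enough to spot those lines:

# Show only the raid-set status lines from the boot log:
dmesg | grep 'raid set'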
Next, the status can also be read from /proc/mdstat:
[root@server-mail ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[1]
77842880 blocks [2/1] [_U]
md0 : active raid1 sda1[1]
305088 blocks [2/1] [_U]
unused devices: <none>
fdisk also shows which hard disk is still alive:
[root@server-mail ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 38 305203+ fd Linux raid autodetect
/dev/sda2 39 9729 77842957+ fd Linux raid autodetect
Disk /dev/md0: 312 MB, 312410112 bytes
2 heads, 4 sectors/track, 76272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Device Boot Start End Blocks Id System
Disk /dev/md1: 79.7 GB, 79711109120 bytes
2 heads, 4 sectors/track, 19460720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
We can see that only sda is left.
LVM looks unaffected:
[root@server-mail ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 74.22 GB
PE Size 32.00 MB
Total PE 2375
Alloc PE / Size 2374 / 74.19 GB
Free PE / Size 1 / 32.00 MB
VG UUID utCkc5-dkn3-srHi-nGLS-s76d-BcE3-BkFYEx
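This makes sense: the volume group's only physical volume sits on /dev/md1, which stayed online in degraded mode, so LVM never touches the raw disks at all. A quick way to confirm which device backs the PV:

# The physical volume is the md device, not a raw disk, which is
# why a lost mirror leg is invisible to LVM:
pvs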
Oh, and Linux will let us know by email that our RAID has become degraded:
From: mdadm monitoring <root@server-mail.mydomain.com>
Date: Today 11:12:29 am
This is an automatically generated mail message from mdadm
running on server-mail.mydomain.com
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
Heh, interesting, isn't it? We will receive two such emails, because we have two RAID devices, md0 and md1.
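These alerts come from the mdmonitor service, which runs mdadm in monitor mode; the destination address is set in /etc/mdadm.conf. A sketch (the MAILADDR value is just an example):

# /etc/mdadm.conf: where the DegradedArray mails go
#   MAILADDR root@server-mail.mydomain.com
# Generate a test event for every array once, then exit:
mdadm --monitor --scan --test --oneshot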
All right, now we shut the computer down again, reattach the hard disk we pulled earlier, power it up, and watch what happens in dmesg.
[root@server-mail ~]# dmesg
ata1: SATA max UDMA/133 cmd 0xEFF0 ctl 0xEFE6 bmdma 0xEF60 irq 185
ata2: SATA max UDMA/133 cmd 0xEFA8 ctl 0xEFE2 bmdma 0xEF68 irq 185
ata1: dev 0 cfg 49:2f00 82:7c6b 83:7b09 84:4003 85:7c69 86:3a01 87:4003 88:207f
ata1: dev 0 ATA, max UDMA/133, 160086528 sectors:
ata1: dev 0 configured for UDMA/133
scsi0 : ata_piix
ata2: dev 0 cfg 49:2f00 82:7469 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:207f
ata2: dev 0 ATA, max UDMA/133, 156301488 sectors: lba48
ata2: dev 0 configured for UDMA/133
scsi1 : ata_piix
Vendor: ATA Model: Maxtor 6Y080M0 Rev: YAR5
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sda: 160086528 512-byte hdwr sectors (81964 MB)
SCSI device sda: drive cache: write back
sda: sda1 sda2
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
Vendor: ATA Model: WDC WD800JD-22LS Rev: 06.0
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sdb: 156301488 512-byte hdwr sectors (80026 MB)
SCSI device sdb: drive cache: write back
sdb: sdb1 sdb2
Attached scsi disk sdb at scsi1, channel 0, id 0, lun 0
device-mapper: 4.4.0-ioctl (2005-01-12) initialised: dm-devel@redhat.com
md: raid1 personality registered as nr 3
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sdb2 ...
md: adding sdb2 ...
md: sdb1 has different UUID to sdb2
md: adding sda2 ...
md: sda1 has different UUID to sdb2
md: created md1
md: bind<sdb2>
md: bind<sda2>
md: running: <sdb2><sda2>
md: kicking non-fresh sda2 from array!
md: unbind<sda2>
md: export_rdev(sda2)
raid1: raid set md1 active with 1 out of 2 mirrors
md: considering sdb1 ...
md: adding sdb1 ...
md: adding sda1 ...
md: created md0
md: bind<sdb1>
md: bind<sda1>
md: running: <sdb1><sda1>
md: kicking non-fresh sda1 from array!
md: unbind<sda1>
md: export_rdev(sda1)
raid1: raid set md0 active with 1 out of 2 mirrors
md: ... autorun DONE.
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sda1 ...
md: adding sda1 ...
md: sda2 has different UUID to sda1
md: md0 already running, cannot run sda1
md: export_rdev(sda1)
md: considering sda2 ...
md: adding sda2 ...
md: md1 already running, cannot run sda2
md: export_rdev(sda2)
md: ... autorun DONE.
Something odd here: /dev/sda is treated as non-fresh, even though the disk we pulled was sdb. This happens because we pulled the disk and plugged it straight back in: it still carries its old mirror metadata, but that RAID superblock is now out of date compared to the surviving disk's, so md kicks it out of the array. Not a problem, though; in a real failure we would be installing a completely blank replacement disk anyway.
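To see for yourself why a member is considered non-fresh, compare the RAID superblocks on the two partitions; the kicked member shows an older update time and a lower event count than the surviving one:

# Examine the md superblock stored on each mirror member:
mdadm --examine /dev/sda2
mdadm --examine /dev/sdb2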
In mdstat we can see that one mirror is gone; only sdb is active:
[root@server-mail ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1]
77842880 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
305088 blocks [2/1] [_U]
unused devices: <none>
REBUILDING THE RAID
To rebuild the RAID, we add the sda partitions back one by one into the RAID devices md0 and md1:
[root@server-mail ~]# mdadm /dev/md0 --add /dev/sda1
mdadm: hot added /dev/sda1
[root@server-mail ~]# mdadm /dev/md1 --add /dev/sda2
mdadm: hot added /dev/sda2
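Note that this works here because the old partitions are still present on sda. With a truly blank replacement disk, the partition table would have to be recreated first; a common trick (a sketch, assuming sdb is the healthy disk) is to copy it over from the survivor:

# Duplicate the healthy disk's partition table onto the replacement,
# then hot-add the new partitions exactly as above:
sfdisk -d /dev/sdb | sfdisk /dev/sda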
Linux immediately starts rebuilding the RAID on its own:
[root@server-mail ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[2] sdb2[1]
77842880 blocks [2/1] [_U]
[>....................] recovery = 1.0% (815104/77842880) finish=23.6min speed=54340K/sec
md0 : active raid1 sda1[0] sdb1[1]
305088 blocks [2/2] [UU]
unused devices: <none>
After the synchronization completes:
[root@server-mail ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
77842880 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
305088 blocks [2/2] [UU]
unused devices: <none>
Our Linux is back to its original state: RAID-1 with two hard disks mirroring each other :)
Resources:
- http://linux2.arinet.org
- http://www.kernelhardware.org