60M

Logical volume 'mysql' created

# mkfs -t ext3 /dev/test/mysql

mke2fs 1.38 (30-Jun-2005)

...(Lines skipped)...

This filesystem will be automatically checked every 36 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

# mkdir /mnt/mysql

# mount /dev/test/mysql /mnt/mysql
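To have the filesystem mounted automatically at each boot, you can add an entry for it to /etc/fstab . The following line is a sketch, assuming the logical volume and mount point created above:

/dev/test/mysql    /mnt/mysql    ext3    defaults    1 2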

6.2.1.3. Handling a drive failure

You can simulate the failure of a RAID array element using mdadm :

# mdadm --fail /dev/md0 /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md0

The 'failed' drive is marked with the symbol (F) in /proc/mdstat :

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdc1[2](F) sdb1[0]

63872 blocks [2/1] [U_]

unused devices: <none>
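You can also confirm the failure from mdadm itself. This quick check is not part of the original example, but the State line of the detail output should report the array as degraded:

# mdadm --detail /dev/md0 | grep 'State :'

State : clean, degraded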

To place the 'failed' element back into the array, remove it and add it again:

# mdadm --remove /dev/md0 /dev/sdc1

mdadm: hot removed /dev/sdc1

# mdadm --add /dev/md0 /dev/sdc1

mdadm: re-added /dev/sdc1

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdc1[1] sdb1[0]

63872 blocks [2/1] [U_]

[>....................] recovery = 0.0% (928/63872) finish=3.1min speed=309K/sec

unused devices: <none>

If the drive had really failed (instead of being subject to a simulated failure), you would physically replace the drive after removing it from the array and before adding the new one.
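In a real replacement, the new disk needs a partition of the same size and type as the one it replaces. One way to prepare it, assuming the surviving member is /dev/sdb and the replacement disk appears as /dev/sdc (hypothetical device names), is to copy the partition table from the good drive and then add the new partition to the array:

# sfdisk -d /dev/sdb | sfdisk /dev/sdc

# mdadm --add /dev/md0 /dev/sdc1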

Do not hot-plug disk drives (i.e., physically remove or add them with the power turned on) unless the drive, disk controller, and connectors are all designed for this operation. If in doubt, shut down the system, switch the drives while the system is turned off, and then turn the power back on.

If you check /proc/mdstat a short while after re-adding the drive to the array, you can see that the RAID system automatically rebuilds the array by copying data from the good drive(s) to the new drive:

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdc1[1] sdb1[0]

63872 blocks [2/1] [U_]

[=============>.......] recovery = 65.0% (42496/63872) finish=0.8min speed=401K/sec

unused devices: <none>
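Rather than running cat repeatedly, you can watch the rebuild progress update in place; this is a convenience, not part of the original example:

# watch cat /proc/mdstat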

The mdadm command shows similar information in a more verbose form:

# mdadm -D /dev/md0

/dev/md0:

Version : 00.90.03

Creation Time : Thu Mar 30 01:01:00 2006

Raid Level : raid1

Array Size : 63872 (62.39 MiB 65.40 MB)

Device Size : 63872 (62.39 MiB 65.40 MB)

Raid Devices : 2

Total Devices : 2

Preferred Minor : 0

Persistence : Superblock is persistent

Update Time : Thu Mar 30 01:48:39 2006

State : clean, degraded, recovering

Active Devices : 1

Working Devices : 2

Failed Devices : 0

Spare Devices : 1

Rebuild Status : 65% complete

UUID : b7572e60:4389f5dd:ce231ede:458a4f79

Events : 0.34

    Number   Major   Minor   RaidDevice   State
       0       8       17        0        active sync        /dev/sdb1
       1       8       33        1        spare rebuilding   /dev/sdc1
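mdadm can also watch arrays continuously and report failures by email. The following is a minimal sketch, assuming alerts should go to the root user:

# mdadm --monitor --scan --daemonise --mail=root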

6.2.1.4. Stopping and restarting a RAID array
