# mdadm.conf written out by anaconda

DEVICE partitions

MAILADDR root

ARRAY /dev/md0 spare-group=red uuid=5fccf106:d00cda80:daea5427:1edb9616

ARRAY /dev/md1 spare-group=red uuid=aaf3d1e1:6f7231b4:22ca60f9:00c07dfe

The name of the spare-group does not matter as long as all of the arrays sharing the hot spare have the same value; here I've used red. Ensure that at least one of the arrays has a hot spare, and that the hot spare is no smaller than the largest element it could replace; for example, if each device making up md0 were 10 GB in size and each element making up md1 were 5 GB in size, the hot spare would have to be at least 10 GB, even if it was initially a member of md1.
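Note that moving a shared spare from one array to another is handled by mdadm running in monitor mode (on Fedora, the mdmonitor service normally takes care of this). As a sketch, assuming /dev/sdd1 is an unused partition at least as large as the largest element, you could add the spare to md1 and start the monitor by hand:

# mdadm /dev/md1 --add /dev/sdd1

# mdadm --monitor --scan --daemonise

With the monitor running, a drive failure in md0 will cause the spare to be moved out of md1 and rebuilt into md0 automatically.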

6.2.3.5. ...configuring the rebuild rate for arrays?

Array rebuilds are normally performed at a rate of 1,000 to 200,000 KB per second per drive, scheduled so that the impact on application storage performance is minimized. Adjusting the rebuild rate lets you trade off application performance against rebuild duration.

The settings are accessible through two pseudofiles in /proc/sys/dev/raid, named speed_limit_max and speed_limit_min. To view the current values, simply display the contents:

$ cat /proc/sys/dev/raid/speed_limit*

200000

1000

To change a setting, write a new number to the appropriate pseudofile:

# echo 40000 >/proc/sys/dev/raid/speed_limit_max
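A value written this way lasts only until the next reboot. One way to make it persistent, assuming you want the limit applied at every boot, is to set the matching sysctl key (dev.raid.speed_limit_max corresponds to this pseudofile) in /etc/sysctl.conf:

dev.raid.speed_limit_max = 40000

and then load it immediately with:

# sysctl -p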

6.2.3.6. ...simultaneous drive failure?

Sometimes a drive manufacturer just makes a bad batch of disks, and this has happened more than once. For example, a few years ago, one drive maker used defective plastic to encapsulate the chips on the drive electronics; drives with the defective plastic failed at around the same point in their life cycles, so several elements of RAID arrays built using these drives would fail within a period of days or even hours. Since most RAID levels provide protection against a single drive failure but not against multiple drive failures, data was lost.

For greatest safety, it's a good idea to buy disks of similar capacity from different drive manufacturers (or at least different models or batches) when building a RAID array, to reduce the likelihood of near-simultaneous drive failure.
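To see which drive models and serial numbers are already in use in an array, you can query each drive's identity data; a quick sketch using smartctl (the device name is an example):

# smartctl -i /dev/sda | grep -E 'Model|Serial'

Repeating this for each array element will show whether you have several drives from the same model line or batch.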

6.2.4. Where Can I Learn More?

- The manpages for md, mdadm, mdadm.conf, hdparm, smartd, smartd.conf, mkfs, mke2fs, and dmraid

- The manpages for iscsid and iscsiadm

- The Linux-iSCSI project: http://linux-iscsi.sourceforge.net

- The Enterprise iSCSI Target project: http://iscsitarget.sourceforge.net/

6.3. Making Backups

Hard disks are mechanical devices. They are guaranteed to wear out, fail, and lose your data. The only unknown is when they will fail.

Data backup is performed to guard against drive failure. But it's also done to guard against data loss due to theft, fire, accidental deletion, bad editing, software defects, and unnoticed data corruption.

6.3.1. How Do I Do That?

Before making backups, you must decide:

- What data needs to be backed up

- How often the data needs to be backed up

- How quickly you need to restore the data

- How far back in time you need to be able to restore

Based on this information, you can develop a backup strategy, including a backup technology, schedule, and rotation.
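For a simple first strategy, tar alone is enough; a minimal sketch, assuming /home holds the data to preserve and /backup is a mounted backup volume (both paths are placeholders), creates a dated archive:

# tar -czf /backup/home-$(date +%F).tar.gz /home

To restore, unpack the archive relative to the root directory (tar strips the leading / when archiving, so -C / puts files back in their original locations):

# tar -xzpf /backup/home-2006-03-15.tar.gz -C /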

6.3.1.1. Determining what data to back up

Any data that you want to preserve must be backed up; usually, this does not include the operating system or applications, because you can reinstall those.

Table 6-5 lists some common system roles and the directories that should be considered for backup.

Table 6-5. Directories used for critical data storage in various common system roles

System role | Standard directories | Notes