When choosing a mass storage solution, the most important factors to consider are speed, reliability, and cost. It is rare to have all three in favor; normally a fast, reliable mass storage device is expensive, and cutting back on cost means sacrificing either speed or reliability. In designing my system, I ranked the requirements from most to least important. In this situation, cost was the biggest factor. I needed a lot of storage for a reasonable price. The next factor, speed, was not quite as important, since most of the usage would be over a one hundred megabit switched Ethernet, and that would most likely be the bottleneck. The ability to spread file input/output operations out over several disks would be more than enough speed for this network. Finally, the consideration of reliability was an easy one to answer. All of the data being put on this mass storage device was already backed up on CD-Rs. The drive was there primarily for online live storage for easy access, so if a drive went bad, I could just replace it, rebuild the file system, and copy the data back from CD-Rs.
To sum it up, I needed something that would give me the most storage space for my money. Large IDE disks are cheap these days. I found a place selling Western Digital 30.7 GB 5400 RPM IDE disks for about US$130 each. I bought three of them, giving me approximately ninety gigabytes of online storage.
I installed the hard drives in a system that already had one IDE disk in place as the system disk. The ideal solution would be for each IDE disk to have its own IDE controller and cable, but without fronting more money to acquire a dual IDE controller this was not a possibility. So, I jumpered two disks as slaves and one as a master. One went on the first IDE controller as a slave to the system disk, and the other two went on the secondary IDE controller as master and slave, respectively.
Upon reboot, the system BIOS was configured to automatically detect the attached disks. More importantly, FreeBSD detected them on boot:
ad0: 19574MB <WDC WD205BA> [39770/16/63] at ata0-master UDMA33
ad1: 29333MB <WDC WD307AA> [59598/16/63] at ata0-slave UDMA33
ad2: 29333MB <WDC WD307AA> [59598/16/63] at ata1-master UDMA33
ad3: 29333MB <WDC WD307AA> [59598/16/63] at ata1-slave UDMA33
At this point, if FreeBSD does not detect the disks, be sure that you have jumpered them correctly. I have heard numerous reports of problems with using cable select instead of a true master/slave configuration.
The next consideration was how to attach them as part of the file system. I did a little research on vinum(8) (Chapter 13) and ccd(4). In this particular configuration, ccd(4) appeared to be the better choice, mainly because it has fewer parts. Fewer parts tends to mean less chance of breakage. Vinum appears to be a bit of overkill for my needs.
CCD allows me to take several identical disks and concatenate them into one logical file system. In order to use ccd, I needed a kernel with ccd support built in. I added this line to my kernel configuration file and rebuilt the kernel:
pseudo-device ccd 4
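Rebuilding the kernel followed the standard FreeBSD 4.x procedure. A minimal sketch, assuming an i386 machine and a custom kernel configuration file named MYKERNEL (a placeholder name, not from the original setup):

cd /usr/src/sys/i386/conf
config MYKERNEL
cd ../../compile/MYKERNEL
make depend && make && make install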
ccd support can also be loaded as a kernel loadable module in FreeBSD 4.0 or later.
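If you would rather not rebuild the kernel, loading the module by hand should work as well. A minimal sketch, assuming the stock ccd module is installed:

kldload ccd
kldstat | grep ccd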
To set up ccd, I first needed to disklabel the disks. Here is how I disklabeled them:
disklabel -r -w ad1 auto
disklabel -r -w ad2 auto
disklabel -r -w ad3 auto
This created disklabels for ad1, ad2, and ad3, each with a single c partition spanning the entire disk.
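To double-check a label, it can be read back from the disk. For example:

disklabel -r ad1

This prints the label for ad1; the same works for ad2 and ad3.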
The next step was to change the disklabel type. To do that, I had to edit each disklabel:
disklabel -e ad1
disklabel -e ad2
disklabel -e ad3
This opened the current disklabel on each disk in whatever editor the EDITOR environment variable was set to, in my case vi(1). Inside the editor I had a section like this:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 60074784        0    unused        0     0     0   # (Cyl.    0 - 59597)
I needed to add a new "e" partition for ccd(4) to use. This can usually be copied from the "c" partition, except the fstype must be 4.2BSD. Once I was done, my disklabel looked like this:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 60074784        0    unused        0     0     0   # (Cyl.    0 - 59597)
  e: 60074784        0    4.2BSD        0     0     0   # (Cyl.    0 - 59597)
Now that I had all of the disks labeled, I needed to build the ccd. To do that, I used a utility called ccdconfig(8). ccdconfig takes several arguments; the first is the device to configure, in this case /dev/ccd0c. The device node for ccd0c may not exist yet, so to create it, perform the following commands:
cd /dev
sh MAKEDEV ccd0
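A quick listing should confirm that the device nodes now exist:

ls -l /dev/ccd0*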
The next argument ccdconfig expects is the interleave for the file system. The interleave defines the size of a stripe in disk blocks, each normally 512 bytes. So, an interleave of 32 would be 16,384 bytes.
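As a quick check of that arithmetic in sh (32 blocks of 512 bytes each):

echo $((32 * 512))

This prints 16384, the stripe size in bytes.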
After the interleave come the flags for ccdconfig. If you want to enable drive mirroring, you can specify a flag here. In this configuration, I am not mirroring the ccd, so I left it as zero.
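For reference, a mirrored setup would pass the CCDF_MIRROR flag from ccdconfig(8) in place of the zero. A sketch only, assuming two matched disks, since mirroring wants identically-sized components in pairs and is not what I am building here:

ccdconfig ccd0 32 CCDF_MIRROR /dev/ad1e /dev/ad2e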
The final arguments to ccdconfig are the devices to place into the array. Putting it all together, I get this command:
ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e
This configures the ccd. I can now newfs(8) the file system.
newfs /dev/ccd0c
Finally, if I want to be able to mount the ccd, I need to configure it first. I write out my current configuration to /etc/ccd.conf using the following command:
ccdconfig -g > /etc/ccd.conf
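Each line of the file names a device, its interleave, its flags, and its component disks. For this setup, the generated /etc/ccd.conf should look something like:

ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e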
When I reboot, the script /etc/rc runs ccdconfig -C if /etc/ccd.conf exists. This automatically configures the ccd so it can be mounted.
If you are booting into single user mode, before you can mount the ccd, you need to issue the following command to configure the array:
ccdconfig -C
Then, the ccd needs an entry in /etc/fstab so it will be mounted at boot time:
/dev/ccd0c /media ufs rw 2 2
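With that entry in place, the file system can also be mounted by hand (assuming the /media mount point exists; create it first if it does not):

mkdir -p /media
mount /media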
The Vinum Volume Manager is a block device driver which implements virtual disk drives. It isolates disk hardware from the block device interface and maps data in ways which result in an increase in flexibility, performance and reliability compared to the traditional slice view of disk storage. vinum(8) implements the RAID-0, RAID-1 and RAID-5 models, both individually and in combination.
See Chapter 13 for more information about vinum(8).
FreeBSD also supports a variety of hardware RAID controllers, in which case the actual RAID array is built and controlled by the card itself. Using an on-card BIOS, the card controls most of the disk operations itself. The following is a brief setup description using a Promise IDE RAID controller. When this card is installed and the system is started up, it displays a prompt requesting information. Follow the on-screen instructions to enter the card's setup screen. From there, you can combine all the attached drives into a single array. When you do this, the disks will look like a single drive to FreeBSD. Other RAID levels can be set up accordingly.
This, and other documents, can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/.
For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>.
For questions about this documentation, e-mail <doc@FreeBSD.org>.