Contributed by grey from the hardware recommendations dept.
Hello, my fellow OpenBSD Journal readers. I need your collective opinions and experience regarding a specific inquiry. At work, a couple of Linux advocates are trying to convince management to replace aging proprietary file servers with SUSE or Debian ones. I would like to use OpenBSD instead. We will either build our own servers or order them from a third party. I have found documentation on several OpenBSD RAID drivers, but I would like to read about your first-hand experience with these cards.
Which SCSI or IDE RAID cards have you had the best experience with? Any horrible issues or scenarios? Direct support for OpenBSD from the hardware vendor is a big plus for me. I would like to support the vendors who offer technical specifications and documentation to the OpenBSD development team.
Also, I have read horror stories online about people complaining that having a RAID array on a proprietary controller can lead to headaches later on. If a vendor stops supporting that card or goes out of business and your card needs to be replaced for whatever reason, then your data is locked in. Have any of you experienced that?
I thank you all for sharing your knowledge.
(Comments are closed)
By Breeno (70.64.188.156) on
Of course, you aren't using the RAID as a backup solution, right? It isn't a backup solution, it is a redundancy solution. A backup solution is something you can take off-site in case the building burns down, or floods, or gets picked up by a tornado, or the building is broken into and the disks are stolen, or... well, you should get the picture by now. A redundancy solution only provides additional insurance against downtime.
Accusys has a number of products on their website (www.accusys.com.tw) that you may want to look into. I've heard only good things about the ACS-7500. Granted, it's not SCSI, but it is an indicator that their SCSI stuff is probably good as well.
By Michael Joncic (195.70.110.145) on
brgds
Michael
By Anonymous Coward (69.197.92.181) on
I believe the aac driver for Adaptec RAID controllers is able to tell you when a drive in your array fails and the array is in degraded mode, at least for some models. These controllers aren't the greatest performance-wise, but they are probably your best bet for knowing what's going on with the array. Performance-wise, the ICP controllers are very good.
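For what it's worth, a crude way to keep an eye on that from userland, assuming the driver only reports problems via kernel messages (the exact wording is driver- and model-specific, so the grep pattern below is just a guess):

#!/bin/sh
# Hypothetical watchdog to run from cron: mail root if the kernel
# message buffer mentions anything that looks like a degraded array.
tmp=/tmp/raidwatch.$$
dmesg | grep -iE 'degraded|fail' > "$tmp"
if [ -s "$tmp" ]; then
        mail -s "possible RAID problem on $(hostname)" root < "$tmp"
fi
rm -f "$tmp"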
By dafros (83.31.155.11) admin@chem.uw.edu.pl on
By Anonymous Coward (69.197.92.181) on
By Anonymous Coward (143.166.255.16) on
All relevant RAID cards have XOR engines. Calculating or not calculating parity runs at virtually the same speed. You would never notice the difference.
Gosh, the amount of ignorance about RAID is astonishing.
RAID 0: The fastest one out there, but no redundancy.
RAID 1: Slow on writes because you go as slow as the slowest drive; fast reads, but not as fast as RAID 5 or 0.
RAID 5: Almost as fast as RAID 0; add a spindle and it'll be as fast.
RAID 10: Depending on the implementation, either really good or really bad. Don't forget the RAID 1 part is as slow as the slowest member.
RAID 50: Fast and very redundant; bus speed becomes the issue.
This is very general and can easily be disproved; all RAID cards have different behavior and performance sweet spots. Choose your RAID set wisely for your application.
No one is right here...
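To put some rough numbers on the capacity and redundancy side of those trade-offs (performance is a separate question), here is a small illustrative shell calculation; the disk count and size are made-up figures:

#!/bin/sh
# Illustrative only: usable capacity and fault tolerance for N disks of S GB each.
N=6
S=73
echo "RAID 0:  $((N * S)) GB usable, no drive failures tolerated"
echo "RAID 1:  $S GB usable (N-way mirror), survives up to $((N - 1)) failures"
echo "RAID 5:  $(((N - 1) * S)) GB usable, survives any single drive failure"
echo "RAID 10: $((N / 2 * S)) GB usable, survives one failure per mirrored pair"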
By Anonymous Coward (69.197.92.181) on
And as to your random slander of RAID 10: all RAID levels can be bad "depending on implementation", so that has nothing to do with anything. The fact is, in the worst-case scenario, RAID 10 is just as redundant as RAID 5, faster, faster when degraded, and wastes more disk space. You may even be able to survive multiple disk failures if you do 1+0 instead of 10.
By Anonymous Coward (67.64.89.177) on
By ansible (69.243.11.47) ansible@xnet.com on
All relevant RAID cards have XOR engines. Calculating or not calculating parity runs at virtually the same speed. You would never notice the difference.
No. In a healthy RAID-5 array, if you want to read a small chunk of data, you just read it. In a sick RAID-5 array, you have to read from the whole array, because you have to reconstruct the data: with five drives, for example, rebuilding one missing chunk means reading the corresponding chunks from the four surviving drives. So you have increased the amount of reading you have to do.
RAID 1: Slow on writes because you are going as slow as the slowest drive;
This is just as true of RAID-5, isn't it? Besides, if you're building a RAID array with stuff you've dug out of your extra parts bin, you've got other problems. If you have any sense, you build a new server with all the same drives purchased at the same time (with spares).
So in general, you're wrong about RAID-1 being slow. In fact RAID-1 will often give you better performance on reads than RAID-5. Write performance can vary vs. RAID-5 depending on the size of the writes.
These days, with the cost per gigabyte of drives having fallen so far, it doesn't make sense to do RAID-5 anymore, in my opinion. RAID-1 can also be much more robust as well (you can have a three-drive mirror, for example).
By sthen (81.168.66.229) on
By dafros (83.31.129.104) on
By Anonymous Coward (69.197.92.181) on
By Anonymous Coward (69.197.92.181) on
By cruel (195.39.211.10) on
...
iop0 at pci1 dev 1 function 0 "DPT SmartRAID (I2O)" rev 0x01: I2O adapter <ADAPTEC 2110S>
iop0: interrupting at irq 5
ppb0 at pci1 dev 1 function 1 "DPT PCI-PCI" rev 0x01
pci2 at ppb0 bus 2
...
iop0: configuring...
ioprbs0 at iop0 tid 521: <ADAPTEC, RAID-5, 380E> direct access, fixed
scsibus0 at ioprbs0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <I2O, Container #00, > SCSI2 0/direct fixed
sd0: 52503MB, 6693 cyl, 255 head, 63 sec, 512 bytes/sec, 107526144 sec total
...
By Simon (217.157.132.75) on
Not much fun to have hardware RAID if you aren't told when a drive fails.
By Nonesuch (163.192.21.46) on
This includes many of the SCSI-based "PERC" controllers in mid-range Dell rackmount servers.
By Shane (144.136.76.4) on
By Charles Hill (216.229.170.65) on
If it is the latter, you'll save yourself tons of headaches by getting a supported OS (Linux).
For example, Dell just bumped their line of PowerEdge servers from 17xx to 18xx. The changes included Intel EM64T Xeons, Double DDR RAM, and an update to the RAID controller (optional).
The lovely update came with drivers for Red Hat, Dell's supported version of Linux. It also came with source code, fortunately. We use Debian where I work, and it took 3 1/2 *days* to get everything configured properly with a kernel that supported all of our equipment. ("All" being about a half-dozen models of Dell servers, of which only the PERC 4e/Si RAID was the problem.)
After this little fiasco, I'd vote for SUSE Linux as the best compromise: not Windows, yet still getting some vendor support.
OTOH, if you're able to select specific RAID cards or are building your own equipment, then go for it. Being able to control the hardware means you can happily use OpenBSD with your RAID.
-Charles
By Marco Peereboom (143.166.226.18) slash@peereboom.us on www.peereboom.us
From a SCSI perspective I run ahc, ahd, mpt, isp & siop (I am sure I'm forgetting one) and they all do their job. I particularly like mpt.
FWIW,
/marco
By ansible (69.243.11.47) ansible@xnet.com on
As for hardware RAID on OpenBSD, I don't recommend trying to use it.
I've had decent success with software RAID on OpenBSD, and it can report the health of the drives, which is very important. I don't think there is as much of an issue as far as speed goes if you're running RAID-1.
It is not an optimal solution, of course. The system doesn't respond as quickly to a drive failure as a hardware solution would. You should definitely test it with your application before putting it into production.
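For readers who haven't set it up before, software RAID on OpenBSD here means RAIDframe (raid(4)/raidctl(8)). Below is a minimal sketch of a two-disk RAID 1 set; the device names, config file path and serial number are only examples, so check the man pages for your release:

# /etc/raid0.conf - example two-component RAID 1 set
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd1a
/dev/wd2a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100

# force the initial configuration, label the components, build parity,
# then check the status of the set
raidctl -C /etc/raid0.conf raid0
raidctl -I 2005012101 raid0
raidctl -iv raid0
raidctl -s raid0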
By Anonymous Coward (67.34.129.203) on
Doesn't RAIDframe just immediately fail the disk if it gets an unrecoverable error? (and then kick in the hot spare if there is one)
By ansible (69.243.11.47) ansible@xnet.com on
Not immediately, in my experience; you usually get a bunch of error messages spewing on the console first. Then, after the OS has decided that this is a permanent failure, the upper levels get notified and the rebuild on the hot spare starts.
I've seen hardware RAID solutions notice a disk failure within a few seconds.
There ought to be (or maybe already is) a kernel tunable parameter which controls how long, or after how many retries, temporary failures turn into a permanent failure.
I haven't researched the issue myself. It's not a big deal to me, as hardware failures aren't that frequent. But it is something to be aware of. You ought to be testing any kind of RAID system before putting it into production anyway. This is kind of hard on non-hot-pluggable hardware, but you need to see if you've done things right. I usually just yank the power cable from a drive to fail it manually.
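With RAIDframe you can also fail a component purely in software, which is a gentler way to rehearse the same scenario when the hardware isn't hot-pluggable. A sketch, with example device names (see raidctl(8)):

raidctl -f /dev/wd2a raid0     # mark the component as failed
raidctl -s raid0               # status should now show the set degraded
raidctl -R /dev/wd2a raid0     # rebuild in place onto that component
raidctl -S raid0               # watch the reconstruction progress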
By Anonymous Coward (69.197.92.181) on
By ansible (69.243.11.47) ansible@xnet.com on
Maybe you've already done this... I would want to fully investigate the hardware setup to make sure there aren't any other problems.
I've had issues on some systems where the cabling wasn't quite up to snuff, and there were problems running the drives with the higher levels of DMA on the controllers.
At any rate, I haven't really pushed a RAIDframe system like you apparently have. I mostly used OBSD for firewall and router systems. My fileservers are still all Linux.
Still, it would be interesting to set up a new system with SATA drives, and see what that does with RAIDframe.
By Anonymous Coward (69.197.92.181) on
By Anonymous Coward (66.189.124.7) on
If you can't afford/don't want the Dell PERC cards, go with a true hardware RAID solution, since the software is built into the card. We're also using some old PERC cards with OpenBSD, but I can't recall their model names. In any case, check out the OpenBSD supported hardware list.
By Anonymous Coward (216.162.177.130) treyg@griffinsystems.com on
By doxavg (4.17.250.5) on
By Anonymous Coward (69.197.92.181) on
By Anonymous Coward (138.217.52.28) on
By Brad (211.29.226.142) on
If you intend to run OpenBSD as a heavily used server, you'll find GENERIC won't do; you need to recompile and crash a number of times before you get the kernel variables correct, and each time you will be the one looking bad.
My experience is with Dells and PERC 3s.
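The comment doesn't name the kernel variables it means, so as a purely hypothetical illustration, these are the kind of kernel-config knobs people tuned on busy 3.x-era servers; the values are invented and would need to be sized to the machine and workload:

# fragment of a custom kernel config file (illustrative values only)
maxusers 64                    # scales several kernel table sizes
option  NMBCLUSTERS=8192       # more mbuf clusters for busy networks
option  BUFCACHEPERCENT=30     # let the buffer cache use more memory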
By Anonymous Coward (69.197.92.181) on
By Bob (202.37.106.8) eletter@rawl.co.nz on
We have been using software RAID (raidctl) under OpenBSD since 3.2 with very good results. Would ideally prefer hardware RAID, but the big downsides have always been (a) lack of RAID status feedback, (b) lack of RAID control from the OS level and (c) the proprietary data formats used to support HW RAID.
Item (a) is the biggest drawback by far - RAID buys time (not backups), and diminished status information equates to diminished time to manage disk replacements. This is a big issue for us.
Item (c) is addressed by backups - although it is often good to know disk data can be restored if so required.
We throw CPU and SCSI cards at raidctl in order to negate the SW versus HW resource issues.
Biggest single gripe with raidctl (in comparison to HW RAID used on many UNIX variants) is the downtime while parity rebuilds after an unsolicited shutdown. Not an insurmountable issue, as OpenBSD seldom if ever crashes - esp. as we are very picky on hardware/OS compatibility.
Would be interested to hear feedback from others.
Bob
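For anyone who hasn't seen it, the parity rebuild described above can at least be checked and monitored from userland; a small sketch with an example device name (see raidctl(8)):

raidctl -p raid0     # report whether the parity on the set is clean or dirty
raidctl -P raid0     # check the parity and rewrite it if it is dirty
raidctl -S raid0     # show the progress of parity rewriting or reconstruction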