AMD Processors
Topic Title: Motherboard and SCSI U320 controller
Created On: 02/29/2008 06:55 PM
 02/29/2008 06:55 PM
Snipe656
Junior Member

Posts: 5
Joined: 02/29/2008

I am in the process of building myself a new workstation and am trying to find a motherboard that either has a built-in U320 SCSI controller or is known to work with a specific U320 SCSI controller.

I have an AMD Athlon 64 X2 6400+ 3.2GHz Socket AM2 125W CPU, currently on a Gigabyte GA-M57SLI-S4 motherboard, and since I already have a case that fits this board, I need a replacement motherboard that fits the same case. The only controller card I have tried so far is an LSI MegaRAID 320-2E, and after spending weeks trying different drives, cables, media, and backplanes, along with working with support at LSI and Gigabyte, I have determined that the motherboard and controller do not agree with one another. I would rather find a known-working combination than keep trying different parts in the hope of stumbling on one.

The reason I want a U320 SCSI controller is that I have six 15k rpm U320 73GB drives that I want to run in a RAID 5 configuration, and I prefer a setup where I can run three drives on each of their own channels: a two-channel controller hosting one single RAID 5 array.
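
For reference, the arithmetic behind this layout, as a rough Python sketch (the ~95 MB/s sustained rate per 15k drive is an assumed figure, not something from this thread):

  # RAID 5 usable capacity and per-channel load for the proposed array.
  drives, per_drive_gb, per_drive_mb_s = 6, 73, 95
  usable_gb = (drives - 1) * per_drive_gb   # one drive's worth goes to parity
  channel_load = 3 * per_drive_mb_s         # three drives share one U320 channel
  print(usable_gb)     # 365 GB usable
  print(channel_load)  # ~285 MB/s, just under the 320 MB/s U320 channel limit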

If my CPU selection makes this an impossible combination, then I am open to CPU, motherboard, and controller recommendations.

-------------------------
-- Aaron Rouse
http://www.happyhacker.com
 03/01/2008 10:00 AM
MU_Engineer
Dr. Mu

Posts: 1837
Joined: 08/26/2006

Your best bet is to find a motherboard that has a PCI-X (NOT PCIe aka PCI Express) slot and use an add-in PCI-X U320 SCSI card. There are a couple of Socket AM2 workstation boards with PCI-X slots that would be a drop-in replacement for your current Gigabyte unit.

1. ASUS M2N32-WS Pro (http://www.newegg.com/Product/...Item=N82E16813131026)
$204.99, two 64-bit 133 MHz PCI-X slots.

2. ASUS M2N-LR (http://www.newegg.com/Product/...Item=N82E16813131134)
$214.99, two 64-bit 133 MHz PCI-X slots.

There are other boards with PCI-X slots, but they would require a different CPU and RAM and thus be much more expensive for you.

If you want a board with built-in SCSI, you will have to get a different processor and RAM. The only ones I turn up at first glance are dual socket 940 boards (first-generation Opteron, registered DDR memory), a hoary old E7320-based socket 775 board that can only take Pentium 4s, and a few dual socket 771 Intel Xeon boards that take FBDIMM memory.

-------------------------
 03/01/2008 10:27 AM
Snipe656
Junior Member

Posts: 5
Joined: 02/29/2008

Thanks. I ended up looking more into the boards with built-in SCSI controllers, and most of the ones I found did not appear to even support RAID 5; they were only good for mirroring. I failed to mention that I also have an LSI MegaRAID 320-2X sitting here, so the obvious choice will be one of those ASUS motherboards. It looks like the M2N-LR has better reviews than the M2N32-WS, so I will probably go with the better-reviewed one.

-------------------------
-- Aaron Rouse
http://www.happyhacker.com
 05/09/2008 10:44 PM
mapesdhs
Junior Member

Posts: 6
Joined: 05/09/2008

Snipe656 writes:
> Thanks, I ended up looking more into the boards with built in SCSI controllers and most that I found
> it looked like they did not even support RAID 5 and were just good for mirroring. ...

It's practically impossible to find consumer mbds that have PCIX slots or built-in SCSI controllers. Thankfully, the
ASUS workstation boards seem to be a good price, and at least the ASUS M2N32-WS Pro
has dual 16x PCIe for SLI, decent overclocking features, etc.

Hey, what model disks do you have? I'm using a 147GB Maxtor Atlas 15K II as a system disk
(the fastest of any 15K U320 I've tested so far) and 3 x SUN-branded Fujitsu 147GB 15Ks
for software-RAID data.


> ... I failed to mention but I do have an LSI MegaRAID 320-2X sitting here as well so the obvious choice
> will be one of those Asus motherboards. ...

I have the same LSI card, as well as an LSI PCIe U320 card (eBay bargain for $90), and am in
a similar situation to you. Currently my drives are linked via boring 32-bit/33MHz PCI. Still, for a
system with the same CPU/gfx (see sig), I did end up with the #2 spot for PCMark05 and #6
for 3DMark06. Pretty good for a $70 mbd! 8)

But now I feel I really want to allow the disks to perform as fast as they are able. I once had a
Dell 650 with PCIX and built-in LSI U320 RAID, so I know how fast it can be, but the Dell
only had DDR266 RAM which killed 3D speed completely, so I upgraded (my new system with
the same X1950 Pro AGP card is as much as 6X faster!). Funny part is, selling the Dell paid
for the entire upgrade. It was the week AMD halved the 6000+ price, so the timing was perfect.

It's my bday this month so my brother is paying for the new mbd. 8) Since I have a PCIe U320
card, I did consider the ASUS AX78, i.e. the 2nd PCIe 16x slot (wired as 4x) could be used to
hold the LSI card, which is indeed an x4. However, the board has no PCIX, and I would really like
to make proper use of the LSI PCIX card. So, after lots of searching these past few days,
reading reviews, comparison articles, etc., it does seem the ASUS M2N32-WS is the best
available. I didn't come across the M2N-LR board during my research, but looking at the specs
I see it only has one 16x PCIe slot, so no SLI, and no spare slot for PCIe HBAs.

Btw, if you do ever have to buy SATA but still want mega speed, then consider the Areca 1280
or lesser cards: more than 1100MB/sec with the 24-port version, and I know someone in
Finland who has 6 disks on the 12-port version and gets 800MB/sec. The cards are expensive,
but for that kind of speed I'm happy to live on rice for a month. For the moment though,
it's easier to get cheap 2nd-hand 10K/15K SCSI, and the speeds scale nicely (I get more than
600MB/sec with my Octane2).


> ... Looks like the M2N-LR has better reviews than the M2N32-WS so probably will go with the
> better reviewed one.

Hmm, I didn't come across any unfavourable reviews of the WS board; do you have a reference?

Did you buy the LR board in the end? How did it go? I hope it all worked out ok! Good to know
there are other 15K U320 users out there. 8) Always makes me giggle when Sandra says my
3-disk stripe has an avg access time of 0ms. Ahh, the joys of SCSI. Beats me why people
bother with SATA when a 2nd-hand 15K SCSI is cheaper and faster than the best 10K new
SATA (pretty much impossible to get 10K SATA 2nd-hand). The three Fujitsus I have only cost
$100 each, at a time when they were $900 new.

Anyway, my plan is to get the WS mbd along with an ASUS 8800GT-TOP 512MB 700/2000.
I can retain all of the other parts, which includes the 6000+, U120 Extreme cooler, Scythe and
Thermaltake fans, RAM, disks, etc.

Btw, my main desire for fast disk speed is to help support advanced video encoding; I'm working
on combo SGI/PC video editing solutions - SGI for capture/edit/playback, PC for final format
conversion. But I also use the PC system for Oblivion/Stalker.


Dear AMD, given the mad cost of the most expensive enthusiast/overclocker boards, why
doesn't someone just once include a PCIX slot or two on one of these boards? If this is indeed
the target market, then surely these are the very people who are most likely to want to pick up
a cheap 2nd-hand PCIX U320 SCSI card and experiment with what SCSI has to offer?

Ian.

-------------------------
Centurion Plus 534, Thermaltake 680W
ASUS M2N32 WS Pro PCIe/PCIX, Athlon64 X2 6000+ 3.25GHz (U120E), 4GB DDR2/800
Gigabyte 8800GT Zalman 512MB, 790/1790/980
LSI20320IE 4x PCIe U320 SCSI, LSI 22230R PCIX U320 SCSI
13 x 147GB 15K U320 SCSI, 20X Liteon DVDRW
 05/09/2008 11:34 PM
Snipe656
Junior Member

Posts: 5
Joined: 02/29/2008

I have 6 Seagate ST373454LC drives; I am running three on each channel of the card and doing RAID 5. The only reason I am running them is that I got all six for $200, although one turned out to be bad. I got my money back on that one, but the replacement I ordered from another place turned out to be the wrong size, so I sent it back, and they never sent the correct drive or my money back. I ultimately got a replacement from yet another place, so I guess my HD investment is near $300; had I gotten my money back on that one, it would be around $240.

I ended up with the WS motherboard; I believe the other board lacked some of the integrated pieces that, at the time, I did not want to buy cards for, though I might be misremembering. I did read a few scattered bad reviews of the WS, but they were all about the SATA ports and poor documentation; I think I found those by going to different parts websites (Newegg, TigerDirect, etc.) and reading the user reviews of the boards.

The computer has only been up and running for 1-2 months, and so far I have not really had any issues to speak of. My UPS has been acting up this week, and I discovered that suspend mode was not working right, but a quick change to a setting in the BIOS seems to have resolved that issue. I tried running a free HD benchmarking tool on the computer, but the number it gave back was close to the prior box, and I doubt they are the same speed, since that box used the IBM drives with the old Mylex card. It certainly does not seem as slow, but I am not going to complain, since overall the box is doing what I need it to do. Oh, I also ordered an open-box WS motherboard from Newegg at a significant discount from a boxed one. It lacked the manual and I think some cables, but the manual can be downloaded and cables are no issue for me.

I am not overclocking or anything of that nature, although I would assume this rig has the cooling to handle any extra heat. I did run into an issue with the hard drives getting scorching hot. I have a full-tower Armor case; three of the drives I mounted above the power supply in their cage, with, I guess, an 80mm or 120mm fan next to it. The other three I mounted in the cage Armor includes, which has a massive fan on it and takes up three 5.25" bays. The drives above the supply got HOT, but the others were always cool, so I got another one of those big cages and solved that problem.

The reason I went down this entire path is that I develop web-based business applications and my primary client is switching to MOSS (SharePoint). When I started to get involved in that and found I needed to set up Win2k3 virtual machines on my old machine, I first discovered I needed more HD space. So initially I tried a pair of mirrored SATA drives to replace my five 9GB 80-pin SCSI drives. That change in HDs caused a massive drop in the performance of the box, which made me decide to stay with SCSI, since it had worked great for me for so many years. Since I also figured I needed more RAM, I used this as an excuse to build a new workstation. I just hope I did not make a mistake by only putting 4GB of RAM in it. I can always add more later, although the chips I got have some big heatsinks on them, and if I were to put four chips in there they would interfere with the massive CPU fan I got.

Overall I am happy with the box, but I often wonder if I should have just done a pair of Xeons with one of the many Intel boards with built-in SCSI controllers, or at least slot configurations that better met the needs of the LSI card offerings.

-------------------------
-- Aaron Rouse
http://www.happyhacker.com
 05/10/2008 12:12 AM
MU_Engineer
Dr. Mu

Posts: 1837
Joined: 08/26/2006

Originally posted by: mapesdhs
It's practically impossible to find consumer mbds that have PCIX slots or built-in SCSI controllers.

Thankfully, the ASUS workstation boards seem to be a good price, and at least the ASUS M2N32-WS Pro has dual 16x PCIe for SLI, decent overclocking features, etc.


The reason for the lack of PCI-X slots and SCSI controllers on most motherboards is that both are outmoded. PCI-X is slower and less flexible than PCIe, and SCSI is barely faster but much less flexible than SATA or SAS. You see PCIe on almost every new motherboard made today, and a $40 budget consumer motherboard's PCIe x16 slot will run a SAS controller card just as well as a $400 workstation board's PCIe x16 slot will.
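
For rough context, a Python sketch of the peak theoretical bandwidths being compared here (the PCIe 1.x per-lane rate is an assumed figure):

  # Peak theoretical bandwidth of the buses discussed in this thread (MB/s).
  pci_32_33 = 32 / 8 * 33      # ~133 MB/s, plain 32-bit/33 MHz PCI
  pcix_64_133 = 64 / 8 * 133   # ~1064 MB/s, 64-bit PCI-X at 133 MHz
  pcie1_lane = 250             # PCIe 1.x, ~250 MB/s per lane, each direction
  print(pci_32_33, pcix_64_133, pcie1_lane * 4, pcie1_lane * 16)
  # -> 132.0 1064.0 1000 4000: a PCIe x4 link roughly matches PCI-X at 133 MHz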

SAS isn't super common on single-socket boards, as SATA disk performance is decent enough. You generally only see 15k SAS drives in e-mail and DB servers and the like, and those tend to be pretty big iron: dual-socket or better.

Originally posted by: mapesdhs
I have the same LSI card, as well as an LSI PCIe U320 card (eBay bargain for $90), and am in a similar situation to you. Currently my drives are linked via boring 32-bit/33MHz PCI. Still, for a system with the same CPU/gfx (see sig), I did end up with the #2 spot for PCMark05 and #6 for 3DMark06. Pretty good for a $70 mbd! 8)

But now I feel I really want to allow the disks to perform as fast as they are able. I once had a Dell 650 with PCIX and built-in LSI U320 RAID, so I know how fast it can be, but the Dell only had DDR266 RAM which killed 3D speed completely, so I upgraded (my new system with the same X1950 Pro AGP card is as much as 6X faster!). Funny part is, selling the Dell paid for the entire upgrade. It was the week AMD halved the 6000+ price, so the timing was perfect.


If you want to keep the PCI-X U320 SCSI card so badly, there are plenty of dual-socket motherboards out there that will work very nicely. AMD dual socket 1207/F motherboards are not ridiculously priced for dual-socket setups, at $300-400 for a decent board, versus about $400 minimum for an Intel dual socket 771 unit.

Originally posted by: mapesdhs
Btw, if you do ever have to buy SATA but still want mega speed, then consider the Areca 1280 or lesser cards: more than 1100MB/sec with the 24-port version, and I know someone in Finland who has 6 disks on the 12-port version and gets 800MB/sec. The cards are expensive, but for that kind of speed I'm happy to live on rice for a month. For the moment though, it's easier to get cheap 2nd-hand 10K/15K SCSI, and the speeds scale nicely (I get more than 600MB/sec with my Octane2).


Duly noted.

Originally posted by: mapesdhs
Good to know there are other 15K U320 users out there. 8) Always makes me giggle when Sandra says my 3-disk stripe has an avg access time of 0ms. Ahh, the joys of SCSI. Beats me why people bother with SATA when a 2nd-hand 15K SCSI is cheaper and faster than the best 10K new SATA (pretty much impossible to get 10K SATA 2nd-hand). The three Fujitsus I have only cost $100 each, at a time when they were $900 new.


People bother with SATA because putting together a SCSI system is much more difficult and expensive than putting together a SATA system:

1. All devices on one channel share bandwidth. You can put three okay 15k drives or two really good 15k drives on a channel before you run into an I/O bottleneck (rough numbers in the sketch after this list). Each SATA link is point-to-point, which means that you can put as many disks as the controller itself will support without seeing an interface bandwidth bottleneck.

2. Most SCSI controllers I've seen are one- or two-channel setups, meaning that you can put 2-6 drives per controller without any bottlenecks. However, 8-port SATA controllers are very common and inexpensive, and there are 16- and 24-port controllers out there also.

3. You will need to have multiple SCSI controllers to address enough disks to make an extremely fast array while you can use a single SATA controller.

4. The $50 per disk you may save by going for $100 used 74 GB 15k SCSI disks vs. $150 74 GB 10k new SATA disks likely won't be made back once you count in the additional costs of controllers and SCSI cabling. SATA cables are abundant and dirt cheap.
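
A rough Python sketch of the channel arithmetic in point 1 (the ~90 MB/s sustained per-drive rate is an assumption for illustration):

  # Why a shared U320 channel saturates with only a few 15k drives.
  channel_mb_s = 320
  drive_mb_s = 90
  for n in range(1, 6):
      demand = n * drive_mb_s
      print(n, demand, "bus-limited" if demand > channel_mb_s else "ok")
  # 3 drives -> 270 MB/s (ok); 4 drives -> 360 MB/s (bus-limited)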

The real pros go for SAS, as SAS has the advantages of SATA in that you can put many disks on a single controller without bottlenecking the interface bandwidth, as well as the smaller, easier-to-route cabling and backwards compatibility with standard SATA disks for mass-storage purposes.

Originally posted by: mapesdhs
Btw, my main desire for fast disk speed is to help support advanced video encoding; I'm working on combo SGI/PC video editing solutions - SGI for capture/edit/playback, PC for final format conversion. But I also use the PC system for Oblivion/Stalker.


I would recommend SSDs for you as they are very fast, but enough SSD storage capacity to work with video is expensive enough to make SCSI equipment look cheap by comparison.

You also need to consider which component of your system is the slowest link in the chain before you go wild with the disks. I'm guessing it will either be the network link between the SGI and the PC (GbE tops out at ~110 MB/sec) or the encoding speed of the X2 6000+. I'd consider putting a Phenom 9850 in place of the X2 6000+, as the 9850 has twice the core count of the X2 6000+ and much-upgraded SSE capabilities. Or if you want something much faster, go for a dual-socket quad-core Opteron setup, where you have eight cores and can then get SCSI and PCI-X onboard for the SCSI disks you already have.
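
A quick Python sketch of where that ~110 MB/sec GbE figure comes from (the ~10% framing/protocol overhead is an assumption):

  # Gigabit Ethernet ceiling: 1 Gbit/s line rate minus TCP/IP framing overhead.
  line_rate_mb_s = 1_000_000_000 / 8 / 1e6   # 125.0 MB/s raw
  print(line_rate_mb_s * 0.9)                # ~112 MB/s usable, give or take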

Originally posted by: mapesdhs
Dear AMD, given the mad cost of the most expensive enthusiast/overclocker boards, why doesn't someone just once include a PCIX slot or two on one of these boards? If this is indeed the target market, then surely these are the very people who are most likely to want to pick up a cheap 2nd-hand PCIX U320 SCSI card and experiment with what SCSI has to offer?


Most enthusiasts are gamers first and foremost. A very fast HDD subsystem will do little more than speed up level load times by a few seconds. Gamers mostly care about framerate and detail level, which are mostly GPU-dependent and to a smaller extent CPU-dependent. This is why those extremely expensive gamer motherboards have provisions to run two, three, or even four graphics cards in tandem. They are also set up to withstand putting well over 100 amps through the CPU socket for overclocking purposes. PCI-X does nothing but take away board space that could better be used for more voltage regulators, chipset cooling, or additional PCIe x16 slots for more GPUs. Besides, gamers are notorious for paying a ton of money for things, so they'd just buy PCIe SCSI HBAs to run SCSI disks if they wanted to play with SCSI.

-------------------------
 05/10/2008 06:25 AM
mapesdhs
Junior Member

Posts: 6
Joined: 05/09/2008

Snipe656 writes:
> I have 6 Seagate ST373454LC drives, I am running three on each channel of the card and
> doing RAID 5. ...

I see! I only managed to obtain two of that model, one of which I have in my SGI Octane2 as
the main data disk for all my personal files. The 15K speed helps a lot when I need to search
my email archive for some reference (35% faster than a 10K in tests). I think I sold the other one.

Just random luck I suppose; I ended up finding more Maxtor/Fujitsu/Hitachi drives instead.


> Only reason I am running them is because I got all six for $200 although one turned out to
> be bad, got my money back on that then the replacement I ordered from another place ...

Wow! Bummer about the bad one, but still, $50 each is very good.


> ... I did read a few random bad reviews on the WS but they were all around the SATA and
> poor documentation, I think I found those via going to different parts websites(new egg,
> tiger direct, etc.) and reading the comments on the boards in the user reviews. ...

Ah, that would explain it. I can imagine typical users being more concerned about SATA issues,
though it makes you wonder: if what they wanted was SATA, then why go for the WS in the
1st place? Seems an odd choice.


> The computer has only been up and running for 1-2 months and so far I have not really had
> any issues to speak of. ...

Good to hear!


> ... MY UPS has been acting up this week and I discovered that the suspend mode was not

I don't have any UPSs at all. Bad me.


> running some free HD benchmarking tool on the computer but the number it gave back was
> close to the prior box and I doubt they are the same speed since that box used the IBM

The benchmarks I used were:

SiSoft Sandra
PCMark2002
PCMark2005
My own movie conversion test using a large uncompressed video file.


> ... Oh, I also ordered an open boxed WS motherboard from New Egg that was at a significant
> discount from a boxed one. It lacked the manual and I think some cables but the manual can be
> downloaded and cables is no issue for me.

Yes, I have a similar option open to me here: www.dabs.com is offering 2nd-hand WS boards
for 70 UKP ($140), but they're described as 'warranty repaired', which I don't like the sound of.


> ... I did run into an issue with the hard drives getting scorching hot. I have a full tower Armor
> case and three of the drives I mounted above the power supply in their cage with I guess it is

Hot drives were something I thought about early on. I bought one of those ultra-thin fan coolers
for the system disk (clips underneath) and, rather like the 3-bay fan you refer to, I constructed
a custom front plate to hold a 120mm fan on the front of the system which sucks air out past
the other 3 disks. Seems to work ok. No space for any more disks though; my next system will
use a larger case.


> So initially I tried a pair of mirrored SATA drives to replace my 5 9GB 80-pin SCSI drives.
> That change in HD's presented a massive drop in performance of the box, which just made
> me decide to stay SCSI since it worked great for me for so many years. ...

It's interesting that you observed such a performance drop. I wonder if many other professional
users have gone through that experience, given the push in recent years to stop using SCSI.


> ... I just hope that I did not make a mistake by only putting 4GB of RAM in it but can always
> add more later ...

At least RAM is reasonably cheap now.


> ... although the chips I got have some big heatsinks on them that if I were to put four chips in
> there they would interfere with the massive CPU fan I got.

Hmm, that's a pain. I expect there are low-profile DIMMs available though.


>Overall I am happy with the box, but I often wonder if I should have just done a pair of Xeons
>with one of the many boards for Intel chips with built in SCSI controllers or at least slot
> configurations that better met the needs for the LSI card offerings.

Yes, that's a viable alternative, though more expensive I expect. I was just surprised the WS was
only approx. $200, given how expensive the top-end gamer/enthusiast boards usually are (eg.
Striker Extreme, Skulltrail, etc.), though I suppose the high cost of the latter is more due to the
support for lots of gfx cards. Just seems silly to me that such boards don't include even a single
PCIX slot; one would think extreme gamers would love the speed advantage offered by SCSI,
especially since 2nd-hand 15K U320 that is still under warranty is cheaper than a new 10K SATA.

Glad to hear it worked out ok for you!

Ian.

-------------------------
Centurion Plus 534, Thermaltake 680W
ASUS M2N32 WS Pro PCIe/PCIX, Athlon64 X2 6000+ 3.25GHz (U120E), 4GB DDR2/800
Gigabyte 8800GT Zalman 512MB, 790/1790/980
LSI20320IE 4x PCIe U320 SCSI, LSI 22230R PCIX U320 SCSI
13 x 147GB 15K U320 SCSI, 20X Liteon DVDRW
 05/10/2008 08:16 AM
Snipe656
Junior Member

Posts: 5
Joined: 02/29/2008

In regards to the comments about just going PCIe instead of PCIX: that is exactly what I originally tried to do. My research found that the fastest U320 controller out there at the time was the first LSI card I bought. Unfortunately, I spent probably 1-2 months with tech support from both LSI and Gigabyte trying to diagnose why Windows XP (64- or 32-bit) would not install on it. The only reason I finally figured out the card was incompatible with the motherboard was that I happened to have two older Mylex cards and 5 extra SCSI drives to prove the computer had no other issues causing my install problems.

Actually, the experience I had could be a lesson for many not to even mess with SCSI, because most people are not going to want to sit and try different pieces of hardware to find a combination that works. I ultimately ended up losing a bit of money on that first motherboard, but luckily I was able to exchange the LSI card. Interestingly enough, I had read a review where someone used that model motherboard with that LSI card and bragged about the speed. I did try more than one LSI card, because I exchanged the first one for another (same model), but I did not try more than one motherboard. I did, though, successfully install and use Windows on that motherboard with not only my SCSI setup but also an IDE drive and a SATA drive.

-------------------------
-- Aaron Rouse
http://www.happyhacker.com
 05/10/2008 08:49 AM
mapesdhs
Junior Member

Posts: 6
Joined: 05/09/2008

MU_Engineer writes:
> The reason for the lack of PCI-X slots and SCSI controllers on most motherboards are that both
> are outmoded. PCI-X is slower and less flexible than PCIe and ...

The max bandwidth is certainly way less than PCIe, but none of the controller cards need that
kind of speed (option cards seem to be only 1x or 4x PCIe), and tests show that few games really
need the speed of 16x PCIe in terms of gfx bandwidth. Most games will run much the same with
the link reduced to 8x or even 4x, the only exception being MS FSX, which slows down because
it's written very badly. For storage, affordable PCIe cards don't have many ports anyway, usually
no more than four, so the PCIe links are hardly being tickled. Cost is also an issue.


> SCSI is barely faster but much less flexible than SATA or SAS. ...

No surprise that SCSI is not faster than SAS, but it's easily faster than SATA; the access times of
the latter are still pretty woeful compared to 15K SCSI, and I don't know of any affordable SATA
disk that can yet beat the 98MB/sec of a 2nd-hand, still-under-warranty Maxtor 15K II. Other
SCSI disks are even faster, but I'm happy with the Maxtors, partly because their warranty support
people are really helpful.

Plus, 10K SATA and SAS just aren't that affordable 2nd-hand, whereas it's easy to obtain 15K
SCSI (by that I mean drives that still have valid end-user warranties). Indeed, 10K SATA 2nd-hand
is almost non-existent.


> You see PCIe on almost every new motherboard made today and a $40 budget consumer
> motherboard's PCIe x16 slot will run a SAS controller card just as well as a $400 workstation
> board's PCIe x16 slot will.

Very true. The problem is, SAS drives are very expensive, eg. $850 for a 147GB 15K.
When they do appear 2nd-hand, prices are usually 2X those of a 15K SCSI of the same capacity.


> dual-socket setups at $300-400 for a decent board, versus about $400 minimum for an Intel
> dual socket 771 unit.

Alas, that's kinda more than I want to spend. And wouldn't that mean changing the CPU? I want
to keep my 6000+ for the moment.


> People bother with SATA because putting together a SCSI system is much more difficult and
> expensive than putting together an SATA system:

I've found completely the opposite to be the case. 2nd-hand HBAs are cheap (I bought 12-bay
Sagitta units for $50 each), and older SCSI is insanely cheap 2nd-hand (with lots of drives in
a unit, they don't need to be individually fast, so 73GB 10K drives at $20 each are no problem). SCSI
cables also cost little 2nd-hand, eg. a 10m VHDCI cable cost me $10 off eBay Germany, or I can
get 4m VHDCI in quantity for $16 each.

I should perhaps mention that I'm doing all this stuff on a budget. If I was buying new, then
the issues/choices would be different.


> 1. All devices on one channel share bandwidth. You can put three okay 15k drives or two really
> good 15k drives on a channel before you run into an I/O bottleneck. ...

Although the max speed of the bus may be reached with just a few drives, having more does greatly
improve random read/write, and the access times get better as well. Performance definitely still
improves with more than 3 drives.
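
A rough Python sketch of why spindle count helps random I/O (the 3.8 ms average seek is an assumed figure for a 15k drive):

  # Random-I/O scaling with spindle count: each drive services requests
  # independently, so aggregate IOPS grows roughly with the number of drives.
  seek_ms = 3.8
  rotational_ms = 60_000 / 15_000 / 2   # half a revolution at 15k rpm = 2 ms
  per_drive_iops = 1000 / (seek_ms + rotational_ms)   # ~172 IOPS per drive
  for n in (1, 3, 6):
      print(n, round(n * per_drive_iops))   # 1 -> 172, 3 -> 517, 6 -> 1034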


> 3. You will need to have multiple SCSI controllers to address enough disks to make an extremely
> fast array while you can use a single SATA controller.

That's true, but the speed of even a single PCIX card is more than enough for me. I would love to
have one of the Areca cards, but they're far too expensive. The LSI card only cost me $90.


> vs. $150 74 GB 10k new SATA disks

I would compare to 147GB+ 10K SATA since shops here don't sell the 74GB models at all. A 150GB
10K SATA is about $240 here, whereas a 147GB 15K SCSI still under warranty 2nd-hand is about
$120, and the access time of the SCSI is much better.


> ... likely won't be made back up when you count in the additional costs of controllers and SCSI
> cabling. SATA cables are abundant and dirt cheap.

You're talking new pricing, for which that would be true. No need to buy new though, plenty of
2nd-hand HBAs available.


> I would recommend SSDs for you as they are very fast but enough SSD storage capacity to
> work with video is expensive enough to make SCSI equipment look cheap by comparison.

Correct, SSD is far too costly. A 250GB SSD is $8000!


>You also need to consider what component of your system is the slowest link in the chain before
> you go wild with the disks. I'm guessing it will either be the network link between the SGI and the
> PC (GbE tops out at ~110 MB/sec is all) ...

The disk speed isn't a factor for the networking side of things. It's more for the encoding.


> or the encoding speed of the X2 6000+. ...

The encoding path uses uncompressed video, for which disk speed helps a lot, not just for the
encoding itself but also with respect to manipulating the files.
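
A quick Python sketch of the data rates involved (the frame sizes, bit depths, and rates are illustrative assumptions, not figures from this thread):

  # Sustained data rates for uncompressed video streams.
  def mb_per_sec(width, height, bytes_per_pixel, fps):
      return width * height * bytes_per_pixel * fps / 1e6
  print(mb_per_sec(720, 576, 2, 25))     # ~21 MB/s, 8-bit 4:2:2 PAL SD
  print(mb_per_sec(1920, 1080, 2, 25))   # ~104 MB/s, 8-bit 4:2:2 HD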


> I'd consider putting a Phenom 9850 in place of the X2 6000+ as the 9850 has twice the core
> count of the X2 6000+ and much-upgraded SSE capabilities. ...

Alas, my next system will almost certainly be using a quad-core Intel, probably a Nehalem. Sad
to say, even the lesser current Intel quads perform much better than Phenom. Right now, if I were
putting together a totally new system, I'd be getting either a quad-core Intel or an E8400 dual
and overclocking it (4.1GHz with air cooling, no problem), which would easily beat the Phenom.
One problem is that the apps are not that mature yet for this sort of task, and the Phenom's
low clock speed & low overclocking potential kinda hurt.


>Or if you want something much faster, go for a dual-socket quad-core Opteron setup, where
> you have eight cores and then can get SCSI and PCI-X onboard for your SCSI disks you
> already have.

Not an option for me atm as I don't want to change my CPU. Right now the setup is more
experimental as I work out the precise codec paths to use and other issues. My next system,
for production work, will be a new CPU/case/etc.


> Most enthusiasts are gamers first and foremost. A very fast HDD subsystem will do little more
> than speed up level load times a few seconds. Gamers mostly care about framerate and detail

True I guess, though movie stuff is something they also seem to be into a lot these days.


> ... so they'd just buy PCIe SCSI HBAs to run SCSI disks if they wanted to play with SCSI.

Also true, though it's hard to find any dual-channel PCIe cards. A single-channel LSI PCIe here is
$275 (so my $90 eBay win was quite good), while the only 'affordable' PCIe SATA card is
$140 for only 4 ports. Ironically, the one decent card available for $160 (8-port HP SAS/SATA
with RAID) is of course PCIX instead. QED.

Ian.

-------------------------
Centurion Plus 534, Thermaltake 680W
ASUS M2N32 WS Pro PCIe/PCIX, Athlon64 X2 6000+ 3.25GHz (U120E), 4GB DDR2/800
Gigabyte 8800GT Zalman 512MB, 790/1790/980
LSI20320IE 4x PCIe U320 SCSI, LSI 22230R PCIX U320 SCSI
13 x 147GB 15K U320 SCSI, 20X Liteon DVDRW
 05/10/2008 09:11 AM
mapesdhs
Junior Member

Posts: 6
Joined: 05/09/2008

Snipe656 writes:
> ... Actually the experience I had could be a lesson for many to not even try to mess with SCSI
> because most people are not going to want to sit and try different pieces of hardware to find
> a combination that works. ...

I would say that sort of occurrence is more a vote against the particular mbd or HBA than against SCSI
in general. Mbd makers often cut corners in how they support various types of hardware.

Ian.

-------------------------
Centurion Plus 534, Thermaltake 680W
ASUS M2N32 WS Pro PCIe/PCIX, Athlon64 X2 6000+ 3.25GHz (U120E), 4GB DDR2/800
Gigabyte 8800GT Zalman 512MB, 790/1790/980
LSI20320IE 4x PCIe U320 SCSI, LSI 22230R PCIX U320 SCSI
13 x 147GB 15K U320 SCSI, 20X Liteon DVDRW
 05/10/2008 09:19 AM
Snipe656
Junior Member

Posts: 5
Joined: 02/29/2008

There are many ways to look at things, but I doubt you can contact any mbd manufacturer, ask them specifically whether they support that LSI card, and get a response better than "we think it will, but we have never tested that specifically." Which means it comes down to buying, hoping, and then testing to see if it works. I contacted both LSI and Gigabyte prior to buying, and neither could tell me with 100% confidence, but when I found that one consumer review I decided to try it, and it failed for me. It took me a long time to verify that it definitely was those two pieces, out of all the parts that were brand new to me, that were not playing well with one another.

-------------------------
-- Aaron Rouse
http://www.happyhacker.com
 05/10/2008 10:24 AM
mapesdhs
Junior Member

Posts: 6
Joined: 05/09/2008

Snipe656 writes:
> There are many ways to look at things but I doubt you can contact any MBD manufacturer and ask
> them specifically if they support that LSI card and you will get a response better than "we think it
> will but we never have tested that specifically" Which means it comes down to just buying, hoping
> and then testing to see if it works. ...

Alas, this is the typical luck of the draw when dealing with PC hw. I did run into something
similar with my mbd: the LSI PCIX card was conflicting somehow, though thankfully the advice
from Asrock very quickly allowed me to work out what was wrong (an IRQ conflict with the parallel
port, I think it was). No problems with a QLA12160 though, which is what my system disk
sits on (it's the three Fujitsus that are connected to the LSI).


> definitely was those two pieces out of all brand new parts to me that were not playing well with
> one another.

Too many combinations to ever be sure. Still, for some, it's all part of the fun of building such
systems oneself. 8)

Ian.

-------------------------
Centurion Plus 534, Thermaltake 680W
ASUS M2N32 WS Pro PCIe/PCIX, Athlon64 X2 6000+ 3.25GHz (U120E), 4GB DDR2/800
Gigabyte 8800GT Zalman 512MB, 790/1790/980
LSI20320IE 4x PCIe U320 SCSI, LSI 22230R PCIX U320 SCSI
13 x 147GB 15K U320 SCSI, 20X Liteon DVDRW
 07/06/2008 05:16 PM
TommyB0y
Junior Member

Posts: 2
Joined: 07/06/2008

Nice setup.

I also have SCSI drives; I picked up 4 36GB 10K rpm U320 drives for $100 a few months ago, so I am about to add them to the one I have now, which I run Windows off of. I know I could have gotten a single SATA drive with 3 times the capacity for the same price, but I don't care; I also have a 1TB SATA striped array.

My 6-year-old computer still keeps up with everything and transfers files so fast it's insane. I'm taking out my SATA RAID to put in other computers and will just have the SCSI striped array. I have an older P-II Xeon system with 4 9GB U160 drives, and that was awesome at the time.

You cannot beat SCSI U320 speed with SATA, although it's close. And since you have 15K rpm drives, there is no way SATA will touch it.

SCSI has such awesome throughput, not just theoretical bandwidth.

Anyway, I completely agree the way to go was PCI-X; that is an awesome setup, with matured compatibility. I use PCI 64-bit/66MHz and it's still smoking fast. PCI-X quickly replaced PCI64/66, but I'm doing just fine.

And LSI is the best for this, especially with 64MB of cache or more.

-------------------------
From the year 2002 I have a Tyan Tiger MPX, upgraded to dual Athlon MP 2600+ Barton in 2004, with one 64-bit/66MHz PCI U320 SCSI controller for Windows on a 10K RPM Maxtor Atlas, and a 64-bit/66MHz SATA RAID controller with a 4 x 250GB striped array. AGP 4X with a Radeon X1950 Pro. All on an Antec TruePower 550 power supply in an old Gateway P-120 tower. It's still screaming.
 07/06/2008 09:43 PM
mapesdhs
Junior Member

Posts: 6
Joined: 05/09/2008

TommyB0y writes:
> Nice setup.

Thanks! Or at least it was. All changed now, pretty much as planned.
I'll redo my sig later.

I bought the M2N32 WS Pro mbd (new), 93 UKP + shipping & tax, which
is approx. $238 total.

After researching various 8800GT models, I decided not to buy
the ASUS 8800GT-TOP, as it's already at the top of how fast it can go
due to the cooler it uses. So instead I bought the Gigabyte 8800GT
TurboForce edition with the Zalman cooler, and by heck what a card!
It comes pre-oc'd to a 700MHz core, but I was able to push it to 790. 8)


> I also have SCSI drives, picked up 4 36GB 10K rpm U320 drives for
> $100 a few months ago, ...

That was a good deal!


> I also have a 1TB SATA striped array.

Eventually I'll probably buy a couple of SATAs for longer term
storage, but for the main work I wanted a fast SCSI array; more on
that in a moment.


> My 6 year old computer still keeps up with everything and transfers
> files so fast its insane. ...

I'm not surprised.


> And since you have 15K rpm drives there is no way SATA will touch it.

Certainly makes a difference. Apart from the power-on SCSI BIOS
checks, the XP bootup is fast and the post-login delay before one can
properly do things is very short, only 1 or 2 seconds at most.


> SCSI has such awesome throughput, not just theoretical bandwidth.

It certainly scales well. FC scales even more of course, but that's
another league. Would love to have a high-end SGI with a completely
unnecessary 10GB/sec RAID or something (and that's low-end
compared to what they're capable of). Well, I do actually have a
high-end SGI of sorts, but it's just a low-spec version of what it
can be, so not that interesting yet (Origin300 with 4 CPUs and 8GB
RAM; max config is 64 CPUs, etc.)


> Anyway, I completely agree the way to go was PCI-X, that is an
> awesome setup, and matured compatability. I use PCI64/66 and its
> still smoking fast. PCI-X quickly replaced PCI64-66, but I'm doing
> just fine.

I finished doing the testing of the various cards I had, so here are
the results.

The LSI 320-2E card was certainly the fastest overall, but interestingly
the PCIX card achieved the same speeds for sequential read, random
read and random write. Where the PCIe card did much better was for
buffered read, buffered write and sequential write. However, with
only 4 disks being tested, and not a matched set either, it could be
that both cards could do better, and perhaps more importantly the
PCIe card was running a hw stripe using the SCSI card's BIOS, whereas
the PCIX card was running a Windows Volume (I couldn't work out a way
to stripe across both channels with the PCIX card, which was a
surprise). Perhaps a different model PCIX card would do better if it
supported hw stripes using both channels - I'll see if I can get hold
of a 22320R, maybe that can do it. Anyway, I'll redo the tests at
some point with one of my larger arrays and more drives, preferably
all the same type.

The LSI 20320IE also ran well (single-channel PCIe), good performance.

The LSI 320-1 PCIX card was awful (I mean really woeful, as low as 50
or 60MB/sec sometimes), but I'm sure something must be wrong somewhere
as reviews showed the card getting 140MB/sec or more, though I must
say even that is pretty dire IMO for a card that's supposedly U320.
Discussing on other forums, the opinion was that this model card
never did do very well for RAID 0 or 1 as the original designers
didn't bother supporting it that much. However, I'll look into it
again at some point, see if I can work out why it was running slow.

So, here is the main direct comparison of the dual-channel cards, and
again sorry that one card is using a Windows Volume while the other is
running hw RAID0 via the SCSI BIOS (denoted in the table below as
hw vs. sw RAID). 4 disks were used, each channel having a Maxtor
Atlas 15K II and a SUN-badged Fujitsu MAU3147NC (turned out 1 of
my 3 Fujitsus was going bad, so I had to rejig things around a bit).
Testing was done with SiSoft Sandra. I did try testing with HDTach,
but couldn't work out how to make it test a Windows Volume.

Certainly looks like the newer RAM on the PCIe card is helping a lot
for buffered access, but I think this shows quite well that for an
array using such a small number of drives, the bottleneck for
sequential read (the main operation my tasks will be doing, ie.
reading uncompressed video files) is not the speed of the cards.

Note that for both cards, using 4 disks on 1 channel lowered the
sequential read/write speeds by at least 35%. Thus, using both
channels is definitely better, which is as it should be.

However, after all that, the final choice of what to use for the time
being was decided by more practical matters. It was quickly obvious
that the PCIe card runs very hot - I ended up running the tests with
a 12" desk fan pointed at the open case. Thus, I've decided to keep
the 320-2E PCIe card stored away for use with my production system
which I'll be sorting out next year, most likely a Nehalem, with a
much larger case and at least 6 drives. I did consider getting one of
those mountable directional minifans for cooling the PCIe card, but
that was just more work and I wanted to set up my new system now. So,
I'm saving the PCIe card for later, but the key thing is that it
worked ok and I'm pleased, given it was less than $200.

Thus, in the end I used the LSI 20320IE PCIe for the system disk
(which I changed to a Seagate 147GB 15K - don't have enough Maxtors
now to have one spare for a system disk), and the LSI 21320R for the
4-disk RAID setup. I hope to replace the Fujitsus and the Seagate in
the future with more Maxtors (HDTach confirmed the Atlas 15K II is
easily the fastest of any of these 15Ks, including a Hitachi
147GB/15K I tested) but that'll have to wait until I have more funds.

Oh! One other thing: the 320-2E PCIe card is indeed a Dell PERC 4e/DC
card, as so many of these 2nd-hand 320-2Es are (weird though, no
labeling that gives this away), but I was able to put on the LSI BIOS
just fine which allowed me to install normal LSI drivers. I'll add
the modified drivers for the Dell version to my site soon anyway, but
good to know that reflashing the LSI BIOS does seem to work ok, or at
least it did in my case (others have reported problems doing this,
going back to the Dell BIOS as a last resort). Best of all though,
one of the cards I bought came with a fairly recent version of the
complete LSI MegaRAID driver suite (May 2005) so I'll add an ISO of
this CD to my site when I can, along with all the other drivers and
BIOS images I obtained. Newer versions of drivers might be available
for download from lsi.com, but in some cases drivers for certain
cards are not available from lsi.com - I had to get a few from LSI
China and LSI Japan. I intend to add the whole lot to my site, make
sure there's always somewhere to get them, save people the
frustration of searching lsi.com and getting nowhere. The files are
not up yet, but the URL will be www.sgidepot.co.uk/depot, and I have
two mirror sites.


> SATA RAID controller with 4 250GB striped array. AGP 4X with Radeon
> X1950Pro. All on Antec TruePower 550 power supply in an old Gateway
> P-120 tower. It's still screaming.

My earlier card was an X1950 Pro AGP, which I'd oc'd to 641/786 (note
that I obtained better results than many PCIe-based X1950 systems on
review sites). Love that card; it ran really well. I think you might be
interested to know, though, how I got on with the new 8800GT (same
ref as earlier). As expected, the CPU score is similar, which naturally holds
back the overall score from being quite a bit higher (it would be around
14500 if I were using a Core2Quad).

Anyway, the increase in gfx speed has been considerable. With the
X1950, I'd been playing Oblivion/Stalker at 1600 x 1200 with high
detail settings, though with the odd feature still turned off. With the
8800GT, I've been able to increase this to a whopping 2048 x 1536
with max detail settings and all features on! 8) I was hoping for
1920x1200 with max settings, so I'm pleasantly surprised at the
results. I've not yet rerun my original Oblivion tests, but I expect the
results will be pretty OTT. For reference, the 8800GT cost 102 UKP
+ shipping & tax, which is about $256 total, though the price
has dropped since May, down to 95 UKP + tax now.

Ian.

-------------------------
Centurion Plus 534, Thermaltake 680W
ASUS M2N32 WS Pro PCIe/PCIX, Athlon64 X2 6000+ 3.25GHz (U120E), 4GB DDR2/800
Gigabyte 8800GT Zalman 512MB, 790/1790/980
LSI20320IE 4x PCIe U320 SCSI, LSI 22230R PCIX U320 SCSI
13 x 147GB 15K U320 SCSI, 20X Liteon DVDRW