AMD Processors
Topic Title: SB700 RAID Linux drivers
Topic Summary: Has anyone seen it?
Created On: 06/12/2008 04:02 PM
Status: Read Only
 06/12/2008 04:02 PM
zek
Junior Member

Posts: 1
Joined: 06/12/2008

Hi,

I'm looking for Linux RAID drivers for the SB700 southbridge chipset. Has anyone seen any?
 06/13/2008 12:15 PM
vsingh
Voodoo Programmer

Posts: 3919
Joined: 10/15/2005

Linux should pick up your RAID controller automatically; otherwise you may need to load a specific kernel module. Is Linux not recognizing your RAID controller?
 06/17/2008 11:55 PM
MU_Engineer
Dr. Mu

Posts: 1837
Joined: 08/26/2006

Are you talking about BIOS RAID or the chipset's SATA ports? BIOS-based RAID is usually pretty sketchy under Linux, but the SATA ports should just use the ahci module (IIRC). I'd say forget the BIOS-based RAID and go with md RAID set up in Linux. It is a ton faster too; trust me, I have used both and stick with md.

 06/23/2008 09:37 PM
Kab
Senior Member

Posts: 1349
Joined: 02/03/2007

There is no extra driver needed for Linux environments.
 06/24/2008 03:37 AM
Overmind
Assimilator

Posts: 8052
Joined: 01/22/2004

Originally posted by: MU_Engineer

Are you talking about BIOS RAID or the chipset's SATA ports? BIOS-based RAID is usually pretty sketchy under Linux, but the SATA ports should just use the ahci module (IIRC). I'd say forget the BIOS-based RAID and go with md RAID set up in Linux. It is a ton faster too; trust me, I have used both and stick with md.


Can you go into a bit more detail? I'm interested in this.
Why would a software RAID be faster than a hardware one?
That seems possible only if some super-caching system is used in software mode.
What about stability issues and RAID management?

-------------------------

World's best Red Alert 2: Yuri's Revenge mod and Star Trek: Starfleet Command 3 mod: Overmind.ro
 06/25/2008 03:19 PM
vsingh
Voodoo Programmer

Posts: 3919
Joined: 10/15/2005

@Overmind: The md software RAID would be faster in this case because Linux doesn't fare too well, performance- or support-wise, with BIOS-based RAID. Here is a pretty comprehensive guide to the Linux md RAID driver:

http://tldp.org/HOWTO/Software-RAID-0.4x-HOWTO.html
 06/25/2008 11:22 PM
MU_Engineer
Dr. Mu

Posts: 1837
Joined: 08/26/2006

Originally posted by: Overmind

Can you go into a bit more detail?

I'm interested in this.

Why would a software RAID be faster than a hardware one?


There are several reasons:

1. A real hardware RAID controller, like a several-hundred-dollar 3ware, Areca, or Intel card, has a little ARM I/O processor in the 300-800 MHz range computing the XOR calculations needed for running RAID 3, 4, 5, 6, or any nested RAID level that includes those types (50, 53, etc.). Software-based RAID on an anywhere-near-modern system has a multi-GHz, SIMD-capable multicore x86 processor available to do the calculations. You do the math as to which one is faster. I'll give you a hint: my two-and-a-half-year-old Athlon 64 X2 4200+ can XOR 7077 MB/sec worth of data on a single core under md, according to what my dmesg says (see the check just after this list).

2. The available bandwidth to the CPU is much higher than the bandwidth to any onboard I/O processor, and the latency is probably lower as well. This helps speed up XOR calculations for distributed-parity RAID levels.

3. Probably the biggest reason for extra speed is that you can use disks attached anywhere in an md array, whereas all the disks have to be on one card when hardware-based RAID is used. This means you can span an md RAID across multiple PCIe disk controllers, massively increasing the aggregate bandwidth available to the array as a whole.

4. Motherboard-based RAID is software RAID as well, since there is no controller chip on board to do the I/O calculations. The calculations are handled by the CPU just as they are in md RAID; the OS just sees the array at bootup rather than seeing the individual drives first and assembling the array later in the boot sequence.

That seems possible only if some super-caching system is used in software mode.


Nothing like that exists specifically for Linux RAID. The OS caches some things read from disk into RAM, but it does that regardless of whether there is a RAID or not. However, high-end hardware RAID cards do carry DDR or DDR2 memory modules used as write-back caches, which can speed up small writes and cached reads. I think you may be able to make use of that memory in either md or hardware RAID, though.

What about stability issues and RAID management?


I have run md RAID 5 for years and it has been rock-solid. The only way I can think of fouling things up in md but not in hardware RAID is if you have an overclocked CPU while a bus lock keeps the hardware RAID card at stock speed. But if you run any parity-containing RAID level, you are likely not overclocking anyway, so this is moot.

Management of RAID under md is MUCH better than with a hardware card. In fact, RAID management is the biggest advantage of md over hardware RAID. You are not tied to one specific RAID controller card: if your HW RAID card dies, you need to get an identical or at least similar one, otherwise you cannot read the array. With md, the array will work as long as you have enough ports to connect the HDDs. You can also easily add drives to a RAID 5 (perhaps 4 or 6 as well; I have not tried) while the array is active.

 07/01/2008 04:56 PM
Snake24
Junior Member

Posts: 1
Joined: 07/01/2008

This RAID stack supports the SB700:

http://www.ciprico.com/solutions-vst.html
 07/02/2008 01:16 AM
Overmind
Assimilator

Posts: 8052
Joined: 01/22/2004

Thanks for the info, MU_Engineer.
One more thing: if current motherboard RAIDs are software RAIDs, wouldn't that mean they are faster than a dedicated controller?

-------------------------

World's best Red Alert 2: Yuri's Revenge mod and Star Trek: Starfleet Command 3 mod: Overmind.ro
 07/02/2008 09:59 AM
MU_Engineer
Dr. Mu

Posts: 1837
Joined: 08/26/2006

It all depends on the controller and drivers. Many motherboard-based controllers, such as the NVIDIA nForce SATA controllers, cannot handle very much I/O due to architectural or bandwidth limitations, whereas most PCIe/PCI-X (NOT plain PCI) discrete cards have plenty of bandwidth available to them. So YMMV.

However, you do NOT have to run the discrete controller's RAID setup to get an array; you can use the discrete card simply as faster SATA ports and still use md RAID. I have such a setup with a HighPoint RR2310 4-port SATA-300 controller on a PCIe x4 bus. I ignore HighPoint's RAID setup, load sata_mv, and then see my three HDDs individually at boot. Then md puts them into an array, giving me the best possible setup.

 02/21/2010 06:44 AM
tomaszg
Newbie

Posts: 1
Joined: 02/21/2010

Please let me refresh this thread.
I've successfully set up about twenty Debian/Ubuntu-based servers with mdadm-based RAID, at levels from 0 to 10, and they have always been better than any software RAID on Windows.
Now I have to set up a virtual server on Ubuntu 9.10 64-bit.
I have the following motherboard with the SB700:
http://www.gigabyte.co...Pro.....ductID=3063

I have four 250 GB Samsung drives, and I tried to configure them as RAID (the RAID setting in the BIOS). The controller is detected by Ubuntu's kernel, but even when I create a logical volume in the BIOS, it is not visible in the Ubuntu installer and I am unable to create partitions on it. I can only bring up iSCSI from the install menu. Syslog says something about being unable to find a valid logical volume.
So once again I used the-best-software-ever, called mdadm, to create two RAID 10 volumes while installing Ubuntu (ext3 for /boot, ext4 for /, with swap separate).
Now it looks like it is synchronizing the drives over and over (the HDD LED blinks) and the install program is stuck on "Downloading file 1 of 5...", so I guess it is trying to install packages.
(after 1.5 h)
OK, now it is installing GRUB. I will report results if anyone is interested.
EDIT:
Correct. /proc/mdstat is reporting the resync process at 118 MB/s, with about 20% still to complete.
Why does mdadm always synchronize volumes even when they are new/empty?