Mckennma and Slapnut are right: RAID 0 is pretty dangerous except for data you don't mind losing. RAID 5 is not much slower, but it is far more reliable: one HDD can die and your data is still available. If you really want a lot of HDD speed, consider a nested RAID setup like 1+0 (aka RAID 10). That will give you at least as much speed as your proposed RAID 0 setup, but it can also tolerate at least one of your HDDs dying without losing data. The downside to RAID 10 is that you need four identical HDDs and only get the storage capacity of two.
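To make that capacity/redundancy trade-off concrete, here's a small Python sketch (my own illustration, not output from any RAID tool) of usable space and guaranteed fault tolerance for N identical disks:

```python
# Toy calculator: usable capacity and how many disk failures each level is
# *guaranteed* to tolerate, for N identical disks of a given size.
def raid_summary(level: str, disks: int, size_gb: int):
    """Return (usable_gb, failures_tolerated) for a few common levels."""
    if level == "RAID0":   # pure striping: all capacity, zero redundancy
        return disks * size_gb, 0
    if level == "RAID5":   # one disk's worth of capacity goes to parity
        return (disks - 1) * size_gb, 1
    if level == "RAID10":  # striped mirrors: half the capacity
        assert disks % 2 == 0 and disks >= 4
        # Guaranteed minimum is 1; you can survive more if the failures
        # happen to land in different mirror pairs.
        return (disks // 2) * size_gb, 1
    raise ValueError(level)

# Four 500 GB disks:
print(raid_summary("RAID0", 4, 500))   # (2000, 0)
print(raid_summary("RAID5", 4, 500))   # (1500, 1)
print(raid_summary("RAID10", 4, 500))  # (1000, 1)
```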
On another note, if you are thinking about running RAID, you need to consider how you are going to implement it and how you're going to use the computer. There are many different ways to run RAID, and all have pluses and minuses. Here's a brief run-down:
1. Motherboard-based RAID: The HDDs are connected to motherboard SATA ports and the RAID is set up in the BIOS. The host OS sees the array as a single disk and the CPU does the data crunching. Moving the disks to a different motherboard breaks the array. This is what you're proposing.
2. Software-controller-based RAID: The HDDs are connected to a discrete add-in card and the array is set up either in the controller's BIOS hook menu or in software while the OS is running. The array is visible to the OS as a single disk. The computer's CPU again does the data crunching. Moving the card from computer to computer doesn't affect anything, but moving the disks to another controller breaks the array. Cards like HighPoint's RocketRAID series have this functionality. Such cards cost anywhere from $30 for a cheap 2-port card on up to a few hundred for a 16-port PCIe x8 card.
3. Hardware-controller-based RAID: The HDDs are connected to a discrete add-in card and the array is set up either in the controller's BIOS hook menu or in software while the OS is running. The array is visible to the OS as a single disk. A small I/O processor and cache memory on the card do the data crunching. This reduces the load on the host computer's CPU, but unless you get a fairly fast IOP (500 MHz or better) and 1 GB or more of cache, it ends up slower than software-based RAID. Moving the card from computer to computer doesn't affect anything, but moving the disks to another controller breaks the array. These cards start at a few hundred dollars for a 2-port unit and go up to $1000 or more for a 16-port unit. Intel, 3ware, and Areca make these kinds of cards.
4. OS-based RAID: The HDDs are connected to any SATA or IDE ports accessible to the OS. The OS sees the disks or partitions individually and assembles them into an array during the boot process. The OS completely controls the RAID and the CPU does the data crunching. Moving the disks from motherboard to motherboard or controller to controller does not break the array, as the OS on the disks controls the array. This is much better implemented on UNIX-type systems than on Windows. Windows XP has support for software RAID 0 and 1, and Windows Server 2003 can handle RAID 5 as well. Linux handles JBOD and RAID levels 0, 1, 4, 5, 6, and 10.
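As an aside on how RAID 5 gets its one-disk fault tolerance: the parity is just a byte-wise XOR across the data strips, so XOR-ing the survivors with the parity rebuilds any single lost strip. A toy Python sketch (my illustration, nothing you'd run against real disks):

```python
# Toy RAID 5 parity demo: parity strip = XOR of the data strips, and any
# single lost strip can be rebuilt from the survivors plus the parity.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, tup) for tup in zip(*blocks))

data = [b"disk", b"one!", b"two!"]   # three hypothetical data strips
parity = xor_blocks(data)            # what the array writes to the parity strip

# Simulate losing the second strip, then rebuild it from the rest + parity:
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt)  # b'one!'
```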
I'd never recommend #1, as it combines the worst characteristics of all the options: motherboard dependence and no offloading of RAID data crunching from the CPU. #2 is a little better than #1 because you can move the card from computer to computer without problems, so you can upgrade your computer and keep your array and its data. However, with any discrete card-based solution, I'd suggest buying two identical cards in case one dies; if a card dies, your array is dead until you find an identical or nearly identical card to take its place. Software-controller-based solutions work well with a smaller number of HDDs (4-8 or fewer). #3 is a little easier on your host computer, but it's far more expensive than software-controlled cards and slower unless you buy a very nice unit. If you run a big array of 8 disks or more, this is the way I'd go. Larger hardware-based cards also support nested RAID levels like 5+0 right out of the box.
Fully OS-run RAID is the most flexible, as you can set up more than one array on a set of disks. I have a 45GB RAID 0 scratch array as well as a 440GB RAID 5 data partition across my 3 HDDs. With motherboard, software-card, or hardware-card-controlled RAID, you generally must allocate entire disks to one kind of array. However, you can't install your OS on a software RAID partition that's not RAID 1, because the OS must already be loaded before it can assemble an array. So if you want your OS on RAID 5 or RAID 10, you must use a different method of control. My OS sits on a completely separate HDD that's not in any array, so it can load first and then assemble the arrays during boot.
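For what it's worth, here's roughly what that kind of multi-array setup looks like with Linux's mdadm. The device names are assumptions on my part, and these commands need root on real, already-partitioned disks:

```shell
# Hypothetical sketch: /dev/sd[bcd]1 and /dev/sd[bcd]2 are assumed to be
# two partitions on each of three disks.

# A RAID 0 scratch array across the first partition of each disk:
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# A RAID 5 data array across the second partition of each disk:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Record the arrays so the OS can reassemble them at boot
# (config file path varies by distribution):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```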
I would suggest that unless you want to run Linux, you go buy a third or fourth HDD and two identical 4- or 8-port software-based PCIe x4 controllers like the HighPoint RocketRAID 2310 or 2320. With three or four HDDs, this will provide good performance at a price of $400 to $700: $100 each for the HDDs and $150-250 each for the controllers. If you want hardware-based RAID instead, add $200 to $400 to that cost. A good RAID is wonderful to have, but it's not cheap no matter how you do it. I run my array using a software RAID card simply as a "dumb" SATA adapter (i.e. running in "no RAID" mode) on a fast PCIe x4 bus, as my motherboard's integrated SATA ports can't handle the traffic. It cost me $140 for the controller and $240 for my 3 HDDs, $380 in all. Because the OS controls the RAID, I can use any SATA ports, so I only needed one card. Your cost will vary, but you will see a small to decent uptick in performance and a great increase in the reliability of your data.
Which brings me to another thing with RAID: Using RAID does NOT mean that you do not have to back up your data!
You still have to follow proper backup procedures; RAID just means that you might not have to use your backups AS OFTEN.