Posted: 1/26/2011 9:52:35 PM EDT
Putting together a new computer, and I decided to go with RAID 0. Is there anything specific I should look for in the HDDs? I read that I have to get an HDD with TLER. Is that true?
Link Posted: 1/26/2011 10:03:15 PM EDT
TLER might be a big deal some day, right now it's just WD hype.

I just replaced my SCSI drive RAID with SATA. I went with notebook drives to save space as well. Not quite as fast, plenty more space, and very very quiet.

Four of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16822148374

One of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16817998143

And I'm all set.

I created two LDs in the onboard raid controller; 30GB RAID-10 for the windows install and "important" stuff, the rest to a RAID-0 for the "fast" stuff. Here's my HDTune of the RAID-0.



Not trying to convince, just what I did. Can't say much about it since it's only been up and running for an hour or two.
Link Posted: 1/26/2011 10:07:54 PM EDT
15K SAS drives and a good raid controller.
Link Posted: 1/26/2011 10:10:51 PM EDT
SSD
Link Posted: 1/26/2011 10:12:02 PM EDT
Anything outside of high-dollar business-class stuff is a crap shoot. Either way, if you have a RAID 0 setup, you just shouldn't have important data on it.
Link Posted: 1/26/2011 10:14:04 PM EDT

Originally Posted By allenNH:
TLER might be a big deal some day, right now it's just WD hype.

I just replaced my SCSI drive RAID with SATA. I went with notebook drives to save space as well. Not quite as fast, plenty more space, and very very quiet.

Four of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16822148374

One of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16817998143

And I'm all set.

I created two LDs in the onboard raid controller; 30GB RAID-10 for the windows install and "important" stuff, the rest to a RAID-0 for the "fast" stuff. Here's my HDTune of the RAID-0.

http://gallery2.pingslave.com/d/4754-1/HDTune_Benchmark_AMD_____4_0_Stripe_RAID0.png

Not trying to convince, just what I did. Can't say much about it since it's only been up and running for an hour or two.

When I see stuff like this I realize just how far behind the curve I am.

I recently bought a 4-bay D-Link NAS box and an armload of 500G drives. After reading up on stuff and cogitating a while I just set it up as a JBOD - if a drive conks out some day, no big deal, everything is duplicated elsewhere in addition to the NAS. Every time I go to a computer forum I see lots of "Having trouble with my RAID....." etc etc etc.


Link Posted: 1/26/2011 10:17:58 PM EDT
Originally Posted By Dumpster_Baby:

Originally Posted By allenNH:
TLER might be a big deal some day, right now it's just WD hype.

I just replaced my SCSI drive RAID with SATA. I went with notebook drives to save space as well. Not quite as fast, plenty more space, and very very quiet.

Four of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16822148374

One of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16817998143

And I'm all set.

I created two LDs in the onboard raid controller; 30GB RAID-10 for the windows install and "important" stuff, the rest to a RAID-0 for the "fast" stuff. Here's my HDTune of the RAID-0.

http://gallery2.pingslave.com/d/4754-1/HDTune_Benchmark_AMD_____4_0_Stripe_RAID0.png

Not trying to convince, just what I did. Can't say much about it since it's only been up and running for an hour or two.

When I see stuff like this I realize just how far behind the curve I am.

I recently bought a 4-bay D-Link NAS box and an armload of 500G drives. After reading up on stuff and cogitating a while I just set it up as a JBOD - if a drive conks out some day, no big deal, everything is duplicated elsewhere in addition to the NAS. Every time I go to a computer forum I see lots of "Having trouble with my RAID....." etc etc etc.




Well I haven't had any trouble, but this is only the second SATA (or ATA, or IDE) RAID I've ever built or used; I was, and still somewhat am, a "SCSI bigot." The first one was in this same machine and was just two 320G drives in RAID-1.

Just don't put anything important on a RAID-0. Since most onboard controllers can only do 0, 1, and 10/1+0, use 1 or 10. If you buy a better controller some day, go with 5 or 50.
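To make the trade-offs between those levels concrete for a four-drive box like the ones in this thread, here's a rough sketch (an illustrative helper, not anyone's actual tooling; "survives_any" is the guaranteed worst-case failure tolerance):

```python
def raid_summary(level: str, n_drives: int, drive_gb: int) -> dict:
    """Usable capacity and guaranteed (worst-case) drive-failure tolerance
    for common RAID levels, assuming equal-size drives."""
    if level == "0":            # striping only
        usable, tolerates = n_drives * drive_gb, 0
    elif level == "1":          # all drives mirror each other
        usable, tolerates = drive_gb, n_drives - 1
    elif level == "10":         # stripe across mirrored pairs
        if n_drives % 2:
            raise ValueError("RAID 10 needs an even drive count")
        # Worst case is 1: losing both halves of one mirror kills the array.
        usable, tolerates = (n_drives // 2) * drive_gb, 1
    elif level == "5":          # striping with distributed parity
        usable, tolerates = (n_drives - 1) * drive_gb, 1
    else:
        raise ValueError(f"unsupported level: {level}")
    return {"usable_gb": usable, "survives_any": tolerates}

# Four 500 GB drives, as in the 4x500 setups mentioned in this thread:
for lvl in ("0", "10", "5"):
    print(lvl, raid_summary(lvl, 4, 500))
```

RAID 10 can often survive more than one failure (one drive per mirror), but only one is guaranteed.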
Link Posted: 1/26/2011 10:19:48 PM EDT
[Last Edit: 1/26/2011 10:21:04 PM EDT by Choncer]

Originally Posted By hoosier122:
SSD


Even with DMA you're already outpacing what the rest of the system can use, and you're paying a shit ton of money if you want REAL storage.


OP, i'd say grab some 15Krpm raptor drives and watch that baby time travel.



ETA: Forgot to click "quote". I should quit drinking for the night.
Link Posted: 1/26/2011 10:28:40 PM EDT
[Last Edit: 1/26/2011 10:31:42 PM EDT by DedoBOT]
RAID 0? I would lose sleep.
To the OP's question: all the storage devices I've purchased lately (Xserve, G-Tech 16TB, LaCie 12TB) come configured with regular Hitachi drives. So get a Hitachi, and archive/back up often.
Link Posted: 1/26/2011 10:30:32 PM EDT
[Last Edit: 1/26/2011 10:31:07 PM EDT by wingnutx]
Fastest RPM you can get in the price range you want.

Why RAID-0?
Link Posted: 1/26/2011 10:48:53 PM EDT
[Last Edit: 3/24/2011 5:34:46 PM EDT by TZLVredmist]
Crucial c300 SSD!
Link Posted: 1/27/2011 6:11:23 AM EDT
Originally Posted By wingnutx:
Fastest RPM you can get in the price range you want.

Why RAID-0?


Because I want the fastest RAID I can have. I'll be doing incremental backups to a hard drive, so I don't have to worry about redundancy. Although putting Windows on a RAID 10 and everything else on a RAID 0 sounds interesting...
Link Posted: 1/27/2011 8:18:58 AM EDT
Originally Posted By AgentDavis:
Originally Posted By wingnutx:
Fastest RPM you can get in the price range you want.

Why RAID-0?


Because I want the fastest RAID I can have. I'll be doing incremental backups to a hard drive, so i don't have to worry about redundancy. Although putting windows on a RAID 10 and everything else on a RAID 0 sounds interesting...


On controllers that support it I would normally do raid 5 or 50, but this one doesn't, so I'm using 10.

I keep the drive small and only install "critical" stuff there that I can't afford to be without since my computer pays my bills. The less important stuff gets installed to D:; e.g. when I install a game, I change the install directory from "C:\program files\...." to "D:\program files\..."

If I'm being really anal retentive I'll set up three logical disks instead of two, the first as a redundant RAID, and the second and third as RAID-0.

The first LD (c:) is 10-50GB and gets windows and critical programs as above.

The second LD (d:) is 10-20GB and gets a temp directory and the swap file. I change the windows system and user temp and tmp directories to subdirectories under d:\temp; e.g. d:\temp\system\temp, d:\temp\system\tmp, d:\temp\user\temp, and d:\temp\user\tmp.

The third LD (e:) gets the rest of the space and is used for installed programs, downloads, etc.

What this means (for me) is that if a drive fails, I can still boot and do my job. Once the failed drive is replaced, I can rebuild the destroyed RAID-0 arrays and recover programs from backup –– they don't have to be reinstalled since I routinely back up the drives, and the registry and start menu data are preserved as they're on the RAID-10 LD.

Of course, since I adopted this system, I haven't had a drive fail. They only ever fail when I'm living dangerously with only RAID-0.
Link Posted: 1/27/2011 8:21:15 AM EDT
Originally Posted By Choncer:
OP, i'd say grab some 15Krpm raptor drives and watch that baby time travel.


+1. I used Seagates for my mirror, as the WDs were sold out at the time. They work, but are meh; occasionally they have problems, and then the array goes into rebuilding mode.

Link Posted: 1/27/2011 8:23:14 AM EDT
Why not SSD?
Link Posted: 1/27/2011 8:28:53 AM EDT

Originally Posted By Khemist:
Why not SSD?

Cost

Any drives that aren't "eco" / "green power" / "green" will be fine. Avoid eco drives!!
Link Posted: 1/27/2011 8:30:03 AM EDT
Stripe 4 drives.

Profit!

(Beware of crashing. )

Link Posted: 1/27/2011 8:51:18 AM EDT
Originally Posted By DarkCharisma:
Stripe 4 drives.

Profit!

(Beware of crashing. )



I plan on doing that eventually, Unless SSD gets cheap before I can afford it.
Link Posted: 1/27/2011 8:58:13 AM EDT
Originally Posted By JohnMikerson:

Originally Posted By Khemist:
Why not SSD?

Cost

any drives that arent "eco" "green power" "green" will be fine avoid eco drives!!


+10

I have a "green" external drive. While it is pretty fast while running (eSATA), there's usually a 10-second spin-up delay. Sucks ass.
Link Posted: 1/27/2011 8:58:15 AM EDT

Originally Posted By AgentDavis:
Putting together a new computer and I decided to go with a RAID 0. Is there anything specific I should look for in the HDDs? I read that I have to get a HDD with TLER, Is that true?

The fastest of the supported drives listed by the raid card manufacturer.
Link Posted: 1/27/2011 9:30:29 AM EDT
Originally Posted By AgentDavis:
Because I want the fastest RAID I can have. I'll be doing incremental backups to a hard drive, so i don't have to worry about redundancy. Although putting windows on a RAID 10 and everything else on a RAID 0 sounds interesting...


RAID 0 is fine as long as you know what you're getting into. Mainly, that the zero stands for "zero redundancy." You seem to realize that, so no problem there.

Are the built-in, firmware RAID controllers actually good for running RAID 0? I experimented using mine (Intel ICH-9R) for a RAID-1 setup, but I was not impressed.

First, the Intel Matrix software required to use the RAID features made Windows XP unresponsive for the first minute or so after booting. That remained true even after I dissolved the RAID volume & began using the disks in "Just a Bunch of Disks" mode. To get rid of the Matrix software and the startup delay, I eventually had to reinstall Windows XP.

Maybe they've mitigated that issue with Windows 7. Don't know; haven't tried 7 w/RAID 1.

Second, the drivers required to run a RAID-1 were Windows-only. So I couldn't run Ubuntu off a RAID. I had to install Ubuntu on a third hard drive, and avoid accessing the disks that made up the Windows RAID-1 volume (else it would have to "rebuild" on the next boot, which could take an hour or more).

Neither of those things would be true with a real, hardware-based RAID expansion card, but boy do they cost. $200-$300 the last time I looked, and that was only what I could find on Newegg. IT professionals probably have their top-pick RAID cards that cost even more.

Maybe the built-in controllers are good for running RAID-0 to get a speed boost. I haven't felt like reinstalling everything just to try it, and all my stuff will no longer fit on one 1 TB drive.


Link Posted: 1/27/2011 10:54:46 AM EDT
Originally Posted By Khemist:
Why not SSD?


There's no reason to use them outside of environments where you really need to not have moving parts.

Any RAID, SSD or not, can easily saturate the bus and achieve overall times and speeds just as good as a single SSD. When you do it with mechanical drives, it's a lot cheaper for the space.
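The "saturate the bus" argument rests on sequential throughput scaling close to linearly with drive count. A back-of-the-envelope sketch (the 0.9 overhead factor and the drive speeds are assumptions, not measurements):

```python
def striped_throughput(per_drive_mb_s: float, n_drives: int,
                       overhead: float = 0.9) -> float:
    """Estimated sequential throughput of a RAID 0 stripe, assuming
    near-linear scaling minus a controller-overhead factor."""
    return per_drive_mb_s * n_drives * overhead

# Four ~60 MB/s notebook drives together land in the same ballpark
# as a single SATA II era SSD's sequential reads:
print(striped_throughput(60, 4))
```

Random I/O is a different story, as discussed later in the thread; striping helps sequential transfers far more than it helps seeks.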

Link Posted: 1/27/2011 10:56:07 AM EDT
Originally Posted By Objekt:
Originally Posted By AgentDavis:
Because I want the fastest RAID I can have. I'll be doing incremental backups to a hard drive, so i don't have to worry about redundancy. Although putting windows on a RAID 10 and everything else on a RAID 0 sounds interesting...


RAID 0 is fine as long as you know what you're getting into. Mainly, that the zero stands for "zero redundancy." You seem to realize that, so no problem there.

Are the built-in, firmware RAID controllers actually good for running RAID 0? I experimented using mine (Intel ICH-9R) for a RAID-1 setup, but I was not impressed.

First, the Intel Matrix software required to use the RAID features made Windows XP unresponsive for the first minute or so after booting. That remained true even after I dissolved the RAID volume & began using the disks in "Just a Bunch of Disks" mode. To get rid of the Matrix software and the startup delay, I eventually had to reinstall Windows XP.

Maybe they've mitigated that issue with Windows 7. Don't know; haven't tried 7 w/RAID 1.

Second, the drivers required to run a RAID-1 were Windows-only. So I couldn't run Ubuntu off a RAID. I had to install Ubuntu on a third hard drive, and avoid accessing the disks that made up the Windows RAID-1 volume (else it would have to "rebuild" on the next boot, which could take an hour or more).

Neither of those things would be true with a real, hardware-based RAID expansion card, but boy do they cost. $200-$300 the last time I looked, and that was only what I could find on Newegg. IT professionals probably have their top-pick RAID cards that cost even more.

Maybe the built-in controllers are good for running RAID-0 to get a speed boost. I haven't felt like reinstalling everything just to try it, and all my stuff will no longer fit on one 1 TB drive.




What happens if my drives get out of sync?
Link Posted: 1/27/2011 11:02:08 AM EDT
Originally Posted By Objekt:
Originally Posted By AgentDavis:
Because I want the fastest RAID I can have. I'll be doing incremental backups to a hard drive, so i don't have to worry about redundancy. Although putting windows on a RAID 10 and everything else on a RAID 0 sounds interesting...


RAID 0 is fine as long as you know what you're getting into. Mainly, that the zero stands for "zero redundancy." You seem to realize that, so no problem there.

Are the built-in, firmware RAID controllers actually good for running RAID 0? I experimented using mine (Intel ICH-9R) for a RAID-1 setup, but I was not impressed.

First, the Intel Matrix software required to use the RAID features made Windows XP unresponsive for the first minute or so after booting. That remained true even after I dissolved the RAID volume & began using the disks in "Just a Bunch of Disks" mode. To get rid of the Matrix software and the startup delay, I eventually had to reinstall Windows XP.

Maybe they've mitigated that issue with Windows 7. Don't know; haven't tried 7 w/RAID 1.

Second, the drivers required to run a RAID-1 were Windows-only. So I couldn't run Ubuntu off a RAID. I had to install Ubuntu on a third hard drive, and avoid accessing the disks that made up the Windows RAID-1 volume (else it would have to "rebuild" on the next boot, which could take an hour or more).

Neither of those things would be true with a real, hardware-based RAID expansion card, but boy do they cost. $200-$300 the last time I looked, and that was only what I could find on Newegg. IT professionals probably have their top-pick RAID cards that cost even more.

Maybe the built-in controllers are good for running RAID-0 to get a speed boost. I haven't felt like reinstalling everything just to try it, and all my stuff will no longer fit on one 1 TB drive.




Sounds like you just had a 'crummy' raid controller. There were a lot of them out there doing some (or all) of the raid levels in software. Some did it well, but most did it very poorly. That said, the built-in windows RAID-0 striping will give you better performance than any RAID controller out there simply due to the speed of your main CPU vs. the embedded CPU on the raid controller.

I'm an IT professional, and my top pick RAID controller companies are both out of business, and these days you're left with (lesser performing) choices of intel, Adaptec, and Dell PERC (rebranded Adaptec). I personally preferred ICP Vortex in the enterprise, and Mylex for small companies / home office.

ICH9R was a software raid controller, not even a "firmware" raid controller as you stated (which all the real raid controllers are). It's no wonder you had poor performance, driver issues, etc. The non-software controllers don't even need drivers to run, they appear to the host as a normal drive –– RAID operation is transparent.

I'm quite satisfied with mine so far, onboard AMD SB710. I just wish it supported more levels, especially 5 and 50, so I could run everything redundant without wasting all that space. As it is, I'm running a 30GB RAID-10 and the rest of the 500x4 as RAID-0.
Link Posted: 1/27/2011 11:11:04 AM EDT
Redundant array of INEXPENSIVE drives. RAID.


IMO, RAID 0 just increases your chances of a hardware failure by 100% for a modest gain in performance.
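The "100%" figure is about right for two independent drives; in general a stripe is lost if any member fails. A quick sketch (the 5% per-drive annual failure rate is illustrative, not a real drive spec):

```python
def stripe_failure_prob(p_drive: float, n_drives: int) -> float:
    """Probability a RAID 0 loses data over some period, given each of
    n independent drives fails with probability p_drive in that period."""
    return 1 - (1 - p_drive) ** n_drives

# With an illustrative 5% per-drive annual failure rate:
for n in (1, 2, 4):
    # 2 drives comes out just under double the single-drive risk;
    # 4 drives is nearly quadruple.
    print(n, round(stripe_failure_prob(0.05, n), 4))
```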

I had 2x Seagate Barracuda 7200.10 500GB drives in RAID0 and it worked fine for 3 years, but then it pooped out on me. Fortunately, it was my OS drive and not my data drive, which is RAID 1 (the only worthwhile 2-drive RAID IMO).

So if SSDs are out of range for you, and you want some performance increase, I suggest the WD VelociRaptor 10,000 RPM drives. Best compromise ever. [I think NewEgg stocks recertified ones for about 50% off MSRP]
Link Posted: 1/27/2011 4:45:34 PM EDT
[Last Edit: 3/24/2011 5:36:15 PM EDT by TZLVredmist]
Originally Posted By allenNH:
Originally Posted By Khemist:
Why not SSD?


There's no reason to use them outside of environments where you really need to not have moving parts.

Any RAID, SSD or not, can easily saturate the bus and achieve overall times and speeds just as good as a single SSD. When you do it with mechanical drives, it's a lot cheaper for the space.



My current Crucial SSD RAPES my old VelociRaptor RAID-0 setup. It's so much faster it's not even funny. I am going to have to call bullshit on the above. Sorry...
Link Posted: 1/28/2011 6:33:00 AM EDT
Originally Posted By TZLVredmist:
Originally Posted By allenNH:
Originally Posted By Khemist:
Why not SSD?


There's no reason to use them outside of environments where you really need to not have moving parts.

Any RAID, SSD or not, can easily saturate the bus and achieve overall times and speeds just as good as a single SSD. When you do it with mechanical drives, it's a lot cheaper for the space.



My current Micron SSD Drive RAPES my old Velociraptor RAID-0 setup. not even funny it's so much faster. i am going to have to call bullshit on the above. Sorry...


You can call bullshit, but that doesn't make it bullshit. You just didn't have enough drives in your RAID.
Link Posted: 1/28/2011 9:55:24 AM EDT
Originally Posted By allenNH:
Sounds like you just had a 'crummy' raid controller. There were a lot of them out there doing some (or all) of the raid levels in software. Some did it well, but most did it very poorly. That said, the built-in windows RAID-0 striping will give you better performance than any RAID controller out there simply due to the speed of your main CPU vs. the embedded CPU on the raid controller.


Yes, and I'm not sure the other software-dependent fake RAIDs you used to see a lot - Silicon Image 3112, 3114, 3116 for example - are any better. I have a PCI "RAID" card with a SI 3114 that I use solely to provide an extra eSATA hookup. There are plenty of these cheap fake-RAID cards available, but I almost feel sorry for anyone who tries to actually use them to run a RAID.

Originally Posted By allenNH:I'm an IT professional, and my top pick RAID controller companies are both out of business, and these days you're left with (lesser performing) choices of intel, Adaptec, and Dell PERC (rebranded Adaptec). I personally preferred ICP Vortex in the enterprise, and Mylex for small companies / home office.

ICH9R was a software raid controller, not even a "firmware" raid controller as you stated (which all the real raid controllers are). It's no wonder you had poor performance, driver issues, etc. The non-software controllers don't even need drivers to run, they appear to the host as a normal drive –– RAID operation is transparent.


Perhaps the best term I've seen for the ICH9R and similar crapola (Silicon Image 3112, 3114 as noted above) is "fake RAID." All the promises of a decent hardware RAID card, with little to no delivery.

Originally Posted By allenNH:
I'm quite satisfied with mine so far, onboard AMD SB710. I just wish it supported more levels, especially 5 and 50, so I could run everything redundant without wasting all that space. As it is, I'm running a 30GB RAID-10 and the rest of the 500x4 as RAID-0.


Nice to know there are some onboard/fake RAIDs that aren't a complete joke. I was thinking of going AMD for my next build anyway.

What kinds of read speeds do you see from your 4-disk RAID 0? I have some pretty nice, fast HDDs (Samsung Spinpoint F1's) that will read/write as fast as ~100 MB/s, with 60-70 MB/s being more typical. Putting them in a RAID 0 would probably help load times when dealing with large files. And at less cost than going SSD.
Link Posted: 1/28/2011 9:59:08 AM EDT
If you're going for cheap drives you might as well go for a nested RAID 0+1 setup. That way you have backup when your RAID1 array fails.
Link Posted: 1/28/2011 10:15:44 AM EDT
Originally Posted By Fatbert:
If you're going for cheap drives you might as well go for a nested RAID 0+1 setup. That way you have backup when your RAID1 array fails.


This. I can't justify the expense of a real, hardware RAID controller for RAID 1 or above, so 0+1 is likely what I'll do the next time I build a system.

You get the redundancy of RAID 1 with some of the speed boost of RAID 0.

Unfortunately, I don't think Windows 7 can do RAID 0+1 natively; it's either RAID 0 or RAID 1. Is there a third-party app to do it in software?
Link Posted: 1/28/2011 10:27:06 AM EDT
I drill hard drives for the DoD. I can tell you that the toughest drives out there are the Seagate Cheetah drives.
Damn things are hard to kill. YMMV
Link Posted: 1/28/2011 10:42:45 AM EDT
Originally Posted By Objekt:

What kinds of read speeds do you see from your 4-disk RAID 0? I have some pretty nice, fast HDDs (Samsung Spinpoint F1's) that will read/write as fast as ~100 MB/s, with 60-70 MB/s being more typical. Putting them in a RAID 0 would probably help load times when dealing with large files. And at less cost than going SSD.


I have an HDTune (read only) benchmark up at the top of the thread. Average read is just over 200MB/s, latency is fine as well. If there are other benchmarks you'd like me to run, just name 'em and I'll do my best so long as they're free or have a trial.

This speed is about on par with most standalone SSDs, from what benchmarks I've seen. If I need it any faster, I'll get a real controller with 6 or 8 channels and add more disks.
Link Posted: 1/28/2011 10:48:24 AM EDT
Originally Posted By Objekt:
Originally Posted By Fatbert:
If you're going for cheap drives you might as well go for a nested RAID 0+1 setup. That way you have backup when your RAID1 array fails.


This. I can't justify the expense of a real, hardware RAID controller for RAID 1 or above, so 0+1 is likely what I'll do the next time I build a system.

You get the redundancy of RAID 1 with some of the speed boost of RAID 0.

Unfortunately, I don't think Windows 7 can do RAID 0+1 natively, it's either RAID 0 or RAID 1. Is there a third-party app to do it in software?


I want to clarify something here. Make sure (as sure as you can) that you are running RAID-10 (aka 1+0) and not RAID-0+1. They seem the same on the surface: performance is the same, as is the space used/available. The major difference is in the worst-case scenario. A RAID-0+1 can only survive the failure of a single disk, while fully half of the drives in a RAID-1+0/10 can fail before the array goes offline.

The order of the numbers is the order that the LDs are stacked.

In 0+1, you're making two RAID-0 disks and then mirroring them with RAID-1. If one disk fails, that entire RAID-0 array is out of the mix, and you're running on only half of your disks.

In 1+0, you're making 'n' RAID-1 disks, and then striping them together into a RAID-0. If one disk fails, only that single RAID-1 LD is in a degraded state, and redundancy is maintained in all of the other RAID-1 LDs –– a disk in each one of them could fail and you'd still be online.

The rule is when you're combining RAID-0 with anything else, you want the 0 to be the last number in the series –– never the first.
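The difference described above can be checked by enumerating every two-disk failure on a four-drive array (a sketch; the drive numbering and pairings are hypothetical):

```python
from itertools import combinations

DRIVES = {0, 1, 2, 3}   # hypothetical four-drive array

def survives_1plus0(failed: set) -> bool:
    """RAID 1+0: mirrors (0,1) and (2,3), striped together.
    Alive as long as every mirror still has one good drive."""
    return not ({0, 1} <= failed) and not ({2, 3} <= failed)

def survives_0plus1(failed: set) -> bool:
    """RAID 0+1: stripes (0,1) and (2,3), mirrored.
    Alive only if at least one stripe is completely intact."""
    return not (failed & {0, 1}) or not (failed & {2, 3})

pairs = [set(p) for p in combinations(DRIVES, 2)]
print("1+0 survives", sum(map(survives_1plus0, pairs)), "of", len(pairs))
print("0+1 survives", sum(map(survives_0plus1, pairs)), "of", len(pairs))
```

Both layouts survive any single failure, but 1+0 survives 4 of the 6 possible two-disk failures while 0+1 survives only 2.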
Link Posted: 1/28/2011 10:59:45 AM EDT
Tag

My RAID 5 array has been rebuilding for over 3 days now, and it says it has one day left.

I think I'm gonna pull all of the pertinent data off of it and change it to a RAID 10, this is kind of ridiculous, and I don't need the little extra speed, or the extra space.

That Thermaltake enclosure is awesome, it matches my Thermaltake case. Too bad I'm out of SATA ports.

Link Posted: 1/28/2011 11:07:46 AM EDT
Originally Posted By Izzman:
Tag

My RAID 5 array has been rebuilding for over 3 days now, and it says it has one day left.

I think I'm gonna pull all of the pertinent data off of it and change it to a RAID 10, this is kind of ridiculous, and I don't need the little extra speed, or the extra space.

That Thermaltake enclosure is awesome, it matches my Thermaltake case. Too bad I'm out of SATA ports.



Yeah, rebuild times on RAID-5 can be a real pain in the neck, especially if you're using the system at the same time and doing a background rebuild. Rebuilding a failed mirror drive in RAID-1 is a lot faster.

The enclosure is great, and they also have a 6 drive one. I didn't get that one because I too am out of SATA ports now. My motherboard only has 5, so 4 are being used for the enclosure and one is for a standalone 500G drive. I'm going to be moving all the stuff off that drive and onto the RAID-0 though, to free that port up for a SATA blu-ray burner.
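As a rough sanity check on those multi-day rebuilds: a background rebuild on a busy system often crawls along at a small fraction of the drives' raw speed, and the hours add up fast (the rates below are illustrative assumptions):

```python
def rebuild_hours(array_gb: float, rebuild_mb_s: float) -> float:
    """Hours to rebuild an array at a given sustained rebuild rate."""
    return array_gb * 1000 / rebuild_mb_s / 3600

# A 2TB RAID 5 rebuilding in the background at ~10 MB/s runs for days,
# while re-mirroring one 500GB RAID-1 member at full speed takes hours:
print(round(rebuild_hours(2000, 10), 1))
print(round(rebuild_hours(500, 80), 1))
```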
Link Posted: 1/28/2011 11:22:04 AM EDT
Originally Posted By allenNH:
Originally Posted By Izzman:
Tag

My RAID 5 array has been rebuilding for over 3 days now, and it says it has one day left.

I think I'm gonna pull all of the pertinent data off of it and change it to a RAID 10, this is kind of ridiculous, and I don't need the little extra speed, or the extra space.

That Thermaltake enclosure is awesome, it matches my Thermaltake case. Too bad I'm out of SATA ports.



Yeah, rebuild times on RAID-5 can be a real pain in the neck, especially if you're using the system at the same time and doing a background rebuild. Rebuilding a failed mirror drive in RAID-1 is a lot faster.

The enclosure is great, and they also have a 6 drive one. I didn't get that one because I too am out of SATA ports now. My motherboard only has 5, so 4 are being used for the enclosure and one is for a standalone 500G drive. I'm going to be moving all the stuff off that drive and onto the RAID-0 though, to free that port up for a SATA blu-ray burner.


I have 8, and I'm using the two 6Gb/s ports for DVD drives; 4 for the array, one for a standalone hard drive, and one for eSATA. I can ditch the eSATA one now though, because I just bought one of those Thermaltake USB 3.0 docking stations.

If PCIe RAID cards weren't so expensive, I'd be tempted to add a setup like yours, purely for aesthetic reasons. That's a slick little unit.
Link Posted: 1/28/2011 11:57:17 AM EDT
Originally Posted By Izzman:
Originally Posted By allenNH:
Originally Posted By Izzman:
Tag

My RAID 5 array has been rebuilding for over 3 days now, and it says it has one day left.

I think I'm gonna pull all of the pertinent data off of it and change it to a RAID 10, this is kind of ridiculous, and I don't need the little extra speed, or the extra space.

That Thermaltake enclosure is awesome, it matches my Thermaltake case. Too bad I'm out of SATA ports.



Yeah, rebuild times on RAID-5 can be a real pain in the neck, especially if you're using the system at the same time and doing a background rebuild. Rebuilding a failed mirror drive in RAID-1 is a lot faster.

The enclosure is great, and they also have a 6 drive one. I didn't get that one because I too am out of SATA ports now. My motherboard only has 5, so 4 are being used for the enclosure and one is for a standalone 500G drive. I'm going to be moving all the stuff off that drive and onto the RAID-0 though, to free that port up for a SATA blu-ray burner.


I have 8, and I'm using the two 6gb/s for DVD drives. 4 for the array, one for a standalone hard drive, and one for eSATA. I can ditch the eSata one now though because i just bought one of those thermaltake USB 3.0 docking stations.

If pcie RAID cards weren't so expensive I'd be tempted to add that setup like you have, purely for aesthetic reasons. that's a slick little unit.


Yeah, the 6 bay one looks even cooler.

Sure beats the pants off the SCSI SCA one it replaced, in the looks and size departments anyway:


The main thing I miss (other than the ungodly performance) is the "one wire" nature of SCSI. One drive? One ribbon cable. 15 drives? One ribbon cable. I didn't have to go buy a new controller just to add a few drives –– at least not until the number of drives reached figures well beyond what you can fit in any normal PC case.

Link Posted: 1/28/2011 12:07:19 PM EDT
[Last Edit: 1/28/2011 12:08:34 PM EDT by Phoebus]
Originally Posted By allenNH:
Originally Posted By Khemist:
Why not SSD?


There's no reason to use them outside of environments where you really need to not have moving parts.

Any RAID, SSD or not, can easily saturate the bus and achieve overall times and speeds just as good as a single SSD. When you do it with mechanical drives, it's a lot cheaper for the space.



Not necessarily.

SSDs are far, far superior for random read and write. In addition, you would need 750MBps to saturate a 6Gbps bus, so that definitely doesn't fit the definition of "any RAID, SSD or not".
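A side note on that 750 MB/s figure: it's the raw 6 Gbps line rate divided by 8. SATA uses 8b/10b encoding (10 bits on the wire carry 8 bits of data), so usable payload actually tops out closer to 600 MB/s:

```python
def sata_payload_mb_s(line_rate_gbps: float) -> float:
    """Maximum payload bandwidth of a SATA link. SATA uses 8b/10b
    encoding, so every 10 bits on the wire carry 8 data bits."""
    raw_mb_s = line_rate_gbps * 1000 / 8   # raw line rate in MB/s
    return raw_mb_s * 8 / 10               # after encoding overhead

print(sata_payload_mb_s(6.0))   # 600.0 MB/s usable on a 6 Gbps link
print(sata_payload_mb_s(3.0))   # 300.0 MB/s on SATA II
```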
Link Posted: 1/28/2011 1:02:50 PM EDT
Originally Posted By Phoebus:
Originally Posted By allenNH:
Originally Posted By Khemist:
Why not SSD?


There's no reason to use them outside of environments where you really need to not have moving parts.

Any RAID, SSD or not, can easily saturate the bus and achieve overall times and speeds just as good as a single SSD. When you do it with mechanical drives, it's a lot cheaper for the space.



Not necessarily.

SSDs are far, far superior for random read and write. In addition, you would need 750MBps to saturate a 6Gbps bus, so that definitely doesn't fit the definition of "any RAID, SSD or not".


You're right, I shouldn't have said "any RAID", but try this: Find a single SSD that cannot be outperformed by a mechanical disk RAID for less money, while still having just as much space –– if not a whole lot more.

Access times are interesting, but long gone are the days when the number was anything detectable by a human being. When every drive out there has an average seek time of under 30ms, IMHO it really doesn't matter to end users if the SSDs are doing it in 1ms, or even 0.1ms.

With only four (laptop, so "slow") drives, my average seek time is down around 13ms and read speeds are on par with any single SSD –– and I have 2TB of storage for less than the cost of a single 160GB OCZ Vertex 2.

2,000GB for $260, or 160GB for $300, with performance so close that outside of initial spin-up during boot you'd never even notice a difference. I'll take the one with almost twice the space, and redundancy (thanks to RAID10) on the boot volume.
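The cost-per-GB arithmetic behind that comparison, using the early-2011 prices quoted above:

```python
def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Dollars per gigabyte of usable storage."""
    return price_usd / capacity_gb

hdd_raid = cost_per_gb(260, 2000)   # 4x500GB mechanical RAID, as above
ssd = cost_per_gb(300, 160)         # single 160GB SSD, as above
print(f"RAID: ${hdd_raid:.2f}/GB  SSD: ${ssd:.2f}/GB  ({ssd / hdd_raid:.0f}x)")
```

About $0.13/GB against roughly $1.88/GB: the SSD costs over an order of magnitude more per gigabyte at these prices.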
Link Posted: 1/28/2011 1:31:21 PM EDT
[Last Edit: 1/28/2011 1:33:37 PM EDT by Phoebus]
Originally Posted By allenNH:
Originally Posted By Phoebus:
Originally Posted By allenNH:
Originally Posted By Khemist:
Why not SSD?


There's no reason to use them outside of environments where you really need to not have moving parts.

Any RAID, SSD or not, can easily saturate the bus and achieve overall times and speeds just as good as a single SSD. When you do it with mechanical drives, it's a lot cheaper for the space.



Not necessarily.

SSDs are far, far superior for random read and write. In addition, you would need 750MBps to saturate a 6Gbps bus, so that definitely doesn't fit the definition of "any RAID, SSD or not".


You're right, I shouldn't have said "any RAID", but try this: Find a single SSD that cannot be outperformed by a mechanical disk RAID for less money, while still having just as much space –– if not a whole lot more.

Access times are interesting, but long gone are the days when the number was anything detectable by a human being. When every drive out there has an average seek time of under 30ms, IMHO it really doesn't matter to end users if the SSDs are doing it in 1ms, or even 0.1ms.

With only four (laptop, so "slow") drives my average seek time is down around 13ms and read speeds are on par with any single SSD, and I have 2TB of storage for less than the cost of a single 160GB OCZ Vertex 2.

2,000GB for $260, or 160GB for $300, with performance so close that outside of initial spin-up during boot you'd never even notice a difference. I'll take the one with over twelve times the space, and redundancy (thanks to RAID10) on the boot volume.


I agree a single SSD is always going to lose out on space, dollar for dollar, at comparable performance. For the OP's case, you're definitely correct.

Just to give a different use case example: I prefer to keep my OS disk fully separate from my data, since I reconfigure that pretty frequently, and love a super-fast boot. For me, a 60GB SSD for the OS, plus a RAID 10 for my data, is working quite well in my desktop. 60-80GB SSDs with pretty great performance are really quite inexpensive these days. I prefer not to have the OS on the data array, and from a physical space standpoint that leaves me with single-drive or two-drive solutions, and a RAID 1 of rotational drives isn't going to win me any prizes. I could do a RAID 0 of two magnetic disks for the OS drive, but that's more failure risk than I'm willing to accept.

SSDs definitely have use cases that are not limited only to low-power or high shock risk applications. One of my clients at work runs a large-scale application that has extremely high random write and random read rates. They are currently using RAID 10s of enterprise 150GB SLC SSDs to good effect. Obviously not cheap, but they work best for the application (a high-spindle-count, dedicated FC SAN array would probably be better for their application, but I haven't been able to get them to go that direction yet).

And then there is the low-power, speed-sensitive arena, where they do really shine. My wife and I each have laptops with 120GB SSDs in them. Not inexpensive, but the performance and battery life are great.

For the OP's specific situation, though, SSDs are probably not the best fit, of course.
Link Posted: 1/28/2011 11:39:53 PM EDT
Originally Posted By Phoebus:

Just to give a different use case example: I prefer to keep my OS disk fully separate from my data, since I reconfigure that pretty frequently, and love a super-fast boot. For me, a 60GB SSD for the OS, plus a RAID 10 for my data, is working quite well in my desktop. 60-80GB SSDs with pretty great performance are really quite inexpensive these days. I prefer not to have the OS on the data array, and from a physical space standpoint that leaves me with single-drive or two-drive solutions, and a RAID 1 of rotational drives isn't going to win me any prizes. I could do a RAID 0 of two magnetic disks for the OS drive, but that's more failure risk than I'm willing to accept.



Why do the arrays need to be physically separate? I just create multiple logical volumes with differing RAID levels on the same physical disks. I can reinstall as often as I like to one of the volumes without affecting the others, and as for boot time: by the time my memory is done counting and the USB is done initializing, the drives are spun up (I don't have them on a staggered spin-up delay with only 4). I doubt the difference between booting with this array vs. a single SSD would be noticeable.


SSDs definitely have use cases that are not limited only to low-power or high shock risk applications. One of my clients at work runs a large-scale application that has extremely high random write and random read rates. They are currently using RAID 10s of enterprise 150GB SLC SSDs to good effect. Obviously not cheap, but they work best for the application (a high-spindle-count, dedicated FC SAN array would probably be better for their application, but I haven't been able to get them to go that direction yet).


Again, I'm not saying that SSDs aren't faster drive for drive, but that for the price, you could have an equally fast RAID array of mechanical disks with even more space, or a much faster RAID of equal space. This will change one day, maybe very soon, but today that's just the way it is.


And then there is the low-power, speed-sensitive arena, where they do really shine. My wife and I each have laptops with 120GB SSDs in them. Not inexpensive, but the performance and battery life are great.

For the OP's specific situation, though, SSDs are probably not the best fit, of course.


I agree there: if battery life is a concern, you want good performance, and space is a secondary concern, SSDs are the best option. As you point out, that's not the OP's situation.
Link Posted: 1/28/2011 11:45:26 PM EDT
Look into getting some of the Samsung Spinpoint 500GB single-platter drives.

They have faster read/write speeds than the VelociRaptors, and only a marginally slower seek time.

Two of those RAIDed together will kill most single SSDs.

Of course, if you're a geek like me, you'll have two SSDs RAIDed together for OS and games, and several of the single-platter drives RAIDed together for data.
Link Posted: 1/29/2011 1:02:51 AM EDT

Originally Posted By allenNH:
Originally Posted By Objekt:
Originally Posted By Fatbert:
If you're going for cheap drives you might as well go for a nested RAID 0+1 setup. That way you have backup when your RAID1 array fails.


This. I can't justify the expense of a real, hardware RAID controller for RAID 1 or above, so 0+1 is likely what I'll do the next time I build a system.

You get the redundancy of RAID 1 with some of the speed boost of RAID 0.

Unfortunately, I don't think Windows 7 can do RAID 0+1 natively, it's either RAID 0 or RAID 1. Is there a third-party app to do it in software?


I want to clarify something here. Make sure (as sure as you can) that you are running RAID-10 (aka 1+0) and not RAID-0+1. They seem the same on the surface: performance is the same, as is the space used/available. The major difference is in the failure scenarios. A RAID-0+1 is only guaranteed to survive the failure of a single disk, while in the best case fully half of the drives in a RAID-1+0/10 (one per mirror) can fail before the array goes offline.

The order of the numbers is the order that the LDs are stacked.

In 0+1, you're making two RAID-0 disks and then mirroring them with RAID-1. If one disk fails, that entire RAID-0 array is out of the mix, and you're running on only half of your disks.

In 1+0, you're making 'n' RAID-1 disks, and then striping them together into a RAID-0. If one disk fails, only that single RAID-1 LD is in a degraded state, and redundancy is maintained in all of the other RAID-1 LDs: a disk in each one of them could fail and you'd still be online.

The rule is: when you're combining RAID-0 with anything else, you want the 0 to be the last number in the series, never the first.
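The failure-tolerance difference is easy to verify by brute force. A minimal sketch for a four-disk array, assuming the mirrors (in 1+0) and the stripes (in 0+1) are the disk pairs (0,1) and (2,3); the pair assignments are illustrative, not from any particular controller:

```python
from itertools import combinations

MIRRORS = [{0, 1}, {2, 3}]  # RAID-1+0: stripe across two mirrored pairs
STRIPES = [{0, 1}, {2, 3}]  # RAID-0+1: mirror of two striped pairs

def survives_10(failed):
    # 1+0 stays online as long as every mirror keeps at least one member.
    return all(not mirror <= failed for mirror in MIRRORS)

def survives_01(failed):
    # 0+1 stays online as long as at least one stripe is fully intact.
    return any(not (stripe & failed) for stripe in STRIPES)

two_disk_failures = [set(c) for c in combinations(range(4), 2)]
print(sum(survives_10(f) for f in two_disk_failures))  # 4 of the 6 combos
print(sum(survives_01(f) for f in two_disk_failures))  # 2 of the 6 combos
```

Both layouts survive any single-disk failure, but once one disk has died in 0+1 a whole stripe is gone, and any failure on the other stripe takes the array down; that is the worst-case difference described above.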

I did not know that - thank you for correcting my mistake.

Link Posted: 1/29/2011 11:57:39 AM EDT
Originally Posted By Zack3g:
look into getting some of the samsung spinpoint 500gb single platter drives


I don't know whether this carries over to the 500 GB model, but I have two of the Samsung Spinpoint F1 1 TB drives (model HD103UJ), and they're pretty darn fast. Benchmarks suggest around 100 MB/s best case, with 60-70 MB/s transfers being more typical.

Are they as fast as Western Digital Velociraptors? Probably not, as the Spinpoint F1 is 7200 rpm instead of 10000 rpm. At the time I bought mine - late 2008 - they were $95, way cheaper and with much more capacity than any of the Velociraptors. Boy do you pay for that 10000 rpm speed.

The only problem was that both of them went defective, developing more and more bad sectors. I had to RMA both in turn. The replacements have been fine for going on ~2.5 years, so go figure.
Link Posted: 1/30/2011 1:11:31 PM EDT
[Last Edit: 3/24/2011 5:39:03 PM EDT by TZLVredmist]
I ran that same HD Tune benchmark on my SSD drive just for shits and giggles. I got the SSD drive for Christmas, as my wife does the website/marketing for Crucial.com.


OP's setup on the left, SSD on the right.





ETA: All my stuff is on a 300GB VelociRaptor. My OS and BFBC2 are on the SSD. I use my machine for gaming and that's about it. Pics and music are on the other drive.
Link Posted: 1/30/2011 1:15:19 PM EDT
Originally Posted By Zack3g:
Look into getting some of the Samsung Spinpoint 500GB single-platter drives.

They have faster read/write speeds than the VelociRaptors, and only a marginally slower seek time.

Two of those RAIDed together will kill most single SSDs.

Of course, if you're a geek like me, you'll have two SSDs RAIDed together for OS and games, and several of the single-platter drives RAIDed together for data.


LOL how do you have any money to buy guns and ammo?
Link Posted: 1/30/2011 1:18:26 PM EDT
Originally Posted By allenNH:
TLER might be a big deal some day, right now it's just WD hype.

I just replaced my SCSI drive RAID with SATA. I went with notebook drives to save space as well. Not quite as fast, plenty more space, and very very quiet.

Four of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16822148374

One of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16817998143

And I'm all set.

I created two LDs in the onboard raid controller; 30GB RAID-10 for the windows install and "important" stuff, the rest to a RAID-0 for the "fast" stuff. Here's my HDTune of the RAID-0.

http://gallery2.pingslave.com/d/4754-1/HDTune_Benchmark_AMD_____4_0_Stripe_RAID0.png

Not trying to convince, just what I did. Can't say much about it since it's only been up and running for an hour or two.


How did you make a RAID 10 and a RAID 0 with only four drives?

10 needs at least four, and 0 needs at least two. Unless you sliced the drives. I hate controllers that do that. They make it a BITCH to rebuild arrays if there is a failure.
Link Posted: 1/30/2011 1:20:33 PM EDT
C400's
Link Posted: 1/30/2011 1:48:40 PM EDT
Originally Posted By TZLVredmist:
I ran that same HD tune benchmark on my SSD drive just for shits and giggles. I get the SSD Drives for free as my wife does the website/marketing for Crucial.com


OP's setup on the left, SSD on the right.



http://myweb.cableone.net/rnjacobson/SSD.jpg

ETA: All my stuff is on a 300gb Velociraptor. My OS, and BFBC2 is on the SSD. I use my machine for gaming and that's about it. Pics and music are on the other drive.


Holy shit dude! My SSD looks like shit compared to yours.

1TB storage on the left, Intel SSD on the right.

Link Posted: 1/30/2011 2:21:24 PM EDT
[Last Edit: 1/30/2011 2:23:02 PM EDT by TZLVredmist]
Originally Posted By LCPL4ever:
Originally Posted By TZLVredmist:
I ran that same HD tune benchmark on my SSD drive just for shits and giggles. I get the SSD Drives for free as my wife does the website/marketing for Crucial.com


OP's setup on the left, SSD on the right.



http://myweb.cableone.net/rnjacobson/SSD.jpg

ETA: All my stuff is on a 300gb Velociraptor. My OS, and BFBC2 is on the SSD. I use my machine for gaming and that's about it. Pics and music are on the other drive.


Holy shit dude! My ssd looks like shit compared to yours


http://i41.photobucket.com/albums/e296/wombatturd/hgckyhfhyck.png



CRUCIAL OR GO HOME!!! I am running the drive on an Asus Rampage III Formula via the 6Gb/s SATA 3.0 connection.
Link Posted: 1/31/2011 7:21:05 AM EDT
Originally Posted By Matthew_Q:
Originally Posted By allenNH:
TLER might be a big deal some day, right now it's just WD hype.

I just replaced my SCSI drive RAID with SATA. I went with notebook drives to save space as well. Not quite as fast, plenty more space, and very very quiet.

Four of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16822148374

One of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16817998143

And I'm all set.

I created two LDs in the onboard raid controller; 30GB RAID-10 for the windows install and "important" stuff, the rest to a RAID-0 for the "fast" stuff. Here's my HDTune of the RAID-0.

http://gallery2.pingslave.com/d/4754-1/HDTune_Benchmark_AMD_____4_0_Stripe_RAID0.png

Not trying to convince, just what I did. Can't say much about it since it's only been up and running for an hour or two.


How did you make a RAID 10 and a RAID 0 with only four drives?

10 needs at least four, and 0 needs at least two. Unless you sliced the drives. I hate controllers that do that. They make it a BITCH to rebuild arrays if there is a failure.


I have never heard the term "sliced" before in this context.

I used the first 15GB of each drive to create the RAID-10 logical volume, and then the remaining 485GB to make the RAID-0 volume. Every real controller can do this, and I've never had a problem rebuilding.
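For anyone without such a controller, the same slice-and-stack idea can be reproduced in software on Linux with mdadm; a hedged sketch, assuming hypothetical device names and that each disk has already been partitioned into a 15GB slice (sdX1) and a ~485GB slice (sdX2):

```shell
# RAID-10 across the four small slices -> ~30GB redundant OS volume
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# RAID-0 across the four large slices -> large, fast, non-redundant volume
mdadm --create /dev/md1 --level=0 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```

The trade-off mentioned above is real either way: a failed disk degrades both arrays at once, so a replacement has to be partitioned identically before both volumes can rebuild.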