
SSD


dik

Structural
Apr 13, 2001
26,049
I'm planning to put together another desktop computer, using a Samsung 1 TB hard drive for storage and a solid state drive for the OS.

Is anyone aware of any problems with doing this? Reliability, speed, whatever?

thanks, Dik
 

Also, is there an advantage to using a PCIe SSD as opposed to a SATA 3 one?

Dik
 
SSDs have a limited number of write/erase cycles, so if you're a high-cycle disk user, that might be an issue.

There may be other issues unrelated to your specific application, such as zeroizing.

TTFN

FAQ731-376
Chinese prisoner wins Nobel Peace Prize
 
Thanks, IR... I would have thought that SSDs would be more reliable than regular HDs. I'm a bit surprised; do you have an idea of how much less reliable they are?

Is there an advantage to using a PCIe or a SATA type of interface?

Thanks, Dik
 
It depends greatly on the flash memories used; somewhere between 10k and 100k cycles is probably typical. They used to be on the order of 1,000k cycles, but as things shrank you got more density, and the trade-off was the W/E endurance of the memories.

Note, however, as I stated earlier, that you'd need to be cranking quite a few W/E cycles on the devices. Most of the SSD will get few W/E cycles, but other parts, like the memory swap file, might get lots of them. Even then, assuming a 10k cycle limit over a 2-yr period, working every single work day, you'd still need to be averaging about 19 cycles per day, which would require a minimum of 38 writes per day to any particular memory location.
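For the arithmetic, a minimal sketch in Python (the 10k cycle limit and 2-yr period are from the paragraph above; 260 work days per year is an assumption):

```python
# Back-of-envelope endurance arithmetic. The 10k cycle limit and 2-yr
# period are from the paragraph above; 260 work days/yr is assumed.
ENDURANCE_CYCLES = 10_000
YEARS = 2
WORK_DAYS_PER_YEAR = 260  # assumption

work_days = YEARS * WORK_DAYS_PER_YEAR
cycles_per_day = ENDURANCE_CYCLES / work_days
print(f"{cycles_per_day:.1f} W/E cycles per day wear out a block in {YEARS} yr")
# -> roughly 19 cycles/day, i.e. about 38 write+erase operations
#    hitting the same memory location every working day
```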

If this is a general-purpose machine running, say, simulations or general computing, it's probably OK. There may be other, more application-specific usages that might drive up the number of W/E cycles.

TTFN

FAQ731-376
Chinese prisoner wins Nobel Peace Prize
 
Thank you, sir...

Dik
 
A couple of caveats/comments:
> The presumption of writing the entire disk to get MTTR or MTBF is not necessarily a valid one. Typically, a hard drive is occupied with three types of files: system files that cannot be moved, files that are seldom moved, and files that are regularly moved. Thus, it's possible that certain files are rewritten in place, i.e., not moved, so that W/E cycling is substantially higher within those drive blocks than in others; a toy model of this is sketched below.
> Failure rates increasing with time: that's potentially expected, since hard drives in particular, and memories in general, are not necessarily initiated into wear mode simultaneously across all memory locations. To wit, a hard drive initially has low usage, i.e., very few of the memory locations are used. The memory locations that are not used either cannot be detected as failed, or cannot enter the constant-failure-rate regime, because they aren't being used. Since the failure rate is essentially proportional to the number of memory locations under constant usage, it would not be surprising for failure rates to be initially lower.
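To illustrate the first point, a toy model in Python; the block counts, write rates, and hot-block fraction are invented purely for illustration:

```python
import random

# Toy model of uneven W/E wear: a handful of "hot" blocks rewritten in
# place accumulate cycles far faster than the rest of the drive.
# All counts and rates here are invented purely for illustration.
N_BLOCKS = 10_000
HOT_BLOCKS = 50                 # blocks holding regularly rewritten files
WRITES = 2 * 260 * 1_000        # two years of work days, 1k writes/day
HOT_FRACTION = 0.9              # assumed share of writes hitting hot blocks

wear = [0] * N_BLOCKS
rng = random.Random(0)
for _ in range(WRITES):
    if rng.random() < HOT_FRACTION:
        wear[rng.randrange(HOT_BLOCKS)] += 1        # rewritten in place
    else:
        wear[rng.randrange(HOT_BLOCKS, N_BLOCKS)] += 1

print("max cycles on a hot block:", max(wear[:HOT_BLOCKS]))
print("mean cycles elsewhere:    ",
      sum(wear[HOT_BLOCKS:]) / (N_BLOCKS - HOT_BLOCKS))
# The hot blocks end up thousands of cycles deep while the rest of the
# drive barely registers, which is why the whole-disk presumption fails.
```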

TTFN

FAQ731-376
Chinese prisoner wins Nobel Peace Prize
 
Again, thanks... similar to an article in Tom's Hardware...

Dik
 
About the PCIe interface cards: some motherboards will not run them if a lot of option ROMs are loaded, e.g., using onboard RAID with a PCIe controller plugged in. It's picky. I have a PCIe SSD and it's blazing fast through the PCIe interface, but its option ROM prevents me from activating the onboard RAID.

Here's what I would do: get an Intel board with their onboard RAID. Put a SATA SSD on as boot and two huge drives in a mirrored RAID array. Reliable storage with speed. You can always add another SSD in a striped array for more speed. With my rig I couldn't use the onboard RAID, so I used Windows RAID to set up my storage drives. That allowed me to use the PCIe boot drive (OCZ RevoDrive). I would never run anything terabyte-sized without backup.

Chris Krug Maximum Up-time, Minimum BS
 
My new computer runs like a pig on fire... Windows boots in approx 2 sec... haven't timed it... just fast! Expensive... the PCIe SSD was about $800... but worth the fun...

Dik
 
I just took delivery of a big box of parts with a SATA SSD. For comparison, I'll let you know the Windows score for the drive when I get it all put together. Unfortunately my DVD drive was not SATA and I'm waiting for one right now.
 
Caution about SSDs - it's OK when you're just using them with applications, but developing code on them is another matter.

When I was working in cable systems, I had to port a piece of code from a Sun workstation to a Windows PC. The code took 1 second to run on the Sun workstation and 15 minutes on the Windows PC. I got the 15 minutes down to 1 minute; it still took 1 second on the Sun workstation.

If you develop a program that does a lot of disk access and it runs very fast on an SSD, try it on a system with a hard disk and see how bad it really is before you release it to the general public. You may be able to make massive improvements to get the time down.
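As a rough way to see the difference, a minimal timing sketch in Python; the file name and write counts are placeholders, so point TEST_PATH at the drive you want to measure:

```python
import os
import time

# Crude timing of the disk-bound pattern that hides on an SSD and
# crawls on a hard disk: many small writes, each forced to the device.
# TEST_PATH and the counts below are placeholders.
TEST_PATH = "testfile.bin"
N_WRITES = 1_000
CHUNK = b"x" * 4096  # 4 KiB per write

start = time.perf_counter()
with open(TEST_PATH, "wb") as f:
    for _ in range(N_WRITES):
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # bypass OS caching so the disk is actually hit
elapsed = time.perf_counter() - start
os.remove(TEST_PATH)

print(f"{N_WRITES} synced 4 KiB writes in {elapsed:.2f} s "
      f"({N_WRITES / elapsed:.0f} writes/s)")
```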
 
I used the PCIe drive because the SATA was a bit of a bottleneck... about 4x the price, however...
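For a rough sense of where the SATA bottleneck sits, a back-of-envelope comparison; the PCIe generation and lane count here are assumptions for illustration:

```python
# Usable bandwidth of each interface after 8b/10b encoding overhead
# (10 line bits carry 8 data bits). The PCIe generation and lane count
# are assumptions for illustration; early PCIe SSDs varied.
def usable_mb_per_s(line_rate_gbps, lanes=1):
    return line_rate_gbps * 1e9 * 0.8 / 8 * lanes / 1e6

print(f"SATA 3, 6 Gb/s:        {usable_mb_per_s(6):.0f} MB/s")
print(f"PCIe Gen2 x4, 5 GT/s:  {usable_mb_per_s(5, lanes=4):.0f} MB/s")
# -> about 600 MB/s for SATA 3 versus about 2000 MB/s for the
#    assumed PCIe link, which is where the 4x-ish headroom comes from
```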

Dik
 
My Windows Experience Index puts it at 7.7 out of 7.9. It is pretty quick on 6 Gb/s SATA, but nothing like 2 or even 5 seconds; it almost looks like I am waiting for the silly Windows animation to finish. Any way to bypass this?
How about the tranquillity? Sooo quiet.
 