SSD
(OP)
I'm planning to put together another desktop computer and intend to use a Samsung 1 TB hard drive plus a solid-state drive for the OS.
Is there any problem that anyone is aware of for doing this? Reliability, speed, whatever?
thanks, Dik
RE: SSD
Dik
RE: SSD
There may be other issues unrelated to your specific application, such as zeroizing; wear leveling makes it hard to guarantee that a secure erase actually overwrites every block on an SSD.
TTFN
FAQ731-376: Eng-Tips.com Forum Policies
Chinese prisoner wins Nobel Peace Prize
RE: SSD
Is there an advantage in using a PCIe rather than a SATA type of interface?
Thanks, Dik
RE: SSD
Note, however, as I stated earlier, you'd need to be cranking quite a few W/E cycles to actually wear the devices out. Most of the SSD will see few W/E cycles, but certain regions, like the swap file, might see lots of them. Even then, assuming a 10k-cycle limit and a 2-yr period of use on every single work day, you'd still need to average 19 cycles per day, which would require a minimum of 38 writes to any particular memory location.
If this is a general-purpose machine running, say, simulations or general computing, it's probably OK. There may be other, more application-specific usage patterns that drive up the number of W/E cycles.
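For anyone who wants to sanity-check that figure, here's a quick sketch in Python (the 10k endurance limit and ~260 working days per year are assumptions, not measured numbers):

    # Back-of-the-envelope wear estimate.
    endurance_cycles = 10_000   # assumed per-block W/E cycle limit
    years = 2
    work_days_per_year = 260    # assumes 5-day weeks, no holidays

    total_days = years * work_days_per_year          # 520 days
    cycles_per_day = endurance_cycles / total_days   # ~19.2

    print(f"{cycles_per_day:.1f} W/E cycles per day to hit the limit")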
TTFN
FAQ731-376: Eng-Tips.com Forum Policies
Chinese prisoner wins Nobel Peace Prize
RE: SSD
Dik
RE: SSD
Dik
RE: SSD
> The presumption that the entire disk gets written when estimating MTTF or MTBF is not necessarily valid. Typically, a hard drive is occupied by three types of files: system files that cannot be moved, files that are seldom moved, and files that are regularly moved. Thus, it's possible that certain files are rewritten in place, i.e., not moved, so that W/E cycling is substantially higher within those drive blocks than in others.
> Failure rates increasing with time: that's potentially expected, since hard drives in particular, and memories in general, do not enter wear-out simultaneously across all memory locations. To wit, a hard drive initially has low usage, i.e., very few of its memory locations are in use. The locations that are not used either cannot exhibit detectable failures or cannot enter the constant-failure-rate regime, because they aren't being exercised. Since the failure rate is essentially proportional to the number of memory locations under constant usage, it would not be surprising for failure rates to start out lower.
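To illustrate that last point, here's a minimal sketch in Python with made-up numbers (the block count, fill rate, and per-block failure probability are all hypothetical): it models a drive that fills up over time, where each in-use block carries the same constant failure probability, so the aggregate failure rate climbs as more blocks enter service.

    n_blocks = 10_000     # hypothetical drive capacity, in blocks
    p_fail = 1e-5         # assumed constant daily failure probability per in-use block
    fill_per_day = 20     # assumed number of new blocks brought into use per day

    for day in (30, 180, 365, 500):
        in_use = min(n_blocks, day * fill_per_day)
        rate = in_use * p_fail    # expected failures per day scales with usage
        print(f"day {day:3d}: {in_use:5d} blocks in use, "
              f"expected failure rate {rate:.3f}/day")

The failure rate roughly doubles as the number of in-use blocks doubles, which is the behavior described above.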
TTFN
FAQ731-376: Eng-Tips.com Forum Policies
Chinese prisoner wins Nobel Peace Prize
RE: SSD
Dik
RE: SSD
Chris Krug http://krugtech.com/
Maximum Up-time, Minimum BS
RE: SSD
Dik
RE: SSD
When I was working in cable systems, I had to port a piece of code from a Sun workstation to a Windows PC. The code took 1 second to run on the Sun and 15 minutes on the Windows PC. I got the 15 minutes down to 1 minute; it still took 1 second on the Sun.
If you develop a program that does a lot of disk access and it runs very fast on an SSD, try it on a system with a hard disk. See how bad it really is before you release it to the general public. You may be able to make massive improvements to get the time down.
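If you want to see the effect for yourself, here's a rough sketch in Python (the file names and sizes are made up, and the absolute times will depend entirely on your OS and drive): it compares many tiny writes, each handed to the OS individually, against one batched write of the same data. The gap is usually far wider on a spinning disk than on an SSD.

    import os, time

    N = 20_000
    payload = b"x" * 64

    # Many small writes: each one is a separate system call.
    start = time.perf_counter()
    fd = os.open("small_writes.tmp", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    for _ in range(N):
        os.write(fd, payload)
    os.close(fd)
    t_small = time.perf_counter() - start

    # One batched write: the same bytes in a single call.
    start = time.perf_counter()
    with open("batched.tmp", "wb") as f:
        f.write(payload * N)
    t_batch = time.perf_counter() - start

    print(f"{N} small writes: {t_small:.3f}s, one batched write: {t_batch:.3f}s")
    os.remove("small_writes.tmp")
    os.remove("batched.tmp")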
RE: SSD
Dik
RE: SSD
How about the tranquillity? Sooo quiet.